Kubernetes: Cross-namespace Ingress

94

As far as I can tell, it's currently only possible to create an Ingress that addresses services inside the namespace in which the Ingress resides. It would be good to be able to address services in any namespace.

It's possible that I'm missing something and this is already possible - if so it'd be great if this was documented.

paralin picture paralin  ·  11 Nov 2015


All comments

9

Nope, not allowing this was a conscious decision (but one that I can be convinced against). Can you describe your use case? The beta model partitions users on namespace boundaries and disallows service sharing across namespaces. You might argue that you want a single loadbalancer for the entire cluster, to which I ask: what is in the namespaces? i.e. why not use one namespace if you want to allow sharing?

bprashanth picture bprashanth  ·  11 Nov 2015
75

@bprashanth I'm running multiple projects on a cluster - kubernetes tests, blog, API for a project. I want to address these as subdomains on my domain using a single ingress controller because load balancers and IP addresses are expensive on GCE.

paralin picture paralin  ·  11 Nov 2015
6

It would be good to be able to address services in any namespace.

It was intentionally avoided. Cross namespace references would be a prime source of privilege escalation attacks.

cc @kubernetes/kube-iam

liggitt picture liggitt  ·  13 Nov 2015
0

I'll close this for now, makes sense.

paralin picture paralin  ·  16 Nov 2015
6

FWIW you _can_ set up a Service in namespace X with no selector and a
manual Endpoints that just lists another Service's IP. It's yet another
bounce, but it seems to work. :)
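
A minimal sketch of that trick, using hypothetical names and a hypothetical ClusterIP (a Service in namespace y whose ClusterIP is 10.0.0.50 and which serves port 80); note that later comments in this thread report mixed results with pointing Endpoints at another Service's virtual IP:

apiVersion: v1
kind: Service
metadata:
  name: real-app          # hypothetical "mirror" Service in the Ingress's namespace
  namespace: x
spec:                     # deliberately no selector
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: real-app          # must match the Service name above
  namespace: x
subsets:
- addresses:
  - ip: 10.0.0.50         # hypothetical: the other Service's ClusterIP in namespace y
  ports:
  - port: 80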


thockin picture thockin  ·  16 Nov 2015
172

I would tend to imagine the use case that @paralin described is common. I'm looking at an ingress controller as a _system_ component and a means of reflecting _any_ service in the cluster to the outside world. Running _one_ (perhaps even in the kube-system namespace) that can handle ingress for all services just seems to make a lot of sense.

krancour picture krancour  ·  19 Nov 2015
0

Cross namespace references would be a prime source of privilege escalation attacks.

That depends on what you have in your namespaces, right (which is why I asked for clarification)? Isn't it only risky IFF you're partitioning users across a namespace security boundary?

bprashanth picture bprashanth  ·  19 Nov 2015
125

There seems to be demand for cross-namespace ingress-to-service resolution. We should at least reconsider.

bprashanth picture bprashanth  ·  6 Dec 2015
0

I think we want some kind of admission controller which does:

if req.Kind != "Ingress" { return }
ingress := req.AsIngress()
for each serviceRef field in ingress {
  if req.User is not Authorized() to modify the service pointed to by serviceRef {
    reject request
  }
}

Then, to modify an Ingress you have to have owner-like permission on all the services it targets.
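
A rough sketch of how such a check could be registered as a validating admission webhook; the backing ingress-ref-check Service in kube-system is hypothetical and would implement the authorization logic sketched above:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: ingress-service-ref-check      # hypothetical
webhooks:
- name: ingress-refs.example.com       # hypothetical webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: ["networking.k8s.io", "extensions"]
    apiVersions: ["*"]
    operations: ["CREATE", "UPDATE"]
    resources: ["ingresses"]
  clientConfig:
    service:
      name: ingress-ref-check          # hypothetical Service that checks the requester's
      namespace: kube-system           # permissions on every Service the Ingress references
      path: /validate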

erictune picture erictune  ·  10 Dec 2015
9

Might be good to revisit this now in 2016 :)

paralin picture paralin  ·  10 Jan 2016
0

@kubernetes/kube-iam thoughts/volunteers to implement an admission controller? Do we authorize based on the user field of a request today or is that unprecedented?

bprashanth picture bprashanth  ·  12 Jan 2016
0

to modify an ingress you have to have and owner-like permission on all the services it targets.

I think I'd want some record or indication of the cross-namespace relationship to exist, so the targeted service could know it was exposed. I want to avoid the scenario where someone had access to a service (legitimately or otherwise), set up ingress from other namespaces, then had their access removed and continued accessing the services without the service owner's awareness.

Do we authorize based on the user field of a request today or is that unprecedented?

The authorization layer is based on the user info on a request. This would be the first objectref authorization I know of.

liggitt picture liggitt  ·  12 Jan 2016
0

@liggitt raises some good concerns. I broke them down into two cases when thinking about them.

  1. Assuming everyone is trustworthy, it might still be hard to reason about the network security of a service just by looking at the service object (or just by looking at objects in the same namespace). It might be misconfigured.

    • I agree with this, to a point.

    • However, creating an object that represents a connection between two services seems like it would scale poorly.

    • We need a solution that scales with the number of services, not the number of interconnections, I think.

  2. Assuming there is someone not-trustworthy, they can misconfigure the network in a way where the misconfiguration persists after some of their access is revoked.

    • Yes. But we have this problem worse with pods, ConfigMaps, etc. The bad actor might have run pods that are doing the wrong thing, and auditing this is very hard.

erictune picture erictune  ·  20 Jan 2016
0

Is this moving into the topic of micro-segmentation and access policy?


thockin picture thockin  ·  22 Jan 2016
0

yes.

erictune picture erictune  ·  22 Jan 2016
2

The two main models proposed for network segmentation are:

1) decorate Services (and maybe Pods) with a field indicating "allow-from".
This basically allows one to draw the directed graph of an application,
sort of.

2) implement a "policy group" object which selects Pods to which to apply
policy, and includes some simple policy statements like "allow-from"


ghost picture ghost  ·  22 Jan 2016
0

@thockin what issue does one go to to learn more and comment?

erictune picture erictune  ·  22 Jan 2016
0

It's being discussed in the network SIG mailing list as we haggle over
multitudes of ideas and whittle it down to a few viable ones.

Start here:

https://docs.google.com/document/d/1_w77-zG_Xj0zYvEMfQZTQ-wPP4kXkpGD8smVtW_qqWM/edit

One proposal:

https://docs.google.com/document/d/1_w77-zG_Xj0zYvEMfQZTQ-wPP4kXkpGD8smVtW_qqWM/edit

Another is in email:

https://groups.google.com/forum/#!topic/kubernetes-sig-network/Zcxl0lfGYLY


thockin picture thockin  ·  22 Jan 2016
0

Talked to @thockin and @bprashanth.
It sounds like the Ingress resource may undergo some refactoring this quarter, possibly splitting into two objects. We should revisit ingress security when those discussions happen.

erictune picture erictune  ·  23 Jan 2016
33

This would be a nice feature to have. For example, if you want a pseudo multi-tenant solution - with each tenant running in a separate namespace. The ingress could do hostname based routing to the right backend namespace. ${tenant}.example.com -> service "foo" in namespace ${tenant}

I suppose one can do this today on GKE, but I gather you end up with one HTTP load balancer per namespace - which could get quite expensive and seems unnecessary.

wstrange picture wstrange  ·  20 Apr 2016
72

This limitation throws a big wrench in how my company was planning to use ingresses. Our use case is running multiple copies of the same application stack at different versions, and to keep the stacks isolated from each other, we use namespaces. We'd planned to run a single ingress controller that knows which application, and which version of it, to route to based on the subdomain of the incoming request.

The reason for using namespaces for isolating these stacks are:

  1. To be extra safe about not having applications interfere with each other based on what else happens to be running in the cluster or from similarly named services.
  2. To get around name collisions for services. It's not possible to have two services in the same namespace with the same name, so application dependencies like "redis" or "mysql" need to be in different namespaces to use those simple names without faking a namespace by changing the name of the service.

See my unanswered Stack Overflow question, Kubernetes services for different application tracks, for more details on our use case.

Our ingress controller is exposed to the outside world via NodePort (80 and 443), and we have an ELB in AWS pointing at the whole cluster. With the namespace restriction for ingresses, we'd need one ingress controller per namespace and there would be no way to have a single ELB forwarding ports 80 and 443 to the cluster.

jimmycuadra picture jimmycuadra  ·  24 May 2016
22

@jimmycuadra You should use the approach that our group is using for Ingresses.

Think of an Ingress not so much as a LoadBalancer, but as a document specifying some mappings between URLs and services within the same namespace.

An example, from a real document we use:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: ingress
    namespace: dev-1
  spec:
    rules:
    - host: api-gateway-dev-1.faceit.com
      http:
        paths:
        - backend:
            serviceName: api-gateway
            servicePort: 80
          path: /
    - host: api-shop-dev-1.faceit.com
      http:
        paths:
        - backend:
            serviceName: api-shop
            servicePort: 80
          path: /
    - host: api-search-dev-1.faceit.com
      http:
        paths:
        - backend:
            serviceName: api-search
            servicePort: 8080
          path: /
    tls:
    - hosts:
      - api-gateway-dev-1.faceit.com
      - api-search-dev-1.faceit.com
      - api-shop-dev-1.faceit.com
      secretName: faceitssl

We make one of these for each of our namespaces for each track.

Then, we have a single namespace with an Ingress Controller which runs automatically configured NGINX pods. Another AWS load balancer points to these pods, which are exposed on a NodePort and run as a DaemonSet so that exactly one runs on every node in our cluster.

As such, the traffic is then routed:

Internet -> AWS ELB -> NGINX (on node) -> Pod
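
A sketch of the Service that could glue the AWS load balancer to the controller pods in a setup like this; the names, labels, and node ports are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress               # hypothetical
  namespace: ingress                # hypothetical controller namespace
spec:
  type: NodePort
  selector:
    app: nginx-ingress-controller   # hypothetical label carried by the DaemonSet pods
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080                 # hypothetical; the ELB forwards port 80 to this port on every node
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443                 # hypothetical; the ELB forwards port 443 here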

We keep the isolation between namespaces while using Ingresses as they were intended. It's not correct, or even sensible, to use one Ingress to hit multiple namespaces, given how they are designed. The solution is to use one Ingress per namespace, with a cluster-scoped ingress controller which actually does the routing.

All an Ingress is to Kubernetes is an object with some data on it. It's up to the Ingress Controller to do the routing.

See the document here for more info on Ingress Controllers.

With this post I will close this issue because I think it's actually a non-issue - Ingresses work fine even for cross namespace routing.

Eventually the ingress object might be refactored / split. That would be a redesign of this concept. But as of now, this is how Ingresses are designed and meant to be used, so it only makes sense to use them the "right" way :)

paralin picture paralin  ·  24 May 2016
0

I might open up another issue about properly documenting this with examples. There seems to be some confusion around Ingresses. They're really powerful when applied correctly.

paralin picture paralin  ·  24 May 2016
0

I think your @-mention was the wrong person. :P

I understand the difference between ingress resources and the ingress controller, but I'm not sure I understand why my use case is not appropriate here, or how the setup you describe does the same thing we're trying to do.

In our imagined setup, the ingress controller and all ingress resources exist in one namespace. However, we create ingress resources that map host names to services in more than one namespace. This way we can access apps from outside the cluster as "my-app.app-version.example.com" and the request follows the path ELB --> random node --> ingress controller --> my-app in the my-app-my-version namespace.

jimmycuadra picture jimmycuadra  ·  24 May 2016
0

Yeah whoops. I just assumed the first jimmy in the @ list was you.

There's absolutely no reason to have your ingress resources in one namespace, as far as I can tell. What's keeping you from putting the routes for your app into a single Ingress object and then replicating this object for each namespace you want to run? It just makes sense from a symmetry perspective - if every namespace has the same set of objects, including the ingress object, then everything will work properly...

paralin picture paralin  ·  25 May 2016
0

I don't think duplication of Ingress objects would be a big deal, but needing an ingress controller for each namespace would be a problem, as we want to have a single entrypoint for requests going into the cluster. If each namespace had its own ingress controller, we'd need a separate NodePort allocation for each controller, and an ELB with its own DNS records for each namespace, which is not something we want.

jimmycuadra picture jimmycuadra  ·  25 May 2016
0

@jimmycuadra Read through my comment again. I said you would have a single ingress controller for the entire cluster, exactly as you want...

paralin picture paralin  ·  25 May 2016
14

It does look like I misunderstood what this issue was about! It sounds like ingress _controllers_ do work across namespaces, but ingress _resources_ cannot see outside of their own namespaces, which was the subject of this issue. Apologies for the confusion and thanks for your responses!

jimmycuadra picture jimmycuadra  ·  25 May 2016
0

@jimmycuadra I had the same confusion up to a point.

krancour picture krancour  ·  27 May 2016
0

I have read this discussion many times and still do not understand the recommended way to achieve the desired goal of serving multiple domains/subdomains from different namespaces.

My first workaround is to use a single namespace, with an Ingress, for everything that should be exposed to the world via a domain name.
The second way is to not use Ingress at all, but a simple NGINX proxy in front of my apps in different namespaces.

Isn't the goal of Ingress to simplify this scenario? There are mentions of security implications of crossing namespaces, but no simple explanation of them.

@paralin Could you please share more details about the pods that reside alongside the Ingress Controller?

gramic picture gramic  ·  20 Jan 2017
4

I think I need to be able to reference services in different namespaces from a single ingress controller. My use-case is doing blue/green deployments, with a "blue" namespace and a "green" namespace. My application's public traffic comes via an ingress controller. I would like to be able to route that traffic to blue or green services via making a small change to the ingress controller (i.e. the service namespace).

robhaswell picture robhaswell  ·  23 Jan 2017
0

You're both still missing the point: you can use a single ingress controller for as many namespaces full of Ingresses as you want.

paralin picture paralin  ·  23 Jan 2017
0

@paralin Yes, I did understand that this is possible. What I am missing is how you do that?

Does the controller receive events for ingress resource in every namespace, no matter in which namespace the controller resides in?

gramic picture gramic  ·  24 Jan 2017
0

you can use a single ingress controller for as many namespaces full of ingresses you want

And I was not understanding that when I weighed in on this way back when. Given my current understanding, I actually have come around to believing there's no issue or truly damning limitation here.

krancour picture krancour  ·  25 Jan 2017
0

@paralin I believe I am the same as @gramic in that I do not understand how to achieve my goal. Please let me explain. I am trying to implement the blue/green deployment pattern using "blue" and "green" as my namespaces - each namespace hosts my application stack at different versions. My desire is to have a single Ingress resource routing traffic for https://myapp/ to, e.g. the blue namespace, and then at the point of release, make a change to that Ingress so that now traffic is being routed to my green namespace (without the public IP of that Ingress changing).

Unfortunately I believe all the solutions mentioned above require interaction with some entity outside of Kubernetes, e.g. an external load balancer, which is not desirable. If it helps, I'm on Google Container Engine.

I hope now we are on the same page? My problem is that I believe that, without a service namespace selector, I can't achieve what I want to achieve.

robhaswell picture robhaswell  ·  31 Jan 2017
0

@robhaswell nope, still doable without a namespace selector. Remember that an ingress controller does not change ip between different ingress resources. It just combines them together and serves them.

Try running the nginx controller for example, and setting up what you want. You can either use a different URL in one namespace, and then change it to the main URL when you want to enable that namespace (kubectl edit it) or you can write a little bash script that deletes the ingress object in one namespace and immediately recreates it in the other.

The main thing I think you're missing is that the ingress objects are just data. Deleting them doesn't actually cause kubernetes to delete any resources. The ingress controller however is watching for changes to these resources and uses them to update its configuration. For the nginx controller this means updating the nginx.conf which does not change anything about how you chose to route traffic to the nginx pods. The IP remains the same.
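
For the blue/green case being discussed, a sketch of the two Ingress objects involved, with hypothetical hosts and service names; promoting green is just swapping the host values (via kubectl edit or a small script), which only rewrites the controller's config and never touches the IP:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp
  namespace: blue                     # currently live
spec:
  rules:
  - host: myapp.example.com           # hypothetical production host
    http:
      paths:
      - backend:
          serviceName: myapp          # hypothetical Service inside the blue namespace
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp
  namespace: green                    # release candidate
spec:
  rules:
  - host: staging.myapp.example.com   # hypothetical staging host
    http:
      paths:
      - backend:
          serviceName: myapp
          servicePort: 80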

paralin picture paralin  ·  31 Jan 2017
0

@gramic yes. It depends on the implementation, but most of the controllers monitor all namespaces by default (which is changeable).

paralin picture paralin  ·  31 Jan 2017
0

@paralin thanks for your help, however I believe that it is not possible to change the namespace of an ingress object using kubectl edit:

$ kubectl edit ingress/tick-tock
A copy of your changes has been stored to "/var/folders/jv/_p33nwxx0jd8b7gr2qgx0mrc0000gr/T/kubectl-edit-fplln.yaml"
error: the namespace from the provided object "snowflakes" does not match the namespace "default". You must pass '--namespace=snowflakes' to perform this operation.

In this operation I attempted to change the namespace of the Ingress resource to snowflakes, where it was previously default.

Additionally if I delete this (my only) Ingress resource, I lose the IP address which Google Cloud load balancing has provisioned for me. Perhaps this behaviour would change if I had another Ingress resource.

robhaswell picture robhaswell  ·  1 Feb 2017
7

One thing I don't seem to be able to understand is: why insist on Ingresses not being able to use services from other namespaces, when there aren't any security measures in place (as far as I know) to prevent any pod from just using the <service-name>.<another-namespace-name>.svc.cluster.local FQDN to get data from another namespace?
Or is the fact that this is possible a security flaw that will be fixed in future versions?

update: this is what i mean https://github.com/kubernetes/kubernetes/issues/17088#issuecomment-157187876
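
To illustrate the point: any pod can already reach a Service in another namespace through its cluster DNS name, with no Ingress involved. A minimal sketch, all names hypothetical:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: consumer
  namespace: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consumer
  template:
    metadata:
      labels:
        app: consumer
    spec:
      containers:
      - name: app
        image: example/app:latest                  # hypothetical image
        env:
        - name: REDIS_HOST
          # FQDN of a Service that lives in a different namespace
          value: redis.backend.svc.cluster.local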

stephanlindauer picture stephanlindauer  ·  7 Feb 2017
-1

@stephanlindauer I believe the intent is that cross namespace service access will not be allowed in the future.

wstrange picture wstrange  ·  7 Feb 2017
0

But then the question still stands: why not have a certain kind of flag, or even a dedicated kind, to allow cross-namespace access to certain pods, or let a service definition declare which namespaces it can be accessed from?
Without this there is an n:n (namespace:ingress-host) relation between namespaces and the hosts that expose them, while it would be nicer to have an n:1 relation to cut down on costs for additional public-facing static-IP hosts.

stephanlindauer picture stephanlindauer  ·  7 Feb 2017
0

@robhaswell I'm not familiar with how the Google Cloud router works with Ingress right now.

However, if you use the nginx ingress controller and point a LoadBalancer at it, you will get what you want: a single stable-IP TCP load balancer, created by the Service, pointing to a horizontally scalable nginx layer. Moving Ingress objects between namespaces just updates the nginx configuration and does not release any IP addresses. It'll also give you more direct control over how the router works, and might save money on Google Cloud resources.

The Google Cloud implementation SHOULD keep a single IP address and make HTTP rules in that balancer to point to however many Ingress objects exist in the cluster - that might not be the case right now.

paralin picture paralin  ·  7 Feb 2017
0

@paralin thanks for the suggestion, I haven't tested it, however I am leaning towards a similar solution for my specific requirement by using a LoadBalancer as you suggest to point to my own nginx-based router's service. This approach uncovered a bug in kube-proxy, see #40870.

I'm afraid that I have currently put this issue down while I am working on other priorities, but I will be able to return to this problem and conduct some conclusive tests of this suggestion and other suggestions in this thread. I apologise that I don't have time to do this right now. I also think that, so far, efforts in achieving my goal should be documented in a new thread entitled "Support blue/green deployments", as this issue is very clearly "support cross-namespace ingress" and the answer seems to be a firm "no, we won't".

robhaswell picture robhaswell  ·  8 Feb 2017
0

You might argue that you want a single loadbalancer for the entire cluster, to which I ask, what is in the namespaces? i.e why not use 1 namespace if you want to allow sharing.

@bprashanth in my instance, my frontend app connects to its component services using DNS as service discovery. They connect to a static name, e.g. postgres or redis. By deploying different versions in different namespaces, we get 100% satisfactory service discovery. We are unwilling to implement a different method of service discovery for the following reasons:

  1. More complexity brings more opportunity for failure.
  2. More per-deployment environmental configurations brings more opportunity for failure through less deployment parity.
  3. Our local dev setup is using pure Docker networking which is compatible with this DNS-based approach (which is a design feature of kube-dns, I believe).

We are very satisfied with this approach in every way, with the exception that cross-namespace load balancing (switching) is impossible with pure-k8s at the moment.

robhaswell picture robhaswell  ·  8 Feb 2017
0

FWIW you can set up a Service in namespace X with no selector and a manual Endpoints that just lists another Service's IP. It's yet another bounce, but it seems to work. :)

@thockin at some point between you making this comment and today, this functionality regressed. See #40870. I would appreciate it if you could accelerate @taimir's request for a second opinion on that thread please!

robhaswell picture robhaswell  ·  8 Feb 2017
0

If you need a huge design change in something like Kubernetes to do a relatively common deployment pattern, you're doing it wrong.

I rest my case, but urge you to rethink your approach. Reread the docs on how DNS and pod IPs work, particularly around multiple/cross-namespace communication. Namespaces are usually used in a one-namespace-per-deployment fashion - one for prod, dev, etc. - although the isolation they provide can be used for other things too.

paralin picture paralin  ·  8 Feb 2017
0

@paralin please re-read my response to your suggestions. You suggested using a LoadBalancer service to direct traffic to the correct endpoint. I have tried this and this functionality does not work as described in the documentation. I tried your suggestion of multiple Ingress resources, and that didn't work. I also took inspiration from this, and tried having multiple Ingress resources in different namespaces, and changing which HTTP host they were configured for. This also didn't work - traffic was never re-routed, and I could confirm that the underlying Google Cloud routing rules had not been updated. I apologise for not reporting this, however at that point I had already given up on using Ingress for my solution - I understand and accept that this is not what it's for. I have merely been replying to your suggestions. That is why I intend to open a new issue, as I have not yet seen any workable solutions that function as advertised.

I'm not sure why you have now taken this hostile attitude, but I am disappointed that you seem to think I am stubbornly ignoring your help. Trust me, I am VERY eager to solve this problem. Thank you for continuing to help me, but please could I ask that if you think you have suggested workable solution, could you re-iterate it so there is less confusion between what you think works, and what I think works?

robhaswell picture robhaswell  ·  8 Feb 2017
0

You've still not read and understood my suggestion, which means at this point you're probably skipping reading anything I'm posting at all. Remember that I'm the one that made this issue in the first place, and yes, I was in the exact same position with the exact same deployment model as you.

I suggested using a Nginx Ingress Controller from the official kubernetes ingress controller implementations, and directing a load balancer at that ingress controller to get a stable IP. From what you've said above it seems you misunderstood.

This is exactly what ingress is for, so don't give up yet.

paralin picture paralin  ·  8 Feb 2017
2

OK, I think I understand your suggestion, but let me play that back to you:

  • Create an nginx Ingress Controller so that we have something we can route to.
  • Create a LoadBalancer service and include the Ingress Controller in its selector. OK great, now we have a single IP, all Ingress resources are served under this IP.
  • Create an Ingress resource under some given namespace, and specify our app's HTTP endpoint service as the backend. We're now serving our live app.

All good so far.

  • Now, we want to serve the new release. What happens? Delete the Ingress resource in the previous namespace, and create the same Ingress resource in the new namespace? OK... that will cause a service interruption, but maybe I can live with that.

Now we're serving the newest release with a small loss of service.

  • Next up, we want to test the development release (as per the blue/green pattern). It's deployed to a new namespace. Problem - how do I route traffic to it? Any Ingress resource can only use the single LoadBalancer IP I first created, so I can't differentiate the traffic as I can't add another IP. Unless... I use the hostname-based routing functionality of Ingress? I guess we can additionally configure static DNS names, but it's extra complexity.

If this is what you're suggesting, then I'm not really inclined to proceed with Ingress, and would rather pursue the service-to-service approach as described in the documentation and suggested by thockin. However, I can see that it would just about fit our use-case, so thank you.

robhaswell picture robhaswell  ·  8 Feb 2017
0

Ingress is designed to do http level routing of traffic while services do TCP level routing. For your use case I would use a subdomain to route to the dev environment for sure.

Usually people don't switch entire namespaces to do deployments, so kubernetes isn't designed towards that model as heavily. However, I can understand why you're doing it, and I definitely think you'll do fine with this approach.

You shouldn't see a service interruption by the way if you do it with a script that swaps them around immediately. The nginx config should be updated immediately and nginx doesn't have to restart to apply the change and start rerouting traffic.

paralin picture paralin  ·  8 Feb 2017
17

@paralin I think the main confusion here is due to how Google Container Engine behaves. My first expectation on how Ingress should work was exactly what you described: Simply have multiple Ingress resources and the ingress controller will make all of them available on the same node port / IP address to the outside world, right?

But then I tested this:

> kubectl get ingress
NAME             HOSTS         ADDRESS   PORTS     AGE
test-ingress-1   foo.bar.com             80        14s
test-ingress-2   bar.foo.com             80        4s

And this is what I got on Google Container Engine:
[screenshot: the GCE console showing two HTTP(S) load balancers, one per Ingress]

You can see that there are two public IP addresses now, one for each ingress. So I thought: Apparently my mental model on how ingresses are supposed to work is wrong, let's look some more at the documentation: https://kubernetes.io/docs/user-guide/ingress/#name-based-virtual-hosting Ah! The documentation suggests to use a single ingress if you want to have a single IP address. All right, seems to work in a single namespace, let's try to add services from other namespaces... The Ingress "..." is invalid What? Now what am I supposed to do?

And that's how I ended up at this issue. I hope that this shows why the model of ingresses that you describe (and which I would prefer) is not what we get by default on GKE. Maybe we are supposed to not use the default ingress controller created by GKE, but that really is not obvious.

How can we improve this situation?

neelance picture neelance  ·  14 Feb 2017
0

@neelance you're right, the GCE controller behaves differently than one might expect, and from how I think it's supposed to work, at least from how the developers explain it in this thread.

Maybe open another issue? This one is too far gone I think.

paralin picture paralin  ·  14 Feb 2017
0

@neelance, I had the same confusion initially. Container Engine's "L7" ingress controller isn't implemented using the familiar model that others have described here. It actually provisions an HTTP/S load balancer _per_ ingress. Besides not being what you want, that could also be costly. fwiw, I highly recommend dumping that ingress controller. I _think_ you can just uninstall it, but a more resilient option could be to annotate your ingress resources in a way that will make that ingress controller ignore them. Then you're free to install some _other_ ingress controller and let it handle things differently than Google's does. I prefer Traefik, personally, so I use the following annotation:

kubernetes.io/ingress.class: "traefik"

My understanding is that the Container Engine ingress controller will _ignore_ any ingress annotated with kubernetes.io/ingress.class != gce.

Hope that helps.

krancour picture krancour  ·  14 Feb 2017
5

I have tried Nginx now and it works fine. I'll also take a look at Traefik.

So you and I have figured it out, but others will probably run into it again and again. Google Container Engine is one of the easiest ways to try/use Kubernetes, and obviously people try the preinstalled ingress controller first.

How can we bring this up with the right people? The Kubernetes project is not directly responsible for GKE, right?

neelance picture neelance  ·  14 Feb 2017
0

It is. GLBC is written by the same devs in the same repos

paralin picture paralin  ·  14 Feb 2017
0

I filed https://github.com/kubernetes/ingress/issues/276, that seems like the right place.

neelance picture neelance  ·  14 Feb 2017
0

My use case is:

  • jenkins, grafana, etc. in different namespaces
  • I want a single ELB with Ingress per namespace

so the question is, will it work right now? if not, why?

pawelprazak picture pawelprazak  ·  7 Jun 2017
0

@pawelprazak it does.

krancour picture krancour  ·  8 Jun 2017
4

Sorry if this is the wrong discussion, I just want to describe my case.

I have several developer teams working on one complex product.
The product is split into semi-independent parts; each part is developed by its own team and has its own namespace.

Everything was great until the marketing team demanded that we change component addressing from <componentname>.domain.com to domain.com/<componentname>.

So for now we still have namespaces with their own Ingresses for every component, but it's impossible to build an Ingress resource for the "main" part, because a structure like this is invalid:

spec:
  rules:
  - host: domain.com
    http:
      paths:
      - component1:
          namespace: component1
          serviceName: component1
          servicePort: 80
        path: /component1
      - component2:
          namespace: component2
          serviceName: component2
          servicePort: 80
        path: /component2
      - ...

Is there any chance to use Ingress resources in this case, or do I need to build a homebrewed NGINX configuration to meet this requirement?

Thanks in advance!

Bregor picture Bregor  ·  12 Jun 2017
31

@Bregor what ingress controller are you using?
If it is the nginx ingress controller, then you just need to split the definition of the ingress (to be able to reference services from different namespaces).
Like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: component1
  namespace: component1
spec:
  rules:
  - host: domain.com
    http:
      paths:
      - backend:
          serviceName: component1
          servicePort: 80
        path: /component1

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: component2
  namespace: component2
spec:
  rules:
  - host: domain.com
    http:
      paths:
      - backend:
          serviceName: component2
          servicePort: 80
        path: /component2
```

and the controller will merge the paths under the same nginx server

aledbf picture aledbf  ·  12 Jun 2017
0

This is what people fail to understand... An ingress controller can read in multiple Ingress objects and build a single config to direct traffic, it's not 1 ingress <-> 1 controller...

paralin picture paralin  ·  12 Jun 2017
0

With the default GKE ingress controller it is 1 ingress <-> 1 IP addy (== 1 domain).

neelance picture neelance  ·  12 Jun 2017
0

@aledbf oh, looks like I got it at last!
Thank you, will try it in the morning.

Bregor picture Bregor  ·  12 Jun 2017
0

@Bregor I'm still interested: Which ingress controller were you using?

neelance picture neelance  ·  12 Jun 2017
1

@neelance NGINX by kubernetes team

Bregor picture Bregor  ·  13 Jun 2017
0

Probably out of context, but how does the NGINX controller by the Kubernetes team differ from the one provided by NGINX? And which one is preferred, and why?
I was trying the approach suggested by @neelance on the NGINX Plus controller provided by NGINX, but it does not seem to work.

arunkjn picture arunkjn  ·  5 Jul 2017
0

For the scenario "Bregor commented on Jun 13", the solution from "aledbf commented on Jun 13" does not work.

Here is an example in AWS:
Per that solution, each namespace must generate one ELB (load balancer).
However, in AWS Route 53, one domain (www.abc.com) can only point to a single ELB.

So you cannot combine the APIs of all namespaces into one domain (www.abc.com).
You have to divide the APIs into different subdomains by namespace:
namespace1.abc.com
namespace2.abc.com
namespace3.abc.com

johnzheng1975 picture johnzheng1975  ·  12 Jul 2017
0

@johnzheng1975: The issue is about cross-namespace rule definitions in Ingress. You seem to have a conceptual problem on AWS, even before the traffic reaches the k8s cluster. If you choose a single domain, you are stuck with a single ELB. If you use subdomains, you are free to choose single or multiple ELBs.

PS. To generate many ELBs, create multiple Kubernetes Services (type LoadBalancer) with identical selectors pointing to the same ingress controller.

krogon-dp picture krogon-dp  ·  12 Jul 2017
6

I can confirm that we made cross-namespace routing work on GKE 1.6.4 with Traefik.
The trick was to use the current (2017-07-13) stable Helm chart.

Then, every namespace can have its ingress (referencing its own services) and the main controller aggregates the routes. Just awesome.

https://docs.traefik.io/user-guide/kubernetes/

paurullan picture paurullan  ·  13 Jul 2017
7

every namespace can have its ingress (referencing its own services) and the main controller aggregates the routes

This is exactly how the nginx controller currently works. The issue here is about letting an Ingress in a single namespace reference services in other namespaces. Such a feature is implemented in the Voyager ingress controller:

apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: foo
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1.bar # serviceName.Namespace
          servicePort: '80'
krogon-dp picture krogon-dp  ·  13 Jul 2017
5

Most users will think from the perspective of the page-view flow, so they want to install the Ingress into the namespace of the load balancer or controller. Then they would expect to connect to the namespace of the service like this:

spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          namespace: foo
          servicePort: '80' 

But it is exactly the other way around.

apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: foo

You have to install the Ingress into the namespace of the service, which then connects back to the load balancer.
This concept is not obvious and should be re-evaluated or documented better.

Understanding it would also make it more obvious that there can, and must, be many Ingresses.

delcon picture delcon  ·  20 Jan 2018
0

I started using GKE, after years of using home grown clusters on Azure with the nginx ingress, and was surprised by the IP per Ingress behaviour of the default ingress controller. I thought I could get around this by using a single ingress file in the kube-system namespace that aggregated all host/paths -> service rules but it turns out I cannot reference services in other namespaces.

Is there a better option other than dumping the default controller for the nginx controller or manually creating endpoints for a service without selectors?

edevil picture edevil  ·  23 Mar 2018
0

I think nobody is AGAINST a broader solution, but there are conflict-resolution rules that need to be decided and access control issues to work out. I'd love to see a proposal.


ghost picture ghost  ·  23 Mar 2018
34

my workaround:

  1. serviceA in namespaceA
  2. create serviceB in namespaceB:
spec:
    ...
    type: ExternalName
    externalName: serviceA.namespaceA.svc.cluster.local
  3. add an ingress rule to ingressB in namespaceB (see the fuller sketch just below):
     - path: /****
        backend:
          serviceName: serviceB
          servicePort: ***
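
A fuller sketch of this workaround, with the elided path and port filled in with hypothetical values (serviceA serving port 80, exposed under /app); whether the ingress controller accepts an ExternalName backend varies by controller, as later comments report:

apiVersion: v1
kind: Service
metadata:
  name: serviceB
  namespace: namespaceB
spec:
  type: ExternalName
  externalName: serviceA.namespaceA.svc.cluster.local
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingressB
  namespace: namespaceB
spec:
  rules:
  - http:
      paths:
      - path: /app                    # hypothetical path
        backend:
          serviceName: serviceB
          servicePort: 80             # hypothetical port; must match what serviceA serves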
chestack picture chestack  ·  4 Jun 2018
14

I first came across this issue a few months ago and after reading it too quickly I was still confused. After a recent second read I realised that my requirement was actually really easy to achieve with the NGINX Ingress Controller.

I've written this up as an article on Medium.

I hope someone else finds it useful.

cesartl picture cesartl  ·  12 Jun 2018
0

@cesartl,

I came to the same solution myself, but it is not working. The ingress controller was installed via Helm into the kube-system namespace with the "rbac create" option.

BorisDr picture BorisDr  ·  12 Jun 2018
0

hi @BorisDr,

I've installed the Ingress Controller manually:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

and it works well.

I've actually updated the article above to show how to use Ambassador, perhaps this is a better solution for you?

cesartl picture cesartl  ·  13 Jun 2018
0

Hi, @cesartl
Thanks for the awesome job. Anyway, I finally made it work, but outside of Rancher.

BorisDr picture BorisDr  ·  15 Jun 2018
2

For Google Kubernetes Engine (GKE) specifically, I have written a step by step guide here for setting up nginx ingress controller to route traffic across cross namespace ingress resources:
https://github.com/ProProgrammer/cross-namespace-nginx-ingress-kubernetes

Thanks @cesartl for your post, it was very helpful in getting some things off the ground for me.

ProProgrammer picture ProProgrammer  ·  21 Jun 2018
0

Hi @cesartl ,

Thanks a lot for your post, it actually helps a lot !

But I'm still stuck on the last step: I tried a sample use case with a quick nginx deployment as a test web server (yeah, using a proxy as a test server is a little weird ;)), and I'm trying to contact this server using the "Ambassador" option.
=> When I use kubectl port-forward on the nginx pod, I can display the test page without issue.
=> When I use the ELB built by Ambassador, I get a "no healthy upstream" error. In a way, it means that the Ambassador annotations are correctly detected and Ambassador is trying to do something, but it just doesn't work (I think this error is coming from the Envoy proxy, but this is where we reach the limits of my knowledge on the subject ;))

Would you have any idea of the possible root cause / how to debug this ?

Ambassador config:
exactly a copy-paste of the tutorial, except that

  • I created everything in the "ambassador" namespace
  • I set "use-proxy-proto" to "true" instead of "lower" (I think that was a typo)

Test nginx deployment (note: I deployed nginx in the "default" namespace):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginx
spec:
  selector:
    matchLabels:
      run: mynginx
  replicas: 1
  template:
    metadata:
      labels:
        run: mynginx
    spec:
      containers:
      - name: mynginx
        image: nginx
        ports:
        - containerPort: 80

Test nginx service:

---
apiVersion: v1
kind: Service
metadata:
  name: httptest
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: default-mynginx
      prefix: /httptest/
      service: default-mynginx:80
spec:
  type: NodePort
  selector:
    app: mynginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Thanks !

Seb-SK picture Seb-SK  ·  5 Jul 2018
0

Hi @Seb-SK,

I don't think this issue is the best place to discuss that. You can always discuss with the Ambassador team or their Gitter or the new slack here

I'm not an Ambassador expert but I can try to help you there.

cesartl picture cesartl  ·  6 Jul 2018
13

What's the argument against referencing a service in another namespace from an Ingress? With workarounds like https://github.com/kubernetes/kubernetes/issues/17088#issuecomment-394313647, it would seem the supposed security benefits are nonexistent. With the ingress-nginx controller, the ability to inject arbitrary nginx config via nginx.ingress.kubernetes.io/server-snippet means I can essentially emulate cross-namespace service references, but in a janky and error-prone manner. So why not relax this restriction, let us do what we are already able to do via workarounds, and let us do it cleanly?

guoshimin picture guoshimin  ·  11 Sep 2018
2

The recommendation of @aledbf to split the ingress from the service worked for us but because we also use cert-manager and an OAuth proxy, having two different ingress objects in two different namespaces caused cert-manager to generate the SSL cert with Let's Encrypt twice. It would've been better to be able to reference the OAuth proxy in the other namespace to avoid procuring the cert twice.

tsuna picture tsuna  ·  11 Sep 2018
3

@tsuna We have a similar use case where we use an oauth2 proxy and we really don't want to run one in every namespace.

guoshimin picture guoshimin  ·  11 Sep 2018
9

I'm sorry to resurrect a dead thread, but we definitely need this. Our reasoning is as follows.

Let's say we have n APIs. As n grows, we want to create some APIs that are cross-cutting, in that all teams use them but a single team manages them. It is impossible to manage x cross-cutting apps in every namespace, so we need to put each cross-cutting API in its own namespace for security/manageability reasons. The biggest reason: since we are now proxying to a new namespace, we can request a new audience for our token instead of bloating the original token.

Ingress is by definition a reverse proxy, so it should be easy enough to support something like:

rules:
  - host: app.domain.com
    http:
      paths:
      - path: /api/crosscut
        backend:
          serviceProxy: crosscut.domain.com
          urlrewrite: true
CodeSwimBikeRunner picture CodeSwimBikeRunner  ·  22 Jan 2019
0

@ChristopherLClark This seems like something that could be better solved by a service mesh like istio?

wstrange picture wstrange  ·  23 Jan 2019
4

my workaround:

  1. serviceA in namespaceA
  2. create serviceB in namespaceB
spec:
    ...
    type: ExternalName
    externalName: serviceA.namespaceA.svc.cluster.local
  1. add ingress rule into ingressB in namespaceB
     - path: /****
        backend:
          serviceName: serviceB
          servicePort: ***

I tried, but I got an error like this:
Error resolving host "serviceB.namespaceB.svc.cluster.local": lookup serviceB.namespaceB.svc.cluster.local on 114.114.114.114:53: no such host
It seems like the ingress controller is using public DNS to resolve this hostname.
And the k8s docs say:
"ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName is intended to specify a canonical DNS name."

captainjack0x7C8 picture captainjack0x7C8  ·  20 Mar 2019
5

My solution

kind: Service
apiVersion: v1
metadata:
  name: serviceB
  namespace: namespaceB
spec:
  ports:
  - protocol: TCP
    port: <serviceB port>
    targetPort: <serviceA port>
---
kind: Endpoints
apiVersion: v1
metadata:
  name: serviceB
  namespace: namespaceB
subsets:
  - addresses:
    - ip: <serviceA clusterIP>
    ports:
    - port: <serviceA port>
captainjack0x7C8 picture captainjack0x7C8  ·  21 Mar 2019
5

my workaround:

  1. serviceA in namespaceA
  2. create serviceB in namespaceB
spec:
    ...
    type: ExternalName
    externalName: serviceA.namespaceA.svc.cluster.local
  1. add ingress rule into ingressB in namespaceB
     - path: /****
        backend:
          serviceName: serviceB
          servicePort: ***

This solution is not working in GKE.
It only allows me to use LoadBalancer or NodePort when I'm trying to reference my service with type as externalName.

My workaround was to go a bit lower and fire up an nginx pod in namespace A and point that to my service in namespace B, so I can use my nginx pod's service for the ingress in namespace A.

kHRISl33t picture kHRISl33t  ·  27 Mar 2019
2

my workaround:

  1. serviceA in namespaceA
  2. create serviceB in namespaceB
spec:
    ...
    type: ExternalName
    externalName: serviceA.namespaceA.svc.cluster.local
  1. add ingress rule into ingressB in namespaceB
     - path: /****
        backend:
          serviceName: serviceB
          servicePort: ***

Worked for me. Thanks

ronaldodia picture ronaldodia  ·  11 Apr 2019
5

my workaround:

  1. serviceA in namespaceA
  2. create serviceB in namespaceB
spec:
    ...
    type: ExternalName
    externalName: serviceA.namespaceA.svc.cluster.local
  1. add ingress rule into ingressB in namespaceB
     - path: /****
        backend:
          serviceName: serviceB
          servicePort: ***

Worked for me. Thanks

How? GKE does not allow ExternalName and asks for NodePort or LoadBalancer.

Any news on that issue? The workarounds do not work for me, since I consider using GCLB (anycast) for multi cluster. Thanks

canmanmake picture canmanmake  ·  23 May 2019
2

Warning Translate 14s (x13 over 24s) loadbalancer-controller error while evaluating the ingress spec: service "default/prometheus" is type "ExternalName", expected "NodePort" or "LoadBalancer"

The ExternalName approach doesn't work on GKE as canmanmake pointed out, here is the error message for people googling this issue.

CanKattwinkel picture CanKattwinkel  ·  3 Jun 2019
0

Any news?

Having an ingress per namespace does not seem like the best approach; I would also like to use one ingress with its issuer for all namespaces.

josefkorbel picture josefkorbel  ·  2 Aug 2019
1

Any news?

Having ingress per namespace does not seems like the best approach, I would also like to use one ingress with its issuer for all namespaces.

I'm not sure if you're having the same problem as me, but I managed to solve it by having a single ingress controller deployment and its service in the "ingress-nginx" namespace; then, in each of my other namespaces I wanted to use ingress for, I defined the Ingress definition in that namespace. It seems to "magically" combine all the individual ingress definitions under the main one, and it now works as I hoped (e.g. one ingress controller + external load balancer per cluster).

spenceclark picture spenceclark  ·  2 Aug 2019
2

some ingress providers (like nginx) will serve multiple ingress definitions in different namespaces via a single load balancer

others (like gce) will match load balancers 1:1 with ingress resources

liggitt picture liggitt  ·  2 Aug 2019
0

I'm not sure if you're having the same problem as me, but I managed to solve it by having a single ingress deployment and its service in the "ingress-nginx" namespace, then in each of my other namespaces I wanted to use ingress for, I defined the Ingress definition in that namespace. It seems to "magically" combine all the individual ingress defintions under the main one, and it now works as I hoped (e.g one ingress + external load balancer per cluster)

What? :O What about the issuers? Do they also magically connect?

josefkorbel picture josefkorbel  ·  6 Aug 2019
0

There is a misunderstanding here. The kube-proxy process running on all nodes in the cluster (including the master) will open the ingress-nginx NodePort on every node, but only the "host:" that you defined in your Ingress.yml will match, and from there it will redirect you to the nginx ingress proxy; the nginx controller then redirects you to the final service, which is expected to be in the same namespace as your Service, Endpoints and TLS secrets. E.g.:
host -> NodePort:31029 (kube-proxy) -> ingress-nginx pod:80/443 -> Service/Endpoint -> Pod

So in other words, the config that you apply using kubectl not only configures your ingress proxy, it also configures your kube-proxy, as they work together. Or at least that's what I see.

ernestomedina17 picture ernestomedina17  ·  12 Aug 2019
0

So in other words the config that you apply using kubectl not only configures your ingress proxy, it also configures your kube-proxy as they work together. Or at least that's what I see.

I don't think that's accurate. The kube-proxy role doesn't even include Ingress read permissions:

https://github.com/kubernetes/kubernetes/blob/0610bf0c7ed73a8e8204cb870e20c724b24c0600/plugin/pkg/auth/authorizer/rbac/bootstrappolicy/testdata/cluster-roles.yaml#L1099-L1123

I think the open node port probably comes from the Nginx service config

liggitt picture liggitt  ·  12 Aug 2019
8

Is there currently a solution or workaround available that works on GKE?
I tried the ExternalName approach with both gce-ingress and nginx-ingress, but in each case I end up with

error while evaluating the ingress spec: service "..." is type "ExternalName", expected "NodePort" or "LoadBalancer"

RobertoDonPedro picture RobertoDonPedro  ·  29 Jan 2020
0

Using Istio allows this kind of thing (among others). You can create a selector-less Service in your Ingress namespace and point your Ingress at it. Then you create a VirtualService that can point to a service in another namespace.
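
A rough sketch of the VirtualService half of that idea, with hypothetical names (an iam Service living in an auth namespace, fronted from an edge namespace that holds the Ingress); the exact wiring to the Ingress/gateway will vary with the Istio setup:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: iam                           # hypothetical
  namespace: edge                     # hypothetical namespace holding the Ingress
spec:
  hosts:
  - iam-placeholder                   # hypothetical selector-less Service the Ingress points at
  http:
  - route:
    - destination:
        host: iam.auth.svc.cluster.local   # the real Service in another namespace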

cabrinoob picture cabrinoob  ·  16 Mar 2020
0

@spenceclark
Did you name the ingress in your other namespace the same as the main one or different? Are you maybe on AWS?

EDIT: I managed to achieve the behaviour you described by creating an ingress per namespace and tying them to the same host, i.e. the AWS ELB DNS name. It somehow works.

Erokos picture Erokos  ·  6 Apr 2020
5

Just wanted to +1 implementing this.
Our use case is multiple staging versions of an application (one of the use cases the docs suggest for namespaces) that all have a new database that we would like to isolate from one another.

It would be great if we could use one load balancer/ingress to point to services in different namespaces to isolate the versions, but use one TLS cert, one DNS record, etc on the load balancer. It could even require an annotation like app.kubernetes.io/ingress.allow-across-namespace: true with a big red warning in the docs that it is a potential security risk.

tymokvo picture tymokvo  ·  15 Jun 2020
0

@Erokos

Did you name the ingress in your other namespace the same as the main one or different? Are you maybe on AWS?

On Azure, k8s 1.18, single node in case it matters (this is my 'test ideas out' cluster).
I created three different Ingress resources with different names, one in ingress-basic, and two more (with the path rules) over in the default namespace. I'm not exactly sure _how_ but it seems to be working great! :)
I speculate that because the Ingresses are all defined with the same host, that is the glue that helps wire things up - everything is currently working in the desired way with a single LB + services in a different namespace.

in my ingress-basic namespace:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ing
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
  - hosts:
    - example.dev
    secretName: tls-secret
  rules:
  - host: example.dev

I didn't define specific http.paths here in the 'parent' Ingress resource, since it feels more like a concern of the individual service, not the whole ingress.

Meanwhile over in the default namespace:
service1:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ing-be
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
  - hosts:
    - example.dev
    secretName: tls-secret
  rules:
  - host: example.dev
    http:
      paths:
      - path: /api(/|$)(.*)
        backend:
          serviceName: be-service
          servicePort: 80

service2:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ing-fe
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
  - hosts:
    - example.dev
    secretName: tls-secret
  rules:
  - host: example.dev
    http:
      paths:
      - path: /(.*)
        backend:
          serviceName: fe-service
          servicePort: 80
onpaws picture onpaws  ·  21 Jul 2020
0

+1 on this. In our case, we separate our environments into different namespaces, each with its own Ingress.

We wanted to transparently redirect to a single service instance (an IAM) used by all of these environments, by adding rules to each Ingress config, but it turns out ExternalName services are not accepted as targets, so we can't.

I don't see any obvious solution: the IAM service itself is in another namespace, so we can't reference it from our Ingresses in other namespaces; ExternalName is not accepted, so that trickery (changing our IAM access service from NodePort to ExternalName) is also a no-no. We are just stuck on this issue right now.

I read some messages about implementing a "bridge" service, referencing the target ClusterIP of a service in another namespace, but does this actually work? Has anyone succeeded and if so, how? I'm puzzled.

cfecherolle picture cfecherolle  ·  7 Oct 2020
0

+1 on this. In our case, we separate our environments in different namespaces, each one having its own Ingress.

We wanted to transparently redirect to a unique service instance (an IAM) used by all of these environments, by adding rules in each Ingress config, but it turns out ExternalName services are not accepted as targets so we can't.

I don't see any obvious solution: IAM service itself is in another namespace so we can't reference it from our Ingresses in other namespaces, ExternalName is not accepted so this trickery (changing our IAM access service from NodePort to ExternalName) is also a no-no, we are just stuck on this issue right now.

I read some messages about implementing a "bridge" service, referencing the target ClusterIp of a sevice in another namespace but does this actually work? Has anyone succeeded and if so, how? I'm puzzled.

@cfecherolle You just need to deploy an Ingress and a Service in one namespace, where the Service is configured as an ExternalName pointing to the service in the other namespace.

carlosjgp picture carlosjgp  ·  9 Oct 2020
0

Hi @carlosjgp.
I'm not sure I understand your answer. Could you please clarify: which namespace, and which Ingress and Service? In short, how do you separate and wire things together across namespaces?

cfecherolle picture cfecherolle  ·  9 Oct 2020
0

I am not sure the issue is satisfactorily resolved; in which cases does the ExternalName trick actually work?
To confirm: on AWS EKS (v1.16) with the ALB ingress controller, I don't seem to be able to have a service in namespace B (where the ingress lives) point to another with the same name in namespace A.
I tried two approaches:

Using headless endpoint service

apiVersion: v1
kind: Service
metadata:
  namespace: NAMESPACEB
  name: checks-service-integration-service
  annotations:
    "alb.ingress.kubernetes.io/healthcheck-path": "/v1/health_check"
    "alb.ingress.kubernetes.io/target-type": "ip"
    "alb.ingress.kubernetes.io/backend-protocol": "HTTP"
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: checks-service-integration-service
  namespace: NAMESPACEB
subsets:
  - addresses:
      - ip: 172.20.156.11 <- this is the service's virtualIP in NAMESPACEA
    ports:
      - protocol: TCP
        port: 80

This solution doesn't work (times out) because kube-proxy doesn't support Endpoints pointing at a virtual IP (according to the docs). It does work if I put the pod IP, but that's obviously not an acceptable solution.

Using an ExternalName service (as suggested in the response above)

apiVersion: v1
kind: Service
metadata:
  namespace: NAMESPACEB
  name: checks-service-integration-service
  annotations:
    "alb.ingress.kubernetes.io/healthcheck-path": "/v1/health_check"
    "alb.ingress.kubernetes.io/backend-protocol": "HTTP"
    "alb.ingress.kubernetes.io/target-type": "ip"
spec:
  externalName: checks-service-integration-service.NAMESPACEA.svc.cluster.local
  ports:
    - port: 80

This causes connection-refused errors when accessing checks-service-integration-service.NAMESPACEA.svc.cluster.local.
I am specifying IP mode because I don't think ExternalName works with NodePort, and hence I cannot use instance mode.

Thanks!

SpectralHiss picture SpectralHiss  ·  16 Oct 2020