An API object that manages external access to the services in a cluster, typically HTTP.
Ingress can provide load balancing, SSL termination and name-based virtual hosting.
For the sake of clarity, this guide defines the following terms:
Ingress, added in Kubernetes v1.1, exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the ingress resource.
```
    internet
        |
   [ Ingress ]
   --|-----|--
   [ Services ]
```
An ingress can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, and offer name-based virtual hosting. An ingress controller is responsible for fulfilling the ingress, usually with a loadbalancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
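For instance, a non-HTTP TCP service can be exposed directly with a NodePort service instead of an ingress. A minimal sketch (the service name, selector, and port numbers here are hypothetical):

```yaml
# Exposes a TCP service on a port of every node, bypassing the ingress layer.
apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service    # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-tcp-app       # hypothetical pod label
  ports:
  - protocol: TCP
    port: 9000            # port exposed inside the cluster
    targetPort: 9000      # port the pods listen on
    nodePort: 30090       # optional; must fall in the node-port range (default 30000-32767)
```

A `Service.Type=LoadBalancer` service is declared the same way, with `type: LoadBalancer` and no `nodePort` field.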
Before you start using an ingress, there are a few things you should understand. The ingress is a beta resource. You will need an ingress controller to satisfy an ingress; simply creating the resource will have no effect.
In order for the ingress resource to work, the cluster must have an ingress controller running. This is unlike other types of controllers, which run as part of the
kube-controller-manager binary, and are typically started automatically with a cluster. Choose the ingress controller implementation that best fits your cluster.
A number of additional third-party ingress controllers are also available.
Note: Review the documentation for your controller to find its specific support policy.
You may deploy any number of ingress controllers within a cluster.
When you create an ingress, you should annotate each ingress with the appropriate
ingress-class to indicate which ingress
controller should be used if more than one exists within your cluster.
If you do not define a class, your cloud provider may use a default ingress provider.
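As a sketch, the `kubernetes.io/ingress.class` annotation pins an ingress to a specific controller (the resource and service names below are hypothetical):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress           # hypothetical name
  annotations:
    # Tells the cluster which ingress controller should fulfill this resource.
    kubernetes.io/ingress.class: "nginx"
spec:
  backend:
    serviceName: example-service  # hypothetical service
    servicePort: 80
```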
A minimal ingress example:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
```
Lines 1-6: As with all other Kubernetes resources, an ingress needs apiVersion, kind, and metadata fields.
For general information about working with config files, see deploying applications, configuring containers, and managing resources. Ingress frequently uses annotations to configure some options depending on the ingress controller, an example of which is the rewrite-target annotation. Different ingress controllers support different annotations. Review the documentation for the ingress controller you are using to learn which annotations are supported and which options are valid.
Lines 7-9: The ingress spec has all the information needed to configure a loadbalancer or proxy server. Most importantly, it contains a list of rules matched against all incoming requests. The ingress resource only supports rules for directing HTTP traffic.
Lines 10-11: Each HTTP rule contains the following information: an optional host (when no host is specified, the rule applies to all inbound HTTP traffic), a list of paths (for example, /testpath), and a backend defined by a serviceName and servicePort. Both the host and path must match the content of an incoming request before the loadbalancer directs traffic to the referenced service.
Lines 12-14: A backend is a service:port combination as described in the services doc. Ingress traffic is typically sent directly to the endpoints matching a backend.
A default backend is often configured in an ingress controller to serve any requests that don’t match a path in the spec.
Ideally, all ingress controllers should fulfill this specification, but the various ingress controllers operate slightly differently. The Kubernetes project supports and maintains the GCE and nginx ingress controllers.
Note: Make sure you review your ingress controller’s specific docs to understand the caveats.
There are existing Kubernetes concepts that allow you to expose a single Service (see alternatives). You can also do this with an ingress by specifying a default backend with no rules.
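Such a single-service ingress might look like the following sketch, which matches the `test-ingress` output shown below; the backing service name `testsvc` is an assumption:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    # With no rules defined, all traffic is sent to this default backend.
    serviceName: testsvc   # assumed service name
    servicePort: 80
```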
If you create it using `kubectl create -f`, you should see:
```shell
kubectl get ingress test-ingress
```

```
NAME           HOSTS   ADDRESS        PORTS   AGE
test-ingress   *       188.8.131.52   80      59s
```
188.8.131.52 is the IP allocated by the ingress controller to satisfy this ingress.
Note: Ingress controllers and load balancers may take a minute or two to allocate an IP address. Until then, you will often see the address listed as `<pending>`.
A fanout configuration routes traffic from a single IP address to more than one service, based on the HTTP URI being requested. An ingress allows you to keep the number of loadbalancers down to a minimum. For example, a setup like:
```
foo.bar.com -> 220.127.116.11 -> / foo    service1:4200
                                 / bar    service2:8080
```
would require an ingress such as:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: service1
          servicePort: 4200
      - path: /bar
        backend:
          serviceName: service2
          servicePort: 8080
```
When you create the ingress with `kubectl create -f`:
```shell
kubectl describe ingress simple-fanout-example
```

```
Name:             simple-fanout-example
Namespace:        default
Address:          18.104.22.168
Default backend:  default-http-backend:80 (10.8.2.3:8080)
Rules:
  Host         Path  Backends
  ----         ----  --------
  foo.bar.com
               /foo   service1:4200 (10.8.0.90:4200)
               /bar   service2:8080 (10.8.0.91:8080)
Annotations:
  nginx.ingress.kubernetes.io/rewrite-target:  /
Events:
  Type    Reason  Age  From                     Message
  ----    ------  ---  ----                     -------
  Normal  ADD     22s  loadbalancer-controller  default/test
```
The ingress controller will provision an implementation-specific loadbalancer that satisfies the ingress, as long as the referenced services (service1, service2) exist. When it has done so, you will see the address of the loadbalancer in the Address field.
Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address.
```
foo.bar.com --|                  |-> foo.bar.com s1:80
              | 22.214.171.124   |
bar.foo.com --|                  |-> bar.foo.com s2:80
```
The following ingress tells the backing loadbalancer to route requests based on the Host header.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: service1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: service2
          servicePort: 80
```
Default Backends: An ingress with no rules, like the one shown in the previous section, sends all traffic to a single default backend. You can use the same technique to tell a loadbalancer where to find your website’s 404 page, by specifying a set of rules and a default backend. Traffic is routed to your default backend if none of the Hosts in your ingress match the Host in the request header, and/or none of the paths match the URL of the request.
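A sketch combining rules with an explicit default backend (the resource name and the 404 service name are hypothetical):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-with-default   # hypothetical name
spec:
  backend:
    # Receives any request whose Host or path matches no rule below,
    # e.g. to serve a custom 404 page.
    serviceName: custom-404   # hypothetical service
    servicePort: 80
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: service1
          servicePort: 4200
```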
You can secure an ingress by specifying a secret
that contains a TLS private key and certificate. Currently the ingress only
supports a single TLS port, 443, and assumes TLS termination. If the TLS
configuration section in an ingress specifies different hosts, they will be
multiplexed on the same port according to the hostname specified through the
SNI TLS extension (provided the ingress controller supports SNI). The TLS secret
must contain keys named tls.crt and tls.key that contain the certificate
and private key to use for TLS, e.g.:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: testsecret-tls
  namespace: default
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
type: Opaque
```
Referencing this secret in an ingress will tell the ingress controller to
secure the channel from the client to the loadbalancer using TLS. You need to make
sure the TLS secret you created came from a certificate that contains a CN for the TLS host, sslexample.foo.com in this example:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - sslexample.foo.com
    secretName: testsecret-tls
  rules:
  - host: sslexample.foo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service1
          servicePort: 80
```
An ingress controller is bootstrapped with some load balancing policy settings that it applies to all ingresses, such as the load balancing algorithm, backend weight scheme, and others. More advanced load balancing concepts (e.g. persistent sessions, dynamic weights) are not yet exposed through the ingress. You can still get these features through the service loadbalancer.
It’s also worth noting that even though health checks are not exposed directly through the ingress, there exist parallel concepts in Kubernetes such as readiness probes which allow you to achieve the same end result. Please review the controller specific docs to see how they handle health checks ( nginx, GCE).
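As a sketch, a readiness probe on the backing pods keeps unhealthy endpoints out of the load-balancing pool (the container name, image, path, and port below are hypothetical):

```yaml
# Pod template fragment: the endpoint is removed from the service
# (and thus stops receiving ingress traffic) while the probe fails.
containers:
- name: web                 # hypothetical container
  image: example/web:1.0    # hypothetical image
  readinessProbe:
    httpGet:
      path: /healthz        # hypothetical health endpoint
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
```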
To update an existing ingress to add a new host, start by inspecting its current state:
```shell
kubectl describe ingress test
```

```
Name:             test
Namespace:        default
Address:          126.96.36.199
Default backend:  default-http-backend:80 (10.8.2.3:8080)
Rules:
  Host         Path  Backends
  ----         ----  --------
  foo.bar.com
               /foo   s1:80 (10.8.0.90:80)
Annotations:
  nginx.ingress.kubernetes.io/rewrite-target:  /
Events:
  Type    Reason  Age  From                     Message
  ----    ------  ---  ----                     -------
  Normal  ADD     35s  loadbalancer-controller  default/test
```
```shell
kubectl edit ingress test
```
This should pop up an editor with the existing YAML; modify it to include the new host:
```yaml
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
        path: /foo
  - host: bar.baz.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
        path: /foo
..
```
Saving the yaml will update the resource in the API server, which should tell the ingress controller to reconfigure the loadbalancer.
```shell
kubectl describe ingress test
```

```
Name:             test
Namespace:        default
Address:          188.8.131.52
Default backend:  default-http-backend:80 (10.8.2.3:8080)
Rules:
  Host         Path  Backends
  ----         ----  --------
  foo.bar.com
               /foo   s1:80 (10.8.0.90:80)
  bar.baz.com
               /foo   s2:80 (10.8.0.91:80)
Annotations:
  nginx.ingress.kubernetes.io/rewrite-target:  /
Events:
  Type    Reason  Age  From                     Message
  ----    ------  ---  ----                     -------
  Normal  ADD     45s  loadbalancer-controller  default/test
```
You can achieve the same result by invoking `kubectl replace -f` on a modified ingress YAML file.
Techniques for spreading traffic across failure domains differ between cloud providers. Please check the documentation of the relevant ingress controller for details. You can also refer to the federation documentation for details on deploying ingress in a federated cluster.
You can expose a service in multiple ways that don’t directly involve the ingress resource:

- Use Service.Type=LoadBalancer
- Use Service.Type=NodePort