Throughout this doc you will see a few terms that are sometimes used interchangeably elsewhere, which can cause confusion. This section attempts to clarify them.
Typically, services and pods have IPs only routable by the cluster network. All traffic that ends up at an edge router is either dropped or forwarded elsewhere. Conceptually, this might look like:
```
    internet
        |
  ------------
  [ Services ]
```
An Ingress is a collection of rules that allow inbound connections to reach the cluster services.
```
    internet
        |
   [ Ingress ]
   --|-----|--
   [ Services ]
```
It can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, offer name-based virtual hosting, and more. Users request ingress by POSTing the Ingress resource to the API server. An Ingress controller is responsible for fulfilling the Ingress, usually with a loadbalancer, though it may also configure your edge router or additional frontends to help handle the traffic in an HA manner.
Before you start using the Ingress resource, there are a few things you should understand. The Ingress is a beta resource, not available in any Kubernetes release prior to 1.1. You need an Ingress controller to satisfy an Ingress; simply creating the resource has no effect.
GCE/GKE deploys an ingress controller on the master. You can deploy any number of custom ingress controllers in a pod. You must annotate each Ingress with the appropriate class, as indicated in each controller's documentation.
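For illustration, a hedged sketch of how such a class annotation is commonly applied: the `kubernetes.io/ingress.class` annotation is the convention these controllers watch for, and the value `nginx` here assumes you are running the nginx controller (the Ingress name is hypothetical):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: annotated-ingress          # hypothetical name
  annotations:
    # claimed by the nginx controller; other controllers ignore it
    kubernetes.io/ingress.class: nginx
spec:
  backend:
    serviceName: test
    servicePort: 80
```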
A minimal Ingress might look like:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
```
POSTing this to the API server will have no effect if you have not configured an Ingress controller.
Lines 1-6: As with all other Kubernetes config, an Ingress needs `apiVersion`, `kind`, and `metadata` fields. For general information about working with config files, see deploying applications, configuring containers, and managing resources; the rewrite-target annotation used here is covered in the ingress rewrite configuration docs.
Lines 7-9: The Ingress spec has all the information needed to configure a loadbalancer or proxy server. Most importantly, it contains a list of rules matched against all incoming requests. Currently the Ingress resource only supports http rules.
Lines 10-11: Each http rule contains the following information: a host (e.g.: foo.bar.com; it defaults to * in this example, since none is specified) and a list of paths (e.g.: /testpath), each of which has an associated backend (test:80). Both the host and path must match the content of an incoming request before the loadbalancer directs traffic to the backend.
Lines 12-14: A backend is a service:port combination as described in the services doc (a minimal sketch of such a Service follows this list). Ingress traffic is typically sent directly to the endpoints matching a backend.
Global Parameters: For the sake of simplicity, the example Ingress has no global parameters; see the API reference for a full definition of the resource. You can specify a global default backend, to which requests that don’t match any path in the spec are sent; in its absence, such requests go to the default backend of the Ingress controller.
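As promised above, here is a minimal sketch of a Service that could satisfy the test:80 backend in the example; the `app: test` selector and the container port are assumptions about how the backing pods are set up:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  selector:
    app: test        # assumed pod label
  ports:
  - port: 80         # the servicePort referenced by the Ingress
    targetPort: 8080 # assumed container port
```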
In order for the Ingress resource to work, the cluster must have an Ingress controller running. This is unlike other types of controllers, which typically run as part of the `kube-controller-manager` binary and are started automatically as part of cluster creation. You need to choose the Ingress controller implementation that is the best fit for your cluster, or implement one; examples and instructions can be found in the ingress controller documentation.
The following document describes a set of cross-platform features exposed through the Ingress resource. Ideally, all Ingress controllers should fulfill this specification, but we’re not there yet. The GCE and nginx controllers each have their own docs; make sure you review the controller-specific docs so you understand the caveats of each one.
There are existing Kubernetes concepts that allow you to expose a single service (see alternatives); however, you can do so through an Ingress as well, by specifying a default backend with no rules.
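A sketch of such a single-service Ingress; the names test-ingress and testsvc are chosen to match the kubectl output below:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    # with no rules, all traffic is sent to this default backend
    serviceName: testsvc
    servicePort: 80
```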
If you create it using `kubectl create -f` you should see:
```shell
$ kubectl get ing
NAME           RULE      BACKEND      ADDRESS
test-ingress   -         testsvc:80   107.178.254.228
```
Where 107.178.254.228 is the IP allocated by the Ingress controller to satisfy this Ingress. The `RULE` column shows that all traffic sent to the IP is directed to the Kubernetes Service listed under `BACKEND`.
As described previously, pods within Kubernetes have IPs only visible on the cluster network, so we need something at the edge accepting ingress traffic and proxying it to the right endpoints. This component is usually a highly available loadbalancer. An Ingress allows you to keep the number of loadbalancers down to a minimum. For example, a setup like:
```
foo.bar.com -> 178.91.123.132 -> / foo    s1:80
                                 / bar    s2:80
```
would require an Ingress such as:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80
```
When you create the Ingress with `kubectl create -f`:
```shell
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
test      -
          foo.bar.com
          /foo          s1:80
          /bar          s2:80
```
The Ingress controller will provision an implementation-specific loadbalancer that satisfies the Ingress, as long as the services (s1, s2) exist. When it has done so, you will see the address of the loadbalancer in the last column of the Ingress.
Name-based virtual hosts use multiple host names for the same IP address.
```
foo.bar.com --|                 |-> foo.bar.com s1:80
              | 178.91.123.132  |
bar.foo.com --|                 |-> bar.foo.com s2:80
```
The following Ingress tells the backing loadbalancer to route requests based on the Host header.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
```
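One way to spot-check the host-based routing, assuming the loadbalancer has been provisioned at the address shown in the diagram above (178.91.123.132) and is reachable from your machine:

```shell
# both requests hit the same IP; the Host header picks the backend
$ curl -H 'Host: foo.bar.com' http://178.91.123.132/   # served by s1
$ curl -H 'Host: bar.foo.com' http://178.91.123.132/   # served by s2
```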
Default Backends: An Ingress with no rules, like the one shown in the previous section, sends all traffic to a single default backend. You can use the same technique to tell a loadbalancer where to find your website’s 404 page, by specifying a set of rules and a default backend. Traffic is routed to your default backend if none of the hosts in your Ingress match the Host in the request header, or none of the paths match the URL of the request.
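For illustration, a sketch of an Ingress combining rules with a global default backend; the default-404 Service is hypothetical and would serve your 404 page:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: with-default-backend
spec:
  backend:
    # hypothetical Service serving the 404 page; receives any
    # request whose host or path matches no rule below
    serviceName: default-404
    servicePort: 80
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
```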
You can secure an Ingress by specifying a secret that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. If the TLS configuration section in an Ingress specifies different hosts, they are multiplexed on the same port according to the hostname specified through the SNI TLS extension (provided the Ingress controller supports SNI). The TLS secret must contain keys named `tls.crt` and `tls.key`, which contain the certificate and private key to use for TLS, e.g.:
```yaml
apiVersion: v1
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
kind: Secret
metadata:
  name: testsecret
  namespace: default
type: Opaque
```
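If you have the certificate and key as local files, recent versions of kubectl can create a secret with the same `tls.crt`/`tls.key` entries directly (its type will be kubernetes.io/tls rather than Opaque); the file paths here are placeholders:

```shell
$ kubectl create secret tls testsecret --cert=path/to/tls.crt --key=path/to/tls.key
```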
Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the loadbalancer using TLS:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  - secretName: testsecret
  backend:
    serviceName: s1
    servicePort: 80
```
Note that there is a gap between TLS features supported by various Ingress controllers. Please refer to documentation on nginx, GCE, or any other platform specific Ingress controller to understand how TLS works in your environment.
An Ingress controller is bootstrapped with some loadbalancing policy settings that it applies to all Ingress objects, such as the loadbalancing algorithm, backend weight scheme, etc. More advanced loadbalancing concepts (e.g.: persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can still get these features through the service loadbalancer. With time, we plan to distill loadbalancing patterns that are applicable cross-platform into the Ingress resource.
It’s also worth noting that even though health checks are not exposed directly through the Ingress, there exist parallel concepts in Kubernetes such as readiness probes which allow you to achieve the same end result. Please review the controller specific docs to see how they handle health checks (nginx, GCE).
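For reference, a sketch of a readiness probe on a pod backing one of the services above; the image and the /healthz path are assumptions about your application:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: s1-backend       # hypothetical pod backing the s1 Service
  labels:
    app: s1
spec:
  containers:
  - name: web
    image: nginx         # assumed image
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /healthz   # assumed health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```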
Say you’d like to add a new host to an existing Ingress. You can update it by editing the resource:
```shell
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
test      -                       178.91.123.132
          foo.bar.com
          /foo          s1:80
$ kubectl edit ing test
```
This should pop up an editor with the existing YAML; modify it to include the new host:
```yaml
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
        path: /foo
  - host: bar.baz.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
        path: /foo
..
```
Saving it will update the resource in the API server, which should tell the Ingress controller to reconfigure the loadbalancer.
```shell
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
test      -                       178.91.123.132
          foo.bar.com
          /foo          s1:80
          bar.baz.com
          /foo          s2:80
```
You can achieve the same outcome by invoking `kubectl replace -f` on a modified Ingress YAML file.
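A sketch of that workflow; the file name is arbitrary:

```shell
$ kubectl get ing test -o yaml > test-ingress.yaml
# edit test-ingress.yaml to add the new host rule
$ kubectl replace -f test-ingress.yaml
```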
Techniques for spreading traffic across failure domains differ between cloud providers. Please check the documentation of the relevant Ingress controller for details. Please refer to the federation doc for details on deploying Ingress in a federated cluster.
You can expose a Service in multiple ways that don’t directly involve the Ingress resource: