Federation V1, the current Kubernetes federation API, which reuses the Kubernetes API resources 'as is', is considered alpha for many of its features. There is no clear path to evolve the API to GA; however, there is a Federation V2 effort in progress to implement a dedicated federation API, separate from the Kubernetes API. The details are available on the sig-multicluster community page.
This page explains why and how to manage multiple Kubernetes clusters using federation.
Federation makes it easy to manage multiple clusters. It does so by providing two major building blocks:

* Sync resources across clusters: Federation provides the ability to keep resources in multiple clusters in sync.
* Cross cluster discovery: Federation provides the ability to auto-configure DNS servers and load balancers with backends from all clusters.
Some other use cases that federation enables are:
Federation is not helpful unless you have multiple clusters. Some of the reasons why you might want multiple clusters are:
While there are a lot of attractive use cases for federation, there are also some caveats:
Federations of Kubernetes clusters can include clusters running in different cloud providers (e.g. Google Cloud, AWS) and on-premises (e.g. on OpenStack). Kubefed is the recommended way to deploy federated clusters.
Thereafter, your API resources can span different clusters and cloud providers.
To be able to federate multiple clusters, you first need to set up a federation control plane. Follow the setup guide to set up the federation control plane.
Once you have the control plane set up, you can start creating federation API resources. The following guides explain some of the resources in detail:
The API reference docs list all the resources supported by federation apiserver.
Kubernetes version 1.6 includes support for cascading deletion of federated resources. With cascading deletion, when you delete a resource from the federation control plane, you also delete the corresponding resources in all underlying clusters.
Cascading deletion is not enabled by default when using the REST API. To enable it, set the option DeleteOptions.orphanDependents=false when you delete a resource from the federation control plane using the REST API. Using kubectl delete enables cascading deletion by default. You can disable it by running kubectl delete --cascade=false.
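As a sketch, the DeleteOptions body that a REST client would send can be built as follows (the namespace and resource names in the comment are illustrative, not from this page):

```python
import json

# DeleteOptions body sent with a DELETE request to the federation
# control plane, e.g.:
#   DELETE /apis/extensions/v1beta1/namespaces/my-ns/replicasets/my-rs
# (path is a hypothetical example).
# Setting orphanDependents to false enables cascading deletion, so the
# corresponding resources in the underlying clusters are deleted too.
delete_options = {
    "kind": "DeleteOptions",
    "apiVersion": "v1",
    "orphanDependents": False,
}

payload = json.dumps(delete_options)
print(payload)
```

Omitting the field (or setting it to true) leaves the resources in the underlying clusters in place when the federated resource is deleted.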
Note: Kubernetes version 1.5 included cascading deletion support for a subset of federation resources.
On IaaS providers such as Google Compute Engine or Amazon Web Services, a VM exists in a zone or availability zone. We suggest that all the VMs in a Kubernetes cluster should be in the same availability zone, because:
It is recommended to run fewer clusters with more VMs per availability zone; but it is possible to run multiple clusters per availability zone.
Reasons to prefer fewer clusters per availability zone are:
Reasons to have multiple clusters include:
The selection of the number of Kubernetes clusters may be a relatively static choice, only revisited occasionally. By contrast, the number of nodes in a cluster and the number of pods in a service may change frequently according to load and growth.
To pick the number of clusters, first decide which regions you need to be in to have adequate latency to all your end users for services that will run on Kubernetes (if you use a Content Distribution Network, the latency requirements for CDN-hosted content need not be considered). Legal issues might influence this as well. For example, a company with a global customer base might decide to have clusters in US, EU, AP, and SA regions.
Call the number of regions to be in R.
Second, decide how many clusters should be able to be unavailable at the same time while the service as a whole remains available. Call the number that can be unavailable U. If you are not sure, then 1 is a fine choice.
If it is allowable for load-balancing to direct traffic to any region in the event of a cluster failure, then you need at least the larger of R or U + 1 clusters. If it is not (e.g. you want to ensure low latency for all users in the event of a cluster failure), then you need to have R * (U + 1) clusters (U + 1 in each of R regions). In any case, try to put each cluster in a different zone.
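The sizing rule above can be sketched in a few lines (the function and parameter names here are my own, for illustration):

```python
def clusters_needed(regions: int, max_unavailable: int,
                    allow_cross_region_failover: bool) -> int:
    """Minimum cluster count for R regions while tolerating U
    simultaneously unavailable clusters, per the rule above."""
    r, u = regions, max_unavailable
    if allow_cross_region_failover:
        # Traffic may be redirected to any region on failure: you need
        # at least one cluster per region (R) and at least U + 1
        # clusters overall so that U failures still leave one running.
        return max(r, u + 1)
    # Every region must keep serving locally even with U failures:
    # U + 1 clusters in each of the R regions.
    return r * (u + 1)

# Example: 4 regions (US, EU, AP, SA), tolerating 1 unavailable cluster.
print(clusters_needed(4, 1, allow_cross_region_failover=True))   # 4
print(clusters_needed(4, 1, allow_cross_region_failover=False))  # 8
```

With cross-region failover allowed, the four-region example needs only 4 clusters; requiring every region to survive a failure locally doubles that to 8.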
Finally, if any of your clusters would need more than the maximum recommended number of nodes for a Kubernetes cluster, then you may need even more clusters. Kubernetes v1.3 supports clusters up to 1000 nodes in size. Kubernetes v1.8 supports clusters up to 5000 nodes. See Building Large Clusters for more guidance.