Ubuntu 16.04 introduced the Canonical Distribution of Kubernetes, a pure upstream distribution of Kubernetes designed for production usage. This page shows you how to deploy a cluster.

Out of the box it comes with the following components on 9 machines:

- Kubernetes master (1 machine) and workers (3 machines)
- Etcd, a distributed key/value store (3 machines)
- A load balancer in front of the Kubernetes API server (1 machine)
- EasyRSA, the certificate authority serving the cluster's TLS certificates (1 machine)
- Flannel software-defined networking running alongside the master and workers

| IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level |
|---------------|--------------|----|------------|------|----------|---------------|
| Amazon Web Services (AWS) | Juju | Ubuntu | flannel, calico* | docs | | Commercial, Community (@mbruzek, @chuckbutler) |
| OpenStack | Juju | Ubuntu | flannel, calico | docs | | Commercial, Community (@mbruzek, @chuckbutler) |
| Microsoft Azure | Juju | Ubuntu | flannel | docs | | Commercial, Community (@mbruzek, @chuckbutler) |
| Google Compute Engine (GCE) | Juju | Ubuntu | flannel, calico | docs | | Commercial, Community (@mbruzek, @chuckbutler) |
| Joyent | Juju | Ubuntu | flannel | docs | | Commercial, Community (@mbruzek, @chuckbutler) |
| Rackspace | Juju | Ubuntu | flannel | docs | | Commercial, Community (@mbruzek, @chuckbutler) |
| VMware vSphere | Juju | Ubuntu | flannel, calico | docs | | Commercial, Community (@mbruzek, @chuckbutler) |
| Bare Metal (MAAS) | Juju | Ubuntu | flannel, calico | docs | | Commercial, Community (@mbruzek, @chuckbutler) |

For support level information on all solutions, see the Table of solutions chart.

The Juju Kubernetes work is curated by a dedicated team of community members; let us know how we are doing. If you find any problems, please open an issue on our tracker so we can fix them.
After deciding which cloud to deploy to, follow the cloud setup page to configure deployment to that cloud.
Load your cloud credentials for each cloud provider you would like to use.
In this example:

```
juju add-credential aws
credential name: my_credentials
select auth-type [userpass, oauth, etc]: userpass
enter username: jorge
enter password: *******
```
You can also auto-load credentials for popular clouds with the `juju autoload-credentials` command, which will auto-import your credentials from the default files and environment variables for each cloud.
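For example, to import credentials from their default locations and then confirm what Juju has stored (interactive prompts and output omitted here):

```
juju autoload-credentials   # scans default credential files and environment variables
juju credentials            # list the credentials Juju now knows about
```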
Next we need to bootstrap a controller to manage the cluster. You need to define the cloud you want to bootstrap on, the region, and then any name for your controller node:
```
juju update-clouds # This command ensures all the latest regions are up to date on your client
juju bootstrap aws/us-east-2
```
or, another example, this time on Azure:
```
juju bootstrap azure/centralus
```
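If you are unsure which cloud and region names to use, Juju can list them (an optional check):

```
juju clouds        # clouds this Juju client knows about
juju regions aws   # regions available for a given cloud
```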
You will need a controller node for each cloud or region you are deploying to. See the controller documentation for more information.
Note that each controller can host multiple Kubernetes clusters in a given cloud or region.
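For instance, to manage clusters in two different clouds you might bootstrap a named controller in each; the controller names below are hypothetical:

```
juju bootstrap aws/us-east-2 k8s-aws      # hypothetical controller name
juju bootstrap azure/centralus k8s-azure  # hypothetical controller name
```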
The following command will deploy the initial 9-node starter cluster. The speed of execution depends heavily on the performance of the cloud you're deploying to:
```
juju deploy canonical-kubernetes
```
After this command executes, the cloud will launch instances and begin the deployment process.
The `juju status` command provides information about each unit in the cluster. Use the `watch -c juju status --color` command to get a real-time view of the cluster as it deploys. When all the states are green and "Idle", the cluster is ready to be used:
```
Model    Controller     Cloud/Region   Version
default  aws-us-east-2  aws/us-east-2  2.0.1

App                    Version  Status       Scale  Charm                  Store       Rev  OS      Notes
easyrsa                3.0.1    active       1      easyrsa                jujucharms  3    ubuntu
etcd                   3.1.2    active       3      etcd                   jujucharms  14   ubuntu
flannel                0.6.1    maintenance  4      flannel                jujucharms  5    ubuntu
kubeapi-load-balancer  1.10.0   active       1      kubeapi-load-balancer  jujucharms  3    ubuntu  exposed
kubernetes-master      1.6.1    active       1      kubernetes-master      jujucharms  6    ubuntu
kubernetes-worker      1.6.1    active       3      kubernetes-worker      jujucharms  8    ubuntu  exposed
topbeat                         active       3      topbeat                jujucharms  5    ubuntu

Unit                      Workload  Agent  Machine  Public address  Ports     Message
easyrsa/0*                active    idle   0        126.96.36.199             Certificate Authority connected.
etcd/0                    active    idle   3        188.8.131.52    2379/tcp  Healthy with 3 known peers.
etcd/1*                   active    idle   4        184.108.40.206  2379/tcp  Healthy with 3 known peers. (leader)
etcd/2                    active    idle   5        220.127.116.11  2379/tcp  Healthy with 3 known peers.
kubeapi-load-balancer/0*  active    idle   7        18.104.22.168   443/tcp   Loadbalancer ready.
kubernetes-master/0*      active    idle   8        22.214.171.124  6443/tcp  Kubernetes master services ready.
  flannel/3               active    idle            126.96.36.199             Flannel subnet 10.1.48.1/24
kubernetes-worker/0*      active    idle   9        188.8.131.52              Kubernetes worker running.
  flannel/2               active    idle            184.108.40.206            Flannel subnet 10.1.53.1/24
kubernetes-worker/1       active    idle   10       220.127.116.11            Kubernetes worker running.
  flannel/0*              active    idle            18.104.22.168             Flannel subnet 10.1.31.1/24
kubernetes-worker/2       active    idle   11       22.214.171.124            Kubernetes worker running.
  flannel/1               active    idle            126.96.36.199             Flannel subnet 10.1.83.1/24

Machine  State    DNS             Inst id              Series  AZ
0        started  188.8.131.52    i-06e66414008eca61c  xenial  us-east-2c
3        started  184.108.40.206  i-0038186d2c5103739  xenial  us-east-2b
4        started  220.127.116.11  i-0ac66c86a8ec93b18  xenial  us-east-2a
5        started  18.104.22.168   i-078cfe79313d598c9  xenial  us-east-2c
7        started  22.214.171.124  i-00fd70321a51b658b  xenial  us-east-2c
8        started  126.96.36.199   i-0109a5fc942c53ed7  xenial  us-east-2b
9        started  188.8.131.52    i-0ab63e34959cace8d  xenial  us-east-2b
10       started  184.108.40.206  i-0108a8cc0978954b5  xenial  us-east-2a
11       started  220.127.116.11  i-0f5562571c649f0f2  xenial  us-east-2c
```
After the cluster is deployed you may assume control over the cluster from any kubernetes-master or kubernetes-worker node.
First you need to download the credentials and client application to your local workstation:
Create the kubectl config directory.

```
mkdir -p ~/.kube
```
Copy the kubeconfig file to the default location.

```
juju scp kubernetes-master/0:config ~/.kube/config
```
Fetch a binary for the architecture you have deployed. If your client is a different architecture you will need to get the appropriate `kubectl` binary through other means. In this example we copy kubectl to `~/bin` for convenience; by default this should be in your `$PATH`.
```
mkdir -p ~/bin
juju scp kubernetes-master/0:kubectl ~/bin/kubectl
```
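If `~/bin` is not already on your `$PATH`, you can add it for the current session and verify the client runs; this sketch assumes a bash-like shell:

```
export PATH=$PATH:$HOME/bin   # assumes a bash-like shell
kubectl version               # verify the client runs and can reach the cluster
```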
Query the cluster with `kubectl cluster-info`:

```
Kubernetes master is running at https://18.104.22.168:443
Heapster is running at https://22.214.171.124:443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://126.96.36.199:443/api/v1/namespaces/kube-system/services/kube-dns/proxy
Grafana is running at https://188.8.131.52:443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
InfluxDB is running at https://184.108.40.206:443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy
```
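As a further check, listing the nodes should show the three kubernetes-worker machines registered with the cluster (node names vary per deployment):

```
kubectl get nodes
```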
Congratulations, you’ve now set up a Kubernetes cluster!
Want larger Kubernetes nodes? It is easy to request different sizes of cloud resources from Juju by using constraints. You can increase the amount of CPU or memory (RAM) in any of the systems requested by Juju, which allows you to fine-tune the Kubernetes cluster to fit your workload. Use constraint flags on the bootstrap command or the separate `juju set-constraints` command. See the Juju documentation for machine constraints for more information.
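As a sketch of both approaches (the sizes below are arbitrary examples, not recommendations):

```
# Size the controller machine at bootstrap time:
juju bootstrap aws/us-east-2 --bootstrap-constraints "mem=8G cores=4"

# Or constrain an application's machines; this applies to units added afterwards:
juju set-constraints kubernetes-worker cores=4 mem=16G
```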
Need more workers? We just add more units:
```
juju add-unit kubernetes-worker
```
Or multiple units at one time:
```
juju add-unit -n3 kubernetes-worker
```
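As the new machines come up, you can watch just the workers by filtering `juju status` by application name:

```
juju status kubernetes-worker
```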
You can also ask for specific instance types or other machine-specific constraints. See the constraints documentation for more information. Here are some examples; note that generic constraints such as `mem` are more portable between clouds. In this case we'll ask for a specific instance type from AWS:
```
juju set-constraints kubernetes-worker instance-type=c4.large
juju add-unit kubernetes-worker
```
You can also scale the etcd charm for more fault-tolerant key/value storage:

```
juju add-unit -n3 etcd
```
It is strongly recommended to run an odd number of units for quorum.
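For example, since the bundle already deploys three etcd units, adding two more yields a five-member cluster that can tolerate two failed units (purely illustrative):

```
juju add-unit -n2 etcd   # 3 existing units + 2 new = 5, still an odd number
```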
If you want to stop the servers you can destroy the Juju model or the controller. Use the `juju switch` command to get the current controller name:
```
juju switch
juju destroy-controller $controllername --destroy-all-models
```
This will shut down and terminate all running instances on that cloud.
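Putting the two steps together with the controller name from the earlier status output (yours will differ):

```
juju switch   # e.g. prints aws-us-east-2:admin/default
juju destroy-controller aws-us-east-2 --destroy-all-models
```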
The Ubuntu Kubernetes deployment uses open-source operations, or operations as code, known as charms. These charms are assembled from layers, which keeps the code small and focused on the operations of just Kubernetes and its components.
The Kubernetes layer and bundles can be found in the Kubernetes project on github.com.
Feature requests, bug reports, pull requests, or any feedback would be much appreciated.