This quickstart shows you how to easily install a secure Kubernetes cluster on machines running Ubuntu 16.04 or CentOS 7.
The installation uses a tool called kubeadm, which is part of Kubernetes 1.4.
This process works with local VMs, physical servers, and/or cloud servers. It is simple enough that you can easily integrate it into your own automation (Terraform, Chef, Puppet, etc.).
The kubeadm tool is currently in alpha, but please try it out and give us feedback!
You will install the following packages on all the machines:

- docker: the container runtime, which Kubernetes depends on.
- kubelet: the core node agent of Kubernetes. It runs on all of the machines in your cluster and does things like starting pods and containers.
- kubectl: the command to control the cluster once it's running. You will only use this on the master.
- kubeadm: the command to bootstrap the cluster.
For each host in turn:
SSH into the machine and become root if you are not already (for example, run sudo su -).
If the machine is running Ubuntu 16.04, run:
```bash
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# apt-get update
# apt-get install -y docker.io kubelet kubeadm kubectl kubernetes-cni
```
If the machine is running CentOS 7, run:
```bash
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# setenforce 0
# yum install -y docker kubelet kubeadm kubectl kubernetes-cni
# systemctl enable docker && systemctl start docker
# systemctl enable kubelet && systemctl start kubelet
```
The kubelet is now restarting every few seconds, as it waits in a crashloop for
kubeadm to tell it what to do.
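This is expected and harmless. If you want to see it for yourself, you can inspect the kubelet with the standard systemd tools (nothing kubeadm-specific here):

```bash
# Show the kubelet's current state (it will report repeated restarts)
systemctl status kubelet
# Follow its logs as it crashloops
journalctl -u kubelet -f
```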
setenforce 0 will no longer be necessary on CentOS once #33555 is included in a released version of Kubernetes.
The master is the machine where the “control plane” components run, including
etcd (the cluster database) and the API server (which the
kubectl CLI communicates with).
All of these components run in pods started by the kubelet.
To initialize the master, pick one of the machines you previously installed
kubeadm on, and run:
```bash
# kubeadm init
```
Note: this will autodetect the network interface to advertise the master on as the interface with the default gateway.
If you want to use a different interface, specify the --api-advertise-addresses=<ip-address> argument to kubeadm init.
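For example (192.168.1.10 is an illustrative address; substitute the IP of the interface you want the master advertised on):

```bash
kubeadm init --api-advertise-addresses=192.168.1.10
```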
This will download and install the cluster database and “control plane” components. This may take several minutes.
The output should look like:
```
<master/tokens> generated token: "f0c861.753c505740ecde4c"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
<master/apiclient> all control plane components are healthy after 61.346626 seconds
<master/apiclient> waiting for at least one node to register and become ready
<master/apiclient> first node is ready after 4.506807 seconds
<master/discovery> created essential addon: kube-discovery
<master/addons> created essential addon: kube-proxy
<master/addons> created essential addon: kube-dns

Kubernetes master initialised successfully!

You can connect any number of nodes by running:

kubeadm join --token <token> <master-ip>
```
Make a record of the
kubeadm join command that
kubeadm init outputs.
You will need this in a moment.
The key included here is secret; keep it safe, as anyone with this key can add authenticated nodes to your cluster.
The key is used for mutual authentication between the master and the joining nodes.
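If you want to keep the command somewhere more durable than your terminal scrollback, one option is to store it in a root-only file on the master (the path here is just an illustration):

```bash
# Save the join command where only root can read it
echo 'kubeadm join --token <token> <master-ip>' > /root/join-command.txt
chmod 600 /root/join-command.txt
```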
By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, for example if you want a single-machine Kubernetes cluster for development, run:
```
# kubectl taint nodes --all dedicated-
node "test-01" tainted
taint key="dedicated" and effect="" not found.
taint key="dedicated" and effect="" not found.
```
This will remove the “dedicated” taint from any nodes that have it, including the master node, meaning that the scheduler will then be able to schedule pods everywhere.
The nodes are where your workloads (containers and pods, etc) run.
If you want to add any new machines as nodes to your cluster, SSH to each machine, become root (e.g. sudo su -), and run the command that was output by kubeadm init. For example:
```
# kubeadm join --token <token> <master-ip>
<util/tokens> validating provided token
<node/discovery> created cluster info discovery client, requesting info from "http://22.214.171.124:9898/cluster-info/v1/?token-id=0f8588"
<node/discovery> cluster info object received, verifying signature using given token
<node/discovery> cluster info signature and contents are valid, will use API endpoints [https://126.96.36.199:443]
<node/csr> created API client to obtain unique certificate for this node, generating keys and certificate signing request
<node/csr> received signed certificate from the API server, generating kubelet configuration
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
```
A few seconds later, you should notice that running
kubectl get nodes on the master shows a cluster with as many machines as you created.
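For example, on a cluster consisting of a master and one node, the output will look something like this (the names and ages are illustrative):

```
# kubectl get nodes
NAME      STATUS    AGE
test-01   Ready     1h
test-02   Ready     2m
```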
YOUR CLUSTER IS NOT READY YET!
Before you can deploy applications to it, you need to install a pod network.
You must install a pod network add-on so that your pods can communicate with each other when they are on different hosts. It is necessary to do this before you try to deploy any applications to your cluster.
Several projects provide Kubernetes pod networks. You can see a complete list of available network add-ons on the add-ons page.
By way of example, you can install Weave Net by logging in to the master and running:
```
# kubectl apply -f https://git.io/weave-kube
daemonset "weave-net" created
```
Once a pod network has been installed, you can confirm that it is working by checking that the
kube-dns pod is
Running in the output of
kubectl get pods --all-namespaces.
This signifies that your cluster is ready.
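One convenient way to do this is to watch the pod list until kube-dns flips to Running (press Ctrl-C to stop watching):

```bash
# Continuously report pod status changes across all namespaces
kubectl get pods --all-namespaces --watch
```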
As an example, install a sample microservices application, a socks shop, to put your cluster through its paces. To learn more about the sample microservices app, see the GitHub README.
```bash
# git clone https://github.com/microservices-demo/microservices-demo
# kubectl apply -f microservices-demo/deploy/kubernetes/manifests/sock-shop-ns.yml -f microservices-demo/deploy/kubernetes/manifests
```
You can then find out the port that the NodePort feature of services allocated for the front-end service by running:
```
# kubectl describe svc front-end -n sock-shop
Name:                   front-end
Namespace:              sock-shop
Labels:                 name=front-end
Selector:               name=front-end
Type:                   NodePort
IP:                     100.66.88.176
Port:                   <unset> 80/TCP
NodePort:               <unset> 31869/TCP
Endpoints:              <none>
Session Affinity:       None
```
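If you only want the port number itself, kubectl can extract it with a jsonpath output expression (this assumes the single-port service shown above):

```bash
# Print just the allocated NodePort of the front-end service
kubectl get svc front-end -n sock-shop -o jsonpath='{.spec.ports[0].nodePort}'
```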
It takes several minutes to download and start all the containers; watch the output of
kubectl get pods -n sock-shop to see when they’re all up and running.
Then go to the IP address of your cluster’s master node in your browser, and specify the given port.
So, for example, http://<master_ip>:<port>. In the example above, the port was 31869, but it will be a different port for you.
If there is a firewall, make sure it exposes this port to the internet before you try to access it.
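How you open the port depends on your firewall; as a rough sketch for the two distributions used in this guide (replace 31869 with your own NodePort):

```bash
# CentOS 7 with firewalld
firewall-cmd --permanent --add-port=31869/tcp && firewall-cmd --reload

# Ubuntu with ufw (if enabled)
ufw allow 31869/tcp
```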
See the list of add-ons to explore other add-ons, including tools for logging, monitoring, network policy, visualization & control of your Kubernetes cluster.
Learn about kubeadm's advanced usage in the advanced reference doc.
To uninstall the socks shop, run
kubectl delete -f microservices-demo/deploy/kubernetes/manifests on the master.
To undo what
kubeadm did, simply delete the machines you created for this tutorial, or run the script below and then start over or uninstall the packages.
Reset local state:
```bash
systemctl stop kubelet;
docker rm -f -v $(docker ps -q);
find /var/lib/kubelet | xargs -n 1 findmnt -n -t tmpfs -o TARGET -T | uniq | xargs -r umount -v;
rm -r -f /etc/kubernetes /var/lib/kubelet /var/lib/etcd;
```
If you wish to start over, run
systemctl start kubelet followed by
kubeadm init or kubeadm join.
kubeadm is a work in progress and these limitations will be addressed in due course.
The cluster created here doesn't have cloud-provider integrations, so it won't work with, for example, Load Balancers (LBs) or Persistent Volumes (PVs). To easily obtain a cluster which works with LBs and PVs on Kubernetes, try the "hello world" GKE tutorial or one of the other cloud-specific installation tutorials.
Workaround: use the NodePort feature of services for exposing applications to the internet.
The cluster created here has a single master, with a single
etcd database running on it.
This means that if the master fails, your cluster loses its configuration data and will need to be recreated from scratch.
Adding HA support (multiple
etcd servers, multiple API servers, etc) to
kubeadm is still a work-in-progress.
Workaround: regularly back up etcd.
The etcd data directory configured by
kubeadm is at
/var/lib/etcd on the master.
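As a minimal sketch, run on the master (this assumes the default /var/lib/etcd path; for a guaranteed-consistent copy, take the archive while etcd is not accepting writes, or use etcd's own backup tooling):

```bash
# Archive the etcd data directory to a dated tarball
tar czf /root/etcd-backup-$(date +%F).tar.gz -C /var/lib etcd
```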
kubectl logs is broken with kubeadm clusters due to #22770. Workaround: use docker logs on the nodes where the containers are running.
There is not yet an easy way to generate a
kubeconfig file which can be used to authenticate to the cluster remotely with
kubectl on, for example, your workstation.
Workaround: copy the admin kubeconfig (/etc/kubernetes/admin.conf) from the master: run scp root@<master>:/etc/kubernetes/admin.conf . and then, for example, kubectl --kubeconfig ./admin.conf get nodes from your workstation.
If you are using VirtualBox (directly or via Vagrant), you will need to ensure that
hostname -i returns a routable IP address (i.e. one on the second network interface, not the first one).
By default, it doesn't, and the kubelet ends up using the first non-loopback network interface, which is usually NATed.
Workaround: modify /etc/hosts; take a look at this Vagrantfile for an example of how this can be achieved.
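As a rough sketch of the /etc/hosts fix (this assumes the routable address lives on eth1, which is typical for VirtualBox host-only networking; adjust the interface name for your setup):

```bash
# Find the IPv4 address on the second (routable) interface
IP=$(ip -4 -o addr show eth1 | awk '{print $4}' | cut -d/ -f1)
# Drop any existing mapping of this hostname, then map it to the routable IP
sed -i "/$(hostname)/d" /etc/hosts
echo "$IP $(hostname)" >> /etc/hosts
```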