1 - Cloudstack

CloudStack is software for building public and private clouds based on hardware virtualization principles (traditional IaaS). To deploy Kubernetes on CloudStack there are several possibilities depending on the cloud being used and what images are made available. CloudStack also has a Vagrant plugin available, so Vagrant could be used to deploy Kubernetes either using the existing shell provisioner or using new Salt-based recipes.

CoreOS templates for CloudStack are built nightly. CloudStack operators need to register one of these templates in their cloud before proceeding with these Kubernetes deployment instructions.

This guide uses a single Ansible playbook, which is completely automated and can deploy Kubernetes on a CloudStack-based cloud using CoreOS images. The playbook creates an SSH key pair, creates a security group and associated rules, and finally starts CoreOS instances configured via cloud-init.


sudo apt-get install -y python-pip libssl-dev
sudo pip install cs
sudo pip install sshpubkeys
sudo apt-get install software-properties-common
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible

On the CloudStack server you also have to install libselinux-python:

yum install libselinux-python

cs is a python module for the CloudStack API.

Set your CloudStack endpoint, API keys and the HTTP method used. You can define them as environment variables.
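As a sketch, assuming the cs client reads the standard CLOUDSTACK_* environment variables (all values below are placeholders to replace with your own):

```shell
# Placeholder values: replace with your own endpoint and keys.
export CLOUDSTACK_ENDPOINT=https://cloudstack.example.com/client/api
export CLOUDSTACK_KEY=your-api-key
export CLOUDSTACK_SECRET=your-api-secret
export CLOUDSTACK_METHOD=post
```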


Or create a ~/.cloudstack.ini file:

[cloudstack]
endpoint = <your cloudstack api endpoint>
key = <your api access key>
secret = <your api secret key>
method = post

We need to use the HTTP POST method to pass the large userdata to the CoreOS instances.
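Once configured, you can check that the credentials work by issuing a read-only API call with the cs command-line client (this requires a reachable CloudStack endpoint; listZones is a standard CloudStack API call):

```shell
# Read-only call; prints the zones visible to your account as JSON.
cs listZones
```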


git clone <kubernetes-cloudstack repository URL>
cd kubernetes-cloudstack


You simply need to run the playbook.

ansible-playbook k8s.yml

Some variables can be edited in the k8s.yml file.

  ssh_key: k8s
  k8s_num_nodes: 2
  k8s_security_group_name: k8s
  k8s_node_prefix: k8s2
  k8s_template: <templatename>
  k8s_instance_type: <serviceofferingname>

This will start a Kubernetes master node and a number of compute nodes (by default 2). The instance_type and template are cloud-specific: edit them to match a template and instance type (i.e. service offering) available in your CloudStack cloud.
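Instead of editing k8s.yml, these variables can also be overridden at run time; ansible-playbook accepts --extra-vars (the values below are illustrative):

```shell
# Override the node count and prefix for this run only (example values).
ansible-playbook k8s.yml --extra-vars "k8s_num_nodes=4 k8s_node_prefix=demo"
```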

Check the tasks and templates in roles/k8s if you want to modify anything.

Once the playbook has finished, it will print out the IP of the Kubernetes master:

TASK: [k8s | debug msg='k8s master IP is {{ k8s_master.default_ip }}'] ********

SSH to it as the core user, using the key that was created:

ssh -i ~/.ssh/id_rsa_k8s core@<master IP>

And you can list the machines in your cluster:

fleetctl list-machines
MACHINE        IP             METADATA
a017c422...    <node #1 IP>   role=node
ad13bf84...    <master IP>    role=master
e9af8293...    <node #2 IP>   role=node


| IaaS Provider | Config. Mgmt | OS     | Networking | Docs | Conforms | Support Level        |
|---------------|--------------|--------|------------|------|----------|----------------------|
| CloudStack    | Ansible      | CoreOS | flannel    | docs |          | Community (@Guiques) |

2 - Kubernetes on DC/OS


  • Pure upstream Kubernetes
  • Single-click cluster provisioning
  • Highly available and secure by default
  • Kubernetes runs alongside fast data platforms (e.g. Akka, Cassandra, Kafka, Spark)




3 - oVirt

oVirt is a virtual datacenter manager that delivers powerful management of multiple virtual machines on multiple hosts. Using KVM and libvirt, oVirt can be installed on Fedora, CentOS, or Red Hat Enterprise Linux hosts to set up and manage your virtual data center.


The oVirt cloud provider makes it easy to discover and automatically add new VM instances as nodes to your Kubernetes cluster. At the moment there are no community-supported or pre-loaded VM images that include Kubernetes, but it is possible to import or install Project Atomic (or Fedora) in a VM to generate a template. Any other distribution that includes Kubernetes may work as well.

It is mandatory to install the ovirt-guest-agent in the guests so that the VM IP address and hostname are reported to ovirt-engine and ultimately to Kubernetes.
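On a Fedora or CentOS guest, installing and enabling the agent looks roughly like this (package and service names may vary by distribution and version):

```shell
# Install the oVirt guest agent and start it now and at boot.
sudo yum install -y ovirt-guest-agent
sudo systemctl enable --now ovirt-guest-agent
```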

Once the Kubernetes template is available it is possible to start instantiating VMs that can be discovered by the cloud provider.


The oVirt Cloud Provider requires access to the oVirt REST API to gather the proper information; the required credentials should be specified in the ovirt-cloud.conf file:

[connection]
uri = https://localhost:8443/ovirt-engine/api
username = admin@internal
password = admin

In the same file it is possible to specify (using the filters section) what search query to use to identify the VMs to be reported to Kubernetes:

[filters]
# Search query used to find nodes
vms = tag=kubernetes

In the above example all the VMs tagged with the kubernetes label will be reported as nodes to Kubernetes.
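The vms value is an ordinary oVirt search query, so more specific filters are possible; for example, restricting nodes to a particular cluster as well (the cluster name here is hypothetical):

```
vms = tag=kubernetes and cluster=k8s-cluster
```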

The ovirt-cloud.conf file must then be passed to kube-controller-manager:

kube-controller-manager ... --cloud-provider=ovirt --cloud-config=/path/to/ovirt-cloud.conf ...


This short screencast demonstrates how the oVirt Cloud Provider can be used to dynamically add VMs to your Kubernetes cluster.



| IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level        |
|---------------|--------------|----|------------|------|----------|----------------------|
| oVirt         |              |    |            | docs |          | Community (@simon3z) |