Installing Kubernetes with KRIB using Digital Rebar Provision (DRP)
This guide helps to install a Kubernetes cluster hosted on bare metal with Digital Rebar Provision using only its Content packages and kubeadm.
Digital Rebar Provision (DRP) is an integrated Golang DHCP, bare metal provisioning (PXE/iPXE), and workflow automation platform. While DRP can be used to invoke kubespray, it also offers a self-contained Kubernetes installation known as KRIB (Kubernetes Rebar Integrated Bootstrap). KRIB features:
- zero-touch, self-configuring cluster without pre-configuration or inventory
- very fast, no-ssh required automation
- bare metal, on-premises focused platform
- highly available cluster options (including splitting etcd from the controllers)
- dynamic generation of a TLS infrastructure
- composable attributes and automatic detection of hardware by profile
- options for persistent, immutable and image-based deployments
- support for Ubuntu 18.04, CentOS/RHEL 7, CoreOS, RancherOS and others
Review Digital Rebar documentation for details about installing the platform.
The Digital Rebar Provision Golang binary should be installed on a Linux-like system with at least 16 GB of RAM (Packet.net Tiny and Raspberry Pi are also acceptable).
Following the Digital Rebar installation, allow one or more servers to boot through the Sledgehammer discovery process to register with the API. This automatically installs the Digital Rebar runner and allows for the next steps.
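After discovery completes, you can confirm that the servers registered with the API from the command line. A minimal check (output format varies by DRPCLI version):

```shell
# List all machines known to the DRP endpoint; servers that booted
# through Sledgehammer discovery should appear here.
drpcli machines list
```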
Upload the KRIB Content bundle (or build from source) and the Cert Plugin for your DRP platform. Both are freely available via the RackN UX or using the upload from catalog feature of the DRPCLI (shown below).
drpcli plugin_providers upload certs from catalog:certs-stable
drpcli contents upload catalog:krib-stable
Note: KRIB documentation is dynamically generated from the source and will be more up to date than this guide.
Following the KRIB documentation, create a Profile for your cluster and assign your target servers to the cluster Profile. The Profile must set the etcd/cluster-name Param to the name of the Profile. Cluster configuration choices can be made by adding additional Params to the Profile; however, safe defaults are provided for all Params.
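As a sketch, creating the Profile and setting the Param might look like the following with DRPCLI. The Profile name krib-cluster and the machine UUID are illustrative; verify the exact subcommand syntax against the KRIB documentation:

```shell
# Create a cluster Profile (the name "krib-cluster" is an example).
drpcli profiles create '{ "Name": "krib-cluster" }'

# Set the etcd/cluster-name Param to the name of the Profile.
drpcli profiles set krib-cluster param etcd/cluster-name to krib-cluster

# Assign a target server to the cluster Profile (UUID is illustrative).
drpcli machines addprofile <machine-uuid> krib-cluster
```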
Once all target servers are assigned to the cluster Profile, start a KRIB installation Workflow by assigning one of the included Workflows to all cluster servers. For example, selecting krib-live-cluster performs an immutable deployment into the Sledgehammer discovery operating system. You may use one of the pre-created read-only Workflows or choose to build your own custom variation.
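For illustration, assigning a Workflow to a cluster server from the command line might look like this (the machine UUID is a placeholder):

```shell
# Start the KRIB installation on a cluster server by assigning
# one of the included Workflows.
drpcli machines workflow <machine-uuid> krib-live-cluster
```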
For basic installs, no further action is required. Advanced users may choose to assign the controllers, etcd servers or other configuration values in the relevant Params.
Digital Rebar Provision provides detailed logging and live updates during the installation process. Workflow events are available via a websocket connection or monitoring the Jobs list.
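One simple way to follow progress from the command line is to poll the Jobs list (exact output columns vary by DRPCLI version):

```shell
# Inspect Workflow progress by listing Jobs on the DRP endpoint.
drpcli jobs list
```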
During the installation, KRIB writes cluster configuration data back into the cluster Profile.
The cluster is available for access via kubectl once the krib/cluster-admin-conf Param has been set. This Param contains the kubeconfig information necessary to access the cluster.
For example, if you named the cluster Profile krib, then the following commands would allow you to connect to the installed cluster from your local terminal.
drpcli profiles get krib params krib/cluster-admin-conf > admin.conf
export KUBECONFIG=admin.conf
kubectl get nodes
The installation continues after krib/cluster-admin-conf is set, installing the Kubernetes UI and Helm. You may interact with the cluster as soon as the admin.conf file is available.
KRIB provides additional Workflows to manage your cluster. Please see the KRIB documentation for an updated list of advanced cluster operations.
You can add servers into your cluster by adding the cluster Profile to the server and running the appropriate Workflow.
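As an illustration, expanding the cluster might look like the following (the Profile name krib-cluster, the Workflow name, and the UUID are placeholders; consult the KRIB documentation for the appropriate Workflow):

```shell
# Add the cluster Profile to a new server, then run a Workflow on it.
drpcli machines addprofile <new-machine-uuid> krib-cluster
drpcli machines workflow <new-machine-uuid> krib-live-cluster
```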
You can reset your cluster and wipe out all configuration and TLS certificates by running the krib-reset-cluster Workflow on any of the servers in the cluster.
Warning: When running the reset Workflow, be sure not to accidentally target your production cluster!
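As a sketch, the reset could be triggered the same way as any other Workflow assignment (machine UUID is a placeholder; this is destructive):

```shell
# DESTRUCTIVE: wipes cluster configuration and TLS certificates.
drpcli machines workflow <machine-uuid> krib-reset-cluster
```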