If you’re a developer looking to run applications on Kubernetes, this page and its linked topics can help you get started with the fundamentals. Though this page primarily describes development workflows, the subsequent page in the series covers more advanced, production setups.
A quick note
This app developer “user journey” is not a comprehensive overview of Kubernetes. It focuses more on what you develop, test, and deploy to Kubernetes, rather than how the underlying infrastructure works.
Though it’s possible for a single person to manage both, in many organizations it’s common to assign the latter to a dedicated cluster operator (a person who configures, controls, and monitors clusters).
Get started with a cluster
If you’re brand new to Kubernetes and simply want to experiment without setting up a full development environment, web-based environments are a good place to start:
Kubernetes Basics - Introduces you to six common Kubernetes workflows. Each section walks you through browser-based, interactive exercises complete with their own Kubernetes environment.
Play with Kubernetes - A less structured environment than the Kubernetes Basics tutorials, for those who are more comfortable with Kubernetes concepts and want to explore further. It also lets you spin up multi-node clusters.
Web-based environments are easy to access, but are not persistent. If you want to continue exploring Kubernetes in a workspace that you can come back to and change, Minikube is a good option.
Minikube can be installed locally, and runs a simple, single-node Kubernetes cluster inside a virtual machine (VM). This cluster is fully functioning and contains all core Kubernetes components. Many developers have found this sufficient for local application development.
(Optional) Install Docker if you plan to run your Minikube cluster as part of a local development environment.
Minikube includes a Docker daemon, but if you’re developing applications locally, you’ll want an independent Docker instance to support your workflow. This allows you to create containers (lightweight, portable executable images that contain software and all of their dependencies) and push them to a container registry.
Docker version 1.12 is recommended for full compatibility with Kubernetes, but a few other versions are tested and known to work.
You can get basic information about your cluster with the commands `kubectl cluster-info` and `kubectl get nodes`. However, to get a good idea of what’s really going on, you need to deploy an application to your cluster. This is covered in the next section.
Deploy an application
The following examples demonstrate the fundamentals of deploying Kubernetes apps:
Through these deployment tasks, you’ll gain familiarity with the following:
Configuration files - Written in YAML or JSON, these files describe the desired state of your application in terms of Kubernetes API objects. A file can include one or more API object descriptions (manifests). (See the example YAML from the stateless app).
Pods - The smallest and simplest Kubernetes object; a Pod represents a set of running containers on your cluster. Pods are the basic unit for all of the workloads you run on Kubernetes. These workloads, such as Deployments and Jobs, are composed of one or more Pods. To learn more, check out this explanation of Pods and Nodes.
Common workload objects
Deployment - An API object that manages a replicated application. This is the most common way of running multiple copies (Pods) of your application, and it supports rolling updates to your container images.
Service - An API object that describes how to access applications, such as a set of Pods, and can describe ports and load balancers. By itself, a Deployment can’t receive traffic. Setting up a Service is one of the simplest ways to configure a Deployment to receive and load-balance requests. Depending on the `type` of Service used, these requests can come from external client apps or be limited to apps within the same cluster. A Service is tied to a specific Deployment using label (tags objects with identifying attributes that are meaningful and relevant to users) selection.
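As a sketch of how these objects fit together (the names and image below are illustrative, not taken from the linked examples), a Deployment and a Service that selects its Pods might be declared as:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2                  # desired number of Pods
  selector:
    matchLabels:
      app: nginx               # which Pods this Deployment manages
  template:
    metadata:
      labels:
        app: nginx             # stamped onto each newly created Pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP              # only reachable from within the cluster
  selector:
    app: nginx                 # ties the Service to the Deployment's Pods
  ports:
  - port: 80
    targetPort: 80
```

Submitting both manifests (they can live in a single file separated by `---`) gives you a set of Pods that are load-balanced behind a stable, cluster-internal address.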
The subsequent topics are also useful to know for basic application deployment.
You can also specify custom information about your Kubernetes API objects by attaching key/value fields. Kubernetes provides two ways of doing this:
Labels - Identifying metadata that you can use to sort and select sets of API objects. Labels have many applications, including the following:
To keep the right number of replicas (Pods) running in a Deployment. The specified label (`app: nginx` in the stateless app example) is used to stamp the Deployment’s newly created Pods (as the value of the `spec.template.metadata.labels` configuration field), and to query which Pods it already manages (as the value of `spec.selector.matchLabels`).
To tie a Service to a Deployment using the `selector` field, which is demonstrated in the stateful app example.
To look for a specific subset of Kubernetes objects when you are using kubectl (a command line tool for communicating with a Kubernetes API server). For instance, the command `kubectl get deployments --selector=app=nginx` only displays Deployments from the nginx app.
Annotations - Non-identifying metadata, attached to objects as key-value pairs, that you can use when you don’t intend to sort or select by the data. Annotations often serve as supplementary information about an app’s deployment, such as Git SHAs, PR numbers, or URL pointers to observability dashboards.
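For example, an object’s metadata can carry both kinds of fields side by side (the annotation keys and values below are hypothetical):

```yaml
metadata:
  name: my-app
  labels:
    app: my-app                        # identifying; usable in selectors
  annotations:
    example.com/git-sha: "8e2b6f4"     # non-identifying, informational only
    example.com/dashboard: "https://grafana.example.com/d/my-app"
```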
You’ll also want to think about storage. Kubernetes provides different types of storage API objects for different storage needs:
Volumes - A directory containing data, accessible to the containers in a Pod. A volume’s storage is tied to the lifecycle of its Pod, and is therefore more persistent than container storage. Learn how to configure volume storage, or read more about volume storage.
PersistentVolumes and PersistentVolumeClaims - A PersistentVolume is an API object that represents a piece of storage in the cluster, available as a general, pluggable resource that persists beyond the lifecycle of any individual Pod; a PersistentVolumeClaim claims storage resources defined in a PersistentVolume so that they can be mounted as a volume in a container. Together they let you define storage at the cluster level. Typically a cluster operator defines the PersistentVolume objects for the cluster, and cluster users (application developers like you) define the PersistentVolumeClaim objects that their application requires. Learn how to set up persistent storage for your cluster or read more about persistent volumes.
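As a sketch (the names, image, and size are illustrative), an application developer might claim storage and mount it into a Pod like this:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
  - ReadWriteOnce              # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi             # ask the cluster for 1 GiB of storage
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: nginx:1.14.2
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-app-data   # binds the Pod's volume to the claim
```

The data in the claim outlives any individual Pod: if the Pod is deleted and recreated, it can remount the same claim.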
To avoid unnecessarily rebuilding your container images, you should decouple your application’s configuration data from the code required to run it. There are a couple of ways of doing this, which you should choose between according to your use case:
| Approach | Type of Data | How it's mounted | Example |
|---|---|---|---|
| Using a manifest's container definition | Non-confidential | Environment variable | Command-line flag |
| Using ConfigMaps (an API object used to store non-confidential data in key-value pairs) | Non-confidential | Environment variable OR local file | nginx configuration |
| Using Secrets (stores sensitive information, such as passwords, OAuth tokens, and ssh keys) | Confidential | Environment variable OR local file | Database credentials |
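As a sketch of the ConfigMap approach (the names and keys are illustrative), a value stored in a ConfigMap can be surfaced to a container as an environment variable:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: nginx:1.14.2
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-config     # the ConfigMap defined above
          key: LOG_LEVEL
```

A Secret is consumed the same way, with `secretKeyRef` in place of `configMapKeyRef`.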
If you have any data that you want to keep private, use a Secret. Otherwise there is nothing stopping that data from being exposed to malicious users.
Understand basic Kubernetes architecture
As an app developer, you don’t need to know everything about the inner workings of Kubernetes, but you may find it helpful to understand it at a high level.
What Kubernetes offers
Say that your team is deploying an ordinary Rails application. You’ve run some calculations and determined that you need five instances of your app running at any given time, in order to handle external traffic.
If you’re not running Kubernetes or a similar automated system, you might find the following scenario familiar:
1. One instance of your app (a complete machine instance, or just a container) goes down.
2. Because your team has monitoring set up, this pages the person on call.
3. The on-call person has to go in, investigate, and manually spin up a new instance.
4. Depending on how your team handles DNS/networking, the on-call person may also need to update the service discovery mechanism to point at the IP of the new Rails instance rather than the old one.
This process can be tedious and also inconvenient, especially if (2) happens in the early hours of the morning!
If you have Kubernetes set up, however, manual intervention is not as necessary. The Kubernetes control plane, which runs on your cluster’s master node, gracefully handles (3) and (4) on your behalf. As a result, Kubernetes is often referred to as a self-healing system.
There are two key parts of the control plane that facilitate this behavior: the Kubernetes API server and the Controllers.
Kubernetes API server
For Kubernetes to be useful, it needs to know what sort of cluster state you want it to maintain. Your YAML or JSON configuration files declare this desired state in terms of one or more API objects, such as Deployments. To make updates to your cluster’s state, you submit these files to the Kubernetes API (the application that serves Kubernetes functionality through a RESTful interface and stores the state of the cluster), for example with `kubectl apply`.
Examples of state include but are not limited to the following:
- The applications or other workloads to run
- The container images for your applications and workloads
- Allocation of network and disk resources
Note that the API server is just the gateway, and that object data is actually stored in a highly available datastore called etcd. For most intents and purposes, though, you can focus on the API server. Most reads and writes to cluster state take place as API requests.
You can read more about the Kubernetes API here.
Controllers
Once you’ve declared your desired state through the Kubernetes API, the controllers work to make the cluster’s current state match this desired state.
All of these controllers implement a control loop. For simplicity, you can think of this as the following:
1. What is the current state of the cluster (X)?
2. What is the desired state of the cluster (Y)?
3. Is X == Y?
   - `true` - Do nothing.
   - `false` - Perform tasks to get to Y, such as starting or restarting containers, or scaling the number of replicas of a given application. Return to 1.
By continuously looping, these controllers ensure the cluster can pick up new updates and avoid drifting from the desired state. These ideas are covered in more detail here.
The Kubernetes documentation is rich in detail. Here’s a curated list of resources to help you start digging deeper.
Hello Minikube (Runs on Mac only)
If you feel fairly comfortable with the topics on this page and want to learn more, check out the following user journeys: