Note: The preferred way to create a replicated application is to use a Deployment, which in turn uses a ReplicaSet. For more information, see Running a Stateless Application Using a Deployment.
To update a service without an outage,
kubectl supports what is called rolling update, which updates one pod at a time, rather than taking down the entire service at the same time. See the rolling update design document and the example of rolling update for more information.
kubectl rolling-update only supports Replication Controllers. However, if you deploy applications with Replication Controllers,
consider switching them to Deployments. A Deployment is a higher-level controller that automates rolling updates
of applications declaratively, and therefore is recommended. If you still want to keep your Replication Controllers and use
kubectl rolling-update, keep reading:
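For comparison, here is a minimal sketch of the Deployment-based equivalent. It assumes a Deployment named my-nginx with a container named nginx (the same names used in the example later on this page, but managed by a Deployment rather than a Replication Controller); with Deployments, a single image change triggers a rolling update that is driven by the Deployment controller rather than by the kubectl client:

// Sketch, assuming a Deployment named my-nginx with a container named nginx.
// The Deployment controller performs the rolling update server-side; no
// client-side polling is needed.
$ kubectl set image deployment/my-nginx nginx=nginx:1.9.1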
A rolling update applies changes to the configuration of pods being managed by a replication controller. The changes can be passed as a new replication controller configuration file; or, if only updating the image, a new container image can be specified directly.
A rolling update works by:
Creating a new replication controller with the updated configuration.
Increasing/decreasing the replica count on the new and old controllers until the correct number of replicas is reached.
Deleting the original replication controller.
Rolling updates are initiated with the
kubectl rolling-update command:
$ kubectl rolling-update NAME \
    ([NEW_NAME] --image=IMAGE | -f FILE)
To initiate a rolling update using a configuration file, pass the new file to
kubectl rolling-update:
$ kubectl rolling-update NAME -f FILE
The configuration file must:
Specify a different metadata.name value.
Overwrite at least one common label in its spec.selector field.
Use the same metadata.namespace.
Replication controller configuration files are described in Creating Replication Controllers.
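For illustration, here is a hypothetical sketch of a new controller configuration that satisfies these requirements, assuming the old controller is named frontend-v1 and its selector includes a deployment: v1 label. The names, labels, and image below are illustrative only; the frontend-v2.json used in the next example would contain the JSON equivalent of something along these lines:

# Hypothetical sketch only; names, labels, and image are illustrative.
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-v2          # different from the old controller's metadata.name (frontend-v1)
spec:
  replicas: 3
  selector:
    app: frontend
    deployment: v2           # overwrites a label the old selector also used (e.g. deployment: v1)
  template:
    metadata:
      labels:
        app: frontend
        deployment: v2
    spec:
      containers:
      - name: frontend
        image: example/frontend:v2
        ports:
        - containerPort: 80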
// Update pods of frontend-v1 using new replication controller data in frontend-v2.json.
$ kubectl rolling-update frontend-v1 -f frontend-v2.json

// Update pods of frontend-v1 using JSON data passed into stdin.
$ cat frontend-v2.json | kubectl rolling-update frontend-v1 -f -
To update only the container image, pass a new image name and tag with the
--image flag and (optionally) a new controller name:
$ kubectl rolling-update NAME [NEW_NAME] --image=IMAGE:TAG
The --image flag is only supported for single-container pods. Specifying
--image with multi-container pods returns an error.
If no NEW_NAME is specified, a new replication controller is created with
a temporary name. Once the rollout is complete, the old controller is deleted,
and the new controller is updated to use the original name.
The update will fail if
IMAGE:TAG is identical to the
current value. For this reason, we recommend the use of versioned tags as
opposed to values such as
:latest. Doing a rolling update from image:latest
to a new image:latest will fail, even if the image at that tag has changed.
Moreover, the use of
:latest is not recommended; see
Best Practices for Configuration for more information.
// Update the pods of frontend-v1 to frontend-v2
$ kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2

// Update the pods of frontend, keeping the replication controller name
$ kubectl rolling-update frontend --image=image:v2
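To make the failure mode described above concrete, here is a hedged sketch. The controller and image names are hypothetical, and it assumes frontend is already running image:latest:

// Fails: the requested image:tag is identical to what the controller already
// specifies, even if the content behind :latest has changed in the registry.
$ kubectl rolling-update frontend --image=image:latest

// Succeeds: a versioned tag is a visible change to the pod template.
$ kubectl rolling-update frontend --image=image:v2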
Required fields are:
NAME: The name of the replication controller to update.
as well as either:
-f FILE: A replication controller configuration file, in either JSON or YAML format. The configuration file must specify a new top-level
id value and include at least one of the existing
spec.selector key:value pairs. See Creating Replication Controllers for details.
--image IMAGE:TAG: The name and tag of the image to update to. Must be different from the image:tag currently specified.
Optional fields are:
NEW_NAME: Only used in conjunction with --image (not with
-f FILE). The name to assign to the new replication controller.
--poll-interval DURATION: The time between polling the controller status after update. Valid units are
ns (nanoseconds), us (microseconds), ms (milliseconds), s (seconds), m (minutes), and
h (hours). Units can be combined (e.g.
1m30s). The default is 3s.
--timeout DURATION: The maximum time to wait for the controller to update a pod before exiting. Default is
5m0s. Valid units are as described for --poll-interval, above.
--update-period DURATION: The time to wait between updating pods. Default is
1m0s. Valid units are as described for --poll-interval, above.
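For illustration, here is a sketch combining these optional flags; the controller and file names are hypothetical:

// Hypothetical example: poll status every 10 seconds, wait 10 seconds between
// pod updates, and give up after 10 minutes.
$ kubectl rolling-update frontend-v1 -f frontend-v2.json \
    --poll-interval=10s --update-period=10s --timeout=10m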
Additional information about the
kubectl rolling-update command is available from the kubectl reference documentation.
Let’s say you were running version 1.7.9 of nginx:
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
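Assuming this manifest is saved as, say, ./my-nginx-rc.yaml (a filename not given in the text), the controller would have been created with something like:

// Hypothetical filename; any path to the manifest above works.
$ kubectl create -f ./my-nginx-rc.yaml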
To update to version 1.9.1, you can use
kubectl rolling-update --image to specify the new image:
$ kubectl rolling-update my-nginx --image=nginx:1.9.1
Created my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
In another window, you can see that
kubectl added a
deployment label to the pods, whose value is a hash of the configuration, to distinguish the new pods from the old:
$ kubectl get pods -l app=nginx -L deployment
NAME                                              READY   STATUS    RESTARTS   AGE   DEPLOYMENT
my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-k156z   1/1     Running   0          1m    ccba8fbd8cc8160970f63f9a2696fc46
my-nginx-ccba8fbd8cc8160970f63f9a2696fc46-v95yh   1/1     Running   0          35s   ccba8fbd8cc8160970f63f9a2696fc46
my-nginx-divi2                                    1/1     Running   0          2h    2d1d7a8f682934a254002b56404b813e
my-nginx-o0ef1                                    1/1     Running   0          2h    2d1d7a8f682934a254002b56404b813e
my-nginx-q6all                                    1/1     Running   0          8m    2d1d7a8f682934a254002b56404b813e
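Because the hash is an ordinary label, you can also use it to select only the pods created by the new controller (output omitted; the hash value is the one shown above):

// Select only the new pods, using the deployment hash label that kubectl added.
$ kubectl get pods -l deployment=ccba8fbd8cc8160970f63f9a2696fc46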
kubectl rolling-update reports progress as it runs:
Scaling up my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 from 0 to 3, scaling down my-nginx from 3 to 0 (keep 3 pods available, don't exceed 4 pods)
Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 up to 1
Scaling my-nginx down to 2
Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 up to 2
Scaling my-nginx down to 1
Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 up to 3
Scaling my-nginx down to 0
Update succeeded. Deleting old controller: my-nginx
Renaming my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 to my-nginx
replicationcontroller "my-nginx" rolling updated
If you encounter a problem, you can stop the rolling update midway and revert to the previous version using --rollback:
$ kubectl rolling-update my-nginx --rollback
Setting "my-nginx" replicas to 1
Continuing update with existing controller my-nginx.
Scaling up nginx from 1 to 1, scaling down my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
Scaling my-nginx-ccba8fbd8cc8160970f63f9a2696fc46 down to 0
Update succeeded. Deleting my-nginx-ccba8fbd8cc8160970f63f9a2696fc46
replicationcontroller "my-nginx" rolling updated
This is one example where the immutability of containers is a huge asset.
If you need to update more than just the image (e.g., command arguments, environment variables), you can create a new replication controller, with a new name and distinguishing label value, such as:
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx-v4
spec:
  replicas: 5
  selector:
    app: nginx
    deployment: v4
  template:
    metadata:
      labels:
        app: nginx
        deployment: v4
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.2
        args: ["nginx", "-T"]
        ports:
        - containerPort: 80
and roll it out:
$ kubectl rolling-update my-nginx -f ./nginx-rc.yaml
Created my-nginx-v4
Scaling up my-nginx-v4 from 0 to 5, scaling down my-nginx from 4 to 0 (keep 4 pods available, don't exceed 5 pods)
Scaling my-nginx-v4 up to 1
Scaling my-nginx down to 3
Scaling my-nginx-v4 up to 2
Scaling my-nginx down to 2
Scaling my-nginx-v4 up to 3
Scaling my-nginx down to 1
Scaling my-nginx-v4 up to 4
Scaling my-nginx down to 0
Scaling my-nginx-v4 up to 5
Update succeeded. Deleting old controller: my-nginx
replicationcontroller "my-nginx-v4" rolling updated
If the timeout duration is reached during a rolling update, the operation will
fail with some pods belonging to the new replication controller, and some to the original controller.
To continue the update from where it failed, retry using the same command.
To roll back to the original state before the attempted update, append the
--rollback=true flag to the original command. This will revert all changes.
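For example, to abandon the my-nginx-v4 rollout above midway and restore the original controller, you could re-run the same command with the flag appended (a sketch, not output captured from a real run):

// Revert the partially completed update back to the original my-nginx controller.
$ kubectl rolling-update my-nginx -f ./nginx-rc.yaml --rollback=true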