Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
A Pod (as in a pod of whales or pea pod) is a group of one or more
containers, with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and
co-scheduled, and run in a shared context. A Pod models an
application-specific "logical host": it contains one or more application
containers which are relatively tightly coupled.
In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.
As well as application containers, a Pod can contain
init containers that run
during Pod startup. You can also inject
ephemeral containers
for debugging a running Pod.
What is a Pod?
Note:
You need to install a container runtime
into each node in the cluster so that Pods can run there.
The shared context of a Pod is a set of Linux namespaces, cgroups, and
potentially other facets of isolation - the same things that isolate a container. Within a Pod's context, the individual applications may have
further sub-isolations applied.
A Pod is similar to a set of containers with shared namespaces and shared filesystem volumes.
Pods in a Kubernetes cluster are used in two main ways:
Pods that run a single container. The "one-container-per-Pod" model is the
most common Kubernetes use case; in this case, you can think of a Pod as a
wrapper around a single container; Kubernetes manages Pods rather than managing
the containers directly.
Pods that run multiple containers that need to work together. A Pod can
encapsulate an application composed of
multiple co-located containers that are
tightly coupled and need to share resources. These co-located containers
form a single cohesive unit.
Grouping multiple co-located and co-managed containers in a single Pod is a
relatively advanced use case. You should use this pattern only in specific
instances in which your containers are tightly coupled.
You don't need to run multiple containers to provide replication (for resilience
or capacity); if you need multiple replicas, see
Workload management.
Using Pods
The following is an example of a Pod which consists of a container running the image nginx:1.14.2.
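A minimal manifest for such a Pod could look like the following sketch (the Pod name nginx and the container port are illustrative); a Pod defined this way can be created with kubectl apply -f:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80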
Pods are generally not created directly; instead, they are created through workload resources.
See Working with Pods for more information on how Pods are used
with workload resources.
Workload resources for managing pods
Usually you don't need to create Pods directly, even singleton Pods. Instead, create them using workload resources such as Deployment or Job.
If your Pods need to track state, consider the
StatefulSet resource.
Each Pod is meant to run a single instance of a given application. If you want to
scale your application horizontally (to provide more overall resources by running
more instances), you should use multiple Pods, one for each instance. In
Kubernetes, this is typically referred to as replication.
Replicated Pods are usually created and managed as a group by a workload resource
and its controller.
See Pods and controllers for more information on how
Kubernetes uses workload resources, and their controllers, to implement application
scaling and auto-healing.
Pods natively provide two kinds of shared resources for their constituent containers:
networking and storage.
Working with Pods
You'll rarely create individual Pods directly in Kubernetes—even singleton Pods. This
is because Pods are designed as relatively ephemeral, disposable entities. When
a Pod gets created (directly by you, or indirectly by a
controller), the new Pod is
scheduled to run on a Node in your cluster.
The Pod remains on that node until the Pod finishes execution, the Pod object is deleted,
the Pod is evicted for lack of resources, or the node fails.
Note:
Restarting a container in a Pod should not be confused with restarting a Pod. A Pod
is not a process, but an environment for running container(s). A Pod persists until
it is deleted.
The name of a Pod must be a valid
DNS subdomain
value, but this can produce unexpected results for the Pod hostname. For best compatibility,
the name should follow the more restrictive rules for a
DNS label.
Pod OS
FEATURE STATE: Kubernetes v1.25 [stable]
You should set the .spec.os.name field to either windows or linux to indicate the OS on
which you want the pod to run. These two are the only operating systems supported for now by
Kubernetes. In the future, this list may be expanded.
In Kubernetes v1.32, the value of .spec.os.name does not affect
how the kube-scheduler
picks a node for the Pod to run on. In any cluster where there is more than one operating system for
running nodes, you should set the
kubernetes.io/os
label correctly on each node, and define pods with a nodeSelector based on the operating system
label. The kube-scheduler assigns your pod to a node based on other criteria and may or may not
succeed in picking a suitable node placement where the node OS is right for the containers in that Pod.
The Pod security standards also use this
field to avoid enforcing policies that aren't relevant to the operating system.
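As an illustrative sketch tying these together (the Pod and container names are hypothetical), here is a Pod that sets .spec.os.name and also uses a nodeSelector on the kubernetes.io/os label:
apiVersion: v1
kind: Pod
metadata:
  name: linux-only-pod        # hypothetical name
spec:
  os:
    name: linux               # declares the Pod OS
  nodeSelector:
    kubernetes.io/os: linux   # restricts scheduling to Linux nodes
  containers:
  - name: app                 # hypothetical name
    image: nginx:1.14.2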
Pods and controllers
You can use workload resources to create and manage multiple Pods for you. A controller
for the resource handles replication and rollout and automatic healing in case of
Pod failure. For example, if a Node fails, a controller notices that Pods on that
Node have stopped working and creates a replacement Pod. The scheduler places the
replacement Pod onto a healthy Node.
Here are some examples of workload resources that manage one or more Pods: Deployments, Jobs, and DaemonSets.
Pod templates
Controllers for workload resources create Pods
from a pod template and manage those Pods on your behalf.
PodTemplates are specifications for creating Pods, and are included in workload resources such as
Deployments,
Jobs, and
DaemonSets.
Each controller for a workload resource uses the PodTemplate inside the workload
object to make actual Pods. The PodTemplate is part of the desired state of whatever
workload resource you used to run your app.
When you create a Pod, you can include
environment variables
in the Pod template for the containers that run in the Pod.
The sample below is a manifest for a simple Job with a template that starts one
container. The container in that Pod prints a message then pauses.
apiVersion: batch/v1
kind: Job
metadata:
  name: hello
spec:
  template:
    # This is the pod template
    spec:
      containers:
      - name: hello
        image: busybox:1.28
        command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
      restartPolicy: OnFailure
    # The pod template ends here
Modifying the pod template or switching to a new pod template has no direct effect
on the Pods that already exist. If you change the pod template for a workload
resource, that resource needs to create replacement Pods that use the updated template.
For example, the StatefulSet controller ensures that the running Pods match the current
pod template for each StatefulSet object. If you edit the StatefulSet to change its pod
template, the StatefulSet starts to create new Pods based on the updated template.
Eventually, all of the old Pods are replaced with new Pods, and the update is complete.
Each workload resource implements its own rules for handling changes to the Pod template.
If you want to read more about StatefulSet specifically, read
Update strategy in the StatefulSet Basics tutorial.
On Nodes, the kubelet does not
directly observe or manage any of the details around pod templates and updates; those
details are abstracted away. That abstraction and separation of concerns simplifies
system semantics, and makes it feasible to extend the cluster's behavior without
changing existing code.
Pod update and replacement
As mentioned in the previous section, when the Pod template for a workload
resource is changed, the controller creates new Pods based on the updated
template instead of updating or patching the existing Pods.
Kubernetes doesn't prevent you from managing Pods directly. It is possible to
update some fields of a running Pod, in place. However, Pod update operations
like
patch, and
replace
have some limitations:
Most of the metadata about a Pod is immutable. For example, you cannot
change the namespace, name, uid, or creationTimestamp fields;
the generation field is unique. It only accepts updates that increment the
field's current value.
If the metadata.deletionTimestamp is set, no new entry can be added to the
metadata.finalizers list.
Pod updates may not change fields other than spec.containers[*].image,
spec.initContainers[*].image, spec.activeDeadlineSeconds or
spec.tolerations. For spec.tolerations, you can only add new entries.
When updating the spec.activeDeadlineSeconds field, two types of updates
are allowed:
setting the unassigned field to a positive number;
updating the field from a positive number to a smaller, non-negative
number.
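As an illustration of these rules, the following sketch is a patch fragment that only adds a toleration (the taint key is hypothetical), one of the few in-place spec changes a running Pod accepts; depending on the patch type you pass to kubectl patch, you may need to include any existing tolerations as well.
# Hypothetical patch fragment: adds a toleration to a running Pod,
# which is one of the few in-place spec updates Kubernetes allows.
spec:
  tolerations:
  - key: "example.com/maintenance"   # hypothetical taint key
    operator: "Exists"
    effect: "NoSchedule"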
Resource sharing and communication
Pods enable data sharing and communication among their constituent
containers.
Storage in Pods
A Pod can specify a set of shared storage
volumes. All containers
in the Pod can access the shared volumes, allowing those containers to
share data. Volumes also allow persistent data in a Pod to survive
in case one of the containers within needs to be restarted. See
Storage for more information on how
Kubernetes implements shared storage and makes it available to Pods.
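For example, here is a sketch of a Pod with two containers sharing an emptyDir volume (all names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: shared-storage-demo        # illustrative name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                   # scratch volume shared by both containers
  containers:
  - name: writer
    image: busybox:1.28
    command: ['sh', '-c', 'echo hello > /data/hello.txt && sleep 3600']
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox:1.28
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: shared-data
      mountPath: /data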
Pod networking
Each Pod is assigned a unique IP address for each address family. Every
container in a Pod shares the network namespace, including the IP address and
network ports. Inside a Pod (and only then), the containers that belong to the Pod
can communicate with one another using localhost. When containers in a Pod communicate
with entities outside the Pod,
they must coordinate how they use the shared network resources (such as ports).
Within a Pod, containers share an IP address and port space, and
can find each other via localhost. The containers in a Pod can also communicate
with each other using standard inter-process communications like SystemV semaphores
or POSIX shared memory. Containers in different Pods have distinct IP addresses
and can not communicate by OS-level IPC without special configuration.
Containers that want to interact with a container running in a different Pod can
use IP networking to communicate.
Containers within the Pod see the system hostname as being the same as the configured
name for the Pod. There's more about this in the networking
section.
Pod security settings
To set security constraints on Pods and containers, you use the
securityContext field in the Pod specification. This field gives you
granular control over what a Pod or individual containers can do. For example:
Drop specific Linux capabilities to avoid the impact of a CVE.
Force all processes in the Pod to run as a non-root user or as a specific
user or group ID.
Set a specific seccomp profile.
Set Windows security options, such as whether containers run as HostProcess.
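A sketch of how some of these settings appear in a manifest (the names, user ID, and dropped capability are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: secured-app              # illustrative name
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000              # hypothetical non-root UID
    seccompProfile:
      type: RuntimeDefault       # a specific seccomp profile
  containers:
  - name: app
    image: busybox:1.28
    command: ['sh', '-c', 'sleep 3600']
    securityContext:
      capabilities:
        drop: ["NET_RAW"]        # drop a specific Linux capability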
Caution:
You can also use the Pod securityContext to enable
privileged mode
in Linux containers. Privileged mode overrides many of the other security
settings in the securityContext. Avoid using this setting unless you can't grant
the equivalent permissions by using other fields in the securityContext.
In Kubernetes 1.26 and later, you can run Windows containers in a similarly
privileged mode by setting the windowsOptions.hostProcess flag on the
security context of the Pod spec. For details and instructions, see
Create a Windows HostProcess Pod.
Static Pods
Static Pods are managed directly by the kubelet daemon on a specific node,
without the API server
observing them.
Whereas most Pods are managed by the control plane (for example, a
Deployment), for static
Pods, the kubelet directly supervises each static Pod (and restarts it if it fails).
Static Pods are always bound to one Kubelet on a specific node.
The main use for static Pods is to run a self-hosted control plane: in other words,
using the kubelet to supervise the individual control plane components.
The kubelet automatically tries to create a mirror Pod
on the Kubernetes API server for each static Pod.
This means that the Pods running on a node are visible on the API server,
but cannot be controlled from there. See the guide Create static Pods for more information.
Pods are designed to support multiple cooperating processes (as containers) that form
a cohesive unit of service. The containers in a Pod are automatically co-located and
co-scheduled on the same physical or virtual machine in the cluster. The containers
can share resources and dependencies, communicate with one another, and coordinate
when and how they are terminated.
The Pod wraps these containers, storage resources, and an ephemeral network
identity together as a single unit.
For example, you might have a container that
acts as a web server for files in a shared volume, and a separate
sidecar container
that updates those files from a remote source.
Some Pods have init containers
as well as app containers.
By default, init containers run and complete before the app containers are started.
You can also have sidecar containers
that provide auxiliary services to the main application Pod (for example: a service mesh).
FEATURE STATE: Kubernetes v1.29 [beta]
Enabled by default, the SidecarContainers feature gate
allows you to specify restartPolicy: Always for init containers.
Setting the Always restart policy ensures that the containers where you set it are
treated as sidecars that are kept running during the entire lifetime of the Pod.
Containers that you explicitly define as sidecar containers
start up before the main application Pod and remain running until the Pod is
shut down.
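A minimal sketch of a Pod that declares a sidecar this way (the container names and commands are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar         # illustrative name
spec:
  containers:
  - name: app
    image: busybox:1.28
    command: ['sh', '-c', 'sleep 3600']
  initContainers:
  - name: log-shipper            # hypothetical sidecar
    image: busybox:1.28
    restartPolicy: Always        # marks this init container as a sidecar
    command: ['sh', '-c', 'while true; do sleep 10; done']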
Container probes
A probe is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet can invoke different actions:
ExecAction (performed with the help of the container runtime)
TCPSocketAction (checked directly by the kubelet)
HTTPGetAction (checked directly by the kubelet)
You can read more about probes
in the Pod Lifecycle documentation.
To understand the context for why Kubernetes wraps a common Pod API in other resources (such as StatefulSets or Deployments), you can read about the prior art that informed the design.
This page describes the lifecycle of a Pod. Pods follow a defined lifecycle, starting
in the Pending phase, moving through Running if at least one
of its primary containers starts OK, and then through either the Succeeded or
Failed phases depending on whether any container in the Pod terminated in failure.
Like individual application containers, Pods are considered to be relatively
ephemeral (rather than durable) entities. Pods are created, assigned a unique
ID (UID), and scheduled
to run on nodes where they remain until termination (according to restart policy) or
deletion.
If a Node dies, the Pods running on (or scheduled
to run on) that node are marked for deletion. The control
plane marks the Pods for removal after a timeout period.
Pod lifetime
Whilst a Pod is running, the kubelet is able to restart containers to handle some
kinds of faults. Within a Pod, Kubernetes tracks different container
states and determines what action to take to make the Pod
healthy again.
In the Kubernetes API, Pods have both a specification and an actual status. The
status for a Pod object consists of a set of Pod conditions.
You can also inject custom readiness information into the
condition data for a Pod, if that is useful to your application.
Pods are only scheduled once in their lifetime;
assigning a Pod to a specific node is called binding, and the process of selecting
which node to use is called scheduling.
Once a Pod has been scheduled and is bound to a node, Kubernetes tries
to run that Pod on the node. The Pod runs on that node until it stops, or until the Pod
is terminated; if Kubernetes isn't able to start the Pod on the selected
node (for example, if the node crashes before the Pod starts), then that particular Pod
never starts.
You can use Pod Scheduling Readiness
to delay scheduling for a Pod until all its scheduling gates are removed. For example,
you might want to define a set of Pods but only trigger scheduling once all the Pods
have been created.
Pods and fault recovery
If one of the containers in the Pod fails, then Kubernetes may try to restart that
specific container.
Read How Pods handle problems with containers to learn more.
Pods can however fail in a way that the cluster cannot recover from, and in that case
Kubernetes does not attempt to heal the Pod further; instead, Kubernetes deletes the
Pod and relies on other components to provide automatic healing.
If a Pod is scheduled to a node and that
node then fails, the Pod is treated as unhealthy and Kubernetes eventually deletes the Pod.
A Pod won't survive an eviction due to
a lack of resources or Node maintenance.
Kubernetes uses a higher-level abstraction, called a
controller, that handles the work of
managing the relatively disposable Pod instances.
A given Pod (as defined by a UID) is never "rescheduled" to a different node; instead,
that Pod can be replaced by a new, near-identical Pod. If you make a replacement Pod, it can
even have same name (as in .metadata.name) that the old Pod had, but the replacement
would have a different .metadata.uid from the old Pod.
Kubernetes does not guarantee that a replacement for an existing Pod would be scheduled to
the same node as the old Pod that was being replaced.
Associated lifetimes
When something is said to have the same lifetime as a Pod, such as a
volume,
that means that the thing exists as long as that specific Pod (with that exact UID)
exists. If that Pod is deleted for any reason, and even if an identical replacement
is created, the related thing (a volume, in this example) is also destroyed and
created anew.
Pod phase
A Pod's status field is a
PodStatus
object, which has a phase field.
The phase of a Pod is a simple, high-level summary of where the Pod is in its
lifecycle. The phase is not intended to be a comprehensive rollup of observations
of container or Pod state, nor is it intended to be a comprehensive state machine.
The number and meanings of Pod phase values are tightly guarded.
Other than what is documented here, nothing should be assumed about Pods that
have a given phase value.
Here are the possible values for phase:
Pending: The Pod has been accepted by the Kubernetes cluster, but one or more of the containers has not been set up and made ready to run. This includes time a Pod spends waiting to be scheduled as well as the time spent downloading container images over the network.
Running: The Pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.
Succeeded: All containers in the Pod have terminated in success, and will not be restarted.
Failed: All containers in the Pod have terminated, and at least one container has terminated in failure. That is, the container either exited with non-zero status or was terminated by the system, and is not set for automatic restarting.
Unknown: For some reason the state of the Pod could not be obtained. This phase typically occurs due to an error in communicating with the node where the Pod should be running.
Note:
When a pod is failing to start repeatedly, CrashLoopBackOff may appear in the Status field of some kubectl commands. Similarly, when a pod is being deleted, Terminating may appear in the Status field of some kubectl commands.
Make sure not to confuse Status, a kubectl display field for user intuition, with the pod's phase.
Pod phase is an explicit part of the Kubernetes data model and of the
Pod API.
NAMESPACE NAME READY STATUS RESTARTS AGE
alessandras-namespace alessandras-pod 0/1 CrashLoopBackOff 200 2d9h
A Pod is granted a term to terminate gracefully, which defaults to 30 seconds.
You can use the flag --force to terminate a Pod by force.
Since Kubernetes 1.27, the kubelet transitions deleted Pods, except for
static Pods and
force-deleted Pods
without a finalizer, to a terminal phase (Failed or Succeeded depending on
the exit statuses of the pod containers) before their deletion from the API server.
If a node dies or is disconnected from the rest of the cluster, Kubernetes
applies a policy for setting the phase of all Pods on the lost node to Failed.
Container states
As well as the phase of the Pod overall, Kubernetes tracks the state of
each container inside a Pod. You can use
container lifecycle hooks to
trigger events to run at certain points in a container's lifecycle.
Once the scheduler
assigns a Pod to a Node, the kubelet starts creating containers for that Pod
using a container runtime.
There are three possible container states: Waiting, Running, and Terminated.
To check the state of a Pod's containers, you can use
kubectl describe pod <name-of-pod>. The output shows the state for each container
within that Pod.
Each state has a specific meaning:
Waiting
If a container is not in either the Running or Terminated state, it is Waiting.
A container in the Waiting state is still running the operations it requires in
order to complete start up: for example, pulling the container image from a container
image registry, or applying Secret
data.
When you use kubectl to query a Pod with a container that is Waiting, you also see
a Reason field to summarize why the container is in that state.
Running
The Running status indicates that a container is executing without issues. If there
was a postStart hook configured, it has already executed and finished. When you use
kubectl to query a Pod with a container that is Running, you also see information
about when the container entered the Running state.
Terminated
A container in the Terminated state began execution and then either ran to
completion or failed for some reason. When you use kubectl to query a Pod with
a container that is Terminated, you see a reason, an exit code, and the start and
finish time for that container's period of execution.
If a container has a preStop hook configured, this hook runs before the container enters
the Terminated state.
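For example, a container with a preStop hook might be declared as in the sketch below (the command and the extended grace period are illustrative); the hook runs before the TERM signal described later under Termination of Pods.
apiVersion: v1
kind: Pod
metadata:
  name: graceful-shutdown-demo       # illustrative name
spec:
  terminationGracePeriodSeconds: 60  # longer than the 30-second default
  containers:
  - name: app
    image: busybox:1.28
    command: ['sh', '-c', 'sleep 3600']
    lifecycle:
      preStop:
        exec:
          command: ['sh', '-c', 'sleep 5']   # hypothetical cleanup step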
How Pods handle problems with containers
Kubernetes manages container failures within Pods using a restartPolicy defined in the Pod spec. This policy determines how Kubernetes reacts to containers exiting due to errors or other reasons, which falls in the following sequence:
Initial crash: Kubernetes attempts an immediate restart based on the Pod restartPolicy.
Repeated crashes: After the initial crash Kubernetes applies an exponential
backoff delay for subsequent restarts, described in restartPolicy.
This prevents rapid, repeated restart attempts from overloading the system.
CrashLoopBackOff state: This indicates that the backoff delay mechanism is currently
in effect for a given container that is in a crash loop, failing and restarting repeatedly.
Backoff reset: If a container runs successfully for a certain duration
(e.g., 10 minutes), Kubernetes resets the backoff delay, treating any new crash
as the first one.
In practice, a CrashLoopBackOff is a condition or event that might be seen as output
from the kubectl command, while describing or listing Pods, when a container in the Pod
fails to start properly and then continually tries and fails in a loop.
In other words, when a container enters the crash loop, Kubernetes applies the
exponential backoff delay mentioned in the Container restart policy.
This mechanism prevents a faulty container from overwhelming the system with continuous
failed start attempts.
The CrashLoopBackOff can be caused by issues like the following:
Application errors that cause the container to exit.
Configuration errors, such as incorrect environment variables or missing
configuration files.
Resource constraints, where the container might not have enough memory or CPU
to start properly.
Health checks failing if the application doesn't start serving within the
expected time.
Container liveness probes or startup probes returning a Failure result
as mentioned in the probes section.
To investigate the root cause of a CrashLoopBackOff issue, a user can:
Check logs: Use kubectl logs <name-of-pod> to check the logs of the container.
This is often the most direct way to diagnose the issue causing the crashes.
Inspect events: Use kubectl describe pod <name-of-pod> to see events
for the Pod, which can provide hints about configuration or resource issues.
Review configuration: Ensure that the Pod configuration, including
environment variables and mounted volumes, is correct and that all required
external resources are available.
Check resource limits: Make sure that the container has enough CPU
and memory allocated. Sometimes, increasing the resources in the Pod definition
can resolve the issue.
Debug application: There might exist bugs or misconfigurations in the
application code. Running this container image locally or in a development
environment can help diagnose application specific issues.
Container restart policy
The spec of a Pod has a restartPolicy field with possible values Always, OnFailure,
and Never. The default value is Always.
The restartPolicy for a Pod applies to app containers
in the Pod and to regular init containers.
Sidecar containers
ignore the Pod-level restartPolicy field: in Kubernetes, a sidecar is defined as an
entry inside initContainers that has its container-level restartPolicy set to Always.
For init containers that exit with an error, the kubelet restarts the init container if
the Pod level restartPolicy is either OnFailure or Always:
Always: Automatically restarts the container after any termination.
OnFailure: Only restarts the container if it exits with an error (non-zero exit status).
Never: Does not automatically restart the terminated container.
When the kubelet is handling container restarts according to the configured restart
policy, that only applies to restarts that make replacement containers inside the
same Pod and running on the same node. After containers in a Pod exit, the kubelet
restarts them with an exponential backoff delay (10s, 20s, 40s, …) that is capped at
300 seconds (5 minutes). Once a container has executed for 10 minutes without any
problems, the kubelet resets the restart backoff timer for that container.
Sidecar containers and Pod lifecycle
explains the behaviour of init containers when you specify a restartPolicy field for them.
Configurable container restart delay
FEATURE STATE: Kubernetes v1.32 [alpha] (enabled by default: false)
With the alpha feature gate KubeletCrashLoopBackOffMax enabled, you can
reconfigure the maximum delay between container start retries from the default
of 300s (5 minutes). This configuration is set per node using kubelet
configuration. In your kubelet configuration,
under crashLoopBackOff set the maxContainerRestartPeriod field between
"1s" and "300s". As described above in Container restart
policy, delays on that node will still start at 10s and
increase exponentially by 2x each restart, but will now be capped at your
configured maximum. If the maxContainerRestartPeriod you configure is less
than the default initial value of 10s, the initial delay will instead be set to
the configured maximum.
See the following kubelet configuration examples:
# container restart delays will start at 10s, increasing
# 2x each time they are restarted, to a maximum of 100s
kind: KubeletConfiguration
crashLoopBackOff:
  maxContainerRestartPeriod: "100s"
# delays between container restarts will always be 2s
kind: KubeletConfiguration
crashLoopBackOff:
  maxContainerRestartPeriod: "2s"
Pod conditions
A Pod has a PodStatus, which has an array of
PodConditions
through which the Pod has or has not passed. Kubelet manages the following
PodConditions:
PodScheduled: the Pod has been scheduled to a node.
PodReadyToStartContainers: (beta feature; enabled by default) the
Pod sandbox has been successfully created and networking configured.
ContainersReady: all containers in the Pod are ready.
Initialized: all init containers
have completed successfully.
Ready: the Pod is able to serve requests and should be added to the load
balancing pools of all matching Services.
Each Pod condition has the following fields:
type: Name of this Pod condition.
status: Indicates whether that condition is applicable, with possible values "True", "False", or "Unknown".
lastProbeTime: Timestamp of when the Pod condition was last probed.
lastTransitionTime: Timestamp for when the Pod last transitioned from one status to another.
reason: Machine-readable, UpperCamelCase text indicating the reason for the condition's last transition.
message: Human-readable message indicating details about the last status transition.
Pod readiness
FEATURE STATE: Kubernetes v1.14 [stable]
Your application can inject extra feedback or signals into PodStatus:
Pod readiness. To use this, set readinessGates in the Pod's spec to
specify a list of additional conditions that the kubelet evaluates for Pod readiness.
Readiness gates are determined by the current state of status.condition
fields for the Pod. If Kubernetes cannot find such a condition in the
status.conditions field of a Pod, the status of the condition
is defaulted to "False".
Here is an example:
kind: Pod
...
spec:
  readinessGates:
    - conditionType: "www.example.com/feature-1"
status:
  conditions:
    - type: Ready                              # a built in PodCondition
      status: "False"
      lastProbeTime: null
      lastTransitionTime: 2018-01-01T00:00:00Z
    - type: "www.example.com/feature-1"        # an extra PodCondition
      status: "False"
      lastProbeTime: null
      lastTransitionTime: 2018-01-01T00:00:00Z
  containerStatuses:
    - containerID: docker://abcd...
      ready: true
...
The Pod conditions you add must have names that meet the Kubernetes
label key format.
Status for Pod readiness
The kubectl patch command does not support patching object status.
To set these status.conditions for the Pod, applications and
operators should use
the PATCH action.
You can use a Kubernetes client library to
write code that sets custom Pod conditions for Pod readiness.
For a Pod that uses custom conditions, that Pod is evaluated to be ready only
when both the following statements apply:
All containers in the Pod are ready.
All conditions specified in readinessGates are True.
When a Pod's containers are Ready but at least one custom condition is missing or
False, the kubelet sets the Pod's condition to ContainersReady.
Pod network readiness
FEATURE STATE: Kubernetes v1.29 [beta]
Note:
During its early development, this condition was named PodHasNetwork.
After a Pod gets scheduled on a node, it needs to be admitted by the kubelet and
to have any required storage volumes mounted. Once these phases are complete,
the kubelet works with
a container runtime (using Container Runtime Interface (CRI)) to set up a
runtime sandbox and configure networking for the Pod. If the
PodReadyToStartContainersCondition feature gate is enabled
(it is enabled by default for Kubernetes 1.32), the
PodReadyToStartContainers condition will be added to the status.conditions field of a Pod.
The PodReadyToStartContainers condition is set to False by the Kubelet when it detects a
Pod does not have a runtime sandbox with networking configured. This occurs in
the following scenarios:
Early in the lifecycle of the Pod, when the kubelet has not yet begun to set up a sandbox for
the Pod using the container runtime.
Later in the lifecycle of the Pod, when the Pod sandbox has been destroyed due to either:
the node rebooting, without the Pod getting evicted
for container runtimes that use virtual machines for isolation, the Pod
sandbox virtual machine rebooting, which then requires creating a new sandbox and
fresh container network configuration.
The PodReadyToStartContainers condition is set to True by the kubelet after the
successful completion of sandbox creation and network configuration for the Pod
by the runtime plugin. The kubelet can start pulling container images and create
containers after PodReadyToStartContainers condition has been set to True.
For a Pod with init containers, the kubelet sets the Initialized condition to
True after the init containers have successfully completed (which happens
after successful sandbox creation and network configuration by the runtime
plugin). For a Pod without init containers, the kubelet sets the Initialized
condition to True before sandbox creation and network configuration starts.
Container probes
A probe is a diagnostic performed periodically by the kubelet
on a container. To perform a diagnostic, the kubelet either executes code within the container,
or makes a network request.
Check mechanisms
There are four different ways to check a container using a probe.
Each probe must define exactly one of these four mechanisms:
exec
Executes a specified command inside the container. The diagnostic
is considered successful if the command exits with a status code of 0.
grpc
Performs a remote procedure call using gRPC.
The target should implement
gRPC health checks.
The diagnostic is considered successful if the status
of the response is SERVING.
httpGet
Performs an HTTP GET request against the Pod's IP
address on a specified port and path. The diagnostic is
considered successful if the response has a status code
greater than or equal to 200 and less than 400.
tcpSocket
Performs a TCP check against the Pod's IP address on
a specified port. The diagnostic is considered successful if
the port is open. If the remote system (the container) closes
the connection immediately after it opens, this counts as healthy.
Caution:
Unlike the other mechanisms, the exec probe involves creating (forking) a new process each time it runs.
As a result, on clusters with higher pod density and short initialDelaySeconds or periodSeconds intervals, probes that use the exec mechanism can introduce noticeable CPU overhead on the node.
In such scenarios, consider using one of the alternative probe mechanisms to avoid that overhead.
Probe outcome
Each probe has one of three results:
Success
The container passed the diagnostic.
Failure
The container failed the diagnostic.
Unknown
The diagnostic failed (no action should be taken, and the kubelet
will make further checks).
Types of probe
The kubelet can optionally perform and react to three kinds of probes on running
containers:
livenessProbe
Indicates whether the container is running. If
the liveness probe fails, the kubelet kills the container, and the container
is subjected to its restart policy. If a container does not
provide a liveness probe, the default state is Success.
readinessProbe
Indicates whether the container is ready to respond to requests.
If the readiness probe fails, the endpoints controller removes the Pod's IP
address from the endpoints of all Services that match the Pod. The default
state of readiness before the initial delay is Failure. If a container does
not provide a readiness probe, the default state is Success.
startupProbe
Indicates whether the application within the container is started.
All other probes are disabled if a startup probe is provided, until it succeeds.
If the startup probe fails, the kubelet kills the container, and the container
is subjected to its restart policy. If a container does not
provide a startup probe, the default state is Success.
When should you use a liveness probe?
If the process in your container is able to crash on its own whenever it
encounters an issue or becomes unhealthy, you do not necessarily need a liveness
probe; the kubelet will automatically perform the correct action in accordance
with the Pod's restartPolicy.
If you'd like your container to be killed and restarted if a probe fails, then
specify a liveness probe, and specify a restartPolicy of Always or OnFailure.
When should you use a readiness probe?
If you'd like to start sending traffic to a Pod only when a probe succeeds,
specify a readiness probe. In this case, the readiness probe might be the same
as the liveness probe, but the existence of the readiness probe in the spec means
that the Pod will start without receiving any traffic and only start receiving
traffic after the probe starts succeeding.
If you want your container to be able to take itself down for maintenance, you
can specify a readiness probe that checks an endpoint specific to readiness that
is different from the liveness probe.
If your app has a strict dependency on back-end services, you can implement both
a liveness and a readiness probe. The liveness probe passes when the app itself
is healthy, but the readiness probe additionally checks that each required
back-end service is available. This helps you avoid directing traffic to Pods
that can only respond with error messages.
If your container needs to work on loading large data, configuration files, or
migrations during startup, you can use a
startup probe. However, if you want to
detect the difference between an app that has failed and an app that is still
processing its startup data, you might prefer a readiness probe.
Note:
If you want to be able to drain requests when the Pod is deleted, you do not
necessarily need a readiness probe; on deletion, the Pod automatically puts itself
into an unready state regardless of whether the readiness probe exists.
The Pod remains in the unready state while it waits for the containers in the Pod
to stop.
When should you use a startup probe?
Startup probes are useful for Pods that have containers that take a long time to
come into service. Rather than set a long liveness interval, you can configure
a separate configuration for probing the container as it starts up, allowing
a time longer than the liveness interval would allow.
If your container usually starts in more than
initialDelaySeconds + failureThreshold × periodSeconds, you should specify a
startup probe that checks the same endpoint as the liveness probe. The default for
periodSeconds is 10s. You should then set its failureThreshold high enough to
allow the container to start, without changing the default values of the liveness
probe. This helps to protect against deadlocks.
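The sketch below shows how the three probe types can be combined on one container (the endpoint path, port, and thresholds are illustrative); the startup probe here allows up to failureThreshold × periodSeconds = 30 × 10s = 300s for the application to start before the liveness probe takes over.
apiVersion: v1
kind: Pod
metadata:
  name: probed-app              # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.14.2
    ports:
    - containerPort: 80
    startupProbe:
      httpGet:
        path: /                 # hypothetical health endpoint
        port: 80
      failureThreshold: 30
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /
        port: 80
      periodSeconds: 5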
Termination of Pods
Because Pods represent processes running on nodes in the cluster, it is important to
allow those processes to gracefully terminate when they are no longer needed (rather
than being abruptly stopped with a KILL signal and having no chance to clean up).
The design aim is for you to be able to request deletion and know when processes
terminate, but also be able to ensure that deletes eventually complete.
When you request deletion of a Pod, the cluster records and tracks the intended grace period
before the Pod is allowed to be forcefully killed. With that forceful shutdown tracking in
place, the kubelet attempts graceful
shutdown.
Typically, with this graceful termination of the pod, kubelet makes requests to the container runtime to attempt to stop the containers in the pod by first sending a TERM (aka. SIGTERM) signal, with a grace period timeout, to the main process in each container. The requests to stop the containers are processed by the container runtime asynchronously. There is no guarantee to the order of processing for these requests. Many container runtimes respect the STOPSIGNAL value defined in the container image and, if different, send the container image configured STOPSIGNAL instead of TERM.
Once the grace period has expired, the KILL signal is sent to any remaining
processes, and the Pod is then deleted from the
API Server. If the kubelet or the
container runtime's management service is restarted while waiting for processes to terminate, the
cluster retries from the start including the full original grace period.
Pod termination flow, illustrated with an example:
You use the kubectl tool to manually delete a specific Pod, with the default grace period
(30 seconds).
The Pod in the API server is updated with the time beyond which the Pod is considered "dead"
along with the grace period.
If you use kubectl describe to check the Pod you're deleting, that Pod shows up as "Terminating".
On the node where the Pod is running: as soon as the kubelet sees that a Pod has been marked
as terminating (a graceful shutdown duration has been set), the kubelet begins the local Pod
shutdown process.
If one of the Pod's containers has defined a preStop hook and the terminationGracePeriodSeconds
in the Pod spec is not set to 0, the kubelet runs that hook inside of the container.
The default terminationGracePeriodSeconds setting is 30 seconds.
If the preStop hook is still running after the grace period expires, the kubelet requests
a small, one-off grace period extension of 2 seconds.
Note:
If the preStop hook needs longer to complete than the default grace period allows,
you must modify terminationGracePeriodSeconds to suit this.
The kubelet triggers the container runtime to send a TERM signal to process 1 inside each
container.
There is special ordering if the Pod has any
sidecar containers defined.
Otherwise, the containers in the Pod receive the TERM signal at different times and in
an arbitrary order. If the order of shutdowns matters, consider using a preStop hook
to synchronize (or switch to using sidecar containers).
At the same time as the kubelet is starting graceful shutdown of the Pod, the control plane
evaluates whether to remove that shutting-down Pod from EndpointSlice (and Endpoints) objects,
where those objects represent a Service
with a configured selector.
ReplicaSets and other workload resources
no longer treat the shutting-down Pod as a valid, in-service replica.
Pods that shut down slowly should not continue to serve regular traffic and should start
terminating and finish processing open connections. Some applications need to go beyond
finishing open connections and need more graceful termination, for example, session draining
and completion.
Any endpoints that represent the terminating Pods are not immediately removed from
EndpointSlices, and a status indicating terminating state
is exposed from the EndpointSlice API (and the legacy Endpoints API).
Terminating endpoints always have their ready status as false (for backward compatibility
with versions before 1.26), so load balancers will not use them for regular traffic.
If traffic draining for a terminating Pod is needed, the actual readiness can be checked via the
serving condition. You can find more details on how to implement connection draining in the
tutorial Pods And Endpoints Termination Flow.
The kubelet ensures the Pod is shut down and terminated
When the grace period expires, if there is still any container running in the Pod, the
kubelet triggers forcible shutdown.
The container runtime sends SIGKILL to any processes still running in any container in the Pod.
The kubelet also cleans up a hidden pause container if that container runtime uses one.
The kubelet transitions the Pod into a terminal phase (Failed or Succeeded depending on
the end state of its containers).
The kubelet triggers forcible removal of the Pod object from the API server, by setting grace period
to 0 (immediate deletion).
The API server deletes the Pod's API object, which is then no longer visible from any client.
Forced Pod termination
Caution:
Forced deletions can be potentially disruptive for some workloads and their Pods.
By default, all deletes are graceful within 30 seconds. The kubectl delete command supports
the --grace-period=<seconds> option which allows you to override the default and specify your
own value.
Setting the grace period to 0 forcibly and immediately deletes the Pod from the API
server. If the Pod was still running on a node, that forcible deletion triggers the kubelet to
begin immediate cleanup.
Using kubectl, you must specify an additional flag --force along with --grace-period=0
in order to perform force deletions.
When a force deletion is performed, the API server does not wait for confirmation
from the kubelet that the Pod has been terminated on the node it was running on. It
removes the Pod in the API immediately so a new Pod can be created with the same
name. On the node, Pods that are set to terminate immediately will still be given
a small grace period before being force killed.
Caution:
Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.
If you need to force-delete Pods that are part of a StatefulSet, refer to the task
documentation for
deleting Pods from a StatefulSet.
Pod shutdown and sidecar containers
If your Pod includes one or more
sidecar containers
(init containers with an Always restart policy), the kubelet will delay sending
the TERM signal to these sidecar containers until the last main container has fully terminated.
The sidecar containers will be terminated in the reverse order they are defined in the Pod spec.
This ensures that sidecar containers continue serving the other containers in the Pod until they
are no longer needed.
This means that slow termination of a main container will also delay the termination of the sidecar containers.
If the grace period expires before the termination process is complete, the Pod may enter forced termination.
In this case, all remaining containers in the Pod will be terminated simultaneously with a short grace period.
Similarly, if the Pod has a preStop hook that exceeds the termination grace period, emergency termination may occur.
In general, if you have used preStop hooks to control the termination order without sidecar containers, you can now
remove them and allow the kubelet to manage sidecar termination automatically.
Garbage collection of Pods
For failed Pods, the API objects remain in the cluster's API until a human or
controller process
explicitly removes them.
The Pod garbage collector (PodGC), which is a controller in the control plane, cleans up
terminated Pods (with a phase of Succeeded or Failed), when the number of Pods exceeds the
configured threshold (determined by terminated-pod-gc-threshold in the kube-controller-manager).
This avoids a resource leak as Pods are created and terminated over time.
Additionally, PodGC cleans up any Pods which satisfy any of the following conditions:
are orphan Pods - bound to a node which no longer exists,
Along with cleaning up the Pods, PodGC will also mark them as failed if they are in a non-terminal
phase. Also, PodGC adds a Pod disruption condition when cleaning up an orphan Pod.
See Pod disruption conditions
for more details.
For detailed information about Pod and container status in the API, see
the API reference documentation covering
status for Pod.
2 - Init Containers
This page provides an overview of init containers: specialized containers that run
before app containers in a Pod.
Init containers can contain utilities or setup scripts not present in an app image.
You can specify init containers in the Pod specification alongside the containers
array (which describes app containers).
In Kubernetes, a sidecar container is a container that
starts before the main application container and continues to run. This document is about init containers:
containers that run to completion during Pod initialization.
Understanding init containers
A Pod can have multiple containers
running apps within it, but it can also have one or more init containers, which are run
before the app containers are started.
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
If a Pod's init container fails, the kubelet repeatedly restarts that init container until it succeeds.
However, if the Pod has a restartPolicy of Never, and an init container fails during startup of that Pod, Kubernetes treats the overall Pod as failed.
To specify an init container for a Pod, add the initContainers field into
the Pod specification,
as an array of container items (similar to the app containers field and its contents).
See Container in the
API reference for more details.
The status of the init containers is returned in .status.initContainerStatuses
field as an array of the container statuses (similar to the .status.containerStatuses
field).
Differences from regular containers
Init containers support all the fields and features of app containers,
including resource limits, volumes, and security settings. However, the
resource requests and limits for an init container are handled differently,
as documented in Resource sharing within containers.
Regular init containers (in other words: excluding sidecar containers) do not support the
lifecycle, livenessProbe, readinessProbe, or startupProbe fields. Init containers
must run to completion before the Pod can be ready; sidecar containers continue running
during a Pod's lifetime, and do support some probes. See sidecar container
for further details about sidecar containers.
If you specify multiple init containers for a Pod, kubelet runs each init
container sequentially. Each init container must succeed before the next can run.
When all of the init containers have run to completion, kubelet initializes
the application containers for the Pod and runs them as usual.
Differences from sidecar containers
Init containers run and complete their tasks before the main application container starts.
Unlike sidecar containers,
init containers are not continuously running alongside the main containers.
Init containers run to completion sequentially, and the main container does not start
until all the init containers have successfully completed.
Init containers do not support lifecycle, livenessProbe, readinessProbe, or
startupProbe whereas sidecar containers support all these probes to control their lifecycle.
Init containers share the same resources (CPU, memory, network) with the main application
containers but do not interact directly with them. They can, however, use shared volumes
for data exchange.
Using init containers
Because init containers have separate images from app containers, they
have some advantages for start-up related code:
Init containers can contain utilities or custom code for setup that are not present in an app
image. For example, there is no need to make an image FROM another image just to use a tool like
sed, awk, python, or dig during setup.
The application image builder and deployer roles can work independently without
the need to jointly build a single app image.
Init containers can run with a different view of the filesystem than app containers in the
same Pod. Consequently, they can be given access to
Secrets that app containers cannot access.
Because init containers run to completion before any app containers start, init containers offer
a mechanism to block or delay app container startup until a set of preconditions are met. Once
preconditions are met, all of the app containers in a Pod can start in parallel.
Init containers can securely run utilities or custom code that would otherwise make an app
container image less secure. By keeping unnecessary tools separate you can limit the attack
surface of your app container image.
Examples
Here are some ideas for how to use init containers:
Wait for a Service to
be created, using a shell one-line command like:
for i in {1..100}; do sleep 1; if nslookup myservice; then exit 0; fi; done; exit 1
Register this Pod with a remote server from the downward API with a command like:
curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$(<POD_NAME>)&ip=$(<POD_IP>)'
Wait for some time before starting the app container, for example by sleeping for a fixed interval.
Place values into a configuration file and run a template tool to dynamically
generate a configuration file for the main app container. For example,
place the POD_IP value in a configuration and generate the main app
configuration file using Jinja.
Init containers in use
This example defines a simple Pod that has two init containers.
The first waits for myservice, and the second waits for mydb. Once both
init containers complete, the Pod runs the app container from its spec section.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app.kubernetes.io/name: MyApp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
You can start this Pod by running:
kubectl apply -f myapp.yaml
The output is similar to this:
pod/myapp-pod created
And check on its status with:
kubectl get -f myapp.yaml
The output is similar to this:
NAME READY STATUS RESTARTS AGE
myapp-pod 0/1 Init:0/2 0 6m
or for more details:
kubectl describe -f myapp.yaml
The output is similar to this:
Name: myapp-pod
Namespace: default
[...]
Labels: app.kubernetes.io/name=MyApp
Status: Pending
[...]
Init Containers:
init-myservice:
[...]
State: Running
[...]
init-mydb:
[...]
State: Waiting
Reason: PodInitializing
Ready: False
[...]
Containers:
myapp-container:
[...]
State: Waiting
Reason: PodInitializing
Ready: False
[...]
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
16s 16s 1 {default-scheduler } Normal Scheduled Successfully assigned myapp-pod to 172.17.4.201
16s 16s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulling pulling image "busybox"
13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Pulled Successfully pulled image "busybox"
13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Created Created container init-myservice
13s 13s 1 {kubelet 172.17.4.201} spec.initContainers{init-myservice} Normal Started Started container init-myservice
To see logs for the init containers in this Pod, run:
kubectl logs myapp-pod -c init-myservice # Inspect the first init container
kubectl logs myapp-pod -c init-mydb      # Inspect the second init container
At this point, those init containers will be waiting to discover Services named
mydb and myservice.
Here's a configuration you can use to make those Services appear:
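A sketch of two such Services follows; only the names myservice and mydb matter for this example, and the port numbers are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376        # placeholder backend port
---
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377        # placeholder backend port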
You'll then see that those init containers complete, and that the myapp-pod
Pod moves into the Running state:
kubectl get -f myapp.yaml
The output is similar to this:
NAME READY STATUS RESTARTS AGE
myapp-pod 1/1 Running 0 9m
This simple example should provide some inspiration for you to create your own
init containers. What's next contains a link to a more detailed example.
Detailed behavior
During Pod startup, the kubelet delays running init containers until the networking
and storage are ready. Then the kubelet runs the Pod's init containers in the order
they appear in the Pod's spec.
Each init container must exit successfully before
the next container starts. If a container fails to start due to the runtime or
exits with failure, it is retried according to the Pod restartPolicy. However,
if the Pod restartPolicy is set to Always, the init containers use
restartPolicy OnFailure.
A Pod cannot be Ready until all init containers have succeeded. The ports on an
init container are not aggregated under a Service. A Pod that is initializing
is in the Pending state but should have a condition Initialized set to false.
If the Pod restarts, or is restarted, all init containers
must execute again.
Changes to the init container spec are limited to the container image field.
Directly altering the image field of an init container does not restart the
Pod or trigger its recreation. If the Pod has yet to start, that change may
have an effect on how the Pod boots up.
For a pod template
you can typically change any field for an init container; the impact of making
that change depends on where the pod template is used.
Because init containers can be restarted, retried, or re-executed, init container
code should be idempotent. In particular, code that writes into any emptyDir volume
should be prepared for the possibility that an output file already exists.
Init containers have all of the fields of an app container. However, Kubernetes
prohibits readinessProbe from being used because init containers cannot
define readiness distinct from completion. This is enforced during validation.
Use activeDeadlineSeconds on the Pod to prevent init containers from failing forever.
The active deadline includes init containers.
However, it is recommended to use activeDeadlineSeconds only if teams deploy their application
as a Job, because activeDeadlineSeconds has an effect even after the init containers have finished:
a Pod that is already running correctly would be killed by activeDeadlineSeconds once that deadline is reached.
The name of each app and init container in a Pod must be unique; a
validation error is thrown for any container sharing a name with another.
Resource sharing within containers
Given the order of execution for init, sidecar and app containers, the following rules
for resource usage apply:
The highest of any particular resource request or limit defined on all init
containers is the effective init request/limit. If a resource has no
limit specified for some container, that is treated as the highest possible limit.
The Pod's effective request/limit for a resource is the higher of:
the sum of all app containers request/limit for a resource
the effective init request/limit for a resource
Scheduling is done based on effective requests/limits, which means
init containers can reserve resources for initialization that are not used
during the life of the Pod.
The Pod's effective QoS (quality of service) tier is the
QoS tier for init containers and app containers alike.
Quota and limits are applied based on the effective Pod request and
limit.
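As a brief worked illustration of the request/limit rules above (the quantities are arbitrary): suppose a Pod has one init container requesting 300m CPU and 50Mi memory, and two app containers each requesting 100m CPU and 100Mi memory. The effective init request is 300m CPU / 50Mi memory, and the sum of the app container requests is 200m CPU / 200Mi memory. Taking the higher value per resource, the Pod's effective request is 300m CPU and 200Mi memory, and that is what the scheduler uses to place the Pod.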
Init containers and Linux cgroups
On Linux, resource allocations for Pod level control groups (cgroups) are based on the effective Pod
request and limit, the same as the scheduler.
Pod restart reasons
A Pod can restart, causing re-execution of init containers, for the following
reasons:
The Pod infrastructure container is restarted. This is uncommon and would
have to be done by someone with root access to nodes.
All containers in a Pod are terminated while restartPolicy is set to Always,
forcing a restart, and the init container completion record has been lost due
to garbage collection.
The Pod will not be restarted when the init container image is changed, or the
init container completion record has been lost due to garbage collection. This
applies for Kubernetes v1.20 and later. If you are using an earlier version of
Kubernetes, consult the documentation for the version you are using.
Sidecar containers are the secondary containers that run along with the main
application container within the same Pod.
These containers are used to enhance or to extend the functionality of the primary app
container by providing additional services, or functionality such as logging, monitoring,
security, or data synchronization, without directly altering the primary application code.
Typically, you only have one app container in a Pod. For example, if you have a web
application that requires a local webserver, the local webserver is a sidecar and the
web application itself is the app container.
Sidecar containers in Kubernetes
Kubernetes implements sidecar containers as a special case of
init containers; sidecar containers remain
running after Pod startup. This document uses the term regular init containers to clearly
refer to containers that only run during Pod startup.
Provided that your cluster has the SidecarContainers feature gate enabled
(the feature is active by default since Kubernetes v1.29), you can specify a restartPolicy
for containers listed in a Pod's initContainers field.
These restartable sidecar containers are independent from other init containers and from
the main application container(s) within the same pod.
These can be started, stopped, or restarted without affecting the main application container
and other init containers.
You can also run a Pod with multiple containers that are not marked as init or sidecar
containers. This is appropriate if the containers within the Pod are required for the
Pod to work overall, but you don't need to control which containers start or stop first.
You could also do this if you need to support older versions of Kubernetes that don't
support a container-level restartPolicy field.
Example application
Here's an example of a Deployment with two containers, one of which is a sidecar:
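The manifest below is a minimal sketch of such a Deployment (the image, command, volume layout, and log path are illustrative). The key detail is the restartPolicy: Always on the logshipper entry under initContainers, which is what marks it as a sidecar:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: alpine:3.19
        command: ['sh', '-c', 'while true; do echo "logging" >> /opt/logs.txt; sleep 1; done']
        volumeMounts:
        - name: data
          mountPath: /opt
      initContainers:
      - name: logshipper
        image: alpine:3.19
        restartPolicy: Always
        command: ['sh', '-c', 'tail -F /opt/logs.txt']
        volumeMounts:
        - name: data
          mountPath: /opt
      volumes:
      - name: data
        emptyDir: {}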
If an init container is created with its restartPolicy set to Always, it will
start and remain running during the entire life of the Pod. This can be helpful for
running supporting services separated from the main application containers.
If a readinessProbe is specified for this init container, its result will be used
to determine the ready state of the Pod.
Since these containers are defined as init containers, they benefit from the same
ordering and sequential guarantees as regular init containers, allowing you to mix
sidecar containers with regular init containers for complex Pod initialization flows.
Compared to regular init containers, sidecars defined within initContainers continue to
run after they have started. This is important when there is more than one entry inside
.spec.initContainers for a Pod. After a sidecar-style init container is running (the kubelet
has set the started status for that init container to true), the kubelet then starts the
next init container from the ordered .spec.initContainers list.
That status either becomes true because there is a process running in the
container and no startup probe defined, or as a result of its startupProbe succeeding.
Upon Pod termination,
the kubelet postpones terminating sidecar containers until the main application container has fully stopped.
The sidecar containers are then shut down in the opposite order of their appearance in the Pod specification.
This approach ensures that the sidecars remain operational, supporting other containers within the Pod,
until their service is no longer required.
Jobs with sidecar containers
If you define a Job that uses a sidecar implemented as a Kubernetes-style init container,
the sidecar container in each Pod does not prevent the Job from completing after the
main container has finished.
Here's an example of a Job with two containers, one of which is a sidecar:
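A minimal sketch of such a Job follows (again, images, commands, and paths are illustrative). The sidecar keeps tailing the log while the main container runs to completion, and its continued presence does not block Job completion:
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: myjob
        image: alpine:3.19
        command: ['sh', '-c', 'echo "logging" > /opt/logs.txt']
        volumeMounts:
        - name: data
          mountPath: /opt
      initContainers:
      - name: logshipper
        image: alpine:3.19
        restartPolicy: Always
        command: ['sh', '-c', 'tail -F /opt/logs.txt']
        volumeMounts:
        - name: data
          mountPath: /opt
      volumes:
      - name: data
        emptyDir: {}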
Sidecar containers run alongside app containers in the same pod. However, they do not
execute the primary application logic; instead, they provide supporting functionality to
the main application.
Sidecar containers have their own independent lifecycles. They can be started, stopped,
and restarted independently of app containers. This means you can update, scale, or
maintain sidecar containers without affecting the primary application.
Sidecar containers share the same network and storage namespaces with the primary
container. This co-location allows them to interact closely and share resources.
From the Kubernetes perspective, graceful termination of sidecars is less important.
When other containers have used up all of the allotted graceful termination time, sidecar containers
may receive SIGTERM, followed by SIGKILL, sooner than expected.
Exit codes different from 0 (0 indicates successful exit) are therefore normal for sidecar containers
on Pod termination and should generally be ignored by external tooling.
Differences from init containers
Sidecar containers work alongside the main container, extending its functionality and
providing additional services.
Sidecar containers run concurrently with the main application container. They are active
throughout the lifecycle of the pod and can be started and stopped independently of the
main container. Unlike init containers,
sidecar containers support probes to control their lifecycle.
Sidecar containers can interact directly with the main application containers, because
like init containers they always share the same network, and can optionally also share
volumes (filesystems).
Init containers stop before the main containers start up, so init containers cannot
exchange messages with the app container in a Pod. Any data passing is one-way
(for example, an init container can put information inside an emptyDir volume).
Resource sharing within containers
Given the order of execution for init, sidecar and app containers, the following rules
for resource usage apply:
The highest of any particular resource request or limit defined on all init
containers is the effective init request/limit. If a resource has no
limit specified for some container, that is treated as the highest possible limit.
The Pod's effective request/limit for a resource is the sum of
pod overhead and the higher of:
the sum of all non-init container (app and sidecar container) requests/limits for a
resource
the effective init request/limit for a resource
Scheduling is done based on effective requests/limits, which means
init containers can reserve resources for initialization that are not used
during the life of the Pod.
The Pod's effective QoS (quality of service) tier is the
QoS tier for all init, sidecar and app containers alike.
Quota and limits are applied based on the effective Pod request and
limit.
Sidecar containers and Linux cgroups
On Linux, resource allocations for Pod level control groups (cgroups) are based on the effective Pod
request and limit, the same as the scheduler.
This page provides an overview of ephemeral containers: a special type of container
that runs temporarily in an existing Pod to
accomplish user-initiated actions such as troubleshooting. You use ephemeral
containers to inspect services rather than to build applications.
Understanding ephemeral containers
Pods are the fundamental building
block of Kubernetes applications. Since Pods are intended to be disposable and
replaceable, you cannot add a container to a Pod once it has been created.
Instead, you usually delete and replace Pods in a controlled fashion using
deployments.
Sometimes it's necessary to inspect the state of an existing Pod, however, for
example to troubleshoot a hard-to-reproduce bug. In these cases you can run
an ephemeral container in an existing Pod to inspect its state and run
arbitrary commands.
What is an ephemeral container?
Ephemeral containers differ from other containers in that they lack guarantees
for resources or execution, and they will never be automatically restarted, so
they are not appropriate for building applications. Ephemeral containers are
described using the same ContainerSpec as regular containers, but many fields
are incompatible and disallowed for ephemeral containers.
Ephemeral containers may not have ports, so fields such as ports,
livenessProbe, readinessProbe are disallowed.
Pod resource allocations are immutable, so setting resources is disallowed.
Ephemeral containers are created using a special ephemeralcontainers handler
in the API rather than by adding them directly to pod.spec, so it's not
possible to add an ephemeral container using kubectl edit.
Like regular containers, you may not change or remove an ephemeral container
after you have added it to a Pod.
Note:
Ephemeral containers are not supported by static pods.
Uses for ephemeral containers
Ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed or a container image
doesn't include debugging utilities.
In particular, distroless images
enable you to deploy minimal container images that reduce attack surface
and exposure to bugs and vulnerabilities. Since distroless images do not include a
shell or any debugging utilities, it's difficult to troubleshoot distroless
images using kubectl exec alone.
When using ephemeral containers, it's helpful to enable process namespace
sharing so
you can view processes in other containers.
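For example, a common way to attach an ephemeral debugging container to a running Pod is with kubectl debug (the Pod name, image, and target container name below are placeholders):
kubectl debug -it my-pod --image=busybox:1.28 --target=my-app-container
The --target parameter places the debug container in the process namespace of the named container; this relies on the container runtime supporting that feature.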
This guide is for application owners who want to build
highly available applications, and thus need to understand
what types of disruptions can happen to Pods.
It is also for cluster administrators who want to perform automated
cluster actions, like upgrading and autoscaling clusters.
Voluntary and involuntary disruptions
Pods do not disappear until someone (a person or a controller) destroys them, or
there is an unavoidable hardware or system software error.
We call these unavoidable cases involuntary disruptions to
an application. Examples are:
a hardware failure of the physical machine backing the node
cluster administrator deletes VM (instance) by mistake
cloud provider or hypervisor failure makes VM disappear
a kernel panic
the node disappears from the cluster due to cluster network partition
eviction of a Pod due to the node being out-of-resources
Except for the out-of-resources condition, all these conditions
should be familiar to most users; they are not specific
to Kubernetes.
We call other cases voluntary disruptions. These include both
actions initiated by the application owner and those initiated by a Cluster
Administrator. Typical application owner actions include:
deleting the deployment or other controller that manages the pod
updating a deployment's pod template causing a restart
Draining a node from a cluster to scale the cluster down (learn about
Cluster Autoscaling).
Removing a pod from a node to permit something else to fit on that node.
These actions might be taken directly by the cluster administrator, or by automation
run by the cluster administrator, or by your cluster hosting provider.
Ask your cluster administrator or consult your cloud provider or distribution documentation
to determine if any sources of voluntary disruptions are enabled for your cluster.
If none are enabled, you can skip creating Pod Disruption Budgets.
Caution:
Not all voluntary disruptions are constrained by Pod Disruption Budgets. For example,
deleting deployments or pods bypasses Pod Disruption Budgets.
Dealing with disruptions
Here are some ways to mitigate involuntary disruptions:
Replicate your application if you need higher availability. (Learn about running replicated
stateless
and stateful applications.)
For even higher availability when running replicated applications,
spread applications across racks (using
anti-affinity)
or across zones (if using a
multi-zone cluster.)
The frequency of voluntary disruptions varies. On a basic Kubernetes cluster, there are
no automated voluntary disruptions (only user-triggered ones). However, your cluster administrator or hosting provider
may run some additional services which cause voluntary disruptions. For example,
rolling out node software updates can cause voluntary disruptions. Also, some implementations
of cluster (node) autoscaling may cause voluntary disruptions to defragment and compact nodes.
Your cluster administrator or hosting provider should have documented what level of voluntary
disruptions, if any, to expect. Certain configuration options, such as
using PriorityClasses
in your pod spec can also cause voluntary (and involuntary) disruptions.
Pod disruption budgets
FEATURE STATE: Kubernetes v1.21 [stable]
Kubernetes offers features to help you run highly available applications even when you
introduce frequent voluntary disruptions.
As an application owner, you can create a PodDisruptionBudget (PDB) for each application.
A PDB limits the number of Pods of a replicated application that are down simultaneously from
voluntary disruptions. For example, a quorum-based application would
like to ensure that the number of replicas running is never brought below the
number needed for a quorum. A web front end might want to
ensure that the number of replicas serving load never falls below a certain
percentage of the total.
Cluster managers and hosting providers should use tools which
respect PodDisruptionBudgets by calling the Eviction API
instead of directly deleting pods or deployments.
For example, the kubectl drain subcommand lets you mark a node as going out of
service. When you run kubectl drain, the tool tries to evict all of the Pods on
the Node you're taking out of service. The eviction request that kubectl submits on
your behalf may be temporarily rejected, so the tool periodically retries all failed
requests until all Pods on the target node are terminated, or until a configurable timeout
is reached.
A PDB specifies the number of replicas that an application can tolerate having, relative to how
many it is intended to have. For example, a Deployment which has a .spec.replicas: 5 is
supposed to have 5 pods at any given time. If its PDB allows for there to be 4 at a time,
then the Eviction API will allow voluntary disruption of one (but not two) pods at a time.
The group of pods that comprise the application is specified using a label selector, the same
as the one used by the application's controller (deployment, stateful-set, etc).
The "intended" number of pods is computed from the .spec.replicas of the workload resource
that is managing those pods. The control plane discovers the owning workload resource by
examining the .metadata.ownerReferences of the Pod.
Involuntary disruptions cannot be prevented by PDBs; however they
do count against the budget.
Pods which are deleted or unavailable due to a rolling upgrade to an application do count
against the disruption budget, but workload resources (such as Deployment and StatefulSet)
are not limited by PDBs when doing rolling upgrades. Instead, the handling of failures
during application updates is configured in the spec for the specific workload resource.
It is recommended to set the AlwaysAllow Unhealthy Pod Eviction Policy
on your PodDisruptionBudgets to support eviction of misbehaving applications during a node drain.
The default behavior is to wait for the application pods to become healthy
before the drain can proceed.
When a pod is evicted using the eviction API, it is gracefully
terminated, honoring the
terminationGracePeriodSeconds setting in its PodSpec.
PodDisruptionBudget example
Consider a cluster with 3 nodes, node-1 through node-3.
The cluster is running several applications. One of them has 3 replicas initially called
pod-a, pod-b, and pod-c. Another, unrelated pod without a PDB, called pod-x, is also shown.
Initially, the pods are laid out as follows:
node-1: pod-a available, pod-x available
node-2: pod-b available
node-3: pod-c available
All 3 pods are part of a deployment, and they collectively have a PDB which requires
there be at least 2 of the 3 pods to be available at all times.
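A sketch of a PDB expressing that requirement could look like the following; the name and label selector are illustrative and must match the labels that the deployment puts on its Pods. The commented-out line shows the optional AlwaysAllow Unhealthy Pod Eviction Policy mentioned earlier, if your cluster version supports it:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: myapp
  # Optional: allow eviction of unhealthy (not Ready) Pods even when the budget is not met.
  # unhealthyPodEvictionPolicy: AlwaysAllow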
For example, assume the cluster administrator wants to reboot into a new kernel version to fix a bug in the kernel.
The cluster administrator first tries to drain node-1 using the kubectl drain command.
That tool tries to evict pod-a and pod-x. This succeeds immediately.
Both pods go into the terminating state at the same time.
This puts the cluster in this state:
node-1 (draining): pod-a terminating, pod-x terminating
node-2: pod-b available
node-3: pod-c available
The deployment notices that one of the pods is terminating, so it creates a replacement
called pod-d. Since node-1 is cordoned, it lands on another node. Something has
also created pod-y as a replacement for pod-x.
(Note: for a StatefulSet, pod-a, which would be called something like pod-0, would need
to terminate completely before its replacement, which is also called pod-0 but has a
different UID, could be created. Otherwise, the example applies to a StatefulSet as well.)
Now the cluster is in this state:
node-1 (draining): pod-a terminating, pod-x terminating
node-2: pod-b available, pod-d starting
node-3: pod-c available, pod-y
At some point, the pods terminate, and the cluster looks like this:
node-1 (drained): no Pods
node-2: pod-b available, pod-d starting
node-3: pod-c available, pod-y
At this point, if an impatient cluster administrator tries to drain node-2 or
node-3, the drain command will block, because there are only 2 available
pods for the deployment, and its PDB requires at least 2. After some time passes, pod-d becomes available.
The cluster state now looks like this:
node-1 (drained): no Pods
node-2: pod-b available, pod-d available
node-3: pod-c available, pod-y
Now, the cluster administrator tries to drain node-2.
The drain command will try to evict the two pods in some order, say
pod-b first and then pod-d. It will succeed at evicting pod-b.
But, when it tries to evict pod-d, it will be refused because that would leave only
one pod available for the deployment.
The deployment creates a replacement for pod-b called pod-e.
Because there are not enough resources in the cluster to schedule
pod-e, the drain will again block. The cluster may end up in this
state:
node-1 (drained): no Pods
node-2: pod-b terminating, pod-d available
node-3: pod-c available, pod-y
no node: pod-e pending
At this point, the cluster administrator needs to
add a node back to the cluster to proceed with the upgrade.
You can see how Kubernetes varies the rate at which disruptions
can happen, according to:
how many replicas an application needs
how long it takes to gracefully shutdown an instance
how long it takes a new instance to start up
the type of controller
the cluster's resource capacity
Pod disruption conditions
FEATURE STATE: Kubernetes v1.31 [stable] (enabled by default: true)
A dedicated Pod condition, DisruptionTarget, is added to indicate
that the Pod is about to be deleted due to a disruption.
The reason field of the condition additionally
indicates one of the following reasons for the Pod termination:
PreemptionByScheduler
Pod is due to be preempted by a scheduler in order to accommodate a new Pod with a higher priority. For more information, see Pod priority preemption.
DeletionByTaintManager
Pod is due to be deleted by Taint Manager (which is part of the node lifecycle controller within kube-controller-manager) due to a NoExecute taint that the Pod does not tolerate; see taint-based evictions.
In all other disruption scenarios, like eviction due to exceeding
Pod container limits,
Pods don't receive the DisruptionTarget condition because the disruptions were
probably caused by the Pod and would reoccur on retry.
Note:
A Pod disruption might be interrupted. The control plane might re-attempt to
continue the disruption of the same Pod, but it is not guaranteed. As a result,
the DisruptionTarget condition might be added to a Pod, but that Pod might then not actually be
deleted. In such a situation, after some time, the
Pod disruption condition will be cleared.
Along with cleaning up the pods, the Pod garbage collector (PodGC) will also mark them as failed if they are in a non-terminal
phase (see also Pod garbage collection).
When using a Job (or CronJob), you may want to use these Pod disruption conditions as part of your Job's
Pod failure policy.
Separating Cluster Owner and Application Owner Roles
Often, it is useful to think of the Cluster Manager
and Application Owner as separate roles with limited knowledge
of each other. This separation of responsibilities
may make sense in these scenarios:
when there are many application teams sharing a Kubernetes cluster, and
there is natural specialization of roles
when third-party tools or services are used to automate cluster management
Pod Disruption Budgets support this separation of roles by providing an
interface between the roles.
If you do not have such a separation of responsibilities in your organization,
you may not need to use Pod Disruption Budgets.
How to perform Disruptive Actions on your Cluster
If you are a Cluster Administrator, and you need to perform a disruptive action on all
the nodes in your cluster, such as a node or system software upgrade, here are some options:
Accept downtime during the upgrade.
Failover to another complete replica cluster.
No downtime, but may be costly both for the duplicated nodes
and for human effort to orchestrate the switchover.
Write disruption tolerant applications and use PDBs.
No downtime.
Minimal resource duplication.
Allows more automation of cluster administration.
Writing disruption-tolerant applications is tricky, but the work to tolerate voluntary
disruptions largely overlaps with work to support autoscaling and tolerating
involuntary disruptions.
Learn about updating a deployment
including steps to maintain its availability during the rollout.
Pod Quality of Service Classes
This page introduces Quality of Service (QoS) classes in Kubernetes, and explains
how Kubernetes assigns a QoS class to each Pod as a consequence of the resource
constraints that you specify for the containers in that Pod. Kubernetes relies on this
classification to make decisions about which Pods to evict when there are not enough
available resources on a Node.
Quality of Service classes
Kubernetes classifies the Pods that you run and allocates each Pod into a specific
quality of service (QoS) class. Kubernetes uses that classification to influence how different
pods are handled. Kubernetes does this classification based on the
resource requests
of the Containers in that Pod, along with
how those requests relate to resource limits.
This is known as Quality of Service
(QoS) class. Kubernetes assigns every Pod a QoS class based on the resource requests
and limits of its component Containers. QoS classes are used by Kubernetes to decide
which Pods to evict from a Node experiencing
Node Pressure. The possible
QoS classes are Guaranteed, Burstable, and BestEffort. When a Node runs out of resources,
Kubernetes will first evict BestEffort Pods running on that Node, followed by Burstable and
finally Guaranteed Pods. When this eviction is due to resource pressure, only Pods exceeding
resource requests are candidates for eviction.
Guaranteed
Pods that are Guaranteed have the strictest resource limits and are least likely
to face eviction. They are guaranteed not to be killed until they exceed their limits
or there are no lower-priority Pods that can be preempted from the Node. They may
not acquire resources beyond their specified limits. These Pods can also make
use of exclusive CPUs using the
static CPU management policy.
Criteria
For a Pod to be given a QoS class of Guaranteed:
Every Container in the Pod must have a memory limit and a memory request.
For every Container in the Pod, the memory limit must equal the memory request.
Every Container in the Pod must have a CPU limit and a CPU request.
For every Container in the Pod, the CPU limit must equal the CPU request.
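For example, a Pod whose containers are specified like the following fragment meets those criteria, because requests equal limits for both CPU and memory (the name, image, and quantities are illustrative):
containers:
- name: app
  image: registry.example/app:1.0
  resources:
    requests:
      cpu: "700m"
      memory: "200Mi"
    limits:
      cpu: "700m"
      memory: "200Mi"
If any container omitted one of these values, or set a limit higher than the corresponding request, the Pod would fall into the Burstable class instead.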
Burstable
Pods that are Burstable have some lower-bound resource guarantees based on the request, but
do not require a specific limit. If a limit is not specified, it defaults to a
limit equivalent to the capacity of the Node, which allows the Pods to flexibly increase
their resources if resources are available. In the event of Pod eviction due to Node
resource pressure, these Pods are evicted only after all BestEffort Pods are evicted.
Because a Burstable Pod can include a Container that has no resource limits or requests, a Pod
that is Burstable can try to use any amount of node resources.
Criteria
A Pod is given a QoS class of Burstable if:
The Pod does not meet the criteria for QoS class Guaranteed.
At least one Container in the Pod has a memory or CPU request or limit.
BestEffort
Pods in the BestEffort QoS class can use node resources that aren't specifically assigned
to Pods in other QoS classes. For example, if you have a node with 16 CPU cores available to the
kubelet, and you assign 4 CPU cores to a Guaranteed Pod, then a Pod in the BestEffort
QoS class can try to use any amount of the remaining 12 CPU cores.
The kubelet prefers to evict BestEffort Pods if the node comes under resource pressure.
Criteria
A Pod has a QoS class of BestEffort if it doesn't meet the criteria for either Guaranteed
or Burstable. In other words, a Pod is BestEffort only if none of the Containers in the Pod have a
memory limit or a memory request, and none of the Containers in the Pod have a
CPU limit or a CPU request.
Containers in a Pod can request other resources (not CPU or memory) and still be classified as
BestEffort.
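You can check which QoS class Kubernetes assigned to a running Pod by reading its status (the Pod name is a placeholder):
kubectl get pod my-pod -o jsonpath='{.status.qosClass}'
The output is one of Guaranteed, Burstable, or BestEffort.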
Memory QoS with cgroup v2
FEATURE STATE: Kubernetes v1.22 [alpha] (enabled by default: false)
Memory QoS uses the memory controller of cgroup v2 to guarantee memory resources in Kubernetes.
Memory requests and limits of containers in a Pod are used to set the memory.min
and memory.high interfaces provided by the memory controller. When memory.min is set to the memory requests,
memory resources are reserved and never reclaimed by the kernel; this is how Memory QoS ensures
memory availability for Kubernetes pods. If memory limits are set for a container,
the system needs to limit container memory usage; Memory QoS uses memory.high
to throttle a workload that is approaching its memory limit, ensuring that the system is not overwhelmed
by instantaneous memory allocation.
Memory QoS relies on QoS class to determine which settings to apply; however, these are different
mechanisms that both provide controls over quality of service.
Some behavior is independent of QoS class
Certain behavior is independent of the QoS class assigned by Kubernetes. For example:
Any Container exceeding a resource limit will be killed and restarted by the kubelet without
affecting other Containers in that Pod.
If a Container exceeds its resource request and the node it runs on faces
resource pressure, the Pod it is in becomes a candidate for eviction.
If this occurs, all Containers in the Pod will be terminated. Kubernetes may create a
replacement Pod, usually on a different node.
The resource request of a Pod is equal to the sum of the resource requests of
its component Containers, and the resource limit of a Pod is equal to the sum of
the resource limits of its component Containers.
The kube-scheduler does not consider QoS class when selecting which Pods to
preempt.
Preemption can occur when a cluster does not have enough resources to run all the Pods
you defined.
This page explains how user namespaces are used in Kubernetes pods. A user
namespace isolates the user running inside the container from the one
in the host.
A process running as root in a container can run as a different (non-root) user
in the host; in other words, the process has full privileges for operations
inside the user namespace, but is unprivileged for operations outside the
namespace.
You can use this feature to reduce the damage a compromised container can do to
the host or other pods on the same node. Several security
vulnerabilities rated either HIGH or CRITICAL were not
exploitable when user namespaces are active. It is expected that user namespaces will
mitigate some future vulnerabilities too.
Before you begin
Note: This section links to third party projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for these projects, which are listed alphabetically. To add a project to this list, read the content guide before submitting a change. More information.
This is a Linux-only feature and support is needed in Linux for idmap mounts on
the filesystems used. This means:
On the node, the filesystem you use for /var/lib/kubelet/pods/, or the
custom directory you configure for this, needs idmap mount support.
All the filesystems used in the pod's volumes must support idmap mounts.
In practice this means you need at least Linux 6.3, as tmpfs started supporting
idmap mounts in that version. This is usually needed as several Kubernetes
features use tmpfs (the service account token that is mounted by default uses a
tmpfs, Secrets use a tmpfs, etc.)
Some popular filesystems that support idmap mounts in Linux 6.3 are: btrfs,
ext4, xfs, fat, tmpfs, overlayfs.
In addition, the container runtime and its underlying OCI runtime must support
user namespaces. The following OCI runtimes offer support:
crun version 1.9 or greater (version 1.13+ is recommended).
Some OCI runtimes do not include the support needed for using user namespaces in
Linux pods. If you use a managed Kubernetes, or have downloaded it from packages
and set it up, it's possible that nodes in your cluster use a runtime that doesn't
include this support.
To use user namespaces with Kubernetes pods, you also need a CRI
container runtime
that supports this feature:
containerd: version 2.0 (and later) supports user namespaces for containers.
CRI-O: version 1.25 (and later) supports user namespaces for containers.
You can see the status of user namespaces support in cri-dockerd tracked in an issue
on GitHub.
Introduction
User namespaces is a Linux feature that allows you to map users in the container to
different users on the host. Furthermore, the capabilities granted to a pod in
a user namespace are valid only in the namespace and void outside of it.
A pod can opt-in to use user namespaces by setting the pod.spec.hostUsers field
to false.
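A minimal sketch of a Pod that opts in (the name and image are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false
  containers:
  - name: shell
    image: debian:stable
    command: ["sleep", "infinity"]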
The kubelet will pick host UIDs/GIDs a pod is mapped to, and will do so in a way
to guarantee that no two pods on the same node use the same mapping.
The runAsUser, runAsGroup, fsGroup, etc. fields in the pod.spec always
refer to the user inside the container.
When this feature is enabled, the valid UIDs/GIDs are in the range 0-65535. This
applies to files and processes (runAsUser, runAsGroup, etc.).
Files using a UID/GID outside this range will be seen as belonging to the
overflow ID, usually 65534 (configured in /proc/sys/kernel/overflowuid and
/proc/sys/kernel/overflowgid). However, it is not possible to modify those
files, even by running as the 65534 user/group.
Most applications that need to run as root but don't access other host
namespaces or resources, should continue to run fine without any changes needed
if user namespaces is activated.
Understanding user namespaces for pods
Several container runtimes with their default configuration (like Docker Engine,
containerd, CRI-O) use Linux namespaces for isolation. Other technologies exist
and can be used with those runtimes too (e.g. Kata Containers uses VMs instead of
Linux namespaces). This page is applicable for container runtimes using Linux
namespaces for isolation.
When creating a pod, by default, several new namespaces are used for isolation:
a network namespace to isolate the network of the container, a PID namespace to
isolate the view of processes, etc. If a user namespace is used, this will
isolate the users in the container from the users in the node.
This means containers can run as root and be mapped to a non-root user on the
host. Inside the container the process will think it is running as root (and
therefore tools like apt, yum, etc. work fine), while in reality the process
doesn't have privileges on the host. You can verify this, for example, by
checking which user the container's process is running as when you execute ps aux on
the host. The user that ps shows is not the same as the user you see if you
run the id command inside the container.
This abstraction limits what can happen, for example, if the container manages
to escape to the host. Given that the container is running as a non-privileged
user on the host, what it can do to the host is limited.
Furthermore, as users in each pod are mapped to different, non-overlapping
users on the host, what they can do to other pods is limited too.
Capabilities granted to a pod are also limited to the pod's user namespace and
mostly invalid outside of it; some are even completely void. Here are two examples:
CAP_SYS_MODULE does not have any effect if granted to a pod using user
namespaces; the pod isn't able to load kernel modules.
CAP_SYS_ADMIN is limited to the pod's user namespace and invalid outside
of it.
Without using a user namespace a container running as root, in the case of a
container breakout, has root privileges on the node. And if some capability were
granted to the container, the capabilities are valid on the host too. None of
this is true when we use user namespaces.
If you want to know more details about what changes when user namespaces are in
use, see man 7 user_namespaces.
Set up a node to support user namespaces
By default, the kubelet assigns pods UIDs/GIDs above the range 0-65535, based on
the assumption that the host's files and processes use UIDs/GIDs within this
range, which is standard for most Linux distributions. This approach prevents
any overlap between the UIDs/GIDs of the host and those of the pods.
Avoiding the overlap is important to mitigate the impact of vulnerabilities such
as CVE-2021-25741, where a pod can potentially read arbitrary
files in the host. If the UIDs/GIDs of the pod and the host don't overlap, it is
limited what a pod would be able to do: the pod UID/GID won't match the host's
file owner/group.
The kubelet can use a custom range for user IDs and group IDs for pods. To
configure a custom range, the node needs to have:
A user kubelet in the system (you cannot use any other username here)
The binary getsubids installed (part of shadow-utils) and
in the PATH for the kubelet binary.
A configuration of subordinate UIDs/GIDs for the kubelet user (see
man 5 subuid and
man 5 subgid).
This setting only gathers the UID/GID range configuration and does not change
the user executing the kubelet.
You must follow some constraints for the subordinate ID range that you assign
to the kubelet user:
The subordinate user ID, that starts the UID range for Pods, must be a
multiple of 65536 and must also be greater than or equal to 65536. In other
words, you cannot use any ID from the range 0-65535 for Pods; the kubelet
imposes this restriction to make it difficult to create an accidentally insecure
configuration.
The subordinate ID count must be a multiple of 65536
The subordinate ID count must be at least 65536 x <maxPods> where <maxPods>
is the maximum number of pods that can run on the node.
You must assign the same range for both user IDs and for group IDs. It doesn't
matter if other users have user ID ranges that don't align with the group ID
ranges.
None of the assigned ranges should overlap with any other assignment.
The subordinate configuration must be only one line. In other words, you can't
have multiple ranges.
For example, you could define /etc/subuid and /etc/subgid to both have
these entries for the kubelet user:
# The format is
# name:firstID:count of IDs
# where
# - firstID is 65536 (the minimum value possible)
# - count of IDs is 110 (the default limit for the number of pods on the node) * 65536
kubelet:65536:7208960
Integration with Pod security admission checks
FEATURE STATE: Kubernetes v1.29 [alpha]
For Linux Pods that enable user namespaces, Kubernetes relaxes the application of
Pod Security Standards in a controlled way.
This behavior can be controlled by the feature
gate UserNamespacesPodSecurityStandards, which allows an early opt-in for end
users. Admins have to ensure that user namespaces are enabled by all nodes
within the cluster if using the feature gate.
If you enable the associated feature gate and create a Pod that uses user
namespaces, the following fields won't be constrained even in contexts that enforce the
Baseline or Restricted pod security standard. This behavior does not
present a security concern because root inside a Pod with user namespaces
actually refers to the user inside the container, that is never mapped to a
privileged user on the host. Here's the list of fields that are not checked for Pods in those
circumstances:
When using a user namespace for the pod, it is disallowed to use other host
namespaces. In particular, if you set hostUsers: false then you are not
allowed to set any of: hostNetwork: true, hostIPC: true, or hostPID: true.
There are two ways to expose Pod and container fields to a running container: environment variables, and as files that are populated by a special volume type. Together, these two ways of exposing Pod and container fields are called the downward API.
It is sometimes useful for a container to have information about itself, without
being overly coupled to Kubernetes. The downward API allows containers to consume
information about themselves or the cluster without using the Kubernetes client
or API server.
An example is an existing application that assumes a particular well-known
environment variable holds a unique identifier. One possibility is to wrap the
application, but that is tedious and error-prone, and it violates the goal of low
coupling. A better option would be to use the Pod's name as an identifier, and
inject the Pod's name into the well-known environment variable.
In Kubernetes, there are two ways to expose Pod and container fields to a running container:
as environment variables, and as files in a downwardAPI volume.
Together, these two ways of exposing Pod and container fields are called the
downward API.
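For example, the following container spec fragment injects the Pod's own name and namespace as environment variables using fieldRef (the variable names are illustrative):
containers:
- name: app
  image: registry.example/app:1.0
  env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: MY_POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace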
Available fields
Only some Kubernetes API fields are available through the downward API. This
section lists which fields you can make available.
You can pass information from available Pod-level fields using fieldRef.
At the API level, the spec for a Pod always defines at least one
Container.
You can pass information from available Container-level fields using
resourceFieldRef.
Information available via fieldRef
For some Pod-level fields, you can provide them to a container either as
an environment variable or using a downwardAPI volume. The fields available
via either mechanism are:
status.hostIP
the primary IP address of the node to which the Pod is assigned
status.hostIPs
the IP addresses, a dual-stack version of status.hostIP; the first entry is always the same as status.hostIP
status.podIP
the pod's primary IP address (usually, its IPv4 address)
status.podIPs
the IP addresses, a dual-stack version of status.podIP; the first entry is always the same as status.podIP
The following information is available through a downwardAPI volume
fieldRef, but not as environment variables:
metadata.labels
all of the pod's labels, formatted as label-key="escaped-label-value" with one label per line
metadata.annotations
all of the pod's annotations, formatted as annotation-key="escaped-annotation-value" with one annotation per line
Information available via resourceFieldRef
These container-level fields allow you to provide information about
requests and limits
for resources such as CPU and memory.
resource: limits.cpu
A container's CPU limit
resource: requests.cpu
A container's CPU request
resource: limits.memory
A container's memory limit
resource: requests.memory
A container's memory request
resource: limits.hugepages-*
A container's hugepages limit
resource: requests.hugepages-*
A container's hugepages request
resource: limits.ephemeral-storage
A container's ephemeral-storage limit
resource: requests.ephemeral-storage
A container's ephemeral-storage request
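For example, the following container spec fragment exposes the container's own CPU request and memory limit as environment variables using resourceFieldRef (the container and variable names are illustrative):
containers:
- name: app
  image: registry.example/app:1.0
  resources:
    requests:
      cpu: "250m"
    limits:
      memory: "128Mi"
  env:
  - name: MY_CPU_REQUEST
    valueFrom:
      resourceFieldRef:
        containerName: app
        resource: requests.cpu
  - name: MY_MEM_LIMIT
    valueFrom:
      resourceFieldRef:
        containerName: app
        resource: limits.memory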
Fallback information for resource limits
If CPU and memory limits are not specified for a container, and you use the
downward API to try to expose that information, then the
kubelet defaults to exposing the maximum allocatable value for CPU and memory
based on the node allocatable
calculation.