This page describes Kubernetes' Pod object and its use in Google Kubernetes Engine.
What is a Pod?
Pods are the smallest, most basic deployable objects in Kubernetes. A Pod represents a single instance of a running process in your cluster.
Pods contain one or more containers, such as Docker containers. When a Pod runs multiple containers, the containers are managed as a single entity and share the Pod's resources. Generally, running multiple containers in a single Pod is an advanced use case.
Pods also contain shared networking and storage resources for their containers:
- Network: Pods are automatically assigned unique IP addresses. Containers in a Pod share the same network namespace, including IP address and network ports, and communicate with each other inside the Pod on localhost.
- Storage: Pods can specify a set of shared storage volumes that all containers in the Pod can access.
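As a sketch of these shared resources, the following hypothetical manifest (the names web-pod, content-vol, and the sidecar command are illustrative, not from this page) defines a Pod with two containers mounting the same volume. Because the containers also share a network namespace, the sidecar could reach the web server on localhost:80:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod              # hypothetical name
spec:
  volumes:
  - name: content-vol        # volume shared by both containers
    emptyDir: {}
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: content-vol
      mountPath: /usr/share/nginx/html
  - name: content-refresher  # illustrative sidecar that rewrites the shared content
    image: busybox
    command: ["sh", "-c", "while true; do date > /data/index.html; sleep 60; done"]
    volumeMounts:
    - name: content-vol
      mountPath: /data
```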
You can consider a Pod to be a self-contained, isolated "logical host" that contains the systemic needs of the application it serves.
A Pod is meant to run a single instance of your application on your cluster. However, it is not recommended to create individual Pods directly. Instead, you generally create a set of identical Pods, called replicas, to run your application. Such a set of replicated Pods is created and managed by a controller, such as a Deployment. Controllers manage the lifecycle of their constituent Pods and can also perform horizontal scaling, changing the number of Pods as necessary.
Although you might occasionally interact with Pods directly to debug, troubleshoot, or inspect them, it is highly recommended that you use a controller to manage your Pods.
Pods run on nodes in your cluster. Once created, a Pod remains on its node until its process is complete, the Pod is deleted, the Pod is evicted from the node due to lack of resources, or the node fails. If a node fails, Pods on the node are automatically scheduled for deletion.
Pods are ephemeral. They are not designed to run forever, and when a Pod is terminated it cannot be brought back. In general, Pods do not disappear until they are deleted by a user or by a controller.
Pods do not "heal" or repair themselves. For example, if a Pod is scheduled on a node which later fails, the Pod is deleted. Similarly, if a Pod is evicted from a node for any reason, the Pod does not replace itself.
Each Pod has a PodStatus API object, which is represented by a Pod's status field. Pods publish their phase to the status: phase field. The phase of a Pod is a high-level summary of the Pod in its current state.
When you run kubectl get pod to inspect a Pod running on your cluster, a Pod can be in one of the following possible phases:
- Pending: Pod has been created and accepted by the cluster, but one or more of its containers are not yet running. This phase includes time spent being scheduled on a node and downloading images.
- Running: Pod has been bound to a node, and all of the containers have been created. At least one container is running, is in the process of starting, or is restarting.
- Succeeded: All containers in the Pod have terminated successfully. Terminated Pods do not restart.
- Failed: All containers in the Pod have terminated, and at least one container has terminated in failure. A container "fails" if it exits with a non-zero status.
- Unknown: The state of the Pod cannot be determined.
PodStatus contains an array called PodConditions, which is represented in the Pod manifest as conditions. The field has a type field and a status field. conditions indicates more specifically the conditions within the Pod that are causing its current status. The type field can contain PodScheduled, Initialized, ContainersReady, or Ready. The status field corresponds with the type field, and can contain True, False, or Unknown.
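As an illustration (the values shown are hypothetical), the status stanza of a running Pod with a failing readiness check might look like:

```yaml
status:
  phase: Running
  conditions:
  - type: PodScheduled
    status: "True"
  - type: Initialized
    status: "True"
  - type: Ready
    status: "False"   # e.g. a readiness probe is failing
```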
Because Pods are ephemeral and cannot repair or replace themselves, it is not recommended to create individual Pods directly.
Instead, you can use a controller, such as a Deployment, which creates and manages Pods for you. Controllers are also useful for rolling out updates, such as changing the version of an application running in a container, because the controller manages the whole update process for you.
When a Pod starts running, it requests an amount of CPU and memory. This helps Kubernetes schedule the Pod onto an appropriate node to run the workload. A Pod will not be scheduled onto a node that doesn't have the resources to honor the Pod's request. A request is the minimum amount of CPU or memory that Kubernetes guarantees to a Pod.
You can configure the CPU and memory requests for a Pod, based on the resources your applications need. You can also specify requests for individual containers running in the Pod. Keep the following in mind:
- The default request for CPU is 100m (one-tenth of a CPU core). This is too small for many applications, and is probably much smaller than the amount of CPU available on the node.
- There is no default request for memory. A Pod with no memory request could be scheduled onto a node without enough memory to run the Pod's workloads.
- Setting too small a value for CPU or memory requests could cause too many Pods or a sub-optimal combination of Pods to be scheduled onto a given node and reduce performance.
- Setting too large a value for CPU or memory requests could cause the Pod to be unschedulable and increase the cost of the cluster's resources.
- In addition to, or instead of, setting a Pod's resources, you can specify resources for individual containers running in the Pod. If you only specify resources for the containers, the Pod's requests are the sum of the requests specified for the containers. If you specify both, the sum of requests for all containers must not exceed the Pod requests.
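A minimal sketch of container-level requests (the Pod name, image, and values are illustrative, not recommendations):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: request-demo    # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 250m       # a quarter of a CPU core
        memory: 128Mi   # guaranteed minimum memory for scheduling
```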
It is strongly recommended that you configure requests for your Pods, based on the requirements of the actual workloads. For more information, refer to Kubernetes best practices: Resource requests and limits on the GCP blog.
By default, a Pod has no upper bound on the maximum amount of CPU or memory it can use on a node. You can set limits to control the amount of CPU or memory your Pod can use on a node. A limit is the maximum amount of CPU or memory that Kubernetes allows a Pod to use.
In addition to, or instead of, setting a Pod's limits, you can specify limits for individual containers running in the Pod. If you only specify limits for the containers, the Pod's limits are the sum of the limits specified for the containers. However, each container can only access resources up to its limit, so if you choose to specify the limits on containers only, you must specify limits for each container. If you specify both, the sum of limits for all containers must not exceed the Pod limit.
Limits are not taken into consideration when scheduling Pods, but can prevent resource contention among Pods on the same node, and can prevent a Pod from causing system instability on the node by starving the underlying operating system of resources.
It is strongly recommended that you configure limits for your Pods, based on the requirements of the actual workloads. For more information, refer to Kubernetes best practices: Resource requests and limits on the GCP blog.
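Requests and limits are set together in the same resources stanza. A sketch, with illustrative values (the container would be throttled above the CPU limit and could be terminated if it exceeds the memory limit):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limit-demo     # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 250m
        memory: 128Mi
      limits:
        cpu: 500m      # CPU usage is throttled above this
        memory: 256Mi  # exceeding this can get the container terminated
```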
Controller objects, such as Deployments and StatefulSets, contain a Pod template field. Pod templates contain a Pod specification which determines how each Pod should run, including which containers should be run within the Pods and which volumes the Pods should mount.
Controller objects use Pod templates to create Pods and to manage their "desired state" within your cluster. When a Pod template is changed, all future Pods reflect the new template, but all existing Pods do not.
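For example, a hypothetical Deployment manifest (the names web-deploy and app: web are illustrative) embeds the Pod specification under the template field:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy     # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:            # the Pod template
    metadata:
      labels:
        app: web
    spec:              # the Pod specification each replica runs with
      containers:
      - name: web
        image: nginx
```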
For more information on how Pod Templates work, refer to Creating a Deployment in the Kubernetes documentation.
Controlling which nodes a Pod runs on
By default, Pods run on nodes in the default node pool for the cluster. You can configure the node pool a Pod selects explicitly or implicitly:
You can explicitly force a Pod to deploy to a specific node pool by setting a nodeSelector in the Pod manifest. This forces a Pod to run only on Nodes in that node pool.
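On GKE, nodes in a node pool carry the cloud.google.com/gke-nodepool label, so a nodeSelector can target the pool by name. A sketch, where the pool name high-memory-pool is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pool-pinned-pod    # hypothetical name
spec:
  nodeSelector:
    cloud.google.com/gke-nodepool: high-memory-pool   # illustrative pool name
  containers:
  - name: app
    image: nginx
```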
You can specify resource requests for the containers your Pod runs. The Pod will only run on nodes that satisfy the resource requests. For instance, if the Pod definition includes a container that requires four CPUs, the Pod will not be scheduled onto a node with only two CPUs.
Pod usage patterns
Pods can be used in two main ways:
- Pods that run a single container. The simplest and most common Pod pattern is a single container per Pod, where the single container represents an entire application. In this case, you can think of the Pod as a wrapper around a single container.
- Pods that run multiple containers that need to work together. Pods with multiple containers are primarily used to support co-located, co-managed programs that need to share resources. These co-located containers might form a single cohesive unit of service—one container serving files from a shared volume while another container refreshes or updates those files. The Pod wraps these containers and storage resources together as a single manageable entity.
Each Pod is meant to run a single instance of a given application. If you want to run multiple instances, you should use one Pod for each instance of the application. This is generally referred to as replication. Replicated Pods are created and managed as a group by a controller, such as a Deployment.
Pods terminate gracefully when their processes are complete. Kubernetes imposes a default graceful termination period of 30 seconds. When deleting a Pod, you can override this grace period by setting the --grace-period flag to the number of seconds to wait for the Pod to terminate before forcibly terminating it.
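You can also set the default grace period declaratively in the Pod spec; the kubectl flag overrides it at deletion time. A sketch with an illustrative Pod name and value:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: slow-shutdown-pod          # hypothetical name
spec:
  terminationGracePeriodSeconds: 60  # replaces the 30-second default for this Pod
  containers:
  - name: app
    image: nginx
```

With this manifest applied, kubectl delete pod slow-shutdown-pod --grace-period=10 would wait only 10 seconds before forcibly terminating the Pod.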