This page explains how to create and manage stateless workloads within a
Google Distributed Cloud (GDC) air-gapped appliance Kubernetes cluster. Stateless workloads let you
scale your application deployment based on workload demands, all
without having to manage persistent storage in a Kubernetes cluster to store
data or application state. This page helps you get started so you can
efficiently optimize and adjust your application's availability.
This page is for developers within the application operator group, who are
responsible for creating application workloads for their organization.
Before you begin
To run commands against the pre-configured bare metal Kubernetes cluster, complete the
following steps:
1. Locate the Kubernetes cluster name, or ask your Platform Administrator for the
   cluster name.
2. Sign in and generate the kubeconfig file for the Kubernetes cluster if you don't
   have one (see /distributed-cloud/hosted/docs/latest/appliance/application/ao-user/iam/sign-in#kubernetes-cluster-kubeconfig).
3. Use the kubeconfig path of the Kubernetes cluster to replace
   CLUSTER_KUBECONFIG in these instructions. To confirm the file grants access
   to the cluster, you can run the optional check after this list.
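The following is an optional, minimal check, not part of the required setup,
that confirms your kubeconfig file can reach the cluster. It uses the same
CLUSTER_KUBECONFIG and NAMESPACE placeholders as the rest of this page:

    # Print the control plane address to confirm the kubeconfig resolves to a cluster.
    kubectl --kubeconfig CLUSTER_KUBECONFIG cluster-info

    # Confirm you can list resources in your project namespace.
    kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE get pods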
To get the required permissions to create stateless workloads, ask your
Organization IAM Admin to grant you the Namespace Admin role (namespace-admin)
in your project namespace.
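As a quick, optional way to verify the grant took effect, kubectl can report
whether your credentials allow creating Deployment objects. This is a minimal
sketch, assuming the same CLUSTER_KUBECONFIG and NAMESPACE placeholders used
elsewhere on this page:

    # Prints "yes" when your account can create Deployment objects in the namespace.
    kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE \
        auth can-i create deployments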
Create a deployment
You create a deployment by writing a Deployment manifest and running
kubectl apply to create the resource. This method also retains updates made to
live resources without merging the changes back into the manifest files.
To create a Deployment from its manifest file, run:
    kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE \
        apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: DEPLOYMENT_NAME
    spec:
      replicas: NUMBER_OF_REPLICAS
      selector:
        matchLabels:
          run: APP_NAME
      template:
        metadata:
          labels: # The labels given to each pod in the deployment, which are used
                  # to manage all pods in the deployment.
            run: APP_NAME
        spec: # The pod specification, which defines how each pod runs in the deployment.
          containers:
          - name: CONTAINER_NAME
            image: CONTAINER_IMAGE
            resources:
              requests:
                nvidia.com/gpu-pod-NVIDIA_A100_80GB_PCIE: 1
              limits:
                nvidia.com/gpu-pod-NVIDIA_A100_80GB_PCIE: 1
    EOF
Replace the following:
- CLUSTER_KUBECONFIG: the kubeconfig file for the Kubernetes cluster to which
  you're deploying container workloads.
- NAMESPACE: the project namespace in which to deploy the container workloads.
- DEPLOYMENT_NAME: the name of the Deployment object.
- APP_NAME: the name of the application to run within the deployment.
- NUMBER_OF_REPLICAS: the number of replicated Pod objects that the deployment
  manages.
- CONTAINER_NAME: the name of the container.
- CONTAINER_IMAGE: the name of the container image. You must include the
  container registry path and version of the image, such as
  REGISTRY_PATH/hello-app:1.0.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[[["\u003cp\u003eStateless applications enhance scalability by storing data and application state with the client, rather than in persistent user cluster storage.\u003c/p\u003e\n"],["\u003cp\u003eBefore running commands, you need the user cluster name and a kubeconfig file, and must use it to replace \u003ccode\u003eUSER_CLUSTER_KUBECONFIG\u003c/code\u003e in the instructions.\u003c/p\u003e\n"],["\u003cp\u003eTo get permissions for creating stateless workloads, request the Namespace Admin role from your Organization IAM Admin.\u003c/p\u003e\n"],["\u003cp\u003eDeployments are created by using a \u003ccode\u003eDeployment\u003c/code\u003e manifest and the \u003ccode\u003ekubectl apply\u003c/code\u003e command, retaining updates to live resources without merging them back into the manifest.\u003c/p\u003e\n"],["\u003cp\u003eWhen creating a deployment, you need to define parameters such as \u003ccode\u003eDEPLOYMENT_NAME\u003c/code\u003e, \u003ccode\u003eAPP_NAME\u003c/code\u003e, \u003ccode\u003eNUMBER_OF_REPLICAS\u003c/code\u003e, \u003ccode\u003eCONTAINER_NAME\u003c/code\u003e, and \u003ccode\u003eCONTAINER_IMAGE\u003c/code\u003e.\u003c/p\u003e\n"]]],[],null,["# Create stateless workloads\n\nThis page explains how to create and manage stateless workloads within a\nGoogle Distributed Cloud (GDC) air-gapped appliance Kubernetes cluster. Stateless workloads let you\nscale your application deployment based on workload demands, all\nwithout having to manage persistent storage in a Kubernetes cluster to store\ndata or application state. This page helps you get started so you can\nefficiently optimize and adjust your application's availability.\n\nThis page is for developers within the application operator group, who are\nresponsible for creating application workloads for their organization.\n\nBefore you begin\n----------------\n\nTo run commands against the pre-configured bare metal Kubernetes cluster, make sure you have the\nfollowing resources:\n\n1. Locate the Kubernetes cluster name, or ask your Platform\n Administrator what the cluster name is.\n\n2. [Sign in and generate](/distributed-cloud/hosted/docs/latest/appliance/application/ao-user/iam/sign-in#kubernetes-cluster-kubeconfig) the\n kubeconfig file for the Kubernetes cluster if you don't have one.\n\n3. Use the kubeconfig path of the Kubernetes cluster to replace\n \u003cvar translate=\"no\"\u003eCLUSTER_KUBECONFIG\u003c/var\u003e in these instructions.\n\nTo get the required permissions to create stateless workloads, ask your\nOrganization IAM Admin to grant you the Namespace Admin role (`namespace-admin`)\nin your project namespace.\n\nCreate a deployment\n-------------------\n\nYou create a deployment by writing a `Deployment` manifest and running\n`kubectl apply` to create the resource. 
This method also retains updates made to\nlive resources without merging the changes back into the manifest files.\n\nTo create a `Deployment` from its manifest file, run: \n\n kubectl --kubeconfig \u003cvar translate=\"no\"\u003eCLUSTER_KUBECONFIG\u003c/var\u003e -n \u003cvar translate=\"no\"\u003eNAMESPACE\u003c/var\u003e \\\n apply -f - \u003c\u003cEOF\n apiVersion: apps/v1\n kind: Deployment\n metadata:\n name: \u003cvar translate=\"no\"\u003eDEPLOYMENT_NAME\u003c/var\u003e\n spec:\n replicas: \u003cvar translate=\"no\"\u003eNUMBER_OF_REPLICAS\u003c/var\u003e\n selector:\n matchLabels:\n run: \u003cvar translate=\"no\"\u003eAPP_NAME\u003c/var\u003e\n template:\n metadata:\n labels: # The labels given to each pod in the deployment, which are used\n # to manage all pods in the deployment.\n run: \u003cvar translate=\"no\"\u003eAPP_NAME\u003c/var\u003e\n spec: # The pod specification, which defines how each pod runs in the deployment.\n containers:\n - name: \u003cvar translate=\"no\"\u003eCONTAINER_NAME\u003c/var\u003e\n image: \u003cvar translate=\"no\"\u003eCONTAINER_IMAGE\u003c/var\u003e\n resources:\n requests:\n nvidia.com/gpu-pod-NVIDIA_A100_80GB_PCIE: 1\n limits:\n nvidia.com/gpu-pod-NVIDIA_A100_80GB_PCIE: 1\n EOF\n\nReplace the following:\n\n- \u003cvar translate=\"no\"\u003eCLUSTER_KUBECONFIG\u003c/var\u003e: the kubeconfig file for the\n Kubernetes cluster to which you're deploying container workloads.\n\n- \u003cvar translate=\"no\"\u003eNAMESPACE\u003c/var\u003e: the project namespace in which to deploy the container workloads.\n\n- \u003cvar translate=\"no\"\u003eDEPLOYMENT_NAME\u003c/var\u003e: the name of the `Deployment` object.\n\n- \u003cvar translate=\"no\"\u003eAPP_NAME\u003c/var\u003e: the name of the application to run within\n the deployment.\n\n- \u003cvar translate=\"no\"\u003eNUMBER_OF_REPLICAS\u003c/var\u003e: the number of replicated `Pod`\n objects that the deployment manages.\n\n- \u003cvar translate=\"no\"\u003eCONTAINER_NAME\u003c/var\u003e: the name of the container.\n\n- \u003cvar translate=\"no\"\u003eCONTAINER_IMAGE\u003c/var\u003e: the name of the container image. You\n must include the container registry path and version of the image, such as\n \u003cvar class=\"readonly\" translate=\"no\"\u003eREGISTRY_PATH\u003c/var\u003e`/hello-app:1.0`.\n\n| **Note:** You can also use `kubectl apply -f `\u003cvar translate=\"no\"\u003eDIRECTORY\u003c/var\u003e to create new objects defined by manifest files stored in a directory.\n\nFor example: \n\n apiVersion: apps/v1\n kind: Deployment\n metadata:\n name: my-app\n spec:\n replicas: 3\n selector:\n matchLabels:\n run: my-app\n template:\n metadata:\n labels:\n run: my-app\n spec:\n containers:\n - name: hello-app\n image: \u003cvar class=\"readonly\" translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eREGISTRY_PATH\u003c/span\u003e\u003c/var\u003e/hello-app:1.0\n resources:\n requests:\n nvidia.com/gpu-pod-NVIDIA_A100_80GB_PCIE: 1\n limits:\n nvidia.com/gpu-pod-NVIDIA_A100_80GB_PCIE: 1\n\nIf you're deploying GPU workloads to your containers, see\n[Manage GPU container workloads](/distributed-cloud/hosted/docs/latest/appliance/application/ao-user/containers/deploy-gpu-container-workloads)\nfor more information."]]
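As an optional follow-up not covered in the steps above, you can watch the
rollout and confirm the replicas become ready. This sketch reuses the
placeholders defined earlier on this page:

    # Block until all replicas in the deployment are ready.
    kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE \
        rollout status deployment/DEPLOYMENT_NAME

    # List the pods that the deployment manages, selected by the run label.
    kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE \
        get pods -l run=APP_NAME

Because the workload is stateless, you can later adjust availability by
changing the replica count, for example with
kubectl scale deployment/DEPLOYMENT_NAME --replicas=5.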