This page explains how to create and manage stateless workloads in a Google Distributed Cloud (GDC) air-gapped appliance Kubernetes cluster. Stateless workloads let you scale your application deployment based on workload demands, all without having to manage persistent storage in a Kubernetes cluster to hold data or application state. This page helps you get started so you can efficiently optimize and adjust your application's availability.

This page is for developers within the application operator group, who are responsible for creating application workloads for their organization.
Before you begin
To run commands against the pre-configured bare metal Kubernetes cluster, make sure you have the following resources:

1. Locate the Kubernetes cluster name, or ask your Platform Administrator what the cluster name is.

2. Sign in and generate the kubeconfig file for the Kubernetes cluster if you don't have one.

3. Use the kubeconfig path of the Kubernetes cluster to replace CLUSTER_KUBECONFIG in these instructions; a quick connectivity check follows this list.
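As a quick sanity check, you can confirm that the kubeconfig actually grants access to the cluster before deploying anything. This is a minimal sketch, reusing the CLUSTER_KUBECONFIG placeholder from these instructions:

    # List the cluster's nodes to confirm the kubeconfig path and
    # credentials work. Any authentication error surfaces here, before
    # you attempt a deployment.
    kubectl --kubeconfig CLUSTER_KUBECONFIG get nodes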
To get the required permissions to create stateless workloads, ask your Organization IAM Admin to grant you the Namespace Admin role (namespace-admin) in your project namespace.
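If you're unsure whether the role has already been granted, kubectl can report whether you're allowed to create deployments. A minimal check, assuming NAMESPACE is your project namespace:

    # Prints "yes" if your bound role permits creating Deployment
    # objects in the project namespace, "no" otherwise.
    kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE auth can-i create deployments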
Create a deployment
You create a deployment by writing a Deployment manifest and running kubectl apply to create the resource. This method also retains updates made to live resources without merging the changes back into the manifest files.

To create a Deployment from its manifest file, run:
    kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE \
        apply -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: DEPLOYMENT_NAME
    spec:
      replicas: NUMBER_OF_REPLICAS
      selector:
        matchLabels:
          run: APP_NAME
      template:
        metadata:
          labels: # The labels given to each pod in the deployment, which are used
                  # to manage all pods in the deployment.
            run: APP_NAME
        spec: # The pod specification, which defines how each pod runs in the deployment.
          containers:
          - name: CONTAINER_NAME
            image: CONTAINER_IMAGE
            resources:
              requests:
                nvidia.com/gpu-pod-NVIDIA_A100_80GB_PCIE: 1
              limits:
                nvidia.com/gpu-pod-NVIDIA_A100_80GB_PCIE: 1
    EOF
Replace the following:

- CLUSTER_KUBECONFIG: the kubeconfig file for the Kubernetes cluster to which you're deploying container workloads.

- NAMESPACE: the project namespace in which to deploy the container workloads.

- DEPLOYMENT_NAME: the name of the Deployment object.

- APP_NAME: the name of the application to run within the deployment.

- NUMBER_OF_REPLICAS: the number of replicated Pod objects that the deployment manages.

- CONTAINER_NAME: the name of the container.

- CONTAINER_IMAGE: the name of the container image. Include the container registry path and version of the image, such as REGISTRY_PATH/hello-app:1.0.
[[["Fácil de entender","easyToUnderstand","thumb-up"],["Meu problema foi resolvido","solvedMyProblem","thumb-up"],["Outro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Informações incorretas ou exemplo de código","incorrectInformationOrSampleCode","thumb-down"],["Não contém as informações/amostras de que eu preciso","missingTheInformationSamplesINeed","thumb-down"],["Problema na tradução","translationIssue","thumb-down"],["Outro","otherDown","thumb-down"]],["Última atualização 2025-09-04 UTC."],[[["\u003cp\u003eStateless applications enhance scalability by storing data and application state with the client, rather than in persistent user cluster storage.\u003c/p\u003e\n"],["\u003cp\u003eBefore running commands, you need the user cluster name and a kubeconfig file, and must use it to replace \u003ccode\u003eUSER_CLUSTER_KUBECONFIG\u003c/code\u003e in the instructions.\u003c/p\u003e\n"],["\u003cp\u003eTo get permissions for creating stateless workloads, request the Namespace Admin role from your Organization IAM Admin.\u003c/p\u003e\n"],["\u003cp\u003eDeployments are created by using a \u003ccode\u003eDeployment\u003c/code\u003e manifest and the \u003ccode\u003ekubectl apply\u003c/code\u003e command, retaining updates to live resources without merging them back into the manifest.\u003c/p\u003e\n"],["\u003cp\u003eWhen creating a deployment, you need to define parameters such as \u003ccode\u003eDEPLOYMENT_NAME\u003c/code\u003e, \u003ccode\u003eAPP_NAME\u003c/code\u003e, \u003ccode\u003eNUMBER_OF_REPLICAS\u003c/code\u003e, \u003ccode\u003eCONTAINER_NAME\u003c/code\u003e, and \u003ccode\u003eCONTAINER_IMAGE\u003c/code\u003e.\u003c/p\u003e\n"]]],[],null,["# Create stateless workloads\n\nThis page explains how to create and manage stateless workloads within a\nGoogle Distributed Cloud (GDC) air-gapped appliance Kubernetes cluster. Stateless workloads let you\nscale your application deployment based on workload demands, all\nwithout having to manage persistent storage in a Kubernetes cluster to store\ndata or application state. This page helps you get started so you can\nefficiently optimize and adjust your application's availability.\n\nThis page is for developers within the application operator group, who are\nresponsible for creating application workloads for their organization.\n\nBefore you begin\n----------------\n\nTo run commands against the pre-configured bare metal Kubernetes cluster, make sure you have the\nfollowing resources:\n\n1. Locate the Kubernetes cluster name, or ask your Platform\n Administrator what the cluster name is.\n\n2. [Sign in and generate](/distributed-cloud/hosted/docs/latest/appliance/application/ao-user/iam/sign-in#kubernetes-cluster-kubeconfig) the\n kubeconfig file for the Kubernetes cluster if you don't have one.\n\n3. Use the kubeconfig path of the Kubernetes cluster to replace\n \u003cvar translate=\"no\"\u003eCLUSTER_KUBECONFIG\u003c/var\u003e in these instructions.\n\nTo get the required permissions to create stateless workloads, ask your\nOrganization IAM Admin to grant you the Namespace Admin role (`namespace-admin`)\nin your project namespace.\n\nCreate a deployment\n-------------------\n\nYou create a deployment by writing a `Deployment` manifest and running\n`kubectl apply` to create the resource. 
This method also retains updates made to\nlive resources without merging the changes back into the manifest files.\n\nTo create a `Deployment` from its manifest file, run: \n\n kubectl --kubeconfig \u003cvar translate=\"no\"\u003eCLUSTER_KUBECONFIG\u003c/var\u003e -n \u003cvar translate=\"no\"\u003eNAMESPACE\u003c/var\u003e \\\n apply -f - \u003c\u003cEOF\n apiVersion: apps/v1\n kind: Deployment\n metadata:\n name: \u003cvar translate=\"no\"\u003eDEPLOYMENT_NAME\u003c/var\u003e\n spec:\n replicas: \u003cvar translate=\"no\"\u003eNUMBER_OF_REPLICAS\u003c/var\u003e\n selector:\n matchLabels:\n run: \u003cvar translate=\"no\"\u003eAPP_NAME\u003c/var\u003e\n template:\n metadata:\n labels: # The labels given to each pod in the deployment, which are used\n # to manage all pods in the deployment.\n run: \u003cvar translate=\"no\"\u003eAPP_NAME\u003c/var\u003e\n spec: # The pod specification, which defines how each pod runs in the deployment.\n containers:\n - name: \u003cvar translate=\"no\"\u003eCONTAINER_NAME\u003c/var\u003e\n image: \u003cvar translate=\"no\"\u003eCONTAINER_IMAGE\u003c/var\u003e\n resources:\n requests:\n nvidia.com/gpu-pod-NVIDIA_A100_80GB_PCIE: 1\n limits:\n nvidia.com/gpu-pod-NVIDIA_A100_80GB_PCIE: 1\n EOF\n\nReplace the following:\n\n- \u003cvar translate=\"no\"\u003eCLUSTER_KUBECONFIG\u003c/var\u003e: the kubeconfig file for the\n Kubernetes cluster to which you're deploying container workloads.\n\n- \u003cvar translate=\"no\"\u003eNAMESPACE\u003c/var\u003e: the project namespace in which to deploy the container workloads.\n\n- \u003cvar translate=\"no\"\u003eDEPLOYMENT_NAME\u003c/var\u003e: the name of the `Deployment` object.\n\n- \u003cvar translate=\"no\"\u003eAPP_NAME\u003c/var\u003e: the name of the application to run within\n the deployment.\n\n- \u003cvar translate=\"no\"\u003eNUMBER_OF_REPLICAS\u003c/var\u003e: the number of replicated `Pod`\n objects that the deployment manages.\n\n- \u003cvar translate=\"no\"\u003eCONTAINER_NAME\u003c/var\u003e: the name of the container.\n\n- \u003cvar translate=\"no\"\u003eCONTAINER_IMAGE\u003c/var\u003e: the name of the container image. You\n must include the container registry path and version of the image, such as\n \u003cvar class=\"readonly\" translate=\"no\"\u003eREGISTRY_PATH\u003c/var\u003e`/hello-app:1.0`.\n\n| **Note:** You can also use `kubectl apply -f `\u003cvar translate=\"no\"\u003eDIRECTORY\u003c/var\u003e to create new objects defined by manifest files stored in a directory.\n\nFor example: \n\n apiVersion: apps/v1\n kind: Deployment\n metadata:\n name: my-app\n spec:\n replicas: 3\n selector:\n matchLabels:\n run: my-app\n template:\n metadata:\n labels:\n run: my-app\n spec:\n containers:\n - name: hello-app\n image: \u003cvar class=\"readonly\" translate=\"no\"\u003e\u003cspan class=\"devsite-syntax-l devsite-syntax-l-Scalar devsite-syntax-l-Scalar-Plain\"\u003eREGISTRY_PATH\u003c/span\u003e\u003c/var\u003e/hello-app:1.0\n resources:\n requests:\n nvidia.com/gpu-pod-NVIDIA_A100_80GB_PCIE: 1\n limits:\n nvidia.com/gpu-pod-NVIDIA_A100_80GB_PCIE: 1\n\nIf you're deploying GPU workloads to your containers, see\n[Manage GPU container workloads](/distributed-cloud/hosted/docs/latest/appliance/application/ao-user/containers/deploy-gpu-container-workloads)\nfor more information."]]
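Whichever way you apply the manifest, it's worth confirming that the rollout succeeded. A minimal sketch, reusing the placeholders above (the run=APP_NAME label selector matches the labels set in the manifest):

    # Wait for the deployment to finish rolling out, then list its pods.
    kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE rollout status deployment/DEPLOYMENT_NAME
    kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE get pods -l run=APP_NAME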