Scale stateless workloads
Scale your stateless workloads to meet your evolving container workload requirements.
Before you begin
To run commands against the pre-configured bare metal Kubernetes cluster, make sure you have the following resources:
1. Locate the Kubernetes cluster name, or ask your Platform Administrator what the cluster name is.
2. Sign in and generate the kubeconfig file for the Kubernetes cluster if you don't have one.
3. Use the kubeconfig path of the Kubernetes cluster to replace CLUSTER_KUBECONFIG in these instructions; a quick connectivity check is sketched after this list.
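Before scaling anything, you can confirm that the kubeconfig file actually grants access to the cluster by running a read-only command. This is a minimal sketch that reuses the CLUSTER_KUBECONFIG and NAMESPACE placeholders from the commands in this guide:

    # Verify that the kubeconfig file connects to the cluster.
    kubectl --kubeconfig CLUSTER_KUBECONFIG cluster-info

    # List the deployments in your project namespace to confirm namespace access.
    kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE get deployments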
To get the required permissions to scale stateless workloads, ask your Organization IAM Admin to grant you the Namespace Admin role (namespace-admin) in your project namespace.
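Once the role is granted, one way to check that your credentials allow scaling operations is the standard kubectl auth can-i query. This is only a sketch, using the same placeholder values as the commands that follow:

    # Check whether you can update deployments (needed for manual scaling).
    kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE \
        auth can-i update deployments

    # Check whether you can create HorizontalPodAutoscaler objects (needed for autoscaling).
    kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE \
        auth can-i create horizontalpodautoscalers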
Scale a deployment
Leverage the scaling functionality of Kubernetes to appropriately scale the number of pods running in your deployment.
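Before picking an autoscaling target or a manual replica count, it can help to look at how many replicas the deployment currently runs. A small sketch, using the same placeholders as the commands below:

    # Show the deployment's ready, up-to-date, and available replica counts.
    kubectl --kubeconfig CLUSTER_KUBECONFIG \
        -n NAMESPACE \
        get deployment DEPLOYMENT_NAME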
Autoscale the pods of a deployment
Kubernetes offers autoscaling to remove the need to manually update your deployment when demand evolves. Set the horizontal pod autoscaler in your deployment to enable this feature:
[[["Fácil de entender","easyToUnderstand","thumb-up"],["Meu problema foi resolvido","solvedMyProblem","thumb-up"],["Outro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Informações incorretas ou exemplo de código","incorrectInformationOrSampleCode","thumb-down"],["Não contém as informações/amostras de que eu preciso","missingTheInformationSamplesINeed","thumb-down"],["Problema na tradução","translationIssue","thumb-down"],["Outro","otherDown","thumb-down"]],["Última atualização 2025-09-04 UTC."],[[["\u003cp\u003eThis document provides guidance on scaling stateless workloads within a Kubernetes environment to meet evolving container requirements.\u003c/p\u003e\n"],["\u003cp\u003eTo interact with a user cluster, you must have its name and the corresponding kubeconfig file, and permissions from the Organization IAM Admin.\u003c/p\u003e\n"],["\u003cp\u003eKubernetes allows you to autoscale the pods of a deployment by setting a horizontal pod autoscaler, eliminating the need for manual intervention based on the defined CPU utilization percentage, minimum, and maximum replica values.\u003c/p\u003e\n"],["\u003cp\u003eManual scaling of a deployment's pods can be achieved using the \u003ccode\u003ekubectl scale\u003c/code\u003e command, where you specify the desired number of replicas.\u003c/p\u003e\n"],["\u003cp\u003eUsing the \u003ccode\u003ekubectl get hpa\u003c/code\u003e command, you can view the status of the newly created horizontal pod autoscaler, providing information like its target, minimum and maximum pod values, as well as current replicas and age.\u003c/p\u003e\n"]]],[],null,["# Scale stateless workloads\n\nScale your stateless workloads to your evolving container workload requirements.\n\nBefore you begin\n----------------\n\nTo run commands against the pre-configured bare metal Kubernetes cluster, make sure you have the\nfollowing resources:\n\n1. Locate the Kubernetes cluster name, or ask your Platform\n Administrator what the cluster name is.\n\n2. [Sign in and generate](/distributed-cloud/hosted/docs/latest/appliance/application/ao-user/iam/sign-in#kubernetes-cluster-kubeconfig) the\n kubeconfig file for the Kubernetes cluster if you don't have one.\n\n3. Use the kubeconfig path of the Kubernetes cluster to replace\n \u003cvar translate=\"no\"\u003eCLUSTER_KUBECONFIG\u003c/var\u003e in these instructions.\n\nTo get the required permissions to scale stateless workloads, ask your\nOrganization IAM Admin to grant you the Namespace Admin role (`namespace-admin`)\nin your project namespace.\n\nScale a deployment\n------------------\n\nLeverage the scaling functionality of Kubernetes to appropriately scale the\namount of pods running in your deployment.\n\n### Autoscale the pods of a deployment\n\nKubernetes offers autoscaling to remove the need of manually updating your\ndeployment when demand evolves. 
Set the horizontal pod autoscaler in your\ndeployment to enable this feature: \n\n kubectl --kubeconfig \u003cvar translate=\"no\"\u003eCLUSTER_KUBECONFIG\u003c/var\u003e \\\n -n \u003cvar translate=\"no\"\u003eNAMESPACE\u003c/var\u003e \\\n autoscale deployment \u003cvar translate=\"no\"\u003eDEPLOYMENT_NAME\u003c/var\u003e \\\n --cpu-percent=\u003cvar translate=\"no\"\u003eCPU_PERCENT\u003c/var\u003e \\\n --min=\u003cvar translate=\"no\"\u003eMIN_NUMBER_REPLICAS\u003c/var\u003e \\\n --max=\u003cvar translate=\"no\"\u003eMAX_NUMBER_REPLICAS\u003c/var\u003e\n\nReplace the following:\n\n- \u003cvar translate=\"no\"\u003eCLUSTER_KUBECONFIG\u003c/var\u003e: the kubeconfig file for the\n Kubernetes cluster.\n\n- \u003cvar translate=\"no\"\u003eNAMESPACE\u003c/var\u003e: the project namespace.\n\n- \u003cvar translate=\"no\"\u003eDEPLOYMENT_NAME\u003c/var\u003e: the name of the deployment in which\n to autoscale.\n\n- \u003cvar translate=\"no\"\u003eCPU_PERCENT\u003c/var\u003e: the target average CPU utilization to\n request, represented as a percentage, over all the pods.\n\n- \u003cvar translate=\"no\"\u003eMIN_NUMBER_REPLICAS\u003c/var\u003e: the lower limit for the number\n of pods the autoscaler can provision.\n\n- \u003cvar translate=\"no\"\u003eMAX_NUMBER_REPLICAS\u003c/var\u003e: the upper limit for the number\n of pods the autoscaler can provision.\n\nTo check the current status of the newly-made horizontal pod autoscaler, run: \n\n kubectl get hpa\n\nThe output is similar to the following: \n\n NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE\n \u003cvar class=\"readonly\" translate=\"no\"\u003eDEPLOYMENT_NAME\u003c/var\u003e Deployment/\u003cvar class=\"readonly\" translate=\"no\"\u003eDEPLOYMENT_NAME\u003c/var\u003e/scale 0% / 50% 1 10 1 18s\n\n### Manually scale the pods of a deployment\n\nIf you prefer to manually scale a deployment, run: \n\n kubectl --kubeconfig \u003cvar translate=\"no\"\u003eCLUSTER_KUBECONFIG\u003c/var\u003e \\\n -n \u003cvar translate=\"no\"\u003eNAMESPACE\u003c/var\u003e \\\n scale deployment \u003cvar translate=\"no\"\u003eDEPLOYMENT_NAME\u003c/var\u003e \\\n --replicas \u003cvar translate=\"no\"\u003eNUMBER_OF_REPLICAS\u003c/var\u003e\n\nReplace the following:\n\n- \u003cvar translate=\"no\"\u003eCLUSTER_KUBECONFIG\u003c/var\u003e: the kubeconfig file for the\n Kubernetes cluster.\n\n- \u003cvar translate=\"no\"\u003eNAMESPACE\u003c/var\u003e: the project namespace.\n\n- \u003cvar translate=\"no\"\u003eDEPLOYMENT_NAME\u003c/var\u003e: the name of the deployment in which\n to autoscale.\n\n- \u003cvar translate=\"no\"\u003eDEPLOYMENT_NAME\u003c/var\u003e: the chosen number of replicated\n `Pod` objects in the deployment."]]
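As an alternative to the imperative kubectl autoscale command shown earlier, the same horizontal pod autoscaler can be expressed declaratively and applied from a manifest. The following is a sketch using the standard Kubernetes autoscaling/v2 API; the values (a minimum of 1 replica, a maximum of 10, and a 50% CPU target) are illustrative only, not prescribed by this guide.

    # Apply a declarative HorizontalPodAutoscaler equivalent to the
    # kubectl autoscale command above (illustrative values only).
    kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE apply -f - <<EOF
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: DEPLOYMENT_NAME
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: DEPLOYMENT_NAME
      minReplicas: 1
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 50
    EOF

Keeping the autoscaler in a manifest makes the scaling policy reviewable and repeatable, while the imperative command is quicker for one-off changes.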