Tetap teratur dengan koleksi
Simpan dan kategorikan konten berdasarkan preferensi Anda.
Menskalakan workload stateless Anda sesuai dengan persyaratan workload container yang terus berkembang.
Sebelum memulai
Untuk menjalankan perintah terhadap
cluster Kubernetes,
pastikan Anda memiliki resource berikut:
Temukan nama cluster Kubernetes, atau tanyakan kepada Administrator Platform Anda nama cluster tersebut.
Login dan buat
file kubeconfig untuk cluster Kubernetes jika Anda belum memilikinya.
Gunakan jalur kubeconfig cluster Kubernetes untuk mengganti
KUBERNETES_CLUSTER_KUBECONFIG dalam petunjuk ini.
Untuk mendapatkan izin yang diperlukan guna menskalakan workload stateless, minta Admin IAM Organisasi Anda untuk memberi Anda peran Admin Namespace (namespace-admin) di namespace project Anda.
Menskalakan deployment
Manfaatkan fungsi penskalaan Kubernetes untuk menskalakan jumlah pod yang berjalan dalam deployment Anda dengan tepat.
Menskalakan pod deployment secara otomatis
Kubernetes menawarkan penskalaan otomatis untuk menghilangkan kebutuhan memperbarui deployment secara manual saat permintaan berkembang. Tetapkan penskalaan otomatis pod horizontal di deployment Anda untuk mengaktifkan fitur ini:
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-09-04 UTC."],[[["\u003cp\u003eThis document provides instructions on how to scale stateless workloads in a Kubernetes cluster to meet changing demands.\u003c/p\u003e\n"],["\u003cp\u003eYou can autoscale pods within a deployment by configuring a horizontal pod autoscaler that adjusts the number of pods based on the average CPU utilization, between a defined minimum and maximum.\u003c/p\u003e\n"],["\u003cp\u003eManual scaling of pods within a deployment can also be achieved by specifying the desired number of replicas.\u003c/p\u003e\n"],["\u003cp\u003eBefore scaling, you will need to have access to the Kubernetes cluster name and kubeconfig file, and be granted the Namespace Admin role in your project namespace.\u003c/p\u003e\n"]]],[],null,["# Scale stateless workloads\n\nScale your stateless workloads to your evolving container workload requirements.\n\nBefore you begin\n----------------\n\nTo run commands against a\n[Kubernetes cluster](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/clusters#cluster-architecture),\nmake sure you have the following resources:\n\n1. Locate the Kubernetes cluster name, or ask your Platform Administrator what\n the cluster name is.\n\n2. [Sign in and generate](/distributed-cloud/hosted/docs/latest/gdch/application/ao-user/iam/sign-in#zonal-cluster-kubeconfig)\n the kubeconfig file for the Kubernetes cluster if you don't have one.\n\n3. Use the kubeconfig path of the Kubernetes cluster to replace\n \u003cvar translate=\"no\"\u003eKUBERNETES_CLUSTER_KUBECONFIG\u003c/var\u003e in these instructions.\n\nTo get the required permissions to scale stateless workloads, ask your\nOrganization IAM Admin to grant you the Namespace Admin role (`namespace-admin`)\nin your project namespace.\n\nScale a deployment\n------------------\n\nLeverage the scaling functionality of Kubernetes to appropriately scale the\namount of pods running in your deployment.\n\n### Autoscale the pods of a deployment\n\nKubernetes offers autoscaling to remove the need of manually updating your\ndeployment when demand evolves. Set the horizontal pod autoscaler in your\ndeployment to enable this feature: \n\n kubectl --kubeconfig \u003cvar translate=\"no\"\u003eKUBERNETES_CLUSTER_KUBECONFIG\u003c/var\u003e \\\n -n \u003cvar translate=\"no\"\u003eNAMESPACE\u003c/var\u003e \\\n autoscale deployment \u003cvar translate=\"no\"\u003eDEPLOYMENT_NAME\u003c/var\u003e \\\n --cpu-percent=\u003cvar translate=\"no\"\u003eCPU_PERCENT\u003c/var\u003e \\\n --min=\u003cvar translate=\"no\"\u003eMIN_NUMBER_REPLICAS\u003c/var\u003e \\\n --max=\u003cvar translate=\"no\"\u003eMAX_NUMBER_REPLICAS\u003c/var\u003e\n\nReplace the following:\n\n- \u003cvar translate=\"no\"\u003eKUBERNETES_CLUSTER_KUBECONFIG\u003c/var\u003e: the kubeconfig file for the\n cluster.\n\n- \u003cvar translate=\"no\"\u003eNAMESPACE\u003c/var\u003e: the project namespace.\n\n- \u003cvar translate=\"no\"\u003eDEPLOYMENT_NAME\u003c/var\u003e: the name of the deployment in which\n to autoscale.\n\n- \u003cvar translate=\"no\"\u003eCPU_PERCENT\u003c/var\u003e: the target average CPU utilization to\n request, represented as a percentage, over all the pods.\n\n- \u003cvar translate=\"no\"\u003eMIN_NUMBER_REPLICAS\u003c/var\u003e: the lower limit for the number\n of pods the autoscaler can provision.\n\n- \u003cvar translate=\"no\"\u003eMAX_NUMBER_REPLICAS\u003c/var\u003e: the upper limit for the number\n of pods the autoscaler can provision.\n\nTo check the current status of the newly-made horizontal pod autoscaler, run: \n\n kubectl get hpa\n\nThe output is similar to the following: \n\n NAME REFERENCE TARGET MINPODS MAXPODS REPLICAS AGE\n \u003cvar class=\"readonly\" translate=\"no\"\u003eDEPLOYMENT_NAME\u003c/var\u003e Deployment/\u003cvar class=\"readonly\" translate=\"no\"\u003eDEPLOYMENT_NAME\u003c/var\u003e/scale 0% / 50% 1 10 1 18s\n\n### Manually scale the pods of a deployment\n\nIf you prefer to manually scale a deployment, run: \n\n kubectl --kubeconfig \u003cvar translate=\"no\"\u003eKUBERNETES_CLUSTER_KUBECONFIG\u003c/var\u003e \\\n -n \u003cvar translate=\"no\"\u003eNAMESPACE\u003c/var\u003e \\\n scale deployment \u003cvar translate=\"no\"\u003eDEPLOYMENT_NAME\u003c/var\u003e \\\n --replicas \u003cvar translate=\"no\"\u003eNUMBER_OF_REPLICAS\u003c/var\u003e\n\nReplace the following:\n\n- \u003cvar translate=\"no\"\u003eKUBERNETES_CLUSTER_KUBECONFIG\u003c/var\u003e: the kubeconfig file for the\n cluster.\n\n- \u003cvar translate=\"no\"\u003eNAMESPACE\u003c/var\u003e: the project namespace.\n\n- \u003cvar translate=\"no\"\u003eDEPLOYMENT_NAME\u003c/var\u003e: the name of the deployment in which\n to autoscale.\n\n- \u003cvar translate=\"no\"\u003eDEPLOYMENT_NAME\u003c/var\u003e: the desired number of replicated\n `Pod` objects in the deployment."]]