# Create stateless workloads

This page explains how to create and manage stateless workloads within a Google Distributed Cloud (GDC) air-gapped appliance Kubernetes cluster. Stateless workloads let you scale your application deployment based on workload demands, all without having to manage persistent storage in a Kubernetes cluster to store data or application state. This page helps you get started so you can efficiently optimize and adjust your application's availability.

This page is for developers within the application operator group, who are responsible for creating application workloads for their organization.
Before you begin
----------------

To run commands against the pre-configured bare metal Kubernetes cluster, make sure you have the following resources:

1. Locate the Kubernetes cluster name, or ask your Platform Administrator what the cluster name is.

2. [Sign in and generate](/distributed-cloud/hosted/docs/latest/appliance/application/ao-user/iam/sign-in#kubernetes-cluster-kubeconfig) the kubeconfig file for the Kubernetes cluster if you don't have one.

3. Use the kubeconfig path of the Kubernetes cluster to replace CLUSTER_KUBECONFIG in these instructions.

To get the required permissions to create stateless workloads, ask your Organization IAM Admin to grant you the Namespace Admin role (`namespace-admin`) in your project namespace.
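Before creating any workloads, it can help to confirm that the kubeconfig reaches the cluster and that the role grant took effect. A minimal sketch, using the same placeholder values as the rest of this page (replace them with your actual kubeconfig path and namespace):

```shell
# Confirm the kubeconfig file can reach the Kubernetes cluster.
kubectl --kubeconfig CLUSTER_KUBECONFIG get namespaces

# Confirm the Namespace Admin role lets you create deployments in your
# project namespace; this prints "yes" if the permission is granted.
kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE \
    auth can-i create deployments
```

If `auth can-i` prints `no`, re-check the role binding with your Organization IAM Admin before continuing.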
Create a deployment
-------------------

You create a deployment by writing a `Deployment` manifest and running `kubectl apply` to create the resource. This method also retains updates made to live resources without merging the changes back into the manifest files.

To create a `Deployment` from its manifest file, run:
```shell
kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE \
    apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: DEPLOYMENT_NAME
spec:
  replicas: NUMBER_OF_REPLICAS
  selector:
    matchLabels:
      run: APP_NAME
  template:
    metadata:
      labels: # The labels given to each pod in the deployment, which are used
              # to manage all pods in the deployment.
        run: APP_NAME
    spec: # The pod specification, which defines how each pod runs in the deployment.
      containers:
      - name: CONTAINER_NAME
        image: CONTAINER_IMAGE
        resources:
          requests:
            nvidia.com/gpu-pod-NVIDIA_A100_80GB_PCIE: 1
          limits:
            nvidia.com/gpu-pod-NVIDIA_A100_80GB_PCIE: 1
EOF
```
Replace the following:

- `CLUSTER_KUBECONFIG`: the kubeconfig file for the Kubernetes cluster to which you're deploying container workloads.

- `NAMESPACE`: the project namespace in which to deploy the container workloads.

- `DEPLOYMENT_NAME`: the name of the `Deployment` object.

- `APP_NAME`: the name of the application to run within the deployment.

- `NUMBER_OF_REPLICAS`: the number of replicated `Pod` objects that the deployment manages.

- `CONTAINER_NAME`: the name of the container.

- `CONTAINER_IMAGE`: the name of the container image. You must include the container registry path and version of the image, such as `REGISTRY_PATH/hello-app:1.0`.
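After the apply succeeds, you can watch the rollout and inspect the pods the deployment manages. A short sketch using the same placeholders as the command above:

```shell
# Wait until all replicas of the deployment are updated and available.
kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE \
    rollout status deployment/DEPLOYMENT_NAME

# List the pods created for the deployment, selected by the `run` label
# defined in the pod template.
kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE \
    get pods -l run=APP_NAME
```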
> **Note:** You can also use `kubectl apply -f DIRECTORY` to create new objects defined by manifest files stored in a directory.

For example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      run: my-app
  template:
    metadata:
      labels:
        run: my-app
    spec:
      containers:
      - name: hello-app
        image: REGISTRY_PATH/hello-app:1.0
        resources:
          requests:
            nvidia.com/gpu-pod-NVIDIA_A100_80GB_PCIE: 1
          limits:
            nvidia.com/gpu-pod-NVIDIA_A100_80GB_PCIE: 1
```

If you're deploying GPU workloads to your containers, see [Manage GPU container workloads](/distributed-cloud/hosted/docs/latest/appliance/application/ao-user/containers/deploy-gpu-container-workloads) for more information.

Last updated 2025-09-04 UTC.
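Because the workload is stateless, scaling it is just a matter of changing the replica count; no persistent storage needs to follow the pods. A hedged sketch of two common approaches, reusing the placeholder names from this page (the `deployment.yaml` filename is hypothetical):

```shell
# Imperatively change the replica count of an existing deployment.
kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE \
    scale deployment/DEPLOYMENT_NAME --replicas=5

# Or declaratively: edit the `replicas` field in your manifest file and
# re-apply it, which keeps the manifest as the source of truth.
kubectl --kubeconfig CLUSTER_KUBECONFIG -n NAMESPACE \
    apply -f deployment.yaml
```

Prefer the declarative form if you manage manifests in version control, since an imperative `kubectl scale` is overwritten the next time the manifest is applied.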