[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[[["\u003cp\u003eTo deploy container workloads, you'll need to create a user cluster, which requires the User Cluster Admin role.\u003c/p\u003e\n"],["\u003cp\u003eUser clusters can be created through the GDC console by selecting a name, version, and configuring network settings like Service CIDR and Pod CIDR.\u003c/p\u003e\n"],["\u003cp\u003eNode pools within a user cluster can be customized by specifying the number of worker nodes, and their machine class.\u003c/p\u003e\n"],["\u003cp\u003eCreating a user cluster via API involves defining a \u003ccode\u003eCluster\u003c/code\u003e custom resource in a YAML file, including details like network ranges, Kubernetes version, and node pool specifications.\u003c/p\u003e\n"],["\u003cp\u003eGDC supports NVIDIA GPUs, specifically the A100 PCIe 80 GB, which requires provisioning the \u003ccode\u003ea2-ultragpu-1g-gdc\u003c/code\u003e machine type within the user cluster.\u003c/p\u003e\n"]]],[],null,["# Create a cluster to run container workloads\n\nCreate a user cluster to allow for container workload deployment.\n\nBefore you begin\n----------------\n\nTo get the permissions needed to create a user cluster, ask your Organization\nIAM Admin to grant you the User Cluster Admin role (`user-cluster-admin` role).\n\nCreate a user cluster\n---------------------\n\nTo get the permissions needed to create a user cluster, ask your\nIAM Admin to grant you the User Cluster Admin role (`user-cluster-admin` role).\n\nComplete the following steps to create a user cluster: \n\n### Console\n\n1. In the navigation menu, select **Clusters**.\n\n2. Click **Create cluster**.\n\n3. In the **Name** field, specify a name for the user cluster.\n\n | **Important:** The cluster name must not end with `-system`. The `-system` suffix is reserved for clusters created by GDC.\n4. Select the GDC cluster version. Each version maps to\n a distinct Kubernetes version.\n\n5. Click **Next**.\n\n6. Configure the network settings for your cluster. You can't change these\n network settings after you create the cluster. The default and only\n supported Internet Protocol for user clusters is Internet Protocol version 4\n (IPv4).\n\n 1. If you want to create dedicated *load balancer nodes*, enter the number\n of nodes to create. By default, you receive zero nodes, and load\n balancer traffic runs through the control nodes.\n\n 2. Select the **Service CIDR** (Classless Inter-Domain Routing) to use. Your\n deployed services, such as load balancers, are allocated IP addresses\n from this range.\n\n | **Important:** The range can be any RFC 1918 range that doesn't conflict with other IP address ranges in the cluster and node pool resources. See [RFC 1918](https://tools.ietf.org/html/rfc1918) for more information, but note that you must connect to the internet to access the URL. This URL provides access outside of your air-gapped environment.\n 3. Select the **Pod CIDR** to use. The cluster allocates IP addresses from\n this range to your pods and VMs.\n\n 4. Click **Next**.\n\n7. Review the details of the auto-generated default node pool for the user\n cluster. 
   Click **Edit** to modify the default node pool.

   | **Important:** If you intend the user cluster to run graphics processing unit (GPU) workloads, you must provision a GPU machine type in your user cluster. For more information, see [Support GPU resources in a user cluster](#gpu).

8. To create additional node pools, select **Add node pool**. When editing the default node pool or adding a new node pool, you customize it with the following options:

   1. Assign a name for the node pool. You can't modify the name after you create the node pool.
   2. Specify the number of worker nodes to create in the node pool.
   3. Select the machine class that best suits your workload requirements. Each machine class lists the following settings:

      - Machine type
      - CPU
      - Memory

   4. Click **Save**.

9. Click **Create** to create the user cluster.

### API

To create a new user cluster using the API directly, apply a custom resource to your GDC instance:

1. Create a `Cluster` custom resource and save it as a YAML file, such as `cluster.yaml`:

       apiVersion: cluster.gdc.goog/v1
       kind: Cluster
       metadata:
         name: CLUSTER_NAME
         namespace: platform
       spec:
         clusterNetwork:
           podCIDRSize: POD_CIDR
           serviceCIDRSize: SERVICE_CIDR
         initialVersion:
           kubernetesVersion: KUBERNETES_VERSION
         loadBalancer:
           ingressServiceIPSize: LOAD_BALANCER_POOL_SIZE
         nodePools:
         - machineTypeName: MACHINE_TYPE
           name: NODE_POOL_NAME
           nodeCount: NUMBER_OF_WORKER_NODES
           taints: TAINTS
           labels: LABELS
         releaseChannel:
           channel: UNSPECIFIED

   Replace the following:

   - `CLUSTER_NAME`: The name of the cluster. The cluster name must not end with `-system`. The `-system` suffix is reserved for clusters created by GDC.
   - `POD_CIDR`: The size of the network range from which pod virtual IP addresses are allocated. If unset, a default value of `21` is used.
   - `SERVICE_CIDR`: The size of the network range from which service virtual IP addresses are allocated. If unset, a default value of `23` is used.
   - `KUBERNETES_VERSION`: The Kubernetes version of the cluster, such as `1.26.5-gke.2100`. To list the available Kubernetes versions to configure, see [List available Kubernetes versions for a cluster](#available-kubernetes-versions).
   - `LOAD_BALANCER_POOL_SIZE`: The size of non-overlapping IP address pools used by load balancer services. If unset, a default value of `20` is used.
   - `MACHINE_TYPE`: The machine type for the worker nodes of the node pool. View the [available machine types](/distributed-cloud/hosted/docs/latest/appliance/platform/pa-user/manage-node-pools#available-machine-types) for what is available to configure.
   - `NODE_POOL_NAME`: The name of the node pool.
   - `NUMBER_OF_WORKER_NODES`: The number of worker nodes to provision in the node pool.
   - `TAINTS`: The taints to apply to the nodes of this node pool. This is an optional field.
   - `LABELS`: The labels to apply to the nodes of this node pool. It contains a list of key-value pairs. This is an optional field.

2. Apply the custom resource to your GDC instance:

       kubectl apply -f cluster.yaml --kubeconfig ADMIN_CLUSTER_KUBECONFIG

   Replace `ADMIN_CLUSTER_KUBECONFIG` with the org admin cluster's kubeconfig file path.

   For a filled-in example of the `Cluster` resource, see the sketch that follows these steps.
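The following is a minimal sketch of a completed `cluster.yaml`. The cluster name, node pool name, and node count are example values, and the CIDR and load balancer sizes use the documented defaults. The machine type shown is the GPU machine type named on this page; substitute one of the available machine types if the node pool doesn't need GPUs.

    apiVersion: cluster.gdc.goog/v1
    kind: Cluster
    metadata:
      name: example-user-cluster   # example name; must not end with -system
      namespace: platform
    spec:
      clusterNetwork:
        podCIDRSize: 21            # documented default
        serviceCIDRSize: 23        # documented default
      initialVersion:
        kubernetesVersion: 1.26.5-gke.2100
      loadBalancer:
        ingressServiceIPSize: 20   # documented default
      nodePools:
      - machineTypeName: a2-ultragpu-1g-gdc   # GPU machine type from this page
        name: example-node-pool    # example name
        nodeCount: 3               # example count
        # taints and labels are optional and omitted here
      releaseChannel:
        channel: UNSPECIFIED

Apply it with the same `kubectl apply` command shown in step 2.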
List available Kubernetes versions for a cluster
------------------------------------------------

You can list the available Kubernetes versions in your GDC instance using the `kubectl` CLI:

    kubectl get userclustermetadata.upgrade.private.gdc.goog \
        -o=custom-columns=K8S-VERSION:.spec.kubernetesVersion \
        --kubeconfig ADMIN_CLUSTER_KUBECONFIG

Replace `ADMIN_CLUSTER_KUBECONFIG` with the admin cluster's kubeconfig file path.

The output looks similar to the following:

    K8S-VERSION
    1.25.10-gke.2100
    1.26.5-gke.2100
    1.27.4-gke.500

Support GPU resources in a user cluster
---------------------------------------

GDC provides NVIDIA graphics processing unit (GPU) support for user clusters, letting you run GPU workloads as user workloads. GPU support is enabled by default for clusters that have GPU machines provisioned for them. Ensure that your user cluster supports GPU devices before using Deep Learning Containers. For example, if you intend to run Deep Learning Containers, create a user cluster with at least one GPU node.

You can create user clusters using the GDC console or the API directly. Ensure that you provision GPU machines for your user cluster to support GPU workloads in its associated containers. For more information, see [Create a user cluster](#create).

Supported NVIDIA GPU cards
--------------------------

GDC clusters support the A100 PCIe 80 GB NVIDIA GPU. To enable this support, provision the `a2-ultragpu-1g-gdc` machine type in a user cluster.
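As a sketch of what that looks like in the `Cluster` resource from the API section, a node pool that provisions A100 GPU nodes only needs the GPU machine type; the pool name and node count below are example values.

    nodePools:
    - machineTypeName: a2-ultragpu-1g-gdc   # A100 PCIe 80 GB machine type
      name: gpu-node-pool                   # example name
      nodeCount: 1                           # example count

In the console, the equivalent is selecting this machine type when you add or edit a node pool.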
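After GPU nodes are provisioned, containers request GPUs through Kubernetes extended resources. The manifest below is a minimal sketch that assumes the cluster advertises GPUs under the standard NVIDIA device plugin resource name `nvidia.com/gpu` and uses a hypothetical image name; verify the actual GPU resource name exposed by your nodes (for example, with `kubectl describe node`) before relying on it.

    apiVersion: v1
    kind: Pod
    metadata:
      name: gpu-workload-example   # example name
    spec:
      containers:
      - name: cuda-container
        image: my-registry/cuda-sample:latest   # hypothetical image
        resources:
          limits:
            nvidia.com/gpu: 1   # assumed resource name; confirm for your environment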