[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-04。"],[[["\u003cp\u003eGoogle Distributed Cloud (GDC) air-gapped supports both virtual machine (VM) and container-based workloads, with distinct deployment processes for each.\u003c/p\u003e\n"],["\u003cp\u003eVM-based workloads are deployed directly within projects and do not require Kubernetes clusters, while container-based workloads are deployed to pods on Kubernetes clusters.\u003c/p\u003e\n"],["\u003cp\u003eBest practices recommend creating separate Kubernetes clusters for each deployment environment (e.g., production, non-production) to isolate resources, access policies, and configurations.\u003c/p\u003e\n"],["\u003cp\u003eDesigning fewer, larger Kubernetes clusters and node pools within those clusters is generally more efficient in resource utilization than creating numerous smaller ones.\u003c/p\u003e\n"],["\u003cp\u003eMultiple clusters can be created for specific operational requirements, such as running workloads on different Kubernetes versions or with different cluster configurations.\u003c/p\u003e\n"]]],[],null,["# Design workload separation\n\nThis document provides an overview for workload management in\nGoogle Distributed Cloud (GDC) air-gapped. The following topics are covered:\n\n- [Where to deploy workloads](#where-to-deploy-workloads)\n- [Kubernetes cluster best practices](#best-practices-clusters)\n\nAlthough some of the workload deployment designs are recommended, it's not\nrequired to follow them exactly as prescribed. Each GDC\nuniverse has unique requirements and considerations that must be satisfied on a\ncase-by-case basis.\n\nWhere to deploy workloads\n-------------------------\n\nOn the GDC platform, operations to deploy virtual\nmachine (VM) workloads and container workloads are different. This section\nintroduces the differences and where you deploy each resource.\n\n### VM-based workloads\n\nYou can create VMs to host your VM-based workloads. You have many configuration\noptions for your VM's shape and size to help best meet your VM-based workload\nrequirements. You must create a VM in a project, which can have many VMs and VM\nworkloads. For more information, see the\n[VMs overview](/distributed-cloud/hosted/docs/latest/gdch/application/ao-user/vms/vm-introduction).\n\nProjects containing only VM-based workloads don't require a Kubernetes cluster.\nTherefore, you don't need to provision Kubernetes clusters for VM-based\nworkloads.\n\n### Container-based workloads\n\nYou can deploy container-based workloads to a pod on a Kubernetes cluster.\nKubernetes clusters can be attached to one or many projects, but they are not a\nchild resource of a project. We recommend only attaching clusters to projects in\nthe appropriate deployment environment. 
For pod scheduling within a Kubernetes cluster, GDC adopts the general Kubernetes concepts of scheduling, preemption, and eviction. Best practices for scheduling pods within a cluster vary based on the requirements of your workloads.
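Because GDC follows the upstream Kubernetes scheduling model, standard primitives such as priority classes and preemption apply. The following sketch shows a `PriorityClass` and a pod that references it, so that higher-priority pods can preempt lower-priority pods when the cluster runs short of capacity. The names, namespace, and priority value are illustrative assumptions, not required settings.

```yaml
# A hypothetical priority class and a pod that references it.
# Higher-priority pods can preempt lower-priority pods when nodes are full,
# following standard Kubernetes scheduling and preemption behavior.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: production-critical      # illustrative name
value: 100000                    # illustrative priority value
globalDefault: false
description: "Priority for business-critical production workloads."
---
apiVersion: v1
kind: Pod
metadata:
  name: checkout-api
  namespace: my-project          # hypothetical project namespace
spec:
  priorityClassName: production-critical
  containers:
  - name: checkout-api
    image: registry.example.com/checkout-api:2.3.1   # hypothetical image
    resources:
      requests:                  # requests inform scheduling decisions
        cpu: "500m"
        memory: 512Mi
```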
For more information on Kubernetes clusters, see the [Kubernetes cluster overview](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/clusters). For details on managing your containers in a Kubernetes cluster, see the [Container workloads overview](/distributed-cloud/hosted/docs/latest/gdch/application/ao-user/containers/containers-intro).

Best practices for designing Kubernetes clusters
------------------------------------------------

This section introduces best practices for designing Kubernetes clusters:

- [Create separate clusters per deployment environment](#separate-clusters-per-deployment)
- [Create fewer, larger clusters](#create-fewer-larger-clusters)
- [Create fewer, larger node pools within a cluster](#create-fewer-larger-node-pools)

### Create separate clusters per deployment environment

In addition to [separate projects per deployment environment](/distributed-cloud/hosted/docs/latest/gdch/resources/access-boundaries#design-projects-for-isolation), we recommend that you design separate Kubernetes clusters per deployment environment. By separating both the Kubernetes cluster and the project per environment, you isolate resource consumption, access policies, maintenance events, and cluster-level configuration changes between your production and non-production workloads.

The following diagram shows a sample Kubernetes cluster design for multiple workloads that span projects, clusters, deployment environments, and machine classes.

This sample architecture assumes that workloads within a deployment environment are allowed to share clusters. Each deployment environment has a separate set of Kubernetes clusters. You then assign projects to the Kubernetes cluster of the appropriate deployment environment. A Kubernetes cluster might be further subdivided into multiple node pools for different machine class requirements.

Alternatively, designing multiple Kubernetes clusters is useful for container operations such as the following scenarios:

- You have some workloads pinned to a specific Kubernetes version, so you maintain different clusters at different versions.
- You have some workloads that require different cluster configurations, such as a different [backup policy](/distributed-cloud/hosted/docs/latest/gdch/platform-application/pa-ao-operations/cluster-backup/install-backup-restore), so you create multiple clusters with different configurations.
- You run copies of a cluster in parallel to facilitate disruptive version upgrades or a blue-green deployment strategy.
- You build an experimental workload that risks throttling the API server or other single points of failure within a cluster, so you isolate it from existing workloads.

The following diagram shows an example where multiple clusters are configured per deployment environment due to requirements such as the container operations described in the preceding list.

### Create fewer, larger clusters

For efficient resource utilization, we recommend designing the fewest number of Kubernetes clusters that meet your requirements for separating deployment environments and container operations. Each additional cluster incurs overhead, such as the compute resources consumed by the additional control plane nodes it requires. Therefore, a larger cluster with many workloads uses the underlying compute resources more efficiently than many small clusters.

Maintaining multiple clusters with similar configurations also adds operational overhead, because you must monitor the capacity of each cluster and plan for cross-cluster dependencies.

If a cluster is approaching capacity, we recommend that you add nodes to the existing cluster instead of creating a new cluster.

### Create fewer, larger node pools within a cluster

For efficient resource utilization, we recommend designing fewer, larger node pools within a Kubernetes cluster.

Configuring multiple node pools is useful when you need to schedule pods that require a different machine class than others. Create a node pool for each machine class that your workloads require, and set the node capacity to autoscale so that compute resources are used efficiently.
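For example, you might pin a workload to the node pool that provides a particular machine class by matching a node label, as in the following sketch. The label key and value, the namespace, and the image are hypothetical assumptions; use the labels that your node pools actually apply to their nodes. Setting accurate resource requests also helps node autoscaling size capacity appropriately.

```yaml
# Pin a workload to a node pool for a specific machine class by matching
# a node label. The label key/value, namespace, and image are hypothetical;
# replace them with the labels that your node pools actually carry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: analytics-worker
  namespace: my-project                    # hypothetical project namespace
spec:
  replicas: 3
  selector:
    matchLabels:
      app: analytics-worker
  template:
    metadata:
      labels:
        app: analytics-worker
    spec:
      nodeSelector:
        example.com/machine-class: high-memory   # hypothetical node pool label
      containers:
      - name: analytics-worker
        image: registry.example.com/analytics-worker:0.9.0   # hypothetical image
        resources:
          requests:                        # accurate requests help autoscaling
            cpu: "2"
            memory: 8Gi
```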