Last updated: 2025-09-04 (UTC).

This document provides an overview of workload management in
Google Distributed Cloud (GDC) air-gapped. The following topics are covered:

- [Where to deploy workloads](#where-to-deploy-workloads)
- [Kubernetes cluster best practices](#best-practices-clusters)

Although some of the workload deployment designs in this document are
recommended, you aren't required to follow them exactly as prescribed.
Each GDC universe has unique requirements and considerations that must be
satisfied on a case-by-case basis.

This document is for IT administrators within the platform administrator group
who are responsible for managing resources within their organization, and for
application developers within the application operator group who are
responsible for developing and maintaining applications in a GDC universe.

For more information, see
[Audiences for GDC air-gapped documentation](/distributed-cloud/hosted/docs/latest/gdch/resources/audiences).

## Where to deploy workloads

On the GDC platform, the operations to deploy virtual machine (VM) workloads
and container workloads differ. The following diagram illustrates workload
separation within the data plane layer of your organization.

[VM-based workloads](#vm-workloads) operate within a VM, whereas
[container workloads](#container-workloads) operate within a Kubernetes
cluster. This fundamental separation between VMs and Kubernetes clusters
provides isolation boundaries between your VM workloads and container
workloads. For more information, see
[Resource hierarchy](/distributed-cloud/hosted/docs/latest/gdch/resources/resource-hierarchy).

The following sections introduce the differences between each workload type
and their deployment lifecycles.

### VM-based workloads

You can create VMs to host your VM-based workloads. You have many
configuration options for your VM's shape and size to best meet your VM-based
workload requirements. You must create a VM in a project, and a project can
contain many VM workloads. VMs are a child resource of a project.
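As an illustration of this parent-child hierarchy, a VM is declared inside the
project that owns it. The following manifest is a hypothetical sketch only:
the `apiVersion`, `kind`, and field names are illustrative assumptions, not
the actual GDC API, and the resource and project names are placeholders.

```yaml
# Hypothetical sketch: apiVersion, kind, and fields are illustrative
# assumptions, not the documented GDC API. The point shown is that the
# VM lives in a project namespace, making it a child resource of that
# project.
apiVersion: virtualmachine.gdc.goog/v1   # assumed API group
kind: VirtualMachine
metadata:
  name: web-server-vm          # hypothetical VM name
  namespace: my-project        # the project that owns this VM
spec:
  machineType: n2-standard-4   # illustrative shape/size option
```

Because the VM is namespaced to `my-project`, deleting the project also
removes the VMs it contains, consistent with the child-resource relationship
described above.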
For more information, see the
[VMs overview](/distributed-cloud/hosted/docs/latest/gdch/application/ao-user/vms/vm-introduction).

Projects that contain only VM-based workloads don't require a Kubernetes
cluster, so you don't need to provision Kubernetes clusters for them.

### Container-based workloads

You can deploy container-based workloads to a pod on a Kubernetes cluster. A
Kubernetes cluster consists of the following node types:

- **Control plane node**: runs the management services, such as scheduling,
  etcd, and an API server.

- **Worker node**: runs your pods and container applications.

Kubernetes clusters can be attached to one or many projects, but they are not
a child resource of a project. This is a fundamental difference between
Kubernetes clusters and VMs: a VM is a child resource of a project, whereas a
Kubernetes cluster is a child resource of an organization, which allows it to
attach to multiple projects.

For pod scheduling within a Kubernetes cluster, GDC adopts the general
Kubernetes concepts of scheduling, preemption, and eviction. Best practices
for scheduling pods within a cluster vary based on the requirements of your
workload.

For more information on Kubernetes clusters, see the
[Kubernetes cluster overview](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/clusters).
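As a concrete example of a container-based workload, the following is a
standard Kubernetes `Deployment` manifest; the control plane schedules its
pods onto worker nodes. The name, namespace, and image are placeholders, and
this is generic Kubernetes, not a GDC-specific API.

```yaml
# Standard Kubernetes Deployment; name, namespace, and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
  namespace: my-project        # hypothetical project namespace
spec:
  replicas: 3                  # the scheduler places these pods on worker nodes
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: registry.example.com/hello-app:1.0   # placeholder image
        resources:
          requests:            # requests inform scheduling decisions
            cpu: "250m"
            memory: "256Mi"
```

Setting resource requests, as shown, gives the Kubernetes scheduler the
information it needs for the scheduling, preemption, and eviction behavior
described above.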
For more information on managing your containers in a Kubernetes cluster, see
[Container workloads in GDC](/distributed-cloud/hosted/docs/latest/gdch/application/ao-user/containers/containers-intro).

## Best practices for designing Kubernetes clusters

This section introduces best practices for designing Kubernetes clusters:

- [Create separate clusters per software development environment](#separate-clusters-per-deployment)
- [Create fewer, larger clusters](#create-fewer-larger-clusters)
- [Create fewer, larger node pools within a cluster](#create-fewer-larger-node-pools)

Consider each best practice to build a resilient cluster design for your
container workload lifecycle.

### Create separate clusters per software development environment

In addition to
[separate projects per software development environment](/distributed-cloud/hosted/docs/latest/gdch/resources/access-boundaries#design-projects-for-isolation),
we recommend that you design separate Kubernetes clusters per software
development environment. A *software development environment* is an area
within your GDC universe intended for all operations that correspond to a
designated lifecycle phase. For example, if you have two software development
environments named `development` and `production` in your organization, you
could create a separate set of Kubernetes clusters for each environment and
attach projects to each cluster based on your needs. We recommend that
Kubernetes clusters in pre-production and production lifecycles have multiple
projects attached to them.

Defining clusters for each software development environment assumes that
workloads within a software development environment can share clusters. You
then assign projects to the Kubernetes cluster of the appropriate environment.
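Within a shared cluster, standard Kubernetes taints and tolerations can keep
workload classes apart. The following is a generic sketch in which the node,
taint key, and pod names are illustrative, not GDC-specific values.

```yaml
# Generic Kubernetes taint/toleration sketch; all names are illustrative.
# First, nodes reserved for a workload class are tainted, for example:
#   kubectl taint nodes node-1 dedicated=batch:NoSchedule
# Then pods that must run on those nodes declare a matching toleration:
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker           # hypothetical pod name
  namespace: my-project        # hypothetical project namespace
spec:
  tolerations:
  - key: dedicated             # matches the taint key applied above
    operator: Equal
    value: batch
    effect: NoSchedule
  containers:
  - name: worker
    image: registry.example.com/batch-worker:1.0   # placeholder image
```

Pods without the toleration are never scheduled onto the tainted nodes, so
the reserved capacity stays dedicated to the intended workload class.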
A Kubernetes cluster might be further subdivided into multiple node pools, or
it might
[use taints for workload isolation](/distributed-cloud/hosted/docs/latest/gdch/platform/pa-user/isolate-container-workloads).

By separating Kubernetes clusters by software development environment, you
isolate resource consumption, access policies, maintenance events, and
cluster-level configuration changes between your production and
non-production workloads.

The following diagram shows a sample Kubernetes cluster design for multiple
workloads that span projects, clusters, software development environments,
and machine classes.

This sample architecture assumes that workloads within a production and a
development software development environment can share clusters. Each
environment has a separate set of Kubernetes clusters, which are further
subdivided into multiple node pools for different machine class requirements.

Alternatively, designing multiple Kubernetes clusters per environment is
useful in scenarios like the following:

- You have some workloads pinned to a specific Kubernetes version, so you
  maintain different clusters at different versions.
- You have some workloads with different cluster configuration needs, such as
  the [backup policy](/distributed-cloud/hosted/docs/latest/gdch/platform-application/pa-ao-operations/cluster-backup/install-backup-restore),
  so you create multiple clusters with different configurations.
- You run copies of a cluster in parallel to facilitate disruptive version
  upgrades or a blue-green deployment strategy.
- You build an experimental workload that risks throttling the API server or
  creating other single points of failure within a cluster, so you isolate it
  from existing workloads.

The following diagram shows an example where multiple clusters are configured
per software development environment due to requirements such as the
container operations described in the preceding list.

### Create fewer, larger clusters

For efficient resource utilization, we recommend designing the fewest number
of Kubernetes clusters that meet your requirements for separating software
development environments and container operations. Each additional cluster
incurs overhead resource consumption, such as the additional control plane
nodes it requires. Therefore, a larger cluster with many workloads uses the
underlying compute resources more efficiently than many small clusters.

Multiple clusters with similar configurations also create maintenance
overhead for monitoring cluster capacity and planning cross-cluster
dependencies.

If a cluster is approaching capacity, we recommend that you add nodes to the
cluster instead of creating a new cluster.

### Create fewer, larger node pools within a cluster

For efficient resource utilization, we recommend designing fewer, larger node
pools within a Kubernetes cluster.

Configuring multiple node pools is useful when you need to schedule pods that
require a different machine class than others. Create a node pool for each
machine class that your workloads require, and set the node capacity to
autoscale to allow for efficient usage of compute resources.

## What's next

- [Resource hierarchy](/distributed-cloud/hosted/docs/latest/gdch/resources/resource-hierarchy)
- [Create a highly available container app](/distributed-cloud/hosted/docs/latest/gdch/platform-application/pa-ao-operations/ha-apps/deploy-ha-container-app)
- [Create a highly available VM app](/distributed-cloud/hosted/docs/latest/gdch/platform-application/pa-ao-operations/ha-apps/deploy-ha-vm-app)