# Learning Path: Scale applications - Production considerations

[Autopilot](/kubernetes-engine/docs/concepts/autopilot-overview)

In this set of tutorials, some of the planning considerations are simplified so that you can focus on learning key Google Kubernetes Engine (GKE) features and services. Before you start to create your own Google Kubernetes Engine environment similar to the one described in this set of tutorials, there are some additional planning considerations to keep in mind. These considerations include the cluster management level, networking, and availability types.
Networking
----------
GKE clusters require careful IP address planning. The networking options that you choose affect the architecture of your GKE clusters. Some of these options can't be changed after they're configured without recreating the cluster.

In this set of tutorials, you use Autopilot mode clusters that always use VPC-native networking. VPC-native clusters use alias IP address ranges on GKE nodes and are required for creating clusters on Shared VPCs. VPC-native clusters scale more easily than routes-based clusters because they don't consume Google Cloud routes, so they are less susceptible to hitting routing limits.

Before you create your own GKE environment and deploy workloads, review the following networking guidance:

- [GKE network overview](/kubernetes-engine/docs/concepts/network-overview)
- [Best practices for GKE networking](/kubernetes-engine/docs/best-practices/networking)
- [Plan IP addresses for GKE](/kubernetes-engine/docs/concepts/gke-ip-address-mgmt-strategies)

Cluster modes
-------------

In this set of tutorials, you create a regional GKE cluster that uses Autopilot mode. Autopilot clusters come pre-configured with an optimized cluster configuration that is ready for production workloads. You can alternatively use Standard mode clusters for more advanced configuration flexibility over the underlying infrastructure.

For a more comprehensive overview, review the planning documents that start with the [cluster configuration choices](/kubernetes-engine/docs/concepts/types-of-clusters).
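If you want to experiment with these choices before settling on a design, the following sketch shows one way to create a regional Autopilot cluster, which uses VPC-native networking by default. The cluster name, region, and project ID are placeholder values, not settings from this tutorial series.

```shell
# Create a regional Autopilot cluster. Autopilot clusters are VPC-native,
# so plan the primary and secondary IP address ranges of the target subnet
# before you run this command.
# "scalable-apps", "us-central1", and "PROJECT_ID" are placeholders.
gcloud container clusters create-auto scalable-apps \
    --region=us-central1 \
    --project=PROJECT_ID
```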
Namespaces
----------

Namespaces let you organize your applications and isolate components from each other. Each namespace has its own set of resources, such as Pods, Services, and Deployments. For example, you can create a namespace for all your frontend services and a namespace for your backend services. This grouping makes it easier to manage your services and to control access to them.

In this set of tutorials, you deploy the Pods and Services for the Cymbal Bank sample application into a single namespace. This approach reduces deployment complexity, but doesn't let you use namespaces to assign resources to different teams and users, as you might do in a production environment. For a more secure and production-ready example of the Cymbal Bank sample application that uses multiple namespaces, see [Cymbal Bank application architecture](/architecture/enterprise-application-blueprint/cymbal-bank).
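As a rough illustration of that per-team approach, the following sketch creates separate namespaces and deploys a workload into one of them. The namespace names and the manifest file are hypothetical and aren't part of the Cymbal Bank deployment.

```shell
# Create one namespace per team or tier (hypothetical names).
kubectl create namespace frontend
kubectl create namespace backend

# Deploy a workload into a specific namespace instead of "default".
# "frontend-deployment.yaml" is a placeholder manifest.
kubectl apply -n frontend -f frontend-deployment.yaml
```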
Pod disruption budgets
----------------------
Pod Disruption Budget (PDB) policies help ensure app performance by preventing Pods from going down at the same time when you make a change to the system, and by limiting the number of simultaneously unavailable Pods in a replicated application.
[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-04。"],[],[],null,["# Learning Path: Scale applications - Production considerations\n\n[Autopilot](/kubernetes-engine/docs/concepts/autopilot-overview)\n\n*** ** * ** ***\n\nIn this set of tutorials, some of the planning considerations are simplified\nso that you can focus on learning key Google Kubernetes Engine (GKE) features and\nservices. Before you start to create your own Google Kubernetes Engine environment similar\nto the one described in this set of tutorials, there are some additional\nplanning considerations to keep in mind. These considerations include the\ncluster management level, networking, and availability types.\n\nNetworking\n----------\n\nGKE clusters require careful IP address planning. The networking\noptions that you choose impact the architecture of your GKE\nclusters. Some of these options can't be changed after they're configured\nwithout recreating the cluster.\n\nIn this set of tutorials, you use Autopilot mode clusters that always\nuse VPC-native mode networking. VPC-native\nclusters use alias IP address ranges on GKE nodes, and are\nrequired for creating clusters on Shared VPCs. VPC-native\nclusters scale more easily than routes-based clusters without consuming\nGoogle Cloud routes and so are less susceptible to hitting routing\nlimits.\n\nBefore you create your own GKE environment and deploy workloads,\nreview the following networking guidance:\n\n- [GKE network overview](/kubernetes-engine/docs/concepts/network-overview)\n- [Best practices for GKE networking](/kubernetes-engine/docs/best-practices/networking)\n- [Plan IP addresses for GKE](/kubernetes-engine/docs/concepts/gke-ip-address-mgmt-strategies)\n\nCluster modes\n-------------\n\nIn this set of tutorials you create a regional GKE cluster that\nuses Autopilot mode. Autopilot clusters are pre-configured\nwith an optimized cluster configuration that is ready for production workloads.\nYou can alternatively use Standard mode clusters for more advanced\nconfiguration flexibility over the underlying infrastructure.\n\nFor a more comprehensive overview, review the planning documents that start with\nthe\n[cluster configuration choices](/kubernetes-engine/docs/concepts/types-of-clusters).\n\nNamespaces\n----------\n\nNamespaces let you organize your applications and isolate components from each\nother. Each namespace has its own set of resources, such as Pods, Services, and\nDeployments. For example, you can create a namespace for all your frontend\nservices and a namespace for your backend services. This grouping makes it\neasier to manage your services and to control access to them.\n\nIn this set of tutorials, you deploy the Pods and Services for the Cymbal Bank\nsample application into a single namespace. This approach reduces deployment\ncomplexity, but doesn't let you use namespaces to assign resources to different\nteams and users, as you might do in a production environment. 
What's next
-----------

Get started by completing the [first tutorial to deploy a single GKE cluster](/kubernetes-engine/docs/learn/scalable-apps-basic-deployment) that runs a microservices-based application.