When you add a cluster outside Google Cloud to your fleet, a [Connect Agent](/kubernetes-engine/fleet-management/docs/connect-agent) is installed on the cluster to establish control plane connectivity between the cluster and Google Cloud. The agent can traverse NATs, egress proxies, VPNs, and other interconnects that you have between your environments and Google. Your Kubernetes clusters and their API servers do not need public or externally exposed IP addresses. To learn more about the Connect Agent, see the [Connect Agent overview](/kubernetes-engine/fleet-management/docs/connect-agent).
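For illustration only, registering a cluster that runs outside Google Cloud might look like the following sketch. The membership name, kubeconfig path, and context are placeholder values, and the exact flags (for example, whether you use fleet Workload Identity or a service account key) depend on your platform and gcloud version; see the Fleet creation overview for the authoritative steps.

```sh
# Register an existing cluster that runs outside Google Cloud to the fleet in
# the current project. This deploys the Connect Agent into the cluster so it
# can reach Google Cloud without a public IP address.
# MEMBERSHIP_NAME, KUBECONFIG_PATH, and KUBECONFIG_CONTEXT are placeholders.
gcloud container fleet memberships register MEMBERSHIP_NAME \
    --kubeconfig=KUBECONFIG_PATH \
    --context=KUBECONFIG_CONTEXT \
    --enable-workload-identity

# Confirm that the cluster now appears as a fleet membership.
gcloud container fleet memberships list
```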
Authenticating to clusters

Connecting and authenticating users and service accounts to clusters across multiple environments can be challenging. With fleets, you can choose from two options for consistent, secure authentication to clusters for all your organization's developers and admins.

- **Google Cloud identity**: If you want to use Google Cloud as your identity provider, the Connect gateway builds on fleets to provide a consistent way to connect to and run commands against your registered clusters from the command line, and makes it simpler to automate DevOps tasks across multiple clusters, including clusters outside Google Cloud. Users don't need direct IP connectivity to a cluster to connect to it using this option. Find out more in the [Connect gateway guide](/kubernetes-engine/enterprise/multicluster-management/gateway).

- **Third-party identity**: Fleets also support using your existing third-party identity provider, such as Microsoft ADFS, letting you configure your fleet clusters so that users can log in with their existing third-party ID and password. OIDC and LDAP providers are supported. Find out more in [Set up the Connect gateway with third-party identities](/kubernetes-engine/enterprise/multicluster-management/gateway/setup-third-party) and [Introducing GKE Identity Service](/kubernetes-engine/enterprise/identity).
With either approach, users can log in to clusters from the command line or from the Google Cloud console.
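As a minimal sketch of the Google Cloud identity option, the commands below generate a Connect gateway kubeconfig entry for a registered cluster and then run kubectl through it; MEMBERSHIP_NAME is a placeholder, and the available flags depend on your gcloud version and fleet configuration.

```sh
# Generate a kubeconfig entry that routes kubectl traffic through the
# Connect gateway, authenticated with your Google Cloud identity, instead of
# connecting to the cluster's IP address directly.
# MEMBERSHIP_NAME is a placeholder for a cluster registered to the fleet.
gcloud container fleet memberships get-credentials MEMBERSHIP_NAME

# kubectl commands are now tunneled to the cluster through the gateway.
kubectl get namespaces

# For the third-party identity option, you would instead log in with
# GKE Identity Service (for example, with gcloud anthos auth login); see the
# guides linked above.
```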
Google Cloud console

The Google Cloud console provides a central user interface for managing all of your fleet clusters no matter where they are running. After you have registered your clusters to your fleet, you can log in to view, monitor, debug, and manage your workloads.
[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-07-14 (世界標準時間)。"],[],[],null,["GKE Enterprise offers a set of capabilities that helps you and your\norganization (from infrastructure operators and workload developers to security\nand network engineers) manage clusters, infrastructure, and workloads, on\nGoogle Cloud and across public cloud and on-premises\nenvironments. These capabilities are all built around the idea of the\n*fleet*: a logical grouping of Kubernetes clusters and other resources that can\nbe managed together. Fleets are managed by the Fleet service, also known as the Hub service.\n\nThis page describes our expanding portfolio of multi-cluster\nmanagement capabilities and provides resources to get started managing your fleet.\n\nIntroducing fleets\n\nTypically, as organizations embrace cloud-native technologies like containers,\ncontainer orchestration, and service meshes, they reach a point where running a\nsingle cluster is no longer sufficient. There are a variety of reasons why\norganizations choose to deploy multiple clusters to achieve their technical and\nbusiness objectives; for example, separating production from non-production\nenvironments, or separating services across tiers, locales, or teams. You can\nread more about the benefits and tradeoffs involved in multi-cluster approaches\nin [multi-cluster use cases](/kubernetes-engine/fleet-management/docs/multi-cluster-use-cases).\n\nGKE Enterprise and Google Cloud use the concept of a *fleet* to simplify\nmanaging multiple clusters, regardless of which project they exist in and what workloads run on them. For example, suppose your organization has ten Google Cloud projects with two GKE clusters in each project, using them to run multiple different production applications. Without fleets, if you want to make a production-wide change to clusters, you need to make the change on the individual clusters, in multiple projects. Even observing multiple clusters can require switching context between projects. With fleets, you can logically group and normalize clusters, helping you uplevel management and observability from individual clusters to entire groups of clusters, with a single \"fleet host project\" to view and manage your fleet.\n\nHowever, fleets can be more than just simple groups of clusters. 
To learn more and to get started, see [Work with clusters from the Google Cloud console](/kubernetes-engine/fleet-management/docs/console).

Who can use fleet management features?

If you want to enable and use multiple enterprise and multi-cluster features for a single per-vCPU charge, or if you want to register clusters outside Google Cloud to your fleet, you must enable GKE Enterprise. Find out how to do this in [Enable GKE Enterprise](/kubernetes-engine/enterprise/docs/setup/enable-gkee).

For clusters on Google Cloud only, you can register GKE clusters to a fleet and use [some enterprise and multi-cluster features](/kubernetes-engine/enterprise/docs/deployment-options#pricing_options_for_gke_on_google_cloud) at no additional charge beyond regular GKE pricing. You can then pay separately for additional Enterprise features such as [Multi Cluster Ingress](/kubernetes-engine/pricing#multi-cluster-ingress) and [Cloud Service Mesh](/service-mesh/docs/overview).

For complete details of which features are included in each option, see the GKE Enterprise [Deployment options](/kubernetes-engine/enterprise/docs/deployment-options) page.

Use cases

While managing more than one cluster has its challenges, there are many reasons to deploy multiple clusters to achieve technical and business objectives. Find out more in our [Multi-cluster use cases](/kubernetes-engine/fleet-management/docs/multi-cluster-use-cases) guide.

What's next?

- Learn more about fleets in [How fleets work](/kubernetes-engine/fleet-management/docs/fleet-concepts).
- Start planning how to organize your clusters into fleets with [Plan fleet resources](/kubernetes-engine/fleet-management/docs/fleet-concepts/plan-fleets).
- Get best practices for adding features to your fleet in [Plan fleet features](/kubernetes-engine/fleet-management/docs/fleet-concepts/fleet-features).
- Get started creating your fleet by following the [Fleet creation overview](/kubernetes-engine/fleet-management/docs/fleet-creation).