[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-09-01 UTC。"],[],[],null,["[Autopilot](/kubernetes-engine/docs/concepts/autopilot-overview) [Standard](/kubernetes-engine/docs/concepts/choose-cluster-mode)\n\n*** ** * ** ***\n\nIf you need to work with multiple GKE\nclusters, GKE Enterprise provides an extra layer of tools and features that can\nhelp you manage, govern, and operate containerized workloads at scale.\nGKE Enterprise offers a set of capabilities that helps you and your\norganization (from infrastructure operators and workload developers to security\nand network engineers) manage clusters, infrastructure, and workloads, on\nGoogle Cloud and across public cloud and on-premises\nenvironments. These capabilities are all built around the idea of the\n*fleet*: a logical grouping of Kubernetes clusters and other resources that can\nbe managed together. Fleets are managed by the Fleet service, also known as the Hub service.\n\nThis page describes our expanding portfolio of multi-cluster\nmanagement capabilities and provides resources to get started managing your fleet.\n\nIntroducing fleets\n\nTypically, as organizations embrace cloud-native technologies like containers,\ncontainer orchestration, and service meshes, they reach a point where running a\nsingle cluster is no longer sufficient. There are a variety of reasons why\norganizations choose to deploy multiple clusters to achieve their technical and\nbusiness objectives; for example, separating production from non-production\nenvironments, or separating services across tiers, locales, or teams. You can\nread more about the benefits and tradeoffs involved in multi-cluster approaches\nin [multi-cluster use cases](/kubernetes-engine/fleet-management/docs/multi-cluster-use-cases).\n\nGKE Enterprise and Google Cloud use the concept of a *fleet* to simplify\nmanaging multiple clusters, regardless of which project they exist in and what workloads run on them. For example, suppose your organization has ten Google Cloud projects with two GKE clusters in each project, using them to run multiple different production applications. Without fleets, if you want to make a production-wide change to clusters, you need to make the change on the individual clusters, in multiple projects. Even observing multiple clusters can require switching context between projects. With fleets, you can logically group and normalize clusters, helping you uplevel management and observability from individual clusters to entire groups of clusters, with a single \"fleet host project\" to view and manage your fleet.\n\nHowever, fleets can be more than just simple groups of clusters. 
However, fleets can be more than just simple groups of clusters. You can build on fleets by enabling fleet-based features that let you abstract away cluster boundaries: for example, by defining and managing resources that belong to specific teams across multiple clusters, or by automatically applying the same configuration across your fleet.

A fleet can be entirely made up of [Google Kubernetes Engine clusters](/kubernetes-engine/docs/concepts/types-of-clusters) on Google Cloud, or include clusters outside Google Cloud.

- To learn more about how fleets work, and to find a complete list of fleet-enabled features, see [How fleets work](/kubernetes-engine/fleet-management/docs/fleet-concepts).
- To learn how to add GKE clusters to a fleet, see [Create fleets to simplify multi-cluster management](/kubernetes-engine/docs/how-to/creating-fleets).
- To learn about current limitations and requirements for using fleets in multi-cluster deployments, as well as recommendations for implementing fleets in your organization, see [Fleet requirements and best practices](/kubernetes-engine/fleet-management/docs/fleet-concepts/best-practices).
- To help you implement fleets in your own systems, read about hypothetical scenarios that use fleets in [Fleet examples](/kubernetes-engine/fleet-management/docs/fleet-concepts/examples).

## Creating a fleet

Creating a fleet involves registering the clusters you want to manage together to a fleet in your chosen fleet host project. Some cluster types are automatically registered at cluster creation time, while other cluster types must be manually registered. You can read more about how this works in the [Fleet creation overview](/kubernetes-engine/fleet-management/docs/fleet-creation), and follow the linked instructions to start adding clusters to your fleet.

When you add a cluster outside Google Cloud to your fleet, a [Connect Agent](/kubernetes-engine/fleet-management/docs/connect-agent) is installed on the cluster to establish control plane connectivity between the cluster and Google Cloud. The agent can traverse NATs, egress proxies, VPNs, and other interconnects that you have between your environments and Google. Your Kubernetes clusters and their API servers do not need public or externally exposed IP addresses. To learn more about the Connect Agent, see the [Connect Agent overview](/kubernetes-engine/fleet-management/docs/connect-agent).

## Authenticating to clusters

Connecting and authenticating users and service accounts to clusters across multiple environments can be challenging. With fleets, you can choose from two options for consistent, secure authentication to clusters for all your organization's developers and admins.

- **Google Cloud identity**: If you want to use Google Cloud as your identity provider, the Connect gateway builds on fleets to provide a consistent way to connect to and run commands against your registered clusters from the command line, and makes it simpler to automate DevOps tasks across multiple clusters, including clusters outside Google Cloud. Users don't need direct IP connectivity to a cluster to connect to it using this option, as shown in the sketch after this list. Find out more in the [Connect gateway guide](/kubernetes-engine/enterprise/multicluster-management/gateway).

- **Third-party identity**: Fleets also support using your existing third-party identity provider, such as Microsoft ADFS, letting you configure your fleet clusters so that users can log in with their existing third-party ID and password. OIDC and LDAP providers are supported. Find out more in [Set up the connect gateway with third party identities](/kubernetes-engine/enterprise/multicluster-management/gateway/setup-third-party) and [Introducing GKE Identity Service](/kubernetes-engine/enterprise/identity).
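As an illustration of the Google Cloud identity option, the following minimal sketch fetches Connect gateway credentials for a registered cluster and then runs an ordinary kubectl command through the gateway. It assumes the Connect gateway is set up as described in the Connect gateway guide and that you hold the required IAM roles; the membership and project names (app1-cluster, fleet-host-project) are hypothetical placeholders.

```bash
# Generate a kubeconfig entry that routes kubectl traffic through the
# Connect gateway rather than directly to the cluster's API server.
gcloud container fleet memberships get-credentials app1-cluster \
    --project=fleet-host-project

# kubectl now reaches the cluster through the gateway, so no direct IP
# connectivity to the cluster is needed.
kubectl get namespaces
```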
With either approach, users can log in to clusters from the command line or from the Google Cloud console.

## Google Cloud console

The Google Cloud console provides a central user interface for managing all of your fleet clusters no matter where they are running. After you have registered your clusters to your fleet, you can log in to view, monitor, debug, and manage your workloads.

To learn more and to get started, see [Work with clusters from the Google Cloud console](/kubernetes-engine/fleet-management/docs/console).

## Who can use fleet management features?

If you want to enable and use multiple enterprise and multi-cluster features for a single per-vCPU charge, or if you want to register clusters outside Google Cloud to your fleet, you must enable GKE Enterprise. Find out how to do this in [Enable GKE Enterprise](/kubernetes-engine/enterprise/docs/setup/enable-gkee).

For clusters on Google Cloud only, you can register GKE clusters to a fleet and use [some enterprise and multi-cluster features](/kubernetes-engine/enterprise/docs/deployment-options#pricing_options_for_gke_on_google_cloud) at no additional charge beyond regular GKE pricing. You can then pay separately for additional Enterprise features such as [Multi Cluster Ingress](/kubernetes-engine/pricing#multi-cluster-ingress) and [Cloud Service Mesh](/service-mesh/docs/overview).

For complete details of which features are included in each option, see the GKE Enterprise [Deployment Options](/kubernetes-engine/enterprise/docs/deployment-options) page.

## Use cases

While managing more than one cluster has its challenges, there are many reasons to deploy multiple clusters to achieve technical and business objectives. Find out more in our [Multi-cluster use cases](/kubernetes-engine/fleet-management/docs/multi-cluster-use-cases) guide.

## What's next?

- Learn more about fleets in [How fleets work](/kubernetes-engine/fleet-management/docs/fleet-concepts)
- Start planning how to organize your clusters into fleets with [Plan fleet resources](/kubernetes-engine/fleet-management/docs/fleet-concepts/plan-fleets)
- Get best practices for adding features to your fleet in [Plan fleet features](/kubernetes-engine/fleet-management/docs/fleet-concepts/fleet-features)
- Get started creating your fleet following the [Fleet creation overview](/kubernetes-engine/fleet-management/docs/fleet-creation)