# Deploying workloads

[Autopilot](/kubernetes-engine/docs/concepts/autopilot-overview) [Standard](/kubernetes-engine/docs/concepts/choose-cluster-mode)

*Last updated: 2025-07-29 (UTC).*

---

As you read in our [Cluster lifecycle](/kubernetes-engine/docs/get-started/cluster-lifecycle) guide, as a GKE user you generally use Google Cloud tools for cluster management and Kubernetes tools like `kubectl` for cluster-internal tasks like deploying applications. This means that if you're already familiar with deploying workloads on another Kubernetes implementation, deploying workloads on GKE should involve many of the same workflows. If you're not already familiar with deploying workloads on Kubernetes, see [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) and the other resources in [Start learning about Kubernetes](/kubernetes-engine/docs/learn/get-started-with-kubernetes).

However, GKE also provides additional features for deploying and managing your workloads, including observability tools, fully managed database options for stateful applications, and specific hardware options for special workload types, including AI/ML workloads.

This page provides a quick overview for developers and administrators who want to deploy workloads on GKE clusters, with links to more detailed documentation.
You can find many more specific guides and tutorials in the **Deploy...** sections of the GKE core documentation.

Before you read this page, you should be familiar with the following:

- [GKE overview](/kubernetes-engine/docs/concepts/kubernetes-engine-overview)
- [GKE modes of operation](/kubernetes-engine/docs/concepts/choose-cluster-mode)
- [Cluster lifecycle](/kubernetes-engine/docs/get-started/cluster-lifecycle)

Required roles
--------------

If you are not a project owner, you must have at minimum the following Identity and Access Management (IAM) role to deploy workloads:

- Kubernetes Engine Cluster Viewer (`roles/container.clusterViewer`): This role provides the `container.clusters.get` permission, which is required to authenticate to clusters in a Google Cloud project. It does not authorize you to perform any actions inside those clusters. Your cluster administrator can authorize you to perform other actions on the cluster by using either IAM or Kubernetes RBAC.

  For details about all the permissions included in this role, or to grant a role with read/write permissions, see [Kubernetes Engine roles](/iam/docs/understanding-roles#kubernetes-engine-roles) in the IAM documentation.

You can learn more about how access control works in GKE in [Access control](/kubernetes-engine/docs/concepts/access-control).

Stateless applications
----------------------

*Stateless applications* are applications that don't store data or application state to the cluster or to persistent storage. You can deploy stateless applications directly from the [**Workloads** menu in the Google Cloud console](/kubernetes-engine/docs/concepts/dashboards#workloads), as well as by using the Kubernetes API. You can learn how to deploy a stateless Linux application on GKE in [Deploying a stateless Linux application](/kubernetes-engine/docs/how-to/stateless-apps).
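For a sense of what such a deployment looks like, here is a minimal sketch of a stateless Deployment manifest. The names are illustrative, and the image is the sample `hello-app` used in the GKE tutorials; substitute your own container image:

```yaml
# hello-deployment.yaml -- a minimal stateless Deployment (illustrative names).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        # Sample image from the GKE tutorials; swap in your own image.
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
        ports:
        - containerPort: 8080
```

After authenticating to your cluster, you would apply this with `kubectl apply -f hello-deployment.yaml`.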
If you prefer, you can also learn how to deploy a stateless [Windows Server application](/kubernetes-engine/docs/how-to/deploying-windows-app).

Stateful applications and storage
---------------------------------

Applications that need to save data that persists beyond the lifetime of their Pod are known as *stateful applications*. You or your administrator can use a Kubernetes *PersistentVolume* object to provision this storage. In GKE, PersistentVolume storage is backed by Compute Engine disks. You can learn how to deploy a simple stateful application on GKE in [Deploying a stateful application](/kubernetes-engine/docs/how-to/stateful-apps).

If you need your stateful application's data to persist in a database rather than in storage that is tied to the lifetime of a cluster, GKE offers the following options:

- **Fully managed databases**: A managed database, such as [Cloud SQL](/sql) or [Spanner](/spanner), provides reduced operational overhead and is optimized for Google Cloud infrastructure. Managed databases require less effort to maintain and operate than a database that you deploy directly in Kubernetes.
- **Kubernetes application**: You can deploy and run a database instance (such as [MySQL](/kubernetes-engine/docs/tutorials/stateful-workloads/mysql) or [PostgreSQL](/kubernetes-engine/docs/tutorials/stateful-workloads/postgresql)) on a GKE cluster.

You can learn much more about data options in GKE in [Data on GKE](/kubernetes-engine/docs/integrations/data) and [Plan your database deployments on GKE](/kubernetes-engine/docs/concepts/database-options).

AI/ML workloads
---------------

GKE has rich support for deploying [AI/ML workloads](/kubernetes-engine/docs/integrations/ai-infra). This includes support for training and serving models on specialized hardware, as well as flexible integration with distributed computing and data-processing frameworks.
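As a small taste of what accelerator scheduling involves, a Pod typically requests GPU hardware through resource limits and, on Standard clusters, a node selector. The accelerator type and image below are illustrative examples, not recommendations:

```yaml
# gpu-pod.yaml -- requesting a GPU in a Pod spec (illustrative values).
apiVersion: v1
kind: Pod
metadata:
  name: cuda-test
spec:
  nodeSelector:
    # On GKE, this node label selects nodes with the given accelerator type.
    cloud.google.com/gke-accelerator: nvidia-tesla-t4
  containers:
  - name: cuda-container
    image: nvidia/cuda:12.2.0-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      limits:
        # Request one GPU; GKE schedules the Pod onto a matching node.
        nvidia.com/gpu: 1
```

The guides below cover the details, including driver installation and the accelerator types available in each mode.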
You can start learning more in the following guides:

- [About TPUs in GKE](/kubernetes-engine/docs/concepts/tpus) introduces you to using Cloud TPU accelerators for AI/ML workloads in GKE. GKE provides full support for TPU node and node pool lifecycle management, including creating, configuring, and deleting TPU VMs. You can deploy TPU workloads on both Standard and Autopilot clusters.
- [About GPUs in GKE](/kubernetes-engine/docs/concepts/gpus) explains how to request and use GPU hardware with GKE workloads.

Workloads with other special requirements
-----------------------------------------

GKE provides features and guides to help you deploy workloads with other special requirements, including applications that require particular node architectures, or that need their Pods to run on the same or separate nodes. You can learn more about deploying some of these in the following guides:

- [Compute classes in Autopilot](/kubernetes-engine/docs/concepts/autopilot-compute-classes) explains how you can pick specific compute architectures for scheduling your Pods when deploying applications on Autopilot clusters.
  For Standard clusters, you can directly specify the machine family that you want to use for your nodes when creating a cluster.
- [About custom compute classes](/kubernetes-engine/docs/concepts/about-custom-compute-classes) describes how to create custom compute classes for even greater flexibility when specifying hardware options for your applications on both Autopilot and Standard clusters.
- [Configure workload separation in GKE](/kubernetes-engine/docs/how-to/workload-separation) tells you how to ensure that your application's Pods run on the same or different underlying machines.
- [GKE Sandbox](/kubernetes-engine/docs/concepts/sandbox-pods) explains how to protect your host kernel by using sandbox Pods when you deploy unknown or untrusted workloads.

Observing your workloads
------------------------

GKE provides a range of features for observing your workloads and their health, including at-a-glance overviews of workload state and metrics in the Google Cloud console, as well as more in-depth metrics, logs, and alerting.

- Learn more about using the GKE pages in the Google Cloud console in [GKE in the Google Cloud console](/kubernetes-engine/docs/concepts/dashboards).
- Learn more about using [App Hub](/app-hub/docs/overview) to view your workloads and Services.
- Learn more about GKE and Google Cloud observability in [Observability for GKE](/kubernetes-engine/docs/concepts/observability).

Managing workload deployment
----------------------------

If you or your administrator want to set up a continuous integration and delivery (CI/CD) pipeline for deploying your workloads, you can find GKE-specific best practices and guidelines for CI/CD in [Best practices for continuous integration and delivery to GKE](/kubernetes-engine/docs/concepts/best-practices-continuous-integration-delivery-kubernetes), as well as tutorials for setting up CI/CD pipelines with specific tools and products.

What's next
-----------

- Learn more about
  tools for working with GKE:

  - [GKE in the Google Cloud console](/kubernetes-engine/docs/concepts/dashboards)
  - [Google Cloud CLI overview](/sdk/gcloud)
  - [Install `kubectl` and configure cluster access](/kubernetes-engine/docs/how-to/cluster-access-for-kubectl)
  - [Provision GKE resources with Terraform](/kubernetes-engine/docs/terraform)
- Learn how to simplify deployment from your IDE with Cloud Code in our [Deploy and update from an IDE](/kubernetes-engine/docs/tutorials/developer-workflow) tutorial.