Deploy third-party workloads on Config Controller
This page explains how to deploy your own workloads on Config Controller
clusters.
This page is for IT administrators and Operators who manage
the lifecycle of the underlying tech infrastructure and plan capacity, and
deploy apps and services to production. To learn more about common roles and
example tasks that we reference in Google Cloud content, see
Common GKE user roles and tasks.
Before you begin
Before you start, make sure that you have performed the following tasks:
1. Set up Config Controller.
2. If your Config Controller cluster is on a GKE version earlier than version 1.27, upgrade your cluster to version 1.27 or later.
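As a quick way to check whether the upgrade is needed, you can compare the version that `gcloud container clusters describe` reports against the 1.27 minimum. The version string below is a made-up example, not output from a real cluster:

```shell
# Sketch: decide whether a cluster needs the 1.27 upgrade. The version string
# is an assumed example; in practice it would come from:
#   gcloud container clusters describe CLUSTER_NAME --format="value(currentMasterVersion)"
cluster_version="1.26.5-gke.1200"
minimum="1.27"

# sort -V orders version strings numerically; if the cluster version sorts
# first (and is not exactly the minimum), it is below the required version.
lowest="$(printf '%s\n' "$minimum" "$cluster_version" | sort -V | head -n1)"
if [ "$lowest" = "$cluster_version" ] && [ "$cluster_version" != "$minimum" ]; then
  echo "Upgrade needed: $cluster_version is earlier than $minimum"
else
  echo "OK: $cluster_version meets the $minimum minimum"
fi
```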
Enable node auto-provisioning on Standard clusters
You must enable node auto-provisioning
to deploy your own workloads on Config Controller clusters. This allows for
workload separation between your workloads and the Google-managed workloads
installed by default on Config Controller clusters.
If you use Autopilot clusters, you don't need to enable node auto-provisioning
because GKE automatically manages node scaling and provisioning.
gcloud
To enable node auto-provisioning, run the following command:

gcloud container clusters update CLUSTER_NAME \
    --enable-autoprovisioning \
    --min-cpu MINIMUM_CPU \
    --min-memory MINIMUM_MEMORY \
    --max-cpu MAXIMUM_CPU \
    --max-memory MAXIMUM_MEMORY \
    --autoprovisioning-scopes=https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring,https://www.googleapis.com/auth/devstorage.read_only

Replace the following:
CLUSTER_NAME: the name of your Config Controller cluster.
MINIMUM_CPU: the minimum number of cores in the cluster.
MINIMUM_MEMORY: the minimum number of gigabytes of memory in the cluster.
MAXIMUM_CPU: the maximum number of cores in the cluster.
MAXIMUM_MEMORY: the maximum number of gigabytes of memory in the cluster.
Console
To enable node auto-provisioning, perform the following steps:
1. Go to the Google Kubernetes Engine page in the Google Cloud console.
2. Click the name of the cluster.
3. In the Automation section, for Node auto-provisioning, click Edit.
4. Select the Enable node auto-provisioning checkbox.
5. Set the minimum and maximum CPU and memory usage for the cluster.
6. Click Save changes.
For more information on configuring node auto-provisioning, such as setting defaults, see Configure node auto-provisioning.
Deploy your workload
When you deploy your workloads, Config Controller automatically
enables GKE Sandbox to provide an extra layer of security to prevent untrusted
code from affecting the host kernel on your cluster nodes. For more information, see
About GKE Sandbox.
You can deploy a workload by writing a workload manifest file and then
running the following command:
kubectl apply -f WORKLOAD_FILE
Replace WORKLOAD_FILE with the manifest file, such as my-app.yaml.
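As a minimal illustration, the workload manifest might be a small Deployment like the following. The app name and container image are arbitrary examples, not values from this page:

```shell
# Write a minimal example Deployment manifest. my-app.yaml matches the example
# file name used above; the nginx image is an arbitrary placeholder.
cat > my-app.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25
EOF
# You would then deploy it with: kubectl apply -f my-app.yaml
```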
Confirm that your workload is running on the auto-provisioned nodes:
Get the list of nodes created for your workload:
kubectl get nodes
Inspect a specific node:
kubectl get nodes NODE_NAME -o yaml
Replace NODE_NAME with the name of the node that you want to inspect.
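One way to tell whether a node came from node auto-provisioning is to look at its cloud.google.com/gke-nodepool label: auto-provisioned node pools are typically named with a nap- prefix. The snippet below runs the check against a saved sample of node output rather than a live cluster, and the node and pool names are made up:

```shell
# Sample of what "kubectl get nodes NODE_NAME -o yaml" might return; the
# names here are hypothetical, saved to a file so the check can run offline.
cat > node.yaml <<'EOF'
apiVersion: v1
kind: Node
metadata:
  name: gke-example-nap-e2-standard-2-abcd1234-xyz9
  labels:
    cloud.google.com/gke-nodepool: nap-e2-standard-2-abcd1234
EOF

# Auto-provisioned pools generally carry the nap- naming prefix.
if grep -q 'cloud.google.com/gke-nodepool: nap-' node.yaml; then
  echo "Node was created by node auto-provisioning"
fi
```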
Limitations
GKE Sandbox: GKE Sandbox works well with many applications,
but not all. For more information, see GKE Sandbox limitations.
Control plane security: when granting permissions to your workloads, follow the
principle of least privilege and grant only the permissions that each workload
needs. If a workload is compromised, overly permissive grants let it change or
delete Kubernetes resources.
Control plane availability: if your workloads cause increased traffic in a short
time, the cluster control plane might become unavailable until the traffic
decreases.
Control plane resizing: GKE automatically resizes the control
plane as needed. If your workload causes a large load increase (for example, installing thousands of
CRD objects), GKE's automatic resizing might not be able to keep
up with the load increase.
Quotas: when deploying workloads, make sure that you stay within GKE's
quotas and limits.
Network access to control plane and nodes: Config Controller uses private
nodes with Master Authorized Networks Enabled, Private Endpoint Enabled, and
Public Access Disabled. For more information, see
GKE network security.
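The least-privilege guidance above can be sketched as a namespaced Role that grants only read access to one resource type, instead of broad cluster-wide write permissions. The namespace and role name are hypothetical:

```shell
# Hypothetical least-privilege Role: read-only access to Pods in a single
# namespace, rather than cluster-wide permissions.
cat > pod-reader-role.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-app        # assumed namespace
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
EOF
# You would apply it with: kubectl apply -f pod-reader-role.yaml
```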
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[],[],null,["# Deploy third-party workloads on Config Controller\n\nThis page explains how to deploy your own workloads on Config Controller\nclusters.\n\nThis page is for IT administrators and Operators who manage\nthe lifecycle of the underlying tech infrastructure and plan capacity, and\ndeploy apps and services to production. To learn more about common roles and\nexample tasks that we reference in Google Cloud content, see\n[Common GKE user roles and tasks](/kubernetes-engine/enterprise/docs/concepts/roles-tasks).\n\nBefore you begin\n----------------\n\nBefore you start, make sure you have performed the following tasks:\n\n1. Set up [Config Controller](/kubernetes-engine/enterprise/config-controller/docs/setup).\n2. If your Config Controller cluster is on a GKE version earlier than version 1.27, [upgrade your cluster](/kubernetes-engine/docs/how-to/upgrading-a-cluster#upgrading_the_cluster) to version 1.27 or later.\n\nEnable node auto-provisioning on Standard clusters\n--------------------------------------------------\n\nYou must enable [node auto-provisioning](/kubernetes-engine/docs/concepts/node-auto-provisioning)\nto deploy your own workloads on Config Controller clusters. This allows for\nworkload separation between your workloads and the Google-managed workloads\ninstalled by default on Config Controller clusters.\n\nIf you use Autopilot clusters, you don't need to enable node auto-provisioning\nbecause GKE automatically manages node scaling and provisioning. 
\n\n### gcloud\n\nTo enable node auto-provisioning, run the following command: \n\n```\ngcloud container clusters update CLUSTER_NAME \\\n --enable-autoprovisioning \\\n --min-cpu MINIMUM_CPU \\\n --min-memory MIMIMUM_MEMORY \\\n --max-cpu MAXIMUM_CPU \\\n --max-memory MAXIMUM_MEMORY \\\n --autoprovisioning-scopes=https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring,https://www.googleapis.com/auth/devstorage.read_only\n```\n\nReplace the following:\n\n- \u003cvar translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e: the name of your Config Controller cluster.\n- \u003cvar translate=\"no\"\u003eMINIMUM_CPU\u003c/var\u003e: the minimum number of cores in the cluster.\n- \u003cvar translate=\"no\"\u003eMINIMUM_MEMORY\u003c/var\u003e: the minimum number of gigabytes of memory in the cluster.\n- \u003cvar translate=\"no\"\u003eMAXIMUM_CPU\u003c/var\u003e: the maximum number of cores in the cluster.\n- \u003cvar translate=\"no\"\u003eMAXIMUM_MEMORY\u003c/var\u003e: the maximum number of gigabytes of memory in the cluster.\n\n### Console\n\nTo enable node auto-provisioning, perform the following steps:\n\n1. Go to the **Google Kubernetes Engine** page in Google Cloud console.\n\n [Go to Google Kubernetes Engine](https://console.cloud.google.com/kubernetes/list)\n2. Click the name of the cluster.\n\n3. In the **Automation** section, for **Node auto-provisioning** , click edit **Edit**.\n\n4. Select the **Enable node auto-provisioning** checkbox.\n\n5. Set the minimum and maximum CPU and memory usage for the cluster.\n\n6. 
Click **Save changes**.\n\nFor more information on configuring node auto-provisioning, such as setting defaults,\nsee [Configure node auto-provisioning](/kubernetes-engine/docs/how-to/node-auto-provisioning).\n\nDeploy your workload\n--------------------\n\nWhen you deploy your workloads, Config Controller automatically\nenables GKE Sandbox to provide an extra layer of security to prevent untrusted\ncode from affecting the host kernel on your cluster nodes. For more information, see\n[About GKE Sandbox](/kubernetes-engine/docs/concepts/sandbox-pods).\n\nYou can deploy a workload by writing a workload manifest file and then\nrunning the following command: \n\n```\nkubectl apply -f WORKLOAD_FILE\n```\n\nReplace \u003cvar translate=\"no\"\u003eWORKLOAD_FILE\u003c/var\u003e with the manifest file, such as `my-app.yaml`.\n\nConfirm that your workload is running on the auto-provisioned nodes:\n\n1. Get the list of nodes created for your workload:\n\n ```\n kubectl get nodes\n ```\n2. Inspect a specific node:\n\n kubectl get nodes \u003cvar translate=\"no\"\u003eNODE_NAME\u003c/var\u003e -o yaml\n\n Replace \u003cvar translate=\"no\"\u003eNODE_NAME\u003c/var\u003e with the name of the node that you want to inspect.\n\nLimitations\n-----------\n\n- GKE Sandbox: GKE Sandbox works well with many applications, but not all. For more information, see [GKE Sandbox limitations](/kubernetes-engine/docs/concepts/sandbox-pods#limitations).\n- Control plane security: when granting permission for your workloads, use the principle of least privilege to grant only the permissions that you need. If your workload becomes compromised, the workload can use overly-permissive permissions to change or delete Kubernetes resources.\n- Control plane availability: if your workloads cause increased traffic in a short time, the cluster control plane might become unavailable until the traffic decreases.\n- Control plane resizing: GKE automatically resizes the control plane as needed. 
If your workload causes a large load increase (for example, installing thousands of CRD objects), GKE's automatic resizing might not be able to keep up with the load increase.\n- Quotas: when deploying workloads, you should be aware of GKE's [quotas and limits](/kubernetes-engine/quotas) and not exceed them.\n- Network access to control plane and nodes: Config Controller uses private nodes with Master Authorized Networks Enabled, Private Endpoint Enabled, and Public Access Disabled. For more information, see [GKE network security](/kubernetes-engine/docs/how-to/hardening-your-cluster#restrict_network_access_to_the_control_plane_and_nodes).\n\nWhat's next\n-----------\n\n- Learn more about Config Controller best practices: [Config Controller scalability](/kubernetes-engine/enterprise/config-controller/docs/scalability), [Config Controller sharding](/kubernetes-engine/enterprise/config-controller/docs/sharding), and [Configuring Config Controller for high availability](/kubernetes-engine/enterprise/config-controller/docs/availability)\n- [Troubleshoot Config Controller](/kubernetes-engine/enterprise/config-controller/docs/troubleshoot)\n- [Monitor Config Controller](/kubernetes-engine/enterprise/config-controller/docs/monitor)"]]