Deploy third-party workloads on Config Controller
This page explains how to deploy your own workloads on Config Controller clusters.
This page is for IT administrators and Operators who manage the lifecycle of the underlying tech infrastructure, plan capacity, and deploy apps and services to production. To learn more about the common roles and example tasks that we reference in Google Cloud content, see Common GKE user roles and tasks.
Before you begin
Before you begin, make sure that you have performed the following tasks:
Set up Config Controller.
If your Config Controller cluster is on a GKE version earlier than 1.27, upgrade your cluster to version 1.27 or later.
Enable node auto-provisioning on Standard clusters
You must enable node auto-provisioning to deploy your own workloads on Config Controller clusters. This separates your workloads from the Google-managed workloads that are installed by default on Config Controller clusters.
If you use Autopilot clusters, you don't need to enable node auto-provisioning because GKE automatically manages node scaling and provisioning.
gcloud
To enable node auto-provisioning, run the following command:
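gcloud container clusters update CLUSTER_NAME \
    --enable-autoprovisioning \
    --min-cpu MINIMUM_CPU \
    --min-memory MINIMUM_MEMORY \
    --max-cpu MAXIMUM_CPU \
    --max-memory MAXIMUM_MEMORY \
    --autoprovisioning-scopes=https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring,https://www.googleapis.com/auth/devstorage.read_only
Replace the following:
CLUSTER_NAME: the name of your Config Controller cluster.
MINIMUM_CPU: the minimum number of cores in the cluster.
MINIMUM_MEMORY: the minimum number of gigabytes of memory in the cluster.
MAXIMUM_CPU: the maximum number of cores in the cluster.
MAXIMUM_MEMORY: the maximum number of gigabytes of memory in the cluster.
Console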
To enable node auto-provisioning, perform the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click the name of the cluster.
In the Automation section, for Node auto-provisioning, click Edit.
Select the Enable node auto-provisioning checkbox.
Set the minimum and maximum CPU and memory usage for the cluster.
Click Save changes.
For more information about configuring node auto-provisioning, such as setting defaults, see Configure node auto-provisioning.
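If you prefer to manage defaults and resource limits declaratively, node auto-provisioning can also read them from a configuration file. The following is only a sketch, assuming the gcloud --autoprovisioning-config-file flow described in Configure node auto-provisioning; the limit values are example numbers, not recommendations from this guide:
resourceLimits:
- resourceType: 'cpu'
  minimum: 4
  maximum: 64
- resourceType: 'memory'
  maximum: 256
gcloud container clusters update CLUSTER_NAME \
    --enable-autoprovisioning \
    --autoprovisioning-config-file FILE_NAME
Replace FILE_NAME with the path to the configuration file.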
Deploy your workload
When you deploy your workloads, Config Controller automatically enables GKE Sandbox to provide an extra layer of security that prevents untrusted code from affecting the host kernel on your cluster nodes. For more information, see About GKE Sandbox.
You can deploy a workload by writing a workload manifest file and then running the following command:
kubectl apply -f WORKLOAD_FILE
Replace WORKLOAD_FILE with the manifest file, such as my-app.yaml.
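For illustration only, here is a minimal sketch of what my-app.yaml could contain; the my-app name, label, and IMAGE placeholder are assumptions for this example rather than values defined by this guide. The CPU and memory requests matter because node auto-provisioning uses them to decide which nodes to create for the Pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: IMAGE   # replace with your container image
        resources:
          requests:
            cpu: 250m
            memory: 256Mi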
Confirm that your workload is running on the auto-provisioned nodes:
Get the list of nodes created for your workload:
kubectl get nodes
Inspect a specific node:
kubectl get nodes NODE_NAME -o yaml
Replace NODE_NAME with the name of the node that you want to inspect.
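As an additional cross-check (an optional step, not part of the original procedure), you can list Pods together with the nodes they were scheduled on; the NODE column should point at the auto-provisioned nodes, whose node pools usually have a nap- prefix:
kubectl get pods -o wide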
Limitations
GKE Sandbox: GKE Sandbox works well with many applications, but not all. For more information, see GKE Sandbox limitations.
Control plane security: when granting permissions for your workloads, use the principle of least privilege and grant only the permissions that you need (see the Role sketch after this list). If your workload is compromised, it can use overly permissive permissions to change or delete Kubernetes resources.
Control plane availability: if your workloads cause a traffic spike over a short period, the cluster control plane might become unavailable until the traffic decreases.
Control plane resizing: GKE automatically resizes the control plane as needed. If your workload causes a large load increase (for example, installing thousands of CRD objects), GKE's automatic resizing might not be able to keep up with the increase.
Quotas: when deploying workloads, be aware of GKE's quotas and limits and don't exceed them.
Network access to the control plane and nodes: Config Controller uses private nodes with Master Authorized Networks Enabled, Private Endpoint Enabled, and Public Access Disabled. For more information, see GKE network security.
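To make the least-privilege guidance above concrete, here is a hypothetical sketch (the configmap-reader name and my-app namespace are placeholders, not part of this guide) of a namespaced Role that grants only read access to ConfigMaps instead of a broad cluster-wide role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-reader   # hypothetical name
  namespace: my-app        # hypothetical namespace
rules:
- apiGroups: [""]          # core API group
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]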
[[["Mudah dipahami","easyToUnderstand","thumb-up"],["Memecahkan masalah saya","solvedMyProblem","thumb-up"],["Lainnya","otherUp","thumb-up"]],[["Sulit dipahami","hardToUnderstand","thumb-down"],["Informasi atau kode contoh salah","incorrectInformationOrSampleCode","thumb-down"],["Informasi/contoh yang saya butuhkan tidak ada","missingTheInformationSamplesINeed","thumb-down"],["Masalah terjemahan","translationIssue","thumb-down"],["Lainnya","otherDown","thumb-down"]],["Terakhir diperbarui pada 2025-09-01 UTC."],[],[],null,["# Deploy third-party workloads on Config Controller\n\nThis page explains how to deploy your own workloads on Config Controller\nclusters.\n\nThis page is for IT administrators and Operators who manage\nthe lifecycle of the underlying tech infrastructure and plan capacity, and\ndeploy apps and services to production. To learn more about common roles and\nexample tasks that we reference in Google Cloud content, see\n[Common GKE user roles and tasks](/kubernetes-engine/enterprise/docs/concepts/roles-tasks).\n\nBefore you begin\n----------------\n\nBefore you start, make sure you have performed the following tasks:\n\n1. Set up [Config Controller](/kubernetes-engine/enterprise/config-controller/docs/setup).\n2. If your Config Controller cluster is on a GKE version earlier than version 1.27, [upgrade your cluster](/kubernetes-engine/docs/how-to/upgrading-a-cluster#upgrading_the_cluster) to version 1.27 or later.\n\nEnable node auto-provisioning on Standard clusters\n--------------------------------------------------\n\nYou must enable [node auto-provisioning](/kubernetes-engine/docs/concepts/node-auto-provisioning)\nto deploy your own workloads on Config Controller clusters. This allows for\nworkload separation between your workloads and the Google-managed workloads\ninstalled by default on Config Controller clusters.\n\nIf you use Autopilot clusters, you don't need to enable node auto-provisioning\nbecause GKE automatically manages node scaling and provisioning. \n\n### gcloud\n\nTo enable node auto-provisioning, run the following command: \n\n```\ngcloud container clusters update CLUSTER_NAME \\\n --enable-autoprovisioning \\\n --min-cpu MINIMUM_CPU \\\n --min-memory MIMIMUM_MEMORY \\\n --max-cpu MAXIMUM_CPU \\\n --max-memory MAXIMUM_MEMORY \\\n --autoprovisioning-scopes=https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring,https://www.googleapis.com/auth/devstorage.read_only\n```\n\nReplace the following:\n\n- \u003cvar translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e: the name of your Config Controller cluster.\n- \u003cvar translate=\"no\"\u003eMINIMUM_CPU\u003c/var\u003e: the minimum number of cores in the cluster.\n- \u003cvar translate=\"no\"\u003eMINIMUM_MEMORY\u003c/var\u003e: the minimum number of gigabytes of memory in the cluster.\n- \u003cvar translate=\"no\"\u003eMAXIMUM_CPU\u003c/var\u003e: the maximum number of cores in the cluster.\n- \u003cvar translate=\"no\"\u003eMAXIMUM_MEMORY\u003c/var\u003e: the maximum number of gigabytes of memory in the cluster.\n\n### Console\n\nTo enable node auto-provisioning, perform the following steps:\n\n1. Go to the **Google Kubernetes Engine** page in Google Cloud console.\n\n [Go to Google Kubernetes Engine](https://console.cloud.google.com/kubernetes/list)\n2. Click the name of the cluster.\n\n3. In the **Automation** section, for **Node auto-provisioning** , click edit **Edit**.\n\n4. Select the **Enable node auto-provisioning** checkbox.\n\n5. 
Set the minimum and maximum CPU and memory usage for the cluster.\n\n6. Click **Save changes**.\n\nFor more information on configuring node auto-provisioning, such as setting defaults,\nsee [Configure node auto-provisioning](/kubernetes-engine/docs/how-to/node-auto-provisioning).\n\nDeploy your workload\n--------------------\n\nWhen you deploy your workloads, Config Controller automatically\nenables GKE Sandbox to provide an extra layer of security to prevent untrusted\ncode from affecting the host kernel on your cluster nodes. For more information, see\n[About GKE Sandbox](/kubernetes-engine/docs/concepts/sandbox-pods).\n\nYou can deploy a workload by writing a workload manifest file and then\nrunning the following command: \n\n```\nkubectl apply -f WORKLOAD_FILE\n```\n\nReplace \u003cvar translate=\"no\"\u003eWORKLOAD_FILE\u003c/var\u003e with the manifest file, such as `my-app.yaml`.\n\nConfirm that your workload is running on the auto-provisioned nodes:\n\n1. Get the list of nodes created for your workload:\n\n ```\n kubectl get nodes\n ```\n2. Inspect a specific node:\n\n kubectl get nodes \u003cvar translate=\"no\"\u003eNODE_NAME\u003c/var\u003e -o yaml\n\n Replace \u003cvar translate=\"no\"\u003eNODE_NAME\u003c/var\u003e with the name of the node that you want to inspect.\n\nLimitations\n-----------\n\n- GKE Sandbox: GKE Sandbox works well with many applications, but not all. For more information, see [GKE Sandbox limitations](/kubernetes-engine/docs/concepts/sandbox-pods#limitations).\n- Control plane security: when granting permission for your workloads, use the principle of least privilege to grant only the permissions that you need. If your workload becomes compromised, the workload can use overly-permissive permissions to change or delete Kubernetes resources.\n- Control plane availability: if your workloads cause increased traffic in a short time, the cluster control plane might become unavailable until the traffic decreases.\n- Control plane resizing: GKE automatically resizes the control plane as needed. If your workload causes a large load increase (for example, installing thousands of CRD objects), GKE's automatic resizing might not be able to keep up with the load increase.\n- Quotas: when deploying workloads, you should be aware of GKE's [quotas and limits](/kubernetes-engine/quotas) and not exceed them.\n- Network access to control plane and nodes: Config Controller uses private nodes with Master Authorized Networks Enabled, Private Endpoint Enabled, and Public Access Disabled. For more information, see [GKE network security](/kubernetes-engine/docs/how-to/hardening-your-cluster#restrict_network_access_to_the_control_plane_and_nodes).\n\nWhat's next\n-----------\n\n- Learn more about Config Controller best practices: [Config Controller scalability](/kubernetes-engine/enterprise/config-controller/docs/scalability), [Config Controller sharding](/kubernetes-engine/enterprise/config-controller/docs/sharding), and [Configuring Config Controller for high availability](/kubernetes-engine/enterprise/config-controller/docs/availability)\n- [Troubleshoot Config Controller](/kubernetes-engine/enterprise/config-controller/docs/troubleshoot)\n- [Monitor Config Controller](/kubernetes-engine/enterprise/config-controller/docs/monitor)"]]