When you create a Kubernetes cluster in Google Distributed Cloud (GDC) air-gapped, you create node pools that are responsible for running your container workloads in the cluster. You provision nodes based on your container workload requirements, and you can update them as your requirements evolve.
GDC provides predefined machine types for your worker nodes that you can select when you add a node pool.
There are also multiple ways to partition separate GPU instances using the Multi-Instance GPU (MIG) feature.
See the following sections for the available machine types and GPU support.
Available machine types
GDC defines machine types for Kubernetes cluster nodes using a set of parameters that includes CPU, memory, and GPU. GDC offers different machine types for different purposes. For example, clusters use n2-standard-4-gdc for general-purpose container workloads. If you plan to run artificial intelligence (AI) and machine learning (ML) notebooks, you must provision GPU machines, such as a2-highgpu-1g-gdc.
The following is a list of all GDC predefined machine types available for Kubernetes cluster worker nodes:
| Name | vCPU | Memory | GPU |
|---|---|---|---|
| n2-standard-4-gdc | 4 | 16G | N/A |
| n2-standard-8-gdc | 8 | 32G | N/A |
| n2-standard-16-gdc | 16 | 64G | N/A |
| n2-standard-32-gdc | 32 | 128G | N/A |
| n2-highmem-4-gdc | 4 | 32G | N/A |
| n2-highmem-8-gdc | 8 | 64G | N/A |
| n2-highmem-16-gdc | 16 | 128G | N/A |
| n2-highmem-32-gdc | 32 | 256G | N/A |
| a2-highgpu-1g-gdc | 12 | 85G | 1x A100 40GB |
| a2-ultragpu-1g-gdc | 12 | 170G | 1x A100 80GB |
| a2-ultragpu-2g-gdc | 24 | 340G | 2x A100 80GB |
| a3-highgpu-1g-gdc | 28 | 240G | 1x H100 94GB |
| a3-highgpu-2g-gdc | 56 | 480G | 2x H100 94GB |
| a3-highgpu-4g-gdc | 112 | 960G | 4x H100 94GB |

The a3-highgpu-1g-gdc and a3-highgpu-2g-gdc machine types are in Preview.
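As a rough illustration of how one of these machine types is consumed, the following is a minimal sketch of a node pool entry in a Cluster custom resource that requests n2-standard-4-gdc workers. The apiVersion and the field names (nodePools, nodeCount, machineTypeName) are assumptions made for illustration only; see Add a node pool for the exact schema used by your GDC release.

```yaml
# Hypothetical sketch of a node pool that uses a predefined machine type.
# The apiVersion and field names are illustrative assumptions, not the authoritative schema.
apiVersion: cluster.gdc.goog/v1
kind: Cluster
metadata:
  name: user-cluster-1
  namespace: platform
spec:
  nodePools:
    - name: general-purpose-pool
      nodeCount: 3                         # number of worker nodes in the pool
      machineTypeName: n2-standard-4-gdc   # 4 vCPU, 16G memory, no GPU
```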
Supported MIG profiles
This section defines the supported partitioning schemes of MIG profiles on supported GPUs. You can define a partitioning scheme for a node pool in your Cluster custom resource. A partitioning scheme applies to all GPUs in a node. For example, the a3-highgpu-4g-gdc machine type can support four iterations of the 7x 1g.12gb GPU slicing because four GPUs are available to that machine type.
For more information on how to apply a GPU partitioning scheme, see Add a node pool.
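To make the relationship between a node pool and a partitioning scheme concrete, here is a minimal sketch, assuming a hypothetical gpuPartitionScheme field on the node pool entry that takes one of the scheme names listed in the tables that follow. The actual field name and placement in the Cluster custom resource may differ, so treat this as illustrative and follow Add a node pool for the supported syntax.

```yaml
# Hypothetical sketch: a GPU node pool with a MIG partitioning scheme.
# The gpuPartitionScheme field name is an illustrative assumption, not the authoritative schema.
apiVersion: cluster.gdc.goog/v1
kind: Cluster
metadata:
  name: user-cluster-1
  namespace: platform
spec:
  nodePools:
    - name: a100-mig-pool
      nodeCount: 2
      machineTypeName: a2-highgpu-1g-gdc   # 1x A100 40GB per node
      gpuPartitionScheme: mixed-3          # 1x 3g.20gb + 2x 2g.10gb per GPU
```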
A100 40GB GPU
The following table defines the MIG profiles supported on the NVIDIA A100 40GB GPU:
| Partitioning scheme | Available partitions |
|---|---|
| 1g.5gb | 7x 1g.5gb |
| 2g.10gb | 3x 2g.10gb |
| 3g.20gb | 2x 3g.20gb |
| 7g.40gb | 1x 7g.40gb |
| mixed-1 | 1x 4g.20gb, 1x 2g.10gb, 1x 1g.5gb |
| mixed-2 | 1x 4g.20gb, 3x 1g.5gb |
| mixed-3 | 1x 3g.20gb, 2x 2g.10gb |
| mixed-4 | 1x 3g.20gb, 1x 2g.10gb, 2x 1g.5gb |
| mixed-5 | 1x 3g.20gb, 4x 1g.5gb |
| mixed-6 | 3x 2g.10gb, 1x 1g.5gb |
| mixed-7 | 2x 2g.10gb, 3x 1g.5gb |
| mixed-8 | 1x 2g.10gb, 5x 1g.5gb |
A100 80GB GPU
The following table defines the MIG profiles supported on the NVIDIA A100 80GB GPU:
| Partitioning scheme | Available partitions |
|---|---|
| 1g.10gb | 7x 1g.10gb |
| 2g.20gb | 3x 2g.20gb |
| 3g.40gb | 2x 3g.40gb |
| 7g.80gb | 1x 7g.80gb |
| mixed-1 | 1x 4g.40gb, 1x 2g.20gb, 1x 1g.10gb |
| mixed-2 | 1x 4g.40gb, 3x 1g.10gb |
| mixed-3 | 1x 3g.40gb, 2x 2g.20gb |
| mixed-4 | 1x 3g.40gb, 1x 2g.20gb, 2x 1g.10gb |
| mixed-5 | 1x 3g.40gb, 4x 1g.10gb |
| mixed-6 | 3x 2g.20gb, 1x 1g.10gb |
| mixed-7 | 2x 2g.20gb, 3x 1g.10gb |
| mixed-8 | 1x 2g.20gb, 5x 1g.10gb |
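Once a node's GPUs are partitioned, workloads consume the individual MIG slices as extended resources. The sketch below assumes the NVIDIA device plugin's mixed MIG strategy, which exposes resources named nvidia.com/mig-<profile> (for example, nvidia.com/mig-3g.40gb on an A100 80GB partitioned with mixed-3); the exact resource names and container image in your GDC environment may differ.

```yaml
# Hypothetical Pod that requests one 3g.40gb MIG slice from an A100 80GB node.
# Assumes the NVIDIA device plugin's mixed MIG strategy; resource names may vary.
apiVersion: v1
kind: Pod
metadata:
  name: mig-inference
spec:
  containers:
    - name: inference
      image: registry.example.com/inference:latest   # placeholder image
      resources:
        limits:
          nvidia.com/mig-3g.40gb: 1   # one 3g.40gb slice of the partitioned GPU
```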
H100 94GB GPU
The following table defines the MIG profiles supported on the NVIDIA H100 94GB GPU: