This document describes how to configure clusters and VMs to support high-performance, low-latency workloads with the computing efficiencies of non-uniform memory access (NUMA). It includes instructions for tuning Kubernetes settings on cluster nodes, and for configuring virtual machines (VMs) with NUMA affinity so that they are scheduled on, and take advantage of, NUMA nodes.
With a NUMA-aware VM, all communication within the VM is local to the NUMA node. A NUMA-aware VM avoids data transactions to and from remote resources that can degrade VM performance.
Configure nodes to use NUMA
The following sections describe how to configure the critical Kubernetes components to tune a node and make sure it can schedule NUMA-aware containers. These NUMA nodes are tuned to optimize CPU and memory performance. Follow the instructions for each node that you want to run NUMA-aware VMs on.
Update the kubelet configuration
As part of the node configuration to support NUMA node affinity, you need to make the following changes in the kubelet configuration:
Enable the CPU Manager with the static policy
Enable the Memory Manager with the Static policy
Enable the Topology Manager with the restricted policy
To configure kubelet on your worker node, complete the following steps. You might need to use sudo to run some of the commands.
Locate the kubelet file on your worker node and open it for editing:
edit /etc/default/kubelet
If you don't see the kubelet file, create it with the following command:
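echo "KUBELET_EXTRA_ARGS=\"\"" >> /etc/default/kubelet
This command creates the kubelet file with an empty KUBELET_EXTRA_ARGS="" section.
To enable the CPU Manager with the static policy, add the --cpu-manager-policy=static flag to the KUBELET_EXTRA_ARGS="" section of the file:
KUBELET_EXTRA_ARGS="--cpu-manager-policy=static"
To enable the Memory Manager with the Static policy, add the --memory-manager-policy=Static flag to the KUBELET_EXTRA_ARGS="" section of the file:
KUBELET_EXTRA_ARGS="--cpu-manager-policy=static --memory-manager-policy=Static"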
To enable the Topology Manager with the restricted policy, add the --topology-manager-policy=restricted flag to the KUBELET_EXTRA_ARGS="" section of the file:
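KUBELET_EXTRA_ARGS="--cpu-manager-policy=static --memory-manager-policy=Static --topology-manager-policy=restricted"
Next, check the current amount of memory reserved by Google Distributed Cloud:
cat /var/lib/kubelet/kubeadm-flags.env
Look for the --kube-reserved flag in the KUBELET_KUBEADM_ARGS line of the output.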
In this example, the --kube-reserved=cpu=100m,memory=3470Mi setting indicates that Google Distributed Cloud has reserved 3,470 mebibytes of memory on the node.
Set the --reserved-memory flag in the KUBELET_EXTRA_ARGS section of the kubelet file to 100 mebibytes more than the current reserved memory, to account for the eviction threshold. If there is no reserved memory, you can skip this step.
For example, with the reserved memory of 3470Mi from the example in the preceding step, you reserve 3570Mi of memory in the kubelet file:
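KUBELET_EXTRA_ARGS="--cpu-manager-policy=static --memory-manager-policy=Static --topology-manager-policy=restricted --reserved-memory=0:memory=3570Mi"
Remove the CPU and memory state files from the /var/lib directory:
rm /var/lib/cpu_manager_state
rm /var/lib/memory_manager_state
Then restart kubelet:
systemctl start kubelet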
Configure the node to use hugepages
Once you have enabled the Memory Manager with the Static policy, you can add hugepages to further improve container workload performance on your NUMA nodes. Hugepages, as the name suggests, let you specify memory pages that are larger than the standard 4 kibibytes (KiB). VM Runtime on GDC supports 2 mebibyte (MiB) and 1 gibibyte (GiB) hugepages. You can set hugepages for a node at runtime, or for when the node machine boots. We recommend that you configure hugepages on each node that you want to run NUMA-aware VMs on.
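If you prefer to reserve hugepages when the node machine boots, you can use the standard Linux kernel boot parameters. The following kernel command line is an illustrative sketch only; adjust the page size and count to your workloads:
default_hugepagesz=1G hugepagesz=1G hugepages=4
Hugepages reserved at boot take effect after the node is rebooted.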
To configure the number of hugepages of a specific size on your NUMA node at runtime, use the following command:
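echo HUGEPAGE_QTY > /sys/devices/system/node/NUMA_NODE/hugepages/hugepages-HUGEPAGE_SIZEkB/nr_hugepages
Replace the following: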
HUGEPAGE_QTY: the number of hugepages to allocate of the specified size.
NUMA_NODE: the NUMA node, such as node0, to which you're allocating hugepages.
HUGEPAGE_SIZE: the size of the hugepages in kibibytes, 2048 (2 MiB) or 1048576 (1 GiB).
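For example, the following command (with illustrative values) allocates 512 hugepages of 2 MiB on NUMA node node0:
echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages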
Configure a VM to use the NUMA node
Once your cluster nodes are tuned for NUMA, you can create NUMA-aware VMs. NUMA-aware VMs are scheduled on NUMA nodes.
To create a NUMA-aware VM, follow the instructions to create a VM from a manifest, and use the following compute settings to configure the VM to be NUMA-aware:
spec.compute.guaranteed: Set guaranteed to true. With this setting, the virt-launcher Pod is configured to be placed in the Kubernetes Guaranteed Quality of Service (QoS) class.
spec.compute.advancedCompute:
dedicatedCPUPlacement: Set dedicatedCPUPlacement to true. This setting pins the virtual CPUs to the physical CPUs of the node.
hugePageSize: Set hugePageSize to either 2Mi or 1Gi to specify the hugepage size for your VM to use, 2 mebibytes or 1 gibibyte.
numaGuestMappingPassthrough: Include an empty structure ({}) for this setting. This setting establishes NUMA affinity so that your VM is scheduled only on NUMA nodes.
The following example VirtualMachine manifest shows how a NUMA-aware VM configuration might look:
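apiVersion: vm.cluster.gke.io/v1
kind: VirtualMachine
metadata:
  name: vm1
spec:
  compute:
    cpu:
      vcpus: 2
    guaranteed: true
    advancedCompute:
      dedicatedCPUPlacement: true
      hugePageSize: 2Mi
      numaGuestMappingPassthrough: {}
    memory:
      capacity: 256Mi
  interfaces:
    - name: eth0
      networkName: pod-network
      default: true
  disks:
    - virtualMachineDiskName: disk-from-gcs
      boot: true
      readOnly: true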