[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2025-09-04 UTC。"],[],[],null,["This page shows how to determine how much CPU and memory is available on a node\nto run your workloads on Google Distributed Cloud.\n\nReserved resources\n\nOn each cluster node, Google Distributed Cloud reserves the following\nresources for operating system components and core Kubernetes components:\n\n- 80 millicores + 1% of the CPU capacity\n- 330 MiB + 5% of the memory capacity\n\nFor example, suppose a node has the default capacity of 4 CPU cores and\n8 GiB of memory. Then Google Distributed Cloud reserves:\n\n- 80 millicores + 1% of 4 cores = 120 millicores\n- 330 MiB + 5% of 8 GiB = 730 MiB\n\nThe operating system and core Kubernetes components do not run as Pods; they\nrun as ordinary processes. The resources that remain, beyond these reserved\nresources, are available for Pods.\n\nEviction threshold\n\nTo determine how much memory is available for Pods, you must also consider the\n[eviction threshold](https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/).\nGoogle Distributed Cloud sets an eviction threshold of 100 MiB. This means\nthat if available memory on a node falls below 100 MiB, the `kubelet` might\nevict one or more Pods.\n\nAllocatable resources\n\nThe resources on a node that are available for Pods are called the\n*allocatable resources*. Calculate the allocatable resources as follows:\n\n- `Allocatable CPU = CPU Capacity - Reserved CPU`\n- `Allocatable Memory = Memory Capacity - Reserved Memory - Eviction Threshold`\n\nFor example, suppose a node has 8 GiB of memory capacity, 680 MiB of\nreserved memory, and an eviction threshold of 100 MiB. Then the allocatable\nmemory is:\n\n`8 GiB - 680 MiB - 100 MiB = 7220 MiB`\n\nResources available for your workloads\n\nA node's allocatable resources are the resources available for Pods. This\nincludes the Pods that run your workloads and the Pods that run\nGoogle Distributed Cloud add-ons. Add-ons include the ingress controller, the\ningress service, the Connect agent, networking components, logging\ncomponents, and more.\n\nOn a given node, to determine the resources available for your workloads,\nstart with the allocatable resources and then subtract the resources used by\nadd-ons.\n\nThe challenge is that add-ons are not distributed evenly among the nodes of a\nGoogle Distributed Cloud cluster. One node might have three add-ons, and another\nnode might have ten add-ons. Also, the various add-ons require different amounts\nof CPU and memory.\n\nAs a general rule, you can figure that the add-ons running on a node require:\n\n- 200 millicores of CPU\n- 100 MiB of memory\n\nNow you can calculate the resources available on a node for your workloads as\nfollows:\n\n- `Allocatable CPU - 200 millicores`\n- `Allocatable memory - 100 MiB`\n\nCertain nodes require more resources for add-ons than the preceding general rule\nindicates. For example, one node might run a Prometheus add-on that requires\n2 GiB of memory. 
Certain nodes require more resources for add-ons than the preceding general rule indicates. For example, one node might run a Prometheus add-on that requires 2 GiB of memory. But if your cluster has more than a few nodes, it is reasonable to assume that the general rule applies to most nodes.

What's next

To learn more about the concept of allocatable resources, see [Allocatable resources](/kubernetes-engine/docs/concepts/cluster-architecture#node_allocatable) in the documentation for GKE on Google Cloud.