Version 1.12. This version is no longer supported. For information about how to upgrade to version 1.13, see Upgrading Anthos on bare metal in the 1.13 documentation. For more information about supported and unsupported versions, see the Version history page in the latest documentation.
Google Distributed Cloud supports a wide variety of systems running on the
hardware that the target operating system distributions support.
A Google Distributed Cloud configuration can run on minimal hardware, or on
multiple machines to provide flexibility, availability, and performance.
Regardless of your Google Distributed Cloud configuration, your nodes and clusters
must have enough CPU, RAM, and storage resources to meet the needs of clusters
and the workloads that you're running.
Minimum and recommended CPU, RAM, and storage requirements
When you install Google Distributed Cloud, you can create different types of
clusters:
A user cluster that runs workloads.
An admin cluster that creates and controls user clusters to run workloads.
A standalone cluster is a single cluster that can manage and run
workloads, but a standalone cluster can't create or manage user clusters.
A hybrid cluster can manage and run workloads, and a hybrid cluster can
also create and manage additional user clusters.
In addition to cluster type, you can choose from the following installation
profiles, which differ in their resource requirements:
Default: The default profile has standard system resource requirements,
and you can use it for all cluster types.
Edge: The edge profile has significantly reduced system resource
requirements. Use of this profile is recommended for edge devices with limited
resources. You can only use the edge profile for standalone clusters.
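A cluster's installation profile is declared in its cluster configuration file. The following sketch shows where a `profile: edge` line would sit in a standalone Cluster spec; the metadata names and version are placeholders, so verify the fields against the configuration that `bmctl` generates for you:

```shell
# Sketch: a minimal standalone-cluster config selecting the edge profile.
# Names and the version value are illustrative placeholders.
cat > /tmp/edge-cluster.yaml <<'EOF'
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: edge-standalone
  namespace: cluster-edge-standalone
spec:
  type: standalone
  profile: edge        # reduced system resource requirements
  anthosBareMetalVersion: 1.12.0
EOF
grep -E 'type:|profile:' /tmp/edge-cluster.yaml
```

Omitting `profile` (or setting `default`) gives the standard resource requirements described in the next section.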
Resource requirements for all cluster types using the default profile
The following table describes the minimum and recommended hardware requirements
that Google Distributed Cloud needs to operate and manage admin, hybrid, user, and
standalone clusters using the default profile:
Resource         Minimum    Recommended
CPUs / vCPUs*    4 cores    8 cores
RAM              16 GiB     32 GiB
Storage          128 GiB    256 GiB
* Google Distributed Cloud supports CPUs and vCPUs from the x86
processor family only.
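As a quick sanity check before installing, you can compare a machine against the default-profile minimums from the table above. This is an illustrative sketch only; the installer's own preflight checks remain authoritative, and the storage mount point is an assumption you should adapt to your layout:

```shell
# Sketch: check a node against the default-profile minimums
# (4 CPUs, 16 GiB RAM, 128 GiB storage). Adjust the mount point checked
# for storage to match where your cluster data actually lives.
MIN_CPU=4; MIN_RAM_GIB=16; MIN_DISK_GIB=128

cpus=$(nproc)
ram_gib=$(awk '/MemTotal/ {printf "%d", $2 / 1048576}' /proc/meminfo)
disk_gib=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')

echo "cpus=${cpus} (min ${MIN_CPU})"
echo "ram=${ram_gib}GiB (min ${MIN_RAM_GIB})"
echo "disk=${disk_gib}GiB (min ${MIN_DISK_GIB})"

[ "$cpus" -ge "$MIN_CPU" ] && [ "$ram_gib" -ge "$MIN_RAM_GIB" ] \
  && [ "$disk_gib" -ge "$MIN_DISK_GIB" ] \
  && echo "meets default-profile minimums" \
  || echo "below default-profile minimums"
```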
Resource requirements for standalone clusters using the edge profile
The following table describes the minimum and recommended hardware requirements
that Google Distributed Cloud needs to operate and manage standalone clusters
using the edge profile:
Resource         Minimum                             Recommended
CPUs / vCPUs*    2 cores                             4 cores
RAM              Ubuntu: 4 GiB; CentOS/RHEL: 6 GiB   Ubuntu: 8 GiB; CentOS/RHEL: 12 GiB
Storage          128 GiB                             256 GiB
* Google Distributed Cloud supports CPUs and vCPUs from the x86
processor family only.
To configure standalone clusters using the edge profile, follow these best practices:
Run bmctl on a separate workstation. If you must run bmctl on the target
cluster node, you need an extra 2 GiB of memory beyond the minimum
requirements: for example, 6 GiB for Ubuntu and 8 GiB for CentOS/RHEL.
Set MaxPodsPerNode to 110, and keep the cluster to an average of no more
than 30 user pods per node. You might need extra resources for a higher
MaxPodsPerNode setting or to run more than 30 user pods per node.
Use containerd as the container runtime. You might need extra resources to
run with the Docker container runtime.
Kubevirt components are not
considered in this minimum resource configuration. Kubevirt requires
additional resources depending on the number of VMs deployed in the cluster.
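For the MaxPodsPerNode best practice above, pod density is set in the cluster configuration. A minimal, hypothetical fragment follows (the field sits under `spec.nodeConfig.podDensity` in the bare-metal Cluster spec; confirm against your generated config):

```shell
# Sketch: pinning pod density for an edge-profile standalone cluster.
# The fragment below is illustrative, not a complete Cluster spec.
cat > /tmp/pod-density.yaml <<'EOF'
spec:
  profile: edge
  nodeConfig:
    podDensity:
      maxPodsPerNode: 110   # best-practice ceiling; plan ~30 user pods/node
EOF
grep maxPodsPerNode /tmp/pod-density.yaml
```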
Additional storage requirements
Google Distributed Cloud doesn't provide any storage resources. You must provision
and configure the required storage on your system.
etcd performance
Disk speed is critical to etcd performance and stability. A slow disk increases
etcd request latency, which can lead to cluster stability problems. We recommend
that you use a solid-state disk (SSD) for your etcd store. The etcd
documentation provides additional
hardware recommendations
for ensuring the best etcd performance when running your clusters in production.
To check your etcd and disk performance, use the following etcd I/O latency
metrics in the Metrics Explorer:
etcd_disk_backend_commit_duration_seconds: the duration should be less than
25 milliseconds for the 99th percentile (p99).
etcd_disk_wal_fsync_duration_seconds: the duration should be less than 10
milliseconds for the 99th percentile (p99).
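To measure whether a disk can meet those fsync targets before you commit it to etcd, the etcd hardware guide suggests an fio benchmark that mimics etcd's WAL write pattern. The following guarded sketch uses the parameters from the etcd documentation; the directory is a placeholder that you should point at the actual etcd data disk:

```shell
# Sketch: benchmark fdatasync latency on the etcd disk with fio, per the
# etcd hardware guide. Skips cleanly when fio isn't installed.
ETCD_DISK_DIR=/tmp/etcd-disk-test   # placeholder: point at the etcd data disk
mkdir -p "$ETCD_DISK_DIR"

if command -v fio >/dev/null 2>&1; then
  # ~22 MiB of 2300-byte writes, one fdatasync per write, approximating
  # etcd's WAL pattern. Compare the reported sync p99 against 10 ms.
  fio --rw=write --ioengine=sync --fdatasync=1 \
      --directory="$ETCD_DISK_DIR" --size=22m --bs=2300 --name=etcd-wal-test
else
  echo "fio not installed; skipping etcd disk benchmark"
fi
```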
Node machine prerequisites
The node machines have the following prerequisites:
Their operating system is one of the supported Linux distributions. For more
information, see
Select your operating system.
The Linux kernel version is 4.17.0 or newer. Ubuntu 18.04 and 18.04.1 are on
Linux kernel version 4.15 and therefore incompatible.
Meet the minimum hardware requirements.
Internet access.
Layer 3 connectivity to all other node machines.
Access to the control plane VIP.
Properly configured DNS nameservers.
No duplicate host names.
One of the following NTP services is enabled and working:
chrony
ntp
ntpdate
systemd-timesyncd
A working package manager: apt, dnf, etc.
On Ubuntu, you must disable Uncomplicated Firewall (UFW).
Run systemctl stop ufw to disable UFW.
On Ubuntu, starting with Google Distributed Cloud 1.8.2, you aren't required
to disable AppArmor. If you deploy clusters using earlier releases of
Google Distributed Cloud, disable AppArmor with the following command:
systemctl stop apparmor
If you choose Docker as your container runtime, you must have Docker
version 19.03 or later installed. If Docker isn't installed on your node
machines, or an older version is installed, Google Distributed Cloud installs
Docker 19.03.13 or later when you create clusters.
If you use the default container runtime, containerd, you don't need Docker,
and installing Docker can cause issues. For more information, see the
known issues.
Cluster creation checks only for the free space required by the
Google Distributed Cloud system components, giving you more control over the
space that you allocate for application workloads.
Whenever you install Google Distributed Cloud, ensure
that the file systems backing the following directories have the required
capacity and meet the following requirements:
/: 17 GiB (18,253,611,008 bytes).
/var/lib/docker or /var/lib/containerd, depending on the container
runtime:
30 GiB (32,212,254,720 bytes) for control plane nodes.
10 GiB (10,737,418,240 bytes) for worker nodes.
/var/lib/kubelet: 500 MiB (524,288,000 bytes).
/var/lib/etcd: 20 GiB (21,474,836,480 bytes, applicable to control plane nodes only).
Regardless of cluster version, the preceding lists of directories can be on
the same or different partitions. If they are on the same underlying
partition, then the space requirement is the sum of the space
required for each individual directory on that partition. For all release
versions, the cluster creation process creates the directories, if needed.
The /var/lib/etcd and /etc/kubernetes directories must be either non-existent
or empty.
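Several of the prerequisites above can be spot-checked from a shell before you run the installer. The following sketch covers the kernel version, the time-sync service, and free space behind the required directories; service names and mount points are examples, and the installer's own preflight checks remain authoritative:

```shell
# Sketch: a few node preflight spot-checks drawn from the prerequisites above.

# 1. Kernel must be 4.17.0 or newer (Ubuntu 18.04/18.04.1 ship 4.15).
kver=$(uname -r | cut -d- -f1)
lowest=$(printf '%s\n4.17.0\n' "$kver" | sort -V | head -1)
if [ "$lowest" = "4.17.0" ]; then
  echo "kernel $kver: OK"
else
  echo "kernel $kver: too old (need >= 4.17.0)"
fi

# 2. One time-sync service should be active (names vary by distribution).
if command -v systemctl >/dev/null 2>&1; then
  for svc in chrony chronyd ntp ntpd systemd-timesyncd; do
    systemctl is-active --quiet "$svc" 2>/dev/null \
      && echo "time sync: $svc active" || true
  done
else
  echo "systemctl unavailable; check the NTP service manually"
fi

# 3. Free space behind the required directories (before installation,
#    some of these directories may not exist yet).
for d in / /var/lib/containerd /var/lib/kubelet /var/lib/etcd; do
  avail=$(df -BG --output=avail "$d" 2>/dev/null | tail -1 | tr -dc '0-9')
  if [ -n "$avail" ]; then
    echo "$d: ${avail}GiB free"
  else
    echo "$d: not present yet"
  fi
done
```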
In addition to the prerequisites for installing and running Google Distributed Cloud,
customers are expected to comply with relevant standards governing their industry
or business segment, such as PCI DSS requirements for businesses that process
credit cards or Security Technical Implementation Guides (STIGs) for businesses
in the defense industry.
Load balancer machine prerequisites
When your deployment doesn't have a specialized load balancer node pool, worker nodes or control plane nodes can form the load balancer node pool. In that case, those nodes have the following additional prerequisites:
Machines are in the same Layer 2 subnet.
All VIPs are in the load balancer nodes subnet and routable from the gateway of the subnet.
The gateway of the load balancer subnet should listen for gratuitous ARP messages to forward packets to the master load balancer.
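The VIP requirement above is easy to get wrong. This standalone sketch checks whether a candidate VIP falls inside the load balancer subnet using plain shell arithmetic; the addresses are illustrative, so substitute your own:

```shell
# Sketch: verify that a control plane VIP falls inside the load balancer
# node subnet. Addresses below are examples only.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

SUBNET=10.0.0.0/24        # load balancer nodes' Layer 2 subnet (example)
VIP=10.0.0.50             # candidate control plane VIP (example)

net=${SUBNET%/*}; bits=${SUBNET#*/}
mask=$(( 0xFFFFFFFF << (32 - bits) & 0xFFFFFFFF ))

if [ $(( $(ip_to_int "$VIP") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]; then
  echo "$VIP is in $SUBNET"
else
  echo "$VIP is NOT in $SUBNET"
fi
```

Remember that being in the subnet isn't sufficient on its own: the VIPs must also be routable from the subnet's gateway.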
Last updated 2025-09-04 UTC.