Anthos clusters on bare metal has the following sets of installation prerequisites:
- The prerequisites for the workstation machine running the bmctl tool.
- The prerequisites for the node machines that are part of the Anthos clusters on bare metal deployment.
- The prerequisites for the load balancer machines.
- The prerequisites for the Google Cloud project.
- The prerequisites for your service accounts.
If you use the workstation machine as a cluster node machine, it must meet the prerequisites for both.
Before you begin
During installation, you must provide the following credentials:
- The private SSH keys needed to access cluster node machines.
- If you are not using root, the login name for the cluster node machines.
- The Google Cloud service account keys. Go to Creating and managing service account keys to learn more.
Ensure you have all the necessary credentials before attempting to install Anthos clusters on bare metal.
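One way to prepare the SSH credentials listed above is to generate a dedicated key pair and distribute its public key to each node. The sketch below is illustrative, not a required procedure; the key path and node address are placeholders.

```shell
# Create a dedicated SSH key pair for cluster node access.
# The key path is a placeholder; choose any path you prefer.
mkdir -p "$HOME/.ssh"
KEY_FILE="$HOME/.ssh/anthos-nodes"
ssh-keygen -q -t rsa -b 4096 -N "" -f "$KEY_FILE"
# Copy the public key to each cluster node machine, for example:
#   ssh-copy-id -i "${KEY_FILE}.pub" root@NODE_IP
```

You then reference the private key when configuring cluster access.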
Logging into gcloud
- Log in to gcloud as a user with gcloud auth application-default login:
gcloud auth application-default login
You need a Project Owner/Editor role to use the automatic API enablement and service account creation features, described below. You can also add the following IAM roles to the user:
- Service Account Admin
- Service Account Key Admin
- Project IAM Admin
- Compute Viewer
- Service Usage Admin
export GOOGLE_APPLICATION_CREDENTIALS=JSON_KEY_FILE
JSON_KEY_FILE specifies the path to your service account JSON key file.
export CLOUD_PROJECT_ID=$(gcloud config get-value project)
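Before continuing, it can help to confirm both environment variables are actually set. The helper below is a hypothetical convenience function, not part of gcloud or bmctl.

```shell
# Fail fast if required credential variables are missing before running bmctl.
# check_env is an illustrative helper; it checks each named variable in turn.
check_env() {
  for var in "$@"; do
    if [ -z "${!var:-}" ]; then
      echo "ERROR: $var is not set" >&2
      return 1
    fi
  done
}
check_env GOOGLE_APPLICATION_CREDENTIALS CLOUD_PROJECT_ID \
  || echo "Set the credentials described above before continuing."
```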
Workstation prerequisites
The bmctl workstation must meet the following prerequisites:
- Operating system is the same supported Linux distribution running on the cluster node machines.
- Docker version 19.03 or later installed.
- Non-root user is a member of the docker group (for instructions, go to Manage Docker as a non-root user).
- gcloud installed.
- More than 50 GiB of free disk space.
- Layer 3 connectivity to all cluster node machines.
- Access to all cluster node machines through SSH via private keys with passwordless root access. Access can be either direct or through sudo.
- Access to the control plane VIP.
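The free-space requirement above can be checked from the workstation itself. This is a minimal sketch; the mount point and threshold are taken from the list above but your layout may differ.

```shell
# Check that the workstation has more than 50 GiB free on /.
free_kib=$(df --output=avail -k / | tail -n 1 | tr -d ' ')
required_kib=$((50 * 1024 * 1024))   # 50 GiB expressed in KiB
if [ "$free_kib" -gt "$required_kib" ]; then
  echo "workstation disk: ok"
else
  echo "workstation disk: needs more than 50 GiB free"
fi
```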
Node machine prerequisites
The node machines have the following prerequisites:
- Their operating system is one of the supported Linux distributions.
- The Linux kernel version is 4.17.0 or newer. Ubuntu 18.04 and 18.04.1 are on Linux kernel version 4.15 and therefore incompatible.
- Meet the minimum hardware requirements.
- Internet access.
- Layer 3 connectivity to all other node machines.
- Access to the control plane VIP.
- Properly configured DNS nameservers.
- No duplicate host names.
- One of the following NTP services is enabled and working:
- A working package manager: apt, dnf, etc.
- On Ubuntu, you must disable Uncomplicated Firewall (UFW). Run systemctl stop ufw to disable it.
- On Ubuntu, starting with Anthos clusters on bare metal 1.8.2, you aren't required to disable AppArmor. If you deploy clusters using earlier releases of Anthos clusters on bare metal, disable AppArmor with the following command:
systemctl stop apparmor
- If you choose Docker as your container runtime, you must have Docker version 19.03 or later installed. If Docker isn't installed on your node machines, or an older version is installed, Anthos clusters on bare metal installs Docker 19.03.13 or later when you create clusters.
If you use the default container runtime, containerd, you don't need Docker, and installing Docker can cause issues. For more information, see the known issues.
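The kernel-version requirement above (4.17.0 or newer) can be verified on each node. The helper below is an illustrative sketch, not a supported tool.

```shell
# Check that the running kernel is at least the given MAJOR.MINOR version.
kernel_at_least() {  # usage: kernel_at_least MAJOR MINOR
  local major minor
  IFS=. read -r major minor _ <<< "$(uname -r)"
  [ "$major" -gt "$1" ] || { [ "$major" -eq "$1" ] && [ "$minor" -ge "$2" ]; }
}
if kernel_at_least 4 17; then
  echo "kernel ok"
else
  echo "kernel too old for Anthos clusters on bare metal"
fi
```

Run the same check on every node machine, since a single old kernel (for example, Ubuntu 18.04's 4.15) makes the node incompatible.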
Cluster creation only checks for the required free space for the Anthos clusters on bare metal system components. This change gives you more control on the space you allocate for application workloads. Whenever you install Anthos clusters on bare metal, ensure that the file systems backing the following directories have the required capacity and meet the following requirements:
/: 17 GiB (18,253,611,008 bytes).
/var/lib/containerd, depending on the container runtime:
- 30 GiB (32,212,254,720 bytes) for control plane nodes.
- 10 GiB (10,737,418,240 bytes) for worker nodes.
/var/lib/kubelet: 500 MiB (524,288,000 bytes).
/var/lib/etcd: 20 GiB (21,474,836,480 bytes, applicable to control plane nodes only).
Regardless of cluster version, the preceding lists of directories can be on the same or different partitions. If they are on the same underlying partition, then the space requirement is the sum of the space required for each individual directory on that partition. For all release versions, the cluster creation process creates the directories, if needed.
/etc/kubernetes directories are either non-existent or empty.
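The capacity requirements above can be scripted against. The sketch below reports available space for each directory using the worker-node values; it is illustrative only, and summing requirements for directories that share a partition is left to you.

```shell
# Per-directory capacity requirements (worker-node values, in bytes),
# taken from the list above.
declare -A required_bytes=(
  ["/"]=18253611008                    # 17 GiB
  ["/var/lib/containerd"]=10737418240  # 10 GiB (worker node)
  ["/var/lib/kubelet"]=524288000       # 500 MiB
)
for dir in "${!required_bytes[@]}"; do
  # df reports the file system backing the directory; missing directories
  # are reported as unknown rather than failing the loop.
  avail=$(df --output=avail -B1 "$dir" 2>/dev/null | tail -n 1 | tr -d ' ')
  echo "$dir requires ${required_bytes[$dir]} bytes; available: ${avail:-unknown}"
done
```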
In addition to the prerequisites for installing and running Anthos clusters on bare metal, customers are expected to comply with relevant standards governing their industry or business segment, such as PCI DSS requirements for businesses that process credit cards or Security Technical Implementation Guides (STIGs) for businesses in the defense industry.
Load balancer machines prerequisites
When your deployment doesn't have a specialized load balancer node pool, worker nodes or control plane nodes can form the load balancer node pool. In that case, those machines have additional prerequisites:
- Machines are in the same Layer 2 subnet.
- All VIPs are in the load balancer nodes subnet and routable from the gateway of the subnet.
- The gateway of the load balancer subnet listens for gratuitous ARP messages so that it can forward packets to the master load balancer.
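A quick way to sanity-check the routability requirement above is to ask the kernel whether it has a route to a VIP. The address below is a placeholder; substitute one of your own VIPs.

```shell
# Check whether a VIP (placeholder address) is routable from this machine.
VIP="10.200.0.50"
status=$(ip route get "$VIP" >/dev/null 2>&1 && echo routable || echo unreachable)
echo "VIP $VIP is $status"
```

This only confirms a route exists from the machine you run it on; it doesn't validate the gateway's gratuitous ARP behavior.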
Google Cloud project prerequisites
Before you install Anthos clusters on bare metal, enable the following services for your associated Google Cloud project:
You can also use the bmctl tool to enable these services.
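If you enable the services manually rather than through bmctl, a single gcloud call can handle them all. The service names below are assumptions based on typical Anthos deployments, not the authoritative list; confirm the exact set for your release before enabling.

```shell
# Illustrative list of APIs commonly required by Anthos deployments
# (assumed names; verify against the documented list for your release).
SERVICES=(
  anthos.googleapis.com
  anthosgke.googleapis.com
  gkeconnect.googleapis.com
  gkehub.googleapis.com
  cloudresourcemanager.googleapis.com
)
# Uncomment to enable (requires the Service Usage Admin role):
# gcloud services enable "${SERVICES[@]}" --project "${CLOUD_PROJECT_ID}"
echo "services to enable: ${#SERVICES[@]}"
```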
Service accounts prerequisites
In production environments, you should create separate service accounts for different purposes. Anthos clusters on bare metal needs the following types of Google Cloud service accounts, depending on their purpose:
- To access Container Registry (gcr.io), no special role is required.
- To register a cluster in a fleet, grant the roles/gkehub.admin IAM role to the service account on your Google Cloud project.
- To connect to fleets, grant the roles/gkehub.connect IAM role to the service account on your Google Cloud project.
To send logs and metrics to Google Cloud's operations suite, grant the following IAM roles to the service account on your Google Cloud project:
You can also use the bmctl tool to create these service accounts.
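If you create the accounts manually instead of through bmctl, the steps for a fleet-connect service account look roughly like the sketch below. The account name is a placeholder, and PROJECT_ID falls back to a dummy value when CLOUD_PROJECT_ID isn't set.

```shell
# Sketch: create a connect service account and grant roles/gkehub.connect,
# per the role list above. Names are placeholders.
PROJECT_ID="${CLOUD_PROJECT_ID:-my-project}"
SA_NAME="connect-agent-sa"
SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
# Uncomment to run against your project:
# gcloud iam service-accounts create "$SA_NAME" --project "$PROJECT_ID"
# gcloud projects add-iam-policy-binding "$PROJECT_ID" \
#     --member "serviceAccount:${SA_EMAIL}" --role roles/gkehub.connect
# gcloud iam service-accounts keys create connect-sa-key.json \
#     --iam-account "$SA_EMAIL"
echo "$SA_EMAIL"
```

Repeat the pattern with the appropriate role for each purpose listed above, keeping one account per purpose in production.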