This page shows you how to create an Anthos on bare metal user cluster by using the Google Cloud console, the Google Cloud CLI, or Terraform. A user cluster requires an admin cluster to manage it. The script provided in the first section of this tutorial creates an Anthos on bare metal admin cluster and gets you set up to create a user cluster: it creates Compute Engine virtual machines (VMs), configures a Virtual Extensible LAN (VXLAN) overlay network between the VMs, and installs the admin cluster on one of the VMs. You then create the user cluster on the remaining VMs that the script created.

This setup lets you try out Anthos clusters on bare metal quickly and without having to prepare any hardware. Completing the steps on this page gives you a working Anthos clusters on bare metal test environment that runs on Compute Engine.
What is the Anthos On-Prem API?
The Anthos On-Prem API is a Google Cloud-hosted API that lets you manage the lifecycle of your on-premises clusters using Terraform and standard Google Cloud applications. The Anthos On-Prem API runs in Google Cloud's infrastructure. Terraform, the console, and the gcloud CLI are clients of the API, and they use the API to create clusters in your data center.
To manage the lifecycle of your clusters, the Anthos On-Prem API must store metadata about your cluster's state in Google Cloud, using the Google Cloud region that you specify when creating the cluster. This metadata lets the API manage the cluster lifecycle and doesn't include workload-specific data.
When you create a cluster using an Anthos On-Prem API client, you specify a Google Cloud project. After the cluster is created, it is automatically registered to the specified project's fleet. This project is referred to as the fleet host project. The fleet host project can't be changed after the cluster is created.
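For example, after a cluster has been registered, you can see its fleet membership by listing the memberships in the fleet host project. This is a standard gcloud command; FLEET_HOST_PROJECT_ID is a placeholder for your own project ID:

gcloud container fleet memberships list --project=FLEET_HOST_PROJECT_ID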
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project. Learn how to check if billing is enabled on a project.
- Make a note of the project ID because you need it to set an environment variable that is used in the script and commands on this page. If you selected an existing project, make sure that you are either a project owner or editor.
- You can run the script in Cloud Shell or on your local machine running Linux or macOS. If you aren't using Cloud Shell:
- Make sure you have installed the latest Google Cloud CLI, the command line tool for interacting with Google Cloud, including the gcloud CLI beta components.
- If you don't already have the beta components, run the following command to install them:

  gcloud components install beta
- Update the gcloud CLI components, if needed:

  gcloud components update
Depending on how the gcloud CLI was installed, you might see the following message: "You cannot perform this action because the Google Cloud CLI component manager is disabled for this installation. You can run the following command to achieve the same result for this installation:" Follow the instructions to copy and paste the command to update the components.
- Make sure you have kubectl installed. If you need to install kubectl, run the following command:

  gcloud components install kubectl
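If you want to confirm your tooling before continuing, the following commands (all standard gcloud and kubectl invocations) print the installed versions and components; the exact output depends on your installation:

gcloud version
gcloud components list
kubectl version --client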
Create the VM infrastructure and admin cluster
Do the following steps to get set up and run the script. The script that you download and run is from the anthos-samples repository. Note that it takes about 15 to 20 minutes to create and configure the Compute Engine VMs and create the admin cluster. If you want to learn more about the script before you run it, see the next section, About the script.
Set up environment variables:
export PROJECT_ID=PROJECT_ID
export ADMIN_CLUSTER_NAME=ADMIN_CLUSTER_NAME
export ON_PREM_API_REGION=ON_PREM_API_REGION
export ZONE=ZONE

Replace the following:

- PROJECT_ID: The ID of the Google Cloud project that you noted earlier.
- ADMIN_CLUSTER_NAME: A name of your choice for the admin cluster.
- ON_PREM_API_REGION: The Google Cloud region in which the Anthos On-Prem API runs and stores its metadata. Specify us-central1 or another supported region.
- ZONE: The Google Cloud zone that the Compute Engine VMs will be created in. You can use us-central1-a or any of the other Compute Engine zones.
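For example, a filled-in set of variables might look like the following. The values shown are only illustrative; use your own project ID, a cluster name of your choice, and a supported region and zone:

export PROJECT_ID=example-project-123
export ADMIN_CLUSTER_NAME=example-admin-cluster
export ON_PREM_API_REGION=us-central1
export ZONE=us-central1-a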
Run the following commands to log in with your Google account and set the default project and zone.
gcloud auth login
gcloud config set project $PROJECT_ID
gcloud config set compute/zone $ZONE
Get a list of available versions that you can install:
gcloud beta container bare-metal admin-clusters query-version-config \
    --location=$ON_PREM_API_REGION
Select a version from the output of the previous command and set it in an environment variable:
export BMCTL_VERSION=VERSION
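For example, if 1.14.2 appears in the output of the previous command (your list of available versions might differ), you would set:

export BMCTL_VERSION=1.14.2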
Clone the anthos-samples repository and change to the directory where the script is located:

git clone https://github.com/GoogleCloudPlatform/anthos-samples
cd anthos-samples/anthos-bm-gcp-bash
Run the script:
bash install_admin_cluster.sh
The script outputs each command it runs and the status. When it finishes, the script outputs the following:
✅ Installation complete. Please check the logs for any errors!!!

✅ If you do not see any errors in the output log, then you now have the following setup:

|---------------------------------------------------------------------------------------------------------|
| VM Name               | L2 Network IP (VxLAN) | INFO                                                     |
|---------------------------------------------------------------------------------------------------------|
| abm-admin-cluster-cp1 | 10.200.0.3            | Has control plane of admin cluster running inside        |
| abm-user-cluster-cp1  | 10.200.0.4            | 🌟 Ready for use as control plane for the user cluster   |
| abm-user-cluster-w1   | 10.200.0.5            | 🌟 Ready for use as worker for the user cluster          |
| abm-user-cluster-w2   | 10.200.0.6            | 🌟 Ready for use as worker for the user cluster          |
|---------------------------------------------------------------------------------------------------------|
When you create the user cluster, you use the following IP addresses from the earlier output:
- abm-user-cluster-cp1: Use this IP address for the Control plane node.
- abm-user-cluster-w1: Use this IP address when you configure the default node pool.
- abm-user-cluster-w2: Use this IP address after the cluster is created, to add a node pool to the user cluster.
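If you want to confirm that the VMs the script created exist before continuing, you can list the Compute Engine instances whose names contain abm. This is the same filter that the cleanup section uses later:

gcloud compute instances list | grep 'abm'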
About the script
This section describes what the install_admin_cluster.sh script does and provides some background information on the admin/user cluster deployment model.
The script automates the following manual steps:
- Creates a service account called baremetal-gcr, and grants the service account additional permissions to avoid needing multiple service accounts for different APIs and services.
- Enables the following Google Cloud APIs:
  anthos.googleapis.com
  anthosaudit.googleapis.com
  anthosgke.googleapis.com
  cloudresourcemanager.googleapis.com
  connectgateway.googleapis.com
  container.googleapis.com
  gkeconnect.googleapis.com
  gkehub.googleapis.com
  gkeonprem.googleapis.com
  iam.googleapis.com
  logging.googleapis.com
  monitoring.googleapis.com
  opsconfigmonitoring.googleapis.com
  serviceusage.googleapis.com
  stackdriver.googleapis.com
  storage.googleapis.com
- Creates the following VMs:
- One VM for the admin workstation. An admin workstation hosts command-line interface (CLI) tools and configuration files to provision clusters during installation, and CLI tools for interacting with provisioned clusters post-installation. The admin workstation will have access to all the other nodes in the cluster via SSH.
- One VM for the control plane node of the admin cluster. An admin cluster manages user clusters, helping with creation, updates, and deletion of user clusters.
- Two VMs for the worker nodes of the user cluster. A user cluster hosts the user workloads that you deploy.
- One VM for the control plane node of the user cluster. The user cluster requires a control plane node.
- Creates a Virtual Extensible LAN (VXLAN) overlay network for layer 2 connectivity between the VMs. The network is set up on the 10.200.0.0/24 subnet. The layer 2 connectivity is a requirement for the bundled load balancer. For an illustrative sketch of how such an overlay is built, see the example after this list.
- Installs the following tools on the admin workstation:
  - bmctl
  - kubectl
  - Docker

  The script also downloads the service account key for the baremetal-gcr service account to the admin workstation.
- Ensures that SSH access as root@10.200.0.x from the admin workstation works by doing the following tasks:
  - Generates a new SSH key on the admin workstation.
  - Adds the public key to all the other VMs in the deployment.
- Creates the admin cluster. The script uses SSH to log in to the admin workstation as the root user, and then runs the bmctl command-line tool to create the admin cluster. bmctl is the same tool that you use to create clusters when you aren't using an Anthos On-Prem API client. Although you can also create user clusters with bmctl, this page shows you how to create the user cluster using the console, the gcloud CLI, or Terraform.

  When Anthos clusters on bare metal creates clusters, it deploys a Kubernetes in Docker (kind) cluster on the admin workstation. This bootstrap cluster hosts the Kubernetes controllers needed to create clusters and is used to create the admin cluster. Upon creation, the relevant controllers are moved from the bootstrap cluster into the admin cluster. Finally, unless you specify otherwise, the bootstrap cluster is removed when cluster creation completes successfully. The bootstrap cluster requires Docker to pull container images.
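The following is a minimal sketch of the kind of iproute2 commands the script runs on each VM to build the VXLAN overlay described earlier. The interface name, VXLAN ID, and addresses here are illustrative assumptions, not the script's exact values; see install_admin_cluster.sh in the anthos-samples repository for the real commands:

# Illustrative only: create a VXLAN device on top of the VM's primary NIC,
# give it this node's address in the 10.200.0.0/24 overlay, and bring it up.
ip link add vxlan0 type vxlan id 101 dstport 4789 dev ens4
ip addr add 10.200.0.3/24 dev vxlan0
ip link set up dev vxlan0

# Add a forwarding database entry per peer VM so unicast VXLAN works
# without multicast. PEER_INTERNAL_IP is a placeholder for each peer's
# internal Compute Engine IP address.
bridge fdb append to 00:00:00:00:00:00 dst PEER_INTERNAL_IP dev vxlan0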
The admin/user cluster deployment model separates the admin cluster, user cluster control plane, and user cluster worker nodes onto separate node machines. You use this model for data center environments at scale, as it provides greater fault tolerance by isolating the control plane from the worker nodes. The script creates and configures four nodes, which is suitable for an admin/user cluster testing environment. In a production environment, you would add additional node machines to configure the clusters for a high-availability (HA) deployment.
Verify the admin cluster
You can find your admin cluster's kubeconfig file on the admin workstation in the bmctl-workspace directory of the root account. To verify your deployment, complete the following steps.
SSH into the admin workstation as root:
gcloud compute ssh root@abm-ws --zone ${ZONE}
You can ignore any messages about updating the VM and complete this tutorial. If you plan to keep the VMs as a test environment, you might want to update the OS or upgrade to the next release as described in the Ubuntu documentation.
Set the KUBECONFIG environment variable with the path to the cluster's configuration file to run kubectl commands on the cluster.

export clusterid=ADMIN_CLUSTER_NAME
export KUBECONFIG=$HOME/bmctl-workspace/$clusterid/$clusterid-kubeconfig
kubectl get nodes
The output is similar to the following:
NAME                   STATUS   ROLES                  AGE   VERSION
abm-admin-cluster-cp   Ready    control-plane,master   91m   v1.24.2-gke.1900
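Optionally, you can look a little deeper to confirm that the admin cluster's system workloads are healthy. Both of the following are standard kubectl commands and run against the same kubeconfig:

kubectl cluster-info
kubectl get pods --all-namespaces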
Set the current context in an environment variable:
export CONTEXT="$(kubectl config current-context)"
Run the following gcloud command. This command does the following:

- Grants your user account the Kubernetes clusterrole/cluster-admin role on the cluster.
- Configures the cluster so that you can run kubectl commands on your local computer without having to SSH to the admin workstation.

Replace GOOGLE_ACCOUNT_EMAIL with the email address that is associated with your Google Cloud account. For example: --users=alex@example.com.

gcloud container fleet memberships generate-gateway-rbac \
    --membership=ADMIN_CLUSTER_NAME \
    --role=clusterrole/cluster-admin \
    --users=GOOGLE_ACCOUNT_EMAIL \
    --project=PROJECT_ID \
    --kubeconfig=$KUBECONFIG \
    --context=$CONTEXT \
    --apply
The output of this command is similar to the following, which is truncated for readability:
Validating input arguments.
Specified Cluster Role is: clusterrole/cluster-admin
Generated RBAC policy is:
--------------------------------------------
...
Applying the generate RBAC policy to cluster with kubeconfig: /root/bmctl-workspace/ADMIN_CLUSTER_NAME/ADMIN_CLUSTER_NAME-kubeconfig, context: ADMIN_CLUSTER_NAME-admin@ADMIN_CLUSTER_NAME
Writing RBAC policy for user: GOOGLE_ACCOUNT_EMAIL to cluster.
Successfully applied the RBAC policy to cluster.
When you are finished exploring, enter exit to log out of the admin workstation.
Run the following command on your local computer to get the kubeconfig entry that can access the cluster through the Connect gateway:

gcloud container fleet memberships get-credentials $ADMIN_CLUSTER_NAME
The output is similar to the following:
Starting to build Gateway kubeconfig...
Current project_id: PROJECT_ID
A new kubeconfig entry "connectgateway_PROJECT_ID_global_ADMIN_CLUSTER_NAME" has been generated and set as the current context.
You can now run kubectl commands through the Connect gateway:

kubectl get nodes
kubectl get namespaces
The output is similar to the following:
NAME                   STATUS   ROLES                  AGE   VERSION
abm-admin-cluster-cp   Ready    control-plane,master   94m   v1.24.2-gke.1900

NAME                            STATUS   AGE
anthos-creds                    Active   91m
anthos-identity-service         Active   94m
capi-kubeadm-bootstrap-system   Active   91m
capi-system                     Active   91m
cert-manager                    Active   94m
cluster-admin-cluster-1         Active   91m
default                         Active   94m
gke-connect                     Active   92m
gke-managed-metrics-server      Active   94m
kube-node-lease                 Active   94m
kube-public                     Active   94m
kube-system                     Active   94m
vm-system                       Active   94m
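Because the get-credentials command sets the Connect gateway entry as your current context, you can compare it with any other contexts in your local kubeconfig. These are standard kubectl commands; the context names in your output will differ:

kubectl config current-context
kubectl config get-contexts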
Create the user cluster
When the script created the L2 VXLAN for the VMs, it assigned the following IP addresses in the 10.200.0.0/24 network. You use these IP addresses when configuring network and node pool settings for the user cluster.
| VM Name | Network IP | Node description |
|---|---|---|
| abm-admin-cluster-cp1 | 10.200.0.3 | Control plane node for the admin cluster |
| abm-user-cluster-cp1 | 10.200.0.4 | Control plane node for the user cluster |
| abm-user-cluster-w1 | 10.200.0.5 | Worker node for the user cluster |
| abm-user-cluster-w2 | 10.200.0.6 | Another worker node for the user cluster |
You can use the Google Cloud console, the Google Cloud CLI, or Terraform to create the user cluster.
Console
Do the following steps to create a user cluster in the console:
In the console, go to the Anthos clusters page.
Make sure that the Google Cloud project in which you created the admin cluster is selected. You should see the admin cluster on the list.
The selected project is used as the fleet host project. This must be the same project that the admin cluster is registered to, which is set in the gkeConnect.projectID field in the admin cluster configuration file. After the user cluster is created, it is automatically registered to the selected project's fleet.

Click Create Cluster.
In the dialog box, click On-premises.
Next to Bare metal, click Configure.
Click Cluster basics in the left-navigation bar.
Cluster basics
Enter a name for the user cluster or use the default.
Make sure that the newly created admin cluster is selected. You can use the defaults for the rest of the settings on this page.
Click Networking in the left-navigation bar.
Networking
In the Control plane section, enter the following in the Control plane node IP 1 field:
10.200.0.4
This is the IP address of the abm-user-cluster-cp1 VM in the VXLAN created by the script.
In the Load balancer section, use the default load balancer, Bundled with MetalLB.
In the New address pool section, enter the following IP address range in the IP address range 1 field:
10.200.0.51-10.200.0.70
Click Done.
In the Virtual IPs section, enter the following IP address in the Control Plane VIP field:
10.200.0.50
Enter the following IP address for the Ingress VIP:
10.200.0.51
Use the default IP addresses in the Service and Pod CIDRs section.
Click default pool in the left-navigation bar.
Create a node pool
Your cluster must have at least one node pool for worker nodes. A node pool is a template for the groups of worker nodes created in this cluster.
Enter the following IP address in the Nodes address 1 field:
10.200.0.5
This is the IP address of the abm-user-cluster-w1 VM in the VXLAN created by the script.
Create the cluster
Click Verify and Create to create the user cluster.
It takes 15 minutes or more to create the user cluster. The console displays status messages as it verifies the settings and creates the cluster.
If there is a problem with the configuration, the console displays an error message that should be clear enough for you to fix the configuration issue and try again to create the cluster.
To see additional information about the creation process, click Show details to display a side panel. Click the close icon to close the details panel.

When the cluster is created, Cluster status: running is displayed.
After the cluster is created, click Clusters to go back to the Clusters page.
gcloud CLI
You use the following command to create a user cluster:
gcloud beta container bare-metal clusters create
After creating the cluster, you need to create at least one node pool using the following command:
gcloud beta container bare-metal node-pools create
To create the user cluster:
Ensure the environment variables that you defined previously have the correct values. The example commands to create the user cluster and node pools use the environment variables.
echo $PROJECT_ID
echo $ADMIN_CLUSTER_NAME
echo $ON_PREM_API_REGION
echo $BMCTL_VERSION
Run the following command to create the user cluster. Replace the following:

- USER_CLUSTER_NAME: The name for the cluster.
- YOUR_EMAIL_ADDRESS: The email address that is associated with your Google account.
The rest of the flag values have been filled out for you.
gcloud beta container bare-metal clusters create USER_CLUSTER_NAME \
    --project=$PROJECT_ID \
    --admin-cluster-membership=projects/$PROJECT_ID/locations/global/memberships/$ADMIN_CLUSTER_NAME \
    --location=$ON_PREM_API_REGION \
    --version=$BMCTL_VERSION \
    --admin-users=YOUR_EMAIL_ADDRESS \
    --metal-lb-address-pools='pool=lb-pool-1,manual-assign=True,addresses=10.200.0.51-10.200.0.70' \
    --control-plane-node-configs='node-ip=10.200.0.4' \
    --control-plane-vip=10.200.0.50 \
    --control-plane-load-balancer-port=443 \
    --ingress-vip=10.200.0.51 \
    --island-mode-service-address-cidr-blocks=10.96.0.0/20 \
    --island-mode-pod-address-cidr-blocks=192.168.0.0/16 \
    --lvp-share-path=/mnt/localpv-share \
    --lvp-share-storage-class=local-shared \
    --lvp-node-mounts-config-path=/mnt/localpv-disk \
    --lvp-node-mounts-config-storage-class=local-disks
The following list describes the flags:
- --project: Both the admin cluster and the user cluster are registered to a fleet on creation. This project is referred to as the fleet host project. The user cluster's project ID must match the project ID configured in the gkeConnect.projectID field in the admin cluster's configuration file.
- --admin-cluster-membership: The fully-specified admin cluster name that identifies the admin cluster in the fleet.
- --location: The Google Cloud region in which the Anthos On-Prem API runs and stores its metadata.
- --version: The Anthos clusters on bare metal version.
- --admin-users: Include your email address to be granted the Kubernetes role-based access control (RBAC) policies that give you full administrative access to the cluster.
- --metal-lb-address-pools: The address pool configuration for the bundled MetalLB load balancer. The IP address range must be in the 10.200.0.0/24 network that the script created. The address range must not contain the IP addresses assigned to the VMs nor the control plane VIP. Note, however, that the ingress VIP must be in this address range.
- --control-plane-node-configs: The control plane node configuration for the user cluster. The value for node-ip is 10.200.0.4, which is the IP address that the script assigned to the VM abm-user-cluster-cp1.
- --control-plane-vip: The virtual IP for the control plane. The value, 10.200.0.50, is in the 10.200.0.0/24 network that the script created, but doesn't overlap with the IP address range used for the MetalLB load balancer address pools.
- --control-plane-load-balancer-port: The port that the load balancer serves the control plane on. Although you can configure another value, port 443 is the standard port used for HTTPS connections.
- --ingress-vip: The virtual IP for the ingress service. This IP address must be in the IP address range used for the MetalLB load balancer address pools.
- --island-mode-service-address-cidr-blocks: A range of IP addresses, in CIDR format, to be used for Services in the user cluster. The example command uses 10.96.0.0/20, which is the default value provided by the console. The CIDR range must be between /24 and /12, where /12 provides the most IP addresses. We recommend that you use a range in the IP address space for private internets, as defined in RFC 1918.
- --island-mode-pod-address-cidr-blocks: A range of IP addresses, in CIDR format, to be used for Pods in the user cluster. The example command uses 192.168.0.0/16, which is the default value provided by the console. The CIDR range must be between /18 and /8, where /8 provides the most IP addresses. We recommend that you use a range in the IP address space for private internets, as defined in RFC 1918.
- --lvp-share-path: The host machine path where subdirectories can be created. A local PersistentVolume (PV) is created for each subdirectory.
- --lvp-share-storage-class: The StorageClass to use to create persistent volumes. The StorageClass is created during cluster creation.
- --lvp-node-mounts-config-path: The host machine path where mounted disks can be discovered. A local PersistentVolume (PV) is created for each mount.
- --lvp-node-mounts-config-storage-class: The StorageClass that PVs are created with during cluster creation.
After running the command, you see output like the following:
Waiting for operation [projects/PROJECT_ID/locations/ON_PREM_API_REGION/operations/operation-1678304606537-5f668bde5c57e-341effde-b612ff8a] to complete...
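Optionally, you can check on the long-running operation from another terminal while you wait. The operations commands shown here are part of the beta bare-metal command group; if they aren't available in your version of the gcloud CLI, skip this step:

gcloud beta container bare-metal operations list \
    --project=$PROJECT_ID \
    --location=$ON_PREM_API_REGION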
It takes 15 minutes or more to create the cluster. While the cluster is being created, you can review the preceding list of flag descriptions; the flags correspond to the settings that you configure when you create a cluster in the console.
When the cluster is created, you see output like the following:
Created Anthos cluster on bare metal [https://gkeonprem.googleapis.com/v1/projects/PROJECT_ID/locations/ON_PREM_API_REGION/bareMetalClusters/USER_CLUSTER_NAME].
NAME LOCATION VERSION ADMIN_CLUSTER STATE
USER_CLUSTER_NAME ON_PREM_API_REGION 1.13.8 ADMIN_CLUSTER_NAME RUNNING
Create a node pool
After the cluster is successfully created, run the following command to create a node pool. Replace NODE_POOL_NAME with a name for the node pool, and ensure that the placeholder for the --cluster flag is still set to the user cluster's name.
gcloud beta container bare-metal node-pools create NODE_POOL_NAME \
    --cluster=USER_CLUSTER_NAME \
    --project=$PROJECT_ID \
    --location=$ON_PREM_API_REGION \
    --node-configs='node-ip=10.200.0.5'
- --node-configs: The value assigned to node-ip is the IP address of the abm-user-cluster-w1 VM in the VXLAN created by the script.
After running the command, you see output like the following:
Waiting for operation [projects/PROJECT_ID/locations/ON_PREM_API_REGION/operations/operation-1678308682052-5f669b0d132cb-6ebd1c2c-816287a7] to complete...
It takes about 5 minutes or less to create the node pool. When the node pool is created, you see output like the following:
Created node pool in Anthos cluster on bare metal [https://gkeonprem.googleapis.com/v1/projects/PROJECT_ID/locations/ON_PREM_API_REGION/bareMetalClusters/USER_CLUSTER_NAME/bareMetalNodePools/NODE_POOL_NAME].
NAME LOCATION STATE
NODE_POOL_NAME ON_PREM_API_REGION RUNNING
Other user cluster commands
In addition to creating clusters, there are other gcloud CLI commands that you can run, for example:
- To list your user clusters:
gcloud beta container bare-metal clusters list \
    --project=$PROJECT_ID \
    --location=$ON_PREM_API_REGION
- To describe a user cluster:
gcloud beta container bare-metal clusters describe USER_CLUSTER_NAME \
    --project=$PROJECT_ID \
    --location=$ON_PREM_API_REGION
For more information, see gcloud beta container bare-metal clusters.
Other node pool commands
In addition to creating node pools, there are other gcloud CLI commands that you can run, for example:
- To list node pools:
gcloud beta container bare-metal node-pools list \
    --cluster=USER_CLUSTER_NAME \
    --project=$PROJECT_ID \
    --location=$ON_PREM_API_REGION
- To describe a node pool:
gcloud beta container bare-metal node-pools describe NODE_POOL_NAME \
    --cluster=USER_CLUSTER_NAME \
    --project=$PROJECT_ID \
    --location=$ON_PREM_API_REGION
For more information, see gcloud beta container bare-metal node-pools.
Terraform
You can use the following basic configuration sample to create a user cluster with the bundled MetalLB load balancer. For more information, see the google_gkeonprem_bare_metal_cluster reference documentation.
In the directory where you cloned anthos-samples, change to the directory where the Terraform sample is located:

cd anthos-samples/anthos-onprem-terraform/abm_user_cluster_metallb
The sample provides an example variables file to pass in to main.tf.

Make a copy of the terraform.tfvars.sample file:

cp terraform.tfvars.sample terraform.tfvars
Modify the parameter values in terraform.tfvars and save the file. The following list describes the variables:
- project_id: The project ID that you set at the beginning of the tutorial. Run echo $PROJECT_ID to get the value. Both the admin cluster and the user cluster are registered to a fleet on creation. This project is referred to as the fleet host project.
- region: The Google Cloud region that you set at the beginning of the tutorial. Run echo $ON_PREM_API_REGION to get the value.
- admin_cluster_name: The name of the admin cluster that you set at the beginning of this tutorial. Run echo $ADMIN_CLUSTER_NAME to get the value.
- bare_metal_version: The Anthos clusters on bare metal version for your user cluster. To use the same version that you used for the admin cluster, run echo $BMCTL_VERSION to get the value. If you prefer, you can specify a version that is no more than one minor version lower than the admin cluster version. The user cluster version can't be higher than the admin cluster version.
- cluster_name: You can either use the name in the TVARS file for the user cluster or specify a name of your choice. The name can't be changed after the cluster is created.
- admin_user_emails: A list of email addresses of the users to be granted administrative privileges on the cluster. Be sure to add your email address so that you can administer the cluster.

  When the cluster is created, the Anthos On-Prem API applies the Kubernetes role-based access control (RBAC) policies to the cluster to grant the admin users the Kubernetes clusterrole/cluster-admin role, which provides full access to every resource in the cluster in all namespaces. This also lets users log in to the console using their Google identity.
Use the default values defined in terraform.tfvars for the remaining variables. The script used these values when it created the VMs and admin cluster.

- control_plane_ips: A list of one or more IPv4 addresses for the control plane nodes. Use the default value, which is the IP address that the script assigned to the VM abm-user-cluster-cp1.
- worker_node_ips: A list of one or more IPv4 addresses for the worker node machines. Use the default values, which are the IP addresses that the script assigned to the VMs abm-user-cluster-w1 and abm-user-cluster-w2.
- control_plane_vip: The virtual IP (VIP) for the control plane. Use the default value, 10.200.0.50, which is in the 10.200.0.0/24 network that the script created. Note that this IP address doesn't overlap with the IP address range used for the MetalLB load balancer address pools.
- ingress_vip: The virtual IP address to configure on the load balancer for the ingress proxy. Use the default value, 10.200.0.51, which is in the 10.200.0.0/24 network that the script created. Note that this IP address is in the IP address range used for the MetalLB load balancer address pools.
- lb_address_pools: A list of maps that define the address pools for the MetalLB load balancer. Use the default value.
Save the changes in terraform.tfvars.

Initialize and create the Terraform plan:
terraform init
Terraform installs any needed libraries, such as the Google Cloud provider.
Review the configuration and make changes if needed:
terraform plan
Apply the Terraform plan to create the user cluster:
terraform apply
It takes 15 minutes or more to create the user cluster. You can view the cluster in the Google Cloud console on the Anthos clusters page.
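Optionally, you can inspect what Terraform recorded for the new cluster. Both commands read only the local state and don't change anything:

terraform state list
terraform show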
Connect to the user cluster
When you create a user cluster using the console or the gcloud CLI, the cluster is configured with the same Kubernetes role-based access control (RBAC) policies that you configured for the admin cluster when you ran gcloud container fleet memberships generate-gateway-rbac.

These RBAC policies let you connect to the cluster using your Google Cloud identity, which is the email address associated with your Google Cloud account. These RBAC policies let you log in to the console without any additional configuration.
Connect to the cluster in the console
If you used the gcloud CLI to create the user cluster, go to the Anthos clusters page in the console:
Go to the Anthos clusters page
Make sure that the project in which you created the user cluster is selected. You should see both the admin and user cluster on the list.
Notice that the user cluster has Anthos (Bare metal: User) in the Type column. This indicates that the cluster is managed by the Anthos On-Prem API.
The admin cluster has External in the Type column. This indicates that the cluster isn't managed by the Anthos On-Prem API.
Although the admin cluster was created by the script using bmctl, you can configure the admin cluster to be managed by the Anthos On-Prem API.
To log in to a cluster:
Click the link on the cluster name, and on the side panel, click Login.
Select Use your Google identity to log in.
Click Login.
Repeat the same steps to log into the admin cluster as well.
Connect to the cluster on the command line
The Anthos On-Prem API configures the RBAC policies for you as the user cluster creator. These policies let you run kubectl commands on your local desktop using the Connect gateway's kubeconfig.
From your local computer:
Get the kubeconfig entry that can access the cluster through the Connect gateway:

gcloud container fleet memberships get-credentials USER_CLUSTER_NAME
The output is similar to the following:
Starting to build Gateway kubeconfig...
Current project_id: PROJECT_ID
A new kubeconfig entry "connectgateway_PROJECT_ID_global_USER_CLUSTER_NAME" has been generated and set as the current context.
You can now run kubectl commands through the Connect gateway:

kubectl get nodes
The output is similar to the following:
NAME                  STATUS   ROLES                  AGE     VERSION
abm-user-cluster-cp   Ready    control-plane,master   14m     v1.24.2-gke.1900
abm-user-cluster-w1   Ready    worker                 8m28s   v1.24.2-gke.1900
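As an optional smoke test of the bundled MetalLB load balancer, you can deploy a small workload and expose it as a LoadBalancer Service. MetalLB should assign the Service an address from the 10.200.0.51-10.200.0.70 pool that you configured; the nginx image is just a convenient public example:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
kubectl get service nginx

Note that the assigned address is on the VXLAN overlay, so you can only reach it from one of the VMs, for example with curl from the admin workstation. When you're done, delete the test resources with kubectl delete service nginx and kubectl delete deployment nginx.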
Add another node pool to the user cluster
Console
In the Google Cloud console, go to the Anthos clusters page.
In the cluster list, click the name of the cluster, and then click More details in the Details panel.
Click the Nodes tab.
Click Add Node Pool.

Enter a name for the node pool.
In the Nodes address 1 field, enter the following IP address:
10.200.0.6
This is the IP address of the abm-user-cluster-w2 VM that the script created.
Click Create.
Click the Nodes tab again if needed.
The new node pool shows a status of Reconciling.
Click Notifications in the top-right corner to view the status of the node pool creation. You might have to refresh the page to see the updated status in the node pools list.
gcloud CLI
Run the following command to create another node pool. Replace NODE_POOL_NAME_2 with a name for the node pool, and ensure that the placeholder for the --cluster flag is still set to the user cluster's name.
gcloud beta container bare-metal node-pools create NODE_POOL_NAME_2 \
    --cluster=USER_CLUSTER_NAME \
    --project=$PROJECT_ID \
    --location=$ON_PREM_API_REGION \
    --node-configs='node-ip=10.200.0.6'
- --node-configs: The value assigned to node-ip is the IP address of the abm-user-cluster-w2 VM in the VXLAN created by the script.
Terraform
If you created the cluster using Terraform, the cluster was created with two worker nodes, so there aren't any additional VMs in the VXLAN available to add another node. For information about adding node pools, see the google_gkeonprem_bare_metal_cluster reference documentation.
You can also verify the new node using kubectl. You first have to run the gcloud container fleet memberships get-credentials command as shown earlier to fetch the cluster config:
kubectl get nodes
The output is similar to the following:
NAME STATUS ROLES AGE VERSION
abm-user-cluster-cp Ready control-plane,master 24m v1.24.2-gke.1900
abm-user-cluster-w1 Ready worker 18m v1.24.2-gke.1900
abm-user-cluster-w2 Ready worker 52s v1.24.2-gke.1900
Clean up
Delete the user cluster
Console
In the console, go to the Anthos clusters page.
In the list of clusters, click the user cluster.
In the Details panel, click More details.
Near the top of the window, click Delete.

When prompted to confirm, enter the cluster name and click Confirm.
Click Notifications in the top-right corner to view the status of the deletion. You might have to refresh the page to update the clusters list.
Wait for the user cluster to be deleted before deleting the admin cluster and VMs.
gcloud CLI
Run the following command to delete the cluster:
gcloud beta container bare-metal clusters delete USER_CLUSTER_NAME \
    --project=$PROJECT_ID \
    --location=$ON_PREM_API_REGION \
    --force
The --force flag lets you delete a cluster that has node pools. Without the --force flag, you have to delete the node pools first, and then delete the cluster.
For information about other flags, see gcloud beta container bare-metal clusters delete.
Terraform
Run the following command:
terraform destroy
Delete the admin cluster and VMs
Connect to the admin workstation to reset the cluster VMs to their state prior to installation and unregister the cluster from your Google Cloud project:
gcloud compute ssh root@abm-ws --zone ${ZONE} << EOF
set -x
export clusterid=ADMIN_CLUSTER_NAME
bmctl reset -c \$clusterid
EOF
Wait for the cluster to be deleted.
List all VMs that have abm in their name:

gcloud compute instances list | grep 'abm'
Verify that you're fine with deleting all VMs that contain abm in the name. After you've verified, you can delete the abm VMs by running the following command:

gcloud compute instances list --format="value(name)" | \
    grep 'abm' | \
    xargs gcloud --quiet compute instances delete
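To confirm that the VMs were deleted, run the list command again; it should produce no output once all of the abm VMs are gone:

gcloud compute instances list | grep 'abm'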
Delete the service account:
gcloud iam service-accounts delete baremetal-gcr@PROJECT_ID.iam.gserviceaccount.com
At the confirmation prompt, enter y.
What's next