This is the second part of a guide that walks you through a small proof-of-concept installation of Google Distributed Cloud. The first part is Set up minimal infrastructure, which shows you how to plan your IP addresses and set up the necessary vSphere and Google Cloud infrastructure for your deployment. This document builds on the setup and planning you did in the previous section and shows you how to create an admin workstation, admin cluster, and user cluster in your vSphere environment, using simple templates that you can fill in here in this document. You can then go on to deploy an application.
As with the infrastructure setup of this simple installation, the clusters you set up using this document might not be suitable for your actual production needs and use cases. For much more information, best practices, and instructions for production installations, see the installation guides.
Before you begin
Ensure that you have set up your vSphere and Google Cloud environments as described in Set up minimal infrastructure.
If you want to use Terraform to create the user cluster, you need Terraform installed on your admin workstation or on another computer.
Procedure overview
These are the primary steps involved in this setup:
Log in to the Google Cloud CLI with an account that has the necessary permissions to create service accounts.
Gather information that you need to configure Google Distributed Cloud, including your vCenter username and password, and the IP addresses that you prepared in the previous section.
Create an admin workstation with the resources and tools you need to create admin and user clusters, including the additional service accounts you need to finish the setup.
Create an admin cluster to manage and update your user cluster.
Create a user cluster to run your workloads.
1. Log in to the Google Cloud CLI
Setting up Google Distributed Cloud requires multiple service accounts with different permissions. While you must create your component access service account manually, the `gkeadm` command-line tool can create and configure the remaining accounts for you as part of creating the admin workstation. To do this, however, you must be logged in to the Google Cloud CLI with an account that has the necessary permissions to create and configure service accounts, because `gkeadm` uses your current gcloud CLI `account` property when doing this setup.
1. Log in to the gcloud CLI. You can use any Google account, but it must have the required permissions. If you followed the previous part of this guide, you have probably already logged in with an appropriate account to create your component access service account.

   ```
   gcloud auth login
   ```

2. Verify that your gcloud CLI `account` property is set correctly:

   ```
   gcloud config list
   ```

   The output shows the value of your SDK `account` property. For example:

   ```
   [core]
   account = my-name@google.com
   disable_usage_reporting = False
   Your active configuration is: [default]
   ```

3. Make sure you have the latest gcloud CLI components installed:

   ```
   gcloud components update
   ```

   Depending on how you installed the gcloud CLI, you might see the following message: "You cannot perform this action because the Google Cloud CLI component manager is disabled for this installation. You can run the following command to achieve the same result for this installation:" Follow the instructions to copy and paste the command to update the components.
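The account check in step 2 can also be scripted. The following is a minimal sketch that parses the account out of `gcloud config list`-style output; the sample text stands in for the real command output, and the variable names are illustrative only:

```shell
# Parse the active account from sample `gcloud config list` output.
# In practice, replace the sample with: config_output="$(gcloud config list 2>/dev/null)"
config_output='[core]
account = my-name@google.com
disable_usage_reporting = False'

# Print the value of the "account" line.
active="$(echo "$config_output" | sed -n 's/^account = //p')"
echo "$active"
```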
2. Gather information
Use the information that you prepared in Set up minimal infrastructure to edit the placeholders in the following table:
| vSphere details | |
|---|---|
| The username of your vCenter account | USERNAME |
| The password of your vCenter account | PASSWORD |
| Your vCenter Server address | ADDRESS |
| The path to the root CA certificate for your vCenter Server, on the machine you're going to use to create your admin workstation | CA_CERT_PATH |
| The name of your vSphere data center | DATA_CENTER |
| The name of your vSphere cluster | VSPHERE_CLUSTER |
| The name or path of your vSphere resource pool. For more information, see vcenter.resourcePool. | RESOURCE_POOL |
| The name of your vSphere datastore | DATASTORE |
| The name of your vSphere network | NETWORK |
| IP addresses | |
|---|---|
| One IP address for your admin workstation | ADMIN_WS_IP |
| Four IP addresses for your admin cluster nodes. This includes an address for an extra node that can be used during upgrade and update. | ADMIN_NODE_IP_1, ADMIN_NODE_IP_2, ADMIN_NODE_IP_3, ADMIN_NODE_IP_4 |
| An IP address for the control-plane node in the user cluster | USER_CONTROL_PLANE_NODE_IP |
| Four IP addresses for your user cluster nodes. This includes an address for an extra node that can be used during upgrade and update. | USER_NODE_IP_1, USER_NODE_IP_2, USER_NODE_IP_3, USER_NODE_IP_4 |
| A virtual IP address (VIP) for the admin cluster Kubernetes API server | ADMIN_CONTROL_PLANE_VIP |
| A VIP for the user cluster Kubernetes API server | USER_CONTROL_PLANE_VIP |
| An Ingress VIP for the user cluster | USER_INGRESS_VIP |
| Two VIPs for Services of type LoadBalancer in your user cluster | SERVICE_VIP_1, SERVICE_VIP_2 |
| The IP address of a DNS server that is reachable from your admin workstation and cluster nodes | DNS_SERVER_IP |
| The IP address of an NTP server that is reachable from your admin workstation and cluster nodes | NTP_SERVER_IP |
| The IP address of the default gateway for the subnet that has your admin workstation and cluster nodes | DEFAULT_GATEWAY_IP |
| The netmask for the subnet that has your admin workstation and cluster nodes. Example: 255.255.255.0 | NETMASK |
| If your network is behind a proxy server, the URL of the proxy server. For more information, see proxy. Fill this in manually in your admin workstation configuration file if needed. | PROXY_URL |
The admin cluster and user cluster each need a CIDR range for Services and a CIDR range for Pods. Use the following prepopulated values unless you need to change them to avoid overlap with other elements in your network:

| CIDR ranges for Services and Pods | |
|---|---|
| A CIDR range for Services in the admin cluster | 10.96.232.0/24 |
| A CIDR range for Pods in the admin cluster | 192.168.0.0/16 |
| A CIDR range for Services in the user cluster | 10.96.0.0/20 |
| A CIDR range for Pods in the user cluster | 192.168.0.0/16 |
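If you do need to change the prepopulated CIDR ranges, check them against the other ranges in your network. As a rough sketch (not part of the official tooling), the helper below compares two CIDR ranges by masking both networks to the shorter prefix; the function names and the second example range are hypothetical:

```shell
#!/bin/bash
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo "$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))"
}

# Report whether two CIDR ranges overlap by comparing both networks
# under the shorter of the two prefix lengths.
cidr_overlap() {
  local a_net="${1%/*}" a_len="${1#*/}"
  local b_net="${2%/*}" b_len="${2#*/}"
  local min_len=$(( a_len < b_len ? a_len : b_len ))
  local mask=$(( (0xFFFFFFFF << (32 - min_len)) & 0xFFFFFFFF ))
  if [ "$(( $(ip_to_int "$a_net") & mask ))" -eq "$(( $(ip_to_int "$b_net") & mask ))" ]; then
    echo "overlap: $1 and $2"
  else
    echo "ok: $1 and $2 do not overlap"
  fi
}

# The prepopulated admin and user Service CIDRs do not overlap:
cidr_overlap 10.96.232.0/24 10.96.0.0/20
# A hypothetical conflicting range in your network:
cidr_overlap 10.96.0.0/20 10.96.4.0/22
```

Note that the two Pod ranges can both use 192.168.0.0/16, because each applies only inside its own cluster.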
| Google Cloud details | |
|---|---|
| The ID of your chosen Cloud project | PROJECT_ID |
| The path to the JSON key file for the component access service account that you set up in the previous section, on the machine that you're going to use to create your admin cluster | COMPONENT_ACCESS_SA_KEY_PATH |
| The email address that is associated with your Google Cloud account. For example: alex@example.com | GOOGLE_ACCOUNT_EMAIL |
3. Create an admin workstation
Before you can create any clusters, you need to create an admin workstation and then connect to it using SSH. The admin workstation is a standalone VM with the tools and resources you need to create GKE Enterprise clusters in your vSphere environment. You use the `gkeadm` command-line tool to create the admin workstation.
Download gkeadm

Download `gkeadm` to your current directory and make it executable:

```
gcloud storage cp gs://gke-on-prem-release/gkeadm/1.15.9-gke.20/linux/gkeadm ./
chmod +x gkeadm
```

You need the `gkeadm` version (which is also the version of Google Distributed Cloud) to create your admin and user cluster config files. To check the version of `gkeadm`, run the following:

```
./gkeadm version
```

The following example output shows the version:

```
gkeadm 1.15.10 (1.15.9-gke.20)
```
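If you script around `gkeadm`, the Google Distributed Cloud version is the value in parentheses. A minimal sketch that extracts it, using the sample output above (in practice you would pipe `./gkeadm version` instead):

```shell
# Sample output from `./gkeadm version`; substitute the real command output.
version_output="gkeadm 1.15.10 (1.15.9-gke.20)"

# Extract the value in parentheses, which is the Google Distributed Cloud version.
gdc_version="$(echo "$version_output" | sed -E 's/.*\((.+)\).*/\1/')"
echo "$gdc_version"
```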
Although you can download another version of `gkeadm`, this guide assumes that you are installing 1.15.9-gke.20, and uses that version in all configuration files and commands.
Create your credentials file
Create and save a file named `credential.yaml` in your current directory with the following content:

```yaml
apiVersion: v1
kind: CredentialFile
items:
- name: vCenter
  username: "USERNAME"
  password: "PASSWORD"
```
Create your admin workstation configuration file
Create and save a file named `admin-ws-config.yaml`, again in your current directory, with the following content:

```yaml
gcp:
  componentAccessServiceAccountKeyPath: "COMPONENT_ACCESS_SA_KEY_PATH"
vCenter:
  credentials:
    address: "ADDRESS"
    fileRef:
      path: "credential.yaml"
      entry: "vCenter"
  datacenter: "DATA_CENTER"
  datastore: "DATASTORE"
  cluster: "VSPHERE_CLUSTER"
  network: "NETWORK"
  resourcePool: "RESOURCE_POOL"
  caCertPath: "CA_CERT_PATH"
proxyUrl: ""
adminWorkstation:
  name: "minimal-installation-admin-workstation"
  cpus: 4
  memoryMB: 8192
  diskGB: 50
  dataDiskName: gke-on-prem-admin-workstation-data-disk/minimal-installation-data-disk.vmdk
  dataDiskMB: 512
  network:
    ipAllocationMode: "static"
    hostConfig:
      ip: "ADMIN_WS_IP"
      gateway: "DEFAULT_GATEWAY_IP"
      netmask: "NETMASK"
      dns:
      - "DNS_SERVER_IP"
  proxyUrl: ""
  ntpServer: ntp.ubuntu.com
```
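Before running `gkeadm`, it can be worth confirming that no placeholder values remain in your files. The sketch below scans for quoted ALL_CAPS tokens, which is how the placeholders in this guide are written; the sample file and its path are illustrative only:

```shell
# A sample file standing in for your real config, with one placeholder left in.
cat > /tmp/sample-config.yaml <<'EOF'
vCenter:
  address: "ADDRESS"
  datacenter: "my-datacenter"
EOF

# List any lines that still contain a quoted ALL_CAPS placeholder.
grep -nE '"[A-Z][A-Z0-9_]+"' /tmp/sample-config.yaml || echo "no placeholders left"
```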
Create your admin workstation
Create your admin workstation using the following command:

```
./gkeadm create admin-workstation --auto-create-service-accounts
```

Running this command:

- Creates your admin workstation
- Automatically creates any additional service accounts you need for your installation
- Creates template configuration files for your admin and user clusters

The output gives detailed information about the creation of your admin workstation and provides a command that you can use to get an SSH connection to it:

```
...
Admin workstation is ready to use.
Admin workstation information saved to
/usr/local/google/home/me/my-admin-workstation
This file is required for future upgrades
SSH into the admin workstation with the following command:
ssh -i /usr/local/google/home/me/.ssh/gke-admin-workstation ubuntu@172.16.20.49
********************************************************************
```
For more detailed information about creating an admin workstation, see Create an admin workstation.
Connect to your admin workstation
Use the command displayed in the preceding output to get an SSH connection to your admin workstation. For example:

```
ssh -i /usr/local/google/home/me/.ssh/gke-admin-workstation ubuntu@172.16.20.49
```

If you need to find this command again, `gkeadm` generates a file called `gke-admin-ws-...` in the directory on your local machine where you ran `gkeadm create admin-workstation`. This file contains details about your admin workstation, including the SSH command.
View generated files
On your admin workstation, list the files in the home directory:

```
ls -1
```

The output should include:

- `admin-cluster.yaml`, a template config file for creating your admin cluster.
- `user-cluster.yaml`, a template config file for creating your user cluster.
- The vCenter certificate file that you specified in your admin workstation configuration.
- The `credential.yaml` file that you specified in your admin workstation configuration.
- JSON key files for two service accounts that `gkeadm` created for you: a connect-register service account and a logging-monitoring service account, as well as the key file for the component access service account you created earlier.

For example:

```
admin-cluster.yaml
admin-ws-config.yaml
sa-key.json
connect-register-sa-2203040617.json
credential.yaml
log-mon-sa-2203040617.json
logs
vc01-cert.pem
user-cluster.yaml
```
You'll need to specify some of these filenames in your configuration files to create clusters. Use the filenames as values for the placeholders in the following table:
| File name | Placeholder |
|---|---|
| Connect-register service account key file name. Example: connect-register-sa-2203040617.json | CONNECT_REGISTER_SA_KEY_PATH |
| Logging-monitoring service account key file name. Example: log-mon-sa-2203040617.json | LOG_MON_SA_KEY_PATH |
| Component access service account key file name. Example: sa-key.json | COMPONENT_ACCESS_SA_KEY_FILE |
| vCenter certificate file name. Example: vc01-cert.pem | CA_CERT_FILE |
4. Create an admin cluster
Now that you have an admin workstation configured with your vCenter and other details, you can use it to create an admin cluster in your vSphere environment. Ensure you have an SSH connection to your admin workstation, as described previously, before starting this step. All of the following commands are run on the admin workstation.
Create your admin cluster IP block file
Create and save a file named `admin-ipblock.yaml` in the same directory as `admin-cluster.yaml` on your admin workstation, with the following content:

```yaml
blocks:
- netmask: "NETMASK"
  gateway: "DEFAULT_GATEWAY_IP"
  ips:
  - ip: "ADMIN_NODE_IP_1"
    hostname: "admin-vm-1"
  - ip: "ADMIN_NODE_IP_2"
    hostname: "admin-vm-2"
  - ip: "ADMIN_NODE_IP_3"
    hostname: "admin-vm-3"
  - ip: "ADMIN_NODE_IP_4"
    hostname: "admin-vm-4"
```
The `ips` field is an array of IP addresses and hostnames. These are the IP addresses and hostnames that Google Distributed Cloud will assign to your admin cluster nodes. The IP block file also specifies a subnet mask and a default gateway for the admin cluster nodes.
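A quick sanity check is to count the node entries before you create the cluster; the admin cluster needs all four addresses. The sketch below uses a sample file with made-up example values standing in for your real `admin-ipblock.yaml`:

```shell
# Sample IP block file with example addresses; substitute your real file.
cat > /tmp/admin-ipblock.yaml <<'EOF'
blocks:
- netmask: "255.255.255.0"
  gateway: "172.16.20.1"
  ips:
  - ip: "172.16.20.10"
    hostname: "admin-vm-1"
  - ip: "172.16.20.11"
    hostname: "admin-vm-2"
  - ip: "172.16.20.12"
    hostname: "admin-vm-3"
  - ip: "172.16.20.13"
    hostname: "admin-vm-4"
EOF

# Count the node IP entries; the admin cluster expects four.
grep -c -- '- ip:' /tmp/admin-ipblock.yaml
```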
Create your admin cluster configuration file
Open `admin-cluster.yaml` and replace the content with the following:

```yaml
apiVersion: v1
kind: AdminCluster
name: "minimal-installation-admin-cluster"
bundlePath: "/var/lib/gke/bundles/gke-onprem-vsphere-1.15.9-gke.20-full.tgz"
vCenter:
  address: "ADDRESS"
  datacenter: "DATA_CENTER"
  cluster: "VSPHERE_CLUSTER"
  resourcePool: "RESOURCE_POOL"
  datastore: "DATASTORE"
  caCertPath: "CA_CERT_FILE"
  credentials:
    fileRef:
      path: "credential.yaml"
      entry: "vCenter"
  dataDisk: "data-disks/minimal-installation-admin-disk.vmdk"
network:
  hostConfig:
    dnsServers:
    - "DNS_SERVER_IP"
    ntpServers:
    - "NTP_SERVER_IP"
  ipMode:
    type: "static"
    ipBlockFilePath: admin-ipblock.yaml
  serviceCIDR: "10.96.232.0/24"
  podCIDR: "192.168.0.0/16"
  vCenter:
    networkName: "NETWORK"
loadBalancer:
  vips:
    controlPlaneVIP: "ADMIN_CONTROL_PLANE_VIP"
  kind: "MetalLB"
antiAffinityGroups:
  enabled: false
componentAccessServiceAccountKeyPath: "COMPONENT_ACCESS_SA_KEY_FILE"
gkeConnect:
  projectID: "PROJECT_ID"
  registerServiceAccountKeyPath: "CONNECT_REGISTER_SA_KEY_PATH"
stackdriver:
  projectID: "PROJECT_ID"
  clusterLocation: "us-central1"
  enableVPC: false
  serviceAccountKeyPath: "LOG_MON_SA_KEY_PATH"
  disableVsphereResourceMetrics: false
```
Validate the admin cluster configuration file
Verify that your admin cluster configuration file is valid and can be used for cluster creation:

```
gkectl check-config --config admin-cluster.yaml
```
Import OS images to vSphere
Run `gkectl prepare` with your completed config file to import node OS images to vSphere:

```
gkectl prepare --config admin-cluster.yaml --skip-validation-all
```
Running this command imports the images to vSphere and marks them as VM templates, including the image for your admin cluster.
This command can take a few minutes to return.
Create the admin cluster
Create the admin cluster:

```
gkectl create admin --config admin-cluster.yaml
```
Resume admin cluster creation after a failure
If the admin cluster creation fails or is canceled, you can run the create command again:

```
gkectl create admin --config admin-cluster.yaml
```
Locate the admin cluster kubeconfig file
The `gkectl create admin` command creates a kubeconfig file named `kubeconfig` in the current directory. You will need this kubeconfig file later to interact with your admin cluster.
Verify that your admin cluster is running
Verify that your admin cluster is running:

```
kubectl get nodes --kubeconfig kubeconfig
```

The output shows the admin cluster nodes. For example:

```
gke-admin-master-hdn4z            Ready   control-plane,master   ...
gke-admin-node-7f46cc8c47-g7w2c   Ready   ...
gke-admin-node-7f46cc8c47-kwlrs   Ready   ...
```
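If you want to script this check, you can count the nodes reporting Ready. A minimal sketch using the sample output above (in practice pipe `kubectl get nodes --kubeconfig kubeconfig --no-headers` instead):

```shell
# Sample `kubectl get nodes` output; substitute the real command output.
nodes_output='gke-admin-master-hdn4z            Ready   control-plane,master
gke-admin-node-7f46cc8c47-g7w2c   Ready
gke-admin-node-7f46cc8c47-kwlrs   Ready'

# Count nodes whose STATUS column reads Ready.
ready_count="$(echo "$nodes_output" | awk '$2 == "Ready" { n++ } END { print n }')"
echo "$ready_count"
```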
Enable RBAC authorization
To grant your user account the Kubernetes `clusterrole/cluster-admin` role on the cluster, run the following command:

```
gcloud container fleet memberships generate-gateway-rbac \
  --membership=minimal-installation-admin-cluster \
  --role=clusterrole/cluster-admin \
  --users=GOOGLE_ACCOUNT_EMAIL \
  --project=PROJECT_ID \
  --kubeconfig=kubeconfig \
  --context=minimal-installation-admin-cluster \
  --apply
```

The output of this command is similar to the following, which is truncated for readability:

```
Validating input arguments.
Specified Cluster Role is: clusterrole/cluster-admin
Generated RBAC policy is:
--------------------------------------------
...
Applying the generate RBAC policy to cluster with kubeconfig: kubeconfig, context: minimal-installation-admin-cluster
Writing RBAC policy for user: GOOGLE_ACCOUNT_EMAIL to cluster.
Successfully applied the RBAC policy to cluster.
```
Among other things, the RBAC policy lets you log in to your cluster in the Google Cloud console using your Google Identity to see more cluster details.
Enroll the cluster in the GKE On-Prem API
Optionally, enroll the cluster in the GKE On-Prem API. Enrolling your admin cluster in the GKE On-Prem API lets you use the Google Cloud console, the Google Cloud CLI, or Terraform to upgrade user clusters that the admin cluster manages.

```
gcloud container vmware admin-clusters enroll minimal-installation-admin-cluster \
  --project=PROJECT_ID \
  --admin-cluster-membership=projects/PROJECT_ID/locations/global/memberships/minimal-installation-admin-cluster \
  --location=us-central1
```
5. Create a user cluster
This section provides steps for creating the user cluster using `gkectl`, Terraform, or the gcloud CLI.
gkectl
Ensure you have an SSH connection to your admin workstation, as described previously, before starting this procedure. All of the following commands are run on the admin workstation.
Create your user cluster IP block file
1. Create a file named `user-ipblock.yaml`.
2. Copy and paste the following content into `user-ipblock.yaml` and save the file:

   ```yaml
   blocks:
   - netmask: "NETMASK"
     gateway: "DEFAULT_GATEWAY_IP"
     ips:
     - ip: "USER_NODE_IP_1"
       hostname: "user-vm-1"
     - ip: "USER_NODE_IP_2"
       hostname: "user-vm-2"
     - ip: "USER_NODE_IP_3"
       hostname: "user-vm-3"
     - ip: "USER_NODE_IP_4"
       hostname: "user-vm-4"
   ```
Create your user cluster configuration file
1. Create a file named `user-cluster.yaml` in the same directory as `user-ipblock.yaml`.
2. Copy and paste the following content into `user-cluster.yaml` and save the file:

   ```yaml
   apiVersion: v1
   kind: UserCluster
   name: "minimal-installation-user-cluster"
   gkeOnPremVersion: "1.15.9-gke.20"
   enableControlplaneV2: true
   network:
     hostConfig:
       dnsServers:
       - "DNS_SERVER_IP"
       ntpServers:
       - "NTP_SERVER_IP"
     ipMode:
       type: "static"
       ipBlockFilePath: "user-ipblock.yaml"
     serviceCIDR: "10.96.0.0/20"
     podCIDR: "192.168.0.0/16"
     controlPlaneIPBlock:
       netmask: "NETMASK"
       gateway: "DEFAULT_GATEWAY_IP"
       ips:
       - ip: "USER_CONTROL_PLANE_NODE_IP"
         hostname: "cp-vm-1"
   loadBalancer:
     vips:
       controlPlaneVIP: "USER_CONTROL_PLANE_VIP"
       ingressVIP: "USER_INGRESS_VIP"
     kind: "MetalLB"
     metalLB:
       addressPools:
       - name: "uc-address-pool"
         addresses:
         - "USER_INGRESS_VIP/32"
         - "SERVICE_VIP_1/32"
         - "SERVICE_VIP_2/32"
   enableDataplaneV2: true
   nodePools:
   - name: "uc-node-pool"
     cpus: 4
     memoryMB: 8192
     replicas: 3
     enableLoadBalancer: true
   antiAffinityGroups:
     enabled: false
   gkeConnect:
     projectID: "PROJECT_ID"
     registerServiceAccountKeyPath: "CONNECT_REGISTER_SA_KEY_PATH"
   stackdriver:
     projectID: "PROJECT_ID"
     clusterLocation: "us-central1"
     enableVPC: false
     serviceAccountKeyPath: "LOG_MON_SA_KEY_PATH"
     disableVsphereResourceMetrics: false
   autoRepair:
     enabled: true
   ```
Validate the configuration and create the cluster
Verify that your user cluster configuration file is valid and can be used for cluster creation:

```
gkectl check-config --kubeconfig kubeconfig --config user-cluster.yaml
```

Create the user cluster:

```
gkectl create cluster --kubeconfig kubeconfig --config user-cluster.yaml
```

Cluster creation takes approximately 30 minutes.
Locate the user cluster kubeconfig file
The `gkectl create cluster` command creates a kubeconfig file named `USER_CLUSTER_NAME-kubeconfig` in the current directory. You will need this kubeconfig file later to interact with your user cluster.
Verify that your user cluster is running
Verify that your user cluster is running:

```
kubectl get nodes --kubeconfig USER_CLUSTER_KUBECONFIG
```

Replace USER_CLUSTER_KUBECONFIG with the path of your user cluster kubeconfig file.

The output shows the user cluster nodes. For example:

```
cp-vm-1     Ready   control-plane,master
user-vm-1   Ready
user-vm-2   Ready
user-vm-3   Ready
```
Enable RBAC authorization
To grant your user account the Kubernetes `clusterrole/cluster-admin` role on the cluster, run the following command:

```
gcloud container fleet memberships generate-gateway-rbac \
  --membership=minimal-installation-user-cluster \
  --role=clusterrole/cluster-admin \
  --users=GOOGLE_ACCOUNT_EMAIL \
  --project=PROJECT_ID \
  --kubeconfig=USER_CLUSTER_KUBECONFIG \
  --context=minimal-installation-user-cluster \
  --apply
```

The output of this command is similar to the following, which is truncated for readability:

```
Validating input arguments.
Specified Cluster Role is: clusterrole/cluster-admin
Generated RBAC policy is:
--------------------------------------------
...
Applying the generate RBAC policy to cluster with kubeconfig: USER_CLUSTER_KUBECONFIG, context: minimal-installation-user-cluster
Writing RBAC policy for user: GOOGLE_ACCOUNT_EMAIL to cluster.
Successfully applied the RBAC policy to cluster.
```
Among other things, the RBAC policy lets you log in to your cluster in the Google Cloud console using your Google Identity to see more cluster details.
Enroll the cluster in the GKE On-Prem API
Optionally, enroll the cluster in the GKE On-Prem API. Enrolling your user cluster in the GKE On-Prem API lets you use the Google Cloud console or the Google Cloud CLI for upgrades.

```
gcloud container vmware clusters enroll minimal-installation-user-cluster \
  --project=PROJECT_ID \
  --admin-cluster-membership=projects/PROJECT_ID/locations/global/memberships/minimal-installation-admin-cluster \
  --location=us-central1
```
Terraform
This section shows you how to create a user cluster and a node pool using Terraform. For more information and other examples, see the reference documentation for the `google_gkeonprem_vmware_cluster` and `google_gkeonprem_vmware_node_pool` Terraform resources.
1. Create a directory and a new file within that directory. The filename must have the `.tf` extension. In this guide, the file is called `main.tf`.

   ```
   mkdir DIRECTORY && cd DIRECTORY && touch main.tf
   ```

2. Verify the user cluster Terraform resource.

   The following Terraform resource example is filled in with the values that you entered in the planning table in the preceding section.

   ```hcl
   resource "google_gkeonprem_vmware_cluster" "cluster-basic" {
     name                     = "minimal-installation-user-cluster"
     project                  = "PROJECT_ID"
     location                 = "us-central1"
     admin_cluster_membership = "projects/PROJECT_ID/locations/global/memberships/minimal-installation-admin-cluster"
     description              = "User cluster config with MetalLB, static IPs, and Controlplane V2"
     enable_control_plane_v2  = "true"
     on_prem_version          = "1.15.9-gke.20"
     control_plane_node {
       cpus     = 4
       memory   = 8192
       replicas = 1
     }
     network_config {
       service_address_cidr_blocks = ["10.96.0.0/20"]
       pod_address_cidr_blocks     = ["192.168.0.0/16"]
       host_config {
         dns_servers = ["DNS_SERVER_IP"]
         ntp_servers = ["NTP_SERVER_IP"]
       }
       static_ip_config {
         ip_blocks {
           netmask = "NETMASK"
           gateway = "DEFAULT_GATEWAY_IP"
           ips {
             ip       = "USER_NODE_IP_1"
             hostname = "user-vm-1"
           }
           ips {
             ip       = "USER_NODE_IP_2"
             hostname = "user-vm-2"
           }
           ips {
             ip       = "USER_NODE_IP_3"
             hostname = "user-vm-3"
           }
           ips {
             ip       = "USER_NODE_IP_4"
             hostname = "user-vm-4"
           }
         }
       }
       control_plane_v2_config {
         control_plane_ip_block {
           netmask = "NETMASK"
           gateway = "DEFAULT_GATEWAY_IP"
           ips {
             ip       = "USER_CONTROL_PLANE_NODE_IP"
             hostname = "cp-vm-1"
           }
         }
       }
     }
     load_balancer {
       vip_config {
         control_plane_vip = "USER_CONTROL_PLANE_VIP"
         ingress_vip       = "USER_INGRESS_VIP"
       }
       metal_lb_config {
         address_pools {
           pool          = "uc-address-pool"
           manual_assign = "true"
           addresses     = ["USER_INGRESS_VIP/32", "SERVICE_VIP_1/32", "SERVICE_VIP_2/32"]
         }
       }
     }
     authorization {
       admin_users {
         username = "GOOGLE_ACCOUNT_EMAIL"
       }
     }
     provider = google-beta
   }

   resource "google_gkeonprem_vmware_node_pool" "my-node-pool-1" {
     name           = "uc-node-pool"
     project        = "PROJECT_ID"
     vmware_cluster = "minimal-installation-user-cluster"
     location       = "us-central1"
     config {
       replicas             = 3
       image_type           = "ubuntu_containerd"
       enable_load_balancer = "true"
     }
     depends_on = [
       google_gkeonprem_vmware_cluster.cluster-basic
     ]
     provider = google-beta
   }
   ```
3. Copy the Terraform resource to `main.tf` and save the file.

4. Initialize and create the Terraform plan:

   ```
   terraform init
   ```

   Terraform installs any needed libraries, such as the Google Cloud provider.

5. Review the configuration and make changes if needed:

   ```
   terraform plan
   ```

6. Apply the Terraform plan to create the user cluster:

   ```
   terraform apply
   ```

   When prompted, enter `yes`. It takes about 15 minutes (or more depending on your network) to create the basic user cluster and node pool.
gcloud
Create the cluster:

```
gcloud container vmware clusters create minimal-installation-user-cluster \
  --project=PROJECT_ID \
  --admin-cluster-membership=projects/PROJECT_ID/locations/global/memberships/minimal-installation-admin-cluster \
  --location=us-central1 \
  --version=1.15.9-gke.20 \
  --admin-users=GOOGLE_ACCOUNT_EMAIL \
  --service-address-cidr-blocks=10.96.0.0/20 \
  --pod-address-cidr-blocks=192.168.0.0/16 \
  --metal-lb-config-address-pools='pool=uc-address-pool,avoid-buggy-ips=False,manual-assign=False,addresses=USER_INGRESS_VIP/32;SERVICE_VIP_1/32;SERVICE_VIP_2/32' \
  --control-plane-vip=USER_CONTROL_PLANE_VIP \
  --ingress-vip=USER_INGRESS_VIP \
  --static-ip-config-ip-blocks='gateway=DEFAULT_GATEWAY_IP,netmask=NETMASK,ips=USER_NODE_IP_1;USER_NODE_IP_2;USER_NODE_IP_3;USER_NODE_IP_4' \
  --dns-servers=DNS_SERVER_IP \
  --ntp-servers=NTP_SERVER_IP \
  --enable-control-plane-v2 \
  --enable-dataplane-v2 \
  --control-plane-ip-block='gateway=DEFAULT_GATEWAY_IP,netmask=NETMASK,ips=USER_CONTROL_PLANE_NODE_IP'
```

The output from the command is similar to the following:

```
Waiting for operation [projects/example-project-12345/locations/us-central1/operations/operation-1679543737105-5f7893fd5bae9-942b3f97-75e59179] to complete.
```
In the example output, the string `operation-1679543737105-5f7893fd5bae9-942b3f97-75e59179` is the OPERATION_ID of the long-running operation. You can find out the status of the operation with the following command:

```
gcloud container vmware operations describe OPERATION_ID \
  --project=PROJECT_ID \
  --location=us-central1
```

For more information, see gcloud container vmware operations.
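If you want to poll the operation from a script, you can extract the operation ID from the "Waiting for operation" line. A minimal sketch using the sample output above (in practice, capture the real gcloud output instead):

```shell
# Sample status line from the create command; substitute real output.
msg="Waiting for operation [projects/example-project-12345/locations/us-central1/operations/operation-1679543737105-5f7893fd5bae9-942b3f97-75e59179] to complete."

# Pull out the segment after /operations/ and before the closing bracket.
op_id="$(echo "$msg" | sed -E 's|.*/operations/([^]]+)\].*|\1|')"
echo "$op_id"
```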
It takes 15 minutes or more to create the user cluster. You can view the cluster in the Google Cloud console on the Anthos clusters page.
Create a node pool:

```
gcloud container vmware node-pools create uc-node-pool \
  --cluster=minimal-installation-user-cluster \
  --project=PROJECT_ID \
  --location=us-central1 \
  --image-type=ubuntu_containerd \
  --boot-disk-size=40 \
  --cpus=4 \
  --memory=8192 \
  --replicas=3 \
  --enable-load-balancer
```
What's next
You have now completed this minimal installation of Google Distributed Cloud. As an optional follow-up, you can see your installation in action by deploying an application.