Creating an admin workstation

This page explains how to create the latest version of the admin workstation virtual machine (VM).

To upgrade an existing admin workstation, see Upgrading GKE On-Prem.

Overview

The admin workstation is a vSphere VM that contains all the tools you need to create and manage GKE On-Prem clusters. To create the admin workstation, you perform the following steps described in this topic:

  • Download the admin workstation Open Virtual Appliance (OVA) file, a compressed image of the admin workstation VM.
  • Use govc, the command line interface to vSphere, to import the OVA to vSphere as a VM template.
  • Copy and populate HashiCorp Terraform configuration files.
  • Use Terraform to create the VM.

Before you begin

  1. Read the admin workstation overview.
  2. Complete the steps in Preparing to install.
  3. Check that you have installed govc.
  4. Check that you have installed Terraform version 0.11.

Download the admin workstation OVA

Download the latest version of the admin workstation OVA from the Downloads topic. The latest OVA file is:

gke-on-prem-admin-appliance-vsphere-1.1.2-gke.0.ova

where 1.1.2-gke.0 is the latest GKE On-Prem version. The OVA includes all of the cluster components, command-line tools, and other entities needed to install and manage GKE On-Prem clusters; the latest OVA includes the latest versions of these entities.

Save this file somewhere on the computer you're using to create the admin workstation.

Using govc to import the OVA to vSphere and mark it as a VM template

In the following sections, you:

  1. Create some variables declaring elements of your vCenter Server and vSphere environment.
  2. Import the admin workstation OVA to vSphere and mark it as a VM template.

Creating variables for govc

Before you import the admin workstation OVA to vSphere, you need to provide govc with several environment variables that declare elements of your vCenter Server and vSphere environment:

export GOVC_URL=https://[VCENTER_SERVER_ADDRESS]/sdk
export GOVC_USERNAME=[VCENTER_SERVER_USERNAME]
export GOVC_PASSWORD=[VCENTER_SERVER_PASSWORD]
export GOVC_DATASTORE=[VSPHERE_DATASTORE]
export GOVC_DATACENTER=[VSPHERE_DATACENTER]
export GOVC_INSECURE=true

You can choose to use vSphere's default resource pool or create your own:

# If you want to use a resource pool you've configured yourself, export this variable:
export GOVC_RESOURCE_POOL=[VSPHERE_CLUSTER]/Resources/[VSPHERE_RESOURCE_POOL]
# If you want to use vSphere's default resource pool, export this variable instead:
export GOVC_RESOURCE_POOL=[VSPHERE_CLUSTER]/Resources

where:

  • [VCENTER_SERVER_ADDRESS] is your vCenter Server's IP address or hostname.
  • [VCENTER_SERVER_USERNAME] is the username of an account that holds the Administrator role or equivalent privileges in vCenter Server.
  • [VCENTER_SERVER_PASSWORD] is the vCenter Server account's password.
  • [VSPHERE_DATASTORE] is the name of the datastore you've configured in your vSphere environment.
  • [VSPHERE_DATACENTER] is the name of the datacenter you've configured in your vSphere environment.
  • [VSPHERE_CLUSTER] is the name of the cluster you've configured in your vSphere environment.
  • [VSPHERE_RESOURCE_POOL], if you are using a non-default resource pool, is the name of the resource pool you've configured in your vSphere environment.
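
For reference, a filled-in set of exports might look like the following sketch, using a hypothetical vCenter address and the example names used later in this topic (MY-DATASTORE, MY-DATACENTER, MY-CLUSTER):

export GOVC_URL=https://vcenter.example.local/sdk
export GOVC_USERNAME=administrator@vsphere.local
export GOVC_PASSWORD='example-password'
export GOVC_DATASTORE=MY-DATASTORE
export GOVC_DATACENTER=MY-DATACENTER
export GOVC_INSECURE=true
export GOVC_RESOURCE_POOL=MY-CLUSTER/Resources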

Importing the OVA to vSphere: Standard switch

If you are using a vSphere Standard Switch, import the OVA to vSphere using this command:

govc import.ova -options - ~/gke-on-prem-admin-appliance-vsphere-1.1.2-gke.0.ova <<EOF
{
  "DiskProvisioning": "thin",
  "MarkAsTemplate": true
}
EOF

Importing the OVA to vSphere: Distributed switch

If you are using a vSphere Distributed Switch, import the OVA to vSphere using this command, where [YOUR_DISTRIBUTED_PORT_GROUP_NAME] is the name of your distributed port group:

govc import.ova -options - ~/gke-on-prem-admin-appliance-vsphere-1.1.2-gke.0.ova <<EOF
{
  "DiskProvisioning": "thin",
  "MarkAsTemplate": true,
  "NetworkMapping": [
      {
          "Name": "VM Network",
          "Network": "[YOUR_DISTRIBUTED_PORT_GROUP_NAME]"
      }
  ]
}
EOF
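
After the import completes, you can optionally confirm that the template appears in your vSphere inventory. For example, govc vm.info reports the imported appliance by name (this check is optional and not part of the required procedure):

govc vm.info gke-on-prem-admin-appliance-vsphere-1.1.2-gke.0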

Copying the Terraform configuration files

  1. Create a directory for your Terraform files:

    mkdir "[TERRAFORM_DIR]"
    

    where [TERRAFORM_DIR] is the path of a directory where you want to keep your Terraform files.

  2. Copy one of the following Terraform configurations, depending on whether you want to specify a static IP address for your admin workstation or use a DHCP server to get an IP address.

    Be sure to copy both the TF and TFVARS files. The TF file is the Terraform HCL configuration that creates the VM; the TFVARS file holds the values that you provide for its variables.

  3. Save the configurations to [TERRAFORM_DIR]/terraform.tf and [TERRAFORM_DIR]/terraform.tfvars, respectively.

The Terraform configuration is available in two variants: DHCP and static IP. The variables for each variant are described in the following sections.

Creating an SSH public key

Create an SSH key pair so that you can SSH into the admin workstation from your local laptop or workstation. On Linux-based operating systems, you can use ssh-keygen:

ssh-keygen -t rsa -f ~/.ssh/vsphere_workstation -N ""

Modifying the TFVARS file

Open terraform.tfvars in a text editor and provide values for the following variables. You can find many of these values by logging in to the vCenter Client:

vcenter_user

Provide a vCenter Server user account as a string. The user account should have the Administrator role or equivalent privileges (see System requirements). For example:

vcenter_user = "administrator@vsphere.local"

vcenter_password

Provide the vCenter Server user account's password as a string. For example:

vcenter_password = "#STyZ2T#Ko2o"

vcenter_server

Provide your vCenter Server's address (IP or hostname) as a string. For example:

vcenter_server = "198.51.100.0"

ssh_public_key_path

Provide the path to your SSH public key. You created this in a previous step:

ssh_public_key_path = "~/.ssh/vsphere_workstation.pub"

vm_name

Provide a name of your choice for the admin workstation:

vm_name = "admin-workstation"

datastore

Provide the name of your vSphere datastore as a string:

datastore = "MY-DATASTORE"

datacenter

Provide the name of your vSphere datacenter as a string:

datacenter = "MY-DATACENTER"

cluster

Provide the name of your vSphere cluster as a string:

cluster = "MY-CLUSTER"

resource_pool

If you are using a non-default resource pool, provide the name of your vSphere resource pool as a string:

resource_pool = "MY-POOL"

If you are using the default resource pool, provide the following value:

resource_pool = "MY-CLUSTER/Resources"

See Specifying the root resource pool for a standalone host.

network

Provide the vSphere network where you want to create your admin workstation, as a string. For example:

network = "VM Network"

vm_template

Provide the VM template name as a string. You imported the OVA and marked it as a template in a previous step:

vm_template = "gke-on-prem-admin-appliance-vsphere-[VERSION]"

Using a static IP address for your admin workstation

If you want to use a static IP address for your admin workstation, copy the static IP TF and TFVARS files. In the TFVARS file, enter values for the following variables:

ipv4_address

Provide an IPv4 static IP address for the admin workstation. For example:

ipv4_address = "203.0.113.0"

ipv4_netmask_prefix_length

Provide the number of bits in the subnet mask of the network where you want to create your admin workstation. For example:

ipv4_netmask_prefix_length = "22"

ipv4_gateway

Provide the gateway of the subnet in which the admin workstation is to be created. See VMware's vsphere_virtual_machine documentation.

For example:

ipv4_gateway = "198.51.100.0

dns_nameservers

Provide DNS nameservers to be used by the admin workstation, separated by commas. For example:

dns_nameservers = "8.8.8.8,8.8.4.4"

Creating the admin workstation

After you've completed the preceding steps, you're ready to create the admin workstation VM:

  1. Change into the directory containing your Terraform configuration files (TF and TFVARS):

    cd [TERRAFORM_DIR]
    
  2. Initialize Terraform in the directory and apply the configuration. This might take a few minutes:

    terraform init && terraform apply -auto-approve -input=false

SSH in to your admin workstation

  1. Change to the directory containing your Terraform configuration files.

  2. Retrieve the IP address of the admin workstation:

    terraform output ip_address

    Make note of the admin workstation's IP address, or export it as a variable in your shell.
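
    For example, to save the address in a shell variable (the name IP_ADDRESS is illustrative):

    export IP_ADDRESS=$(terraform output ip_address)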

  3. SSH in to the admin workstation by using the generated key and IP address:

    ssh -i ~/.ssh/vsphere_workstation ubuntu@$(terraform output ip_address)

    or, if you want to just use its address:

    ssh -i ~/.ssh/vsphere_workstation ubuntu@[IP_ADDRESS]
    

Verifying that the admin workstation is set up correctly

Verify that gkectl and docker are installed:

gkectl version
docker version

Configuring your proxy for your admin workstation

If your environment is behind a proxy, follow the sections below to configure Google Cloud CLI and Docker to use your proxy.

Configuring Google Cloud CLI to use your proxy

Complete the following steps from the admin workstation VM.

If you are using a proxy to connect to the internet from your laptop or workstation, you need to configure Google Cloud CLI for the proxy, so that you can run gcloud and gsutil commands. For instructions, see Configuring gcloud CLI for use behind a proxy/firewall.

Configuring a Docker registry to pull through your proxy

Complete the following steps from the admin workstation VM.

If you want to use a Docker registry and your network runs behind a proxy, you need to configure the Docker daemon running on your admin workstation to pull images through your proxy:

  1. Gather the addresses of your HTTP and HTTPS proxies.

  2. Gather IP addresses and hostnames of every host that you need to contact without proxying, including:

    • The vCenter server IP address.
    • The IP addresses of all ESXi hosts.
    • IP addresses that you intend to configure on your load balancer.
    • The 192.168.0.0/16 range.

    On your admin workstation, add these addresses to the no_proxy variable:

    printf -v no_proxy '%s,' [ADDRESSES];
    

    Optionally, you can export the list as an environment variable for later reference. Keep in mind that other applications and processes might read this variable:

    export no_proxy="${no_proxy%,}";
  3. Open Docker's configuration file, stored at /root/.docker/config.json, /home/[USER]/.docker/config.json, or elsewhere depending on your setup.

  4. Within config.json, add the following lines:

    "proxies": {
    
    "default": {
           "httpProxy": "[HTTP_PROXY]",
           "httpsProxy": "[HTTPS_PROXY]",
           "noProxy": "[ADDRESSES]"
               }
    }

    where:

    • [HTTP_PROXY] is your HTTP proxy, if you have one.
    • [HTTPS_PROXY] is your HTTPS proxy, if you have one.
    • [ADDRESSES] is a comma-delimited list of addresses and hostnames that you need to contact without proxying.
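
    For reference, a config.json that contains only the proxy settings might look like the following sketch, with hypothetical proxy and no-proxy values:

    {
      "proxies": {
        "default": {
          "httpProxy": "http://proxy.example.local:3128",
          "httpsProxy": "http://proxy.example.local:3128",
          "noProxy": "198.51.100.10,198.51.100.20,192.168.0.0/16"
        }
      }
    }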
  5. Restart Docker for the changes to take effect:

    sudo systemctl restart docker

Troubleshooting

For more information, refer to Troubleshooting.

Terraform vSphere provider session limit

GKE On-Prem uses Terraform's vSphere provider to bring up VMs in your vSphere environment. The provider's session limit is 1000 sessions. The current implementation doesn't close active sessions after use. You might encounter 503 errors if you have too many sessions running.

Sessions are automatically closed after 300 seconds.

Symptoms

If you have too many sessions running, you might encounter the following error:

Error connecting to CIS REST endpoint: Login failed: body:
  {"type":"com.vmware.vapi.std.errors.service_unavailable","value":
  {"messages":[{"args":["1000","1000"],"default_message":"Sessions count is
  limited to 1000. Existing sessions are 1000.",
  "id":"com.vmware.vapi.endpoint.failedToLoginMaxSessionCountReached"}]}},
  status: 503 Service Unavailable
Potential causes

There are too many Terraform provider sessions running in your environment.

Resolution

Currently, this is working as intended. Sessions are automatically closed after 300 seconds. For more information, refer to GitHub issue #618.

Using a proxy for Docker: oauth2: cannot fetch token

Symptoms

While using a proxy, you encounter the following error:

oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: proxyconnect tcp: tls: oversized record received with length 20527
Potential causes

You might have specified an https:// proxy address where an http:// proxy address is expected.

Resolution

In your Docker configuration, change the proxy address to http:// instead of https://.
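
For example, with a hypothetical proxy address, both entries in config.json should use the http:// scheme:

"httpProxy": "http://proxy.example.local:3128",
"httpsProxy": "http://proxy.example.local:3128"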

Verifying that licenses are valid

Remember to verify that your licenses are valid, especially if you are using trial licenses. You might encounter unexpected failures if your F5, ESXi host, or vCenter licenses have expired.

openssl can't validate admin workstation OVA

Symptoms

Running openssl dgst against the admin workstation OVA file doesn't return Verified OK.
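
A typical check looks like the following, where the public key and signature file names are hypothetical placeholders for the files you downloaded alongside the OVA; a successful check prints Verified OK:

openssl dgst -sha256 -verify [PUBLIC_KEY_FILE] -signature [SIGNATURE_FILE] gke-on-prem-admin-appliance-vsphere-1.1.2-gke.0.ova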

Potential causes

An issue is present in the OVA file that prevents successful validation.

Resolution

Try downloading and deploying the admin workstation OVA again, as instructed in Download the admin workstation OVA. If the issue persists, reach out to Google for assistance.

What's next