Configuring a Cloud cluster for Windows VMs

This topic describes how to set up a Google Kubernetes Engine (GKE) cluster in Google Cloud as a processing cluster for migrating Windows VMs. You'll use the processing cluster to generate migrated container artifacts and to operate and monitor the migration.

Before you begin

Before creating a processing cluster, complete the tasks described in the following sections.

Set your project defaults

Before you start, make sure you have performed the following tasks:

Set up default gcloud settings using one of the following methods:

  • Using gcloud init, if you want to be walked through setting defaults.
  • Using gcloud config, to individually set your project ID, zone, and region.

Using gcloud init

If you receive the error "One of [--zone, --region] must be supplied: Please specify location", complete this section.

  1. Run gcloud init and follow the directions:

    gcloud init

    If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:

    gcloud init --console-only
  2. Follow the instructions to authorize gcloud to use your Google Cloud account.
  3. Create a new configuration or select an existing one.
  4. Choose a Google Cloud project.
  5. Choose a default Compute Engine zone.
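
Optionally, to confirm which account gcloud is now authorized to use after completing these steps, you can run:

gcloud auth list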

Using gcloud config

  • Set your default project ID:
    gcloud config set project PROJECT_ID
  • If you are working with zonal clusters, set your default compute zone:
    gcloud config set compute/zone COMPUTE_ZONE
  • If you are working with regional clusters, set your default compute region:
    gcloud config set compute/region COMPUTE_REGION
  • Update gcloud to the latest version:
    gcloud components update
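
As an optional check, you can review the defaults you just set before you continue; the output shows the active project, zone, and region for the current configuration:

gcloud config list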

Create a cluster for Windows workloads

Use the gcloud container command to create a cluster and the necessary node pool. See the command documentation for other flags you might want to use.

Your cluster must:

  • Use GKE 1.16.
  • Specify cloud-platform for the --scopes value.
  • Specify --enable-ip-alias to make the cluster VPC-native.
  • Use any operating system, such as Container-Optimized OS or Ubuntu.
  • Contain a Windows node pool, used to process the migration, with --image-type set to WINDOWS_SAC.

Creating a cluster using Windows Server node pools includes additional information you might find useful when creating clusters that run Windows Server node pools.
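
Because the cluster must use GKE 1.16 and a WINDOWS_SAC node pool, you might want to confirm which cluster versions and node image types are available in your zone before creating anything. This is an optional check, and the zone shown is only an example:

gcloud container get-server-config --zone us-central1-a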

About clusters for Compute Engine sources

A migration source represents the source platform from which you'll be migrating. For example, VMware, AWS, Azure, or Compute Engine are all types of source platforms. As part of the migration, Migrate for Anthos creates a snapshot of the disk images from a VM running on the source platform.

When the source platform is Compute Engine, you can only snapshot a specific disk at most once every 10 minutes, or 6 times an hour. To avoid hitting this limit, we recommend that you create the cluster in the same zone as the Compute Engine VM. When the cluster is in the same zone as the VM, Migrate for Anthos can clone the disk, rather than create a snapshot, which is a more efficient process and bypasses the snapshot limit.

See Creating frequent snapshots efficiently for more on the limit.
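
If you are not sure which zone a source Compute Engine VM runs in, you can look it up before choosing the cluster zone; the default output includes the VM's zone. The VM name in the filter is a placeholder:

gcloud compute instances list --filter="name=my-source-vm"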

Creating the cluster

The example below creates a simple processing cluster that can get you started using Migrate for Anthos. For more advanced options, such as creating the cluster on a Shared VPC or creating a private cluster, see the sections that follow.

The gcloud command offers many configuration options that you might want to set. These include choosing node machine types, specifying the --network and --subnetwork, and enabling Alias IP addresses. See gcloud container clusters create for more information.

  1. Create the cluster:

    gcloud container clusters create cluster-name \
     --project project-name \
     --zone=gcp-zone \
     --enable-ip-alias \
     --num-nodes=1 \
     --machine-type=n1-standard-4 \
     --cluster-version=1.16 \
     --enable-stackdriver-kubernetes \
     --network network \
     --subnetwork subnetwork
    

    Edit the parameters in the command examples to match your own needs. For example, set project-name to the name of your Google Cloud project and gcp-zone to your compute zone.

    Migrate for Anthos requires a machine type of n1-standard-2 or larger. See Types of clusters for more information.

  2. Create the Windows node pool used to process the migration:

    gcloud container node-pools create node-pool-name \
     --cluster=cluster-name \
     --image-type=WINDOWS_SAC \
     --num-nodes=1 \
     --scopes "cloud-platform" \
     --machine-type=n1-standard-4
    
  3. Connect to the cluster:

    gcloud container clusters get-credentials cluster-name \
     --zone gcp-zone --project project-name
    
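After completing these steps, you can optionally confirm that the Windows node pool has registered with the cluster. The selector below relies on the standard kubernetes.io/os node label:

kubectl get nodes --selector=kubernetes.io/os=windows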

Creating a cluster on a shared VPC

A common configuration is to create clusters on a Shared VPC. With Shared VPC, you designate one project as the host project, and you can attach other projects, called service projects, to the host project.

Shown below is an example command to create a cluster in a Shared VPC environment:

gcloud container clusters create cluster-name \
  --project project-name \
  --zone=gcp-zone \
  --machine-type "n1-standard-2" \
  --image-type "UBUNTU" \
  --num-nodes "1" \
  --enable-ip-alias \
  --username "admin" \
  --cluster-version=1.16 \
  --enable-stackdriver-kubernetes \
  --cluster-ipv4-cidr=ipRange \
  --services-ipv4-cidr=ipRange \
  --cluster-secondary-range-name pods-range \
  --services-secondary-range-name svc-range \
  --network projects/projectId/global/networks/vpcId \
  --subnetwork projects/projectId/regions/region/subnetworks/subnetwork \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing \
  --no-enable-autoupgrade \
  --no-enable-autorepair \
  --tags networkTags \
  --labels key=value

The --tags and --labels flags are optional. See Setting up clusters with Shared VPC for a complete description of this command and its options.
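
If you are working in a Shared VPC service project, you can list the host project subnetworks and secondary ranges you are allowed to use, which helps you fill in the --subnetwork and secondary-range flags above. The project IDs below are placeholders:

gcloud container subnets list-usable \
  --project SERVICE_PROJECT_ID \
  --network-project HOST_PROJECT_ID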

Creating a private cluster

Private clusters isolate nodes from inbound and outbound connectivity to the public internet, because the nodes have internal IP addresses only.

Migrate for Anthos supports private clusters. However, when you use a private cluster, the cluster master must be able to reach the Migrate for Anthos infrastructure pod on port 9443.

Therefore, you must add a firewall rule that allows the cluster master to reach your cluster nodes on port 9443. For more, see Creating a private cluster.
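
The exact rule depends on how your networks and node tags are configured, but a sketch of this kind of firewall rule could look like the following; the rule name, network, master CIDR range, and node tag are placeholders to replace with your own values:

gcloud compute firewall-rules create allow-master-to-9443 \
  --network NETWORK_NAME \
  --direction INGRESS \
  --allow tcp:9443 \
  --source-ranges MASTER_IPV4_CIDR \
  --target-tags NODE_TAG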

Next steps