Configuring a processing cluster on Google Cloud for Windows workloads
This topic describes how to set up a Google Kubernetes Engine (GKE) or Anthos cluster on Google Cloud for migrating Windows VMs. You'll use the processing cluster to generate migrated container artifacts and to operate and monitor the migration.
Before you begin
Before creating a processing cluster, you'll need:
- A user with GKE Administrator privileges. These privileges are necessary only during setup.
- To have met the prerequisites for migration. See Prerequisites for migrating Windows VMs using Google Cloud processing clusters for more information.
- To have configured your environment as described in Enabling Google services and configuring service accounts.
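For example, one way to confirm which roles your user holds on the project is to inspect its IAM policy. In this sketch, PROJECT_ID and USER_EMAIL are placeholders; the Kubernetes Engine Admin role appears in the output as roles/container.admin:

    # List the roles granted to a specific user on the project.
    gcloud projects get-iam-policy PROJECT_ID \
        --flatten="bindings[].members" \
        --filter="bindings.members:user:USER_EMAIL" \
        --format="value(bindings.role)"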
Set your project defaults
Before you start, make sure you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI.
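If you use the gcloud CLI, a minimal sketch of these two tasks looks like the following, where project-name is a placeholder:

    # Set the default project for subsequent gcloud commands.
    gcloud config set project project-name

    # Enable the Google Kubernetes Engine API.
    gcloud services enable container.googleapis.com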
Create a cluster for Windows workloads
Use the gcloud container command to create a cluster and the necessary node pool. See the command documentation for other flags you might want to use.
Your cluster must:
- Be located on Google Cloud. Anthos clusters on VMware and Anthos clusters on AWS are not supported for Windows migrations.
- Use --enable-ip-alias to make the cluster VPC-native.
- Use any operating system, such as Container-Optimized OS or Ubuntu.
- Contain a Windows node pool used to process the migration, with an image type of WINDOWS_LTSC (as shown in the node pool command below).
See Creating a cluster using Windows Server node pools for additional information about creating clusters that run Windows Server node pools.
About clusters for Compute Engine sources
A migration source represents the source platform from which you'll be migrating. For example, VMware, AWS, Azure, or Compute Engine are all types of source platforms. As part of the migration, Migrate to Containers creates a snapshot of the disk images from a VM running on the source platform.
When the source platform is Compute Engine, you can snapshot a specific disk at most once every 10 minutes, or six times an hour. To avoid hitting this limit, we recommend that you create the cluster in the same zone as the Compute Engine VM. When the cluster is in the same zone as the VM, Migrate to Containers can clone the disk rather than create a snapshot, which is a more efficient process and bypasses the snapshot limit.
See Creating frequent snapshots efficiently for more on the limit.
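To find the zone of the source VM so you can create the cluster in the same zone, you can list the instance; vm-name below is a placeholder:

    # Show the name and zone of the source Compute Engine VM.
    gcloud compute instances list \
        --filter="name=vm-name" \
        --format="value(name,zone)"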
Creating the cluster
The example below creates a simple processing cluster that can get you started using Migrate to Containers. For more advanced options, note that the gcloud command offers many configuration options that you might want to set, including choosing node machine types, specifying --subnetwork, and enabling Alias IP addresses. See gcloud container clusters create for more information.
Create the cluster:
    gcloud container clusters create cluster-name \
        --project project-name \
        --zone=gcp-zone \
        --enable-ip-alias \
        --num-nodes=1 \
        --machine-type=e2-standard-4 \
        --cluster-version=version-number \
        --network network \
        --subnetwork subnetwork
Migrate to Containers requires a machine-type of "e2-standard-2" or larger. See Types of clusters for more information.
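To find valid values for version-number in your zone, one option is to query the GKE server configuration:

    # List the default and available cluster versions for the zone.
    gcloud container get-server-config --zone=gcp-zone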
Create the Windows node pool used to process the migration:
    gcloud container node-pools create node-pool-name \
        --cluster=cluster-name \
        --image-type=WINDOWS_LTSC \
        --zone=gcp-zone \
        --num-nodes=1 \
        --scopes "cloud-platform" \
        --machine-type=e2-standard-4
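After the command completes, you can confirm that the cluster contains both the default and Windows node pools, for example:

    # List the node pools in the processing cluster.
    gcloud container node-pools list \
        --cluster=cluster-name \
        --zone=gcp-zone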
Connect to the cluster:
    gcloud container clusters get-credentials cluster-name \
        --zone gcp-zone \
        --project project-name
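Assuming kubectl is installed, a quick way to verify connectivity and see the operating system of each node is:

    # List nodes with an extra column showing the kubernetes.io/os label.
    kubectl get nodes -L kubernetes.io/os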
Creating a cluster on a shared VPC
In a common environment, clusters are created on a Shared VPC. With Shared VPC, you designate one project as the host project, and you can attach other projects, called service projects, to the host project.
Shown below is an example command to create a cluster in a Shared VPC environment:
    gcloud container clusters create cluster-name \
        --project project-name \
        --zone=gcp-zone \
        --machine-type "e2-standard-2" \
        --image-type "UBUNTU" \
        --num-nodes "1" \
        --enable-ip-alias \
        --username "admin" \
        --cluster-version=version-number \
        --cluster-ipv4-cidr=ipRange \
        --services-ipv4-cidr=ipRange \
        --cluster-secondary-range-name pods-range \
        --services-secondary-range-name svc-range \
        --network projects/projectId/global/networks/vpcId \
        --subnetwork projects/projectId/regions/region/subnetworks/subnetwork \
        --addons HorizontalPodAutoscaling,HttpLoadBalancing \
        --no-enable-autoupgrade \
        --no-enable-autorepair \
        --tags networkTags \
        --labels key=value

The --tags and --labels flags are optional.
See Setting up clusters with Shared VPC for a complete description of this command and its options.
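When creating the cluster from a service project, you can check which subnetworks of the host project are usable for clusters; both project IDs below are placeholders:

    # List host-project subnetworks usable by the service project.
    gcloud container subnets list-usable \
        --project service-project-id \
        --network-project host-project-id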
Creating a private cluster
Private clusters let you isolate nodes from inbound and outbound connectivity to the public internet. This isolation is possible because the nodes have internal IP addresses only.
Migrate to Containers supports the use of private clusters. However, when you use a private cluster, the control plane must be able to reach the Migrate to Containers infrastructure pod on port 9443.
Therefore, you must add port 9443 to the firewall rules for the control plane. For more information, see Creating a private cluster.
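As a sketch, assuming your control plane's CIDR range and a network tag applied to your nodes (the rule name, control-plane-cidr, and node-network-tag are placeholders), such a rule might look like:

    # Allow the control plane to reach nodes on TCP port 9443.
    gcloud compute firewall-rules create allow-control-plane-9443 \
        --network network \
        --direction INGRESS \
        --allow tcp:9443 \
        --source-ranges control-plane-cidr \
        --target-tags node-network-tag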