This topic describes how to set up a Google Kubernetes Engine cluster in Google Cloud as a processing cluster for migrating Windows VMs. You'll use the processing cluster to generate migrated container artifacts and to operate and monitor the migration.
Before you begin
Before creating a processing cluster, you'll need:
- A user with GKE Administrator privileges. These privileges are only necessary for the setup portion.
- To have completed the prerequisites for migration. See Prerequisites for migrating Windows VMs on Cloud for more.
- To have configured your environment as described in Enabling Google services and configuring service accounts.
Set your project defaults
Before you start, make sure you have performed the following tasks:
- Ensure that you have enabled the Google Kubernetes Engine API.
- Ensure that you have installed the Cloud SDK.
Set up default gcloud settings using one of the following methods:
- gcloud init, if you want to be walked through setting defaults.
- gcloud config, to individually set your project ID, zone, and region.
Using gcloud init
If you receive the error One of [--zone, --region] must be supplied: Please specify location, complete this section.
Run gcloud init and follow the directions:
If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:
gcloud init --console-only
- Follow the instructions to authorize gcloud to use your Google Cloud account.
- Create a new configuration or select an existing one.
- Choose a Google Cloud project.
- Choose a default Compute Engine zone.
Using gcloud config
- Set your default project ID:
gcloud config set project PROJECT_ID
- If you are working with zonal clusters, set your default compute zone:
gcloud config set compute/zone COMPUTE_ZONE
- If you are working with regional clusters, set your default compute region:
gcloud config set compute/region COMPUTE_REGION
Update gcloud to the latest version:
gcloud components update
Create a cluster for Windows workloads
Use the gcloud container command to create a cluster and the necessary node pool. See the command documentation for other flags you might want to use.
Your cluster must:
- Use GKE 1.16.
- Use --enable-ip-alias to make the cluster VPC-native.
- Use any operating system, such as Container-Optimized OS or Ubuntu.
- Contain a Windows node pool, used to process the migration, with an image type of WINDOWS_SAC.
See Creating a cluster using Windows Server node pools for additional information about creating clusters that run Windows Server node pools.
About clusters for Compute Engine sources
A migration source represents the source platform from which you'll be migrating. For example, VMware, AWS, Azure, or Compute Engine are all types of source platforms. As part of the migration, Migrate for Anthos creates a snapshot of the disk images from a VM running on the source platform.
When the source platform is Compute Engine, you can only snapshot a specific disk at most once every 10 minutes, or 6 times an hour. To avoid hitting this limit, we recommend that you create the cluster in the same zone as the Compute Engine VM. When the cluster is in the same zone as the VM, Migrate for Anthos can clone the disk, rather than create a snapshot, which is a more efficient process and bypasses the snapshot limit.
See Creating frequent snapshots efficiently for more on the limit.
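To place the cluster in the same zone as the source VM, you can first look up the VM's zone. The following is a sketch; the instance name is a placeholder for your own source VM:

```shell
# Look up the zone of the source Compute Engine VM
# (the instance name "my-source-vm" is hypothetical).
gcloud compute instances list \
    --filter="name=my-source-vm" \
    --format="value(zone)"

# Then pass that zone to the cluster creation command, for example:
#   gcloud container clusters create ... --zone=<zone-from-above>
```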
Creating the cluster
The example below creates a simple processing cluster that can get you started using Migrate for Anthos. The gcloud command offers many configuration options that you might want to set, including choosing node machine types, specifying --subnetwork, and enabling Alias IP addresses. See gcloud container clusters create for more information.
Create the cluster:
gcloud container clusters create cluster-name \
    --project project-name \
    --zone=gcp-zone \
    --enable-ip-alias \
    --num-nodes=1 \
    --machine-type=n1-standard-4 \
    --cluster-version=1.16 \
    --enable-stackdriver-kubernetes \
    --network network \
    --subnetwork subnetwork
Migrate for Anthos requires a machine-type of n1-standard-2 or larger. See Types of clusters for more information.
Create the Windows node pool used to process the migration:
gcloud container node-pools create node-pool-name \
    --cluster=cluster-name \
    --image-type=WINDOWS_SAC \
    --num-nodes=1 \
    --scopes "cloud-platform" \
    --machine-type=n1-standard-4
Connect to the cluster:
gcloud container clusters get-credentials cluster-name \
    --zone gcp-zone --project project-name
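Once connected, you can confirm that both the default Linux pool and the Windows node pool have registered and are Ready. This is a quick sanity check, not a required step:

```shell
# List the cluster nodes with OS details to confirm the
# Windows node pool is up and Ready.
kubectl get nodes -o wide

# Or filter on the OS label that GKE applies to each node:
kubectl get nodes -l kubernetes.io/os=windows
```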
Creating a cluster on a shared VPC
A common configuration is to create clusters in a Shared VPC environment. With Shared VPC, you designate one project as the host project, and you can attach other projects, called service projects, to the host project.
Shown below is an example command to create a cluster in a Shared VPC environment:
gcloud container clusters create cluster-name \
    --project project-name \
    --zone=gcp-zone \
    --machine-type "n1-standard-2" \
    --image-type "UBUNTU" \
    --num-nodes "1" \
    --enable-ip-alias \
    --username "admin" \
    --cluster-version=1.16 \
    --enable-stackdriver-kubernetes \
    --cluster-ipv4-cidr=ipRange \
    --services-ipv4-cidr=ipRange \
    --cluster-secondary-range-name pods-range \
    --services-secondary-range-name svc-range \
    --network projects/projectId/global/networks/vpcId \
    --subnetwork projects/projectId/regions/region/subnetworks/subnetwork \
    --addons HorizontalPodAutoscaling,HttpLoadBalancing \
    --no-enable-autoupgrade \
    --no-enable-autorepair \
    --tags networkTags \ # optional
    --labels key=value # optional
See Setting up clusters with Shared VPC for a complete description of this command and its options.
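If you're unsure which subnets and secondary ranges the service project is allowed to use, you can list the usable subnetworks of the host project's network. The project IDs below are placeholders for your own projects:

```shell
# List subnets in the Shared VPC host project that this service
# project can use, including their secondary IP ranges.
gcloud container subnets list-usable \
    --project service-project-id \
    --network-project host-project-id
```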
Creating a private cluster
Private clusters give you the ability to isolate nodes from inbound and outbound connectivity to the public internet. This isolation is achieved by giving the nodes internal IP addresses only.
Migrate for Anthos supports the use of private clusters. However, when using a private cluster, the master node must be able to reach the Migrate for Anthos infrastructure pod on port 9443.
Therefore, you must add port 9443 to the firewall rules of the master node. For more, see Creating a private cluster.
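As a sketch, a firewall rule allowing the master to reach port 9443 might look like the following. The rule name, network, master CIDR, and target tag are placeholders you'd replace with your cluster's actual values:

```shell
# Allow the private cluster's control plane (master) CIDR range
# to reach the Migrate for Anthos infrastructure pod on TCP 9443.
gcloud compute firewall-rules create allow-master-to-m4a \
    --network network-name \
    --direction INGRESS \
    --source-ranges master-cidr \
    --allow tcp:9443 \
    --target-tags cluster-node-tag
```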