
Configure a processing cluster for Linux

This topic describes how to set up a Google Kubernetes Engine (GKE) or Anthos cluster on Google Cloud as a processing cluster. You'll use the processing cluster to generate migrated container artifacts, and to operate and monitor the migration.

Before you begin

Before creating a processing cluster, you'll need to configure firewall rules and set your project defaults, as described in the following sections.

Configure firewall rules

If you're migrating from a source platform other than Compute Engine, you'll need to create two firewall rules for the subnets that hold the GKE clusters used to migrate your workloads. Add the following firewall rules in the Google Cloud console.

| Type | Source | Targets | Protocol | Port |
| --- | --- | --- | --- | --- |
| Ingress | VPN subnet or cluster network tag (for example, fw-workload, as described in the Migrate to VMs networking setup). This is for the GKE cluster nodes. | Migrate to VMs Cloud Extension subnet on Google Cloud, or the Cloud Extension nodes network tag (for example, fw-migration-cloud-extension, as described in the Migrate for Compute Engine networking setup). | iSCSI | TCP/3260 |
| Ingress | VPN subnet or cluster network tag (for example, fw-workload, as described in the Migrate to VMs networking setup). This is for the GKE cluster nodes. For this rule, or as an additional rule, also add as a network source the Pod IP ranges configured on the GKE cluster. | Migrate to Virtual Machines Manager subnet on Google Cloud, or its network tag (for example, fw-migration-manager, as described in the Migrate for Compute Engine networking setup). | HTTPS | TCP/443 |
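As a sketch, the first rule in the table above could also be created with the gcloud CLI instead of the console. The rule name is a hypothetical placeholder, and the network name and tags (fw-workload, fw-migration-cloud-extension) are the illustrative values from the examples above; substitute your own:

```shell
# Allow iSCSI (TCP/3260) from the cluster nodes to the Cloud Extension nodes.
# Rule name, network, and tags are placeholders from the examples above.
gcloud compute firewall-rules create allow-migration-iscsi \
    --project project-name \
    --network network \
    --direction INGRESS \
    --action ALLOW \
    --rules tcp:3260 \
    --source-tags fw-workload \
    --target-tags fw-migration-cloud-extension
```

A similar rule with `--rules tcp:443` and the fw-migration-manager target tag would cover the second row of the table.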

Set your project defaults

Before you start, make sure you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI.
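For example, assuming the gcloud CLI is already installed and initialized, both tasks can be done from the command line (the project ID is a placeholder):

```shell
# Set the default project (placeholder project ID) and
# enable the Google Kubernetes Engine API in that project.
gcloud config set project project-name
gcloud services enable container.googleapis.com
```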

Create a cluster for Linux workloads

Use the command shown below to create a zonal cluster for use as a processing cluster. Be sure to use VPCs that are running your Migrate to Containers installation or connected through a shared VPC.

Migrate to Containers supports only certain operating systems for nodes. Use the Ubuntu node image if your nodes require support for XFS, CephFS, or Debian packages.
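If your nodes do need XFS, CephFS, or Debian package support, the only change to the cluster-creation command shown later is the image-type flag, along the lines of:

```shell
# Use the Ubuntu node image instead of Container-Optimized OS:
--image-type "ubuntu_containerd"
```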

You can also use the Google Cloud console to create a cluster.

About clusters and Compute Engine sources

A migration source represents the source platform from which you'll be migrating, such as VMware or Compute Engine. As part of the migration, Migrate to Containers creates a snapshot of the disk images from a VM running on the source platform.

When the source platform is Compute Engine, you can only snapshot a specific disk at most once every 10 minutes, or 6 times an hour. To avoid hitting this limit, we recommend that you create the cluster in the same zone as the Compute Engine VM. When the cluster is in the same zone as the VM, Migrate to Containers can clone the disk, rather than create a snapshot, which is a more efficient process and bypasses the snapshot limit.

See Creating frequent snapshots efficiently for more.
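To place the processing cluster in the same zone as the source VM, you can first look up the VM's zone. The instance name below is a hypothetical placeholder:

```shell
# Print the zone of the source Compute Engine VM (instance name is illustrative).
# Use this zone as the --zone value when creating the processing cluster.
gcloud compute instances list \
    --filter="name=my-source-vm" \
    --format="value(zone)"
```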

Creating the cluster

The example below creates a simple processing cluster that can get you started using Migrate to Containers. To learn more about the different ways you can configure your cluster, see About cluster configuration choices.


The gcloud command offers many configuration options that you might want to set. These include choosing node machine types, specifying the --network and --subnetwork, and enabling Alias IP addresses. See gcloud container clusters create for more information.

  1. Create the cluster:

    gcloud container clusters create cluster-name \
     --project project-name \
     --zone gcp-zone \
     --num-nodes 1 \
     --machine-type "e2-standard-4" \
     --image-type "cos_containerd" \
     --network network \
     --subnetwork subnetwork

    Edit the parameters in the command examples to match your own needs. For example, replace project-name with the name of your Google Cloud project and gcp-zone with your compute zone.

    We recommend a machine-type of "e2-standard-4" or larger.

  2. Connect to the cluster:

    gcloud container clusters get-credentials cluster-name \
     --zone gcp-zone --project project-name
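After fetching credentials, you can confirm that kubectl is pointed at the new cluster, for example:

```shell
# Show the active kubeconfig context, then verify the node is Ready.
kubectl config current-context
kubectl get nodes
```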

Creating a cluster on a shared VPC

A common configuration is to create clusters in a Shared VPC. With Shared VPC, you designate one project as the host project, and you can attach other projects, called service projects, to the host project.

To learn how to create a cluster on a shared VPC, see Setting up clusters with Shared VPC.
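When the cluster lives in a service project, the --network and --subnetwork flags reference the host project's resources by their full paths. As a sketch, with hypothetical project, network, and subnetwork names:

```shell
# Create the processing cluster in a service project, using a
# Shared VPC network owned by the host project (names are placeholders).
gcloud container clusters create cluster-name \
    --project service-project-name \
    --zone gcp-zone \
    --num-nodes 1 \
    --network projects/host-project-name/global/networks/shared-net \
    --subnetwork projects/host-project-name/regions/gcp-region/subnetworks/shared-subnet
```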

Creating a private cluster

Private clusters isolate nodes from inbound and outbound connectivity to the public internet. This isolation is possible because the nodes have internal IP addresses only.

Migrate to Containers supports the use of private clusters, and automatically selects a port that is always open. Therefore, you don't need to add a port number to the firewall rules of the control plane node.
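As a sketch, a private processing cluster might be created with flags such as the following. The control-plane CIDR range is illustrative, and private clusters require VPC-native (alias IP) networking:

```shell
# Create a private processing cluster: nodes get internal IPs only.
# The --master-ipv4-cidr range is a placeholder; choose a /28 that
# doesn't overlap your VPC subnets.
gcloud container clusters create cluster-name \
    --project project-name \
    --zone gcp-zone \
    --num-nodes 1 \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28
```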

What's next