Deploying a Linux workload to a target cluster

After you have migrated a workload from your source platform, you can use the deployment artifacts generated by that process to deploy the migrated workload container to another cluster.

Before you begin

Before deploying your workload, you must first complete the prerequisites described in the following sections.

Ensure the target cluster has read access to the Docker registry

As part of performing a migration, Migrate for Anthos uploads Docker images representing the migrated VM to a Docker registry. These images contain the files and directories of the migrated VM.

For the Docker registry, you can choose to use:

  • Google Container Registry (GCR)
  • AWS Elastic Container Registry (ECR)
  • A Docker registry that supports basic authentication

See Defining data repositories for more.

To deploy a migrated workload to a target cluster, use the following command:

kubectl apply -f deployment_spec.yaml

Where deployment_spec.yaml is the YAML file that contains the deployment information for your migrated workload. Included in deployment_spec.yaml is the containers property, which specifies the location of the Docker image and registry.

For example:

      containers:
      - image: gcr.io/PROJECT_ID/quickstart-instance
        name: quickstart-instance

In this example, deployment_spec.yaml specifies a Docker image stored in the Google Container Registry (GCR), where PROJECT_ID is the ID of the Google Cloud project containing the registry.

Before you can deploy a migrated workload to a target cluster, you must ensure that the cluster has read access to the Docker registry. The following sections describe how to enable access to the different types of registries.

Deploying workloads on the processing cluster

You can deploy the migrated workload on the same cluster as you used to perform the migration, referred to as the Migrate for Anthos processing cluster. In most situations you do not have to perform any additional configuration on the processing cluster because the cluster already requires read/write access to the Docker registry to perform a migration.

However, if you are using ECR as the Docker image registry with Anthos clusters on AWS, then you must enable read access to the AWS node pool before you can deploy the workload. See Deploying on a target cluster using ECR as the Docker registry below for more.

Deploying on a target cluster using GCR as the Docker registry

To ensure that a target cluster has access to the Google Container Registry (GCR), create a Kubernetes secret that contains the credentials required to access GCR:

  1. Create a service account for deploying a migration as described in Creating a service account for accessing Container Registry and Cloud Storage.

    This process has you download a JSON key file named m4a-install.json.

  2. Create a Kubernetes secret that contains the credentials required to access GCR:

    kubectl create secret docker-registry gcr-json-key \
      --docker-server=gcr.io \
      --docker-username=_json_key \
      --docker-password="$(cat ~/m4a-install.json)" \
      --docker-email=SERVICE_ACCOUNT_EMAIL

    Where SERVICE_ACCOUNT_EMAIL is the email address of the service account created in step 1, and:


    • docker-registry specifies the name of the Kubernetes secret, gcr-json-key in this example.
    • docker-server=gcr.io specifies GCR as the server.
    • docker-username=_json_key specifies that the username is contained in the JSON key file.
    • docker-password specifies to use a password from the JSON key file.
    • docker-email specifies the email address of the service account.
  3. Set the Kubernetes secret by either:

    • Changing the default imagePullSecrets value:

      kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "gcr-json-key"}]}'
    • Editing the deployment_spec.yaml file to add the imagePullSecrets value to the spec.template.spec definition, as shown below:

        spec:
          containers:
          - image: gcr.io/PROJECT_ID/mycontainer-instance
            name: mycontainer-instance
          volumes:
          - hostPath:
              path: /sys/fs/cgroup
              type: Directory
            name: cgroups
          imagePullSecrets:
          - name: gcr-json-key
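Under the hood, kubectl stores these credentials in the secret's .dockerconfigjson field as a base64-encoded username:password pair. The following sketch illustrates that encoding locally, using a stand-in string in place of a real m4a-install.json key file:

```shell
# Stand-in for the contents of m4a-install.json (not a real key).
KEY='{"type": "service_account", "project_id": "my-project"}'

# kubectl encodes "username:password" with base64 in the secret's auth field.
AUTH=$(printf '_json_key:%s' "$KEY" | base64 | tr -d '\n')

# Decoding the auth value recovers the original credential pair.
DECODED=$(printf '%s' "$AUTH" | base64 -d)
echo "$DECODED"
```

At image pull time, the cluster decodes this value and presents it to the registry as basic-auth credentials.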

Deploying on a target cluster using ECR as the Docker registry

If your Anthos clusters on AWS use ECR as the registry, then you must grant the cluster read access to the registry.

When creating Anthos clusters on AWS, you can edit the staging-nodepools.yaml file to customize the node pool definition for the cluster. See Creating a custom user cluster for more.

The staging-nodepools.yaml file contains the iamInstanceProfile property that specifies the name of the AWS EC2 instance profile assigned to nodes in the pool:

iamInstanceProfile: NODE_IAM_PROFILE

Ensure that the specified profile has the AmazonEC2ContainerRegistryReadOnly policy attached to enable read access to ECR.
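For example, assuming you have the AWS CLI configured, and where NODE_IAM_ROLE is a placeholder for the IAM role associated with the instance profile named in staging-nodepools.yaml, you could attach the managed policy with a command like:

```shell
# Attach AWS's managed read-only ECR policy to the node role.
# NODE_IAM_ROLE is a placeholder for the role behind the iamInstanceProfile.
aws iam attach-role-policy \
  --role-name NODE_IAM_ROLE \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
```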

Deploying on a target cluster using a Docker registry with basic authentication

If you use a Docker registry to store your migration images, then the registry must support basic authentication using a username and password. Because there are many ways to configure a read-only connection to a Docker registry, you should use the method appropriate for your cluster platform and Docker registry.

Deploying a workload to a GCP project other than the one used for migration

Often you will have multiple Google Cloud projects in your environment. If you perform a migration in one GCP project, but then want to deploy the migrated workload to a cluster in a different project, you must ensure that you have the permissions configured correctly.

For example, suppose you perform the migration in project A. In this case, the migrated workload is copied into a GCR bucket in project A.

You then want to deploy the workload to a cluster in project B. If you do not configure permissions correctly, the workload pod fails to run because the cluster in project B has no image pull access to project A. You then see an event on the pod containing a message of the form:

Failed to pull image "IMAGE_NAME" ...
pull access denied ...
repository does not exist or may require 'docker login' ...

All projects that have enabled the Compute Engine API have a Compute Engine default service account, which has the following email address:

PROJECT_NUMBER-compute@developer.gserviceaccount.com

Where PROJECT_NUMBER is the project number for project B.

To work around this issue, ensure that the Compute Engine default service account for project B has the necessary permissions to access the GCR bucket. For example, you can use the following gsutil command to grant the service account read access to the bucket:

gsutil iam ch serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com:objectViewer gs://BUCKET_NAME

Where BUCKET_NAME is the name of the GCR bucket in project A.
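You can confirm the resulting bucket policy with gsutil, where BUCKET_NAME is again a placeholder for the GCR bucket name in project A:

```shell
# Inspect the IAM policy on the bucket and check that it includes
# the Compute Engine default service account of project B.
gsutil iam get gs://BUCKET_NAME
```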

Apply the generated deployment YAML file

Use kubectl to apply the deployment spec to your target cluster, such as a production cluster.


  1. Ensure that the target cluster has read access to the Docker image registry as described above in Ensure the target cluster has read access to the Docker registry.

  2. Deploy the container:

    kubectl apply -f deployment_spec.yaml
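After applying the spec, you can verify the rollout with standard kubectl commands; POD_NAME here is a placeholder for the pod created from your deployment_spec.yaml:

```shell
# List pods and confirm the migrated workload reaches the Running state.
kubectl get pods

# If the pod is stuck in ImagePullBackOff, inspect its events for
# registry access errors.
kubectl describe pod POD_NAME
```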

Deleting a migration

After you validate and test the migrated workload to ensure that it is functioning correctly, delete the migration. Deleting the migration frees up the resources it uses.


  1. Delete a completed migration by using the following migctl command:

    migctl migration delete MIGRATION_NAME

    Where MIGRATION_NAME is the name of the migration.
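If you do not remember the migration name, you can list existing migrations (assuming the same migctl CLI used to create the migration):

```shell
# Show the available migrations and their status.
migctl migration list
```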


Alternatively, you can delete a migration from the Cloud Console:

  1. Open the Migrate for Anthos page in the Cloud Console.

    Go to the Migrate for Anthos page

  2. Click the Migrations tab to display a table containing the available migrations.

  3. For the migration to delete, click the three-dot icon on the right side of the table, and then select Delete migration.

Next Steps