
Deploy a Linux workload to a target cluster

After you have migrated a workload from your source platform, you can use the deployment artifacts generated by that process to deploy the migrated workload container to another cluster.

Skaffold can handle the workflow for building, pushing, and deploying your application. For more information, see Build and deploy multiple images using Skaffold.

Before you begin

Before deploying your workload, you must first have completed a migration from your source platform and generated the deployment artifacts for the workload.

Deploy workloads on the processing cluster

You can deploy the migrated workload on the same cluster as you used to perform the migration, referred to as the Migrate to Containers processing cluster. In most situations you do not have to perform any additional configuration on the processing cluster because the cluster already requires read/write access to the Docker registry to perform a migration.

However, if you are using ECR as the Docker image registry with Anthos clusters on AWS, then you must grant the AWS node pool read access to the registry before you can deploy the workload. See Deploy on a target cluster using ECR as the Docker registry below for more.

Deploy on a target cluster using GCR as the Docker registry

To ensure that a target cluster has access to the Google Container Registry (GCR), create a Kubernetes secret that contains the credentials required to access GCR:

  1. Create a service account for deploying a migration as described in Creating a service account for accessing Container Registry and Cloud Storage.

    This process has you download a JSON key file named m4a-install.json.

  2. Create a Kubernetes secret that contains the credentials required to access GCR:

    kubectl create secret docker-registry gcr-json-key \
     --docker-server=gcr.io --docker-username=_json_key --docker-password="$(cat ~/m4a-install.json)" \
     --docker-email=account@project.iam.gserviceaccount.com

    where:

    • gcr-json-key specifies the name of the Kubernetes secret; docker-registry is the secret type.
    • docker-server=gcr.io specifies GCR as the server.
    • docker-username=_json_key specifies that the username is contained in the JSON key file.
    • docker-password specifies that the contents of the JSON key file are used as the password.
    • docker-email specifies the email address of the service account.
  3. Configure your deployment to use the Kubernetes secret by either:

    • Changing the default imagePullSecrets value:

      kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "gcr-json-key"}]}'
    • Editing the deployment_spec.yaml file to add the imagePullSecrets value to the spec.template.spec definition, as shown below:

      spec:
        containers:
        - image: gcr.io/PROJECT_ID/mycontainer-instance:v1.0.0
          name: mycontainer-instance
          ...
        volumes:
        - hostPath:
            path: /sys/fs/cgroup
            type: Directory
          name: cgroups
        imagePullSecrets:
        - name: gcr-json-key
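Whichever approach you use, it can be worth confirming that the secret exists and is wired up before you deploy. A quick check with kubectl, assuming the secret was created in the default namespace as above:

```shell
# Confirm the secret exists and has the expected type
kubectl get secret gcr-json-key -o jsonpath='{.type}'
# Should print: kubernetes.io/dockerconfigjson

# If you patched the default service account, confirm the reference
kubectl get serviceaccount default -o jsonpath='{.imagePullSecrets[*].name}'
```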

Deploy on a target cluster using ECR as the Docker registry

If your Anthos clusters on AWS use ECR as the registry, then you must grant the cluster read access to the registry before deploying the workload.

When creating Anthos clusters on AWS, you can edit the staging-nodepools.yaml file to customize the node pool definition for the cluster. See Creating a custom user cluster for more.

The staging-nodepools.yaml file contains the iamInstanceProfile property that specifies the name of the AWS EC2 instance profile assigned to nodes in the pool:

iamInstanceProfile: NODE_IAM_PROFILE

Ensure that the specified profile has the AmazonEC2ContainerRegistryReadOnly role to enable read access of ECR.
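If you manage the role behind that instance profile with the AWS CLI, one way to attach the policy is with attach-role-policy. NODE_IAM_ROLE here is a placeholder for the IAM role associated with the instance profile:

```shell
# Attach the AWS-managed read-only ECR policy to the node role
aws iam attach-role-policy \
    --role-name NODE_IAM_ROLE \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
```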

Deploy on a target cluster using a Docker registry with basic authentication

If you use a Docker registry to store your migration images, then the registry must support basic authentication using a username and password. Because there are many ways to configure a read-only connection to a Docker registry, you should use the method appropriate for your cluster platform and Docker registry.
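As a sketch of one common setup, you can store the basic-authentication credentials in a docker-registry secret and reference it from the default service account, mirroring the GCR example above. The registry address, username, and password below are placeholders:

```shell
# Create a secret holding the registry credentials (placeholder values)
kubectl create secret docker-registry registry-credentials \
    --docker-server=registry.example.com \
    --docker-username=REGISTRY_USER \
    --docker-password=REGISTRY_PASSWORD

# Reference the secret from the default service account
kubectl patch serviceaccount default \
    -p '{"imagePullSecrets": [{"name": "registry-credentials"}]}'
```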

Apply generated deployment YAML file

Use kubectl to apply the deployment spec to your target cluster, such as a production cluster.

kubectl

  1. Ensure that the target cluster has read access to the Docker image registry, as described in the sections above.

  2. Deploy the container:

    kubectl apply -f deployment_spec.yaml
  3. After you complete validation testing to ensure that the migrated workload is functioning correctly, delete the migration to free up resources. See Deleting a migration for more information.
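After applying the spec, you can watch the rollout to confirm the workload started correctly. DEPLOYMENT_NAME and the app label below are placeholders for whatever names your generated deployment_spec.yaml uses:

```shell
# Wait until the deployment reports all replicas available
kubectl rollout status deployment/DEPLOYMENT_NAME

# Inspect the pods created for the workload
kubectl get pods -l app=DEPLOYMENT_NAME
```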

Delete a migration

After you validate and test the migrated workload to ensure that it is functioning correctly, you should delete the migration. Deleting the migration frees up any resources used by the migration.

migctl

  1. Delete a completed migration by using the following command:

    migctl migration delete MIGRATION_NAME

    Where MIGRATION_NAME is the name of the migration.

Console

  1. Open the Migrate to Containers page in the Google Cloud console.

    Go to the Migrate to Containers page

  2. Click the Migrations tab to display a table containing the available migrations.

  3. For the migration to delete, click the trash icon on the right side of the table, and then select Delete migration.

Configure AWS workload identity

Workload identity for Anthos clusters on AWS lets you bind Kubernetes service accounts to AWS IAM accounts with specific permissions. Workload identity uses AWS IAM permissions to block unwanted access to cloud resources.

With workload identity, you can assign different IAM roles to each workload. This fine-grained control of permissions lets you follow the principle of least privilege.

Use workload identity with Migrate to Containers

Migrate to Containers lets you deploy your migrated workloads to Anthos clusters on AWS. If you have enabled workload identity on your deployment cluster, then you have to ensure that you configure your deployment environment correctly to support Migrate to Containers.

You must set two environment variables for services using workload identity:

  • AWS_ROLE_ARN: The Amazon Resource Name (ARN) of the IAM role.
  • AWS_WEB_IDENTITY_TOKEN_FILE: The path where the token is stored.

The steps you perform depend on the init system used by your migrated workload. See below for the specific steps for systemd, SysV, and Upstart.

Configure using kubectl exec

The sections below describe how to configure a service for workload identity that is started by the init system. To run commands that use workload identity in a container shell opened with kubectl exec, you must first run the following command in the deployed pod:

ln -s /kubernetes-info/secrets /var/run/secrets

If the /var/run mount gets deleted (by a process in the pod, or by a pod reset), you might have to run the command again.
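If you script this step, a guarded form avoids failing when the link already exists; this is a minimal sketch using the same paths as the command above:

```shell
# Recreate the symlink only if it is missing (for example, after a pod reset)
if [ ! -e /var/run/secrets ]; then
    ln -s /kubernetes-info/secrets /var/run/secrets
fi
```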

Configure the init system

Perform the following steps for your specific init system.

Configure systemd

systemd supports setting environment variables for all spawned services. You can either add a configuration file using the Dockerfile, or you can automate it in the outer container.

For example, edit the /etc/systemd/system.conf.d/10-default-env.conf configuration file to set the environment variables:

[Manager]
DefaultEnvironment="AWS_ROLE_ARN=ROLE_ARN" "AWS_WEB_IDENTITY_TOKEN_FILE=/kubernetes-info/secrets/eks.amazonaws.com/serviceaccount/token"

Where ROLE_ARN specifies the Amazon Resource Name (ARN) of the IAM role as you set it for AWS_ROLE_ARN when you enabled workload identity.

Alternatively, edit the Dockerfile to add the following:

RUN mkdir -p /etc/systemd/system.conf.d/
RUN echo '[Manager]' >> /etc/systemd/system.conf.d/10-default-env.conf
RUN echo 'DefaultEnvironment="AWS_ROLE_ARN=ROLE_ARN" "AWS_WEB_IDENTITY_TOKEN_FILE=/kubernetes-info/secrets/eks.amazonaws.com/serviceaccount/token" '>> /etc/systemd/system.conf.d/10-default-env.conf

Configure SysV/Upstart for services not running as root

For systems using SysV/Upstart where the services are not running as root, you can use pam_env to set the environment variables:

  1. Add the following to /etc/pam.d/su:

    session required pam_env.so

  2. Add the following to /etc/environment:

    AWS_ROLE_ARN=ROLE_ARN
    AWS_WEB_IDENTITY_TOKEN_FILE=/kubernetes-info/secrets/eks.amazonaws.com/serviceaccount/token

    Where ROLE_ARN specifies the Amazon Resource Name (ARN) of the IAM role as you set it for AWS_ROLE_ARN when you enabled workload identity.

Alternatively, edit the Dockerfile to add the following:

RUN echo "session        required      pam_env.so" >> /etc/pam.d/su
RUN echo 'AWS_ROLE_ARN=ROLE_ARN' >> /etc/environment
RUN echo 'AWS_WEB_IDENTITY_TOKEN_FILE=/kubernetes-info/secrets/eks.amazonaws.com/serviceaccount/token' >> /etc/environment

Configure SysV/Upstart for services without a new user

To manually configure a SysV/Upstart service, set the following environment variables:

AWS_ROLE_ARN=ROLE_ARN
AWS_WEB_IDENTITY_TOKEN_FILE=/kubernetes-info/secrets/eks.amazonaws.com/serviceaccount/token

Where ROLE_ARN specifies the Amazon Resource Name (ARN) of the IAM role as you set it for AWS_ROLE_ARN when you enabled workload identity.

Set these environment variables based on your service type:

  1. SysV: Add export lines to your initialization script to set AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE.

  2. Upstart: See Environment Variables for more on setting environment variables, such as AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE.
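For a SysV service, the export lines might look like the following excerpt; /etc/init.d/myservice is a hypothetical script name, and ROLE_ARN is as above:

```shell
# Excerpt from /etc/init.d/myservice (hypothetical service script)
export AWS_ROLE_ARN=ROLE_ARN
export AWS_WEB_IDENTITY_TOKEN_FILE=/kubernetes-info/secrets/eks.amazonaws.com/serviceaccount/token
```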

Grant access to the service token

Regardless of your init system, for services that are not running as root, you must grant access to the service token:

  1. Add the following line to the Dockerfile:

    RUN groupadd aws-token --gid ID_OF_CREATED_GROUP && usermod -a -G aws-token SERVICE_USER_NAME 

    Where ID_OF_CREATED_GROUP is an ID you choose, such as 1234, and SERVICE_USER_NAME is the user that runs the service.

  2. Add the following to the deployment spec:

    securityContext: 
        fsGroup: ID_OF_CREATED_GROUP

See Configure a Security Context for a Pod or Container for more information.

What's next

Linux

Windows

Tomcat

WebSphere (Pre-GA)