Deploying a Linux workload to a target cluster
After you have migrated a workload from your source platform, you can use the deployment artifacts generated by that process to deploy the migrated workload container to another cluster.
Before you begin
Before deploying your workload, you should first have:
- Migrated the workload using Migrate to Containers tools.
- Reviewed the generated deployment files.
Before you run your migrated workloads, you must install migctl with runtime support for Container-Optimized OS nodes on your cluster:
migctl setup install --runtime
Ensure the target cluster has read access to the Docker registry
As part of performing a migration, Migrate to Containers uploads Docker images representing a migrated VM to a Docker registry. These Docker images represent the files and directories of the migrating VM.
For the Docker registry you can choose to use:
- Google Container Registry (GCR)
- AWS Elastic Container Registry (ECR)
- Any Docker registry that supports basic authentication
See Defining data repositories for more.
To deploy a migrated workload to a target cluster, use the following command:
kubectl apply -f deployment_spec.yaml
Where deployment_spec.yaml is the YAML file that contains the deployment information for your migrated workload. Included in deployment_spec.yaml is the containers property, which specifies the location of the Docker image and registry.
For example:
spec:
  containers:
  - image: gcr.io/PROJECT_ID/quickstart-instance:v1.0.0
    name: quickstart-instance
In this example, deployment_spec.yaml specifies that the Docker image is stored in Google Container Registry (GCR).
Before you can deploy a migrated workload to a target cluster, you must ensure that the cluster has read access to the Docker registry. The following sections describe how to enable access to the different types of registries.
Deploying workloads on the processing cluster
You can deploy the migrated workload on the same cluster as you used to perform the migration, referred to as the Migrate to Containers processing cluster. In most situations you do not have to perform any additional configuration on the processing cluster because the cluster already requires read/write access to the Docker registry to perform a migration.
However, if you are using ECR as the Docker image registry with Anthos clusters on AWS, then you must enable read access to the AWS node pool before you can deploy the workload. See Deploying on a target cluster using ECR as the Docker registry below for more.
Deploying on a target cluster using GCR as the Docker registry
To ensure that a target cluster has access to the Google Container Registry (GCR), create a Kubernetes secret that contains the credentials required to access GCR:
Create a service account for deploying a migration, as described in Creating a service account for accessing Container Registry and Cloud Storage. This process has you download a JSON key file named m4a-install.json.
Create a Kubernetes secret that contains the credentials required to access GCR:
kubectl create secret docker-registry gcr-json-key \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat ~/m4a-install.json)" \
  --docker-email=account@project.iam.gserviceaccount.com
where:
- docker-registry specifies the type of the Kubernetes secret, and gcr-json-key is the name of the secret in this example.
- docker-server=gcr.io specifies GCR as the server.
- docker-username=_json_key specifies that the username is contained in the JSON key file.
- docker-password specifies to use a password from the JSON key file.
- docker-email specifies the email address of the service account.
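As an optional sanity check, not part of the documented procedure, you can confirm the secret was created and inspect the registry configuration it stores:

# Confirm the secret exists and is of type kubernetes.io/dockerconfigjson
kubectl get secret gcr-json-key

# Optionally decode the stored registry configuration
kubectl get secret gcr-json-key --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode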
Set the Kubernetes secret by either:
- Changing the default imagePullSecrets value:
  kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "gcr-json-key"}]}'
- Editing the deployment_spec.yaml file to add the imagePullSecrets value to the spec.template.spec definition, as shown below:
  spec:
    containers:
    - image: gcr.io/PROJECT_ID/mycontainer-instance:v1.0.0
      name: mycontainer-instance
    ...
    volumes:
    - hostPath:
        path: /sys/fs/cgroup
        type: Directory
      name: cgroups
    imagePullSecrets:
    - name: gcr-json-key
Deploying on a target cluster using ECR as the Docker registry
If your Anthos clusters on AWS use ECR as the registry, then you can grant the cluster read access to the registry.
When creating Anthos clusters on AWS, you can edit the staging-nodepools.yaml file to customize the node pool definition for the cluster. See Creating a custom user cluster for more.
The staging-nodepools.yaml file contains the iamInstanceProfile property, which specifies the name of the AWS EC2 instance profile assigned to nodes in the pool:
iamInstanceProfile: NODE_IAM_PROFILE
Ensure that the specified profile has the AmazonEC2ContainerRegistryReadOnly policy attached to enable read access to ECR.
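If the policy is not already attached, one way to add it is with the AWS CLI. This is a minimal sketch; NODE_IAM_ROLE is a placeholder for the IAM role associated with your node instance profile:

# Attach the AWS managed read-only ECR policy to the node role (NODE_IAM_ROLE is a placeholder)
aws iam attach-role-policy \
  --role-name NODE_IAM_ROLE \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly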
Deploying on a target cluster using a Docker registry with basic authentication
If you use a Docker registry to store your migration images, then the registry must support basic authentication using a username and password. Because there are many ways to configure a read-only connection to a Docker registry, use the method appropriate for your cluster platform and Docker registry.
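As one possible approach, assuming your cluster supports Kubernetes image pull secrets, you could create a docker-registry secret with the registry's credentials and reference it from the default service account. The registry URL, username, and password below are placeholders:

# Create an image pull secret for a registry that uses basic authentication (values are placeholders)
kubectl create secret docker-registry my-registry-key \
  --docker-server=registry.example.com \
  --docker-username=REGISTRY_USERNAME \
  --docker-password=REGISTRY_PASSWORD

# Reference the secret from the default service account so pods can pull from the registry
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "my-registry-key"}]}'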
Deploying a workload to a GCP project other than the one used for migration
Often you will have multiple Google Cloud projects in your environment. If you perform a migration in one GCP project, but then want to deploy the migrated workload to a cluster in a different project, you must ensure that you have the permissions configured correctly.
For example, you perform the migration in project A. In this case, the migrated workload is copied into a GCR bucket in project A. For example:
gcr.io/project-a/image_name:image_tag
You then want to deploy the workload to a cluster in project B. If you do not configure permissions correctly, the workload pod fails to run because the cluster in project B has no image pull access to project A. You then see an event on the pod containing a message in the form:
Failed to pull image "gcr.io/project-a/image_name:image_tag... pull access denied... repository does not exist or may require 'docker login'...
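To see this event, you can describe the failing pod; POD_NAME here is a placeholder for the name of the deployed workload pod:

# Show recent events for the pod, including image pull failures (POD_NAME is a placeholder)
kubectl describe pod POD_NAME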
All projects that have enabled the Compute Engine API have a Compute Engine default service account, which has the following email address:
PROJECT_NUMBER-compute@developer.gserviceaccount.com
Where PROJECT_NUMBER is the project number for project B.
To work around this issue, ensure that the Compute Engine default service account for project B has the necessary permissions to access the GCR bucket. For example, you can use the following gsutil command to enable access:
gsutil iam ch serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com:objectViewer gs://artifacts.project-a.appspot.com
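As an optional check, you can list the bucket's IAM policy to confirm the binding was added:

# Display the IAM policy on the Container Registry storage bucket for project A
gsutil iam get gs://artifacts.project-a.appspot.com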
Apply the generated deployment YAML file
Use kubectl to apply the deployment spec to your target cluster, such as a production cluster.
Ensure that the target cluster has read access to the Docker registry, as described above in Ensure the target cluster has read access to the Docker registry.
Deploy the container:
kubectl apply -f deployment_spec.yaml
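After applying the spec, you can optionally verify that the workload started; the pod names depend on your generated deployment_spec.yaml:

# Check that the migrated workload's pods reach the Running state
kubectl get pods

# Review cluster events if the pods do not start as expected
kubectl get events --sort-by=.metadata.creationTimestamp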
After you complete validation testing and confirm that the migrated workload is functioning correctly, delete the migration to free up resources. See Deleting a migration for more information.
Deleting a migration
After you validate and test the migrated workload to ensure that it is functioning correctly, you should delete the migration. Deleting the migration frees up any resources used by the migration.
migctl
Delete a completed migration by using the following command:
migctl migration delete MIGRATION_NAME
Where MIGRATION_NAME is the name of the migration.
Console
Open the Migrate to Containers page in the Google Cloud console.
Click the Migrations tab to display a table containing the available migrations.
For the migration to delete, click the trash icon on the right side of the table, and then select Delete migration.
Configuring AWS workload identity
Workload identity for Anthos clusters on AWS lets you bind Kubernetes service accounts to AWS IAM accounts with specific permissions. Workload identity uses AWS IAM permissions to block unwanted access to cloud resources.
With workload identity, you can assign different IAM roles to each workload. This fine-grained control of permissions lets you follow the principle of least privilege.
Using workload identity with Migrate to Containers
Migrate to Containers lets you deploy your migrated workloads to Anthos clusters on AWS. If you have enabled workload identity on your deployment cluster, then you have to ensure that you configure your deployment environment correctly to support Migrate to Containers.
You must set two environment variables for services using workload identity:
- AWS_ROLE_ARN: The Amazon Resource Name (ARN) of the IAM role.
- AWS_WEB_IDENTITY_TOKEN_FILE: The path where the token is stored.
The steps you perform depend on the init system used by your migrated workload. See below for the specific steps for systemd, SysV, and Upstart.
Configure using kubectl exec
The sections below describe how to configure a service that is started by the init system to use workload identity.
To run commands that use workload identity in a container shell opened with kubectl exec, you must first run the following command on the deployed pod:
ln -s /kubernetes-info/secrets /var/run/secrets
If the /var/run mount gets deleted (by a process in the pod, or by a pod reset), you might have to run the command again.
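For example, you could run the link command from outside the pod with kubectl exec; POD_NAME is a placeholder for the name of the deployed pod:

# Recreate the symlink to the mounted service account secrets inside the running pod (POD_NAME is a placeholder)
kubectl exec POD_NAME -- ln -s /kubernetes-info/secrets /var/run/secrets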
Configuring the init system
Perform the steps below for your specific init system.
Configuring systemd
systemd supports setting environment variables for all spawned services. You can either add a configuration file using the Dockerfile, or you can automate it in the outer container.
For example, edit the /etc/systemd/system.conf.d/10-default-env.conf configuration file to set the environment variables:
[Manager]
DefaultEnvironment="AWS_ROLE_ARN=ROLE_ARN" "AWS_WEB_IDENTITY_TOKEN_FILE=/kubernetes-info/secrets/eks.amazonaws.com/serviceaccount/token"
Where ROLE_ARN specifies the Amazon Resource Name (ARN) of the IAM role, as you set it for AWS_ROLE_ARN when you enabled workload identity.
Alternatively, edit the Dockerfile to add the following:
RUN mkdir -p /etc/systemd/system.conf.d/
RUN echo '[Manager]' >> /etc/systemd/system.conf.d/10-default-env.conf
RUN echo 'DefaultEnvironment="AWS_ROLE_ARN=ROLE_ARN" "AWS_WEB_IDENTITY_TOKEN_FILE=/kubernetes-info/secrets/eks.amazonaws.com/serviceaccount/token"' >> /etc/systemd/system.conf.d/10-default-env.conf
Configuring SysV/Upstart for services not running as root
For systems using SysV/Upstart where the services are not running as root, you can use pam_env to set the environment variables:
- Add the following to /etc/pam.d/su:
  session required pam_env.so
- Add the following to /etc/environment:
  AWS_ROLE_ARN=ROLE_ARN
  AWS_WEB_IDENTITY_TOKEN_FILE=/kubernetes-info/secrets/eks.amazonaws.com/serviceaccount/token
Where ROLE_ARN specifies the Amazon Resource Name (ARN) of the IAM role, as you set it for AWS_ROLE_ARN when you enabled workload identity.
Alternatively, edit the Dockerfile to add the following:
RUN echo "session required pam_env.so" >> /etc/pam.d/su RUN echo 'AWS_ROLE_ARN=ROLE_ARN' >> /etc/environment RUN echo 'AWS_WEB_IDENTITY_TOKEN_FILE=/kubernetes-info/secrets/eks.amazonaws.com/serviceaccount/token' >> /etc/environment
Configuring SysV/Upstart for services without a new user
For SysV/Upstart services that do not run as a new user, you must manually configure the service to set:
AWS_ROLE_ARN=ROLE_ARN
AWS_WEB_IDENTITY_TOKEN_FILE=/kubernetes-info/secrets/eks.amazonaws.com/serviceaccount/token
Where ROLE_ARN specifies the Amazon Resource Name (ARN) of the IAM role, as you set it for AWS_ROLE_ARN when you enabled workload identity.
Set these environment variables based on your service type:
- SysV: Add export lines to your initialization script to set AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE, as shown in the sketch after this list.
- Upstart: See Environment Variables for more on setting environment variables, such as AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE.
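As a minimal sketch for the SysV case, an init script could export the variables before starting the service. The script path and service command below are hypothetical placeholders, and ROLE_ARN must match the value you set when you enabled workload identity:

#!/bin/sh
# Example fragment of a SysV init script (hypothetical /etc/init.d/my-service)
# that exports the workload identity variables before launching the service.
export AWS_ROLE_ARN=ROLE_ARN
export AWS_WEB_IDENTITY_TOKEN_FILE=/kubernetes-info/secrets/eks.amazonaws.com/serviceaccount/token

case "$1" in
  start)
    /usr/local/bin/my-service &   # placeholder for the actual service command
    ;;
esac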
Accessing the service token from non-root inside the container
Regardless of your init system, for services that are not running as root, you must grant access to the service token:
Add the following line to the Dockerfile:
RUN groupadd aws-token --gid ID_OF_CREATED_GROUP && usermod -a -G aws-token SERVICE_USER_NAME
Where ID_OF_CREATED_GROUP is an ID you choose, such as 1234.
Add the following to the deployment spec:
securityContext:
  fsGroup: ID_OF_CREATED_GROUP
See Configure a Security Context for a Pod or Container for more information.