Deploying Linux workloads

The topics in this section describe how to deploy your migrated workloads to other clusters, such as test or production clusters.

Before performing the steps in these topics, you should have set up Migrate for Anthos, qualified your workloads for migration, and migrated a VM to a workload on GKE.

  1. Review deployment YAML.

    After you have generated the container deployment artifacts, you can download them for deployment on a target test or production cluster. In this section, you will learn about the automatically created deployment artifacts, such as a Deployment or StatefulSet object and a Kubernetes Service. Be sure to review and customize these artifacts to fit your intended use, for example to configure load balancing or assign labels. An example Deployment and Service sketch follows this list.

  2. Configure logging to Cloud Logging.

    You can have entries from application logs sent to Cloud Logging. The Migrate for Anthos software includes a feature that automatically propagates application logs to the logging facility configured on GKE. By default, Cloud Logging is used, but you can use other logging options. This section provides instructions for enabling the feature and adding your own application logs. A sample Cloud Logging query follows this list.

  3. Mount external volumes.

    You can mount external volumes, such as NFS file shares or additional data disks, to your migrated workloads. An example NFS volume mount follows this list.

  4. Deploy migrated VMs.

    Using the deployment artifacts that you generated during migration and reviewed or customized in the previous steps, you can deploy your workload to another cluster, such as a test or production cluster. Example deployment commands follow this list.

  5. Monitor migrated workload.

    You can use Cloud Logging or kubectl to view container logs. An example kubectl logs command follows this list.
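
For step 1, the generated artifacts typically include a Deployment (or StatefulSet) and a Kubernetes Service. The following sketch is illustrative only: the object names, labels, image path, and port are assumptions, not the exact YAML that Migrate for Anthos generates for your workload.

```yaml
# Illustrative sketch of generated deployment artifacts.
# Names, labels, image, and port are hypothetical; review your generated YAML.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-migrated-workload            # hypothetical name
  labels:
    app: my-migrated-workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-migrated-workload
  template:
    metadata:
      labels:
        app: my-migrated-workload
    spec:
      containers:
      - name: my-migrated-workload
        image: gcr.io/my-project/my-migrated-workload:v1.0   # hypothetical image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-migrated-workload-svc
spec:
  type: LoadBalancer                    # one way to configure load balancing
  selector:
    app: my-migrated-workload
  ports:
  - port: 80
    targetPort: 80
```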
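
For step 2, once application log entries reach Cloud Logging you can query them with the standard GKE container log filter. This is a minimal sketch; the cluster, namespace, and container names are placeholders.

```shell
# Read recent log entries for the migrated container from Cloud Logging.
# Replace the cluster, namespace, and container names with your own values.
gcloud logging read \
  'resource.type="k8s_container"
   resource.labels.cluster_name="my-target-cluster"
   resource.labels.namespace_name="default"
   resource.labels.container_name="my-migrated-workload"' \
  --limit=50 \
  --format="value(textPayload)"
```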
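
For step 3, mounting an NFS file share amounts to adding a volume and a matching volumeMount to the pod template in the generated Deployment. The excerpt below shows only the relevant part of the spec; the server address, export path, and mount path are placeholders.

```yaml
# Excerpt of a Deployment pod template with an NFS volume added.
# Server, export path, and mount path are hypothetical.
spec:
  template:
    spec:
      containers:
      - name: my-migrated-workload
        volumeMounts:
        - name: shared-data
          mountPath: /mnt/shared          # mount point inside the container
      volumes:
      - name: shared-data
        nfs:
          server: nfs.example.internal    # hypothetical NFS server
          path: /exports/shared           # hypothetical export path
```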
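
For step 4, deploying to the target cluster is ordinary kubectl usage. The cluster name, zone, and artifact file name below are placeholders; substitute the file you actually generated.

```shell
# Point kubectl at the target cluster, then apply the generated artifacts.
gcloud container clusters get-credentials my-target-cluster --zone us-central1-a
kubectl apply -f deployment_spec.yaml
kubectl get pods
```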
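
For step 5, kubectl can also stream logs straight from the running container; the label, pod name, and container name here are placeholders.

```shell
# Find the pod for the migrated workload, then follow its container logs.
kubectl get pods -l app=my-migrated-workload
kubectl logs -f my-migrated-workload-7d9c5b6f4-abcde -c my-migrated-workload
```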

You can also execute bash commands in the container running your migrated workload, as shown in the sketch below. For more information, see the Troubleshooting topic.
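
A minimal sketch of opening a shell in the container, assuming a hypothetical pod and container name:

```shell
# Open an interactive bash shell inside the migrated workload's container.
kubectl exec -it my-migrated-workload-7d9c5b6f4-abcde \
  -c my-migrated-workload -- /bin/bash
```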