Prepare a Windows cluster for deployment

This page discusses some scenarios that might require you to customize the migration artifacts.

Before you begin

This document assumes that you've completed the migration.

Ensure the target cluster has read access to the Docker registry

As part of performing a migration, Migrate to Containers uploads Docker images representing a migrated VM to a Docker registry. These Docker images represent the files and directories of the migrating VM.

For the Docker registry, you can choose to use Container Registry or another Docker registry that supports basic authentication.

For more information, see Defining data repositories.

Deploy a workload to a Google Cloud project other than the one used for migration

Often you have multiple Google Cloud projects in your environment. If you perform a migration in one Google Cloud project, but then want to deploy the migrated workload to a cluster in a different project, you must ensure that you have the permissions configured correctly.

For example, you perform the migration in project A. In this case, the migrated workload is copied into a Container Registry bucket in project A. For example:

gcr.io/project-a/image_name:image_tag

You then want to deploy the workload to a cluster in project B. If you don't configure permissions correctly the workload pod fails to run because the cluster in project B has no image pull access to project A. You then see an event on the pod containing a message in the form:

Failed to pull image "gcr.io/project-a/image_name:image_tag...
pull access denied...
repository does not exist or may require 'docker login'...

All projects that have enabled the Compute Engine API have a Compute Engine default service account, which has the following email address:

PROJECT_NUMBER-compute@developer.gserviceaccount.com

Where PROJECT_NUMBER is the project number for project B.
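The service account email always follows the same fixed pattern, so you can construct it from the project number. A minimal shell sketch, assuming a placeholder project number (for a real project, look it up with `gcloud projects describe PROJECT_ID --format='value(projectNumber)'`):

```shell
# Hypothetical project number for project B; substitute your own.
PROJECT_NUMBER=123456789012

# The Compute Engine default service account follows this fixed email pattern.
SA_EMAIL="${PROJECT_NUMBER}-compute@developer.gserviceaccount.com"
echo "${SA_EMAIL}"
```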

To work around this issue, ensure that the Compute Engine default service account for project B has the necessary permissions to access the Container Registry bucket. For example, you can use the following gsutil command to enable access:

gsutil iam ch serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com:objectViewer gs://artifacts.project-a.appspot.com

Deploy workloads on the processing cluster

You can deploy the migrated workload on the same cluster as you used to perform the migration, referred to as the Migrate to Containers processing cluster. In most situations you don't have to perform any additional configuration on the processing cluster because the cluster already requires read or write access to the Docker registry to perform a migration.

Deploy on a target cluster using Container Registry as the Docker registry

To ensure that a target cluster has access to the Container Registry, create a Kubernetes secret that contains the credentials required to access Container Registry:

  1. Create a service account for deploying a migration as described in Creating a service account for accessing Container Registry and Cloud Storage.

    This process has you download a JSON key file named m4a-install.json.

  2. Create a Kubernetes secret that contains the credentials required to access Container Registry:

    kubectl create secret docker-registry gcr-json-key \
     --docker-server=gcr.io --docker-username=_json_key --docker-password="$(cat ~/m4a-install.json)" \
     --docker-email=account@project.iam.gserviceaccount.com

    where:

    • gcr-json-key specifies the name of the Kubernetes secret; docker-registry is the type of the secret.
    • docker-server=gcr.io specifies Container Registry as the server.
    • docker-username=_json_key specifies _json_key as the username, indicating that authentication uses a JSON key file.
    • docker-password specifies to use the contents of the JSON key file as the password.
    • docker-email specifies the email address of the service account.
  3. Set the Kubernetes secret by either:

    • Changing the default imagePullSecrets value:

      kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "gcr-json-key"}]}'
    • Editing the deployment_spec.yaml file to add the imagePullSecrets value to the spec.template.spec definition. For WebSphere workloads, the deployment YAML file is named twas_deployment_spec.yaml, liberty_deployment_spec.yaml, or openliberty_deployment_spec.yaml, depending on your target.

      spec:
        containers:
        - image: gcr.io/PROJECT_ID/mycontainer-instance:v1.0.0
          name: mycontainer-instance
          ...
        volumes:
        - hostPath:
            path: /sys/fs/cgroup
            type: Directory
          name: cgroups
        imagePullSecrets:
        - name: gcr-json-key

      Replace PROJECT_ID with your project ID.

  4. Deploy the workload secret, if secrets.yaml exists. A secrets file exists for Tomcat-based workloads and for WebSphere traditional workloads with Liberty; the Liberty file is named liberty-secrets.yaml.

    kubectl apply -f secrets.yaml

Deploy on a target cluster using a Docker registry with basic authentication

If you use a Docker registry to store migration images, the registry must support basic authentication with a username and password. Because there are many ways to configure a read-only connection to a Docker registry, use the method appropriate for your cluster platform and Docker registry.
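One common approach on Kubernetes is to store the registry credentials as an image pull secret and reference it from the pod or service account. A minimal sketch of such a secret manifest; the secret name and the credential placeholder are assumptions:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-creds            # hypothetical secret name
type: kubernetes.io/dockerconfigjson
data:
  # Base64-encoded Docker config containing the registry address, username,
  # and password, for example the output of: base64 -w0 ~/.docker/config.json
  .dockerconfigjson: BASE64_ENCODED_DOCKER_CONFIG
```

You then reference the secret from the imagePullSecrets field of the pod spec or the default service account, in the same way as described for Container Registry above.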

Configure migrated workloads to use gMSA

Windows IIS Application workloads are often Active Directory (AD) joined and operate using domain identities. When you migrate these VMs to containers, the containers themselves are not domain-joined; instead, their host Kubernetes cluster nodes can be domain-joined.

When you deploy your migrated containers to a cluster, you can use a Group Managed Service Account (gMSA). Use gMSA to execute the container within a specific service account identity. You attach a gMSA in the Kubernetes cluster as part of the pod configuration rather than as a static identity configuration inside the container image.

Migrate to Containers helps you in the process of transforming your workloads. Migrate to Containers automatically discovers the configuration of IIS application pools and adds recommendations to the generated migration plan. You can then evaluate these recommendations and modify them for your specific environment and requirements.

If Migrate to Containers determines that the configuration of an application pool does not require a gMSA, it maintains the original application pool configuration: for example, when the pool uses a built-in account type such as ApplicationPoolIdentity, NetworkService, LocalSystem, or LocalService.

To support gMSA in a migrated Windows container, you must:

  1. Edit the migration plan to set the necessary properties to configure the migrated container to use a gMSA.

  2. Configure the target cluster that hosts the deployed container.

Configure a target cluster to support gMSA

To configure a cluster hosting the migrated Windows container to support gMSA, you must have:

  1. Configured Active Directory for VMs to automatically join a domain.

  2. Configured gMSA for Windows Pods and containers.
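With the cluster configured, the pod references the gMSA through its security context. A minimal sketch of a pod spec, assuming a hypothetical GMSACredentialSpec resource named iis-gmsa-credspec already deployed to the cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: iis-gmsa-pod
spec:
  securityContext:
    windowsOptions:
      # Name of a GMSACredentialSpec resource in the cluster (hypothetical).
      gmsaCredentialSpecName: iis-gmsa-credspec
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: iis-server
    image: your-image-url
```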

For more information, see Configure GMSA for Windows Pods and containers in the Kubernetes documentation.

Deploy a container when storing SSL certificates as Kubernetes secrets

We recommend that you use Cloud Load Balancing, Ingress, or Anthos Service Mesh as an HTTPS frontend to secure external access to your deployed container. This option lets you secure external communication without including any certificates inside the cluster. For more information, see Customizing a migration plan.

You can also store Secure Sockets Layer (SSL) certificates as Kubernetes secrets and mount them at runtime into the container.

To use Kubernetes secrets:

  1. Create a PFX file with the certificate and password.

  2. Create a configuration YAML file that defines site access:

    sites:
     - sitename: "sitename"
       sslport: 443
       pfxpath: c:\sslconfig\pfx
       password: "password"
       thumbprint: "3e858d0551fc0536f52d411dad92b680a4fad4da"

    Where:

    • sitename specifies the name of the site configured to use SSL. The sites property can contain multiple sitename entries.

    • sslport specifies the port to listen to for SSL connections (typically 443).

    • pfxpath specifies the path to the PFX file. Configure it as part of the volumeMounts of the deployment of the pod.

    • password specifies the password for the PFX file.

    • thumbprint specifies the SHA-1 thumbprint of the PFX file that can be retrieved using the PowerShell command:

      Get-PfxCertificate -FilePath "path to pfx"

      Or view in the Windows Certificate Manager.

  3. Create the Kubernetes secret:

    kubectl create secret generic secret-name --from-file=pfx=path-to-pfx --from-file=config=path-to-config
  4. Create the volume and volume mount in the deployment of the image:

    apiVersion: v1
    kind: Pod
    metadata:
      name: iis-pod
      labels:
        app: iis-server-simple
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      containers:
      - name: iis-server
        image: your-image-url
        volumeMounts:
        - name: ssl-secret
          mountPath: c:\sslconfig
        env:
        - name: M4A_CERT_YAML
          value: c:\sslconfig\config
      volumes:
      - name: ssl-secret
        secret:
          secretName: secret-name

    Where:

    • mountPath is the same path as specified by pfxpath in the configuration file you created in Step 2.
    • M4A_CERT_YAML is an environment variable set to the full path to the configuration YAML file you created in Step 2.
    • secret-name is the name of the secret you created in step 3.

Configure SSL

We recommend that you don't store SSL certificate private keys inside a container image, because they are accessible to anyone who can read the image. Migrate to Containers provides several ways of handling SSL for Windows.

Use a self-signed auto-generated certificate

By default, a Windows container with an HTTPS binding is assigned a self-signed auto-generated certificate that is generated on the initialization of the Docker container. This configuration lets you test your migrated workload, but cannot be used in a production environment. The certificate is both self-signed and regenerated every time the container is run.

Recommended - Use Cloud Load Balancing, Ingress, or Anthos Service Mesh

You can customize the bindings in the migration plan to use HTTP. Then use Cloud Load Balancing, Ingress, or Anthos Service Mesh as an HTTPS frontend to secure external access. This option lets you secure external communication without including any certificates inside the cluster.

  • To customize the binding, edit the site definition in the migration plan that represents the migration to set protocol to http:

    sites:
      site:
      - applications:
        - path: /
          virtualdirectories:
            - path: /
              physicalpath: '%SystemDrive%\inetpub\wwwroot'
              bindings:
              - port: 8080
                protocol: http
              name: Default Web Site
    

You can then forward requests from the HTTPS frontend to the HTTP path and port of the Windows workload.
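As an illustration, the HTTPS frontend can be sketched as a Kubernetes Ingress that terminates TLS and forwards to the workload's HTTP port; the Service name, hostname, and TLS secret name below are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: iis-frontend              # hypothetical Ingress name
spec:
  tls:
  - hosts:
    - app.example.com             # hypothetical hostname
    secretName: frontend-tls      # TLS certificate kept outside the workload container
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: iis-service     # hypothetical Service exposing the workload's HTTP port
            port:
              number: 8080        # matches the http binding in the migration plan
```

This keeps the certificate in a Kubernetes secret consumed only by the frontend, while the Windows workload itself serves plain HTTP inside the cluster.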

Store SSL certificates as Kubernetes secrets

It's recommended that you use Cloud Load Balancing, Ingress, or Anthos Service Mesh as an HTTPS frontend to secure external access. However, you can also store SSL certificates as Kubernetes secrets and mount them at runtime into the container.

To use SSL certificates stored as Kubernetes secrets, you must edit the deployment image of the container. For more information, see Deploy a container when storing SSL certificates as Kubernetes secrets.

Configure logging to Cloud Logging

Migrate to Containers uses the LogMonitor tool to extract logs from a Windows container and forward them to your GKE cluster. These logs are then automatically forwarded to Cloud Logging, which provides a suite of tools to monitor your containers.

By default Migrate to Containers enables IIS logging to monitor the IIS logs, and also forwards the Application or System event logs to Cloud Logging.

Configure logging

Expanding the generated artifacts.zip file creates several directories, including the m4a directory. The directory contains a folder for every image. Included in the m4a directory is the LogMonitorConfig.json file that you can edit to control logging.

For more on editing LogMonitorConfig.json, see Authoring a Config File.
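For illustration, a LogMonitorConfig.json might combine Windows event log channels with IIS log files. This sketch follows the LogMonitor configuration schema; the specific channels, levels, and paths are assumptions to adapt for your workload:

```json
{
  "LogConfig": {
    "sources": [
      {
        "type": "EventLog",
        "startAtOldestRecord": true,
        "eventFormatMultiLine": false,
        "channels": [
          { "name": "System", "level": "Error" },
          { "name": "Application", "level": "Warning" }
        ]
      },
      {
        "type": "File",
        "directory": "c:\\inetpub\\logs",
        "filter": "*.log",
        "includeSubdirectories": true
      }
    ]
  }
}
```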

Set ACLs

Some IIS applications require specific access control list (ACL) permissions on files and folders in order to function correctly. Migrate to Containers automatically scans all migrated IIS applications, detects any permissions defined on the source VM for the IIS accounts (the IUSR account and the IIS_IUSRS group), and applies those permissions to the copied files and directories in the generated container image.

Because Windows container images don't support setting ACLs as part of the Docker COPY command, the ACLs are instead set by a script called set_acls.bat. Migrate to Containers automatically creates set_acls.bat in the directory of the generated image for your specific Windows application, and then calls set_acls.bat when you execute the docker build command.

Edit set_acls.bat to add or remove custom permissions, or edit permissions that are not related to specific IIS users and therefore were not detected by Migrate to Containers.

The script uses the Windows built-in icacls tool to set permissions.
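For example, a line added to set_acls.bat to grant the IIS_IUSRS group modify access to an application folder might look like the following; the path is a hypothetical example:

```
rem Grant IIS_IUSRS modify access, inherited by child files (OI) and folders (CI).
icacls "C:\inetpub\wwwroot\App_Data" /grant "IIS_IUSRS:(OI)(CI)M"
```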

About the .NET Global Assembly Cache

Migrate to Containers scans the source image .NET Global Assembly Cache (GAC) for .NET resources that are installed on the source machine and not available as part of the official images. Any discovered DLL is copied into the Docker context and installed as part of the building of the target image by a utility script install_gac.ps1.

All .NET assemblies are copied into the Docker context under the m4a\gac directory. To remove assemblies from the image, delete them from the m4a\gac directory.

COM object DLL registration

DLLs that expose COM objects are automatically scanned and registered. During the extraction phase, the copied files are scanned for DLLs that are registered as COM objects, which are then registered in the container.

This process occurs without user input. However, you can influence it by adding more DLLs to be copied; these DLLs are then checked in turn and, if they expose COM objects, registered in the container.

What's next