Migrate WebSphere applications to containers with Migrate to Containers

Last reviewed 2021-12-22 UTC

This document is for application owners and cloud architects looking to migrate Java applications running on IBM WebSphere Application Server (WAS) to containers running in Google Kubernetes Engine (GKE) or GKE Enterprise. It guides you through the process to migrate WAS traditional applications to containers from a source environment that is on-premises, in a private hosting environment, or in another cloud provider. It also highlights the benefits of using Migrate to Containers to automate your migration.

You should have prior WebSphere knowledge before trying to migrate WebSphere VMs to containers with Migrate to Containers.

This document also contains important points for you to consider when planning a WAS application migration to containers. It's part of a multi-part series about migrating to Google Cloud. If you're interested in an overview of the series, see Migration to Google Cloud: Choosing your migration path.

Read this document if you're planning to migrate WAS traditional applications running a compatible WAS version on a compatible operating system, such as Linux, from a supported source environment to a GKE or GKE Enterprise environment with Migrate to Containers. These source environments can include the following:

  • An on-premises environment
  • A private hosting environment
  • Another cloud provider

Migrate to Containers automates the use of IBM Migration Toolkit for Application Binaries to discover, inspect, and migrate all your WAS traditional applications in your WAS traditional virtual machines. It then splits those WAS traditional applications into individual WebSphere traditional containers.

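If you want to preview what the toolkit finds before you start a migration, you can also run the Migration Toolkit for Application Binaries manually against an application archive. The following is a minimal sketch; it assumes that you have already extracted binaryAppScanner.jar (as described later in this document) and that /opt/apps/my-app.ear is a placeholder path for one of your application binaries:

  # List the Java EE applications, modules, and shared libraries in the archive.
  java -jar binaryAppScanner.jar /opt/apps/my-app.ear --inventory

  # Report potential migration issues for the same archive.
  java -jar binaryAppScanner.jar /opt/apps/my-app.ear --analyze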

Migrating WAS traditional applications using Migrate to Containers gives you a small footprint (minimums of 1 GB of RAM and a 2 GB image size) and reduced licensing costs (up to 70% less than a WAS Network Deployment subscription).

By migrating WAS traditional applications with Migrate to Containers, you benefit from several aspects of a containerized environment. In addition to the reduced licensing costs discussed previously, you can prepare for further modernization into cloud-native frameworks by creating WAS Liberty or Open Liberty containers for your applications.

WAS Liberty is a lightweight production runtime for rapid web- and cloud-based application development and deployment. It is built on the open source Open Liberty project. Companies use both WAS Liberty and Open Liberty to build cloud-based Java microservices.

Migrating to GKE Enterprise or GKE offloads WAS Network Deployment manager functionality to the following products:

  • Kubernetes
  • Anthos Service Mesh
  • Config Sync
  • Google Cloud Operations

The following diagram shows how GKE or GKE Enterprise manages centralized functionality (for example, high availability, workload placement, and centralized configuration) that WAS Network Deployment previously managed. Application-specific configuration is managed at Docker image build time. Using a Docker-image-based configuration enables repeatability and automation through CI/CD processes.

Migrating offloads WAS Network Deployment manager functions to Kubernetes, Anthos Service Mesh, Config Sync, and Google Cloud Operations. WAS Network Deployment environments and WAS Base environments can migrate into WAS Liberty or Open Liberty containers.

Migrating WAS traditional applications to containers with Migrate to Containers is one of the possible steps in your workload modernization journey. Migrating helps you to both transform the applications so they run in a cloud environment and avoid the expensive rewrites needed to modernize WAS traditional applications.

Ideal migration candidates are applications running on supported WebSphere Network Deployment, WebSphere Base, or supported Java versions for which modernization through a complete rewrite is too expensive—in terms of resources—or isn't possible at all. To learn more about ideal migration candidates, see Migrating VMs to containers with Migrate to Containers.

Design the migration to Google Cloud

To migrate your WAS traditional applications from your source environment to containers running in Google Cloud, follow the framework described in the Migration to Google Cloud series.

The following diagram illustrates the path of your migration journey:

Migration path with four phases.

The framework illustrated in the preceding diagram has four phases:

  1. Assess: In this phase, you assess your source environment, the applications that you want to migrate to Google Cloud, and which WAS traditional applications are suitable for migration.
  2. Plan: In this phase, you create the basic infrastructure for Migrate to Containers, such as provisioning the resource hierarchy and setting up network access.
  3. Deploy: In this phase, you migrate the WAS traditional applications from the source environment to GKE or GKE Enterprise with Migrate to Containers.
  4. Optimize: In this phase, you begin to take advantage of the cloud technologies and capabilities.

Assess the source environment and applications

In the assessment phase, you gather information about your source environment and the applications that you want to migrate. Doing so helps you rightsize the resources that you need—both for the migration and your target environment.

In the assessment phase, you:

  1. Build a comprehensive inventory of your applications.
  2. Catalog your applications according to their properties and dependencies.
  3. Train and educate your teams on Google Cloud.
  4. Build an experiment and proof of concept on Google Cloud.
  5. Calculate the total cost of ownership (TCO) of the target environment.
  6. Choose the applications that you want to migrate first.

The following sections rely on Migration to Google Cloud: Assessing and discovering your workloads. However, they provide information that is specific to assessing the WAS traditional applications that you want to migrate to containers with Migrate to Containers.

Build your inventories

To scope your migration, you must understand your WAS traditional environment. To understand your environment, gather information about your applications and their dependencies.

Building an inventory of your apps describes how to build an inventory of your workloads in your WAS traditional environment and their dependencies. Follow that guidance and build your inventories. When you're done with that work, continue reading this document.

Now that you've built an inventory of your workloads and their dependencies, you refine the inventory. Assess the aspects and features that are of interest to your organization when it migrates its WAS traditional applications with Migrate to Containers.

Before assessing your WAS environment for migration, complete the assessment work in Migrating VMs to containers with Migrate to Containers and Migration to Google Cloud: Assessing and discovering your workloads. When you're done with that work, complete the inventory of your workloads.

To complete the inventory of your workloads, consider the following:

  • Operating systems running in your WAS VMs: Gather information about the operating systems and their licenses running in your WAS VMs, and ensure that the operating system is a 64-bit Linux operating system listed in Compatible operating systems and Kubernetes versions.
  • WAS versions running your applications: Gather information about the WAS versions running your applications, and ensure their compatibility with Migrate to Containers. Migrate to Containers supports migrating traditional WAS applications (WebSphere Application Server traditional 8.5.5.x and 9.0.5.x versions) for both WAS Base and WAS Network Deployment environments. A quick way to check the operating system and WAS version on each VM is shown in the sketch after this list.

  • Applications deployed in your WAS: Assess which applications are deployed in each WAS. Then map the dependencies between your applications, and between your applications and external services. Next, gather information about the configuration sources of your applications. For example, are you using:

    • Environment variables
    • Non-standard WAS installation paths
    • LDAP user registries
    • Security role mappings
    • Class loader order modifications
  • Migrate to Containers fit score: Assess whether your WAS traditional applications are a good fit to migrate with Migrate to Containers. Migrate to Containers provides a fit assessment tool that you can run on your WAS traditional applications to compute a fit score. Migrate to Containers has a set of minimal requirements to successfully migrate WAS traditional applications. It also has some limitations when automating the migration of WAS traditional applications. You can address these limitations by manually configuring the applications when you migrate them.

  • Authentication: WAS provides several authentication mechanisms such as Simple WebSphere Authentication Mechanism (SWAM), Lightweight Third Party Authentication (LTPA), and Kerberos. You can only configure one user registry implementation as the active user registry of the WAS security domain. Migrate to Containers doesn't automatically migrate authentication details, so authentication typically requires some manual configuration during the migration.

  • Data Access (JDBC): The J2EE connector architecture defines a standard resource adapter that connects WAS to enterprise information systems. The adapter provides connectivity between the enterprise information system, the application server, and the applications. Migrate to Containers automatically migrates the JDBC configuration to the modernized WAS container. Ensure that you have enabled connectivity between your migrated applications and the existing data stores.

  • Messaging (JMS): WAS supports asynchronous communication through the Java Messaging Service (JMS) programming interface. Migrate to Containers automatically migrates JMS configuration information. However, some manual migration work is required for specific configurations, like SSL.

  • Mail: WAS supports sending emails through the JavaMail API. Migrate to Containers doesn't automatically migrate JavaMail configuration files. Manually configure these files during the migration phase.
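As part of building this inventory, you can verify the operating system and WAS version directly on each source VM. The following is a minimal sketch; it assumes the default WAS installation path of /opt/IBM/WebSphere/AppServer, which might differ in your environment:

  # Confirm that the VM runs a 64-bit Linux operating system.
  uname -m            # expect x86_64
  cat /etc/os-release # distribution name and version

  # Report the installed WAS edition, version, and fix pack level.
  /opt/IBM/WebSphere/AppServer/bin/versionInfo.sh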

Complete the assessment

After building the inventories related to your environment and your WAS traditional workloads, complete the rest of the assessment phase activities documented in Migration to Google Cloud: Assessing and discovering your workloads. When you're done with that work, continue reading this document.

Plan and build your foundation

After following the guidance in Planning and building your foundation when migrating VMs, complete your WAS foundation:

  1. Confirm the name of the Cloud Storage bucket.
  2. Upload the binaryAppScanner.jar file available as part of the IBM WebSphere Application Server Migration Toolkit for Application Binaries by following these steps:

    1. Download the binaryAppScannerInstaller.jar installer file. You must accept the license agreement as part of the download.
    2. Run the following command to extract the binaryAppScanner.jar file and to accept the License Agreement:

      java -jar binaryAppScannerInstaller.jar --acceptLicense --verbose
      
    3. Specify the target directory for the extraction—for example, /tmp. The installer creates a directory named /wamt within the target directory.

    4. Navigate to the /wamt directory—for example:

      cd /tmp/wamt
      
    5. Upload the binaryAppScanner.jar file to the root of a Cloud Storage bucket:

      gsutil cp binaryAppScanner.jar gs://BUCKET_NAME
      

      Where BUCKET_NAME is the name of your Cloud Storage bucket.
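To confirm that the file is in place, you can list it in the bucket, where BUCKET_NAME is the name of your Cloud Storage bucket:

  gsutil ls gs://BUCKET_NAME/binaryAppScanner.jar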

Set up Migrate to Containers describes how to provision and configure Migrate to Containers and its dependencies. Follow that guidance to set up Migrate to Containers.

When you're done with that work, continue reading this document.

Migrate your WAS traditional applications to containers

To learn more about the deployment phase of the migration, follow the guidance in Migrating your VMs to containers.

Generate and review the migration plan

Create a Migrate to Containers migration plan for your WAS traditional applications:

  1. Configure the source environments as Migrate to Containers migration sources: To migrate your WAS traditional applications, Migrate to Containers needs information about the source environments where your VMs run. You gathered that information by performing the tasks described in the Build your inventories section within this document. For more information about configuring source environments, see Adding a migration source.
  2. Create migration plans: To specify which WAS traditional applications you want to migrate from a source environment to a supported target environment, create a migration plan—for example, you can configure where you want to store your persistent data.

    For more information about creating and monitoring migration plans, see Creating a migration.

    To create the migration, you must use the command line; you can't use the Google Cloud console. The full command is as follows:

    migctl migration create my-migration \
      --source my-was-src \
      --vm-id VM_ID \
      --intent Image \
      --os-type Linux \
      --app-type websphere-traditional
    

    Where VM_ID is the ID of the source VM that runs your WAS traditional applications, and Image is the value for the intent flag. You use Image because of the stateless nature of the workload.

  3. Review and customize migration plans: After generating migration plans for each of the VMs you want to migrate, review and customize each migration plan to ensure that it fits your requirements. For more information about customizing migration plans, see Customizing a migration plan.
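For example, you can download the generated migration plan, edit it locally, and then upload your changes with migctl. The following is a minimal sketch; the subcommands and flags (such as --main-config) can vary between Migrate to Containers releases, so check migctl --help for your version:

  # Download the generated migration plan as a YAML file.
  migctl migration get my-migration

  # After editing the downloaded my-migration.yaml file, upload the changes.
  migctl migration update my-migration --main-config my-migration.yaml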

Generate migration artifacts and deployment descriptors

To generate the target WAS artifacts for your applications, Migrate to Containers extracts the applications running in the VMs you configured in the migration plans. It then creates several artifacts and places them in a Cloud Storage bucket. Migrate to Containers also generates the deployment descriptors that you can customize and use to deploy instances of the container images in the target environment.

For each migrated application, Migrate to Containers creates a folder containing the Docker context, the application binaries, a build script, and a WAS configuration script.

You can monitor the progress of the container artifacts you create and migrate. For more information about monitoring a migration, see Monitoring migrated workloads.
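For example, a minimal command-line sketch for generating the artifacts and checking their progress might look like the following; treat the exact migctl subcommands and flags as version-dependent and check migctl --help for your release:

  # Generate the container artifacts and deployment descriptors for the migration.
  migctl migration generate-artifacts my-migration

  # Check the progress of the artifact generation.
  migctl migration status my-migration --verbose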

Verify and validate the generated resources and descriptors

After you generate container artifacts and deployment descriptors with Migrate to Containers, review and update those artifacts and descriptors to ensure that they meet your requirements—for example, consider the following aspects:

  • Container image descriptors: Review the container image descriptors that you generated with Migrate to Containers and verify that they are adequate for the container workload. If you need to update the container image descriptors, see Building an application image. You can add properties and install iFixes.
  • Application-level logging: Migrate to Containers automatically writes WAS logs in the JSON format. To change to basic logging, see Logging configuration.

For more information about reviewing container artifacts and deployment descriptors, see Reviewing generated deployment files.

Deploy and validate the containerized workloads to GKE or GKE Enterprise

When the deployment descriptors for your workloads are ready, you:

  1. Build an application container image: From the application artifacts folder of the migrated workload that you want to build, run the build script to build the application container image:

    bash ./build.sh
    
  2. Deploy your migrated applications in the target environment: Deploy your migrated applications:

    kubectl apply -f deployment_spec.yaml
    
  3. Monitor your migrated workloads: After deploying your WAS traditional application containers, you can gather information about how they're performing in the target environment (see the example after this list). For more information, see Monitoring migrated workloads.

  4. Integrate your migrated workloads: After the workloads that you deployed in the target environment are working correctly, integrate the container artifact generation and deployment processes of the workloads with your deployment processes and pipelines. If you don't currently have an automated deployment process in place and are manually deploying your workloads, it's recommended that you migrate from manual deployments to automated deployments.
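For basic monitoring, you can also use standard kubectl commands against the deployed resources. The following sketch assumes that the generated Deployment is named my-was-app, which is a placeholder:

  # Check that the migrated application Pods are running.
  kubectl get pods

  # Inspect the WAS logs of the deployed application.
  kubectl logs deployment/my-was-app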

Uninstall Migrate to Containers

After you complete the migration of your workloads with Migrate to Containers, it's recommended that you:

  1. Ensure that you have all the references to the artifacts that Migrate to Containers generated during the migration.
  2. Uninstall Migrate to Containers.

When you're done with the work described in this section, continue reading this document.

Optimize your environment after you migrate

To complete your migration, see the guidelines at Optimizing your environment after migration.

You can perform these WAS-specific optimizations for your migrated WAS traditional applications:

  • Externalize your configuration: When you build a traditional WAS container, there might be configuration differences between environments. To avoid rebuilding the container for each environment, it's recommended that you externalize the WAS configuration into properties and use ConfigMaps to set those properties at container start-up.
  • Secure your sensitive data: Put passwords and any other sensitive data into Kubernetes Secrets, and use those Secrets to replace configuration placeholders at container start-up. A minimal sketch of both approaches is shown after this list.
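The following is a minimal sketch of both recommendations; it uses kubectl to create a hypothetical ConfigMap and Secret whose names, keys, and values are placeholders, and whose contents you can inject into the WAS container at start-up:

  # Externalized WAS configuration properties for one environment.
  kubectl create configmap was-app-config \
      --from-literal=jdbc_url=jdbc:db2://db.example.com:50000/APPDB

  # Sensitive values, such as the data source password.
  kubectl create secret generic was-app-secrets \
      --from-literal=jdbc_password=CHANGE_ME

You can then reference the ConfigMap and Secret from your deployment descriptor, for example as environment variables or mounted files, so that the same container image runs unchanged in every environment.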

What's next