This document helps you plan, design, and implement your migration from OpenShift to Anthos. Moving your workloads from one environment to another can be a challenging task, so plan and execute your migration carefully.
This document is part of a multi-part series about migrating to Google Cloud. If you're interested in an overview of the series, see Migration to Google Cloud: Choosing your migration path.
This document is part of a series that discusses migrating containers to Google Cloud:
- Migrating containers to Google Cloud: Getting started
- Migrating containers to Google Cloud: Migrating Kubernetes to Google Kubernetes Engine (GKE)
- Migrating containers to Google Cloud: Migrating to a new GKE environment
- Migrating containers to Google Cloud: Migrating to a multi-cluster GKE environment with Multi-Cluster Service Discovery and Multi Cluster Ingress
- Migrating containers to Google Cloud: Migrating from OpenShift to Anthos (this document)
- Migrating containers to Google Cloud: Migrate OpenShift projects to Anthos
- Migrating from OpenShift to Anthos: Migrate OpenShift SCCs to Anthos Config Management Constraints
This document is useful if you're planning to migrate from OpenShift running in an on-premises or private hosting environment, or in another cloud provider, to Anthos. This document is also useful if you're evaluating the opportunity to migrate and want to explore what it might look like. The target environment can be one of the following:
- A hosted environment entirely on Google Cloud.
- A hybrid environment where you maintain part of your workload on-premises or in a private hosting environment and migrate the rest to Google Cloud.
To decide which environment suits your needs, consider your requirements. For example, by migrating to a public cloud environment and outsourcing some responsibilities to Google, you can focus on increasing the value of your business instead of worrying about the infrastructure, and you benefit from an elastic consumption model that optimizes your spending and resource usage. If you have to keep some of your workloads outside Google Cloud, you might consider a hybrid environment. For example, you might be required to keep part of your workloads in your current environment to comply with data location policies and regulations. Or, you can implement an improve and move migration strategy, where you first modernize your workloads in place, and then migrate them to Google Cloud.
Regardless of your target environment type, the goal of this migration is to manage your workloads running in that environment using Anthos. By adopting Anthos, you have access to a range of services, including the following:
- Multi-cluster management to help you and your organization manage clusters, infrastructure, and workloads across cloud and on-premises environments from a single place.
- Anthos Config Management to create a common configuration and policies across all your infrastructure, and to apply them both on-premises and in the cloud.
- Anthos Service Mesh to adopt a fully managed service mesh that simplifies operating services, traffic management, telemetry, and securing communications between services.
- Binary Authorization to help ensure that the containers you deploy in your environments are trusted.
- Cloud Run for Anthos to support your serverless workloads in your Anthos environment.
We recommend that you evaluate these services early in your migration process while you're still designing your migration. It's easier to adopt these services now, instead of modifying your processes and infrastructure later. You can start using these services immediately, or when you're ready to modernize your workloads.
In this migration, you follow the migration framework defined in Migration to Google Cloud: Getting started. The framework has four phases:
- Assessing and discovering your workloads.
- Planning and building a foundation.
- Deploying your workloads.
- Optimizing your environment.
The following diagram illustrates the path of your migration journey.
This document relies on concepts covered in Migrating containers to Google Cloud: Migrating Kubernetes to GKE, so there are links to that document, where appropriate.
Assessing and discovering your workloads
In the assessment phase, you determine the requirements and dependencies for migrating your workloads from OpenShift to Anthos:
- Build a comprehensive inventory of your processes and apps.
- Catalog your processes and apps according to their properties and dependencies.
- Train and educate your teams on Google Cloud.
- Build experiments and proofs-of-concept on Google Cloud.
- Calculate the total cost of ownership (TCO) of the target environment.
- Choose the workloads that you want to migrate first.
The following sections rely on Migration to Google Cloud: Assessing and discovering your workloads, but they provide information that is specific to assessing workloads that you want to migrate from OpenShift to Anthos.
Build your inventories
To build the inventory of the components of your environment, consider the following:
- Service delivery and platform management model
- OpenShift projects
- Build and deployment processes
- Workloads, requirements, and dependencies
- OpenShift clusters configuration
Service delivery and platform management model
To migrate workloads from OpenShift to Anthos, you assess the current service delivery and platform management model of your OpenShift environment. This model probably reflects your current organizational structure and needs. If you realize that the current model doesn't satisfy your organization's needs, you can use this migration as an opportunity to improve it.
First, you gather information about the teams responsible for the following aspects:
- Application development and deployment, including all OpenShift users, typically development or workload release teams.
- OpenShift platform management, including creating OpenShift projects, assigning roles to users, configuring security contexts, and configuring CI/CD pipelines.
- OpenShift installation and cluster management, including OpenShift installation, upgrade, cluster scaling, and capacity management.
- Infrastructure management. These teams manage physical servers, storage, networking, the virtualization platform, and operating systems.
A service delivery and platform management model can consist of the following teams:
- The development team. This team develops workloads and deploys them on OpenShift. When dealing with complex production environments, the team that deploys workloads might be different from the development team. For simplicity in this document, we consider this team to be part of the development team. In self-service environments, the development team also has the responsibility of creating OpenShift projects.
- The platform team. This team is responsible for OpenShift platform management, typically referred to as OpenShift cluster administrators. This team configures OpenShift project templates for different development teams and, in more managed environments, creates OpenShift projects. This team also assigns roles and permissions, configures security contexts and role-based access control (RBAC), defines quotas for compute resources and objects, and defines build and deployment strategies. They are sometimes referred to as the DevOps team or as the middleware team, if they manage middleware and application server configurations for developers. The platform team and the infrastructure team might also be involved in low-level OpenShift cluster management activities, such as software installation and upgrade, cluster scaling, and capacity management.
- The infrastructure team. This team manages the underlying infrastructure that supports the OpenShift environment. For example, they're in charge of servers, storage, networking, the virtualization platform, and the base operating system. This team is sometimes referred to as the data center team or operations team. If OpenShift is deployed in a public cloud environment, this team is responsible for the infrastructure as a service (IaaS) services that a public cloud provider offers.
It's also important to assess whether you have dedicated OpenShift clusters for different environments. For example, you might have different environments for development, quality assurance, and production, or to segregate different network and security zones, such as internal zones and demilitarized zones.
OpenShift projects
An OpenShift project is a Kubernetes namespace with additional annotations that lets teams manage their resources in isolation from other teams. To build the inventory of your OpenShift projects, consider the following for each project:
- Cluster roles and local roles. OpenShift supports both roles that are local to an OpenShift project and cluster-wide roles. Assess whether you created any cluster or local roles, so that you can design an effective access control mechanism in the target environment.
- Role bindings, both for cluster roles and local roles. You grant users and groups permissions to perform operations on OpenShift projects by assigning them role bindings. Roles can be at the cluster level or the local level. Often, local role bindings are bound to predefined cluster roles. For example, the default OpenShift project admin role binding might be bound to the default cluster admin role.
- ResourceQuotas. To constrain aggregate resource consumption, OpenShift lets you define both OpenShift project-level quotas, and quotas across multiple OpenShift projects. You assess how they map to Kubernetes ResourceQuotas, and populate a list of all ResourceQuotas that you provisioned and configured in your OpenShift environment.
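For example, the following is a minimal sketch of a project-level ResourceQuota as it might appear in your inventory; the names and values are hypothetical. Because ResourceQuota is a core Kubernetes resource, the same manifest works unchanged in an Anthos cluster:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota        # hypothetical quota name
  namespace: team-a         # the namespace behind the OpenShift project
spec:
  hard:
    requests.cpu: "4"       # aggregate CPU requests allowed in the namespace
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"              # maximum number of Pods
```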
Assessing your environment describes how to assess Kubernetes resources, such as ServiceAccounts, and PersistentVolumes.
Build and deployment processes
After gathering information about the service delivery and platform management model, and about OpenShift projects, you assess how you're building your workloads and deploying them in your environment.
In your existing OpenShift environment, you might have the same build and deployment process for all your workloads, or there might be different processes to assess. The artifact of the build process for a containerized workload is a container image. In an OpenShift environment, you might be building container images, storing them, and deploying them in different ways:
- The container image build process runs completely outside OpenShift. The build process can be based on manual steps or on an automated continuous integration and continuous deployment (CI/CD) pipeline that produces the container image and Kubernetes manifests as its final artifacts.
- The container image build process runs inside OpenShift. OpenShift supports different options, such as providing a Dockerfile and all the required artifacts to build a container image, configuring a source-to-image build, configuring a pipeline build, or configuring a custom build. These build strategies create a BuildConfig resource that defines the build strategy, the location of the source artifacts, the target container images, and the events that can trigger a build, as shown in the sketch after this list.
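The following is a minimal sketch of a source-to-image BuildConfig; the repository URL, builder image, and names are hypothetical:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: example-app
spec:
  source:
    type: Git
    git:
      uri: https://example.com/repos/example-app.git  # hypothetical source repository
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: python:3.9            # hypothetical builder image
  output:
    to:
      kind: ImageStreamTag
      name: example-app:latest      # target image
  triggers:
  - type: ConfigChange              # rebuild when the BuildConfig changes
```

When you assess a BuildConfig, record the source location, the build strategy, the output image, and the triggers, because these determine how to recreate the build in the target environment.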
After building each container image, you store it in a container registry from which you can later deploy it. Your container registry can be hosted either on OpenShift or outside your OpenShift environment. Assess this aspect, because you might need a similar system in your target environment.
Workloads, requirements, and dependencies
A typical OpenShift application includes the following components:
- An OpenShift DeploymentConfig or a Kubernetes Deployment object. For more information about the differences between these objects, see Comparing Deployments and DeploymentConfigs.
- A Kubernetes Service to make your application reachable by clients, and an OpenShift Route to connect to that Kubernetes Service from outside the cluster.
- An OpenShift ImageStream to provide an abstraction to reference container images. An OpenShift ImageStream includes one or more container images, each identified by tags, and presents a single abstract view of related images, similar to a container image repository.
- An OpenShift BuildConfig to build the container images of that OpenShift application in OpenShift.
Depending on the purpose of the application, you can use different objects to define the app instead of using the Deployment or DeploymentConfig objects:
- Define batch applications by using Jobs or cron jobs.
- Define stateful applications using StatefulSets.
- If you have operations-related workloads that need to run on every node, or be bound to specific nodes, you can define them by using DaemonSets.
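For example, a batch workload that you catalog might look like the following minimal CronJob sketch; the names, image, and schedule are hypothetical, and on Kubernetes versions earlier than 1.21, the API version is batch/v1beta1:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-report
spec:
  schedule: "0 2 * * *"             # hypothetical nightly schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: registry.example.com/example/report:latest  # hypothetical image
```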
The following table lists the most important specs and parameters that you gather from OpenShift resources in order to migrate the applications to the target Anthos environment.
Source OpenShift resource manifest | Most important specs and parameters |
---|---|
Deployment, DeploymentConfig, StatefulSet, Job, cron job | Container image and repository, container port, number of Pod replicas, ConfigMaps, Secrets, PersistentVolumeClaims, resource requests and limits, update strategy, StatefulSet Service Name, cron job schedule |
ImageStream | Container image, image pull policy, container image repository |
Horizontal Pod Autoscaler | Autoscale criteria |
Service | Hostname used to connect to the application from inside the cluster, IP address and port on which the Service is exposed, endpoints created for external resources |
Route | Hostname and resource path that is used to connect to the application from outside the cluster, routing rules, encryption, certificate chain information |
Assessing your environment describes how to assess Kubernetes resources such as the following:
- Other Kubernetes controllers
- Horizontal Pod Autoscalers
- Pod security contexts
- Stateless and stateful workloads
- Storage
- Configuration and secret injection
- Ingresses
- Logging and monitoring
OpenShift 4 introduced the Operator Framework. If you're using this OpenShift version, you might have deployed some applications by using installed Operators. In this case, list the installed Operators and gather information about each deployed Operator instance. These instances are Custom Resources, defined by the Operators, that deploy some of the previously listed Kubernetes resources.
In addition to assessing these resources, you assess the following:
- Application's network connectivity requirements. For example, do your Services or Pods need to be exposed to a specific network? Do they need to reach specific backend systems?
- Constraints to run workloads in a specific location. For example, do any workloads or datasets need to remain on-premises to comply with requirements such as latency in communicating with other workloads, policies related to data location, and proximity to users?
OpenShift clusters configuration
Next, you assess your OpenShift clusters. To complete this task, you gather the following information:
- OpenShift version. OpenShift major versions in scope of this document are OpenShift 3 and OpenShift 4. Different OpenShift versions might have different capabilities. You assess which version of OpenShift you're running to know whether you're using any OpenShift version-specific features.
- Identity provider used for authentication. For authentication, you might be using the built-in OAuth server, and one or more identity providers.
- Security Context Constraints. Assess the OpenShift Security Context Constraints that you defined in your clusters, their configuration, and to which users, groups, and service accounts they are assigned.
- Network policy and isolation. Assess NetworkPolicies, how you configured Pod network isolation, and which OpenShift SDN mode you configured in your clusters.
- Monitoring. Assess your current monitoring requirements, and how you provisioned and configured your current monitoring system to decide how to design and implement a monitoring solution in the target environment. This assessment can help you determine whether to use a new monitoring solution or if you can continue to use the existing solution. Many OpenShift versions include a monitoring stack based on Prometheus to monitor system components, which can also be used for application monitoring. When designing your target solution, consider the following:
- The monitoring solution that you're currently using in your OpenShift environment, for example, an OpenShift-hosted Prometheus, an independent Prometheus and Grafana stack, Zabbix, InfluxData, or Nagios.
- How metrics are produced and gathered, for example, a pull or a push mechanism.
- Dependencies on any components deployed in your OpenShift clusters.
- The location of your monitoring system, for example, deployed in a cloud environment or on-premises.
- The metrics that you're currently gathering for your workloads.
- Any alerts on metrics that you configured in your current monitoring system.
- Logging. Assess your current logging requirements, and how you provisioned and configured your current logging system to decide how to design and implement a logging solution in the target environment. This assessment can help you determine whether to use a new logging solution or if you can continue to use the existing solution. Many OpenShift versions ship with a logging solution based on an Elasticsearch, Fluentd, and Kibana (EFK) stack that is used to gather logs from system components. This solution can also be used for application logging. When designing your target solution, consider the following:
- The logging system that you're currently using in your OpenShift environment, for example, an OpenShift-hosted EFK stack, an independent EFK stack, or Splunk.
- Dependencies on any components deployed in your OpenShift clusters.
- The architecture and the capacity of the log storage components.
- The location of your logging system, for example, deployed in a cloud environment or on-premises.
- The log retention policies and configuration.
Assessing your environment describes how to assess the following:
- Number of clusters
- Number and type of nodes per cluster
- Additional considerations about logging, monitoring, and tracing
- Custom Kubernetes resources
Complete the assessment
After building the inventories related to your OpenShift processes and workloads, you complete the rest of the activities of the assessment phase in Migration to Google Cloud: Assessing and discovering your workloads.
Planning and building your foundation
In the planning and building phase, you provision and configure the infrastructure and services that support your workloads:
- Build a resource hierarchy.
- Configure identity and access management.
- Set up billing.
- Set up network connectivity.
- Harden your security.
- Set up monitoring and alerting.
This section provides information that is specific to building your foundation on Anthos, building on the information in Migration to Google Cloud: Building your foundation.
Before building a foundation in Google Cloud, read the Anthos technical overview to understand how Anthos works, and which Anthos components you might need. Depending on the workload and data locality requirements that you gathered in the assessment phase, you deploy your workloads on Anthos clusters on VMware, on Anthos clusters on Google Cloud, or on Anthos clusters on AWS. Your clusters might be distributed among different environments. For more information about building a foundation for GKE on Google Cloud, see Planning and building your foundation.
To build a foundation for Anthos clusters on VMware, read about its core concepts, and then consider the following when installing Anthos clusters on VMware:
- Ensure that your on-premises environment meets the requirements for Anthos clusters on VMware. You need to provide enough capacity in your VMware vSphere environment to accommodate the requirements of the admin cluster and the user clusters. These requirements depend on the amount of resources that your workloads request, and on the number of clusters that you need. You assessed both aspects in the assessment phase.
- Set up your network. You need to configure your on-premises network to satisfy the applications' network connectivity requirements that you gathered in the assessment phase, in addition to the networking requirements of the Anthos clusters on VMware installation.
To build a foundation for Anthos clusters on AWS, read about its core concepts, such as Anthos clusters on AWS architecture and Anthos clusters on AWS storage. Consider the following when installing Anthos clusters on AWS:
- Ensure that your Amazon Web Services (AWS) and Google Cloud environments meet the requirements for Anthos clusters on AWS. Anthos clusters on AWS requires an AWS account with command-line access and an AWS Key Management Service (KMS) key to encrypt application-layer secrets in clusters. You also need Terraform and kubectl.
- Configure the AWS environment. You need to configure your AWS environment, install tools, such as the AWS Command Line Interface (CLI), configure AWS IAM credentials, and provision resources in your AWS environment, such as an AWS KMS key.
- Configure the Google Cloud environment. You need to configure your Google Cloud environment, create the necessary Google Cloud projects and service accounts, and configure IAM.
Deploying your workloads
In the deployment phase, you deploy your workloads on Anthos:
- Provision and configure your runtime platform and environments.
- Migrate data from your old environment to your new environment.
- Deploy your workloads.
The following sections provide information that is specific to deploying workloads to Anthos, building on the information in Migration to Google Cloud: Transferring large datasets, Migration to Google Cloud: Deploying your workloads, and Migration to Google Cloud: Migrating from manual deployments to automated, containerized deployments.
Provision and configure your runtime platform and environments
Before you can deploy any workload, you provision and configure the necessary Anthos clusters.
You can provision GKE clusters on Google Cloud, Anthos clusters on VMware, or Anthos clusters on AWS. For example, if you have a workload that you must deploy on-premises, you provision one or more Anthos clusters on VMware. If your workloads don't have any locality requirements, you provision GKE clusters on Google Cloud. In both cases, you manage and monitor your clusters with Anthos. If you have any multi-cloud requirements, you provision Anthos clusters on AWS, along with other Anthos clusters.
First, you define the number and type of Anthos clusters that you need. These requirements largely depend on the information that you gathered in the assessment phase, such as the service model that you want to implement and how you want to isolate different environments. If multiple development teams currently share your OpenShift clusters, you choose one of the following tenancy models on Anthos:
- Use different Kubernetes namespaces. The platform team creates a Kubernetes namespace for each OpenShift project, and implements a cluster multi-tenancy model. This model closely resembles the one you likely adopted in your OpenShift environment, so it might require a number of Anthos clusters that's similar to the number of your OpenShift clusters. If needed, you can still have dedicated clusters for different environments.
- Use different Anthos clusters. The infrastructure team provides an Anthos cluster for each development team, and the platform team manages each of these clusters. This model might require more Anthos clusters than the number of your OpenShift clusters, but it provides greater flexibility and isolation for your development teams.
- Use different Google Cloud projects. The infrastructure team creates a Google Cloud project for each development team, and provisions Anthos clusters inside that Google Cloud project. The platform team then manages these clusters. This model might require more Anthos clusters than the number of your OpenShift clusters, but it provides the maximum flexibility and isolation for your development teams.
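For example, with the namespace-per-team model, each OpenShift project maps to a Kubernetes namespace with a role binding for the owning team. The following is a minimal sketch; the namespace, group, and binding names are hypothetical:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-admins
  namespace: team-a
subjects:
- kind: Group
  name: team-a@example.com        # hypothetical group from your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin                     # predefined cluster role, similar to the OpenShift project admin role
  apiGroup: rbac.authorization.k8s.io
```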
After deciding the number of clusters that you need and in which environment to provision them, you define cluster size, configuration, and node types. Then you provision your clusters and node pools, according to the workload requirements that you gathered during the assessment phase. For example, your workloads might require certain performance and scalability guarantees, along with any other requirements, such as the need for GPUs and TPUs.
For more information about provisioning and configuring clusters, see the following:
- Provision and configure your runtime platform and environments for GKE clusters on Google Cloud.
- Creating admin and user clusters for Anthos clusters on VMware.
- Installing the management cluster and creating a user cluster for Anthos clusters on AWS.
After you create your clusters and before deploying any workload, you configure the following components to meet the requirements that you gathered in the OpenShift projects and clusters assessment phase:
- Identity and access management. You can configure identity and access management as described in Configure identity and access management. You can migrate to Cloud Identity as your main identity provider, or use Google Cloud Directory Sync to synchronize Cloud Identity with an existing LDAP or Active Directory server. Anthos clusters on VMware supports OpenID Connect (OIDC) for authenticating against user clusters using the command line. Follow Authenticating with OIDC and Google to integrate command-line authentication with Cloud Identity.
- Monitoring. You can adapt your current monitoring solution to the target Anthos environment according to your constraints and requirements. If your current solution is hosted on OpenShift, you can implement Cloud Monitoring as described in Building your foundation or you can implement Prometheus and Grafana with Anthos clusters on VMware.
- Logging. You can adapt your current logging solution to the target Anthos environment according to your constraints and requirements. If your current solution is hosted on OpenShift, you can implement Cloud Logging as described in Building your foundation - Monitoring and Alerting.
Using Anthos Config Management, you can centrally define the configuration of the following resources in a common Git repository, and apply that configuration to all clusters, both on-premises and in the cloud:
- Role-based access control (RBAC). After you configure authentication, you can implement your authorization policies by using a mix of Identity and Access Management and Kubernetes RBAC. These policies must meet the requirements that you gathered in the OpenShift projects assessment and fit the multi-tenancy model that you chose.
- Resource quotas. You can apply Resource quotas to namespaces to assign quotas to developer teams as needed.
- Security context for your workloads. You can use Anthos Config Management Policy Controller to create constraints that enforce Pod security according to your requirements and to the OpenShift Security Context Constraints configuration that you gathered in the assessment phase (see the sketches after this list).
- Network policy and isolation. You can implement the required network isolation between namespaces or workloads using Kubernetes Network Policies.
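For example, the following sketches show two of these controls. The first is a Policy Controller constraint that forbids privileged containers, similar to what the restricted OpenShift Security Context Constraint enforces; it assumes that the K8sPSPPrivilegedContainer constraint template from the Policy Controller template library is installed. The second is a Kubernetes NetworkPolicy that denies all ingress traffic to a namespace, approximating the isolation of the OpenShift SDN multitenant mode. All names are hypothetical:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: deny-privileged-containers
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a               # hypothetical team namespace
spec:
  podSelector: {}                 # select all Pods in the namespace
  policyTypes:
  - Ingress                       # no ingress rules listed, so all ingress is denied
```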
To use Anthos Config Management, you need to install it, along with Policy Controller.
Migrate data from your old environment
Now you can migrate data from your source environment to the target environment.
If your OpenShift stateful applications host data on Kubernetes persistent volumes, there are different strategies to migrate data to the target environment. Choosing the right strategy depends on various factors, such as your source and target backend storage providers and deployment locations:
- Rely on your storage provider's volume cloning, exporting, and importing capabilities. If you're using VMware vSphere volumes in your on-premises environment and you're migrating to Anthos clusters on VMware, you clone the VMDK virtual disks that underlie your PersistentVolumes, and mount them as volumes in your target environment. If you're migrating to GKE, you import your virtual disks as Compute Engine persistent disks and use them as PersistentVolumes.
- Back up your data from your source environment by using operating system tools or database tools. Host that data in a temporary location that is accessible from both environments, and then restore the data in your target environment.
- Use a remote copy tool, such as rsync, to copy data from the source environment to the target environment.
- Use a storage-independent backup solution, such as Velero with restic integration.
For more information, see Migration to Google Cloud: Transferring large datasets.
For more information about migrating data and strategies to manage storage in GKE, see Migrate data from your old environment to your new environment and the GKE documents about storage configuration. If you're planning to modernize your workloads to apply a microservices architecture or if you've already adopted it, see Migrating a monolithic application to microservices on GKE.
With Anthos clusters on VMware, you can choose between different options for integrating with external storage systems, such as VMware vSphere storage, Kubernetes in-tree volume plugins, and Container Storage Interface (CSI) drivers. Your choice depends on which external storage system you need to integrate with, which access modes it supports, and whether you need dynamic volume provisioning.
Anthos clusters on AWS automatically deploys the CSI driver for Amazon Elastic Block Store (EBS), a default StorageClass that backs PersistentVolumeClaims with EBS volumes, and StorageClasses for other EBS volume types. You can also install additional CSI drivers and custom StorageClasses. If you have an EBS volume that you want to import into Anthos clusters on AWS, you can create a PersistentVolume from it, as in the following sketch.
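The following is a minimal sketch of a PersistentVolume that references an existing EBS volume through the EBS CSI driver; the volume ID and capacity are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: imported-ebs-volume
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the EBS volume if the PV is deleted
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0123456789abcdef0   # hypothetical ID of the existing EBS volume
  storageClassName: ""                    # empty so that only explicit claims bind to it
```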
Deploy your workloads
After provisioning the Anthos cluster and migrating data, you now build and deploy your workloads. You have different options, ranging from manual deployments to fully automated ones.
If some of your workloads are deployed by using Operators in your OpenShift environment, you need to install those Operators before deploying the workloads. You can verify the availability of the Operators that you need in the following sources:
- Google Cloud Marketplace
- operatorhub.io
- Specific software vendor website or GitHub repository
Deploy manually
If you're manually deploying your workloads in your OpenShift environment, you can adapt this manual deployment process to your new Anthos environment. For example, you can manually translate the OpenShift resource manifests that you assessed in workloads, requirements, and dependencies to the corresponding Anthos resource manifests, as in the sketch that follows.
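The following sketch shows how a hypothetical DeploymentConfig might translate to a Kubernetes Deployment; the names, labels, and image paths are hypothetical, and the main structural difference is the label selector format:

```yaml
# Source: OpenShift DeploymentConfig
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: example-app
spec:
  replicas: 3
  selector:                       # DeploymentConfig uses a flat label selector
    app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: image-registry.openshift-image-registry.svc:5000/example/example-app:latest
        ports:
        - containerPort: 8080
```

```yaml
# Target: Kubernetes Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:                  # Deployment requires a matchLabels selector
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: gcr.io/example-project/example-app:latest  # hypothetical Container Registry path
        ports:
        - containerPort: 8080
```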
The following table extends the table in the workloads, requirements, and dependencies section of this document by adding the target Anthos resources that you can map each source resource to.
Source OpenShift resource manifest | Most important specs and parameters | Target Anthos resource manifest |
---|---|---|
Deployment, DeploymentConfig, StatefulSet, Job, cron job | Container image and repository, container port, number of Pod replicas, ConfigMaps, Secrets, PersistentVolumeClaims, resource requests and limits, update strategy, StatefulSet Service Name, cron job schedule | Deployment, StatefulSet, Job, cron job |
ImageStream | Container image, image pull policy, container image repository | Deployment |
Horizontal Pod Autoscaler | Autoscale criteria | Horizontal Pod Autoscaler |
Service | Hostname used to connect to the application from inside the cluster, IP address and port on which the Service is exposed, endpoints created for external resources | Service |
Route | Hostname and resource path used to connect to the application from outside the cluster, routing rules | Ingress |
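For example, the following sketch shows how a hypothetical OpenShift Route might translate to a Kubernetes Ingress; the hostname, Service name, and Secret name are hypothetical. Note that a Route can embed its TLS certificate inline, while an Ingress references a Kubernetes Secret:

```yaml
# Source: OpenShift Route
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-app
spec:
  host: app.example.com
  to:
    kind: Service
    name: example-app
  port:
    targetPort: 8080
  tls:
    termination: edge
```

```yaml
# Target: Kubernetes Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: example-app-tls   # hypothetical Secret holding the TLS certificate
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-app
            port:
              number: 8080
```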
Design and implement an automated deployment process
To automatically build and deploy your workloads, you design and implement build and deployment processes, or adapt the existing ones to support your new environment. If you need to deploy your workloads in a hybrid environment, your deployment processes must support both GKE on Google Cloud and Anthos clusters on VMware.
To implement your build and deployment processes, you can use Cloud Build. If you want to automate your build and deployment processes, you can configure build triggers or GitHub App triggers, or set up automated deployments from the Google Cloud console. If you're using any Policy Controller constraints, you can check your Kubernetes and Anthos descriptors against those policies in your Cloud Build jobs to provide feedback to developers.
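For example, the following is a minimal Cloud Build configuration sketch that builds a container image and applies Kubernetes manifests to a GKE cluster; the image path, cluster name, zone, and manifest directory are hypothetical:

```yaml
steps:
# Build the container image from the Dockerfile in the repository root.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/example-app:$SHORT_SHA', '.']
# Apply the Kubernetes manifests to the target cluster.
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['apply', '-f', 'k8s/']                    # hypothetical manifest directory
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'          # hypothetical cluster zone
  - 'CLOUDSDK_CONTAINER_CLUSTER=example-cluster'   # hypothetical cluster name
images:
- 'gcr.io/$PROJECT_ID/example-app:$SHORT_SHA'
```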
If you need to run build jobs or store source code on-premises, you might use GitLab. GitLab offers source code repositories and a collaboration platform, CI/CD capabilities, and a container image registry. You might deploy GitLab on your Anthos clusters directly from the Cloud Marketplace, or use one of the other available installation options.
If you're currently using one of the OpenShift facilities to build or automatically deploy your workloads, you can adopt one of the following strategies, based on your current process:
- Jenkins pipelines. If you're using Jenkins pipelines to automate your build and deployment process, you can port your pipeline to Cloud Build, use your existing Jenkins environment, or deploy Jenkins in Google Cloud.
- Builds and deployments from a Dockerfile and the required artifacts. You can use Cloud Build to build container images with a Dockerfile or a build configuration file. If you want to execute your builds on an on-premises cluster, you can use GitLab.
- Source-to-image builds. In Cloud Build, you must implement a preliminary step to build the artifacts that the resulting container image requires. For example, if your source-to-image job builds a Python app and produces a container image, you need to configure a custom build step that builds the Python app, and then build the container image. This approach requires that you provide a Dockerfile; if you don't want to provide one, you can use Google Cloud's buildpacks or Jib for Java applications.
- Custom builds. You can create custom Cloud Build builders, similar to the custom builders you're using in OpenShift. If your custom builders don't use any OpenShift-specific features, you might be able to use them as they are in Cloud Build.
Whatever approach you choose to build your container images, you need to store them in a container image repository. You have the following options:
- Keep your existing container image repository. If you're using an external container image repository that's not running on OpenShift, and you're not yet ready to migrate, you can continue using that repository to store your container images.
- Container Registry. If you prefer a fully managed service, you can use Container Registry to store your container images. If you need additional security layers, you can manage the Container Registry encryption keys by yourself, configure a secure perimeter to access Container Registry, enhance the security of your software supply chain, and scan your container images for known vulnerabilities with Artifact Analysis. Container Registry also supports managed base images that are maintained by Google, as a base for your container images.
- On-premises repository. If you need to migrate away from your current repository because it's hosted on OpenShift, and you need to store your container images on-premises, you can choose the registry provided with GitLab.
- Hybrid approach. You can combine the previous options to benefit from the strengths of each one. For example, you can use Container Registry as your main repository, and mirror that to your on-premises repository. In this case, you use Container Registry features, and still benefit from having an on-premises repository.
Regardless of where you choose to store your container images, you need to provision and configure credentials for your clusters to access the container image repository, as in the following sketch.
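The following is a minimal sketch of a registry credential Secret and a Pod that references it; the Secret content and image path are hypothetical, and you typically create the Secret with kubectl create secret docker-registry:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-credentials
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: eyJhdXRocyI6e319   # placeholder; base64-encoded Docker config with real credentials
---
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  imagePullSecrets:
  - name: registry-credentials          # lets the kubelet authenticate to the registry
  containers:
  - name: example-app
    image: registry.example.com/example/example-app:latest  # hypothetical private image
```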
If you need to send notifications about the status of your builds and your container images to users or third-party services, you can use Cloud Functions to respond to events produced by Cloud Build and Container Registry.
Summary of OpenShift to Anthos capability mapping
The following table summarizes how the capabilities that you use on OpenShift map to Anthos capabilities.
OpenShift | Anthos |
---|---|
OpenShift projects | Kubernetes namespaces, with roles, quotas, and policies applied through Anthos Config Management |
OpenShift SDN and network isolation | Kubernetes Network Policies |
OpenShift Security Context Constraints | Anthos Config Management Policy Controller constraints |
OpenShift Pipelines | Cloud Build |
OpenShift Source-to-Image | Cloud Build, with Google Cloud's buildpacks or Jib |
Identity integration | Cloud Identity, Google Cloud Directory Sync, and OpenID Connect (OIDC) |
Optimizing your environment
Optimization is the last phase of your migration, where you make your environment more efficient than it was before. In this phase, you execute multiple iterations of a repeatable loop until your environment meets your optimization requirements. The steps of this repeatable loop are as follows:
- Assessing your current environment, teams, and optimization loop.
- Establishing your optimization requirements and goals.
- Optimizing your environment and your teams.
- Tuning the optimization loop.
The following sections rely on Migration to Google Cloud: Optimizing your environment.
Assess your current environment, teams, and optimization loop
While the first assessment focuses on the migration from your current environment to Anthos, this assessment is tailored for the optimization phase.
Establish your optimization requirements
For GKE on Google Cloud optimization requirements, review the optimization requirements established in Optimizing your environment.
Review the following optimization requirements for your Anthos environment:
- Start deploying workloads in a serverless environment. If you need to reduce the strain on your operations teams, you can start using fully managed serverless platforms such as Cloud Run and Cloud Run for Anthos.
- Modernize your deployment processes. Migration to Google Cloud: Deploying your workloads describes typical end-to-end deployment processes and how to modernize your existing processes. If you want to modernize your existing deployment processes, or want to design new ones, refer to Migration to Google Cloud: Migrating from manual deployments to automated, containerized deployments for guidance.
- Deploy with Spinnaker. If you need to implement deployment logic, such as canary deployments and blue/green deployments to increase the reliability of your environment and reduce the impact for your users, you can use Spinnaker. To use Spinnaker on Google Cloud, you need to install it. After that, you implement your deployment processes with Spinnaker. For example, you can register your existing GKE clusters in Spinnaker, enable Kustomize support for Spinnaker, or implement continuous delivery pipelines with Spinnaker and GKE.
- Implement a secure software supply-chain. For security-critical workloads, you can implement a secure software supply chain in your Anthos clusters by using Binary Authorization.
- Switch to Anthos Service Mesh. If you're already using OpenShift Service Mesh, or you're looking for the traffic management, observability, and security capabilities that a service mesh provides, you can adopt Anthos Service Mesh. Anthos Service Mesh provides an Anthos-tested and supported distribution of Istio, together with Google-managed backend capabilities for observability, mTLS certificate management, and integration with Identity-Aware Proxy (IAP).
Complete the optimization
After populating the list of your optimization requirements, you complete the rest of the activities of the optimization phase in Migration to Google Cloud: Optimizing your environment.
Finding help
Google Cloud offers various options and resources for you to find the necessary help and support to best use Google Cloud services:
- Self-service resources. If you don't need dedicated support, you have various options that you can use at your own pace.
- Technology partners. Google Cloud has partnered with multiple companies to help you use our products and services.
- Google Cloud professional services. Our professional services can help you get the most out of your investment in Google Cloud.
There are more resources to help you migrate workloads to Google Cloud in the Google Cloud Migration Center.
For more information about these resources, see the finding help section of Migration to Google Cloud: Getting started.
What's next
- Migration to Google Cloud: Getting started.
- Learn more about Anthos and Migrate to Containers.
- Migrating a monolithic application to microservices on Google Kubernetes Engine.
- Read how you can support your migration with Istio mesh expansion.
- Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center.