This document describes how to migrate from a single-cluster Google Kubernetes Engine (GKE) environment to a multi-cluster GKE environment with minimal downtime. The multi-cluster GKE environment uses the following:
- Multi Cluster Ingress to expose workloads outside clusters.
- Multi-Cluster Service Discovery to expose workloads across clusters.
This document is useful if you're planning to migrate from a single-cluster environment to a multi-cluster environment with minimal downtime. It's also useful if you're evaluating such a migration and want to explore what it might look like.
A multi-cluster environment can help to mitigate scalability issues and service disruptions that are caused by single points of failure. When you use Multi Cluster Ingress and Multi-Cluster Service Discovery, you can deploy multiple instances of your workloads and transparently expose them to clients that are outside the cluster or that are running in other clusters in the multi-cluster environment.
This document is part of a series that discusses migrating containers to Google Cloud:
- Migrating containers to Google Cloud: Getting started
- Migrating containers to Google Cloud: Migrating Kubernetes to GKE
- Migrating containers to Google Cloud: Migrating to a new GKE environment
- Migrating containers to Google Cloud: Migrating to a multi-cluster GKE environment with Multi Cluster Ingress and Multi-Cluster Service Discovery (this document)
- Migrating containers to Google Cloud: Migrating from OpenShift to GKE Enterprise
This tutorial assumes that you're familiar with the following:
- Kubernetes
- GKE
- Multi Cluster Ingress
- Multi-Cluster Service Discovery
In this tutorial, you use the following software:
- Terraform: a tool to provision resources in cloud environments.
You perform most of the steps for this tutorial in Cloud Shell.
Objectives
- Provision a GKE cluster to simulate the source environment.
- Provision multiple GKE clusters to simulate the target environment.
- Deploy the example workloads that are provided in this tutorial.
- Configure Multi-Cluster Service Discovery and Multi Cluster Ingress.
- Expose the example workloads with Multi Cluster Ingress.
- Deploy and use Multi-Cluster Service Discovery.
- Switch traffic to the target environment.
- Decommission the source environment.
Costs
In this document, you use the following billable components of Google Cloud:
- GKE
- Compute Engine for GKE worker nodes
- Cloud Load Balancing
- Virtual Private Cloud external static IP addresses
- Multi Cluster Ingress
- Cloud DNS
- Cloud Storage
To generate a cost estimate based on your projected usage,
use the pricing calculator.
Before you begin
To complete the steps in this tutorial, you need the following:
- A Google Cloud organization in which you have the following Identity and Access Management (IAM) roles:
  - Project Creator role (roles/resourcemanager.projectCreator)
  - Billing Account Administrator role (roles/billing.admin)
- An active billing account.
You can create a new organization and a new billing account, or you can use an existing organization and billing account. In this tutorial, you run scripts to create two Google Cloud projects.
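If you need to grant these roles, an administrator can run commands similar to the following sketch, where EMAIL is a placeholder for the user account's email address (the exact commands depend on how your organization manages IAM):

gcloud organizations add-iam-policy-binding ORGANIZATION_ID \
    --member "user:EMAIL" \
    --role roles/resourcemanager.projectCreator
gcloud beta billing accounts add-iam-policy-binding BILLING_ACCOUNT_ID \
    --member "user:EMAIL" \
    --role roles/billing.admin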
Preparing your environment
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
Change the working directory to the home directory:
cd "${HOME}"
Clone the GitHub repository:
git clone https://github.com/GoogleCloudPlatform/solutions-multicluster-gke-migration.git
The repository contains the scripts and the manifest files to deploy and configure the demo workloads.
Authenticate your user account with Application Default Credentials:
gcloud auth application-default login
The output is similar to the following, which shows the path to the Application Default Credentials file:
Credentials saved to file: [/tmp/tmp.T5Qae7XwAO/application_default_credentials.json]
These credentials will be used by any library that requests Application Default Credentials (ADC).
Make a note of the path to the Application Default Credentials file. You use it in the next step.
Set the following environment variables:
APPLICATION_DEFAULT_CREDENTIALS_PATH=ADC_PATH
BILLING_ACCOUNT_ID=BILLING_ACCOUNT_ID
DEFAULT_PROJECT=DEFAULT_PROJECT
DEFAULT_REGION=DEFAULT_REGION
DEFAULT_ZONE=DEFAULT_ZONE
GOOGLE_APPLICATION_CREDENTIALS="${APPLICATION_DEFAULT_CREDENTIALS_PATH}"
MCI_MCS_TUTORIAL_DIRECTORY_PATH="$(pwd)"/solutions-multicluster-gke-migration
ORGANIZATION_ID=ORGANIZATION_ID
TERRAFORM_STATE_PROJECT=TERRAFORM_STATE_PROJECT
export GOOGLE_APPLICATION_CREDENTIALS
Replace the following:
- ADC_PATH: the path to the Application Default Credentials file that you noted in the previous step.
- BILLING_ACCOUNT_ID: the ID of the billing account that you want to use. To get a list of billing account IDs, run the following: gcloud beta billing accounts list --filter=open=true
- DEFAULT_PROJECT: the Google Cloud project ID that you want to use to provision the resources for this tutorial. You run a Terraform script later to create this project.
- DEFAULT_REGION: the default region in which to provision resources.
- DEFAULT_ZONE: the default zone in which to provision resources.
- ORGANIZATION_ID: the ID of your Google Cloud organization. To find your organization ID, run the following: gcloud organizations list
- TERRAFORM_STATE_PROJECT: the Google Cloud project ID that you want to use to store the Terraform state information. You run an initialization script later to create this project. For this tutorial, the TERRAFORM_STATE_PROJECT project must be different from the DEFAULT_PROJECT project, but both projects must be in the same organization.
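For example, with hypothetical values (the IDs and project names below are placeholders, not real identifiers), the assignments might look like the following:

APPLICATION_DEFAULT_CREDENTIALS_PATH=/tmp/tmp.T5Qae7XwAO/application_default_credentials.json
BILLING_ACCOUNT_ID=000000-AAAAAA-000000
DEFAULT_PROJECT=mci-mcs-tutorial
DEFAULT_REGION=us-central1
DEFAULT_ZONE=us-central1-a
GOOGLE_APPLICATION_CREDENTIALS="${APPLICATION_DEFAULT_CREDENTIALS_PATH}"
MCI_MCS_TUTORIAL_DIRECTORY_PATH="$(pwd)"/solutions-multicluster-gke-migration
ORGANIZATION_ID=000000000000
TERRAFORM_STATE_PROJECT=mci-mcs-tutorial-tfstate
export GOOGLE_APPLICATION_CREDENTIALS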
The example workload
In this tutorial, you use the Bookinfo app, which is a 4-tier, polyglot microservices app that shows information about books. Although this example workload is already containerized, the approach described in this document series also applies to non-containerized services. In such cases, you can add a modernization phase during which you containerize the services that you intend to migrate.
The Bookinfo app has the following microservice components:
- productpage: Calls the details, ratings, and reviews microservices to populate the book information page.
- details: Serves information about books.
- reviews: Contains book reviews.
- ratings: Returns book ranking information to accompany a book review.
Provisioning the source and target environments
In this section, you use Terraform to automatically provision the source and target environments. By using Terraform to apply the proposed changes, you automate the following tasks:
- Create a Google Cloud project in your Google Cloud organization.
- Enable the necessary Cloud APIs.
- Provision three regional GKE clusters to simulate source and target environments. To provision the GKE clusters, Terraform uses the kubernetes-engine Terraform module.
- Configure Workload Identity for the GKE clusters.
- Register the GKE clusters as part of the multi-cluster fleet.
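For reference, registering a cluster to a fleet, which Terraform automates in the last task of the preceding list, can be done with a command similar to the following sketch (the membership name mirrors the naming used later in this tutorial and is an assumption about the Terraform configuration):

gcloud container hub memberships register "${DEFAULT_REGION}-target-cluster-1" \
    --gke-cluster "${DEFAULT_REGION}/target-cluster-1" \
    --enable-workload-identity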
To apply the proposed changes, do the following:
In Cloud Shell, change the working directory to the repository directory:
cd "${MCI_MCS_TUTORIAL_DIRECTORY_PATH}"
Initialize the Terraform backend configuration:
scripts/init-terraform.sh \
    --application-credentials "${APPLICATION_DEFAULT_CREDENTIALS_PATH}" \
    --billing-account-id "${BILLING_ACCOUNT_ID}" \
    --default-project "${DEFAULT_PROJECT}" \
    --default-region "${DEFAULT_REGION}" \
    --default-zone "${DEFAULT_ZONE}" \
    --organization-id "${ORGANIZATION_ID}" \
    --terraform-state-project "${TERRAFORM_STATE_PROJECT}"
The init-terraform.sh script does the following:
- Creates a project and a Cloud Storage bucket to store the Terraform state information.
- Generates the descriptors to configure Terraform to use that bucket as a remote backend.
- Initializes the Terraform working directory.
Change the working directory to the terraform directory:
cd "${MCI_MCS_TUTORIAL_DIRECTORY_PATH}"/terraform
Use Terraform to apply the changes:
terraform apply
When prompted, review the proposed changes. To confirm, enter yes.
The output is similar to the following, which shows details about the resources that Terraform created:
Apply complete! Resources: 60 added, 0 changed, 0 destroyed
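If you want to verify that the clusters were provisioned, you can list them:

gcloud container clusters list --project "${DEFAULT_PROJECT}"

The output lists the source-cluster-1, target-cluster-1, and target-cluster-2 clusters.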
The following diagram shows the target architecture of the system for this section, with DEFAULT_REGION set to us-central1:
The preceding diagram shows the following GKE clusters in the default region:
- source-cluster-1: the source environment.
- target-cluster-1 and target-cluster-2: the target environment.
The GKE clusters don't host any workloads yet; they're ready for you to deploy the workload that you're migrating.
Deploying the example workload in the source environment
In this section, you deploy the example workload in the source environment. The example workload simulates a workload that you migrate to the target environment.
In Cloud Shell, change the working directory to the repository directory:
cd "${MCI_MCS_TUTORIAL_DIRECTORY_PATH}"
Deploy the example workload in the source environment and use GKE Ingress to expose the workload outside the cluster:
scripts/workloads.sh \
    --cluster-name source-cluster-1 \
    --cluster-region "${DEFAULT_REGION}" \
    --google-cloud-project "${DEFAULT_PROJECT}" \
    --expose-with GKE_INGRESS
Confirm that the Pods of the example workload are ready:
kubectl get pods --namespace bookinfo
The output is similar to the following. When all Pods are ready, the STATUS field shows Running:
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-79f774bdb9-95khd       1/1     Running   0          43h
productpage-v1-7b8d9dcc69-95lc6   1/1     Running   0          23h
ratings-v1-b6994bb9-gt94b         1/1     Running   0          43h
reviews-v3-674d9bff46-4gl2v       1/1     Running   0          23h
Confirm that the Ingress object of the example workloads is ready:
kubectl get ingress --namespace bookinfo
The output is similar to the following, in which the ADDRESS column shows the IP address for the bookinfo Ingress object:
NAME       CLASS    HOSTS   ADDRESS        PORTS   AGE
bookinfo   <none>   *       34.117.181.7   80      45h
In your browser, go to the following URL, where EXTERNAL_IP is the IP address from the previous step:
http://EXTERNAL_IP/productpage
The page that loads displays information about books and relevant ratings.
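Alternatively, you can check the endpoint from Cloud Shell. The following sketch assumes that the product page serves an HTML title element:

curl -s http://EXTERNAL_IP/productpage | grep -o "<title>.*</title>"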
The environment is now provisioned and configured to simulate the source environment.
The following diagram shows the target architecture of the system for this section:
The preceding diagram shows the example workload running in the
GKE cluster source-cluster-1
, and a load balancer that exposes
that workload. You configured Cloud Load Balancing using a GKE
Ingress object. The target GKE clusters, target-cluster-1
and
target-cluster-2
, don't host any workload. A client accesses the workload
through the load balancer that's configured with the GKE
Ingress object.
Deploying the example workload in the target environment
In this section, you start the migration by deploying the example workload in the target environment. Deploying the workload before you expose it ensures that the target environment is ready to fulfill client requests. If you expose workloads before they're ready in the target environment, clients might experience service disruptions.
In Cloud Shell, change the working directory to the repository directory:
cd "${MCI_MCS_TUTORIAL_DIRECTORY_PATH}"
Deploy the example workload in the target-cluster-1 cluster in the target environment:
scripts/workloads.sh \
    --cluster-name target-cluster-1 \
    --cluster-region "${DEFAULT_REGION}" \
    --google-cloud-project "${DEFAULT_PROJECT}"
Confirm that the Pods of the example workload are ready:
kubectl get pods --namespace bookinfo
The output is similar to the following. When all Pods are ready, the STATUS field shows Running:
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-79f774bdb9-95khd       1/1     Running   0          43h
productpage-v1-7b8d9dcc69-95lc6   1/1     Running   0          23h
ratings-v1-b6994bb9-gt94b         1/1     Running   0          43h
reviews-v3-674d9bff46-4gl2v       1/1     Running   0          23h
Deploy the example workload in the target-cluster-2 cluster in the target environment:
scripts/workloads.sh \
    --cluster-name target-cluster-2 \
    --cluster-region "${DEFAULT_REGION}" \
    --google-cloud-project "${DEFAULT_PROJECT}"
Confirm that the Pods of the example workload are ready:
kubectl get pods --namespace bookinfo
The output is similar to the following. When all Pods are ready, the STATUS field shows Running:
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-79f774bdb9-95khd       1/1     Running   0          43h
productpage-v1-7b8d9dcc69-95lc6   1/1     Running   0          23h
ratings-v1-b6994bb9-gt94b         1/1     Running   0          43h
reviews-v3-674d9bff46-4gl2v       1/1     Running   0          23h
The following diagram shows the target architecture of the system for this section:
The preceding diagram shows the example workload running in the
GKE cluster source-cluster-1
, and a load balancer that exposes
that workload. The architecture in the diagram matches the configuration that
you set up, in which the load balancer uses a GKE Ingress object.
The target GKE clusters, target-cluster-1
and
target-cluster-2
, host instances of the example workload that aren't exposed
to clients or to workloads running in other GKE clusters. A
client accesses the workload through the load balancer that's configured with
the GKE Ingress object.
Configuring Multi Cluster Ingress and Multi-Cluster Service Discovery
In this section, you provision and configure Multi Cluster Ingress and Multi-Cluster Service Discovery. Multi Cluster Ingress lets you expose workloads outside multiple clusters, and Multi-Cluster Service Discovery lets you expose workloads across multiple clusters.
In Cloud Shell, change the working directory to the repository directory:
cd "${MCI_MCS_TUTORIAL_DIRECTORY_PATH}"
Provision and configure Multi Cluster Ingress and Multi-Cluster Service Discovery:
scripts/mci-mcs.sh \
    --config-cluster-membership-name "${DEFAULT_REGION}-target-cluster-1" \
    --google-cloud-project "${DEFAULT_PROJECT}"
The mci-mcs.sh script does the following:
- Designates a Config Cluster. To simplify the migration process, we recommend that you designate a Config Cluster in the target environment so that you don't have to migrate the Config Cluster before decommissioning the source environment.
- Enables Multi-Cluster Service Discovery.
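For reference, a script like mci-mcs.sh typically enables these features with commands similar to the following sketch, which is based on the alpha commands used elsewhere in this tutorial (the membership name is an assumption):

gcloud alpha container hub multi-cluster-services enable \
    --project "${DEFAULT_PROJECT}"
gcloud alpha container hub ingress enable \
    --config-membership "projects/${DEFAULT_PROJECT}/locations/global/memberships/${DEFAULT_REGION}-target-cluster-1" \
    --project "${DEFAULT_PROJECT}"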
Confirm that Multi Cluster Ingress is set up correctly:
gcloud alpha container hub ingress describe
The output is similar to the following, in which the featureState.details.code fields show OK:
featureState:
  details:
    code: OK
    description: Ready to use
    updateTime: '2021-05-10T12:39:28.378476653Z'
  detailsByMembership:
    projects/324979197388/locations/global/memberships/us-central1-source-cluster-1:
      code: OK
      updateTime: '2021-05-12T09:22:39.420038966Z'
    projects/324979197388/locations/global/memberships/us-central1-target-cluster-1:
      code: OK
      updateTime: '2021-05-12T09:22:39.420038676Z'
    projects/324979197388/locations/global/memberships/us-central1-target-cluster-2:
      code: OK
      updateTime: '2021-05-12T09:22:39.420039116Z'
  hasResources: true
lifecycleState: ENABLED
Confirm that Multi-Cluster Service Discovery is set up correctly:
gcloud alpha container hub multi-cluster-services describe
It might take a few minutes for Multi-Cluster Service Discovery to be ready. When it's ready, the output is similar to the following, in which the code fields under featureState show OK:
featureState:
  detailsByMembership:
    projects/PROJECT/locations/global/memberships/CLUSTER1:
      code: OK
      description: Firewall successfully created.
      updateTime: '2020-09-24T05:16:27.675313587Z'
    projects/PROJECT/locations/global/memberships/CLUSTER2:
      code: OK
      description: Firewall successfully created.
      updateTime: '2020-09-24T05:15:26.665213577Z'
lifecycleState: ENABLED
Multi Cluster Ingress and Multi-Cluster Service Discovery are now provisioned and configured.
Exposing workloads using Multi Cluster Ingress
In this section, you deploy Multi Cluster Ingress to expose the example workload.
In Cloud Shell, change the working directory to the repository directory:
cd "${MCI_MCS_TUTORIAL_DIRECTORY_PATH}"
Use Multi Cluster Ingress to expose the example workload outside the cluster:
scripts/workloads.sh \
    --cluster-name target-cluster-1 \
    --cluster-region "${DEFAULT_REGION}" \
    --google-cloud-project "${DEFAULT_PROJECT}" \
    --expose-with MCI
The workloads.sh script deploys Multi Cluster Ingress descriptors in the Config Cluster. System workloads in the Config Cluster handle the provisioning and configuring of Multi Cluster Ingress.
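The following is a minimal sketch of the kind of descriptors that are applied to the Config Cluster. The resource names and the port are assumptions; the manifests in the repository are the source of truth:

kubectl apply --namespace bookinfo -f - <<EOF
apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
  name: productpage
spec:
  template:
    spec:
      selector:
        app: productpage
      ports:
      - name: web
        protocol: TCP
        port: 9080
        targetPort: 9080
---
apiVersion: networking.gke.io/v1
kind: MultiClusterIngress
metadata:
  name: productpage
spec:
  template:
    spec:
      backend:
        serviceName: productpage
        servicePort: 9080
EOF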
Confirm that Multi Cluster Ingress is set up correctly:
kubectl describe mci productpage --namespace bookinfo
It might take a couple of minutes for Multi Cluster Ingress to be ready. When it's ready, the output shows a Status message similar to the following:
Status:
  Cloud Resources:
    Backend Services:
      gkemci-24ipb2-9080-bookinfo-productpage
    Firewalls:
      gkemci-24ipb2-mcs-mci-tutorial-2-workloads-vpc-l7
    Forwarding Rules:
      gkemci-24ipb2-fw-bookinfo-bookinfo-mci
    Health Checks:
      gkemci-24ipb2-9080-bookinfo-productpage
    Network Endpoint Groups:
      zones/us-central1-a/networkEndpointGroups/k8s1-2f4b7c0c-bookinf-mci-productpage-svc-ohon2ru3-908-d8e31dbd
      zones/us-central1-a/networkEndpointGroups/k8s1-c05699ee-bookinf-mci-productpage-svc-ohon2ru3-908-a9236573
      zones/us-central1-a/networkEndpointGroups/k8s1-fa6c2f9b-bookinf-mci-productpage-svc-ohon2ru3-908-8f28ea70
      zones/us-central1-b/networkEndpointGroups/k8s1-2f4b7c0c-bookinf-mci-productpage-svc-ohon2ru3-908-d8e31dbd
      zones/us-central1-b/networkEndpointGroups/k8s1-c05699ee-bookinf-mci-productpage-svc-ohon2ru3-908-a9236573
      zones/us-central1-c/networkEndpointGroups/k8s1-fa6c2f9b-bookinf-mci-productpage-svc-ohon2ru3-908-8f28ea70
      zones/us-central1-f/networkEndpointGroups/k8s1-2f4b7c0c-bookinf-mci-productpage-svc-ohon2ru3-908-d8e31dbd
      zones/us-central1-f/networkEndpointGroups/k8s1-c05699ee-bookinf-mci-productpage-svc-ohon2ru3-908-a9236573
      zones/us-central1-f/networkEndpointGroups/k8s1-fa6c2f9b-bookinf-mci-productpage-svc-ohon2ru3-908-8f28ea70
    Target Proxies:
      gkemci-24ipb2-bookinfo-bookinfo-mci
    URL Map:
      gkemci-24ipb2-bookinfo-bookinfo-mci
  VIP: 34.117.121.178
Make a note of the IP address in the VIP field. You use it throughout this tutorial.
Open a browser and go to the following URL, where MCI_IP is the IP address in the VIP field from the previous step:
http://MCI_IP/productpage
It might take a couple of minutes for the page to be ready. When it's ready, the page displays information about books and relevant ratings.
The following diagram shows the target architecture of the system for this section:
The preceding diagram shows the example workload running in the following
GKE clusters: source-cluster-1
, target-cluster-1
, and
target-cluster-2
. Cloud Load Balancing exposes the workload running in each
GKE cluster. The architecture in the diagram matches the
configuration that you set up as follows:
- The workload that is running in the GKE cluster source-cluster-1 is exposed both by a load balancer that uses a GKE Ingress object and by a load balancer that uses Multi Cluster Ingress.
- The workloads that are running in the GKE clusters target-cluster-1 and target-cluster-2 are exposed by a load balancer that uses Multi Cluster Ingress.
A client accesses the workload through the load balancer that's configured with the GKE Ingress object.
Exposing workloads using Multi-Cluster Service Discovery
In this section, you configure Multi-Cluster Service Discovery to expose the example workload.
In Cloud Shell, change the working directory to the repository directory:
cd "${MCI_MCS_TUTORIAL_DIRECTORY_PATH}"
Use Multi-Cluster Service Discovery to expose the example workload that is running in the target-cluster-1 cluster to the other clusters:
scripts/workloads.sh \
    --cluster-name target-cluster-1 \
    --cluster-region "${DEFAULT_REGION}" \
    --google-cloud-project "${DEFAULT_PROJECT}" \
    --expose-with MCI \
    --deploy-mcs
The workloads.sh script deploys the Kubernetes resources to configure Multi-Cluster Service Discovery in the Config Cluster. System workloads in the Config Cluster handle provisioning and configuring Multi-Cluster Service Discovery. The first provisioning and configuration of Multi-Cluster Service Discovery takes about five minutes.
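GKE implements Multi-Cluster Service Discovery with ServiceExport objects. The following is a minimal sketch, assuming the Bookinfo service names, of the kind of objects that the script creates (the manifests in the repository are the source of truth):

kubectl apply --namespace bookinfo -f - <<EOF
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: details
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: ratings
---
kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  name: reviews
EOF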
Confirm that Multi-Cluster Service Discovery is set up correctly:
kubectl get serviceimport --namespace bookinfo
The output is similar to the following, in which the IP column shows the IP addresses of the three ServiceImport resources:
NAME      TYPE           IP                   AGE
details   ClusterSetIP   ["192.168.115.96"]   41h
ratings   ClusterSetIP   ["192.168.107.46"]   41h
reviews   ClusterSetIP   ["192.168.167.69"]   41h
When the Multi-Cluster Service Discovery setup is complete, the output includes three ServiceImport resources. The resources appear gradually, and it might take a few minutes for all of them to be available. If the output displays the error message No resources found in bookinfo namespace, wait a minute, and then run the command again.
Use Multi-Cluster Service Discovery to expose the example workload that is running in the target-cluster-2 cluster to the other clusters:
scripts/workloads.sh \
    --cluster-name target-cluster-2 \
    --cluster-region "${DEFAULT_REGION}" \
    --google-cloud-project "${DEFAULT_PROJECT}" \
    --expose-with MCI \
    --deploy-mcs
The workloads.sh script automatically updates the kubectl context to point to the correct GKE cluster.
Confirm that Multi-Cluster Service Discovery is set up correctly:
kubectl get serviceimport --namespace bookinfo
The output is similar to the following, in which the IP column shows the IP addresses of the three ServiceImport resources:
NAME      TYPE           IP                   AGE
details   ClusterSetIP   ["192.168.115.96"]   41h
ratings   ClusterSetIP   ["192.168.107.46"]   41h
reviews   ClusterSetIP   ["192.168.167.69"]   41h
When the Multi-Cluster Service Discovery setup is complete, the output includes three ServiceImport resources. Each service in the output is now mapped to two DNS A records, where each SERVICE_NAME value is the name of one of the ServiceImport resources in the output:
- SERVICE_NAME.bookinfo.svc.cluster.local resolves to the ClusterIP that is local to the cluster where the Service is located.
- SERVICE_NAME.bookinfo.svc.clusterset.local resolves to the ClusterSetIP that points to Multi-Cluster Service Discovery.
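To see one of these records resolve, you can run a DNS lookup from a temporary Pod in one of the clusters. The following is a sketch; the busybox image is an assumption, chosen because it bundles nslookup:

kubectl run dns-test --namespace bookinfo --image=busybox:1.28 --rm -it --restart=Never -- \
    nslookup details.bookinfo.svc.clusterset.local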
Update and restart the example workload Pods that are running in the target-cluster-1 cluster so that they use Multi-Cluster Service Discovery:
scripts/workloads.sh \
    --cluster-name target-cluster-1 \
    --cluster-region "${DEFAULT_REGION}" \
    --google-cloud-project "${DEFAULT_PROJECT}" \
    --consume-mcs
Confirm that the Pods of the example workload are ready:
kubectl get pods --namespace bookinfo
The output is similar to the following. When all Pods are ready, the STATUS field shows Running:
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-79f774bdb9-95khd       1/1     Running   0          43h
productpage-v1-7b8d9dcc69-95lc6   1/1     Running   0          23h
ratings-v1-b6994bb9-gt94b         1/1     Running   0          43h
reviews-v3-674d9bff46-4gl2v       1/1     Running   0          23h
Update the example workload Pods that are running in the target-cluster-2 cluster so that they use Multi-Cluster Service Discovery:
scripts/workloads.sh \
    --cluster-name target-cluster-2 \
    --cluster-region "${DEFAULT_REGION}" \
    --google-cloud-project "${DEFAULT_PROJECT}" \
    --consume-mcs
Confirm that the Pods of the example workload are ready:
kubectl get pods --namespace bookinfo
The output is similar to the following. When all Pods are ready, the STATUS field shows Running:
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-79f774bdb9-95khd       1/1     Running   0          43h
productpage-v1-7b8d9dcc69-95lc6   1/1     Running   0          23h
ratings-v1-b6994bb9-gt94b         1/1     Running   0          43h
reviews-v3-674d9bff46-4gl2v       1/1     Running   0          23h
Open a browser and go to the following URL, where MCI_IP is the VIP address that you noted when you exposed workloads using Multi Cluster Ingress earlier in this tutorial:
http://MCI_IP/productpage
A page is displayed with information about books and relevant ratings.
You deploy Multi-Cluster Service Discovery and the Pods that consume it in the target environment only. This approach ensures that you don't apply changes to the source environment until you validate the target environment. It also ensures that you don't affect your clients during the migration, which keeps downtime to a minimum.
The following diagram shows the target architecture of the system for this section:
The preceding diagram shows the example workload running in the following
GKE clusters: source-cluster-1
, target-cluster-1
, and
target-cluster-2
. Cloud Load Balancing exposes the workload running in each
GKE cluster. The architecture in the diagram matches the
configuration that you set up as follows:
- The workload that is running in the GKE cluster source-cluster-1 is exposed both by a load balancer that uses a GKE Ingress object and by a load balancer that uses Multi Cluster Ingress.
- The workloads that are running in the GKE clusters target-cluster-1 and target-cluster-2 are exposed by a load balancer that uses Multi Cluster Ingress.
- Multi-Cluster Service Discovery is configured to expose the workload running in the GKE cluster target-cluster-1 to the GKE cluster target-cluster-2, and the other way around.
A client accesses the workload through the load balancer that's configured with the GKE Ingress object.
Switching traffic to the target environment
At this point, clients can reach the example workload in the following ways:
- Using a GKE Ingress object that exposes the example workload from the source environment.
- Using Multi Cluster Ingress, which exposes the example workload from multiple clusters in the target environment.
In a typical production environment, clients can use a DNS record to resolve
the IP address of the
HTTP Load Balancer
that the bookinfo
GKE Ingress object created. For example,
clients can use the bookinfo.example.com
DNS A record
to resolve the IP address of the load balancer that you created using
GKE Ingress.
After you validate that the target environment meets your requirements, you can
update the bookinfo.example.com
DNS A record to point to the
Multi Cluster Ingress VIP address. This update causes clients to start pointing to
the workloads in the target environment. Before you implement this DNS-based
migration strategy, ensure that your DNS clients honor the
time to live (TTL) of DNS records.
Misbehaving DNS clients might ignore updates to DNS records and use stale cache
values instead. This DNS-based migration strategy doesn't support advanced
traffic-management features to implement gradual migration strategies, such as
partial traffic shifting or traffic mirroring. For information about
alternatives that support traffic management features, see
Evaluate your runtime platform and environments.
To implement this DNS-based migration strategy, you can use Cloud DNS to manage your DNS records. If you want to provision Cloud DNS resources with Terraform, refer to google_dns_managed_zone and google_dns_record_set.
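For example, with the gcloud CLI, the record update could look like the following sketch, which assumes a managed zone named example-zone and the hypothetical bookinfo.example.com record (EXTERNAL_IP is the GKE Ingress address and MCI_IP is the Multi Cluster Ingress VIP):

gcloud dns record-sets transaction start --zone example-zone
gcloud dns record-sets transaction remove EXTERNAL_IP \
    --name bookinfo.example.com. --ttl 300 --type A --zone example-zone
gcloud dns record-sets transaction add MCI_IP \
    --name bookinfo.example.com. --ttl 300 --type A --zone example-zone
gcloud dns record-sets transaction execute --zone example-zone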
In this document, GKE Ingress and Multi Cluster Ingress use two different static IP addresses. If your workloads can tolerate minimal downtime during a cutover window, you can delete the GKE Ingress object first, and then configure Multi Cluster Ingress to use the static IP address that the GKE Ingress object used. Although reusing the GKE Ingress static IP address for Multi Cluster Ingress causes minimal downtime, you avoid changing DNS records.
The following diagram shows the target architecture of the system for this section:
The preceding diagram shows the example workload running in the following
GKE clusters: source-cluster-1
, target-cluster-1
, and
target-cluster-2
. Cloud Load Balancing exposes the workload running in each
GKE cluster. The architecture in the diagram matches the
configuration that you set up as follows:
- The workload that is running in the GKE cluster source-cluster-1 is exposed both by a load balancer that uses a GKE Ingress object and by a load balancer that uses Multi Cluster Ingress.
- The workloads that are running in the GKE clusters target-cluster-1 and target-cluster-2 are exposed by a load balancer that uses Multi Cluster Ingress.
- Multi-Cluster Service Discovery exposes the workload running in the GKE cluster target-cluster-1 to the GKE cluster target-cluster-2, and the other way around.
A client accesses the workload through the load balancer that's configured with Multi Cluster Ingress.
Decommissioning the source environment
In this section, you decommission the source environment. Before you proceed, ensure that the source environment is no longer serving client requests. To do so, you can use Network Telemetry to access logs for clusters in the source environment. When you're sure that all clients have switched to the target environment, follow these steps:
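For example, if VPC Flow Logs are enabled on the subnets that the source cluster uses, you can inspect recent flows with a query similar to the following sketch (adjust the filter to your environment):

gcloud logging read \
    'resource.type="gce_subnetwork" AND logName:"vpc_flows"' \
    --project "${DEFAULT_PROJECT}" \
    --limit 10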
In Cloud Shell, change the working directory to the repository directory:
cd "${MCI_MCS_TUTORIAL_DIRECTORY_PATH}"
Delete the resources from the source-cluster-1 cluster:
gcloud container clusters get-credentials source-cluster-1 --region="${DEFAULT_REGION}" \
    && kubectl delete namespace bookinfo --wait
Change the working directory to the terraform directory:
cd "${MCI_MCS_TUTORIAL_DIRECTORY_PATH}"/terraform
Deregister the clusters in the source environment from the fleet and delete them:
terraform destroy -target module.gke-and-hub-source
The following diagram shows the target architecture of the system for this section:
The preceding diagram shows the architecture that you configured, in which the
example workload is running in the GKE clusters
target-cluster-1
and target-cluster-2
. The cluster source-cluster-1
is
decommissioned. A client accesses the workload through the load balancer that's
configured with Multi Cluster Ingress.
Cleaning up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, delete the resources and the projects that you created.
Delete the resources and projects
In Cloud Shell:
Change the working directory to the repository directory:
cd "${MCI_MCS_TUTORIAL_DIRECTORY_PATH}"
Delete the resources that you provisioned:
scripts/cleanup.sh \
    --google-cloud-project "${DEFAULT_PROJECT}" \
    --cluster-region "${DEFAULT_REGION}" \
    --terraform-state-project "${TERRAFORM_STATE_PROJECT}"
What's next
- Read about other strategies for migrating from a single-cluster environment to a multi-cluster environment.
- Learn how Multi-Cluster Service Discovery and Multi Cluster Ingress work.
- For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.