This tutorial shows how to initialize and configure a service mesh to support a feature-by-feature migration from an on-premises (legacy) data center to Google Cloud. The tutorial and its accompanying conceptual article are intended for sysadmins, developers, and engineers who want to use a service mesh that dynamically routes traffic either to the legacy environment or to Google Cloud.
A service mesh can greatly reduce the complexity of both the migration exercise and the refactoring effort because it decouples the network functions from the service functions. It also reduces the network operational complexity because it provides load balancing, traffic management, monitoring, and observability.
The following diagram shows how you can use a service mesh to route traffic either to microservices running in the legacy environment or to Google Cloud:
In this tutorial, you use the following software:
- Ubuntu Server and Container-Optimized OS: Operating systems used in this tutorial
- Docker Community Edition: Platform to run containerized workloads
- Docker Compose: A tool for defining and running Docker apps
- Helm: A tool to install and manage Kubernetes apps
- Istio: An open source service mesh
- Kiali: A tool to visualize Istio service meshes
- Envoy: A sidecar proxy used to join workloads to the Istio service mesh
Objectives
- Initialize an environment simulating the on-premises data center.
- Deploy example workloads to the on-premises data center.
- Test the workloads running on the on-premises data center.
- Configure the destination environment on Google Cloud.
- Migrate the workload from the on-premises data center to the destination environment.
- Test the workloads running in the destination environment.
- Retire the on-premises data center.
Costs
This tutorial uses billable components of Google Cloud, including Compute Engine and Google Kubernetes Engine (GKE).
Use the Pricing Calculator to generate a cost estimate based on your projected usage.
Before you begin
- Sign in to your Google Account. If you don't already have one, sign up for a new account.
- In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.
- Enable the Compute Engine and GKE APIs.
When you finish this tutorial, you can avoid continued billing by deleting the resources you created. For more information, see Cleaning up.
Preparing your environment
You perform most of the steps for this tutorial in Cloud Shell.
Open Cloud Shell:
Change the working directory to the $HOME directory:
cd "$HOME"
Clone the Git repository, which contains the scripts and the manifest files to deploy and configure the demo app:
git clone https://github.com/GoogleCloudPlatform/solutions-istio-mesh-expansion-migration
Set the default region and zone:
gcloud config set compute/region us-east1
gcloud config set compute/zone us-east1-b
Initialize an environment variable that stores the Istio version identifier:
export ISTIO_VERSION=1.1.1
Download and extract Istio:
wget https://github.com/istio/istio/releases/download/"$ISTIO_VERSION"/istio-"$ISTIO_VERSION"-linux.tar.gz
tar -xvzf istio-"$ISTIO_VERSION"-linux.tar.gz
Initialize environment variables that store the path where you extracted Istio, the Helm version identifier, and the path where you extract Helm:
ISTIO_PATH="$HOME"/istio-"$ISTIO_VERSION"
HELM_VERSION=v2.13.0
HELM_PATH="$HOME"/helm-"$HELM_VERSION"
Download and extract Helm:
wget https://storage.googleapis.com/kubernetes-helm/helm-"$HELM_VERSION"-linux-amd64.tar.gz
tar -xvzf helm-"$HELM_VERSION"-linux-amd64.tar.gz
mv linux-amd64 "$HELM_PATH"
The example workload
In this tutorial, you use the Bookinfo app, which is a 4-tier, polyglot microservices app that shows information about books. This app is designed to run on Kubernetes, but you first deploy it on a Compute Engine instance using Docker and Docker Compose. With Docker Compose, you describe multi-container apps using YAML descriptors. You can then start the app by executing a single command.
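To give an idea of what such a descriptor looks like, the following is a minimal sketch of a Docker Compose file for two of the Bookinfo services. The image tags and exact service definitions are assumptions for illustration only; the descriptor that this tutorial actually deploys is compose/bookinfo.yaml in the repository you clone later.
version: '3'
services:
  productpage:
    image: istio/examples-bookinfo-productpage-v1:1.10.1   # assumed image tag
    ports:
      - "9083:9080"   # host port 9083 -> container port 9080, matching the URL you test later
  details:
    image: istio/examples-bookinfo-details-v1:1.10.1       # assumed image tag
    ports:
      - "9082:9080"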
Although this example workload is already containerized, this approach also applies to non-containerized services. In such cases, you can add a "modernization phase" where you containerize services that you intend to migrate.
The Bookinfo app has four microservice components:
- productpage: Calls the details, ratings, and reviews microservices to populate the book information page
- details: Serves information about books
- reviews: Contains book reviews
- ratings: Returns book ranking information to accompany a book review
Setting up the environments
The first step is to configure the environments that you need for this tutorial:
- An environment simulating the (legacy) on-premises data center
- An environment simulating the migration destination
This tutorial is intended to help you migrate from a non-Google Cloud environment (such as on-premises or another cloud provider) to Google Cloud. Such migrations have a layer of network complexity because you have to set up a secure communication channel between the non-Google Cloud environment and the Google Cloud one.
For this tutorial, both environments run in Google Cloud. This simplifies the setup process by requiring only a single bootstrapping phase.
Provisioning the legacy environment
In this section, you configure a Google Cloud environment to emulate a separate non-Google Cloud environment by initializing Compute Engine instances and deploying the workloads to migrate. The following diagram shows the target architecture of the legacy environment:
Create firewall rules
You create firewall rules to allow external access to the microservices and to the database.
In Cloud Shell, create the firewall rules that are required for inter-node communications:
gcloud compute firewall-rules create bookinfo \
    --description="Bookinfo App rules" \
    --action=ALLOW \
    --rules=tcp:9080,tcp:9081,tcp:9082,tcp:9083,tcp:9084 \
    --target-tags=bookinfo-legacy-vm
Initialize a service account to manage Compute Engine instances
In this tutorial, you create a service account to manage Compute Engine
instances. It's a best practice to limit the service account to just the roles
and access permissions that are required in order to run the app. For this
tutorial, the only role required for the service account is the Compute Viewer
role (roles/compute.viewer
). This role provides read-only access to
Compute Engine resources.
In Cloud Shell, initialize an environment variable that stores the service account name:
GCE_SERVICE_ACCOUNT_NAME=istio-migration-gce
Create a service account:
gcloud iam service-accounts create "$GCE_SERVICE_ACCOUNT_NAME" --display-name="$GCE_SERVICE_ACCOUNT_NAME"
Initialize an environment variable that stores the full service account email address:
GCE_SERVICE_ACCOUNT_EMAIL="$(gcloud iam service-accounts list \
    --format='value(email)' \
    --filter=displayName:"$GCE_SERVICE_ACCOUNT_NAME")"
Bind the compute.viewer role to the service account:
gcloud projects add-iam-policy-binding "$(gcloud config get-value project 2> /dev/null)" \
    --member serviceAccount:"$GCE_SERVICE_ACCOUNT_EMAIL" \
    --role roles/compute.viewer
Initialize the runtime environment
The next step is to create and configure a Compute Engine instance to host the workloads to migrate.
In Cloud Shell, initialize and export a variable with the name of the Compute Engine instance:
export GCE_INSTANCE_NAME=legacy-vm
Create a Compute Engine instance:
gcloud compute instances create "$GCE_INSTANCE_NAME" \
    --boot-disk-device-name="$GCE_INSTANCE_NAME" \
    --boot-disk-size=10GB \
    --boot-disk-type=pd-ssd \
    --image-family=ubuntu-1804-lts \
    --image-project=ubuntu-os-cloud \
    --machine-type=n1-standard-1 \
    --metadata-from-file startup-script="$HOME"/solutions-istio-mesh-expansion-migration/gce-startup.sh \
    --scopes=storage-ro,logging-write,monitoring-write,service-control,service-management,trace \
    --service-account="$GCE_SERVICE_ACCOUNT_EMAIL" \
    --tags=bookinfo-legacy-vm
The n1-standard-1 machine type specified in this command is the smallest that lets you run the example workload without impacting performance. When this command is done, the console shows details about the new instance:
NAME       ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP   STATUS
legacy-vm  us-east1-b  n1-standard-1               10.142.0.38  34.73.53.145  RUNNING
The startup script configures the Compute Engine instance by:
- Installing Docker
- Installing Docker Compose
- Installing Dnsmasq
Deploy the workload in the legacy environment
In this tutorial, you deploy the Istio Bookinfo App as the workload to migrate.
In Cloud Shell, copy Docker Compose descriptors to the Compute Engine instance:
gcloud compute scp --recurse \
    "$HOME"/solutions-istio-mesh-expansion-migration/compose \
    "$GCE_INSTANCE_NAME":/tmp --zone=us-east1-b
Wait for Docker Compose installation to complete:
gcloud compute ssh "$GCE_INSTANCE_NAME" \
    --zone=us-east1-b \
    --command='while ! command -v docker-compose; do echo "Waiting for docker-compose to be installed"; sleep 5; done'
Start the Bookinfo app with Docker Compose:
gcloud compute ssh "$GCE_INSTANCE_NAME" \
    --zone=us-east1-b \
    --command='sudo docker-compose -f /tmp/compose/bookinfo.yaml up -d'
Test your deployment in the legacy environment
You've finished configuring the example workload, so now you can test it.
In Cloud Shell, find the external IP address of the Compute Engine instance that's running the example workload:
gcloud compute instances describe "$GCE_INSTANCE_NAME" \
    --zone=us-east1-b \
    --format='value(networkInterfaces[0].accessConfigs[0].natIP)'
Open a browser and go to the following URL, where [EXTERNAL_IP] is the IP address from the previous step:
http://[EXTERNAL_IP]:9083/productpage
A page is displayed with details about books and relevant ratings:
Provisioning the destination runtime environment
In this section, you configure the destination environment in Google Cloud by initializing a GKE cluster and exposing the legacy service using Istio. The following diagram shows the target architecture of destination runtime environment:
Initialize a service account to manage GKE clusters
In this tutorial, you create a service account to manage the Compute Engine instances in the GKE cluster. The GKE cluster nodes use this service account instead of the default service account, because you want to grant them fewer permissions than the default service account has. The roles required for the service account are monitoring.viewer, monitoring.metricWriter, and logging.logWriter, as described in Hardening your cluster's security.
In Cloud Shell, initialize an environment variable that stores the service account name:
GKE_SERVICE_ACCOUNT_NAME=istio-migration-gke
Create a service account:
gcloud iam service-accounts create "$GKE_SERVICE_ACCOUNT_NAME" \
    --display-name="$GKE_SERVICE_ACCOUNT_NAME"
Initialize an environment variable that stores the service account's email address:
GKE_SERVICE_ACCOUNT_EMAIL="$(gcloud iam service-accounts list \
    --format='value(email)' \
    --filter=displayName:"$GKE_SERVICE_ACCOUNT_NAME")"
Grant the monitoring.viewer, monitoring.metricWriter, and logging.logWriter roles to the service account:
gcloud projects add-iam-policy-binding \
    "$(gcloud config get-value project 2> /dev/null)" \
    --member serviceAccount:"$GKE_SERVICE_ACCOUNT_EMAIL" \
    --role roles/monitoring.viewer
gcloud projects add-iam-policy-binding \
    "$(gcloud config get-value project 2> /dev/null)" \
    --member serviceAccount:"$GKE_SERVICE_ACCOUNT_EMAIL" \
    --role roles/monitoring.metricWriter
gcloud projects add-iam-policy-binding \
    "$(gcloud config get-value project 2> /dev/null)" \
    --member serviceAccount:"$GKE_SERVICE_ACCOUNT_EMAIL" \
    --role roles/logging.logWriter
Prepare the GKE cluster
In this section, you launch the GKE cluster, install Istio, expose Istio services, and finish the cluster configuration. The first step is to create and launch the GKE cluster.
In Cloud Shell, initialize and export an environment variable that stores the GKE cluster name:
export GKE_CLUSTER_NAME=istio-migration
Create a regional GKE cluster with one node pool and a single node in each zone:
gcloud container clusters create "$GKE_CLUSTER_NAME" \
    --addons=HorizontalPodAutoscaling,HttpLoadBalancing \
    --enable-autoupgrade \
    --enable-network-policy \
    --enable-ip-alias \
    --machine-type=n1-standard-4 \
    --metadata disable-legacy-endpoints=true \
    --node-locations us-east1-b,us-east1-c,us-east1-d \
    --no-enable-legacy-authorization \
    --no-enable-basic-auth \
    --no-issue-client-certificate \
    --num-nodes=1 \
    --region us-east1 \
    --service-account="$GKE_SERVICE_ACCOUNT_EMAIL"
This command creates a GKE cluster named istio-migration. The execution of this command might take up to five minutes. When the command is done, the console shows details about the newly created cluster:
NAME             LOCATION  MASTER_VERSION  MASTER_IP      MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
istio-migration  us-east1  1.11.7-gke.4    35.196.136.88  n1-standard-4  1.11.7-gke.4  3          RUNNING
Next, you use Helm to install Istio in the cluster.
In Cloud Shell, initialize and export an environment variable that stores the Istio namespace name:
export ISTIO_NAMESPACE=istio-system
Create the Istio namespace:
kubectl apply -f "$ISTIO_PATH"/install/kubernetes/namespace.yaml
Create a Kubernetes service account for Tiller, the server portion of Helm:
kubectl apply -f "$ISTIO_PATH"/install/kubernetes/helm/helm-service-account.yaml
Install Tiller in your cluster:
"$HELM_PATH"/helm init --service-account tiller
Install the istio-init chart to bootstrap all of Istio's custom resource definitions:
"$HELM_PATH"/helm install "$ISTIO_PATH"/install/kubernetes/helm/istio-init --name istio-init --namespace "$ISTIO_NAMESPACE"
When this command is done, the console shows a summary of the new objects that were installed in the GKE cluster:
NAME:   istio-init
LAST DEPLOYED: Wed Mar 20 11:39:12 2019
NAMESPACE: istio-system
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRole
NAME                     AGE
istio-init-istio-system  1s

==> v1/ClusterRoleBinding
NAME                                        AGE
istio-init-admin-role-binding-istio-system  1s

==> v1/ConfigMap
NAME          DATA  AGE
istio-crd-10  1     1s
istio-crd-11  1     1s

==> v1/Job
NAME               COMPLETIONS  DURATION  AGE
istio-init-crd-10  0/1          1s        1s
istio-init-crd-11  0/1          1s        1s

==> v1/Pod(related)
NAME                     READY  STATUS             RESTARTS  AGE
istio-init-crd-10-2s28z  0/1    ContainerCreating  0         1s
istio-init-crd-11-28n9r  0/1    ContainerCreating  0         1s

==> v1/ServiceAccount
NAME                        SECRETS  AGE
istio-init-service-account  1        1s
Verify that the Istio CRDs were committed to the Kubernetes api-server:
kubectl get crds | grep 'istio.io\|certmanager.k8s.io' | wc -l
The expected output is 53.
Install the Istio chart:
"$HELM_PATH"/helm install "$ISTIO_PATH"/install/kubernetes/helm/istio \ --name istio \ --namespace "$ISTIO_NAMESPACE" \ --set gateways.istio-ilbgateway.enabled=true \ --set global.meshExpansion.enabled=true \ --set global.meshExpansion.useILB=true \ --set grafana.enabled=true \ --set kiali.enabled=true \ --set kiali.createDemoSecret=true \ --set kiali.dashboard.grafanaURL=http://grafana:3000 \ --set prometheus.enabled=true \ --set tracing.enabled=true
Execution of this command might take up to two minutes. When the command is done, the console shows a summary of the new objects that were installed in the GKE cluster:
NAME:   istio
LAST DEPLOYED: Wed Mar 20 11:43:08 2019
NAMESPACE: istio-system
STATUS: DEPLOYED

RESOURCES:
A list of the deployed resources follows this summary.
Because Compute Engine instances that you want to add to the service mesh must have access to the Istio control plane services (Pilot, Mixer, Citadel), you must expose these services through the istio-ingressgateway and istio-ilbgateway services. You also expose the Kubernetes DNS server using an internal load balancer so that the server can be queried to resolve names for services running in the cluster. Finally, you expose Kiali to visualize the service mesh.
In Cloud Shell, create the internal load balancer to expose the Kubernetes DNS server:
kubectl apply -f "$HOME"/solutions-istio-mesh-expansion-migration/kubernetes/kube-dns-ilb.yaml
Wait for the service object named kube-dns-ilb to be assigned an external IP address:
kubectl get svc kube-dns-ilb -n kube-system --watch
The output displays an IP address for EXTERNAL-IP:
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kube-dns-ilb   LoadBalancer   10.35.252.144   10.128.0.3    53:31054/UDP   4d
To stop the command, press Control+C.
Wait for the service object named istio-ingressgateway to be assigned an external IP address:
kubectl get svc istio-ingressgateway -n "$ISTIO_NAMESPACE" --watch
The output displays an IP address for EXTERNAL-IP:
NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                                                                                                                   AGE
istio-ingressgateway   LoadBalancer   10.48.2.195   34.73.84.179   80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:31145/TCP,8060:30381/TCP,853:30784/TCP,15030:32124/TCP,15031:32703/TCP   4d
To stop the command, press Control+C.
Wait for the service object named istio-ilbgateway to be assigned an external IP address:
kubectl get svc istio-ilbgateway -n "$ISTIO_NAMESPACE" --watch
The output displays an IP address for EXTERNAL-IP:
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                         AGE
istio-ilbgateway   LoadBalancer   10.48.14.190   10.142.0.31   15011:30805/TCP,15010:31082/TCP,8060:30953/TCP,5353:30536/TCP   2m
To stop the command, press Control+C.
Expose Kiali with a Gateway and a VirtualService:
kubectl apply -f "$HOME"/solutions-istio-mesh-expansion-migration/kubernetes/kiali.yaml
Configure Istio mesh expansion
In this section, you configure Istio mesh expansion to allow Compute Engine instances to join the service mesh. The first task is to generate the configuration files to deploy in each Compute Engine instance you want to join the mesh.
In Cloud Shell, initialize and export an environment variable that stores the default namespace name:
export SERVICE_NAMESPACE=default
Extract the keys for the service account used by Istio:
"$ISTIO_PATH"/install/tools/setupMeshEx.sh machineCerts default "$SERVICE_NAMESPACE" all
Initialize and export an environment variable that stores the options for the configuration generation script:
export GCP_OPTS="--region $(gcloud config get-value compute/region 2> /dev/null)"
Run the cluster environment configuration generation script:
"$ISTIO_PATH"/install/tools/setupMeshEx.sh generateClusterEnv "$GKE_CLUSTER_NAME"
Run the DNS configuration generation script:
"$HOME"/solutions-istio-mesh-expansion-migration/gce-mesh-expansion-setup.sh
Next, you configure the Compute Engine instance that you want to join the mesh.
Unset the GCP_OPTS variable to let the initialization script pick the defaults:
unset GCP_OPTS
Initialize and export an environment variable that stores the path for the setup script:
export SETUP_ISTIO_VM_SCRIPT="$ISTIO_PATH"/install/tools/setupIstioVM.sh
Prepare the Istio version descriptor for the Compute Engine instance initialization:
cp "$ISTIO_PATH"/istio.VERSION "$HOME"
Run the Compute Engine instance setup script:
"$ISTIO_PATH"/install/tools/setupMeshEx.sh gceMachineSetup "$GCE_INSTANCE_NAME"
Configure the local ports that route inbound traffic through the Envoy sidecar:
gcloud compute ssh "$GCE_INSTANCE_NAME" \
    --zone=us-east1-b \
    --command='sudo sed -i -e "\$aISTIO_INBOUND_PORTS=9081,9082,9083,9084" /var/lib/istio/envoy/sidecar.env'
Configure the Kubernetes namespace that the instance's services belong to:
gcloud compute ssh "$GCE_INSTANCE_NAME" \
    --zone=us-east1-b \
    --command='sudo sed -i -e "\$aISTIO_NAMESPACE='"$SERVICE_NAMESPACE"'" /var/lib/istio/envoy/sidecar.env'
Restart Istio:
gcloud compute ssh "$GCE_INSTANCE_NAME" \
    --zone=us-east1-b \
    --command='sudo systemctl restart istio'
Check that Istio has been started correctly:
gcloud compute ssh "$GCE_INSTANCE_NAME" \
    --zone=us-east1-b \
    --command='sudo systemctl --type=service --state=running list-units | grep "dnsmasq\|istio\|systemd-resolved"'
The output shows that the dnsmasq, istio, istio-auth-node-agent, and systemd-resolved services are loaded, active, and running:
dnsmasq.service                  loaded active running dnsmasq - A lightweight DHCP and caching DNS server
istio-auth-node-agent.service    loaded active running istio-auth-node-agent: The Istio auth node agent
istio.service                    loaded active running istio-sidecar: The Istio sidecar
systemd-resolved.service         loaded active running Network Name Resolution
You then add the services running on the Compute Engine instance to the Istio service mesh.
In Cloud Shell, create the Kubernetes services without selectors to expose the services running in the Compute Engine instance:
kubectl apply -f "$HOME"/solutions-istio-mesh-expansion-migration/kubernetes/bookinfo/selectorless-services.yaml
Initialize an environment variable that stores the IP address of the Compute Engine instance where the example workload is running:
GCE_INSTANCE_IP="$(gcloud compute instances describe "$GCE_INSTANCE_NAME" --format='value(networkInterfaces[].networkIP)')"
Register the services to the mesh:
"$ISTIO_PATH"/bin/istioctl register details "$GCE_INSTANCE_IP" http:9082 -n "$SERVICE_NAMESPACE" "$ISTIO_PATH"/bin/istioctl register productpage "$GCE_INSTANCE_IP" http:9083 -n "$SERVICE_NAMESPACE" "$ISTIO_PATH"/bin/istioctl register ratings "$GCE_INSTANCE_IP" http:9081 -n "$SERVICE_NAMESPACE" "$ISTIO_PATH"/bin/istioctl register reviews "$GCE_INSTANCE_IP" http:9084 -n "$SERVICE_NAMESPACE"
Finally, you configure the VirtualServices and the corresponding routing rules for the services in the mesh that are currently running on the Compute Engine instance. You also expose the productpage service through an Istio ingress gateway.
In Cloud Shell, deploy a Gateway object to expose services:
kubectl apply -f "$HOME"/solutions-istio-mesh-expansion-migration/kubernetes/bookinfo/istio/gateway.yaml
Deploy the VirtualService that routes traffic from the Gateway object to the productpage instance running in Compute Engine:
kubectl apply -f "$HOME"/solutions-istio-mesh-expansion-migration/kubernetes/bookinfo/istio/virtualservice-productpage-vm.yaml
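To illustrate what these objects do, the following sketch shows a Gateway and a VirtualService that expose productpage through the ingress gateway. The VirtualService name and exact match rules are assumptions; the authoritative definitions are gateway.yaml and virtualservice-productpage-vm.yaml in the repository.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway        # use the default Istio ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productpage              # assumed name
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - route:
    - destination:
        host: productpage        # the selectorless Service backed by the Compute Engine instance
        port:
          number: 9083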
Create the ServiceEntries to enable service discovery for the services running in the Compute Engine instances:
kubectl apply -f "$HOME"/solutions-istio-mesh-expansion-migration/kubernetes/bookinfo/istio/serviceentry.yaml
Wait for the Service object named istio-ingressgateway to be assigned an external IP address:
kubectl get svc istio-ingressgateway -n istio-system --watch
In the output, you should see an IP address for EXTERNAL-IP:
NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                                                                                                                   AGE
istio-ingressgateway   LoadBalancer   10.48.2.195   34.73.84.179   80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:31145/TCP,8060:30381/TCP,853:30784/TCP,15030:32124/TCP,15031:32703/TCP   4d
Confirm that a Gateway object named bookinfo-gateway has been created:
kubectl get gateway --watch
In the output, you should see bookinfo-gateway in the list of Gateway objects:
NAME               AGE
bookinfo-gateway   3h
Test Istio mesh expansion
Now that you've finished exposing the example workload running in the Compute Engine instance using Istio, you can test it:
In Cloud Shell, find the external IP address of the Istio ingress gateway:
kubectl get svc istio-ingressgateway -n istio-system
Open a browser and go to the following URL, where [EXTERNAL_IP] is the IP address from the previous step:
http://[EXTERNAL_IP]/productpage
A page is displayed with information about books and relevant ratings:
Visualizing the service mesh
In this section, you use Kiali to see a visual representation of the service mesh.
In Cloud Shell, find the external IP address of the Istio ingress gateway:
kubectl get svc istio-ingressgateway -n istio-system
EXTERNAL_IP="$(kubectl get svc istio-ingressgateway -n istio-system -o=jsonpath="{.status.loadBalancer.ingress[0].ip}")"
Open a browser and go to the following URL, where [EXTERNAL_IP] is the IP address from the previous step:
http://[EXTERNAL_IP]:15029/kiali/console/graph/namespaces/?edges=requestsPercentOfTotal&graphType=versionedApp&namespaces=default&injectServiceNodes=true&duration=60&pi=5000&layout=dagre
At the Kiali login screen, log in with the following credentials:
- Username: admin
- Password: admin
Run a request multiple times for the main page of the example workload:
for i in {1..10000}; do curl -s -o /dev/null -w "%{http_code}\n" http://"$EXTERNAL_IP"/productpage; done
With this command, you generate traffic to the Bookinfo app. The expected output is a list of the HTTP return codes of each request (200 OK in this case):
200
200
200
[...]
In the Kiali service dashboard, you should see a diagram of the current mesh, with all the traffic routed to services running in Compute Engine. All the traffic is routed from istio-ingressgateway to the Kubernetes Service productpage.default.svc.cluster.local, pointing to the productpage microservice running on the Compute Engine instance. You don't see the other microservices in the graph (details, reviews, ratings) because Docker Compose is handling the routing locally on the Compute Engine instance.
If you don't see the diagram, refresh the Kiali dashboard page.
Migrating the workload
In this section, you migrate the components of the example workload from the Compute Engine instance to the GKE cluster. For each service of the example workload, you:
- Deploy a Pod that runs the service in the GKE cluster.
- Configure rules to split traffic between the service that's running in the GKE cluster and the one that's running in the Compute Engine instance.
- Gradually migrate traffic from the service that's running in Compute Engine instance to the GKE cluster.
- Stop the service that's running in the Compute Engine instance.
The following diagram shows the target architecture of the system for this section:
Enabling Istio sidecar injection
Each service needs an Envoy sidecar proxy running alongside it to join the Istio service mesh. You can enable automatic sidecar injection to avoid making manual edits to your Pods' configuration.
In Cloud Shell, stop the traffic generation command by pressing Control+C.
Enable Istio-managed automatic sidecar injection in the Kubernetes namespace where you will deploy the service instances:
kubectl label namespace "$SERVICE_NAMESPACE" istio-injection=enabled
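The label is what the Istio sidecar injector webhook looks for: once it is set, every Pod created in the namespace automatically gets an Envoy sidecar container. Expressed as a manifest, the labeled namespace looks like this:
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled   # new Pods in this namespace get an Envoy sidecar injected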
Deploying services in the GKE cluster
In this section, you deploy instances in the GKE cluster and route part of the traffic to those instances.
In Cloud Shell, delete the ServiceEntries before deploying other services in the mesh. Deleting these entries prevents routing requests to instances that aren't yet ready:
kubectl delete -f "$HOME"/solutions-istio-mesh-expansion-migration/kubernetes/bookinfo/istio/serviceentry.yaml
Deploy Pods and Kubernetes Services for the productpage, details, reviews, and ratings microservices in the cluster:
kubectl apply -f "$HOME"/solutions-istio-mesh-expansion-migration/kubernetes/bookinfo/productpage.yaml
kubectl apply -f "$HOME"/solutions-istio-mesh-expansion-migration/kubernetes/bookinfo/details.yaml
kubectl apply -f "$HOME"/solutions-istio-mesh-expansion-migration/kubernetes/bookinfo/ratings.yaml
kubectl apply -f "$HOME"/solutions-istio-mesh-expansion-migration/kubernetes/bookinfo/reviews.yaml
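Each of these manifests defines the Deployment (and related Service objects) for one microservice. Purely as an illustration, and with an assumed image tag and labeling scheme, a Deployment for the details microservice running in GKE might look like the following sketch; the authoritative manifests are the files applied above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: details-gke                # assumed name; a "-gke" identifier appears later in the Kiali graph
  labels:
    app: details
    version: v1-gke                # assumed version label
spec:
  replicas: 1
  selector:
    matchLabels:
      app: details
      version: v1-gke
  template:
    metadata:
      labels:
        app: details
        version: v1-gke
    spec:
      containers:
      - name: details
        image: istio/examples-bookinfo-details-v1:1.10.1   # assumed image tag
        ports:
        - containerPort: 9080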
Update the VirtualServices configuration to split incoming traffic between the instances running in the Compute Engine machine and the ones running in the GKE cluster:
kubectl apply -f "$HOME"/solutions-istio-mesh-expansion-migration/kubernetes/bookinfo/istio/virtualservice-productpage-split.yaml kubectl apply -f "$HOME"/solutions-istio-mesh-expansion-migration/kubernetes/bookinfo/istio/virtualservice-details-split.yaml kubectl apply -f "$HOME"/solutions-istio-mesh-expansion-migration/kubernetes/bookinfo/istio/virtualservice-ratings-split.yaml kubectl apply -f "$HOME"/solutions-istio-mesh-expansion-migration/kubernetes/bookinfo/istio/virtualservice-reviews-split.yaml
Create the ServiceEntries to enable service discovery for the services running in the Compute Engine instances:
kubectl apply -f "$HOME"/solutions-istio-mesh-expansion-migration/kubernetes/bookinfo/istio/serviceentry.yaml
The next task is to validate your hybrid deployment by inspecting traffic directed to all microservice instances in your two environments (Compute Engine and GKE).
You use Kiali to see a visual representation of the service mesh:
In Cloud Shell, find the external IP address of the Istio ingress gateway:
kubectl get svc istio-ingressgateway -n istio-system
EXTERNAL_IP="$(kubectl get svc istio-ingressgateway -n istio-system -o=jsonpath="{.status.loadBalancer.ingress[0].ip}")"
Open a browser and go to the following URL, where [EXTERNAL_IP] is the IP address from the previous step:
http://[EXTERNAL_IP]:15029/kiali/console/graph/namespaces/?edges=requestsPercentOfTotal&graphType=versionedApp&namespaces=default&injectServiceNodes=true&duration=60&pi=5000&layout=dagre
At the Kiali login screen, log in with the following credentials, if necessary:
- Username: admin
- Password: admin
Run a request multiple times for the main page of the example workload:
for i in {1..10000}; do curl -s -o /dev/null -w "%{http_code}\n" http://"$EXTERNAL_IP"/productpage; done
With this command, you generate traffic to the Bookinfo app. The expected output is a list of the HTTP return codes of each request (200 OK in this case):
200
200
200
[...]
In the Kiali service dashboard, you should see that traffic is being split almost equally between the microservice instances running in Compute Engine and the ones running in GKE (with the -gke suffix in the first part of the service identifier). This is because you configured the routes in the VirtualServices to have the same weight.
If you don't see the diagram, refresh the Kiali dashboard page.
Routing traffic to the GKE cluster only
When you're confident of the deployment in the GKE cluster, you can update Services and VirtualServices to route traffic to the cluster only.
The following diagram shows the target architecture of the system for this section:
In Cloud Shell, stop the traffic generation command by pressing Control+C.
Update the Services and VirtualServices to route traffic to the microservice instances in the GKE cluster:
kubectl apply -f "$HOME"/solutions-istio-mesh-expansion-migration/kubernetes/bookinfo/istio/virtualservice-gke.yaml
To validate your deployment, you inspect traffic directed to all microservice instances in your two environments (Compute Engine and GKE).
You use Kiali to see a visual representation of the service mesh:
In Cloud Shell, find the external IP address of the Istio ingress gateway:
kubectl get svc istio-ingressgateway -n istio-system
EXTERNAL_IP="$(kubectl get svc istio-ingressgateway -n istio-system -o=jsonpath="{.status.loadBalancer.ingress[0].ip}")"
Open a browser and go to the following URL, where [EXTERNAL_IP] is the IP address from the previous step:
http://[EXTERNAL_IP]:15029/kiali/console/graph/namespaces/?edges=requestsPercentOfTotal&graphType=versionedApp&namespaces=default&injectServiceNodes=true&duration=60&pi=5000&layout=dagre
At the Kiali login screen, log in with the following credentials, if necessary:
- Username: admin
- Password: admin
Run a request for the main page of the example workload multiple times:
for i in {1..10000}; do curl -s -o /dev/null -w "%{http_code}\n" http://"$EXTERNAL_IP"/productpage; done
With this command, you generate traffic to the Bookinfo app. The expected output is a list of the HTTP return codes of each request (200 OK in this case):
200
200
200
[...]
In the Kiali service dashboard, you should see that traffic is routed to the microservice instances running in the GKE cluster (with the -gke suffix in the first part of the service identifier), while no traffic is routed to the instances running in Compute Engine. Although you deployed two instances of each microservice (one running in the Compute Engine instance, the other running in the GKE cluster), you configured the VirtualServices to route traffic only to the microservice instances running in the GKE cluster.
If you don't see the diagram, refresh the Kiali dashboard page.
Retiring the legacy data center
Because all the traffic is routed to the GKE cluster, you can now delete the ServiceEntries for the microservices running in Compute Engine and stop Docker Compose.
The following diagram shows the target architecture of the system for this section:
In Cloud Shell, stop the traffic generation command by pressing Control+C.
Delete the ServiceEntries for the microservices running in Compute Engine:
kubectl delete -f "$HOME"/solutions-istio-mesh-expansion-migration/kubernetes/bookinfo/istio/serviceentry.yaml
Stop the example workload running with Docker Compose:
gcloud compute ssh "$GCE_INSTANCE_NAME" \
    --zone=us-east1-b \
    --command='sudo docker-compose -f /tmp/compose/bookinfo.yaml down --remove-orphans -v'
Visualize the expanded mesh - Running only in GKE
In this section, you validate your deployment by inspecting traffic directed to all microservice instances in your two environments (Compute Engine and GKE).
You use Kiali to see a visual representation of the service mesh:
In Cloud Shell, find the external IP address of the Istio ingress gateway:
kubectl get svc istio-ingressgateway -n istio-system
EXTERNAL_IP="$(kubectl get svc istio-ingressgateway -n istio-system -o=jsonpath="{.status.loadBalancer.ingress[0].ip}")"
Open a browser and go to the following URL, where [EXTERNAL_IP] is the IP address from the previous step:
http://[EXTERNAL_IP]:15029/kiali/console/graph/namespaces/?edges=requestsPercentOfTotal&graphType=versionedApp&namespaces=default&injectServiceNodes=true&duration=60&pi=5000&layout=dagre
At the Kiali login screen, log in with the following credentials, if necessary:
- Username: admin
- Password: admin
Run a request multiple times for the main page of the example workload:
for i in {1..10000}; do curl -s -o /dev/null -w "%{http_code}\n" http://"$EXTERNAL_IP"/productpage; done
With this command, you generate traffic to the Bookinfo app. The expected output is a list of the HTTP return codes of each request (200 OK in this case):
200
200
200
[...]
In the Kiali service dashboard, you should see only services pointing to instances in the GKE cluster (with the -gke suffix in the first part of the service identifier); services pointing to instances running in Compute Engine aren't part of the mesh anymore, because you deleted the related ServiceEntries.
If you don't see the diagram, refresh the Kiali dashboard page.
Cleaning up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
- In the Cloud Console, go to the Manage resources page.
- In the project list, select the project that you want to delete, and then click Delete.
- In the dialog, type the project ID, and then click Shut down to delete the project.
What's next
- Read about Google Kubernetes Engine.
- Read about Istio.
- Try out other Google Cloud features for yourself. Have a look at our tutorials.