Migrating a two-tier application to GKE

This tutorial demonstrates an end-to-end migration with Migrate for Anthos.

In completing this tutorial, you will:

  1. Migrate a two-tier LAMP stack application, with both application and database VMs, from VMware to GKE.
  2. Improve security by making the database accessible from the application container only and not from outside the cluster.
  3. Replace SSH access with authenticated shell access through kubectl.
  4. See container system logs through automatic Stackdriver integration.

The structure of this tutorial follows the phases of a migration journey.

Discovery and Planning

Planning

The application running on VMware is accessible at https://app.mydomain.local. It consists of two VMs: an application VM and a database VM. SSH is enabled on both for console access.

Schematic showing on-premises and GCP environments connected using a VPN. On-premises, the application and database are running within VMware. GCP has a configured VPC and subnet, but no running instances.

Creating a landing zone

A landing zone refers to the area configured to receive your migrated workloads. It includes:

A Migrate for Anthos deployment with:

  • A connection between GCP and your corporate network, using Cloud VPN or Cloud Interconnect
  • A Virtual Private Cloud
  • A Migrate for Compute Engine environment including a Migrate for Compute Engine Manager and Cloud Extension
  • A GKE cluster with Migrate for Anthos components

You will need to configure:

  • Services to expose your applications on that cluster.

In this section, you will start with a configured Migrate for Anthos environment and create your application's services.

Prerequisites

Before you complete this tutorial:

  1. Install Migrate for Compute Engine on GCP and your VMware environment.
  2. Create a GKE cluster and deploy Migrate for Anthos to it.
  3. Make sure that you have credentials for your GKE cluster.

    gcloud container clusters get-credentials [CLUSTER_NAME] \
    --zone=[ZONE]

    For a regional cluster, pass --region=[REGION] instead of --zone=[ZONE].
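
    To confirm that kubectl now points at your cluster, list its nodes:

    kubectl get nodes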
    

Deploy services

In this section, you will:

  • Create a service for the application layer with an internal load balancer (ILB) and expose HTTPS on port TCP/443.
  • Create a service for the database layer on the GKE cluster network.
  • Create a Kubernetes Network Policy to allow network access to the database from the application's pod only.

Schematic of a landing zone, showing empty services for application and database. The app and DB VMs remain on VMware.

Kubernetes configurations are defined in YAML files and applied from the command line using kubectl.

The YAML below defines:

  • A LoadBalancer Service for the application pod (suitecrm-app), which exposes HTTPS on port 443 through an internal load balancer
  • A ClusterIP Service for the database (suitecrm-db), open on port 3306
  • A NetworkPolicy on suitecrm-db that allows connections to port 3306 only from pods labeled app: suitecrm-app

kind: Service
apiVersion: v1
metadata:
  name: suitecrm-app
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: suitecrm-app
  ports:
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
---

kind: Service
apiVersion: v1
metadata:
  name: suitecrm-db
spec:
  type: ClusterIP
  selector:
    app: suitecrm-db
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306

---

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: suitecrm-db-restricted-access
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: suitecrm-db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: suitecrm-app
    ports:
    - protocol: TCP
      port: 3306

---

To apply the services configuration, run:

kubectl apply -f 1-anthos-migrate-landing-zone.yaml

Kubernetes creates the landing zone components. To check the configuration of your services, run:

kubectl get services
kubectl get networkpolicies

In the output, you will see the services and network policy, similar to the example below:

kubectl get services
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes     ClusterIP      10.51.240.1     <none>        443/TCP         30m
suitecrm-app   LoadBalancer   10.51.243.234   10.130.0.39   443:31831/TCP   30m
suitecrm-db    ClusterIP      10.51.240.103   <none>        3306/TCP        30m

kubectl get networkpolicies
NAME                            POD-SELECTOR      AGE
suitecrm-db-restricted-access   app=suitecrm-db   30m
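
Once the database pod is running, you can spot-check the policy by trying to connect to the database service from a throwaway pod that does not carry the app: suitecrm-app label. This is a minimal sketch assuming the busybox image's nc and a cluster with network policy enforcement enabled; the connection should time out:

kubectl run netpol-test -it --rm --restart=Never --image=busybox \
  -- nc -zv -w 5 suitecrm-db 3306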

Migrating the application

In this section, you will migrate VMs from VMware and launch them on GKE. After completing this, your application and database pods will be connected via the services you created in the Landing Zone step.

Schematic showing application and database have been moved to services backed by containers on GCP.

Prerequisites

Before editing the YAML configuration below, you will need:

  • The VM ID of your VMware VMs. To find that, see Migrating VMware to GKE.
  • The name of the Storage Class you set when you created your Migrate for Anthos deployment.

Configuring your migration

In this section, you will create four Kubernetes resources for your migrated VMs: a PersistentVolumeClaim and a StatefulSet for each of the application and the database.

When you apply this configuration, Migrate for Anthos makes a Test Clone of each source VM: it takes a snapshot, launches a container on GKE from that snapshot, and streams the container's storage from it.

Edit the YAML file as noted below with your VM IDs and Storage Class names. For more information on Migrate for Anthos specific parameters, see the YAML reference.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-vlsdisk-suitecrm-app
  annotations:
    # Replace vlsVmMoref with the application's VM ID
    pvc.csi.kubernetes.io/vlsVmMoref: "vm-1"
    pvc.csi.kubernetes.io/vlsVmDataAccessMode: "Streaming"
    pvc.csi.kubernetes.io/vlsRunMode: "TestClone"
spec:
  accessModes:
  - ReadWriteOnce
  # Replace with your Storage Class name
  storageClassName: csi-vlsdisk-v2k-demo-env-sc
  resources:
    requests:
      storage: 1Gi
---

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-vlsdisk-suitecrm-db
  annotations:
    # Replace vlsVmMoref with the database's VM ID
    pvc.csi.kubernetes.io/vlsVmMoref: "vm-2"
    pvc.csi.kubernetes.io/vlsVmDataAccessMode: "Streaming"
    pvc.csi.kubernetes.io/vlsRunMode: "TestClone"
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: csi-vlsdisk-v2k-demo-env-sc # Replace with your Storage Class name
  resources:
    requests:
      storage: 1Gi

---

kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
  name: suitecrm-app
  namespace: default
spec:
  serviceName: "suitecrm-app-svc"
  replicas: 1
  selector:
    matchLabels:
      app: suitecrm-app
  template:
    metadata:
      labels:
        app: suitecrm-app
      annotations:
        anthos-migrate.gcr.io/action: run
        anthos-migrate.gcr.io/source-type: vlsdisk
        # source-pvc needs to match the name of the PVC declared above.
        anthos-migrate.gcr.io/source-pvc: csi-vlsdisk-suitecrm-app
    spec:
      containers:
      - name: suitecrm-app
        # The image for the Migrate for Anthos system container.
        image: anthos-migrate.gcr.io/v2k-run:v0.9.6

---
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
  name: suitecrm-db
  namespace: default
spec:
  serviceName: "suitecrm-db-svc"
  replicas: 1
  selector:
    matchLabels:
      app: suitecrm-db
  template:
    metadata:
      labels:
        app: suitecrm-db
      annotations:
        anthos-migrate.gcr.io/action: run
        anthos-migrate.gcr.io/source-type: vlsdisk
        anthos-migrate.gcr.io/source-pvc: csi-vlsdisk-suitecrm-db
    spec:
      containers:
      - name: suitecrm-db
        image: anthos-migrate.gcr.io/v2k-run:v0.9.6

---

After editing the YAML, apply the configuration to your cluster.

kubectl apply -f 2-anthos-migrate-migrate-vms.yaml

kubectl confirms that the StatefulSets for the application and the database were created.
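
You can watch the pods start up:

kubectl get pods --watch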

At this point, you can update your corporate DNS server to point app.mydomain.local to the IP address of the application service ILB. Retrieve the application service's external IP with kubectl get service.

kubectl get service suitecrm-app
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
suitecrm-app   LoadBalancer   10.51.243.234   10.130.0.39   443:31831/TCP   59m

The EXTERNAL-IP column shows the address of the ILB. Once you update your DNS configuration with this IP address, clients can reach the application without any reconfiguration on their side.
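
If you script the DNS update, you can extract just the ILB address with a jsonpath query:

kubectl get service suitecrm-app \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'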

Checking your container without SSH

You can open a shell prompt on a container using kubectl exec.

First, find the name of the Pod in your cluster that you want to connect to. In the example below, the name of the Pod is suitecrm-app-0.

kubectl describe pods | grep Name

Name:               suitecrm-app-0
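
Alternatively, list only the pods carrying a given label:

kubectl get pods -l app=suitecrm-app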

Then, execute the bash shell in an interactive prompt.

kubectl exec -it [POD_NAME] -- /bin/bash
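
You can also run a one-off command without opening an interactive shell; for example, to list the processes running in the migrated container:

kubectl exec suitecrm-app-0 -- ps aux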

For more information, see the Kubernetes documentation.

Viewing container logs

By default, Migrate for Anthos is configured to send logs from your containers to Stackdriver, so you can view workload logs in the Stackdriver user interface.

You can view system logs that are in Stackdriver from the GCP Console. To do so:

  1. Open GKE Workloads.
  2. Find your workload and click on its Name. The Deployment Details page appears.
  3. Find the row labeled Logs and click on Container logs.

This loads Stackdriver, showing logs for this workload only.

Container logs in Stackdriver
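
You can also query the same logs from the command line. This is a sketch assuming Stackdriver Kubernetes Engine Monitoring, where container logs carry the k8s_container resource type (legacy Stackdriver support uses container instead):

gcloud logging read \
  'resource.type="k8s_container" AND resource.labels.pod_name="suitecrm-app-0"' \
  --limit=10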

Migrating Storage

In this section, you will finalize the migration to GKE by moving your container's streaming PVs to Compute Engine Persistent Disk using the Migrate for Anthos storage exporter.

Prerequisites

Before performing an export, calculate the total size of the disks to be migrated with the source VM. We recommend adding a margin of 15-25% to accommodate future growth. In this example, the application and database each have a 16 GB volume, and you will configure the exported volumes with a size of 20 GB.
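
One way to check the current usage is from inside a running container; for example:

kubectl exec -it suitecrm-app-0 -- df -h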

Configure your YAML

The YAML below defines:

  • A new StorageClass (gce-pd-ssd-intree) backed by Compute Engine Persistent Disk SSDs
  • A PersistentVolumeClaim for each exported volume
  • A ConfigMap holding the storage exporter's data filter
  • A storage exporter Job for each VM

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gce-pd-ssd-intree
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: none

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gce-pd-claim-suitecrm-app
spec:
  storageClassName: gce-pd-ssd-intree
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # Replace this with your target volume size
      storage: 20G

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gce-pd-claim-suitecrm-db
spec:
  storageClassName: gce-pd-ssd-intree
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # Replace this with your target volume size
      storage: 20G

---
# Storage Exporter configuration

apiVersion: v1
kind: ConfigMap
metadata:
  name: exporter-sample-config
  namespace: exporter
data:
  config: |-
    appSpec:
      dataFilter:
        - "- *.swp"
        - "- /etc/fstab"
        - "- /boot/"
        - "- /tmp/"

---
# Storage Exporter Jobs

apiVersion: batch/v1
kind: Job
metadata:
  name: exporter-sample-suitecrm-app
  namespace: exporter
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
        anthos-migrate.gcr.io/action: export
        anthos-migrate.gcr.io/source-type: vlsdisk
        # source-pvc is the PVC of your existing disk
        anthos-migrate.gcr.io/source-pvc: csi-vlsdisk-suitecrm-app
        # target-pvc is created by running this job
        anthos-migrate.gcr.io/target-pvc: gce-pd-claim-suitecrm-app
        anthos-migrate.gcr.io/config: exporter-sample-config
    spec:
      restartPolicy: OnFailure
      containers:
      - name: exporter-sample
        image: anthos-migrate.gcr.io/v2k-export:v0.9.6

---

apiVersion: batch/v1
kind: Job
metadata:
  name: exporter-sample-suitecrm-db
  namespace: exporter
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
        anthos-migrate.gcr.io/action: export
        anthos-migrate.gcr.io/source-type: vlsdisk
        anthos-migrate.gcr.io/source-pvc: csi-vlsdisk-suitecrm-db
        anthos-migrate.gcr.io/target-pvc: gce-pd-claim-suitecrm-db
        anthos-migrate.gcr.io/config: exporter-sample-config
    spec:
      restartPolicy: OnFailure
      containers:
      - name: exporter-sample
        image: anthos-migrate.gcr.io/v2k-export:v0.9.6

---

After configuring your YAML, delete the StatefulSet objects that are holding onto the existing volumes. Don't worry, you will recreate them after the export completes.

kubectl delete statefulset suitecrm-app
kubectl delete statefulset suitecrm-db

Apply the YAML to run the export jobs.

kubectl apply -f 3-anthos-migrate-export-storage.yaml

GKE will run the jobs. To check on their status, use kubectl get jobs against the exporter namespace.

kubectl get jobs -n exporter
NAME                           DESIRED   SUCCESSFUL   AGE
exporter-sample-suitecrm-app   1         0            11s

Once each job is marked as SUCCESSFUL, you can reconfigure the application and database pods to run from the exported PVs.
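
Rather than polling, you can block until an export job completes; the timeout below is an arbitrary example:

kubectl wait --for=condition=complete --timeout=60m \
  -n exporter job/exporter-sample-suitecrm-app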

Running containers with exported storage

After this step, the migration is complete and the container is ready to run independently of the source VM and Migrate for Anthos components.

The YAML file below recreates the StatefulSet of the application and database to use the exported storage.

---

kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
  name: suitecrm-app
  namespace: default
spec:
  serviceName: "suitecrm-app-svc"
  replicas: 1
  selector:
    matchLabels:
      app: suitecrm-app
  template:
    metadata:
      labels:
        app: suitecrm-app
      annotations:
        anthos-migrate.gcr.io/action: run
        # Setting source-type to exported boots the container from a PVC without
        # other Migrate for Anthos components
        anthos-migrate.gcr.io/source-type: exported
        anthos-migrate.gcr.io/source-pvc: gce-pd-claim-suitecrm-app
    spec:
      containers:
      - name: suitecrm-app
        image: anthos-migrate.gcr.io/v2k-run:v0.9.6

---
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
  name: suitecrm-db
  namespace: default
spec:
  serviceName: "suitecrm-db-svc"
  replicas: 1
  selector:
    matchLabels:
      app: suitecrm-db
  template:
    metadata:
      labels:
        app: suitecrm-db
      annotations:
        anthos-migrate.gcr.io/action: run
        anthos-migrate.gcr.io/source-type: exported
        anthos-migrate.gcr.io/source-pvc: gce-pd-claim-suitecrm-db
    spec:
      containers:
      - name: suitecrm-db
        image: anthos-migrate.gcr.io/v2k-run:v0.9.6

Apply the YAML to launch your containers:

kubectl apply -f 4-anthos-migrate-update-storage.yaml
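
To confirm that the application responds after the cutover, you can request it through the ILB from a machine on your corporate network or VPC; -k skips certificate validation in case the application uses a self-signed certificate:

curl -k https://app.mydomain.local/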

Finally, delete the original PersistentVolumeClaims for the application and database, which used the Migrate for Anthos CSI driver.

kubectl delete pvc csi-vlsdisk-suitecrm-app
kubectl delete pvc csi-vlsdisk-suitecrm-db
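
To confirm that only the exported Persistent Disk claims remain, run:

kubectl get pvc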

You have successfully migrated an application to GKE using Migrate for Anthos.
