Migrating Ruby on Rails apps on Heroku to GKE

This tutorial describes how to migrate Ruby on Rails apps from Heroku dynos and Heroku Postgres to Google Kubernetes Engine (GKE) and Cloud SQL for PostgreSQL. It is primarily intended for app owners who want to move from Heroku's proprietary hosting service to Kubernetes, a portable, extensible open-source platform for deploying containerized apps. If you are looking for a fully managed app-hosting platform, consider using App Engine instead.

The following diagram shows an example architecture for running Ruby on Rails apps on GKE. Requests come to Cloud Load Balancer, which distributes them to Rails servers on GKE nodes in two zones. The apps use a Cloud SQL for PostgreSQL backend database, which is also split across two zones for high availability.

Architecture of a Ruby on Rails app on GKE and Cloud SQL

In this tutorial you migrate a sample Ruby on Rails app that uses a Heroku Postgres add-on database. This tutorial assumes that you are familiar with Ruby on Rails, PostgreSQL, and the fundamentals of containers.

Objectives

  • Create and scale a GKE cluster.
  • Create a Cloud SQL database with a private IP address.
  • Migrate data from Heroku Postgres to Cloud SQL.
  • Build a Docker image for a Ruby on Rails app.
  • Deploy the app to GKE.
  • Scale the app on GKE.

Costs

This tutorial uses the following billable components of Google Cloud Platform:

  • Google Kubernetes Engine
  • Compute Engine
  • Cloud SQL for PostgreSQL
  • Container Registry
  • Cloud Storage

You can use the pricing calculator to generate a cost estimate based on your projected usage. New GCP users might be eligible for a free trial.

For example, assuming you use a three-node GKE cluster and a two-node Cloud SQL cluster for 24 hours, your total cost is $7.82. For details, read this pricing estimate.

You might also be charged for your use of Heroku.

Before you begin

  1. Select or create a GCP project. Creating a new GCP project lets you clean up easily afterwards.

  2. Enable billing for your project.

  3. In the Google Cloud Platform Console, click Activate Cloud Shell.

    Cloud Shell gives you access to the command line in GCP, and includes Cloud SDK and other tools you need for GCP development. Cloud Shell can take several minutes to initialize. For more information, read about Cloud Shell features.

  4. In Cloud Shell, enable the GKE, Service Networking, and Cloud SQL Admin APIs needed to provision resources later.

    gcloud services enable container.googleapis.com \
        servicenetworking.googleapis.com \
        sqladmin.googleapis.com
    
  5. Set a default GCP region and zone. Replace [REGION_NAME] with a GCP region near you.

    export REGION=[REGION_NAME]
    gcloud config set compute/region $REGION
    gcloud config set compute/zone $REGION-b
    
  6. Create a Heroku account if you don't have one already.

When you finish this tutorial, you can avoid continued billing by deleting the resources you created. See Cleaning up for more detail.

Creating a GKE cluster

In this section, you build the GKE cluster to host your app.

Map Heroku dynos to Kubernetes nodes

In Heroku, dynos are tied to single apps. In Kubernetes, a cluster can run multiple apps, providing enough computing power for all of them.

Containers on Kubernetes run on worker machines known as nodes. The following table provides a guide for converting the computing power of Heroku dynos into GKE nodes.

Heroku dyno type     GKE machine type
Free or hobby        f1-micro
standard-1x          g1-small
standard-2x          n1-standard-1
performance-m        n1-standard-4
performance-l        n1-standard-8
For this tutorial, you use machine type g1-small to minimize cost. If you intend to use this cluster in production, change --machine-type in the following command to n1-standard-1 or larger.

  • In Cloud Shell, create a regional cluster with one node per zone.

    gcloud container clusters create ruby-cluster --region $REGION --num-nodes 1 \
        --machine-type=g1-small --enable-autorepair --enable-ip-alias
    

    This command creates a regional cluster of three nodes. This is a high-availability configuration, where each worker node is allocated in a different zone for resilience. The nodes have autorepair enabled, so they are automatically recreated in the event of failure. The cluster master that manages them is replicated in each zone as well.

    Creating the cluster might take a few minutes. When the process is complete, the cluster has a total of three nodes and the status in the output is RUNNING.

    NAME         LOCATION    MASTER_VERSION MASTER_IP   MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
    ruby-cluster us-central1 1.10.9-gke.5   35.189.7.39 g1-small     1.10.9-gke.5 3         RUNNING
    

Creating a Cloud SQL database

In the following section, you create a Cloud SQL for PostgreSQL database with a secure connection to your GKE cluster. Like Heroku Postgres, Cloud SQL is a fully managed service that automates all your backups, replication, patches, and updates.

Alternatively, you can choose to deploy PostgreSQL directly in your Kubernetes cluster, but you need to maintain the database yourself. Use this option if you need to use a PostgreSQL version not yet supported by Cloud SQL.

Set up a private IP address range

With Cloud SQL, you can set up a database that is accessible only by using a private IP address, so traffic isn't exposed to the public internet. This is similar to a Private or Shield Tier database in Heroku.

  1. In Cloud Shell, create a dedicated range of private IP addresses for your database. For this tutorial, you use the existing default network.

    gcloud compute addresses create ruby-db-range \
        --global \
        --purpose=VPC_PEERING \
        --prefix-length=16 \
        --description="Private access to database" \
        --network=default
    
  2. Enable private services access by creating a Virtual Private Cloud (VPC) peering between the private IP address range and Google services, such as Cloud SQL.

    gcloud beta services vpc-peerings connect --service=servicenetworking.googleapis.com \
        --ranges=ruby-db-range --network=default
    

Create and configure the database

  1. In Cloud Shell, create the database instance, using the same network you created the IP address range in.

    gcloud beta sql instances create ruby-pg --availability-type=REGIONAL --region=$REGION \
        --cpu=2 --memory=4 --storage-size=68 --database-version=POSTGRES_9_6 \
        --network=default --no-assign-ip
    

    The --availability-type=REGIONAL flag is used to create a highly available configuration with a primary and standby instance in different zones. This is similar to the Heroku Postgres High Availability and is recommended for production deployments.

    You can also choose the number of vCPUs and the amount of memory and storage in your instances with the --cpu, --memory, and --storage-size flags. The two vCPUs, 4 GB of RAM, and 68 GB of SSD storage suggested here are roughly equivalent to the Heroku Postgres premium-0 tier.

    The number of allowed concurrent connections depends on memory size. The configuration supports up to 100 concurrent connections. All disks are encrypted by default. Cloud SQL doesn't apply artificial row limits to any database tier.

  2. Create the production database.

    gcloud sql databases create ruby-getting-started_production \
        --instance=ruby-pg
    
  3. Create a user to access the database. Set the environment variable PGPASSWORD, which PostgreSQL client tools such as psql read for authentication. Replace [YOUR_PASSWORD] with a password of your choice.

    export DB_USER=ruby-getting-started
    export PGPASSWORD="[YOUR_PASSWORD]"
    gcloud sql users create $DB_USER --instance=ruby-pg --password=$PGPASSWORD
    
  4. Determine the private IP address of your new database and store this value as an environment variable. This private IP address starts with 10.

    export DB_HOST=$(gcloud sql instances describe ruby-pg \
        --format='value(ip_addresses.filter("type:PRIVATE").*extract(ip_address).flatten())')
    echo $DB_HOST
    

Your database is now ready, but because it's on a private network, it cannot be reached from Cloud Shell.

Create a bastion host

Create a virtual machine (VM) to act as a bastion host for access to the Cloud SQL database. This VM is also used later for building containers and administering both Heroku and GCP.

  1. In Cloud Shell, create a VM.

    gcloud compute instances create ruby-dev-vm --machine-type=g1-small \
        --scopes=cloud-platform --tags=http-server
    
  2. Persist environment variables from Cloud Shell to your new VM's profile.

    export | grep -E 'DB_HOST|DB_USER|PGPASSWORD|REGION' | \
        gcloud compute ssh ruby-dev-vm --command 'cat >>.profile' -- \
            -oStrictHostKeyChecking=no
    
  3. Use ssh to connect to the bastion host.

    gcloud compute ssh ruby-dev-vm
    

Run all commands in the rest of this tutorial in this VM.

Configure host utilities

  1. In the terminal window of your bastion host VM, install git, kubectl, and psql.

    sudo apt-get update && sudo apt-get install -y git kubectl postgresql-client
    
  2. Set the default GCP region on the VM. This is the same region that you used at the beginning of the tutorial.

    gcloud config set compute/region $REGION
    
  3. Configure gcloud to connect to your GKE cluster.

    gcloud container clusters get-credentials ruby-cluster --region $REGION
    

Test Cloud SQL connectivity

  1. In the terminal window of your bastion host VM, verify that you can connect to the database.

    psql -h $DB_HOST -U $DB_USER ruby-getting-started_production
    

    The output is similar to the following:

    psql (9.6.10, server 9.6.6)
    SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES128-GCM-SHA256, bits: 128, compression: off)
    Type "help" for help.
    ruby-getting-started_production=>
    
  2. To exit, type \q and press Enter.
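While connected with psql, you can also confirm the connection limit mentioned earlier. SHOW is a standard PostgreSQL command; the exact value reported depends on the instance's memory and database flags:

```sql
-- Run inside the psql session before quitting.
SHOW max_connections;
```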

Deploying a Heroku sample app

Next, you deploy a sample app on Heroku and migrate its database to Cloud SQL. You also package the app as a Docker container and test it locally.

Configure Heroku

  1. In the terminal window of your bastion host VM, install the Heroku CLI.

    curl https://cli-assets.heroku.com/install.sh | sh
    
  2. Log in to Heroku.

    heroku login --interactive
    
  3. Add the ssh keys of the VM to Heroku.

    heroku keys:add --yes
    

Deploy the sample app to Heroku

  1. In the terminal window of your bastion host VM, make a copy of Heroku's starter app.

    git clone https://github.com/heroku/ruby-getting-started
    cd ruby-getting-started
    
  2. Create a new Heroku app.

    heroku create
    
  3. Store the name of your Heroku app in an environment variable. Replace [YOUR_APP_NAME] with your app's name.

    export HEROKU_APP=[YOUR_APP_NAME]
    
  4. Create a Heroku Postgres 9.6 database add-on.

    heroku addons:create heroku-postgresql:hobby-dev --version=9.6 --app $HEROKU_APP
    
  5. Link the repository to Heroku, push the app, and populate its database.

    heroku git:remote -a $HEROKU_APP
    git push heroku master
    heroku run rake db:migrate
    

    Your app is now deployed on Heroku.

  6. Go to https://[YOUR_APP_NAME].herokuapp.com/widgets and click New Widget to create a new widget to migrate later.

Migrate data from Heroku Postgres to Cloud SQL

The preferred GCP approach for migrating data to Cloud SQL is to configure Cloud SQL as an external replica of the existing database. However, this isn't possible with Heroku Postgres because it doesn't support external replicas (followers) outside Heroku.

Instead, you can migrate data from Heroku Postgres to Cloud SQL with a one-time copy. This approach requires a period of downtime during the copying, when the Heroku database is read-only but the Cloud SQL database isn't ready to accept writes. By doing a practice run before the real migration, you can accurately estimate the amount of time needed.

  • In the terminal window of your bastion host VM, copy the database.

    heroku pg:pull DATABASE_URL \
        postgres://$DB_USER@$DB_HOST/ruby-getting-started_production \
        --app $HEROKU_APP
    

    Cloud SQL supports most popular PostgreSQL extensions. However, because the production user doesn't have rights to install new extensions, you might get errors similar to the following when importing:

    pg_restore: [archiver (db)] could not execute query: ERROR:  must be owner of extension plpgsql
    

The plpgsql extension is activated by default in Cloud SQL, so no action is required.

Building a Docker container for your app

This section shows you how to build the Docker container locally on your bastion host VM.

Define the build environment

Heroku is a platform as a service (PaaS) environment, so there are predefined buildpacks for each language, which are used to compile app slugs. In Kubernetes, building a container equates to compiling a slug, which you can then deploy on GKE. This means the Dockerfile used to build your container must set up a complete build environment for your app.

  1. In the terminal window of your bastion host VM, go to the top-level directory of your repository.

    cd ~/ruby-getting-started
    
  2. Check the Ruby version your app currently uses on Heroku, and make a note of it.

    heroku run ruby -v
    

    If you use a different version of Ruby, update the FROM line of the following Dockerfile sample with the Ruby version you currently have on Heroku. This Dockerfile starts with a standard Ruby 2.4 base image from Docker Hub. It then installs necessary system packages, installs the gems specified in your app's Gemfile, and precompiles its assets.

    cat <<EOF >Dockerfile
    FROM ruby:2.4
    
    RUN apt-get update \
        && apt-get install -y --no-install-recommends \
            nodejs postgresql-client \
        && rm -rf /var/lib/apt/lists/*
    
    # Copy application files and install the bundle
    WORKDIR /usr/src/app
    COPY Gemfile* ./
    RUN bundle install
    COPY . .
    
    # Run asset pipeline.
    RUN bundle exec rake assets:precompile
    
    EXPOSE 8080
    CMD ["bundle", "exec", "rackup", "--port=8080", "--env=production"]
    EOF
    
  3. Add a pointer to your database host to the Rails database configuration.

    echo "  host: <%= ENV['DB_HOST'] %>" >>config/database.yml
    
  4. Edit your production configuration to log to STDOUT so you can access the logs later.

    sed -i'' 's/# config.logger.*/config.logger = Logger.new(STDOUT)/' \
        config/environments/production.rb
    
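The sed expression above rewrites the commented-out logger line in place. As a quick sketch of its effect, here is the same substitution applied to a sample line (the sample is illustrative, not the exact line from production.rb):

```shell
# Apply the substitution to a sample commented-out logger line.
echo '  # config.logger = ActiveSupport::TaggedLogging.new(logger)' | \
    sed 's/# config.logger.*/config.logger = Logger.new(STDOUT)/'
# →   config.logger = Logger.new(STDOUT)
```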

Build and test your Docker container

  1. In the terminal window of your bastion host VM, install Docker CE for Debian.

  2. Add your user to the Docker group.

    sudo usermod -aG docker $USER
    
  3. Log out for the group change to take effect.

    exit
    
  4. Create a new ssh connection to log back in.

    gcloud compute ssh ruby-dev-vm
    
  5. Generate the Rails secret key base.

    export SECRET_KEY_BASE=$(tr -dc 'A-F0-9' < /dev/urandom | head -c128)
    

    This key is random, but might not be cryptographically secure. For production use, generate a key with rake secret in a Ruby development environment.

  6. Build the container image. This might take several minutes while all necessary packages are downloaded.

    cd ruby-getting-started
    docker build -t ruby-app .
    
  7. Run the container locally with Docker.

    docker run -p 80:8080 -e SECRET_KEY_BASE -e DB_HOST \
        -e "RUBY-GETTING-STARTED_DATABASE_PASSWORD=$PGPASSWORD" ruby-app
    

    Your app is now running in Docker on the VM. The -p flag tells Docker to publish port 8080 from the container externally as port 80.

  8. In the Cloud Shell toolbar, click Add Cloud Shell session to open another ssh session. Leave the first Cloud Shell session open, because it's needed later.

  9. In the new session, log into the bastion host.

    gcloud compute ssh ruby-dev-vm
    
  10. In the terminal window of your bastion host VM, run curl to test the app.

    curl localhost
    

    This command retrieves the homepage, with app logs printed in the Docker window.

  11. To view the homepage in a browser, add a temporary firewall rule permitting access.

    gcloud compute firewall-rules create allow-http --allow=tcp:80 --target-tags=http-server
    
  12. Determine the VM's external IP address.

    gcloud compute instances list --filter="name:ruby-dev-vm" --format="value(EXTERNAL_IP)"
    
  13. Go to http://[EXTERNAL_IP]/widgets to test the database integration. Replace [EXTERNAL_IP] with the IP address you retrieved in the preceding step.

  14. When you finish testing, delete the firewall rule.

    gcloud compute firewall-rules delete allow-http --quiet
    

Debug a running container

If your app doesn't load properly or returns errors, inspect the running container for errors.

  1. Determine the container's ID.

    docker ps
    

    This ID is an alphanumeric string, such as 271f580cec12.

  2. Check the container's logs. Replace [CONTAINER_ID] with the ID from the previous command.

    docker logs [CONTAINER_ID]
    
  3. Use docker exec to run commands in the container. For example, to open a Bash shell in the container so you can inspect its contents and verify that it is built correctly:

    docker exec -it [CONTAINER_ID] /bin/bash
    
  4. To exit the container shell and return to the VM, type exit.

If you need to make any changes to the app's configuration, rebuild your image and start a new container with the previous instructions, starting from Build the container image.

Push to Container Registry

When the container image can run the app, it's time to push it from your VM to Container Registry.

  1. Close the second tab in Cloud Shell to return to your original session.
  2. Press CTRL+C to terminate the local Docker process.

  3. Configure Docker to use gcloud authorization.

    gcloud auth configure-docker --quiet
    
  4. Add the registry name as a tag to the container image.

    PROJECT=$(gcloud config get-value project)
    docker tag ruby-app:latest gcr.io/$PROJECT/ruby-app
    
  5. Push the container image to Container Registry.

    docker push gcr.io/$PROJECT/ruby-app
    

Deploying on GKE

In this section, you deploy your Rails app on GKE.

Configure secrets

Kubernetes has a secret object for storing sensitive information, such as passwords. This lets you check configuration files into source control without exposing sensitive information.
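Note that a secret's values are stored base64-encoded, which is an encoding rather than encryption; anyone with read access to the secret can decode them, so don't commit secret manifests themselves to source control. For example:

```shell
# base64 is a reversible encoding, not encryption.
printf 'ruby-getting-started' | base64
# → cnVieS1nZXR0aW5nLXN0YXJ0ZWQ=
printf 'cnVieS1nZXR0aW5nLXN0YXJ0ZWQ=' | base64 -d
# → ruby-getting-started
```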

  • In the terminal window of the bastion VM, create a secret called ruby-credentials to store the database username, password, and the previously generated Rails secret key base:

    kubectl create secret generic ruby-credentials \
        --from-literal user=ruby-getting-started \
        --from-literal password="$PGPASSWORD" \
        --from-literal secret_key_base="$SECRET_KEY_BASE"
    

Deploy pods

You are now ready to deploy your containerized app on GKE.

  1. Create the deployment.yaml file to instruct Kubernetes to create 3 pods (replicas) of the latest ruby-app container image available in Container Registry. Each pod also has environment variables either populated directly or from the ruby-credentials secret you created in the previous section. Each pod is labeled as ruby-app and available on port 8080.

    cat <<EOF >deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ruby-deployment
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: ruby-app
      template:
        metadata:
          labels:
            app: ruby-app
        spec:
          containers:
          - name: ruby-app
            image: gcr.io/$PROJECT/ruby-app:latest
            ports:
            - containerPort: 8080
            livenessProbe:
              httpGet:
                path: /
                port: 8080
              initialDelaySeconds: 30
              timeoutSeconds: 1
            readinessProbe:
              httpGet:
                path: /
                port: 8080
              initialDelaySeconds: 30
              timeoutSeconds: 1
            env:
              - name: DB_HOST
                value: $DB_HOST
              - name: DB_USER
                valueFrom:
                  secretKeyRef:
                    name: ruby-credentials
                    key: user
              - name: RUBY-GETTING-STARTED_DATABASE_PASSWORD
                valueFrom:
                  secretKeyRef:
                    name: ruby-credentials
                    key: password
              - name: SECRET_KEY_BASE
                valueFrom:
                  secretKeyRef:
                    name: ruby-credentials
                    key: secret_key_base
    EOF
    
  2. Apply the configuration.

    kubectl apply -f deployment.yaml
    
  3. Confirm that the pods are running.

    kubectl get pods
    

    It might take a few minutes until they are marked as Running in the output.

    NAME                               READY   STATUS    RESTARTS   AGE
    ruby-deployment-7fdb99cfd6-bbcxp   1/1     Running   0          39s
    ruby-deployment-7fdb99cfd6-pm2x9   1/1     Running   0          42s
    ruby-deployment-7fdb99cfd6-hdbbv   1/1     Running   0          42s
    

    Your app is now running in GKE, but it isn't accessible externally yet.

  4. To test the pod's readiness and liveness checks, end the Rails process on one pod. Replace [POD_NAME] with the name of a pod from the previous command.

    kubectl exec [POD_NAME] -- kill 1 && kubectl get pods -w
    

    In the output, check that the pod enters an error state, restarts, and becomes ready again:

    NAME                               READY STATUS    RESTARTS AGE
    ruby-deployment-7fdb99cfd6-bbcxp   1/1   Running   0         7m
    ruby-deployment-7fdb99cfd6-pm2x9   1/1   Running   0         7m
    ruby-deployment-7fdb99cfd6-hdbbv   1/1   Running   0         7m
    ruby-deployment-7fdb99cfd6-hdbbv   0/1   Error     0         7m
    ruby-deployment-7fdb99cfd6-hdbbv   0/1   Running   1         7m
    ruby-deployment-7fdb99cfd6-hdbbv   1/1   Running   1         7m
    

    To stop watching, press CTRL+C.

    For more information, see Readiness and liveness probes.

Create a load balancer

The final step is to create a Service to direct requests from the internet to your app.

  • In the terminal window of your bastion host VM, create the service configuration file service.yaml, which defines a regional TCP network load balancer that listens on port 80 and distributes requests to port 8080 on pods matching the selector ruby-app.

    cat <<EOF >service.yaml
    kind: Service
    apiVersion: v1
    metadata:
      name: ruby-service
    spec:
      selector:
        app: ruby-app
      type: LoadBalancer
      ports:
      - name: http
        protocol: TCP
        port: 80
        targetPort: 8080
    EOF
    

In a production environment, if you need HTTPS or content-based routing (for example, routing by URL), use the HTTP(S) load balancer with a GKE Ingress instead.

  1. Apply the configuration.

    kubectl apply -f service.yaml
    
  2. Confirm that the service is running.

    kubectl get services -w
    

    It can take a few minutes to create the load balancer. The service is created when the EXTERNAL-IP address is assigned.

    NAME           TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)        AGE
    kubernetes     ClusterIP      10.0.0.1     none             443/TCP        9d
    ruby-service   LoadBalancer   10.0.2.255   35.197.189.158   80:30189/TCP   2d
    

    To stop watching, press CTRL+C.

  3. Go to the app and make sure you can edit widgets. Replace [EXTERNAL_IP] with the EXTERNAL-IP address of ruby-service from the previous step.

    curl http://[EXTERNAL_IP]/widgets
    
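As noted earlier, HTTPS or URL-based routing calls for the HTTP(S) load balancer with an Ingress instead of a TCP Service. A minimal Ingress sketch follows; it assumes the ruby-service defined above and the extensions/v1beta1 API current for this tutorial's GKE version, with TLS and annotations omitted:

```yaml
# Minimal Ingress sketch that routes all traffic to ruby-service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ruby-ingress
spec:
  backend:
    serviceName: ruby-service
    servicePort: 80
```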

Scaling your GKE cluster

This section covers scaling your app on GKE and how this differs from Heroku. The following diagram shows two apps, A and B, running in both Heroku and GKE.

Scaling with Heroku compared to GKE

In Heroku, you can add more capacity by either upgrading to larger dynos (vertical scaling) or running more dynos (horizontal scaling). In either case, the additional capacity is tied to a single app, so if app A deploys a third dyno, that dyno cannot be used by app B.

GKE provides an efficient scaling model by letting you deploy pods for more than one app (deployment) onto each underlying node. In this example, both app A and app B have pods running in node 1 and node 2, and app A can make use of spare capacity in node 1 by running multiple pods.

This means there are two ways to scale a GKE app: you can add more capacity by adding nodes, or you can make better use of existing capacity by adding pods.

Add more nodes

The first way to scale a GKE app is to add more nodes to increase capacity.

  1. In the terminal window of your bastion host VM, check how much free capacity your nodes currently have to determine whether more nodes are necessary.

    kubectl top nodes
    

    In the following output, CPU usage is low, but the first node is using 81% of its memory. If either of these values approaches 100%, consider adding more nodes.

    NAME                                          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
    gke-ruby-cluster-default-pool-fcdca5d1-9ljq   55m          5%     2143Mi          81%
    gke-ruby-cluster-default-pool-fcdca5d1-n7jc   36m          3%     1045Mi          39%
    gke-ruby-cluster-default-pool-fcdca5d1-zlq8   42m          4%     784Mi           29%
    
  2. To add more nodes, resize the cluster, which lets Kubernetes schedule new pods on new machines.

    gcloud container clusters resize ruby-cluster --size=2 --region=$REGION --quiet
    

    It might take a few minutes to create the new nodes. Resizing to 2 adds a new node in each zone, so this doubles capacity from 3 (1 node per zone) to 6 (2 nodes per zone).

  3. Confirm that all of the nodes have the status Ready.

    kubectl get nodes
    

Add more pods

The second way to scale a GKE app is to add new pods.

  1. Determine how pods are distributed across nodes.

    kubectl get pods -o wide
    

    The output indicates that only three nodes are used to run the app.

  2. To take advantage of the new nodes, add three more pods.

    kubectl scale deployment.v1.apps/ruby-deployment --replicas=6
    

    This command doesn't add more resources to the cluster, so the new pods compete with pods already running in the pool. By default, the new pods are automatically allocated to the least-loaded nodes.

  3. Check where your new pods are placed.

    kubectl get pods -o wide
    

    In the NODE column, the pods are evenly distributed across all 6 nodes, including those you created in the previous section.

In addition to adding pods or nodes, you can set resource requests and limits to control how many resources your pods are allowed to consume. Alternatively, you can group your pods in dedicated node pools, only used for a defined app.
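Resource requests and limits are set per container in the deployment manifest. The fragment below is an illustrative sketch; the CPU and memory figures are example values, not tuned recommendations:

```yaml
# Illustrative fragment of a container spec in deployment.yaml.
containers:
- name: ruby-app
  image: gcr.io/$PROJECT/ruby-app:latest
  resources:
    requests:
      cpu: 100m       # amount the scheduler reserves for each pod
      memory: 256Mi
    limits:
      cpu: 500m       # ceiling enforced at runtime
      memory: 512Mi
```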

Finally, you can automate both types of scaling in response to metrics, such as CPU load. A full treatment of this approach is beyond the scope of this tutorial, but Kubernetes lets you autoscale pods with the Horizontal Pod Autoscaler, while GKE lets you autoscale nodes with cluster autoscaling.
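For example, a Horizontal Pod Autoscaler targeting this tutorial's deployment might look like the following sketch, where the replica bounds and CPU target are example values:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ruby-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ruby-deployment
  minReplicas: 3       # example lower bound
  maxReplicas: 10      # example upper bound
  targetCPUUtilizationPercentage: 50
```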

Migrating production apps

Here are additional considerations and best practices for migrating production apps.

Minimize downtime during database migration

Using pg:pull to migrate a database requires app downtime. If the downtime is too long, there are two alternatives that can reduce or eliminate downtime:

  • Incremental PostgreSQL backups with Alooma, a company specializing in data migration.
  • Building a temporary proxy in Heroku to replicate writes to Cloud SQL. To ensure consistency, best practice is to write all transactions to a message queue, then have a separate reader job write to both databases and remove the message only when both writes have succeeded.

Both options require changes to the app's code and are beyond the scope of this tutorial.

Migrate across database versions

You can use pg:pull only if your Heroku and Cloud SQL PostgreSQL versions are the same. If you need to migrate across versions, an alternative is to export and import a plain-text dump.

  • In the terminal window of your bastion VM, dump the contents of the database and then import it.

    heroku run 'pg_dump --format=plain --no-owner --no-acl $DATABASE_URL' | \
        psql -h $DB_HOST -U ruby-getting-started -d ruby-getting-started_production
    

However, exporting and importing a plain-text dump is slower and more error-prone than using pg:pull, because version differences can make parts of the dump incompatible.

Install PostgreSQL extensions

If your app uses PostgreSQL extensions, log in with psql as the postgres superuser and activate the extensions with CREATE EXTENSION before importing. During the migration, you might still receive extension-related warnings, because the text dump tries to recreate the extensions.
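For example, connected as the postgres user, you would activate each extension your app needs before importing. The hstore extension below is only a placeholder for whatever extensions your app actually uses:

```sql
-- Run as the postgres user before importing;
-- replace hstore with the extensions your app requires.
CREATE EXTENSION IF NOT EXISTS hstore;
```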

  • In the terminal window of your bastion host VM, if you are doing a text-format migration, comment out the extension-related SQL statements during the import.

    heroku run 'pg_dump --format=plain --no-owner --no-acl $DATABASE_URL' | \
        sed -E 's/(DROP|CREATE|COMMENT ON) EXTENSION/-- \1 EXTENSION/g' | \
        psql -h $DB_HOST -U ruby-getting-started -d ruby-getting-started_production
    

Automate building and deployment

In this tutorial, the app container is built and deployed by hand, but this is cumbersome and error-prone for a frequently released app. Consider setting up a continuous integration and deployment (CI/CD) pipeline to automate and manage your build. You can test and deploy by using tools like Cloud Build or Jenkins on GKE.
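As a sketch, a minimal Cloud Build configuration could rebuild and push the image on each commit. The cloudbuild.yaml below is hypothetical and is not part of the sample repository:

```yaml
# Hypothetical cloudbuild.yaml: build the image and push it to
# Container Registry ($PROJECT_ID is substituted by Cloud Build).
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/ruby-app', '.']
images:
- 'gcr.io/$PROJECT_ID/ruby-app'
```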

Deploy stateful apps

Stateless apps in Kubernetes, such as the one used in this tutorial, are usually deployed as Deployments, which define a configuration for your app, including restarting pods as necessary. If your app is stateful, meaning the containers need local resources, consider using a StatefulSet instead.

Debug your app on GKE

If you run into any problems while deploying your app, the GKE troubleshooting guide is a good place to start.

The following tips can help you quickly locate errors and identify their root causes.

  • To examine the current state and recent events of Kubernetes resources such as pods, use kubectl to inspect the resource in the terminal window of your bastion host VM. For example, to inspect a pod, first get its pod ID, then describe the pod. Replace [POD_ID] with the pod ID.

    kubectl get pods
    kubectl describe pod [POD_ID]
    
  • The same Docker debugging commands you used earlier also work with kubectl, so you can quickly fetch logs and interactively explore a running container. For example, these commands show logs and open a shell on pod [POD_ID].

    kubectl logs [POD_ID]
    kubectl exec -it [POD_ID] bash
    
  • When your app displays an error, it's often not clear which pod it originates from. Instead of inspecting pods one by one, open the GCP console and search all pod logs with Stackdriver Logging.

    1. In the GCP Console, go to the Kubernetes Engine page.

    2. Click Workloads.

    3. Click ruby-deployment.

    4. Click Container logs.

Cleaning up

To avoid incurring charges to your GCP and Heroku accounts for the resources used in this tutorial:

Delete Heroku sample app

To delete the app:

  1. In the terminal window of your bastion host VM, delete the Heroku app if you no longer need it.

    heroku apps:destroy --app $HEROKU_APP --confirm $HEROKU_APP
    
  2. You can now exit the ssh session to ruby-dev-vm.

    exit
    

Delete the project

The easiest way to clean up resources you created for this tutorial is to delete the GCP project.

To delete the project:

  1. In the GCP Console, go to the Projects page.

  2. In the project list, select the project you want to delete and click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

Delete individual resources

Instead of deleting the entire project, you can remove the individual resources used in this tutorial:

  • GKE clusters

    gcloud container clusters delete ruby-cluster \
        --region $REGION --quiet
    
  • Cloud SQL instances

    gcloud sql instances delete ruby-pg --quiet
    
  • Compute Engine IP address ranges

    gcloud compute addresses delete ruby-db-range --global --quiet
    
  • Compute Engine instances

    gcloud compute instances delete ruby-dev-vm --quiet
    
  • Images in Container Registry

    gcloud container images delete gcr.io/${GOOGLE_CLOUD_PROJECT}/ruby-app:latest --quiet
    
  • Images in Cloud Storage

    gsutil -m rm -r gs://artifacts.${GOOGLE_CLOUD_PROJECT}.appspot.com
    
