Migrate from Heroku Enterprise to Cloud Run while keeping devs and ops happy
Felipe Ryan
Customer Engineer, App Ecosystem
Modern developers worldwide have grown accustomed to the comfort of writing code, pushing to a remote Git repository, and having that code deployed at an accessible URL without having to worry about how the deployment happens. This workflow was popularized by Heroku years ago and brought joy and productivity to developers, even if it imposed some loss of flexibility on operations teams.
To address that loss of flexibility when meeting security and integration requirements, Heroku introduced Private Spaces. By default, any application or datastore provisioned by Heroku is accessible from the internet; Private Spaces add network isolation.
Cloud Run is quickly becoming the “Swiss Army knife” of serverless here at Google Cloud, and it’s a natural migration path for developers accustomed to Heroku. The fundamentals are all there:
Continuously deploy via Git push using open source Buildpacks or Dockerfiles
Set CPU and Memory requirements for each instance
Horizontally scalable apps that scale from zero to thousands of instances to meet traffic demands automatically (a minimal Cloud Run-ready service is sketched right after this list)
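To make the container contract concrete, here is a minimal sketch of a Cloud Run-ready service in Go. The only requirement it illustrates is listening on the port that Cloud Run injects through the PORT environment variable; the handler and greeting are placeholders.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// Cloud Run tells the container which port to listen on via PORT.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080" // sensible default for local development
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from Cloud Run")
	})

	log.Printf("listening on :%s", port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```

Whether this is built with a Dockerfile or detected by Buildpacks, the resulting image deploys to Cloud Run the same way.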
So while devs are kept happy, can Cloud Run do something for the Ops folks? Yes. Here are some things available right in the Cloud Run UI:
Proper secret management with IAM-based access control. No more setting secrets as plain environment variables (a minimal sketch follows this list).
Traffic management between different revisions for blue-green or canary deployments.
Define SLIs and SLOs with ease, e.g., 90% of requests must be served in under 200 ms in a calendar month.
Secure your service with tools such as Software Delivery Shield, Binary Authorization, and Cloud Armor. This one definitely deserves its own blog post.
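To illustrate the secret management point above, here is a minimal sketch of reading a secret at runtime with the Secret Manager Go client. The project and secret names (my-project, db-password) are placeholders, and the service’s service account is assumed to have the Secret Manager Secret Accessor role; mounting the secret as a volume or environment variable from the Cloud Run UI works just as well.

```go
package main

import (
	"context"
	"fmt"
	"log"

	secretmanager "cloud.google.com/go/secretmanager/apiv1"
	"cloud.google.com/go/secretmanager/apiv1/secretmanagerpb"
)

func main() {
	ctx := context.Background()

	// The client authenticates as the service's own service account,
	// so who can read which secret is governed entirely by IAM.
	client, err := secretmanager.NewClient(ctx)
	if err != nil {
		log.Fatalf("secretmanager.NewClient: %v", err)
	}
	defer client.Close()

	// "my-project" and "db-password" are placeholder names.
	resp, err := client.AccessSecretVersion(ctx, &secretmanagerpb.AccessSecretVersionRequest{
		Name: "projects/my-project/secrets/db-password/versions/latest",
	})
	if err != nil {
		log.Fatalf("AccessSecretVersion: %v", err)
	}

	fmt.Printf("fetched a secret payload of %d bytes\n", len(resp.Payload.Data))
}
```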
Recreating Private Spaces on Cloud Run
Now let’s focus on network isolation. Say you have an internet-facing app and a private backend API that talks to a private database. It’s about the simplest architecture there is: a public frontend, a private backend API behind it, and a private database behind that.
Let’s address the database first. If you want to use Postgres, then Cloud SQL is most likely what you want, but keep in mind that we have other datastores that speak Postgres, such as AlloyDB and Spanner.
Cloud SQL allows you to create a Postgres instance that’s isolated from the internet by simply unchecking the Public IP checkbox and checking the Private IP checkbox. This will assign an IP address to your Postgres instance on your project’s network.
Once the DB is provisioned, you’ll see the private IP address clearly listed in the instance overview.
Of course, there’s much more to say about Cloud SQL; to learn more, please take a look at our documentation.
OK, now that you’ve dealt with Postgres, let’s address the private backend API on Cloud Run.
When creating a new Cloud Run service via the Google Cloud Console, Ingress can be limited to “Internal traffic only” so that only traffic from internal sources, including your VPC, can access the service. In other words, the internet cannot touch it.
As an additional level of security, it’s also possible to enforce that only requests from authorized users are served. In this case, a “user” is most likely another service using its associated service account, which will need the roles/run.invoker role in order to call this service.
Now let’s make sure that our backend API service can reach the Postgres instance by configuring a VPC Connector. This allows Cloud Run services to reach into the VPC and, therefore, the internal IP of the Postgres instance.
Once the VPC Connector is created, you can associate it with a Cloud Run service.
Then it’s just a matter of configuring your code to use the Postgres instance’s private IP address. A good, 12-Factor-friendly spot to do that is a connection string in an environment variable as part of the Cloud Run service configuration. Better yet, since this string likely contains a DB password, you can use Secret Manager to populate the environment variable from an encrypted and protected secret.
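As a rough sketch, assuming the connection string is exposed to the backend as a DATABASE_URL environment variable (the variable name and the lib/pq driver are choices made for this example, not requirements), the service could connect like this:

```go
package main

import (
	"database/sql"
	"log"
	"os"

	_ "github.com/lib/pq" // Postgres driver for database/sql
)

func main() {
	// DATABASE_URL is set in the Cloud Run service configuration,
	// ideally backed by a Secret Manager secret, e.g.
	// postgres://app:password@10.20.30.40:5432/appdb
	dsn := os.Getenv("DATABASE_URL")
	if dsn == "" {
		log.Fatal("DATABASE_URL is not set")
	}

	db, err := sql.Open("postgres", dsn)
	if err != nil {
		log.Fatalf("sql.Open: %v", err)
	}
	defer db.Close()

	// Ping opens a real connection, which travels through the
	// VPC Connector to the instance's private IP.
	if err := db.Ping(); err != nil {
		log.Fatalf("db.Ping: %v", err)
	}
	log.Println("connected to Postgres over its private IP")
}
```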
Finally, let’s set up the frontend Cloud Run service, which will respond to requests from the internet and securely communicate with the backend API service.
For the frontend service, choose “Allow all traffic” and “Allow unauthenticated invocations” so anyone on the web can access its URL. We could, of course, choose the middle ingress option and use Cloud Load Balancing in conjunction with Cloud Armor, which provides defenses against DDoS and application attacks and offers a rich set of WAF rules. However, let’s keep it simple for now.
Keep in mind that our backend service will only accept requests from within our VPC network, and that Cloud Run services don’t have a private IP address of their own.
So let’s ensure that all egress traffic from our frontend gets routed through the VPC Connector. This way, when our frontend calls the backend API via its URL, the backend receives the request from within the VPC and lets it in.
PS: If your backend requires authentication, don’t forget to create a service account for your frontend service and give it the roles/run.invoker role, following the service-to-service authentication pattern (sketched below).
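As a sketch of that pattern, the frontend can attach a Google-signed ID token to each request using the google.golang.org/api/idtoken package. BACKEND_URL is a placeholder environment variable holding the backend service’s URL, which also acts as the token audience; the /api/items path is hypothetical.

```go
package main

import (
	"context"
	"io"
	"log"
	"os"

	"google.golang.org/api/idtoken"
)

func main() {
	ctx := context.Background()

	// BACKEND_URL is a placeholder for the backend Cloud Run service's URL,
	// e.g. https://backend-api-xxxxxxxx-uc.a.run.app
	backendURL := os.Getenv("BACKEND_URL")

	// idtoken.NewClient returns an *http.Client that automatically attaches
	// an ID token minted for the frontend's service account, with the
	// backend URL as the audience.
	client, err := idtoken.NewClient(ctx, backendURL)
	if err != nil {
		log.Fatalf("idtoken.NewClient: %v", err)
	}

	resp, err := client.Get(backendURL + "/api/items") // hypothetical endpoint
	if err != nil {
		log.Fatalf("calling backend: %v", err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	log.Printf("backend responded %d: %s", resp.StatusCode, body)
}
```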
And that’s it. You now have an operationally acceptable, Private Spaces-like environment with an app composed of two Cloud Run services, where the backend service and Postgres instance are network-isolated from the internet. If, after reading this blog, you would like to get hands-on experience with the technologies mentioned above, take a look at Google Cloud Skills Boost. There you will find learning paths, quests, and labs curated to boost your cloud skills in a particular area.
For example, here’s a great lab that takes you through developing a REST API on Cloud Run using Go.