Multi-environment service orchestrations
Mete Atamel
Cloud Developer Advocate
In a previous post, I showed how to use a GitOps approach to manage the deployment lifecycle of a service orchestration. This approach makes it easy to deploy changes to a workflow in a staging environment, run tests against it, and gradually roll out these changes to the production environment.
While GitOps helps manage the deployment lifecycle, it's not enough on its own. Sometimes you need to change the workflow itself before deploying it to different environments, so you need to design workflows with multiple environments in mind.
For example, instead of hardcoding the URLs the workflow calls, you should inject staging or production URLs depending on where the workflow is being deployed.
Let’s explore three different ways of replacing URLs in a workflow.
Option 1: Pass URLs as runtime arguments
In the first option, you define URLs as runtime arguments and use them whenever you need to call a service:
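Here's a minimal sketch of what workflow1.yaml could look like; the url1 and url2 argument names and the two http.get calls are just illustrative:

```yaml
# Minimal sketch of workflow1.yaml: URLs arrive as runtime arguments.
main:
  params: [args]
  steps:
    - call_first_service:
        call: http.get
        args:
          url: ${args.url1}  # staging or prod URL, supplied at execution time
        result: first_result
    - call_second_service:
        call: http.get
        args:
          url: ${args.url2}
        result: second_result
    - return_results:
        return:
          first: ${first_result.body}
          second: ${second_result.body}
```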
You can deploy workflow1.yaml as an example:
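(The workflow name multi-env1 and the us-central1 location below are assumptions; adjust them to your setup.)

```bash
gcloud workflows deploy multi-env1 \
  --source=workflow1.yaml \
  --location=us-central1
```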
Run the workflow in the staging environment with staging URLs:
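(The staging URLs here are placeholders; substitute your own staging endpoints.)

```bash
gcloud workflows run multi-env1 --location=us-central1 \
  --data='{"url1": "https://staging.example.com/service1", "url2": "https://staging.example.com/service2"}'
```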
And run the workflow in the prod environment with prod URLs:
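(Again, the prod URLs are placeholders.)

```bash
gcloud workflows run multi-env1 --location=us-central1 \
  --data='{"url1": "https://prod.example.com/service1", "url2": "https://prod.example.com/service2"}'
```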
Note: You can also pass these runtime arguments when triggering the workflow through the API, client libraries, or scheduled triggers, but not when triggering with Eventarc.
Option 2: Use Cloud Build to deploy multiple versions
In the second option, you use Cloud Build to deploy multiple versions of the workflow with the appropriate staging and prod URLs replaced at deployment time.
Run setup.sh to enable required services and grant necessary roles.
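As a rough sketch, setup.sh typically runs commands along these lines; the exact services and roles in the repository's script may differ:

```bash
# Enable the services used in this option (assumed list).
gcloud services enable workflows.googleapis.com cloudbuild.googleapis.com

# Let Cloud Build's service account deploy workflows.
PROJECT_ID=$(gcloud config get-value project)
PROJECT_NUMBER=$(gcloud projects describe "$PROJECT_ID" --format='value(projectNumber)')
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
  --role="roles/workflows.admin"
```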
Define a YAML (see workflow2.yaml for an example) that has placeholder values for URLs:
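Here's a sketch of workflow2.yaml; the URL1_PLACEHOLDER and URL2_PLACEHOLDER tokens are assumptions and only need to match whatever your build step replaces:

```yaml
# Minimal sketch of workflow2.yaml: placeholder URLs are replaced at deploy time.
main:
  steps:
    - call_first_service:
        call: http.get
        args:
          url: URL1_PLACEHOLDER
        result: first_result
    - call_second_service:
        call: http.get
        args:
          url: URL2_PLACEHOLDER
        result: second_result
    - return_results:
        return:
          first: ${first_result.body}
          second: ${second_result.body}
```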
Define cloudbuild.yaml with a step to replace the placeholder URLs and a deployment step:
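Here's a sketch of cloudbuild.yaml; the substitution variables (_ENV, _URL1, _URL2), the placeholder tokens, and the workflow name are assumptions:

```yaml
steps:
  # Replace the placeholder URLs with the values passed in as substitutions.
  - id: 'replace-urls'
    name: 'ubuntu'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        sed -i "s|URL1_PLACEHOLDER|${_URL1}|g; s|URL2_PLACEHOLDER|${_URL2}|g" workflow2.yaml
  # Deploy an environment-specific copy of the workflow.
  - id: 'deploy-workflow'
    name: 'gcr.io/cloud-builders/gcloud'
    args: ['workflows', 'deploy', 'multi-env2-${_ENV}', '--source', 'workflow2.yaml', '--location', 'us-central1']
```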
Deploy the workflow in the staging environment with staging URLs:
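(The substitution values below are placeholders for your staging endpoints.)

```bash
gcloud builds submit . \
  --config=cloudbuild.yaml \
  --substitutions=_ENV=staging,_URL1=https://staging.example.com/service1,_URL2=https://staging.example.com/service2
```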
Deploy the workflow in the prod environment with prod URLs:
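(Same command, with placeholder prod endpoints this time.)

```bash
gcloud builds submit . \
  --config=cloudbuild.yaml \
  --substitutions=_ENV=prod,_URL1=https://prod.example.com/service1,_URL2=https://prod.example.com/service2
```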
Now, you have two workflows ready to run in staging and prod environments:
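You can verify the two deployments (multi-env2-staging and multi-env2-prod, given the names assumed above) with:

```bash
gcloud workflows list
```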
Option 3: Use Terraform to deploy multiple versions
In the third option, you use Terraform to deploy multiple versions of the workflow with the appropriate staging and prod URLs replaced at deployment time.
Define a YAML (see workflow3.yaml for an example) that has placeholder values for URLs:
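The idea is the same as in Option 2: use a placeholder token wherever the workflow calls a service and let Terraform substitute the real value. The URL1_PLACEHOLDER token below is an assumption:

```yaml
# Minimal sketch of workflow3.yaml with a placeholder URL.
main:
  steps:
    - call_service:
        call: http.get
        args:
          url: URL1_PLACEHOLDER
        result: service_result
    - return_result:
        return: ${service_result.body}
```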
Define main.tf that creates staging and prod workflows:
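Here's a sketch of main.tf; the resource names, workflow names, region, and example.com URLs are assumptions, the placeholder token must match the one in workflow3.yaml, and the Google provider configuration (project) is assumed to be set up elsewhere:

```hcl
# Minimal sketch: two environment-specific copies of the same workflow definition,
# with the placeholder URL replaced per environment at deploy time.
resource "google_workflows_workflow" "staging" {
  name   = "multi-env3-staging"
  region = "us-central1"
  source_contents = replace(file("workflow3.yaml"),
    "URL1_PLACEHOLDER", "https://staging.example.com/service1")
}

resource "google_workflows_workflow" "prod" {
  name   = "multi-env3-prod"
  region = "us-central1"
  source_contents = replace(file("workflow3.yaml"),
    "URL1_PLACEHOLDER", "https://prod.example.com/service1")
}
```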
Initialize Terraform:
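From the directory containing main.tf:

```bash
terraform init
```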
Check the planned changes:
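This should show the two google_workflows_workflow resources to be created:

```bash
terraform plan
```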
Deploy the workflow in the staging environment with staging URLs and the prod environment with prod URLs:
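A single apply creates both environment-specific workflows:

```bash
terraform apply
```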
Now, you have two workflows ready to run in staging and prod environments:
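You can confirm from Terraform's state; with the resource names assumed in the main.tf sketch above, the output would look like this:

```bash
terraform state list
# google_workflows_workflow.prod
# google_workflows_workflow.staging
```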
Pros and cons
At this point, you might be wondering which option is best.
Option 1 has a simpler setup (a single workflow deployment) but a more complicated execution, as you need to pass in URLs for every execution. If you have a lot of URLs, executions can get verbose, with a runtime argument for each URL. Also, you can't tell which URLs a workflow will call until you actually execute it.
Option 2 has a more complicated setup, with multiple workflow deployments driven by Cloud Build. However, each deployed workflow contains the URLs it calls, which makes execution and debugging simpler.
Option 3 is pretty much the same as Option 2 but for Terraform users. If you’re already using Terraform, it probably makes sense to also rely on Terraform to replace URLs for different environments.
This post provided examples of how to implement multi-environment workflows. If you have questions or feedback, feel free to reach out to me on Twitter @meteatamel.