Introducing Parallel Steps for Workflows: Speed up workflow executions by running steps concurrently
Megan Bruce
Outbound Product Manager, Google Cloud
We’re excited to launch a new feature for Workflows, a serverless orchestrator for developers that connects multiple Google Cloud and external services. Parallel Steps—now in Preview—enables developers to run multiple concurrent steps, which can help reduce the time it takes to execute a workflow, particularly one that includes long-running operations like HTTP requests and callbacks.
To create a workflow, developers define a series of steps and order of execution. Each step performs an operation, like assigning variables, returning a value, or calling an HTTP endpoint.
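For context, a minimal workflow definition looks something like the sketch below (the step names and endpoint URL are placeholders, not a real service):

```yaml
# A minimal workflow sketch: assign a variable, call an HTTP endpoint, and return a value.
# The URL is a placeholder; replace it with your own service endpoint.
main:
  steps:
    - init:
        assign:
          - project: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
    - callService:
        call: http.get
        args:
          url: https://example.com/hello
        result: response
    - returnResult:
        return:
          project: ${project}
          message: ${response.body}
```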
By default, Workflows executes steps in sequential order, one at a time. Running steps in serial order like this can prove inefficient for long-running operations that take minutes, hours, or days because they include, for example, long-running calls, callbacks, polling, or waiting for human approval. (Workflows’ execution duration limit is one year.)
To address this inefficiency, we’ve introduced the ability to execute steps concurrently using parallel branches and parallel iteration, to speed up overall workflow execution time. A workflow can now contain both serial steps for sequential operations, and parallel steps for non-linear ones.
For many users, Workflows with parallel steps will be the most efficient way on Google Cloud to run a batch of services in parallel and aggregate the results. Because serverless compute services like Cloud Functions and Cloud Run can autoscale, you can use Workflows to run those services with high concurrency when needed, without needing to provision high capacity when idle.
Let’s take a closer look at how this new feature works.
Parallel steps in action: Running concurrent BigQuery jobs to speed up data processing
To test out the benefit of parallel steps, here’s an example tutorial of a workflow that runs five BigQuery jobs to process a Wikipedia dataset. In this tutorial, we compare parallel and non-parallel execution of this workflow, and see a major improvement in execution time when parallelizing those BigQuery jobs.
First, we execute the workflow serially, using the for loop below. Each BigQuery job takes about 20 seconds, bringing the total execution time to roughly 1 minute and 40 seconds:
Serial iteration
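The original post showed this loop as an image; the snippet below is a sketch of that serial iteration, adapted from the public codelab. The table names and the query against the bigquery-samples.wikipedia_pageviews dataset are illustrative and may need adjusting to tables that exist in that dataset.

```yaml
# Sketch of the serial version: five BigQuery jobs run one after another in a for loop.
# Table names and the query are illustrative.
main:
  steps:
    - init:
        assign:
          - results: {}  # top title per table, keyed by table name
          - tables: ["202201h", "202202h", "202203h", "202204h", "202205h"]
    - runQueries:
        for:
          value: table
          in: ${tables}
          steps:
            - runQuery:
                call: googleapis.bigquery.v2.jobs.query
                args:
                  projectId: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
                  body:
                    useLegacySql: false
                    query: ${"SELECT title, SUM(views) AS views FROM `bigquery-samples.wikipedia_pageviews." + table + "` WHERE LENGTH(title) > 10 GROUP BY title ORDER BY views DESC LIMIT 100"}
                result: queryResult
            - saveResult:
                assign:
                  - results[table]: ${queryResult.rows[0].f[0].v}  # title with the most views
    - returnResults:
        return: ${results}
```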
Next, we tried executing the BigQuery jobs concurrently. Note that it was very simple to change from a non-parallel to a parallel iteration, simply by adding the parallel keyword and declaring results as a shared variable, as shown below:
Parallel iteration
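Again as a sketch rather than the exact code from the post, the only step that changes is runQueries: parallel wraps the for loop, and results is declared as a shared variable so each concurrent iteration can write to it.

```yaml
# Sketch of the parallel version: the only change is the runQueries step, which now
# wraps the for loop in "parallel" and declares "results" as a shared variable.
main:
  steps:
    - init:
        assign:
          - results: {}  # top title per table, keyed by table name
          - tables: ["202201h", "202202h", "202203h", "202204h", "202205h"]
    - runQueries:
        parallel:
          shared: [results]
          for:
            value: table
            in: ${tables}
            steps:
              - runQuery:
                  call: googleapis.bigquery.v2.jobs.query
                  args:
                    projectId: ${sys.get_env("GOOGLE_CLOUD_PROJECT_ID")}
                    body:
                      useLegacySql: false
                      query: ${"SELECT title, SUM(views) AS views FROM `bigquery-samples.wikipedia_pageviews." + table + "` WHERE LENGTH(title) > 10 GROUP BY title ORDER BY views DESC LIMIT 100"}
                  result: queryResult
              - saveResult:
                  assign:
                    - results[table]: ${queryResult.rows[0].f[0].v}  # title with the most views
    - returnResults:
        return: ${results}
```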
Executing these BigQuery jobs concurrently in a parallel iteration took 20 seconds in total. That’s 5x faster than the non-parallel execution.
Running BigQuery jobs in parallel helped speed up total workflow execution time by 5x.
Note that when using parallel steps, all variable assignments are guaranteed to be atomic, meaning that you don’t have to worry about variable read/write ordering or race conditions. Declaring a variable as shared (as in the example above) allows that variable to be read and written by any branch. With shared variables, the assigned value is determined and written without any intervening reads or writes by other branches, and writes to a shared variable are immediately visible to other branches. (It’s important to note, though, that execution order across branches is not guaranteed.)
Parallel branches: Run a set of operations in parallel
If your workflow has several distinct sets of steps that can be executed at the same time, placing them in parallel branches can decrease the total time needed to complete those steps. You can define up to 10 branches per parallel step and run up to 20 concurrent branches (beyond 20, additional parallel branches are queued).
Here’s an example of using parallel branches to retrieve data in parallel from two different services:
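The snippet below is a sketch of that pattern under assumed placeholder endpoints: two HTTP calls run concurrently in separate branches, and each branch writes its result into a shared map that the workflow returns at the end.

```yaml
# Sketch of parallel branches: two HTTP calls run concurrently, each writing its result
# into a shared map. The service URLs are placeholders for your own endpoints.
main:
  params: [input]
  steps:
    - init:
        assign:
          - results: {}
    - fetchInParallel:
        parallel:
          shared: [results]
          branches:
            - profileBranch:
                steps:
                  - getProfile:
                      call: http.get
                      args:
                        url: ${"https://example.com/profiles/" + input.userId}
                      result: profileResponse
                  - saveProfile:
                      assign:
                        - results.profile: ${profileResponse.body}
            - ordersBranch:
                steps:
                  - getOrders:
                      call: http.get
                      args:
                        url: ${"https://example.com/orders/" + input.userId}
                      result: ordersResponse
                  - saveOrders:
                      assign:
                        - results.orders: ${ordersResponse.body}
    - returnResults:
        return: ${results}
```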
When should I use parallel steps?
You’ll see the biggest efficiency gains from parallelizing long-running steps (roughly 1 second or longer) that include operations like sleep, HTTP requests, or callbacks. For fast-running compute operations and steps that perform operations like assign, switch, or next, continue running them in serial order; you won’t see any efficiency gains from parallelizing them.
For example, in the preceding tutorial, each BigQuery job takes approximately 20 seconds to run. For that reason, parallelizing those jobs makes a lot of sense to speed up execution time, because those jobs don’t need to run in sequential order.
Next steps: Getting started with Workflows Parallel Steps
Check out our Parallel Steps codelab: Try out this codelab for a walkthrough on using parallel iteration to run multiple BigQuery jobs concurrently to process a dataset.
Check out our documentation for more information on parallel steps, and for more sample code.
Test Workflows out for free: Workflows is pay-per-use, and your first 5,000 internal steps per month are free. Just head to Google Cloud Console to get started.
Parallel Steps is currently in Preview, and we’d love to hear your feedback on this feature as you’re using it. You can send us your feedback through this form.
If you’re a Google Cloud developer or a data engineer and you’re not using Workflows today, we encourage you to test it out—especially if you want to build an event-driven business process or application, or a lightweight data pipeline. Get familiar with Workflows in this free codelab or view Workflows product documentation.