Pricing for Managed Service for Apache Spark depends on your chosen deployment mode: serverless or clusters.
Both models offer access to Lightning Engine, our next-generation vectorized execution engine that delivers up to 4.9x faster performance than open source Spark and up to 2x the price-performance over the leading high-speed Spark alternative.
For both deployment types, you also pay any Cloud Storage, BigQuery storage, and network egress charges that you incur.
With the serverless deployment model, you pay only for the resources you use. You are billed per second based on Data Compute Units (DCUs), shuffle storage, and any attached accelerators. Scale-to-zero ensures you never pay for idle capacity.
Serverless deployments offer two performance tiers to match your workload requirements:
Compare serverless performance tiers in greater detail
Note: The following rates are examples for the us-central1 region.
Data Compute Unit (DCU) pricing
The DCU rate shown below is an hourly rate. It is prorated and billed per second, with a 1-minute minimum charge.
| Type | Default (USD) | BigQuery CUD - 1 Year (USD) | BigQuery CUD - 3 Year (USD) |
|---|---|---|---|
| Data Compute Unit (standard) | US$0.06 | US$0.054 | US$0.048 |
| Data Compute Unit (premium) | US$0.089 | US$0.0801 | US$0.0712 |
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
Interactive workloads are charged at the premium rate.
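To make the proration concrete, here is a minimal sketch (not official billing code) of how per-second DCU billing with the 1-minute minimum could be computed from the example hourly rates above:

```python
# Example us-central1 DCU rates from the table above, USD per DCU-hour.
DCU_RATE_PER_HOUR = {"standard": 0.06, "premium": 0.089}

def dcu_cost(dcus: float, seconds: float, tier: str = "standard") -> float:
    """Estimated cost in USD for `dcus` DCUs running for `seconds` seconds."""
    billed_seconds = max(seconds, 60)  # 1-minute minimum charge
    return dcus * (billed_seconds / 3600) * DCU_RATE_PER_HOUR[tier]

# A 30-second job is still billed for 60 seconds:
print(round(dcu_cost(12, 30), 4))    # 0.012
# A full 24-hour day at 12 standard DCUs:
print(round(dcu_cost(12, 86400), 2)) # 17.28
```

Proration means the hourly rate is only a quoting convention; the billed quantity is DCU-seconds.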
Shuffle storage pricing
The shuffle storage rate shown below is an hourly rate. It is prorated and billed per second, with a 1-minute minimum charge for standard shuffle storage and a 5-minute minimum charge for premium shuffle storage. Premium shuffle storage can be used only with premium Data Compute Units.
| Type | Price (USD) per gibibyte-hour |
|---|---|
| Shuffle Storage (standard) | US$0.000054795 |
| Shuffle Storage (premium) | US$0.000136986 |
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
Accelerator pricing
The accelerator rate shown below is an hourly rate. It is prorated and billed per second, with a 5-minute minimum charge.
| Type | Price (USD) per hour |
|---|---|
| NVIDIA L4 | US$0.672048287 |
| NVIDIA A100 40GB | US$3.5206896 |
| NVIDIA A100 80GB | US$4.713696 |
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
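Accelerators use a longer minimum than standard DCUs. The sketch below (not official billing code) shows how the 5-minute minimum could apply, using the example L4 rate from the table above:

```python
# Example us-central1 L4 rate from the table above, USD per GPU-hour.
L4_RATE_PER_HOUR = 0.672048287

def accelerator_cost(gpus: int, seconds: float,
                     rate_per_hour: float = L4_RATE_PER_HOUR) -> float:
    """Estimated cost in USD for `gpus` accelerators running `seconds` seconds."""
    billed_seconds = max(seconds, 300)  # 5-minute minimum charge
    return gpus * (billed_seconds / 3600) * rate_per_hour

# A 60-second job with 2 L4 GPUs is billed as if it ran 300 seconds:
print(round(accelerator_cost(2, 60), 4))
```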
Serverless pricing examples
If a serverless batch workload runs with 12 DCUs for 24 hours in the us-central1 region and consumes 25 GB of shuffle storage, the price calculation is as follows:
(`spark.driver.cores=4`, `spark.executor.cores=4`, `spark.executor.instances=2`)
Note: The rates in the tables above are hourly, so a 24-hour workload accrues 24 hours of DCU and shuffle storage charges.
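The arithmetic for this example, using the example us-central1 hourly rates above (a sketch; your actual bill comes from Cloud Billing):

```python
dcu_rate = 0.06             # standard DCU, USD per DCU-hour
shuffle_rate = 0.000054795  # standard shuffle storage, USD per GiB-hour

dcu_cost = 12 * 24 * dcu_rate          # 12 DCUs for 24 hours
shuffle_cost = 25 * 24 * shuffle_rate  # 25 GB held for 24 hours

print(round(dcu_cost, 2))                 # 17.28
print(round(shuffle_cost, 4))             # 0.0329
print(round(dcu_cost + shuffle_cost, 2))  # 17.31
```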
If a serverless batch workload runs with 12 DCUs and 2 L4 GPUs for 24 hours in the us-central1 region and consumes 25 GB of shuffle storage, the price calculation is as follows:
(`spark.driver.cores=4`, `spark.executor.cores=4`, `spark.executor.instances=2`, `spark.dataproc.driver.compute.tier=premium`, `spark.dataproc.executor.compute.tier=premium`, `spark.dataproc.executor.disk.tier=premium`, `spark.dataproc.executor.resource.accelerator.type=l4`)
Note: The rates in the tables above are hourly, so a 24-hour workload accrues 24 hours of DCU, accelerator, and shuffle storage charges.
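The arithmetic for this premium-tier example, using the example us-central1 hourly rates above (a sketch; your actual bill comes from Cloud Billing):

```python
premium_dcu_rate = 0.089            # premium DCU, USD per DCU-hour
premium_shuffle_rate = 0.000136986  # premium shuffle storage, USD per GiB-hour
l4_rate = 0.672048287               # NVIDIA L4, USD per GPU-hour

dcu_cost = 12 * 24 * premium_dcu_rate          # 25.632
gpu_cost = 2 * 24 * l4_rate                    # 2 L4 GPUs for 24 hours
shuffle_cost = 25 * 24 * premium_shuffle_rate  # 25 GB held for 24 hours

total = dcu_cost + gpu_cost + shuffle_cost
print(round(total, 2))  # 57.97
```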
If a serverless interactive workload runs with 12 DCUs for 24 hours in the us-central1 region and consumes 25 GB of shuffle storage, the price calculation is as follows:
(`spark.driver.cores=4`, `spark.executor.cores=4`, `spark.executor.instances=2`)
Note: The rates in the tables above are hourly, so a 24-hour interactive workload accrues 24 hours of DCU and shuffle storage charges.
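Because interactive workloads are charged at the premium DCU rate, the arithmetic differs from the first example only in the DCU rate (a sketch using the example rates above; standard shuffle storage is assumed):

```python
premium_dcu_rate = 0.089    # interactive workloads use the premium DCU rate
shuffle_rate = 0.000054795  # standard shuffle storage, USD per GiB-hour

dcu_cost = 12 * 24 * premium_dcu_rate  # 12 DCUs for 24 hours at premium
shuffle_cost = 25 * 24 * shuffle_rate  # 25 GB held for 24 hours

print(round(dcu_cost + shuffle_cost, 2))  # 25.66
```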
Serverless pricing estimation
When a serverless batch workload completes, Managed Service for Apache Spark calculates UsageMetrics, which contain an approximation of the total DCU, accelerator, and shuffle storage resources consumed by the completed workload. After running a workload, you can run the `gcloud dataproc batches describe BATCH_ID` command to view its usage metrics, which can help you estimate the cost of running the workload.
Example:
Managed Service for Apache Spark runs a serverless workload on an ephemeral cluster with one master and two workers. Each node consumes 4 DCUs (the default is 4 DCUs per core; see `spark.driver.cores`) and 400 GB of shuffle storage (the default is 100 GB per core; see `spark.dataproc.driver.disk.size`). Workload run time is 60 seconds. Each worker also has 1 GPU, for a total of 2 across the cluster.
The user runs `gcloud dataproc batches describe BATCH_ID --region REGION` to obtain usage metrics. The command output includes the following snippet (milliDcuSeconds: 4 DCUs x 3 VMs x 60 seconds x 1000 = 720000; milliAcceleratorSeconds: 1 GPU x 2 VMs x 60 seconds x 1000 = 120000; shuffleStorageGbSeconds: 400 GB x 3 VMs x 60 seconds = 72000):
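Those usage metrics can be converted into a cost estimate. The sketch below (not official billing code) applies the example us-central1 hourly rates from the tables above and assumes the GPUs are L4s:

```python
# Usage metrics as reported by `gcloud dataproc batches describe`.
milli_dcu_seconds = 720000
milli_accelerator_seconds = 120000
shuffle_gb_seconds = 72000

# Convert to the units the rates are quoted in.
dcu_hours = milli_dcu_seconds / 1000 / 3600          # 0.2 DCU-hours
gpu_hours = milli_accelerator_seconds / 1000 / 3600  # ~0.033 GPU-hours
shuffle_gb_hours = shuffle_gb_seconds / 3600         # 20 GiB-hours

cost = (dcu_hours * 0.06                 # standard DCU rate
        + gpu_hours * 0.672048287        # assuming NVIDIA L4 GPUs
        + shuffle_gb_hours * 0.000054795)  # standard shuffle rate
print(round(cost, 4))  # 0.0355
```

Note that this estimate ignores minimum charges, which the service applies per resource at billing time.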
Serverless use of other Google Cloud resources
Your serverless Spark workload can optionally utilize the following resources, each billed at its own pricing, including but not limited to:
With the clusters deployment model, you pay for cluster uptime. All clusters are billed in one-second increments, subject to a 1-minute minimum, plus the cost of the underlying Google Cloud infrastructure.
Management Fee
The management fee is billed by the second, subject to a 1-minute minimum. It applies to all vCPUs across your master, worker, and secondary worker nodes.
The pricing formula is: $0.010 * # of vCPUs * hourly duration.
| Price per vCPU / hour |
|---|
| US$0.01 |
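The formula above can be sketched as follows (an illustrative sketch, not official billing code, assuming the $0.010 per vCPU-hour rate with per-second proration and the 1-minute minimum):

```python
def management_fee(vcpus: int, hours: float, rate: float = 0.010) -> float:
    """Estimated management fee in USD: $0.010 * vCPUs * duration in hours."""
    seconds = max(hours * 3600, 60)  # billed per second, 1-minute minimum
    return vcpus * (seconds / 3600) * rate

# A 20-vCPU cluster running for one hour:
print(round(management_fee(20, 1), 2))  # 0.2
```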
Lightning Engine add-on
Get breakthrough performance by adding Lightning Engine to your clusters: up to 4.9x faster execution than open source Spark and up to 2x the price-performance of the leading high-speed Spark alternative.
| Price per vCPU / hour |
|---|
| Through 5/31/26: US$0.0000 |
| Starting 6/1/26: US$0.0025 |
Clusters accrued charges
The following Managed Service for Apache Spark cluster operations and scenarios cause charges to accrue:
Clusters pricing example
Persistent cluster (standard tier)
You run persistent clusters for 730 hours (one month) in us-central1. The cluster consists of 1 master node and 4 worker nodes, totaling 20 vCPUs.
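The management fee for this example works out as follows (a sketch using the example $0.010 per vCPU-hour rate; underlying infrastructure costs such as Compute Engine VMs are billed separately and not included):

```python
# Management fee for a persistent cluster: 20 vCPUs running a full
# 730-hour month at the example $0.010 per vCPU-hour rate.
vcpus = 20    # 1 master node + 4 worker nodes, 4 vCPUs each
hours = 730   # approximately one month of uptime
rate = 0.010  # USD per vCPU-hour

fee = vcpus * hours * rate
print(round(fee, 2))  # 146.0
```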
Clusters use of other Google Cloud resources
As a managed and integrated solution, Managed Service for Apache Spark clusters are built on top of other Google Cloud technologies. Clusters consume the following resources, each billed at its own pricing:
Clusters can optionally utilize the following resources, each billed at its own pricing, including but not limited to:
Clusters on GKE pricing
This section explains the charges that apply only to virtual clusters that run on a user-managed Google Kubernetes Engine (GKE) cluster. See GKE pricing to learn about the added charges that apply to the user-managed GKE cluster.
The clusters on GKE pricing formula, $0.010 * # of vCPUs * hourly duration, is the same as the clusters on Compute Engine pricing formula, and is applied to the aggregate number of virtual CPUs running in VM instances in Managed Service for Apache Spark-created node pools in the cluster. The duration of a virtual machine instance is the length of time from its creation to its deletion. Clusters on GKE are billed by the second, subject to a 1-minute minimum per virtual machine instance. Other Google Cloud charges apply in addition to Managed Service for Apache Spark charges.
Managed Service for Apache Spark-created node pools continue to exist after the cluster is deleted, since they may be shared by multiple clusters. If you delete the node pools or scale them down to zero instances, you will not continue to incur Managed Service for Apache Spark charges; however, any remaining node pool VMs continue to incur infrastructure charges until you delete them.