Setup
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project. Learn how to check if billing is enabled on a project.
- Enable the Dataproc API.
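If you prefer the command line, the following sketch shows one way to handle the billing check and API enablement with the gcloud CLI. It assumes the gcloud CLI is installed and that PROJECT_ID is a placeholder for your project ID:

# Set the active project (PROJECT_ID is a placeholder).
gcloud config set project PROJECT_ID

# Check whether billing is enabled for the project.
gcloud billing projects describe PROJECT_ID --format="value(billingEnabled)"

# Enable the Dataproc API.
gcloud services enable dataproc.googleapis.com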
Submit a Spark batch workload
Console
Go to Dataproc Batches in the Google Cloud console. Click CREATE to open the Create batch page.
Select and fill in the following fields on the page to submit a Spark batch workload that computes the approximate value of pi:
- Batch Info
- Batch ID: Specify an ID for your batch workload. This value must be 4-63 lowercase characters. Valid characters are /[a-z][0-9]-/.
- Region: Select a region where your workload will run.
- Container
- Batch type: Spark
- Runtime version: The default runtime version is selected. You can optionally specify a non-default Dataproc Serverless runtime version.
- Main class:
org.apache.spark.examples.SparkPi
- Jar files:
file:///usr/lib/spark/examples/jars/spark-examples.jar
- Arguments: 1000
- Execution Configuration: You can specify a service account to use to run your workload. If you do not specify a service account, the workload runs under the Compute Engine default service account.
- Network configuration: The VPC subnetwork that executes Serverless Spark workloads must meet the requirements listed in Dataproc Serverless for Spark network configuration. The subnetwork list displays subnets in your selected network that are enabled for Private Google Access.
- Properties: Enter the Key (property name) and Value of any supported Spark properties you want your Spark batch workload to use. Note: Unlike Dataproc on Compute Engine cluster properties, Dataproc Serverless for Spark workload properties do not include a "spark:" prefix (see the example after this list).
- Other options
- Configure the batch workload to use an external self-managed Hive Metastore.
- Use a Spark History Server.
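For example, to set an illustrative Spark property (not required for the SparkPi workload), you would enter it without the cluster-style prefix:

Key: spark.executor.memory    Value: 4g

The equivalent Dataproc on Compute Engine cluster property would be spark:spark.executor.memory; the 4g value here is only an example.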
Click SUBMIT to run the Spark batch workload.
gcloud
To submit a Spark batch workload to compute the approximate value of pi, run the following gcloud CLI gcloud dataproc batches submit spark command locally in a terminal window or in Cloud Shell.
gcloud dataproc batches submit spark \
    --region=REGION \
    --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar \
    --class=org.apache.spark.examples.SparkPi \
    -- 1000
Notes:
- Subnetwork: The VPC subnetwork that executes Serverless Spark workloads must meet the requirements listed in Dataproc Serverless for Spark network configuration. If the default network's subnet for the region specified in the gcloud dataproc batches submit command is not enabled for Private Google Access, you must do one of the following:
  - Enable the default network's subnet for the region for Private Google Access, or
  - Use the --subnet=[SUBNET_URI] flag in the command to specify a subnet that has Private Google Access enabled. You can run the gcloud compute networks describe [NETWORK_NAME] command to list the URIs of subnets in a network.
- --jars: The example jar file is pre-installed, and the 1000 command argument passed to the SparkPi workload specifies 1000 iterations of the pi estimation logic (workload input arguments are included after the "--").
- --properties: You can add the --properties flag to enter any supported Spark properties you want your Spark batch workload to use.
- --deps-bucket: You can add this flag to specify a Cloud Storage bucket where Dataproc Serverless for Spark will upload workload dependencies. The gs:// URI prefix of the bucket is not required; you can specify the bucket path/name only, for example, "mybucketname". Exception: If your batch workload references files on your local machine, the --deps-bucket flag is required; Dataproc Serverless for Spark will upload the local file(s) to a /dependencies folder in the bucket before running the batch workload.
- Other options:
- You can add other optional command flags. For example,
the following command configures the batch workload to use an external
self-managed Hive Metastore
using a standard Spark configuration:
gcloud dataproc batches submit \
    --properties=spark.sql.catalogImplementation=hive,spark.hive.metastore.uris=METASTORE_URI,spark.hive.metastore.warehouse.dir=WAREHOUSE_DIR \
    other args ...
See gcloud dataproc batches submit for supported command flags.
- Use a Persistent History Server.
- Create a Persistent History Server (PHS) on a single-node Dataproc
cluster. Note: the Cloud Storage bucket-name must
exist.
gcloud dataproc clusters create PHS_CLUSTER_NAME \
    --region=REGION \
    --single-node \
    --enable-component-gateway \
    --properties=spark:spark.history.fs.logDirectory=gs://bucket-name/phs/*/spark-job-history
- Submit a batch workload, specifying your running Persistent History Server.
gcloud dataproc batches submit spark \
    --region=REGION \
    --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar \
    --class=org.apache.spark.examples.SparkPi \
    --history-server-cluster=projects/project-id/regions/region/clusters/PHS-cluster-name \
    -- 1000
- Specify a
non-default Dataproc Serverless runtime version.
gcloud dataproc batches submit spark \
    --region=REGION \
    --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar \
    --class=org.apache.spark.examples.SparkPi \
    --version=VERSION \
    -- 1000
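Putting several of the optional flags above together, the following is a sketch of one possible submission, not a command prescribed by this guide; the subnet, bucket, and property values are placeholders chosen for illustration:

# Optionally enable Private Google Access on the default subnet
# (REGION is a placeholder).
gcloud compute networks subnets update default \
    --region=REGION \
    --enable-private-ip-google-access

# Submit the SparkPi workload with a non-default subnet, a dependencies
# bucket, and a Spark property (SUBNET_URI and mybucketname are placeholders).
gcloud dataproc batches submit spark \
    --region=REGION \
    --subnet=SUBNET_URI \
    --deps-bucket=mybucketname \
    --properties=spark.executor.instances=4 \
    --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar \
    --class=org.apache.spark.examples.SparkPi \
    -- 1000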
API
This section shows how to create a batch workload
to compute the approximate value
of pi
using the Dataproc Serverless for Spark
batches.create API.
Before using any of the request data, make the following replacements:
- project-id: Google Cloud project ID
- region: region

Notes:
- Custom-container-image: Specify the custom container image using the Docker image naming format: {hostname}/{project-id}/{image}:{tag}, for example, "gcr.io/my-project-id/my-image:1.0.1". Note: You must host your custom container on Container Registry.
- Subnetwork: If the default network's subnet for the specified region is not enabled for Private Google Access, you must do one of the following:
  - Enable the default network's subnet for the region for Private Google Access, or
  - Use the ExecutionConfig.subnetworkUri field to specify a subnet that has Private Google Access enabled. You can run the gcloud compute networks describe [NETWORK_NAME] command to list the URIs of subnets in a network.
- sparkBatch.jarFileUris: The example jar file is pre-installed in the Spark execution environment. The "1000" sparkBatch.args value is passed to the SparkPi workload and specifies 1000 iterations of the pi estimation logic.
- Spark properties: You can use the RuntimeConfig.properties field to enter any supported Spark properties you want your Spark batch workload to use.
- Other options:
- Configure the batch workload to use an external self-managed Hive Metastore.
- Use a Spark History Server.
- Use the RuntimeConfig.version field as part of the batches.create request to specify a non-default Dataproc Serverless runtime version (see the request body sketch following these notes).
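As a rough sketch of where these optional fields sit in the request body, the following adds a runtime version, a Spark property, and a subnetwork to the basic SparkPi request shown below. The version string, property, and subnetwork URI are illustrative placeholders:

{
  "sparkBatch": {
    "mainClass": "org.apache.spark.examples.SparkPi",
    "jarFileUris": ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
    "args": ["1000"]
  },
  "runtimeConfig": {
    "version": "VERSION",
    "properties": {
      "spark.executor.instances": "4"
    }
  },
  "environmentConfig": {
    "executionConfig": {
      "subnetworkUri": "SUBNET_URI"
    }
  }
}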
HTTP method and URL:
POST https://dataproc.googleapis.com/v1/projects/project-id/locations/region/batches
Request JSON body:
{ "sparkBatch":{ "args":[ "1000" ], "jarFileUris":[ "file:///usr/lib/spark/examples/jars/spark-examples.jar" ], "mainClass":"org.apache.spark.examples.SparkPi" } }
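One way to send the request from the command line is with curl, assuming the request body above is saved to a local file named request.json and that you are authenticated with the gcloud CLI:

curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d @request.json \
    "https://dataproc.googleapis.com/v1/projects/project-id/locations/region/batches"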
You should receive a JSON response similar to the following:
{ "name":"projects/project-id/locations/region/batches/batch-id", "uuid":",uuid", "createTime":"2021-07-22T17:03:46.393957Z", "sparkBatch":{ "mainClass":"org.apache.spark.examples.SparkPi", "args":[ "1000" ], "jarFileUris":[ "file:///usr/lib/spark/examples/jars/spark-examples.jar" ] }, "runtimeInfo":{ "outputUri":"gs://dataproc-.../driveroutput" }, "state":"SUCCEEDED", "stateTime":"2021-07-22T17:06:30.301789Z", "creator":"account-email-address", "runtimeConfig":{ "properties":{ "spark:spark.executor.instances":"2", "spark:spark.driver.cores":"2", "spark:spark.executor.cores":"2", "spark:spark.app.name":"projects/project-id/locations/region/batches/batch-id" } }, "environmentConfig":{ "peripheralsConfig":{ "sparkHistoryServerConfig":{ } } }, "operation":"projects/project-id/regions/region/operation-id" }
Estimate workload costs.
Dataproc Serverless for Spark workloads consume Data Compute Unit (DCU) and shuffle storage resources. See Dataproc Serverless pricing for an example that uses Dataproc UsageMetrics to estimate workload resource consumption and costs.
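As an illustrative sketch only: if a completed batch reports approximate usage through runtimeInfo fields (for example, milliDcuSeconds and shuffleStorageGbSeconds; field names and availability can vary by runtime version), the DCU consumption can be converted to DCU-hours before applying the regional price:

# Assumed field path for approximate usage; confirm against the Batch
# resource for your runtime version.
gcloud dataproc batches describe batch-id --region=region \
    --format="value(runtimeInfo.approximateUsage)"

# Example arithmetic: 14,400,000 milliDCU-seconds
#   = 14,400 DCU-seconds = 14,400 / 3600 = 4 DCU-hours,
# which is then multiplied by the per-DCU-hour price for the region.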
What's next
Learn about: