API Explorer Quickstart—Submit a Spark job

This page shows you how to use an inline Google API Explorer template to run a simple Spark job in a Google Cloud Dataproc cluster. You can learn how to do the same task using the Google Cloud Platform Console in Quickstart Using the Console or using the command line in Quickstart using the gcloud command-line tool.

Before you begin

Before you can run a Cloud Dataproc job, you need to create a cluster of virtual machines (VMs) to run it on. You can use the API Explorer, the Google Cloud Platform Console, or the Google Cloud SDK gcloud command-line tool to create a cluster.
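If you don't have a cluster yet, the following is a minimal sketch of creating one programmatically, using the google-api-python-client discovery client to call the Dataproc v1 clusters.create method. The project ID shown is a placeholder, and passing an empty config assumes the service fills in its defaults, as the quickstarts do:

    # Minimal sketch: create a Cloud Dataproc cluster via the v1 REST API
    # using the google-api-python-client discovery client.
    # Assumes Application Default Credentials are available.
    from googleapiclient.discovery import build

    dataproc = build('dataproc', 'v1')

    cluster_body = {
        'projectId': 'your-project-id',   # placeholder: your project ID
        'clusterName': 'example-cluster',
        'config': {},  # empty config: let the service apply defaults
    }

    operation = dataproc.projects().regions().clusters().create(
        projectId='your-project-id',
        region='global',
        body=cluster_body,
    ).execute()
    print(operation)  # a long-running Operation tracking cluster creation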

Submit a job

To submit a sample Apache Spark job that calculates a rough value for pi, fill in and execute the API Explorer template below as follows (a Python sketch of the same request follows these steps):

  1. Enter your project ID in the projectId field.
  2. The following fields are filled in for you:
    1. region = "global". global is the default region assigned when a Cloud Dataproc cluster is created. It is a special multi-region namespace that can deploy instances into all Google Compute Engine zones globally. If you created your cluster (see API Explorer—Create a cluster) in a different region, replace "global" with the name of your cluster's region.
    2. Request body job.placement.clusterName = "example-cluster". This is the name of the Cloud Dataproc cluster where the job will run (created in the previous quickstarts; see API Explorer—Create a cluster). If your cluster has a different name, replace this value with that name.
    3. Request body job.sparkJob:
      1. args = "1000". The number of tasks.
      2. jarFileUris = "file:///usr/lib/spark/examples/jars/spark-examples.jar". The location of the pre-installed jar file on the master VM instance in your cluster that contains the Spark Scala job code.
      3. mainClass = "org.apache.spark.examples.SparkPi". The main method for the job's pi-calculating Scala application.
  3. Click EXECUTE. A dialog asks you to confirm the default https://www.googleapis.com/auth/cloud-platform scope. Click the dialog's ALLOW to send the request to the service. The JSON response, showing that the submitted job is pending, typically appears below the template in less than a second.
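The steps above drive the Dataproc v1 jobs.submit method, so the same request can be sent from code. Here is a minimal Python sketch using the google-api-python-client discovery client with the template values above; 'your-project-id' is a placeholder:

    # Minimal sketch: submit the SparkPi job via the Dataproc v1
    # jobs.submit method, mirroring the API Explorer template fields.
    from googleapiclient.discovery import build

    dataproc = build('dataproc', 'v1')

    job_body = {
        'job': {
            'placement': {'clusterName': 'example-cluster'},
            'sparkJob': {
                'args': ['1000'],
                'jarFileUris': [
                    'file:///usr/lib/spark/examples/jars/spark-examples.jar'
                ],
                'mainClass': 'org.apache.spark.examples.SparkPi',
            },
        }
    }

    job = dataproc.projects().regions().jobs().submit(
        projectId='your-project-id',
        region='global',
        body=job_body,
    ).execute()
    print(job['reference']['jobId'])  # the ID assigned to the submitted job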

You can inspect the job output by going to GCP Console—Jobs, then clicking the Job ID link (select the "Line wrapping" box to bring lines that exceed the right margin into view).
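You can also check the job from code. The sketch below assumes the jobs.get method and the job ID printed by the submit sketch above; it waits for a terminal state and then prints the Cloud Storage location of the driver output:

    # Minimal sketch: poll jobs.get until the job reaches a terminal
    # state, then print where the driver output (the value of pi) went.
    import time
    from googleapiclient.discovery import build

    dataproc = build('dataproc', 'v1')
    job_id = 'your-job-id'  # placeholder: reference.jobId from jobs.submit

    while True:
        job = dataproc.projects().regions().jobs().get(
            projectId='your-project-id',
            region='global',
            jobId=job_id,
        ).execute()
        state = job['status']['state']
        if state in ('DONE', 'ERROR', 'CANCELLED'):
            break
        time.sleep(5)

    print(state, job.get('driverOutputResourceUri'))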

Congratulations! You've used the Google API Explorer to submit a Spark job to a Cloud Dataproc cluster.
