Create a Dataproc-enabled instance

This page describes how to create a Dataproc-enabled Vertex AI Workbench instance. It also describes the benefits of the Dataproc JupyterLab plugin and provides an overview of how to use the plugin with Dataproc Serverless for Spark and Dataproc on Compute Engine.

Overview of the Dataproc JupyterLab plugin

As of version M113, Vertex AI Workbench instances have the Dataproc JupyterLab plugin preinstalled.

The Dataproc JupyterLab plugin provides two ways to run Apache Spark notebook jobs: Dataproc clusters and Serverless Spark on Dataproc.

  • Dataproc clusters include a rich set of features with control over the infrastructure that Spark runs on. You choose the size and configuration of your Spark cluster, allowing for customization and control over your environment. This approach is ideal for complex workloads, long-running jobs, and fine-grained resource management.
  • Serverless Spark powered by Dataproc eliminates infrastructure concerns. You submit your Spark jobs, and Google handles the provisioning, scaling, and optimization of resources behind the scenes. This serverless approach offers an easy and cost-efficient option for data science and ML workloads.

With both options, you can use Spark for data processing and analysis. The choice between Dataproc clusters and Serverless Spark depends on your specific workload requirements, desired level of control, and resource usage patterns.

Benefits of using Serverless Spark for data science and ML workloads include:

  • No cluster management: You don't need to worry about provisioning, configuring, or managing Spark clusters. This saves you time and resources.
  • Autoscaling: Serverless Spark automatically scales up and down based on the workload, so you only pay for the resources you use.
  • High performance: Serverless Spark is optimized for performance and takes advantage of Google Cloud's infrastructure.
  • Integration with other Google Cloud technologies: Serverless Spark integrates with other Google Cloud products, such as BigQuery and Dataplex.

For more information, see the Dataproc Serverless documentation.

Limitations

Consider the following limitations when planning your project:

  • The Dataproc JupyterLab plugin doesn't support VPC Service Controls.

Dataproc limitations

The following Dataproc limitations apply:

  • Spark jobs are executed with the service account identity, not the submitting user's identity.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Enable the Cloud Resource Manager, Dataproc, and Notebooks APIs. (You can also enable these APIs with the gcloud CLI, as shown after these steps.)

    Enable the APIs

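If you prefer the command line, you can enable these APIs with the gcloud CLI instead. This is a minimal sketch; it assumes the gcloud CLI is installed and your default project is already set:

gcloud services enable cloudresourcemanager.googleapis.com \
    dataproc.googleapis.com \
    notebooks.googleapis.com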

Required roles

To ensure that the service account has the necessary permissions to run a notebook file on a Dataproc Serverless cluster or a Dataproc cluster, ask your administrator to grant the service account the following IAM roles:

  • Dataproc Worker (roles/dataproc.worker) on your project
  • Dataproc Editor (roles/dataproc.editor) on the cluster, for the dataproc.clusters.use permission

For more information about granting roles, see Manage access.

These predefined roles contain the permissions required to run a notebook file on a Dataproc Serverless cluster or a Dataproc cluster. The exact permissions that are required are listed in the following section:

Required permissions

The following permissions are required to run a notebook file on a Dataproc Serverless cluster or a Dataproc cluster:

  • dataproc.agents.create
  • dataproc.agents.delete
  • dataproc.agents.get
  • dataproc.agents.update
  • dataproc.tasks.lease
  • dataproc.tasks.listInvalidatedLeases
  • dataproc.tasks.reportStatus
  • dataproc.clusters.use

Your administrator might also be able to give the service account these permissions with custom roles or other predefined roles.
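
For example, an administrator could grant the Dataproc Worker role, which contains the dataproc.agents.* and dataproc.tasks.* permissions listed earlier, with a command like the following. PROJECT_ID and SERVICE_ACCOUNT_EMAIL are placeholders for your values:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \
    --role="roles/dataproc.worker"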

Create an instance with Dataproc enabled

To create a Vertex AI Workbench instance with Dataproc enabled, do the following:

  1. In the Google Cloud console, go to the Instances page.

    Go to Instances

  2. Click Create new.

  3. In the New instance dialog, click Advanced options.

  4. In the Create instance dialog, in the Details section, make sure Enable Dataproc is selected.

  5. Make sure Workbench type is set to Instance.

  6. In the Environment section, make sure the version is M113 or later.

  7. Click Create.

    Vertex AI Workbench creates an instance and automatically starts it. When the instance is ready to use, Vertex AI Workbench activates an Open JupyterLab link.

Open JupyterLab

Next to your instance's name, click Open JupyterLab.

The JupyterLab Launcher tab opens in your browser. By default, it contains Dataproc Serverless Notebooks and Dataproc Jobs and Sessions sections. If there are Jupyter-ready clusters in the selected project and region, the tab also contains a Dataproc Cluster Notebooks section.

Use the plugin with Dataproc Serverless for Spark

Serverless Spark runtime templates that are in the same region and project as your Vertex AI Workbench instance appear in the Dataproc Serverless Notebooks section of the JupyterLab Launcher tab.

To create a runtime template, see Create a Dataproc Serverless runtime template.

To open a new Serverless Spark notebook, click a runtime template. It takes about a minute for the remote Spark kernel to start. After the kernel starts, you can start coding. To run your code on Serverless Spark, run a code cell in your notebook.
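
For example, a first cell might look like the following sketch. It assumes that the Serverless Spark kernel exposes a preconfigured SparkSession named spark, so the cell can use it without any setup code:

# Build a small DataFrame and display it; `spark` is the
# session's preconfigured SparkSession.
df = spark.createDataFrame(
    [("alpha", 1), ("beta", 2), ("gamma", 3)],
    ["name", "value"],
)
df.show()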

Use the plugin with Dataproc on Compute Engine

If you created a Dataproc on Compute Engine Jupyter cluster, the Launcher tab has a Dataproc Cluster Notebooks section.

Four cards appear for each Jupyter-ready Dataproc cluster that you have access to in that region and project.

To change the region and project, do the following:

  1. Select Settings > Cloud Dataproc Settings.

  2. On the Setup Config tab, under Project Info, change the Project ID and Region, and then click Save.

    These changes don't take effect until you restart JupyterLab.

  3. To restart JupyterLab, select File > Shut Down, and then click Open JupyterLab on the Vertex AI Workbench instances page.

To create a new notebook, click a card. After the remote kernel on the Dataproc cluster starts, you can start writing your code and then run it on your cluster.
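
As with Serverless Spark, the cluster's Spark kernel typically provides a SparkSession named spark, so a minimal first cell could be a sketch like this:

# Sum the integers 0..999 on the cluster to confirm the kernel is working.
spark.range(1000).selectExpr("sum(id) AS total").show()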

Manage Dataproc on Vertex AI Workbench instances using the gcloud CLI

Vertex AI Workbench instances are created with Dataproc enabled by default. You can create a Vertex AI Workbench instance with Dataproc turned off by setting the disable-mixer metadata key to true.

gcloud workbench instances create INSTANCE_NAME --metadata=disable-mixer=true

To enable Dataproc on a stopped Vertex AI Workbench instance, update the disable-mixer metadata value:

gcloud workbench instances update INSTANCE_NAME --metadata=disable-mixer=false
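
To confirm the setting, you can inspect the instance's metadata. This assumes your instance's zone is passed in the --location flag:

gcloud workbench instances describe INSTANCE_NAME --location=LOCATION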

Manage Dataproc using Terraform

In Terraform, Dataproc for Vertex AI Workbench instances is managed with the disable-mixer key in the metadata field: set it to false to turn on Dataproc, or to true to turn it off.

To learn how to apply or remove a Terraform configuration, see Basic Terraform commands.

resource "google_workbench_instance" "default" {
  name     = "workbench-instance-example"
  location = "us-central1-a"

  gce_setup {
    machine_type = "n1-standard-1"
    vm_image {
      project = "cloud-notebooks-managed"
      family  = "workbench-instances"
    }
    metadata = {
      disable-mixer = "false"
    }
  }
}

What's next