Run a managed notebooks instance on a Dataproc cluster
This page shows you how to run a managed notebooks instance's notebook file on a Dataproc cluster.
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
- Enable the Notebooks and Dataproc APIs. (For a command-line option, see the sketch after this list.)
- If you haven't already, create a managed notebooks instance.
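If you prefer the command line, enabling both APIs is a single call. The following is a minimal sketch that assumes the Google Cloud CLI is installed and authenticated, and that your active project is the one you selected above:
# Enable the Notebooks and Dataproc APIs in the active project.
gcloud services enable notebooks.googleapis.com dataproc.googleapis.com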
Required roles
To ensure that the service account has the necessary permissions to run a notebook file on a Dataproc cluster, ask your administrator to grant the service account the following IAM roles:
- Dataproc Worker (roles/dataproc.worker) on your project
- Dataproc Editor (roles/dataproc.editor) on the cluster, for the dataproc.clusters.use permission
For more information about granting roles, see Manage access to projects, folders, and organizations.
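As a sketch only, granting both roles from the command line might look like the following; PROJECT_ID, SERVICE_ACCOUNT_EMAIL, CLUSTER_NAME, and REGION are placeholders, and your administrator might prefer to grant the roles in the console instead:
# Grant Dataproc Worker on the project.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \
    --role="roles/dataproc.worker"
# Grant Dataproc Editor on the specific cluster.
gcloud dataproc clusters add-iam-policy-binding CLUSTER_NAME \
    --region=REGION \
    --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \
    --role="roles/dataproc.editor"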
These predefined roles contain the permissions required to run a notebook file on a Dataproc cluster. The following permissions are required:
- dataproc.agents.create
- dataproc.agents.delete
- dataproc.agents.get
- dataproc.agents.update
- dataproc.tasks.lease
- dataproc.tasks.listInvalidatedLeases
- dataproc.tasks.reportStatus
- dataproc.clusters.use
Your administrator might also be able to give the service account these permissions with custom roles or other predefined roles.
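For example, a custom role that carries exactly the permissions above could be created along these lines; the role ID and title here are hypothetical placeholders of our choosing:
# Create a custom role with only the permissions listed above (role ID is hypothetical).
gcloud iam roles create dataprocNotebookRunner \
    --project=PROJECT_ID \
    --title="Dataproc Notebook Runner" \
    --permissions=dataproc.agents.create,dataproc.agents.delete,dataproc.agents.get,dataproc.agents.update,dataproc.tasks.lease,dataproc.tasks.listInvalidatedLeases,dataproc.tasks.reportStatus,dataproc.clusters.use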
Create a Dataproc cluster
To run a managed notebooks instance's notebook file in a Dataproc cluster, your cluster must meet the following criteria:
- The cluster's component gateway must be enabled.
- The cluster must have the Jupyter component.
- The cluster must be in the same region as your managed notebooks instance.
To create your Dataproc cluster, enter the following command in either Cloud Shell or another environment where the Google Cloud CLI is installed.
gcloud dataproc clusters create CLUSTER_NAME \
    --region=REGION \
    --enable-component-gateway \
    --optional-components=JUPYTER
Replace the following:
- CLUSTER_NAME: the name of your new cluster
- REGION: the Google Cloud location of your managed notebooks instance
After a few minutes, your Dataproc cluster is available for use. Learn more about creating Dataproc clusters.
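To confirm from the command line that the cluster is ready, one option is a describe call; checking the output for the JUPYTER optional component and a RUNNING state is our suggestion, not a step from this guide:
# Inspect the cluster's configuration and status.
gcloud dataproc clusters describe CLUSTER_NAME --region=REGION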
Open JupyterLab
- If you haven't already, create a managed notebooks instance in the same region where your Dataproc cluster is. (For a command-line option, see the sketch after these steps.)
- In the Google Cloud console, go to the Managed notebooks page.
- Next to your managed notebooks instance's name, click Open JupyterLab.
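If you'd rather create the managed notebooks instance from the command line, a sketch like the following may work; RUNTIME_NAME and USER_EMAIL are placeholders, the machine type is an arbitrary choice, and the flags should be verified against your gcloud version:
# Create a managed notebooks instance (a "runtime") in the cluster's region.
# RUNTIME_NAME and USER_EMAIL are hypothetical placeholders.
gcloud notebooks runtimes create RUNTIME_NAME \
    --location=REGION \
    --runtime-access-type=SINGLE_USER \
    --runtime-owner=USER_EMAIL \
    --machine-type=n1-standard-4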
Run a notebook file in your Dataproc cluster
You can run a notebook file in your Dataproc cluster from any managed notebooks instance in the same project and region.
Run a new notebook file
- In your managed notebooks instance's JupyterLab interface, select File > New > Notebook.
- Your Dataproc cluster's available kernels appear in the Select kernel menu. Select the kernel that you want to use, and then click Select. Your new notebook file opens.
- Add code to your new notebook file, and run the code.
To change the kernel that you want to use after you've created your notebook file, see the following section.
Run an existing notebook file
- In your managed notebooks instance's JupyterLab interface, click the File Browser button, navigate to the notebook file that you want to run, and open it.
- To open the Select kernel dialog, click the kernel name of your notebook file, for example: Python (Local).
- To select a kernel from your Dataproc cluster, select a kernel name that includes your cluster name at the end of it. For example, a PySpark kernel on a Dataproc cluster named mycluster is named PySpark on mycluster.
- Click Select to close the dialog.
You can now run your notebook file's code on the Dataproc cluster.
What's next
- Learn more about Dataproc.