Cloud Dataproc Optional Components

When you create a cluster, standard Apache Hadoop ecosystem components are automatically installed on the cluster (see the Cloud Dataproc Version List). You can install additional components when you create the cluster by using the Cloud Dataproc Optional Components feature described on this page. Adding components to a cluster with the Optional Components feature is similar to adding them through initialization actions, but has the following advantages:

  • Faster cluster startup times
  • Tested compatibility with specific Cloud Dataproc versions
  • Use of a cluster parameter instead of an initialization action script
  • Integration between components. For example, when Anaconda and Zeppelin are installed on a cluster with the Optional Components feature, Zeppelin uses Anaconda's Python interpreter and libraries.

Optional components can be added to clusters created with Cloud Dataproc version 1.3 and later.

Using Optional Components

gcloud command

To create a Cloud Dataproc cluster that uses Optional Components, use the gcloud beta dataproc clusters create command with the --properties flag, specifying the optional components to install in the dataproc:alpha.components property.

$ gcloud beta dataproc clusters create … \
    --properties dataproc:alpha.components=OPTIONAL_COMPONENTS
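
For example, a minimal sketch (my-cluster is a placeholder name) that installs Anaconda while also pinning a supported image version with the --image-version flag, since optional components require Cloud Dataproc 1.3 or later:

$ gcloud beta dataproc clusters create my-cluster \
    --image-version 1.3 \
    --properties dataproc:alpha.components=ANACONDA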

REST API

Optional components can be specified through the Cloud Dataproc API using a SoftwareConfig.properties key:value pair as part of a clusters.create request.

"properties": {
  "dataproc:alpha.component": 'OPTIONAL_COMPONENTS'
}
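
For reference, here is a hedged sketch of a complete clusters.create request (PROJECT, REGION, and the cluster name are placeholders, and the v1beta2 endpoint is assumed since Optional Components is a beta feature):

POST https://dataproc.googleapis.com/v1beta2/projects/PROJECT/regions/REGION/clusters

{
  "clusterName": "my-cluster",
  "config": {
    "softwareConfig": {
      "properties": {
        "dataproc:alpha.components": "ANACONDA"
      }
    }
  }
}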

Console

Currently, the Cloud Dataproc Optional Components feature is not supported in the Google Cloud Platform Console.

Optional components

The following optional components and Web interfaces are available for installation on Cloud Dataproc clusters.

Anaconda

Anaconda (Anaconda2-5.1.0) is a Python distribution and package manager with over 1,000 popular data science packages. Anaconda is installed on all cluster nodes in /opt/conda/anaconda and becomes the default Python interpreter.

$ gcloud beta dataproc clusters create … \
    --properties dataproc:alpha.components=ANACONDA
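
To confirm the installation, a quick sketch (CLUSTER_NAME and ZONE are placeholders, and the first master node is assumed to follow the standard CLUSTER_NAME-m naming): SSH to the master node and check the default interpreter.

$ gcloud compute ssh CLUSTER_NAME-m --zone=ZONE
$ which python    # expected to point into /opt/conda/anaconda/bin, per the install path above
$ conda list      # lists the bundled data science packages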

Hive WebHCat

The Hive WebHCat server (2.3.2) provides a REST API for HCatalog. The REST service is available on port 50111 on the cluster's first master node.

$ gcloud beta dataproc clusters create … \
    --properties dataproc:alpha.components=HIVE_WEBHCAT
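
As a quick smoke test (a sketch; run it from the master node or another host that can reach it, with CLUSTER_NAME-m assumed as the first master's hostname), query WebHCat's standard status endpoint under its /templeton/v1 base path. A healthy server typically returns a small JSON document such as {"status":"ok","version":"v1"}.

$ curl http://CLUSTER_NAME-m:50111/templeton/v1/status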

Jupyter Notebook

Jupyter (4.4.0) is a Web-based notebook for interactive data analytics. The Jupyter Web UI is available on port 8123 on the cluster's first master node. By default, notebooks are saved in Cloud Storage in the Cloud Dataproc staging bucket (specified by the user or auto-created). Python2 and PySpark kernels are available for Jupyter notebooks.

$ gcloud beta dataproc clusters create … \
    --properties ^;^dataproc:alpha.components=ANACONDA,JUPYTER

In the above command, the ^;^ prefix at the start of the value passed to the --properties flag changes the delimiter between properties from "," to ";", which allows the gcloud command to treat "ANACONDA,JUPYTER" as a single value for the dataproc:alpha.components property (see gcloud topic escaping for more information).
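
Depending on your firewall rules, the notebook port may not be reachable directly from outside the cluster's network. One common way to reach the Jupyter UI (a sketch, not the only option; CLUSTER_NAME and ZONE are placeholders) is to open an SSH tunnel to the first master node and then browse to http://localhost:8123; the same pattern works for the Zeppelin UI on port 8080 and WebHCat on port 50111.

$ gcloud compute ssh CLUSTER_NAME-m --zone=ZONE -- \
    -L 8123:localhost:8123 -N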

Zeppelin Notebook

Zeppelin Notebook (0.8.0) is a Web-based notebook for interactive data analytics. The Zeppelin Web UI is available on port 8080 on the cluster's first master node.

$ gcloud beta dataproc clusters create … \
    --properties dataproc:alpha.components=ZEPPELIN