Dataproc Component Gateway

Some of the default open source components included with Google Dataproc clusters, such as Apache Hadoop and Apache Spark, provide web interfaces. These interfaces can be used to manage and monitor cluster resources and facilities, such as the YARN resource manager, the Hadoop Distributed File System (HDFS), MapReduce, and Spark. Component Gateway provides secure access to web endpoints for Dataproc default and optional components.

Clusters created with Dataproc image version 1.3.29 and later can enable access to component web interfaces without relying on SSH tunnels or modifying firewall rules to allow inbound traffic.

Considerations

  • Component web interfaces can be accessed by users who have dataproc.clusters.use IAM permission. By default, only project members have this permission (see Dataproc Roles).
  • Component Gateway cannot be used to access REST APIs, such as Apache Hadoop YARN and Apache Livy, and history servers.
  • Component Gateway is not available with Kerberized clusters.
  • When Component Gateway is enabled, Dataproc runs additional proxying services on the first master node in the cluster to forward web interface traffic.
  • Component Gateway does not enable direct access to node:port interfaces; instead, it automatically proxies a specific subset of services. To reach services on cluster nodes directly (node:port), use an SSH SOCKS proxy.
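For node:port access, an SSH SOCKS proxy works as follows; a minimal sketch, assuming a cluster whose first master node is my-cluster-m in zone us-central1-a (both names are illustrative):

```shell
# Open a SOCKS tunnel to the cluster's first master node.
# -D 1080 starts dynamic port forwarding on local port 1080; -N skips a login shell.
gcloud compute ssh my-cluster-m \
    --zone=us-central1-a \
    -- -D 1080 -N

# In a second terminal, start a browser session that routes all traffic
# through the tunnel (Chrome shown; the binary path varies by OS).
/usr/bin/google-chrome \
    --proxy-server="socks5://localhost:1080" \
    --user-data-dir=/tmp/my-cluster-m
```

With the tunnel open, node:port URLs such as http://my-cluster-m:8088 resolve inside the cluster's network.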

Creating a cluster with Component Gateway

Console

To enable Component Gateway from the Cloud Console, check the Component Gateway checkbox on the Dataproc Create a cluster form.

gcloud command

Run the Cloud SDK gcloud dataproc clusters create command locally in a terminal window or in Cloud Shell.

gcloud dataproc clusters create cluster-name \
    --enable-component-gateway \
    --region=region \
    other args ...

REST API

Set the EndpointConfig.enableHttpPortAccess property to true as part of a clusters.create request.
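A minimal request body with Component Gateway enabled might look like the following sketch; the project, region, and cluster names are placeholders, and the curl call is shown commented out because it needs live credentials:

```shell
# Minimal clusters.create request body with Component Gateway enabled.
# All names below are placeholders.
cat > request.json <<'JSON'
{
  "projectId": "PROJECT_ID",
  "clusterName": "cluster-name",
  "config": {
    "endpointConfig": {
      "enableHttpPortAccess": true
    }
  }
}
JSON

# Submit it (requires gcloud credentials):
# curl -X POST \
#   -H "Authorization: Bearer $(gcloud auth print-access-token)" \
#   -H "Content-Type: application/json" \
#   -d @request.json \
#   "https://dataproc.googleapis.com/v1/projects/PROJECT_ID/regions/region/clusters"
```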

Viewing and accessing Component Gateway URLs

When Component Gateway is enabled on a cluster, you can connect to component web interfaces running on the cluster's first master node by clicking links provided on the Cloud Console. Component Gateway also sets endpointConfig.httpPorts with a map of port names to URLs. As an alternative to using the console, you can use the gcloud command-line tool or the Dataproc REST API to view this mapping information, then copy and paste the URL into your browser to connect with the component's UI.

Console

Navigate to the Dataproc Clusters form on Google Cloud Console, then select your cluster to open the Cluster details form. Click the Web Interfaces tab to display a list of Component Gateway links to the web interfaces of default and optional components installed on the cluster. Click a link to open the web interface running on the master node of the cluster in your local browser.

gcloud command

Run the Cloud SDK gcloud dataproc clusters describe command locally in a terminal window or in Cloud Shell.

gcloud dataproc clusters describe cluster-name \
    --region=region

Sample output

...
config:
  endpointConfig:
    enableHttpPortAccess: true
    httpPorts:
      HDFS NameNode: https://584bbf70-7a12-4120-b25c-31784c94dbb4-dot-dataproc.google.com/hdfs/
      MapReduce Job History: https://584bbf70-7a12-4120-b25c-31784c94dbb4-dot-dataproc.google.com/jobhistory/
      Spark HistoryServer: https://584bbf70-7a12-4120-b25c-31784c94dbb4-dot-dataproc.google.com/sparkhistory/
      YARN ResourceManager: https://584bbf70-7a12-4120-b25c-31784c94dbb4-dot-dataproc.google.com/yarn/
      YARN Application Timeline: https://584bbf70-7a12-4120-b25c-31784c94dbb4-dot-dataproc.google.com/apphistory/
...
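Because httpPorts is plain YAML in the describe output, individual URLs can be pulled out with standard text tools; a sketch that extracts the Spark History Server URL, using an inline copy of the sample output so the parsing step is self-contained:

```shell
# Inline copy of the describe output, so this example runs on its own.
# In practice: gcloud dataproc clusters describe cluster-name --region=region > cluster.yaml
cat > cluster.yaml <<'EOF'
config:
  endpointConfig:
    enableHttpPortAccess: true
    httpPorts:
      Spark HistoryServer: https://584bbf70-7a12-4120-b25c-31784c94dbb4-dot-dataproc.google.com/sparkhistory/
      YARN ResourceManager: https://584bbf70-7a12-4120-b25c-31784c94dbb4-dot-dataproc.google.com/yarn/
EOF

# Print whatever follows the "Spark HistoryServer:" key.
sed -n 's/^ *Spark HistoryServer: //p' cluster.yaml
```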

REST API

Call clusters.get to get the endpointConfig.httpPorts map of port names to URLs.
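A sketch of reading the map from the REST response; the curl call is commented out because it needs live credentials, and an inline sample response stands in for it:

```shell
# Fetch the cluster resource (requires gcloud credentials):
# curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
#   "https://dataproc.googleapis.com/v1/projects/PROJECT_ID/regions/region/clusters/cluster-name" \
#   > cluster.json

# Inline sample response standing in for the curl call above.
cat > cluster.json <<'JSON'
{"config": {"endpointConfig": {"enableHttpPortAccess": true,
  "httpPorts": {"YARN ResourceManager":
    "https://584bbf70-7a12-4120-b25c-31784c94dbb4-dot-dataproc.google.com/yarn/"}}}}
JSON

# Print each UI name with its URL.
python3 -c 'import json
ports = json.load(open("cluster.json"))["config"]["endpointConfig"]["httpPorts"]
for name, url in ports.items():
    print(name, url)'
```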

Using Component Gateway with VPC-SC

Component Gateway supports VPC Service Controls. For service perimeter enforcement, requests to interfaces through Component Gateway are treated as part of the Dataproc API surface, and any access policies that control permissions for dataproc.googleapis.com will also control access to Component Gateway UIs.

Component Gateway also supports VPC-SC configurations that rely on private Google connectivity for Dataproc clusters without external IP addresses, but you must manually configure your network to allow access from the Dataproc master VM to *.dataproc.cloud.google.com through the restricted Google virtual IP range 199.36.153.4/30 by doing the following:

  1. Follow the instructions to configure private Google connectivity for all Google APIs.
  2. Either Configure DNS with Cloud DNS or configure DNS locally on the Dataproc master node to allow access to *.dataproc.cloud.google.com.

Configure DNS with Cloud DNS

Create a Cloud DNS zone that maps traffic destined for *.dataproc.cloud.google.com to the restricted Google API virtual IP range.

  1. Create a managed private zone for your VPC network.

    gcloud dns managed-zones create ZONE_NAME \
     --visibility=private \
     --networks=https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/NETWORK_NAME \
     --description=DESCRIPTION \
     --dns-name=dataproc.cloud.google.com \
     --project=PROJECT_ID
    
    • ZONE_NAME is a name for the zone that you are creating. For example, vpc. This zone name will be used in each of the following steps.

    • PROJECT_ID is the ID of the project that hosts your VPC network.

    • NETWORK_NAME is the name of your VPC network.

    • DESCRIPTION is an optional, human-readable description of the managed zone.

  2. Start a transaction.

    gcloud dns record-sets transaction start --zone=ZONE_NAME
    
    • ZONE_NAME is your zone name.
  3. Add DNS records.

    Add an A record for the wildcard domain:

    gcloud dns record-sets transaction add --name=*.dataproc.cloud.google.com. \
        --type=A 199.36.153.4 199.36.153.5 199.36.153.6 199.36.153.7 \
        --zone=ZONE_NAME \
        --ttl=300

    Add an A record for the root domain:

    gcloud dns record-sets transaction add --name=dataproc.cloud.google.com. \
        --type=A 199.36.153.4 199.36.153.5 199.36.153.6 199.36.153.7 \
        --zone=ZONE_NAME \
        --ttl=300

    • ZONE_NAME is your zone name.
  4. Execute the transaction.

    gcloud dns record-sets transaction execute --zone=ZONE_NAME --project=PROJECT_ID
    
    • ZONE_NAME is your zone name.

    • PROJECT_ID is the ID of the project that hosts your VPC network.
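After the transaction executes, you can confirm that the records landed; a sketch using the same placeholders as the steps above:

```shell
# List all record sets in the private zone; the wildcard and root A records
# for dataproc.cloud.google.com should appear in the output.
gcloud dns record-sets list --zone=ZONE_NAME --project=PROJECT_ID
```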

Configure DNS locally on Dataproc master node with an initialization action

You can locally configure DNS on Dataproc master nodes to allow private connectivity to dataproc.cloud.google.com. This procedure is intended for short-term testing and development. It is not recommended for use in production workloads.

  1. Stage the initialization action to Cloud Storage.

    cat <<'EOF' >component-gateway-vpc-sc-dns-init-action.sh
    #!/bin/bash
    readonly ROLE="$(/usr/share/google/get_metadata_value attributes/dataproc-role)"
    
    if [[ "${ROLE}" == 'Master' ]]; then
      readonly PROXY_ENDPOINT=$(grep "^dataproc.proxy.agent.endpoint=" \
        "/etc/google-dataproc/dataproc.properties" | \
        tail -n 1 | cut -d '=' -f 2- | sed -r 's/\\([#!=:])/\1/g')
    
      readonly HOSTNAME=$(echo ${PROXY_ENDPOINT} | \
        sed -n -E 's;^https://([^/?#]*).*;\1;p')
    
      echo "199.36.153.4 ${HOSTNAME}  # Component Gateway VPC-SC" >> "/etc/hosts"
    fi
    EOF
    
    gsutil cp component-gateway-vpc-sc-dns-init-action.sh gs://BUCKET/
    
    • BUCKET is a Cloud Storage bucket accessible from the Dataproc cluster.
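The endpoint-parsing pipeline inside the script can be tried locally before staging it; a sketch with a made-up proxy endpoint value standing in for the real dataproc.properties entry:

```shell
# Simulate the relevant line of /etc/google-dataproc/dataproc.properties.
# The endpoint value is made up; Java properties files escape ':' as '\:'.
printf '%s\n' 'dataproc.proxy.agent.endpoint=https\://584bbf70-7a12-4120-b25c-31784c94dbb4-dot-dataproc.google.com/gateway' \
  > dataproc.properties

# Same pipeline as the init action: take the value after '=', un-escape '\:' etc.
PROXY_ENDPOINT=$(grep "^dataproc.proxy.agent.endpoint=" dataproc.properties | \
  tail -n 1 | cut -d '=' -f 2- | sed -r 's/\\([#!=:])/\1/g')

# Strip the scheme and path, leaving only the hostname.
GATEWAY_HOST=$(echo ${PROXY_ENDPOINT} | sed -n -E 's;^https://([^/?#]*).*;\1;p')

# The line the init action would append to /etc/hosts:
echo "199.36.153.4 ${GATEWAY_HOST}  # Component Gateway VPC-SC"
```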
  2. Create a Dataproc cluster with the staged initialization action and Component Gateway enabled.

    gcloud dataproc clusters create cluster-name \
        --region=region \
        --initialization-actions=gs://BUCKET/component-gateway-vpc-sc-dns-init-action.sh \
        --enable-component-gateway \
        other args ...
    
    • BUCKET is the Cloud Storage bucket used in step 1, above.
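To verify that the initialization action took effect, you can check /etc/hosts on the master node; a sketch, with the cluster name and zone as placeholders:

```shell
# SSH to the first master node and look for the entry the init action appended.
gcloud compute ssh cluster-name-m --zone=zone \
    --command='grep "Component Gateway VPC-SC" /etc/hosts'
```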

What's Next