You can install additional components like Docker when you create a Dataproc cluster using the Optional components feature. This page describes the Docker component.
The Dataproc Docker component installs a Docker daemon on each cluster node and creates a Linux user "docker" and a Linux group "docker" on each node to run the Docker daemon. The component also creates a "docker" systemd service that runs dockerd. Use the systemd service to manage the lifecycle of the Docker service.
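For example, you can check and restart the daemon on a cluster node with standard systemctl commands (a minimal illustration using the "docker" service described above):

```
# Check the status of the Docker daemon managed by systemd.
sudo systemctl status docker

# Restart the Docker service, for example after changing its configuration.
sudo systemctl restart docker
```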
Install the component
Install the component when you create a Dataproc cluster. The Docker component can be installed on clusters created with Dataproc image version 1.5 or later.

See Supported Dataproc versions for the component version included in each Dataproc image release.
gcloud command

To create a Dataproc cluster that includes the Docker component, use the gcloud dataproc clusters create cluster-name command with the --optional-components flag:
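```
gcloud dataproc clusters create cluster-name \
    --optional-components=DOCKER \
    --region=region \
    --image-version=1.5 \
    ... other flags
```

REST API

The Docker component can be specified through the Dataproc API using SoftwareConfig.Component as part of a clusters.create request. As a minimal sketch of such a request, the call below creates a cluster with the Docker component; my-project, us-central1, and my-cluster are placeholder values:

```
# Create a cluster with the Docker optional component through the
# clusters.create REST endpoint. Placeholder values: my-project,
# us-central1, my-cluster.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
        "clusterName": "my-cluster",
        "config": {
          "softwareConfig": {
            "imageVersion": "1.5",
            "optionalComponents": ["DOCKER"]
          }
        }
      }' \
  "https://dataproc.googleapis.com/v1/projects/my-project/regions/us-central1/clusters"
```

Console

1. Enable the component.
   - In the Google Cloud console, open the Dataproc Create a cluster page. The Set up cluster panel is selected.
   - In the Components section, under Optional components, select Docker and other optional components to install on your cluster.

Enable Docker on YARN

See Customize your Spark job runtime environment with Docker on YARN to use a customized Docker image with YARN.

Docker Logging

By default, the Dataproc Docker component writes logs to Cloud Logging by setting the gcplogs driver as the default logging driver. See Viewing your logs. As an illustrative sketch, assuming the gcplogs driver's default log name (gcplogs-docker-driver), you can read recent container logs with:

```
# Read recent Docker container logs from Cloud Logging.
# Assumes the gcplogs driver's default log name, gcplogs-docker-driver.
gcloud logging read 'logName:"gcplogs-docker-driver"' --limit=10
```

Docker Registry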
The Dataproc Docker component configures Docker to use Container Registry in addition to the default Docker registries. Docker uses the Docker credential helper to authenticate with Container Registry.
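For example, assuming a hypothetical image my-image in a placeholder project my-project, a cluster node can pull from Container Registry directly:

```
# Pull an image from Container Registry on a cluster node.
# Authentication is handled by the configured Docker credential helper.
# my-project and my-image are placeholder names.
docker pull gcr.io/my-project/my-image:latest
```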
Use the Docker component on a Kerberos cluster
You can install the Docker optional component on a cluster that is
being created with
Kerberos security enabled.
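As an illustrative sketch, a container that talks to Hadoop services directly can be given credentials by mounting a keytab and the Kerberos configuration at run time, then obtaining a ticket inside the container. The image name, paths, and principal below are hypothetical:

```
# Mount the Kerberos config and a keytab into the container (hypothetical
# paths, image, and principal), then obtain a ticket before the client
# talks to Hadoop services.
docker run \
  -v /etc/krb5.conf:/etc/krb5.conf:ro \
  -v /path/to/user.keytab:/tmp/user.keytab:ro \
  my-image \
  sh -c 'kinit -kt /tmp/user.keytab user@EXAMPLE.COM && ./my-hadoop-client'
```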
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[[["\u003cp\u003eThe Docker component, installable on Dataproc clusters, provides a Docker daemon, a "docker" Linux user and group, and a "docker" systemd service on each cluster node.\u003c/p\u003e\n"],["\u003cp\u003eInstallation of the Docker component is supported on Dataproc clusters with image version 1.5 or later, and can be installed when creating a Dataproc cluster.\u003c/p\u003e\n"],["\u003cp\u003eThe component can be installed using the \u003ccode\u003egcloud\u003c/code\u003e command with the \u003ccode\u003e--optional-components=DOCKER\u003c/code\u003e flag, via the Dataproc API, or through the Google Cloud console during cluster creation.\u003c/p\u003e\n"],["\u003cp\u003eBy default, the Dataproc Docker component directs logs to Cloud Logging and configures Docker to use Container Registry in addition to the standard Docker registries.\u003c/p\u003e\n"],["\u003cp\u003eThe Docker component can be installed on clusters with Kerberos security enabled, but containers interacting with Hadoop services need their own Kerberos credentials.\u003c/p\u003e\n"]]],[],null,["You can install additional components like Docker when you create a Dataproc\ncluster using the\n[Optional components](/dataproc/docs/concepts/components/overview#available_optional_components)\nfeature. This page describes the Docker component.\n\nThe Dataproc component installs a\n[Docker daemon](https://docs.docker.com/get-started/overview/#the-docker-daemon)\non each cluster node and creates a Linux user \"docker\" and a Linux group\n\"docker\" on each node to run the Docker daemon. This component also creates\na \"docker\" [`systemd`](https://docs.docker.com/config/daemon/systemd/)\nservice to run the [`dockerd`](https://docs.docker.com/engine/reference/commandline/dockerd/) service. You should use the `systemd` service to manage the\nlifecycle of the Docker service.\n\nInstall the component\n\nInstall the component when you create a Dataproc cluster.\nThe Docker component can be installed on clusters created with\nDataproc **image [version 1.5](/dataproc/docs/concepts/versioning/dataproc-release-1.5)\nor later**.\n\nSee\n[Supported Dataproc versions](/dataproc/docs/concepts/versioning/dataproc-versions#supported_cloud_dataproc_versions)\nfor the component version included in each Dataproc image release. \n\ngcloud command\n\nTo create a Dataproc cluster that includes the Docker component,\nuse the\n[gcloud dataproc clusters create](/sdk/gcloud/reference/dataproc/clusters/create) \u003cvar translate=\"no\"\u003ecluster-name\u003c/var\u003e\ncommand with the `--optional-components` flag. \n\n```\ngcloud dataproc clusters create cluster-name \\\n --optional-components=DOCKER \\\n --region=region \\\n --image-version=1.5 \\\n ... other flags\n```\n\nREST API\n\nThe Docker component can be specified through the Dataproc API using\n[SoftwareConfig.Component](/dataproc/docs/reference/rest/v1/ClusterConfig#Component)\nas part of a\n[clusters.create](/dataproc/docs/reference/rest/v1/projects.regions.clusters/create)\nrequest.\n\nConsole\n\n1. 
Enable the component.\n - In the Google Cloud console, open the Dataproc [Create a cluster](https://console.cloud.google.com/dataproc/clustersAdd) page. The Set up cluster panel is selected.\n - In the Components section:\n - Under Optional components, select Docker and other optional components to install on your cluster.\n\nEnable Docker on YARN\n\nSee [Customize your Spark job runtime environment with Docker on YARN](/dataproc/docs/guides/dataproc-docker-yarn)\nto use a customized Docker image with YARN.\n\nDocker Logging\n\nBy default, the Dataproc Docker component writes logs to\nCloud Logging by [setting the `gcplogs driver`](/community/tutorials/docker-gcplogs-driver#setting_the_default_logging_driver)---see\n[Viewing your logs](/community/tutorials/docker-gcplogs-driver#viewing_your_logs).\n\nDocker Registry\n\nThe Dataproc Docker component configures Docker to\nuse Container Registry in addition to the default Docker registries.\nDocker will use the Docker credential helper to authenticate with\nContainer Registry.\n\nUse the Docker component on a Kerberos cluster\n\nYou can install the Docker optional component on a cluster that is\nbeing created with\n[Kerberos security enabled](/dataproc/docs/concepts/configuring-clusters/security#enabling_hadoop_secure_mode_via_kerberos).\n| Docker is not part of the Hadoop ecosystem, and isn't recognized by Hadoop services. If you run a container that communicates with Hadoop services directly, your container must have the required Kerberos keytab file and credential."]]