You can use Dataproc to create one or more Compute Engine instances that can connect to a Cloud Bigtable instance and run Hadoop jobs. This page explains how to use Dataproc to automate the following tasks:
- Installing Hadoop and the HBase client for Java
- Configuring Hadoop and Cloud Bigtable
- Setting the correct authorization scopes for Cloud Bigtable
After you create your Dataproc cluster, you can use the cluster to run Hadoop jobs that read and write data to and from Cloud Bigtable.
This page assumes that you are already familiar with Hadoop. For additional information about Dataproc, see the Dataproc documentation.
Before you begin
Before you begin, you'll need to complete the following tasks:
- Create a Cloud Bigtable instance. Be sure to note the project ID and Cloud Bigtable instance ID.
- Enable the Cloud Bigtable, Cloud Bigtable Admin, Dataproc, and Cloud Storage JSON APIs. (An example gcloud command for enabling these APIs appears after this list.)
- Verify that your user account is in a role that includes the required permission. To check your role, open the IAM page in the Cloud Console.
- Install the Cloud SDK and the gcloud command-line tool. See the Cloud SDK setup instructions for details.
- Install the gsutil tool by running the following command:
gcloud components install gsutil
- Install Apache Maven, which is used to run a sample Hadoop job.
On Debian GNU/Linux or Ubuntu, run the following command:
sudo apt-get install maven
On RedHat Enterprise Linux or CentOS, run the following command:
sudo yum install maven
On macOS, install Homebrew, then run the following command:
brew install maven
- Clone the GitHub repository that contains an example of a Hadoop job that uses Cloud Bigtable:
git clone https://github.com/GoogleCloudPlatform/cloud-bigtable-examples.git
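For example, you can enable all four APIs from the command line with gcloud services enable. The service IDs shown below are the ones commonly used for these APIs; confirm them in the API Library for your project:
gcloud services enable bigtable.googleapis.com bigtableadmin.googleapis.com \
    dataproc.googleapis.com storage-api.googleapis.com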
Creating a Cloud Storage bucket
Dataproc uses a Cloud Storage bucket to store temporary files. To prevent file-naming conflicts, create a new bucket for Dataproc.
Cloud Storage bucket names must be globally unique across all buckets. Choose a bucket name that is likely to be available, such as a name that incorporates your Google Cloud project's name.
After you choose a name, use the following command to create a new bucket, replacing values in brackets with the appropriate values:
gsutil mb -p [PROJECT_ID] gs://[BUCKET_NAME]
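For example, if your Google Cloud project ID is my-project (a hypothetical value), a bucket name that incorporates the project ID is likely to be available:
gsutil mb -p my-project gs://my-project-dataproc-temp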
Creating the Dataproc cluster
Run the following command to create a Dataproc cluster with four worker nodes, replacing values in brackets with the appropriate values:
gcloud dataproc clusters create [DATAPROC_CLUSTER_NAME] --bucket [BUCKET_NAME] \
    --zone [ZONE] --num-workers 4 --master-machine-type n1-standard-4 \
    --worker-machine-type n1-standard-4
See the gcloud dataproc clusters create documentation for additional settings that you can configure. If you get an error message that includes the text Insufficient 'CPUS' quota, try setting the --num-workers flag to a lower value.
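For example, with hypothetical values for the cluster name, bucket, and zone, the command looks similar to the following:
gcloud dataproc clusters create my-dataproc-cluster --bucket my-project-dataproc-temp \
    --zone us-central1-b --num-workers 4 --master-machine-type n1-standard-4 \
    --worker-machine-type n1-standard-4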
Testing the Dataproc cluster
After you set up your Dataproc cluster, you can test the cluster by running a sample Hadoop job that counts the number of times a word appears in a text file. The sample job uses Cloud Bigtable to store the results of the operation. You can use this sample job as a reference when you set up your own Hadoop jobs.
Running the sample Hadoop job
- In the directory where you cloned the GitHub repository, change to the directory that contains the sample Hadoop job.
- Run the following command to build the project, replacing values in brackets with the appropriate values (a complete example with sample values appears after these steps):
mvn clean package -Dbigtable.projectID=[PROJECT_ID] \
    -Dbigtable.instanceID=[BIGTABLE_INSTANCE_ID]
- Run the following command to start the Hadoop job, replacing values in brackets with the appropriate values:
./cluster.sh start [DATAPROC_CLUSTER_NAME]
When the job is complete, it displays the name of the output table. The output table name is the word WordCount followed by a hyphen and a unique number:
Output table is: WordCount-1234567890
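For example, with a hypothetical project ID my-project, Cloud Bigtable instance ID my-bigtable-instance, and Dataproc cluster name my-dataproc-cluster, the two commands look similar to the following:
# Build the sample and point it at your Cloud Bigtable instance (example values).
mvn clean package -Dbigtable.projectID=my-project \
    -Dbigtable.instanceID=my-bigtable-instance
# Submit the sample Hadoop job to the Dataproc cluster.
./cluster.sh start my-dataproc-cluster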
Verifying the results of the Hadoop job
Optionally, after you run the Hadoop job, you can use the cbt tool to verify that the job ran successfully:
- Open a terminal window in Cloud Shell.
- Install the cbt tool by running the following commands:
gcloud components update
gcloud components install cbt
- Scan the output table to view the results of the Hadoop job, replacing [BIGTABLE_INSTANCE_ID] with your Cloud Bigtable instance ID and [TABLE_NAME] with the name of your output table:
cbt -instance [BIGTABLE_INSTANCE_ID] read [TABLE_NAME]
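For example, with a hypothetical instance ID and the output table name shown earlier, the command looks similar to the following. Each row that is printed contains a word and the number of times it appeared:
cbt -instance my-bigtable-instance read WordCount-1234567890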
Now that you've verified that the cluster is set up correctly, you can use it to run your own Hadoop jobs.
Deleting the Dataproc cluster
When you are done using the Dataproc cluster, run the following
command to shut down and delete the cluster, replacing
with the name of your Dataproc cluster:
gcloud dataproc clusters delete [DATAPROC_CLUSTER_NAME]
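For example, if you named your cluster my-dataproc-cluster (a hypothetical value):
gcloud dataproc clusters delete my-dataproc-cluster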