The Cloud Storage connector is an open source Java library that lets you run Apache Hadoop or Apache Spark jobs directly on data in Cloud Storage, and offers a number of benefits over choosing the Hadoop Distributed File System (HDFS).
Benefits of the Cloud Storage connector
- Direct data access – Store your data in Cloud Storage and access it directly, with no need to transfer it into HDFS first.
- HDFS compatibility – You can easily access your data in Cloud Storage using the gs:// prefix instead of hdfs://.
- Interoperability – Storing data in Cloud Storage enables seamless interoperability between Spark, Hadoop, and Google services.
- Data accessibility – When you shut down a Hadoop cluster, you still have access to your data in Cloud Storage, unlike HDFS.
- High data availability – Data stored in Cloud Storage is highly available and globally replicated without a loss of performance.
- No storage management overhead – Unlike HDFS, Cloud Storage requires no routine maintenance such as checking the file system, upgrading or rolling back to a previous version of the file system, etc.
- Quick startup – In HDFS, a MapReduce job can't start until the NameNode is out of safe mode—a process that can take from a few seconds to many minutes depending on the size and state of your data. With Cloud Storage, you can start your job as soon as the task nodes start, leading to significant cost savings over time.
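Because the connector registers the gs:// scheme with Hadoop, moving a job from HDFS to Cloud Storage is essentially a URI scheme change. The small illustrative helper below (not part of the connector; the bucket name is a hypothetical placeholder) shows how an hdfs:// path maps to its gs:// equivalent:

```java
import java.net.URI;

public class SchemeSwap {
    // Illustrative helper: rewrite an hdfs:// URI to the equivalent gs://
    // URI, keeping the object path unchanged. With the connector installed,
    // Hadoop accepts the gs:// form anywhere it would accept hdfs://.
    static String toGs(String hdfsUri, String bucket) {
        URI u = URI.create(hdfsUri);
        return "gs://" + bucket + u.getPath();
    }

    public static void main(String[] args) {
        // "my-bucket" is a hypothetical bucket name.
        System.out.println(toGs("hdfs://namenode:8020/dir/file", "my-bucket"));
        // → gs://my-bucket/dir/file
    }
}
```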
Getting the connector
Cloud Dataproc clusters
The Cloud Storage connector is installed by default on all Cloud Dataproc cluster nodes under /usr/lib/hadoop/lib/ (for image versions 1.4 and higher, the location is /usr/local/share/google/dataproc/lib/). It's available in both Spark and Hadoop environments.
Other Spark/Hadoop clusters
You can download the following Cloud Storage connectors for Hadoop:
- Cloud Storage connector for Hadoop 1.x
- Cloud Storage connector for Hadoop 2.x
- Cloud Storage connector for Hadoop 3.x
Using the connector
There are multiple ways to access data stored in Cloud Storage:
- In a Spark (or PySpark) or Hadoop application using the gs:// prefix.
- The hadoop shell: hadoop fs -ls gs://CONFIGBUCKET/dir/file.
- The GCP Console Cloud Storage browser.
- Using the gsutil command-line tool.
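With the connector on an application's classpath, the standard Hadoop FileSystem API works against gs:// paths the same way it does against hdfs:// paths. A minimal sketch of listing and reading objects from a Spark or Hadoop driver (the bucket and object names are hypothetical placeholders, and running this requires a configured cluster with Cloud Storage credentials):

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GcsListing {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The connector handles the gs:// scheme, so the same FileSystem
        // API used for HDFS works unchanged against Cloud Storage.
        FileSystem fs = FileSystem.get(URI.create("gs://my-bucket"), conf);

        // List the objects under a "directory" prefix.
        for (FileStatus status : fs.listStatus(new Path("gs://my-bucket/dir/"))) {
            System.out.println(status.getPath());
        }

        // Open an object for reading, just like any other Hadoop path.
        try (FSDataInputStream in = fs.open(new Path("gs://my-bucket/dir/file"))) {
            System.out.println(in.read());
        }
    }
}
```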
The Cloud Storage connector requires Java 8.
Apache Maven Dependency Information
```xml
<dependency>
  <groupId>com.google.cloud.bigdataoss</groupId>
  <artifactId>gcs-connector</artifactId>
  <version>insert "hadoopX-X.X.X" connector version number here</version>
  <scope>provided</scope>
</dependency>
```
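If your build uses Gradle rather than Maven, the same artifact coordinates apply. A sketch, with the version placeholder left as in the Maven snippet (compileOnly is the Gradle counterpart of Maven's provided scope, since Cloud Dataproc supplies the connector at runtime):

```groovy
dependencies {
    // Same coordinates as the Maven dependency; replace the placeholder
    // with an actual "hadoopX-X.X.X" connector version.
    compileOnly 'com.google.cloud.bigdataoss:gcs-connector:hadoopX-X.X.X'
}
```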
- Learn more about Cloud Storage