Dataproc is a fully managed and highly scalable service for running Apache Hadoop, Apache Spark, Apache Flink, Presto, and 30+ other open source tools and frameworks. Use Dataproc for data lake modernization, ETL, and secure data science at scale, fully integrated with Google Cloud, at a fraction of the cost.
Modernize your open source data processing
Whether you need VMs or Kubernetes, extra memory for Presto, or even GPUs, Dataproc can help accelerate your data and analytics processing with purpose-built or serverless environments, on demand.
Submit Spark jobs that auto-provision and autoscale on demand. See the Dataproc quickstart for details.
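For example, with the google-cloud-dataproc Python client you can submit a serverless Spark batch in a few lines. This is a minimal sketch; the project ID, region, and Cloud Storage URIs are placeholder values.

```python
from google.cloud import dataproc_v1

project_id = "my-project"  # placeholder project
region = "us-central1"     # placeholder region

# The batch controller must target the regional Dataproc endpoint.
client = dataproc_v1.BatchControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

# A PySpark batch: Dataproc provisions and scales the infrastructure for you.
batch = dataproc_v1.Batch(
    pyspark_batch=dataproc_v1.PySparkBatch(
        main_python_file_uri="gs://my-bucket/jobs/wordcount.py"  # placeholder job
    )
)

operation = client.create_batch(
    parent=f"projects/{project_id}/locations/{region}", batch=batch
)
result = operation.result()  # Blocks until the batch finishes.
print(f"Batch finished in state: {result.state.name}")
```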
Dataproc initialization actions
Add other OSS projects to your Dataproc clusters with pre-built initialization actions.
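As a sketch, here is how an initialization action might be attached at cluster creation with the Python client. The project, cluster name, and pip packages are placeholders; the script path points at the public regional initialization-actions bucket.

```python
from google.cloud import dataproc_v1

region = "us-central1"  # placeholder region
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = dataproc_v1.Cluster(
    cluster_name="init-demo",  # placeholder cluster name
    config=dataproc_v1.ClusterConfig(
        # Run the pre-built pip-install action on every node at startup.
        initialization_actions=[
            dataproc_v1.NodeInitializationAction(
                executable_file=(
                    f"gs://goog-dataproc-initialization-actions-{region}"
                    "/python/pip-install.sh"
                )
            )
        ],
        # The pip-install action reads its package list from cluster metadata.
        gce_cluster_config=dataproc_v1.GceClusterConfig(
            metadata={"PIP_PACKAGES": "pandas scikit-learn"}
        ),
    ),
)

client.create_cluster(
    project_id="my-project", region=region, cluster=cluster
).result()  # Waits until the cluster is running.
```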
Open source connectors
Libraries and tools for Apache Hadoop interoperability.
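For instance, the Cloud Storage connector exposes gs:// paths to any Hadoop-compatible engine. A brief PySpark sketch follows; the bucket and column names are placeholders, and on a Dataproc cluster the connector comes pre-installed.

```python
from pyspark.sql import SparkSession

# On Dataproc the Cloud Storage connector is pre-installed, so gs:// paths
# behave like any other Hadoop-compatible filesystem.
spark = SparkSession.builder.appName("gcs-connector-demo").getOrCreate()

df = spark.read.csv("gs://my-bucket/data/input.csv", header=True)  # placeholder bucket
df.groupBy("category").count().write.parquet("gs://my-bucket/data/output/")  # placeholder column
```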
Dataproc Workflow Templates
The Dataproc WorkflowTemplates API provides a flexible and easy-to-use mechanism for managing and executing workflows.
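A minimal sketch with the Python client, using placeholder project, bucket, and step names: the template below provisions a managed (ephemeral) cluster, runs a two-job graph in dependency order, and then tears the cluster down.

```python
from google.cloud import dataproc_v1

region = "us-central1"  # placeholder region
client = dataproc_v1.WorkflowTemplateServiceClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)
parent = f"projects/my-project/regions/{region}"  # placeholder project

template = dataproc_v1.WorkflowTemplate(
    id="daily-etl",  # placeholder template id
    placement=dataproc_v1.WorkflowTemplatePlacement(
        # A managed cluster is created for the run and deleted afterward.
        managed_cluster=dataproc_v1.ManagedCluster(
            cluster_name="etl-cluster",
            config=dataproc_v1.ClusterConfig(),  # empty config = Dataproc defaults
        )
    ),
    jobs=[
        dataproc_v1.OrderedJob(
            step_id="extract",
            pyspark_job=dataproc_v1.PySparkJob(
                main_python_file_uri="gs://my-bucket/jobs/extract.py"
            ),
        ),
        dataproc_v1.OrderedJob(
            step_id="transform",
            prerequisite_step_ids=["extract"],  # runs only after "extract" succeeds
            pyspark_job=dataproc_v1.PySparkJob(
                main_python_file_uri="gs://my-bucket/jobs/transform.py"
            ),
        ),
    ],
)

client.create_workflow_template(parent=parent, template=template)

# Instantiating the template runs the whole job graph as one operation.
client.instantiate_workflow_template(
    name=f"{parent}/workflowTemplates/daily-etl"
).result()
```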
Enterprises are migrating their existing on-premises Apache Hadoop and Spark clusters over to Dataproc to manage costs and unlock the power of elastic scale. With Dataproc, enterprises get a fully managed, purpose-built cluster that can autoscale to support any data or analytics processing job.
Create your ideal data science environment by spinning up a purpose-built Dataproc cluster. Integrate open source software like Apache Spark, NVIDIA RAPIDS, and Jupyter notebooks with Google Cloud AI services and GPUs to help accelerate your machine learning and AI development.
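As an illustrative sketch (project, region, and cluster names are placeholders), a notebook-ready cluster can be created by enabling the Jupyter optional component and Component Gateway:

```python
from google.cloud import dataproc_v1

region = "us-central1"  # placeholder region
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = dataproc_v1.Cluster(
    cluster_name="ds-notebooks",  # placeholder cluster name
    config=dataproc_v1.ClusterConfig(
        software_config=dataproc_v1.SoftwareConfig(
            optional_components=[dataproc_v1.Component.JUPYTER]
        ),
        # Component Gateway provides secure, one-click access to the Jupyter UI.
        endpoint_config=dataproc_v1.EndpointConfig(enable_http_port_access=True),
        # GPUs, if needed, attach through InstanceGroupConfig.accelerators.
    ),
)

client.create_cluster(
    project_id="my-project", region=region, cluster=cluster
).result()
```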
| Feature | Description |
| --- | --- |
| Serverless Spark | Deploy Spark applications and pipelines that autoscale without any manual infrastructure provisioning or tuning. |
| Resizable clusters | Create and scale clusters quickly with a choice of virtual machine types, disk sizes, numbers of nodes, and networking options. |
| Autoscaling clusters | Dataproc autoscaling automates cluster resource management by adding and removing cluster workers (nodes) as load changes (see the sketch after this table). |
| Cloud integrated | Built-in integration with Cloud Storage, BigQuery, Dataplex, Vertex AI, Composer, Cloud Bigtable, Cloud Logging, and Cloud Monitoring gives you a more complete and robust data platform. |
| Versioning | Image versioning lets you switch between different versions of Apache Spark, Apache Hadoop, and other tools. |
| Cluster scheduled deletion | To help avoid incurring charges for an inactive cluster, scheduled deletion can delete a cluster after a specified idle period, at a specified future time, or after a specified duration (also shown in the sketch after this table). |
| Automatic or manual configuration | Dataproc automatically configures hardware and software but also gives you manual control. |
| Developer tools | Manage clusters through an easy-to-use web UI, the Cloud SDK, RESTful APIs, or SSH access. |
| Initialization actions | Run initialization actions to install or customize the settings and libraries you need when your cluster is created. |
| Optional components | Use optional components to install and configure additional software on the cluster. Optional components are integrated with Dataproc and offer fully configured environments for Zeppelin, Presto, and other open source components in the Apache Hadoop and Apache Spark ecosystem. |
| Custom containers and images | Dataproc Serverless Spark can be provisioned with custom Docker containers. Dataproc clusters can be provisioned with a custom image that includes your pre-installed Linux operating system packages. |
| Flexible virtual machines | Clusters can use custom machine types and preemptible virtual machines so they are exactly the size you need. |
| Component Gateway and notebook access | Dataproc Component Gateway enables secure, one-click access to the default and optional component web interfaces running on the cluster. |
| Workflow templates | Dataproc workflow templates provide a flexible and easy-to-use mechanism for managing and executing workflows. A workflow template is a reusable workflow configuration that defines a graph of jobs with information on where to run those jobs. |
| Automated policy management | Standardize security, cost, and infrastructure policies across a fleet of clusters. You can create policies for resource management, security, or networking at the project level, and steer users toward the correct images, components, metastore, and other peripheral services, which simplifies managing your fleet of clusters and serverless Spark policies. |
| Smart alerts | Dataproc recommended alerts let you adjust the thresholds of pre-configured alerts to be notified about idle clusters, runaway clusters and jobs, overutilized clusters, and more. You can further customize these alerts and build advanced cluster and job management on top of them to manage your fleet at scale. |
| Dataproc Metastore | Fully managed, highly available Hive Metastore (HMS) with fine-grained access control and integration with BigQuery metastore, Dataplex, and Data Catalog. |
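To make the autoscaling and scheduled-deletion rows above concrete, here is a hedged sketch (the policy thresholds, names, and project are assumptions): it creates an autoscaling policy, then attaches it to a cluster that also deletes itself after 30 idle minutes.

```python
import datetime
from google.cloud import dataproc_v1

region = "us-central1"  # placeholder region and project throughout
endpoint = {"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
parent = f"projects/my-project/regions/{region}"

# 1. Define an autoscaling policy: YARN-based scaling between 2 and 10 workers.
policy_client = dataproc_v1.AutoscalingPolicyServiceClient(client_options=endpoint)
policy = dataproc_v1.AutoscalingPolicy(
    id="scale-2-10",  # placeholder policy id
    basic_algorithm=dataproc_v1.BasicAutoscalingAlgorithm(
        yarn_config=dataproc_v1.BasicYarnAutoscalingConfig(
            graceful_decommission_timeout=datetime.timedelta(minutes=10),
            scale_up_factor=0.5,
            scale_down_factor=0.5,
        )
    ),
    worker_config=dataproc_v1.InstanceGroupAutoscalingPolicyConfig(
        min_instances=2, max_instances=10
    ),
)
policy_client.create_autoscaling_policy(parent=parent, policy=policy)

# 2. Create a cluster that uses the policy and is deleted after 30 idle minutes.
cluster_client = dataproc_v1.ClusterControllerClient(client_options=endpoint)
cluster = dataproc_v1.Cluster(
    cluster_name="autoscaled",  # placeholder cluster name
    config=dataproc_v1.ClusterConfig(
        autoscaling_config=dataproc_v1.AutoscalingConfig(
            policy_uri=f"{parent}/autoscalingPolicies/scale-2-10"
        ),
        lifecycle_config=dataproc_v1.LifecycleConfig(
            idle_delete_ttl=datetime.timedelta(minutes=30)
        ),
    ),
)
cluster_client.create_cluster(
    project_id="my-project", region=region, cluster=cluster
).result()
```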