Many options for many scenarios
Migrating Hadoop and Spark clusters to the cloud can deliver significant benefits, but choices that don't account for existing on-premises Hadoop workloads only make life harder for already strained IT teams. Google Cloud Platform works with customers to build Hadoop migration plans designed both to fit their current needs and to prepare them for the future. From lifting and shifting onto virtual machines to adopting new services that take advantage of cloud scale and efficiency, GCP offers a variety of solutions for bringing Hadoop and Spark workloads to the cloud in a way tailored to each customer's success.
Lift and shift Hadoop clusters
Rapidly migrate your existing Hadoop and Spark deployment as-is to Google Cloud Platform without re-architecting. Take advantage of Compute Engine, GCP's fast and flexible infrastructure-as-a-service offering, to provision your ideal Hadoop cluster while keeping your existing distribution. Let your Hadoop administrators focus on making the cluster useful, not on server procurement and hardware troubleshooting.
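As a rough sketch of the lift-and-shift approach, worker nodes for a self-managed cluster can be provisioned directly with the gcloud CLI. The instance names, zone, machine type, and disk size below are illustrative placeholders, not recommendations:

```shell
# Hypothetical example: provision three worker VMs for a
# self-managed Hadoop cluster on Compute Engine. Your existing
# Hadoop distribution would then be installed on these machines.
gcloud compute instances create hadoop-worker-1 hadoop-worker-2 hadoop-worker-3 \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --boot-disk-size=500GB
```

Sizing and instance counts would be chosen to mirror (or right-size) the on-premises cluster being migrated.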
Optimize for cloud scale and efficiency
Drive down Hadoop costs by migrating to Cloud Dataproc, Google Cloud Platform's managed Hadoop and Spark service. Explore new approaches to processing data in the Hadoop ecosystem, such as separating storage from compute with Cloud Storage and running on-demand, ephemeral clusters.
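The ephemeral-cluster pattern can be sketched as a create / submit / delete sequence with the gcloud CLI. The cluster name, region, worker count, job class, and Cloud Storage paths below are hypothetical placeholders:

```shell
# Illustrative ephemeral-cluster workflow on Cloud Dataproc:
# create a cluster, run one Spark job that reads from and writes
# to Cloud Storage, then delete the cluster so compute is only
# paid for while the job runs.
gcloud dataproc clusters create ephemeral-etl \
    --region=us-central1 \
    --num-workers=4

gcloud dataproc jobs submit spark \
    --cluster=ephemeral-etl \
    --region=us-central1 \
    --class=com.example.Etl \
    --jars=gs://my-bucket/jobs/etl.jar

gcloud dataproc clusters delete ephemeral-etl \
    --region=us-central1 --quiet
```

Because the data lives in Cloud Storage rather than on cluster-local HDFS, deleting the cluster loses no state, which is what makes this pattern viable.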
Modernize your data processing pipeline
Reduce your Hadoop operational overhead by adopting cloud managed services that remove complexity from data processing. For streaming analytics, explore a serverless option like Cloud Dataflow to handle real-time data. For analytics-focused Hadoop use cases built on SQL-compatible tools like Apache Hive, consider BigQuery, Google's enterprise-scale serverless cloud data warehouse.
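To illustrate the Hive-to-BigQuery path, a typical Hive-style aggregation can be run as a standard SQL query with the bq CLI. The project, dataset, and table names below are hypothetical placeholders:

```shell
# Hypothetical example: a Hive-style aggregation expressed as a
# BigQuery standard SQL query, run from the bq command-line tool.
bq query --use_legacy_sql=false '
  SELECT user_id, COUNT(*) AS events
  FROM `my_project.clickstream.events`
  GROUP BY user_id
  ORDER BY events DESC
  LIMIT 10'
```

Because BigQuery is serverless, there is no cluster to size or keep running between queries.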
Mapping on-premises Hadoop workloads to Google Cloud Platform products
"With our previous infrastructure, it took about three weeks to set up our Hadoop cluster, and we spent five hours a week on maintenance. It took only a few minutes to get up and running on Google Cloud Bigtable, and we don't spend any time maintaining it. It just runs."
Sergey Belov, Director of Technology, 3PM Solutions
"With Cloud Dataproc, we implemented autoscaling, which enabled us to grow or shrink the clusters easily depending on the size of the project. On top of that, we also used preemptible nodes for parts of the clusters, which helped us with cost efficiency."
Orit Yaron, VP of Cloud Platform, Outbrain