Storage & Data Transfer

New storage innovations to drive your next-gen applications

September 8, 2022
Guru Pangal

VP and GM Storage, Google Cloud

As we talk to customers large and small, we are seeing more and more “data-rich” workloads moving to the cloud. Customers are collecting more valuable data than ever before, and they want that data from different sources to be centralized and normalized before running analytics on it. Storage is becoming the common substrate for enabling higher-value services like data lakes, modeling and simulation, big data, and AI and machine learning. These applications demand the flexibility of object storage, the manageability of file storage, and the performance of block storage — all on one platform. 

As your needs evolve, we’re committed to delivering products that offer enterprise-ready performance and scale, support data-driven applications, enable business insights while remaining easy to manage, and protect your data from loss or disaster.

Last year, we made continental-scale application availability and data centralization easier by expanding the number of unique Cloud Storage Dual Regions and adding the Turbo Replication feature, which is available across nine regions and three continents. This gives you a single, continent-sized bucket, effectively delivering an RTO of zero and an optional RPO of less than 15 minutes. It also simplifies app design, with high availability and a single set of APIs regardless of where data is stored.
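If you want to try this from code, here is a minimal sketch using the google-cloud-storage Python client: it creates a bucket in the predefined NAM4 dual-region and opts it into Turbo Replication. The bucket name and project setup are placeholders, and it assumes a recent client version that exposes the RPO setting.

```python
from google.cloud import storage
from google.cloud.storage.constants import RPO_ASYNC_TURBO

client = storage.Client()  # assumes default credentials and project

# Create a bucket in a predefined dual-region (NAM4 = Iowa + South Carolina).
bucket = client.create_bucket("example-dual-region-bucket", location="NAM4")

# Opt the bucket into Turbo Replication for the ~15-minute RPO target.
bucket.rpo = RPO_ASYNC_TURBO
bucket.patch()

print(f"{bucket.name}: location={bucket.location}, rpo={bucket.rpo}")
```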

How we’re evolving storage in the cloud to meet your changing needs 

At today’s digital customer event, A Spotlight on Storage, we announced a number of storage innovations — here are a few that highlight our commitments to you. 

Advancing our enterprise-readiness, we announced Google Cloud Hyperdisk, the next generation of Persistent Disk, bringing you the ability to easily and dynamically tune the performance of your block storage to your workload. With Hyperdisk, you can provision IOPS and throughput independently for applications and adapt to changing application performance needs over time.
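To make the idea of independent tuning concrete, here is a hedged sketch using the google-cloud-compute Python client. The disk type name, capacity, and performance values are assumptions chosen for illustration, and the provisioned-throughput field applies only to Hyperdisk variants that support throughput provisioning.

```python
from google.cloud import compute_v1

# Sketch: create a Hyperdisk volume with performance provisioned
# independently of capacity. Values and the disk type are illustrative.
disk = compute_v1.Disk()
disk.name = "example-hyperdisk"
disk.size_gb = 1024
disk.type_ = "zones/us-central1-a/diskTypes/hyperdisk-balanced"  # assumed type name
disk.provisioned_iops = 20000          # IOPS tuned separately from capacity
disk.provisioned_throughput = 600      # MiB/s; assumed field on throughput-tunable types

client = compute_v1.DisksClient()
operation = client.insert(
    project="example-project", zone="us-central1-a", disk_resource=disk
)
operation.result()  # block until the disk is created
```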

We also launched Filestore Enterprise multishare for Google Kubernetes Engine (GKE). This new service enables administrators to seamlessly create a Filestore instance and carve out portions of the storage to be used simultaneously across one or thousands of GKE clusters. It also offers nondisruptive storage upgrades in the background while GKE is running, and a 99.99% regional storage availability SLA. This, combined with Backup for GKE, truly enables enterprises to modernize by bringing their stateful workloads into GKE. 
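For GKE, multishare provisioning is driven by the Filestore CSI driver through a StorageClass. As a rough sketch using the Kubernetes Python client, an administrator could register a multishare-enabled class like the one below; the parameter names are our assumptions about how the CSI driver surfaces this feature, so check the driver documentation for the authoritative spelling.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes kubectl context points at a GKE cluster

# Assumed StorageClass parameters for Filestore Enterprise multishare.
multishare_sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="filestore-enterprise-multishare"),
    provisioner="filestore.csi.storage.gke.io",
    parameters={"tier": "enterprise", "multishare": "true"},
    allow_volume_expansion=True,
)

client.StorageV1Api().create_storage_class(body=multishare_sc)
```

PersistentVolumeClaims that reference this class would then receive shares carved out of a common Filestore Enterprise instance.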

Based on your input, we continue to evolve our storage to support your data-driven applications. To make it easier to manage storage and help you optimize your costs, we’ve developed a new Cloud Storage feature called Autoclass, which automatically moves objects based on last access time, by policy, to colder or warmer storage classes. We have seen many of you do this manually, and are excited to offer this easier and automated policy-based option to optimize Cloud Storage costs. 
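Here is a minimal sketch of turning this on with the google-cloud-storage Python client, assuming a client version recent enough to expose the Autoclass setting (the attribute name below is our reading of how the client surfaces it, and the bucket name is a placeholder).

```python
from google.cloud import storage

client = storage.Client()

# Enable Autoclass at bucket-creation time so objects are transitioned
# between storage classes automatically based on access patterns.
bucket = client.bucket("example-autoclass-bucket")
bucket.autoclass_enabled = True  # assumed attribute in recent client versions
bucket = client.create_bucket(bucket, location="US")

print(f"{bucket.name}: autoclass_enabled={bucket.autoclass_enabled}")
```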

“Not only would it cost valuable engineering resources to build cost-optimization ourselves, but it would open us up to potentially costly mistakes in which we incur retrieval charges for prematurely archived data. Autoclass helps us reduce storage costs and achieve price predictability in a simple and automated way.” —Ian Mathews, co-founder, Redivis

We’re focused on helping you draw more business insights from your stored data and making that data easier to manage and optimize. With the release of the new Storage Insights, you gain actionable insights about the objects stored in Cloud Storage. Whether you’re managing millions or trillions of objects, you have the information you need to make informed storage management decisions and easily answer questions like, “How many objects are there? Which buckets are they located in?” Paired with products like BigQuery, organizations can build custom dashboards to visualize insights about their stored data. The possibilities are truly exciting.
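To make the contrast concrete, here is the do-it-yourself version of one of those questions, using only the standard google-cloud-storage client. It works for small projects but requires listing every object, which is exactly the cost Storage Insights is meant to spare you at millions-to-trillions scale; the setup and names are placeholders.

```python
from google.cloud import storage

client = storage.Client()

# Manual inventory: count objects per bucket by listing them.
# Fine for small projects; impractical at the scale Storage Insights targets.
for bucket in client.list_buckets():
    object_count = sum(1 for _ in client.list_blobs(bucket.name))
    print(f"{bucket.name}: {object_count} objects")
```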

Lastly, to help you protect your most valuable applications and data, we announced Google Cloud Backup and DR. This service is a fully integrated data-protection solution for critical applications and databases (e.g., Google Cloud VMware Engine, Compute Engine, and databases like SAP HANA) that lets you centrally manage data protection and disaster recovery policies directly within the Google Cloud console, and fully protect databases and applications with a few clicks.

Storage choices abound, but here’s why we’re different

Choosing to build your business on Google Cloud is choosing the same foundation that Google uses for planet-scale applications like Photos, YouTube, and Gmail. This approach, built over the last 20 years, allows us to deliver performant, exabyte-scale services to enterprises and digital-first organizations. This storage infrastructure is based on Colossus, a cluster-level file system that stores and manages your data and provides the availability, performance, and durability behind Google Cloud storage services such as Cloud Storage, Persistent Disk, Hyperdisk, and Filestore.

Bring in our state-of-the-art dedicated Google Cloud backbone network (which has nearly 3x the throughput of AWS and Azure¹) and 173 network edge locations, and you start to see how our infrastructure is fundamentally different: it’s our global network, paired with disaggregated compute and storage built on Colossus, that brings speed and resilience to your applications.

Learn more about Google Cloud storage 

To learn more about our latest product innovations, watch our 75-minute Spotlight on Storage and visit our storage pages to explore our products.


1. 2021 Cloud Report, Cockroach Labs