Bigtable reliability guide

Last reviewed 2023-08-08 UTC

Bigtable is a fully managed, scalable NoSQL database for large analytical and operational workloads. It is designed as a sparsely populated table that can scale to billions of rows and thousands of columns, and it supports high read and write throughput at low latency.
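
For a concrete feel for that data model, here is a minimal sketch using the Python client library (google-cloud-bigtable); the project, instance, and table IDs are placeholders, and the table is assumed to already exist with a column family named `stats`.

```python
from google.cloud import bigtable

# Placeholder IDs -- substitute your own project, instance, and table.
client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table("my-table")

# Each row is addressed by a single row key; cells live at
# (column family, column qualifier, timestamp) coordinates, and
# unwritten cells take no space -- the table is sparse.
row = table.direct_row(b"device#1234#20230808")
row.set_cell("stats", "temperature", b"21.5")
row.commit()

# Point read by row key; column qualifiers come back as bytes.
result = table.read_row(b"device#1234#20230808")
print(result.cells["stats"][b"temperature"][0].value)  # b'21.5'
```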

Best practices

  • Understand Bigtable performance - how to estimate throughput for Bigtable, how to plan Bigtable capacity from throughput and storage use, how enabling replication affects read and write throughput differently, and how Bigtable optimizes data over time (capacity sizing is sketched after this list).
  • Bigtable schema design - guidance on designing a Bigtable schema, including key/value store concepts, designing row keys around your planned read requests, handling columns and rows, and special use cases (row-key design is sketched below).
  • Bigtable replication overview - how to replicate Bigtable across multiple zones or regions, the performance implications of replication, and how Bigtable resolves conflicts and handles failovers (app profile routing is sketched below).
  • About Bigtable backups - how to save a copy of a table's schema and data with Bigtable backups, which can help you recover from application-level data corruption or from operator errors, such as accidentally deleting a table (backup creation is sketched below).
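
Capacity planning is mostly arithmetic: divide your target throughput and your stored data by the per-node limits and provision for the larger of the two. The sketch below shows the shape of that calculation; the per-node constants are illustrative assumptions, not published limits, so substitute the current figures from the Bigtable quotas and performance pages.

```python
import math

# ILLUSTRATIVE ASSUMPTIONS, not published limits -- replace with the
# current per-node figures from the Bigtable documentation.
ROWS_PER_SEC_PER_NODE = 10_000  # assumed sustained rows/sec per node
TB_PER_NODE = 5                 # assumed SSD storage per node, in TB

def nodes_needed(target_rows_per_sec: float, stored_tb: float) -> int:
    by_throughput = math.ceil(target_rows_per_sec / ROWS_PER_SEC_PER_NODE)
    by_storage = math.ceil(stored_tb / TB_PER_NODE)
    # Provision for whichever dimension is the bottleneck.
    return max(by_throughput, by_storage)

print(nodes_needed(target_rows_per_sec=45_000, stored_tb=12))  # -> 5
```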
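The core of the schema guidance is to design row keys so that your most common reads become point lookups or a single contiguous scan. As a hedged sketch, assume the hypothetical `device#<id>#<date>` key scheme from the earlier example: all rows for one device then sort together, and one range scan answers the planned query.

```python
from google.cloud import bigtable
from google.cloud.bigtable.row_set import RowSet

client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table("my-table")

# Keys shaped like device#<id>#<date> keep one device's rows contiguous.
# '$' (0x24) sorts immediately after '#' (0x23), so this half-open range
# covers every key that starts with the prefix device#1234#.
row_set = RowSet()
row_set.add_row_range_from_keys(start_key=b"device#1234#",
                                end_key=b"device#1234$")
for row in table.read_rows(row_set=row_set):
    print(row.row_key)
```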
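How a replicated instance routes traffic is controlled per app profile: multi-cluster routing sends each request to the nearest available cluster, which gives automatic failover but only eventual consistency, while single-cluster routing keeps reads strongly consistent at the cost of manual failover. A hedged sketch with the Python admin client (the profile ID and description are placeholders):

```python
from google.cloud import bigtable
from google.cloud.bigtable import enums

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")

# RoutingPolicyType.ANY = multi-cluster routing: automatic failover,
# eventual consistency across replicas.
profile = instance.app_profile(
    app_profile_id="ha-profile",
    routing_policy_type=enums.RoutingPolicyType.ANY,
    description="Multi-cluster routing for automatic failover",
)
profile.create(ignore_warnings=True)
```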
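A backup is created in one cluster and must carry an expiration time, after which Bigtable deletes it. A minimal sketch with the Python admin client, assuming placeholder IDs and an arbitrary seven-day retention:

```python
import datetime

from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)
table = client.instance("my-instance").table("my-table")

# Backups are per-cluster and expire automatically at expire_time.
expire_time = (datetime.datetime.now(datetime.timezone.utc)
               + datetime.timedelta(days=7))
backup = table.backup("my-backup", cluster_id="my-cluster",
                      expire_time=expire_time)
backup.create().result(timeout=600)  # wait for the long-running operation
```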