Last reviewed 2023-08-08 UTC
Cloud Bigtable is a fully managed, scalable, NoSQL database for large analytical and operational workloads. It is designed as a sparsely populated table that can scale to billions of rows and thousands of columns, and supports high read and write throughput at low latency.
Best practices
- Understand Cloud Bigtable performance - how to estimate throughput for Cloud Bigtable, plan Cloud Bigtable capacity by looking at throughput and storage use, understand how enabling replication affects read and write throughput differently, and see how Cloud Bigtable optimizes data over time.
- Cloud Bigtable schema design - guidance on designing a Cloud Bigtable schema, including key/value store concepts, designing row keys based on planned read requests, handling columns and rows, and special use cases.
- Cloud Bigtable replication overview - how to replicate Cloud Bigtable across multiple zones or regions, the performance implications of replication, and how Cloud Bigtable resolves conflicts and handles failovers.
- About Bigtable backups - how to save a copy of a table's schema and data with Bigtable Backups, which can help you recover from application-level data corruption or from operator errors, such as accidentally deleting a table.
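The schema design guidance above recommends building row keys around planned read requests. Because Bigtable stores rows in lexicographic order by row key, one common pattern is to combine an entity identifier with a reversed timestamp so that a prefix scan returns the newest rows first. The following is a minimal sketch of that pattern, assuming a hypothetical time-series workload; the function name `reverse_timestamp_row_key`, the `sensor-42` identifier, and the padding width are illustrative, not part of the Bigtable API.

```python
from datetime import datetime, timezone

def reverse_timestamp_row_key(device_id: str, event_time: datetime) -> bytes:
    """Build a row key that sorts a device's newest events first.

    Bigtable orders rows lexicographically by row key, so subtracting the
    event's epoch milliseconds from a large constant makes more recent
    events sort earlier within each device's key range.
    """
    # Hypothetical upper bound, comfortably past any realistic epoch-ms value.
    max_millis = 10**13
    reversed_millis = max_millis - int(event_time.timestamp() * 1000)
    # Zero-pad so lexicographic order matches numeric order.
    return f"{device_id}#{reversed_millis:013d}".encode("utf-8")

older = reverse_timestamp_row_key("sensor-42", datetime(2023, 1, 1, tzinfo=timezone.utc))
newer = reverse_timestamp_row_key("sensor-42", datetime(2023, 6, 1, tzinfo=timezone.utc))
assert newer < older  # the newer event sorts first under a prefix scan
```

With keys shaped this way, a read for a device's most recent events becomes a short prefix scan over `device_id#`, which is the access pattern the schema design guide asks you to plan row keys around.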