[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-04。"],[[["\u003cp\u003eBigtable utilizes instances as containers for data, with each instance comprised of one or more clusters located in different zones, and each cluster containing nodes for data management and maintenance.\u003c/p\u003e\n"],["\u003cp\u003eWhen creating an instance, the storage type, either SSD or HDD, must be selected, and this choice is permanent for all clusters within the instance, with every cluster using the same storage type.\u003c/p\u003e\n"],["\u003cp\u003eClusters, which represent the Bigtable service in a specific location, belong to a single instance and handle requests to that instance, and an instance can have up to 8 regions.\u003c/p\u003e\n"],["\u003cp\u003eEach node within a cluster is a compute resource responsible for managing data tablets, handling reads and writes, and performing maintenance tasks like compactions, ensuring sufficient nodes are present to support the workload.\u003c/p\u003e\n"],["\u003cp\u003eInstances with multiple clusters utilize replication, keeping separate copies of data in each cluster's zone and synchronizing updates between them, allowing for traffic isolation, load balancing, and failover capabilities.\u003c/p\u003e\n"]]],[],null,["Instances, clusters, and nodes\n\nTo use Bigtable, you create *instances* , which contain *clusters* that\nyour applications can connect to. Each cluster contains *nodes*, the compute\nunits that manage your data and perform maintenance tasks.\n\nThis page provides more information about Bigtable instances,\nclusters, and nodes.\n\n- To learn how to create an instance, see [Create an instance](/bigtable/docs/creating-instance).\n- To learn how to add or delete clusters, see [Modify an instance](/bigtable/docs/modifying-instance#clusters).\n- To learn how to monitor an instance and its clusters, see [Monitoring an\n instance](/bigtable/docs/monitoring-instance).\n- To learn how to update the number of nodes in a cluster, see [Add or\n remove nodes](/bigtable/docs/modifying-instance#nodes).\n\n\nBefore you read this page, you should be familiar with the\n[overview of Bigtable](/bigtable/docs/overview).\n\nInstances\n\nA Bigtable instance is a container for your data. Instances have\none or more [clusters](#clusters), located in different zones. Each\ncluster has at least 1 [node](#nodes).\n\nA table belongs to an instance, not to a cluster or node. If you have an instance\nwith more than one cluster, you are using replication. This means you can't\nassign a table to an individual cluster or create unique garbage collection\npolicies for each cluster in an instance. You also can't make each cluster store\na different set of data in the same table.\n\nAn instance has a few important properties that you need to know about:\n\n- The *storage type* (SSD or HDD)\n- The *application profiles*, which are primarily for instances that use replication\n\nThe following sections describe these properties.\n\nStorage types\n\nWhen you [create an instance](/bigtable/docs/creating-instance), you must choose whether\nthe instance's clusters will store data on solid-state drives (SSD) or hard disk\ndrives (HDD). 
### Application profiles

After you [create an instance](/bigtable/docs/creating-instance), Bigtable uses the instance to store [application profiles](/bigtable/docs/app-profiles), or app profiles. For instances that use replication, app profiles control how your applications connect to the instance's clusters.

If your instance doesn't use replication, you can still use app profiles to provide separate identifiers for each of your applications, or each function within an application. You can then [view separate charts for each app profile in the Google Cloud console](/bigtable/docs/monitoring-instance#console-monitoring-resources).

To learn more about app profiles, see [application profiles](/bigtable/docs/app-profiles). To learn how to set up your instance's app profiles, see [Configuring app profiles](/bigtable/docs/configuring-app-profiles).
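As a concrete illustration, the following sketch, again using the Python client, creates an app profile that uses single-cluster routing and then references it when opening a table. The profile, cluster, and table IDs are placeholders, and `instance` is assumed to be an existing `Instance` object such as the one created in the previous sketch.

```python
# Sketch: create an app profile with single-cluster routing, then use it
# for data access. Assumes `instance` is an existing Instance object and
# that the IDs below are placeholders.
from google.cloud.bigtable import enums

app_profile = instance.app_profile(
    "batch-analytics",
    routing_policy_type=enums.RoutingPolicyType.SINGLE,
    description="Route batch traffic to a single cluster",
    cluster_id="my-instance-c1",
)
app_profile.create(ignore_warnings=True)

# Requests made through this table follow the app profile's routing policy.
table = instance.table("my-table", app_profile_id="batch-analytics")
```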
## Clusters

A cluster represents the Bigtable service in a specific location. Each cluster belongs to a single Bigtable instance, and an instance can have clusters in up to 8 regions. When your application sends requests to a Bigtable instance, those requests are handled by one of the clusters in the instance.

Each cluster is located in a single zone, and each [zone in a region](/docs/geography-and-regions#regions_and_zones) can contain only one cluster.

For example, if an instance has a cluster in `us-east1-b`, you can add a cluster in a different zone in the same region, such as `us-east1-c`, or a zone in a separate region, such as `europe-west2-a`.

The number of clusters that you can create in an instance depends on the number of available zones in the regions that you choose. For example, if you create clusters in 8 regions that have 3 zones each, the maximum number of clusters that the instance can have is 24. For a list of zones and regions where Bigtable is available, see [Bigtable locations](/bigtable/docs/locations).

Bigtable instances that have only 1 cluster don't use replication. If you add a second cluster to an instance, Bigtable automatically starts replicating your data by keeping separate copies of the data in each of the clusters' zones and synchronizing updates between the copies. You can choose which cluster your applications connect to, which makes it possible to isolate different types of traffic from one another. You can also let Bigtable balance traffic between clusters. If a cluster becomes unavailable, you can fail over from one cluster to another. To learn more about how replication works, see the [replication overview](/bigtable/docs/replication-overview).

In most cases, you should [enable autoscaling](/bigtable/docs/autoscaling) for a cluster, so that Bigtable adds and removes nodes as needed to handle the cluster's workloads.

When you create a cluster, you can enable 2x node scaling, a configuration that sets the cluster to always scale in increments of two nodes. For more information, see [Node scaling factor](/bigtable/docs/scaling#node-scaling-factor).

## Nodes

Each cluster in an instance has 1 or more nodes, which are compute resources that Bigtable uses to manage your data.

Behind the scenes, Bigtable splits all of the data in a table into separate [*tablets*](/bigtable/docs/overview#architecture). Tablets are stored on disk, separate from the nodes but in the same zone as the nodes. A tablet is associated with a single node.

Each node is responsible for:

- Keeping track of specific tablets on disk.
- Handling incoming reads and writes for its tablets.
- Performing maintenance tasks on its tablets, such as periodic [compactions](/bigtable/docs/overview#compactions).

A cluster must have enough nodes to support its current workload and the amount of data it stores. Otherwise, the cluster might not be able to handle incoming requests, and latency could go up. [Monitor your clusters' CPU and disk usage](/bigtable/docs/monitoring-instance#cpu-and-disk), and [add nodes](/bigtable/docs/modifying-instance#nodes) to an instance when its metrics exceed the recommendations at [Plan your capacity](/bigtable/docs/performance#planning-your-capacity).

For more details about how Bigtable stores and manages data, see [Bigtable architecture](/bigtable/docs/overview#architecture).

For high-throughput read jobs, you can use Data Boost for Bigtable for compute instead of your cluster's nodes. Data Boost lets you send large read jobs and queries using serverless compute while your core application continues using cluster nodes for compute. For more information, see [Data Boost overview](/bigtable/docs/data-boost-overview).

### Nodes for replicated clusters

When your instance has more than one cluster, failover becomes a consideration when you configure the maximum number of nodes for autoscaling or manually allocate the nodes.

- If you use [multi-cluster routing](/bigtable/docs/app-profiles#routing) in any of your app profiles, [automatic failover](/bigtable/docs/failovers#automatic) can occur if one or more clusters are unavailable.

- When you manually fail over from one cluster to another, or when automatic failover occurs, the receiving cluster should ideally have enough capacity to support the load. You can either always allocate enough nodes to support failover, which can be costly, or you can rely on autoscaling to add nodes when traffic fails over, but be aware that there might be a brief impact on performance while the cluster scales up.

- If all of your app profiles use [single-cluster routing](/bigtable/docs/app-profiles#routing), each cluster can have a different number of nodes. Resize each cluster as needed based on the cluster's workload.

  Because Bigtable stores a separate copy of your data with each cluster, each cluster must always have enough nodes to support your disk usage and to replicate writes between clusters.

  You can still [fail over manually](/bigtable/docs/failovers#manual) from one cluster to another if necessary. However, if one cluster has many more nodes than another, and you need to fail over to the cluster with fewer nodes, you might need to [add nodes](/bigtable/docs/modifying-instance#nodes) first; see the sketch after this list. There is no guarantee that additional nodes will be available when you need to fail over; the only way to reserve nodes in advance is to add them to your cluster.
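To show what a manual resize might look like, here is a hedged sketch that uses the Python client to change a cluster's node count, for example before failing over to a cluster that currently has fewer nodes. The IDs are placeholders, and the example assumes the cluster uses manual node allocation rather than autoscaling.

```python
# Sketch: manually change the number of nodes in an existing cluster,
# for example to add capacity before failing over to it.
# Assumes manual node allocation (not autoscaling); IDs are placeholders.
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")

cluster = instance.cluster("my-instance-c2")
cluster.reload()            # fetch the cluster's current configuration
cluster.serve_nodes = 6     # set the new node count
cluster.update()            # apply the change
```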
## What's next

- [Create a Bigtable instance.](/bigtable/docs/creating-instance)
- [Monitor a Bigtable instance.](/bigtable/docs/monitoring-instance)
- [Find out how replication works.](/bigtable/docs/replication-overview)