Bigtable Data Boost overview

*Data Boost* is a serverless compute service designed to run high-throughput read jobs on your Bigtable data without impacting the performance of the clusters that handle your application traffic. It lets you send large read jobs and queries using serverless compute while your core application continues using cluster nodes for compute. Serverless compute SKUs and billing rates are separate from the SKUs and rates for provisioned nodes. You can't send write or delete requests with Data Boost.

This document describes Data Boost and when and how to use it. Before you read this page, you should understand [Instances, clusters, and nodes](/bigtable/docs/instances-clusters-nodes).

**Note:** Data Boost is not a covered service as defined in the [Bigtable SLA](/bigtable/sla).

What it's good for

Data Boost is ideal for **data analysis and data processing workloads**. Isolating your analytics and processing traffic with Data Boost means that you don't need to adjust a cluster's capacity or node count to accommodate analytics workloads. You can run your high-throughput analytics jobs on a single cluster with Data Boost while your ongoing application traffic is routed through cluster nodes.

The following are ideal use cases for Data Boost:

- Scheduled or triggered export or ETL pipeline jobs from Bigtable to Cloud Storage for data enrichment, analysis, archiving, offline ML model training, or ingestion by your customers' third-party partners
- ETL using a tool such as Dataflow for short scan or batch read processes that support in-place aggregations, rule-based transformations for MDM, or ML jobs
- Spark applications that use the Bigtable Spark connector to read Bigtable data
- Ad hoc queries and scheduled analytics jobs that use BigQuery external tables to read Bigtable data
What it's not good for

**Point reads** - Data Boost is not the best option for *point read* operations, which are read requests sent for single rows. This includes batched point reads. Because of the billing structure, many single-row point reads are considerably more expensive than one long scan.

**Reading data immediately after it's written** - When you read data with Data Boost, you might not read all data that was written in the most recent 35 minutes. This is especially true if your instance uses replication and you are reading data that was written to a cluster in a different region than the one you are reading from. For more information, see [Consistency tokens](#consistency-tokens).

**Latency-sensitive workloads** - Data Boost is optimized for throughput, so read latency is higher when you use Data Boost than when you read using clusters and nodes. For this reason, Data Boost is not suitable for application serving workloads.

For more information on workloads, configurations, and features that are not compatible with Data Boost, see [Limitations](#limitations).
Data Boost app profiles

To use Data Boost, you send your read requests using a *Data Boost app profile* instead of a *standard app profile*.

Standard app profiles let you specify the [routing policy](/bigtable/docs/routing) and [priority level](/bigtable/docs/request-priorities) for requests that use the app profile, as well as whether single-row transactions are permitted. Traffic sent using a standard app profile is routed to a cluster, and that cluster's nodes route the traffic to disk. For more information, see [Standard app profiles overview](/bigtable/docs/app-profiles).

With a Data Boost app profile, on the other hand, you configure a single-cluster routing policy to one of your instance's clusters, and traffic using that app profile uses serverless compute instead of the cluster's nodes.

You can create a new Data Boost app profile, or you can convert a standard app profile to use Data Boost instead. We recommend using a [separate app profile for each workload or application](/bigtable/docs/app-profiles#multiple-app-profiles).
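As a minimal sketch, creating a Data Boost app profile and scanning through it with the Java client might look like the following. This is not the canonical procedure from this page: the `DataBoostIsolationReadOnly` and `ComputeBillingOwner` model names are an assumption about recent `java-bigtable` admin releases (2.31.0 or later is required for Data Boost), and all project, instance, cluster, and table IDs are placeholders.

```java
import com.google.cloud.bigtable.admin.v2.BigtableInstanceAdminClient;
import com.google.cloud.bigtable.admin.v2.models.AppProfile;
import com.google.cloud.bigtable.admin.v2.models.CreateAppProfileRequest;
import com.google.cloud.bigtable.data.v2.BigtableDataClient;
import com.google.cloud.bigtable.data.v2.BigtableDataSettings;
import com.google.cloud.bigtable.data.v2.models.Query;
import com.google.cloud.bigtable.data.v2.models.Row;

public class DataBoostExample {
  public static void main(String[] args) throws Exception {
    String projectId = "my-project";   // placeholder
    String instanceId = "my-instance"; // placeholder
    String clusterId = "my-cluster";   // placeholder

    // Create an app profile that routes to a single cluster and uses
    // Data Boost serverless compute for read-only traffic.
    try (BigtableInstanceAdminClient admin =
        BigtableInstanceAdminClient.create(projectId)) {
      admin.createAppProfile(
          CreateAppProfileRequest.of(instanceId, "analytics-data-boost")
              .setRoutingPolicy(AppProfile.SingleClusterRoutingPolicy.of(clusterId))
              .setIsolationPolicy(
                  AppProfile.DataBoostIsolationReadOnly.of(
                      AppProfile.ComputeBillingOwner.HOST_PAYS)));
    }

    // Send a high-throughput scan through the Data Boost app profile.
    BigtableDataSettings settings =
        BigtableDataSettings.newBuilder()
            .setProjectId(projectId)
            .setInstanceId(instanceId)
            .setAppProfileId("analytics-data-boost")
            .build();
    try (BigtableDataClient client = BigtableDataClient.create(settings)) {
      for (Row row : client.readRows(Query.create("my-table"))) {
        // Process each row; this traffic uses serverless compute, not nodes.
        System.out.println(row.getKey().toStringUtf8());
      }
    }
  }
}
```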
Consistency tokens

Data that was written or replicated to your target cluster more than 35 minutes before your read request is readable by Data Boost.

**Important:** Data Boost provides no guarantee for the readability of data that was written less than 35 minutes ago.

Before you initiate a Data Boost workload, you can make sure that the data from a specific write job or time period is readable by Data Boost by creating and using a *consistency token*. A sample workflow is as follows:

1. Write some data to a table.
2. Create a consistency token.
3. Send the token in `DataBoostReadLocalWrites` mode to determine when the writes are readable by Data Boost on your target cluster.

You can optionally check replication consistency before you check Data Boost consistency by first sending a consistency token in `StandardReadRemoteWrites` mode.

For more information, see the API reference for [CheckConsistencyRequest](/bigtable/docs/reference/admin/rpc/google.bigtable.admin.v2#checkconsistencyrequest).
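A sketch of that workflow using the low-level admin API might look like this. The mode names come from the `CheckConsistencyRequest` reference linked above, but treat the exact proto class names and setters as assumptions about the `com.google.bigtable.admin.v2` protos; IDs are placeholders.

```java
import com.google.bigtable.admin.v2.CheckConsistencyRequest;
import com.google.bigtable.admin.v2.DataBoostReadLocalWrites;
import com.google.bigtable.admin.v2.GenerateConsistencyTokenRequest;
import com.google.bigtable.admin.v2.GenerateConsistencyTokenResponse;
import com.google.bigtable.admin.v2.TableName;
import com.google.cloud.bigtable.admin.v2.BaseBigtableTableAdminClient;

public class DataBoostConsistencyCheck {
  public static void main(String[] args) throws Exception {
    String tableName =
        TableName.of("my-project", "my-instance", "my-table").toString(); // placeholders

    try (BaseBigtableTableAdminClient admin = BaseBigtableTableAdminClient.create()) {
      // Step 1: data was already written to the table.
      // Step 2: create a consistency token that marks the current write state.
      GenerateConsistencyTokenResponse token =
          admin.generateConsistencyToken(
              GenerateConsistencyTokenRequest.newBuilder().setName(tableName).build());

      // Step 3: poll in DataBoostReadLocalWrites mode until those writes
      // are readable by Data Boost on the target cluster.
      CheckConsistencyRequest check =
          CheckConsistencyRequest.newBuilder()
              .setName(tableName)
              .setConsistencyToken(token.getConsistencyToken())
              .setDataBoostReadLocalWrites(DataBoostReadLocalWrites.getDefaultInstance())
              .build();
      while (!admin.checkConsistency(check).getConsistent()) {
        Thread.sleep(10_000); // wait before polling again
      }
      // The tokenized writes are now readable by Data Boost.
    }
  }
}
```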
Quota and billing

Data Boost usage is measured in *serverless processing units* (SPUs); 1,000 SPUs are equivalent to one node in performance. Unlike with provisioned nodes, you are charged for SPUs only when you use Data Boost. Each request is billed for a minimum of 60 SPU-seconds, and you are charged at least 10 SPUs per second. For more information on Data Boost pricing, see [Bigtable pricing](/bigtable/pricing).

You are allocated quota and billed for SPUs separately from the quota and charges for nodes.
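To make those billing floors concrete, here is a small illustrative calculation of the *minimum* SPU-seconds a workload can be billed, using only the two minimums stated above (10 SPUs per second and 60 SPU-seconds per request). It is a lower bound only; actual SPU consumption depends on the work each request performs.

```java
public class SpuBillingSketch {
  // Billing floors stated in the documentation above.
  static final int MIN_SPUS_PER_SECOND = 10;
  static final int MIN_SPU_SECONDS_PER_REQUEST = 60;

  // Minimum SPU-seconds billed for a number of requests of a given duration.
  static long minSpuSeconds(int requests, double secondsPerRequest) {
    double perRequest =
        Math.max(MIN_SPU_SECONDS_PER_REQUEST, MIN_SPUS_PER_SECOND * secondsPerRequest);
    return (long) Math.ceil(requests * perRequest);
  }

  public static void main(String[] args) {
    // One 10-minute scan: at least 600 s * 10 SPU/s = 6,000 SPU-seconds.
    System.out.println(minSpuSeconds(1, 600)); // 6000
    // 1,000 quick point reads: each hits the 60 SPU-second floor,
    // 60,000 SPU-seconds total -- which is why many single-row reads
    // cost more than one long scan.
    System.out.println(minSpuSeconds(1000, 0.05)); // 60000
  }
}
```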
Eligibility metrics

Data Boost is designed for high-throughput scans, and workloads must be compatible to be able to use Data Boost. Before you convert a standard app profile to use Data Boost or create a Data Boost app profile for an existing workload, [view Data Boost eligibility metrics](/bigtable/docs/data-boost-eligibility) to make sure that your configuration and usage meet the required criteria. You should also review the [limitations](#limitations).
[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-04。"],[[["\u003cp\u003eData Boost is a serverless compute service for running high-throughput read jobs on Bigtable data without affecting application traffic performance.\u003c/p\u003e\n"],["\u003cp\u003eIt is ideal for data analysis and processing workloads like ETL pipelines, scheduled exports to Cloud Storage, and Spark applications, but not for point reads or latency-sensitive applications.\u003c/p\u003e\n"],["\u003cp\u003eData Boost utilizes serverless processing units (SPUs) for billing, charged separately from provisioned nodes, and you are charged at least 10 SPUs per second with a minimum of 60 SPU-seconds per request.\u003c/p\u003e\n"],["\u003cp\u003eTo use Data Boost, you must use a Data Boost app profile, configured for single-cluster routing, and it's recommended to use separate app profiles for different workloads or applications.\u003c/p\u003e\n"],["\u003cp\u003eData Boost cannot guarantee that data will be available if it has been written within the last 35 minutes.\u003c/p\u003e\n"]]],[],null,["Bigtable Data Boost overview\n\n*Data Boost* is a serverless compute service designed to run high-throughput\nread jobs on your Bigtable data without impacting the performance of the\nclusters that handle your application traffic. It lets you send large read jobs and\nqueries using serverless compute while your core application continues using\ncluster nodes for compute. Serverless compute SKUs and billing rates are\nseparate from the SKUs and rates for provisioned nodes. You can't send write or\ndelete requests with Data Boost.\n\nThis document describes Data Boost and when and how to use\nit. Before you read this page, you should understand [Instances, clusters, and\nnodes](/bigtable/docs/instances-clusters-nodes).\n| **Note:** Data Boost is not a covered service as defined in the [Bigtable SLA](/bigtable/sla).\n\nWhat it's good for\n\nData Boost is ideal for **data analysis and data processing workloads**.\nIsolating your analytics and processing traffic with Data Boost ensures that\nyou don't need to adjust a cluster's capacity or node count to accommodate\nanalytics workloads. You can run your high-throughput analytics jobs on a\nsingle cluster with Data Boost while your ongoing application traffic is\nrouted through cluster nodes.\n\nThe following are ideal use cases for Data Boost:\n\n- Scheduled or triggered export or ETL pipeline jobs from Bigtable to Cloud Storage for data enrichment, analysis, archiving, offline ML model training, or ingestion by your customers' third-party partners\n- ETL using a tool such as Dataflow for short scan or batch read processes that support in-place aggregations, rule-based transformations for MDM, or ML jobs\n- Spark applications that use the Bigtable Spark connector to read Bigtable data\n- Ad hoc queries and scheduled analytics jobs that use BigQuery external tables to read Bigtable data.\n\nWhat it's not good for\n\n**Point reads** - Data Boost is not the best option for *point read*\noperations, which are read requests sent for single rows. This includes\nbatched point reads. 
Limitations

The following workload properties and resource configurations are not supported for Data Boost.

- Writes and deletes
- Traffic that is mostly point reads (single-row reads)
- More than 1,000 reads per second per cluster
- Reverse scans
- Change streams
- Request priorities
- Multi-cluster routing
- Single-row transactions
- Regional endpoints
- HDD instances
- GoogleSQL for Bigtable queries
- Bigtable Studio [query builder](/bigtable/docs/query-builder) queries
- Instances that use CMEK encryption
- Incompatible client libraries. You must use the [Bigtable client for Java](/java/docs/reference/google-cloud-bigtable/latest/overview) version 2.31.0 or later.
  - For Dataflow jobs that use `BigtableIO` to read Bigtable data, you must use Apache Beam version 2.54.0 or later.
  - For Dataflow jobs that use `CloudBigtableIO` to read Bigtable data, you must use `bigtable-hbase-beam` version 2.14.1 or later.
What's next

- [Create or update an app profile.](/bigtable/docs/configuring-app-profiles)
- [Learn about the Bigtable Beam connector.](/bigtable/docs/beam-connector)
- [Use the Bigtable Spark connector.](/bigtable/docs/use-bigtable-spark-connector)
- [Query and analyze Bigtable data with BigQuery.](/bigtable/docs/bigquery-analysis)