Data Boost usage is measured in serverless processing units (SPUs), and 1,000 SPUs are equivalent to the performance of one node. Unlike provisioned nodes, SPUs incur charges only when you use Data Boost. Each request is billed for a minimum of 60 SPU-seconds, and you are charged at least 10 SPUs per second. For more information about Data Boost pricing, see Bigtable pricing.
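To make the two billing floors concrete, here is a minimal sketch. It assumes, as an interpretation rather than a documented formula, that the 10-SPU-per-second floor and the 60-SPU-second per-request minimum combine as simple maxima:

    # Illustrative only: assumes the 10-SPU-per-second floor and the
    # 60-SPU-second per-request minimum combine as simple maxima.
    # Actual Data Boost metering may differ.
    def billed_spu_seconds(spus_consumed: float, duration_s: float) -> float:
        per_second_floor = 10 * duration_s  # charged at least 10 SPUs for each second
        return max(spus_consumed, per_second_floor, 60.0)  # never less than 60 SPU-seconds

    # A 2-second request that consumed only 15 SPU-seconds of work is still
    # billed the 60 SPU-second per-request minimum.
    print(billed_spu_seconds(15, 2))  # 60.0

Under this reading, short or light requests are dominated by the 60 SPU-second minimum, and sustained scans are dominated by the SPUs they actually consume.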
You are allocated quota and billed for SPUs separately from the quota and charges for nodes.
Eligibility metrics
Data Boost is designed for high-throughput scans, and a workload must be compatible to use Data Boost. Before you convert a standard app profile to use Data Boost or create a Data Boost app profile for an existing workload, review the Data Boost eligibility metrics to confirm that your configuration and usage meet the required criteria. Also be sure to review the limitations.
Monitoring
To monitor your Data Boost traffic, check the metrics for your Data Boost app profile on the Bigtable system insights page in the Google Cloud console. For a list of the metrics available by app profile, see System insights charts for Bigtable resources.
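The console is the primary surface for these charts. As a rough sketch, you could also pull the same data programmatically with the Cloud Monitoring API. The full metric type string below is an assumption built from the `data_boost/spu_usage_count` metric name described in the next paragraph, and the project ID is hypothetical; verify both in Metrics Explorer before relying on them.

    # Rough sketch: query the last hour of Data Boost SPU usage with the
    # Cloud Monitoring API. The metric type string is assumed, not confirmed.
    import time
    from google.cloud import monitoring_v3

    project_id = "my-project"  # hypothetical project ID
    client = monitoring_v3.MetricServiceClient()
    now = int(time.time())
    interval = monitoring_v3.TimeInterval(
        {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
    )
    results = client.list_time_series(
        request={
            "name": f"projects/{project_id}",
            "filter": 'metric.type = "bigtable.googleapis.com/data_boost/spu_usage_count"',
            "interval": interval,
            "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        }
    )
    for series in results:
        # Each time series is keyed by resource labels such as instance and app profile.
        print(series.resource.labels, len(series.points), "points")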
[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-09-04 (世界標準時間)。"],[[["\u003cp\u003eData Boost is a serverless compute service for running high-throughput read jobs on Bigtable data without affecting application traffic performance.\u003c/p\u003e\n"],["\u003cp\u003eIt is ideal for data analysis and processing workloads like ETL pipelines, scheduled exports to Cloud Storage, and Spark applications, but not for point reads or latency-sensitive applications.\u003c/p\u003e\n"],["\u003cp\u003eData Boost utilizes serverless processing units (SPUs) for billing, charged separately from provisioned nodes, and you are charged at least 10 SPUs per second with a minimum of 60 SPU-seconds per request.\u003c/p\u003e\n"],["\u003cp\u003eTo use Data Boost, you must use a Data Boost app profile, configured for single-cluster routing, and it's recommended to use separate app profiles for different workloads or applications.\u003c/p\u003e\n"],["\u003cp\u003eData Boost cannot guarantee that data will be available if it has been written within the last 35 minutes.\u003c/p\u003e\n"]]],[],null,["Bigtable Data Boost overview\n\n*Data Boost* is a serverless compute service designed to run high-throughput\nread jobs on your Bigtable data without impacting the performance of the\nclusters that handle your application traffic. It lets you send large read jobs and\nqueries using serverless compute while your core application continues using\ncluster nodes for compute. Serverless compute SKUs and billing rates are\nseparate from the SKUs and rates for provisioned nodes. You can't send write or\ndelete requests with Data Boost.\n\nThis document describes Data Boost and when and how to use\nit. Before you read this page, you should understand [Instances, clusters, and\nnodes](/bigtable/docs/instances-clusters-nodes).\n| **Note:** Data Boost is not a covered service as defined in the [Bigtable SLA](/bigtable/sla).\n\nWhat it's good for\n\nData Boost is ideal for **data analysis and data processing workloads**.\nIsolating your analytics and processing traffic with Data Boost ensures that\nyou don't need to adjust a cluster's capacity or node count to accommodate\nanalytics workloads. You can run your high-throughput analytics jobs on a\nsingle cluster with Data Boost while your ongoing application traffic is\nrouted through cluster nodes.\n\nThe following are ideal use cases for Data Boost:\n\n- Scheduled or triggered export or ETL pipeline jobs from Bigtable to Cloud Storage for data enrichment, analysis, archiving, offline ML model training, or ingestion by your customers' third-party partners\n- ETL using a tool such as Dataflow for short scan or batch read processes that support in-place aggregations, rule-based transformations for MDM, or ML jobs\n- Spark applications that use the Bigtable Spark connector to read Bigtable data\n- Ad hoc queries and scheduled analytics jobs that use BigQuery external tables to read Bigtable data.\n\nWhat it's not good for\n\n**Point reads** - Data Boost is not the best option for *point read*\noperations, which are read requests sent for single rows. This includes\nbatched point reads. 
Because of the billing structure, many single-row point\nreads are considerably more expensive than one long scan.\n\n**Reading data immediately after it's written** - When you read data with\nData Boost, you might not read all data that was written in the most recent\n35 minutes. This is especially true if your instance uses replication and you are\nreading data that was written to a cluster in a different region than you are\nreading from. For more information, see\n[Consistency tokens](#consistency-tokens).\n\n**Latency-sensitive workloads** - Data Boost is optimized for throughput,\nso read latency is slower when you use Data Boost than when you read using\nclusters and nodes. For this reason, Data Boost is not suitable for\napplication serving workloads.\n\nFor more information on workloads, configurations, and features that are\nnot compatible with Data Boost, see [Limitations](#limitations).\n\nData Boost app profiles\n\nTo use Data Boost, you send your read requests using a *Data Boost app\nprofile* instead of a *standard app profile*.\n\nStandard app profiles let you specify the\n[routing policy](/bigtable/docs/routing)\nand\n[priority level](/bigtable/docs/request-priorities)\nfor requests that use the app profile, as well as whether single-row\ntransactions are permitted. Traffic sent using a standard app profile is routed\nto a cluster, and that cluster's nodes route the traffic to disk. For more\ninformation, see\n[Standard app profiles overview](/bigtable/docs/app-profiles).\n\nWith a Data Boost app profile, on the other hand, you configure a\nsingle-cluster routing policy to one of your instance's clusters, and traffic\nusing that app profile uses serverless compute instead of the cluster's nodes.\n\nYou can create a new Data Boost app profile, or you can convert a standard\napp profile to use Data Boost instead. We recommend using a\n[separate app profile for each workload or application](/bigtable/docs/app-profiles#multiple-app-profiles).\n\nConsistency tokens\n\nData that was written or replicated to your target cluster more than 35 minutes\nbefore your read request is readable by Data Boost.\n| **Important:** Data Boost provides no guarantee for the readability of data that was written less than 35 minutes ago.\n\nYou can make sure that the data from a specific write job or time period is\nreadable by Data Boost, before you initiate a Data Boost\nworkload, by creating and using a\n*consistency token*. A sample workflow is as follows:\n\n1. Write some data to a table.\n2. Create a consistency token.\n3. Send the token in `DataBoostReadLocalWrites` mode to determine when the writes are readable by Data Boost on your target cluster.\n\nYou can optionally check replication consistency before you check Data Boost\nconsistency by first sending a consistency token in `StandardReadRemoteWrites` mode.\n\nFor more information, see the API reference for\n[CheckConsistencyRequest](/bigtable/docs/reference/admin/rpc/google.bigtable.admin.v2#checkconsistencyrequest).\n\nQuota and billing\n\nData Boost usage is measured in *serverless processing units* (SPUs), and\n1,000 SPUs = one node in performance. Unlike\nwith provisioned nodes, you are charged for SPUs only when you use\nData Boost. Each request is billed for a minimum of 60 SPU-seconds, and you\nare charged at least 10 SPUs per second. 
You can monitor your usage of serverless processing units (SPUs) by checking the SPU usage count (`data_boost/spu_usage_count`) metric on the App profile tab of the Bigtable system insights page.

You can also continue to monitor the eligibility metrics for the app profile after you start using Data Boost.

Limitations

The following workload properties and resource configurations are not supported for Data Boost.

- Writes and deletes
- Traffic that is mostly point reads (single-row reads)
- More than 1,000 reads per second per cluster
- Reverse scans
- Change streams
- Request priorities
- Multi-cluster routing
- Single-row transactions
- Regional endpoints
- HDD instances
- GoogleSQL for Bigtable queries
- Bigtable Studio query builder queries
- Instances that use CMEK encryption
- Incompatible client libraries. You must use the Bigtable client for Java version 2.31.0 or later.
  - For Dataflow jobs that use `BigtableIO` to read Bigtable data, you must use Apache Beam version 2.54.0 or later.
  - For Dataflow jobs that use `CloudBigtableIO` to read Bigtable data, you must use `bigtable-hbase-beam` version 2.14.1 or later.

What's next

- Create or update an app profile.
- Learn about the Bigtable Beam connector.
- Use the Bigtable Spark connector.
- Query and analyze Bigtable data with BigQuery.