# Query and analyze Bigtable data with BigQuery

BigQuery is a managed data warehouse that can help you query and analyze your
Bigtable data using SQL queries. BigQuery is useful for data analysts, data
engineers, data scientists, or anyone who wants to use Bigtable data to answer
business questions.

BigQuery lets you query your Bigtable data directly from BigQuery. This
feature is helpful when you want to join your Bigtable data to BigQuery
tables.

This document provides an overview of querying Bigtable data with BigQuery.
Before you read this page, you should be familiar with the
[Bigtable overview](/bigtable/docs/overview) and the
[BigQuery overview](/bigquery/docs/introduction).

Using BigQuery to query a Bigtable table is ideal for tables that have the
same column families and column qualifiers in every row.

> **Note:** If you want to export from BigQuery to Bigtable, you can set up a
> reverse extract-load-transform (ETL) workflow. For details, see
> [Export data to Bigtable (Reverse ETL)](/bigquery/docs/export-to-bigtable)
> in the BigQuery documentation.

External table creation
-----------------------

Before you can query your Bigtable data, you or an administrator in your
organization must create an *external table*, which is a BigQuery table that
contains metadata pointers to the Bigtable table that you send your queries
to. For more information about external tables, see
[Introduction to external data sources](/bigquery/docs/external-data-sources).

You must create the external table in the same region as the Bigtable table.
This means, for example, that if the table is in an instance that has clusters
in `europe-central2-a` (Warsaw), `europe-west1-c` (Belgium), and `asia-east1-a`
(Taiwan), then you must create the external table in Warsaw, Belgium, or
Taiwan.

### Recommended configurations

We recommend the following best practices when you create your external table:

- To avoid impacting your application-serving traffic, use Data Boost
  serverless compute when you read Bigtable data with BigQuery external
  tables. Using Data Boost is especially cost-effective for ad hoc queries. To
  use Data Boost, specify a Data Boost app profile when you create the
  external table definition. For more information about Data Boost, see
  [Bigtable Data Boost overview](/bigtable/docs/data-boost-overview).

- In most cases, when you create an external table, set `readRowkeyAsString`
  and `ignoreUnspecifiedColumnFamilies` to **true**.

- When `ignoreUnspecifiedColumnFamilies` is true and you create a table
  definition that includes only some columns in a column family, only the
  selected columns are promoted as columns in the external table. Data in the
  unselected columns is grouped under a general `column` column.

To create your external table, follow the instructions at
[Create a Bigtable external table](/bigquery/docs/create-bigtable-external-table).
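The following GoogleSQL DDL statement is a minimal sketch of what such a table
definition can look like. The project `my-project`, instance `my-instance`,
table `events`, column family `stats`, app profile `my-data-boost-profile`, and
dataset `mydataset` are all hypothetical placeholders, and the exact option
syntax should be verified against the linked instructions.

```sql
-- Sketch only: define an external table over a hypothetical Bigtable table,
-- with the Data Boost app profile referenced in the table URI.
CREATE EXTERNAL TABLE mydataset.bigtable_events
OPTIONS (
  format = 'CLOUD_BIGTABLE',
  uris = ['https://googleapis.com/bigtable/projects/my-project/instances/my-instance/appProfiles/my-data-boost-profile/tables/events'],
  bigtable_options = """
  {
    "readRowkeyAsString": true,
    "ignoreUnspecifiedColumnFamilies": true,
    "columnFamilies": [
      {
        "familyId": "stats",
        "type": "STRING",
        "encoding": "TEXT"
      }
    ]
  }
  """
);
```

In this sketch, `readRowkeyAsString` exposes the row key as a string, and
`ignoreUnspecifiedColumnFamilies` limits the external table to the `stats`
column family, in line with the recommendations above.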
Query data in the external table
--------------------------------

After you have an external table for your Bigtable table, you can send SQL
queries to it using one of the following methods:

- At the command line, using [`bq`, the BigQuery CLI](/bigquery/docs/reference/bq-cli-reference)
- Calls to the [BigQuery API](/bigquery/docs/reference/libraries-overview)
- Any of the [BigQuery client libraries](/bigquery/docs/reference/libraries)

To learn how to compose and run a query, see
[Run a query](/bigquery/docs/running-queries). For Bigtable-specific
instructions, including required permissions and code samples, see
[Query Bigtable data](/bigquery/docs/external-data-bigtable).
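As an illustration, the following GoogleSQL query reads from the hypothetical
external table sketched earlier. The `stats` column and the row-key range are
placeholders, and the schema that is actually exposed depends on your table
definition.

```sql
-- Sketch: read a bounded range of rows from the external table.
-- The rowkey filter keeps the read selective rather than touching every row.
SELECT
  rowkey,
  stats
FROM mydataset.bigtable_events
WHERE rowkey BETWEEN 'user#1000' AND 'user#1999'
LIMIT 100;
```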
[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-09-04。"],[[["\u003cp\u003eBigQuery allows you to query and analyze Bigtable data using SQL, which is useful for data analysts, engineers, and scientists seeking to leverage Bigtable data.\u003c/p\u003e\n"],["\u003cp\u003eTo query Bigtable data, an external table must be created in BigQuery, which acts as a metadata pointer to the Bigtable table and must be in the same region as the Bigtable table.\u003c/p\u003e\n"],["\u003cp\u003eQueries to the Bigtable external table can be executed via the \u003ccode\u003ebq\u003c/code\u003e command-line tool, BigQuery API, or BigQuery client libraries, and can also be scheduled for recurring data imports.\u003c/p\u003e\n"],["\u003cp\u003eIt is advised to avoid full table scans when querying Bigtable data, as they increase CPU utilization, take longer, and require more BigQuery throughput; selective queries are preferred.\u003c/p\u003e\n"],["\u003cp\u003eWhen querying external tables there are costs for BigQuery, and an increase in Bigtable node requirement, you can optimize these costs by enabling Bigtable autoscaling, or by avoiding full table scans.\u003c/p\u003e\n"]]],[],null,["# Query and analyze Bigtable data with BigQuery\n\nQuery and analyze Bigtable data with\nBigQuery\n=============================================\n\nBigQuery is a managed data warehouse that can help you query and\nanalyze your Bigtable data using SQL queries.\nBigQuery is useful for data analysts, data engineers, data\nscientists, or anyone who wants to use Bigtable data to answer\nbusiness questions.\n\nBigQuery lets you query your Bigtable data\nfrom BigQuery. This feature is helpful when you want to join your\nBigtable data to BigQuery tables.\n\nThis document provides an overview of querying Bigtable data with\nBigQuery. Before you read this page you should be familiar with\nthe\n[Bigtable overview](/bigtable/docs/overview)\nand\n[BigQuery overview](/bigquery/docs/introduction)\n\nUsing BigQuery to query a Bigtable table is ideal\nfor tables that have the same column families and column qualifiers in every\nrow.\n| **Note:** If you want to export from BigQuery to Bigtable, you can set up a reverse extract-load-transform (ETL). For details, see [Export data to Bigtable (Reverse\n| ETL)](/bigquery/docs/export-to-bigtable) in the BigQuery documentation.\n\nExternal table creation\n-----------------------\n\nBefore you can query your Bigtable data, you or an\nadministrator in your organization must create an\n*external table* , which is a BigQuery table containing metadata\npointers to your Bigtable table that you send your queries to. For\nmore information about external tables, see [Introduction to external data\nsources](/bigquery/docs/external-data-sources).\n\nYou must create the external table in the same region as the\nBigtable table. 
Full table scans
----------------

If you use Data Boost to read your data, you don't need to avoid scanning the
entire table. If you use provisioned nodes for compute, however, you do.
Similar to when you send read requests directly to your Bigtable table, when
you query the external table without Data Boost, you generally want to avoid
full table scans. Full table scans increase CPU utilization and take
considerably longer than selective queries. They also require more BigQuery
throughput.

If your query involves all rows, it triggers a full table scan. On the other
hand, if you limit the query and request a range of rows or specified
non-contiguous rows, the entire table is not scanned. Examples in GoogleSQL
syntax of limiting the query include the following:

- `WHERE rowkey = "abc123"`
- `WHERE rowkey BETWEEN "abc123" PRECEDING AND "abc999" FOLLOWING`
- `WHERE rowkey > 999999` (if you read the row key as a string)

Joins
-----

If you plan to use a join to analyze your Bigtable table data in conjunction
with data from another source, you should create a subquery that extracts the
relevant fields from Bigtable for the planned join. For more best practices
when joining tables, see
[Optimize query computation](/bigquery/docs/best-practices-performance-compute).
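The following sketch shows that pattern. The `mydataset.orders` table, its
`customer_id`, `order_id`, and `order_total` columns, and the row-key range
are hypothetical, and the join assumes that the Bigtable row key matches the
other table's key.

```sql
-- Sketch: pull only the needed fields from the Bigtable external table in a
-- subquery, then join the result to a native BigQuery table.
SELECT
  o.order_id,
  o.order_total,
  b.rowkey
FROM mydataset.orders AS o
JOIN (
  SELECT rowkey
  FROM mydataset.bigtable_events
  WHERE rowkey BETWEEN 'user#1000' AND 'user#1999'
) AS b
ON o.customer_id = b.rowkey;
```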
Costs
-----

When you create an external table and query it, you are charged for BigQuery
costs and for an increase in the number of Bigtable nodes that are required to
handle the traffic. Because your external table is in the same region as your
Bigtable table, no network costs are incurred.

If you tend to run your queries during regular business hours, consider
enabling Bigtable autoscaling so that the number of nodes increases when
needed and then decreases when the jobs are complete. Autoscaling is also an
effective tactic if you run scheduled queries that don't have firm deadlines.

Another way to limit costs is to
[avoid a full table scan](#full-table-scans).

For more information on cost optimization for BigQuery, see
[Estimate and control costs](/bigquery/docs/best-practices-costs).

Limitations
-----------

The following limitation applies:

- Query results that contain serialized data with nested types, such as
  protocol buffers (protobufs) and Avro formats, might render incorrectly or
  be difficult to read in the Google Cloud console.

What's next
-----------

- [Learn the difference between external tables and federated queries.](/bigquery/docs/external-data-sources)
- [Create a Bigtable external table.](/bigquery/docs/create-bigtable-external-table)
- [Query Bigtable data stored in an external table.](/bigquery/docs/external-data-bigtable)
- [Export data from BigQuery to Bigtable.](/bigquery/docs/export-to-bigtable)
- [Build a real-time analytics database with Bigtable and BigQuery](/solutions/real-time-analytics-for-databases)