Integrations with Bigtable
==========================

Last updated: 2025-08-27 (UTC).

This page describes integrations between Bigtable and other products and
services.

Google Cloud services
---------------------

This section describes the Google Cloud services that
Bigtable integrates with.

### BigQuery

[BigQuery](/bigquery) is Google's fully managed, petabyte-scale,
low-cost analytics data warehouse. 
You can use BigQuery with Bigtable for the following purposes:

- You can create a BigQuery external table and then use it to query your
  Bigtable table and join the data to other BigQuery tables. For more
  information, see [Query Bigtable data](/bigquery/external-data-bigtable).

- You can export your BigQuery data to a Bigtable table by using reverse
  ETL (RETL) from BigQuery to Bigtable. For more information, see
  [Export data to Bigtable](/bigquery/docs/export-to-bigtable).

### Cloud Asset Inventory

Cloud Asset Inventory, which provides inventory services based on a time series
database, supports and returns Bigtable resource types. For a complete list,
see [Supported resource types](/asset-inventory/docs/supported-asset-types#supported_resource_types).

### Dataplex Universal Catalog

Dataplex Universal Catalog and Data Catalog (deprecated) automatically catalog
metadata about Bigtable resources. Cataloged information about your data can
help facilitate analysis, data reuse, application development, and data
management. For more information, see
[Manage data assets using Data Catalog](/bigtable/docs/manage-data-assets-using-data-catalog).

### Dataflow

[Dataflow](/dataflow) is a cloud service and programming model for big data
processing. Dataflow supports both batch and streaming processing. You can use
Dataflow to process data that is stored in Bigtable or to store the output of
your Dataflow pipeline. You can also use Dataflow templates to
[export](/dataflow/docs/templates/provided-templates#cloudbigtabletosequencefile) and
[import](/dataflow/docs/templates/provided-templates#sequencefiletocloudbigtable) your
data as Avro, Parquet, or SequenceFile files.

| **Note:** The Python client library for Bigtable supports Dataflow writes but not reads.

To get started, see [Bigtable Beam connector](/bigtable/docs/beam-connector).

You can also use Bigtable as a key-value lookup to enrich the data
in a pipeline. 
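As a rough sketch of that key-value lookup pattern, the snippet below enriches event records by row key. An in-memory dict stands in for Bigtable point reads, and every name and field (`customer_id`, `segment`, and so on) is an illustrative assumption, not a real API.

```python
# Sketch of key-value enrichment: each event is joined with reference data
# fetched by its row key. A plain dict stands in here for a Bigtable point
# read; all names and fields are hypothetical.

def enrich_event(event, lookup_table):
    """Attach reference attributes to an event via a row-key lookup."""
    row_key = event["customer_id"]             # key used for the point read
    reference = lookup_table.get(row_key, {})  # stand-in for a Bigtable read
    return {**event, **reference}              # merged, enriched record

# Reference data keyed the way a Bigtable table is keyed: one entry per row key.
customers = {
    "c001": {"segment": "premium", "region": "emea"},
    "c002": {"segment": "standard", "region": "apac"},
}

events = [
    {"customer_id": "c001", "amount": 12.50},
    {"customer_id": "c002", "amount": 3.75},
]

enriched = [enrich_event(e, customers) for e in events]
```

In a real pipeline the lookup would be a transform that issues Bigtable reads per element rather than a local dict.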
For an overview, see [Enrich streaming data](/dataflow/docs/guides/enrichment).
For a tutorial, see [Use Apache Beam and Bigtable to enrich
data](/dataflow/docs/notebooks/bigtable_enrichment_transform).

### Dataproc

[Dataproc](/dataproc) provides Apache Hadoop and related products as a managed
service in the cloud. With Dataproc, you can run Hadoop jobs that read from and
write to Bigtable.

For an example of a Hadoop MapReduce job that uses Bigtable, see the
`/java/dataproc-wordcount` directory in the GitHub repository
[GoogleCloudPlatform/cloud-bigtable-examples](https://github.com/GoogleCloudPlatform/cloud-bigtable-examples/tree/master/java/dataproc-wordcount).

### Vertex AI Vector Search

[Vertex AI Vector Search](/vertex-ai/docs/vector-search/overview) is a
technology that can search billions of items for those that are semantically
similar or related. It's useful for implementing recommendation engines,
chatbots, and text classification.

You can use Bigtable to store vector embeddings, export them into a
Vector Search index, and then query the index for similar items. For a tutorial
that demonstrates a sample workflow, see [Bigtable to Vertex AI Vector Search
Export](https://github.com/GoogleCloudPlatform/workflows-demos/tree/master/bigtable-ai/vertex-vector-search/workflows#readme)
in the `workflows-demos` GitHub repository.

You can also push streaming updates to keep the vector search index in sync
with Bigtable in real time. For more information, see the
[Bigtable change streams to Vector Search
template](/dataflow/docs/guides/templates/provided/bigtable-change-streams-to-vector-search).

Big Data
--------

This section describes Big Data products that Bigtable integrates with.

### Apache Beam

Apache Beam is a unified model for defining batch and streaming data-parallel
processing pipelines. 
The [Bigtable Beam connector](/bigtable/docs/beam-connector) (`BigtableIO`)
helps you perform batch and streaming operations on Bigtable data in a
pipeline.

For a tutorial showing how to use the Bigtable Beam connector to deploy a data
pipeline to Dataflow, see [Process a Bigtable change
stream](/bigtable/docs/change-streams-tutorial).

### Apache Hadoop

[Apache Hadoop](https://hadoop.apache.org/) is a framework that enables
distributed processing of large data sets across clusters of computers. You can
use [Dataproc](/dataproc) to create a Hadoop cluster, then run MapReduce jobs
that read from and write to Bigtable.

For an example of a Hadoop MapReduce job that uses Bigtable, see the
`/java/dataproc-wordcount` directory in the GitHub repository
[GoogleCloudPlatform/cloud-bigtable-examples](https://github.com/GoogleCloudPlatform/cloud-bigtable-examples/tree/master/java/dataproc-wordcount).

### StreamSets Data Collector

StreamSets Data Collector is a data-streaming application that you can
[configure to write data to Bigtable](https://streamsets.com/documentation/datacollector/latest/help/datacollector/UserGuide/Destinations/Bigtable.html).
StreamSets provides a Bigtable library in its GitHub repository at
[streamsets/datacollector](https://github.com/streamsets/datacollector).

Graph databases
---------------

This section describes graph databases that Bigtable integrates with.

### HGraphDB

[HGraphDB](https://github.com/rayokota/hgraphdb) is a client layer for using
Apache HBase or Bigtable as a graph database. It implements the [Apache
TinkerPop 3](https://tinkerpop.apache.org/) interfaces.

For more information about running HGraphDB with Bigtable support, see the
[HGraphDB documentation](https://github.com/rayokota/hgraphdb#support-for-google-cloud-bigtable).

| **Note:** This integration was developed independently by a third party.
Google is not affiliated with, and does not endorse, this integration.

### JanusGraph

[JanusGraph](https://janusgraph.org/) is a scalable graph database. It is
optimized for storing and querying graphs containing hundreds of billions of
vertices and edges.

For more information about running JanusGraph with Bigtable support, see
[Running JanusGraph with Bigtable](/solutions/running-janusgraph-with-bigtable)
or the [JanusGraph documentation](https://docs.janusgraph.org/storage-backend/bigtable/).

Infrastructure management
-------------------------

This section describes infrastructure management tools that Bigtable
integrates with.

### Pivotal Cloud Foundry

Pivotal Cloud Foundry is an application development and deployment platform
that offers the ability to [bind an application to
Bigtable](https://docs.pivotal.io/partners/gcp-sb/using.html).

### Terraform

Terraform is an open source tool that codifies APIs into declarative
configuration files. These files can be shared among team members, treated as
code, edited, reviewed, and versioned.

For more information about using Bigtable with Terraform, see
[Bigtable Instance](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/bigtable_instance) and
[Bigtable Table](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/bigtable_table) in the
Terraform documentation.

Time-series databases and monitoring
------------------------------------

This section describes time-series databases and monitoring tools that
Bigtable integrates with.

### OpenTSDB

[OpenTSDB](http://opentsdb.net/) is a time-series database that can use
Bigtable for storage. The [OpenTSDB documentation](http://opentsdb.net/docs/build/html/user_guide/backends/bigtable.html)
provides information to help you get started.
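Time-series stores backed by Bigtable generally pack a metric identifier and a time bucket into the row key so that measurements for one series sort into a contiguous key range. The sketch below shows one hypothetical key scheme to illustrate that pattern; it is not OpenTSDB's actual encoding, which is its own compact binary format.

```python
# Hypothetical time-series row-key scheme, illustrating the general pattern
# used by Bigtable-backed metric stores. This is NOT OpenTSDB's real key
# layout; the format and names here are invented for illustration.

def make_row_key(metric: str, timestamp: int, bucket_seconds: int = 3600) -> str:
    """Build a row key of the form metric#bucket, grouping points per hour.

    Bucketing the timestamp bounds the number of cells per row while keeping
    every point for one metric in a contiguous, prefix-scannable key range.
    """
    bucket = timestamp - (timestamp % bucket_seconds)  # round down to the hour
    return f"{metric}#{bucket:010d}"

# Points for the same metric and hour share a row key, so a prefix scan on
# "cpu.load#" retrieves the whole series in key order.
k1 = make_row_key("cpu.load", 1_700_000_100)
k2 = make_row_key("cpu.load", 1_700_000_200)  # same hour bucket as k1
k3 = make_row_key("cpu.load", 1_700_003_700)  # falls in the next hour bucket
```

Within a row, each measurement would then land in its own cell (for example, keyed by the offset within the bucket), which is the part a production schema tunes most carefully.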