Last updated: 2025-09-04 (UTC)

Bigtable Beam connector
=======================

The Bigtable Beam connector (`BigtableIO`) is an open source [Apache
Beam](https://beam.apache.org/) I/O connector that can help you perform batch and streaming
operations on Bigtable data in a [pipeline](https://beam.apache.org/documentation/programming-guide/#creating-a-pipeline) using
[Dataflow](/dataflow/docs/overview).

If you are migrating from HBase to Bigtable or you are running
an application that uses the HBase API instead of the Bigtable
APIs, use the [Bigtable HBase Beam connector](/bigtable/docs/hbase-dataflow-java)
(`CloudBigtableIO`) instead of the connector described on this page.

Connector details
-----------------

The Bigtable Beam connector is a component of the
[Apache Beam GitHub repository](https://github.com/apache/beam). The Javadoc is
available at [`Class BigtableIO`](https://beam.apache.org/releases/javadoc/current/org/apache/beam/sdk/io/gcp/bigtable/BigtableIO.html).

Before you create a Dataflow pipeline, check [Apache Beam
runtime support](/dataflow/docs/support/beam-runtime-support) to make sure that you
are using a version of Java that is supported for Dataflow. Use
the most recent supported release of Apache Beam.

The Bigtable Beam connector is used in conjunction with the
Bigtable client for Java, a client library that calls the
Bigtable APIs. You write code to deploy a pipeline that uses the
connector to Dataflow, which handles the provisioning and
management of resources and assists with the scalability and reliability of data
processing.

For more information about the Apache Beam programming model, see the [Beam
documentation](https://beam.apache.org/get-started/beam-overview/).

Batch write flow control
------------------------

When you send batch writes (including delete requests) to a table using the
Bigtable Beam connector, you can enable *batch write flow control*. When
this feature is enabled, Bigtable automatically does the
following:

- Rate-limits traffic to avoid overloading your Bigtable cluster
- Ensures that the cluster is under enough load to trigger Bigtable
  autoscaling (if enabled), so that more nodes are automatically added to the
  cluster when needed

For more information, see [Batch write flow
control](/bigtable/docs/writes#flow-control).
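Flow control is enabled on the connector's write transform, which consumes a
`PCollection` of row keys paired with the mutations to apply to each row. The
following is a minimal sketch, not a complete production pipeline; the project,
instance, table, column family, and column names are placeholders, and the
single hard-coded row exists only to make the example self-contained.

```java
import java.util.Arrays;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.coders.IterableCoder;
import org.apache.beam.sdk.coders.KvCoder;
import org.apache.beam.sdk.extensions.protobuf.ByteStringCoder;
import org.apache.beam.sdk.extensions.protobuf.ProtoCoder;
import org.apache.beam.sdk.io.gcp.bigtable.BigtableIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.KV;

import com.google.bigtable.v2.Mutation;
import com.google.protobuf.ByteString;

public class BatchWriteFlowControlExample {
  public static void main(String[] args) {
    Pipeline pipeline =
        Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // One SetCell mutation for a single row; "cf" and "col" are placeholders.
    Mutation setCell = Mutation.newBuilder()
        .setSetCell(Mutation.SetCell.newBuilder()
            .setFamilyName("cf")
            .setColumnQualifier(ByteString.copyFromUtf8("col"))
            .setValue(ByteString.copyFromUtf8("value"))
            .setTimestampMicros(System.currentTimeMillis() * 1000))
        .build();

    pipeline
        // BigtableIO.write() expects KV<row key, Iterable<Mutation>> elements.
        .apply("CreateRows",
            Create.of(KV.of(
                    ByteString.copyFromUtf8("row-key-1"),
                    (Iterable<Mutation>) Arrays.asList(setCell)))
                .withCoder(KvCoder.of(
                    ByteStringCoder.of(),
                    IterableCoder.of(ProtoCoder.of(Mutation.class)))))
        .apply("WriteToBigtable",
            BigtableIO.write()
                .withProjectId("my-project")    // placeholder project ID
                .withInstanceId("my-instance")  // placeholder instance ID
                .withTableId("my-table")        // placeholder table ID
                .withFlowControl(true));        // enable batch write flow control

    pipeline.run().waitUntilFinish();
  }
}
```

To run the sketch on Dataflow rather than the default direct runner, pass the
usual pipeline options on the command line (for example,
`--runner=DataflowRunner` along with your project and region).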
For a code sample, see [Enable
batch write flow control](/bigtable/docs/writing-data#batch-write-flow-control).

What's next
-----------

- [Read an overview of Bigtable write requests.](/bigtable/docs/writes)
- [Review a list of Dataflow templates that work with
  Bigtable.](/bigtable/docs/dataflow-templates)
- [Bigtable Kafka Connect sink connector](/bigtable/docs/kafka-sink-connector)