[[["이해하기 쉬움","easyToUnderstand","thumb-up"],["문제가 해결됨","solvedMyProblem","thumb-up"],["기타","otherUp","thumb-up"]],[["이해하기 어려움","hardToUnderstand","thumb-down"],["잘못된 정보 또는 샘플 코드","incorrectInformationOrSampleCode","thumb-down"],["필요한 정보/샘플이 없음","missingTheInformationSamplesINeed","thumb-down"],["번역 문제","translationIssue","thumb-down"],["기타","otherDown","thumb-down"]],["최종 업데이트: 2025-08-06(UTC)"],[[["\u003cp\u003eManaged I/O for BigQuery supports dynamic table creation and dynamic destinations.\u003c/p\u003e\n"],["\u003cp\u003eFor reading, the connector utilizes the BigQuery Storage Read API, and for writing, it uses the BigQuery Storage Write API in exactly-once mode for unbounded sources or BigQuery file loads for bounded sources.\u003c/p\u003e\n"],["\u003cp\u003eThe connector requires Apache Beam SDK for Java version 2.61.0 or later.\u003c/p\u003e\n"],["\u003cp\u003eConfiguration options include specifying the BigQuery \u003ccode\u003etable\u003c/code\u003e, \u003ccode\u003ekms_key\u003c/code\u003e, \u003ccode\u003efields\u003c/code\u003e, \u003ccode\u003equery\u003c/code\u003e, \u003ccode\u003erow_restriction\u003c/code\u003e, and \u003ccode\u003etriggering_frequency\u003c/code\u003e depending on the operation.\u003c/p\u003e\n"],["\u003cp\u003eManaged I/O for BigQuery does not support automatic upgrades.\u003c/p\u003e\n"]]],[],null,["Managed I/O supports the following capabilities for BigQuery:\n\n- Dynamic table creation\n- [Dynamic destinations](/dataflow/docs/guides/write-to-iceberg#dynamic-destinations%22)\n- For reads, the connector uses the [BigQuery Storage Read API](/bigquery/docs/reference/storage).\n- For writes, the connector uses the following BigQuery methods:\n\n - If the source is unbounded and Dataflow is using [streaming exactly-once processing](/dataflow/docs/guides/streaming-modes), the connector performs writes to BigQuery, by using the [BigQuery Storage Write API](/bigquery/docs/write-api) with exactly-once delivery semantics.\n - If the source is unbounded and Dataflow is using [streaming at-least-once processing](/dataflow/docs/guides/streaming-modes), the connector performs writes to BigQuery, by using the [BigQuery Storage Write API](/bigquery/docs/write-api) with at-least-once delivery semantics.\n - If the source is bounded, the connector uses [BigQuery file loads](/bigquery/docs/batch-loading-data).\n\nRequirements\n\nThe following SDKs support managed I/O for BigQuery:\n\n- Apache Beam SDK for Java version 2.61.0 or later\n- Apache Beam SDK for Python version 2.61.0 or later\n\nConfiguration\n\nManaged I/O for BigQuery supports the following configuration\nparameters:\n\n`BIGQUERY` Read \n\n| Configuration | Type | Description |\n|-----------------|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| kms_key | `str` | Use this Cloud KMS key to encrypt your data |\n| query | `str` | The SQL query to be executed to read from the BigQuery table. |\n| row_restriction | `str` | Read only rows that match this filter, which must be compatible with Google standard SQL. This is not supported when reading via query. |\n| fields | `list[`str`]` | Read only the specified fields (columns) from a BigQuery table. Fields may not be returned in the order specified. If no value is specified, then all fields are returned. 
Example: \"col1, col2, col3\" |\n| table | `str` | The fully-qualified name of the BigQuery table to read from. Format: \\[${PROJECT}:\\]${DATASET}.${TABLE} |\n\n\u003cbr /\u003e\n\n`BIGQUERY` Write \n\n| Configuration | Type | Description |\n|------------------------------|---------------|-----------------------------------------------------------------------------------------------------------------------------------------------|\n| **table** | `str` | The bigquery table to write to. Format: \\[${PROJECT}:\\]${DATASET}.${TABLE} |\n| drop | `list[`str`]` | A list of field names to drop from the input record before writing. Is mutually exclusive with 'keep' and 'only'. |\n| keep | `list[`str`]` | A list of field names to keep in the input record. All other fields are dropped before writing. Is mutually exclusive with 'drop' and 'only'. |\n| kms_key | `str` | Use this Cloud KMS key to encrypt your data |\n| only | `str` | The name of a single record field that should be written. Is mutually exclusive with 'keep' and 'drop'. |\n| triggering_frequency_seconds | `int64` | Determines how often to 'commit' progress into BigQuery. Default is every 5 seconds. |\n\n\u003cbr /\u003e\n\nWhat's next\n\nFor more information and code examples, see the following topics:\n\n- [Read from BigQuery](/dataflow/docs/guides/read-from-bigquery)\n- [Write to BigQuery](/dataflow/docs/guides/write-to-bigquery)"]]