Managed I/O supports the following capabilities for BigQuery:

- Dynamic table creation
- [Dynamic destinations](/dataflow/docs/guides/write-to-iceberg#dynamic-destinations)
- For reads, the connector uses the [BigQuery Storage Read API](/bigquery/docs/reference/storage).
- For writes, the connector uses the following BigQuery methods:

  - If the source is unbounded and Dataflow is using [streaming exactly-once processing](/dataflow/docs/guides/streaming-modes), the connector writes to BigQuery by using the [BigQuery Storage Write API](/bigquery/docs/write-api) with exactly-once delivery semantics.
  - If the source is unbounded and Dataflow is using [streaming at-least-once processing](/dataflow/docs/guides/streaming-modes), the connector writes to BigQuery by using the [BigQuery Storage Write API](/bigquery/docs/write-api) with at-least-once delivery semantics.
  - If the source is bounded, the connector uses [BigQuery file loads](/bigquery/docs/batch-loading-data).

Requirements

The following SDKs support managed I/O for BigQuery:

- Apache Beam SDK for Java, version 2.61.0 or later
- Apache Beam SDK for Python, version 2.61.0 or later
Configuration

Managed I/O for BigQuery supports the following configuration parameters:
`BIGQUERY` Read

| Configuration | Type | Description |
|-----------------|-------------|-------------|
| kms_key | `str` | Use this Cloud KMS key to encrypt your data. |
| query | `str` | The SQL query to execute to read from the BigQuery table. |
| row_restriction | `str` | Read only rows that match this filter, which must be compatible with Google standard SQL. This is not supported when reading with a query. |
| fields | `list[str]` | Read only the specified fields (columns) from a BigQuery table. Fields might not be returned in the order specified. If no value is specified, all fields are returned. Example: "col1, col2, col3" |
| table | `str` | The fully qualified name of the BigQuery table to read from. Format: [${PROJECT}:]${DATASET}.${TABLE} |
`BIGQUERY` Write

| Configuration | Type | Description |
|------------------------------|-------------|-------------|
| **table** | `str` | The BigQuery table to write to. Format: [${PROJECT}:]${DATASET}.${TABLE} |
| drop | `list[str]` | A list of field names to drop from the input record before writing. Mutually exclusive with 'keep' and 'only'. |
| keep | `list[str]` | A list of field names to keep in the input record. All other fields are dropped before writing. Mutually exclusive with 'drop' and 'only'. |
| kms_key | `str` | Use this Cloud KMS key to encrypt your data. |
| only | `str` | The name of a single record field that should be written. Mutually exclusive with 'keep' and 'drop'. |
| triggering_frequency_seconds | `int64` | Determines how often to 'commit' progress into BigQuery. The default is every 5 seconds. |
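For illustration, here is a matching minimal Python sketch of a write using these parameters. The table name is a placeholder, and the input uses schema-aware `beam.Row` elements, which managed writes expect; as above, this is a sketch under the assumption that `managed.BIGQUERY` is available in your SDK version.

```python
# Minimal sketch: write rows to a BigQuery table with managed I/O,
# dropping one field before the write. Requires apache-beam>=2.61.0.
# The table name is a placeholder.
import apache_beam as beam
from apache_beam.transforms import managed

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "CreateRows" >> beam.Create(
            [beam.Row(col1="a", col2=1), beam.Row(col1="b", col2=2)])
        | "WriteTable" >> managed.Write(
            managed.BIGQUERY,
            config={
                "table": "my-project:my_dataset.my_table",
                "drop": ["col2"],  # mutually exclusive with "keep" and "only"
            })
    )
```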
What's next

For more information and code examples, see the following topics:
[[["Fácil de comprender","easyToUnderstand","thumb-up"],["Resolvió mi problema","solvedMyProblem","thumb-up"],["Otro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Información o código de muestra incorrectos","incorrectInformationOrSampleCode","thumb-down"],["Faltan la información o los ejemplos que necesito","missingTheInformationSamplesINeed","thumb-down"],["Problema de traducción","translationIssue","thumb-down"],["Otro","otherDown","thumb-down"]],["Última actualización: 2025-09-10 (UTC)"],[[["\u003cp\u003eManaged I/O for BigQuery supports dynamic table creation and dynamic destinations.\u003c/p\u003e\n"],["\u003cp\u003eFor reading, the connector utilizes the BigQuery Storage Read API, and for writing, it uses the BigQuery Storage Write API in exactly-once mode for unbounded sources or BigQuery file loads for bounded sources.\u003c/p\u003e\n"],["\u003cp\u003eThe connector requires Apache Beam SDK for Java version 2.61.0 or later.\u003c/p\u003e\n"],["\u003cp\u003eConfiguration options include specifying the BigQuery \u003ccode\u003etable\u003c/code\u003e, \u003ccode\u003ekms_key\u003c/code\u003e, \u003ccode\u003efields\u003c/code\u003e, \u003ccode\u003equery\u003c/code\u003e, \u003ccode\u003erow_restriction\u003c/code\u003e, and \u003ccode\u003etriggering_frequency\u003c/code\u003e depending on the operation.\u003c/p\u003e\n"],["\u003cp\u003eManaged I/O for BigQuery does not support automatic upgrades.\u003c/p\u003e\n"]]],[],null,["Managed I/O supports the following capabilities for BigQuery:\n\n- Dynamic table creation\n- [Dynamic destinations](/dataflow/docs/guides/write-to-iceberg#dynamic-destinations%22)\n- For reads, the connector uses the [BigQuery Storage Read API](/bigquery/docs/reference/storage).\n- For writes, the connector uses the following BigQuery methods:\n\n - If the source is unbounded and Dataflow is using [streaming exactly-once processing](/dataflow/docs/guides/streaming-modes), the connector performs writes to BigQuery, by using the [BigQuery Storage Write API](/bigquery/docs/write-api) with exactly-once delivery semantics.\n - If the source is unbounded and Dataflow is using [streaming at-least-once processing](/dataflow/docs/guides/streaming-modes), the connector performs writes to BigQuery, by using the [BigQuery Storage Write API](/bigquery/docs/write-api) with at-least-once delivery semantics.\n - If the source is bounded, the connector uses [BigQuery file loads](/bigquery/docs/batch-loading-data).\n\nRequirements\n\nThe following SDKs support managed I/O for BigQuery:\n\n- Apache Beam SDK for Java version 2.61.0 or later\n- Apache Beam SDK for Python version 2.61.0 or later\n\nConfiguration\n\nManaged I/O for BigQuery supports the following configuration\nparameters:\n\n`BIGQUERY` Read \n\n| Configuration | Type | Description |\n|-----------------|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| kms_key | `str` | Use this Cloud KMS key to encrypt your data |\n| query | `str` | The SQL query to be executed to read from the BigQuery table. |\n| row_restriction | `str` | Read only rows that match this filter, which must be compatible with Google standard SQL. This is not supported when reading via query. |\n| fields | `list[`str`]` | Read only the specified fields (columns) from a BigQuery table. Fields may not be returned in the order specified. 
If no value is specified, then all fields are returned. Example: \"col1, col2, col3\" |\n| table | `str` | The fully-qualified name of the BigQuery table to read from. Format: \\[${PROJECT}:\\]${DATASET}.${TABLE} |\n\n\u003cbr /\u003e\n\n`BIGQUERY` Write \n\n| Configuration | Type | Description |\n|------------------------------|---------------|-----------------------------------------------------------------------------------------------------------------------------------------------|\n| **table** | `str` | The bigquery table to write to. Format: \\[${PROJECT}:\\]${DATASET}.${TABLE} |\n| drop | `list[`str`]` | A list of field names to drop from the input record before writing. Is mutually exclusive with 'keep' and 'only'. |\n| keep | `list[`str`]` | A list of field names to keep in the input record. All other fields are dropped before writing. Is mutually exclusive with 'drop' and 'only'. |\n| kms_key | `str` | Use this Cloud KMS key to encrypt your data |\n| only | `str` | The name of a single record field that should be written. Is mutually exclusive with 'keep' and 'drop'. |\n| triggering_frequency_seconds | `int64` | Determines how often to 'commit' progress into BigQuery. Default is every 5 seconds. |\n\n\u003cbr /\u003e\n\nWhat's next\n\nFor more information and code examples, see the following topics:\n\n- [Read from BigQuery](/dataflow/docs/guides/read-from-bigquery)\n- [Write to BigQuery](/dataflow/docs/guides/write-to-bigquery)"]]