The following SDKs support managed I/O for BigQuery:

- Apache Beam SDK for Java version 2.61.0 or later
- Apache Beam SDK for Python version 2.61.0 or later
Configuration
Managed I/O for BigQuery supports the following configuration parameters:
`BIGQUERY` Read

| Configuration | Type | Description |
|---|---|---|
| kms_key | `str` | Use this Cloud KMS key to encrypt your data. |
| query | `str` | The SQL query to execute to read from the BigQuery table. |
| row_restriction | `str` | Read only rows that match this filter, which must be compatible with Google standard SQL. This is not supported when reading via query. |
| fields | `list[str]` | Read only the specified fields (columns) from a BigQuery table. Fields might not be returned in the order specified. If no value is specified, all fields are returned. Example: "col1, col2, col3" |
| table | `str` | The fully qualified name of the BigQuery table to read from. Format: [${PROJECT}:]${DATASET}.${TABLE} |
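For illustration, the following is a minimal Python sketch of a managed read using these parameters. It assumes the Beam `managed` transform API available in SDK 2.61.0 or later; the project, dataset, table, and column names are placeholders.

```python
import apache_beam as beam

# A minimal sketch of a managed BigQuery read (assumes Beam SDK 2.61.0+).
# The table and column names below are placeholders.
with beam.Pipeline() as pipeline:
    (
        pipeline
        | "ReadTable" >> beam.managed.Read(
            beam.managed.BIGQUERY,
            config={
                "table": "my-project:my_dataset.my_table",
                # Optional: read only these columns; the returned
                # field order is not guaranteed.
                "fields": ["col1", "col2"],
            })
        | "PrintRows" >> beam.Map(print)
    )
```

To read with a SQL query instead, replace `table` with `query` in the config; note from the table above that `row_restriction` is not supported when reading via query.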
`BIGQUERY` Write

| Configuration | Type | Description |
|---|---|---|
| **table** | `str` | The BigQuery table to write to. Format: [${PROJECT}:]${DATASET}.${TABLE} |
| drop | `list[str]` | A list of field names to drop from the input record before writing. Mutually exclusive with `keep` and `only`. |
| keep | `list[str]` | A list of field names to keep in the input record. All other fields are dropped before writing. Mutually exclusive with `drop` and `only`. |
| kms_key | `str` | Use this Cloud KMS key to encrypt your data. |
| only | `str` | The name of a single record field that should be written. Mutually exclusive with `keep` and `drop`. |
| triggering_frequency_seconds | `int64` | Determines how often progress is "committed" into BigQuery. The default is every 5 seconds. |
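Similarly, a hypothetical write pipeline could be configured as in the sketch below. The table name and record fields are placeholders, and the input records are assumed to carry a Beam schema (for example, `beam.Row` elements).

```python
import apache_beam as beam

# A minimal sketch of a managed BigQuery write (assumes Beam SDK 2.61.0+).
# The table name and fields below are placeholders.
with beam.Pipeline() as pipeline:
    (
        pipeline
        | "CreateRows" >> beam.Create([
            beam.Row(col1="a", col2=1),
            beam.Row(col1="b", col2=2),
        ])
        | "WriteTable" >> beam.managed.Write(
            beam.managed.BIGQUERY,
            config={
                "table": "my-project:my_dataset.my_table",
                # Optional: drop 'col2' before writing; 'drop' is
                # mutually exclusive with 'keep' and 'only'.
                "drop": ["col2"],
            })
    )
```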
What's next
For more information and code examples, see the following topics:

- [Read from BigQuery](/dataflow/docs/guides/read-from-bigquery)
- [Write to BigQuery](/dataflow/docs/guides/write-to-bigquery)
[[["Fácil de entender","easyToUnderstand","thumb-up"],["Meu problema foi resolvido","solvedMyProblem","thumb-up"],["Outro","otherUp","thumb-up"]],[["Difícil de entender","hardToUnderstand","thumb-down"],["Informações incorretas ou exemplo de código","incorrectInformationOrSampleCode","thumb-down"],["Não contém as informações/amostras de que eu preciso","missingTheInformationSamplesINeed","thumb-down"],["Problema na tradução","translationIssue","thumb-down"],["Outro","otherDown","thumb-down"]],["Última atualização 2025-09-10 UTC."],[[["\u003cp\u003eManaged I/O for BigQuery supports dynamic table creation and dynamic destinations.\u003c/p\u003e\n"],["\u003cp\u003eFor reading, the connector utilizes the BigQuery Storage Read API, and for writing, it uses the BigQuery Storage Write API in exactly-once mode for unbounded sources or BigQuery file loads for bounded sources.\u003c/p\u003e\n"],["\u003cp\u003eThe connector requires Apache Beam SDK for Java version 2.61.0 or later.\u003c/p\u003e\n"],["\u003cp\u003eConfiguration options include specifying the BigQuery \u003ccode\u003etable\u003c/code\u003e, \u003ccode\u003ekms_key\u003c/code\u003e, \u003ccode\u003efields\u003c/code\u003e, \u003ccode\u003equery\u003c/code\u003e, \u003ccode\u003erow_restriction\u003c/code\u003e, and \u003ccode\u003etriggering_frequency\u003c/code\u003e depending on the operation.\u003c/p\u003e\n"],["\u003cp\u003eManaged I/O for BigQuery does not support automatic upgrades.\u003c/p\u003e\n"]]],[],null,["Managed I/O supports the following capabilities for BigQuery:\n\n- Dynamic table creation\n- [Dynamic destinations](/dataflow/docs/guides/write-to-iceberg#dynamic-destinations%22)\n- For reads, the connector uses the [BigQuery Storage Read API](/bigquery/docs/reference/storage).\n- For writes, the connector uses the following BigQuery methods:\n\n - If the source is unbounded and Dataflow is using [streaming exactly-once processing](/dataflow/docs/guides/streaming-modes), the connector performs writes to BigQuery, by using the [BigQuery Storage Write API](/bigquery/docs/write-api) with exactly-once delivery semantics.\n - If the source is unbounded and Dataflow is using [streaming at-least-once processing](/dataflow/docs/guides/streaming-modes), the connector performs writes to BigQuery, by using the [BigQuery Storage Write API](/bigquery/docs/write-api) with at-least-once delivery semantics.\n - If the source is bounded, the connector uses [BigQuery file loads](/bigquery/docs/batch-loading-data).\n\nRequirements\n\nThe following SDKs support managed I/O for BigQuery:\n\n- Apache Beam SDK for Java version 2.61.0 or later\n- Apache Beam SDK for Python version 2.61.0 or later\n\nConfiguration\n\nManaged I/O for BigQuery supports the following configuration\nparameters:\n\n`BIGQUERY` Read \n\n| Configuration | Type | Description |\n|-----------------|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| kms_key | `str` | Use this Cloud KMS key to encrypt your data |\n| query | `str` | The SQL query to be executed to read from the BigQuery table. |\n| row_restriction | `str` | Read only rows that match this filter, which must be compatible with Google standard SQL. This is not supported when reading via query. |\n| fields | `list[`str`]` | Read only the specified fields (columns) from a BigQuery table. Fields may not be returned in the order specified. 
If no value is specified, then all fields are returned. Example: \"col1, col2, col3\" |\n| table | `str` | The fully-qualified name of the BigQuery table to read from. Format: \\[${PROJECT}:\\]${DATASET}.${TABLE} |\n\n\u003cbr /\u003e\n\n`BIGQUERY` Write \n\n| Configuration | Type | Description |\n|------------------------------|---------------|-----------------------------------------------------------------------------------------------------------------------------------------------|\n| **table** | `str` | The bigquery table to write to. Format: \\[${PROJECT}:\\]${DATASET}.${TABLE} |\n| drop | `list[`str`]` | A list of field names to drop from the input record before writing. Is mutually exclusive with 'keep' and 'only'. |\n| keep | `list[`str`]` | A list of field names to keep in the input record. All other fields are dropped before writing. Is mutually exclusive with 'drop' and 'only'. |\n| kms_key | `str` | Use this Cloud KMS key to encrypt your data |\n| only | `str` | The name of a single record field that should be written. Is mutually exclusive with 'keep' and 'drop'. |\n| triggering_frequency_seconds | `int64` | Determines how often to 'commit' progress into BigQuery. Default is every 5 seconds. |\n\n\u003cbr /\u003e\n\nWhat's next\n\nFor more information and code examples, see the following topics:\n\n- [Read from BigQuery](/dataflow/docs/guides/read-from-bigquery)\n- [Write to BigQuery](/dataflow/docs/guides/write-to-bigquery)"]]