讀取到 TableRow 物件
透過集合功能整理內容
你可以依據偏好儲存及分類內容。
使用 BigQueryIO 連接器讀取 TableRow 物件。
深入探索
如需包含這個程式碼範例的詳細說明文件,請參閱下列內容:
程式碼範例
Java
如要向 Dataflow 進行驗證,請設定應用程式預設憑證。
詳情請參閱「為本機開發環境設定驗證」。
除非另有註明,否則本頁面中的內容是採用創用 CC 姓名標示 4.0 授權,程式碼範例則為阿帕契 2.0 授權。詳情請參閱《Google Developers 網站政策》。Java 是 Oracle 和/或其關聯企業的註冊商標。
[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],[],[[["\u003cp\u003eThe provided code demonstrates how to use the BigQueryIO connector to read data from a BigQuery table into \u003ccode\u003eTableRow\u003c/code\u003e objects within a Dataflow pipeline.\u003c/p\u003e\n"],["\u003cp\u003eApplication Default Credentials are required to authenticate to Dataflow and can be set up for a local development environment.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003eBigQueryIO.readTableRows()\u003c/code\u003e method is used to read table data, with the \u003ccode\u003efrom()\u003c/code\u003e method specifying the table in the format of project:dataset.table, and the \u003ccode\u003ewithMethod()\u003c/code\u003e method using direct read.\u003c/p\u003e\n"],["\u003cp\u003eAfter reading the table data, a \u003ccode\u003eMapElements\u003c/code\u003e transform is applied to process each \u003ccode\u003eTableRow\u003c/code\u003e object and access individual fields, such as user name and age.\u003c/p\u003e\n"]]],[],null,["# Read into TableRow objects\n\nUse the BigQueryIO connector to read into TableRow objects.\n\nExplore further\n---------------\n\n\nFor detailed documentation that includes this code sample, see the following:\n\n- [Read from BigQuery to Dataflow](/dataflow/docs/guides/read-from-bigquery)\n\nCode sample\n-----------\n\n### Java\n\n\nTo authenticate to Dataflow, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n import com.google.api.services.bigquery.model.TableRow;\n import org.apache.beam.sdk.Pipeline;\n import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;\n import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;\n import org.apache.beam.sdk.options.PipelineOptionsFactory;\n import org.apache.beam.sdk.transforms.MapElements;\n import org.apache.beam.sdk.values.TypeDescriptor;\n\n public class BiqQueryReadTableRows {\n public static void main(String[] args) {\n // Parse the pipeline options passed into the application. Example:\n // --projectId=$PROJECT_ID --datasetName=$DATASET_NAME --tableName=$TABLE_NAME\n // For more information, see https://beam.apache.org/documentation/programming-guide/#configuring-pipeline-options\n PipelineOptionsFactory.register(ExamplePipelineOptions.class);\n ExamplePipelineOptions options = PipelineOptionsFactory.fromArgs(args)\n .withValidation()\n .as(ExamplePipelineOptions.class);\n\n // Create a pipeline and apply transforms.\n Pipeline pipeline = Pipeline.create(options);\n pipeline\n // Read table data into TableRow objects.\n .apply(BigQueryIO.readTableRows()\n .from(String.format(\"%s:%s.%s\",\n options.getProjectId(),\n options.getDatasetName(),\n options.getTableName()))\n .withMethod(Method.DIRECT_READ)\n )\n // The output from the previous step is a PCollection\u003cTableRow\u003e.\n .apply(MapElements\n .into(TypeDescriptor.of(TableRow.class))\n // Use TableRow to access individual fields in the row.\n .via((TableRow row) -\u003e {\n var name = (String) row.get(\"user_name\");\n var age = (String) row.get(\"age\");\n System.out.printf(\"Name: %s, Age: %s%n\", name, age);\n return row;\n }));\n pipeline.run().waitUntilFinish();\n }\n }\n\nWhat's next\n-----------\n\n\nTo search and filter code samples for other Google Cloud products, see the\n[Google Cloud sample browser](/docs/samples?product=dataflow)."]]