# Use customer-managed encryption keys
This sample shows how to use customer-managed encryption keys (CMEK) with a Dataflow pipeline. The pipeline reads query results from BigQuery and writes them to a BigQuery table, protecting both with a Cloud KMS key.
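Both samples below expect an existing Cloud KMS key in the `projects/<project>/locations/<kms-location>/keyRings/<kms-keyring>/cryptoKeys/<kms-key>` format. As a minimal sketch of the setup (the key ring and key names here are hypothetical placeholders, not part of the sample), you could create one with the gcloud CLI:

```sh
# Create a key ring and a symmetric encryption key (names are placeholders).
gcloud kms keyrings create my-keyring --location=us-central1
gcloud kms keys create my-key \
    --keyring=my-keyring \
    --location=us-central1 \
    --purpose=encryption

# The service accounts that use the key need the Encrypter/Decrypter role;
# see the CMEK documentation for the exact accounts to grant.
gcloud kms keys add-iam-policy-binding my-key \
    --keyring=my-keyring \
    --location=us-central1 \
    --member=serviceAccount:SERVICE_ACCOUNT_EMAIL \
    --role=roles/cloudkms.cryptoKeyEncrypterDecrypter
```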
Code sample
-----------

### Java
To authenticate to Dataflow, set up Application Default Credentials. For more information, see
[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).
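As a quick reference, and assuming the gcloud CLI is installed, Application Default Credentials for a local development environment are typically set up with:

```sh
# Opens a browser sign-in flow and stores local credentials
# that the Google Cloud client libraries pick up automatically.
gcloud auth application-default login
```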
[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],[],[],[],null,["# Use customer-managed encryption keys\n\nThis sample shows how to use encryption keys managed by the customer, with a Dataflow pipeline.\n\nCode sample\n-----------\n\n### Java\n\n\nTo authenticate to Dataflow, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n // Query from the NASA wildfires public dataset:\n // https://console.cloud.google.com/bigquery?p=bigquery-public-data&d=nasa_wildfire&t=past_week&page=table\n String query =\n \"SELECT latitude,longitude,acq_date,acq_time,bright_ti4,confidence \"\n + \"FROM `bigquery-public-data.nasa_wildfire.past_week` \"\n + \"LIMIT 10\";\n\n // Schema for the output BigQuery table.\n final TableSchema outputSchema = new TableSchema().setFields(Arrays.asList(\n new TableFieldSchema().setName(\"latitude\").setType(\"FLOAT\"),\n new TableFieldSchema().setName(\"longitude\").setType(\"FLOAT\"),\n new TableFieldSchema().setName(\"acq_date\").setType(\"DATE\"),\n new TableFieldSchema().setName(\"acq_time\").setType(\"TIME\"),\n new TableFieldSchema().setName(\"bright_ti4\").setType(\"FLOAT\"),\n new TableFieldSchema().setName(\"confidence\").setType(\"STRING\")));\n\n // Create the BigQuery options from the command line arguments.\n BigQueryKmsKeyOptions options = PipelineOptionsFactory.fromArgs(args)\n .withValidation().as(BigQueryKmsKeyOptions.class);\n\n // String outputBigQueryTable = \"\u003cproject\u003e:\u003cdataset\u003e.\u003ctable\u003e\";\n String outputBigQueryTable = options.getOutputBigQueryTable();\n\n // String kmsKey =\n // \"projects/\u003cproject\u003e/locations/\u003ckms-location\u003e/keyRings/\u003ckms-keyring\u003e/cryptoKeys/\u003ckms-key\u003e\";\n String kmsKey = options.getKmsKey();\n\n // Create and run an Apache Beam pipeline.\n Pipeline pipeline = Pipeline.create(options);\n pipeline\n .apply(\"Read from BigQuery with KMS key\",\n BigQueryIO.readTableRows()\n .fromQuery(query)\n .usingStandardSql()\n .withKmsKey(kmsKey))\n .apply(\"Write to BigQuery with KMS key\",\n BigQueryIO.writeTableRows()\n .to(outputBigQueryTable)\n .withSchema(outputSchema)\n .withWriteDisposition(WriteDisposition.WRITE_TRUNCATE)\n .withKmsKey(kmsKey));\n pipeline.run().waitUntilFinish();\n\n### Python\n\n\nTo authenticate to Dataflow, set up Application Default Credentials.\nFor more information, see\n\n[Set up authentication for a local development environment](/docs/authentication/set-up-adc-local-dev-environment).\n\n import apache_beam as beam\n\n # output_bigquery_table = '\u003cproject\u003e:\u003cdataset\u003e.\u003ctable\u003e'\n # kms_key = 'projects/\u003cproject\u003e/locations/\u003ckms-location\u003e/keyRings/\u003ckms-keyring\u003e/cryptoKeys/\u003ckms-key\u003e' # noqa\n # beam_args = [\n # '--project', 'your-project-id',\n # '--runner', 'DataflowRunner',\n # '--temp_location', 'gs://your-bucket/samples/dataflow/kms/tmp',\n # '--region', 'us-central1',\n # ]\n\n # Query from the NASA wildfires public dataset:\n # https://console.cloud.google.com/bigquery?p=bigquery-public-data&d=nasa_wildfire&t=past_week&page=table\n query = 
\"\"\"\n SELECT latitude,longitude,acq_date,acq_time,bright_ti4,confidence\n FROM `bigquery-public-data.nasa_wildfire.past_week`\n LIMIT 10\n \"\"\"\n\n # Schema for the output BigQuery table.\n schema = {\n \"fields\": [\n {\"name\": \"latitude\", \"type\": \"FLOAT\"},\n {\"name\": \"longitude\", \"type\": \"FLOAT\"},\n {\"name\": \"acq_date\", \"type\": \"DATE\"},\n {\"name\": \"acq_time\", \"type\": \"TIME\"},\n {\"name\": \"bright_ti4\", \"type\": \"FLOAT\"},\n {\"name\": \"confidence\", \"type\": \"STRING\"},\n ],\n }\n\n options = beam.options.pipeline_options.PipelineOptions(beam_args)\n with beam.Pipeline(options=options) as pipeline:\n (\n pipeline\n | \"Read from BigQuery with KMS key\"\n \u003e\u003e beam.io.Read(\n beam.io.BigQuerySource(\n query=query,\n use_standard_sql=True,\n kms_key=kms_key,\n )\n )\n | \"Write to BigQuery with KMS key\"\n \u003e\u003e beam.io.WriteToBigQuery(\n output_bigquery_table,\n schema=schema,\n write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE,\n kms_key=kms_key,\n )\n )\n\nWhat's next\n-----------\n\n\nTo search and filter code samples for other Google Cloud products, see the\n[Google Cloud sample browser](/docs/samples?product=dataflow)."]]