AstraDB to BigQuery template

The AstraDB to BigQuery template is a batch pipeline that reads records from AstraDB and writes them to BigQuery.

If the destination table doesn't exist in BigQuery, the pipeline creates a table with the following values:

  • The Dataset ID is inherited from the Cassandra keyspace.
  • The Table ID is inherited from the Cassandra table.

The schema of the destination table is inferred from the source Cassandra table (see the sketch after this list).

  • List and Set are mapped to BigQuery REPEATED fields.
  • Map is mapped to a BigQuery RECORD field.
  • All other types are mapped to BigQuery fields with the corresponding types.
  • Cassandra user-defined types (UDTs) and tuple data types are not supported.
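
For example, given a hypothetical Cassandra table with a scalar column, a list column, and a map column, the inferred schema would look roughly like the following sketch, built with the BigQuery API model classes. The table and column names are illustrative, and the key/value sub-fields for the map follow the usual Beam-to-BigQuery convention; this is a sketch of the mapping rules above, not the template's own code.

import com.google.api.services.bigquery.model.TableFieldSchema;
import com.google.api.services.bigquery.model.TableSchema;
import java.util.List;

public class InferredSchemaSketch {

  // Hypothetical Cassandra source table:
  //   CREATE TABLE my_keyspace.my_table (
  //     id text PRIMARY KEY,
  //     tags list<text>,
  //     attributes map<text, text>
  //   );
  static TableSchema inferredSchema() {
    return new TableSchema()
        .setFields(
            List.of(
                // Scalar types map to BigQuery fields with the corresponding type.
                new TableFieldSchema().setName("id").setType("STRING"),
                // List and Set map to REPEATED fields.
                new TableFieldSchema().setName("tags").setType("STRING").setMode("REPEATED"),
                // Map maps to a RECORD field of key/value pairs.
                new TableFieldSchema()
                    .setName("attributes")
                    .setType("RECORD")
                    .setMode("REPEATED")
                    .setFields(
                        List.of(
                            new TableFieldSchema().setName("key").setType("STRING"),
                            new TableFieldSchema().setName("value").setType("STRING")))));
  }
}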

Pipeline requirements

  • An AstraDB account with a token

Template parameters

Required parameters

  • astraToken: The token value or secret resource ID (see the sketch after this list). For example, AstraCS:abcdefghij.
  • astraDatabaseId: The database unique identifier (UUID). For example, cf7af129-d33a-498f-ad06-d97a6ee6eb7.
  • astraKeyspace: The name of the Cassandra keyspace inside the Astra database.
  • astraTable: The name of the table inside the Cassandra database. For example, my_table.
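
Because astraToken accepts either form, the template decides at runtime whether to call Secret Manager. A minimal sketch of that dispatch, mirroring the parseAstraCredentials method in the template source below:

class AstraTokenResolver {
  // A literal token starts with "AstraCS"; any other value is treated as a
  // Secret Manager secret resource ID, for example
  // projects/<project>/secrets/<secret>/versions/<version>.
  static String resolveAstraToken(String astraToken) {
    if (!astraToken.startsWith("AstraCS")) {
      return com.google.cloud.teleport.v2.utils.SecretManagerUtils.getSecret(astraToken);
    }
    return astraToken;
  }
}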

Optional parameters

  • astraQuery: The query to use to filter rows instead of reading the whole table.
  • astraDatabaseRegion: If not provided, a default value is chosen, which is useful with multi-region databases.
  • minTokenRangesCount: The minimal number of splits to use to distribute the query.
  • outputTableSpec: The BigQuery table location to write the output to. Use the format <PROJECT_ID>:<DATASET_NAME>.<TABLE_NAME>. The table's schema must match the input objects. If not set, a default is derived from the Cassandra source, as shown in the sketch after this list.
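
When outputTableSpec is omitted, the destination is derived from the Cassandra source; this is also how the table-creation values listed earlier are chosen. A minimal sketch, mirroring the parseBigQueryDestinationTable method in the template source below:

class DefaultTableSpec {
  // When outputTableSpec is not set, the Cassandra keyspace becomes the dataset
  // name and the Cassandra table becomes the table name.
  static String defaultTableSpec(String projectId, String keyspace, String table) {
    return projectId + ":" + keyspace + "." + table; // e.g. my-project:my_keyspace.my_table
  }
}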

Run the template

Console
  1. Go to the Dataflow Create job from template page.
  2. In the Job name field, enter a unique job name.
  3. Optional: For Regional endpoint, select a value from the drop-down menu. The default region is us-central1.

    For a list of regions where you can run a Dataflow job, see Dataflow locations.

  4. From the Dataflow template drop-down menu, select the AstraDB to BigQuery template.
  5. In the provided parameter fields, enter your parameter values.
  6. Click Run job.

gcloud

In your shell or terminal, run the template:

gcloud dataflow flex-template run JOB_NAME \
    --template-file-gcs-location=gs://dataflow-templates-REGION_NAME/VERSION/flex/AstraDB_To_BigQuery \
    --project=PROJECT_ID \
    --region=REGION_NAME \
    --parameters \
       astraToken=ASTRA_TOKEN,\
       astraDatabaseId=ASTRA_DATABASE_ID,\
       astraKeyspace=ASTRA_KEYSPACE,\
       astraTable=ASTRA_TABLE

Replace the following:

  • JOB_NAME: a unique job name of your choice
  • VERSION: the version of the template that you want to use

    You can use the following values:

      • latest to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/latest/
      • a version name, to use a specific version of the template, which can be found nested in the respective dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/

  • REGION_NAME: the region where you want to deploy your Dataflow job, for example us-central1
  • ASTRA_TOKEN: the Astra token
  • ASTRA_DATABASE_ID: the database identifier
  • ASTRA_KEYSPACE: the Cassandra keyspace
  • ASTRA_TABLE: the Cassandra table

API

To run the template using the REST API, send an HTTP POST request. For more information about the API and its authorization scopes, see projects.templates.launch.

POST https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/flexTemplates:launch
{
   "launchParameter": {
     "jobName": "JOB_NAME",
     "parameters": {
       "astraToken": "ASTRA_TOKEN",
       "astraDatabaseId": "ASTRA_DATABASE_ID",
       "astraKeyspace": "ASTRA_KEYSPACE",
       "astraTable": "ASTRA_TABLE",
     },
     "containerSpecGcsPath": "gs://dataflow-templates-LOCATION/VERSION/flex/AstraDB_To_BigQuery",
     "environment": { "maxWorkers": "10" }
  }
}

Replace the following:

  • PROJECT_ID: the Google Cloud project ID where you want to run the Dataflow job
  • JOB_NAME: a unique job name of your choice
  • VERSION: the version of the template that you want to use

    You can use the following values:

      • latest to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-LOCATION/latest/
      • a version name, to use a specific version of the template, which can be found nested in the respective dated parent folder in the bucket: gs://dataflow-templates-LOCATION/

  • LOCATION: the region where you want to deploy your Dataflow job, for example us-central1
  • ASTRA_TOKEN: the Astra token
  • ASTRA_DATABASE_ID: the database identifier
  • ASTRA_KEYSPACE: the Cassandra keyspace
  • ASTRA_TABLE: the Cassandra table
Template source code

Java
/*
 * Copyright (C) 2023 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not
 * use this file except in compliance with the License. You may obtain a copy of
 * the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations under
 * the License.
 */
package com.google.cloud.teleport.v2.astradb.templates;

import com.datastax.oss.driver.api.core.CqlSession;
import com.dtsx.astra.sdk.db.DatabaseClient;
import com.google.api.services.bigquery.model.TableReference;
import com.google.api.services.bigquery.model.TableSchema;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.DatasetId;
import com.google.cloud.bigquery.DatasetInfo;
import com.google.cloud.teleport.metadata.Template;
import com.google.cloud.teleport.metadata.TemplateCategory;
import com.google.cloud.teleport.v2.astradb.options.AstraDbToBigQueryOptions;
import com.google.cloud.teleport.v2.astradb.transforms.AstraDbToBigQueryMappingFn;
import com.google.cloud.teleport.v2.common.UncaughtExceptionLogger;
import com.google.cloud.teleport.v2.utils.SecretManagerUtils;
import java.util.AbstractMap;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.coders.SerializableCoder;
import org.apache.beam.sdk.io.astra.db.AstraDbIO;
import org.apache.beam.sdk.io.astra.db.CqlSessionHolder;
import org.apache.beam.sdk.io.astra.db.mapping.AstraDbMapper;
import org.apache.beam.sdk.io.astra.db.mapping.BeamRowDbMapperFactoryFn;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryUtils;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.SerializableFunction;
import org.apache.beam.sdk.values.Row;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * The {@link AstraDbToBigQuery} pipeline is a batch pipeline which ingests data from AstraDB and
 * outputs the resulting records to BigQuery.
 *
 * <p>Check out <a
 * href="https://github.com/GoogleCloudPlatform/DataflowTemplates/blob/main/v2/astradb-to-bigquery/README_AstraDB_to_BigQuery.md">README</a>
 * for instructions on how to use or modify this template.
 */
@Template(
    name = "AstraDB_To_BigQuery",
    category = TemplateCategory.BATCH,
    displayName = "AstraDB to BigQuery",
    description = {
      "The AstraDB to BigQuery template is a batch pipeline that reads records from AstraDB and writes them to BigQuery.",
      "If the destination table doesn't exist in BigQuery, the pipeline creates a table with the following values:\n"
          + "- The `Dataset ID` is inherited from the Cassandra keyspace.\n"
          + "- The `Table ID` is inherited from the Cassandra table.\n",
      "The schema of the destination table is inferred from the source Cassandra table.\n"
          + "- `List` and `Set` are mapped to BigQuery `REPEATED` fields.\n"
          + "- `Map` are mapped to BigQuery `RECORD` fields.\n"
          + "- All other types are mapped to BigQuery fields with the corresponding types.\n"
          + "- Cassandra user-defined types (UDTs) and tuple data types are not supported."
    },
    optionsClass = AstraDbToBigQuery.Options.class,
    flexContainerName = "astradb-to-bigquery",
    documentation =
        "https://cloud.google.com/dataflow/docs/guides/templates/provided/astradb-to-bigquery",
    contactInformation = "https://cloud.google.com/support",
    preview = true,
    requirements = {"AstraDB account with a token"})
public class AstraDbToBigQuery {

  /** Logger for the class. */
  private static final Logger LOGGER = LoggerFactory.getLogger(AstraDbToBigQuery.class);

  /** Default minimal number of token ranges (splits), used when minTokenRangesCount is not set. */
  public static final int DEFAULT_TOKEN_RANGE = 18;

  /**
   * Options for the pipeline.
   *
   * <p>Inherits standard configuration options.
   */
  public interface Options
      extends PipelineOptions,
          AstraDbToBigQueryOptions.AstraDbSourceOptions,
          AstraDbToBigQueryOptions.BigQueryWriteOptions {}

  /** Main operations. */
  public static void main(String[] args) {
    UncaughtExceptionLogger.register();
    LOGGER.info("Starting pipeline");

    try {

      Options options = PipelineOptionsFactory.fromArgs(args).withValidation().as(Options.class);
      LOGGER.debug("Pipeline Arguments (options) validated");

      // --------------------------------
      // AstraDbIO.Read<Row>
      // --------------------------------

      // Credentials are read from secrets manager
      AbstractMap.SimpleImmutableEntry<String, byte[]> astraCredentials =
          parseAstraCredentials(options);
      LOGGER.debug("Astra Credentials parsed");

      // Map Cassandra Table Schema into BigQuery Table Schema
      SerializableFunction<AstraDbIO.Read<?>, TableSchema> bigQuerySchemaFactory =
          new AstraDbToBigQueryMappingFn(options.getAstraKeyspace(), options.getAstraTable());
      LOGGER.debug("Schema Mapper has been initialized");

      // Map Cassandra Rows into (Apache) Beam Rows (DATA)
      SerializableFunction<CqlSession, AstraDbMapper<Row>> beamRowMapperFactory =
          new BeamRowDbMapperFactoryFn(options.getAstraKeyspace(), options.getAstraTable());
      LOGGER.debug("Row Mapper has been initialized");

      // Distribute reads across all available Cassandra nodes
      int minimalTokenRangesCount =
          (options.getMinTokenRangesCount() == null)
              ? DEFAULT_TOKEN_RANGE
              : options.getMinTokenRangesCount();

      // Source: AstraDb
      AstraDbIO.Read<Row> astraSource =
          AstraDbIO.<Row>read()
              .withToken(astraCredentials.getKey())
              .withSecureConnectBundle(astraCredentials.getValue())
              .withKeyspace(options.getAstraKeyspace())
              .withTable(options.getAstraTable())
              .withMinNumberOfSplits(minimalTokenRangesCount)
              .withMapperFactoryFn(beamRowMapperFactory)
              .withCoder(SerializableCoder.of(Row.class))
              .withEntity(Row.class);
      LOGGER.debug("AstraDb Source initialization [OK]");

      // --------------------------------
      //  BigQueryIO.Write<Row>
      // --------------------------------

      TableReference bqTableRef = parseBigQueryDestinationTable(options);
      createBigQueryDestinationTableIfNotExist(options, bqTableRef);
      LOGGER.debug("BigQuery Sink Table has been initialized");

      // Sink: BigQuery
      BigQueryIO.Write<Row> bigQuerySink =
          BigQueryIO.<Row>write()
              .to(bqTableRef)
              // Specialized function reading cassandra source table and mapping to BigQuery Schema
              .withSchema(bigQuerySchemaFactory.apply(astraSource))
              // Provided by google, convert a Beam Row to a BigQuery TableRow
              .withFormatFunction(row -> row != null ? BigQueryUtils.toTableRow(row) : null)
              // Table Will be created if not exist
              .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
              .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND);
      LOGGER.debug("BigQuery Sink initialization [OK]");

      // --------------------------------
      //  Pipeline
      // --------------------------------

      Pipeline astraDbToBigQueryPipeline = Pipeline.create(options);
      astraDbToBigQueryPipeline
          .apply("Read From Astra", astraSource)
          .apply("Write To BigQuery", bigQuerySink);
      astraDbToBigQueryPipeline.run();

    } finally {
      // Cassandra Connection is stateful and needs to be closed
      CqlSessionHolder.cleanup();
    }
  }

  /**
   * Parse the Astra credentials. The token is resolved through Secret Manager when a secret
   * resource ID is provided; the secure connect bundle is downloaded through the Astra Devops API
   * (SecretManagerUtils only supports String secrets, so it cannot hold the binary bundle).
   *
   * @param options pipeline options
   * @return a pair with the token and the secure bundle
   */
  private static AbstractMap.SimpleImmutableEntry<String, byte[]> parseAstraCredentials(
      Options options) {

    String astraToken = options.getAstraToken();
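    // A literal token starts with "AstraCS"; any other value is treated as a
    // Secret Manager secret resource ID and resolved to the actual token value.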
    if (!astraToken.startsWith("AstraCS")) {
      astraToken = SecretManagerUtils.getSecret(options.getAstraToken());
    }
    LOGGER.info("Astra Token is parsed, value={}", astraToken.substring(0, 10) + "...");
    /*
     * Accessing the devops Api to retrieve the secure bundle.
     */
    DatabaseClient astraDbClient = new DatabaseClient(astraToken, options.getAstraDatabaseId());
    if (!astraDbClient.exist()) {
      throw new RuntimeException(
          "Astra Database does not exist, please check your Astra Token and Database ID");
    }
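    // Download the default secure connect bundle; when astraDatabaseRegion is
    // set, replace it with the region-specific bundle (multi-region databases).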
    byte[] astraSecureBundle = astraDbClient.downloadDefaultSecureConnectBundle();
    if (!StringUtils.isEmpty(options.getAstraDatabaseRegion())) {
      astraSecureBundle =
          astraDbClient.downloadSecureConnectBundle(options.getAstraDatabaseRegion());
    }
    LOGGER.info("Astra Bundle is parsed, length={}", astraSecureBundle.length);
    return new AbstractMap.SimpleImmutableEntry<>(astraToken, astraSecureBundle);
  }

  /**
   * Create the BigQuery table reference (provided, or derived from the Cassandra table name).
   *
   * @param options pipeline options
   * @return the big query table reference
   */
  private static TableReference parseBigQueryDestinationTable(Options options) {
    /*
     * The bigQueryOutputTableSpec argument is the BigQuery table specification. This parameter
     * is optional. If not set, the table specification is built from the Cassandra source table
     * attributes: keyspace = dataset name, table = table name.
     */
    String bigQueryOutputTableSpec = options.getOutputTableSpec();
    if (StringUtils.isEmpty(bigQueryOutputTableSpec)) {
      bigQueryOutputTableSpec =
          options.getProject() + ":" + options.getAstraKeyspace() + "." + options.getAstraTable();
    }
    TableReference bigQueryTableReference = BigQueryUtils.toTableReference(bigQueryOutputTableSpec);
    LOGGER.info("Big Query table spec has been set to {}", bigQueryOutputTableSpec);
    return bigQueryTableReference;
  }

  /**
   * Create destination dataset and tables if needed (schema mapped from Cassandra).
   *
   * @param options pipeline options
   * @param bqTableRef big query table reference
   */
  private static void createBigQueryDestinationTableIfNotExist(
      Options options, TableReference bqTableRef) {
    BigQuery bigquery =
        BigQueryOptions.newBuilder().setProjectId(options.getProject()).build().getService();
    if (null
        == bigquery.getDataset(
            DatasetId.of(bqTableRef.getProjectId(), bqTableRef.getDatasetId()))) {
      LOGGER.info(
          "Dataset was not found: creating DataSet {} in region {}",
          bqTableRef.getDatasetId(),
          options.getWorkerRegion());
      bigquery.create(
          DatasetInfo.newBuilder(bqTableRef.getDatasetId())
              .setLocation(options.getWorkerRegion())
              .build());
      LOGGER.debug("Dataset has been created [OK]");
    } else {
      LOGGER.info("Dataset {} already exist", bqTableRef.getDatasetId());
    }
  }
}

What's next