Set up prebuilt reports in BigQuery for vaulted backups

This page describes how to set up and view prebuilt reports in BigQuery for resources protected with a backup plan.

To set up the reports, you perform a one-time activity of creating a log sink that streams logging data to BigQuery, and then run a prebuilt script. To set up and view prebuilt reports in BigQuery for resources protected with backup templates in the management console, see Set up prebuilt reports in BigQuery.

Required IAM roles

To view prebuilt reports in BigQuery for vaulted backups, you need the following IAM roles. Learn how to grant an IAM role.

  - To create the sink and BigQuery dataset in the Google Cloud console: Logs Configuration Writer (roles/logging.configWriter) or Logging Admin (roles/logging.admin), and BigQuery Data Editor (roles/bigquery.dataEditor)
  - To create the sink and BigQuery dataset from the Google Cloud CLI: Owner (roles/owner)
  - To write custom queries or download queries: BigQuery Admin (roles/bigquery.admin)
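If you prefer to grant these roles from the command line, they can be attached with `gcloud projects add-iam-policy-binding`. The following is a sketch only: `PROJECT_ID` and `user:alice@example.com` are placeholders, and your organization may instead grant roles through groups or infrastructure-as-code.

```shell
# Sketch: grant the roles needed to create the sink and dataset in the console.
# PROJECT_ID and user:alice@example.com are placeholders -- substitute your own.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:alice@example.com" \
    --role="roles/logging.configWriter"

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:alice@example.com" \
    --role="roles/bigquery.dataEditor"
```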

Create a sink and route logs to BigQuery for vaulted backups

BigQuery stores only the logs that are generated after the log sink is created. Logs generated before the log sink was created aren't available in BigQuery. You can create the log sink from the Google Cloud console or the Google Cloud CLI.

To create a sink and route logs to BigQuery for vaulted backups, follow these steps:

Console

  1. In the Google Cloud console, go to the Log Router page.

    Go to Log Router

  2. Select an existing Google Cloud project.
  3. Click Create sink.
  4. In the Sink details panel, complete the following fields:
    1. Sink name: Enter bdr_report_sink as the sink name. You must use the sink name bdr_report_sink to identify Backup and DR reports among other sinks.
    2. Sink description: Describe the purpose or use case for the sink.
  5. In the Sink destination panel, do the following:

    1. In the Select sink service menu, select the BigQuery dataset sink service.
    2. In Select BigQuery dataset, select Create new BigQuery dataset.
    3. On the Create dataset page, do the following:
      1. For Dataset ID, enter BackupandDR_reports as the dataset name to distinguish it from other datasets. Don't change the dataset name from BackupandDR_reports.
      2. For Location type, choose a geographic location for the dataset. After a dataset is created, the location can't be changed.
      3. Optional: If you want tables in this dataset to expire, select Enable table expiration, then specify the Default maximum table age in days.
      4. Click Create dataset.
  6. In the Choose logs to include in sink panel, do the following:

    1. In the Build inclusion filter field, enter the following filter expression that matches the log entries you want to include:

       logName=~"projects/PROJECT_ID/logs/backupdr.googleapis.com%2Fgcb_*"
      
    2. To confirm that you entered the correct filter, select Preview logs. This opens the Logs Explorer in a new tab with the filter pre-populated.

  7. Optional: In the Choose logs to filter out of sink panel, do the following:

    1. In the Exclusion filter name field, enter a name.
    2. In the Build an exclusion filter field, enter a filter expression that matches the log entries you want to exclude. You can also use the sample function to select a portion of the log entries to exclude.

  8. Select Create sink.

    You can view the dataset in BigQuery Studio.
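The inclusion filter above uses `=~`, a regular-expression match on the log name. As a rough local illustration (the log names below are made up for the example, with my-project standing in for PROJECT_ID), bash's own regex operator shows what the pattern does and doesn't match:

```shell
# Illustrative only: test hypothetical log names against the inclusion pattern.
# "my-project" is a placeholder project ID; real entries come from Cloud Logging.
pattern='projects/my-project/logs/backupdr.googleapis.com%2Fgcb_'

log_backupdr="projects/my-project/logs/backupdr.googleapis.com%2Fgcb_backup_recovery_jobs"
log_other="projects/my-project/logs/compute.googleapis.com%2Factivity"

[[ "$log_backupdr" =~ $pattern ]] && RESULT1="match" || RESULT1="no_match"
[[ "$log_other" =~ $pattern ]] && RESULT2="match" || RESULT2="no_match"

echo "backupdr log: $RESULT1"   # matches the filter, so it is routed
echo "other log:   $RESULT2"    # does not match, so it is not routed
```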

gcloud

  1. Go to Activate Cloud Shell, and then click Open Editor.
  2. Click the icon, select File, and then select New Text File.
  3. Copy and paste the following script:

       #!/bin/bash
        echo "This script will set up a log sink for BackupDR reports to be available in BigQuery"
    
        # Get the default project ID
        DEFAULT_PROJECT_ID=$(gcloud config get-value project)
        read -p "Enter Project ID (default: $DEFAULT_PROJECT_ID, press Enter to continue):" PROJECT_ID
    
        # Use default if no input is provided
        if [ -z "$PROJECT_ID" ]; then
          PROJECT_ID=$DEFAULT_PROJECT_ID
        fi
          # Set the project ID
        result=$(gcloud config set project $PROJECT_ID)
        if [ $? -ne 0 ]; then
          echo "Error setting the project to $PROJECT_ID"
          exit 1
        fi
          # --- Check if BigQuery API is already enabled, enable if not ---
        echo "Checking if BigQuery API is enabled..."
        if gcloud services list | grep "bigquery.googleapis.com" >/dev/null; then
          echo "BigQuery API is already enabled for $PROJECT_ID"
        else
          echo "For logs to be available in BigQuery, we need to enable BigQuery service in the project if not done already. This might mean additional costs incurred. Please check the associated costs before proceeding."
          read -p "Do you want to continue(Y/N)?" continue
          if [ "$continue" = "y" ] || [ "$continue" = "Y" ]; then
            echo "Enabling BigQuery API..."
            result=$(gcloud services enable bigquery.googleapis.com --project $PROJECT_ID)
            if [ $? -eq 0 ]; then
              echo "Successfully enabled BigQuery api for $PROJECT_ID"
            else
              echo "Error in setting up the BigQuery api for the project. $result"
              exit 1
            fi
          else
            exit 0
          fi
        fi
          # --- Check if BigQuery data set already exists, create if not ---
        echo "Checking if BigQuery data set exists..."
        if bq ls | grep "bdr_reports" >/dev/null; then
          echo "Dataset bdr_reports already exists for $PROJECT_ID"
        else
      echo "Creating BigQuery dataset bdr_reports..."
          # --- Get dataset location from user (default: US) ---
          read -p "Enter dataset location (default: US, press Enter to use): " DATASET_LOCATION
          if [ -z "$DATASET_LOCATION" ]; then
            DATASET_LOCATION="US"
          fi
          # --- Get table expiration in days from user (default: no expiration) ---
          read -p "Enter default table expiration in days (default: no expiration, press Enter to skip): " TABLE_EXPIRATION_DAYS
    
          # Calculate table expiration in seconds if provided
          if [ -n "$TABLE_EXPIRATION_DAYS" ]; then
            TABLE_EXPIRATION_SECONDS=$((TABLE_EXPIRATION_DAYS * 24 * 60 * 60))
            EXPIRATION_FLAG="--default_table_expiration $TABLE_EXPIRATION_SECONDS"
          else
            EXPIRATION_FLAG=""
          fi
          result=$(bq --location=$DATASET_LOCATION mk $EXPIRATION_FLAG bdr_reports)
          if [ $? -eq 0 ]; then
            echo "Created a BigQuery dataset bdr_reports successfully."
          else
            echo ""
            echo "ERROR : Failed to create the BigQuery dataset."
            echo $result
            exit 1
          fi
        fi
          # --- Check if Log Sink already exists, create if not ---
        echo "Checking if Log Sink exists..."
        if gcloud logging sinks list | grep "bdr_report_sink" >/dev/null; then
      echo "Log Sink bdr_report_sink already exists for $PROJECT_ID"
        else
          log_filter="projects/$PROJECT_ID/logs/backupdr.googleapis.com%2Fbdr_*"
      echo "Creating log sink bdr_report_sink..."
          result=$(gcloud logging sinks create bdr_report_sink bigquery.googleapis.com/projects/$PROJECT_ID/datasets/bdr_reports --log-filter="logName=~\"$log_filter\"")
          if [ $? -eq 0 ]; then
        echo "Created log sink bdr_report_sink successfully."
          else
            echo ""
            echo "ERROR : Failed to create logsink."
            exit 1
          fi
        fi
    
        # --- Add IAM Policy binding for Cloud logging service account to write logs to BigQuery ---
        result=$(gcloud projects add-iam-policy-binding $(gcloud projects describe $PROJECT_ID --format="value(projectNumber)") --member=serviceAccount:service-$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")@gcp-sa-logging.iam.gserviceaccount.com --role=roles/bigquery.dataEditor --condition=None)
        if [ $? -eq 0 ]; then
          echo "Added permission for cloud logging to write to BigQuery datasets"
        else
          echo ""
          echo "ERROR : Failed to add permissions for cloud logging to write to BigQuery datasets. Please make sure that you have Owner access rights in order to be able to proceed."
          exit 1
        fi
    
    echo "Setup complete. The logs for the project $PROJECT_ID will now start flowing to BigQuery."
        exit 0
    
    
  4. Save the file with a name that uses the Bash file extension, for example, script.sh.

  5. Run the bash command with the file you just created, for example, bash script.sh.

    You can view the created dataset in BigQuery Studio.
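In the script above, the table expiration you enter in days is converted to the seconds that the `bq mk --default_table_expiration` flag expects. The conversion is ordinary shell arithmetic; for example, a 30-day expiration becomes:

```shell
# Convert an expiration given in days into the seconds expected by
# `bq mk --default_table_expiration` (30 days -> 2592000 seconds).
TABLE_EXPIRATION_DAYS=30
TABLE_EXPIRATION_SECONDS=$((TABLE_EXPIRATION_DAYS * 24 * 60 * 60))
echo "$TABLE_EXPIRATION_SECONDS"   # prints 2592000
```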

Set up prebuilt reports for vaulted backups

You can set up the prebuilt reports by running the following script against the dataset that logs are routed to through the log sink.

The script adds the following prebuilt reports:

  - Backup jobs summary (backup_jobs_summary)
  - Backup jobs details (backup_jobs_details)
  - Successful backup jobs (successful_backup_jobs)
  - Failed backup jobs (failed_backup_jobs)
  - Skipped backup jobs (skipped_backup_jobs)
  - Restore jobs summary (restore_jobs_summary)
  - Restore jobs details (restore_jobs_details)
  - Protected resource details (protected_resource_details)
  - Backup vault consumption (backup_vault_consumption)
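After the script finishes, the report views it creates can be queried like any other BigQuery view. As an illustrative sketch (assuming the script's default dataset name bdr_reports and the backup_jobs_summary view it defines):

```shell
# Sketch: query one of the prebuilt report views from the CLI.
# Assumes the script's defaults: dataset bdr_reports, view backup_jobs_summary.
bq query --use_legacy_sql=false \
    'SELECT * FROM `bdr_reports.backup_jobs_summary` LIMIT 10'
```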

To set up prebuilt reports in BigQuery for vaulted backups, follow these steps:

gcloud

  1. Go to Activate Cloud Shell, and then click Open Editor.
  2. Click the icon, select File, and then select New Text File.
  3. Copy the following prebuilt reports script and paste it into the Google Cloud CLI:

      #!/bin/bash
    
      # Default config values. Modify only if required.
      DATASET="bdr_reports"
      JOBS_TABLE_PREFIX="backupdr_googleapis_com_bdr_backup_restore_jobs_"
      PROTECTED_RESOURCE_TABLE_PREFIX="backupdr_googleapis_com_bdr_protected_resource_"
      BACKUP_VAULT_TABLE_PREFIX="backupdr_googleapis_com_bdr_backup_vault_details_"
      VIEW_FROM_DATE_SUFFIX="20000101"  #Use an old enough date so that the views should contain all the data.
    
      # Function to check if a routine exists
      routine_exists() {
      local routine_name="$1"
      bq --project_id $PROJECT_ID ls --routines "$DATASET" | grep "$routine_name" >/dev/null
      }
    
      # Function to check if a view exists
      view_exists() {
      local view_name="$1"
      check_view=$(bq --project_id $PROJECT_ID query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from $DATASET.INFORMATION_SCHEMA.VIEWS WHERE table_name='$view_name'" | tail -n +2 | tr -d ' ')
      if [ "$check_view" == "true" ]; then
        return 0
      fi
      return 1
      }
    
      # Function to create a routine
      create_routine() {
      local routine_name="$1"
      local routine_type="$2"
      local definition_body="$3"
        if routine_exists "$routine_name"; then
        echo "Routine $routine_name already exists. Skipping."
        return 0
      fi
      # Construct the routine body JSON
      local routine_body='
        {
        "definitionBody": "'"$definition_body"'",
        "routineType": "'"$routine_type"'",
        "routineReference": {
            "projectId": "'"$PROJECT_ID"'",
            "datasetId": "'"$DATASET"'",
            "routineId": "'"$routine_name"'"
        },
        "arguments": [
            {
            "name": "FROM_DATE",
            "dataType": {
                "typeKind": "STRING"
            }
            },
            {
            "name": "TO_DATE",
            "dataType": {
                "typeKind": "STRING"
            }
            }
        ]
        }
      '
    
      # Construct the curl command with the routine body
      local curl_command=(
        curl -s --request POST \
          "https://bigquery.googleapis.com/bigquery/v2/projects/$PROJECT_ID/datasets/$DATASET/routines" \
          --header "Authorization: Bearer $ACCESS_TOKEN" \
          --header 'Accept: application/json' \
          --header 'Content-Type: application/json' \
          --data "$routine_body" --compressed
      )
        # Execute the curl command and capture the result
      local result=$( "${curl_command[@]}" )
    
      # Check if creation was successful
      if echo "$result" | grep '"etag":' > /dev/null; then
        echo "Routine '$routine_name' created successfully."
      else
        echo "$result"
        exit 1
      fi
      }
    
      # Function to create a view
      create_view() {
      local view_name="$1"
      local parent_routine_name="$1"
      local view_query='
        SELECT * FROM `'$PROJECT_ID'.'$DATASET'.'$parent_routine_name'`("'$VIEW_FROM_DATE_SUFFIX'", FORMAT_DATE("%Y%m%d", CURRENT_DATE()));
      '
    
      # Check if view exists.
      if routine_exists "$parent_routine_name"; then
        view_exists "$view_name"
        if [ $? -eq 0 ]; then
          echo "View $view_name already exists. Skipping."
          return 0
        fi
        # Create the view if it does not exist
        bq --project_id $PROJECT_ID mk --use_legacy_sql=false --expiration 0 --view "$view_query" $DATASET.$view_name
      else
        echo "Routine $parent_routine_name is not present. Skipping the creation of $view_name."
        return 1
      fi
      }
    
      # Create all jobs related routines and views
      create_jobs_routines_and_views() {
      # Backup Restore Jobs
      create_routine "backup_restore_jobs" "TABLE_VALUED_FUNCTION" \
      '
      WITH
        dat AS (
        SELECT
          *,
          LAG(insertId, 1, '"'"'placeholder_id'"'"') OVER (ORDER BY timestamp, insertId) = insertId AS `insert_id_is_same`,
          LAG(timestamp, 1, TIMESTAMP_SECONDS(1)) OVER (ORDER BY timestamp, insertId) = timestamp AS `timestamp_is_same`
        FROM
          `'$PROJECT_ID'.'$DATASET'.'$JOBS_TABLE_PREFIX'*`
        WHERE
          _TABLE_SUFFIX >= FROM_DATE
          AND _TABLE_SUFFIX <= TO_DATE
          AND jsonpayload_v1_bdrbackuprestorejoblog.resourceType IN ('"'"'Compute Engine'"'"') ),
        deduped AS (
        SELECT
          * EXCEPT (insert_id_is_same,
            timestamp_is_same),
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).endtime) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).endtime), NULL) AS job_end_time,
    
        FROM
          dat
        WHERE
          NOT (insert_id_is_same
            AND timestamp_is_same) ),
        latest AS (
        SELECT
          jsonpayload_v1_bdrbackuprestorejoblog.jobId AS job_id,
          MAX(CASE jsonpayload_v1_bdrbackuprestorejoblog.jobStatus
              WHEN '"'"'RUNNING'"'"' THEN 1
              WHEN '"'"'SKIPPED'"'"' THEN 2
              WHEN '"'"'SUCCESSFUL'"'"' THEN 3
              WHEN '"'"'FAILED'"'"' THEN 4
          END
            ) AS final_status_code
        FROM
          deduped
        GROUP BY
          jsonpayload_v1_bdrbackuprestorejoblog.jobid),
        filled_latest AS (
        SELECT
          jsonpayload_v1_bdrbackuprestorejoblog.jobId AS job_id,
          jsonpayload_v1_bdrbackuprestorejoblog.jobStatus AS final_status,
          PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrbackuprestorejoblog.startTime) AS parsed_start_time,
          PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', deduped.job_end_time) AS parsed_end_time,
          TIMESTAMP_DIFF(PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', deduped.job_end_time), PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrbackuprestorejoblog.startTime), HOUR) AS duration_hours,
          MOD(TIMESTAMP_DIFF(PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', deduped.job_end_time), PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrbackuprestorejoblog.startTime), MINUTE), 60) AS duration_minutes,
          MOD(TIMESTAMP_DIFF(PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', deduped.job_end_time), PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrbackuprestorejoblog.startTime), SECOND), 60) AS duration_seconds,
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).restoreresourcename) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).restoreresourcename), NULL) AS restore_resource_name,
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errortype) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errortype), NULL) AS error_type,
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errormessage) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errormessage), NULL) AS error_message,
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupname) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupname), NULL) AS backup_name,
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupplanname) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupplanname), NULL) AS backup_plan_name,
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backuprule) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backuprule), NULL) AS backup_rule,
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupconsistencytime) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupconsistencytime), NULL) AS backup_consistency_time,
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).incrementalbackupsizegib) = '"'"'number'"'"', FLOAT64(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).incrementalbackupsizegib), NULL) AS incremental_backup_size_in_gib,
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errorcode) = '"'"'number'"'"', FLOAT64(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errorcode), NULL) AS error_code,
    
        FROM
          deduped
        JOIN
          latest
        ON
          latest.job_id = jsonpayload_v1_bdrbackuprestorejoblog.jobId
          AND jsonpayload_v1_bdrbackuprestorejoblog.jobStatus = (
            CASE final_status_code
              WHEN 1 THEN '"'"'RUNNING'"'"'
              WHEN 2 THEN '"'"'SKIPPED'"'"'
              WHEN 3 THEN '"'"'SUCCESSFUL'"'"'
              WHEN 4 THEN '"'"'FAILED'"'"'
          END
            ) )
      SELECT
        FORMAT_TIMESTAMP('"'"'%m%d'"'"', filled_latest.parsed_start_time, '"'"'UTC'"'"') || '"'"'-'"'"' || ARRAY_TO_STRING(REGEXP_EXTRACT_ALL(jsonpayload_v1_bdrbackuprestorejoblog.resourceType, '"'"'\\\\b[A-Z]'"'"'), '"'"''"'"') || '"'"'-'"'"' || (CASE jsonpayload_v1_bdrbackuprestorejoblog.jobCategory
            WHEN '"'"'ON_DEMAND_BACKUP'"'"' THEN '"'"'B'"'"'
            WHEN '"'"'SCHEDULED_BACKUP'"'"' THEN '"'"'B'"'"'
            WHEN '"'"'RESTORE'"'"' THEN '"'"'R'"'"'
        END
          ) || '"'"'-'"'"' || REPLACE(FORMAT_TIMESTAMP('"'"'%H%M%E3S'"'"', filled_latest.parsed_start_time, '"'"'UTC'"'"'), '"'"'.'"'"', '"'"''"'"') AS `job_name`,
        REGEXP_EXTRACT(jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceName, '"'"'([^/]+)$'"'"') AS `resource_name`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceName, '"'"'/'"'"')) < 2, NULL, SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceName,'"'"'/'"'"')[1]) AS `resource_project_name`,
        jsonpayload_v1_bdrbackuprestorejoblog.jobStatus AS `job_status`,
        jsonpayload_v1_bdrbackuprestorejoblog.jobCategory AS `job_category`,
        Filled_latest.error_type AS `error_type`,
        CAST(filled_latest.error_code AS INT64) AS `error_code`,
        filled_latest.error_message AS `error_message`,
        parsed_start_time AS `job_start_time`,
        parsed_end_time AS `job_end_time`,
      IF
        (duration_hours > 0, duration_hours || '"'"'h '"'"', '"'"''"'"') ||
      IF
        (duration_minutes > 0, duration_minutes || '"'"'m '"'"', '"'"''"'"') ||
      IF
        (duration_seconds > 0, duration_seconds || '"'"'s '"'"', '"'"''"'"') AS `duration`,
        ROUND(filled_latest.incremental_backup_size_in_gib, 2) AS `incremental_backup_size_in_gib`,
        PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', filled_latest.backup_consistency_time) AS `backup_consistency_time`,
      IF
        (ARRAY_LENGTH(SPLIT(filled_latest.backup_plan_name, '"'"'/'"'"')) < 6, NULL, SPLIT(filled_latest.backup_plan_name,'"'"'/'"'"')[5]) AS `backup_plan_name`,
        filled_latest.backup_rule AS `backup_rule`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.backupVaultName, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.backupVaultName,'"'"'/'"'"')[5]) AS `backup_vault_name`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.backupVaultName, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.backupVaultName,'"'"'/'"'"')[1]) AS `backup_vault_project_name`,
      IF
        (ARRAY_LENGTH(SPLIT(filled_latest.backup_name, '"'"'/'"'"')) < 10, NULL, SPLIT(filled_latest.backup_name,'"'"'/'"'"')[9]) AS `backup_name`,
      IF
        (ARRAY_LENGTH(SPLIT(filled_latest.restore_resource_name, '"'"'/'"'"')) < 6, NULL, SPLIT(filled_latest.restore_resource_name,'"'"'/'"'"')[1]) AS `restore_project_name`,
      IF
        (ARRAY_LENGTH(SPLIT(filled_latest.restore_resource_name, '"'"'/'"'"')) < 6, NULL, SPLIT(filled_latest.restore_resource_name,'"'"'/'"'"')[5]) AS `restore_resource_name`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceName, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceName,'"'"'/'"'"')[3]) AS `resource_location`,
        jsonpayload_v1_bdrbackuprestorejoblog.resourceType AS `resource_type`,
        jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceId AS `resource_id`,
        jsonpayload_v1_bdrbackuprestorejoblog.jobId AS `job_id`,
        timestamp AS `log_timestamp`
      FROM
        deduped
      JOIN
        filled_latest
      ON
        filled_latest.job_id = jsonpayload_v1_bdrbackuprestorejoblog.jobId
        AND jsonpayload_v1_bdrbackuprestorejoblog.jobStatus = filled_latest.final_status
      ORDER BY
        parsed_start_time DESC
      '
    
      # Backup Jobs Summary
      create_routine "backup_jobs_summary" "TABLE_VALUED_FUNCTION" \
      '
      SELECT
          DATE(log_timestamp) AS `date`,
          resource_project_name,
          job_category,
          ROUND((COUNTIF(job_status = '"'"'SUCCESSFUL'"'"')/COUNT(*)) * 100, 2) AS `success_percent`,
          COUNT(*) AS `total`,
          COUNTIF(job_status = '"'"'SUCCESSFUL'"'"') AS `successful`,
          COUNTIF(job_status = '"'"'FAILED'"'"') AS `failed`,
          COUNTIF(job_status = '"'"'SKIPPED'"'"') AS `skipped`,
        FROM
          `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE,
            TO_DATE)
        WHERE job_category IN ('"'"'ON_DEMAND_BACKUP'"'"', '"'"'SCHEDULED_BACKUP'"'"') AND job_status IN ('"'"'SUCCESSFUL'"'"', '"'"'FAILED'"'"', '"'"'SKIPPED'"'"')
        GROUP BY
          `date`,
          resource_project_name,
          job_category
        ORDER BY
          `date` DESC
      '
      create_view "backup_jobs_summary"
    
      # Backup Jobs Details
      create_routine "backup_jobs_details" "TABLE_VALUED_FUNCTION" \
      '
      SELECT
          job_name,
          resource_name,
          resource_project_name,
          job_status,
          error_type,
          error_code,
          error_message,
          job_start_time,
          job_end_time,
          duration,
          incremental_backup_size_in_gib,
          backup_consistency_time,
          backup_plan_name,
          backup_rule,
          backup_vault_name,
          backup_vault_project_name,
          resource_location,
          resource_type,
          resource_id,
          job_category,
          job_id
        FROM
          `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE,
            TO_DATE)
        WHERE
          job_status IN ('"'"'SUCCESSFUL'"'"',
            '"'"'FAILED'"'"', '"'"'SKIPPED'"'"') AND job_category IN ('"'"'ON_DEMAND_BACKUP'"'"', '"'"'SCHEDULED_BACKUP'"'"')
      '
      create_view "backup_jobs_details"
    
      # Successful Backup Jobs Details
      create_routine "successful_backup_jobs" "TABLE_VALUED_FUNCTION" \
      'SELECT
          job_name,
          resource_name,
          resource_project_name,
          job_start_time,
          job_end_time,
          duration,
          incremental_backup_size_in_gib,
          backup_consistency_time,
          backup_plan_name,
          backup_rule,
          backup_vault_name,
          backup_vault_project_name,
          resource_type,
          resource_id,
          job_category,
          job_id
        FROM
          `'$PROJECT_ID'.'$DATASET'.backup_jobs_details`(FROM_DATE,
            TO_DATE) WHERE job_status = '"'"'SUCCESSFUL'"'"'
      '
      create_view "successful_backup_jobs"
    
      # Failed Backup Job Details
      create_routine "failed_backup_jobs" "TABLE_VALUED_FUNCTION" \
      '
      SELECT
          job_name,
          resource_name,
          resource_project_name,
          error_type,
          error_code,
          error_message,
          job_start_time,
          job_end_time,
          duration,
          backup_plan_name,
          backup_rule,
          backup_vault_name,
          backup_vault_project_name,
          resource_type,
          resource_id,
          job_category,
          job_id
        FROM
          `'$PROJECT_ID'.'$DATASET'.backup_jobs_details`(FROM_DATE,
            TO_DATE) WHERE job_status = '"'"'FAILED'"'"'
      '
      create_view "failed_backup_jobs"
    
      # Skipped Backup Job Details
      create_routine "skipped_backup_jobs" "TABLE_VALUED_FUNCTION" \
      '
      SELECT
          job_name,
          resource_name,
          resource_project_name,
          backup_plan_name,
          backup_rule,
          backup_vault_name,
          job_start_time,
          backup_vault_project_name,
          resource_type,
          resource_id,
          job_category,
          job_id
        FROM
          `'$PROJECT_ID'.'$DATASET'.backup_jobs_details`(FROM_DATE,
            TO_DATE)
        WHERE
          job_status = '"'"'SKIPPED'"'"'
      '
      create_view "skipped_backup_jobs"
    
      # Restore Jobs Summary
      create_routine "restore_jobs_summary" "TABLE_VALUED_FUNCTION" \
      '
      SELECT
          DATE(log_timestamp) AS `date`,
          restore_project_name,
          backup_vault_project_name,
          ROUND((COUNTIF(job_status = '"'"'SUCCESSFUL'"'"')/COUNT(*)) * 100, 2) AS `success_percent`,
          COUNT(*) AS `total`,
          COUNTIF(job_status = '"'"'SUCCESSFUL'"'"') AS `successful`,
          COUNTIF(job_status = '"'"'FAILED'"'"') AS `failed`,
        FROM
          `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE,
            TO_DATE)
        WHERE
          job_category = '"'"'RESTORE'"'"'
          AND job_status IN ('"'"'SUCCESSFUL'"'"',
            '"'"'FAILED'"'"')
        GROUP BY
          `date`,
          restore_project_name,
          backup_vault_project_name
        ORDER BY
          `date` DESC
      '
      create_view "restore_jobs_summary"
    
      # Restore Jobs Details
      create_routine "restore_jobs_details" "TABLE_VALUED_FUNCTION" \
      '
      SELECT
          job_name,
          resource_name,
          resource_project_name,
          job_status,
          error_type,
          error_code,
          error_message,
          job_start_time,
          job_end_time,
          duration,
          restore_resource_name,
          restore_project_name,
          backup_vault_name,
          backup_vault_project_name,
          resource_location,
          resource_type,
          resource_id,
          job_id
        FROM
          `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE,
            TO_DATE)
        WHERE
          job_category IN ('"'"'RESTORE'"'"')
          AND job_status IN ('"'"'SUCCESSFUL'"'"',
            '"'"'FAILED'"'"')
      '
      create_view "restore_jobs_details"
      }
    
      # Protected Resource Details
      create_protected_resource_routines_and_views() {
      create_routine "protected_resource_details" "TABLE_VALUED_FUNCTION" \
      '
      WITH
        dat AS (
        SELECT
          *,
          LAG(insertId, 1, '"'"'placeholder_id'"'"') OVER (ORDER BY timestamp, insertId) = insertId AS `insert_id_is_same`,
          LAG(timestamp, 1, TIMESTAMP_SECONDS(1)) OVER (ORDER BY timestamp, insertId) = timestamp AS `timestamp_is_same`
    
        FROM
          `'$PROJECT_ID'.'$DATASET'.'$PROTECTED_RESOURCE_TABLE_PREFIX'*` WHERE _TABLE_SUFFIX >= FROM_DATE AND _TABLE_SUFFIX <= TO_DATE
          ),
        deduped AS (
        SELECT
          * EXCEPT (`insert_id_is_same`,
            `timestamp_is_same`)
        FROM
          dat
        WHERE
          NOT (`insert_id_is_same`
            AND `timestamp_is_same`) ),
        latest_for_date AS (
        SELECT
          DATE(timestamp) AS `log_date`,
          jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename AS `source_resource_name`,
          MAX(timestamp) AS `latest_timestamp`
        FROM
          deduped
        GROUP BY
          `log_date`,
          `source_resource_name` )
      SELECT
        DATE(timestamp) AS `date`,
        jsonpayload_v1_bdrprotectedresourcelog.resourcetype AS `resource_type`,
        REGEXP_EXTRACT(jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename, '"'"'([^/]+)$'"'"') AS `resource_name`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename, '"'"'/'"'"')) < 2, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename,'"'"'/'"'"')[1]) AS `resource_project_name`,
        jsonpayload_v1_bdrprotectedresourcelog.sourceresourcedatasizegib AS `resource_data_size_in_gib`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupvaultname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupvaultname,'"'"'/'"'"')[5]) AS `backup_vault_name`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname,'"'"'/'"'"')[5]) AS `backup_plan_name`,
        (
        SELECT
          ARRAY_AGG(STRUCT(
              array_entry.rulename AS `rule_name`,
              array_entry.recurrence AS `recurrence`,
              array_entry.recurrenceschedule AS `recurrence_schedule`,
              array_entry.backupwindow AS `backup_window`,
              IF(array_entry.backupwindowtimezone = '"'"'Etc/UTC'"'"', '"'"'UTC'"'"',array_entry.backupwindowtimezone)  AS `backup_window_time_zone`,
              array_entry.retentiondays AS `retention_days`
            ))
        FROM
          UNNEST(jsonpayload_v1_bdrprotectedresourcelog.currentbackupruledetails) array_entry ) AS `backup_rules`,
      DATE(PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrprotectedresourcelog.lastprotectedon)) AS `last_backup_plan_assoc_date`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname, '"'"'/'"'"')) < 2, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname,'"'"'/'"'"')[1]) AS `backup_vault_project_name`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname, '"'"'/'"'"')) < 4, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname,'"'"'/'"'"')[3]) AS `backup_vault_location`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename,'"'"'/'"'"')[3]) AS `resource_location`,
        jsonpayload_v1_bdrprotectedresourcelog.sourceresourceid AS `resource_id`,
      FROM
        deduped
      INNER JOIN
        latest_for_date
      ON
        DATE(timestamp) = latest_for_date.log_date
        AND timestamp = latest_for_date.latest_timestamp
        AND latest_for_date.`source_resource_name` = jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename
      ORDER BY
        `date` DESC,
        `resource_name` ASC
      '
      create_view "protected_resource_details"
      }
    
      # Backup Vault Consumption
      create_backup_vault_consumption_routines_and_views() {
      create_routine "backup_vault_consumption" "TABLE_VALUED_FUNCTION" \
      '
      WITH
          dat AS (
          SELECT
            *,
          LAG(insertId, 1, '"'"'placeholder_id'"'"') OVER (ORDER BY timestamp, insertId) = insertId AS `insert_id_is_same`,
          LAG(timestamp, 1, TIMESTAMP_SECONDS(1)) OVER (ORDER BY timestamp, insertId) = timestamp AS `timestamp_is_same`
          FROM
            `'$PROJECT_ID'.'$DATASET'.'$BACKUP_VAULT_TABLE_PREFIX'*`
          WHERE
            jsonpayload_v1_bdrbackupvaultdetailslog.resourceType IN ('"'"'Compute Engine'"'"')
            AND _TABLE_SUFFIX >= FROM_DATE
            AND _TABLE_SUFFIX <= TO_DATE ),
          deduped AS (
          SELECT
            * EXCEPT (`insert_id_is_same`,
              `timestamp_is_same`)
          FROM
            dat
          WHERE
            NOT (`insert_id_is_same`
              AND `timestamp_is_same`) ),
          latest_for_date AS (
          SELECT
            DATE(timestamp) AS `log_date`,
            MAX(timestamp) AS `latest_timestamp`,
            jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename AS `source_resource_name`,
          FROM
            deduped
          GROUP BY
            log_date,
            `source_resource_name`)
        SELECT
          DATE(timestamp) AS `date`,
          jsonpayload_v1_bdrbackupvaultdetailslog.resourcetype AS `resource_type`,
          REGEXP_EXTRACT(jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename, '"'"'([^/]+)$'"'"') AS `resource_name`,
          SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename, '"'"'/'"'"')[1] AS `resource_project_name`,
        IF
          (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname,'"'"'/'"'"')[5]) AS `backup_vault_name`,
          jsonpayload_v1_bdrbackupvaultdetailslog.storedbytesgib AS `backup_vault_stored_bytes_in_gib`,
      CAST(jsonpayload_v1_bdrbackupvaultdetailslog.minimumEnforcedRetentionDays AS INT64) AS `backup_vault_minimum_enforced_retention_days`,
        IF
      (ARRAY_LENGTH(SPLIT(IF
      (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).currentbackupplanname) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).currentbackupplanname), NULL), '"'"'/'"'"')) < 6, NULL, SPLIT(IF
      (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).currentbackupplanname) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).currentbackupplanname), NULL),'"'"'/'"'"')[5]) AS `backup_plan_name`,
      PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', IF
      (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).firstavailablerestorepoint) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).firstavailablerestorepoint), NULL)) AS `first_available_restore_point`,
      PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', IF
      (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).lastavailablerestorepoint) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).lastavailablerestorepoint), NULL)) AS `last_available_restore_point`,
    
        IF
          (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname,'"'"'/'"'"')[1]) AS `backup_vault_project_name`,
        IF
          (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname,'"'"'/'"'"')[3]) AS `backup_vault_location`,
        IF
          (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename,'"'"'/'"'"')[3]) AS `resource_location`,
        FROM
          deduped
        INNER JOIN
          latest_for_date
        ON
          DATE(timestamp) = latest_for_date.log_date
          AND timestamp = latest_for_date.latest_timestamp
          AND jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename = latest_for_date.`source_resource_name`
        ORDER BY
          `date` DESC,
          `resource_name` ASC
      '
      create_view "backup_vault_consumption"
      }
    
      # Daily Scheduled Compliance
      create_daily_scheduled_compliance_routine_and_view() {
      create_routine "daily_scheduled_compliance" "TABLE_VALUED_FUNCTION" \
      '
      WITH
        DailyComplianceWindows AS (
        SELECT
          * EXCEPT (backup_rules),
          PARSE_TIMESTAMP('"'"'%F %H:%M %Z'"'"', date || '"'"' '"'"' || REGEXP_EXTRACT_ALL(backup_window, '"'"'[0-9][0-9]:[0-9][0-9]'"'"')[0] || '"'"' '"'"' || backup_window_time_zone) AS expected_start_time,
          resource_project_name || '"'"'/'"'"' || resource_location || '"'"'/'"'"' || resource_id AS unique_resource_name,
        IF
          (REGEXP_EXTRACT_ALL(backup_window, '"'"'[0-9][0-9]:[0-9][0-9]'"'"')[1] = '"'"'24:00'"'"', TIMESTAMP_ADD(PARSE_TIMESTAMP('"'"'%F %H:%M %Z'"'"', date || '"'"' 23:59 '"'"' || backup_window_time_zone), INTERVAL 1 MINUTE), PARSE_TIMESTAMP('"'"'%F %H:%M %Z'"'"', date || '"'"' '"'"' || REGEXP_EXTRACT_ALL(backup_window, '"'"'[0-9][0-9]:[0-9][0-9]'"'"')[1] || '"'"' '"'"' || backup_window_time_zone)) AS expected_end_time
        FROM
          `'$PROJECT_ID'.'$DATASET'.protected_resource_details`(FROM_DATE, TO_DATE)
        CROSS JOIN
          UNNEST(backup_rules) AS rule
        WHERE
          rule.recurrence='"'"'Daily'"'"'),
        BackupJobs AS (
        SELECT
          *,
          resource_project_name || '"'"'/'"'"' || resource_location || '"'"'/'"'"' || resource_id AS unique_resource_name,
        FROM
          `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE, FORMAT_DATE('"'"'%Y%m%d'"'"', DATE_ADD(PARSE_DATE('"'"'%Y%m%d'"'"', TO_DATE), INTERVAL 10 DAY)))
        WHERE
          job_category IN ('"'"'ON_DEMAND_BACKUP'"'"',
            '"'"'SCHEDULED_BACKUP'"'"') ),
        DailyCompliance AS (
        SELECT
          date,
          DailyComplianceWindows.resource_type,
          DailyComplianceWindows.resource_name,
          DailyComplianceWindows.resource_project_name as `resource_project_name`,
          DailyComplianceWindows.backup_plan_name,
          DailyComplianceWindows.rule_name,
          DailyComplianceWindows.backup_window,
          DailyComplianceWindows.backup_window_time_zone,
          (CASE BackupJobs.job_status
              WHEN '"'"'SUCCESSFUL'"'"' THEN '"'"'Successful'"'"'
              WHEN '"'"'RUNNING'"'"' THEN '"'"'Pending'"'"'
              ELSE
          IF
            (CURRENT_TIMESTAMP() <= DailyComplianceWindows.expected_end_time, '"'"'Pending'"'"', '"'"'Failed'"'"')
          END
            ) AS backup_schedule_compliance_status,
          (CASE BackupJobs.job_status
              WHEN '"'"'SUCCESSFUL'"'"' THEN '"'"'Job '"'"' || BackupJobs.job_name || '"'"' was successful.'"'"'
              WHEN '"'"'RUNNING'"'"' THEN '"'"'Job '"'"' || BackupJobs.job_name || '"'"' is running.'"'"'
              ELSE
          IF
            (CURRENT_TIMESTAMP() <= DailyComplianceWindows.expected_end_time, '"'"'Backup window for '"'"'|| date || '"'"' has not passed yet.'"'"', '"'"'No successful backup job detected within backup window.'"'"')
          END
            ) AS comment,
          DailyComplianceWindows.backup_vault_name,
          DailyComplianceWindows.backup_vault_project_name as `backup_vault_project_name`,
          DailyComplianceWindows.backup_vault_location as `backup_vault_location`,
          DailyComplianceWindows.resource_location,
          ROW_NUMBER() OVER (PARTITION BY BackupJobs.unique_resource_name, DailyComplianceWindows.date, DailyComplianceWindows.rule_name ORDER BY (CASE BackupJobs.job_status WHEN '"'"'SUCCESSFUL'"'"' THEN 4 WHEN '"'"'RUNNING'"'"' THEN 3 ELSE 1 END ) DESC,
            BackupJobs.job_start_time ASC) AS row_number,
        FROM
          DailyComplianceWindows
        LEFT JOIN
          BackupJobs
        ON
          (BackupJobs.unique_resource_name = DailyComplianceWindows.unique_resource_name
            AND BackupJobs.backup_rule = DailyComplianceWindows.rule_name
            AND BackupJobs.job_status = '"'"'SUCCESSFUL'"'"'
            AND BackupJobs.backup_consistency_time >= DailyComplianceWindows.expected_start_time
            AND BackupJobs.backup_consistency_time <= DailyComplianceWindows.expected_end_time)
          OR (BackupJobs.unique_resource_name = DailyComplianceWindows.unique_resource_name
            AND BackupJobs.backup_rule = DailyComplianceWindows.rule_name
            AND BackupJobs.job_status = '"'"'RUNNING'"'"'
            AND BackupJobs.job_start_time >= DailyComplianceWindows.expected_start_time
            AND BackupJobs.job_start_time <= DailyComplianceWindows.expected_end_time)
        ORDER BY
          date DESC,
          BackupJobs.resource_name ASC)
      SELECT
        * EXCEPT(row_number)
      FROM
        DailyCompliance
      WHERE
        row_number = 1
      '
      create_view "daily_scheduled_compliance"
      }
    
      # --- Main script ---
    
      # Get the default project ID
      DEFAULT_PROJECT_ID=$(gcloud config get-value project)
      read -p "Enter Project ID (default: $DEFAULT_PROJECT_ID, press Enter to continue):" PROJECT_ID
    
      # Use default if no input is provided
      if [ -z "$PROJECT_ID" ]; then
        PROJECT_ID=$DEFAULT_PROJECT_ID
      fi
    
      ## Generate Access Token
      echo "Generating access token..."
      ACCESS_TOKEN=$(gcloud auth print-access-token --quiet)
      if [ -z "$ACCESS_TOKEN" ]; then
        echo "Failed to retrieve access token" >&2
        exit 1
      fi
      echo "Access token generated successfully..."
    
      ## Check if the dataset exists
      echo "Check if Reporting Dataset exists..."
      if bq --project_id $PROJECT_ID ls | grep "$DATASET" >/dev/null; then
        echo "Dataset $DATASET exists for $PROJECT_ID. Continuing."
      else
        echo "Dataset $DATASET does not exist for $PROJECT_ID. Exiting."
        exit 0
      fi
    
      ## Check if the tables exist
      echo "Determining which tables are available in BigQuery..."
      check_jobs_table=$(bq --project_id $PROJECT_ID query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from "$DATASET".INFORMATION_SCHEMA.TABLES WHERE table_name like '%"$JOBS_TABLE_PREFIX"%'" | tail -n +2 | tr -d ' ')
      check_protected_resource_table=$(bq --project_id $PROJECT_ID query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from "$DATASET".INFORMATION_SCHEMA.TABLES WHERE table_name like '%"$PROTECTED_RESOURCE_TABLE_PREFIX"%'" | tail -n +2 | tr -d ' ')
      check_backup_vault_consumption_table=$(bq --project_id $PROJECT_ID query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from "$DATASET".INFORMATION_SCHEMA.TABLES WHERE table_name like '%"$BACKUP_VAULT_TABLE_PREFIX"%'" | tail -n +2 | tr -d ' ')
    
      echo "Creating routines and views based on available tables..."
      if [ "$check_jobs_table" == 'true' ]; then
      create_jobs_routines_and_views
      fi
    
      if [ "$check_protected_resource_table" == 'true' ]; then
      create_protected_resource_routines_and_views
      fi
    
      if [ "$check_backup_vault_consumption_table" == 'true' ]; then
      create_backup_vault_consumption_routines_and_views
      fi
    
      if [ "$check_jobs_table" == 'true' ] && [ "$check_protected_resource_table" == 'true' ] ; then
      create_daily_scheduled_compliance_routine_and_view
      fi
    
      if [ "$check_jobs_table" == 'false' ] || [ "$check_protected_resource_table" == 'false' ] || [ "$check_backup_vault_consumption_table" == 'false' ]; then
      echo -e "\e[1m\e[33mAll the prebuilt reports could not be created successfully in BigQuery as one or more report logs are missing in the dataset $DATASET."
      echo -e "Please ensure that you have waited at least 8 hours after creating the sink before running the script to create pre-built reports. Try re-running the script after some time to fix the issue."
      echo -e "Reach out to Google Cloud Support in case you are still facing this issue.\e[0m"
      else
      echo "Set up completed..."
      fi
    
    
  4. Save the file with a name that has the Bash file extension, for example backupdrreports.sh.

  5. Run the bash command on the file that you just created, for example bash backupdrreports.sh.

    The pre-built reports appear under the dataset in BigQuery Studio.
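The repeated `'"'"'` runs inside the script's SQL strings are the standard shell idiom for embedding a literal single quote in a single-quoted string: close the single-quoted string, emit a `'` character inside double quotes, then reopen the single quote. A minimal sketch (the `sql` variable here is only an illustration, not part of the script):

```shell
# Close the single-quoted string, add a double-quoted ' character,
# then reopen the single-quoted string: '"'"'
sql='SELECT '"'"'Daily'"'"' AS recurrence'
echo "$sql"    # prints: SELECT 'Daily' AS recurrence
```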

After you set up the pre-built reports, you can view them in Looker Studio or download them as CSV files.
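One way to get a CSV copy is to query a pre-built view with the bq CLI and redirect the output to a file. This is only a sketch, assuming the `BackupandDR_reports` dataset and the `daily_scheduled_compliance` view created by the script, with your project ID in `PROJECT_ID`:

```shell
# Hedged sketch: export one pre-built report as CSV with the bq CLI.
# Assumes bq is installed and authenticated, and that the script above
# created the BackupandDR_reports dataset and its views in $PROJECT_ID.
bq query --project_id "$PROJECT_ID" --format=csv --use_legacy_sql=false \
  'SELECT * FROM `'"$PROJECT_ID"'.BackupandDR_reports.daily_scheduled_compliance`' \
  > daily_scheduled_compliance.csv
```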

What's next