Set up prebuilt reports in BigQuery

This page describes how to set up and view prebuilt reports in BigQuery for resources protected with a backup plan.

To set up the reports, you perform a one-time activity: create a log sink to stream logging data to BigQuery, and then run the prebuilt script. If you want to set up and view prebuilt reports in BigQuery for resources protected with backup templates in the management console, see Set up prebuilt reports in BigQuery.

Required IAM roles

You must have the following IAM permissions to view prebuilt reports for vaulted backups in BigQuery. Learn how to grant IAM roles.

| Role | When to grant the role |
| --- | --- |
| Logs Configuration Writer (roles/logging.configWriter) or Logging Admin (roles/logging.admin), and BigQuery Data Editor (roles/bigquery.dataEditor) | Create the sink and BigQuery dataset through the Google Cloud console. |
| Owner (roles/owner) | Create the sink and BigQuery dataset through the Google Cloud CLI. |
| BigQuery Admin (roles/bigquery.admin) | Write custom queries or download queries. |
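
If you prefer to grant these roles from the command line, a minimal gcloud sketch follows; PROJECT_ID and USER_EMAIL are placeholders that you replace with your own values.

    # Grant Logging Admin (placeholders: PROJECT_ID, USER_EMAIL)
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="user:USER_EMAIL" \
        --role="roles/logging.admin"

    # Grant BigQuery Data Editor to the same principal
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="user:USER_EMAIL" \
        --role="roles/bigquery.dataEditor"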

Create a sink and route logs to BigQuery for vaulted backups

BigQuery stores only the logs generated after the log sink is created. Logs generated before the log sink was created don't appear in BigQuery. You can create the log sink through the Google Cloud console or the Google Cloud CLI.

To create a sink and route logs to BigQuery for vaulted backups, do the following:

Console

  1. In the Google Cloud console, go to the Log Router page:

    Go to Log Router

  2. Select an existing Google Cloud project.
  3. Click Create sink.
  4. In the Sink details panel, enter the following fields:
    1. Sink name: Enter a name for the sink, for example, bdr_report_sink. You must use the sink name bdr_report_sink to identify Backup and DR reports separately from other sinks.
    2. Sink description: Describe the purpose or use case for the sink.
  5. In the Sink destination panel, do the following:

    1. In the Select sink service menu, select the BigQuery dataset sink service.
    2. In Select BigQuery dataset, select Create new BigQuery dataset.
    3. On the Create dataset page, do the following:
      1. For Dataset ID, enter the dataset name bdr_reports to distinguish it from other datasets. Don't change the dataset name from bdr_reports to anything else.
      2. In the Location type field, select a geographic location for the dataset. After the dataset is created, the location can't be changed.
      3. Optional: If you want tables in this dataset to expire, select Enable table expiration, and then specify the default maximum table age in days.
      4. Click Create dataset.
  6. In the Choose logs to include in sink panel, do the following:

    1. In the Build inclusion filter field, enter the following filter expression, which matches the log entries that you want to include.

       logName=~"projects/PROJECT_ID/logs/backupdr.googleapis.com%2Fbdr_*"
       Replace `PROJECT_ID` with your project ID.

    2. To verify that you entered the correct filter, select Preview logs. This opens the Logs Explorer in a new tab with the filter pre-populated.

  7. Optional: In the Choose logs to filter out of sink panel, do the following:

    1. In the Exclusion filter name field, enter a name.
    2. In the Build an exclusion filter field, enter a filter expression that matches the log entries that you want to exclude. You can also use the sample function to select a portion of the log entries to exclude.

  8. Select Create sink.

    You can view the dataset in BigQuery Studio.
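
    Optionally, to confirm the sink from the command line, you can describe it in Cloud Shell; this is an optional check, not part of the console flow:

       # Show the sink's destination and filter (sink name from the steps above)
       gcloud logging sinks describe bdr_report_sink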

gcloud

  1. Go to Activate Cloud Shell, and then click Open Editor.
  2. Click the icon, select File, and then select New Text File.
  3. Copy and paste the following script.

       #!/bin/bash
        echo "This script will set up a log sink for BackupDR reports to be available in BigQuery"
    
        # Get the default project ID
        DEFAULT_PROJECT_ID=$(gcloud config get-value project)
        read -p "Enter Project ID (default: $DEFAULT_PROJECT_ID, press Enter to continue):" PROJECT_ID
    
        # Use default if no input is provided
        if [ -z "$PROJECT_ID" ]; then
          PROJECT_ID=$DEFAULT_PROJECT_ID
        fi
          # Set the project ID
        result=$(gcloud config set project $PROJECT_ID)
        if [ $? -ne 0 ]; then
          echo "Error setting the project to $PROJECT_ID"
          exit 1
        fi
          # --- Check if BigQuery API is already enabled, enable if not ---
        echo "Checking if BigQuery API is enabled..."
        if gcloud services list | grep "bigquery.googleapis.com" >/dev/null; then
          echo "BigQuery API is already enabled for $PROJECT_ID"
        else
          echo "For logs to be available in BigQuery, we need to enable BigQuery service in the project if not done already. This might mean additional costs incurred. Please check the associated costs before proceeding."
          read -p "Do you want to continue(Y/N)?" continue
          if [ "$continue" = "y" ] || [ "$continue" = "Y" ]; then
            echo "Enabling BigQuery API..."
            result=$(gcloud services enable bigquery.googleapis.com --project $PROJECT_ID)
            if [ $? -eq 0 ]; then
              echo "Successfully enabled BigQuery api for $PROJECT_ID"
            else
              echo "Error in setting up the BigQuery api for the project. $result"
              exit 1
            fi
          else
            exit 0
          fi
        fi
          # --- Check if BigQuery data set already exists, create if not ---
        echo "Checking if BigQuery data set exists..."
        if bq ls | grep "bdr_reports" >/dev/null; then
          echo "Dataset bdr_reports already exists for $PROJECT_ID"
        else
          echo "Creating bigQuery dataset bdr_reports..."
          # --- Get dataset location from user (default: US) ---
          read -p "Enter dataset location (default: US, press Enter to use): " DATASET_LOCATION
          if [ -z "$DATASET_LOCATION" ]; then
            DATASET_LOCATION="US"
          fi
          # --- Get table expiration in days from user (default: no expiration) ---
          read -p "Enter default table expiration in days (default: no expiration, press Enter to skip): " TABLE_EXPIRATION_DAYS
    
          # Calculate table expiration in seconds if provided
          if [ -n "$TABLE_EXPIRATION_DAYS" ]; then
            TABLE_EXPIRATION_SECONDS=$((TABLE_EXPIRATION_DAYS * 24 * 60 * 60))
            EXPIRATION_FLAG="--default_table_expiration $TABLE_EXPIRATION_SECONDS"
          else
            EXPIRATION_FLAG=""
          fi
          result=$(bq --location=$DATASET_LOCATION mk $EXPIRATION_FLAG bdr_reports)
          if [ $? -eq 0 ]; then
            echo "Created a BigQuery dataset bdr_reports successfully."
          else
            echo ""
            echo "ERROR : Failed to create the BigQuery dataset."
            echo $result
            exit 1
          fi
        fi
          # --- Check if Log Sink already exists, create if not ---
        echo "Checking if Log Sink exists..."
        if gcloud logging sinks list | grep "bdr_report_sink" >/dev/null; then
          echo "Log Sink bdr_reports_sink already exists for $PROJECT_ID"
        else
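          # This regex matches every Backup and DR report log in the project
          # (for example, bdr_backup_restore_jobs and bdr_protected_resource);
          # %2F is the URL-encoded '/' in the log name.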
          log_filter="projects/$PROJECT_ID/logs/backupdr.googleapis.com%2Fbdr_*"
          echo "Creating log sink bdr_reports..."
          result=$(gcloud logging sinks create bdr_report_sink bigquery.googleapis.com/projects/$PROJECT_ID/datasets/bdr_reports --log-filter="logName=~\"$log_filter\"")
          if [ $? -eq 0 ]; then
            echo "Created a logsink bdr_reports_sink successfully."
          else
            echo ""
            echo "ERROR : Failed to create logsink."
            exit 1
          fi
        fi
    
        # --- Add IAM Policy binding for Cloud logging service account to write logs to BigQuery ---
        result=$(gcloud projects add-iam-policy-binding $(gcloud projects describe $PROJECT_ID --format="value(projectNumber)") --member=serviceAccount:service-$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")@gcp-sa-logging.iam.gserviceaccount.com --role=roles/bigquery.dataEditor --condition=None)
        if [ $? -eq 0 ]; then
          echo "Added permission for cloud logging to write to BigQuery datasets"
        else
          echo ""
          echo "ERROR : Failed to add permissions for cloud logging to write to BigQuery datasets. Please make sure that you have Owner access rights in order to be able to proceed."
          exit 1
        fi
    
        echo "Setup complete. The logs for the project $PROJECT_ID will now start flowing to bigquery."
        exit 0
    
    
  4. Save the file with a name that uses the Bash file extension, for example, script.sh.

  5. Run the bash command with the file you just created, for example, bash script.sh.

    You can view the created dataset in BigQuery Studio.
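
    Log entries can take time to start flowing into the dataset. As an optional check, you can list the tables the sink has created so far; a sketch, assuming the default dataset name bdr_reports:

       # List the report tables routed into the dataset so far
       bq ls bdr_reports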

Set up prebuilt reports for vaulted backups

You can set up the prebuilt reports by running the following script against the dataset that the logs are routed to through the log sink.

The script adds the following prebuilt reports:

  * Backup jobs summary (backup_jobs_summary)
  * Backup jobs details (backup_jobs_details)
  * Successful backup jobs (successful_backup_jobs)
  * Failed backup jobs (failed_backup_jobs)
  * Skipped backup jobs (skipped_backup_jobs)
  * Restore jobs summary (restore_jobs_summary)
  * Restore jobs details (restore_jobs_details)
  * Protected resource details (protected_resource_details)
  * Backup vault consumption (backup_vault_consumption)
  * Backup plan details (backup_plan_details)
  * Daily scheduled compliance (daily_scheduled_compliance)

To set up prebuilt reports in BigQuery for vaulted backups, do the following:

gcloud

  1. Go to Activate Cloud Shell, and then click Open Editor.
  2. Click the icon, select File, and then select New Text File.
  3. Copy and paste the following prebuilt reports script into the file.

    #!/bin/bash
    
    # Default config values. Modify only if required.
    DATASET="bdr_reports"
    JOBS_TABLE_PREFIX="backupdr_googleapis_com_bdr_backup_restore_jobs_"
    PROTECTED_RESOURCE_TABLE_PREFIX="backupdr_googleapis_com_bdr_protected_resource_"
    BACKUP_VAULT_TABLE_PREFIX="backupdr_googleapis_com_bdr_backup_vault_details_"
    BACKUP_PLAN_TABLE_PREFIX="backupdr_googleapis_com_bdr_backup_plan_details_"
    VIEW_FROM_DATE_SUFFIX="20000101"  #Use an old enough date so that the views should contain all the data.
    ALLOWED_RESOURCE_TYPES='('"'"'Compute Engine'"'"', '"'"'Disk'"'"')'
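    # Note: the '"'"' sequences used throughout this script close the shell's
    # single-quoted string, emit a literal single quote, and reopen the string,
    # so ALLOWED_RESOURCE_TYPES expands to ('Compute Engine', 'Disk') in the SQL.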
    
    # Function to check if a routine exists
    routine_exists() {
    local routine_name="$1"
    bq --project_id $PROJECT_ID ls --routines "$DATASET" | grep "$routine_name" >/dev/null
    }
    
    # Function to check if a view exists
    view_exists() {
    local view_name="$1"
    check_view=$(bq --project_id $PROJECT_ID query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from $DATASET.INFORMATION_SCHEMA.VIEWS WHERE table_name='$view_name'" | tail -n +2 | tr -d ' ')
    if [ "$check_view" == "true" ]; then
      return 0
    fi
    return 1
    }
    
    # Function to create a routine
    create_routine() {
    local routine_name="$1"
    local routine_type="$2"
    local definition_body="$3"
    # Construct the routine body JSON
    local routine_body='
      {
      "definitionBody": "'"$definition_body"'",
      "routineType": "'"$routine_type"'",
      "routineReference": {
          "projectId": "'"$PROJECT_ID"'",
          "datasetId": "'"$DATASET"'",
          "routineId": "'"$routine_name"'"
      },
      "arguments": [
          {
          "name": "FROM_DATE",
          "dataType": {
              "typeKind": "STRING"
          }
          },
          {
          "name": "TO_DATE",
          "dataType": {
              "typeKind": "STRING"
          }
          }
      ]
      }
    '
    
    # Use POST for creating a new routine
    local http_method="POST"
    local routine_id=""
    # If routine exists use PUT to update the routine
    if routine_exists "$routine_name"; then
    echo "Routine $routine_name already exists. Updating the existing routine"
    http_method="PUT"
    routine_id="/$routine_name"
    fi
    
    # Construct the curl command with the routine body
    local curl_command=(
      curl -s --request $http_method \
        "https://bigquery.googleapis.com/bigquery/v2/projects/$PROJECT_ID/datasets/$DATASET/routines$routine_id" \
        --header "Authorization: Bearer $ACCESS_TOKEN" \
        --header 'Accept: application/json' \
        --header 'Content-Type: application/json' \
        --data "$routine_body" --compressed
    )
      # Execute the curl command and capture the result
    local result=$( "${curl_command[@]}" )
    
    # Check if creation was successful
    if echo "$result" | grep '"etag":' > /dev/null; then
      echo "Routine '$routine_name' created/updated successfully."
    else
      echo "$result"
      exit 1
    fi
    }
    
    # Function to create a view
    create_view() {
    local view_name="$1"
    local parent_routine_name="$1"
    local view_query='
      SELECT * FROM `'$PROJECT_ID'.'$DATASET'.'$parent_routine_name'`("'$VIEW_FROM_DATE_SUFFIX'", FORMAT_DATE("%Y%m%d", CURRENT_DATE()));
    '
    
    # Check if view exists.
    if routine_exists "$parent_routine_name"; then
      view_exists "$view_name"
      if [ $? -eq 0 ]; then
        echo "View $view_name already exists. Skipping."
        return 0
      fi
      # Create the view if it does not exist
      bq --project_id $PROJECT_ID mk --use_legacy_sql=false --expiration 0 --view "$view_query" $DATASET.$view_name
    else
      echo "Routine $parent_routine_name is not present. Skipping the creation of $view_name."
      return 1
    fi
    }
    
    # Create all jobs related routines and views
    create_jobs_routines_and_views() {
    # Backup Restore Jobs
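    # The routine below first drops duplicate log rows (entries whose insertId and
    # timestamp both match the previous row), then keeps, for each job ID, the entry
    # with the most advanced status (RUNNING -> SKIPPED -> SUCCESSFUL -> FAILED)
    # so that the output contains one final-state row per job.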
    create_routine "backup_restore_jobs" "TABLE_VALUED_FUNCTION" \
    '
    WITH
      dat AS (
      SELECT
        *,
        LAG(insertId, 1, '"'"'placeholder_id'"'"') OVER (ORDER BY timestamp, insertId) = insertId AS `insert_id_is_same`,
        LAG(timestamp, 1, TIMESTAMP_SECONDS(1)) OVER (ORDER BY timestamp, insertId) = timestamp AS `timestamp_is_same`
      FROM
        `'$PROJECT_ID'.'$DATASET'.'$JOBS_TABLE_PREFIX'*`
      WHERE
        _TABLE_SUFFIX >= FROM_DATE
        AND _TABLE_SUFFIX <= TO_DATE
        AND jsonpayload_v1_bdrbackuprestorejoblog.resourceType IN '"${ALLOWED_RESOURCE_TYPES}"' ),
      deduped AS (
      SELECT
        * EXCEPT (insert_id_is_same,
          timestamp_is_same),
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).endtime) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).endtime), NULL) AS job_end_time,
    
      FROM
        dat
      WHERE
        NOT (insert_id_is_same
          AND timestamp_is_same) ),
      latest AS (
      SELECT
        jsonpayload_v1_bdrbackuprestorejoblog.jobId AS job_id,
        MAX(CASE jsonpayload_v1_bdrbackuprestorejoblog.jobStatus
            WHEN '"'"'RUNNING'"'"' THEN 1
            WHEN '"'"'SKIPPED'"'"' THEN 2
            WHEN '"'"'SUCCESSFUL'"'"' THEN 3
            WHEN '"'"'FAILED'"'"' THEN 4
        END
          ) AS final_status_code
      FROM
        deduped
      GROUP BY
        jsonpayload_v1_bdrbackuprestorejoblog.jobid),
      filled_latest AS (
      SELECT
        jsonpayload_v1_bdrbackuprestorejoblog.jobId AS job_id,
        jsonpayload_v1_bdrbackuprestorejoblog.jobStatus AS final_status,
        PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrbackuprestorejoblog.startTime) AS parsed_start_time,
        PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', deduped.job_end_time) AS parsed_end_time,
        TIMESTAMP_DIFF(PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', deduped.job_end_time), PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrbackuprestorejoblog.startTime), HOUR) AS duration_hours,
        MOD(TIMESTAMP_DIFF(PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', deduped.job_end_time), PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrbackuprestorejoblog.startTime), MINUTE), 60) AS duration_minutes,
        MOD(TIMESTAMP_DIFF(PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', deduped.job_end_time), PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrbackuprestorejoblog.startTime), SECOND), 60) AS duration_seconds,
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).restoreresourcename) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).restoreresourcename), NULL) AS restore_resource_name,
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errortype) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errortype), NULL) AS error_type,
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errormessage) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errormessage), NULL) AS error_message,
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupname) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupname), NULL) AS backup_name,
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupplanname) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupplanname), NULL) AS backup_plan_name,
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backuprule) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backuprule), NULL) AS backup_rule,
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupconsistencytime) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupconsistencytime), NULL) AS backup_consistency_time,
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).incrementalbackupsizegib) = '"'"'number'"'"', FLOAT64(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).incrementalbackupsizegib), NULL) AS incremental_backup_size_in_gib,
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errorcode) = '"'"'number'"'"', FLOAT64(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errorcode), NULL) AS error_code,
    
      FROM
        deduped
      JOIN
        latest
      ON
        latest.job_id = jsonpayload_v1_bdrbackuprestorejoblog.jobId
        AND jsonpayload_v1_bdrbackuprestorejoblog.jobStatus = (
          CASE final_status_code
            WHEN 1 THEN '"'"'RUNNING'"'"'
            WHEN 2 THEN '"'"'SKIPPED'"'"'
            WHEN 3 THEN '"'"'SUCCESSFUL'"'"'
            WHEN 4 THEN '"'"'FAILED'"'"'
        END
          ) )
    SELECT
      FORMAT_TIMESTAMP('"'"'%m%d'"'"', filled_latest.parsed_start_time, '"'"'UTC'"'"') || '"'"'-'"'"' || ARRAY_TO_STRING(REGEXP_EXTRACT_ALL(jsonpayload_v1_bdrbackuprestorejoblog.resourceType, '"'"'\\\\b[A-Z]'"'"'), '"'"''"'"') || '"'"'-'"'"' || (CASE jsonpayload_v1_bdrbackuprestorejoblog.jobCategory
          WHEN '"'"'ON_DEMAND_BACKUP'"'"' THEN '"'"'B'"'"'
          WHEN '"'"'SCHEDULED_BACKUP'"'"' THEN '"'"'B'"'"'
          WHEN '"'"'ON_DEMAND_OPERATIONAL_BACKUP'"'"' THEN '"'"'B'"'"'
          WHEN '"'"'RESTORE'"'"' THEN '"'"'R'"'"'
      END
        ) || '"'"'-'"'"' || REPLACE(FORMAT_TIMESTAMP('"'"'%H%M%E3S'"'"', filled_latest.parsed_start_time, '"'"'UTC'"'"'), '"'"'.'"'"', '"'"''"'"') AS `job_name`,
      REGEXP_EXTRACT(jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceName, '"'"'([^/]+)$'"'"') AS `resource_name`,
    IF
      (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceName, '"'"'/'"'"')) < 2, NULL, SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceName,'"'"'/'"'"')[1]) AS `resource_project_name`,
      jsonpayload_v1_bdrbackuprestorejoblog.jobStatus AS `job_status`,
      jsonpayload_v1_bdrbackuprestorejoblog.jobCategory AS `job_category`,
      Filled_latest.error_type AS `error_type`,
      CAST(filled_latest.error_code AS INT64) AS `error_code`,
      filled_latest.error_message AS `error_message`,
      parsed_start_time AS `job_start_time`,
      parsed_end_time AS `job_end_time`,
    IF
      (duration_hours > 0, duration_hours || '"'"'h '"'"', '"'"''"'"') ||
    IF
      (duration_minutes > 0, duration_minutes || '"'"'m '"'"', '"'"''"'"') ||
    IF
      (duration_seconds > 0, duration_seconds || '"'"'s '"'"', '"'"''"'"') AS `duration`,
      ROUND(filled_latest.incremental_backup_size_in_gib, 2) AS `incremental_backup_size_in_gib`,
      PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', filled_latest.backup_consistency_time) AS `backup_consistency_time`,
    IF
      (ARRAY_LENGTH(SPLIT(filled_latest.backup_plan_name, '"'"'/'"'"')) < 6, NULL, SPLIT(filled_latest.backup_plan_name,'"'"'/'"'"')[5]) AS `backup_plan_name`,
      filled_latest.backup_rule AS `backup_rule`,
    IF
      (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.backupVaultName, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.backupVaultName,'"'"'/'"'"')[5]) AS `backup_vault_name`,
    IF
      (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.backupVaultName, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.backupVaultName,'"'"'/'"'"')[1]) AS `backup_vault_project_name`,
    IF
      (ARRAY_LENGTH(SPLIT(filled_latest.backup_name, '"'"'/'"'"')) < 10, NULL, SPLIT(filled_latest.backup_name,'"'"'/'"'"')[9]) AS `backup_name`,
    IF
      (ARRAY_LENGTH(SPLIT(filled_latest.restore_resource_name, '"'"'/'"'"')) < 6, NULL, SPLIT(filled_latest.restore_resource_name,'"'"'/'"'"')[1]) AS `restore_project_name`,
    IF
      (ARRAY_LENGTH(SPLIT(filled_latest.restore_resource_name, '"'"'/'"'"')) < 6, NULL, SPLIT(filled_latest.restore_resource_name,'"'"'/'"'"')[5]) AS `restore_resource_name`,
      jsonpayload_v1_bdrbackuprestorejoblog.sourceresourcelocation AS `resource_location`,
      jsonpayload_v1_bdrbackuprestorejoblog.resourceType AS `resource_type`,
      jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceId AS `resource_id`,
      jsonpayload_v1_bdrbackuprestorejoblog.jobId AS `job_id`,
      timestamp AS `log_timestamp`
    FROM
      deduped
    JOIN
      filled_latest
    ON
      filled_latest.job_id = jsonpayload_v1_bdrbackuprestorejoblog.jobId
      AND jsonpayload_v1_bdrbackuprestorejoblog.jobStatus = filled_latest.final_status
    ORDER BY
      parsed_start_time DESC
    '
    
    # Backup Jobs Summary
    create_routine "backup_jobs_summary" "TABLE_VALUED_FUNCTION" \
    '
    SELECT
        DATE(log_timestamp) AS `date`,
        resource_project_name,
        job_category,
        ROUND((COUNTIF(job_status = '"'"'SUCCESSFUL'"'"')/COUNT(*)) * 100, 2) AS `success_percent`,
        COUNT(*) AS `total`,
        COUNTIF(job_status = '"'"'SUCCESSFUL'"'"') AS `successful`,
        COUNTIF(job_status = '"'"'FAILED'"'"') AS `failed`,
        COUNTIF(job_status = '"'"'SKIPPED'"'"') AS `skipped`,
      FROM
        `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE,
          TO_DATE)
      WHERE job_category IN ('"'"'ON_DEMAND_BACKUP'"'"', '"'"'SCHEDULED_BACKUP'"'"', '"'"'ON_DEMAND_OPERATIONAL_BACKUP'"'"') AND job_status IN ('"'"'SUCCESSFUL'"'"', '"'"'FAILED'"'"', '"'"'SKIPPED'"'"')
      GROUP BY
        `date`,
        resource_project_name,
        job_category
      ORDER BY
        `date` DESC
    '
    create_view "backup_jobs_summary"
    
    # Backup Jobs Details
    create_routine "backup_jobs_details" "TABLE_VALUED_FUNCTION" \
    '
    SELECT
        job_name,
        resource_name,
        resource_project_name,
        job_status,
        error_type,
        error_code,
        error_message,
        job_start_time,
        job_end_time,
        duration,
        incremental_backup_size_in_gib,
        backup_consistency_time,
        backup_plan_name,
        backup_rule,
        backup_vault_name,
        backup_vault_project_name,
        resource_location,
        resource_type,
        resource_id,
        job_category,
        job_id
      FROM
        `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE,
          TO_DATE)
      WHERE
        job_status IN ('"'"'SUCCESSFUL'"'"',
          '"'"'FAILED'"'"', '"'"'SKIPPED'"'"') AND job_category IN ('"'"'ON_DEMAND_BACKUP'"'"', '"'"'SCHEDULED_BACKUP'"'"', '"'"'ON_DEMAND_OPERATIONAL_BACKUP'"'"')
    '
    create_view "backup_jobs_details"
    
    # Successful Backup Jobs Details
    create_routine "successful_backup_jobs" "TABLE_VALUED_FUNCTION" \
    'SELECT
        job_name,
        resource_name,
        resource_project_name,
        job_start_time,
        job_end_time,
        duration,
        incremental_backup_size_in_gib,
        backup_consistency_time,
        backup_plan_name,
        backup_rule,
        backup_vault_name,
        backup_vault_project_name,
        resource_type,
        resource_id,
        job_category,
        job_id
      FROM
        `'$PROJECT_ID'.'$DATASET'.backup_jobs_details`(FROM_DATE,
          TO_DATE) WHERE job_status = '"'"'SUCCESSFUL'"'"'
    '
    create_view "successful_backup_jobs"
    
    # Failed Backup Job Details
    create_routine "failed_backup_jobs" "TABLE_VALUED_FUNCTION" \
    '
    SELECT
        job_name,
        resource_name,
        resource_project_name,
        error_type,
        error_code,
        error_message,
        job_start_time,
        job_end_time,
        duration,
        backup_plan_name,
        backup_rule,
        backup_vault_name,
        backup_vault_project_name,
        resource_type,
        resource_id,
        job_category,
        job_id
      FROM
        `'$PROJECT_ID'.'$DATASET'.backup_jobs_details`(FROM_DATE,
          TO_DATE) WHERE job_status = '"'"'FAILED'"'"'
    '
    create_view "failed_backup_jobs"
    
    # Skipped Backup Job Details
    create_routine "skipped_backup_jobs" "TABLE_VALUED_FUNCTION" \
    '
    SELECT
        job_name,
        resource_name,
        resource_project_name,
        backup_plan_name,
        backup_rule,
        backup_vault_name,
        job_start_time,
        backup_vault_project_name,
        resource_type,
        resource_id,
        job_category,
        job_id
      FROM
        `'$PROJECT_ID'.'$DATASET'.backup_jobs_details`(FROM_DATE,
          TO_DATE)
      WHERE
        job_status = '"'"'SKIPPED'"'"'
    '
    create_view "skipped_backup_jobs"
    
    # Restore Jobs Summary
    create_routine "restore_jobs_summary" "TABLE_VALUED_FUNCTION" \
    '
    SELECT
        DATE(log_timestamp) AS `date`,
        restore_project_name,
        backup_vault_project_name,
        ROUND((COUNTIF(job_status = '"'"'SUCCESSFUL'"'"')/COUNT(*)) * 100, 2) AS `success_percent`,
        COUNT(*) AS `total`,
        COUNTIF(job_status = '"'"'SUCCESSFUL'"'"') AS `successful`,
        COUNTIF(job_status = '"'"'FAILED'"'"') AS `failed`,
      FROM
        `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE,
          TO_DATE)
      WHERE
        job_category = '"'"'RESTORE'"'"'
        AND job_status IN ('"'"'SUCCESSFUL'"'"',
          '"'"'FAILED'"'"')
      GROUP BY
        `date`,
        restore_project_name,
        backup_vault_project_name
      ORDER BY
        `date` DESC
    '
    create_view "restore_jobs_summary"
    
    # Restore Jobs Details
    create_routine "restore_jobs_details" "TABLE_VALUED_FUNCTION" \
    '
    SELECT
        job_name,
        resource_name,
        resource_project_name,
        job_status,
        error_type,
        error_code,
        error_message,
        job_start_time,
        job_end_time,
        duration,
        restore_resource_name,
        restore_project_name,
        backup_vault_name,
        backup_vault_project_name,
        resource_location,
        resource_type,
        resource_id,
        job_id
      FROM
        `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE,
          TO_DATE)
      WHERE
        job_category IN ('"'"'RESTORE'"'"')
        AND job_status IN ('"'"'SUCCESSFUL'"'"',
          '"'"'FAILED'"'"')
    '
    create_view "restore_jobs_details"
    }
    
    # Protected Resource Details
    create_protected_resource_routines_and_views() {
    create_routine "protected_resource_details" "TABLE_VALUED_FUNCTION" \
    '
    WITH
      dat AS (
      SELECT
        *,
        LAG(insertId, 1, '"'"'placeholder_id'"'"') OVER (ORDER BY timestamp, insertId) = insertId AS `insert_id_is_same`,
        LAG(timestamp, 1, TIMESTAMP_SECONDS(1)) OVER (ORDER BY timestamp, insertId) = timestamp AS `timestamp_is_same`
    
      FROM
        `'$PROJECT_ID'.'$DATASET'.'$PROTECTED_RESOURCE_TABLE_PREFIX'*` WHERE _TABLE_SUFFIX >= FROM_DATE AND _TABLE_SUFFIX <= TO_DATE
        AND jsonpayload_v1_bdrprotectedresourcelog.resourceType IN '"${ALLOWED_RESOURCE_TYPES}"'
        ),
      deduped AS (
      SELECT
        * EXCEPT (`insert_id_is_same`,
          `timestamp_is_same`)
      FROM
        dat
      WHERE
        NOT (`insert_id_is_same`
          AND `timestamp_is_same`) ),
      latest_for_date AS (
      SELECT
        DATE(timestamp) AS `log_date`,
        jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename AS `source_resource_name`,
        MAX(timestamp) AS `latest_timestamp`
      FROM
        deduped
      GROUP BY
        `log_date`,
        `source_resource_name` )
    SELECT
      DATE(timestamp) AS `date`,
      jsonpayload_v1_bdrprotectedresourcelog.resourcetype AS `resource_type`,
      REGEXP_EXTRACT(jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename, '"'"'([^/]+)$'"'"') AS `resource_name`,
    IF
      (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename, '"'"'/'"'"')) < 2, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename,'"'"'/'"'"')[1]) AS `resource_project_name`,
      jsonpayload_v1_bdrprotectedresourcelog.sourceresourcedatasizegib AS `resource_data_size_in_gib`,
    IF
      (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupvaultname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupvaultname,'"'"'/'"'"')[5]) AS `backup_vault_name`,
    IF
      (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname,'"'"'/'"'"')[5]) AS `backup_plan_name`,
      (
      SELECT
        ARRAY_AGG(STRUCT(
            array_entry.rulename AS `rule_name`,
            array_entry.recurrence AS `recurrence`,
            array_entry.recurrenceschedule AS `recurrence_schedule`,
            array_entry.backupwindow AS `backup_window`,
            IF(array_entry.backupwindowtimezone = '"'"'Etc/UTC'"'"', '"'"'UTC'"'"',array_entry.backupwindowtimezone)  AS `backup_window_time_zone`,
            array_entry.retentiondays AS `retention_days`
          ))
      FROM
        UNNEST(jsonpayload_v1_bdrprotectedresourcelog.currentbackupruledetails) array_entry ) AS `backup_rules`,
    DATE(PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrprotectedresourcelog.lastprotectedon)) AS `last_backup_plan_assoc_date`,
    IF
      (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname, '"'"'/'"'"')) < 2, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname,'"'"'/'"'"')[1]) AS `backup_vault_project_name`,
    IF
      (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname, '"'"'/'"'"')) < 4, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname,'"'"'/'"'"')[3]) AS `backup_vault_location`,
      jsonpayload_v1_bdrprotectedresourcelog.sourceresourcelocation AS `resource_location`,
      jsonpayload_v1_bdrprotectedresourcelog.sourceresourceid AS `resource_id`,
    FROM
      deduped
    INNER JOIN
      latest_for_date
    ON
      DATE(timestamp) = latest_for_date.log_date
      AND timestamp = latest_for_date.latest_timestamp
      AND latest_for_date.`source_resource_name` = jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename
    ORDER BY
      `date` DESC,
      `resource_name` ASC
    '
    create_view "protected_resource_details"
    }
    
    # Backup Vault Consumption
    create_backup_vault_consumption_routines_and_views() {
    create_routine "backup_vault_consumption" "TABLE_VALUED_FUNCTION" \
    '
    WITH
        dat AS (
        SELECT
          *,
        LAG(insertId, 1, '"'"'placeholder_id'"'"') OVER (ORDER BY timestamp, insertId) = insertId AS `insert_id_is_same`,
        LAG(timestamp, 1, TIMESTAMP_SECONDS(1)) OVER (ORDER BY timestamp, insertId) = timestamp AS `timestamp_is_same`
        FROM
          `'$PROJECT_ID'.'$DATASET'.'$BACKUP_VAULT_TABLE_PREFIX'*`
        WHERE
          jsonpayload_v1_bdrbackupvaultdetailslog.resourceType IN '"${ALLOWED_RESOURCE_TYPES}"'
          AND _TABLE_SUFFIX >= FROM_DATE
          AND _TABLE_SUFFIX <= TO_DATE ),
        deduped AS (
        SELECT
          * EXCEPT (`insert_id_is_same`,
            `timestamp_is_same`)
        FROM
          dat
        WHERE
          NOT (`insert_id_is_same`
            AND `timestamp_is_same`) ),
        latest_for_date AS (
        SELECT
          DATE(timestamp) AS `log_date`,
          MAX(timestamp) AS `latest_timestamp`,
          jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename AS `source_resource_name`,
        FROM
          deduped
        GROUP BY
          log_date,
          `source_resource_name`)
      SELECT
        DATE(timestamp) AS `date`,
        jsonpayload_v1_bdrbackupvaultdetailslog.resourcetype AS `resource_type`,
        REGEXP_EXTRACT(jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename, '"'"'([^/]+)$'"'"') AS `resource_name`,
        SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename, '"'"'/'"'"')[1] AS `resource_project_name`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname,'"'"'/'"'"')[5]) AS `backup_vault_name`,
        jsonpayload_v1_bdrbackupvaultdetailslog.storedbytesgib AS `backup_vault_stored_bytes_in_gib`,
    CAST(jsonpayload_v1_bdrbackupvaultdetailslog.minimumEnforcedRetentionDays AS INT64) AS `backup_vault_minimum_enforced_retention_days`,
      IF
    (ARRAY_LENGTH(SPLIT(IF
    (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).currentbackupplanname) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).currentbackupplanname), NULL), '"'"'/'"'"')) < 6, NULL, SPLIT(IF
    (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).currentbackupplanname) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).currentbackupplanname), NULL),'"'"'/'"'"')[5]) AS `backup_plan_name`,
    PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', IF
    (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).firstavailablerestorepoint) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).firstavailablerestorepoint), NULL)) AS `first_available_restore_point`,
    PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', IF
    (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).lastavailablerestorepoint) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).lastavailablerestorepoint), NULL)) AS `last_available_restore_point`,
    
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname,'"'"'/'"'"')[1]) AS `backup_vault_project_name`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname,'"'"'/'"'"')[3]) AS `backup_vault_location`,
      jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcelocation AS `resource_location`,
      FROM
        deduped
      INNER JOIN
        latest_for_date
      ON
        DATE(timestamp) = latest_for_date.log_date
        AND timestamp = latest_for_date.latest_timestamp
        AND jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename = latest_for_date.`source_resource_name`
      ORDER BY
        `date` DESC,
        `resource_name` ASC
    '
    create_view "backup_vault_consumption"
    }
    
    # Backup Plan Details
    create_backup_plan_details_routines_and_views() {
      create_routine "backup_plan_details" "TABLE_VALUED_FUNCTION" \
      '
      WITH
        backup_plan_dat AS (
          SELECT
            *,
            LAG(insertId, 1, '"'"'placeholder_id'"'"') OVER (ORDER BY timestamp, insertId) = insertId AS `insert_id_is_same`,
            LAG(timestamp, 1, TIMESTAMP_SECONDS(1)) OVER (ORDER BY timestamp, insertId) = timestamp AS `timestamp_is_same`
          FROM
            `'$PROJECT_ID'.'$DATASET'.'$BACKUP_PLAN_TABLE_PREFIX'*`
          WHERE
            _TABLE_SUFFIX >= FROM_DATE
            AND _TABLE_SUFFIX <= TO_DATE
        ),
        backup_plan_deduped AS (
          SELECT
            * EXCEPT (`insert_id_is_same`, `timestamp_is_same`)
          FROM
            backup_plan_dat
          WHERE
            NOT (`insert_id_is_same` AND `timestamp_is_same`)
        ),
        backup_plan_latest_for_date AS (
          SELECT
            DATE(timestamp) AS `log_date`,
            MAX(timestamp) AS `latest_timestamp`,
            jsonpayload_v1_bdrbackupplandetailslog.backupplanname AS `backupplanname`
          FROM
            backup_plan_deduped
          GROUP BY
            `log_date`,
            `backupplanname`
        ),
        latest_backup_vault AS (
          SELECT
            timestamp,
            jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname as backupvaultname,
            jsonpayload_v1_bdrbackupvaultdetailslog.minimumenforcedretentiondays as minimumenforcedretentiondays,
            jsonpayload_v1_bdrbackupvaultdetailslog.effectivedateforenforcedretentionlock as effectivedateforenforcedretentionlock
          FROM `'$PROJECT_ID'.'$DATASET'.'$BACKUP_VAULT_TABLE_PREFIX'*`
          QUALIFY ROW_NUMBER() OVER (PARTITION BY jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname ORDER BY timestamp DESC) = 1
        )
      SELECT
        DATE(bp_d.timestamp) AS `date`,
        IF(ARRAY_LENGTH(SPLIT(bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupplanname, '"'"'/'"'"')) < 6, NULL, SPLIT(bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupplanname,'"'"'/'"'"')[OFFSET(5)]) AS `backup_plan_name`,
        IF(ARRAY_LENGTH(SPLIT(bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupplanname, '"'"'/'"'"')) < 4, NULL, SPLIT(bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupplanname, '"'"'/'"'"')[OFFSET(3)]) AS `backup_plan_location`,
        (
          SELECT
            ARRAY_AGG(STRUCT(
                rule_details.rulename AS `rule_name`,
                rule_details.recurrence AS `recurrence`,
                rule_details.recurrenceschedule AS `recurrence_schedule`,
                rule_details.backupwindow AS `backup_window`,
                IF(rule_details.backupwindowtimezone = '"'"'Etc/UTC'"'"', '"'"'UTC'"'"',rule_details.backupwindowtimezone)  AS `backup_window_time_zone`,
                rule_details.retentiondays AS `retention_days`
              ))
          FROM
            UNNEST(bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupruledetails) rule_details
        ) AS `backup_rules`,
        IF(ARRAY_LENGTH(SPLIT(bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupvaultname, '"'"'/'"'"')) < 6, NULL, SPLIT(bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupvaultname,'"'"'/'"'"')[OFFSET(5)]) AS `backup_vault_name`,
        IF(ARRAY_LENGTH(SPLIT(bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupvaultname, '"'"'/'"'"')) < 4, NULL, SPLIT(bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupvaultname, '"'"'/'"'"')[OFFSET(3)]) AS `backup_vault_location`,
        bp_d.jsonpayload_v1_bdrbackupplandetailslog.resourcesprotectedcount AS `resources_protected_count`,
        bp_d.jsonpayload_v1_bdrbackupplandetailslog.protecteddatavolumegib AS `protected_data_volume_gib`,
        bv_d.minimumenforcedretentiondays AS `minimum_enforced_retention_days`,
        bv_d.effectivedateforenforcedretentionlock AS `effective_date_for_enforced_retention_lock`,
        CASE
          WHEN bv_d.effectivedateforenforcedretentionlock IS NULL THEN '"'"'unlocked'"'"'
          WHEN LENGTH(TRIM(bv_d.effectivedateforenforcedretentionlock)) = 0 THEN '"'"'unlocked'"'"'
          WHEN PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', bv_d.effectivedateforenforcedretentionlock) <= CURRENT_TIMESTAMP() THEN '"'"'locked'"'"'
          ELSE '"'"'unlocked'"'"'
        END AS `lock_on_enforced_retention`
      FROM
        backup_plan_deduped AS bp_d
        INNER JOIN backup_plan_latest_for_date AS bp_lfd
          ON DATE(bp_d.timestamp) = bp_lfd.log_date
          AND bp_d.timestamp = bp_lfd.latest_timestamp
          AND bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupplanname = bp_lfd.backupplanname
        LEFT JOIN latest_backup_vault AS bv_d
          ON bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupvaultname = bv_d.backupvaultname
      ORDER BY
        `date` DESC,
        `backup_plan_name` ASC
      '
      create_view "backup_plan_details"
    }
    
    # Daily Scheduled Compliance
    create_daily_scheduled_compliance_routine_and_view() {
    create_routine "daily_scheduled_compliance" "TABLE_VALUED_FUNCTION" \
    '
    WITH
      DailyComplianceWindows AS (
      SELECT
        * EXCEPT (backup_rules),
        PARSE_TIMESTAMP('"'"'%F %H:%M %Z'"'"', date || '"'"' '"'"' || REGEXP_EXTRACT_ALL(backup_window, '"'"'[0-9][0-9]:[0-9][0-9]'"'"')[0] || '"'"' '"'"' || backup_window_time_zone) AS expected_start_time,
        resource_project_name || '"'"'/'"'"' || resource_location || '"'"'/'"'"' || resource_id AS unique_resource_name,
      IF
        (REGEXP_EXTRACT_ALL(backup_window, '"'"'[0-9][0-9]:[0-9][0-9]'"'"')[1] = '"'"'24:00'"'"', TIMESTAMP_ADD(PARSE_TIMESTAMP('"'"'%F %H:%M %Z'"'"', date || '"'"' 23:59 '"'"' || backup_window_time_zone), INTERVAL 1 MINUTE), PARSE_TIMESTAMP('"'"'%F %H:%M %Z'"'"', date || '"'"' '"'"' || REGEXP_EXTRACT_ALL(backup_window, '"'"'[0-9][0-9]:[0-9][0-9]'"'"')[1] || '"'"' '"'"' || backup_window_time_zone)) AS expected_end_time
      FROM
        `'$PROJECT_ID'.'$DATASET'.protected_resource_details`(FROM_DATE, TO_DATE)
      CROSS JOIN
        UNNEST(backup_rules) AS rule
      ON
        rule.recurrence='"'"'Daily'"'"'),
      BackupJobs AS (
      SELECT
        *,
        resource_project_name || '"'"'/'"'"' || resource_location || '"'"'/'"'"' || resource_id AS unique_resource_name,
      FROM
        `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE, FORMAT_DATE('"'"'%Y%m%d'"'"', DATE_ADD(PARSE_DATE('"'"'%Y%m%d'"'"', TO_DATE), INTERVAL 10 DAY)))
      WHERE
        job_category IN ('"'"'ON_DEMAND_BACKUP'"'"',
          '"'"'SCHEDULED_BACKUP'"'"',
          '"'"'ON_DEMAND_OPERATIONAL_BACKUP'"'"') ),
      DailyCompliance AS (
      SELECT
        date,
        DailyComplianceWindows.resource_type,
        DailyComplianceWindows.resource_name,
        DailyComplianceWindows.resource_project_name as `resource_project_name`,
        DailyComplianceWindows.backup_plan_name,
        DailyComplianceWindows.rule_name,
        DailyComplianceWindows.backup_window,
        DailyComplianceWindows.backup_window_time_zone,
        (CASE BackupJobs.job_status
            WHEN '"'"'SUCCESSFUL'"'"' THEN '"'"'Successful'"'"'
            WHEN '"'"'RUNNING'"'"' THEN '"'"'Pending'"'"'
            ELSE
        IF
          (CURRENT_TIMESTAMP() <= DailyComplianceWindows.expected_end_time, '"'"'Pending'"'"', '"'"'Failed'"'"')
        END
          ) AS backup_schedule_compliance_status,
        (CASE BackupJobs.job_status
            WHEN '"'"'SUCCESSFUL'"'"' THEN '"'"'Job '"'"' || BackupJobs.job_name || '"'"' was successful.'"'"'
            WHEN '"'"'RUNNING'"'"' THEN '"'"'Job '"'"' || BackupJobs.job_name || '"'"' is running.'"'"'
            ELSE
        IF
          (CURRENT_TIMESTAMP() <= DailyComplianceWindows.expected_end_time, '"'"'Backup window for '"'"'|| date || '"'"' has not passed yet.'"'"', '"'"'No successful backup job detected within backup window.'"'"')
        END
          ) AS comment,
        DailyComplianceWindows.backup_vault_name,
        DailyComplianceWindows.backup_vault_project_name as `backup_vault_project_name`,
        DailyComplianceWindows.backup_vault_location as `backup_vault_location`,
        DailyComplianceWindows.resource_location,
        ROW_NUMBER() OVER (PARTITION BY BackupJobs.unique_resource_name, DailyComplianceWindows.date, DailyComplianceWindows.rule_name ORDER BY (CASE BackupJobs.job_status WHEN '"'"'SUCCESSFUL'"'"' THEN 4 WHEN '"'"'RUNNING'"'"' THEN 3 ELSE 1 END ) DESC,
          BackupJobs.job_start_time ASC) AS row_number,
      FROM
        DailyComplianceWindows
      LEFT JOIN
        BackupJobs
      ON
        (BackupJobs.unique_resource_name = DailyComplianceWindows.unique_resource_name
          AND BackupJobs.backup_rule = DailyComplianceWindows.rule_name
          AND BackupJobs.job_status = '"'"'SUCCESSFUL'"'"'
          AND BackupJobs.backup_consistency_time >= DailyComplianceWindows.expected_start_time
          AND BackupJobs.backup_consistency_time <= DailyComplianceWindows.expected_end_time)
        OR (BackupJobs.unique_resource_name = DailyComplianceWindows.unique_resource_name
          AND BackupJobs.backup_rule = DailyComplianceWindows.rule_name
          AND BackupJobs.job_status = '"'"'RUNNING'"'"'
          AND BackupJobs.job_start_time >= DailyComplianceWindows.expected_start_time
          AND BackupJobs.job_start_time <= DailyComplianceWindows.expected_end_time)
      ORDER BY
        date DESC,
        BackupJobs.resource_name ASC)
    SELECT
      * EXCEPT(row_number)
    FROM
      DailyCompliance
    WHERE
      row_number = 1
    '
    create_view "daily_scheduled_compliance"
    }
    
    # --- Main script ---
    
    # Get the default project ID
    DEFAULT_PROJECT_ID=$(gcloud config get-value project)
    read -p "Enter Project ID (default: $DEFAULT_PROJECT_ID, press Enter to continue):" PROJECT_ID
    
    # Use default if no input is provided
    if [ -z "$PROJECT_ID" ]; then
      PROJECT_ID=$DEFAULT_PROJECT_ID
    fi
    
    ## Generate Access Token
    echo "Generating access token..."
    ACCESS_TOKEN=$(gcloud auth print-access-token --quiet)
    if [ -z "$ACCESS_TOKEN" ]; then
      echo "Failed to retrieve access token" >&2
      exit 1
    fi
    echo "Access token generated successfully..."
    
    ## Check if the dataset exists
    echo "Check if Reporting Dataset exists..."
    if bq --project_id $PROJECT_ID ls | grep "$DATASET" >/dev/null; then
      echo "Dataset $DATASET exists for $PROJECT_ID. Continuing."
    else
      echo "Dataset $DATASET does not exist for $PROJECT_ID. Exiting."
      exit 0
    fi
    
    ## Check if the tables exist
    echo "Determining which tables are available in BigQuery..."
    check_jobs_table=$(bq --project_id $PROJECT_ID query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from "$DATASET".INFORMATION_SCHEMA.TABLES WHERE table_name like '%"$JOBS_TABLE_PREFIX"%'" | tail -n +2 | tr -d ' ')
    check_protected_resource_table=$(bq --project_id $PROJECT_ID query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from "$DATASET".INFORMATION_SCHEMA.TABLES WHERE table_name like '%"$PROTECTED_RESOURCE_TABLE_PREFIX"%'" | tail -n +2 | tr -d ' ')
    check_backup_vault_consumption_table=$(bq --project_id $PROJECT_ID query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from "$DATASET".INFORMATION_SCHEMA.TABLES WHERE table_name like '%"$BACKUP_VAULT_TABLE_PREFIX"%'" | tail -n +2 | tr -d ' ')
    check_backup_plan_details_table=$(bq --project_id $PROJECT_ID query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from "$DATASET".INFORMATION_SCHEMA.TABLES WHERE table_name like '%"$BACKUP_PLAN_TABLE_PREFIX"%'" | tail -n +2 | tr -d ' ')
    
    echo "Creating routines and views based on available tables.."
    if [ "$check_jobs_table" == 'true' ]; then
    create_jobs_routines_and_views
    fi
    
    if [ "$check_protected_resource_table" == 'true' ]; then
    create_protected_resource_routines_and_views
    fi
    
    if [ "$check_backup_vault_consumption_table" == 'true' ]; then
    create_backup_vault_consumption_routines_and_views
    fi
    
    if [ "$check_backup_plan_details_table" == 'true' ]; then
    create_backup_plan_details_routines_and_views
    fi
    
    if [ "$check_jobs_table" == 'true' ] && [ "$check_protected_resource_table" == 'true' ]; then
    create_daily_scheduled_compliance_routine_and_view
    fi
    
    if [ "$check_jobs_table" == 'false' ] || [ "$check_protected_resource_table" == 'false' ] || [ "$check_backup_vault_consumption_table" == 'false' ] || [ "$check_backup_plan_details_table" == 'false' ]; then
    echo -e "\e[1m\e[33mAll the prebuilt reports could not be created successfully in BigQuery as one or more report logs are missing in the dataset $DATASET."
    echo -e "Please ensure that you have waited for at least 8 hours after creating the sink, and before running the script to create pre built reports. Try re-running the script again after some time to fix the issue."
    echo -e "Reach out to Google Cloud Support in case you are still facing this issue.\e[0m"
    else
    echo "Set up completed..."
    fi
    
  4. Save the file with a name that uses the Bash file extension, for example, backupdrreports.sh.

  5. Run the bash command with the file you just created, for example, bash backupdrreports.sh.

    You can view the prebuilt reports under the dataset in BigQuery Studio.
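
    After the script completes, you can query the views directly. A sketch, assuming the default dataset name and that the backup_jobs_summary view was created:

       # Daily backup job success rates from the prebuilt summary view
       bq query --use_legacy_sql=false \
           'SELECT * FROM bdr_reports.backup_jobs_summary ORDER BY date DESC LIMIT 10'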

After you set up the prebuilt reports, you can view them in Looker Studio.

What's next