Set up prebuilt reports in BigQuery

This page describes how to set up and view prebuilt reports in BigQuery. The reports are built from logs in Cloud Logging. To set up the reports, you perform a one-time activity: create a log sink that streams logging data to BigQuery, and then run the prebuilt script.

Required IAM roles

To view prebuilt reports in BigQuery, you need the following IAM roles. Learn how to grant IAM roles.

Role | When to grant the role
---- | ----
Logs Configuration Writer (roles/logging.configWriter) or Logging Admin (roles/logging.admin), plus BigQuery Data Editor (roles/bigquery.dataEditor) | To create the sink and the BigQuery dataset through the Google Cloud console.
Owner (roles/owner) | To create the sink and the BigQuery dataset through the Google Cloud CLI.
BigQuery Admin (roles/bigquery.admin) | To write custom queries or download query results.

Create a sink and route logs to BigQuery

BigQuery stores only the logs generated after the log sink is created. Logs generated before the log sink was created don't appear in BigQuery. You can create the log sink through the Google Cloud console or the Google Cloud CLI.

To create a sink and route logs to BigQuery, do the following:

Console

  1. In the Google Cloud console, go to the Log Router page:

    Go to Log Router

  2. Select an existing Google Cloud project.
  3. Click Create sink.
  4. In the Sink details panel, enter the following fields:
    1. Sink name: enter the sink name BackupandDR_reports_sink. You must use the sink name BackupandDR_reports_sink to identify Backup and DR report logs and distinguish them from other sinks.
    2. Sink description: describe the purpose or use case for the sink.
  5. In the Sink destination panel, do the following:

    1. In the Select sink service menu, select the BigQuery dataset sink service.
    2. In Select BigQuery dataset, select Create new BigQuery dataset.
    3. On the Create dataset page, do the following:
      1. For Dataset ID, enter the dataset name BackupandDR_reports so that it can be distinguished from other datasets. Don't change the dataset name BackupandDR_reports.
      2. For Location type, choose a geographic location for the dataset. After a dataset is created, the location can't be changed.
      3. Optional: if you want tables in this dataset to expire, select Enable table expiration, and then specify the default maximum table age in days.
      4. Click Create dataset.
  6. In the Choose logs to include in sink panel, do the following:

    1. In the Build inclusion filter field, enter the following filter expression, which matches the log entries that you want to include.

       logName=~"projects/PROJECT_ID/logs/backupdr.googleapis.com%2Fgcb_*"
      
    2. To verify that you entered the correct filter, select Preview logs. The Logs Explorer opens in a new tab with the filter pre-populated.

  7. Optional: In the Choose logs to filter out of sink panel, do the following:

    1. Enter a name in the Exclusion filter name field.
    2. In the Build an exclusion filter field, enter a filter expression that matches the log entries you want to exclude. You can also use the sample function to select a portion of the log entries to exclude.

  8. Select Create sink.

    You can view the dataset in BigQuery Studio.
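The inclusion filter uses a regular-expression match (`=~`) on logName. As a rough local illustration, with hypothetical sample logName values (not output from any real project), you can check which names the `gcb_` pattern selects:

```shell
# Hypothetical logName values; only those under backupdr.googleapis.com%2Fgcb_
# should be routed to BigQuery by the sink's inclusion filter.
printf '%s\n' \
  'projects/my-project/logs/backupdr.googleapis.com%2Fgcb_backup_recovery_jobs' \
  'projects/my-project/logs/backupdr.googleapis.com%2Fgcb_daily_schedule_compliance' \
  'projects/my-project/logs/cloudaudit.googleapis.com%2Factivity' |
  grep -c 'backupdr\.googleapis\.com%2Fgcb_'
# prints 2: the two Backup and DR report logs match, the audit log does not
```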

gcloud

  1. Go to Activate Cloud Shell, and then click Open Editor.
  2. Click the icon, select File, and then select New Text File.
  3. Copy and paste the following script.

      #!/bin/bash
      echo "This script will set up a log sink for BackupDR reports to be available in BigQuery"
    
      # Get the default project ID
      DEFAULT_PROJECT_ID=$(gcloud config get-value project)
      read -p "Enter Project ID (default: $DEFAULT_PROJECT_ID, press Enter to continue):" PROJECT_ID
    
      # Use default if no input is provided
      if [ -z "$PROJECT_ID" ]; then
        PROJECT_ID=$DEFAULT_PROJECT_ID
      fi
        # Set the project ID
      result=$(gcloud config set project $PROJECT_ID)
      if [ $? -ne 0 ]; then
        echo "Error setting the project to $PROJECT_ID"
        exit 1
      fi
        # --- Check if BigQuery API is already enabled, enable if not ---
      echo "Checking if BigQuery API is enabled..."
      if gcloud services list | grep "bigquery.googleapis.com" >/dev/null; then
        echo "BigQuery API is already enabled for $PROJECT_ID"
      else
        echo "For logs to be available in BigQuery, we need to enable BigQuery service in the project if not done already. This might mean additional costs incurred. Please check the pricing at https://cloud.google.com/backup-disaster-recovery/docs/monitor-reports/reports-overview#pricing before proceeding."
        read -p "Do you want to continue(Y/N)?" continue
        if [ "$continue" = "y" ] || [ "$continue" = "Y" ]; then
          echo "Enabling BigQuery API..."
          result=$(gcloud services enable bigquery.googleapis.com --project $PROJECT_ID)
          if [ $? -eq 0 ]; then
            echo "Successfully enabled BigQuery api for $PROJECT_ID"
          else
            echo "Error in setting up the BigQuery api for the project. $result"
            exit 1
          fi
        else
          exit 0
        fi
      fi
        # --- Check if BigQuery data set already exists, create if not ---
      echo "Checking if BigQuery data set exists..."
      if bq ls | grep "BackupandDR_reports" >/dev/null; then
        echo "Dataset BackupandDR_reports already exists for $PROJECT_ID"
      else
        echo "Creating BigQuery dataset BackupandDR_reports..."
        # --- Get dataset location from user (default: US) ---
        read -p "Enter dataset location (default: US, press Enter to use): " DATASET_LOCATION
        if [ -z "$DATASET_LOCATION" ]; then
          DATASET_LOCATION="US"
        fi
        # --- Get table expiration in days from user (default: no expiration) ---
        read -p "Enter default table expiration in days (default: no expiration, press Enter to skip): " TABLE_EXPIRATION_DAYS
    
        # Calculate table expiration in seconds if provided
        if [ -n "$TABLE_EXPIRATION_DAYS" ]; then
          TABLE_EXPIRATION_SECONDS=$((TABLE_EXPIRATION_DAYS * 24 * 60 * 60))
          EXPIRATION_FLAG="--default_table_expiration $TABLE_EXPIRATION_SECONDS"
        else
          EXPIRATION_FLAG=""
        fi
        result=$(bq --location=$DATASET_LOCATION mk $EXPIRATION_FLAG BackupandDR_reports)
        if [ $? -eq 0 ]; then
          echo "Created a BigQuery dataset BackupandDR_reports successfully."
        else
          echo ""
          echo "ERROR : Failed to create the BigQuery dataset."
          echo $result
          exit 1
        fi
      fi
        # --- Check if Log Sink already exists, create if not ---
      echo "Checking if Log Sink exists..."
      if gcloud logging sinks list | grep "BackupandDR_reports_sink" >/dev/null; then
        echo "Log Sink BackupandDR_reports_sink already exists for $PROJECT_ID"
      else
        log_filter="projects/$PROJECT_ID/logs/backupdr.googleapis.com%2Fgcb_*"
        echo "Creating log sink BackupandDR_reports_sink..."
        result=$(gcloud logging sinks create BackupandDR_reports_sink bigquery.googleapis.com/projects/$PROJECT_ID/datasets/BackupandDR_reports --log-filter="logName=~\"$log_filter\"")
        if [ $? -eq 0 ]; then
          echo "Created a logsink BackupandDR_reports_sink successfully."
        else
          echo ""
          echo "ERROR : Failed to create logsink."
          exit 1
        fi
      fi
    
      # --- Add IAM Policy binding for Cloud logging service account to write logs to BigQuery ---
      result=$(gcloud projects add-iam-policy-binding $(gcloud projects describe $PROJECT_ID --format="value(projectNumber)") --member=serviceAccount:service-$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")@gcp-sa-logging.iam.gserviceaccount.com --role=roles/bigquery.dataEditor --condition=None)
      if [ $? -eq 0 ]; then
        echo "Added permission for cloud logging to write to BigQuery datasets"
      else
        echo ""
        echo "ERROR : Failed to add permissions for cloud logging to write to BigQuery datasets. Please make sure that you have correct access rights in order to be able to proceed."
        exit 1
      fi
    
      echo "Setup complete. The logs for the project $PROJECT_ID will now start flowing to bigquery."
      exit 0
    
  4. Save the file with a name that uses the Bash (.sh) file extension, for example, script.sh.

  5. Run the bash command on the file that you just created, for example, bash script.sh.

    You can view the created dataset in BigQuery Studio.
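The script prompts for a table expiration in days and converts it to seconds, because the `bq mk --default_table_expiration` flag expects a value in seconds. The conversion it performs can be checked in isolation:

```shell
# Convert a table expiration of 30 days (an example value) into the
# seconds expected by: bq mk --default_table_expiration <seconds>
TABLE_EXPIRATION_DAYS=30
TABLE_EXPIRATION_SECONDS=$((TABLE_EXPIRATION_DAYS * 24 * 60 * 60))
echo "$TABLE_EXPIRATION_SECONDS"
# prints 2592000
```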

Set up prebuilt reports

You can set up prebuilt reports by running the following script in the dataset to which your logs are routed through the log sink.

The script adds the following prebuilt reports:

  - Backup Jobs Summary
  - Recovery Jobs Summary
  - Failed Jobs
  - Job Details
  - Unresolved Failures
  - Daily Schedule Compliance
  - Protected Data Resource
  - Storage Resource Utilization
  - Unprotected Resource
  - Mounted Image Details
  - Connector Version Details
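The report queries in the script reference PROJECT_ID and DATASET_NAME placeholders, and the `bq` commands reference DATASET_NAME, so substitute your own values before running it. One way to do that (the file name reports.sh is an assumption) is with sed, shown here on a single sample line:

```shell
# Replace the PROJECT_ID and DATASET_NAME placeholders with real values.
# To patch a saved copy of the script in place, you could run, for example:
#   sed -i 's/PROJECT_ID/my-project/g; s/DATASET_NAME/BackupandDR_reports/g' reports.sh
echo 'FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*`' |
  sed -e 's/PROJECT_ID/my-project/g' -e 's/DATASET_NAME/BackupandDR_reports/g'
# prints FROM `my-project.BackupandDR_reports.backupdr_googleapis_com_gcb_backup_recovery_jobs_*`
```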

To set up prebuilt reports in BigQuery, do the following:

gcloud

  1. Go to Activate Cloud Shell, and then click Open Editor.
  2. Create a new text file.
  3. Copy and paste the following prebuilt reports script into the Google Cloud CLI.

      backup_job_summary_query='WITH filtered_data AS (
      SELECT
      insertId,
      ANY_VALUE(jsonPayload) as jsonPayload,
      FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*`
      WHERE DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) >= DATE_SUB(CURRENT_DATE(), INTERVAL 29 DAY) AND
      DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) < CURRENT_DATE() + 1
      GROUP BY insertId
      )
      SELECT
      DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) as Date,
      jsonPayload.resource_name as Resource_Name,
      jsonPayload.resource_type as Resource_Type,
      jsonPayload.job_type as Job_Type,
      COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END)/COUNT(*) * 100 as Success_Percent,
      COUNT(*) as Total_Jobs,
      COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END) as Succeeded,
      COUNT(CASE WHEN jsonPayload.job_status = "failed" THEN 1 END) as Failed,
      COUNT(CASE WHEN jsonPayload.job_status = "canceled" THEN 1 END) as Cancelled,
      COUNT(CASE WHEN jsonPayload.job_status = "notrun" THEN 1 END) as Not_Run,
      FROM filtered_data
      WHERE jsonPayload.job_category = "Backup Job" AND
      jsonPayload.job_status != "retry"
      GROUP BY Date, jsonPayload.job_type, jsonPayload.resource_id, jsonPayload.resource_name, jsonPayload.resource_type
      ORDER BY  Date, Resource_Name;'
    
      recovery_job_summary_query='WITH filtered_data AS (
      SELECT
      insertId,
      ANY_VALUE(jsonPayload) as jsonPayload,
      FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*`
      WHERE DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) >= DATE_SUB(CURRENT_DATE(), INTERVAL 29 DAY) AND
      DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) < CURRENT_DATE() + 1
      GROUP BY insertId
      )
      SELECT
      DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) as Date,
      jsonPayload.resource_name as Resource_Name,
      jsonPayload.resource_type as Resource_Type,
      jsonPayload.job_type as Job_Type,
      COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END)/COUNT(*) * 100 as Success_Percent,
      COUNT(*) as Total_Jobs,
      COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END) as Succeeded,
      COUNT(CASE WHEN jsonPayload.job_status = "failed" THEN 1 END) as Failed,
      COUNT(CASE WHEN jsonPayload.job_status = "canceled" THEN 1 END) as Cancelled,
      COUNT(CASE WHEN jsonPayload.job_status = "notrun" THEN 1 END) as Not_Run,
      FROM filtered_data
      WHERE jsonPayload.job_category = "Recovery Job" AND
      jsonPayload.job_status != "retry"
      GROUP BY Date, jsonPayload.job_type, jsonPayload.resource_id, jsonPayload.resource_name, jsonPayload.resource_type
      ORDER BY Date, Resource_Name;'
    
      failed_job_query='WITH filtered_data AS (
      SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload
      FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*`
      GROUP BY insertId
      )
      SELECT
      jsonPayload.job_name as Job_Name,
      jsonPayload.job_category as Job_Category,
      jsonPayload.job_type as Job_Type,
      jsonPayload.resource_name as Resource_Name,
      jsonPayload.resource_type as Resource_Type,
      jsonPayload.error_code as Error_Code,
      jsonPayload.error_message as Error_Message,
      jsonPayload.hostname as Host_Name,
      jsonPayload.backup_rule_policy_name as Backup_Rule_Policy_Name,
      jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template,
      jsonPayload.appliance_name as Appliance_Name,
      FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(jsonPayload.job_start_time)) as Job_Start_Time,
      DATE(jsonPayload.job_start_time) as Job_Date,
      jsonPayload.backup_type as Backup_Type
      FROM filtered_data
      WHERE
      jsonPayload.job_status = "failed"
      ORDER BY Job_Start_Time DESC, Resource_Name'
    
      job_details_query='WITH filtered_data AS (
      SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload
      FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*`
      GROUP BY insertId
      )
      SELECT
      jsonPayload.job_name as Job_Name,
      jsonPayload.job_category as Job_Category,
      jsonPayload.job_type as Job_Type,
      jsonPayload.log_backup as Log_Backup,
      jsonPayload.job_status as Job_Status,
      DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) as Job_Date,
      jsonPayload.resource_name as Resource_Name,
      jsonPayload.resource_type as Resource_Type,
      jsonPayload.error_code as Error_Code,
      jsonPayload.error_message as Error_Message,
      jsonPayload.job_initiation_failure_reason as Job_Initiation_Failure_Reason,
      jsonPayload.appliance_name as Appliance_Name,
      jsonPayload.hostname as Host_Name,
      jsonPayload.target_appliance_name as Target_Appliance_Name,
      jsonPayload.target_pool_name as Target_Pool_Name,
      jsonPayload.target_host_name as Target_Host_Name,
      jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template,
      jsonPayload.backup_rule_policy_name as Backup_Rule_Policy_Name,
      FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.job_queued_time !="" THEN jsonPayload.job_queued_time ELSE NULL END))  as Job_Queue_Time,
      FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.job_start_time!="" THEN jsonPayload.job_start_time ELSE NULL END))  as Job_Start_Time,
      FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.job_end_time!="" THEN jsonPayload.job_end_time ELSE NULL END))  as Job_End_Time,
      jsonPayload.job_duration_in_hours as Job_Duration_In_Hours,
      jsonPayload.backup_consistency as Backup_Consistency,
      jsonPayload.resource_data_size_in_gib as Resource_Data_Size_In_GiB,
      jsonPayload.snapshot_disk_size_in_gib as Snapshot_Disk_Size_in_GiB,
      jsonPayload.pre_compress_in_gib as Pre_Compress_In_GiB,
      jsonPayload.compression_ratio as Compression_Ratio,
      jsonPayload.data_sent_in_gib as Streamsnap_Data_Sent_In_GiB,
      jsonPayload.data_written_in_gib as Streamsnap_Data_Written_In_GiB,
      jsonPayload.data_copied_in_gib as Data_Copied_In_GiB,
      jsonPayload.data_change_rate as Data_Change_Rate_Percent,
      jsonPayload.backup_type as Backup_Type,
      FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.recovery_point!="" THEN jsonPayload.recovery_point ELSE NULL END))  as Recovery_Point,
      jsonPayload.onvault_pool_storage_consumed_in_gib as OnVault_Pool_Storage_Consumed_In_GiB,
      FROM filtered_data
      ORDER BY Job_Start_Time DESC, Resource_Name'
    
      unresolved_failures_query='WITH job_data AS (
      SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload
      FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*`
      WHERE
      jsonPayload.job_status in ("succeeded", "failed")
      GROUP BY insertId
      ),
      -- Select resources which have their latest status as failed
      unresolved_failed_resources AS (
      SELECT
      jsonPayload.appliance_name as Appliance_Name,
      jsonPayload.resource_name as Resource_Name,
      jsonPayload.resource_type as Resource_Type,
      jsonPayload.job_type as Job_Type,
      jsonPayload.job_status as Job_Status,
      jsonPayload.job_start_time as Job_Start_Time,
      jsonPayload.hostname as Hostname,
      jsonPayload.error_code as Error_Code,
      jsonPayload.error_message as Error_Message
      FROM job_data j1
      WHERE jsonPayload.job_start_time = (SELECT MAX(jsonPayload.job_start_time) FROM job_data j2 WHERE j1.jsonPayload.appliance_name = j2.jsonPayload.appliance_name AND j1.jsonPayload.resource_name = j2.jsonPayload.resource_name AND j1.jsonPayload.resource_type = j2.jsonPayload.resource_type AND j1.jsonPayload.job_type = j2.jsonPayload.job_type) AND
      jsonPayload.job_status = "failed"
      ),
      -- Select all jobs for all the resources that have unresolved failures
      all_jobs_of_unresolved_failure_resources AS (
      SELECT
      jsonPayload.appliance_name as Appliance_Name,
      jsonPayload.resource_name as Resource_Name,
      jsonPayload.resource_type as Resource_Type,
      jsonPayload.job_type as Job_Type,
      jsonPayload.job_category as Job_Category,
      jsonPayload.job_name as Job_Name,
      jsonPayload.recovery_point as Recovery_Point,
      jsonPayload.job_status as Job_Status,
      jsonPayload.job_start_time as Job_Start_Time,
      jsonPayload.hostname as Hostname,
      jsonPayload.error_code as Error_Code
      FROM job_data j1
      JOIN unresolved_failed_resources fr
      ON
      j1.jsonPayload.appliance_name = fr.Appliance_Name AND
      j1.jsonPayload.resource_name = fr.Resource_Name AND
      j1.jsonPayload.resource_type = fr.Resource_Type AND
      j1.jsonPayload.job_type = fr.Job_Type
      ),
      -- Select the latest successful jobs for the resources with unresolved failures
      latest_success AS (
      SELECT *
      FROM all_jobs_of_unresolved_failure_resources all_jobs
      WHERE
      all_jobs.Job_Status = "succeeded" AND
      all_jobs.Job_Start_Time = (
        SELECT MAX(Job_Start_Time) FROM
        all_jobs_of_unresolved_failure_resources all_jobs_2
        WHERE all_jobs_2.Appliance_Name = all_jobs.Appliance_Name AND
        all_jobs_2.Resource_Name = all_jobs.Resource_Name AND
        all_jobs_2.Resource_Type = all_jobs.Resource_Type AND
        all_jobs_2.Job_Type = all_jobs.Job_Type AND
        all_jobs_2.job_status = "succeeded"
        )
      ),
      -- Select all failed jobs after the last success for the resources with unresolved failures
      failed_jobs_to_report AS (
      SELECT
      all_jobs.Appliance_Name as Appliance_Name,
      all_jobs.Resource_Name as Resource_Name,
      all_jobs.Resource_Type as Resource_Type,
      all_jobs.Job_Type as Job_Type,
      all_jobs.Job_Name as Job_Name,
      all_jobs.Recovery_Point as Recovery_Point,
      all_jobs.Job_Status as Job_Status,
      all_jobs.Job_Start_Time as Job_Start_Time,
      all_jobs.Hostname as Hostname,
      all_jobs.Error_Code as Error_Code,
      all_jobs.Job_Category as Job_Category,
      FROM all_jobs_of_unresolved_failure_resources all_jobs
      LEFT JOIN latest_success success
      ON success.Appliance_Name = all_jobs.Appliance_Name AND
        success.Resource_Name = all_jobs.Resource_Name AND
        success.Resource_Type = all_jobs.Resource_Type AND
        success.Job_Type = all_jobs.Job_Type
      WHERE all_jobs.Job_Status = "failed" AND
      TIMESTAMP(all_jobs.Job_Start_Time) > COALESCE(TIMESTAMP(success.Job_Start_Time), TIMESTAMP(DATE_SUB(CURRENT_DATE(), INTERVAL 3 DAY)))
      )
      SELECT
      failed_jobs.Resource_Name,
      failed_jobs.Resource_Type,
      failed_jobs.Job_Type,
      failed_jobs.Job_Category,
      COUNT(*) as Error_Count,
      failed_jobs.Error_Code,
      ANY_VALUE(ufr.Error_Message) as Error_Message,
      FORMAT_TIMESTAMP("%F %R %Z", MIN(TIMESTAMP(failed_jobs.Job_Start_Time))) as First_Failure,
      FORMAT_TIMESTAMP("%F %R %Z", MAX(TIMESTAMP(failed_jobs.Job_Start_Time))) as Last_Failure,
      TIMESTAMP_DIFF(CURRENT_TIMESTAMP,TIMESTAMP(MAX(failed_jobs.Job_Start_Time)), HOUR) as Hours_Since_Last_Failure,
      failed_jobs.Appliance_Name
      FROM failed_jobs_to_report failed_jobs
      LEFT JOIN unresolved_failed_resources ufr
      ON failed_jobs.Appliance_Name = ufr.Appliance_Name AND
        failed_jobs.Resource_Name = ufr.Resource_Name AND
        failed_jobs.Resource_Type = ufr.Resource_Type AND
        failed_jobs.Job_Type = ufr.Job_Type AND
        failed_jobs.Error_Code = ufr.Error_Code
      GROUP BY Appliance_Name, Resource_Name, Resource_Type, Job_Type, Job_Category, Error_Code
      ORDER BY Last_Failure DESC
      '
      daily_schedule_compliance_query='WITH filtered_data AS (
      SELECT
      insertId,
      ANY_VALUE(jsonPayload) as jsonPayload,
      ANY_VALUE(timestamp) as Report_Timestamp
      FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_daily_schedule_compliance_*`
      WHERE DATE(jsonPayload.date) >= DATE_SUB(CURRENT_DATE(), INTERVAL 8 DAY) AND
      DATE(jsonPayload.date) < CURRENT_DATE() + 1
      GROUP BY insertId
      )
      SELECT
      jsonPayload.resource_name as Resource_Name,
      jsonPayload.resource_type as Resource_Type,
      jsonPayload.backup_rule_policy_name as Backup_Rule_Policy_Name,
      jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template,
      DATE(jsonPayload.date) as Date,
      jsonPayload.backup_window_start_time as Backup_Window_Start_Time,
      jsonPayload.job_type as Job_Type,
      jsonPayload.status as Status,
      jsonPayload.comment as Comment,
      jsonPayload.appliance_name as Appliance_Name,
      jsonPayload.host_name as Host_Name
      FROM filtered_data f1
      WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id)
      ORDER BY Resource_Name, Date;
      '
      protected_data_resource_query='WITH filtered_data AS (
      SELECT
      insertId,
      ANY_VALUE(jsonPayload) as jsonPayload,
      ANY_VALUE(timestamp) as Report_Timestamp
      FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_protected_resource_*`
      GROUP BY insertId
      )
      SELECT
      FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on,
      jsonPayload.resource_name as Resource_Name,
      jsonPayload.resource_type as Resource_Type,
      jsonPayload.host_name as Host_Name,
      jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template,
      jsonPayload.protected_on as Protected_On,
      jsonPayload.recovery_point as Recovery_Point,
      jsonPayload.protected_data_in_gib as Protected_Data_in_Gib,
      jsonPayload.onvault_in_gib as OnVault_Data_in_Gib,
      jsonPayload.backup_plan_restrictions as Backup_Plan_Restrictions,
      jsonPayload.backup_inclusion_or_exclusion as Backup_Inclusion_or_Exclusion,
      jsonPayload.policy_overrides as Policy_Overrides,
      jsonPayload.appliance_name as Appliance_Name,
      jsonPayload.remote_appliance as Remote_Appliance,
      jsonPayload.source_appliance as Source_Appliance,
      FROM filtered_data f1
      WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id)
      ORDER BY Resource_Name;'
    
      storage_utilization_query='WITH filtered_data AS (
      SELECT
      insertId,
      ANY_VALUE(jsonPayload) as jsonPayload,
      ANY_VALUE(timestamp) as Report_Timestamp
      FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_storage_utilization_*`
      GROUP BY insertId
      )
      SELECT
      FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on,
      jsonPayload.appliance_name as Appliance_Name,
      jsonPayload.pool_name as Pool_Name,
      jsonPayload.storage_type as Storage_Type,
      jsonPayload.total_capacity_in_gib as Total_Capacity_In_GiB,
      jsonPayload.used_capacity_in_gib as Used_Capacity_In_GiB,
      jsonPayload.utilization_percentage as Utilization_percentage,
      jsonPayload.appliance_id as Appliance_id,
      FROM filtered_data f1
      WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id)
      ORDER BY Appliance_Name;'
    
      mounted_image_query='WITH filtered_data AS (
      SELECT
      insertId,
      ANY_VALUE(jsonPayload) as jsonPayload,
      ANY_VALUE(timestamp) as Report_Timestamp
      FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_mounted_images_*`
      GROUP BY insertId
      )
      SELECT
      FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on,
      jsonPayload.source_resource_name as Source_Resource_Name,
      jsonPayload.resource_type as Resource_Type,
      jsonPayload.appliance_name as Appliance_Name,
      jsonPayload.mounted_image_name as Mounted_Image_Name,
      jsonPayload.source_image_name as Source_Image_Name,
      jsonPayload.source_image_type as Source_Image_Type,
      jsonPayload.recovery_point_date as Recovery_Point_Date,
      jsonPayload.last_mount_date as Last_Mount_Date,
      jsonPayload.source_host_name as Source_Host_Name,
      jsonPayload.mounted_host_name as Mounted_Host_Name,
      jsonPayload.mounted_resource_name as Mounted_Resource_Name,
      jsonPayload.resource_virtual_size_in_gib as Resource_Virtual_Size_In_Gib,
      jsonPayload.storage_consumed_in_gib as Storage_Consumed_In_Gib,
      jsonPayload.mounted_resource_label as Mounted_Resource_Label,
      jsonPayload.restorable_object as Restorable_Object,
      jsonPayload.mounted_image_age_in_days as Mounted_Image_Age_In_Days,
      jsonPayload.user_name as User_Name,
      jsonPayload.read_mode as Read_Mode,
      jsonPayload.resource_size_in_gib as Resource_Size_In_Gib,
      jsonPayload.source_image_expiration_date as Source_Image_Expiration_Date,
      FROM filtered_data f1
      WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id)
      ORDER BY Mounted_Resource_Name;'
    
      unprotected_resource_query='WITH filtered_data AS (
      SELECT
      insertId,
      ANY_VALUE(jsonPayload) as jsonPayload,
      ANY_VALUE(timestamp) as Report_Timestamp
      FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_unprotected_resource_*`
      GROUP BY insertId
      )
      SELECT
      FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on,
      jsonPayload.resource_name as Resource_Name,
      jsonPayload.resource_type as Resource_Type,
      jsonPayload.host_name as Host_Name,
      jsonPayload.instance_name as Instance_Name,
      jsonPayload.discovered_on as Discovered_On,
      jsonPayload.discovered_by as Discovery_Appliance,
      FROM filtered_data f1
      WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id)
      ORDER BY Resource_Name;'
    
      connector_version_query='WITH filtered_data AS (
      SELECT
      insertId,
      ANY_VALUE(jsonPayload) as jsonPayload,
      ANY_VALUE(timestamp) as Report_Timestamp
      FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_connector_version_*`
      GROUP BY insertId
      )
      SELECT
      FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on,
      jsonPayload.appliance_name as Appliance_Name,
      jsonPayload.host_name as Host_Name,
      jsonPayload.host_os_type as Host_OS_Type,
      jsonPayload.host_ip_address as Host_IP_Address,
      jsonPayload.db_authentication as DB_Authentication,
      jsonPayload.installed_version as Installed_Version,
      jsonPayload.available_version as Available_Version,
      jsonPayload.version_check as Version_Check,
      jsonPayload.disk_preference as Disk_Preference,
      jsonPayload.transport as Transport,
      FROM filtered_data f1
      WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id)
      ORDER BY Host_Name;'
    
      table_not_exists_error_flag=false
      check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_backup_recovery_jobs%'" | tail -n +2 | tr -d ' ')
      if [ $check_table == 'true' ]; then
      result=$(bq update --use_legacy_sql=false --expiration 0 --view "$backup_job_summary_query" DATASET_NAME.'Backup Jobs Summary')
    
      if [ $? -ne 0 ]; then
      bq mk --use_legacy_sql=false --expiration 0 --view "$backup_job_summary_query" DATASET_NAME.'Backup Jobs Summary'
      else
      echo $result
      fi
    
      result=$(bq update --use_legacy_sql=false --expiration 0 --view "$recovery_job_summary_query" DATASET_NAME.'Recovery Jobs Summary')
    
      if [ $? -ne 0 ]; then
      bq mk --use_legacy_sql=false --expiration 0 --view "$recovery_job_summary_query" DATASET_NAME.'Recovery Jobs Summary'
      else
      echo $result
      fi
    
      result=$(bq update --use_legacy_sql=false --expiration 0 --view "$failed_job_query" DATASET_NAME.'Failed Jobs')
    
      if [ $? -ne 0 ]; then
      bq mk --use_legacy_sql=false --expiration 0 --view "$failed_job_query" DATASET_NAME.'Failed Jobs'
      else
      echo $result
      fi
    
      result=$(bq update --use_legacy_sql=false --expiration 0 --view "$job_details_query" DATASET_NAME.'Job Details')
      if [ $? -ne 0 ]; then
      bq mk --use_legacy_sql=false --expiration 0 --view "$job_details_query" DATASET_NAME.'Job Details'
      else
      echo $result
      fi
    
      result=$(bq update --use_legacy_sql=false --expiration 0 --view "$unresolved_failures_query" DATASET_NAME.'Unresolved Failures')
      if [ $? -ne 0 ]; then
      bq mk --use_legacy_sql=false --expiration 0 --view "$unresolved_failures_query" DATASET_NAME.'Unresolved Failures'
      else
      echo $result
      fi
    
      else
      echo "The table backupdr_googleapis_com_gcb_backup_recovery_jobs does not exist in the dataset DATASET_NAME."
      table_not_exists_error_flag=true
      fi
    
      check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_daily_schedule_compliance%'" | tail -n +2 | tr -d ' ')
      if [ $check_table == 'true' ]; then
      result=$(bq update --use_legacy_sql=false --expiration 0 --view "$daily_schedule_compliance_query" DATASET_NAME.'Daily Schedule Compliance')
      if [ $? -ne 0 ]; then
      bq mk --use_legacy_sql=false --expiration 0 --view "$daily_schedule_compliance_query" DATASET_NAME.'Daily Schedule Compliance'
      else
      echo $result
      fi
      else
      echo "The table backupdr_googleapis_com_gcb_daily_schedule_compliance does not exist in the dataset DATASET_NAME."
      table_not_exists_error_flag=true
      fi
    
      check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_protected_resource%'" | tail -n +2 | tr -d ' ')
      if [ $check_table == 'true' ]; then
      result=$(bq update --use_legacy_sql=false --expiration 0 --view "$protected_data_resource_query" DATASET_NAME.'Protected Data Resource')
      if [ $? -ne 0 ]; then
      bq mk --use_legacy_sql=false --expiration 0 --view "$protected_data_resource_query" DATASET_NAME.'Protected Data Resource'
      else
      echo $result
      fi
      else
      echo "The table backupdr_googleapis_com_gcb_protected_resource does not exist in the dataset DATASET_NAME."
      table_not_exists_error_flag=true
      fi
    
      check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_backup_storage_utilization%'" | tail -n +2 | tr -d ' ')
      if [ $check_table == 'true' ]; then
      result=$(bq update --use_legacy_sql=false --expiration 0 --view "$storage_utilization_query" DATASET_NAME.'Storage Resource Utilization')
      if [ $? -ne 0 ]; then
      bq mk --use_legacy_sql=false --expiration 0 --view "$storage_utilization_query" DATASET_NAME.'Storage Resource Utilization'
      else
      echo $result
      fi
      else
      echo "The table backupdr_googleapis_com_gcb_backup_storage_utilization does not exist in the dataset DATASET_NAME."
      table_not_exists_error_flag=true
      fi

      check_table=$(bq query --format=csv --use_legacy_sql=false "SELECT COUNT(*)>0 FROM DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name LIKE '%backupdr_googleapis_com_gcb_unprotected_resource%'" | tail -n +2 | tr -d ' ')
      if [ "$check_table" == 'true' ]; then
        result=$(bq update --use_legacy_sql=false --expiration 0 --view "$unprotected_resource_query" DATASET_NAME.'Unprotected Resource')
        if [ $? -ne 0 ]; then
          bq mk --use_legacy_sql=false --expiration 0 --view "$unprotected_resource_query" DATASET_NAME.'Unprotected Resource'
        else
          echo "$result"
        fi
      else
        echo "The table backupdr_googleapis_com_gcb_unprotected_resource does not exist in the dataset DATASET_NAME."
        table_not_exists_error_flag=true
      fi
    
      check_table=$(bq query --format=csv --use_legacy_sql=false "SELECT COUNT(*)>0 FROM DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name LIKE '%backupdr_googleapis_com_gcb_mounted_images%'" | tail -n +2 | tr -d ' ')
      if [ "$check_table" == 'true' ]; then
        result=$(bq update --use_legacy_sql=false --expiration 0 --view "$mounted_image_query" DATASET_NAME.'Mounted Image Details')
        if [ $? -ne 0 ]; then
          bq mk --use_legacy_sql=false --expiration 0 --view "$mounted_image_query" DATASET_NAME.'Mounted Image Details'
        else
          echo "$result"
        fi
      else
        echo "The table backupdr_googleapis_com_gcb_mounted_images does not exist in the dataset DATASET_NAME."
        table_not_exists_error_flag=true
      fi
    
      check_table=$(bq query --format=csv --use_legacy_sql=false "SELECT COUNT(*)>0 FROM DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name LIKE '%backupdr_googleapis_com_gcb_connector_version%'" | tail -n +2 | tr -d ' ')
      if [ "$check_table" == 'true' ]; then
        result=$(bq update --use_legacy_sql=false --expiration 0 --view "$connector_version_query" DATASET_NAME.'Connector Version Details')
        if [ $? -ne 0 ]; then
          bq mk --use_legacy_sql=false --expiration 0 --view "$connector_version_query" DATASET_NAME.'Connector Version Details'
        else
          echo "$result"
        fi
      else
        echo "The table backupdr_googleapis_com_gcb_connector_version does not exist in the dataset DATASET_NAME."
        table_not_exists_error_flag=true
      fi
    
      if [ "$table_not_exists_error_flag" == true ]; then
        echo -e "\e[1m\e[33mNot all of the prebuilt reports could be created in BigQuery, because one or more report logs are missing in the dataset DATASET_NAME."
        echo -e "Make sure that you have waited at least 8 hours after creating the sink before running the script to create the prebuilt reports. Re-run the script after some time to fix the issue."
        echo -e "Reach out to Google Cloud Support if you are still facing this issue.\e[0m"
      fi
    
      # Check if the Storage Resource Utilization view exists
      check_view=$(bq --project_id PROJECT_ID query --format=csv --use_legacy_sql=false "SELECT COUNT(*)>0 FROM DATASET_NAME.INFORMATION_SCHEMA.VIEWS WHERE table_name='Storage Resource Utilization'" | tail -n +2 | tr -d ' ')
      if [ "$check_view" == 'true' ]; then
        # Check if the view is working correctly
        fetch_result=$(bq query --format=csv --use_legacy_sql=false "SELECT COUNT(*) FROM \`DATASET_NAME.Storage Resource Utilization\`" 2>&1)
        if [[ $? -eq 0 ]]; then
          echo -e "\e[1m\e[33mTo fix a schema-related error in the Backup Storage Utilization view, we updated the software on June 3rd, 2024. As part of the fix, storage utilization data from before June 3rd, 2024 will no longer be available in the report.\e[0m"
          echo -e "Deleting the data from before June 3rd, 2024..."
          start_date="20240301"
          end_date="20240603"
          current_date="$start_date"
          # Delete the daily storage utilization tables, one day at a time
          while [ "$current_date" -le "$end_date" ]; do
            table_name="DATASET_NAME.backupdr_googleapis_com_gcb_backup_storage_utilization_${current_date}"
            bq rm -f -t "$table_name"
            current_date=$(date -d "$(date -d "$current_date" +%Y-%m-%d) + 1 day" +%Y%m%d)
          done
          echo -e "Completed successfully."
        fi
      fi
    
    

    Replace the following:

    • PROJECT_ID: the name of your project.
    • DATASET_NAME: the name of the BigQuery dataset. We recommend that you use the default dataset name BackupandDR_reports for the Backup and DR Service reports dataset. If you set a name other than the default dataset name when you created the sink, change the dataset name so that it matches your BigQuery dataset.
  4. Save the file with a name that uses the Bash (.sh) file extension, for example, backupdrreports.sh.

  5. Run the bash command with the file that you just created, for example, bash backupdrreports.sh.

    You can see the prebuilt reports under the dataset in BigQuery Studio.
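    One way to fill in both placeholders is with `sed` before running the script. The following is a minimal sketch, not part of the official procedure: `my-project` is an example project ID, and a one-line stand-in file is generated here only so that the example is self-contained; in practice you would run `sed` against the `backupdrreports.sh` file you saved.

    ```shell
    # Create a one-line stand-in for the real script, for illustration only.
    printf 'bq ls --project_id PROJECT_ID DATASET_NAME\n' > backupdrreports.sh

    # Substitute both placeholders in place; use your own project ID and
    # dataset name instead of the example values below.
    sed -i \
      -e 's/PROJECT_ID/my-project/g' \
      -e 's/DATASET_NAME/BackupandDR_reports/g' \
      backupdrreports.sh

    cat backupdrreports.sh   # bq ls --project_id my-project BackupandDR_reports
    ```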
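    The existence checks in the script rely on a small parsing idiom: `bq query --format=csv` prints a CSV header row followed by the result, so the script drops the header with `tail -n +2` and strips whitespace with `tr -d ' '`. The following sketch shows just that parsing step, with `printf` standing in for the actual `bq query` call (`f0_` is BigQuery's default name for an unnamed expression column):

    ```shell
    # printf emits what `bq query --format=csv "SELECT COUNT(*)>0 ..."`
    # would print: a CSV header line, then the boolean result.
    check_table=$(printf 'f0_\ntrue\n' | tail -n +2 | tr -d ' ')
    echo "$check_table"   # prints "true", the value the script branches on
    ```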

After you set up the prebuilt reports, if you have access to Google Workspace, you can view the results in a connected Google Sheets spreadsheet. If you don't have access to Google Workspace, you can view the reports in Looker Studio, or download the prebuilt reports as CSV files.

Pricing

You might incur costs when you view prebuilt reports in BigQuery. These costs are based on the amount of report data that is streamed, stored, and queried in BigQuery. For more information, see Pricing.

What's next