Set up prebuilt reports in BigQuery

This page explains how to set up and view prebuilt reports in BigQuery. These reports are built from Cloud Logging logs. Setup is a one-time task: create a log sink that routes logging data to BigQuery, and then run the prebuilt script.

Required IAM role

The following IAM roles are required to set up and view prebuilt reports in BigQuery. Learn how to grant an IAM role.

• Logs Configuration Writer (roles/logging.configWriter) or Logging Admin (roles/logging.admin), and BigQuery Data Editor (roles/bigquery.dataEditor): to create a sink and BigQuery dataset from the Google Cloud console.
• Owner (roles/owner): to create a sink and BigQuery dataset from the Google Cloud CLI.
• BigQuery Admin (roles/bigquery.admin): to write custom queries or download query results.
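Roles can also be granted from the command line. The following sketch prints, rather than runs, a grant command for one of the roles above; the project ID and principal are hypothetical placeholders:

```shell
# Hypothetical values: replace with your own project ID and principal.
PROJECT_ID="my-project"
MEMBER="user:alice@example.com"

# The command is printed for review; remove the echo to execute it.
echo gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="$MEMBER" \
  --role="roles/logging.configWriter"
```

Repeat with the other role names from the list above as needed.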

Create a sink and route logs to BigQuery

BigQuery stores only the logs that are generated after the log sink is created; logs generated before that are not visible in BigQuery. You can create the log sink from the Google Cloud console or the Google Cloud CLI.

To create a sink and route logs to BigQuery, do the following:

Console

  1. In the Google Cloud console, go to the Log Router page:

    Go to Log Router page

  2. Select an existing Google Cloud project.
  3. Click Create sink.
  4. In the Sink details panel, complete the following fields:
    1. Sink name: enter BackupandDR_reports_sink. You must use this exact sink name so that the Backup and DR sink can be identified among other sinks.
    2. Sink description: describe the purpose or use case for the sink.
  5. In the Sink destination panel, do the following:

    1. In the Select sink service menu, select the BigQuery table sink service.
    2. In the Select BigQuery dataset menu, select Create new BigQuery dataset.
    3. On the Create dataset page, do the following:
      1. For Dataset ID, enter BackupandDR_reports. Don't change this name; it distinguishes the Backup and DR dataset from other datasets.
      2. For Location type, choose a geographic location for the dataset. After a dataset is created, the location can't be changed.
      3. Optional: If you want tables in this dataset to expire, select Enable table expiration, then specify the Default maximum table age in days.
      4. Click Create dataset.
  6. In the Choose logs to include in sink panel, do the following:

    1. In the Build inclusion filter field, enter the following filter expression that matches the log entries you want to include.

      `logName=~"projects/PROJECT_ID/logs/backupdr.googleapis.com%2Fgcb_*"`
      
    2. To verify you entered the correct filter, select Preview logs. This opens the Logs Explorer in a new tab with the filter prepopulated.

  7. Optional: In the Choose logs to filter out of sink panel, do the following:

    1. In the Exclusion filter name field, enter a name.
    2. In the Build an exclusion filter field, enter a filter expression that matches the log entries you want to exclude. You can also use the sample function to select a portion of the log entries to exclude.

  8. Select Create sink.

    You can see the dataset in BigQuery Studio.
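In the inclusion filter, the `=~` comparator performs a regular-expression match against each entry's log name, so the sink selects only Backup and DR log streams. As a rough local illustration of which names the filter selects, a plain substring check against hypothetical sample log names:

```shell
# The distinguishing fragment of the filter (the project ID portion is a placeholder).
prefix='logs/backupdr.googleapis.com%2Fgcb_'

# Hypothetical sample log names, for illustration only.
included='projects/my-project/logs/backupdr.googleapis.com%2Fgcb_backup_recovery_jobs'
excluded='projects/my-project/logs/cloudaudit.googleapis.com%2Factivity'

[[ $included == *"$prefix"* ]] && echo "included"
[[ $excluded == *"$prefix"* ]] || echo "excluded"
```

Entries whose log name contains that fragment are routed to the dataset; everything else, such as audit logs, is left out.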

gcloud

  1. In the Google Cloud console, activate Cloud Shell, and then click Open Editor.
  2. Select File, and then select New Text File.
  3. Copy and paste the following script into the new file.

      #!/bin/bash
      echo "This script will set up a log sink for BackupDR reports to be available in BigQuery"
    
      result=$(gcloud config set project PROJECT_ID)
      if [ $? -eq 0 ]; then
        current_project_id=$(gcloud config get-value project)
        if [ "$current_project_id" = "PROJECT_ID" ]; then
            echo "Successfully set the project to PROJECT_ID"
        else
            echo "The project is not set to PROJECT_ID. Exiting"
            exit 0
        fi
      else
        echo "Error setting the project"
        exit 1
      fi
    
      echo "For logs to be available in BigQuery, we need to enable BigQuery service in the project if not done already. This might mean additional costs incurred. Please check the associated costs before proceeding."
      read -p "Do you want to continue (Y/N)? " continue
      if [ "$continue" = "y" ] || [ "$continue" = "Y" ]; then
        result=$(gcloud services enable bigquery.googleapis.com --project PROJECT_ID)
        if [ $? -eq 0 ]; then
            echo "Successfully enabled BigQuery api for PROJECT_ID"
        else
            echo "Error in setting up the BigQuery api for the project. $result"
            exit 1
        fi
      else
        exit 0
      fi
    
      echo "Creating a BigQuery Dataset..."
      result=$(bq mk BackupandDR_reports)
      if [ $? -eq 0 ]; then
        echo "Created a BigQuery dataset BackupandDR_reports successfully"
      else
        echo ""
        echo "ERROR : Failed to create a BigQuery dataset."
        echo $result
        exit 1
      fi
    
      log_filter="projects/PROJECT_ID/logs/backupdr.googleapis.com%2Fgcb_*"
      result=$(gcloud logging sinks create BackupandDR_reports_sink bigquery.googleapis.com/projects/PROJECT_ID/datasets/BackupandDR_reports --log-filter="logName=~\"$log_filter\"")
      if [ $? -eq 0 ]; then
        echo "Created a logsink BackupandDR_reports_sink successfully. You will now be able to see the logs in BigQuery."
      else
        echo ""
        echo "ERROR : Failed to create logsink."
        echo "Performing cleanup and exiting."
        bq rm -r -f -d PROJECT_ID:BackupandDR_reports
        exit 1
      fi
    
      result=$(gcloud projects add-iam-policy-binding $(gcloud projects describe PROJECT_ID --format="value(projectNumber)") --member=serviceAccount:service-$(gcloud projects describe PROJECT_ID --format="value(projectNumber)")@gcp-sa-logging.iam.gserviceaccount.com --role=roles/bigquery.dataEditor --condition=None)
      if [ $? -eq 0 ]; then
        echo "Added permission for cloud logging to write to BigQuery datasets"
      else
        echo ""
        echo "ERROR : Failed to add permissions for cloud logging to write to BigQuery datasets. Please make sure that you have Owner access rights in order to be able to proceed."
        echo "Performing cleanup and exiting."
        bq rm -r -f -d PROJECT_ID:BackupandDR_reports
        gcloud logging sinks delete BackupandDR_reports_sink
        exit 1
      fi
      exit 0
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
  4. Save the file with a .sh extension, for example, script.sh.

  5. Run the script with bash. For example, bash script.sh.

    You can see the created dataset in BigQuery Studio.
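The script uses the literal placeholder PROJECT_ID throughout. Rather than editing each occurrence by hand, you can substitute your project ID in one pass with sed before running the script; the one-line stand-in file and the my-project ID below are illustrative placeholders:

```shell
# Create a one-line stand-in for the script (illustration only).
printf 'gcloud config set project PROJECT_ID\n' > script.sh

# Substitute every occurrence of the placeholder in place (GNU sed, as in Cloud Shell).
sed -i 's/PROJECT_ID/my-project/g' script.sh

cat script.sh   # now reads: gcloud config set project my-project
```

The same sed command applied to the full script replaces all of its PROJECT_ID occurrences at once.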

Set up prebuilt reports

You can set up prebuilt reports by running the following script, which creates views in the dataset that receives logs through the log sink.

The script adds the following prebuilt reports:

• Backup Jobs Summary
• Recovery Jobs Summary
• Failed Jobs
• Job Details
• Unresolved Failures
• Daily Schedule Compliance
• Protected Data Resource

To set up prebuilt reports in BigQuery, do the following:

gcloud

  1. In the Google Cloud console, activate Cloud Shell, and then click Open Editor.
  2. Create a new text file.
  3. Copy and paste the following prebuilt reports script into the file.

      backup_job_summary_query='WITH filtered_data AS (
      SELECT 
      insertId, 
      ANY_VALUE(jsonPayload) as jsonPayload,
      FROM `PROJECT_ID.BackupandDR_reports.backupdr_googleapis_com_gcb_backup_recovery_jobs_*`
      WHERE DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) >= DATE_SUB(CURRENT_DATE(), INTERVAL 6 DAY) AND
      DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) < CURRENT_DATE() + 1
      GROUP BY insertId
      )
      SELECT
      DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) as Date,
      jsonPayload.resource_name as Resource_Name,
      jsonPayload.resource_type as Resource_Type,
      jsonPayload.job_type as Job_Type,
      COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END)/COUNT(*) * 100 as Success_Percent,
      COUNT(*) as Total_Jobs,
      COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END) as Succeeded,
      COUNT(CASE WHEN jsonPayload.job_status = "failed" THEN 1 END) as Failed,
      COUNT(CASE WHEN jsonPayload.job_status = "canceled" THEN 1 END) as Cancelled,
      COUNT(CASE WHEN jsonPayload.job_status = "notrun" THEN 1 END) as Not_Run,
      FROM filtered_data
      WHERE jsonPayload.job_category = "Backup Job" AND
      jsonPayload.job_status != "retry"
      GROUP BY Date, jsonPayload.job_type, jsonPayload.resource_id, jsonPayload.resource_name, jsonPayload.resource_type
      ORDER BY  Date, Resource_Name;'
    
      recovery_job_summary_query='WITH filtered_data AS (
      SELECT 
      insertId, 
      ANY_VALUE(jsonPayload) as jsonPayload,
      FROM `PROJECT_ID.BackupandDR_reports.backupdr_googleapis_com_gcb_backup_recovery_jobs_*`
      WHERE DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) >= DATE_SUB(CURRENT_DATE(), INTERVAL 6 DAY) AND
      DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) < CURRENT_DATE() + 1
      GROUP BY insertId
      )
      SELECT
      DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) as Date,
      jsonPayload.resource_name as Resource_Name,
      jsonPayload.resource_type as Resource_Type,
      jsonPayload.job_type as Job_Type,
      COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END)/COUNT(*) * 100 as Success_Percent,
      COUNT(*) as Total_Jobs,
      COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END) as Succeeded,
      COUNT(CASE WHEN jsonPayload.job_status = "failed" THEN 1 END) as Failed,
      COUNT(CASE WHEN jsonPayload.job_status = "canceled" THEN 1 END) as Cancelled,
      COUNT(CASE WHEN jsonPayload.job_status = "notrun" THEN 1 END) as Not_Run,
      FROM filtered_data
      WHERE jsonPayload.job_category = "Recovery Job" AND
      jsonPayload.job_status != "retry"
      GROUP BY Date, jsonPayload.job_type, jsonPayload.resource_id, jsonPayload.resource_name, jsonPayload.resource_type
      ORDER BY Date, Resource_Name;'
    
      failed_job_query='WITH filtered_data AS (
      SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload
      FROM `PROJECT_ID.BackupandDR_reports.backupdr_googleapis_com_gcb_backup_recovery_jobs_*`
      GROUP BY insertId
      )
      SELECT
      jsonPayload.job_name as Job_Name,
      jsonPayload.job_category as Job_Category,
      jsonPayload.job_type as Job_Type,
      jsonPayload.resource_name as Resource_Name,
      jsonPayload.resource_type as Resource_Type,
      jsonPayload.error_code as Error_Code,
      jsonPayload.error_message as Error_Message,
      jsonPayload.hostname as Host_Name,
      jsonPayload.backup_rule_policy_name as Backup_Rule_Policy_Name,
      jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template,
      jsonPayload.appliance_name as Appliance_Name,
      FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(jsonPayload.job_start_time)) as Job_Start_Time,
      DATE(jsonPayload.job_start_time) as Job_Date,
      jsonPayload.backup_type as Backup_Type
      FROM filtered_data
      WHERE
      jsonPayload.job_status = "failed"
      ORDER BY Job_Start_Time DESC, Resource_Name'
    
      job_details_query='WITH filtered_data AS (
      SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload
      FROM `PROJECT_ID.BackupandDR_reports.backupdr_googleapis_com_gcb_backup_recovery_jobs_*`
      GROUP BY insertId
      )
      SELECT
      jsonPayload.job_name as Job_Name,
      jsonPayload.job_category as Job_Category,
      jsonPayload.job_type as Job_Type,
      jsonPayload.log_backup as Log_Backup,
      jsonPayload.job_status as Job_Status,
      DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) as Job_Date,
      jsonPayload.resource_name as Resource_Name,
      jsonPayload.resource_type as Resource_Type,
      jsonPayload.error_code as Error_Code,
      jsonPayload.error_message as Error_Message,
      jsonPayload.job_initiation_failure_reason as Job_Initiation_Failure_Reason,
      jsonPayload.appliance_name as Appliance_Name,
      jsonPayload.hostname as Host_Name,
      jsonPayload.target_appliance_name as Target_Appliance_Name,
      jsonPayload.target_pool_name as Target_Pool_Name,
      jsonPayload.target_host_name as Target_Host_Name,
      jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template,
      jsonPayload.backup_rule_policy_name as Backup_Rule_Policy_Name,
      FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.job_queued_time !="" THEN jsonPayload.job_queued_time ELSE NULL END))  as Job_Queue_Time,
      FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.job_start_time!="" THEN jsonPayload.job_start_time ELSE NULL END))  as Job_Start_Time,
      FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.job_end_time!="" THEN jsonPayload.job_end_time ELSE NULL END))  as Job_End_Time,
      jsonPayload.job_duration_in_hours as Job_Duration_In_Hours,
      jsonPayload.backup_consistency as Backup_Consistency,
      jsonPayload.resource_data_size_in_gib as Resource_Data_Size_In_GiB,
      jsonPayload.snapshot_disk_size_in_gib as Snapshot_Disk_Size_in_GiB,
      jsonPayload.pre_compress_in_gib as Pre_Compress_In_GiB,
      jsonPayload.compression_ratio as Compression_Ratio,
      jsonPayload.data_sent_in_gib as Streamsnap_Data_Sent_In_GiB,
      jsonPayload.data_written_in_gib as Streamsnap_Data_Written_In_GiB,
      jsonPayload.data_copied_in_gib as Data_Copied_In_GiB,
      jsonPayload.data_change_rate as Data_Change_Rate_Percent,
      jsonPayload.backup_type as Backup_Type,
      FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.recovery_point!="" THEN jsonPayload.recovery_point ELSE NULL END))  as Recovery_Point,
      jsonPayload.onvault_pool_storage_consumed_in_gib as OnVault_Pool_Storage_Consumed_In_GiB,
      FROM filtered_data
      ORDER BY Job_Start_Time DESC, Resource_Name'
    
      unresolved_failures_query='WITH job_data AS (
      SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload
      FROM `PROJECT_ID.BackupandDR_reports.backupdr_googleapis_com_gcb_backup_recovery_jobs_*`
      WHERE
      jsonPayload.job_status in ("succeeded", "failed")
      GROUP BY insertId
      ),
      -- Select resources which have their latest status as failed
      unresolved_failed_resources AS (
      SELECT
      jsonPayload.appliance_name as Appliance_Name,
      jsonPayload.resource_name as Resource_Name,
      jsonPayload.resource_type as Resource_Type,
      jsonPayload.job_type as Job_Type,
      jsonPayload.job_status as Job_Status,
      jsonPayload.job_start_time as Job_Start_Time,
      jsonPayload.hostname as Hostname,
      jsonPayload.error_code as Error_Code,
      jsonPayload.error_message as Error_Message
      FROM job_data j1
      WHERE jsonPayload.job_start_time = (SELECT MAX(jsonPayload.job_start_time) FROM job_data j2 WHERE j1.jsonPayload.appliance_name = j2.jsonPayload.appliance_name AND j1.jsonPayload.resource_name = j2.jsonPayload.resource_name AND j1.jsonPayload.resource_type = j2.jsonPayload.resource_type AND j1.jsonPayload.job_type = j2.jsonPayload.job_type) AND
      jsonPayload.job_status = "failed"
      ),
      -- Select all jobs for all the resources that have unresolved failures
      all_jobs_of_unresolved_failure_resources AS (
      SELECT
      jsonPayload.appliance_name as Appliance_Name,
      jsonPayload.resource_name as Resource_Name,
      jsonPayload.resource_type as Resource_Type,
      jsonPayload.job_type as Job_Type,
      jsonPayload.job_category as Job_Category,
      jsonPayload.job_name as Job_Name,
      jsonPayload.recovery_point as Recovery_Point,
      jsonPayload.job_status as Job_Status,
      jsonPayload.job_start_time as Job_Start_Time,
      jsonPayload.hostname as Hostname,
      jsonPayload.error_code as Error_Code
      FROM job_data j1
      JOIN unresolved_failed_resources fr
      ON
      j1.jsonPayload.appliance_name = fr.Appliance_Name AND
      j1.jsonPayload.resource_name = fr.Resource_Name AND
      j1.jsonPayload.resource_type = fr.Resource_Type AND
      j1.jsonPayload.job_type = fr.Job_Type
      ),
      -- Select the latest successful jobs for the resources with unresolved failures
      latest_success AS (
      SELECT *
      FROM all_jobs_of_unresolved_failure_resources all_jobs
      WHERE
      all_jobs.Job_Status = "succeeded" AND
      all_jobs.Job_Start_Time = (
        SELECT MAX(Job_Start_Time) FROM
        all_jobs_of_unresolved_failure_resources all_jobs_2
        WHERE all_jobs_2.Appliance_Name = all_jobs.Appliance_Name AND
        all_jobs_2.Resource_Name = all_jobs.Resource_Name AND
        all_jobs_2.Resource_Type = all_jobs.Resource_Type AND
        all_jobs_2.Job_Type = all_jobs.Job_Type AND
        all_jobs_2.job_status = "succeeded"
        )
      ),
      -- Select all failed jobs after the last success for the resources with unresolved failures
      failed_jobs_to_report AS (
      SELECT
      all_jobs.Appliance_Name as Appliance_Name,
      all_jobs.Resource_Name as Resource_Name,
      all_jobs.Resource_Type as Resource_Type,
      all_jobs.Job_Type as Job_Type,
      all_jobs.Job_Name as Job_Name,
      all_jobs.Recovery_Point as Recovery_Point,
      all_jobs.Job_Status as Job_Status,
      all_jobs.Job_Start_Time as Job_Start_Time,
      all_jobs.Hostname as Hostname,
      all_jobs.Error_Code as Error_Code,
      all_jobs.Job_Category as Job_Category,
      FROM all_jobs_of_unresolved_failure_resources all_jobs
      LEFT JOIN latest_success success
      ON success.Appliance_Name = all_jobs.Appliance_Name AND
        success.Resource_Name = all_jobs.Resource_Name AND
        success.Resource_Type = all_jobs.Resource_Type AND
        success.Job_Type = all_jobs.Job_Type
      WHERE all_jobs.Job_Status = "failed" AND
      TIMESTAMP(all_jobs.Job_Start_Time) > COALESCE(TIMESTAMP(success.Job_Start_Time), TIMESTAMP(DATE_SUB(CURRENT_DATE(), INTERVAL 3 DAY)))
      )
      SELECT
      failed_jobs.Resource_Name,
      failed_jobs.Resource_Type,
      failed_jobs.Job_Type,
      failed_jobs.Job_Category,
      COUNT(*) as Error_Count,
      failed_jobs.Error_Code,
      ANY_VALUE(ufr.Error_Message) as Error_Message,
      FORMAT_TIMESTAMP("%F %R %Z", MIN(TIMESTAMP(failed_jobs.Job_Start_Time))) as First_Failure,
      FORMAT_TIMESTAMP("%F %R %Z", MAX(TIMESTAMP(failed_jobs.Job_Start_Time))) as Last_Failure,
      TIMESTAMP_DIFF(CURRENT_TIMESTAMP,TIMESTAMP(MAX(failed_jobs.Job_Start_Time)), HOUR) as Hours_Since_Last_Failure,
      failed_jobs.Appliance_Name
      FROM failed_jobs_to_report failed_jobs
      LEFT JOIN unresolved_failed_resources ufr
      ON failed_jobs.Appliance_Name = ufr.Appliance_Name AND
        failed_jobs.Resource_Name = ufr.Resource_Name AND
        failed_jobs.Resource_Type = ufr.Resource_Type AND
        failed_jobs.Job_Type = ufr.Job_Type AND
        failed_jobs.Error_Code = ufr.Error_Code
      GROUP BY Appliance_Name, Resource_Name, Resource_Type, Job_Type, Job_Category, Error_Code
      ORDER BY Last_Failure DESC
      '
      daily_schedule_compliance_query='WITH filtered_data AS (
      SELECT
      insertId,
      ANY_VALUE(jsonPayload) as jsonPayload,
      ANY_VALUE(timestamp) as Report_Timestamp
      FROM `PROJECT_ID.BackupandDR_reports.backupdr_googleapis_com_gcb_daily_schedule_compliance_*`
      WHERE DATE(jsonPayload.date) >= DATE_SUB(CURRENT_DATE(), INTERVAL 8 DAY) AND
      DATE(jsonPayload.date) < CURRENT_DATE() + 1
      GROUP BY insertId
      )
      SELECT
      jsonPayload.resource_name as Resource_Name,
      jsonPayload.resource_type as Resource_Type,
      jsonPayload.backup_rule_policy_name as Backup_Rule_Policy_Name,
      jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template,
      DATE(jsonPayload.date) as Date,
      jsonPayload.backup_window_start_time as Backup_Window_Start_Time,
      jsonPayload.job_type as Job_Type,
      jsonPayload.status as Status,
      jsonPayload.comment as Comment,
      jsonPayload.appliance_name as Appliance_Name,
      jsonPayload.host_name as Host_Name
      FROM filtered_data f1
      WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id)
      ORDER BY Resource_Name, Date;
      '
      protected_data_resource_query='WITH filtered_data AS (
      SELECT
      insertId,
      ANY_VALUE(jsonPayload) as jsonPayload,
      ANY_VALUE(timestamp) as Report_Timestamp
      FROM `PROJECT_ID.BackupandDR_reports.backupdr_googleapis_com_gcb_protected_resource_*`
      GROUP BY insertId
      )
      SELECT
      FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on,
      jsonPayload.resource_name as Resource_Name,
      jsonPayload.resource_type as Resource_Type,
      jsonPayload.host_name as Host_Name,
      jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template,
      jsonPayload.protected_on as Protected_On,
      jsonPayload.recovery_point as Recovery_Point,
      jsonPayload.protected_data_in_gib as Protected_Data_in_Gib,
      jsonPayload.onvault_in_gib as OnVault_Data_in_Gib,
      jsonPayload.backup_plan_restrictions as Backup_Plan_Restrictions,
      jsonPayload.backup_inclusion_or_exclusion as Backup_Inclusion_or_Exclusion,
      jsonPayload.policy_overrides as Policy_Overrides,
      jsonPayload.appliance_name as Appliance_Name,
      jsonPayload.remote_appliance as Remote_Appliance,
      jsonPayload.source_appliance as Source_Appliance,
      FROM filtered_data f1
      WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id)
      ORDER BY Resource_Name;'
    
      result=$(bq update --use_legacy_sql=false --expiration 0 --view "$backup_job_summary_query" BackupandDR_reports.'Backup Jobs Summary')
    
      if [ $? -ne 0 ]; then
      bq mk --use_legacy_sql=false --expiration 0 --view "$backup_job_summary_query" BackupandDR_reports.'Backup Jobs Summary'
      else
      echo $result
      fi
    
      result=$(bq update --use_legacy_sql=false --expiration 0 --view "$recovery_job_summary_query" BackupandDR_reports.'Recovery Jobs Summary')
    
      if [ $? -ne 0 ]; then
      bq mk --use_legacy_sql=false --expiration 0 --view "$recovery_job_summary_query" BackupandDR_reports.'Recovery Jobs Summary'
      else
      echo $result
      fi
    
      result=$(bq update --use_legacy_sql=false --expiration 0 --view "$failed_job_query" BackupandDR_reports.'Failed Jobs')
    
      if [ $? -ne 0 ]; then
      bq mk --use_legacy_sql=false --expiration 0 --view "$failed_job_query" BackupandDR_reports.'Failed Jobs'
      else
      echo $result
      fi
    
      result=$(bq update --use_legacy_sql=false --expiration 0 --view "$job_details_query" BackupandDR_reports.'Job Details')
      if [ $? -ne 0 ]; then
      bq mk --use_legacy_sql=false --expiration 0 --view "$job_details_query" BackupandDR_reports.'Job Details'
      else
      echo $result
      fi
    
      result=$(bq update --use_legacy_sql=false --expiration 0 --view "$unresolved_failures_query" BackupandDR_reports.'Unresolved Failures')
      if [ $? -ne 0 ]; then
      bq mk --use_legacy_sql=false --expiration 0 --view "$unresolved_failures_query" BackupandDR_reports.'Unresolved Failures'
      else
      echo $result
      fi
    
      result=$(bq update --use_legacy_sql=false --expiration 0 --view "$daily_schedule_compliance_query" BackupandDR_reports.'Daily Schedule Compliance')
      if [ $? -ne 0 ]; then
      bq mk --use_legacy_sql=false --expiration 0 --view "$daily_schedule_compliance_query" BackupandDR_reports.'Daily Schedule Compliance'
      else
      echo $result
      fi
    
      result=$(bq update --use_legacy_sql=false --expiration 0 --view "$protected_data_resource_query" BackupandDR_reports.'Protected Data Resource')
      if [ $? -ne 0 ]; then
      bq mk --use_legacy_sql=false --expiration 0 --view "$protected_data_resource_query" BackupandDR_reports.'Protected Data Resource'
      else
      echo $result
      fi
    

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
  4. Save the file with a .sh extension, for example, backupdrreports.sh.

  5. Run the script with bash. For example, bash backupdrreports.sh.

    You can see the prebuilt reports under the dataset in BigQuery Studio.
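Each view in the script is installed with an update-or-create pattern: bq update succeeds only if the view already exists, and on failure bq mk creates it, so the script can be re-run safely after edits. The control flow can be sketched with stub functions standing in for the real bq commands (a simplification, not the actual CLI):

```shell
# Stubs standing in for bq update/mk (hypothetical, for illustration):
# "update" succeeds only if the view already exists; "create" makes it.
update_view() { [ -e view_marker ]; }
create_view() { touch view_marker; echo "created"; }

install_view() {
  if ! update_view; then
    create_view          # first run: the view is absent, so create it
  else
    echo "updated"       # later runs: the update path succeeds
  fi
}

install_view   # first run prints "created"
install_view   # second run prints "updated"
```

Because both paths converge on the same view definition, re-running the report script simply refreshes the views with the latest queries.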

View prebuilt reports

After you set up the prebuilt reports, you can view the results in Google Connected Sheets. For more information, see Using Connected Sheets.

If you don't have access to Google Workspace, you might not be able to view reports in Connected Sheets. In that case, you can download the report as a CSV file. For instructions, see Download prebuilt reports.

To view prebuilt reports in BigQuery, do the following:

  1. In the Google Cloud console, go to the BigQuery Studio page:

    Go to BigQuery Studio page

  2. In the Explorer panel, click the expander arrow to expand a dataset.
  3. Select a prebuilt report. For example, Backup Jobs Summary.
  4. From Export, click Explore with Sheets.

    A new tab opens in Connected Sheets with the report data.

Download prebuilt reports

After you set up the prebuilt reports, you can download them to a local file. For more information, see Downloading and saving query results from the Google Cloud console.

To download prebuilt reports in BigQuery, do the following:

  1. In the Google Cloud console, go to the BigQuery Studio page:

    Go to BigQuery Studio page

  2. In the Explorer panel, click the expander arrow to expand a dataset.
  3. Select a prebuilt report. For example, Backup Jobs Summary.
  4. Click Query and select either In new tab or In split tab.

    A new tab opens with a sample query.

  5. Write a query based on the field names that are populated in the Schema tab. For example, you can use the following query to download the entire report:

    SELECT * FROM `PROJECT_ID.BackupandDR_reports.Backup Jobs Summary`
    
  6. Click Run to display the preview of the query results.

  7. From Save results, select CSV (local file) to download the report to a local file.
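As an alternative to the console, the same report can be fetched from Cloud Shell with the bq CLI and its CSV output redirected to a file. The command below is printed for review rather than executed, and my-project is a placeholder:

```shell
PROJECT_ID="my-project"   # assumption: replace with your project ID

# View names that contain spaces must stay backtick-quoted inside the query.
QUERY="SELECT * FROM \`${PROJECT_ID}.BackupandDR_reports.Backup Jobs Summary\`"

# Printed for review; remove the echo and redirect to a file to save the CSV.
echo bq query --use_legacy_sql=false --format=csv "$QUERY"
```

With the echo removed, appending `> backup_jobs_summary.csv` writes the result to a local file.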

Write custom queries in BigQuery

In the Google Cloud console, you can write a custom query based on a table name or a prebuilt report, and save it as a query or as a view.

To write custom queries in BigQuery, do the following:

  1. In the Google Cloud console, go to the BigQuery Studio page:

    Go to BigQuery Studio page

  2. In the Explorer panel, expand your project and select a dataset.
  3. Click the dataset name to expand it.

    The tables and views in the dataset appear.

  4. Click the table name or a prebuilt report.

    The Details tab displays the table's description and the Schema tab displays the table's schema definition.

  5. Click Query and select either In new tab or In split tab.

    A new tab opens with a sample query.

  6. Write a query based on the field names that are populated in the Schema tab.

  7. Click Run. This displays the preview of the query results.

  8. To save a query, do the following:

    1. Click Save query.
    2. In the Save query dialog, type a name for the saved query.
  9. To save the query as a view, do the following:

    1. Click Save view.
    2. In the Save view dialog, do the following:
      1. In the Project drop-down, click Browse and select a project.
      2. In the Dataset drop-down, select a dataset to store the view. The dataset that contains your view and the dataset that contains the tables referenced by the view must be in the same location.
      3. In the Table name field, enter a name for the view.
      4. Click Save.

You can download and save queries to the selected location. For more information, see Downloading and saving query results from the Google Cloud console.

Pricing

You may be charged for viewing prebuilt reports in BigQuery. These charges are based on the volume of reporting data that is streamed, stored, and queried in BigQuery. For details, see Pricing.