Configure prebuilt reports in BigQuery

This page explains how to set up and view prebuilt reports in BigQuery for resources protected with backup plans.

To set up the reports, you create a log sink to route log data to BigQuery, and then run the prebuilt script. If you want to set up and view pregenerated reports in BigQuery for resources protected with a backup template in the management console, see Configure pregenerated reports in BigQuery.

Required IAM role

You need the following IAM permissions to view the pregenerated BigQuery reports for protected backups. See how to grant an IAM role.

| Role | When to grant the role |
| --- | --- |
| Logs Configuration Writer (roles/logging.configWriter), or Logging Admin (roles/logging.admin) and BigQuery Data Editor (roles/bigquery.dataEditor) | To create a sink and a BigQuery dataset from the Google Cloud console. |
| Owner (roles/owner) | To create a sink and a BigQuery dataset from the Google Cloud CLI. |
| BigQuery Admin (roles/bigquery.admin) | To write or download custom queries. |
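For example, one of the roles from the table can be granted with the Google Cloud CLI. In this sketch, PROJECT_ID and USER_EMAIL are placeholders, not values from this guide:

```shell
# Grant the Logs Configuration Writer role to a user.
# PROJECT_ID and USER_EMAIL are placeholders; replace them with
# your project ID and the user's email address.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/logging.configWriter"
```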

Create a sink and route logs to BigQuery for protected backups

BigQuery stores only the logs that are generated after the log sink is created. Logs generated before you create the log sink are not visible in BigQuery. You can create the log sink from the Google Cloud console or the Google Cloud CLI.

To create a sink and route logs to BigQuery for protected backups, do the following:

Console

  1. In the Google Cloud console, go to the Log Router page:

    Go to the Log Router page

  2. Select a Google Cloud project.
  3. Click Create sink.
  4. In the Sink details panel, fill in the following fields:
    1. Sink name: enter bdr_report_sink as the sink name. You must use the sink name bdr_report_sink to distinguish the Backup and DR reports from other sinks.
    2. Sink description: describe the purpose or use case of the sink.
  5. In the Sink destination panel, do the following:

    1. In the Select sink service menu, select the BigQuery dataset sink service.
    2. In Select BigQuery dataset, choose Create new BigQuery dataset.
    3. On the Create dataset page, do the following:
      1. For Dataset ID, enter bdr_reports as the dataset name to distinguish it from other datasets. Don't change the dataset name bdr_reports.
      2. For Location type, choose a geographic location for the dataset. After a dataset is created, its location can't be changed.
      3. Optional: If you want tables in this dataset to expire, select Enable table expiration, and then specify the Default maximum table age in days.
      4. Click Create dataset.
  6. In the Choose logs to include in sink panel, do the following:

    1. In the Build inclusion filter field, enter the following filter expression that matches the log entries you want to include.

       logName=~"projects/PROJECT_ID/logs/backupdr.googleapis.com%2Fbdr_*" 
       Replace `PROJECT_ID` with your project ID.
      
    2. To verify that you entered the correct filter, select Preview logs. The Logs Explorer opens in a new tab with the filter pre-populated.

  7. Optional: In the Choose logs to filter out of sink panel, do the following:

    1. In the Exclusion filter name field, enter a name.
    2. In the Build an exclusion filter field, enter a filter expression that matches the log entries you want to exclude. You can also use the sample function to select a portion of the log entries to exclude.

  8. Select Create sink.

    You can view the dataset in BigQuery Studio.
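As a quick sanity check outside the console, the same inclusion filter can be built in a shell variable before pasting it into the filter field. Here, my-project is a placeholder project ID used only for illustration:

```shell
#!/bin/bash
# Build the sink's inclusion filter string for Backup and DR logs.
# "my-project" is a placeholder; substitute your project ID.
PROJECT_ID="my-project"
LOG_FILTER="logName=~\"projects/${PROJECT_ID}/logs/backupdr.googleapis.com%2Fbdr_*\""
echo "$LOG_FILTER"
```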

gcloud

  1. Go to Activate Cloud Shell and click Open Editor.
  2. Click the icon, select File, and then New Text File.
  3. Copy and paste the following script.

       #!/bin/bash
        echo "This script will set up a log sink for BackupDR reports to be available in BigQuery"
    
        # Get the default project ID
        DEFAULT_PROJECT_ID=$(gcloud config get-value project)
        read -p "Enter Project ID (default: $DEFAULT_PROJECT_ID, press Enter to continue):" PROJECT_ID
    
        # Use default if no input is provided
        if [ -z "$PROJECT_ID" ]; then
          PROJECT_ID=$DEFAULT_PROJECT_ID
        fi
          # Set the project ID
        result=$(gcloud config set project $PROJECT_ID)
        if [ $? -ne 0 ]; then
          echo "Error setting the project to $PROJECT_ID"
          exit 1
        fi
          # --- Check if BigQuery API is already enabled, enable if not ---
        echo "Checking if BigQuery API is enabled..."
        if gcloud services list | grep "bigquery.googleapis.com" >/dev/null; then
          echo "BigQuery API is already enabled for $PROJECT_ID"
        else
          echo "For logs to be available in BigQuery, we need to enable BigQuery service in the project if not done already. This might mean additional costs incurred. Please check the associated costs before proceeding."
          read -p "Do you want to continue(Y/N)?" continue
          if [ "$continue" = "y" ] || [ "$continue" = "Y" ]; then
            echo "Enabling BigQuery API..."
            result=$(gcloud services enable bigquery.googleapis.com --project $PROJECT_ID)
            if [ $? -eq 0 ]; then
              echo "Successfully enabled BigQuery api for $PROJECT_ID"
            else
              echo "Error in setting up the BigQuery api for the project. $result"
              exit 1
            fi
          else
            exit 0
          fi
        fi
          # --- Check if BigQuery data set already exists, create if not ---
        echo "Checking if BigQuery data set exists..."
        if bq ls | grep "bdr_reports" >/dev/null; then
          echo "Dataset bdr_reports already exists for $PROJECT_ID"
        else
          echo "Creating BigQuery dataset bdr_reports..."
          # --- Get dataset location from user (default: US) ---
          read -p "Enter dataset location (default: US, press Enter to use): " DATASET_LOCATION
          if [ -z "$DATASET_LOCATION" ]; then
            DATASET_LOCATION="US"
          fi
          # --- Get table expiration in days from user (default: no expiration) ---
          read -p "Enter default table expiration in days (default: no expiration, press Enter to skip): " TABLE_EXPIRATION_DAYS
    
          # Calculate table expiration in seconds if provided
          if [ -n "$TABLE_EXPIRATION_DAYS" ]; then
            TABLE_EXPIRATION_SECONDS=$((TABLE_EXPIRATION_DAYS * 24 * 60 * 60))
            EXPIRATION_FLAG="--default_table_expiration $TABLE_EXPIRATION_SECONDS"
          else
            EXPIRATION_FLAG=""
          fi
          result=$(bq --location=$DATASET_LOCATION mk $EXPIRATION_FLAG bdr_reports)
          if [ $? -eq 0 ]; then
            echo "Created a BigQuery dataset bdr_reports successfully."
          else
            echo ""
            echo "ERROR : Failed to create the BigQuery dataset."
            echo $result
            exit 1
          fi
        fi
          # --- Check if Log Sink already exists, create if not ---
        echo "Checking if Log Sink exists..."
        if gcloud logging sinks list | grep "bdr_report_sink" >/dev/null; then
          echo "Log sink bdr_report_sink already exists for $PROJECT_ID"
        else
          log_filter="projects/$PROJECT_ID/logs/backupdr.googleapis.com%2Fbdr_*"
          echo "Creating log sink bdr_report_sink..."
          result=$(gcloud logging sinks create bdr_report_sink bigquery.googleapis.com/projects/$PROJECT_ID/datasets/bdr_reports --log-filter="logName=~\"$log_filter\"")
          if [ $? -eq 0 ]; then
            echo "Created log sink bdr_report_sink successfully."
          else
            echo ""
            echo "ERROR : Failed to create logsink."
            exit 1
          fi
        fi
    
        # --- Add IAM Policy binding for Cloud logging service account to write logs to BigQuery ---
        result=$(gcloud projects add-iam-policy-binding $(gcloud projects describe $PROJECT_ID --format="value(projectNumber)") --member=serviceAccount:service-$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")@gcp-sa-logging.iam.gserviceaccount.com --role=roles/bigquery.dataEditor --condition=None)
        if [ $? -eq 0 ]; then
          echo "Added permission for cloud logging to write to BigQuery datasets"
        else
          echo ""
          echo "ERROR : Failed to add permissions for cloud logging to write to BigQuery datasets. Please make sure that you have Owner access rights in order to be able to proceed."
          exit 1
        fi
    
        echo "Setup complete. The logs for the project $PROJECT_ID will now start flowing to bigquery."
        exit 0
    
    
  4. Save the file with a name that has the Bash file extension (for example, script.sh).

  5. Run the bash command on the file that you just created. For example, bash script.sh.

    You can view the created dataset in BigQuery Studio.
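Note that the default table expiration that the script passes to bq mk must be in seconds, so the script converts the value you enter in days. A minimal sketch of that conversion, using 30 days as an example:

```shell
#!/bin/bash
# Convert a table expiration given in days into the seconds value
# expected by `bq mk --default_table_expiration`.
TABLE_EXPIRATION_DAYS=30
TABLE_EXPIRATION_SECONDS=$((TABLE_EXPIRATION_DAYS * 24 * 60 * 60))
echo "$TABLE_EXPIRATION_SECONDS"
```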

Set up prebuilt reports for protected backups

To set up prebuilt reports, run the following script against the dataset into which the logs are routed through the log sink.

The script adds the following prebuilt reports as views in the dataset: backup_jobs_summary, backup_jobs_details, successful_backup_jobs, failed_backup_jobs, skipped_backup_jobs, restore_jobs_summary, restore_jobs_details, protected_resource_details, and backup_vault_consumption.

To set up prebuilt reports in BigQuery for protected backups, do the following:

gcloud

  1. Go to Activate Cloud Shell and click Open Editor.
  2. Click the icon, select File, and then New Text File.
  3. Copy and paste the following prebuilt reports script into the Google Cloud CLI editor.

    #!/bin/bash
    
    # Default config values. Modify only if required.
    DATASET="bdr_reports"
    JOBS_TABLE_PREFIX="backupdr_googleapis_com_bdr_backup_restore_jobs_"
    PROTECTED_RESOURCE_TABLE_PREFIX="backupdr_googleapis_com_bdr_protected_resource_"
    BACKUP_VAULT_TABLE_PREFIX="backupdr_googleapis_com_bdr_backup_vault_details_"
    BACKUP_PLAN_TABLE_PREFIX="backupdr_googleapis_com_bdr_backup_plan_details_"
    VIEW_FROM_DATE_SUFFIX="20000101"  # Use a date old enough that the views contain all the data.
    ALLOWED_RESOURCE_TYPES='('"'"'Compute Engine'"'"', '"'"'Disk'"'"')'
    
    # Function to check if a routine exists
    routine_exists() {
    local routine_name="$1"
    bq --project_id $PROJECT_ID ls --routines "$DATASET" | grep "$routine_name" >/dev/null
    }
    
    # Function to check if a view exists
    view_exists() {
    local view_name="$1"
    check_view=$(bq --project_id $PROJECT_ID query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from $DATASET.INFORMATION_SCHEMA.VIEWS WHERE table_name='$view_name'" | tail -n +2 | tr -d ' ')
    if [ "$check_view" == "true" ]; then
      return 0
    fi
    return 1
    }
    
    # Function to create a routine
    create_routine() {
    local routine_name="$1"
    local routine_type="$2"
    local definition_body="$3"
    # Construct the routine body JSON
    local routine_body='
      {
      "definitionBody": "'"$definition_body"'",
      "routineType": "'"$routine_type"'",
      "routineReference": {
          "projectId": "'"$PROJECT_ID"'",
          "datasetId": "'"$DATASET"'",
          "routineId": "'"$routine_name"'"
      },
      "arguments": [
          {
          "name": "FROM_DATE",
          "dataType": {
              "typeKind": "STRING"
          }
          },
          {
          "name": "TO_DATE",
          "dataType": {
              "typeKind": "STRING"
          }
          }
      ]
      }
    '
    
    # Use POST for creating a new routine
    local http_method="POST"
    local routine_id=""
    # If routine exists use PUT to update the routine
    if routine_exists "$routine_name"; then
    echo "Routine $routine_name already exists. Updating the existing routine"
    http_method="PUT"
    routine_id="/$routine_name"
    fi
    
    # Construct the curl command with the routine body
    local curl_command=(
      curl -s --request $http_method \
        "https://bigquery.googleapis.com/bigquery/v2/projects/$PROJECT_ID/datasets/$DATASET/routines$routine_id" \
        --header "Authorization: Bearer $ACCESS_TOKEN" \
        --header 'Accept: application/json' \
        --header 'Content-Type: application/json' \
        --data "$routine_body" --compressed
    )
      # Execute the curl command and capture the result
    local result=$( "${curl_command[@]}" )
    
    # Check if creation was successful
    if echo "$result" | grep '"etag":' > /dev/null; then
      echo "Routine '$routine_name' created/updated successfully."
    else
      echo "$result"
      exit 1
    fi
    }
    
    # Function to create a view
    create_view() {
    local view_name="$1"
    local parent_routine_name="$1"
    local view_query='
      SELECT * FROM `'$PROJECT_ID'.'$DATASET'.'$parent_routine_name'`("'$VIEW_FROM_DATE_SUFFIX'", FORMAT_DATE("%Y%m%d", CURRENT_DATE()));
    '
    
    # Check if view exists.
    if routine_exists "$parent_routine_name"; then
      view_exists "$view_name"
      if [ $? -eq 0 ]; then
        echo "View $view_name already exists. Skipping."
        return 0
      fi
      # Create the view if it does not exist
      bq --project_id $PROJECT_ID mk --use_legacy_sql=false --expiration 0 --view "$view_query" $DATASET.$view_name
    else
      echo "Routine $parent_routine_name is not present. Skipping the creation of $view_name."
      return 1
    fi
    }
    
    # Create all jobs related routines and views
    create_jobs_routines_and_views() {
    # Backup Restore Jobs
    create_routine "backup_restore_jobs" "TABLE_VALUED_FUNCTION" \
    '
    WITH
      dat AS (
      SELECT
        *,
        LAG(insertId, 1, '"'"'placeholder_id'"'"') OVER (ORDER BY timestamp, insertId) = insertId AS `insert_id_is_same`,
        LAG(timestamp, 1, TIMESTAMP_SECONDS(1)) OVER (ORDER BY timestamp, insertId) = timestamp AS `timestamp_is_same`
      FROM
        `'$PROJECT_ID'.'$DATASET'.'$JOBS_TABLE_PREFIX'*`
      WHERE
        _TABLE_SUFFIX >= FROM_DATE
        AND _TABLE_SUFFIX <= TO_DATE
        AND jsonpayload_v1_bdrbackuprestorejoblog.resourceType IN '"${ALLOWED_RESOURCE_TYPES}"' ),
      deduped AS (
      SELECT
        * EXCEPT (insert_id_is_same,
          timestamp_is_same),
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).endtime) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).endtime), NULL) AS job_end_time,
    
      FROM
        dat
      WHERE
        NOT (insert_id_is_same
          AND timestamp_is_same) ),
      latest AS (
      SELECT
        jsonpayload_v1_bdrbackuprestorejoblog.jobId AS job_id,
        MAX(CASE jsonpayload_v1_bdrbackuprestorejoblog.jobStatus
            WHEN '"'"'RUNNING'"'"' THEN 1
            WHEN '"'"'SKIPPED'"'"' THEN 2
            WHEN '"'"'SUCCESSFUL'"'"' THEN 3
            WHEN '"'"'FAILED'"'"' THEN 4
        END
          ) AS final_status_code
      FROM
        deduped
      GROUP BY
        jsonpayload_v1_bdrbackuprestorejoblog.jobid),
      filled_latest AS (
      SELECT
        jsonpayload_v1_bdrbackuprestorejoblog.jobId AS job_id,
        jsonpayload_v1_bdrbackuprestorejoblog.jobStatus AS final_status,
        PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrbackuprestorejoblog.startTime) AS parsed_start_time,
        PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', deduped.job_end_time) AS parsed_end_time,
        TIMESTAMP_DIFF(PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', deduped.job_end_time), PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrbackuprestorejoblog.startTime), HOUR) AS duration_hours,
        MOD(TIMESTAMP_DIFF(PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', deduped.job_end_time), PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrbackuprestorejoblog.startTime), MINUTE), 60) AS duration_minutes,
        MOD(TIMESTAMP_DIFF(PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', deduped.job_end_time), PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrbackuprestorejoblog.startTime), SECOND), 60) AS duration_seconds,
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).restoreresourcename) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).restoreresourcename), NULL) AS restore_resource_name,
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errortype) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errortype), NULL) AS error_type,
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errormessage) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errormessage), NULL) AS error_message,
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupname) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupname), NULL) AS backup_name,
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupplanname) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupplanname), NULL) AS backup_plan_name,
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backuprule) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backuprule), NULL) AS backup_rule,
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupconsistencytime) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupconsistencytime), NULL) AS backup_consistency_time,
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).incrementalbackupsizegib) = '"'"'number'"'"', FLOAT64(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).incrementalbackupsizegib), NULL) AS incremental_backup_size_in_gib,
      IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errorcode) = '"'"'number'"'"', FLOAT64(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errorcode), NULL) AS error_code,
    
      FROM
        deduped
      JOIN
        latest
      ON
        latest.job_id = jsonpayload_v1_bdrbackuprestorejoblog.jobId
        AND jsonpayload_v1_bdrbackuprestorejoblog.jobStatus = (
          CASE final_status_code
            WHEN 1 THEN '"'"'RUNNING'"'"'
            WHEN 2 THEN '"'"'SKIPPED'"'"'
            WHEN 3 THEN '"'"'SUCCESSFUL'"'"'
            WHEN 4 THEN '"'"'FAILED'"'"'
        END
          ) )
    SELECT
      FORMAT_TIMESTAMP('"'"'%m%d'"'"', filled_latest.parsed_start_time, '"'"'UTC'"'"') || '"'"'-'"'"' || ARRAY_TO_STRING(REGEXP_EXTRACT_ALL(jsonpayload_v1_bdrbackuprestorejoblog.resourceType, '"'"'\\\\b[A-Z]'"'"'), '"'"''"'"') || '"'"'-'"'"' || (CASE jsonpayload_v1_bdrbackuprestorejoblog.jobCategory
          WHEN '"'"'ON_DEMAND_BACKUP'"'"' THEN '"'"'B'"'"'
          WHEN '"'"'SCHEDULED_BACKUP'"'"' THEN '"'"'B'"'"'
          WHEN '"'"'ON_DEMAND_OPERATIONAL_BACKUP'"'"' THEN '"'"'B'"'"'
          WHEN '"'"'RESTORE'"'"' THEN '"'"'R'"'"'
      END
        ) || '"'"'-'"'"' || REPLACE(FORMAT_TIMESTAMP('"'"'%H%M%E3S'"'"', filled_latest.parsed_start_time, '"'"'UTC'"'"'), '"'"'.'"'"', '"'"''"'"') AS `job_name`,
      REGEXP_EXTRACT(jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceName, '"'"'([^/]+)$'"'"') AS `resource_name`,
    IF
      (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceName, '"'"'/'"'"')) < 2, NULL, SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceName,'"'"'/'"'"')[1]) AS `resource_project_name`,
      jsonpayload_v1_bdrbackuprestorejoblog.jobStatus AS `job_status`,
      jsonpayload_v1_bdrbackuprestorejoblog.jobCategory AS `job_category`,
      Filled_latest.error_type AS `error_type`,
      CAST(filled_latest.error_code AS INT64) AS `error_code`,
      filled_latest.error_message AS `error_message`,
      parsed_start_time AS `job_start_time`,
      parsed_end_time AS `job_end_time`,
    IF
      (duration_hours > 0, duration_hours || '"'"'h '"'"', '"'"''"'"') ||
    IF
      (duration_minutes > 0, duration_minutes || '"'"'m '"'"', '"'"''"'"') ||
    IF
      (duration_seconds > 0, duration_seconds || '"'"'s '"'"', '"'"''"'"') AS `duration`,
      ROUND(filled_latest.incremental_backup_size_in_gib, 2) AS `incremental_backup_size_in_gib`,
      PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', filled_latest.backup_consistency_time) AS `backup_consistency_time`,
    IF
      (ARRAY_LENGTH(SPLIT(filled_latest.backup_plan_name, '"'"'/'"'"')) < 6, NULL, SPLIT(filled_latest.backup_plan_name,'"'"'/'"'"')[5]) AS `backup_plan_name`,
      filled_latest.backup_rule AS `backup_rule`,
    IF
      (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.backupVaultName, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.backupVaultName,'"'"'/'"'"')[5]) AS `backup_vault_name`,
    IF
      (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.backupVaultName, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.backupVaultName,'"'"'/'"'"')[1]) AS `backup_vault_project_name`,
    IF
      (ARRAY_LENGTH(SPLIT(filled_latest.backup_name, '"'"'/'"'"')) < 10, NULL, SPLIT(filled_latest.backup_name,'"'"'/'"'"')[9]) AS `backup_name`,
    IF
      (ARRAY_LENGTH(SPLIT(filled_latest.restore_resource_name, '"'"'/'"'"')) < 6, NULL, SPLIT(filled_latest.restore_resource_name,'"'"'/'"'"')[1]) AS `restore_project_name`,
    IF
      (ARRAY_LENGTH(SPLIT(filled_latest.restore_resource_name, '"'"'/'"'"')) < 6, NULL, SPLIT(filled_latest.restore_resource_name,'"'"'/'"'"')[5]) AS `restore_resource_name`,
      jsonpayload_v1_bdrbackuprestorejoblog.sourceresourcelocation AS `resource_location`,
      jsonpayload_v1_bdrbackuprestorejoblog.resourceType AS `resource_type`,
      jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceId AS `resource_id`,
      jsonpayload_v1_bdrbackuprestorejoblog.jobId AS `job_id`,
      timestamp AS `log_timestamp`
    FROM
      deduped
    JOIN
      filled_latest
    ON
      filled_latest.job_id = jsonpayload_v1_bdrbackuprestorejoblog.jobId
      AND jsonpayload_v1_bdrbackuprestorejoblog.jobStatus = filled_latest.final_status
    ORDER BY
      parsed_start_time DESC
    '
    
    # Backup Jobs Summary
    create_routine "backup_jobs_summary" "TABLE_VALUED_FUNCTION" \
    '
    SELECT
        DATE(log_timestamp) AS `date`,
        resource_project_name,
        job_category,
        ROUND((COUNTIF(job_status = '"'"'SUCCESSFUL'"'"')/COUNT(*)) * 100, 2) AS `success_percent`,
        COUNT(*) AS `total`,
        COUNTIF(job_status = '"'"'SUCCESSFUL'"'"') AS `successful`,
        COUNTIF(job_status = '"'"'FAILED'"'"') AS `failed`,
        COUNTIF(job_status = '"'"'SKIPPED'"'"') AS `skipped`,
      FROM
        `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE,
          TO_DATE)
      WHERE job_category IN ('"'"'ON_DEMAND_BACKUP'"'"', '"'"'SCHEDULED_BACKUP'"'"', '"'"'ON_DEMAND_OPERATIONAL_BACKUP'"'"') AND job_status IN ('"'"'SUCCESSFUL'"'"', '"'"'FAILED'"'"', '"'"'SKIPPED'"'"')
      GROUP BY
        `date`,
        resource_project_name,
        job_category
      ORDER BY
        `date` DESC
    '
    create_view "backup_jobs_summary"
    
    # Backup Jobs Details
    create_routine "backup_jobs_details" "TABLE_VALUED_FUNCTION" \
    '
    SELECT
        job_name,
        resource_name,
        resource_project_name,
        job_status,
        error_type,
        error_code,
        error_message,
        job_start_time,
        job_end_time,
        duration,
        incremental_backup_size_in_gib,
        backup_consistency_time,
        backup_plan_name,
        backup_rule,
        backup_vault_name,
        backup_vault_project_name,
        resource_location,
        resource_type,
        resource_id,
        job_category,
        job_id
      FROM
        `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE,
          TO_DATE)
      WHERE
        job_status IN ('"'"'SUCCESSFUL'"'"',
          '"'"'FAILED'"'"', '"'"'SKIPPED'"'"') AND job_category IN ('"'"'ON_DEMAND_BACKUP'"'"', '"'"'SCHEDULED_BACKUP'"'"', '"'"'ON_DEMAND_OPERATIONAL_BACKUP'"'"')
    '
    create_view "backup_jobs_details"
    
    # Successful Backup Jobs Details
    create_routine "successful_backup_jobs" "TABLE_VALUED_FUNCTION" \
    'SELECT
        job_name,
        resource_name,
        resource_project_name,
        job_start_time,
        job_end_time,
        duration,
        incremental_backup_size_in_gib,
        backup_consistency_time,
        backup_plan_name,
        backup_rule,
        backup_vault_name,
        backup_vault_project_name,
        resource_type,
        resource_id,
        job_category,
        job_id
      FROM
        `'$PROJECT_ID'.'$DATASET'.backup_jobs_details`(FROM_DATE,
          TO_DATE) WHERE job_status = '"'"'SUCCESSFUL'"'"'
    '
    create_view "successful_backup_jobs"
    
    # Failed Backup Job Details
    create_routine "failed_backup_jobs" "TABLE_VALUED_FUNCTION" \
    '
    SELECT
        job_name,
        resource_name,
        resource_project_name,
        error_type,
        error_code,
        error_message,
        job_start_time,
        job_end_time,
        duration,
        backup_plan_name,
        backup_rule,
        backup_vault_name,
        backup_vault_project_name,
        resource_type,
        resource_id,
        job_category,
        job_id
      FROM
        `'$PROJECT_ID'.'$DATASET'.backup_jobs_details`(FROM_DATE,
          TO_DATE) WHERE job_status = '"'"'FAILED'"'"'
    '
    create_view "failed_backup_jobs"
    
    # Skipped Backup Job Details
    create_routine "skipped_backup_jobs" "TABLE_VALUED_FUNCTION" \
    '
    SELECT
        job_name,
        resource_name,
        resource_project_name,
        backup_plan_name,
        backup_rule,
        backup_vault_name,
        job_start_time,
        backup_vault_project_name,
        resource_type,
        resource_id,
        job_category,
        job_id
      FROM
        `'$PROJECT_ID'.'$DATASET'.backup_jobs_details`(FROM_DATE,
          TO_DATE)
      WHERE
        job_status = '"'"'SKIPPED'"'"'
    '
    create_view "skipped_backup_jobs"
    
    # Restore Jobs Summary
    create_routine "restore_jobs_summary" "TABLE_VALUED_FUNCTION" \
    '
    SELECT
        DATE(log_timestamp) AS `date`,
        restore_project_name,
        backup_vault_project_name,
        ROUND((COUNTIF(job_status = '"'"'SUCCESSFUL'"'"')/COUNT(*)) * 100, 2) AS `success_percent`,
        COUNT(*) AS `total`,
        COUNTIF(job_status = '"'"'SUCCESSFUL'"'"') AS `successful`,
        COUNTIF(job_status = '"'"'FAILED'"'"') AS `failed`,
      FROM
        `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE,
          TO_DATE)
      WHERE
        job_category = '"'"'RESTORE'"'"'
        AND job_status IN ('"'"'SUCCESSFUL'"'"',
          '"'"'FAILED'"'"')
      GROUP BY
        `date`,
        restore_project_name,
        backup_vault_project_name
      ORDER BY
        `date` DESC
    '
    create_view "restore_jobs_summary"
    
    # Restore Jobs Details
    create_routine "restore_jobs_details" "TABLE_VALUED_FUNCTION" \
    '
    SELECT
        job_name,
        resource_name,
        resource_project_name,
        job_status,
        error_type,
        error_code,
        error_message,
        job_start_time,
        job_end_time,
        duration,
        restore_resource_name,
        restore_project_name,
        backup_vault_name,
        backup_vault_project_name,
        resource_location,
        resource_type,
        resource_id,
        job_id
      FROM
        `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE,
          TO_DATE)
      WHERE
        job_category IN ('"'"'RESTORE'"'"')
        AND job_status IN ('"'"'SUCCESSFUL'"'"',
          '"'"'FAILED'"'"')
    '
    create_view "restore_jobs_details"
    }
    
    # Protected Resource Details
    create_protected_resource_routines_and_views() {
    create_routine "protected_resource_details" "TABLE_VALUED_FUNCTION" \
    '
    WITH
      dat AS (
      SELECT
        *,
        LAG(insertId, 1, '"'"'placeholder_id'"'"') OVER (ORDER BY timestamp, insertId) = insertId AS `insert_id_is_same`,
        LAG(timestamp, 1, TIMESTAMP_SECONDS(1)) OVER (ORDER BY timestamp, insertId) = timestamp AS `timestamp_is_same`
    
      FROM
        `'$PROJECT_ID'.'$DATASET'.'$PROTECTED_RESOURCE_TABLE_PREFIX'*` WHERE _TABLE_SUFFIX >= FROM_DATE AND _TABLE_SUFFIX <= TO_DATE
        AND jsonpayload_v1_bdrprotectedresourcelog.resourceType IN '"${ALLOWED_RESOURCE_TYPES}"'
        ),
      deduped AS (
      SELECT
        * EXCEPT (`insert_id_is_same`,
          `timestamp_is_same`)
      FROM
        dat
      WHERE
        NOT (`insert_id_is_same`
          AND `timestamp_is_same`) ),
      latest_for_date AS (
      SELECT
        DATE(timestamp) AS `log_date`,
        jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename AS `source_resource_name`,
        MAX(timestamp) AS `latest_timestamp`
      FROM
        deduped
      GROUP BY
        `log_date`,
        `source_resource_name` )
    SELECT
      DATE(timestamp) AS `date`,
      jsonpayload_v1_bdrprotectedresourcelog.resourcetype AS `resource_type`,
      REGEXP_EXTRACT(jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename, '"'"'([^/]+)$'"'"') AS `resource_name`,
    IF
      (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename, '"'"'/'"'"')) < 2, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename,'"'"'/'"'"')[1]) AS `resource_project_name`,
      jsonpayload_v1_bdrprotectedresourcelog.sourceresourcedatasizegib AS `resource_data_size_in_gib`,
    IF
      (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupvaultname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupvaultname,'"'"'/'"'"')[5]) AS `backup_vault_name`,
    IF
      (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname,'"'"'/'"'"')[5]) AS `backup_plan_name`,
      (
      SELECT
        ARRAY_AGG(STRUCT(
            array_entry.rulename AS `rule_name`,
            array_entry.recurrence AS `recurrence`,
            array_entry.recurrenceschedule AS `recurrence_schedule`,
            array_entry.backupwindow AS `backup_window`,
            IF(array_entry.backupwindowtimezone = '"'"'Etc/UTC'"'"', '"'"'UTC'"'"',array_entry.backupwindowtimezone)  AS `backup_window_time_zone`,
            array_entry.retentiondays AS `retention_days`
          ))
      FROM
        UNNEST(jsonpayload_v1_bdrprotectedresourcelog.currentbackupruledetails) array_entry ) AS `backup_rules`,
    DATE(PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrprotectedresourcelog.lastprotectedon)) AS `last_backup_plan_assoc_date`,
    IF
      (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname, '"'"'/'"'"')) < 2, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname,'"'"'/'"'"')[1]) AS `backup_vault_project_name`,
    IF
      (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname, '"'"'/'"'"')) < 4, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname,'"'"'/'"'"')[3]) AS `backup_vault_location`,
      jsonpayload_v1_bdrprotectedresourcelog.sourceresourcelocation AS `resource_location`,
      jsonpayload_v1_bdrprotectedresourcelog.sourceresourceid AS `resource_id`,
    FROM
      deduped
    INNER JOIN
      latest_for_date
    ON
      DATE(timestamp) = latest_for_date.log_date
      AND timestamp = latest_for_date.latest_timestamp
      AND latest_for_date.`source_resource_name` = jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename
    ORDER BY
      `date` DESC,
      `resource_name` ASC
    '
    create_view "protected_resource_details"
    }
    
    # Backup Vault Consumption
    create_backup_vault_consumption_routines_and_views() {
    create_routine "backup_vault_consumption" "TABLE_VALUED_FUNCTION" \
    '
    WITH
        dat AS (
        SELECT
          *,
        LAG(insertId, 1, '"'"'placeholder_id'"'"') OVER (ORDER BY timestamp, insertId) = insertId AS `insert_id_is_same`,
        LAG(timestamp, 1, TIMESTAMP_SECONDS(1)) OVER (ORDER BY timestamp, insertId) = timestamp AS `timestamp_is_same`
        FROM
          `'$PROJECT_ID'.'$DATASET'.'$BACKUP_VAULT_TABLE_PREFIX'*`
        WHERE
          jsonpayload_v1_bdrbackupvaultdetailslog.resourceType IN '"${ALLOWED_RESOURCE_TYPES}"'
          AND _TABLE_SUFFIX >= FROM_DATE
          AND _TABLE_SUFFIX <= TO_DATE ),
        deduped AS (
        SELECT
          * EXCEPT (`insert_id_is_same`,
            `timestamp_is_same`)
        FROM
          dat
        WHERE
          NOT (`insert_id_is_same`
            AND `timestamp_is_same`) ),
        latest_for_date AS (
        SELECT
          DATE(timestamp) AS `log_date`,
          MAX(timestamp) AS `latest_timestamp`,
          jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename AS `source_resource_name`,
        FROM
          deduped
        GROUP BY
          log_date,
          `source_resource_name`)
      SELECT
        DATE(timestamp) AS `date`,
        jsonpayload_v1_bdrbackupvaultdetailslog.resourcetype AS `resource_type`,
        REGEXP_EXTRACT(jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename, '"'"'([^/]+)$'"'"') AS `resource_name`,
        SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename, '"'"'/'"'"')[1] AS `resource_project_name`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname,'"'"'/'"'"')[5]) AS `backup_vault_name`,
        jsonpayload_v1_bdrbackupvaultdetailslog.storedbytesgib AS `backup_vault_stored_bytes_in_gib`,
        CAST(jsonpayload_v1_bdrbackupvaultdetailslog.minimumEnforcedRetentionDays AS INT64) AS `backup_vault_minimum_enforced_retention_days`,
      IF
        (ARRAY_LENGTH(SPLIT(IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).currentbackupplanname) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).currentbackupplanname), NULL), '"'"'/'"'"')) < 6, NULL, SPLIT(IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).currentbackupplanname) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).currentbackupplanname), NULL),'"'"'/'"'"')[5]) AS `backup_plan_name`,
      PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).firstavailablerestorepoint) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).firstavailablerestorepoint), NULL)) AS `first_available_restore_point`,
      PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', IF
        (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).lastavailablerestorepoint) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).lastavailablerestorepoint), NULL)) AS `last_available_restore_point`,
    
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname,'"'"'/'"'"')[1]) AS `backup_vault_project_name`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname,'"'"'/'"'"')[3]) AS `backup_vault_location`,
      jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcelocation AS `resource_location`,
      FROM
        deduped
      INNER JOIN
        latest_for_date
      ON
        DATE(timestamp) = latest_for_date.log_date
        AND timestamp = latest_for_date.latest_timestamp
        AND jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename = latest_for_date.`source_resource_name`
      ORDER BY
        `date` DESC,
        `resource_name` ASC
    '
    create_view "backup_vault_consumption"
    }
    
    # Backup Plan Details
    create_backup_plan_details_routines_and_views() {
      create_routine "backup_plan_details" "TABLE_VALUED_FUNCTION" \
      '
      WITH
        backup_plan_dat AS (
          SELECT
            *,
            LAG(insertId, 1, '"'"'placeholder_id'"'"') OVER (ORDER BY timestamp, insertId) = insertId AS `insert_id_is_same`,
            LAG(timestamp, 1, TIMESTAMP_SECONDS(1)) OVER (ORDER BY timestamp, insertId) = timestamp AS `timestamp_is_same`
          FROM
            `'$PROJECT_ID'.'$DATASET'.'$BACKUP_PLAN_TABLE_PREFIX'*`
          WHERE
            _TABLE_SUFFIX >= FROM_DATE
            AND _TABLE_SUFFIX <= TO_DATE
        ),
        backup_plan_deduped AS (
          SELECT
            * EXCEPT (`insert_id_is_same`, `timestamp_is_same`)
          FROM
            backup_plan_dat
          WHERE
            NOT (`insert_id_is_same` AND `timestamp_is_same`)
        ),
        backup_plan_latest_for_date AS (
          SELECT
            DATE(timestamp) AS `log_date`,
            MAX(timestamp) AS `latest_timestamp`,
            jsonpayload_v1_bdrbackupplandetailslog.backupplanname AS `backupplanname`
          FROM
            backup_plan_deduped
          GROUP BY
            `log_date`,
            `backupplanname`
        ),
        latest_backup_vault AS (
          SELECT
            timestamp,
            jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname as backupvaultname,
            jsonpayload_v1_bdrbackupvaultdetailslog.minimumenforcedretentiondays as minimumenforcedretentiondays,
            jsonpayload_v1_bdrbackupvaultdetailslog.effectivedateforenforcedretentionlock as effectivedateforenforcedretentionlock
          FROM `'$PROJECT_ID'.'$DATASET'.'$BACKUP_VAULT_TABLE_PREFIX'*`
          QUALIFY ROW_NUMBER() OVER (PARTITION BY jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname ORDER BY timestamp DESC) = 1
        )
      SELECT
        DATE(bp_d.timestamp) AS `date`,
        IF(ARRAY_LENGTH(SPLIT(bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupplanname, '"'"'/'"'"')) < 6, NULL, SPLIT(bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupplanname,'"'"'/'"'"')[OFFSET(5)]) AS `backup_plan_name`,
        IF(ARRAY_LENGTH(SPLIT(bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupplanname, '"'"'/'"'"')) < 4, NULL, SPLIT(bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupplanname, '"'"'/'"'"')[OFFSET(3)]) AS `backup_plan_location`,
        (
          SELECT
            ARRAY_AGG(STRUCT(
                rule_details.rulename AS `rule_name`,
                rule_details.recurrence AS `recurrence`,
                rule_details.recurrenceschedule AS `recurrence_schedule`,
                rule_details.backupwindow AS `backup_window`,
                IF(rule_details.backupwindowtimezone = '"'"'Etc/UTC'"'"', '"'"'UTC'"'"',rule_details.backupwindowtimezone)  AS `backup_window_time_zone`,
                rule_details.retentiondays AS `retention_days`
              ))
          FROM
            UNNEST(bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupruledetails) rule_details
        ) AS `backup_rules`,
        IF(ARRAY_LENGTH(SPLIT(bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupvaultname, '"'"'/'"'"')) < 6, NULL, SPLIT(bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupvaultname,'"'"'/'"'"')[OFFSET(5)]) AS `backup_vault_name`,
        IF(ARRAY_LENGTH(SPLIT(bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupvaultname, '"'"'/'"'"')) < 4, NULL, SPLIT(bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupvaultname, '"'"'/'"'"')[OFFSET(3)]) AS `backup_vault_location`,
        bp_d.jsonpayload_v1_bdrbackupplandetailslog.resourcesprotectedcount AS `resources_protected_count`,
        bp_d.jsonpayload_v1_bdrbackupplandetailslog.protecteddatavolumegib AS `protected_data_volume_gib`,
        bv_d.minimumenforcedretentiondays AS `minimum_enforced_retention_days`,
        bv_d.effectivedateforenforcedretentionlock AS `effective_date_for_enforced_retention_lock`,
        CASE
          WHEN bv_d.effectivedateforenforcedretentionlock IS NULL THEN '"'"'unlocked'"'"'
          WHEN LENGTH(TRIM(bv_d.effectivedateforenforcedretentionlock)) = 0 THEN '"'"'unlocked'"'"'
          WHEN PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', bv_d.effectivedateforenforcedretentionlock) <= CURRENT_TIMESTAMP() THEN '"'"'locked'"'"'
          ELSE '"'"'unlocked'"'"'
        END AS `lock_on_enforced_retention`
      FROM
        backup_plan_deduped AS bp_d
        INNER JOIN backup_plan_latest_for_date AS bp_lfd
          ON DATE(bp_d.timestamp) = bp_lfd.log_date
          AND bp_d.timestamp = bp_lfd.latest_timestamp
          AND bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupplanname = bp_lfd.backupplanname
        LEFT JOIN latest_backup_vault AS bv_d
          ON bp_d.jsonpayload_v1_bdrbackupplandetailslog.backupvaultname = bv_d.backupvaultname
      ORDER BY
        `date` DESC,
        `backup_plan_name` ASC
      '
      create_view "backup_plan_details"
    }
    
    # Daily Scheduled Compliance
    create_daily_scheduled_compliance_routine_and_view() {
    create_routine "daily_scheduled_compliance" "TABLE_VALUED_FUNCTION" \
    '
    WITH
      DailyComplianceWindows AS (
      SELECT
        * EXCEPT (backup_rules),
        PARSE_TIMESTAMP('"'"'%F %H:%M %Z'"'"', date || '"'"' '"'"' || REGEXP_EXTRACT_ALL(backup_window, '"'"'[0-9][0-9]:[0-9][0-9]'"'"')[0] || '"'"' '"'"' || backup_window_time_zone) AS expected_start_time,
        resource_project_name || '"'"'/'"'"' || resource_location || '"'"'/'"'"' || resource_id AS unique_resource_name,
      IF
        (REGEXP_EXTRACT_ALL(backup_window, '"'"'[0-9][0-9]:[0-9][0-9]'"'"')[1] = '"'"'24:00'"'"', TIMESTAMP_ADD(PARSE_TIMESTAMP('"'"'%F %H:%M %Z'"'"', date || '"'"' 23:59 '"'"' || backup_window_time_zone), INTERVAL 1 MINUTE), PARSE_TIMESTAMP('"'"'%F %H:%M %Z'"'"', date || '"'"' '"'"' || REGEXP_EXTRACT_ALL(backup_window, '"'"'[0-9][0-9]:[0-9][0-9]'"'"')[1] || '"'"' '"'"' || backup_window_time_zone)) AS expected_end_time
      FROM
        `'$PROJECT_ID'.'$DATASET'.protected_resource_details`(FROM_DATE, TO_DATE)
    INNER JOIN
        UNNEST(backup_rules) AS rule
      ON
        rule.recurrence='"'"'Daily'"'"'),
      BackupJobs AS (
      SELECT
        *,
        resource_project_name || '"'"'/'"'"' || resource_location || '"'"'/'"'"' || resource_id AS unique_resource_name,
      FROM
        `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE, FORMAT_DATE('"'"'%Y%m%d'"'"', DATE_ADD(PARSE_DATE('"'"'%Y%m%d'"'"', TO_DATE), INTERVAL 10 DAY)))
      WHERE
        job_category IN ('"'"'ON_DEMAND_BACKUP'"'"',
          '"'"'SCHEDULED_BACKUP'"'"',
          '"'"'ON_DEMAND_OPERATIONAL_BACKUP'"'"') ),
      DailyCompliance AS (
      SELECT
        date,
        DailyComplianceWindows.resource_type,
        DailyComplianceWindows.resource_name,
        DailyComplianceWindows.resource_project_name as `resource_project_name`,
        DailyComplianceWindows.backup_plan_name,
        DailyComplianceWindows.rule_name,
        DailyComplianceWindows.backup_window,
        DailyComplianceWindows.backup_window_time_zone,
        (CASE BackupJobs.job_status
            WHEN '"'"'SUCCESSFUL'"'"' THEN '"'"'Successful'"'"'
            WHEN '"'"'RUNNING'"'"' THEN '"'"'Pending'"'"'
            ELSE
        IF
          (CURRENT_TIMESTAMP() <= DailyComplianceWindows.expected_end_time, '"'"'Pending'"'"', '"'"'Failed'"'"')
        END
          ) AS backup_schedule_compliance_status,
        (CASE BackupJobs.job_status
            WHEN '"'"'SUCCESSFUL'"'"' THEN '"'"'Job '"'"' || BackupJobs.job_name || '"'"' was successful.'"'"'
            WHEN '"'"'RUNNING'"'"' THEN '"'"'Job '"'"' || BackupJobs.job_name || '"'"' is running.'"'"'
            ELSE
        IF
          (CURRENT_TIMESTAMP() <= DailyComplianceWindows.expected_end_time, '"'"'Backup window for '"'"'|| date || '"'"' has not passed yet.'"'"', '"'"'No successful backup job detected within backup window.'"'"')
        END
          ) AS comment,
        DailyComplianceWindows.backup_vault_name,
        DailyComplianceWindows.backup_vault_project_name as `backup_vault_project_name`,
        DailyComplianceWindows.backup_vault_location as `backup_vault_location`,
        DailyComplianceWindows.resource_location,
        ROW_NUMBER() OVER (PARTITION BY BackupJobs.unique_resource_name, DailyComplianceWindows.date, DailyComplianceWindows.rule_name ORDER BY (CASE BackupJobs.job_status WHEN '"'"'SUCCESSFUL'"'"' THEN 4 WHEN '"'"'RUNNING'"'"' THEN 3 ELSE 1 END ) DESC,
          BackupJobs.job_start_time ASC) AS row_number,
      FROM
        DailyComplianceWindows
      LEFT JOIN
        BackupJobs
      ON
        (BackupJobs.unique_resource_name = DailyComplianceWindows.unique_resource_name
          AND BackupJobs.backup_rule = DailyComplianceWindows.rule_name
          AND BackupJobs.job_status = '"'"'SUCCESSFUL'"'"'
          AND BackupJobs.backup_consistency_time >= DailyComplianceWindows.expected_start_time
          AND BackupJobs.backup_consistency_time <= DailyComplianceWindows.expected_end_time)
        OR (BackupJobs.unique_resource_name = DailyComplianceWindows.unique_resource_name
          AND BackupJobs.backup_rule = DailyComplianceWindows.rule_name
          AND BackupJobs.job_status = '"'"'RUNNING'"'"'
          AND BackupJobs.job_start_time >= DailyComplianceWindows.expected_start_time
          AND BackupJobs.job_start_time <= DailyComplianceWindows.expected_end_time)
      ORDER BY
        date DESC,
        BackupJobs.resource_name ASC)
    SELECT
      * EXCEPT(row_number)
    FROM
      DailyCompliance
    WHERE
      row_number = 1
    '
    create_view "daily_scheduled_compliance"
    }
    
    # --- Main script ---
    
    # Get the default project ID
    DEFAULT_PROJECT_ID=$(gcloud config get-value project)
    read -r -p "Enter Project ID (default: $DEFAULT_PROJECT_ID, press Enter to continue): " PROJECT_ID
    
    # Use default if no input is provided
    if [ -z "$PROJECT_ID" ]; then
      PROJECT_ID=$DEFAULT_PROJECT_ID
    fi
    
    ## Generate Access Token
    echo "Generating access token..."
    ACCESS_TOKEN=$(gcloud auth print-access-token --quiet)
    if [ -z "$ACCESS_TOKEN" ]; then
      echo "Failed to retrieve access token" >&2
      exit 1
    fi
    echo "Access token generated successfully..."
    
    ## Check if the dataset exists
    echo "Check if Reporting Dataset exists..."
    if bq --project_id "$PROJECT_ID" ls | grep -q "$DATASET"; then
      echo "Dataset $DATASET exists for $PROJECT_ID. Continuing."
    else
      echo "Dataset $DATASET does not exist for $PROJECT_ID. Exiting."
      exit 0
    fi
    
    ## Check if the tables exist
    echo "Determining which tables are available in BigQuery..."
    check_jobs_table=$(bq --project_id "$PROJECT_ID" query --format=csv --use_legacy_sql=false "SELECT COUNT(*)>0 FROM ${DATASET}.INFORMATION_SCHEMA.TABLES WHERE table_name LIKE '%${JOBS_TABLE_PREFIX}%'" | tail -n +2 | tr -d ' ')
    check_protected_resource_table=$(bq --project_id "$PROJECT_ID" query --format=csv --use_legacy_sql=false "SELECT COUNT(*)>0 FROM ${DATASET}.INFORMATION_SCHEMA.TABLES WHERE table_name LIKE '%${PROTECTED_RESOURCE_TABLE_PREFIX}%'" | tail -n +2 | tr -d ' ')
    check_backup_vault_consumption_table=$(bq --project_id "$PROJECT_ID" query --format=csv --use_legacy_sql=false "SELECT COUNT(*)>0 FROM ${DATASET}.INFORMATION_SCHEMA.TABLES WHERE table_name LIKE '%${BACKUP_VAULT_TABLE_PREFIX}%'" | tail -n +2 | tr -d ' ')
    check_backup_plan_details_table=$(bq --project_id "$PROJECT_ID" query --format=csv --use_legacy_sql=false "SELECT COUNT(*)>0 FROM ${DATASET}.INFORMATION_SCHEMA.TABLES WHERE table_name LIKE '%${BACKUP_PLAN_TABLE_PREFIX}%'" | tail -n +2 | tr -d ' ')
    
    echo "Creating routines and views based on available tables..."
    if [ "$check_jobs_table" == 'true' ]; then
    create_jobs_routines_and_views
    fi
    
    if [ "$check_protected_resource_table" == 'true' ]; then
    create_protected_resource_routines_and_views
    fi
    
    if [ "$check_backup_vault_consumption_table" == 'true' ]; then
    create_backup_vault_consumption_routines_and_views
    fi
    
    if [ "$check_backup_plan_details_table" == 'true' ]; then
    create_backup_plan_details_routines_and_views
    fi
    
    if [ "$check_jobs_table" == 'true' ] && [ "$check_protected_resource_table" == 'true' ]; then
    create_daily_scheduled_compliance_routine_and_view
    fi
    
    if [ "$check_jobs_table" == 'false' ] || [ "$check_protected_resource_table" == 'false' ] || [ "$check_backup_vault_consumption_table" == 'false' ] || [ "$check_backup_plan_details_table" == 'false' ]; then
    echo -e "\e[1m\e[33mNot all prebuilt reports could be created in BigQuery because one or more report logs are missing from the dataset $DATASET."
    echo -e "Make sure that you waited at least 8 hours after creating the sink before running this script to create the prebuilt reports. Re-run the script after some time to fix the issue."
    echo -e "Reach out to Google Cloud Support if you are still facing this issue.\e[0m"
    else
    echo "Setup completed."
    fi
    
  4. Save the file with a name that uses the Bash file extension (for example, backupdrreports.sh).

  5. Run the bash command with the file that you just created. For example, bash backupdrreports.sh.

    You can view the prebuilt reports in the dataset in BigQuery Studio.
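As a quick check from the command line, you can also call one of the generated table-valued functions with the bq CLI. The sketch below only builds and prints the query; the project and dataset names are hypothetical placeholders, and the generated functions take FROM_DATE and TO_DATE as YYYYMMDD strings:

```shell
# Hypothetical names; substitute your own project and reporting dataset.
PROJECT_ID="my-project"
DATASET="bdr_report_dataset"

# Build the query string: the table-valued functions take YYYYMMDD date bounds.
SQL='SELECT * FROM `'"${PROJECT_ID}"'.'"${DATASET}"'.daily_scheduled_compliance`("20250101", "20250131")'
echo "$SQL"

# To actually run it (requires gcloud auth and the routines created by the script):
# bq --project_id "$PROJECT_ID" query --use_legacy_sql=false "$SQL"
```

The same pattern works for the other routines the script creates, such as protected_resource_details and backup_vault_consumption.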

After you configure the prebuilt reports, you can view them in Looker Studio.
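One detail worth understanding before editing the script: the recurring '"'"' sequences inside the SQL bodies are a standard shell idiom for placing a literal single quote inside a single-quoted string. The sequence closes the single-quoted string, emits a double-quoted ', and then reopens the single-quoted string. A minimal sketch of the same pattern:

```shell
# '"'"' = close the single-quoted string, emit a literal ' (double-quoted), reopen it.
s='WHERE recurrence = '"'"'Daily'"'"' AND 1=1'
echo "$s"
```

This prints WHERE recurrence = 'Daily' AND 1=1, which is why the SQL literals in the script survive being wrapped in a single-quoted shell argument.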

What's next