Set up prebuilt reports in BigQuery for vaulted backups

This page describes how to set up and view prebuilt reports in BigQuery for resources that are protected using backup plans.

To set up reports, you need to complete a one-time activity: create a log sink to stream logging data into BigQuery, and then run the prebuilt script. If you want to set up and view prebuilt reports in BigQuery for resources protected using a backup template in the management console, see Set up prebuilt reports in BigQuery.

Required IAM roles

The following IAM roles are required to view prebuilt reports in BigQuery for vaulted backups. Learn how to grant an IAM role. For a sample grant command, see the sketch after the table.

Role: Logs Configuration Writer (roles/logging.configWriter) or Logging Admin (roles/logging.admin), and BigQuery Data Editor (roles/bigquery.dataEditor)
When to grant it: To create a sink and a BigQuery dataset from the Google Cloud console.

Role: Owner (roles/owner)
When to grant it: To create a sink and a BigQuery dataset with the Google Cloud CLI.

Role: BigQuery Admin (roles/bigquery.admin)
When to grant it: To write custom queries or download queries.
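
If you need to grant one of these roles from the command line, a single gcloud command per role is enough. The following is a minimal sketch, assuming PROJECT_ID and USER_EMAIL are placeholders that you replace with your project ID and the principal's email address; the role shown is only one example from the table.

    # Grant the Logs Configuration Writer role to a user on the project.
    # Repeat with the other role IDs from the table as needed.
    gcloud projects add-iam-policy-binding PROJECT_ID \
      --member="user:USER_EMAIL" \
      --role="roles/logging.configWriter"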

Create a sink and route logs to BigQuery for vaulted backups

BigQuery only stores logs that are generated after the log sink is created. Logs generated before the log sink was created aren't visible in BigQuery. You can create the log sink from the Google Cloud console or with the Google Cloud CLI.
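
Before you create the sink, you can optionally check whether a sink named bdr_report_sink already exists in the project. The following is a minimal sketch, assuming the gcloud CLI is authenticated and PROJECT_ID is replaced with your project ID.

    # List existing log sinks and their destinations in the project.
    gcloud logging sinks list --project=PROJECT_ID --format="table(name, destination)"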

To create a sink and route logs to BigQuery for vaulted backups, follow these steps:

Console

  1. In the Google Cloud console, go to the Log Router page:

    Go to Log Router

  2. Select an existing Google Cloud project.
  3. Click Create sink.
  4. In the Sink details panel, complete the following fields:
    1. Sink name: Enter bdr_report_sink as the sink name. You must use the sink name bdr_report_sink so that Backup and DR report logs can be identified among other sinks.
    2. Sink description: Describe the purpose or use case for the sink.
  5. In the Sink destination panel, do the following:

    1. In the Select sink service menu, select the BigQuery dataset sink service.
    2. In Select BigQuery dataset, select Create new BigQuery dataset.
    3. On the Create dataset page, do the following:
      1. For Dataset ID, enter the dataset name as BackupandDR_reports to distinguish it from other datasets. Don't change the dataset name (BackupandDR_reports).
      2. For Location type, select a geographic location for the dataset. After a dataset is created, the location can't be changed.
      3. Optional: If you want tables in this dataset to expire, select Enable table expiration, and then specify the default maximum table age in days.
      4. Click Create dataset.
  6. In the Choose logs to include in sink panel, do the following:

    1. In the Build inclusion filter field, enter the following filter expression, which matches the log entries that you want to include.

       logName=~"projects/PROJECT_ID/logs/backupdr.googleapis.com%2Fgcb_*"

    2. To verify that you entered the correct filter, select Preview logs. Logs Explorer opens in a new tab with the filter pre-populated. You can also test the filter from the command line, as shown in the sketch after these steps.

  7. Optional: In the Choose logs to filter out of sink panel, do the following:

    1. In the Exclusion filter name field, enter a name.
    2. In the Build an exclusion filter field, enter a filter expression that matches the log entries that you want to exclude. You can also use the sample function to select a portion of the log entries to exclude.

  8. Click Create sink.

    You can view the dataset in BigQuery Studio.
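
You can also test the inclusion filter from the command line before or after creating the sink. The following is a minimal sketch, assuming the gcloud CLI is authenticated and PROJECT_ID is replaced with your project ID; the filter is the same expression used in step 6.

    # Print a few recent log entries that match the sink's inclusion filter.
    gcloud logging read \
      'logName=~"projects/PROJECT_ID/logs/backupdr.googleapis.com%2Fgcb_*"' \
      --project=PROJECT_ID \
      --limit=5 \
      --format="table(timestamp, logName)"

If the command returns no entries, the protected resources might not have generated any Backup and DR logs yet.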

gcloud

  1. Go to Activate Cloud Shell, and then click Open Editor.
  2. Click File, and then select New Text File.
  3. Copy and paste the following script.

       #!/bin/bash
        echo "This script will set up a log sink for BackupDR reports to be available in BigQuery"
    
        # Get the default project ID
        DEFAULT_PROJECT_ID=$(gcloud config get-value project)
        read -p "Enter Project ID (default: $DEFAULT_PROJECT_ID, press Enter to continue):" PROJECT_ID
    
        # Use default if no input is provided
        if [ -z "$PROJECT_ID" ]; then
          PROJECT_ID=$DEFAULT_PROJECT_ID
        fi
          # Set the project ID
        result=$(gcloud config set project $PROJECT_ID)
        if [ $? -ne 0 ]; then
          echo "Error setting the project to $PROJECT_ID"
          exit 1
        fi
          # --- Check if BigQuery API is already enabled, enable if not ---
        echo "Checking if BigQuery API is enabled..."
        if gcloud services list | grep "bigquery.googleapis.com" >/dev/null; then
          echo "BigQuery API is already enabled for $PROJECT_ID"
        else
          echo "For logs to be available in BigQuery, we need to enable BigQuery service in the project if not done already. This might mean additional costs incurred. Please check the associated costs before proceeding."
          read -p "Do you want to continue(Y/N)?" continue
          if [ "$continue" = "y" ] || [ "$continue" = "Y" ]; then
            echo "Enabling BigQuery API..."
            result=$(gcloud services enable bigquery.googleapis.com --project $PROJECT_ID)
            if [ $? -eq 0 ]; then
              echo "Successfully enabled BigQuery api for $PROJECT_ID"
            else
              echo "Error in setting up the BigQuery api for the project. $result"
              exit 1
            fi
          else
            exit 0
          fi
        fi
          # --- Check if BigQuery data set already exists, create if not ---
        echo "Checking if BigQuery data set exists..."
        if bq ls | grep "bdr_reports" >/dev/null; then
          echo "Dataset bdr_reports already exists for $PROJECT_ID"
        else
          echo "Creating bigQuery dataset bdr_reports..."
          # --- Get dataset location from user (default: US) ---
          read -p "Enter dataset location (default: US, press Enter to use): " DATASET_LOCATION
          if [ -z "$DATASET_LOCATION" ]; then
            DATASET_LOCATION="US"
          fi
          # --- Get table expiration in days from user (default: no expiration) ---
          read -p "Enter default table expiration in days (default: no expiration, press Enter to skip): " TABLE_EXPIRATION_DAYS
    
          # Calculate table expiration in seconds if provided
          if [ -n "$TABLE_EXPIRATION_DAYS" ]; then
            TABLE_EXPIRATION_SECONDS=$((TABLE_EXPIRATION_DAYS * 24 * 60 * 60))
            EXPIRATION_FLAG="--default_table_expiration $TABLE_EXPIRATION_SECONDS"
          else
            EXPIRATION_FLAG=""
          fi
          result=$(bq --location=$DATASET_LOCATION mk $EXPIRATION_FLAG bdr_reports)
          if [ $? -eq 0 ]; then
            echo "Created a BigQuery dataset bdr_reports successfully."
          else
            echo ""
            echo "ERROR : Failed to create the BigQuery dataset."
            echo $result
            exit 1
          fi
        fi
          # --- Check if Log Sink already exists, create if not ---
        echo "Checking if Log Sink exists..."
        if gcloud logging sinks list | grep "bdr_report_sink" >/dev/null; then
          echo "Log Sink bdr_reports_sink already exists for $PROJECT_ID"
        else
          log_filter="projects/$PROJECT_ID/logs/backupdr.googleapis.com%2Fbdr_*"
          echo "Creating log sink bdr_reports..."
          result=$(gcloud logging sinks create bdr_report_sink bigquery.googleapis.com/projects/$PROJECT_ID/datasets/bdr_reports --log-filter="logName=~\"$log_filter\"")
          if [ $? -eq 0 ]; then
            echo "Created a logsink bdr_reports_sink successfully."
          else
            echo ""
            echo "ERROR : Failed to create logsink."
            exit 1
          fi
        fi
    
        # --- Add IAM Policy binding for Cloud logging service account to write logs to BigQuery ---
        result=$(gcloud projects add-iam-policy-binding $(gcloud projects describe $PROJECT_ID --format="value(projectNumber)") --member=serviceAccount:service-$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")@gcp-sa-logging.iam.gserviceaccount.com --role=roles/bigquery.dataEditor --condition=None)
        if [ $? -eq 0 ]; then
          echo "Added permission for cloud logging to write to BigQuery datasets"
        else
          echo ""
          echo "ERROR : Failed to add permissions for cloud logging to write to BigQuery datasets. Please make sure that you have Owner access rights in order to be able to proceed."
          exit 1
        fi
    
        echo "Setup complete. The logs for the project $PROJECT_ID will now start flowing to bigquery."
        exit 0
    
    
  4. Save the file with a name that has a Bash file extension, for example, script.sh.

  5. Run the bash command using the file that you just created. For example, bash script.sh.

    You can see the created dataset in BigQuery Studio. To verify the sink and dataset from the command line, see the sketch that follows these steps.
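
After the script finishes, you can optionally verify that the sink and the dataset were created. The following is a minimal sketch, assuming PROJECT_ID is replaced with your project ID; bdr_report_sink and bdr_reports are the names that the script creates.

    # Show the sink and its BigQuery destination.
    gcloud logging sinks describe bdr_report_sink --project=PROJECT_ID

    # Confirm that the destination dataset exists.
    bq --project_id=PROJECT_ID show bdr_reports

Date-suffixed log tables such as backupdr_googleapis_com_bdr_backup_restore_jobs_YYYYMMDD appear in the dataset only after new log entries are routed, which can take several hours.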

Set up prebuilt reports for vaulted backups

You can set up prebuilt reports by running the following script against the dataset that the log sink routes logs to.

The script adds the following prebuilt reports: backup_jobs_summary, backup_jobs_details, successful_backup_jobs, failed_backup_jobs, skipped_backup_jobs, restore_jobs_summary, restore_jobs_details, protected_resource_details, backup_vault_consumption, and daily_scheduled_compliance.

To set up prebuilt reports in BigQuery for vaulted backups, follow these steps:

gcloud

  1. Go to Activate Cloud Shell, and then click Open Editor.
  2. Click File, and then select New Text File.
  3. Copy and paste the following prebuilt reports script.

      #!/bin/bash
    
      # Default config values. Modify only if required.
      DATASET="bdr_reports"
      JOBS_TABLE_PREFIX="backupdr_googleapis_com_bdr_backup_restore_jobs_"
      PROTECTED_RESOURCE_TABLE_PREFIX="backupdr_googleapis_com_bdr_protected_resource_"
      BACKUP_VAULT_TABLE_PREFIX="backupdr_googleapis_com_bdr_backup_vault_details_"
      VIEW_FROM_DATE_SUFFIX="20000101"  #Use an old enough date so that the views should contain all the data.
    
      # Function to check if a routine exists
      routine_exists() {
      local routine_name="$1"
      bq --project_id $PROJECT_ID ls --routines "$DATASET" | grep "$routine_name" >/dev/null
      }
    
      # Function to check if a view exists
      view_exists() {
      local view_name="$1"
      check_view=$(bq --project_id $PROJECT_ID query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from $DATASET.INFORMATION_SCHEMA.VIEWS WHERE table_name='$view_name'" | tail -n +2 | tr -d ' ')
      if [ $check_view == 'true' ]; then
        return 0
      fi
      return 1
      }
    
      # Function to create a routine
      create_routine() {
      local routine_name="$1"
      local routine_type="$2"
      local definition_body="$3"
        if routine_exists "$routine_name"; then
        echo "Routine $routine_name already exists. Skipping."
        return 0
      fi
      # Construct the routine body JSON
      local routine_body='
        {
        "definitionBody": "'"$definition_body"'",
        "routineType": "'"$routine_type"'",
        "routineReference": {
            "projectId": "'"$PROJECT_ID"'",
            "datasetId": "'"$DATASET"'",
            "routineId": "'"$routine_name"'"
        },
        "arguments": [
            {
            "name": "FROM_DATE",
            "dataType": {
                "typeKind": "STRING"
            }
            },
            {
            "name": "TO_DATE",
            "dataType": {
                "typeKind": "STRING"
            }
            }
        ]
        }
      '
    
      # Construct the curl command with the routine body
      local curl_command=(
        curl -s --request POST \
          "https://bigquery.googleapis.com/bigquery/v2/projects/$PROJECT_ID/datasets/$DATASET/routines" \
          --header "Authorization: Bearer $ACCESS_TOKEN" \
          --header 'Accept: application/json' \
          --header 'Content-Type: application/json' \
          --data "$routine_body" --compressed
      )
        # Execute the curl command and capture the result
      local result=$( "${curl_command[@]}" )
    
      # Check if creation was successful
      if echo "$result" | grep '"etag":' > /dev/null; then
        echo "Routine '$routine_name' created successfully."
      else
        echo "$result"
        exit 1
      fi
      }
    
      # Function to create a view
      create_view() {
      local view_name="$1"
      local parent_routine_name="$1"
      local view_query='
        SELECT * FROM `'$PROJECT_ID'.'$DATASET'.'$parent_routine_name'`("'$VIEW_FROM_DATE_SUFFIX'", FORMAT_DATE("%Y%m%d", CURRENT_DATE()));
      '
    
      # Check if view exists.
      if routine_exists "$parent_routine_name"; then
        view_exists "$view_name"
        if [ $? -eq 0 ]; then
          echo "View $view_name already exists. Skipping."
          return 0
        fi
        # Create the view if it does not exist
        bq --project_id $PROJECT_ID mk --use_legacy_sql=false --expiration 0 --view "$view_query" $DATASET.$view_name
      else
        echo "Routine $parent_routine_name is not present. Skipping the creation of $view_name."
        return 1
      fi
      }
    
      # Create all jobs related routines and views
      create_jobs_routines_and_views() {
      # Backup Restore Jobs
      create_routine "backup_restore_jobs" "TABLE_VALUED_FUNCTION" \
      '
      WITH
        dat AS (
        SELECT
          *,
          LAG(insertId, 1, '"'"'placeholder_id'"'"') OVER (ORDER BY timestamp, insertId) = insertId AS `insert_id_is_same`,
          LAG(timestamp, 1, TIMESTAMP_SECONDS(1)) OVER (ORDER BY timestamp, insertId) = timestamp AS `timestamp_is_same`
        FROM
          `'$PROJECT_ID'.'$DATASET'.'$JOBS_TABLE_PREFIX'*`
        WHERE
          _TABLE_SUFFIX >= FROM_DATE
          AND _TABLE_SUFFIX <= TO_DATE
          AND jsonpayload_v1_bdrbackuprestorejoblog.resourceType IN ('"'"'Compute Engine'"'"') ),
        deduped AS (
        SELECT
          * EXCEPT (insert_id_is_same,
            timestamp_is_same),
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).endtime) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).endtime), NULL) AS job_end_time,
    
        FROM
          dat
        WHERE
          NOT (insert_id_is_same
            AND timestamp_is_same) ),
        latest AS (
        SELECT
          jsonpayload_v1_bdrbackuprestorejoblog.jobId AS job_id,
          MAX(CASE jsonpayload_v1_bdrbackuprestorejoblog.jobStatus
              WHEN '"'"'RUNNING'"'"' THEN 1
              WHEN '"'"'SKIPPED'"'"' THEN 2
              WHEN '"'"'SUCCESSFUL'"'"' THEN 3
              WHEN '"'"'FAILED'"'"' THEN 4
          END
            ) AS final_status_code
        FROM
          deduped
        GROUP BY
          jsonpayload_v1_bdrbackuprestorejoblog.jobid),
        filled_latest AS (
        SELECT
          jsonpayload_v1_bdrbackuprestorejoblog.jobId AS job_id,
          jsonpayload_v1_bdrbackuprestorejoblog.jobStatus AS final_status,
          PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrbackuprestorejoblog.startTime) AS parsed_start_time,
          PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', deduped.job_end_time) AS parsed_end_time,
          TIMESTAMP_DIFF(PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', deduped.job_end_time), PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrbackuprestorejoblog.startTime), HOUR) AS duration_hours,
          MOD(TIMESTAMP_DIFF(PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', deduped.job_end_time), PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrbackuprestorejoblog.startTime), MINUTE), 60) AS duration_minutes,
          MOD(TIMESTAMP_DIFF(PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', deduped.job_end_time), PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrbackuprestorejoblog.startTime), SECOND), 60) AS duration_seconds,
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).restoreresourcename) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).restoreresourcename), NULL) AS restore_resource_name,
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errortype) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errortype), NULL) AS error_type,
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errormessage) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errormessage), NULL) AS error_message,
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupname) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupname), NULL) AS backup_name,
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupplanname) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupplanname), NULL) AS backup_plan_name,
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backuprule) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backuprule), NULL) AS backup_rule,
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupconsistencytime) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).backupconsistencytime), NULL) AS backup_consistency_time,
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).incrementalbackupsizegib) = '"'"'number'"'"', FLOAT64(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).incrementalbackupsizegib), NULL) AS incremental_backup_size_in_gib,
        IF
          (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errorcode) = '"'"'number'"'"', FLOAT64(TO_JSON(jsonpayload_v1_bdrbackuprestorejoblog).errorcode), NULL) AS error_code,
    
        FROM
          deduped
        JOIN
          latest
        ON
          latest.job_id = jsonpayload_v1_bdrbackuprestorejoblog.jobId
          AND jsonpayload_v1_bdrbackuprestorejoblog.jobStatus = (
            CASE final_status_code
              WHEN 1 THEN '"'"'RUNNING'"'"'
              WHEN 2 THEN '"'"'SKIPPED'"'"'
              WHEN 3 THEN '"'"'SUCCESSFUL'"'"'
              WHEN 4 THEN '"'"'FAILED'"'"'
          END
            ) )
      SELECT
        FORMAT_TIMESTAMP('"'"'%m%d'"'"', filled_latest.parsed_start_time, '"'"'UTC'"'"') || '"'"'-'"'"' || ARRAY_TO_STRING(REGEXP_EXTRACT_ALL(jsonpayload_v1_bdrbackuprestorejoblog.resourceType, '"'"'\\\\b[A-Z]'"'"'), '"'"''"'"') || '"'"'-'"'"' || (CASE jsonpayload_v1_bdrbackuprestorejoblog.jobCategory
            WHEN '"'"'ON_DEMAND_BACKUP'"'"' THEN '"'"'B'"'"'
            WHEN '"'"'SCHEDULED_BACKUP'"'"' THEN '"'"'B'"'"'
            WHEN '"'"'RESTORE'"'"' THEN '"'"'R'"'"'
        END
          ) || '"'"'-'"'"' || REPLACE(FORMAT_TIMESTAMP('"'"'%H%M%E3S'"'"', filled_latest.parsed_start_time, '"'"'UTC'"'"'), '"'"'.'"'"', '"'"''"'"') AS `job_name`,
        REGEXP_EXTRACT(jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceName, '"'"'([^/]+)$'"'"') AS `resource_name`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceName, '"'"'/'"'"')) < 2, NULL, SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceName,'"'"'/'"'"')[1]) AS `resource_project_name`,
        jsonpayload_v1_bdrbackuprestorejoblog.jobStatus AS `job_status`,
        jsonpayload_v1_bdrbackuprestorejoblog.jobCategory AS `job_category`,
        Filled_latest.error_type AS `error_type`,
        CAST(filled_latest.error_code AS INT64) AS `error_code`,
        filled_latest.error_message AS `error_message`,
        parsed_start_time AS `job_start_time`,
        parsed_end_time AS `job_end_time`,
      IF
        (duration_hours > 0, duration_hours || '"'"'h '"'"', '"'"''"'"') ||
      IF
        (duration_minutes > 0, duration_minutes || '"'"'m '"'"', '"'"''"'"') ||
      IF
        (duration_seconds > 0, duration_seconds || '"'"'s '"'"', '"'"''"'"') AS `duration`,
        ROUND(filled_latest.incremental_backup_size_in_gib, 2) AS `incremental_backup_size_in_gib`,
        PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', filled_latest.backup_consistency_time) AS `backup_consistency_time`,
      IF
        (ARRAY_LENGTH(SPLIT(filled_latest.backup_plan_name, '"'"'/'"'"')) < 6, NULL, SPLIT(filled_latest.backup_plan_name,'"'"'/'"'"')[5]) AS `backup_plan_name`,
        filled_latest.backup_rule AS `backup_rule`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.backupVaultName, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.backupVaultName,'"'"'/'"'"')[5]) AS `backup_vault_name`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.backupVaultName, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.backupVaultName,'"'"'/'"'"')[1]) AS `backup_vault_project_name`,
      IF
        (ARRAY_LENGTH(SPLIT(filled_latest.backup_name, '"'"'/'"'"')) < 10, NULL, SPLIT(filled_latest.backup_name,'"'"'/'"'"')[9]) AS `backup_name`,
      IF
        (ARRAY_LENGTH(SPLIT(filled_latest.restore_resource_name, '"'"'/'"'"')) < 6, NULL, SPLIT(filled_latest.restore_resource_name,'"'"'/'"'"')[1]) AS `restore_project_name`,
      IF
        (ARRAY_LENGTH(SPLIT(filled_latest.restore_resource_name, '"'"'/'"'"')) < 6, NULL, SPLIT(filled_latest.restore_resource_name,'"'"'/'"'"')[5]) AS `restore_resource_name`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceName, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceName,'"'"'/'"'"')[3]) AS `resource_location`,
        jsonpayload_v1_bdrbackuprestorejoblog.resourceType AS `resource_type`,
        jsonpayload_v1_bdrbackuprestorejoblog.sourceResourceId AS `resource_id`,
        jsonpayload_v1_bdrbackuprestorejoblog.jobId AS `job_id`,
        timestamp AS `log_timestamp`
      FROM
        deduped
      JOIN
        filled_latest
      ON
        filled_latest.job_id = jsonpayload_v1_bdrbackuprestorejoblog.jobId
        AND jsonpayload_v1_bdrbackuprestorejoblog.jobStatus = filled_latest.final_status
      ORDER BY
        parsed_start_time DESC
      '
    
      # Backup Jobs Summary
      create_routine "backup_jobs_summary" "TABLE_VALUED_FUNCTION" \
      '
      SELECT
          DATE(log_timestamp) AS `date`,
          resource_project_name,
          job_category,
          ROUND((COUNTIF(job_status = '"'"'SUCCESSFUL'"'"')/COUNT(*)) * 100, 2) AS `success_percent`,
          COUNT(*) AS `total`,
          COUNTIF(job_status = '"'"'SUCCESSFUL'"'"') AS `successful`,
          COUNTIF(job_status = '"'"'FAILED'"'"') AS `failed`,
          COUNTIF(job_status = '"'"'SKIPPED'"'"') AS `skipped`,
        FROM
          `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE,
            TO_DATE)
        WHERE job_category IN ('"'"'ON_DEMAND_BACKUP'"'"', '"'"'SCHEDULED_BACKUP'"'"') AND job_status IN ('"'"'SUCCESSFUL'"'"', '"'"'FAILED'"'"', '"'"'SKIPPED'"'"')
        GROUP BY
          `date`,
          resource_project_name,
          job_category
        ORDER BY
          `date` DESC
      '
      create_view "backup_jobs_summary"
    
      # Backup Jobs Details
      create_routine "backup_jobs_details" "TABLE_VALUED_FUNCTION" \
      '
      SELECT
          job_name,
          resource_name,
          resource_project_name,
          job_status,
          error_type,
          error_code,
          error_message,
          job_start_time,
          job_end_time,
          duration,
          incremental_backup_size_in_gib,
          backup_consistency_time,
          backup_plan_name,
          backup_rule,
          backup_vault_name,
          backup_vault_project_name,
          resource_location,
          resource_type,
          resource_id,
          job_category,
          job_id
        FROM
          `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE,
            TO_DATE)
        WHERE
          job_status IN ('"'"'SUCCESSFUL'"'"',
            '"'"'FAILED'"'"', '"'"'SKIPPED'"'"') AND job_category IN ('"'"'ON_DEMAND_BACKUP'"'"', '"'"'SCHEDULED_BACKUP'"'"')
      '
      create_view "backup_jobs_details"
    
      # Successful Backup Jobs Details
      create_routine "successful_backup_jobs" "TABLE_VALUED_FUNCTION" \
      'SELECT
          job_name,
          resource_name,
          resource_project_name,
          job_start_time,
          job_end_time,
          duration,
          incremental_backup_size_in_gib,
          backup_consistency_time,
          backup_plan_name,
          backup_rule,
          backup_vault_name,
          backup_vault_project_name,
          resource_type,
          resource_id,
          job_category,
          job_id
        FROM
          `'$PROJECT_ID'.'$DATASET'.backup_jobs_details`(FROM_DATE,
            TO_DATE) WHERE job_status = '"'"'SUCCESSFUL'"'"'
      '
      create_view "successful_backup_jobs"
    
      # Failed Backup Job Details
      create_routine "failed_backup_jobs" "TABLE_VALUED_FUNCTION" \
      '
      SELECT
          job_name,
          resource_name,
          resource_project_name,
          error_type,
          error_code,
          error_message,
          job_start_time,
          job_end_time,
          duration,
          backup_plan_name,
          backup_rule,
          backup_vault_name,
          backup_vault_project_name,
          resource_type,
          resource_id,
          job_category,
          job_id
        FROM
          `'$PROJECT_ID'.'$DATASET'.backup_jobs_details`(FROM_DATE,
            TO_DATE) WHERE job_status = '"'"'FAILED'"'"'
      '
      create_view "failed_backup_jobs"
    
      # Skipped Backup Job Details
      create_routine "skipped_backup_jobs" "TABLE_VALUED_FUNCTION" \
      '
      SELECT
          job_name,
          resource_name,
          resource_project_name,
          backup_plan_name,
          backup_rule,
          backup_vault_name,
          job_start_time,
          backup_vault_project_name,
          resource_type,
          resource_id,
          job_category,
          job_id
        FROM
          `'$PROJECT_ID'.'$DATASET'.backup_jobs_details`(FROM_DATE,
            TO_DATE)
        WHERE
          job_status = '"'"'SKIPPED'"'"'
      '
      create_view "skipped_backup_jobs"
    
      # Restore Jobs Summary
      create_routine "restore_jobs_summary" "TABLE_VALUED_FUNCTION" \
      '
      SELECT
          DATE(log_timestamp) AS `date`,
          restore_project_name,
          backup_vault_project_name,
          ROUND((COUNTIF(job_status = '"'"'SUCCESSFUL'"'"')/COUNT(*)) * 100, 2) AS `success_percent`,
          COUNT(*) AS `total`,
          COUNTIF(job_status = '"'"'SUCCESSFUL'"'"') AS `successful`,
          COUNTIF(job_status = '"'"'FAILED'"'"') AS `failed`,
        FROM
          `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE,
            TO_DATE)
        WHERE
          job_category = '"'"'RESTORE'"'"'
          AND job_status IN ('"'"'SUCCESSFUL'"'"',
            '"'"'FAILED'"'"')
        GROUP BY
          `date`,
          restore_project_name,
          backup_vault_project_name
        ORDER BY
          `date` DESC
      '
      create_view "restore_jobs_summary"
    
      # Restore Jobs Details
      create_routine "restore_jobs_details" "TABLE_VALUED_FUNCTION" \
      '
      SELECT
          job_name,
          resource_name,
          resource_project_name,
          job_status,
          error_type,
          error_code,
          error_message,
          job_start_time,
          job_end_time,
          duration,
          restore_resource_name,
          restore_project_name,
          backup_vault_name,
          backup_vault_project_name,
          resource_location,
          resource_type,
          resource_id,
          job_id
        FROM
          `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE,
            TO_DATE)
        WHERE
          job_category IN ('"'"'RESTORE'"'"')
          AND job_status IN ('"'"'SUCCESSFUL'"'"',
            '"'"'FAILED'"'"')
      '
      create_view "restore_jobs_details"
      }
    
      # Protected Resource Details
      create_protected_resource_routines_and_views() {
      create_routine "protected_resource_details" "TABLE_VALUED_FUNCTION" \
      '
      WITH
        dat AS (
        SELECT
          *,
          LAG(insertId, 1, '"'"'placeholder_id'"'"') OVER (ORDER BY timestamp, insertId) = insertId AS `insert_id_is_same`,
          LAG(timestamp, 1, TIMESTAMP_SECONDS(1)) OVER (ORDER BY timestamp, insertId) = timestamp AS `timestamp_is_same`
    
        FROM
          `'$PROJECT_ID'.'$DATASET'.'$PROTECTED_RESOURCE_TABLE_PREFIX'*` WHERE _TABLE_SUFFIX >= FROM_DATE AND _TABLE_SUFFIX <= TO_DATE
          ),
        deduped AS (
        SELECT
          * EXCEPT (`insert_id_is_same`,
            `timestamp_is_same`)
        FROM
          dat
        WHERE
          NOT (`insert_id_is_same`
            AND `timestamp_is_same`) ),
        latest_for_date AS (
        SELECT
          DATE(timestamp) AS `log_date`,
          jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename AS `source_resource_name`,
          MAX(timestamp) AS `latest_timestamp`
        FROM
          deduped
        GROUP BY
          `log_date`,
          `source_resource_name` )
      SELECT
        DATE(timestamp) AS `date`,
        jsonpayload_v1_bdrprotectedresourcelog.resourcetype AS `resource_type`,
        REGEXP_EXTRACT(jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename, '"'"'([^/]+)$'"'"') AS `resource_name`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename, '"'"'/'"'"')) < 2, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename,'"'"'/'"'"')[1]) AS `resource_project_name`,
        jsonpayload_v1_bdrprotectedresourcelog.sourceresourcedatasizegib AS `resource_data_size_in_gib`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupvaultname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupvaultname,'"'"'/'"'"')[5]) AS `backup_vault_name`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname,'"'"'/'"'"')[5]) AS `backup_plan_name`,
        (
        SELECT
          ARRAY_AGG(STRUCT(
              array_entry.rulename AS `rule_name`,
              array_entry.recurrence AS `recurrence`,
              array_entry.recurrenceschedule AS `recurrence_schedule`,
              array_entry.backupwindow AS `backup_window`,
              IF(array_entry.backupwindowtimezone = '"'"'Etc/UTC'"'"', '"'"'UTC'"'"',array_entry.backupwindowtimezone)  AS `backup_window_time_zone`,
              array_entry.retentiondays AS `retention_days`
            ))
        FROM
          UNNEST(jsonpayload_v1_bdrprotectedresourcelog.currentbackupruledetails) array_entry ) AS `backup_rules`,
      DATE(PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', jsonpayload_v1_bdrprotectedresourcelog.lastprotectedon)) AS `last_backup_plan_assoc_date`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname, '"'"'/'"'"')) < 2, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname,'"'"'/'"'"')[1]) AS `backup_vault_project_name`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname, '"'"'/'"'"')) < 4, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.currentbackupplanname,'"'"'/'"'"')[3]) AS `backup_vault_location`,
      IF
        (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename,'"'"'/'"'"')[3]) AS `resource_location`,
        jsonpayload_v1_bdrprotectedresourcelog.sourceresourceid AS `resource_id`,
      FROM
        deduped
      INNER JOIN
        latest_for_date
      ON
        DATE(timestamp) = latest_for_date.log_date
        AND timestamp = latest_for_date.latest_timestamp
        AND latest_for_date.`source_resource_name` = jsonpayload_v1_bdrprotectedresourcelog.sourceresourcename
      ORDER BY
        `date` DESC,
        `resource_name` ASC
      '
      create_view "protected_resource_details"
      }
    
      # Backup Vault Consumption
      create_backup_vault_consumption_routines_and_views() {
      create_routine "backup_vault_consumption" "TABLE_VALUED_FUNCTION" \
      '
      WITH
          dat AS (
          SELECT
            *,
          LAG(insertId, 1, '"'"'placeholder_id'"'"') OVER (ORDER BY timestamp, insertId) = insertId AS `insert_id_is_same`,
          LAG(timestamp, 1, TIMESTAMP_SECONDS(1)) OVER (ORDER BY timestamp, insertId) = timestamp AS `timestamp_is_same`
          FROM
            `'$PROJECT_ID'.'$DATASET'.'$BACKUP_VAULT_TABLE_PREFIX'*`
          WHERE
            jsonpayload_v1_bdrbackupvaultdetailslog.resourceType IN ('"'"'Compute Engine'"'"')
            AND _TABLE_SUFFIX >= FROM_DATE
            AND _TABLE_SUFFIX <= TO_DATE ),
          deduped AS (
          SELECT
            * EXCEPT (`insert_id_is_same`,
              `timestamp_is_same`)
          FROM
            dat
          WHERE
            NOT (`insert_id_is_same`
              AND `timestamp_is_same`) ),
          latest_for_date AS (
          SELECT
            DATE(timestamp) AS `log_date`,
            MAX(timestamp) AS `latest_timestamp`,
            jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename AS `source_resource_name`,
          FROM
            deduped
          GROUP BY
            log_date,
            `source_resource_name`)
        SELECT
          DATE(timestamp) AS `date`,
          jsonpayload_v1_bdrbackupvaultdetailslog.resourcetype AS `resource_type`,
          REGEXP_EXTRACT(jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename, '"'"'([^/]+)$'"'"') AS `resource_name`,
          SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename, '"'"'/'"'"')[1] AS `resource_project_name`,
        IF
          (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname,'"'"'/'"'"')[5]) AS `backup_vault_name`,
          jsonpayload_v1_bdrbackupvaultdetailslog.storedbytesgib AS `backup_vault_stored_bytes_in_gib`,
      CAST(jsonpayload_v1_bdrbackupvaultdetailslog.minimumEnforcedRetentionDays AS INT64) AS `backup_vault_minimum_enforced_retention_days`,
        IF
      (ARRAY_LENGTH(SPLIT(IF
      (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).currentbackupplanname) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).currentbackupplanname), NULL), '"'"'/'"'"')) < 6, NULL, SPLIT(IF
      (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).currentbackupplanname) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).currentbackupplanname), NULL),'"'"'/'"'"')[5]) AS `backup_plan_name`,
      PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', IF
      (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).firstavailablerestorepoint) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).firstavailablerestorepoint), NULL)) AS `first_available_restore_point`,
      PARSE_TIMESTAMP('"'"'%FT%H:%M:%E*S%Ez'"'"', IF
      (JSON_TYPE(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).lastavailablerestorepoint) = '"'"'string'"'"', STRING(TO_JSON(jsonpayload_v1_bdrbackupvaultdetailslog).lastavailablerestorepoint), NULL)) AS `last_available_restore_point`,
    
        IF
          (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname,'"'"'/'"'"')[1]) AS `backup_vault_project_name`,
        IF
          (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.backupvaultname,'"'"'/'"'"')[3]) AS `backup_vault_location`,
        IF
          (ARRAY_LENGTH(SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename, '"'"'/'"'"')) < 6, NULL, SPLIT(jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename,'"'"'/'"'"')[3]) AS `resource_location`,
        FROM
          deduped
        INNER JOIN
          latest_for_date
        ON
          DATE(timestamp) = latest_for_date.log_date
          AND timestamp = latest_for_date.latest_timestamp
          AND jsonpayload_v1_bdrbackupvaultdetailslog.sourceresourcename = latest_for_date.`source_resource_name`
        ORDER BY
          `date` DESC,
          `resource_name` ASC
      '
      create_view "backup_vault_consumption"
      }
    
      # Daily Scheduled Compliance
      create_daily_scheduled_compliance_routine_and_view() {
      create_routine "daily_scheduled_compliance" "TABLE_VALUED_FUNCTION" \
      '
      WITH
        DailyComplianceWindows AS (
        SELECT
          * EXCEPT (backup_rules),
          PARSE_TIMESTAMP('"'"'%F %H:%M %Z'"'"', date || '"'"' '"'"' || REGEXP_EXTRACT_ALL(backup_window, '"'"'[0-9][0-9]:[0-9][0-9]'"'"')[0] || '"'"' '"'"' || backup_window_time_zone) AS expected_start_time,
          resource_project_name || '"'"'/'"'"' || resource_location || '"'"'/'"'"' || resource_id AS unique_resource_name,
        IF
          (REGEXP_EXTRACT_ALL(backup_window, '"'"'[0-9][0-9]:[0-9][0-9]'"'"')[1] = '"'"'24:00'"'"', TIMESTAMP_ADD(PARSE_TIMESTAMP('"'"'%F %H:%M %Z'"'"', date || '"'"' 23:59 '"'"' || backup_window_time_zone), INTERVAL 1 MINUTE), PARSE_TIMESTAMP('"'"'%F %H:%M %Z'"'"', date || '"'"' '"'"' || REGEXP_EXTRACT_ALL(backup_window, '"'"'[0-9][0-9]:[0-9][0-9]'"'"')[1] || '"'"' '"'"' || backup_window_time_zone)) AS expected_end_time
        FROM
          `'$PROJECT_ID'.'$DATASET'.protected_resource_details`(FROM_DATE, TO_DATE)
        CROSS JOIN
          UNNEST(backup_rules) AS rule
        ON
          rule.recurrence='"'"'Daily'"'"'),
        BackupJobs AS (
        SELECT
          *,
          resource_project_name || '"'"'/'"'"' || resource_location || '"'"'/'"'"' || resource_id AS unique_resource_name,
        FROM
          `'$PROJECT_ID'.'$DATASET'.backup_restore_jobs`(FROM_DATE, FORMAT_DATE('"'"'%Y%m%d'"'"', DATE_ADD(PARSE_DATE('"'"'%Y%m%d'"'"', TO_DATE), INTERVAL 10 DAY)))
        WHERE
          job_category IN ('"'"'ON_DEMAND_BACKUP'"'"',
            '"'"'SCHEDULED_BACKUP'"'"') ),
        DailyCompliance AS (
        SELECT
          date,
          DailyComplianceWindows.resource_type,
          DailyComplianceWindows.resource_name,
          DailyComplianceWindows.resource_project_name as `resource_project_name`,
          DailyComplianceWindows.backup_plan_name,
          DailyComplianceWindows.rule_name,
          DailyComplianceWindows.backup_window,
          DailyComplianceWindows.backup_window_time_zone,
          (CASE BackupJobs.job_status
              WHEN '"'"'SUCCESSFUL'"'"' THEN '"'"'Successful'"'"'
              WHEN '"'"'RUNNING'"'"' THEN '"'"'Pending'"'"'
              ELSE
          IF
            (CURRENT_TIMESTAMP() <= DailyComplianceWindows.expected_end_time, '"'"'Pending'"'"', '"'"'Failed'"'"')
          END
            ) AS backup_schedule_compliance_status,
          (CASE BackupJobs.job_status
              WHEN '"'"'SUCCESSFUL'"'"' THEN '"'"'Job '"'"' || BackupJobs.job_name || '"'"' was successful.'"'"'
              WHEN '"'"'RUNNING'"'"' THEN '"'"'Job '"'"' || BackupJobs.job_name || '"'"' is running.'"'"'
              ELSE
          IF
            (CURRENT_TIMESTAMP() <= DailyComplianceWindows.expected_end_time, '"'"'Backup window for '"'"'|| date || '"'"' has not passed yet.'"'"', '"'"'No successful backup job detected within backup window.'"'"')
          END
            ) AS comment,
          DailyComplianceWindows.backup_vault_name,
          DailyComplianceWindows.backup_vault_project_name as `backup_vault_project_name`,
          DailyComplianceWindows.backup_vault_location as `backup_vault_location`,
          DailyComplianceWindows.resource_location,
          ROW_NUMBER() OVER (PARTITION BY BackupJobs.unique_resource_name, DailyComplianceWindows.date, DailyComplianceWindows.rule_name ORDER BY (CASE BackupJobs.job_status WHEN '"'"'SUCCESSFUL'"'"' THEN 4 WHEN '"'"'RUNNING'"'"' THEN 3 ELSE 1 END ) DESC,
            BackupJobs.job_start_time ASC) AS row_number,
        FROM
          DailyComplianceWindows
        LEFT JOIN
          BackupJobs
        ON
          (BackupJobs.unique_resource_name = DailyComplianceWindows.unique_resource_name
            AND BackupJobs.backup_rule = DailyComplianceWindows.rule_name
            AND BackupJobs.job_status = '"'"'SUCCESSFUL'"'"'
            AND BackupJobs.backup_consistency_time >= DailyComplianceWindows.expected_start_time
            AND BackupJobs.backup_consistency_time <= DailyComplianceWindows.expected_end_time)
          OR (BackupJobs.unique_resource_name = DailyComplianceWindows.unique_resource_name
            AND BackupJobs.backup_rule = DailyComplianceWindows.rule_name
            AND BackupJobs.job_status = '"'"'RUNNING'"'"'
            AND BackupJobs.job_start_time >= DailyComplianceWindows.expected_start_time
            AND BackupJobs.job_start_time <= DailyComplianceWindows.expected_end_time)
        ORDER BY
          date DESC,
          BackupJobs.resource_name ASC)
      SELECT
        * EXCEPT(row_number)
      FROM
        DailyCompliance
      WHERE
        row_number = 1
      '
      create_view "daily_scheduled_compliance"
      }
    
      # --- Main script ---
    
      # Get the default project ID
      DEFAULT_PROJECT_ID=$(gcloud config get-value project)
      read -p "Enter Project ID (default: $DEFAULT_PROJECT_ID, press Enter to continue):" PROJECT_ID
    
      # Use default if no input is provided
      if [ -z "$PROJECT_ID" ]; then
        PROJECT_ID=$DEFAULT_PROJECT_ID
      fi
    
      ## Generate Access Token
      echo "Generating access token..."
      ACCESS_TOKEN=$(gcloud auth print-access-token --quiet)
      if [ -z "$ACCESS_TOKEN" ]; then
        echo "Failed to retrieve access token" >&2
        exit 1
      fi
      echo "Access token generated successfully..."
    
      ## Check if the dataset exists
      echo "Check if Reporting Dataset exists..."
      if bq --project_id $PROJECT_ID ls | grep "$DATASET" >/dev/null; then
        echo "Dataset $DATASET exists for $PROJECT_ID. Continuing."
      else
        echo "Dataset $DATASET does not exist for $PROJECT_ID. Exiting."
        exit 0
      fi
    
      ## Check if the tables exist
      echo "Determining which tables are available in BigQuery..."
      check_jobs_table=$(bq --project_id $PROJECT_ID query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from "$DATASET".INFORMATION_SCHEMA.TABLES WHERE table_name like '%"$JOBS_TABLE_PREFIX"%'" | tail -n +2 | tr -d ' ')
      check_protected_resource_table=$(bq --project_id $PROJECT_ID query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from "$DATASET".INFORMATION_SCHEMA.TABLES WHERE table_name like '%"$PROTECTED_RESOURCE_TABLE_PREFIX"%'" | tail -n +2 | tr -d ' ')
      check_backup_vault_consumption_table=$(bq --project_id $PROJECT_ID query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from "$DATASET".INFORMATION_SCHEMA.TABLES WHERE table_name like '%"$BACKUP_VAULT_TABLE_PREFIX"%'" | tail -n +2 | tr -d ' ')
    
      echo "Creating routines and views based on available tables.."
      if [ "$check_jobs_table" == 'true' ]; then
      create_jobs_routines_and_views
      fi
    
      if [ "$check_protected_resource_table" == 'true' ]; then
      create_protected_resource_routines_and_views
      fi
    
      if [ "$check_backup_vault_consumption_table" == 'true' ]; then
      create_backup_vault_consumption_routines_and_views
      fi
    
      if [ "$check_jobs_table" == 'true' ] && [ "$check_protected_resource_table" == 'true' ] ; then
      create_daily_scheduled_compliance_routine_and_view
      fi
    
      if [ "$check_jobs_table" == 'false' ] || [ "$check_protected_resource_table" == 'false' ] || [ "$check_backup_vault_consumption_table" == 'false' ]; then
      echo -e "\e[1m\e[33mAll the prebuilt reports could not be created successfully in BigQuery as one or more report logs are missing in the dataset $DATASET."
      echo -e "Please ensure that you have waited for at least 8 hours after creating the sink, and before running the script to create pre built reports. Try re-running the script again after some time to fix the issue."
      echo -e "Reach out to Google Cloud Support in case you are still facing this issue.\e[0m"
      else
      echo "Set up completed..."
      fi
    
    
  4. Save the file with a name that has a Bash file extension, for example, backupdrreports.sh.

  5. Run the bash command using the file that you just created. For example, bash backupdrreports.sh.

    You can view the prebuilt reports under the dataset in BigQuery Studio. To query a report from the command line, see the sketch that follows these steps.
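
Each prebuilt report is created as a table-valued function plus a view that calls the function over all dates, so you can query either one. The following is a minimal sketch, assuming PROJECT_ID is replaced with your project ID and that the backup_jobs_summary report was created by the script; the date range is only an example and uses the same YYYYMMDD format as the script.

    # Query the view, which covers all dates.
    bq --project_id=PROJECT_ID query --use_legacy_sql=false \
      'SELECT * FROM `PROJECT_ID.bdr_reports.backup_jobs_summary` LIMIT 10'

    # Query the table-valued function for a specific date range.
    bq --project_id=PROJECT_ID query --use_legacy_sql=false \
      'SELECT * FROM `PROJECT_ID.bdr_reports.backup_jobs_summary`("20250101", "20250131")'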

After you set up prebuilt reports, you can view them in Looker Studio or download prebuilt reports to a CSV file.
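
To download a report as a CSV file from the command line, you can write the query results directly to a file. The following is a minimal sketch, assuming PROJECT_ID is replaced with your project ID and that the failed_backup_jobs view exists; the row limit and output file name are only examples.

    # Export the failed backup jobs report to a CSV file.
    bq --project_id=PROJECT_ID query --use_legacy_sql=false --format=csv --max_rows=10000 \
      'SELECT * FROM `PROJECT_ID.bdr_reports.failed_backup_jobs`' > failed_backup_jobs.csv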

What's next