This page explains how to set up and view prebuilt reports in BigQuery. These reports are generated from logs in Cloud Logging. To set up the reports, you perform a one-time task of creating a log sink that streams log data to BigQuery, and then run a prebuilt script.
Required IAM roles
To view prebuilt reports in BigQuery, you need the following IAM roles. Learn how to grant IAM roles.
| Role | When to grant the role |
|---|---|
| Logs Configuration Writer (`roles/logging.configWriter`) or Logging Admin (`roles/logging.admin`), and BigQuery Data Editor (`roles/bigquery.dataEditor`) | Create the sink and the BigQuery dataset in the Google Cloud console. |
| Owner (`roles/owner`) | Create the sink and the BigQuery dataset with the Google Cloud CLI. |
| BigQuery Admin (`roles/bigquery.admin`) | Write custom queries or download queries. |
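A role can also be granted from the command line. The following is a sketch; `PROJECT_ID` and `USER_EMAIL` are placeholders that you supply:

```shell
# Grant the Logs Configuration Writer role to a user on the project
# (PROJECT_ID and USER_EMAIL are placeholder values).
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:USER_EMAIL" \
  --role="roles/logging.configWriter"
```

Repeat the command with the other roles listed in the table as needed.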
Create a sink and route logs to BigQuery
BigQuery stores only the logs generated after the log sink is created. Logs generated before the log sink was created don't appear in BigQuery. You can create the log sink in the Google Cloud console or with the Google Cloud CLI.

To create a sink and route logs to BigQuery, follow these steps:
Console
- In the Google Cloud console, go to the Log Router page.
- Select an existing Google Cloud project.
- Click Create sink.
- In the Sink details panel, fill in the following fields:
  - Sink name: enter `BackupandDR_reports_sink` as the sink name. You must use the sink name `BackupandDR_reports_sink` so that Backup and DR reports can be distinguished from other sinks.
  - Sink description: describe the purpose or use case of the sink.
- In the Sink destination panel, do the following:
  - In the Select sink service menu, select the BigQuery dataset sink service.
  - For Select BigQuery dataset, select Create new BigQuery dataset.
  - On the Create dataset page, do the following:
    - For Dataset ID, enter `BackupandDR_reports` as the dataset name to distinguish it from other datasets. Don't change the dataset name from `BackupandDR_reports`.
    - For Location type, select a geographic location for the dataset. After a dataset is created, the location can't be changed.
    - Optional: to expire tables in this dataset, select Enable table expiration, then specify the default maximum table age in days.
    - Click Create dataset.
- In the Choose logs to include in sink panel, do the following:
  - In the Build inclusion filter field, enter the following filter expression, which matches the log entries you want to include:

    logName=~"projects/PROJECT_ID/logs/backupdr.googleapis.com%2Fgcb_*"

  - To confirm that you entered the correct filter, select Preview logs. The Logs Explorer opens in a new tab with the filter pre-populated.
- Optional: in the Choose logs to filter out of sink panel, do the following:
- Select Create sink.

You can view the dataset in BigQuery Studio.
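The same inclusion filter can be tested from the command line before or after creating the sink. This is a sketch; `PROJECT_ID` is a placeholder, and `--limit` keeps the output short:

```shell
# Read a few recent Backup and DR log entries matching the sink's
# inclusion filter (PROJECT_ID is a placeholder value).
gcloud logging read \
  'logName=~"projects/PROJECT_ID/logs/backupdr.googleapis.com%2Fgcb_*"' \
  --limit=5
```

If no entries are returned, the service may not have produced logs yet; logs appear in BigQuery only after the sink exists.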
gcloud
- Go to Activate Cloud Shell and click Open Editor.
- Click the icon, select File, and then select New Text File.
- Copy and paste the following script:
```bash
#!/bin/bash
echo "This script will set up a log sink for BackupDR reports to be available in BigQuery"

# Get the default project ID
DEFAULT_PROJECT_ID=$(gcloud config get-value project)
read -p "Enter Project ID (default: $DEFAULT_PROJECT_ID, press Enter to continue):" PROJECT_ID
# Use default if no input is provided
if [ -z "$PROJECT_ID" ]; then
  PROJECT_ID=$DEFAULT_PROJECT_ID
fi

# Set the project ID
result=$(gcloud config set project $PROJECT_ID)
if [ $? -ne 0 ]; then
  echo "Error setting the project to $PROJECT_ID"
  exit 1
fi

# --- Check if BigQuery API is already enabled, enable if not ---
echo "Checking if BigQuery API is enabled..."
if gcloud services list | grep "bigquery.googleapis.com" >/dev/null; then
  echo "BigQuery API is already enabled for $PROJECT_ID"
else
  echo "For logs to be available in BigQuery, we need to enable BigQuery service in the project if not done already. This might mean additional costs incurred. Please check the pricing at https://cloud.google.com/backup-disaster-recovery/docs/monitor-reports/reports-overview#pricing before proceeding."
  read -p "Do you want to continue(Y/N)?" continue
  if [ "$continue" = "y" ] || [ "$continue" = "Y" ]; then
    echo "Enabling BigQuery API..."
    result=$(gcloud services enable bigquery.googleapis.com --project $PROJECT_ID)
    if [ $? -eq 0 ]; then
      echo "Successfully enabled BigQuery api for $PROJECT_ID"
    else
      echo "Error in setting up the BigQuery api for the project. $result"
      exit 1
    fi
  else
    exit 0
  fi
fi

# --- Check if BigQuery data set already exists, create if not ---
echo "Checking if BigQuery data set exists..."
if bq ls | grep "BackupandDR_reports" >/dev/null; then
  echo "Dataset BackupandDR_reports already exists for $PROJECT_ID"
else
  echo "Creating bigQuery dataset BackupandDR_reports..."

  # --- Get dataset location from user (default: US) ---
  read -p "Enter dataset location (default: US, press Enter to use): " DATASET_LOCATION
  if [ -z "$DATASET_LOCATION" ]; then
    DATASET_LOCATION="US"
  fi

  # --- Get table expiration in days from user (default: no expiration) ---
  read -p "Enter default table expiration in days (default: no expiration, press Enter to skip): " TABLE_EXPIRATION_DAYS
  # Calculate table expiration in seconds if provided
  if [ -n "$TABLE_EXPIRATION_DAYS" ]; then
    TABLE_EXPIRATION_SECONDS=$((TABLE_EXPIRATION_DAYS * 24 * 60 * 60))
    EXPIRATION_FLAG="--default_table_expiration $TABLE_EXPIRATION_SECONDS"
  else
    EXPIRATION_FLAG=""
  fi

  result=$(bq --location=$DATASET_LOCATION mk $EXPIRATION_FLAG BackupandDR_reports)
  if [ $? -eq 0 ]; then
    echo "Created a BigQuery dataset BackupandDR_reports successfully."
  else
    echo ""
    echo "ERROR : Failed to create the BigQuery dataset."
    echo $result
    exit 1
  fi
fi

# --- Check if Log Sink already exists, create if not ---
echo "Checking if Log Sink exists..."
if gcloud logging sinks list | grep "BackupandDR_reports_sink" >/dev/null; then
  echo "Log Sink BackupandDR_reports_sink already exists for $PROJECT_ID"
else
  log_filter="projects/$PROJECT_ID/logs/backupdr.googleapis.com%2Fgcb_*"
  echo "Creating log sink BackupandDR_reports_sink..."
  result=$(gcloud logging sinks create BackupandDR_reports_sink bigquery.googleapis.com/projects/$PROJECT_ID/datasets/BackupandDR_reports --log-filter="logName=~\"$log_filter\"")
  if [ $? -eq 0 ]; then
    echo "Created a logsink BackupandDR_reports_sink successfully."
  else
    echo ""
    echo "ERROR : Failed to create logsink."
    exit 1
  fi
fi

# --- Add IAM Policy binding for Cloud logging service account to write logs to BigQuery ---
result=$(gcloud projects add-iam-policy-binding $(gcloud projects describe $PROJECT_ID --format="value(projectNumber)") --member=serviceAccount:service-$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")@gcp-sa-logging.iam.gserviceaccount.com --role=roles/bigquery.dataEditor --condition=None)
if [ $? -eq 0 ]; then
  echo "Added permission for cloud logging to write to BigQuery datasets"
else
  echo ""
  echo "ERROR : Failed to add permissions for cloud logging to write to BigQuery datasets. Please make sure that you have correct access rights in order to be able to proceed."
  exit 1
fi

echo "Setup complete. The logs for the project $PROJECT_ID will now start flowing to bigquery."
exit 0
```
- Save the file with a name that has the Bash (`.sh`) file extension, for example `script.sh`.
- Run the bash command with the file you just created, for example: `bash script.sh`.

You can view the created dataset in BigQuery Studio.
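Note that the console asks for the default table expiration in days, while `bq mk --default_table_expiration` takes seconds; the script bridges the two with plain shell arithmetic, which you can check on its own (30 days is an example value):

```shell
# Convert a table expiration given in days to the seconds expected by
# `bq mk --default_table_expiration` (30 is an illustrative value).
TABLE_EXPIRATION_DAYS=30
TABLE_EXPIRATION_SECONDS=$((TABLE_EXPIRATION_DAYS * 24 * 60 * 60))
echo "$TABLE_EXPIRATION_SECONDS"   # prints 2592000
```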
Set up prebuilt reports

You can set up prebuilt reports by running the following script in the dataset to which logs are routed through the log sink.

The script adds the following prebuilt reports.

To set up prebuilt reports in BigQuery, follow these steps:
gcloud
- Go to Activate Cloud Shell and click Open Editor.
- Create a new text file.
- Copy the following prebuilt report script and paste it into the Google Cloud CLI:
backup_job_summary_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*` WHERE DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) >= DATE_SUB(CURRENT_DATE(), INTERVAL 29 DAY) AND DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) < CURRENT_DATE() + 1 GROUP BY insertId ) SELECT DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) as Date, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.job_type as Job_Type, COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END)/COUNT(*) * 100 as Success_Percent, COUNT(*) as Total_Jobs, COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END) as Succeeded, COUNT(CASE WHEN jsonPayload.job_status = "failed" THEN 1 END) as Failed, COUNT(CASE WHEN jsonPayload.job_status = "canceled" THEN 1 END) as Cancelled, COUNT(CASE WHEN jsonPayload.job_status = "notrun" THEN 1 END) as Not_Run, FROM filtered_data WHERE jsonPayload.job_category = "Backup Job" AND jsonPayload.job_status != "retry" GROUP BY Date, jsonPayload.job_type, jsonPayload.resource_id, jsonPayload.resource_name, jsonPayload.resource_type ORDER BY Date, Resource_Name;' recovery_job_summary_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*` WHERE DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) >= DATE_SUB(CURRENT_DATE(), INTERVAL 29 DAY) AND DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) < CURRENT_DATE() + 1 GROUP BY insertId ) SELECT DATE(CASE WHEN 
jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) as Date, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.job_type as Job_Type, COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END)/COUNT(*) * 100 as Success_Percent, COUNT(*) as Total_Jobs, COUNT(CASE WHEN jsonPayload.job_status = "succeeded" THEN 1 END) as Succeeded, COUNT(CASE WHEN jsonPayload.job_status = "failed" THEN 1 END) as Failed, COUNT(CASE WHEN jsonPayload.job_status = "canceled" THEN 1 END) as Cancelled, COUNT(CASE WHEN jsonPayload.job_status = "notrun" THEN 1 END) as Not_Run, FROM filtered_data WHERE jsonPayload.job_category = "Recovery Job" AND jsonPayload.job_status != "retry" GROUP BY Date, jsonPayload.job_type, jsonPayload.resource_id, jsonPayload.resource_name, jsonPayload.resource_type ORDER BY Date, Resource_Name;' failed_job_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*` GROUP BY insertId ) SELECT jsonPayload.job_name as Job_Name, jsonPayload.job_category as Job_Category, jsonPayload.job_type as Job_Type, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.error_code as Error_Code, jsonPayload.error_message as Error_Message, jsonPayload.hostname as Host_Name, jsonPayload.backup_rule_policy_name as Backup_Rule_Policy_Name, jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template, jsonPayload.appliance_name as Appliance_Name, FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(jsonPayload.job_start_time)) as Job_Start_Time, DATE(jsonPayload.job_start_time) as Job_Date, jsonPayload.backup_type as Backup_Type FROM filtered_data WHERE jsonPayload.job_status = "failed" ORDER BY Job_Start_Time DESC, Resource_Name' job_details_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload FROM 
`PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*` GROUP BY insertId ) SELECT jsonPayload.job_name as Job_Name, jsonPayload.job_category as Job_Category, jsonPayload.job_type as Job_Type, jsonPayload.log_backup as Log_Backup, jsonPayload.job_status as Job_Status, DATE(CASE WHEN jsonPayload.job_start_time = "" THEN jsonPayload.job_queued_time ELSE jsonPayload.job_start_time END) as Job_Date, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.error_code as Error_Code, jsonPayload.error_message as Error_Message, jsonPayload.job_initiation_failure_reason as Job_Initiation_Failure_Reason, jsonPayload.appliance_name as Appliance_Name, jsonPayload.hostname as Host_Name, jsonPayload.target_appliance_name as Target_Appliance_Name, jsonPayload.target_pool_name as Target_Pool_Name, jsonPayload.target_host_name as Target_Host_Name, jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template, jsonPayload.backup_rule_policy_name as Backup_Rule_Policy_Name, FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.job_queued_time !="" THEN jsonPayload.job_queued_time ELSE NULL END)) as Job_Queue_Time, FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.job_start_time!="" THEN jsonPayload.job_start_time ELSE NULL END)) as Job_Start_Time, FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.job_end_time!="" THEN jsonPayload.job_end_time ELSE NULL END)) as Job_End_Time, jsonPayload.job_duration_in_hours as Job_Duration_In_Hours, jsonPayload.backup_consistency as Backup_Consistency, jsonPayload.resource_data_size_in_gib as Resource_Data_Size_In_GiB, jsonPayload.snapshot_disk_size_in_gib as Snapshot_Disk_Size_in_GiB, jsonPayload.pre_compress_in_gib as Pre_Compress_In_GiB, jsonPayload.compression_ratio as Compression_Ratio, jsonPayload.data_sent_in_gib as Streamsnap_Data_Sent_In_GiB, jsonPayload.data_written_in_gib as Streamsnap_Data_Written_In_GiB, jsonPayload.data_copied_in_gib 
as Data_Copied_In_GiB, jsonPayload.data_change_rate as Data_Change_Rate_Percent, jsonPayload.backup_type as Backup_Type, FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(CASE WHEN jsonPayload.recovery_point!="" THEN jsonPayload.recovery_point ELSE NULL END)) as Recovery_Point, jsonPayload.onvault_pool_storage_consumed_in_gib as OnVault_Pool_Storage_Consumed_In_GiB, FROM filtered_data ORDER BY Job_Start_Time DESC, Resource_Name' unresolved_failures_query='WITH job_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_recovery_jobs_*` WHERE jsonPayload.job_status in ("succeeded", "failed") GROUP BY insertId ), -- Select resources which have their latest status as failed unresolved_failed_resources AS ( SELECT jsonPayload.appliance_name as Appliance_Name, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.job_type as Job_Type, jsonPayload.job_status as Job_Status, jsonPayload.job_start_time as Job_Start_Time, jsonPayload.hostname as Hostname, jsonPayload.error_code as Error_Code, jsonPayload.error_message as Error_Message FROM job_data j1 WHERE jsonPayload.job_start_time = (SELECT MAX(jsonPayload.job_start_time) FROM job_data j2 WHERE j1.jsonPayload.appliance_name = j2.jsonPayload.appliance_name AND j1.jsonPayload.resource_name = j2.jsonPayload.resource_name AND j1.jsonPayload.resource_type = j2.jsonPayload.resource_type AND j1.jsonPayload.job_type = j2.jsonPayload.job_type) AND jsonPayload.job_status = "failed" ), -- Select all jobs for all the resources that have unresolved failures all_jobs_of_unresolved_failure_resources AS ( SELECT jsonPayload.appliance_name as Appliance_Name, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.job_type as Job_Type, jsonPayload.job_category as Job_Category, jsonPayload.job_name as Job_Name, jsonPayload.recovery_point as Recovery_Point, jsonPayload.job_status as 
Job_Status, jsonPayload.job_start_time as Job_Start_Time, jsonPayload.hostname as Hostname, jsonPayload.error_code as Error_Code FROM job_data j1 JOIN unresolved_failed_resources fr ON j1.jsonPayload.appliance_name = fr.Appliance_Name AND j1.jsonPayload.resource_name = fr.Resource_Name AND j1.jsonPayload.resource_type = fr.Resource_Type AND j1.jsonPayload.job_type = fr.Job_Type ), -- Select the latest successful jobs for the resources with unresolved failures latest_success AS ( SELECT * FROM all_jobs_of_unresolved_failure_resources all_jobs WHERE all_jobs.Job_Status = "succeeded" AND all_jobs.Job_Start_Time = ( SELECT MAX(Job_Start_Time) FROM all_jobs_of_unresolved_failure_resources all_jobs_2 WHERE all_jobs_2.Appliance_Name = all_jobs.Appliance_Name AND all_jobs_2.Resource_Name = all_jobs.Resource_Name AND all_jobs_2.Resource_Type = all_jobs.Resource_Type AND all_jobs_2.Job_Type = all_jobs.Job_Type AND all_jobs_2.job_status = "succeeded" ) ), -- Select all failed jobs after the last success for the resources with unresolved failures failed_jobs_to_report AS ( SELECT all_jobs.Appliance_Name as Appliance_Name, all_jobs.Resource_Name as Resource_Name, all_jobs.Resource_Type as Resource_Type, all_jobs.Job_Type as Job_Type, all_jobs.Job_Name as Job_Name, all_jobs.Recovery_Point as Recovery_Point, all_jobs.Job_Status as Job_Status, all_jobs.Job_Start_Time as Job_Start_Time, all_jobs.Hostname as Hostname, all_jobs.Error_Code as Error_Code, all_jobs.Job_Category as Job_Category, FROM all_jobs_of_unresolved_failure_resources all_jobs LEFT JOIN latest_success success ON success.Appliance_Name = all_jobs.Appliance_Name AND success.Resource_Name = all_jobs.Resource_Name AND success.Resource_Type = all_jobs.Resource_Type AND success.Job_Type = all_jobs.Job_Type WHERE all_jobs.Job_Status = "failed" AND TIMESTAMP(all_jobs.Job_Start_Time) > COALESCE(TIMESTAMP(success.Job_Start_Time), TIMESTAMP(DATE_SUB(CURRENT_DATE(), INTERVAL 3 DAY))) ) SELECT failed_jobs.Resource_Name, 
failed_jobs.Resource_Type, failed_jobs.Job_Type, failed_jobs.Job_Category, COUNT(*) as Error_Count, failed_jobs.Error_Code, ANY_VALUE(ufr.Error_Message) as Error_Message, FORMAT_TIMESTAMP("%F %R %Z", MIN(TIMESTAMP(failed_jobs.Job_Start_Time))) as First_Failure, FORMAT_TIMESTAMP("%F %R %Z", MAX(TIMESTAMP(failed_jobs.Job_Start_Time))) as Last_Failure, TIMESTAMP_DIFF(CURRENT_TIMESTAMP,TIMESTAMP(MAX(failed_jobs.Job_Start_Time)), HOUR) as Hours_Since_Last_Failure, failed_jobs.Appliance_Name FROM failed_jobs_to_report failed_jobs LEFT JOIN unresolved_failed_resources ufr ON failed_jobs.Appliance_Name = ufr.Appliance_Name AND failed_jobs.Resource_Name = ufr.Resource_Name AND failed_jobs.Resource_Type = ufr.Resource_Type AND failed_jobs.Job_Type = ufr.Job_Type AND failed_jobs.Error_Code = ufr.Error_Code GROUP BY Appliance_Name, Resource_Name, Resource_Type, Job_Type, Job_Category, Error_Code ORDER BY Last_Failure DESC ' daily_schedule_compliance_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_daily_schedule_compliance_*` WHERE DATE(jsonPayload.date) >= DATE_SUB(CURRENT_DATE(), INTERVAL 8 DAY) AND DATE(jsonPayload.date) < CURRENT_DATE() + 1 GROUP BY insertId ) SELECT jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.backup_rule_policy_name as Backup_Rule_Policy_Name, jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template, DATE(jsonPayload.date) as Date, jsonPayload.backup_window_start_time as Backup_Window_Start_Time, jsonPayload.job_type as Job_Type, jsonPayload.status as Status, jsonPayload.comment as Comment, jsonPayload.appliance_name as Appliance_Name, jsonPayload.host_name as Host_Name FROM filtered_data f1 WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id) ORDER BY 
Resource_Name, Date; ' protected_data_resource_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_protected_resource_*` GROUP BY insertId ) SELECT FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.host_name as Host_Name, jsonPayload.backup_plan_policy_template as Backup_Plan_Policy_Template, jsonPayload.protected_on as Protected_On, jsonPayload.recovery_point as Recovery_Point, jsonPayload.protected_data_in_gib as Protected_Data_in_Gib, jsonPayload.onvault_in_gib as OnVault_Data_in_Gib, jsonPayload.backup_plan_restrictions as Backup_Plan_Restrictions, jsonPayload.backup_inclusion_or_exclusion as Backup_Inclusion_or_Exclusion, jsonPayload.policy_overrides as Policy_Overrides, jsonPayload.appliance_name as Appliance_Name, jsonPayload.remote_appliance as Remote_Appliance, jsonPayload.source_appliance as Source_Appliance, FROM filtered_data f1 WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id) ORDER BY Resource_Name;' storage_utilization_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_backup_storage_utilization_*` GROUP BY insertId ) SELECT FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on, jsonPayload.appliance_name as Appliance_Name, jsonPayload.pool_name as Pool_Name, jsonPayload.storage_type as Storage_Type, jsonPayload.total_capacity_in_gib as Total_Capacity_In_GiB, jsonPayload.used_capacity_in_gib as Used_Capacity_In_GiB, jsonPayload.utilization_percentage as Utilization_percentage, jsonPayload.appliance_id as Appliance_id, FROM filtered_data f1 WHERE 
Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id) ORDER BY Appliance_Name;' mounted_image_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_mounted_images_*` GROUP BY insertId ) SELECT FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on, jsonPayload.source_resource_name as Source_Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.appliance_name as Appliance_Name, jsonPayload.mounted_image_name as Mounted_Image_Name, jsonPayload.source_image_name as Source_Image_Name, jsonPayload.source_image_type as Source_Image_Type, jsonPayload.recovery_point_date as Recovery_Point_Date, jsonPayload.last_mount_date as Last_Mount_Date, jsonPayload.source_host_name as Source_Host_Name, jsonPayload.mounted_host_name as Mounted_Host_Name, jsonPayload.mounted_resource_name as Mounted_Resource_Name, jsonPayload.resource_virtual_size_in_gib as Resource_Virtual_Size_In_Gib, jsonPayload.storage_consumed_in_gib as Storage_Consumed_In_Gib, jsonPayload.mounted_resource_label as Mounted_Resource_Label, jsonPayload.restorable_object as Restorable_Object, jsonPayload.mounted_image_age_in_days as Mounted_Image_Age_In_Days, jsonPayload.user_name as User_Name, jsonPayload.read_mode as Read_Mode, jsonPayload.resource_size_in_gib as Resource_Size_In_Gib, jsonPayload.source_image_expiration_date as Source_Image_Expiration_Date, FROM filtered_data f1 WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id) ORDER BY Mounted_Resource_Name;' unprotected_resource_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM 
`PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_unprotected_resource_*` GROUP BY insertId ) SELECT FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on, jsonPayload.resource_name as Resource_Name, jsonPayload.resource_type as Resource_Type, jsonPayload.host_name as Host_Name, jsonPayload.instance_name as Instance_Name, jsonPayload.discovered_on as Discovered_On, jsonPayload.discovered_by as Discovery_Appliance, FROM filtered_data f1 WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id) ORDER BY Resource_Name;' connector_version_query='WITH filtered_data AS ( SELECT insertId, ANY_VALUE(jsonPayload) as jsonPayload, ANY_VALUE(timestamp) as Report_Timestamp FROM `PROJECT_ID.DATASET_NAME.backupdr_googleapis_com_gcb_connector_version_*` GROUP BY insertId ) SELECT FORMAT_TIMESTAMP("%F %R %Z", TIMESTAMP(Report_Timestamp)) as Last_updated_on, jsonPayload.appliance_name as Appliance_Name, jsonPayload.host_name as Host_Name, jsonPayload.host_os_type as Host_OS_Type, jsonPayload.host_ip_address as Host_IP_Address, jsonPayload.db_authentication as DB_Authentication, jsonPayload.installed_version as Installed_Version, jsonPayload.available_version as Available_Version, jsonPayload.version_check as Version_Check, jsonPayload.disk_preference as Disk_Preference, jsonPayload.transport as Transport, FROM filtered_data f1 WHERE Report_Timestamp = (SELECT MAX(Report_Timestamp) FROM filtered_data f2 WHERE f2.jsonPayload.appliance_id = f1.jsonPayload.appliance_id) ORDER BY Host_Name;' table_not_exists_error_flag=false check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_backup_recovery_jobs%'" | tail -n +2 | tr -d ' ') if [ $check_table == 'true' ]; then result=$(bq update --use_legacy_sql=false --expiration 0 --view "$backup_job_summary_query" 
DATASET_NAME.'Backup Jobs Summary') if [ $? -ne 0 ]; then bq mk --use_legacy_sql=false --expiration 0 --view "$backup_job_summary_query" DATASET_NAME.'Backup Jobs Summary' else echo $result fi result=$(bq update --use_legacy_sql=false --expiration 0 --view "$recovery_job_summary_query" DATASET_NAME.'Recovery Jobs Summary') if [ $? -ne 0 ]; then bq mk --use_legacy_sql=false --expiration 0 --view "$recovery_job_summary_query" DATASET_NAME.'Recovery Jobs Summary' else echo $result fi result=$(bq update --use_legacy_sql=false --expiration 0 --view "$failed_job_query" DATASET_NAME.'Failed Jobs') if [ $? -ne 0 ]; then bq mk --use_legacy_sql=false --expiration 0 --view "$failed_job_query" DATASET_NAME.'Failed Jobs' else echo $result fi result=$(bq update --use_legacy_sql=false --expiration 0 --view "$job_details_query" DATASET_NAME.'Job Details') if [ $? -ne 0 ]; then bq mk --use_legacy_sql=false --expiration 0 --view "$job_details_query" DATASET_NAME.'Job Details' else echo $result fi result=$(bq update --use_legacy_sql=false --expiration 0 --view "$unresolved_failures_query" DATASET_NAME.'Unresolved Failures') if [ $? -ne 0 ]; then bq mk --use_legacy_sql=false --expiration 0 --view "$unresolved_failures_query" DATASET_NAME.'Unresolved Failures' else echo $result fi else echo "The table backupdr_googleapis_com_gcb_backup_recovery_jobs does not exist in the dataset DATASET_NAME." table_not_exists_error_flag=true fi check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_daily_schedule_compliance%'" | tail -n +2 | tr -d ' ') if [ $check_table == 'true' ]; then result=$(bq update --use_legacy_sql=false --expiration 0 --view "$daily_schedule_compliance_query" DATASET_NAME.'Daily Schedule Compliance') if [ $? 
-ne 0 ]; then bq mk --use_legacy_sql=false --expiration 0 --view "$daily_schedule_compliance_query" DATASET_NAME.'Daily Schedule Compliance' else echo $result fi else echo "The table backupdr_googleapis_com_gcb_daily_schedule_compliance does not exist in the dataset DATASET_NAME." table_not_exists_error_flag=true fi check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_protected_resource%'" | tail -n +2 | tr -d ' ') if [ $check_table == 'true' ]; then result=$(bq update --use_legacy_sql=false --expiration 0 --view "$protected_data_resource_query" DATASET_NAME.'Protected Data Resource') if [ $? -ne 0 ]; then bq mk --use_legacy_sql=false --expiration 0 --view "$protected_data_resource_query" DATASET_NAME.'Protected Data Resource' else echo $result fi else echo "The table backupdr_googleapis_com_gcb_protected_resource does not exist in the dataset DATASET_NAME." table_not_exists_error_flag=true fi check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_backup_storage_utilization%'" | tail -n +2 | tr -d ' ') if [ $check_table == 'true' ]; then result=$(bq update --use_legacy_sql=false --expiration 0 --view "$storage_utilization_query" DATASET_NAME.'Storage Resource Utilization') if [ $? -ne 0 ]; then bq mk --use_legacy_sql=false --expiration 0 --view "$storage_utilization_query" DATASET_NAME.'Storage Resource Utilization' else echo $result fi else echo "The table backupdr_googleapis_com_gcb_backup_storage_utilization does not exist in the dataset DATASET_NAME." 
table_not_exists_error_flag=true fi check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_unprotected_resource%'" | tail -n +2 | tr -d ' ') if [ $check_table == 'true' ]; then result=$(bq update --use_legacy_sql=false --expiration 0 --view "$unprotected_resource_query" DATASET_NAME.'Unprotected Resource') if [ $? -ne 0 ]; then bq mk --use_legacy_sql=false --expiration 0 --view "$unprotected_resource_query" DATASET_NAME.'Unprotected Resource' else echo $result fi else echo "The table backupdr_googleapis_com_gcb_unprotected_resource does not exist in the dataset DATASET_NAME." table_not_exists_error_flag=true fi check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_mounted_images%'" | tail -n +2 | tr -d ' ') if [ $check_table == 'true' ]; then result=$(bq update --use_legacy_sql=false --expiration 0 --view "$mounted_image_query" DATASET_NAME.'Mounted Image Details') if [ $? -ne 0 ]; then bq mk --use_legacy_sql=false --expiration 0 --view "$mounted_image_query" DATASET_NAME.'Mounted Image Details' else echo $result fi else echo "The table backupdr_googleapis_com_gcb_mounted_images does not exist in the dataset DATASET_NAME." table_not_exists_error_flag=true fi check_table=$(bq query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.TABLES WHERE table_name like '%backupdr_googleapis_com_gcb_connector_version%'" | tail -n +2 | tr -d ' ') if [ $check_table == 'true' ]; then result=$(bq update --use_legacy_sql=false --expiration 0 --view "$connector_version_query" DATASET_NAME.'Connector Version Details') if [ $? 
-ne 0 ]; then bq mk --use_legacy_sql=false --expiration 0 --view "$connector_version_query" DATASET_NAME.'Connector Version Details' else echo $result fi else echo "The table backupdr_googleapis_com_gcb_connector_version does not exist in the dataset DATASET_NAME." table_not_exists_error_flag=true fi if [ $table_not_exists_error_flag == true ]; then echo -e "\e[1m\e[33mAll the prebuilt reports could not be created successfully in BigQuery as one or more report logs are missing in the dataset DATASET_NAME." echo -e "Please ensure that you have waited for at least 8 hours after creating the sink, before running the script to create pre built reports. Try re-running the script again after some time to fix the issue." echo -e "Reach out to Google Cloud Support in case you are still facing this issue.\e[0m" fi # Check if view exists check_view=$(bq --project_id PROJECT_ID query --format=csv --use_legacy_sql=false "Select COUNT(*)>0 from DATASET_NAME.INFORMATION_SCHEMA.VIEWS WHERE table_name='Storage Resource Utilization'" | tail -n +2 | tr -d ' ') if [ "$check_view" == 'true' ]; then # Check if view is working correctly fetch_result=$(bq query --format=csv --use_legacy_sql=false "SELECT COUNT(*) FROM \`DATASET_NAME.Storage Resource Utilization\`" 2>&1) if [[ $? -eq 0 ]]; then echo -e "\e[1m\e[33mTo fix a schema related error in the Backup Storage Utilization view, we updated the software on June 3rd, 2024. As part of the fix, storage utilization data before June 3rd 2024, will be no longer available in the report.\e[0m" echo -e "Deleting the data before June 3rd 2024..." start_date="20240301" end_date="20240603" current_date="$start_date" while [ "$current_date" -le "$end_date" ]; do table_name="DATASET_NAME.backupdr_googleapis_com_gcb_backup_storage_utilization_${current_date}" bq rm -f -t "$table_name" current_date=$(date -d "$(date -d "$current_date" +%Y-%m-%d) + 1 day" +%Y%m%d) done echo -e "Completed successfully." fi fi
Replace the following:

- PROJECT_ID: the name of your project.
- DATASET_NAME: the name of the BigQuery dataset. We recommend keeping the default dataset name, `BackupandDR_reports`, for Backup and DR Service reports. If you set a name other than the default dataset name when you created the sink, change the dataset name here to match your BigQuery dataset.
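One way to fill in the placeholders is a `sed` substitution over the saved script. This is a sketch; the filename `backupdrreports.sh` and the values `my-project` and `BackupandDR_reports` are example values to replace with your own:

```shell
# Replace the PROJECT_ID and DATASET_NAME placeholders in the saved script.
# "my-project" and "BackupandDR_reports" are example values.
sed -i 's/PROJECT_ID/my-project/g; s/DATASET_NAME/BackupandDR_reports/g' backupdrreports.sh
```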
- Save the file with a name that has the Bash (`.sh`) file extension, for example `backupdrreports.sh`.
- Run the bash command with the file you just created, for example: `bash backupdrreports.sh`.

The prebuilt reports appear under the dataset in BigQuery Studio.
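Once the views exist, you can query them directly with the `bq` CLI. The view name `Failed Jobs` comes from the setup script above; adjust the dataset name if you changed it from the default:

```shell
# Query one of the prebuilt views created by the script; the dataset name
# assumes the default BackupandDR_reports.
bq query --use_legacy_sql=false \
  'SELECT Job_Name, Resource_Name, Error_Message
   FROM `BackupandDR_reports.Failed Jobs`
   ORDER BY Job_Start_Time DESC
   LIMIT 10'
```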
After you set up prebuilt reports, if you have access to Google Workspace, you can view the results in Google Connected Sheets. If you don't have access to Google Workspace, you can view the reports in Looker Studio or download the prebuilt reports as CSV files.
Pricing

You might incur charges when you view prebuilt reports in BigQuery. The charges are based on the amount of report data streamed, stored, and queried in BigQuery. For more information, see Pricing.
What's next

- Learn how to write custom queries in BigQuery.
- Learn how to view prebuilt reports in Google Connected Sheets.
- Learn how to view prebuilt reports in Looker Studio.