Pricing
- When is a workload considered to be under management by Backup and DR Service?
Backup and DR Service billing (also called SKU usage) counts any workload that has at least one unexpired backup created with the service. Backup images can be retained in a snapshot or OnVault pool for extended periods of time (years) for compliance purposes, even when the source workload is no longer actively protected.
Backups in an OnVault pool count toward Backup and DR Service usage only while the source workload is actively being backed up, that is, while a backup plan is associated with the workload. If the source workload becomes unprotected (it is deleted or is no longer protected by a backup plan) and all backup images for the workload in any snapshot pool (local or remote) have expired, then the OnVault images associated with that workload are not counted toward Backup and DR Service SKU usage, although they continue to accrue Cloud Storage charges.
- Does backup compression impact usage?
The compression achieved by the system in snapshot and OnVault pools doesn't affect the usage measurement, as the usage count is based on the frontend workload size and not the backend storage consumption.
- If I provision a new backup/recovery appliance to replicate from our primary appliance for DR purposes, how does replication between backup/recovery appliances affect usage?
Backup and DR Service measures usage based on frontend workload size. It does not take into account how many copies are retained or where they are retained. Adding an appliance for DR purposes won't affect Backup and DR Service SKU usage.
- If my file system workload has 3 TiB of data, and I use prune paths and exclude lists to eliminate 1 TiB of files from management, does Backup and DR count 3 TiB as managed capacity or 2 TiB as managed capacity?
If you use the Backup and DR agent to back up a file system, then usage is measured based on the actual amount of data managed, which in this example is 2 TiB.
If you back up the file system without the Backup and DR agent (such as with an agentless whole-VM VMware or Compute Engine backup), then usage is measured based on the size of the volumes, which is 3 TiB.
- How often is the Backup and DR Service usage updated?
Backup and DR Service usage is calculated and updated once every hour.
- The usage count for my workload seems to be lower than what was reported yesterday. Why is that?
Backup and DR Service usage is based on the most recent successful copy, not the largest recoverable copy. Workloads shrink or expand over time (irrespective of the change rates involved). When a workload shrinks, the usage reading from the most recent backup reflects the smaller size.
- I manage a 4 TiB Oracle database. The Oracle database has a 10% daily change rate, but the size of the database is always 4 TiB. What will be my usage on any given day?
Backup and DR Service measures usage based on the size of the last successful copy. So the usage calculation will be 4 TiB. Unless the size of the workload changes, the change rate does not directly impact the usage calculation.
- If I have a SQL Server workload running on a VMware VM, and I manage the SQL Server workload using Backup and DR agent and the entire VMware VM, will my SQL database be counted on top of the VM?
Backup and DR Service counts usage for the VMware VM and for the SQL database separately, which leads to double counting. However, the best practice in such scenarios is to manage the OS volume of the VM and the workloads residing on the VM separately. This effectively eliminates double protection and thus double counting.
- I manage Microsoft SQL and Oracle workloads using Backup and DR Service. Do you count only the database size or do you include the log files in usage measurement?
Backup and DR Service measures only the managed database files that are needed for a consistent database backup. It does not count log files towards usage measurement.
- I'm no longer actively managing a workload that backed up daily for over a year. When will Backup and DR Service usage measurement stop including this workload?
Backup and DR Service measures usage based on the last available successful backup of the workload. It counts the workload toward usage measurement until all copies under management in the snapshot pool have expired. This means that as long as there is a recoverable image in the snapshot pool, Backup and DR Service counts usage. This includes orphan images (images that are retained per the backup plans, but whose source workload has been deleted from management). For VMware VMs using a direct-to-OnVault policy, Backup and DR Service SKU usage measurement stops when the VM is no longer protected by a backup plan.
- I have a workload with 3 months retention. I no longer need to protect this workload, so I removed the backup plan. When will the usage consumed by this workload be released?
Any workload that has a recoverable image in the snapshot pool, whether under active or inactive protection, counts toward usage. In this example, the usage consumed by the workload is released once its last snapshot image expires at the end of the 3-month retention period.
- How do I verify my usage for VMware VMs?
Backup and DR Service usage measurements for VMware VMs are consistent with the vCenter-reported size for that VM. The output of du -h *.vmdk, run in the appropriate VM folder on the datastore, matches the usage count.
- How do I verify my usage for Oracle database?
- Backup and DR Service usage measurements for Oracle are based on the allocated size for the database. Here is a sample query to verify the Oracle database size.
select (d.total + c.total) total from (select sum(bytes) total from v$datafile) d, (select sum(block_size*file_size_blks) total from v$controlfile) c;
Then subtract the following: select sum(bytes) free from dba_free_space;
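If you prefer a single statement, the two steps can be combined into one query. The following is a sketch only, using the same standard Oracle views as above:
select (d.total + c.total) - f.free as database_size
from (select sum(bytes) total from v$datafile) d,
(select sum(block_size*file_size_blks) total from v$controlfile) c,
(select sum(bytes) free from dba_free_space) f;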
- How do I verify my usage for file systems?
- Backup and DR Service usage measurements for file system workloads are based on the following:
- Windows - the used file system size reported by DiskManager
- Linux - the used file system size reported by df -k
Migration to node based pricing for backing up Google Cloud VMware Engine
- What is included in node based pricing for backing up Google Cloud VMware Engine?
- This pricing applies only to protecting Google Cloud VMware Engine with whole-VM backups. It does not include backup management charges for agent-based backups, such as charges for application-consistent backups of SAP HANA, SQL Server, MySQL, or Postgres. To estimate charges for agent-based backups, you need the production size for each of these database types. Agent-based backups are priced per standard Google Cloud Backup and DR pricing.
- How does the existing Managed Data License (MDL) based pricing compare to the new node based pricing for backing up Google Cloud VMware Engine?
- The following example illustrates the difference between the existing and the new pricing for backing up Google Cloud VMware Engine VMs.
- Will my existing pricing discounts be applicable on new Google Cloud VMware Engine related backup SKUs?
- Your existing pricing discounts will remain applicable for new Google Cloud VMware Engine related backup SKUs.
- Do I have an option to choose between existing Managed Data License (MDL) based pricing and new node based pricing for backing up Google Cloud VMware Engine VMs?
- Going forward, only the new node based pricing model will be supported for all customers who want to back up Google Cloud VMware Engine VMs using Google Cloud Backup and DR.
- What do I need to do to switch to the new Google Cloud VMware Engine node based pricing model?
- All existing Google Cloud Backup and DR customers who protect Google Cloud VMware Engine workloads must update each of their backup/recovery appliances to version 11.0.5 or later to adopt the updated pricing model. After all backup appliances are on version 11.0.5 or later, and after August 4, 2023, pricing automatically switches to the new node based model.
- What if I don't want to switch from the existing Managed Data License (MDL) based pricing model to the new node based pricing model for backing up Google Cloud VMware Engine VMs?
- You can continue on the existing pricing model by not upgrading the backup/recovery appliance to version 11.0.5 or later. However, you must upgrade your appliance and adopt the new pricing model once the appliance goes out of support.
Backup and DR Service new reporting system
- What is the new reporting system and its benefits over the existing report manager?
Google Cloud Backup and DR Service has added a new reporting system based on the built-in Google Cloud services: Cloud Monitoring, Cloud Logging, and BigQuery. It gives you a more seamless, scalable, and comprehensive reporting experience.
You can do the following using the new reporting system:
- Discover, view, and analyze trends in reporting data through Cloud Logging
- View, analyze, and retrieve job-related metrics through Cloud Monitoring
- Create custom reports in BigQuery using SQL (a sample query is sketched after this list)
- Set up custom alerts on reporting data through Cloud Logging and Cloud Monitoring
- Email reports, auto-schedule data refreshes, and analyze data using the analysis tools of your choice
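For example, a custom report can be a standard SQL query run against the BigQuery dataset that the reporting system streams job data into. The following is a minimal sketch only; the project, dataset, table, and column names used here (my-project.backupdr_reports.job_details, job_type, job_status, job_start_time) are illustrative assumptions, so substitute the names from the dataset created in your own project:
select job_type,
count(*) as total_jobs,
countif(job_status = 'failed') as failed_jobs
from `my-project.backupdr_reports.job_details`
where job_start_time >= timestamp_sub(current_timestamp(), interval 7 day)
group by job_type
order by failed_jobs desc;
This returns a per-job-type count of total and failed jobs over the last 7 days.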
- Why do I need to move from the existing report manager to a new reporting system by April 20, 2024?
The existing report manager is being deprecated on April 20, 2024. The existing link to the report manager in the management console will be disabled, and you will no longer be able to access it through the management console or scripts after April 20, 2024.
- What happens if I don't move to the reporting system by April 20, 2024?
If you don't set up the new reporting system by April 20, 2024, you can only access built-in reports in a CSV format. You can do this by exporting the reports to a Google Cloud Storage bucket from the management console.
- Which of the features supported in the existing report manager are no longer supported in the reporting system?
The following table compares the features supported in the existing report manager and the new reporting system.
Feature | Existing report manager | New reporting system
---|---|---
Prebuilt reports | Supported | Supported |
Flexibility to export reports | Supported | Supported |
Filter or sort data in the report and do basic customization (re-order or hide columns) | Supported | Supported |
Ability to define custom retention period for the reports and reporting data | Supported | Supported |
Run on-demand reports | Supported | Not applicable1 |
Navigate to report from the management console | Supported | Supported |
Email reports | Not supported | Supported |
Ability to query and create custom reports | Not supported | Supported |
Alerting capability on important events | Not supported | Supported |
Ability to create custom dashboards | Not supported | Supported |
Analyze data in real time using the analytical tools | Not supported | Supported |
1 The new reporting system supports near real-time job reports.
- Will all my reporting data that exists on the report manager be available in the new reporting system?
- No, your historical reporting data is not available in the new reporting system. You will continue to have access to historical and existing built-in reports until April 30, 2025, through the Export historical reports tab in the management console. You can export these reports in CSV format to a Google Cloud Storage bucket.
- Does exporting historical reports in CSV format to a Google Cloud Storage bucket incur any charges?
- Backup and DR Service doesn't charge for exporting historical reports in CSV format to a Google Cloud Storage bucket. However, there is a cost for storing data in a Cloud Storage bucket. See Data storage pricing for Cloud Storage for details.
- When does the option to export historical reports become available?
- The option Export historical reports will be available from April 1, 2024. You will be able to export reports using the new Export historical reports option available under the Reports menu in the management console.
- What are the historical and existing built-in reports that will be available to export?
The following built-in reports will be available to export in CSV format to a Google Cloud Storage bucket.
- Is there any additional cost associated with using the reporting system?
The price for using the new reporting system depends on the following factors:
- The volume and retention period of log data in Cloud Logging.
- The volume of data streamed, stored, and queried in BigQuery.
For detailed pricing information, see the new Backup and DR reporting system pricing section.
- What is the process to move to new reporting system?
To get started with the new reporting system for Google Cloud Backup and DR, you must update your backup/recovery appliance to the 11.0.9 release. After updating the appliance, you can do the following:
- Set up BigQuery to view prebuilt reports
- Create custom reports in BigQuery
- View metric dashboards in Cloud Monitoring
- Analyze job logs in Cloud Logging
- Set up event-related alerts in Cloud Logging
- Who can I contact for questions related to setting up the reporting system?
You can reach out to the Google Cloud support team for queries or issues related to the reporting system.