This page describes how to activate the Google Cloud Backup and DR service and configure it for your project.
Components of the Backup and DR service architecture
The Google Cloud Backup and DR service architecture is delivered through the following components:
Management console: The management console serves as the management plane for your backup/recovery appliances. Each Backup and DR deployment includes a single management console managing any number of backup/recovery appliances. The management console is deployed in the Backup administration project.
Backup/recovery appliances: The backup/recovery appliance is the data mover that efficiently captures, moves, and manages the lifecycle of backup data within your enterprise. Backup/recovery appliances are deployed in the workload project.
Backup and DR agents: The Backup and DR agent is a lightweight piece of software that calls the application-native APIs to efficiently capture data from production applications in an incremental-forever fashion, and provides the application awareness at the time of recovery. The agent is installed on application hosts where applications to be protected reside. If you are only protecting the entire VM or a subset of its disks, then the Backup and DR agent is not required.
The management console is activated into a service producer VPC network. This service producer VPC is peered to a VPC within your project to establish network connectivity between the management console and your network. This is done using private services access.
Set up Google Cloud Backup and DR service in the Google Cloud console
Go to the Google Cloud console to activate the Google Cloud Backup and DR service API and set up permissions for your account:
Activate Google Cloud Backup and DR
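If you prefer the command line, you can also enable the underlying API with gcloud before continuing. A minimal sketch, run against the project where you plan to activate the service:

```
# Enable the Backup and DR API on the active project
gcloud services enable backupdr.googleapis.com

# Confirm that the API is now enabled
gcloud services list --enabled --filter="name:backupdr.googleapis.com"
```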
Set up a private services access connection
Many Google services use private services access. Depending on whether any of these services have already been activated, this connection might already exist. If a sufficiently large subnet has already been provisioned, no further configuration is needed. However, if a private services access connection does not exist, the Backup and DR activation process needs to create one. This process requires you to either supply a subnet range or allow Google to allocate one for you; this decision may require discussion with your network administrator. Similarly, if the existing allocated subnet in the private services access connection labeled servicenetworking-googleapis-com is only a /24, then an additional subnet needs to be added, because at least a /23 subnet is required (a /20 subnet is provisioned if you select the automatic allocation method). If the private services access connection already exists, ensure that a /23 IP range is available in the connection labeled servicenetworking-googleapis-com.
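If you need to inspect or create the private services access connection yourself, a minimal gcloud sketch is shown below. The network name default and the range name google-managed-services-range are placeholders; substitute your own VPC and a free /23 (or larger) CIDR allocation.

```
# Check whether a private services access connection already exists on the network
gcloud services vpc-peerings list --network=default

# Allocate a /23 range for service producers (name and network are placeholders)
gcloud compute addresses create google-managed-services-range \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=23 \
    --network=default

# Create (or update) the private services access connection using that range
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=google-managed-services-range \
    --network=default
```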
Choose a storage type
The backup/recovery appliance stores backup data in its local snapshot pool. You can copy this data to object storage for long-term retention. Google Cloud offers the following three types of local snapshot pool storage:
Minimal capacity persistent disk: Use this storage type if you only want to protect Compute Engine VMs. Backups are stored as native PD snapshots and do not consume the local storage of the backup/recovery appliance.
Standard persistent disk: This storage type offers efficient block storage, starting with a 4 TB persistent disk. It is recommended for Google Cloud VMware Engine VMs and for database or file system applications with medium to high I/O.
SSD persistent disk: This storage type offers fast block storage, starting with a 4 TB persistent disk. It is recommended for Google Cloud VMware Engine VMs and for database or file system applications with very high I/O.
You can expand the capacity of your disk pools later.
You can move backups with long-term retention needs to Cloud Storage Standard, Nearline, or Coldline storage, depending on how often you expect to access the data.
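If you're unsure which persistent disk types are offered where you plan to deploy the appliance, you can list them per zone; the zone below is only an example:

```
# List the persistent disk types available in an example zone
gcloud compute disk-types list --filter="zone:us-central1-a"
```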
Recommended network topology for Backup and DR
Google Cloud recommends the use of Shared VPC when deploying Backup and DR. Shared VPC allows an organization to connect resources from multiple projects to a common Virtual Private Cloud (VPC) network, so that they can communicate with each other securely and efficiently using internal IP addresses from that network. When you use Shared VPC, you designate a project as a host project and attach one or more other service projects to it. The VPC networks in the host project are called Shared VPC networks. Eligible resources from service projects can use subnets in the Shared VPC network.
Shared VPC lets organization administrators delegate administrative responsibilities, such as creating and managing instances, to service project administrators while maintaining centralized control over network resources like subnets, routes, and firewalls.
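As a sketch of that setup, the following gcloud commands enable a project as a Shared VPC host and attach a service project to it; both project IDs are placeholders:

```
# Enable Shared VPC on the host project (placeholder project IDs)
gcloud compute shared-vpc enable HOST_PROJECT_ID

# Attach a service project to the host project
gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID \
    --host-project=HOST_PROJECT_ID
```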
The management console is activated into a service producer VPC network. This VPC is connected to your network using private services access. The primary purpose of this connection is for the management console and the backup/recovery appliances to exchange metadata with each other. Backup traffic does not traverse this link. However, the management console needs to be able to communicate with all of the backup/recovery appliances deployed anywhere within the network.
Shared VPC best practices
The following best practices are recommended:
Connecting to the management console. It's best to connect the service producer network to a Shared VPC within your network. All traffic from the management console flows through this VPC, and therefore through the host project. Provisioning connectivity to the Backup and DR service through a Shared VPC also enables a seamless connection between the projects where the workloads are running (service projects) and the Backup and DR service.
Backup/recovery appliance location. The backup/recovery appliances must be deployed with a connection to the same VPC used during the activation of the management console. There are two recommended strategies for selecting the projects for the backup/recovery appliances:
In the central host project. In this strategy, Backup and DR is treated as a central IT service, and the central backup team governs its provisioning. All backup/recovery appliances are provisioned in the host project, which consolidates all backup-related resources, and their billing, into a single project under central administration.
In service projects. This strategy is suitable for more decentralized teams where service projects are created and their management is delegated to distributed teams. In this scenario, the recommended best practice is to provision subnets in the selected VPC for downstream service projects. Backup/recovery appliances are installed in the service projects within these subnets. This enables colocation of the workload and the backup/recovery appliance within a single project.
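If you follow the service-project strategy, you can confirm which Shared VPC subnets a given service project is allowed to use. A minimal sketch, with the project ID as a placeholder:

```
# List the Shared VPC subnets usable by the given service project
gcloud compute networks subnets list-usable --project=SERVICE_PROJECT_ID
```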
Firewall configurations
The following ingress firewall rules are added automatically because Backup and DR requires them.
Purpose | Source | Target | Port (TCP)
---|---|---|---
Support traffic (support to appliance) | Host running SSH client | Backup/recovery appliance | 26
Management traffic (console to appliance) | Management console | Backup/recovery appliance | 443
iSCSI backup (host to appliance) | Host running Backup and DR agent | Backup/recovery appliance | 3260
StreamSnap traffic (appliance to appliance) | Backup/recovery appliance | Backup/recovery appliance | 5107
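These rules are created for you, but you can confirm that they exist by listing the ingress rules on the VPC network used by Backup and DR; the network name below is a placeholder:

```
# List ingress firewall rules on the network used by Backup and DR
gcloud compute firewall-rules list --filter="network:NETWORK_NAME AND direction:INGRESS"
```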
For any host that is running the Backup and DR agent, you need to manually add an ingress firewall rule that allows connectivity on the following TCP port.
Purpose | Source | Target | Port (TCP)
---|---|---|---
Agent traffic (appliance to host) | Backup/recovery appliance | Host running Backup and DR agent | 5106
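A minimal gcloud sketch of such a rule. The rule name, network, appliance source range, and target tag are placeholders; adjust them to your environment:

```
# Allow the backup/recovery appliance to reach the Backup and DR agent on TCP 5106
gcloud compute firewall-rules create allow-backupdr-agent \
    --network=NETWORK_NAME \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:5106 \
    --source-ranges=APPLIANCE_IP_RANGE \
    --target-tags=backupdr-agent-host
```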
For hosts that use NFS for backup traffic, or for ESXi hosts running in VMware Engine that use NFS for mounts, you need to manually add an ingress firewall rule that allows connectivity on the following TCP and UDP ports.
Purpose | Source | Target | Ports (TCP/UDP)
---|---|---|---
NFS backup or mount | Host running the agent, or ESXi host running a mount | Backup/recovery appliance | 111, 756, 2049, 4001, 4045
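A corresponding sketch for the NFS ports. As before, the rule name, network, source range, and target tag are placeholders, and the target tag assumes you have tagged your appliance instances yourself:

```
# Allow NFS backup and mount traffic from hosts to the backup/recovery appliance
gcloud compute firewall-rules create allow-backupdr-nfs \
    --network=NETWORK_NAME \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:111,tcp:756,tcp:2049,tcp:4001,tcp:4045,udp:111,udp:756,udp:2049,udp:4001,udp:4045 \
    --source-ranges=HOST_IP_RANGE \
    --target-tags=backupdr-appliance
```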
For a list of the permissions used during this operation, see Backup and DR installation permissions reference.
Supported regions
The following sections list the supported regions for the management console and for backup/recovery appliances.
Management console supported regions
While the Backup and DR Service can be used to back up supported workloads in any Google Cloud region, the management console can currently be activated only in the following regions. Note that you cannot move the management console to a different region.
Geographic Area | Region Name | Region Description
---|---|---
Americas | us-central1 | Iowa
Americas | us-east1 | South Carolina
Americas | us-east4 | Northern Virginia
Americas | us-west1 | Oregon
Americas | us-west4 | Las Vegas
Europe | europe-west1 | Belgium
Europe | europe-west4 | Netherlands
Asia Pacific | asia-east1 | Taiwan
Asia Pacific | asia-southeast1 | Singapore
Backup/recovery appliance supported regions
Google Cloud Backup and DR backup/recovery appliances can be deployed into any Google Cloud region.

The Workflows service used to perform the backup/recovery appliance deployment is supported only in certain regions. If the Workflows service isn't available in the region where your backup/recovery appliance is being deployed, then the Google Cloud Backup and DR service defaults to running the workflow in the us-central1 region (the appliance itself is still created in your selected region). If you have an organization policy that prevents creating resources in other regions, you need to temporarily update that policy to allow creation of resources in the us-central1 region. You can reinstate the restriction on the us-central1 region after the backup/recovery appliance is deployed.
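If you need to make that temporary change, one way is to add us-central1 to the allowed values of the resource locations constraint. A minimal sketch, assuming you have Organization Policy Administrator rights; the organization ID is a placeholder, and in:us-central1-locations is the value group for that region:

```
# Temporarily allow resource creation in us-central1 (placeholder organization ID)
gcloud resource-manager org-policies allow \
    constraints/gcp.resourceLocations in:us-central1-locations \
    --organization=ORGANIZATION_ID
```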