Network Connectivity Center is a hub-and-spoke model for network connectivity management in Google Cloud. The hub resource reduces operational complexity through a simple, centralized connectivity management model. The hub is paired with Google's network to deliver reliable connectivity on demand.
For definitions of hub and spoke, see the Hubs and spokes section later in this document.
Network Connectivity Center enables you to connect your on-premises networks to each other by using Google's network for data transfer. On-premises networks can consist of on-premises data centers and branch or remote offices.
On-premises networks connect to a Network Connectivity Center hub by using spokes that have supported Google Cloud spoke resources attached to them. For example, a spoke can contain HA VPN tunnels for a Cloud VPN gateway near the on-premises network.
The following diagram shows several different types of spoke resources connected to a Network Connectivity Center hub. The hub is associated with a VPC network.
Data transfer over Google's network
Network Connectivity Center supports connecting different enterprise sites outside of Google Cloud by using Google's network as a wide area network (WAN). This type of traffic is referred to as data transfer traffic.
Outside sites can consist of, for example, branch office networks, private data centers, and workloads in other cloud providers. These sites connect to a Network Connectivity Center hub by using existing cloud hybrid connectivity resources such as Cloud VPN, Dedicated Interconnect, Partner Interconnect, or select Router appliance partners.
Using Network Connectivity Center enables instant access to the global reach and reliability of Google's network. This functionality enables your enterprise to benefit from Google's deep set of reliability and traffic engineering practices.
How it works
The following sections describe how Network Connectivity Center works and its components.
Hubs and spokes
Network Connectivity Center consists of hub and spoke resources. You can add one or more labels to a hub or spoke resource to identify it.
A hub is a global Google Cloud resource that supports multiple attached spokes. It provides a simple way to connect spokes together to enable data transfer across them. A hub can provide data transfer between different on-premises locations and a Virtual Private Cloud (VPC) network through its attached spokes.
The hub resource reduces operational complexity by using a simple, centralized connectivity management model. A hub, combined with Google's network, delivers reliable connectivity on demand.
A spoke is a Google Cloud network resource connected to a hub. It is part of a hub and can't be created without creating the hub first. A spoke routes traffic to remote network address blocks and enables the connection of multiple remote networks.
Each spoke can have only one resource type associated with it. For resource types, see the next section.
Spoke resource types
Network Connectivity Center supports attaching the following Google Cloud resources to spokes. You can have only one resource type within a spoke, but you can attach multiple instances of the same resource type to the same spoke.
- HA VPN tunnels
- VLAN attachments
- Router appliance instances that you or select partners deploy within Google Cloud
You configure a router appliance instance as a BGP peer of a Cloud Router. You can create a router appliance instance by configuring a Compute Engine VM, enabling BGP peering to Cloud Router, and running an image of your choice—for example, a third-party network virtual appliance.
- For Router appliance concepts, see the Router appliance overview.
- For product comparisons, see Choosing a Network Connectivity product.
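The one-resource-type-per-spoke constraint can be sketched as a simple validation check. This is an illustrative model only; the function and resource-type names are hypothetical and are not part of any Google Cloud API.

```python
# Illustrative sketch: a spoke accepts multiple instances of one resource
# type (for example, several HA VPN tunnels), but never mixes types.

def validate_spoke(resources):
    """resources: list of (resource_type, name) tuples."""
    types = {r_type for r_type, _ in resources}
    if len(types) > 1:
        raise ValueError(f"spoke mixes resource types: {sorted(types)}")
    return True

validate_spoke([("vpn_tunnel", "t1"), ("vpn_tunnel", "t2")])  # OK: one type
try:
    validate_spoke([("vpn_tunnel", "t1"), ("vlan_attachment", "a1")])
except ValueError as e:
    print(e)  # mixing tunnel and VLAN attachment is rejected
```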
High availability for spoke resources (required)
Each resource type has different requirements for high availability. Configure the resource types that you plan to use by reading the information in the following sections.
High availability for Cloud Interconnect
For Network Connectivity Center to work correctly with Cloud Interconnect resources, you must configure multiple Interconnect connections, each in a separate edge availability domain.
For detailed information about how to configure Cloud Interconnect resources for high availability, see the following documentation:
- Redundancy and SLA for Dedicated Interconnect
- Establishing 99.99% availability for Dedicated Interconnect
- Redundancy and SLA for Partner Interconnect
- Establishing 99.99% availability for Partner Interconnect
- Creating redundant Interconnect connections with sufficient capacity
High availability for Cloud VPN
For Network Connectivity Center to work correctly with Cloud VPN resources, you must configure multiple HA VPN gateway interfaces and tunnels to achieve a 99.99% SLA. For guidance, see the Cloud VPN overview.
High availability for Router appliance
For Network Connectivity Center to work correctly with router appliance instances attached to a spoke, you must do the following:
- If you place all your router appliance instances in a single spoke, use equal-cost multipath (ECMP) to advertise the same set of prefixes from two router appliance instances.
- To advertise different prefixes for each spoke, add each router appliance instance to a different spoke.
- Don't create a cross-region configuration in a single spoke; this isn't supported.
ECMP is the result of advertising the same prefix or prefixes, with the same MEDs and AS path as applicable, from two or more router appliance instances. The guidance about route selection in VPC networks applies to router appliance instances as it does to other Google Cloud resources.
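The ECMP condition above can be illustrated with a simplified model. The names, tuple layout, and tie-breaking order here are assumptions for illustration; real VPC route selection involves more factors than this sketch shows.

```python
# Simplified model of when ECMP applies: a prefix gets multiple next hops
# only when two or more router appliance instances advertise it with the
# same MED and the same AS path.
from collections import defaultdict

def ecmp_next_hops(advertisements):
    """advertisements: list of (prefix, next_hop, med, as_path) tuples.

    Keeps only the lowest-MED advertisements per prefix; among those,
    advertisements with identical AS paths share traffic via ECMP.
    """
    by_prefix = defaultdict(list)
    for prefix, next_hop, med, as_path in advertisements:
        by_prefix[prefix].append((med, tuple(as_path), next_hop))
    result = {}
    for prefix, advs in by_prefix.items():
        best_med = min(med for med, _, _ in advs)
        best = [(ap, nh) for med, ap, nh in advs if med == best_med]
        winning_as_path = best[0][0]
        # Only advertisements matching the winning AS path share traffic.
        result[prefix] = sorted(nh for ap, nh in best if ap == winning_as_path)
    return result

advs = [
    ("10.1.0.0/16", "ra-1", 100, ["65001"]),
    ("10.1.0.0/16", "ra-2", 100, ["65001"]),  # same MED + AS path -> ECMP
    ("10.2.0.0/16", "ra-1", 100, ["65001"]),
    ("10.2.0.0/16", "ra-2", 200, ["65001"]),  # worse MED -> not used
]
print(ecmp_next_hops(advs))
```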
For detailed information about how to configure router appliance instances for high availability, see requirements for 99.9% availability.
Network Connectivity Center provides full mesh connectivity among the spokes attached to a hub by propagating all routes that one spoke learns to all other spokes attached to the same hub.
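The full-mesh propagation behavior can be sketched as a small model: every route learned on one spoke is exported to every other spoke on the same hub. The function and spoke names are illustrative, not the Network Connectivity Center API.

```python
# Conceptual model of full-mesh route propagation through a hub.

def propagate(hub_spokes):
    """Given {spoke: [learned prefixes]}, return {spoke: [exported prefixes]}.

    Each spoke receives the routes learned by every *other* spoke on the hub,
    but not its own routes back.
    """
    exported = {}
    for spoke in hub_spokes:
        exported[spoke] = sorted(
            prefix
            for other, prefixes in hub_spokes.items()
            if other != spoke
            for prefix in prefixes
        )
    return exported

learned = {
    "spoke-a": ["10.1.0.0/16"],
    "spoke-b": ["10.2.0.0/16"],
    "spoke-c": ["10.3.0.0/16"],
}
print(propagate(learned))
# spoke-a receives spoke-b's and spoke-c's prefixes, and so on.
```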
In the preceding topology, Spokes A, B, and C are attached to the same hub and use Cloud Router to advertise prefixes into the hub.
To enable cross-region site-to-site traffic through hubs and spokes, you must enable global routing in the VPC network associated with the hub and spokes. If all spokes are located in the same region, site-to-site traffic works without global routing.
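The rule above can be sketched as a minimal check. The helper name and region strings are hypothetical, used only to make the condition concrete.

```python
# Sketch of the global routing requirement: cross-region site-to-site
# traffic requires global routing; same-region traffic does not.

def site_to_site_ok(spoke_regions, global_routing):
    """spoke_regions: list of region names, one per spoke."""
    return global_routing or len(set(spoke_regions)) == 1

print(site_to_site_ok(["us-east1", "us-east1"], global_routing=False))      # True
print(site_to_site_ok(["us-east1", "europe-west1"], global_routing=False))  # False
print(site_to_site_ok(["us-east1", "europe-west1"], global_routing=True))   # True
```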
The following table shows how the hub propagates prefix advertisements to other spokes.
| | Routes from Spoke A | Routes from Spoke B | Routes from Spoke C |
|---|---|---|---|
| Routes exported to Spoke A | Not exported | Exported | Exported |
| Routes exported to Spoke B | Exported | Not exported | Exported |
| Routes exported to Spoke C | Exported | Exported | Not exported |
When determining the order of routes, consider that routes installed by a Network Connectivity Center hub are treated as dynamic routes. For more information about how route conflicts are resolved, see Route applicability and order in the VPC documentation.
For best-path selection of received advertisements, Google Cloud uses MED to determine priority. For more information, see the Routing considerations section.
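A simplified model of that ordering follows, under two stated assumptions: longest prefix match is evaluated first, and a dynamic route's priority is derived from its MED (lower value wins). This is a sketch, not the full documented route-order algorithm.

```python
# Toy route selection: longest prefix match wins first; among equal-length
# matches, the lowest priority value (from MED for dynamic routes) wins.
import ipaddress

def select_route(destination, routes):
    """routes: list of (prefix, priority, next_hop) tuples."""
    dest = ipaddress.ip_address(destination)
    candidates = [
        (ipaddress.ip_network(prefix), priority, next_hop)
        for prefix, priority, next_hop in routes
        if dest in ipaddress.ip_network(prefix)
    ]
    if not candidates:
        return None
    _, _, next_hop = min(candidates, key=lambda c: (-c[0].prefixlen, c[1]))
    return next_hop

routes = [
    ("10.0.0.0/8", 100, "tunnel-1"),
    ("10.1.0.0/16", 100, "tunnel-2"),  # more specific than 10.0.0.0/8
    ("10.1.0.0/16", 50, "tunnel-3"),   # same length, better priority
]
print(select_route("10.1.2.3", routes))  # tunnel-3
print(select_route("10.2.0.1", routes))  # tunnel-1
```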
Compatibility with existing network configurations
Network Connectivity Center affects only communication between spokes. It doesn't affect communication between Cloud VPN or Cloud Interconnect and VPC networks.
- All VMs in the same VPC network still learn any routes that a Cloud VPN tunnel or Interconnect connection advertises.
- All subnet routes in the same VPC network are still advertised to all Cloud VPN tunnels and Interconnect connections in that VPC network.
- The preceding route propagation still happens across networks that use VPC Network Peering. For exceptions, see Routing considerations.
In addition, Network Connectivity Center does not affect how routes are advertised across networks that use VPC Network Peering. All routes advertised from a Cloud VPN tunnel or VLAN attachment can still be exported to a peered network. All subnet routes from the peered network get advertised to the on-premises network through a Cloud VPN tunnel or through a VLAN attachment.
Considerations
This section describes general considerations to review before setting up Network Connectivity Center, as well as considerations that apply to the resources attached to a hub and to routing through hubs and spokes.
For Network Connectivity Center quotas and limits, see Quotas and limits.
Hubs, spokes, and VPC networks
When you add the first spoke to a hub, that hub is associated with the project and network for the spoke. There can be only one instance of a hub for a VPC network. The VPC network associated with the hub cannot be a legacy VPC network.
Resources such as Cloud VPN tunnels and VLAN attachments attached to a spoke must all belong to the same VPC network as the hub.
Shared VPC networks have different requirements for hubs and spokes. For details, see the Support for Shared VPC networks section later in this document.
The following considerations also apply to hubs and spokes:
- Only HA VPN tunnels are supported as attachments to spokes. Classic VPN tunnels are not supported.
- When creating HA VPN tunnels that attach to a Network Connectivity Center spoke, creating Google Cloud-to-Google Cloud HA VPN gateways in different regions in the same Google Cloud project is not supported. This is a limitation of HA VPN, not a limitation of Network Connectivity Center.
- Data transfer traffic between sites is best-effort, and there are no bandwidth guarantees.
- Network Connectivity Center is available only in supported locations (some exceptions apply).
Support for VPC Network Peering
You can use VPC Network Peering to peer the network associated with the hub with one or more of your other VPC networks. However, for the network peered with the hub network to send and receive traffic to and from on-premises networks attached to the hub, you must also do the following:
- Use custom route advertisements to announce peer VPC subnets to on-premises networks attached to the hub.
- Enable the import and export of custom routes. This makes routes from on-premises networks attached to the hub visible from the subnets in VPC networks peered with the VPC network that the hub is associated with.
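The interaction of these two settings can be summed up in a minimal model: on-premises routes become visible in the peered network only when the hub's network exports custom routes and the peered network imports them. The function name is hypothetical.

```python
# Sketch of custom-route exchange over VPC Network Peering for routes
# learned from on-premises networks attached to the hub.

def peer_sees_onprem_routes(hub_net_exports, peer_net_imports):
    """Both settings must be enabled for the peered network to learn
    dynamic routes from on-premises networks attached to the hub."""
    return hub_net_exports and peer_net_imports

print(peer_sees_onprem_routes(True, True))   # True: both sides configured
print(peer_sees_onprem_routes(True, False))  # False: import disabled
```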
Support for Shared VPC networks
When using Shared VPC networks, you must create the hub in the host project.
For information about the roles that you can assign to administrators of service projects, see the IAM documentation.
During public preview, commands listed in this guide use the alpha version of the Network Connectivity API and the beta version of the Compute Engine API.
Routing considerations
- Routing prefixes must be advertised exclusively within a hub or exclusively outside of a hub. For example, if the same prefix is announced by two Cloud VPN tunnels, one in a hub and one outside of a hub, data transfer might not occur if best-path selection chooses the tunnel outside of the hub.
- AS-path is used for best-path selection within a single Cloud Router task. Otherwise, only MED is used to prioritize routes. For more information, see the AS-path section in the Cloud Router overview.
Assigning ASNs to spokes
To simplify your configuration, we recommend that you assign ASNs to spokes in the following way:
- Use a single Google autonomous system number (ASN) (router.bgp.asn) on Cloud Router for all spoke resources attached to a single hub. For example, the HA VPN tunnels attached to a spoke would use that Google ASN on the Cloud Router that they use for peering. Follow the recommendations for numbering ASNs in the documentation for creating a Cloud Router.
- Assign a unique peer ASN to each spoke attached to the same hub (router.bgpPeers.asn). Within each spoke, make sure that all peer ASNs are the same, because if two peers advertise the same prefix with different ASNs or AS paths, only one peer's ASN and AS path is readvertised for that prefix.
- We recommend configuring AS path loop detection on your peer routers, although this feature is almost always on by default. In some cases, it can't be disabled; for example, when the peer router is Cloud Router, or possibly when using routing capabilities from other cloud providers.
When AS path loop detection is enabled, if two spokes are configured with the same peer ASN, AS path loop detection on a peer router for one spoke drops all prefix advertisements from the other spoke.
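The drop behavior described above can be sketched with a standard model of BGP AS path loop detection: a router rejects any advertisement whose AS path already contains its own ASN. The function, ASNs, and prefixes are illustrative assumptions.

```python
# Sketch of AS path loop detection: drop routes whose AS path contains
# the receiving router's own ASN.

def accept(local_asn, advertisements):
    """advertisements: list of (prefix, as_path). Keep only loop-free routes."""
    return [(p, path) for p, path in advertisements if local_asn not in path]

# Peers on two different spokes both use ASN 65010. When the hub
# readvertises one spoke's prefix to the other, 65010 is already in the
# AS path, so the receiving peer router drops the route.
advs = [
    ("10.1.0.0/16", [65000, 65010]),  # learned via the other spoke's peer: dropped
    ("10.9.0.0/16", [65000, 65020]),  # distinct peer ASN: accepted
]
print(accept(65010, advs))  # only 10.9.0.0/16 survives
```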
In the following example, peer routers in the following spokes advertise the following prefixes with different ASNs:
- Spoke A: one peer router, using ASN 65001, advertises prefixes 10.1.0.0/16 and 10.2.0.0/16.
- Spoke A: another peer router, using ASN 65002, advertises prefixes 10.1.0.0/16 and 10.3.0.0/16.
- Spoke B: a peer router, using ASN 65003, advertises prefixes 10.1.0.0/16 and 10.4.0.0/16.
Cloud Router then advertises the following prefixes to a peer on another spoke:
- 10.1.0.0/16, but the AS path could begin with any of these ASNs: 65001, 65002, or 65003
- 10.2.0.0/16, with an AS path beginning with 65001
- 10.3.0.0/16, with an AS path beginning with 65002
- 10.4.0.0/16, with an AS path beginning with 65003
By making sure that each prefix is advertised by only a single spoke, and that all peers within a spoke use the same ASN, you avoid this ambiguity.
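The ambiguity in the example can be made concrete with a small model that records every candidate origin ASN per prefix. Which candidate's AS path the hub actually readvertises isn't determined by the configuration, which is exactly the problem; this sketch just surfaces where multiple candidates exist.

```python
# Sketch of the readvertisement ambiguity: when several peers advertise the
# same prefix with different ASNs, only one peer's AS path is readvertised,
# and the configuration doesn't determine which.
from collections import defaultdict

def candidate_as_paths(advertisements):
    """advertisements: list of (prefix, peer_asn) tuples."""
    candidates = defaultdict(set)
    for prefix, peer_asn in advertisements:
        candidates[prefix].add(peer_asn)
    return {p: sorted(asns) for p, asns in candidates.items()}

advs = [
    ("10.1.0.0/16", 65001), ("10.2.0.0/16", 65001),
    ("10.1.0.0/16", 65002), ("10.3.0.0/16", 65002),
    ("10.1.0.0/16", 65003), ("10.4.0.0/16", 65003),
]
# 10.1.0.0/16 has three candidate origin ASNs, so its readvertised
# AS path is ambiguous; the other prefixes each have exactly one.
print(candidate_as_paths(advs))
```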
- If there are duplicate route advertisements from multiple spokes for the same subnets with the same priority, Cloud Router uses ECMP to distribute traffic across all next hops. In this case, Interconnect connections receive more traffic than Cloud VPN connections, which receive more traffic than VMs acting as router appliance instances.
- Known issue. If there are duplicate route advertisements from resources in participating spokes, such as HA VPN tunnels, and from similar resources outside spokes, then the traffic in participating spokes might use ECMP to all available next hops. This happens even if the next hops aren't participating hubs or spokes themselves. This behavior will be fixed in a subsequent version of Network Connectivity Center.
- For an example of how to configure route advertisements when one of your redundant Interconnect connections is to an unsupported location, see Optimal route advertisement for Network Connectivity Center.
- Make sure that BGP sessions for HA VPN tunnels advertise identical IP address ranges.
- BGP attribute support is as follows:
- Propagation of the AS path and MED to hybrid attachments is supported.
- BGP communities are not supported.
What's next
- To create hubs and spokes, see Working with hubs and spokes.
- To work through a tutorial, see Connecting two branch offices using Cloud VPN spokes.
- To view a list of partners whose solutions are integrated with Network Connectivity Center, see Network Connectivity Center partners.