Network Connectivity Center overview

Network Connectivity Center uses a hub-and-spoke model for network connectivity management in Google Cloud. The hub resource reduces operational complexity through a simple, centralized connectivity management model. The hub is paired with Google's network to deliver reliable connectivity on demand.

For definitions of hub and spoke, see the Hubs and spokes section later in this document.

Network Connectivity Center enables you to connect your on-premises networks to each other by using Google's network for data transfer. On-premises networks can consist of on-premises data centers and branch or remote offices.

On-premises networks connect to a Network Connectivity Center hub through spokes that have supported Google Cloud resources attached to them. For example, a spoke can contain the HA VPN tunnels of a Cloud VPN gateway that is near the on-premises network.

The following diagram shows several different types of spoke resources connected to a Network Connectivity Center hub. The hub is associated with a VPC network.

Figure: Network Connectivity Center concept.

Data transfer over Google's network

Network Connectivity Center supports connecting different enterprise sites outside of Google Cloud by using Google's network as a wide area network (WAN). This type of traffic is referred to as data transfer traffic.

Outside sites can consist of, for example, branch office networks, private data centers, and workloads in other cloud providers. These sites connect to a Network Connectivity Center hub by using existing cloud hybrid connectivity resources such as Cloud VPN, Dedicated Interconnect, Partner Interconnect, or select Router appliance partners.

Using Network Connectivity Center gives you access to the global reach and reliability of Google's network, so your enterprise benefits from Google's reliability and traffic engineering practices.

Figure: Data transfer over Google's network.

How it works

The following sections describe how Network Connectivity Center works and its components.

Hubs and spokes

Network Connectivity Center consists of hub and spoke resources. You can add one or more labels to a hub or spoke resource to identify it.

Hub

A hub is a global Google Cloud resource that supports multiple attached spokes. It provides a simple way to connect spokes together to enable data transfer across them. A hub can provide data transfer between different on-premises locations and a Virtual Private Cloud (VPC) network through its attached spokes.

The hub resource reduces operational complexity by using a simple, centralized connectivity management model. A hub, combined with Google's network, delivers reliable connectivity on demand.
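
For illustration, the following sketch shows how a hub might be created with the gcloud CLI during the preview. The hub name and description are placeholders, and the alpha command group reflects the API version described later in API considerations; check the gcloud reference for the exact syntax.

```
# Create a Network Connectivity Center hub (alpha command group during preview).
# "my-hub" and the description are placeholder values.
gcloud alpha network-connectivity hubs create my-hub \
    --description="Hub for data transfer between on-premises sites"

# Confirm that the hub exists.
gcloud alpha network-connectivity hubs describe my-hub
```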

Spoke

A spoke is a Google Cloud network resource attached to a hub; you must create the hub before you can create its spokes. A spoke routes traffic to remote network address blocks and enables the connection of multiple remote networks.

Each spoke can have only one resource type associated with it. For resource types, see the next section.
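
As a sketch only, a spoke with two existing HA VPN tunnels attached might be created as follows. The spoke name, hub, region, and tunnel names are placeholders, and the flag spelling (--vpn-tunnel) is an assumption; see the gcloud reference for the exact command.

```
# Create a spoke on the hub and attach two existing HA VPN tunnels to it.
# All names and the region are placeholder values, and the --vpn-tunnel
# flag spelling is an assumption.
gcloud alpha network-connectivity spokes create vpn-spoke \
    --hub=my-hub \
    --region=us-central1 \
    --vpn-tunnel=tunnel-1,tunnel-2
```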

Spoke resource types

Network Connectivity Center supports attaching the following Google Cloud resources to spokes. You can have only one resource type within a spoke, but you can attach multiple instances of the same resource type to the same spoke.

  • HA VPN tunnels
  • VLAN attachments
  • Router appliance instances that you or select partners deploy within Google Cloud

You configure a router appliance instance as a BGP peer of a Cloud Router. You can create a router appliance instance by configuring a Compute Engine VM that runs an image of your choice, such as a third-party network virtual appliance, and enabling BGP peering with Cloud Router.
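
For illustration, a router appliance VM might be created as follows. The name, zone, machine type, image, subnet, and IP address are placeholders; in practice you would usually run a third-party network virtual appliance image instead of a plain Linux image.

```
# Create a VM to act as a router appliance instance.
# --can-ip-forward lets the VM forward traffic that isn't addressed to it.
# The name, zone, machine type, image, and addresses are placeholder values.
gcloud compute instances create router-appliance-1 \
    --zone=us-central1-a \
    --machine-type=n1-standard-4 \
    --can-ip-forward \
    --image-family=debian-11 \
    --image-project=debian-cloud \
    --network-interface=subnet=my-subnet,private-network-ip=10.0.1.10
```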

High availability for spoke resources (required)

Each resource type has different requirements for high availability. Review the following sections for the resource types that you plan to use.

High availability for Cloud Interconnect

For Network Connectivity Center to work correctly with Cloud Interconnect resources, you must configure multiple Interconnect connections, each in a separate edge availability domain.
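
With Partner Interconnect, for example, you choose the edge availability domain when you create each VLAN attachment. The following sketch shows one way this might look; the attachment names, region, and router are placeholders, and the exact flags are assumptions based on the gcloud compute interconnects attachments reference.

```
# Create two Partner Interconnect VLAN attachments in separate
# edge availability domains (names, region, and router are placeholders).
gcloud compute interconnects attachments partner create attachment-1 \
    --region=us-central1 \
    --router=my-router \
    --edge-availability-domain=availability-domain-1

gcloud compute interconnects attachments partner create attachment-2 \
    --region=us-central1 \
    --router=my-router \
    --edge-availability-domain=availability-domain-2
```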

For detailed information about how to configure Cloud Interconnect resources for high availability, see the Dedicated Interconnect and Partner Interconnect documentation.

High availability for Cloud VPN

For Network Connectivity Center to work correctly with Cloud VPN resources, you must configure multiple HA VPN gateway interfaces and tunnels to achieve a 99.99% SLA. For guidance, see the Cloud VPN overview.
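
For example, a redundant HA VPN configuration might be created as in the following sketch. All names, the network, the region, and the shared secret are placeholders; see the Cloud VPN documentation for the full procedure.

```
# Create an HA VPN gateway. HA VPN gateways always have two interfaces (0 and 1).
# The names, network, and region below are placeholders.
gcloud compute vpn-gateways create ha-vpn-gw-1 \
    --network=my-network \
    --region=us-central1

# Create a tunnel on interface 0. To meet the 99.99% SLA, create a second
# tunnel the same way on interface 1 (for example, tunnel-2 with --interface=1).
gcloud compute vpn-tunnels create tunnel-1 \
    --vpn-gateway=ha-vpn-gw-1 \
    --peer-external-gateway=on-prem-peer-gw \
    --peer-external-gateway-interface=0 \
    --interface=0 \
    --ike-version=2 \
    --shared-secret=SHARED_SECRET \
    --router=my-router \
    --region=us-central1
```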

High availability for Router appliance

For Network Connectivity Center to work correctly with router appliance instances attached to a spoke, you must do the following:

  • If you place all your router appliance instances in a single spoke, use equal-cost multipath (ECMP) to advertise the same set of prefixes from two router appliance instances. To advertise different prefixes for each spoke, add each router appliance instance to a different spoke.
  • Don't create a cross-region configuration in a single spoke; it isn't supported.

ECMP is the result of advertising the same prefix or prefixes, with the same MEDs and AS path as applicable, from two or more router appliance instances. The guidance about route selection in VPC networks applies to router appliance instances as it does to other Google Cloud resources.
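
To illustrate the ECMP case, the following sketch adds one BGP peer per router appliance instance to the same Cloud Router so that both instances can advertise the same prefixes. All names and addresses are placeholders, the router appliance peer flags (--instance, --instance-zone) are assumptions, and the sketch assumes that a Cloud Router interface already exists for each peer.

```
# Add a BGP peer for each router appliance instance on the same Cloud Router.
# If both instances advertise the same prefixes with the same MED and AS path,
# traffic is distributed across them by using ECMP.
gcloud compute routers add-bgp-peer my-router \
    --region=us-central1 \
    --peer-name=ra-peer-1 \
    --interface=ra-if-1 \
    --peer-ip-address=10.0.1.10 \
    --peer-asn=65001 \
    --instance=router-appliance-1 \
    --instance-zone=us-central1-a

gcloud compute routers add-bgp-peer my-router \
    --region=us-central1 \
    --peer-name=ra-peer-2 \
    --interface=ra-if-2 \
    --peer-ip-address=10.0.1.11 \
    --peer-asn=65001 \
    --instance=router-appliance-2 \
    --instance-zone=us-central1-b
```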

For detailed information about how to configure router appliance instances for high availability, see requirements for 99.9% availability.

Route exchange

Network Connectivity Center provides full mesh connectivity among the spokes attached to a hub by propagating the routes that each spoke learns to all of the other spokes attached to the same hub.

Figure: Network Connectivity Center topology.

In the preceding topology, Spokes A, B, and C are attached to the same hub and use Cloud Router to advertise prefixes into the hub.

To enable cross-region site-to-site traffic through hubs and spokes, you must enable global routing in the VPC network associated with the hub and spokes. If all spokes are located in the same region, site-to-site traffic works without global routing.
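
For example, you can set the VPC network's dynamic routing mode to global with the following command (the network name is a placeholder):

```
# Set the dynamic routing mode of the VPC network to global so that
# routes learned in one region are usable from other regions.
gcloud compute networks update my-network --bgp-routing-mode=global
```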

The following list shows how the hub propagates prefix advertisements to the other spokes. In this example, Spoke A advertises 10.2.0.0/16, Spoke B advertises 10.3.0.0/16, and Spoke C advertises 10.4.0.0/16.

  • Routes exported to Spoke A: 10.3.0.0/16 is reachable through Spoke B, and 10.4.0.0/16 is reachable through Spoke C.
  • Routes exported to Spoke B: 10.2.0.0/16 is reachable through Spoke A, and 10.4.0.0/16 is reachable through Spoke C.
  • Routes exported to Spoke C: 10.2.0.0/16 is reachable through Spoke A, and 10.3.0.0/16 is reachable through Spoke B.

Route conflicts

When determining the order of routes, consider that routes installed by a Network Connectivity Center hub are treated as dynamic routes. For more information about how route conflicts are resolved, see Route applicability and order in the VPC documentation.

For best-path selection of received advertisements, Google Cloud uses MED to determine priority. For more information, see the Routing considerations section.

Compatibility with existing network configurations

Network Connectivity Center only affects communication between spokes. It does not affect communication between Cloud VPN or Cloud Interconnect and VPC networks.

  • All VMs in the same VPC network still learn any routes that a Cloud VPN tunnel or Interconnect connection advertises.
  • All subnet routes in the same VPC network are still advertised to all Cloud VPN tunnels and Interconnect connections in that VPC network.
  • The preceding route propagation still happens across networks that use VPC Network Peering. For exceptions, see Routing considerations.

In addition, Network Connectivity Center does not affect how routes are advertised across networks that use VPC Network Peering. All routes advertised from a Cloud VPN tunnel or VLAN attachment can still be exported to a peered network. All subnet routes from the peered network get advertised to the on-premises network through a Cloud VPN tunnel or through a VLAN attachment.

Considerations

This section describes general considerations to review before setting up Network Connectivity Center, as well as considerations that apply to the resources attached to a hub and to routing through hubs and spokes.

For Network Connectivity Center quotas and limits, see Quotas and limits.

Hubs, spokes, and VPC networks

When you add the first spoke to a hub, that hub is associated with the project and network for the spoke. There can be only one instance of a hub for a VPC network. The VPC network associated with the hub cannot be a legacy VPC network.

Resources such as Cloud VPN tunnels and VLAN attachments attached to a spoke must all belong to the same VPC network as the hub.

Shared VPC networks support hubs and spokes differently; see Support for Shared VPC networks later in this document.

The following considerations also apply to hubs and spokes:

  • If you want to exchange routes between spokes in multiple regions, the VPC network where your spoke resources reside must have its dynamic routing mode set to global.
  • Only HA VPN tunnels are supported as attachments to spokes. Classic VPN tunnels are not supported.
  • When creating HA VPN tunnels that attach to a Network Connectivity Center spoke, creating Google Cloud-to-Google Cloud HA VPN gateways in different regions in the same Google Cloud project is not supported. This is a limitation of HA VPN, not a limitation of Network Connectivity Center.
  • Data transfer traffic between sites is best-effort, and there are no bandwidth guarantees.
  • Network Connectivity Center is available only in supported locations (some exceptions apply).

Support for VPC Network Peering

You can use VPC Network Peering to peer the network associated with the hub with one or more of your other VPC networks. However, for a network peered with the hub's network to send and receive traffic to and from the on-premises networks attached to the hub, you must also do the following (see the sketch after this list):

  1. Use custom route advertisements to announce peer VPC subnets to on-premises networks attached to the hub.
  2. Enable the import and export of custom routes. This makes routes from on-premises networks attached to the hub visible to the subnets of the VPC networks that are peered with the hub's network.
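
The following sketch shows one way these two steps might look with gcloud. The peering, network, router, region, and prefix values are placeholders; adjust them for your environment.

```
# Step 1: Advertise the peer VPC's subnet ranges to on-premises networks
# by adding them as custom route advertisements on the Cloud Router.
gcloud compute routers update my-router \
    --region=us-central1 \
    --advertisement-mode=custom \
    --set-advertisement-groups=all_subnets \
    --set-advertisement-ranges=192.168.10.0/24

# Step 2: Enable the import and export of custom routes on the peering
# (run the equivalent command on the peering in the peer network too).
gcloud compute networks peerings update my-peering \
    --network=my-network \
    --import-custom-routes \
    --export-custom-routes
```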

Support for Shared VPC networks

When using Shared VPC networks, you must create the hub in the host project.
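
For example, a hub for a Shared VPC environment might be created in the host project as in this sketch (the project ID, hub name, and description are placeholders):

```
# Create the hub in the Shared VPC host project.
gcloud alpha network-connectivity hubs create shared-vpc-hub \
    --project=host-project-id \
    --description="Hub for the Shared VPC network"
```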

For information about the Spoke Admin role (roles/networkconnectivity.spokeAdmin) that you can assign to administrators of service projects, see Access control.

API considerations

During public preview, commands listed in this guide use the alpha version of the Network Connectivity API and the beta version of the Compute Engine API.

Routing considerations

  • A given routing prefix must be advertised either within a hub or outside of a hub, but not both. For example, if the same prefix is announced by two Cloud VPN tunnels, one in a hub and one outside of a hub, data transfer might not occur if best-path selection chooses the tunnel outside of the hub.
  • AS-path is used for best-path selection within a single Cloud Router task. Otherwise, only MED is used to prioritize routes. For more information, see the AS-path section in the Cloud Router overview.
  • You must assign ASNs as described in ASN requirements for spoke resources.

Route advertisements

  • If there are duplicate route advertisements from multiple spokes for the same subnets with the same priority, Cloud Router uses ECMP to distribute traffic across all next hops. In this case, Interconnect connections receive more traffic than Cloud VPN connections, which receive more traffic than VMs acting as router appliance instances.
  • Known issue. If there are duplicate route advertisements from resources in participating spokes, such as HA VPN tunnels, and from similar resources outside spokes, then the traffic in participating spokes might use ECMP to all available next hops. This happens even if the next hops aren't participating hubs or spokes themselves. This behavior will be fixed in a subsequent version of Network Connectivity Center.
  • For an example of how to configure route advertisements when one of your redundant Interconnect connections is to an unsupported location, see Optimal route advertisement for Network Connectivity Center.

BGP sessions

  • BGP sessions for HA VPN tunnels advertise identical IP address ranges.
  • BGP attribute support is as follows:
    • Propagation of the AS path and MED to hybrid attachments is supported.
    • BGP communities are not supported.

What's next