Network Connectivity Center overview

Network Connectivity Center is an orchestration framework that provides network connectivity among spoke resources that are connected to a central management resource called a hub. Network Connectivity Center supports two types of spokes:

  • Virtual Private Cloud (VPC) spokes
  • Hybrid spokes, consisting of:
    • HA VPN tunnels
    • Cloud Interconnect VLAN attachments
    • Router appliance VMs

A Network Connectivity Center hub supports either VPC spokes or hybrid spokes, but not both.

With these capabilities, you can do the following:

  • Connect multiple VPC networks to one another. The VPC networks can be located in projects in the same Google Cloud organization or different organizations.
  • Connect an external network to a Google Cloud VPC network by using Router appliance VMs. This approach is known as site-to-cloud connectivity.
  • Use Router appliance VMs to manage connectivity between your VPC networks.
  • Use a Google Cloud VPC network as a wide area network (WAN) to connect networks that are outside of Google Cloud. You can establish connectivity between your external sites by using Cloud VPN tunnels, Cloud Interconnect VLAN attachments, or Router appliance VMs. This approach is known as site-to-site data transfer.

How it works

When a hub uses hybrid spokes located in a single VPC network, you can configure site-to-site data transfer so that dynamic routes whose next hop is a hybrid spoke resource (for example, a Cloud Interconnect VLAN attachment) are advertised to an on-premises network by the BGP sessions of the other hybrid spokes in the VPC network. You can also use hybrid spokes to connect two VPC networks, creating dynamic routes in each network.

When a hub uses VPC spokes, you can configure mesh connectivity among all VPC networks connected to the hub by exchanging subnet routes between the VPC networks.

Spokes

A spoke represents one or more Google Cloud network resources that are connected to a hub. When you create a spoke, you must associate it with at least one supported connectivity resource, which is sometimes called a backing resource.

A spoke can use any of the following Google Cloud resources as its backing resource:

VPC network
  • Connectivity among IPv4 subnet ranges from multiple VPC networks

Router appliance
  • IPv4 site-to-cloud connectivity (all of the appliances linked to from a single spoke must be in the same VPC network)
  • IPv4 site-to-site data transfer (all of the spokes that are connected to the same hub must have all of their backing resources in the same VPC network)
  • IPv4 connectivity between VPC networks

Cloud VPN (HA VPN) tunnels and Cloud Interconnect VLAN attachments
  • IPv4 site-to-site data transfer (all of the Cloud VPN tunnels, VLAN attachments, or both must be in the same VPC network)

VPC spokes

VPC spokes let you connect two or more VPC networks to a hub so that the networks exchange IPv4 subnet routes. VPC spokes attached to a single hub can reference VPC networks in the same project or a different project (including a project in a different organization).

For detailed information about VPC spokes, see VPC spokes overview.
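As a sketch of this workflow with the gcloud CLI (the hub, spoke, project, and network names below are placeholders, not values from this document), creating a hub and attaching a VPC network as a spoke might look like the following:

```shell
# Create a hub to act as the central management resource.
gcloud network-connectivity hubs create my-hub \
    --description="Hub for mesh connectivity between VPC networks"

# Attach a VPC network to the hub as a VPC spoke.
# VPC spokes are global resources, hence the --global flag.
gcloud network-connectivity spokes linked-vpc-network create prod-spoke \
    --hub=my-hub \
    --vpc-network=projects/my-project/global/networks/prod-vpc \
    --global
```

You would repeat the spoke command once per VPC network; after two or more VPC spokes are attached, the hub exchanges IPv4 subnet routes among them.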

Hybrid spokes

A single hybrid spoke can be associated with more than one resource of the same type. For example, a hybrid spoke can reference two or more HA VPN tunnels, but that same hybrid spoke cannot also reference Router appliance VMs or Cloud Interconnect VLAN attachments.

Site-to-site data transfer using hybrid spokes requires that the spokes be located in the same VPC network. For more information, see Site-to-site data transfer overview.
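For illustration (the hub, spoke, tunnel names, and region are placeholders), a hybrid spoke that groups two HA VPN tunnels and enables site-to-site data transfer might be created with the gcloud CLI roughly as follows:

```shell
# Group two HA VPN tunnels into a single hybrid spoke and enable
# site-to-site data transfer. Both tunnels must be in the same
# region and the same VPC network.
gcloud network-connectivity spokes linked-vpn-tunnels create vpn-spoke \
    --hub=my-hub \
    --vpn-tunnels=tunnel-1,tunnel-2 \
    --region=us-east1 \
    --site-to-site-data-transfer
```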

Route exchange with VPC connectivity

Network Connectivity Center VPC spokes support exchanging subnet IPv4 address ranges that use private addresses, excluding privately used public IPv4 addresses. Static and dynamic routes in a spoke VPC network cannot be exchanged with other VPC spokes in the hub.

Because of this constraint, if you need to connect a VPC network to an on-premises network, you must use one of the following options:

  • Create the Cloud VPN tunnels or Cloud Interconnect VLAN attachments in the spoke VPC network itself.
  • Connect the spoke VPC network to another VPC network by using VPC Network Peering, and configure the peering connection as described in VPC spokes and VPC Network Peering.

Use cases

The following sections describe the main Network Connectivity Center use cases.

Connect different VPC networks with Network Connectivity Center

When you attach two or more VPC spokes to a hub, Network Connectivity Center provides IPv4 subnet connectivity among all the VPC networks that are represented by the spokes. Using a hub simplifies the management of large-scale mesh subnet connectivity. See quotas for how many VPC networks can be connected to a hub.

The following diagram shows two VPC spokes.

Connect spokes to a VPC network.

Connect networks using Router appliance VMs

Network Connectivity Center can use Router appliance VMs in the following two IPv4 connectivity scenarios:

  • Connecting a VPC network to an on-premises or other cloud provider network using dynamic routes
  • Connecting two VPC networks to each other using dynamic routes

With this option, Cloud Router manages the BGP sessions for next hop Router appliance VMs.
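As a sketch (the spoke, instance, Cloud Router, ASN, and address values are placeholders, and the commands assume a Cloud Router interface named ra-interface already exists), creating a Router appliance spoke and its BGP session might look like this:

```shell
# Create a hybrid spoke backed by a Router appliance VM, identified
# by its instance URI and internal IP address.
gcloud network-connectivity spokes linked-router-appliances create ra-spoke \
    --hub=my-hub \
    --router-appliance=instance=projects/my-project/zones/us-central1-a/instances/ra-vm,ip=10.0.1.10 \
    --region=us-central1

# Establish a BGP session between the Cloud Router and the
# Router appliance VM (assumes the interface was added earlier).
gcloud compute routers add-bgp-peer my-cloud-router \
    --peer-name=ra-peer \
    --interface=ra-interface \
    --peer-ip-address=10.0.1.10 \
    --peer-asn=65001 \
    --instance=ra-vm \
    --instance-zone=us-central1-a \
    --region=us-central1
```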

Connect an external network to Google Cloud

The following diagram uses a hybrid spoke with a Router appliance VM to connect two VPC networks to an external network. The Router appliance VM has one network interface (NIC) in each VPC network.

Connect an external network to Google Cloud.

For more information about this use case, see Site-to-cloud topologies that use a third-party appliance.

Manage connectivity between VPC networks

The following diagram uses a hybrid spoke with a Router appliance VM running specialized firewall or packet inspection software to connect two VPC networks.

Use a third-party firewall.

For more information, see VPC-to-VPC topology that uses a third-party appliance.

Conduct data transfer over Google's network

Data transfer provides IPv4 connectivity between external networks using a Google Cloud VPC network and hybrid spokes. You can transfer data between multiple on-premises networks or to other cloud networks.

When you create a hybrid spoke, you can enable the data transfer option for that spoke. When data transfer is enabled for hybrid spokes connected to the same hub, the dynamic routes learned by each Router appliance VM, Cloud VPN tunnel, or Cloud Interconnect VLAN attachment are re-advertised to the other VMs, tunnels, or VLAN attachments associated with any hybrid spoke connected to the same hub. Data transfer requires that all hybrid spokes refer to Router appliance VMs, Cloud VPN tunnels, or Cloud Interconnect VLAN attachments in a single VPC network.

For example, suppose you have data centers in New York, Sydney, and Tokyo. After you use supported resources to connect your VPC network to each of these sites, you could create a spoke to represent each network. After you complete this setup, Network Connectivity Center would provide full mesh connectivity between all three sites.

As shown in the following diagram, you can create spokes that rely on connectivity resources such as Cloud VPN, Cloud Interconnect, and Router appliance.

The diagram doesn't show Cross-Cloud Interconnect, but you can also use Cross-Cloud Interconnect VLAN attachments.

Data transfer over Google's network.

For more information about this use case, see Site-to-site data transfer overview.

Considerations

Before setting up Network Connectivity Center, review the following sections.

IP addressing

Network Connectivity Center supports IPv4 addressing. It does not support IPv6. For example:

  • If a spoke has site-to-site data transfer enabled, the resources associated with the spoke support only IPv4 traffic; these spokes cannot exchange IPv6 traffic. This applies to all hybrid spoke types: Router appliance, VLAN attachment, and VPN spokes.

  • Site-to-cloud Router appliance spokes support IPv4 traffic. IPv6 traffic is not supported.

  • When you create a Router appliance VM, the VM's primary internal IPv4 address must be an RFC 1918 address.

  • When VPC spokes contain both IPv4 and IPv6 subnets, only IPv4 subnets are exchanged between them.

Routing

Routes installed by a Network Connectivity Center hub are treated as dynamic routes.

For information about how dynamic routes are handled in comparison with other types of routes, see Applicability and order in the VPC documentation.

Prioritization

All hybrid spoke resources use Cloud Router. For details about the path selection model used by Cloud Router, see AS path prepending and AS path length in the Cloud Router overview.

ASNs

All non-Google peering routers that are associated with a single spoke must use the same ASN when advertising prefixes to the Cloud Router. This is important because, if two peers advertise the same prefix with different ASNs or AS paths, only one peer's ASN and AS path are re-advertised for that prefix. Additionally, different spokes must use different ASNs: if two BGP sessions belong to different spokes, they must have different ASNs.

Also, if you are using the data transfer feature, you must assign ASNs as described in ASN requirements for site-to-site data transfer.

BGP sessions

BGP communities are not supported.

Route advertisement changes when using site-to-site data transfer

When you add a Cloud Interconnect VLAN attachment or Cloud VPN tunnel to a hybrid spoke, Network Connectivity Center updates the corresponding BGP session for the VLAN attachment or Cloud VPN tunnel so that it re-advertises the prefixes learned by the BGP sessions of the other Cloud Interconnect VLAN attachments or Cloud VPN tunnels connected to any of the hub's hybrid spokes that have the site-to-site data transfer option enabled.

Support for other products

The following sections describe how Network Connectivity Center works with other networking products and features.

VPC spokes and VPC Network Peering

Network Connectivity Center VPC spokes exchange only valid subnet IPv4 address ranges that use private addresses. They do not exchange privately used public IPv4 address ranges, IPv6 subnet ranges, or static and dynamic routes:

  • See VPC spokes overview for more information about Network Connectivity Center VPC spokes.
  • See route exchange options in the VPC Network Peering documentation for details about how routes are exchanged using VPC Network Peering.

Even though Network Connectivity Center VPC spokes do not support exchanging static or dynamic routes, a spoke VPC network can still import the static and dynamic routes from another VPC network by using VPC Network Peering. If the other VPC network has dynamic routes with next hop Cloud Interconnect VLAN attachments or Cloud VPN tunnels that connect to an on-premises network, you can connect the spoke VPC network to the on-premises network by using Cloud Router custom route advertisements and VPC Network Peering route exchange options as described in the transit network example of the VPC Network Peering documentation.

Shared VPC networks

When using Shared VPC networks, you must create the hub in the host project.

We recommend granting the Network Connectivity Spoke Admin role (roles/networkconnectivity.spokeAdmin) to administrators of service projects. For details about this role and other Network Connectivity Center roles, see Roles and permissions.
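For example (the project ID and user email below are placeholders), the grant might be made in the host project with the gcloud CLI:

```shell
# Grant the Network Connectivity Spoke Admin role to a service
# project administrator, scoped to the Shared VPC host project.
gcloud projects add-iam-policy-binding host-project \
    --member="user:spoke-admin@example.com" \
    --role="roles/networkconnectivity.spokeAdmin"
```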

Legacy networks

Spoke resources cannot be part of a legacy network.

VPN tunnels

Classic VPN tunnels are not supported.

Data transfer

If you're using data transfer, review the Considerations section in the site-to-site data transfer overview.

Service level agreement

For information about the Network Connectivity Center service level agreement, see Network Connectivity Center Service Level Agreement (SLA).

Pricing

For information about pricing, see Network Connectivity Center pricing.

What's next