Hub-and-spoke network architecture

Last reviewed 2023-07-12 UTC

This document presents two architectural options for setting up a hub-and-spoke network topology in Google Cloud. One option uses the network peering capability of Virtual Private Cloud (VPC), and the other uses Cloud VPN.

Enterprises can separate workloads into individual VPC networks for purposes of billing, environment isolation, and other considerations. However, the enterprise might also need to share specific resources across these networks, such as a shared service or a connection to an on-premises network. In such cases, it can be useful to place the shared resource in a hub network and to attach the other VPC networks as spokes. The following diagram shows an example of the resulting hub-and-spoke network, sometimes called a star topology.

Hub-and-spoke network schema.

In this example, separate spoke VPC networks are used for the workloads of individual business units within a large enterprise. Each spoke VPC network is connected to a central hub VPC network that contains shared services and can serve as the sole entry point to the cloud from the enterprise's on-premises network.

Architecture using VPC Network Peering

The following diagram shows a hub-and-spoke network using VPC Network Peering. VPC Network Peering enables communication using internal IP addresses between resources in separate VPC networks. Traffic stays on Google's internal network and does not traverse the public internet.

Hub-and-spoke architecture using VPC Network Peering
  • In this architecture, the resources that need network-level isolation use separate spoke VPC networks. For example, the architecture shows a Compute Engine VM in the spoke-1 VPC network. The spoke-2 VPC network has a Compute Engine VM and a Google Kubernetes Engine (GKE) cluster.
  • Each spoke VPC network in this architecture has a peering relationship with a central hub VPC network.
  • VPC Network Peering does not constrain VM bandwidth. Each VM can send traffic at the full bandwidth of that individual VM.
  • Each spoke VPC network has a Cloud NAT gateway for outbound communication with the internet.
  • VPC Network Peering does not provide for transitive route announcements. Unless an additional mechanism is used, the VM in the spoke-1 network cannot send traffic to the VM in the spoke-2 network. To work around this non-transitivity constraint, the architecture shows the option of using Cloud VPN to forward routes between the networks. In this example, VPN tunnels between the spoke-2 VPC network and the hub VPC network enable reachability to the spoke-2 VPC network from the other spokes. If you need connectivity between only a few specific spokes, you can peer those VPC network pairs directly.
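The non-transitivity constraint can be made concrete with a small sketch. The following Python model is illustrative only: the network names and the single-hub assumption mirror the example above, and the model simplifies how Cloud VPN custom route exchange actually works.

```python
# Sketch: reachability in a hub-and-spoke topology where VPC Network
# Peering is non-transitive. Network names mirror the example above
# and are illustrative, not real API identifiers.

# Direct peering relationships. A peering connection carries traffic
# only between the two peered networks; it does not forward onward.
peerings = {("hub", "spoke-1"), ("hub", "spoke-2")}

def peered(a, b):
    """True if networks a and b have a direct peering connection."""
    return (a, b) in peerings or (b, a) in peerings

def reachable(src, dst, vpn_transit_via=None):
    """True if src can reach dst.

    Peering alone is a single hop. If VPN tunnels to the hub exchange
    routes (the workaround described above), the hub can forward
    spoke-to-spoke traffic; this is modeled by vpn_transit_via="hub".
    """
    if peered(src, dst):
        return True
    if vpn_transit_via == "hub" and peered(src, "hub") and peered("hub", dst):
        return True
    return False

# Peering alone: spoke-to-spoke traffic is blocked.
print(reachable("spoke-1", "spoke-2"))                         # False
# With Cloud VPN tunnels via the hub, the spokes become reachable.
print(reachable("spoke-1", "spoke-2", vpn_transit_via="hub"))  # True
```

The model shows why directly peering specific spoke pairs also works: adding `("spoke-1", "spoke-2")` to the peering set makes the first call return `True` without any VPN involvement.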

Architecture using Cloud VPN

The scalability of a hub-and-spoke topology that uses VPC Network Peering is subject to VPC Network Peering limits. In addition, as noted earlier, VPC Network Peering connections don't allow transitive traffic beyond the two VPC networks that are in a peering relationship. The following diagram shows an alternative hub-and-spoke network architecture that uses Cloud VPN to overcome the limitations of VPC Network Peering.

Hub-and-spoke architecture using Cloud VPN
  • The resources that need network-level isolation use separate spoke VPC networks.
  • IPsec VPN tunnels connect each spoke VPC network to a hub VPC network.
  • A DNS private zone in the hub network and a DNS peering zone and private zone exist in each spoke network.
  • Bandwidth between networks is limited by the total bandwidths of the tunnels.
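As a rough illustration of the bandwidth constraint, the following sketch estimates the aggregate spoke-to-hub bandwidth from the number of tunnels. The 3 Gbps figure is the commonly documented per-tunnel Cloud VPN limit (ingress plus egress combined); verify it against the current Cloud VPN documentation before sizing a design.

```python
# Sketch: upper bound on spoke<->hub bandwidth when connectivity uses
# Cloud VPN. Assumes the commonly documented ceiling of ~3 Gbps per
# tunnel (ingress plus egress); treat the figure as an assumption and
# check current Cloud VPN quotas and limits.

GBPS_PER_TUNNEL = 3  # assumed per-tunnel ceiling, ingress + egress

def spoke_bandwidth_gbps(num_tunnels: int) -> int:
    """Aggregate bandwidth ceiling across all tunnels for one spoke."""
    return num_tunnels * GBPS_PER_TUNNEL

# Two tunnels per spoke, as in a typical HA VPN configuration:
print(spoke_bandwidth_gbps(2))  # 6
```

Contrast this with VPC Network Peering, where each VM can use the full bandwidth allowed by its machine type, which can be substantially higher than a small number of tunnels provides.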

When choosing between the two architectures discussed so far, consider the relative merits of VPC Network Peering and Cloud VPN:

  • VPC Network Peering has the non-transitivity constraint, but it supports the full network bandwidth that each VM's machine type and other factors allow. You can work around the non-transitivity constraint by adding VPN tunnels.
  • Cloud VPN allows transitive routing, but the total bandwidth (ingress plus egress) is limited to the combined bandwidth of the tunnels.

Design alternatives

Consider the following architectural alternatives for interconnecting resources that are deployed in separate VPC networks in Google Cloud:

Inter-spoke connectivity using a gateway in the hub VPC network
To enable inter-spoke communication, you can deploy a network virtual appliance (NVA) or a next-generation firewall (NGFW) in the hub VPC network to serve as a gateway for spoke-to-spoke traffic. See Centralized network appliances on Google Cloud.
VPC Network Peering without a hub
If you don't need centralized control over on-premises connectivity or shared services across VPC networks, then a hub VPC network isn't necessary. You can set up peering for the VPC network pairs that require connectivity, and manage the interconnections individually. Be aware of the limits on the number of peering connections per VPC network.
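The scaling difference between a full mesh of peerings and a hub-and-spoke layout can be sketched with simple arithmetic. The per-network peering limit varies and changes over time, so the numbers below are illustrative; also note that with peering, a hub-and-spoke layout does not give spokes connectivity to each other through the hub.

```python
# Sketch: how many VPC Network Peering connections each layout needs.
# Useful when checking a design against per-network peering limits;
# the limits themselves vary, so look them up rather than hard-coding.

def full_mesh_peerings(n: int) -> int:
    """Every pair of n networks peers directly: n*(n-1)/2 connections.
    Gives all pairs direct connectivity."""
    return n * (n - 1) // 2

def hub_and_spoke_peerings(n_spokes: int) -> int:
    """One peering per spoke to the hub. Note: because peering is
    non-transitive, spokes cannot reach each other through the hub."""
    return n_spokes

print(full_mesh_peerings(10))      # 45
print(hub_and_spoke_peerings(10))  # 10
```

A full mesh grows quadratically, which is why meshing only the specific pairs that need connectivity, as suggested above, keeps you within the limits longer.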
Multiple Shared VPC networks
Create a Shared VPC network for each group of resources that you want to isolate at the network level. For example, to separate the resources used for production and development environments, create a Shared VPC network for production and another Shared VPC network for development. Then, peer the two VPC networks to enable inter-VPC network communication. Resources in individual projects for each application or department can use services from the appropriate Shared VPC network.

For connectivity between the VPC networks and your on-premises network, you can use either separate VPN tunnels for each VPC network, or separate VLAN attachments on the same Dedicated Interconnect connection.
