VPC spokes overview

This page provides an overview of Virtual Private Cloud (VPC) spokes support in Network Connectivity Center.

VPC spokes

Network Connectivity Center provides inter-VPC network connectivity at scale with its support for VPC spokes. VPC spokes reduce the operational complexity of managing individual pair-wise VPC Network Peering connections by using a centralized connectivity management model. VPC spokes can export and import all subnet routes from other spoke VPCs on a Network Connectivity Center hub, which ensures full connectivity between all workloads that reside in these VPC networks. Inter-VPC network traffic stays within the Google Cloud network and does not travel through the internet, which helps to ensure privacy and security.

VPC spokes can be in the same project and organization as the Network Connectivity Center hub, or in a different project and organization. A VPC spoke can be connected to only one hub at a time.

For information about how to create a VPC spoke, see Create a VPC spoke.
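
As a quick illustration, the following is a minimal sketch of creating a hub and attaching an existing VPC network to it as a spoke by using the Google Cloud CLI. The hub, project, and network names are placeholders; see Create a VPC spoke for the authoritative steps.

# Create a hub (placeholder names throughout).
gcloud network-connectivity hubs create my-hub \
    --project=my-project

# Attach an existing VPC network to the hub as a VPC spoke.
gcloud network-connectivity spokes linked-vpc-network create my-vpc-spoke \
    --hub=my-hub \
    --global \
    --vpc-network=projects/my-project/global/networks/my-network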

Comparison to VPC Network Peering

VPC spokes support medium-to-large enterprise requirements by providing IPv4 and IPv6 subnet route connectivity, as well as IPv4 dynamic route connectivity through hybrid spokes.

A VPC network can simultaneously be a Network Connectivity Center VPC spoke and connected to another VPC network using VPC Network Peering, provided the peered VPC network isn't a VPC spoke itself.

Keep the following in mind when using Network Connectivity Center VPC spokes and VPC Network Peering:

  • Peering subnet routes in a VPC spoke aren't exported to the hub.

  • Network Connectivity Center doesn't provide connectivity to resources in a VPC network that's connected to a VPC spoke using VPC Network Peering, with one exception: a producer VPC network that is attached to the hub as a producer VPC spoke is reachable from other VPC networks on the hub (Preview).

The following table compares VPC Network Peering with VPC spokes.

  • VPC networks
    • VPC Network Peering: Peerings per VPC network
    • VPC spokes: Active VPC spokes per hub
  • Subnet ranges (subnet routes)
    • VPC Network Peering: Subnetwork ranges per peering group
    • VPC spokes: Subnetwork routes per route table
  • Static and dynamic routes
    • VPC Network Peering: Static routes per peering group; dynamic routes per region per peering group
    • VPC spokes: Unique dynamic route prefixes per hub route table per region; static route exchange is not supported
  • Export filters
    • VPC Network Peering: Specific filters are not supported; see Route exchange options in the VPC Network Peering documentation
    • VPC spokes: Up to 16 CIDR ranges supported per VPC spoke
  • Inter-VPC NAT
    • VPC Network Peering: Not supported
    • VPC spokes: Supported
  • Private Service Connect connection propagation
    • VPC Network Peering: Not supported
    • VPC spokes: Supported (Preview)
  • Producer VPC spoke connectivity from other VPC networks
    • VPC Network Peering: Not supported
    • VPC spokes: Supported (Preview)
  • IP addressing
    • VPC Network Peering: Internal IPv4 addresses, including private IPv4 addresses and privately used public IPv4 addresses (see Valid IPv4 ranges); internal and external IPv6 addresses
    • VPC spokes: Private internal IPv4 addresses only, excluding privately used public IPv4 addresses (see Valid IPv4 ranges); internal and external IPv6 addresses (Preview)
  • IP address families
    • VPC Network Peering: Exchange only IPv4 subnet ranges, or exchange both IPv4 and IPv6 subnet ranges
    • VPC spokes: Exchange only IPv4 subnet ranges, exchange both IPv4 and IPv6 subnet ranges, or exchange only IPv6 subnet ranges
  • Performance and throughput (compared to other VPC connectivity mechanisms)
    • VPC Network Peering: Lowest latency, highest throughput (VM-VM equivalent)
    • VPC spokes: Lowest latency, highest throughput (VM-VM equivalent)

VPC spokes in a different project from a hub

By using Network Connectivity Center, you can attach VPC networks as VPC spokes to a single hub in a different project, including a project in a different organization. This lets you connect VPC networks across multiple projects and organizations at scale.

You can be one of the following types of users:

  • A hub administrator who owns a hub in one project
  • A VPC network spoke administrator or network administrator who wants to add their VPC network in a different project as a spoke to the hub

The hub administrator controls who can create a VPC spoke in a different project associated with their hub by using Identity and Access Management (IAM) permissions. The VPC network spoke administrator creates a spoke in a different project from the hub. These spokes are inactive upon creation. The hub administrator must review each proposed spoke and either accept or reject it. If the hub administrator accepts the spoke, the spoke becomes active.

Network Connectivity Center always automatically accepts spokes created in the same project as the hub.

For detailed information about how to manage hubs that have VPC spokes in a different project than the hub, see Hub administration overview. For detailed information for spoke administrators, see Spoke administration overview.
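
As an illustration, a hub administrator's review flow might look like the following sketch. The accept-spoke command name and flags are assumptions based on the workflow described in this section; consult the Hub administration overview for the exact syntax. All names are placeholders.

# List spokes, including inactive spokes proposed from other projects.
gcloud network-connectivity spokes list --global

# Accept a proposed spoke so that it becomes active (command name assumed).
gcloud network-connectivity hubs accept-spoke my-hub \
    --spoke=projects/other-project/locations/global/spokes/their-vpc-spoke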

VPC connectivity with export filters

Network Connectivity Center lets you limit connectivity so that other spokes can reach only a subset of the subnetworks in a spoke VPC network. You can limit connectivity as follows:

  • For IPv4 subnet ranges:
    • You can configure the spoke to either advertise all of its IPv4 subnet ranges or none of its IPv4 subnet ranges.
    • You can specify IPv4 address ranges to exclude from advertisement, establish a list of CIDR ranges that can be advertised from the VPC network, or both. If you specify only a list of permitted CIDR ranges, all ranges other than the permitted ones are blocked.
  • For IPv6 subnet ranges (Preview):
    • You can configure the spoke to either advertise all of its IPv6 subnet ranges or none of its IPv6 subnet ranges.

You can use export filters to configure VPC spokes to exchange only IPv4 subnet ranges, only IPv6 subnet ranges, or both IPv4 and IPv6 subnet ranges. Consider a spoke whose VPC network has a mix of subnet stack types. If you configure the spoke to export only IPv6 subnet ranges, then IPv6 ranges from dual-stack and IPv6-only subnets are exchanged, but IPv4 subnet ranges from IPv4-only and dual-stack subnets aren't exchanged.
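
For example, a spoke that should exchange only IPv6 subnet ranges could be configured with an include export filter, as in the following sketch. The ALL_IPV6_RANGES value is described in the Include export ranges section that follows; all resource names are placeholders.

# Sketch: a spoke that exports only IPv6 subnet ranges (Preview).
gcloud network-connectivity spokes linked-vpc-network create v6-spoke \
    --hub=my-hub \
    --global \
    --vpc-network=projects/my-project/global/networks/dual-stack-net \
    --include-export-ranges=ALL_IPV6_RANGES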

Exclude export ranges

You can keep an IPv4 address range from being advertised by using the --exclude-export-ranges flag in the Google Cloud CLI or the excludeExportRanges field in the API. Any IPv4 address ranges that match the specified range are excluded from being exported to the hub. This filtering is useful when you have subnets that need to stay private within the VPC network or that might overlap with other subnets in the hub route table.
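
For example, the following sketch excludes a single range when creating a spoke; the resource names and the 192.168.10.0/24 range are placeholders.

# Sketch: keep one subnet range from being exported to the hub.
gcloud network-connectivity spokes linked-vpc-network create my-vpc-spoke \
    --hub=my-hub \
    --global \
    --vpc-network=projects/my-project/global/networks/my-network \
    --exclude-export-ranges=192.168.10.0/24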

Include export ranges

You can establish which IP address ranges are permitted to be advertised from a VPC spoke by using the --include-export-ranges flag in the Google Cloud CLI or the includeExportRanges field in the API. You can specify the following:

  • To advertise all IPv4 subnet ranges, you can specify ALL_PRIVATE_IPV4_RANGES.
  • To advertise only specific IPv4 subnet ranges, you can specify a list of CIDR ranges.
  • To advertise all IPv6 subnet ranges, you can specify ALL_IPV6_RANGES.

For IPv4 address ranges, you can establish more precise connectivity by using an include export filter alongside the exclude export filter. Together, these filters determine whether a particular subnet range can be advertised from the VPC network.

Considerations

Consider the following when using the exclude and include export ranges filters:

  • The include ranges must be mutually exclusive, which means that the include ranges must not overlap. For example, suppose that there are three IP address ranges:

    Range 1: 10.100.64.0/18

    Range 2: 10.100.250.0/21

    Range 3: 10.100.100.0/22

    Range 1 and range 2 are valid include ranges because they don't overlap. However, range 3 falls within range 1, so the two ranges overlap and range 3 is invalid.

  • Because Network Connectivity Center already makes exclude export filters available in the network configuration policy, both the include and exclude export filters affect the valid network configuration CIDR ranges. When both filters are used, the include IP address ranges must be a superset of the exclude IP address ranges.

  • By default, all VPC connectivity policies have an include CIDR range of 0.0.0.0/0, which means that if you don't specify the include filter when creating the VPC spoke, Network Connectivity Center sets the default include range to all the valid private IPv4 addresses as defined in Valid IPv4 ranges.

  • To refine an include range, you can add multiple exclude ranges, as shown in the sketch after this list. For example, if you specify 10.1.0.0/16 as an include range and 10.1.100.0/24 and 10.1.200.0/24 as the exclude ranges, the combination of both filters produces refined connectivity: the effective include range covers 10.1.0.0/24 through 10.1.99.0/24, 10.1.101.0/24 through 10.1.199.0/24, and 10.1.201.0/24 through 10.1.255.0/24.

  • Existing subnet ranges continue to work as expected. However, creating a new subnet range that conflicts with the include and exclude ranges results in an error.
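
The following is a minimal sketch of the refinement example above, combining one include range with two exclude ranges. The resource names are placeholders, and the comma-separated list syntax for the flags is an assumption; check the gcloud reference for the exact format.

# Sketch: include 10.1.0.0/16 and carve out two /24 ranges.
gcloud network-connectivity spokes linked-vpc-network create refined-spoke \
    --hub=my-hub \
    --global \
    --vpc-network=projects/my-project/global/networks/my-network \
    --include-export-ranges=10.1.0.0/16 \
    --exclude-export-ranges=10.1.100.0/24,10.1.200.0/24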

Invalid new subnet range examples

The following examples show invalid subnet ranges:

  • Overlap with exclude range: Suppose that there are the following IP address ranges.

    Include range: 10.0.0.0/8

    Exclude range 4: 10.1.1.0/24

    Subnet range 4: 10.1.0.0/16

    In this case, the include range contains subnet range 4. However, subnet range 4 is a superset of exclude range 4, so subnet range 4 is invalid.

  • Overlap with include range: Suppose that there are the following IP address ranges.

    Include range: 10.1.1.0/24

    Subnet range 5: 10.1.0.0/16

    Subnet range 5 extends beyond the include range, so it is invalid.

When you enter an invalid subnet range during the subnet creation process, you get an Invalid IPCidrRange error, similar to the following:

Invalid IPCidrRange: CIDR_RANGE conflicts with existing subnetwork SUBNET_RANGE in region REGION

Preset topologies

Network Connectivity Center lets you specify the desired connectivity configuration across all VPC spokes. You can choose one of the following two preset topologies:

  • Mesh topology
  • Star topology

When you create a hub by using the gcloud network-connectivity hubs create command, choose the preset mesh or star topology. If you don't specify a topology, the hub defaults to mesh. After the hub is created, you can't change its topology.

To change the topology that a spoke uses, delete the spoke and re-create it on a hub that uses the topology you want.
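
The following is a minimal sketch of creating one hub per preset, assuming the hub create command accepts a --preset-topology flag with MESH and STAR values; the hub names are placeholders.

# Sketch: topology is fixed at hub creation and defaults to mesh.
gcloud network-connectivity hubs create mesh-hub \
    --preset-topology=MESH

gcloud network-connectivity hubs create star-hub \
    --preset-topology=STAR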

Mesh topology

Mesh topology provides high-scale network connectivity between VPC spokes. This topology lets all spokes connect to and communicate with each other. Subnets within these VPC spokes are fully reachable unless you specify exclude export filters. By default, when two or more workload VPC networks join a Network Connectivity Center hub as spokes, Network Connectivity Center automatically constructs a full mesh of connectivity among the spokes.

All spokes within the mesh topology belong to a single default group. Mesh topology is supported on VPC and hybrid spoke types.

The following diagram shows the mesh topology connectivity in Network Connectivity Center.

Diagram: Network Connectivity Center mesh topology connectivity.

Star topology

Star topology is only supported with VPC spokes. When you use star topology for connectivity, the edge spokes and their associated subnets reach only the designated center spokes, while the center spokes can reach all other spokes. This helps ensure segmentation and connectivity separation across edge VPC networks.

Because VPC spokes can be attached to a hub in a different project, VPC spokes can come from different administrative domains. Spokes that are in a different project from the hub might not need to communicate with every other spoke on the Network Connectivity Center hub.

You can choose star topology for the following use cases:

  • Workloads running in different VPC networks that don't require connectivity with each other, but require access only to a central shared services VPC network.

  • Security control over communication across multiple VPC networks that requires the traffic to pass through a set of centralized network virtual appliances (NVAs).

The following diagram shows star topology connectivity in Network Connectivity Center. center-vpc-a and center-vpc-b are associated with the center group, and edge-vpc-c and edge-vpc-d are associated with the edge group. In this case, star topology lets edge-vpc-c and edge-vpc-d connect to center-vpc-a and center-vpc-b and propagate their subnets to the center group, but not to each other (there is no direct reachability between edge-vpc-c and edge-vpc-d). Meanwhile, center-vpc-a and center-vpc-b are connected to each other and to both edge-vpc-c and edge-vpc-d, which enables full reachability from the center group VPCs to the edge group VPCs.

Diagram: Network Connectivity Center star topology connectivity.

Spoke groups

A spoke group is a subset of spokes attached to a hub. To configure Network Connectivity Center by using star topology, you must separate all VPC spokes into two different groups, also referred to as routing domains:

  • A center group of spokes, which communicate with every other spoke connected to the hub
  • An edge group of spokes, which communicate only with spokes that belong to the center group

A VPC spoke can belong to only one group at a time. Groups are automatically created when you create a hub.
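
For instance, attaching spokes to the two groups of a star hub might look like the following sketch. The --group flag name is an assumption, and all resource names are placeholders.

# Sketch: one spoke in the center group, one in the edge group.
gcloud network-connectivity spokes linked-vpc-network create services-spoke \
    --hub=star-hub \
    --global \
    --vpc-network=projects/my-project/global/networks/center-vpc-a \
    --group=center

gcloud network-connectivity spokes linked-vpc-network create workload-spoke \
    --hub=star-hub \
    --global \
    --vpc-network=projects/my-project/global/networks/edge-vpc-c \
    --group=edge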

A hub administrator can update a spoke group by using the gcloud network-connectivity hubs groups update command. The hub administrator can add a list of project IDs or project numbers to enable auto-accept for spokes. When auto-accept is enabled, spokes from the auto-accept projects are automatically connected to the hub without individual spoke proposal review.

You can list the center and edge groups as nested resources for a specific hub by using the gcloud network-connectivity hubs groups list --hub command. For hubs created with mesh topology, the output returns the default group. For hubs created with star topology, the output returns center and edge groups.
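
Putting these commands together, the following sketch enables auto-accept for one project on the center group and then lists the hub's groups. The --auto-accept-projects flag name is an assumption based on the description above; all names are placeholders.

# Sketch: let spokes from one project join the center group without
# individual proposal review, then list the hub's groups.
gcloud network-connectivity hubs groups update center \
    --hub=star-hub \
    --auto-accept-projects=workload-project-1

gcloud network-connectivity hubs groups list --hub=star-hub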

For detailed information about how to configure the mesh or star topology for your VPC spokes, see Configure a hub.

Limitations

This section describes the limitations of VPC spokes in general and when they are attached to a hub in a different project. These limitations also apply to producer VPC spokes (Preview).

Limitations of VPC spokes

  • Two VPC networks can connect to each other through either a Network Connectivity Center hub or VPC Network Peering, but not both.
  • You can't use VPC Network Peering between two VPC spokes that are connected to the same Network Connectivity Center hub. However, consider the following:
    • A producer VPC spoke requires a peering connection to a VPC spoke on the same hub. Connectivity through Network Connectivity Center isn't established between the producer VPC spoke and its peered VPC spoke.
    • You can have a VPC spoke connected to a Network Connectivity Center hub that is also peered, through VPC Network Peering, with a separate VPC network that isn't part of Network Connectivity Center.
  • Connectivity across VPC networks that are connected through any combination of Network Connectivity Center and VPC Network Peering is not transitive.
  • Static route exchange across VPC spokes isn't supported.
  • Routes pointing to internal passthrough Network Load Balancer virtual IP addresses in other VPC spokes are not supported.
  • Overlapping subnets must be masked by exclude export filters.
  • Updating export range filters after VPC spoke creation isn't supported.
  • For a spoke in a different project from the hub, when a new VPC Service Controls perimeter is added, you can't add new spokes that violate the perimeter, but existing spokes continue to function.
  • IPv6-based internal passthrough Network Load Balancers aren't reachable among VPC spokes.
  • IPv6 dynamic route exchange isn't supported.
  • Star topology connectivity doesn't support hybrid spokes or dynamic route exchange.
  • Auto mode VPC networks aren't supported as VPC spokes. You can switch a VPC network from auto mode to custom mode, which lets you manually define subnet prefixes for each region in your VPC network. You can't undo this switch.

Cool-down period requirements after deleting a VPC spoke

After you delete a VPC spoke, you must wait through a cool-down period of at least 10 minutes before you attach the same VPC network as a spoke to a different hub. If you don't allow an adequate cool-down period, the new configuration might not take effect. This cool-down period isn't needed if you add the VPC network as a spoke to the same hub.

Quotas and limits

For detailed quota information, see Quotas and limits.

Billing

Spoke hours

Spoke hours are charged to the project where the spoke resource lives and follow the standard spoke hours pricing. Spoke hours are only charged when the spoke is in the ACTIVE state.

Outbound traffic

Outbound traffic is charged to the project of the spoke resource from which traffic originates. Pricing is the same regardless of whether traffic crosses project boundaries.

Service level agreement

For information about the Network Connectivity Center service level agreement, see Network Connectivity Center Service Level Agreement (SLA).

Pricing

For information about pricing, see Network Connectivity Center pricing.

What's next