VPC spokes
Network Connectivity Center provides inter-VPC network connectivity at scale through VPC spokes. VPC spokes replace individual pair-wise VPC Network Peering connections with a centralized connectivity management model, which reduces operational complexity. VPC spokes export and import all IPv4 subnet routes from other spoke VPCs on a Network Connectivity Center hub, ensuring full IPv4 connectivity between all workloads that reside in these VPC networks. Inter-VPC network traffic stays within the Google Cloud network and does not travel through the internet, which helps to ensure privacy and security.
VPC spokes can be in the same project and organization or in a different project and organization from the Network Connectivity Center hub. A VPC spoke can be connected to one hub at a time.
For information about how to create a VPC spoke, see Create a VPC spoke.
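As a quick sketch, a VPC spoke can be attached by using the Google Cloud CLI. The hub, project, and network names below are hypothetical; this assumes that the hub already exists and that you have the required permissions:

```shell
# Attach the VPC network "my-vpc" to the existing hub "my-hub" as a VPC spoke.
# All names are placeholders; VPC spokes are global resources.
gcloud network-connectivity spokes linked-vpc-network create my-vpc-spoke \
    --hub=my-hub \
    --vpc-network=projects/my-project/global/networks/my-vpc \
    --global
```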
Comparison to VPC Network Peering
VPC spokes support medium to large enterprise requirements by providing IPv4 subnet route connectivity and IPv4 dynamic route connectivity using hybrid spokes.
A VPC network can simultaneously be a Network Connectivity Center VPC spoke and connected to another VPC network using VPC Network Peering, provided the peered VPC network isn't a VPC spoke itself.
Keep the following in mind when using Network Connectivity Center VPC spokes and VPC Network Peering:
Peering subnet routes in a VPC spoke aren't exported to the hub.
Network Connectivity Center doesn't provide connectivity to resources in a VPC network that's connected to a VPC spoke by using VPC Network Peering, with the following exception:
- A peered service producer VPC network for private services access can be added as a producer VPC spoke (Preview).
Feature | VPC Network Peering | VPC spokes |
---|---|---|
VPC networks | | |
Subnet ranges (subnet routes) | Subnetwork routes per route table | |
Static and dynamic routes | | 500 dynamic routes per hub route table per region. Static route exchange is not supported. |
Export filters | Specific filters are not supported; see Route exchange options in the VPC Network Peering documentation. | Up to 16 CIDR ranges supported per VPC spoke. |
Inter-VPC NAT | Not supported | Supported |
Private Service Connect connection propagation | Not supported | Supported (Preview) |
Producer VPC spoke connectivity from other VPC networks | Not supported | Supported (Preview) |
IP addressing | Internal IP addresses, including private IPv4 addresses, privately used public IPv4 addresses, and internal IPv6 addresses. See Valid IPv4 ranges and Internal IPv6 specifications. | Private internal IPv4 addresses only, _excluding_ privately used public IPv4 addresses. See Valid IPv4 ranges. |
IP address families | IPv4 or dual-stack (IPv4 and IPv6). | IPv4 only. |
Performance and throughput (when compared to other VPC connectivity mechanisms) | Lowest latency, highest throughput (VM-VM equivalent). | Lowest latency, highest throughput (VM-VM equivalent). |
VPC spokes in a different project from a hub
By using Network Connectivity Center, you can attach VPC networks, represented as VPC spokes, to a single hub in a different project, including a project in a different organization. This lets you connect VPC networks across multiple projects and organizations at scale.
You can be one of the following types of users:
- A hub administrator who owns a hub in one project
- A VPC network spoke administrator or network administrator who wants to add their VPC network in a different project as a spoke to the hub
The hub administrator controls who can create a VPC spoke in a different project associated with their hub by using Identity and Access Management (IAM) permissions. The VPC network spoke administrator creates a spoke in a different project from the hub. These spokes are inactive upon creation. The hub administrator must review them, and can either accept or reject the spoke. If the hub administrator accepts the spoke, it becomes active.
Network Connectivity Center always automatically accepts spokes created in the same project as the hub.
For detailed information about how to manage hubs that have VPC spokes in a different project than the hub, see Hub administration overview. For detailed information for spoke administrators, see Spoke administration overview.
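The cross-project review flow can be sketched with the gcloud CLI. All project, hub, and spoke names here are hypothetical, this assumes the hub administrator holds the required IAM role on the hub project, and the exact command surface should be verified against your installed gcloud version:

```shell
# Spoke administrator, in spoke-project: propose a spoke against a hub that
# lives in hub-project. The spoke is created in the INACTIVE state.
gcloud network-connectivity spokes linked-vpc-network create cross-project-spoke \
    --project=spoke-project \
    --hub=projects/hub-project/locations/global/hubs/my-hub \
    --vpc-network=projects/spoke-project/global/networks/my-vpc \
    --global

# Hub administrator, in hub-project: review the proposal, then accept (or
# reject) it. Accepting moves the spoke to the ACTIVE state.
gcloud network-connectivity hubs accept-spoke my-hub \
    --spoke=projects/spoke-project/locations/global/spokes/cross-project-spoke \
    --global
```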
VPC connectivity with export filters
Network Connectivity Center lets you limit the connectivity of a spoke VPC network to only a subset of its subnetworks. You can limit connectivity by excluding specific IP address ranges from being advertised, or by establishing a list of permitted CIDR ranges that can be advertised from the VPC network, thereby blocking all but the permitted ranges.
Exclude export ranges
You can keep an IP address range from being advertised by using the `--exclude-export-ranges` flag in the Google Cloud CLI or the `excludeExportRanges` field in the API. Any subnetworks that match the specified range are excluded from being exported to the hub. This filtering is useful when you have subnets that need to stay private within the VPC network, or that might overlap with other subnets in the hub route table.
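For example, a spoke can be created with specific subnet ranges held back from the hub. The names and ranges below are hypothetical:

```shell
# Exclude two ranges from being exported to the hub when creating the spoke;
# subnets matching these ranges stay private to the VPC network.
gcloud network-connectivity spokes linked-vpc-network create my-vpc-spoke \
    --hub=my-hub \
    --vpc-network=projects/my-project/global/networks/my-vpc \
    --exclude-export-ranges=10.1.100.0/24,10.1.200.0/24 \
    --global
```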
Include export ranges
You can establish a list of CIDR ranges that are permitted to be advertised from a VPC spoke by using the `--include-export-ranges` flag in the Google Cloud CLI or the `includeExportRanges` field in the API. Using this filter alongside the exclude export ranges filter establishes more precise connectivity. This filtering determines whether a particular subnet range can be advertised from the VPC network.
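Combining both filters might look like the following sketch (names and ranges are hypothetical):

```shell
# Advertise only 10.1.0.0/16 to the hub, minus two excluded /24 ranges.
gcloud network-connectivity spokes linked-vpc-network create my-vpc-spoke \
    --hub=my-hub \
    --vpc-network=projects/my-project/global/networks/my-vpc \
    --include-export-ranges=10.1.0.0/16 \
    --exclude-export-ranges=10.1.100.0/24,10.1.200.0/24 \
    --global
```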
Considerations
Consider the following when using the exclude and include export ranges filters:
- The include ranges must be exclusive to each other, which means that the include ranges must not overlap. For example, suppose that there are three IP address ranges:

  - Range 1: `10.100.64.0/18`
  - Range 2: `10.100.250.0/21`
  - Range 3: `10.100.100.0/22`

  Range 1 and range 2 are valid include ranges because these two don't overlap. However, range 3 falls within range 1, which causes an overlap, so range 3 is invalid.

- Because Network Connectivity Center already has exclude export filters available in the network configuration policy, both the include and exclude export filters affect the valid network configuration CIDR ranges. When both include and exclude export filters are used, the include IP address ranges must be a superset of the exclude IP address ranges.

- By default, all VPC connectivity policies have an include CIDR range of `0.0.0.0/0`, which means that if you don't specify the include filter when creating the VPC spoke, Network Connectivity Center sets the default include range to all the valid private IPv4 addresses as defined in Valid IPv4 ranges.

- To refine an include range, you can add multiple exclude ranges. For example, if you specify `10.1.0.0/16` as an include range and `10.1.100.0/24` and `10.1.200.0/24` as the exclude ranges, the result is refined connectivity that combines both filters: the advertised ranges span `10.1.0.0/24` through `10.1.99.0/24`, `10.1.101.0/24` through `10.1.199.0/24`, and `10.1.201.0/24` through `10.1.255.0/24`.

- Existing subnet ranges continue to work as expected. Overlaps with include and exclude ranges when creating new subnet ranges result in an error.
Invalid new subnet range examples
The following examples show invalid subnet ranges:

- Overlap with exclude range: Suppose that there are the following IP address ranges:

  - Include range: `10.0.0.0/8`
  - Exclude range 4: `10.1.1.0/24`
  - Subnet range 4: `10.1.0.0/16`

  In this case, the include range contains subnet range 4. However, subnet range 4 is a superset of exclude range 4. Hence, subnet range 4 is invalid.

- Overlap with include range: Suppose that there are the following IP address ranges:

  - Include range: `10.1.1.0/24`
  - Subnet range 5: `10.1.0.0/16`

  Subnet range 5 overlaps with the include range, hence it is invalid.

When you enter an invalid subnet range during the subnet creation process, you get an `Invalid IPCidrRange` error similar to the following:

`Invalid IPCidrRange: CIDR_RANGE conflicts with existing subnetwork SUBNET_RANGE in region REGION`
Preset topologies
Network Connectivity Center lets you specify the desired connectivity configuration across all VPC spokes. You can choose one of the following two preset topologies:
- Mesh topology
- Star topology
When you create a hub by using the `gcloud network-connectivity hubs create` command, choose the preset mesh or star topology. If the topology isn't specified, it defaults to mesh. Once set during hub creation, you can't change the topology of a given hub.
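The topology choice at creation time can be sketched as follows; the hub names are hypothetical, and the accepted casing of the `--preset-topology` value may vary by gcloud version:

```shell
# Create one hub per topology; the choice is permanent for a given hub.
gcloud network-connectivity hubs create mesh-hub --preset-topology=mesh
gcloud network-connectivity hubs create star-hub --preset-topology=star
```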
To change the topology settings of a spoke, you can delete the spoke and create a new spoke with a new hub that uses a different topology.
Mesh topology
Mesh topology provides high-scale network connectivity between VPC spokes. This topology lets all spokes connect to and communicate with each other. Subnets within these VPC spokes are fully reachable unless you specify exclude export filters.
By default, when two or more workload VPC
networks are configured to join a Network Connectivity Center hub as spokes,
Network Connectivity Center automatically constructs a full mesh of connectivity between
each spoke.
All spokes within the mesh topology belong to a single default group. Mesh topology is supported on VPC and hybrid spoke types.
The following diagram shows the mesh topology connectivity in Network Connectivity Center.
Star topology
Star topology is only supported with VPC spokes. When you use star topology for connectivity, the edge spokes and their associated subnets reach only the designated center spokes, while the center spokes can reach all other spokes. This helps ensure segmentation and connectivity separation across edge VPC networks.
Because VPC spokes can be attached to a hub in a different project, VPC spokes can come from different administrative domains. Spokes in a different project from the hub might not need to communicate with every other spoke on the Network Connectivity Center hub.
You can choose star topology for the following use cases:
Workloads running in different VPC networks that don't require connectivity with each other, but do require access to a central shared services VPC network.
Security control over communication across multiple VPC networks that requires the traffic to pass through a set of centralized network virtual appliances (NVAs).
The following diagram shows star topology connectivity in Network Connectivity Center.
In the diagram, `center-vpc-a` and `center-vpc-b` are associated with the center group, and `edge-vpc-c` and `edge-vpc-d` are associated with the edge group. In this case, using star topology enables `edge-vpc-c` and `edge-vpc-d` to connect to `center-vpc-a` and `center-vpc-b` and to propagate their subnets to the center group, but not to connect to each other (no direct reachability between `edge-vpc-c` and `edge-vpc-d`). Meanwhile, `center-vpc-a` and `center-vpc-b` are connected to each other and to both `edge-vpc-c` and `edge-vpc-d`, thus enabling full reachability from the center group VPCs to the edge group VPCs.
Spoke groups
A spoke group is a subset of spokes attached to a hub. To configure Network Connectivity Center by using star topology, you must separate all VPC spokes into two different groups, also referred to as routing domains:
- A center group of spokes, which communicate with every other spoke connected to the hub
- An edge group of spokes, which communicate only with spokes that belong to the center group
A VPC spoke can belong to only one group at a time. Groups are automatically created when you create a hub.
A hub administrator can update a spoke group by using the `gcloud network-connectivity hubs groups update` command. The hub administrator can add a list of project IDs or project numbers to enable auto-accept for spokes. When auto-accept is enabled, spokes from the auto-accept projects are automatically connected to the hub without the need for individual spoke proposal review.

You can list the center and edge groups as nested resources for a specific hub by using the `gcloud network-connectivity hubs groups list` command with the `--hub` flag. For hubs created with mesh topology, the output returns the default group. For hubs created with star topology, the output returns the center and edge groups.
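Group administration might look like the following sketch, assuming a star-topology hub named `star-hub` and hypothetical project IDs:

```shell
# Enable auto-accept for spokes proposed from two projects into the center
# group; their spokes join the hub without individual review.
gcloud network-connectivity hubs groups update center \
    --hub=star-hub \
    --auto-accept-projects=project-one,project-two

# List the groups nested under the hub (center and edge for star topology).
gcloud network-connectivity hubs groups list --hub=star-hub
```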
For detailed information about how to configure the mesh or star topology for your VPC spokes, see Configure a hub.
Limitations
This section describes the limitations of VPC spokes in general and when they are attached to a hub in a different project. These limitations also apply to producer VPC spokes (Preview).
Limitations of VPC spokes
- VPC networks can connect with each other through either a Network Connectivity Center hub or VPC Network Peering, but a given pair of networks can't use both at once.
- You can't use VPC Network Peering between two VPC spokes
that are connected to the same Network Connectivity Center hub. However, consider
the following:
- A producer VPC spoke requires a peering connection to a VPC spoke on the same hub. Connectivity through Network Connectivity Center isn't established between the producer VPC spoke and its peered VPC spoke.
- You can have a Network Connectivity Center-connected VPC spoke that is peered through VPC Network Peering with a separate VPC that is not a part of Network Connectivity Center.
- Connectivity across VPCs connected by any combination of Network Connectivity Center and VPC Network Peering is not transitive.
- IPv4 static route exchange across VPC spokes isn't supported.
- Routes pointing to internal passthrough Network Load Balancer virtual IP addresses in other VPC spokes are not supported.
- Overlapping subnets must be masked by exclude export filters.
- Update of export range filters after VPC spoke creation is not supported.
- For a spoke in a different project from the hub, when a new VPC Service Controls perimeter is added, you can't add new spokes that violate the perimeter, but existing spokes continue to function.
- Dynamic route exchange in hubs configured in star topology has the following two restrictions to adhere to the connectivity policies of the edge and center groups:
- Support for only site-to-cloud spokes in the edge group
- Support for both site-to-site and site-to-cloud spokes in the center group
- If there are two or more routing VPC networks, none of them can also be a VPC spoke.
- If there is a single routing VPC network, it can also be a VPC spoke.
- If a routing VPC network is also a VPC spoke, additional routing VPC networks can't be added.
- VPC spokes don't comply with site-to-site and site-to-cloud semantics of dynamic routes. Overlapping site-to-cloud routes can cause site-to-site traffic to be dropped. You can work around this limitation by making sure that a routing VPC network isn't configured as a VPC spoke.
Cool-down period after deleting a VPC spoke
After you delete a VPC spoke, you must wait through a cool-down period of at least 10 minutes before attaching the same VPC network as a new spoke to a different hub. If you don't wait for the full cool-down period, the new configuration might not take effect. This cool-down period isn't needed if the VPC network is added as a spoke to the same hub.
Quotas and limits
For detailed quota information, see Quotas and limits.
Billing
Spoke hours
Spoke hours are charged to the project where the spoke resource lives and follow the standard spoke hours pricing. Spoke hours are charged only when the spoke is in the `ACTIVE` state.
Outbound traffic
Outbound traffic is charged to the project of the spoke resource from which traffic originates. Pricing is the same regardless of whether traffic crosses project boundaries.
Service level agreement
For information about the Network Connectivity Center service level agreement, see Network Connectivity Center Service Level Agreement (SLA).
Pricing
For information about pricing, see Network Connectivity Center pricing.
What's next
- To create hubs and spokes, see Work with hubs and spokes.
- To view a list of partners whose solutions are integrated with Network Connectivity Center, see Network Connectivity Center partners.
- To find solutions for common issues, see Troubleshooting.
- To get details about API and `gcloud` commands, see APIs and reference.