Cloud Interconnect provides low-latency, high-availability connections that enable you to reliably transfer data between your on-premises network and Google Cloud Virtual Private Cloud (VPC) networks. Interconnect connections also provide internal IP address communication, which means that internal IP addresses are directly accessible from both networks.
Cloud Interconnect offers two options for extending your on-premises network:
- Dedicated Interconnect provides a direct physical connection between your on-premises network and Google's network.
- Partner Interconnect provides connectivity between your on-premises and VPC networks through a supported service provider.
For a comparison to help you choose between the two offerings, see the Cloud Interconnect section in Choosing a Network Connectivity product.
For definitions of terms used on this page, see Cloud Interconnect key terms.
Using Cloud Interconnect provides the following benefits:
Traffic between your on-premises network and your VPC network doesn't traverse the public internet. Traffic traverses a dedicated connection or goes through a service provider with a dedicated connection. By bypassing the public internet, your traffic takes fewer hops, so there are fewer points of failure where your traffic might get dropped or disrupted.
Your VPC network's internal IP addresses are directly accessible from your on-premises network. You don't need to use a NAT device or VPN tunnel to reach internal IP addresses. For details, see IP addressing and dynamic routes.
You can scale your connection capacity to meet your particular requirements.
For Dedicated Interconnect, connection capacity is delivered over one or more 10-Gbps or 100-Gbps Ethernet connections, with the following maximum capacities supported per Interconnect connection:
- 8 x 10-Gbps connections (80 Gbps total)
- 2 x 100-Gbps connections (200 Gbps total)
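For capacity planning, the circuit limits above can be expressed as a quick check. This is a minimal sketch; the function name is illustrative, and the per-speed circuit limits are taken directly from the list above:

```python
# Maximum circuits per Dedicated Interconnect connection, by link speed (Gbps),
# as listed above: 8 x 10-Gbps or 2 x 100-Gbps.
MAX_CIRCUITS = {10: 8, 100: 2}

def max_capacity_gbps(link_speed_gbps: int) -> int:
    """Return the maximum total capacity of one Interconnect connection
    built from circuits of the given speed."""
    if link_speed_gbps not in MAX_CIRCUITS:
        raise ValueError("Dedicated Interconnect circuits are 10 or 100 Gbps")
    return link_speed_gbps * MAX_CIRCUITS[link_speed_gbps]

print(max_capacity_gbps(10))   # 80
print(max_capacity_gbps(100))  # 200
```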
For Partner Interconnect, the following connection capacities for each VLAN attachment are supported:
- VLAN attachments from 50 Mbps to 50 Gbps. The maximum supported attachment size is 50 Gbps, but not all sizes might be available, depending on what your chosen partner offers in the selected location.
You can request 100-Gbps connections at any of the locations listed on Choosing colocation facility locations.
Dedicated Interconnect, Partner Interconnect, Direct Peering, and Carrier Peering can all help you optimize egress traffic from your VPC network and reduce your egress costs. Cloud VPN by itself does not reduce egress costs.
You can use Cloud Interconnect with Private Google Access for on-premises hosts so that on-premises hosts can use internal IP addresses rather than external IP addresses to reach Google APIs and services. For more information, see Private access options for services in the VPC documentation.
Using Cloud VPN instead
If you don't require the low latency and high availability of Cloud Interconnect, consider using Cloud VPN to set up IPsec VPN tunnels between your networks. IPsec VPN tunnels encrypt data by using industry-standard IPsec protocols as traffic traverses the public internet.
A Cloud VPN tunnel doesn't require the overhead or costs associated with a direct, private connection. Cloud VPN only requires a VPN device in your on-premises network.
IP addressing and dynamic routes
When you connect your VPC network to your on-premises network, you allow communication between the IP address space of your on-premises network and some or all of the subnets in your VPC network. Which VPC subnets are available depends on the dynamic routing mode of your VPC network. Subnet IP ranges in VPC networks are always internal IP addresses.
The IP address space on your on-premises network and on your VPC network must not overlap, or traffic is not routed properly. Remove any overlapping addresses from either network.
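You can check for overlaps before connecting the networks. The following sketch uses Python's standard `ipaddress` module; the CIDR ranges are hypothetical examples:

```python
import ipaddress

def overlapping_ranges(on_prem_cidrs, vpc_cidrs):
    """Return pairs of CIDR ranges that overlap between the two networks."""
    on_prem = [ipaddress.ip_network(c) for c in on_prem_cidrs]
    vpc = [ipaddress.ip_network(c) for c in vpc_cidrs]
    return [(str(a), str(b)) for a in on_prem for b in vpc if a.overlaps(b)]

# 10.0.1.0/24 falls inside 10.0.0.0/16, so this pair would prevent
# proper routing and must be renumbered in one of the networks.
print(overlapping_ranges(["10.0.0.0/16", "192.168.0.0/24"],
                         ["10.0.1.0/24", "172.16.0.0/20"]))
```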
Your on-premises routers advertise routes for your on-premises network to the Cloud Routers in your VPC network. This action creates custom dynamic routes in your VPC network, each with a next hop set to the appropriate VLAN attachment.
Unless modified by custom advertisements, Cloud Routers in your VPC network share VPC network subnet IP address ranges with your on-premises routers according to the dynamic routing mode of your VPC network.
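The effect of the dynamic routing mode can be sketched as follows: in regional mode a Cloud Router advertises only the subnets in its own region, while in global mode it advertises subnets from all regions. The subnet data and function name here are hypothetical:

```python
def advertised_subnets(subnets, router_region, routing_mode):
    """Subnet ranges a Cloud Router advertises to on-premises routers.

    subnets: list of (region, cidr) tuples.
    routing_mode: "regional" or "global" (the VPC network's dynamic routing mode).
    """
    if routing_mode == "global":
        return [cidr for _, cidr in subnets]
    # Regional mode: only subnets in the Cloud Router's own region.
    return [cidr for region, cidr in subnets if region == router_region]

subnets = [("us-central1", "10.128.0.0/20"), ("europe-west1", "10.132.0.0/20")]
print(advertised_subnets(subnets, "us-central1", "regional"))  # ['10.128.0.0/20']
print(advertised_subnets(subnets, "us-central1", "global"))    # both ranges
```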
The following configurations require that you create a custom route advertisement on your Cloud Router to direct traffic from your on-premises network to certain internal IP addresses through an Interconnect connection:
- Configuring Private Google Access for on-premises hosts
- Creating a Cloud DNS forwarding zone
- Alternative name server network requirements
Cloud Interconnect as a data transfer network
Before you use Cloud Interconnect, carefully review Section 2 of the General Service Terms for Google Cloud.
Using Network Connectivity Center, you can use VLAN attachments to connect on-premises networks together, passing traffic between them as a data transfer network. You connect the networks by attaching VLAN attachments to a Network Connectivity Center spoke for each on-premises location. You then connect each spoke to a Network Connectivity Center hub.
For more information about Network Connectivity Center, see the Network Connectivity Center overview.
Restricting Cloud Interconnect usage
By default, any VPC network can use Cloud Interconnect. To control which VPC networks can use Cloud Interconnect, you can set an organization policy. For more information, see Restricting Cloud Interconnect usage.
Cloud Interconnect MTU
VLAN attachments can have a maximum transmission unit (MTU) of 1440 or 1500 bytes.
To avoid packet loss, configure the same MTU value for all VLAN attachments that connect to the same VPC network.
For TCP traffic, if the communicating virtual machine (VM) instances have an MTU of 1500 bytes and the attachment has an MTU of 1440 bytes, then MSS clamping reduces the maximum segment size so that TCP packets fit within the 1440-byte MTU, and TCP traffic proceeds.
MSS clamping does not affect UDP packets. Therefore, if the attachment has an MTU of 1440 bytes and the VPC network has an MTU of 1500 bytes, then UDP datagrams with more than 1412 bytes of data (1412 bytes UDP data + 8-byte UDP header + 20-byte IPv4 header = 1440) are dropped. In such a case, you can do one of the following:
- If your VPC network's MTU is set to 1500, create VLAN attachments with MTUs set to 1500.
- If the VLAN attachments in that network have MTUs set to 1440, lower the MTU of the attached VPC network to 1440.
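The 1412-byte figure follows from simple header arithmetic, which can be checked with a short sketch:

```python
IPV4_HEADER = 20  # bytes
UDP_HEADER = 8    # bytes

def max_udp_payload(mtu: int) -> int:
    """Largest UDP payload that fits in a single packet at the given MTU."""
    return mtu - IPV4_HEADER - UDP_HEADER

# On a 1440-byte attachment, datagrams carrying more than 1412 bytes
# of UDP data are dropped.
print(max_udp_payload(1440))  # 1412
print(max_udp_payload(1500))  # 1472
```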
Support for GRE traffic
Cloud Interconnect supports GRE traffic. Support for GRE allows you to terminate GRE traffic on a VM from the internet (external IP address) and Cloud VPN or Cloud Interconnect (internal IP address). The decapsulated traffic can then be forwarded to a reachable destination. GRE enables you to use services such as Secure Access Service Edge (SASE) and SD-WAN. You must create a firewall rule to allow GRE traffic.
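As an illustration of the encapsulation involved (a sketch of the GRE wire format, not Google Cloud API usage): a basic GRE header is four bytes, a flags/version field followed by the EtherType of the inner payload, for example 0x0800 for IPv4. The function and placeholder bytes below are hypothetical:

```python
import struct

GRE_PROTO_IPV4 = 0x0800  # EtherType carried in the GRE protocol field

def gre_encapsulate(inner_packet: bytes) -> bytes:
    """Prepend a basic 4-byte GRE header (no checksum, key, or sequence
    number options) to an inner packet."""
    header = struct.pack("!HH", 0x0000, GRE_PROTO_IPV4)
    return header + inner_packet

inner = b"\x45" + b"\x00" * 19  # placeholder IPv4 header bytes
print(gre_encapsulate(inner)[:4].hex())  # 00000800
```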
Visualize and monitor Interconnect connections and VLAN attachments
Network Topology is a visualization tool that shows the topology of your VPC networks, hybrid connectivity to and from your on-premises networks, and the associated metrics. You can view your Interconnect connections and VLAN attachments as entities in the Network Topology view.
A base entity is the lowest level of a particular hierarchy and represents a resource that can directly communicate with other resources over a network. Network Topology aggregates base entities into hierarchical entities that you can expand or collapse. When you first view a Network Topology graph, it aggregates all the base entities into their top-level hierarchy.
For example, Network Topology aggregates VLAN attachments into their Interconnect connection, and you can view the hierarchy by expanding or collapsing the icons that represent Interconnect connections.
For more information, see the Network Topology overview.
Frequently asked questions
For answers to common questions about Cloud Interconnect architecture and features, see the Cloud Interconnect FAQ.
To choose a connection type for Cloud Interconnect, see Choosing a Network Connectivity product.
To learn about best practices when planning for and configuring Cloud Interconnect, see Best practices for Cloud Interconnect.