Dedicated Interconnect Overview

Dedicated Interconnect provides direct physical connections between your on-premises network and Google's network. Dedicated Interconnect enables you to transfer large amounts of data between networks, which can be more cost-effective than purchasing additional bandwidth over the public internet.

Before you use Dedicated Interconnect

  • You must be familiar with basic network interconnections, because you'll be ordering and configuring circuits.
  • You must be familiar with the Cloud Interconnect terminology that's described in Key Terminology.
  • Your network must physically meet Google's network in a colocation facility. You must provide your own routing equipment.
  • In the colocation facility, your on-premises network devices must support the following technical requirements:
    • 10G circuits, single mode fiber, 10GBASE-LR (1310 nm), or 100G circuits, single mode fiber, 100GBASE-LR4
    • IPv4 link local addressing
    • LACP, even if you're using a single circuit
    • EBGP-4 with multi-hop
    • 802.1q VLANs

How does Dedicated Interconnect work?

For Dedicated Interconnect, you provision a cross connect between the Google network and your own router in a common location. The following example shows a single Dedicated Interconnect connection between a GCP VPC network and an on-premises network:

Diagram of a basic Dedicated Interconnect connection

For this basic setup, a cross connect is provisioned between the Google network and the on-premises router in a common colocation facility. This cross connect is a Dedicated Interconnect connection.

To exchange routes, a BGP session is configured over the interconnect between the Cloud Router and on-premises router. Then, traffic from the on-premises network can reach the VPC network and vice versa.
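The BGP session described above can be sketched with the gcloud CLI: you attach a Cloud Router interface to an existing VLAN attachment, then add a BGP peer for the on-premises router. The resource names, region, link-local peer address, and ASN below are illustrative placeholders, not values from this document.

```shell
# Bind a Cloud Router interface to an existing VLAN attachment
# (names, region, and addresses are placeholders).
gcloud compute routers add-interface my-router \
    --region=us-central1 \
    --interface-name=my-attachment-if \
    --interconnect-attachment=my-attachment

# Peer with the on-premises router over that interface.
gcloud compute routers add-bgp-peer my-router \
    --region=us-central1 \
    --peer-name=on-prem-peer \
    --interface=my-attachment-if \
    --peer-ip-address=169.254.10.2 \
    --peer-asn=65001
```

Once the session is established, routes learned from the on-premises router are propagated into the VPC network, and Cloud Router advertises the VPC subnets back.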

Elements of Dedicated Interconnect

The following definitions explain the different elements that were introduced in the basic setup.

Interconnect

The interconnect represents a specific physical connection between Google and an on-premises network. The interconnect exists in a colocation facility where the on-premises network and Google's network meet.

A single interconnect can be a single 10G link, a single 100G link, or a link bundle, connected to a single Cloud Router. If you have multiple connections to Google at different locations or to different devices, you must create separate interconnects.
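Ordering an interconnect of the kind just described can be sketched with the gcloud CLI. The interconnect name, customer name, and contact email below are placeholders; the location is the example facility used later in this document.

```shell
# Order a single 10G Dedicated Interconnect circuit
# (name, customer name, and email are placeholders).
gcloud compute interconnects create my-interconnect \
    --customer-name="Example Corp" \
    --interconnect-type=DEDICATED \
    --link-type=LINK_TYPE_ETHERNET_10G_LR \
    --requested-link-count=1 \
    --location=lga-zone1-16 \
    --noc-contact-email=noc@example.com
```

A link bundle is requested by raising `--requested-link-count`; a 100G circuit uses the `LINK_TYPE_ETHERNET_100G_LR` link type instead.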

VLAN attachment (also known as an InterconnectAttachment)

A VLAN attachment is a virtual point-to-point tunnel between an interconnect and a single region in a VPC network. The attachment allocates a specific 802.1q VLAN on the interconnect.

Use VLAN attachments to connect an interconnect with a specific VPC network. More specifically, the VLAN attachment attaches an interconnect with a Cloud Router in a VPC network.

You can create multiple VLAN attachments for a single interconnect so that you can connect to multiple VPC networks or to different regions in a single VPC network.
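Creating one such VLAN attachment can be sketched as follows, assuming the interconnect and a Cloud Router in the target region already exist. All resource names are placeholders.

```shell
# Create a VLAN attachment on an existing interconnect, bound to a
# Cloud Router in one region (placeholder names).
gcloud compute interconnects attachments dedicated create my-attachment \
    --region=us-central1 \
    --router=my-router \
    --interconnect=my-interconnect
```

To reach a second region or a second VPC network, you would repeat this command with a different region or a Cloud Router in the other network.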

You can set the capacity of each VLAN attachment. For a list of capacities, see the Pricing page. The default attachment capacity is 10 Gbps.

The capacity setting limits the maximum bandwidth an attachment can use. If you have multiple attachments on a single interconnect, the capacity limitation might be helpful in cases where you want to prevent network congestion on your interconnect. The maximum bandwidth is approximate, so it's possible for a VLAN attachment to use more bandwidth than the selected capacity.

Because the capacity setting limits only the egress bandwidth from Google Cloud toward the colocation facility, it's recommended that you also configure an egress rate limiter on your own router for the interconnect connection. That limiter caps the bandwidth entering your VPC network over the connection, which the attachment capacity setting does not control.

Interconnect location

The interconnect location is the colocation facility where the interconnect is provisioned. This is where your on-premises routing equipment meets Google's peering edge.

Each interconnect location supports a subset of Google Cloud Platform (GCP) regions. For example, the lga-zone1-16 location supports interconnect attachments in the northamerica-northeast1, us-east1, us-west1, us-west2, us-east4, and us-central1 regions.

All interconnect locations for Dedicated Interconnect support 1 x 100 Gbps (100 Gbps) or 2 x 100 Gbps (200 Gbps) circuits.

For a list of locations and their supported regions, see Colocation Facility Locations.
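You can also query locations directly with the gcloud CLI; the `describe` example uses the facility named earlier in this document.

```shell
# List colocation facilities where Dedicated Interconnect is offered,
# then inspect one facility's details.
gcloud compute interconnects locations list
gcloud compute interconnects locations describe lga-zone1-16
```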

Cloud Router

The Cloud Router is used to dynamically exchange routes between the VPC network and on-premises network through BGP. You establish a BGP session between the on-premises router and Cloud Router. All of the information for the BGP session is provided by the VLAN attachment, such as the peering IP addresses and VLAN ID.

Cloud Router advertises subnets in the VPC network and propagates learned routes to those subnets. For more information about Cloud Router, see the overview in the Cloud Router documentation.
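Creating the Cloud Router that terminates the BGP session can be sketched as follows. The network name, region, and ASN are placeholders; the ASN is typically a private ASN of your choosing.

```shell
# Create a Cloud Router in the VPC network and region that the
# VLAN attachment will use (placeholder names and ASN).
gcloud compute routers create my-router \
    --network=my-vpc \
    --region=us-central1 \
    --asn=65010
```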

Provisioning overview

Start by ordering an interconnect so that Google can allocate the necessary resources and send you a Letter of Authorization and Connecting Facility Assignment (LOA-CFA). After you receive the LOA-CFA, submit it to your vendor so that they can provision the cross connects between Google's network and your network.

You'll configure and test the interconnects with Google before you can use them. After they're ready, you can create VLAN attachments to allocate a VLAN on the interconnect.
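During testing you can check on the interconnect from the gcloud CLI; the name below is a placeholder.

```shell
# Inspect the interconnect's provisioning and operational state.
# The output includes an operationalStatus field indicating whether
# the connection is ready to carry traffic.
gcloud compute interconnects describe my-interconnect
```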

For information about all the steps required to provision a Dedicated Interconnect, see the Provisioning Overview in the Creating Dedicated Interconnect how-to guide.

Redundancy

Depending on your availability needs, you can configure Dedicated Interconnect to support mission-critical services or applications that can tolerate some downtime. To achieve a specific level of reliability, Google has two prescriptive configurations, one for 99.99% availability and another for 99.9% availability.

Google recommends that you use the 99.99% configuration for production-level applications with low tolerance for downtime. If your applications aren't mission-critical and can tolerate some downtime, you can use the 99.9% configuration.

The SLA applies only to topologies that conform to the prescriptive 99.99% or 99.9% configurations; deploying one of these configurations is what makes your setup eligible for the corresponding SLA.

Base configuration

For the highest level availability, Google recommends the configuration for 99.99% availability, as shown in the following diagram. Clients in the on-premises network can reach the IP addresses of VM instances in the us-central1 region through at least one of the redundant paths and vice versa. If one path is unavailable, the other paths can continue to serve traffic.

Diagram of redundant interconnects for 99.99% availability

Tutorials

Balancing egress traffic with redundant interconnects

When you have a redundant topology similar to the 99.99% configuration, there are multiple paths for traffic to traverse from the VPC network to your on-premises network. If the Cloud Routers receive the same announcement with equal cost (same CIDR range and same MED values), GCP uses ECMP to balance the egress traffic across connections.
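On the Google Cloud side, the analogous knob is the advertised route priority, which sets the MED that Cloud Router sends toward your on-premises routers. Keeping it identical on both redundant peers lets the on-premises side see equal-cost paths in the other direction. The names and value below are placeholders.

```shell
# Set the same base advertised route priority (MED) on a BGP peer;
# repeat with the same value on the redundant peer so that the
# on-premises routers see equal-cost paths (placeholder names).
gcloud compute routers update-bgp-peer my-router \
    --region=us-central1 \
    --peer-name=on-prem-peer \
    --advertised-route-priority=100
```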

Dedicated Interconnect availability

A Dedicated Interconnect connection is considered available if you can send and receive packets (for example, an ICMP ping) between a VM instance in a specific GCP region and a correctly configured machine in your on-premises network. You should be able to send and receive packets through at least one of your redundant connections.

Frequently asked questions

For answers to common questions about Cloud Interconnect architecture and features, see the Cloud Interconnect FAQ.
