Google Cloud Dedicated Interconnect provides direct physical connections and RFC 1918 communication between your on-premises network and Google’s network. Dedicated Interconnect enables you to transfer large amounts of data between networks, which can be more cost effective than purchasing additional bandwidth over the public Internet or using VPN tunnels.
- Traffic between your on-premises network and your VPC network doesn't traverse the public Internet. Traffic traverses a dedicated connection with fewer hops, so there are fewer points of failure where traffic might be dropped or disrupted.
- Your VPC network's internal (RFC 1918) IP addresses are directly accessible from your on-premises network. You don't need to use a NAT device or VPN tunnel to reach internal IP addresses. Currently, you can only reach internal IP addresses over a dedicated connection. To reach Google external IP addresses, you must use a separate connection.
- You can scale your connection to Google based on your needs. Connection capacity is delivered over one or more 10 Gbps Ethernet connections, with a maximum of eight connections (80 Gbps total per interconnect).
- The cost of egress traffic from your VPC network to your on-premises network is reduced. A dedicated connection is generally the least expensive method if you have a high volume of traffic to and from Google’s network.
- The minimum deployment per location is 10 Gbps. If your traffic doesn't require that level of capacity, consider Cloud VPN.
- The circuit between your network and Google's network is not encrypted. If you require additional data security, use application-level encryption or your own VPN. Currently, you can't use Google Cloud VPN in combination with a dedicated connection, but you can use your own VPN solution.
Before you use Dedicated Interconnect
- You must be familiar with basic network interconnections. You'll be ordering and configuring circuits.
- Your network must physically meet Google's network in a colocation facility. You must provide your own routing equipment.
- In the colocation facility, your on-premises network devices must support the following technical requirements:
- Single mode fiber, 10GBASE-LR, 1310 nm
- IPv4 link local addressing
- LACP for bonding multiple links
- EBGP-4 with multi-hop
- 802.1q VLANs
Dedicated Interconnect overview
The following example shows a single dedicated interconnect between a GCP VPC network and an on-premises network.
For this setup, you provision a cross connect between the Google network and the on-premises router in a common colocation facility. This cross connect is a dedicated interconnect.
To exchange routes, you configure a BGP session over the interconnect between the Cloud Router and on-premises router. Then, traffic from the on-premises network can reach the VPC network and vice versa.
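As a sketch, ordering the dedicated interconnect itself can be done with the gcloud CLI. All names here are placeholders: `my-interconnect` is an example interconnect name, and `dfw-zone1-505` stands in for whichever facility you choose from the colocation facility list.

```shell
# Order a dedicated interconnect at a chosen colocation facility.
# The location name must come from the list of colocation facilities;
# dfw-zone1-505 is only an example.
gcloud compute interconnects create my-interconnect \
    --customer-name "Example Org" \
    --interconnect-type DEDICATED \
    --link-type LINK_TYPE_ETHERNET_10G_LR \
    --location dfw-zone1-505 \
    --requested-link-count 1
```

The `--requested-link-count` flag controls how many 10-Gbps circuits are bundled into the interconnect (up to eight, per the capacity limits described above).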
Elements of Dedicated Interconnect
The following definitions explain the different elements that were introduced in the basic setup. For a summary of all the steps required to provision a dedicated interconnect, see Provisioning Overview.
- The interconnect represents a specific physical connection between Google and an on-premises network. The interconnect exists in a colocation facility where the on-premises network and Google's network meet.
- A single interconnect can be a single 10G link or a link bundle, connected to a single Google router. If you have multiple connections to Google at different locations or to different devices, you must create separate interconnects.
- VLAN attachment (also known as an InterconnectAttachment)
- A VLAN attachment is a virtual point-to-point tunnel between an interconnect and a single region in a VPC network. The attachment allocates a specific 802.1q VLAN on the interconnect.
- Use VLAN attachments to connect an interconnect with a specific VPC network. More specifically, the VLAN attachment attaches an interconnect with a Cloud Router in a VPC network.
- You can create multiple VLAN attachments for a single interconnect so that you can connect to multiple VPC networks or to different regions in a VPC network.
- Interconnect location
- The interconnect location is the colocation facility where the interconnect is provisioned. This is where your on-premises routing equipment meets Google's peering edge.
- Each interconnect location supports a subset of Google Cloud Platform (GCP) regions. When you create the VLAN attachment, you must attach the interconnect with a Cloud Router in one of those regions.
- For a list of locations and their supported regions, see Colocation Facility Locations.
- Cloud Router
- The Cloud Router is used to dynamically exchange routes between the VPC network and on-premises network through BGP. You configure the BGP session between the on-premises router and Cloud Router. All of the information for the BGP session is provided by the VLAN attachment, such as the peering IP addresses and VLAN ID.
- Cloud Router advertises subnets in the VPC network and propagates learned routes to those subnets. For more information about Cloud Router, see the overview in the Cloud Router documentation.
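The elements above can be wired together with the gcloud CLI. This is a minimal sketch, assuming the interconnect already exists; resource names, the ASNs, and the link-local peer address are illustrative placeholders, and for a dedicated attachment the actual VLAN ID and peering addresses are allocated by the attachment itself.

```shell
# Create a Cloud Router in the VPC network and region that the
# VLAN attachment will connect to (65001 is an example private ASN).
gcloud compute routers create my-router \
    --network my-vpc \
    --region us-central1 \
    --asn 65001

# Create a VLAN attachment that ties the interconnect to the Cloud Router.
gcloud compute interconnects attachments dedicated create my-attachment \
    --interconnect my-interconnect \
    --router my-router \
    --region us-central1

# Add a router interface for the attachment, then a BGP peer for the
# on-premises router. The peer IP and ASN shown are placeholders; use
# the values allocated by the VLAN attachment.
gcloud compute routers add-interface my-router \
    --region us-central1 \
    --interface-name my-attachment-if \
    --interconnect-attachment my-attachment

gcloud compute routers add-bgp-peer my-router \
    --region us-central1 \
    --peer-name on-prem-peer \
    --interface my-attachment-if \
    --peer-ip-address 169.254.10.2 \
    --peer-asn 65002
```

Once the on-premises router is configured with the matching VLAN ID and peering address, the BGP session comes up and routes are exchanged dynamically.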
Depending on your availability needs, you can configure Google Cloud Dedicated Interconnect to support mission-critical services or applications that can tolerate some downtime. To achieve a specific level of reliability, Google has two prescriptive configurations, one for 99.99% availability and another for 99.9% availability.
Google recommends that you use the 99.99% configuration. Use this configuration for production-level applications with low tolerance for downtime. If your applications aren't mission-critical and can tolerate some downtime, you can use the 99.9% configuration.
The SLA requires properly configured topologies that are defined by Google, such as the 99.99% and 99.9% configurations. Use these configurations to ensure availability and to obtain an SLA.
For the highest level of availability, Google recommends the configuration for 99.99% availability, as shown in the following diagram. Clients in the on-premises network can reach the IP addresses of VM instances in the us-central1 region through at least one of the redundant paths, and vice versa. If one path is unavailable, the other paths can continue to serve traffic.
How to configure dedicated interconnects
To achieve 99.99% availability for Dedicated Interconnect, see Creating a Topology for Production-level Applications.
To achieve 99.9% availability for Dedicated Interconnect, see Creating a Topology for Non-critical Applications (not recommended).
Balancing egress traffic with redundant interconnects
When you have a redundant topology similar to the 99.99% configuration, there are multiple paths for traffic to traverse from the VPC network to your on-premises network. If the Cloud Routers receive the same announcement with equal cost (same CIDR range and same MED value), GCP uses ECMP to balance the egress traffic across connections.
Dedicated Interconnect availability
A Dedicated Interconnect connection is considered available if you can send and receive packets (ICMP Ping) between a VM instance in a specific GCP region and a correctly configured machine in your on-premises network. You should be able to send and receive packets through at least one of your redundant connections.
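A basic availability check along these lines might look like the following. `ON_PREM_IP` and `my-router` are placeholders; the ping runs from a VM instance in the relevant region, and the router status shows whether the BGP sessions behind each redundant path are established.

```shell
# From a VM in the GCP region, verify reachability of a correctly
# configured on-premises machine (ON_PREM_IP is a placeholder).
ping -c 4 ON_PREM_IP

# Check the Cloud Router's status, including the state of its
# BGP sessions and learned routes.
gcloud compute routers get-status my-router --region us-central1
```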
When provisioning a dedicated interconnect, you'll need to understand these concepts:
- Colocation facility (also known as an Interconnect location)
- A colocation facility is where Google has a point of presence, allowing you to connect your on-premises network with Google's network. In the colocation facility, work with the facility provider to provision your routing equipment before using Google Cloud Dedicated Interconnect.
- For a list of facilities and their supported regions, see Colocation Facility Locations.
- Metropolitan area
- Colocation facilities are located in a metropolitan area (metro). These metros are typically cities where the facilities are located. Also, each metro supports a certain set of GCP regions, so the metro you pick determines which regions (and consequently which subnets) you can connect to.
- In most cases, you can meet Google's network in a colocation facility that's geographically close to your on-premises network.
- Metropolitan availability zone
- Each metro has at least two zones called metropolitan availability zones. These zones provide isolation during scheduled maintenance. This means that two zones in the same metro won't be down for maintenance at the same time.
- Metropolitan availability zones span a single metro, not multiple metros. For redundancy and to maintain an SLA, you must build duplicate interconnects in the same metro but in different metropolitan availability zones. For example, building interconnects in dfw-zone1-505 and dfw-zone2-4 provides redundancy because the facilities are in different metropolitan availability zones of the same metro. Building interconnects in facilities that are in different metros, or in the same metropolitan availability zone, doesn't provide this redundancy.
- A Letter of Authorization and Connecting Facility Assignment (LOA-CFA) identifies the connection ports that Google has assigned for your connection and grants permission for a vendor in a colocation facility to connect to them. LOA-CFA documents are required when you order connections in a colocation facility, such as when you provision Dedicated Interconnect connections.
- When you order dedicated connections, Google allocates resources for your interconnects and then generates an LOA-CFA document for each one. The LOA-CFA lists the demarcation points that Google allocated for your interconnects. Then, you provide this form to the facility vendor to provision cross connects between Google's equipment and your own. For more information about the provisioning flow, see the create connections overview.
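The facility lookup and post-provisioning checks described above can be sketched with the gcloud CLI. The facility and interconnect names are placeholders; `dfw-zone1-505` is only an example location.

```shell
# List colocation facilities where you can meet Google's network,
# then inspect one to see its metro and availability zone details.
gcloud compute interconnects locations list
gcloud compute interconnects locations describe dfw-zone1-505

# After the facility vendor provisions the cross connects, check the
# state of the interconnect (my-interconnect is a placeholder).
gcloud compute interconnects describe my-interconnect
```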
- For information about the provisioning process of a dedicated interconnect, see Provisioning Overview.
- To view a list of colocation facilities where you can meet Google's network, see Colocation Facility Locations.
- To achieve 99.99% availability for Dedicated Interconnect, see Creating a Topology for Production-level Applications.
- To achieve 99.9% availability for Dedicated Interconnect, see Creating a Topology for Non-critical Applications (not recommended).