Dedicated Interconnect overview

Dedicated Interconnect provides direct physical connections between your on-premises network and Google's network. Dedicated Interconnect enables you to transfer large amounts of data between networks, which can be more cost-effective than purchasing additional bandwidth over the public internet.

Before you use Dedicated Interconnect

Ensure that you meet the following requirements:

  • Be familiar with basic network interconnections so that you can order and configure circuits.
  • Be familiar with Cloud Interconnect terminology.
  • Your network must physically meet Google's network in a colocation facility. You must provide your own routing equipment. Your on-premises router is typically located in the colocation facility. However, you can also extend your connection to a router outside of the colocation facility.
  • In the colocation facility, your network devices must support the following technical requirements (a short checklist sketch follows this list):

    • 10-Gbps circuits, single mode fiber, 10GBASE-LR (1310 nm), or 100-Gbps circuits, single mode fiber, 100GBASE-LR4

    • IPv4 link local addressing

    • LACP, even if you're using a single circuit

    • EBGP-4 with multi-hop

    • 802.1Q VLANs
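
To double-check a device against these requirements before you order circuits, you can capture them as data, as in the following illustrative Python sketch. The capability names and the device profile are hypothetical and are not part of any Google API.

    # Illustrative only: compare an on-premises device profile against the
    # Dedicated Interconnect port requirements listed above.
    REQUIRED_CAPABILITIES = [
        "single_mode_fiber",   # 10GBASE-LR (10 Gbps) or 100GBASE-LR4 (100 Gbps) optics
        "ipv4_link_local",     # IPv4 link-local addressing
        "lacp",                # required even for a single circuit
        "ebgp4_multihop",      # EBGP-4 with multi-hop
        "dot1q_vlans",         # 802.1Q VLAN tagging
    ]

    def missing_capabilities(device_profile: dict[str, bool]) -> list[str]:
        """Return the required capabilities that the device does not report."""
        return [name for name in REQUIRED_CAPABILITIES
                if not device_profile.get(name, False)]

    # Hypothetical device profile.
    edge_router = {
        "single_mode_fiber": True,
        "ipv4_link_local": True,
        "lacp": True,
        "ebgp4_multihop": True,
        "dot1q_vlans": False,
    }
    print(missing_capabilities(edge_router))  # ['dot1q_vlans']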

How does Dedicated Interconnect work?

For Dedicated Interconnect, you provision a Dedicated Interconnect connection between the Google network and your own network.

The following example diagram shows a single Dedicated Interconnect connection between a Virtual Private Cloud (VPC) network and your on-premises network.

A Dedicated Interconnect connection (diagram).

For the basic setup shown in the diagram, a Dedicated Interconnect connection is provisioned between the Google network and the on-premises router in a common colocation facility. Your setup might be different if your on-premises router is not in the same colocation facility as your Dedicated Interconnect demarcation.

When you create a VLAN attachment, you associate it with a Cloud Router. This Cloud Router creates a BGP session for the VLAN attachment and its corresponding on-premises peer router. The Cloud Router receives the routes that your on-premises router advertises. These routes are added as custom dynamic routes in your VPC network. The Cloud Router also advertises routes for Google Cloud resources to the on-premises peer router.
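
To make the relationship between the VLAN attachment, the Cloud Router, and the BGP session more concrete, the following sketch uses the google-cloud-compute Python client to add a router interface for an existing VLAN attachment and a BGP peer that represents the on-premises router. It is an outline under assumed names and values (my-router, my-attachment, the link-local addresses, and the ASN are placeholders); confirm the exact field names against the client library reference before relying on it.

    from google.cloud import compute_v1

    def add_bgp_peer_for_attachment(project: str, region: str) -> None:
        routers = compute_v1.RoutersClient()

        # Fetch the Cloud Router that the VLAN attachment is associated with.
        router = routers.get(project=project, region=region, router="my-router")

        # Interface bound to the VLAN attachment. The link-local /29 is the range
        # allocated for the attachment (placeholder shown here).
        interface = compute_v1.RouterInterface()
        interface.name = "if-my-attachment"
        interface.linked_interconnect_attachment = (
            f"projects/{project}/regions/{region}/interconnectAttachments/my-attachment"
        )
        interface.ip_range = "169.254.10.1/29"

        # BGP peer for the on-premises router on the other side of the attachment.
        peer = compute_v1.RouterBgpPeer()
        peer.name = "peer-on-prem"
        peer.interface_name = interface.name
        peer.peer_ip_address = "169.254.10.2"   # on-premises side of the /29
        peer.peer_asn = 65001                   # on-premises ASN (placeholder)

        router.interfaces.append(interface)
        router.bgp_peers.append(peer)

        # Update the Cloud Router and wait for the operation to finish.
        operation = routers.patch(
            project=project, region=region, router="my-router", router_resource=router
        )
        operation.result()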

VLAN attachment MTU options

VLAN attachments can have a maximum transmission unit (MTU) of 1440, 1460, 1500, or 8896 bytes.

The following limitations apply:

  • VLAN attachments with an MTU of 8896 (also known as jumbo frames) are supported only on unencrypted IPv4 and IPv6 Dataplane v2 VLAN attachments.

  • Jumbo frames are not supported by Dataplane v1 VLAN attachments or HA VPN over Cloud Interconnect.

  • Requests to Google API Client Libraries automatically use an MTU of 1440 bytes, even if your VLAN attachment is set to a higher MTU value, including jumbo frames.

We recommend that you use the same MTU for all VLAN attachments that are connected to the same VPC network, and that you set the MTU of the VPC networks to the same value. For more information about Cloud Interconnect MTUs, see Cloud Interconnect MTU.
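
As a purely illustrative expression of this recommendation, the following sketch flags VLAN attachments whose MTU differs from the VPC network MTU or is not one of the supported values. The attachment names and values are placeholders.

    # Supported VLAN attachment MTU values (bytes).
    VALID_ATTACHMENT_MTUS = {1440, 1460, 1500, 8896}

    def mtu_warnings(vpc_mtu: int, attachment_mtus: dict[str, int]) -> list[str]:
        """Return warnings for attachments whose MTU is unsupported or mismatched."""
        warnings = []
        for name, mtu in attachment_mtus.items():
            if mtu not in VALID_ATTACHMENT_MTUS:
                warnings.append(f"{name}: {mtu} is not a supported attachment MTU")
            if mtu != vpc_mtu:
                warnings.append(f"{name}: attachment MTU {mtu} differs from VPC MTU {vpc_mtu}")
        return warnings

    print(mtu_warnings(1500, {"attach-a": 1500, "attach-b": 1460}))
    # ['attach-b: attachment MTU 1460 differs from VPC MTU 1500']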

Provisioning

To create and configure a Dedicated Interconnect connection, you start by deciding where you want a Dedicated Interconnect connection and whether you want MACsec for Cloud Interconnect. Then, you order a Dedicated Interconnect connection so that Google can allocate the necessary resources and send you a Letter of Authorization and Connecting Facility Assignment (LOA-CFA). After you receive the LOA-CFA, you need to submit it to your vendor so that they can provision the connections between Google's network and your network.

You then need to configure and test the connections with Google before you can use them. After they're ready, you can create VLAN attachments to allocate a VLAN on the connection.
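
As a rough sketch of that last step, the following snippet creates a VLAN attachment on an existing Dedicated Interconnect connection with the google-cloud-compute Python client. The names (my-interconnect, my-router, my-attachment) and the VLAN ID are placeholders, and the field names are worth confirming against the client library reference.

    from google.cloud import compute_v1

    def create_vlan_attachment(project: str, region: str) -> None:
        client = compute_v1.InterconnectAttachmentsClient()

        attachment = compute_v1.InterconnectAttachment()
        attachment.name = "my-attachment"
        attachment.type_ = "DEDICATED"  # attachment on a Dedicated Interconnect connection
        attachment.interconnect = f"projects/{project}/global/interconnects/my-interconnect"
        attachment.router = f"projects/{project}/regions/{region}/routers/my-router"
        attachment.vlan_tag8021q = 100  # 802.1Q VLAN ID to allocate on the connection
        attachment.mtu = 1500           # 1440, 1460, 1500, or 8896

        # Create the attachment and wait for the regional operation to complete.
        operation = client.insert(
            project=project, region=region, interconnect_attachment_resource=attachment
        )
        operation.result()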

For detailed steps to provision a Dedicated Interconnect connection, see the Provisioning overview.

Fixed pricing

Dedicated Interconnect offers fixed port pricing for outbound data transfers for some VLAN attachments. This lets you get a fixed monthly invoice for outbound data transfers.

Fixed port pricing considers the following types of connections (a short classification sketch follows this list):

  • Local connection: The VLAN attachment is in the same metro location where the destination Google Cloud region is located.

    For example, if you obtain a VLAN attachment in the Los Angeles, California metro location and the destination Google Cloud region is us-west2, then the VLAN attachment's location and destination Google Cloud region are the same. This is considered a local connection.

  • Remote connection: The VLAN attachment is in a different metro location from the one where the destination Google Cloud region is located.

    For example, if you create a VLAN attachment in the Los Angeles, California metro location and the destination Google Cloud region is us-east4, then the VLAN attachment's location and destination Google Cloud region are different. This is considered a remote connection.

    Similarly, if you obtain a VLAN attachment in the Portland, Oregon metro location, then there isn't a local Google Cloud region available within that metro location. Because you can't connect to a local Google Cloud region, this is considered a remote connection.
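
To make the distinction concrete, the following sketch classifies a connection from a hypothetical metro-to-region mapping; the mapping shown is a small illustrative excerpt, not an authoritative list.

    # Hypothetical excerpt: metro location -> Google Cloud regions local to that metro.
    LOCAL_REGIONS_BY_METRO = {
        "Los Angeles": {"us-west2"},
        "Portland": set(),  # no local Google Cloud region in this metro
    }

    def connection_type(metro: str, destination_region: str) -> str:
        """Classify a VLAN attachment's connection for fixed port pricing."""
        local_regions = LOCAL_REGIONS_BY_METRO.get(metro, set())
        return "local" if destination_region in local_regions else "remote"

    print(connection_type("Los Angeles", "us-west2"))  # local
    print(connection_type("Los Angeles", "us-east4"))  # remote
    print(connection_type("Portland", "us-west1"))     # remote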

The following Google Cloud regions support fixed pricing on Dedicated Interconnect only for remote connections:

  • us-west1
  • us-east1
  • europe-west4
  • europe-north1
  • asia-east1

To request a Dedicated Interconnect connection with fixed port pricing, contact your account team.

Redundancy and SLA

Depending on your availability needs, you can configure Dedicated Interconnect to support mission-critical services or applications that can tolerate some downtime. To achieve a specific level of reliability, Google offers two prescriptive configurations: one for 99.99% availability and one for 99.9% availability.

For the highest level of availability, we recommend the 99.99% availability configuration as the base configuration, as shown in the following diagram. Clients in the on-premises network can reach the IP addresses of virtual machine (VM) instances in the us-central1 region through at least one of the redundant paths. If one path is unavailable, the other paths can continue to serve traffic.

Redundant connections for 99.99% availability (diagram).

We recommend that you use the 99.99% availability configuration for production-level applications with a low tolerance for downtime. If your applications aren't mission-critical and can tolerate some downtime, you can use the 99.9% availability configuration.

The SLA applies only to properly configured topologies that follow the 99.99% or 99.9% prescriptive configuration.

Balance egress traffic with redundant connections

When you have a redundant topology similar to the 99.99% configuration, there are multiple paths for traffic to traverse from the VPC network to your on-premises network.

Google Cloud uses equal-cost multipath (ECMP) routing to balance egress traffic across connections. For ECMP to apply, the Cloud Routers used by the VLAN attachments must receive the same announcement with equal cost (the same CIDR range and the same MED values).
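
The equal-cost condition can be illustrated with a short sketch that checks whether every Cloud Router received a prefix with the same MED value; the announcement data shown is hypothetical.

    # Hypothetical received announcements: Cloud Router name -> {prefix: MED}.
    received = {
        "cloud-router-1": {"10.0.0.0/16": 100},
        "cloud-router-2": {"10.0.0.0/16": 100},
    }

    def ecmp_eligible(prefix: str) -> bool:
        """True if every Cloud Router received the prefix with the same MED."""
        meds = {routes.get(prefix) for routes in received.values()}
        return len(meds) == 1 and None not in meds

    print(ecmp_eligible("10.0.0.0/16"))  # True -> egress can be balanced with ECMP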

How Dedicated Interconnect balances traffic across connections depends on the Dataplane version that your VLAN attachments use:

  • All VLAN attachments operate on Dataplane v1: Traffic is balanced only approximately according to the configured capacity, and balancing might not work optimally when VLAN attachment capacities differ.

  • All VLAN attachments operate on Dataplane v2: Google Cloud balances traffic between the VLAN attachments in proportion to the configured capacity of each attachment, as shown in the sketch after this list.

  • VLAN attachments operate on a mix of Dataplane v1 and v2: Egress traffic might be misbalanced between the VLAN attachments. Misbalanced traffic is most noticeable for attachments with less than 1 Gbps of configured capacity.

    Google is migrating all existing VLAN attachments to use Dataplane v2 without any action required on your part. If you need to migrate to Dataplane v2 to resolve misbalanced VLAN attachments, contact Google Cloud Support.
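
For the case where all attachments use Dataplane v2, the expected split is proportional to configured capacity. The following sketch shows the arithmetic with placeholder capacities.

    def expected_egress_share(capacities_gbps: dict[str, float]) -> dict[str, float]:
        """Approximate per-attachment share of egress when all attachments use Dataplane v2."""
        total = sum(capacities_gbps.values())
        return {name: capacity / total for name, capacity in capacities_gbps.items()}

    # Two attachments with different configured capacities (placeholder values):
    # attach-a gets about two thirds of egress, attach-b about one third.
    print(expected_egress_share({"attach-a": 10.0, "attach-b": 5.0}))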

Create redundant connections with sufficient capacity

The Best practices document describes best practices for creating redundant Cloud Interconnect connections that have sufficient capacity in a failover scenario. Following these practices helps ensure that events such as planned maintenance or hardware failures don't cause loss of connectivity.

Dedicated Interconnect availability

A Dedicated Interconnect connection is considered available if you can send and receive ICMP packets (ping) between a VM in a specific Google Cloud region and a correctly configured machine in your on-premises network. You should be able to send and receive packets through at least one of your redundant connections.
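
A minimal way to exercise this definition from a VM is to ping an on-premises host over each redundant connection and require at least one path to respond. The sketch below shells out to the system ping command on a Linux VM; the target addresses are placeholders.

    import subprocess

    # Placeholder on-premises addresses reached through different redundant connections.
    ON_PREM_TARGETS = ["192.0.2.10", "192.0.2.11"]

    def path_is_up(address: str) -> bool:
        """Send a few ICMP echo requests using the system ping command."""
        result = subprocess.run(
            ["ping", "-c", "3", "-W", "2", address],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    if any(path_is_up(address) for address in ON_PREM_TARGETS):
        print("Dedicated Interconnect is available through at least one connection.")
    else:
        print("No redundant connection is currently reachable.")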

IPv6 support

Dedicated Interconnect supports IPv6 traffic.

To support IPv6 traffic in a Dedicated Interconnect connection, your IPv6-enabled VPC networks must include dual-stack subnets. In addition, the subnets must be assigned internal IPv6 ranges.

You must also configure IPv6 addresses on the VMs in the subnet.

For information about using internal IPv6 ranges in your VPC network and subnets, see Internal IPv6 specifications.

After you configure IPv6 in your VPC network, subnets, and VMs, you can configure dual-stack VLAN attachments.

Stack types and BGP sessions

With Dedicated Interconnect, you can choose between two stack types for your VLAN attachment:

  • Single stack (IPv4 only)
  • Dual stack (IPv4 and IPv6)

The stack type that you select for your VLAN attachment determines what version of IP traffic is supported by your Dedicated Interconnect connection.

When you create the BGP sessions for a dual-stack VLAN attachment, you can exchange IPv6 routes over an IPv6 BGP session or over an IPv4 BGP session that uses multiprotocol BGP (MP-BGP).

The following table summarizes the types of BGP sessions allowed for each Dedicated Interconnect VLAN attachment stack type.

Stack type      Supported BGP sessions
IPv4 only       IPv4 BGP
IPv4 and IPv6   • IPv4 BGP, with or without MP-BGP
                • IPv6 BGP, with or without MP-BGP
                • Both IPv4 BGP and IPv6 BGP, without MP-BGP

For more information about BGP sessions, see Establish BGP sessions in the Cloud Router documentation.

Single-stack IPv4-only VLAN attachments

By default, a Dedicated Interconnect VLAN attachment is assigned the IPv4-only stack type.

An IPv4-only VLAN attachment can support only IPv4 traffic.

Use the following procedure to create an IPv4-only Dedicated Interconnect VLAN attachment and an IPv4 BGP session.

Dual-stack IPv4 and IPv6 VLAN attachments

A Dedicated Interconnect VLAN attachment that is configured with the dual-stack (IPv4 and IPv6) stack type can support both IPv4 and IPv6 traffic.

For a dual-stack VLAN attachment, you can configure your Cloud Router with an IPv4 BGP session, an IPv6 BGP session, or both. If you configure only one BGP session, you can enable MP-BGP to allow that session to exchange both IPv4 and IPv6 routes. If you create an IPv4 BGP session and an IPv6 BGP session, you can't enable MP-BGP on either session.
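
These rules can be expressed as a small validation check, illustrated below with a hypothetical helper that takes the configured session types and whether MP-BGP is enabled.

    def valid_dual_stack_bgp(has_ipv4_session: bool, has_ipv6_session: bool,
                             mp_bgp_enabled: bool) -> bool:
        """Check the BGP session rules for a dual-stack VLAN attachment."""
        if has_ipv4_session and has_ipv6_session:
            # With separate IPv4 and IPv6 sessions, MP-BGP can't be enabled on either one.
            return not mp_bgp_enabled
        # A single IPv4 or IPv6 session can run with or without MP-BGP.
        return has_ipv4_session or has_ipv6_session

    print(valid_dual_stack_bgp(True, False, True))   # True: IPv4 session using MP-BGP
    print(valid_dual_stack_bgp(True, True, True))    # False: MP-BGP with both session types
    print(valid_dual_stack_bgp(True, True, False))   # True: separate IPv4 and IPv6 sessions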

Use the following procedure to create a dual-stack Dedicated Interconnect VLAN attachment and all supported BGP sessions.

You can also change the stack type of your VLAN attachment after you create the attachment. For more information, see Modify VLAN attachment.

Restrict Dedicated Interconnect usage

By default, any VPC network can use Cloud Interconnect. To control which VPC networks can use Cloud Interconnect, you can set an organization policy. For more information, see Restrict Cloud Interconnect usage.

What's next?