Cloud Interconnect FAQ

This document covers commonly asked questions about Cloud Interconnect features and architecture, grouped into the following major sections:

  • Traffic over Cloud Interconnect
  • Cloud Interconnect architecture
  • VLAN attachments
  • MPLS

Traffic over Cloud Interconnect

This section covers questions about traffic types, bandwidth, and encryption over Cloud Interconnect.

What kind of packets are carried over Cloud Interconnect?

The Cloud Interconnect circuit carries 802.1q Ethernet frames with IPv4 packets in the payload. These frames are also known as VLAN-tagged Ethernet frames.

The value of the 12-bit VLAN ID (VID) field of the 802.1q header is the same as the VLAN ID value assigned by Google Cloud when a VLAN attachment is created. For more information, see the documentation on creating VLAN attachments for Dedicated Interconnect and Partner Interconnect.

How can I encrypt my traffic over Cloud Interconnect?

Depending on the service that is accessed by using Cloud Interconnect, your traffic might already be encrypted without your needing to do anything special. For example, if you are accessing one of the Google Cloud APIs reachable over Cloud Interconnect, that traffic is already encrypted with TLS in the same way as if the APIs were accessed over the public internet.

You can also use the TLS solution for services that you create; for example, a service that you offer on a Compute Engine instance or on a Google Kubernetes Engine Pod that supports the HTTPS protocol.

If you need encryption at the IP layer, you can create one or more self-managed (non-Google Cloud) VPN gateways in your Virtual Private Cloud (VPC) network and assign a private IP address to each gateway. For example, you can run a strongSwan VPN on a Compute Engine instance. You can then terminate IPsec tunnels to those VPN gateways through Cloud Interconnect from an on-premises environment.
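
For example, the following sketch creates a Compute Engine instance that could host a self-managed strongSwan gateway; the instance name, zone, and image are hypothetical placeholders, and installing and configuring strongSwan itself is not shown. The --can-ip-forward flag lets the instance forward traffic on behalf of other hosts, and --no-address gives it only a private IP address:

# Hypothetical instance to host a self-managed strongSwan VPN gateway
gcloud compute instances create vpn-gateway-1 \
  --zone us-central1-a \
  --image-family debian-12 \
  --image-project debian-cloud \
  --can-ip-forward \
  --no-address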

For more details, see the Encryption in transit documentation.

Can I create a 100-Gbps connection over Dedicated Interconnect?

Yes, you can scale your connection to Google based on your needs.

An Interconnect connection consists of one or more circuits deployed as a single Ethernet link aggregation group (LAG). The circuits in a connection can be 10 Gbps or 100 Gbps, but not both.

A connection can have one of the following maximum capacities:

  • 8 x 10-Gbps circuits (80 Gbps total)
  • 2 x 100-Gbps circuits (200 Gbps total)

Both Dedicated Interconnect and Partner Interconnect support VLAN attachment capacities from 50 Mbps to 50 Gbps. Although the maximum supported attachment size for Partner Interconnect is 50 Gbps, not all sizes might be available, depending on what's offered by your chosen service provider in the selected location.

You can order more than one connection and use them in an active-active fashion by using the Border Gateway Protocol (BGP) routing capabilities of Cloud Router.

For a detailed list of capacities, quotas, and limits, see Cloud Interconnect Pricing and Quotas and limits.

Can I reach my instances using IPv6 over Cloud Interconnect?

VPC does not offer a built-in capability to terminate IPv6 traffic to instances.

Can I specify the BGP peering IP address?

  • For Partner Interconnect, no. Google chooses the peering IP addresses.
  • For Dedicated Interconnect, you can specify an IP address range (CIDR block) that Google selects from when you create a VLAN attachment. This CIDR block must be in the link-local IP address range 169.254.0.0/16, as shown in the sketch after this list.
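
The following is a minimal sketch of supplying a candidate range, reusing the hypothetical names from the VLAN attachment example later in this document; the --candidate-subnets flag accepts one or more /29 blocks within 169.254.0.0/16 from which Google selects:

# Propose a specific link-local /29 for the BGP peering addresses
gcloud compute interconnects attachments dedicated create my-attachment \
  --router my-router \
  --interconnect my-interconnect \
  --region us-central1 \
  --candidate-subnets 169.254.10.0/29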

Can I reach Google APIs through Cloud Interconnect from on-premises? Which services or APIs are available?

There are two ways to reach Google APIs:

  • Option 1: You can enable Private Google Access for one or more subnets in the VPC network, and deploy one or more reverse-proxy instances in those subnets. These reverse proxies have only VPC private IP addresses configured, and are thus reachable only through the Cloud Interconnect link from on-premises. This solution grants access to most Google Cloud APIs, Google developer APIs, and Google Cloud services.

    For more details, including a list of Google Cloud services that Private Google Access supports, see Configuring Private Google Access.

  • Option 2: You can use Private Google Access for on-premises hosts. In this case, requests from on-premises hosts must be sent to restricted.googleapis.com, which resolves to the IP range 199.36.153.4/30, also known as the restricted VIP range.

    To advertise the restricted VIP range, add a custom route advertisement to Cloud Router. This ensures that traffic to the restricted VIP (as a destination) is routed from on-premises to the API endpoints over Cloud Interconnect. Only the Google APIs and services that support the restricted VIP are reachable with this solution. A sketch of the Cloud Router configuration appears after this answer.

For the latest information about configuration details and supported services, see Configuring Private Google Access for on-premises hosts.
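
The following is a minimal sketch of the Cloud Router side of Option 2, assuming a hypothetical router named my-router in us-central1. It switches the router to custom advertisement mode, keeps advertising the VPC subnets, and adds the restricted VIP range:

# Advertise the restricted VIP range in addition to all subnet routes
gcloud compute routers update my-router \
  --region us-central1 \
  --advertisement-mode CUSTOM \
  --set-advertisement-groups ALL_SUBNETS \
  --set-advertisement-ranges 199.36.153.4/30

On-premises hosts must also resolve Google API names to the restricted VIP range, as described in the referenced document.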

Can I use Cloud Interconnect as a private channel to access all Google Workspace services through a browser?

As of December 2018, it is not possible to reach Google Workspace applications through Cloud Interconnect.

Why do my BGP sessions flap continuously after a certain interval?

Check for an incorrect subnet mask on your on-premises BGP IP range. For example, instead of configuring 169.254.10.0/29, you might have configured 169.254.10.0/30.
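
To verify the state of the session, you can query the Cloud Router status, which reports the state and uptime of each BGP peer; the router name and region here are hypothetical:

# Inspect BGP peer state and uptime on the Cloud Router
gcloud compute routers get-status my-router \
  --region us-central1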

Can I send and learn MED values over an L3 Partner Interconnect connection?

If you are using a Partner Interconnect connection where a Layer 3 service provider handles BGP for you, Cloud Router can't learn MED values from your on-premises router or send MED values to that router. This is because MED values can't pass through autonomous systems. Over this type of connection, you can't set route priorities for routes advertised by Cloud Router to your on-premises router. In addition, you can't set route priorities for routes advertised by your on-premises router to your VPC network.

Cloud Interconnect architecture

This section covers common questions that arise when designing or using a Cloud Interconnect architecture.

Can I rename Dedicated Interconnect connections or move them to a different project?

No. After you name a Dedicated Interconnect connection, you can't rename it or move it to a different Google Cloud project. Instead, you must delete the connection and recreate it with a new name or in a different project.

Can I use Cloud Interconnect to connect to the public internet?

As of December 2018, internet routes are not advertised over Cloud Interconnect.

How can I connect to Google Cloud if I'm located in a PoP location not listed in the colocation facility locations?

You have two options, after which you can go through the normal ordering and provisioning process for Dedicated Interconnect:

  • Option 1: You can order leased lines from a carrier to connect from your point-of-presence (PoP) location to one of Google's Cloud Interconnect colocation facilities. Usually, it's best to contact your existing colocation facility provider and get a list of on-net providers. An on-net provider is a provider that already has infrastructure in the building where you are located. Using an on-net provider is cheaper and faster than using a different provider who must build out infrastructure to meet you in your existing PoP location.
  • Option 2: You can use Partner Interconnect with a service provider who can provide a last-mile circuit to meet you. Colocation providers cannot usually provide this type of service because they have fixed locations where you must already be present.

If I use Partner Interconnect, can I see the connection in the project where I create the VLAN attachment?

When you use the Partner Interconnect service, the object for the Interconnect connection is created in the service provider project and is not visible in your project. The VLAN attachment (interconnectAttachment) is still visible inside your project, as in the Dedicated Interconnect case.

How do I create a redundant architecture that uses Cloud Interconnect?

Depending on the desired SLA, there are specific architectures that must be implemented for both Dedicated Interconnect and Partner Interconnect.

Topologies for production-ready architectures with a 99.99% SLA and for non-critical applications with a 99.9% SLA are available at Cloud Interconnect tutorials.
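
As an illustration of the Partner Interconnect 99.9% topology, the following hedged sketch creates two VLAN attachments on the same Cloud Router in separate edge availability domains; all names are hypothetical, and each attachment must still be activated with your service provider:

# Two attachments in different edge availability domains for redundancy
gcloud compute interconnects attachments partner create my-attachment-1 \
  --router my-router \
  --region us-central1 \
  --edge-availability-domain availability-domain-1

gcloud compute interconnects attachments partner create my-attachment-2 \
  --router my-router \
  --region us-central1 \
  --edge-availability-domain availability-domain-2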

These SLA levels refer to the availability of the Interconnect connection, which is the availability of the routed connection between the on-premises location and the VPC network. For example, if you create a service on Compute Engine instances that's reachable through Cloud Interconnect, the service availability depends on the combined availability of both the Cloud Interconnect service and the Compute Engine service.

  • For Dedicated Interconnect, a single Interconnect connection (LACP bundle) has no uptime SLA.
  • For Partner Interconnect, a single VLAN attachment has no uptime SLA.

Issues with single connection/bundle failures are treated as a support case priority no higher than P3: Medium Impact—Service Use Partially Impaired. Therefore, you cannot expect a quick resolution or further analysis of the root cause.

Due to planned or unplanned maintenance, single links or bundles might be drained for extended periods of time, such as hours or days.

Can I forward traffic over Cloud Interconnect between my on-premises legacy application and my internal load balancer backends?

In this scenario, you deployed an application that consists of two tiers: an on-premises tier that has not yet been migrated to Google Cloud (legacy tier) and a cloud tier running on VPC instances that are also backends of a Google Cloud internal load balancer.

You can use Cloud Interconnect to forward the traffic between these two application tiers as long as you implement the necessary routes between Cloud Router and your on-premises router. The Cloud Router that you use for the Interconnect connection that handles this application's traffic must reside in the same region as the subnet that contains the load balancer backends. This is because the internal load balancer only supports regional routing. Internal load balancer access is lost when global routing for the VPC uses a tunnel outside the region where the load balancer backends are located. For more information, see Using Cloud VPN and Cloud Interconnect.

If the on-premises traffic enters the VPC network from a different region, you can either deploy an internal load balancer with the respective backends in the other region, or route the traffic to a reverse proxy from which the internal load balancer VIP can be reached.

Can I move one or more instances of Cloud Interconnect between Google Cloud projects or organizations?

If you want to move a project to a new Google Cloud organization, you can open a support case, and Google Cloud Support can facilitate the move.

Changes of organization do not affect Dedicated Interconnect and VLAN attachments as long as the project stays the same.

For project changes, if you are performing a Cloud Interconnect activation and you have an LOA but have not yet completed the activation, cancel the current activation and create a new one in the correct project. Google issues you a new LOA, which you can then give to your Interconnect connection provider. For steps, see Ordering a connection and Retrieving LOA-CFAs.

An active Interconnect connection can't be moved between projects because it is a child object of the project, and there is no ability to automatically migrate objects between projects. If possible, you should start a request for a new Interconnect connection.

How can I use the same Interconnect connection to connect multiple VPC networks in multiple projects inside the same Google Cloud organization?

For either Dedicated Interconnect or Partner Interconnect, you can use Shared VPC or VPC Network Peering to share a single attachment between multiple VPC networks. For steps, see Enabling multiple VPC networks to access the same VLAN attachment.
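
For the VPC Network Peering option, the following is a minimal sketch with hypothetical network names: the network that terminates the VLAN attachment (hub-vpc) exports its custom routes, and the peered network (spoke-vpc) imports them, so routes learned over Cloud Interconnect propagate to the second network:

# In the network that contains the VLAN attachment:
gcloud compute networks peerings create hub-to-spoke \
  --network hub-vpc \
  --peer-network spoke-vpc \
  --export-custom-routes

# In the peered network:
gcloud compute networks peerings create spoke-to-hub \
  --network spoke-vpc \
  --peer-network hub-vpc \
  --import-custom-routes

If the networks are in different projects, add the --peer-project flag to each command.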

For Partner Interconnect

If you can't use Shared VPC or VPC Network Peering—for example, because you need to keep the VPC networks separate—you must create additional VLAN attachments. Creating more attachments might incur additional costs.

If you have multiple VLAN attachments, including attachments in different projects, you can pair them with a Partner Interconnect connection from the same service provider or with Partner Interconnect connections from different service providers.

For Dedicated Interconnect

You can create multiple attachments, one for each project or VPC network that you want to connect to.

If you have many projects, you can give each project its own VLAN attachment and its own Cloud Router while configuring all the attachments to use the same physical Dedicated Interconnect connection in a specified project.
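
A hedged sketch of this pattern, with all project, router, and connection names hypothetical: the attachment is created in one project but references the Dedicated Interconnect connection owned by another project through its full resource URL, which is how cross-project references are typically expressed:

# Attachment in project-b referencing the connection owned by project-a
gcloud compute interconnects attachments dedicated create project-b-attachment \
  --project project-b \
  --router project-b-router \
  --region us-central1 \
  --interconnect https://www.googleapis.com/compute/v1/projects/project-a/global/interconnects/my-interconnect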

The VLAN attachment, in addition to being a VLAN with an 802.1q ID, is a child object of an Interconnect connection that exists in a project.

In this model, each VPC network has its own routing configuration. If you want to centralize routing policies, you can review the Shared VPC model and Shared VPC considerations. You can then terminate the VLAN attachment in the VPC network of the Shared VPC host project. Your host project has a quota for the maximum number of VLAN attachments per Interconnect connection. For details, see Cloud Interconnect Quotas and limits.

Can I use a single Interconnect connection to connect multiple on-premises sites to my VPC network?

Yes, you can. For example, if the multiple sites are part of an MPLS VPN network, either self-managed or managed by a carrier, you can logically add the VPC network as an additional site by using an approach similar to Inter-AS MPLS VPN Option A (for more information, see RFC 4364, Section 10).

This solution is described in the answer for making a VPC network appear in a partner's MPLS VPN service. By applying the BGP capabilities of Cloud Router, it is possible to inject VPC routes inside an existing IP core fabric by using techniques and architectures similar to the ones used to import internet routes.

Can I physically patch together an Interconnect connection and an interconnect from another cloud provider?

If you are already using another cloud provider that offers a service functionally equivalent to Cloud Interconnect, there is no agreed-upon configuration between cloud providers to physically patch together two connections, one provided by Google Cloud and one provided by the other cloud provider. However, you can route between the private address space of the VPC network and the network of a different cloud provider.

If the service handoff point for the other cloud provider is located in the same location as Cloud Interconnect, you can provision your own router in that location to terminate the two connection services. The router then routes between the VPC network and the other cloud provider's network. This configuration lets you route directly from the two cloud networks into your on-premises network with minimal delay.

Some Partner Interconnect carriers are able to offer this configuration as a managed service, based on a virtual router. If Google Cloud and the other cloud provider terminate connection services in different locations, you must provide a circuit that connects the two locations.

How can I connect AWS and Google Cloud without placing equipment in a colocation facility near the Google edge?

Megaport offers its own cloud router solution for Google Cloud customers who don't want to place hardware near the Google edge. For information about how to set up this product with Google Cloud, see the configuration instructions.

VLAN attachments

This section covers questions about VLAN attachments.

How can I choose the VLAN ID that is used for a VLAN attachment?

For a VLAN attachment created with Partner Interconnect, the service provider either chooses the VLAN ID during the attachment creation process or lets you choose it. Check with your service provider to determine if they let you choose the VLAN ID for VLAN attachments.

For a VLAN attachment created with Dedicated Interconnect, you can use the gcloud compute interconnects attachments dedicated create command with the --vlan flag, or you can follow the Google Cloud Console instructions.

The following example shows how to use the gcloud command to create an attachment with a VLAN ID of 5:

gcloud compute interconnects attachments dedicated create my-attachment \
  --router my-router \
  --interconnect my-interconnect \
  --vlan 5 \
  --region us-central1

For full instructions, see the documentation on creating VLAN attachments for Dedicated Interconnect.

Can I use a Cloud Router with more than one VLAN attachment?

Yes, this is a supported configuration.
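
For example, assuming the hypothetical names below, you can bind two VLAN attachments to the same Cloud Router by adding one router interface per attachment; each interface then needs its own BGP peer:

# One Cloud Router interface per VLAN attachment
gcloud compute routers add-interface my-router \
  --region us-central1 \
  --interface-name if-attachment-1 \
  --interconnect-attachment my-attachment-1

gcloud compute routers add-interface my-router \
  --region us-central1 \
  --interface-name if-attachment-2 \
  --interconnect-attachment my-attachment-2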

Can I configure attachments whose combined bandwidth exceeds the bandwidth of my Interconnect connection?

Yes, but creating attachments with a combined bandwidth greater than the Interconnect connection doesn't give you more than the maximum supported bandwidth of the connection.

MPLS

This section covers questions about Cloud Interconnect and Multiprotocol Label Switching (MPLS).

Can I use Cloud Interconnect to terminate an MPLS LSP inside my VPC network?

As of December 2018, VPC does not offer a built-in capability to terminate MPLS LSPs.

For a self-managed MPLS VPN service, can I make my VPC network appear as an additional site?

If you have an MPLS VPN service that you manage yourself, you can make your VPC network appear as an additional site in that VPN.

This scenario assumes that you are not buying an MPLS VPN service from a provider. Instead, you have an MPLS VPN environment where you manage and configure the P and PE routers of the MPLS network yourself.

To make your VPC network appear as an additional site in your self-managed MPLS VPN service, do the following:

  1. Connect one of your MPLS VPN PE edge devices to your peering edge device for Dedicated Interconnect by using a model similar to Inter-AS MPLS VPN Option A (see RFC 4364, Section 10). In other words, you can terminate the required MPLS VPN (for example, VRF_A) on your PE edge device, and then use VLAN-to-VRF mapping to "join" the Google Cloud VLAN attachment to this VPN, essentially mapping the VLAN to VRF_A at the PE edge device.

  2. Create a standard IPv4 BGP session between the PE router and Cloud Router to ensure that routes are exchanged between them. The routes sent by Cloud Router appear only in the VPN routing table (inside VRF_A) and not in the global routing table of the PE edge device.

    You can manage overlapping IP ranges by creating multiple, separate VPNs, for example, VRF_A and VRF_B, each with a BGP session to a Cloud Router in a specific VPC network (for example, VPC_A and VPC_B). This procedure does not require any MPLS encapsulation between your PE edge device and the peering edge device for Dedicated Interconnect. A sketch of the Cloud Router side of the BGP session follows these steps.
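
On the Google Cloud side, step 2 corresponds to adding a BGP peer on the Cloud Router interface that is bound to the VLAN attachment. The following is a minimal sketch; the router name, interface name, peer address, and ASN are hypothetical placeholders:

# BGP peer toward the PE router's VRF-facing address
gcloud compute routers add-bgp-peer my-router \
  --region us-central1 \
  --peer-name pe-vrf-a \
  --interface if-attachment-1 \
  --peer-ip-address 169.254.10.2 \
  --peer-asn 65010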

Can I make my VPC network appear as an additional site in my MPLS VPN from a carrier who is also a service provider for Partner Interconnect?

If you buy an MPLS VPN service from a carrier that is also an official service provider for Partner Interconnect, you can make your VPC network appear as an additional site in your MPLS VPN.

In this case, the carrier manages and configures the P and PE routers of their MPLS network. Because Partner Interconnect uses the exact same connectivity model as Dedicated Interconnect, the carrier can use a model similar to Inter-AS MPLS VPN Option A (see RFC 4364, Section 10).

Essentially, the carrier provides a Layer 3 Partner Interconnect service to you, and then "binds" your VLAN attachment with the correct MPLS VPN on the carrier's edge device. Because this is a Layer 3 service model, the BGP session is established between your Cloud Router and your VRF inside the carrier edge device. For details, see the Partner Interconnect overview.