Cloud Interconnect FAQ

This document covers commonly asked questions about Cloud Interconnect features and architecture, grouped into the following major sections:

Traffic over Cloud Interconnect

This section covers questions about traffic types, bandwidth, and encryption over Cloud Interconnect.

What kind of packets are carried over Cloud Interconnect?

The Cloud Interconnect circuit carries 802.1q Ethernet frames with IPv4 packets in the payload. These frames are also known as VLAN-tagged Ethernet frames.

The value of the 12-bit VLAN ID (VID) field of the 802.1q header is the same as the VLAN ID value assigned by Google Cloud Platform when an interconnect attachment (VLAN) is created. For more information, see the Dedicated Interconnect and Partner Interconnect documentation.
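
For example, you can confirm the VLAN ID assigned to an existing attachment with the gcloud CLI; the attachment and region names below are placeholders, and the ID appears in the vlanTag8021q field of the output:

# Show the attachment, including its assigned 802.1q VLAN ID
gcloud compute interconnects attachments describe my-attachment \
  --region us-central1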

How can I encrypt my traffic over Cloud Interconnect?

Depending on the service that is accessed using Cloud Interconnect, your traffic might already be encrypted without you needing to do anything special. For example, if you are accessing one of the Google Cloud Platform APIs reachable over Cloud Interconnect, that traffic is already encrypted with TLS in the same way as if the APIs were accessed over the public internet.

You can also use TLS for services that you create, for example, a service that you offer on a Compute Engine instance or a Google Kubernetes Engine pod that supports HTTPS.

If you need encryption at the IP layer, you can create one or more self-managed (non-GCP) VPN gateways in your Virtual Private Cloud network and assign a private IP address to each gateway, for example, by running StrongSwan on a Compute Engine instance. You can then terminate IPsec tunnels to those VPN gateways through Cloud Interconnect from on premises.
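
A minimal sketch of creating such a gateway instance with only a private IP address; the instance name, zone, and image are assumptions, and StrongSwan itself still has to be installed and configured on the instance:

# Private-IP-only instance that can forward tunneled traffic
gcloud compute instances create vpn-gateway \
  --zone us-central1-a \
  --can-ip-forward \
  --no-address \
  --image-family debian-11 \
  --image-project debian-cloud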

For more details, see the Encryption-in-Transit documentation.

Can I create a 100G connection over Dedicated Interconnect?

Yes, you can scale your connection to Google based on your needs.

A Cloud Interconnect connection consists of one or more circuits deployed as an Ethernet link aggregation group (LAG). The circuits in a connection can be 10 Gbps or 100 Gbps, but not both within the same connection.

For Dedicated Interconnect, the following circuit capacities are supported per interconnect:

  • 8 x 10 Gbps connections (80 Gbps total)
  • 1 x 100 Gbps connection (100 Gbps total)
  • 2 x 100 Gbps connections (200 Gbps total)

For Partner Interconnect, the following capacities are supported for each interconnect attachment (VLAN):

  • From 50 Mbps to 10 Gbps interconnect attachments (VLANs)

You can order more than one Cloud Interconnect and use them in an active-active fashion by leveraging the BGP routing capabilities of Cloud Router.
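
For active-active load sharing, the routes advertised on each connection's BGP session typically need the same priority (MED). A hedged sketch, assuming a Cloud Router named my-router with a BGP peer named my-peer:

# Advertise routes on this session with base priority 100;
# repeat with the same value for the peer on the other connection
gcloud compute routers update-bgp-peer my-router \
  --peer-name my-peer \
  --advertised-route-priority 100 \
  --region us-central1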

For updated information, refer to the Cloud Interconnect Quotas page.

Can I reach my instances using IPv6 over Cloud Interconnect?

VPC does not offer a native capability to terminate IPv6 traffic to instances.

Can I specify the BGP peering IP address?

  • For Partner Interconnect, no. The peering IP addresses are chosen by Google.
  • For Dedicated Interconnect, you can specify an IP address range (CIDR block) that Google selects from when you create an interconnect attachment (VLAN). This CIDR block must be in the link-local IP address range 169.254.0.0/16, as in the sketch below.
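
A hedged sketch using the --candidate-subnets flag, which supplies link-local candidate ranges that Google picks the peering /29 from; the attachment, router, interconnect, and region names are placeholders:

# Offer 169.254.10.0/29 as a candidate for the BGP peering addresses
gcloud compute interconnects attachments dedicated create my-attachment \
  --router my-router \
  --interconnect my-interconnect \
  --candidate-subnets 169.254.10.0/29 \
  --region us-central1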

Can I preserve IP DSCP values over Dedicated Interconnect?

Dedicated Interconnect preserves original IP DSCP values end to end.

If you set the IP DSCP value in the traffic you send to your VPC network over Dedicated Interconnect, this value is preserved and arrives unchanged at your instances.

If you set the IP DSCP value in the outgoing traffic generated from an instance that traverses Dedicated Interconnect, the value is preserved at the destination end.

If you are using Partner Interconnect, check with your partner to see if IP DSCP values are preserved over your Partner connection.

Can I reach Google APIs through Cloud Interconnect from on premises? Which services or APIs are available?

There are two ways to reach Google APIs.

With Option 1, you can enable Private Google Access for one or more subnets in the VPC network and deploy one or more reverse-proxy instances in those subnets. These reverse proxies are configured with only private VPC IP addresses, and are thus reachable only through the Cloud Interconnect link from on premises. This solution grants access to most Google Cloud APIs, developer APIs, and GCP services.
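
A minimal sketch of the first step, enabling Private Google Access on a subnet; the subnet name and region are assumptions:

# Allow private-IP-only instances in this subnet to reach Google APIs
gcloud compute networks subnets update my-subnet \
  --region us-central1 \
  --enable-private-ip-google-access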

See Configuring Private Google Access for more details, including a list of GCP services Private Google Access supports.

With Option 2, you can leverage Private Google Access for on-premises hosts. In this case, requests from on-premises hosts must be sent to restricted.googleapis.com, which always resolves to the IP range 199.36.153.4/30, also known as the restricted VIP range.

Add a custom route on Cloud Router to advertise the restricted VIP range. This ensures that traffic to the restricted VIP (as a destination) is routed from on premises to the API endpoints over Cloud Interconnect. Only the Google APIs and services that support the restricted VIP are reachable with this solution.
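
A hedged sketch of that advertisement, assuming a Cloud Router named my-router; the all_subnets group is re-added so the subnet routes continue to be advertised alongside the restricted VIP range:

# Advertise the restricted VIP range toward on premises
gcloud compute routers update my-router \
  --region us-central1 \
  --advertisement-mode custom \
  --set-advertisement-groups all_subnets \
  --set-advertisement-ranges 199.36.153.4/30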

For the latest information on configuration details and supported services, see Configuring Private Google Access for On-premises Hosts.

Can I use Cloud Interconnect as a private channel to access all G Suite services through a browser?

As of December 2018, it is not possible to reach G Suite applications through Cloud Interconnect.

Why do my BGP sessions flap continuously after a certain interval?

Check for an incorrect subnet mask on your on-premises BGP IP range. For example, instead of configuring 169.254.10.0/29, you might have configured 169.254.10.0/30.
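
To verify the expected mask, you can inspect the peering addresses assigned to the attachment; the attachment name and region below are placeholders, and the prefix length to match appears in the cloudRouterIpAddress and customerRouterIpAddress fields:

# Show the peering addresses, including their prefix length
gcloud compute interconnects attachments describe my-attachment \
  --region us-central1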

Cloud Interconnect architecture

This section covers common questions that arise when designing or using a Cloud Interconnect architecture.

Can I use Cloud Interconnect to connect to the public internet?

As of December 2018, internet routes are not advertised over Cloud Interconnect.

How can I connect to GCP if I'm located in a POP location not listed on the Cloud Interconnect Colocation Facilities page?

You have two options, after which you can go through the normal ordering and provisioning process for Dedicated Interconnect:

  • You can order leased lines from a carrier to connect from your point-of-presence (POP) location to one of Google's Cloud Interconnect colocation facilities. Usually it's best to contact your existing colocation facility provider for a list of 'on-net' providers. An on-net provider already has infrastructure in the building where you are located, which makes this option cheaper and faster than using a provider that must build out infrastructure to reach your existing POP location.
  • Another option is for you to use Partner Interconnect with a carrier partner who can provide a last-mile circuit to meet you. Colocation providers cannot usually provide this type of service, as they have fixed locations where you must already be present.

If I use Partner Interconnect, will I see the interconnect in the project where I create the interconnect attachment (VLAN)?

When you use the Partner Interconnect service, the interconnect object is created in the partner's project and is not visible in your project. The interconnect attachment (VLAN) is still visible inside your project, as in the Dedicated Interconnect case.

How do I create a redundant architecture that leverages Cloud Interconnect?

Depending on the desired SLA, there are specific architectures that must be implemented for both Dedicated Interconnect and Partner Interconnect.

Topologies for production-ready architectures with a 99.99% SLA and for non-critical applications with a 99.9% SLA are available at https://cloud.google.com/interconnect/docs/tutorials.

These SLA levels refer to the availability of the Cloud Interconnect, which is the availability of the routed connection between the on-premises location and the VPC network. For example, if you create a service on Compute Engine instances that's reachable through Cloud Interconnect, the service availability depends on the combined availability of both the Cloud Interconnect service and the Compute Engine service.

  • For Dedicated Interconnect, a single interconnect (LACP bundle) has no uptime SLA.
  • For Partner Interconnect, a single interconnect attachment (VLAN) has no uptime SLA.

Issues with single interconnect or bundle failures are treated at a support case priority no higher than P3 (Medium Impact: Service Use Partially Impaired), so you cannot expect a quick resolution or further analysis of the root cause.

Due to planned or unplanned maintenance, single links or bundles might be drained even for extended periods of time, such as hours or days.

Can I forward traffic over Cloud Interconnect between my on-premises legacy application and my Internal Load Balancer backends?

In this scenario, you've deployed an application that consists of two tiers: an on-premises tier that has not yet been migrated to GCP (legacy tier) and a cloud tier running on VPC instances that are also backends of a GCP Internal Load Balancer (ILB).

You can use Cloud Interconnect to forward the traffic between these two application tiers as long as you implement the necessary routes between Cloud Router and your on-premises router. The Cloud Router you use for the Cloud Interconnect handling this application's traffic must reside in the same region as the subnet containing the ILB backends. This is because internal load balancing is regional: if global routing for the VPC delivers the traffic through a tunnel or attachment outside the region where the ILB backends are located, access to the ILB is lost. See Deploying Internal Load Balancing with clients across VPN or Interconnect for more information.

If the on-premises traffic enters the VPC network from a different region, you can either deploy an ILB with the respective backends in the other region, or route the traffic to a reverse proxy from which the ILB VIP can be reached.

Can I move one or more instances of Cloud Interconnect between GCP projects or organizations?

If you want to move a project to a new GCP organization, you can open a support case and Cloud Support can facilitate the move.

Dedicated Interconnect and interconnect attachments (VLANs) are not affected by changes of organization as long as the project stays the same.

For project changes, if you are performing a Cloud Interconnect activation and you have an LOA but have not yet completed the activation, cancel the current activation and create a new one in the correct project. Google issues a new LOA, which you can then give to your cross-connect provider.

However, an active Cloud Interconnect can't be moved between projects because it is a child object of the project, and there is no way to automatically migrate objects between projects. Instead, where possible, request a new Cloud Interconnect in the target project.

How can I use the same Cloud Interconnect to connect multiple VPC networks in multiple projects inside the same GCP Organization?

Specifying a VLAN attachment that exists in a different project than the Cloud Interconnect it's attached to is supported for both Dedicated Interconnect and Partner Interconnect.

For Partner Interconnect

If you have multiple interconnect attachments (VLANs), including those in different projects, you can pair them with a Partner Interconnect connection from the same service provider, or with Partner Interconnect connections from different service providers.

For Dedicated Interconnect

If you have many projects, you can give each project its own interconnect attachment (VLAN) and its own Cloud Router while configuring all of the attachments to use the same physical Dedicated Interconnect in a specified project.
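
A hedged sketch, assuming the Dedicated Interconnect lives in a project named net-project while the attachment is created in the current project; all names are placeholders, and the --interconnect flag is given the interconnect's full resource path:

# Attach to a Dedicated Interconnect owned by another project
gcloud compute interconnects attachments dedicated create my-attachment \
  --router my-router \
  --interconnect projects/net-project/global/interconnects/my-interconnect \
  --region us-central1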

The interconnect attachment (VLAN), in addition to being a VLAN with an 802.1q ID, is a child object of a Cloud Interconnect that exists in a project.

In this model, each VPC network has its own routing configuration. If you want to centralize routing policies, then you can review the Shared VPC model and Shared VPC considerations and terminate the interconnect attachment (VLAN) in the VPC network of the Shared VPC host project. Your host project has a quota for the maximum number of interconnect attachments (VLANs) per Cloud Interconnect. For details, see the Cloud Interconnect quotas page.

Can I use a single Cloud Interconnect to connect multiple on-premises sites to my VPC network?

Yes. For example, if the multiple sites are part of an MPLS VPN network, either self-managed or managed by a carrier, you can logically add the VPC network as an additional site by using an approach similar to Inter-AS MPLS VPN Option A (see RFC 4364, Section 10).

This solution is described in the answer for making a VPC network appear in a partner's MPLS VPN service. By leveraging the BGP capabilities of Cloud Router, it is possible to inject VPC routes inside an existing IP core fabric using techniques and architectures similar to the ones used to import internet routes.

Can I physically "patch" together Cloud Interconnect and an interconnect from another cloud provider?

If you are already using another cloud provider that offers a service functionally equivalent to Cloud Interconnect, there is no agreed-upon configuration between cloud providers to physically "patch" together two interconnects, one provided by GCP and one by the other cloud provider. However, you can route between the private address space of the VPC network and the network of a different cloud provider.

If the service handoff point for the other cloud provider is located in the same location as Cloud Interconnect, you can provision your own router in that location to terminate the two interconnect services. The router will then route between the VPC network and the other cloud provider's network. This configuration allows you to route directly from the two cloud networks into your on-premises network with minimal delay.

Some Partner Interconnect carriers are able to offer this as a managed service, based on a virtual router. If GCP and the other cloud provider terminate interconnect services in different locations, then you must provide a circuit that connects the two locations.

How can I connect AWS and GCP without placing equipment in a colo facility near the Google edge?

Megaport offers their own cloud router solution for GCP customers who don't want to place hardware near the Google edge. See the configuration instructions covering how to set up this product with GCP.

Interconnect attachments (VLANs)

This section covers questions about interconnect attachments (VLANs).

How can I choose the VLAN ID that is used for an interconnect attachment (VLAN)?

For an interconnect attachment created with Partner Interconnect, the partner either chooses the VLAN ID during the attachment creation process or allows you to choose it. Check with your partner to see if they allow you to choose the VLAN ID for interconnect attachments.

For an interconnect attachment created with Dedicated Interconnect, you can use the gcloud compute interconnects attachments dedicated create command with the --vlan flag, or you can follow the Google Cloud Platform Console instructions.

The following example shows how to use the gcloud command to create an attachment with VLAN ID 5:

gcloud compute interconnects attachments dedicated create my-attachment \
  --router my-router \
  --interconnect my-interconnect \
  --vlan 5 \
  --region us-central1

For full instructions, see the Dedicated Interconnect and Partner Interconnect how-to guides.

Can I use the same Cloud Router with more than one interconnect attachment?

Yes, this is a supported configuration.
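
As a sketch, each additional attachment gets its own interface on the router; the router, interface, and attachment names below are placeholders:

# Bind a second attachment to the same Cloud Router
gcloud compute routers add-interface my-router \
  --interface-name if-attachment-2 \
  --interconnect-attachment my-attachment-2 \
  --region us-central1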

MPLS

This section covers questions about Cloud Interconnect and MPLS.

Can I use Cloud Interconnect to terminate an MPLS LSP inside my VPC network?

As of December 2018, VPC does not offer a native capability to terminate MPLS LSPs.

For a self-managed MPLS VPN service, can I make my VPC network appear as an additional site?

If you have an MPLS VPN service that you manage yourself, you can make your VPC network appear as an additional site consisting of a self-managed VPN.

This scenario assumes that you are not buying an MPLS VPN service from a provider. Instead, you have an MPLS VPN environment where you manage and configure the P and PE routers of the MPLS network yourself.

To make your VPC network appear as an additional site in your self-managed MPLS VPN service, do the following:

  1. Connect one of your MPLS VPN PE edge devices to your peering edge device for Cloud Interconnect – Dedicated using a model very similar to Inter-AS MPLS VPN Option A (see RFC 4364, Section 10). In other words, you can terminate the required MPLS VPN, for example VRF_A, on your PE edge device, and then use VLAN-to-VRF mapping to "join" the GCP interconnect attachment (VLAN) into this VPN, essentially mapping the VLAN to VRF_A at the PE edge device.

  2. Create a standard IPv4 BGP session between the PE router and Cloud Router to ensure that routes are exchanged between them (see the sketch after this list). The routes sent by Cloud Router appear only in the VPN routing table (inside VRF_A) and not in the global routing table of the PE edge device.

    You can manage overlapping IP ranges by creating multiple separate VPNs, for example VRF_A and VRF_B, each with a BGP session to a Cloud Router in a specific VPC network (for example, VPC_A and VPC_B). This procedure does not require any MPLS encapsulation between your PE edge device and the peering edge device for Dedicated Interconnect.
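
On the Cloud Router side, the BGP session from step 2 might be created as follows; the router name, interface name, peer IP address, and ASN are assumptions, and the peer address must match the customer router address assigned to the attachment:

# BGP session toward the PE router's VRF_A interface
gcloud compute routers add-bgp-peer my-router \
  --peer-name pe-vrf-a \
  --interface if-attachment-1 \
  --peer-ip-address 169.254.10.2 \
  --peer-asn 65010 \
  --region us-central1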

Can I make my VPC network appear as an additional site in my MPLS VPN from a carrier who is also a partner for Partner Interconnect?

If you buy an MPLS VPN service from a carrier that is also an official partner for Partner Interconnect, you can make your VPC network appear as an additional site in your MPLS VPN.

In this case, the provider manages and configures the P and PE routers of their MPLS network. Because Partner Interconnect uses the same connectivity model as Dedicated Interconnect, the carrier can leverage a model very similar to Inter-AS MPLS VPN Option A (see RFC 4364, Section 10).

Essentially, the carrier provides a Layer 3 Partner Interconnect service to you, and then "binds" your interconnect attachment (VLAN) with the correct MPLS VPN on the carrier's edge device. See the Interconnect Partner Overview for details. As this is a Layer 3 service model, the BGP session is established between your Cloud Router and your VRF inside the carrier edge device.
