Partner Interconnect provides connectivity between your on-premises network and your Virtual Private Cloud (VPC) network through a supported service provider. A Partner Interconnect connection is useful if your data center is in a physical location that can't reach a Dedicated Interconnect colocation facility, or your data needs don't warrant an entire 10-Gbps connection.
Before you use Partner Interconnect
Ensure that you meet the following requirements:
- Be familiar with Cloud Interconnect terminology.
- Work with a supported service provider to establish connectivity between their network and your on-premises network. Supported service providers offer Layer 2 connectivity, Layer 3 connectivity, or both. Work with your service provider to understand their offerings and requirements.
How does Partner Interconnect work?
Service providers have existing physical connections to Google's network that they make available for their customers to use. After you establish connectivity with a service provider, you can request a Partner Interconnect connection from your service provider. After the service provider provisions your connection, you can start passing traffic between your networks by using the service provider's network.
VLAN attachment MTU options
We recommend that you use the same MTU for all VLAN attachments that are connected to the same VPC network, and that you set the MTU of the VPC networks to the same value. For more information about Cloud Interconnect MTUs, see Cloud Interconnect MTU.
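A quick way to check this recommendation is to compare the MTU of each VLAN attachment with the MTU of the VPC network. The following sketch assumes the google-cloud-compute Python client library and placeholder project and network names; you can just as easily inspect the same values in the Google Cloud console or with gcloud.

```python
from google.cloud import compute_v1

PROJECT = "my-project"   # placeholder project ID
NETWORK = "my-vpc"       # placeholder VPC network name

networks = compute_v1.NetworksClient()
attachments = compute_v1.InterconnectAttachmentsClient()

# MTU configured on the VPC network.
vpc_mtu = networks.get(project=PROJECT, network=NETWORK).mtu

# Walk every region and flag attachments whose MTU differs from the VPC MTU.
# (Filtering to only the attachments whose Cloud Router belongs to NETWORK is
# omitted here for brevity.)
for _, scoped in attachments.aggregated_list(project=PROJECT):
    for attachment in scoped.interconnect_attachments:
        if attachment.mtu and attachment.mtu != vpc_mtu:
            print(f"{attachment.name}: MTU {attachment.mtu} != VPC MTU {vpc_mtu}")
```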
Provisioning
To provision a Partner Interconnect connection with a service provider, you start by selecting a partner and deciding whether you want MACsec for Cloud Interconnect. You then connect your on-premises network to a supported service provider; work with the service provider to establish connectivity.
Next, you create a VLAN attachment for a Partner Interconnect connection in your Google Cloud project, which generates a unique pairing key that you use to request a connection from your service provider. You also need to provide other information such as the connection location, IP stack type, and capacity.
After the service provider configures your VLAN attachment, you activate your connection to start using it. Depending on your connection, either you or your service provider then establishes a Border Gateway Protocol (BGP) session.
For detailed steps to provision a Partner Interconnect connection, see the Provisioning overview.
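To make the attachment-creation step concrete, the following sketch creates a Partner VLAN attachment and reads back its pairing key. It assumes the google-cloud-compute Python client library; the project, region, router, and attachment names are placeholders, and the console or gcloud work equally well for this step.

```python
from google.cloud import compute_v1

PROJECT, REGION = "my-project", "us-central1"   # placeholders

client = compute_v1.InterconnectAttachmentsClient()

attachment = compute_v1.InterconnectAttachment(
    name="my-partner-attachment",
    type_="PARTNER",    # Partner Interconnect VLAN attachment
    router=f"projects/{PROJECT}/regions/{REGION}/routers/my-cloud-router",
    edge_availability_domain="AVAILABILITY_DOMAIN_1",
    admin_enabled=False,  # activate later, after verifying the provider
)

# Create the attachment and wait for the operation to finish.
client.insert(
    project=PROJECT, region=REGION, interconnect_attachment_resource=attachment
).result()

# The generated pairing key is what you give to your service provider.
created = client.get(
    project=PROJECT, region=REGION, interconnect_attachment="my-partner-attachment"
)
print("Pairing key:", created.pairing_key)
```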
Layer 2 versus Layer 3 connectivity
For Layer 2 connections, you must configure and establish a BGP session between your Cloud Routers and on-premises routers for each VLAN attachment that you create. The BGP configuration information is provided by the VLAN attachment after your service provider has configured it.
For Layer 3 connections, your service provider establishes a BGP session between your Cloud Routers and their edge routers for each VLAN attachment. You don't need to configure BGP on your on-premises router. Google and your service provider automatically set the correct BGP configurations.
Because the BGP configuration for Layer 3 connections is fully automated, you can pre-activate your connections (VLAN attachments). When you enable pre-activation, the VLAN attachments are active as soon as the service provider configures them.
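For the Layer 2 case, configuring the Google side of the session means adding a BGP peer to the Cloud Router that the VLAN attachment uses. The sketch below assumes the google-cloud-compute Python client library; the router name, interface name, peer address, and ASN are placeholders that in practice come from the configured VLAN attachment and your own network.

```python
from google.cloud import compute_v1

PROJECT, REGION, ROUTER = "my-project", "us-central1", "my-cloud-router"  # placeholders

routers = compute_v1.RoutersClient()
router = routers.get(project=PROJECT, region=REGION, router=ROUTER)

# BGP peer for one VLAN attachment; the values come from the attachment details.
peer = compute_v1.RouterBgpPeer(
    name="attachment-1-peer",
    interface_name="attachment-1-interface",  # Cloud Router interface linked to the attachment
    peer_ip_address="169.254.10.2",           # on-premises side of the BGP session (placeholder)
    peer_asn=65001,                           # your on-premises ASN (placeholder)
)

# Patch replaces the whole peer list, so include the existing peers too.
routers.patch(
    project=PROJECT,
    region=REGION,
    router=ROUTER,
    router_resource=compute_v1.Router(bgp_peers=list(router.bgp_peers) + [peer]),
).result()
```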
Pre-activation
After you create a VLAN attachment and your service provider configures it, the attachment can't pass traffic until you activate it. Activation lets you check that you're connecting to an expected service provider.
If you don't need to verify the connection and are using a Layer 3 connection, you can choose to pre-activate the attachment. If you pre-activate the attachment, it can immediately pass traffic after your service provider has configured it.
If you want to verify who you're connecting to, don't pre-activate your attachments.
Consider pre-activation if you're using Layer 3 and want your connection to activate without additional approval. Layer 3 service providers automatically configure BGP sessions with your Cloud Routers so that BGP starts immediately. You don't need to take any further action in Google Cloud after your service provider configures your attachments.
For Layer 2 connections, there's no benefit to pre-activating VLAN attachments.
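Activation itself is a small change to the attachment. The following sketch, which assumes the google-cloud-compute Python client library and placeholder names, activates an existing attachment after you've verified the provider; pre-activation would instead set admin_enabled=True when you create the attachment.

```python
from google.cloud import compute_v1

PROJECT, REGION = "my-project", "us-central1"   # placeholders
ATTACHMENT = "my-partner-attachment"            # placeholder attachment name

client = compute_v1.InterconnectAttachmentsClient()

# Activate the attachment after the service provider has configured it.
client.patch(
    project=PROJECT,
    region=REGION,
    interconnect_attachment=ATTACHMENT,
    interconnect_attachment_resource=compute_v1.InterconnectAttachment(
        admin_enabled=True,
    ),
).result()
```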
Basic topology
The following topology diagrams show example Partner Interconnect connections for Layer 2 and Layer 3.
For Layer 2 connections, traffic passes through the service provider's network to reach the VPC network or on-premises network. BGP is configured between the on-premises router and a Cloud Router in the VPC network, as shown in the following diagram.
For Layer 3 connections, traffic is passed to the service provider's network. Their network then routes the traffic to the correct destination, either to the on-premises network or to the VPC network. Connectivity between the on-premises network and the service provider network depends on the service provider. For example, the service provider might request that you establish a BGP session with them or configure a static default route to their network.
Redundancy and SLA
Depending on your availability needs, you can configure Partner Interconnect to support mission-critical services or applications that can tolerate some downtime. To achieve a specific level of reliability, Google has two prescriptive configurations:
- Establish 99.99% availability for Partner Interconnect (recommended)
- Establish 99.9% availability for Partner Interconnect
We recommend that you use the 99.99% availability configuration for production-level applications with a low tolerance for downtime. If your applications aren't mission-critical and can tolerate some downtime, you can use the 99.9% availability configuration.
For the 99.99% and 99.9% availability configurations, Google offers an SLA that applies only to the connectivity between your VPC network and the service provider's network. The SLA doesn't include the connectivity between your network and the service provider's network. If your service provider does offer an SLA, you can get an end-to-end SLA based on the Google-defined topologies. For more information, ask your service provider.
99.99% availability topology
For the highest level of availability, we recommend the 99.99% availability configuration. Clients in the on-premises network can reach the IP addresses of virtual machine (VM) instances in the selected region through at least one of the redundant paths. If one path is unavailable, the other paths can continue to serve traffic.
The 99.99% availability configuration requires at least four VLAN attachments across two metros; in each metro, place one attachment in each edge availability domain (metro availability zone). You also need two Cloud Routers, one in each Google Cloud region of the VPC network, and you associate one Cloud Router with each pair of VLAN attachments. You must also enable global routing for the VPC network.
For Layer 2 connections, four virtual circuits are required, split between two metros. Layer 2 also requires that you add four BGP sessions to the on-premises router, two for each Cloud Router, as shown in the following diagram.
For Layer 3 connections, four connections between Google and your service provider are required. You create four VLAN attachments, and then your service provider establishes two BGP sessions with each of your Cloud Routers. The VLAN attachments must be split between two metros, as shown in the following diagram.
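An easy part of the 99.99% configuration to overlook is global routing on the VPC network, which lets the Cloud Routers in both regions exchange routes for the whole network. A minimal sketch, assuming the google-cloud-compute Python client library and a placeholder network name:

```python
from google.cloud import compute_v1

PROJECT, NETWORK = "my-project", "my-vpc"   # placeholders

networks = compute_v1.NetworksClient()

# Switch the VPC network's dynamic routing mode from REGIONAL to GLOBAL.
networks.patch(
    project=PROJECT,
    network=NETWORK,
    network_resource=compute_v1.Network(
        routing_config=compute_v1.NetworkRoutingConfig(routing_mode="GLOBAL"),
    ),
).result()
```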
Multiple service providers
To build a highly available topology, you can use multiple service providers. You must build redundant connections for each service provider in each metro.
For example, you might provision two primary connections by using a local service provider that's close to your data center. For the backup connection, you might use a long-haul service provider to build two connections in a different metro. Ensure that this topology meets all your requirements for availability.
Balancing egress traffic with redundant VLAN attachments
When you have a redundant topology similar to the 99.99% configuration, there are multiple paths for traffic to traverse from the VPC network to your on-premises network.
Google Cloud uses ECMP to balance the egress traffic across connections. To use ECMP, the Cloud Routers used by the VLAN attachments must receive the same announcement with equal cost (the same CIDR range and the same MED values).
Google Cloud does the following to balance traffic across connections:
- If all VLAN attachments operate on Dataplane v1, traffic is balanced approximately, according to the configured capacity. Traffic balancing might not work optimally when VLAN attachment capacities are different.
- If all VLAN attachments operate on Dataplane v2, Google Cloud balances the traffic between the VLAN attachments based on the configured capacity of each VLAN attachment.
- If VLAN attachments operate on a mix of Dataplane v1 and v2, egress traffic might be misbalanced between the VLAN attachments. Misbalanced traffic is most noticeable for attachments with under 1 Gbps of configured capacity.
Google is migrating all existing VLAN attachments to use Dataplane v2 without any action required on your part. If you need to migrate to Dataplane v2 to resolve misbalanced VLAN attachments, contact Google Cloud Support.
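As a rough model of the Dataplane v2 behavior, when ECMP applies (same prefix and same MED on every path), egress is split in proportion to each attachment's configured capacity. The snippet below is purely illustrative arithmetic with made-up attachment names and capacities, not an API call.

```python
# Expected egress share per attachment under capacity-proportional balancing,
# assuming ECMP applies to all four attachments (illustrative values only).
attachments_gbps = {
    "attachment-a-domain1": 5.0,
    "attachment-b-domain2": 5.0,
    "attachment-c-domain1": 2.0,
    "attachment-d-domain2": 2.0,
}

total = sum(attachments_gbps.values())
for name, capacity in attachments_gbps.items():
    print(f"{name}: ~{capacity / total:.0%} of egress traffic")
```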
Create redundant connections with sufficient capacity
The Best practices document describes best practices for creating redundant connections that have sufficient capacity in a failover scenario. Following these practices helps ensure that events such as planned maintenance or hardware failures do not cause loss of connectivity.
IPv6 support
Partner Interconnect supports IPv6 traffic for both Layer 2 and Layer 3 connectivity. You have the option to create an IPv4 and IPv6 (dual-stack) VLAN attachment.
Dual-stack Partner Interconnect VLAN attachments must use separate IPv4 and IPv6 BGP sessions. Multiprotocol BGP (MP-BGP), which exchanges IPv4 and IPv6 routes over a single BGP session, isn't supported.
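When you manage BGP yourself (Layer 2), separate sessions mean two BGP peers on the Cloud Router for one dual-stack attachment: one that exchanges IPv4 routes and one that exchanges IPv6 routes. The fragment below only builds the two peer objects; you would append both to the router's bgp_peers list as in the earlier Layer 2 sketch. The client library, names, ASN, and peer addresses are placeholders and assumptions.

```python
from google.cloud import compute_v1

# Two distinct BGP sessions on the same attachment interface (placeholder values).
ipv4_peer = compute_v1.RouterBgpPeer(
    name="attachment-1-v4-peer",
    interface_name="attachment-1-interface",
    peer_ip_address="169.254.10.2",        # IPv4 BGP peer address from the attachment
    peer_asn=65001,
)
ipv6_peer = compute_v1.RouterBgpPeer(
    name="attachment-1-v6-peer",
    interface_name="attachment-1-interface",
    peer_ip_address="2600:2d00:1:1::2",    # IPv6 BGP peer address from the attachment
    peer_asn=65001,
)
```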
To support IPv6 traffic in a Partner Interconnect connection, your IPv6-enabled networks must do the following:
- Include dual-stack subnets within IPv6-enabled Virtual Private Cloud networks.
- Assign internal IPv6 ranges to the subnets. For more information, see IPv6 subnet ranges.
- Configure IPv6 addresses for VMs and instance templates within the subnets.
For more information about configuring IPv6 within a subnet, see the following:
- To create a custom mode VPC network with internal IPv6 addresses, see Create a custom mode VPC network with a dual-stack subnet.
- To create a subnet with IPv6 enabled, see Add a dual-stack subnet.
- To enable IPv6 in an existing subnet, see Change a subnet's stack type to dual stack.
- To create or enable VMs with IPv6, see Configure IPv6 for instances and instance templates.
- For information about using internal IPv6 ranges in your VPC network and subnets, see Internal IPv6 specifications.
After you configure IPv6 in your VPC network, subnets, and VMs, you can configure dual-stack VLAN attachments.
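On the attachment itself, the dual-stack choice is a single field. The sketch below, again assuming the google-cloud-compute Python client library and placeholder names, differs from the earlier provisioning sketch only in the stack_type value.

```python
from google.cloud import compute_v1

PROJECT, REGION = "my-project", "us-central1"   # placeholders

client = compute_v1.InterconnectAttachmentsClient()

attachment = compute_v1.InterconnectAttachment(
    name="my-dual-stack-attachment",
    type_="PARTNER",
    router=f"projects/{PROJECT}/regions/{REGION}/routers/my-cloud-router",
    edge_availability_domain="AVAILABILITY_DOMAIN_1",
    stack_type="IPV4_IPV6",   # dual-stack (IPv4 and IPv6) VLAN attachment
)

client.insert(
    project=PROJECT, region=REGION, interconnect_attachment_resource=attachment
).result()
```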
Restrict Partner Interconnect usage
By default, any VPC network can use Cloud Interconnect. To control which VPC networks can use Cloud Interconnect, you can set an organization policy. For more information, see Restrict Cloud Interconnect usage.
Limitations
You can't send and learn MED values over a Layer 3 Partner Interconnect connection.
If you are using a Partner Interconnect connection where a Layer 3 service provider handles BGP for you, Cloud Router can't learn MED values from your on-premises router or send MED values to that router. This is because MED values can't pass through autonomous systems. Over this type of connection, you can't set route priorities for routes advertised by Cloud Router to your on-premises router. In addition, you can't set route priorities for routes advertised by your on-premises router to your VPC network.
What's next?
- To find answers to common questions about Cloud Interconnect architecture and features, see the Cloud Interconnect FAQ.
- To find out more about Cloud Interconnect, see the Cloud Interconnect overview.
- To learn about best practices when planning for and configuring Cloud Interconnect, see Best practices.
- To find Google Cloud resource names, see the Cloud Interconnect APIs.