Troubleshooting

This troubleshooting guide can help you solve common issues that you might encounter when using Cloud Interconnect. Also see the following resources:

For answers to common questions about Cloud Interconnect architecture and features, see the Cloud Interconnect FAQ.

For definitions of terms used on this page, see Cloud Interconnect key terms.

To find logging and monitoring information and view Cloud Interconnect metrics, see Monitoring connections.

General troubleshooting

Check for Cloud Interconnect service disruptions

  • You can check known disruptions on the Google Cloud Status Dashboard. You can also subscribe to the Cloud Incidents JSON feed or RSS feed for push updates.

  • You are notified of maintenance events that impact your Dedicated Interconnect connections. For details, see Infrastructure maintenance events.

  • You are notified of maintenance events that impact your Partner Interconnect VLAN attachments. Notifications for Partner Interconnect are sent in the same way as notifications for Dedicated Interconnect connections, with some minor differences. For details, see Infrastructure maintenance events.

Can't connect to resources in other regions

By default, Virtual Private Cloud (VPC) networks are regional, meaning that Cloud Router advertises only the subnets in its region. To connect to other regions, set the dynamic routing mode of your VPC network to global so that Cloud Router can advertise all subnets.

For more information, see Dynamic routing mode in the Cloud Router documentation.
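
For example, a minimal sketch of this change, assuming your network is named NETWORK_NAME (a placeholder), is the following:

  # Switch the VPC network's dynamic routing mode from regional to global
  # so that Cloud Router advertises subnets from all regions.
  gcloud compute networks update NETWORK_NAME \
      --bgp-routing-mode=global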

Can't reach VMs in a peered VPC network

In this scenario, you have set up an Interconnect connection between your on-premises network and a VPC network, network A. You have also set up VPC Network Peering between network A and another VPC network, network B. However, you are unable to reach VMs in network B from your on-premises network.

This configuration is unsupported as described in the restrictions of the VPC Network Peering overview.

However, you can use custom IP range advertisements from Cloud Routers in your VPC network to share routes to destinations in the peer network. In addition, you must configure your VPC Network Peering connections to import and export custom routes.

For more information about advertising routes between on-premises networks and peered VPC networks, see Advertising custom IP ranges and the VPC Network Peering documentation.
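
As a sketch, assuming peerings named PEERING_A_TO_B and PEERING_B_TO_A between networks NETWORK_A and NETWORK_B (all placeholder names), you can enable the custom route exchange as follows:

  # On network A's side of the peering: export custom routes, including the
  # dynamic routes learned from the on-premises network, so that network B
  # has a return path.
  gcloud compute networks peerings update PEERING_A_TO_B \
      --network=NETWORK_A \
      --export-custom-routes

  # On network B's side of the peering: import those routes.
  gcloud compute networks peerings update PEERING_B_TO_A \
      --network=NETWORK_B \
      --import-custom-routes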

Missing subnets in a connection

To advertise all of the available subnets, specify the missing routes with custom route advertisements and advertise all subnet routes between regions with global dynamic routing. To do this, follow these steps:

  1. Specify custom route advertisements both on a Cloud Router and on the BGP session. To enter the missing routes, set the following parameters:

    --set-advertisement-groups=ADVERTISED_GROUPS
    --set-advertisement-ranges=ADVERTISED_IP_RANGES
    

    Replace the following:

    • ADVERTISED_GROUPS: a Google-defined group that Cloud Router dynamically advertises; it can have a value of all_subnets, which mimics the default behavior of a Cloud Router
    • ADVERTISED_IP_RANGES: the contents of the new array of IP address ranges; it can have one or more values of your choice

    For more details and examples, see Advertising custom IP ranges.

  2. Enable global dynamic routing mode. A combined gcloud sketch for both steps follows this list.
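
The following sketch assumes a Cloud Router named ROUTER_NAME in REGION with a BGP session named PEER_NAME on network NETWORK_NAME (all placeholder names):

  # Step 1: advertise custom ranges on the Cloud Router and, if the BGP
  # session overrides the router-level settings, on the BGP session as well.
  gcloud compute routers update ROUTER_NAME \
      --region=REGION \
      --advertisement-mode=custom \
      --set-advertisement-groups=all_subnets \
      --set-advertisement-ranges=ADVERTISED_IP_RANGES

  gcloud compute routers update-bgp-peer ROUTER_NAME \
      --region=REGION \
      --peer-name=PEER_NAME \
      --advertisement-mode=custom \
      --set-advertisement-groups=all_subnets \
      --set-advertisement-ranges=ADVERTISED_IP_RANGES

  # Step 2: enable global dynamic routing so that subnets in all regions
  # are advertised over the BGP session.
  gcloud compute networks update NETWORK_NAME \
      --bgp-routing-mode=global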

Can't ping Cloud Router

If you can't ping Cloud Router from your on-premises router, find your product in the following list and perform the troubleshooting steps for that product. Note that these steps apply to pings from your on-premises router; VMs in your VPC network can't reach the 169.254.0.0/16 link-local range used by Cloud Router.

  • Partner Interconnect with an L3 partner: the Cloud Router might never be pingable, because some partners filter traffic to the Cloud Router IP range (169.254.0.0/16). For L3 partners, the partner automatically configures BGP. If BGP doesn't come up, contact your partner.

  • Dedicated Interconnect: verify that your local device has learned the correct MAC address for the Google Cloud side of the connection. For more information, see Troubleshooting ARP.

  • Dedicated Interconnect and Partner Interconnect with an L2 partner: verify that your Cloud Router has an interface and a BGP peer, as shown in the sketch after this list. Cloud Router is not pingable unless the interface and BGP peer are fully configured, including the remote ASN.
    • For Dedicated Interconnect, see BGP session not working.
    • For L2 Partner Interconnect, Google has already added the interface and BGP peer for Cloud Router, but you must configure the remote ASN.
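
As a sketch, assuming a Cloud Router named ROUTER_NAME in REGION (placeholder names), you can list the router's interfaces and BGP peers to confirm that they are fully configured, including the remote ASN:

  # Check that an interface is linked to the VLAN attachment and that the
  # BGP peer has the expected peerIpAddress and peerAsn.
  gcloud compute routers describe ROUTER_NAME \
      --region=REGION \
      --format="yaml(interfaces,bgpPeers)"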

Troubleshooting ARP

For Dedicated Interconnect, to find the correct MAC address, run the following gcloud command:

  gcloud compute interconnects get-diagnostics INTERCONNECT_NAME

The googleSystemId field contains the MAC address that should be present in your device's ARP table for the IP addresses assigned to the Cloud Router.

  result:
    links:
    - circuitId: SAMPLE-0
      googleDemarc: sample-local-demarc-0
      lacpStatus:
        googleSystemId: ''
        neighborSystemId: ''
        state: DETACHED
      receivingOpticalPower:
        value: 0.0
      transmittingOpticalPower:
        value: 0.0
    macAddress: 00:00:00:00:00:00

If your device has not learned a MAC address, verify that the correct VLAN ID and IP address are configured on the subinterface.

For Partner Interconnect, if you see the wrong MAC address on your device, verify that you have not bridged the Layer 2 segments of two VLAN attachments. The Google Cloud side of the Interconnect connection is configured with ip proxy-arp, which replies to all ARP requests and can cause your on-premises router to learn incorrect ARP entries.

Can't create VLAN attachment

If you attempt to create a VLAN attachment for Dedicated Interconnect or Partner Interconnect that violates an organization policy, you see an error message, such as the following example from running gcloud compute interconnects attachments partner create:

ERROR: (gcloud.compute.interconnects.attachments.partner.create) Could not fetch resource:
- Constraint constraints/compute.restrictPartnerInterconnectUsage violated for projects/example-project. projects/example-project/global/networks/example-network is not allowed to use the Partner Interconnect.

For more information, see Restricting Cloud Interconnect usage and contact your organization administrator.
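
As a sketch, you can inspect the effective policy for the project named in the error message (example-project here is taken from the example above):

  # Show the effective value of the constraint that blocks the attachment.
  gcloud resource-manager org-policies describe \
      compute.restrictPartnerInterconnectUsage \
      --project=example-project --effective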

Sharing connections with other projects in my organization

Use Shared VPC to share a connection, such as a VLAN attachment or a Dedicated Interconnect connection in a host project.

For more information about setting up a Shared VPC network, see Provisioning Shared VPC.

For more information about configuring attachments in a Shared VPC network, see Enabling multiple VPC networks to access the same VLAN attachment.
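
A minimal sketch of the host project setup, using placeholder project IDs, looks like the following; the VLAN attachment and its VPC network stay in the host project, and service projects attach to it:

  # Enable Shared VPC on the host project that owns the VLAN attachment.
  gcloud compute shared-vpc enable HOST_PROJECT_ID

  # Attach a service project so that its VMs can use the shared network.
  gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID \
      --host-project=HOST_PROJECT_ID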

Dedicated Interconnect

Google can't ping you during the Interconnect connection provisioning process

Check that you're using the correct IP and LACP configuration. During the testing process, Google sends you different test IP configurations for your on-premises router, depending on whether you order a multiple-link bundle or a single-link bundle. Don't configure VLAN attachments for either of these tests.

  • The first set of IP addresses that Google sends is for testing multi-circuit connectivity on each individual link. Configure the test IP addresses on every physical link (without LACP configured), as instructed in the emails that Google sent you. Google must ping all the IP addresses successfully before this first test passes.
  • For the second test, remove all the IP addresses from the first test. Configure the port channel with LACP even if your connection has only one link. Google pings the port channel address. Don't modify the LACP configuration of your port channel after the connection has passed the final test. However, you must remove the test IP address from the port channel interface.
  • Google sends the final production IP address for testing single-circuit connectivity. Configure the IP address on the bundle interface (with LACP configured, either active or passive mode), as instructed in the email that Google sent you. Google must ping the bundle interface IP address successfully before this test passes. Configure the port channel with LACP even if your connection has only one link.

Can't ping Cloud Router

  • Check that you can ping Google's port channel IP address. The IP address is the googleIpAddress value when you view the Interconnect connection details, as shown in the sketch after this list.
  • Check that you have the correct VLAN on your on-premises router's subinterface. The VLAN information should match the information provided by the VLAN attachment.
  • Check that you have the right IP address on your on-premises router's subinterface. When you create a VLAN attachment, it allocates a pair of link-local IP addresses. One is for an interface on a Cloud Router (cloudRouterIpAddress), and the other is for a subinterface on your on-premises router's port channel, not the port channel itself (customerRouterIpAddress).
  • If you're testing the performance of your VLAN attachments, don't ping Cloud Router. Instead, create and then use a Compute Engine virtual machine (VM) instance in your VPC network. For more information, see Performance testing.
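
The following sketch shows where to read those values, using placeholder names for the connection and the attachment:

  # googleIpAddress is the port channel address on the Google side.
  gcloud compute interconnects describe INTERCONNECT_NAME \
      --format="yaml(googleIpAddress,operationalStatus)"

  # The attachment lists the VLAN ID and the link-local pair:
  # cloudRouterIpAddress (Google side) and customerRouterIpAddress
  # (the subinterface on your on-premises router).
  gcloud compute interconnects attachments describe ATTACHMENT_NAME \
      --region=REGION \
      --format="yaml(vlanTag8021q,cloudRouterIpAddress,customerRouterIpAddress)"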

BGP session not working

  • Enable multi-hop BGP on your on-premises router with at least two hops.
  • Check that you have the correct neighbor IP address configured on your on-premises router. Use the BGP peer IP address (cloudRouterIpAddress) that the VLAN attachment allocated.
  • Check that the local ASN configured on your on-premises router matches the peer ASN on the Cloud Router, and that the local ASN configured on the Cloud Router matches the peer ASN on your on-premises router. You can read the Cloud Router's values as shown in the sketch after this list.
  • Each attachment is allocated a unique /29 CIDR from 169.254.0.0/16 within your VPC network. One IP address in the /29 CIDR is allocated for the Cloud Router and the other for your on-premises router.

    Check that the correct IP addresses are allocated for your on-premises router interface and its BGP neighbor. A common mistake is to configure a /30 on your on-premises router interface instead of a /29. Google Cloud reserves all other addresses in the /29 CIDR.

    Make sure that you have not allocated any other IP addresses from this CIDR to the VLAN attachment interface on your router.
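
As a sketch, assuming a Cloud Router named ROUTER_NAME in REGION (placeholder names), you can read the configured ASNs and peer addresses, and then check the live BGP session state:

  # Show the Cloud Router's local ASN and its BGP peer configuration.
  gcloud compute routers describe ROUTER_NAME \
      --region=REGION \
      --format="yaml(bgp.asn,bgpPeers)"

  # Show the current state of the BGP sessions (for example, Established).
  gcloud compute routers get-status ROUTER_NAME \
      --region=REGION \
      --format="yaml(result.bgpPeerStatus)"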

Can't reach VMs in your VPC network

  • Check that you can ping the port channel and VLAN attachment.
  • Check that your BGP session is active.
  • Check that your on-premises router is advertising and receiving routes.
  • Check that there are no overlaps between your on-premises route advertisements and Google Cloud network ranges.
  • Set the MTU to 1440 or 1500 on your on-premises router, matching the MTU configured on your VLAN attachment and VPC network. You can check the Google Cloud values as shown in the sketch after this list.
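
As a sketch, assuming placeholder names, you can compare the MTU configured on the VLAN attachment and on the VPC network; your on-premises router must use a matching value:

  # MTU configured on the VLAN attachment.
  gcloud compute interconnects attachments describe ATTACHMENT_NAME \
      --region=REGION \
      --format="value(mtu)"

  # MTU configured on the VPC network.
  gcloud compute networks describe NETWORK_NAME \
      --format="value(mtu)"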

Performance testing over your VLAN attachments

If you need to test the performance of your VLAN attachments, use a VM in your VPC network. Add the performance tools that you require on the VM. Don't run latency tests, such as ICMP ping or path MTU discovery, against the Cloud Router link-local IP address; doing so can give unpredictable results.
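
A minimal sketch, assuming placeholder names and iperf3 as the measurement tool, is to create a VM in the VPC network that the attachment reaches and run the test against the VM's internal IP address:

  # Create a test VM in the VPC network reached by the VLAN attachment.
  gcloud compute instances create perf-test-vm \
      --zone=ZONE \
      --network=NETWORK_NAME \
      --subnet=SUBNET_NAME

  # On the VM (Debian or Ubuntu image assumed), start a receiver:
  #   sudo apt-get install -y iperf3
  #   iperf3 -s
  # From an on-premises host, run the client against the VM's internal IP:
  #   iperf3 -c VM_INTERNAL_IP -t 30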

Getting diagnostics

To get current, detailed technical information about the Google Cloud side of a Dedicated Interconnect connection on demand, see Getting diagnostics.

Partner Interconnect

BGP session not working (Layer 2 connections)

  • Check that your on-premises router has been configured with a BGP session to your Cloud Routers. For more information, see Configuring on-premises routers for Partner Interconnect.
  • Enable multi-hop BGP on your on-premises router with at least two hops.
  • Check that you have the correct neighbor IP address configured on your on-premises router. Use the BGP peer IP address (cloudRouterIpAddress) that the VLAN attachment allocated.
  • Check that the local ASN configuration on your on-premises router matches the peer ASN on the Cloud Router (16550). In addition, check that the local ASN configuration on the Cloud Router matches the peer ASN on your on-premises router.

BGP session not working (Layer 3 connections)

  • Your Cloud Router must be configured with your service provider's ASN. Contact your service provider for assistance.

VLAN attachment down for a Partner Interconnect connection

The status for a VLAN attachment can show as down even if there are no issues with the Google Cloud configuration and the Partner Interconnect connection.

Make sure that you have configured eBGP multihop on your on-premises router with at least four hops. You can see a sample configuration in Configuring on-premises routers.

Pairing key issue in a Partner Interconnect connection

When you try to set up a Partner Interconnect connection, you might encounter an error message such as "Google - Provider status not available." To fix this issue, follow these steps:

  1. Ensure that the pairing key was generated by the customer-side VLAN attachment (PARTNER type). The key is a long random string that Google uses to identify the attachment. The destination Google Cloud region and edge availability domain are encoded in the pairing key in the following format:

    <random_string>/<region_name>/<domain>
    

    The domain field contains the string any if the VLAN attachment is not restricted to a particular domain or if you don't specify the edge availability domain. For more information about the pairing keys, see Provision on the partner portal.

  2. Ensure that the edge availability domain of the Partner Interconnect connection matches the domain specified by the pairing key. You can read both values from the attachment as shown in the sketch after this list.
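
As a sketch, assuming a customer-side attachment named ATTACHMENT_NAME in REGION (placeholder names), you can read the pairing key and the requested edge availability domain directly from the attachment:

  # pairingKey has the form <random_string>/<region_name>/<domain>;
  # edgeAvailabilityDomain shows the domain requested for the attachment.
  gcloud compute interconnects attachments describe ATTACHMENT_NAME \
      --region=REGION \
      --format="yaml(pairingKey,edgeAvailabilityDomain,state)"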

Can't ping Cloud Router (Layer 2 connections)

  • Check that you have the correct VLAN attachment on your on-premises router's subinterface. The VLAN attachment information should match the information provided by your service provider.
  • Check that you have the right IP address on your on-premises router's subinterface. After your service provider configures your VLAN attachment, the attachment allocates a pair of link-local IP addresses. One is for an interface on the associated Cloud Router (cloudRouterIpAddress), and the other is for a subinterface on your on-premises router's port channel, not the port channel itself (customerRouterIpAddress).
  • If you're testing the performance of your attachments, don't ping Cloud Router. Instead, create and then use a VM in your VPC network. For more information, see Performance testing.

Loss of optic power on Partner Interconnect connection's port

If there is a loss of optic power on a port, you might encounter one of the following issues:

  • Loss of Layer 3 connectivity (loss of a BGP session) or inability to access your Google Cloud VM instances.
  • Degraded performance of your link bundle. This issue happens if multiple 10GE ports are bundled together and only some of the links in a bundle are functioning.

Loss of optic power on a port means that the hardware is unable to detect a signal from the other end. This can be caused by one of the following issues:

  • A faulty transceiver
  • A faulty transport system
  • A physical fiber issue

To fix this issue, contact your Partner Interconnect provider or your circuit provider. They can perform the following steps:

  1. Check that their transceiver is functioning.
  2. Run a hard loop to the Meet-Me Room (MMR) to check whether the light levels on their device are as expected.
  3. Determine whether the issue is on their side or on the Google side. The best way to isolate the fault is to place a bidirectional loop at the demarc: the interfaces on each side transmit light to the demarc, where it is looped back to itself, and the faulty side is the side of the demarc that does not come up cleanly.
  4. Clean and reseat all the fiber on their side.

Can't send and learn MED values over an L3 Partner Interconnect connection

If you are using a Partner Interconnect connection where a Layer 3 service provider handles BGP for you, Cloud Router can't learn MED values from your on-premises router or send MED values to that router. This is because MED values can't pass through autonomous systems. Over this type of connection, you can't set route priorities for routes advertised by Cloud Router to your on-premises router. In addition, you can't set route priorities for routes advertised by your on-premises router to your VPC network.

All other issues

For additional assistance, contact your service provider. If needed, your service provider will contact Google to fix issues related to the Google side of the network.