Advanced Configurations

This page describes advanced configuration details that most users shouldn't need. The Overview describes the basic concepts of Cloud VPN.


While many VPNs can be configured using only the steps outlined in Creating a VPN, some users need additional details. Such details are provided here.

Advanced settings and configurations

Security associations and multiple subnets

Cloud VPN creates a single child security association (SA) announcing all CIDR blocks associated with the tunnel. Some IKEv2 on-premises devices support this behavior, and some only support creating a unique child SA for each CIDR block. With these latter devices, tunnels with multiple CIDR blocks can fail to establish.

There are several workarounds for this issue:

  1. Use Cloud Router to create routes learned through the BGP dynamic routing protocol. With this configuration, the CIDRs are not negotiated in the IKE protocol.
  2. Disable CIDR negotiation in the IKE protocol by setting the traffic selectors to 0.0.0.0/0 (this setup is called a "route-based VPN"). See the Simple setup instructions.
  3. Configure the on-premises device to have several CIDRs in the same child SA. Only some devices support this, and it is only possible in IKEv2.
  4. If possible, aggregate the CIDRs into a single, larger CIDR.
  5. Create a separate tunnel for each CIDR block. If necessary, you can create several VPN gateways for this purpose.
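
Workaround 4 can often be checked mechanically. The following sketch uses Python's ipaddress module to test whether a set of CIDR blocks collapses into a single aggregate (the CIDR values are illustrative, not from any real deployment):

```python
import ipaddress

# On-premises CIDR blocks that would otherwise each need their own child SA
# (example values only).
cidrs = [
    ipaddress.ip_network("10.0.0.0/24"),
    ipaddress.ip_network("10.0.1.0/24"),
    ipaddress.ip_network("10.0.2.0/24"),
    ipaddress.ip_network("10.0.3.0/24"),
]

# collapse_addresses merges adjacent and overlapping networks into the
# smallest covering set; if it yields a single network, workaround 4 applies.
aggregated = list(ipaddress.collapse_addresses(cidrs))
print(aggregated)  # [IPv4Network('10.0.0.0/22')]
```

If the result is more than one network, the blocks are not contiguous and cannot be replaced by a single larger CIDR.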

All on-premises subnets connected to the same tunnel must use the same child SA. If different on-premises subnets do not have the same SA, they must be connected to different tunnels.

When egressing from on-premises, the routing to Cloud VPN and the choice of child SA within Cloud VPN is based on the destination IP only. This affects architecture choices. For instance, configuring two child SAs that have the same on-premises CIDR block and different GCP CIDR blocks may not produce the desired effect. Similarly, connecting the same on-premises CIDR with two different GCP CIDRs using two different tunnels may not produce the desired effect.

Supported IKE ciphers

The following are supported ciphers for IKEv2 and IKEv1.

IKEv2 supported ciphers

Phase 1
  Encryption: aes-cbc, aes-ctr (each with key lengths of 128, 192, or 256 bits); aes-gcm, aes-ccm (same key lengths, with IVs of size 8, 12, or 16)
  Integrity: sha1-96, sha2-256-128, sha2-384-192, sha2-512-256, md5, xcbc-96, cmc-96
  PRF: sha1-96, sha2-256-128, sha2-384-192, sha2-512-256, md5, xcbc-96, cmc-96
  Diffie-Hellman (DH): For most IKEv2 devices, this value is auto-negotiated and so should be left unset. If your device requires it to be set, use one of the following values: Group 2 (modp_1024), Group 5 (modp_1536), Group 14 (modp_2048), Group 15 (modp_3072), Group 16 (modp_4096), Group 18 (modp_8192), modp_1024s160, modp_2048s224, modp_2048s256
  Phase 1 lifetime: 36,000 seconds (10 hours)

Phase 2
  Encryption: aes-cbc-128, aes-cbc-192, aes-cbc-256, aes-128-gcm-8, aes-128-gcm-12, aes-128-gcm-16
  Integrity: sha1-96, sha2-256-128, sha2-512-256
  PFS algorithm (required): Group 2 (modp_1024), Group 5 (modp_1536), Group 14 (modp_2048), Group 15 (modp_3072), Group 16 (modp_4096), Group 18 (modp_8192), modp_1024s160, modp_2048s224, modp_2048s256
  Diffie-Hellman (DH): Some devices require a DH value for Phase 2. If so, use the value that you used in Phase 1.
  Phase 2 lifetime: 10,800 seconds (3 hours)

IKEv1 supported ciphers

Phase 1
  Encryption: aes-cbc-128
  Integrity: sha1-96
  PFS algorithm (required): Group 2 (modp_1024)
  PRF: sha1-96
  Diffie-Hellman (DH): Group 2 (modp_1024)
  Phase 1 lifetime: 36,600 seconds (10 hours, 10 minutes)

Phase 2
  Encryption: aes-cbc-128
  Integrity: sha1-96
  Diffie-Hellman (DH): Some devices require a DH value for Phase 2. If so, use the value that you used in Phase 1.
  Phase 2 lifetime: 10,800 seconds (3 hours)

Maximum Transmission Unit (MTU) considerations

The Maximum Transmission Unit (MTU) of a network connection is the largest number of bytes, including headers, that can be transmitted over that connection in a single packet. Because IPsec packets add additional headers and encryption overhead to a normal packet, the maximum amount of user data that can be carried in the packet is smaller than for a non-IPsec packet.

For TCP sessions, Cloud VPN uses MSS clamping to rewrite the SYN packet of the initial TCP handshake. Cloud VPN tells the sending side of the connection to set its Maximum Segment Size (MSS) to a low enough value that the encrypted packet is below the MTU of the connection.
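
The clamping described above is simple arithmetic. The sketch below illustrates it; the 100-byte ESP allowance is an assumption inferred from the 1460-to-1360 MTU recommendation on this page, and real overhead varies with the negotiated cipher and padding:

```python
# Illustrative MSS-clamping arithmetic. The ESP allowance is an assumed
# figure, not Cloud VPN's exact per-cipher overhead.
TUNNEL_MTU = 1460       # Cloud VPN MTU, in bytes
ESP_ALLOWANCE = 100     # assumed headroom for ESP encapsulation overhead
IP_HEADER = 20          # inner IPv4 header, no options
TCP_HEADER = 20         # TCP header, no options

# MSS clamping rewrites the SYN so that the segment plus its headers plus
# the ESP overhead still fits within the tunnel MTU:
clamped_mss = TUNNEL_MTU - ESP_ALLOWANCE - IP_HEADER - TCP_HEADER
print(clamped_mss)  # 1320
```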

For non-TCP traffic, such as UDP sessions, Path MTU Discovery usually takes care of MTU considerations automatically. In rare cases, you might have to set your MTU values manually at the source.

The Cloud VPN MTU is 1460 bytes. The MTU of the on-premises VPN device must be set to 1460 bytes or lower, and ESP packets leaving the device must not exceed 1460 bytes. To account for the additional ESP overhead, the recommended MTU is 1360 bytes on any host that sends packets through the tunnel. You must also enable prefragmentation on your device, which means that packets must be fragmented first, then encapsulated.

If your on-premises device generates ESP packets larger than 1460 bytes, you have two options:

  1. Determine a lower workable MTU for your protocol and set the MTU appropriately.
  2. Allow the on-premises gateway to prefragment large packets before encapsulating. To do this, make sure the DF bit is off before sending packets to the gateway.
    Cloud VPN only supports prefragmentation of large packets. It does not support large packets that have been encapsulated, then fragmented.
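
The difference between the two fragmentation orders can be sketched as follows; the byte counts and the fake header are simplified placeholders, not real ESP framing:

```python
# Sketch of "prefragment, then encapsulate" -- the order Cloud VPN supports.
# ESP_OVERHEAD is an assumed per-packet figure for illustration.
TUNNEL_MTU = 1460
ESP_OVERHEAD = 100

def prefragment_then_encapsulate(payload: bytes) -> list[bytes]:
    """Fragment the cleartext first so each encapsulated packet fits the MTU."""
    max_fragment = TUNNEL_MTU - ESP_OVERHEAD
    fragments = [payload[i:i + max_fragment]
                 for i in range(0, len(payload), max_fragment)]
    # Encapsulation placeholder: prepend a fake ESP header to each fragment.
    return [b"\x00" * ESP_OVERHEAD + frag for frag in fragments]

packets = prefragment_then_encapsulate(b"x" * 3000)
print([len(p) for p in packets])  # every packet fits within 1460 bytes
```

Encapsulating first and fragmenting afterward would instead produce ESP fragments that Cloud VPN cannot reassemble, which is why the DF bit must be off before packets reach the gateway.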

Replay detection

Cloud VPN has replay detection enabled, and there is no way to turn it off. The Cloud VPN replay window is 4096 packets.
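
For illustration, a minimal sliding-window replay detector in the style of RFC 4303 looks like the following. This is a conceptual sketch, not Cloud VPN's implementation; only the 4096-packet window size comes from this page:

```python
WINDOW = 4096  # mirrors the Cloud VPN replay window

class ReplayDetector:
    def __init__(self):
        self.highest = 0   # highest sequence number accepted so far
        self.seen = set()  # accepted sequence numbers inside the window

    def accept(self, seq: int) -> bool:
        """Return True if the packet is fresh, False if replayed or too old."""
        if seq + WINDOW <= self.highest:
            return False   # behind the window: reject outright
        if seq in self.seen:
            return False   # duplicate inside the window: replay
        self.seen.add(seq)
        if seq > self.highest:
            self.highest = seq
            # Discard state that has fallen behind the advancing window.
            self.seen = {s for s in self.seen if s + WINDOW > seq}
        return True

d = ReplayDetector()
print(d.accept(1), d.accept(2), d.accept(2))  # True True False
```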

UDP encapsulation

Cloud VPN supports UDP encapsulation, which is commonly used in NAT-T configurations.

For NAT-T, Cloud VPN supports a one-to-one mapping from the NAT device's external public IP address to the internal private IP address of your on-premises VPN gateway. When you configure a Cloud VPN tunnel's on-premises IP address, specify the public IP address of the NAT device.

Cloud VPN doesn't support one-to-many NAT-T relationships; for example, multiple on-premises VPN gateways can't share the same public IP address on the NAT device. To provide continuous availability, Cloud VPN must be able to initiate the IPsec session, which is not possible in a one-to-many NAT-T configuration.

Tunnels with overlapping IP ranges

It is possible to create a tunnel that has the same IP range as another tunnel, a subset of the other tunnel's range, or a superset of the other tunnel's range.

If the two tunnels do not have matching CIDR blocks, Cloud VPN picks the tunnel that has the more specific block. If the two tunnels do have matching CIDR blocks, then Cloud VPN uses ECMP to balance flows between the two tunnels. The balancing is based on a hash, so all packets from the same flow use the same tunnel.
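
The selection logic described above can be modeled roughly as follows. This is an illustrative sketch, not Cloud VPN's actual implementation; the tunnel names, CIDRs, and CRC-based flow hash are stand-ins:

```python
import ipaddress
import zlib

# Example tunnel table: tunnel-b's CIDR is a subset of tunnel-a's, and
# tunnel-c's CIDR exactly matches tunnel-b's.
tunnels = {
    "tunnel-a": ipaddress.ip_network("10.1.0.0/16"),
    "tunnel-b": ipaddress.ip_network("10.1.2.0/24"),  # more specific subset
    "tunnel-c": ipaddress.ip_network("10.1.2.0/24"),  # matching CIDR -> ECMP
}

def pick_tunnel(src: str, dst: str) -> str:
    """Most-specific CIDR wins; ties are broken by a per-flow hash,
    so all packets of one flow use the same tunnel. Assumes a match exists."""
    dest = ipaddress.ip_address(dst)
    matches = [(name, net) for name, net in tunnels.items() if dest in net]
    longest = max(net.prefixlen for _, net in matches)
    candidates = sorted(n for n, net in matches if net.prefixlen == longest)
    flow_hash = zlib.crc32(f"{src}->{dst}".encode())  # stand-in flow hash
    return candidates[flow_hash % len(candidates)]

# 10.1.3.5 matches only the /16, so tunnel-a wins outright:
print(pick_tunnel("192.168.0.9", "10.1.3.5"))  # tunnel-a
```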

Packet filtering

Cloud VPN does not perform policy-related filtering on incoming authentication packets. Outgoing packets are filtered based on the IP range configured on the Cloud VPN gateway.

VPN maintenance cycles

Cloud VPN undergoes periodic maintenance cycles. During these short periods, VPN gateways and tunnels do not serve traffic. The maintenance logic ensures that, for a given project, only one VPN gateway within a region is serviced at any given time. Tunnels automatically reconnect after the maintenance cycle completes. For more information about VPN availability, refer to the VPN SLA.

Redundancy and failover

If a Cloud VPN tunnel goes down, it restarts automatically. If an entire virtual device fails, Cloud VPN automatically instantiates a new one with the same configuration. The new gateway and tunnel connect automatically.

If your on-premises side is hardware-based, a second on-premises gateway provides redundancy and failover on that side of the connection as well. A second physical gateway lets you take one of them offline for software upgrades or other scheduled maintenance, and it also protects you if one of the devices fails outright.

To configure a tunnel from your Cloud VPN gateway to a second on-premises-side VPN gateway, do the following:

  1. Configure a second on-premises VPN gateway and a tunnel.
  2. Set up a second tunnel on your Cloud VPN gateway pointing to the second on-premises gateway.
  3. Forward the same routes for the second tunnel as you did for the first. If you want both tunnels to balance traffic, set their route priorities to be the same. If you want one tunnel to be primary, set a lower priority on the second tunnel.
  4. If either VPN tunnel fails because of network issues along the path or a problem with an on-premises gateway, the Cloud VPN gateway continues sending traffic over the healthy tunnel and automatically resumes using both tunnels once the failed tunnel recovers.
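
The priority and failover behavior in steps 3 and 4 can be modeled as a simple selection rule. This is an illustrative sketch; the tunnel names and priority values are examples, and lower priority values win, as in GCP routes:

```python
def active_tunnels(routes: dict[str, int], healthy: set[str]) -> list[str]:
    """routes maps tunnel name -> route priority (lower value wins).
    Returns the tunnels currently carrying traffic: among healthy tunnels,
    those sharing the best (lowest) priority value balance traffic."""
    usable = {name: prio for name, prio in routes.items() if name in healthy}
    if not usable:
        return []
    best = min(usable.values())
    return sorted(name for name, prio in usable.items() if prio == best)

routes = {"tunnel-1": 100, "tunnel-2": 200}  # tunnel-1 is primary
print(active_tunnels(routes, {"tunnel-1", "tunnel-2"}))  # ['tunnel-1']
print(active_tunnels(routes, {"tunnel-2"}))  # ['tunnel-2'] after failover
```

With equal priorities, both tunnels appear in the result, which corresponds to the load-balancing case in step 3.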
Redundant on-premises VPN gateways diagram

Monitoring VPN tunnels

Use Stackdriver Monitoring to view metrics and create alerts related to your VPN tunnels. To test traffic between your on-premises and VPC networks, ping a VM instance that's accessible through the VPN tunnel. Don't test by pinging the external IP address of the Cloud VPN gateway; those packets don't traverse the VPN tunnel, so they provide no indication of whether the tunnel is working. For more information, see View VPN Logs.

VPN throughput

Each Cloud VPN tunnel can support up to 3 Gbps when the traffic is traversing a direct peering link, or 1.5 Gbps when traversing the public Internet. Actual performance varies depending on the following factors:

  • Network capacity between the two VPN peers.
  • The capabilities of the on-premises device. See your device's documentation for more information.
  • Packet size. Because processing happens on a per-packet basis, having a significant percentage of smaller packets can reduce overall throughput.
  • Round-trip time (RTT) and packet loss. High RTT and high packet loss rates can greatly reduce throughput for TCP.

When measuring TCP throughput, it is best to measure more than one simultaneous TCP stream. For example, if you are measuring with the iperf tool, use the -P parameter to specify multiple streams.

It is important to understand your on-premises VPN gateway's throughput limitations and to ensure that it supports the throughput levels you need.

If your on-premises VPN gateway supports higher throughput and you want to scale throughput on the Cloud VPN side, you can set up a second Cloud VPN gateway, as shown in option 2 below. You can also combine these strategies, as in option 3 below.

Option 1: Configure to scale your on-premises VPN gateway

Set up a second on-premises VPN gateway device with a different public IP address. Create a second tunnel on your existing Cloud VPN gateway that forwards the same IP range but points at the second on-premises gateway's IP address. Your Cloud VPN gateway automatically load balances between the configured tunnels. You can add multiple load-balanced tunnels this way to increase the aggregate VPN throughput.

Redundant on-premises VPN gateways diagram

Option 2: Configure to scale the Cloud VPN gateway

Add a second Cloud VPN gateway in the same region, configured like the existing VPN gateway. The second Cloud VPN gateway can have a tunnel that points to the same on-premises VPN gateway IP address as the tunnel on the first gateway. Once configured, traffic to the on-premises VPN gateway is automatically load balanced between the two Cloud VPN gateways and their tunnels.

Redundant Cloud VPN gateways diagram

Option 3: Scale at both the on-premises VPN gateway and the Cloud VPN gateway

Combine options 1 and 2 to scale throughput. With two on-premises VPN gateways and two Cloud VPN gateways, each Cloud VPN gateway can have a tunnel pointing at each on-premises VPN gateway's public IP address, giving you four load-balanced tunnels between the gateways and potentially quadrupling the bandwidth.
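
A back-of-the-envelope estimate for option 3, treating the 1.5 Gbps public-Internet figure quoted in this section as the per-tunnel limit:

```python
# Aggregate-throughput arithmetic for option 3 (idealized: assumes perfect
# ECMP balancing and no other bottlenecks).
cloud_gateways = 2
on_prem_gateways = 2
tunnels = cloud_gateways * on_prem_gateways  # one tunnel per gateway pair

per_tunnel_gbps = 1.5                        # public-Internet figure
print(tunnels, "tunnels,", tunnels * per_tunnel_gbps, "Gbps aggregate")
# 4 tunnels, 6.0 Gbps aggregate
```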

Redundant Cloud VPN and on-premises VPN gateways diagram

For more information, see Building High-throughput VPNs. You can increase the number of tunnels up to your project's quota. ECMP is used to balance traffic between tunnels.
