Special Configurations

This page describes special networking configurations of Compute Engine virtual machine (VM) instances, such as configuring an instance as a network proxy, a NAT gateway, or a VPN gateway.

Configure a static internal IP address for an instance

By default, an instance's internal IP address is ephemeral. You cannot reserve an internal IP address and move it around the way you can with a static external IP address. This section describes how to use a Google Cloud Platform route to set a static target IP address for your instance and how to configure the routing table on your instance to set a static source IP address. This technique allows you to use a route to keep the IP address in use even if you delete the target instance.

For example, if you want to assign 10.1.1.1 specifically as an internal IP address to a virtual machine instance, you can create a static route that sends traffic from 10.1.1.1 to your instance even if the instance's internal IP address is not 10.1.1.1. You can also configure the instance to send traffic from 10.1.1.1. You can keep the IP address "reserved" by keeping the route even if you delete and recreate the instance, then point the route at the new instance.

The following sections describe how to set a static target internal IP address using routes and how to set a static source IP address.

The following instructions assume that your project uses the default VPC network. If you wish to use another network, you must specify the --network flag in the gcloud commands.

Before you configure a static internal IP address, keep in mind that:

  • You can assign a specific internal IP address to an instance during instance creation, without using a route. For more information, read the Specifying an internal IP address at instance creation documentation. However, if you delete your instance, there is nothing to prevent another user from creating an instance with that IP address while the address is unused.
  • It is also possible to communicate with an instance using the instance name. Even though the instance's internal IP address might change, the instance name remains the same, so you can configure your applications to send packets to the instance name instead. Using an instance name is more robust than configuring a static internal IP address using routes.
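For example, Compute Engine's internal DNS resolves instance names within a project, so another instance (here, a hypothetical another-instance) can reach my-configurable-instance by name:

```shell
# Internal DNS maps the instance name to its current internal IP,
# so this works even if the underlying address changes.
user@another-instance:~$ ping -c 1 my-configurable-instance
```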

Set a static target internal IP address using routes

These instructions set up an instance and VPC network to receive packets sent to the IP address (the target IP). To configure the instance to send packets with this IP address as the source IP, see Set a static source IP address.

Debian 8


  1. Choose an internal IP address that doesn't belong to any network in your project.

    Use an address from a private range, like 10.x.x.x or 192.168.x.x, but not from a range that is in use in the project. This example uses 10.1.1.1.

  2. Create a new virtual machine instance and enable IP forwarding.

    By default, Compute Engine won't deliver packets whose destination IP address is different from the IP address of the instance receiving the packet. To disable this destination IP check, enable IP forwarding for the instance.

    gcloud compute instances create my-configurable-instance --can-ip-forward
    
  3. Create a static route to direct packets destined for 10.1.1.1 to your instance.

    The target of the static route must be an IP address that is not in any network of the project.

    gcloud compute routes create ip-10-1-1-1 \
        --next-hop-instance my-configurable-instance \
        --next-hop-instance-zone us-central1-f \
        --destination-range 10.1.1.1/32
    
  4. Create a firewall rule to allow traffic on this IP.

    In this case, the firewall rule allows ICMP, UDP, and TCP traffic from the 10.0.0.0/8 source range. You can add additional IP ranges or limit access as needed for your use case.

    gcloud compute firewall-rules create allow-internal \
      --allow icmp,udp,tcp --source-range 10.0.0.0/8
    
  5. Add a new virtual interface to your instance.

    1. ssh into your instance.

      gcloud compute ssh my-configurable-instance
      
    2. Append the following lines to the /etc/network/interfaces file.

      # Change to root first
      user@my-configurable-instance:~$ sudo su -
      

      # Append the following lines
      root@my-configurable-instance:~# cat <<EOF >>/etc/network/interfaces
      auto eth0:0
      iface eth0:0 inet static
      address 10.1.1.1
      netmask 255.255.255.255
      EOF
      

    3. Restart the instance.

      root@my-configurable-instance:~# sudo reboot
      

  6. Check that your virtual machine instance interface is up by pinging 10.1.1.1 from inside your instance.

    user@my-configurable-instance:~$ ping 10.1.1.1
    

    You can also try pinging the interface from another instance in your project.
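If the ping fails, you can check each layer separately. The following commands are a diagnostic sketch using the names from the steps above:

```shell
# On your workstation: confirm IP forwarding is enabled on the instance.
gcloud compute instances describe my-configurable-instance \
    --format 'value(canIpForward)'

# On your workstation: confirm the route points at the instance.
gcloud compute routes describe ip-10-1-1-1 \
    --format 'value(destRange,nextHopInstance)'

# On the instance: confirm the eth0:0 alias came up with the address.
user@my-configurable-instance:~$ ip addr show eth0 | grep 10.1.1.1
```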

Ubuntu 14.04

The steps for Ubuntu 14.04 are identical to the Debian 8 steps above: choose an unused internal IP address, create the instance with --can-ip-forward, create the static route and firewall rule, append the eth0:0 stanza to /etc/network/interfaces, reboot the instance, and verify the interface with ping.

CentOS 6


  1. Choose an internal IP address that doesn't belong to any network in your project.

    Use an address from a private range, like 10.x.x.x or 192.168.x.x, but not from a range that is in use in the project. This example uses 10.1.1.1.

  2. Create a new virtual machine instance and enable IP forwarding.

    By default, Compute Engine won't deliver packets whose destination IP address is different from the IP address of the instance receiving the packet. To disable this destination IP check, enable IP forwarding for the instance.

    gcloud compute instances create my-configurable-instance --can-ip-forward
    
  3. Create a static route to direct packets destined for 10.1.1.1 to your instance.

    The target of the static route must be an IP address that is not in any network of the project.

    gcloud compute routes create ip-10-1-1-1 \
        --next-hop-instance my-configurable-instance \
        --next-hop-instance-zone us-central1-f \
        --destination-range 10.1.1.1/32
    
  4. Create a firewall rule to allow traffic on this IP.

    In this case, the firewall rule allows ICMP, UDP, and TCP traffic from the 10.0.0.0/8 source range. You can add additional IP ranges or limit access as needed for your use case.

    gcloud compute firewall-rules create allow-internal \
      --allow icmp,udp,tcp --source-range 10.0.0.0/8
    
  5. Add a new virtual interface to your instance.

    1. ssh into your instance.

      gcloud compute ssh my-configurable-instance
      
    2. Add the following lines to a new file named /etc/sysconfig/network-scripts/ifcfg-eth0:0:

      # Change to root first
      user@my-configurable-instance:~$ sudo su -
      

      # Add the following lines
      root@my-configurable-instance:~# cat <<EOF >>/etc/sysconfig/network-scripts/ifcfg-eth0:0
      DEVICE="eth0:0"
      BOOTPROTO="static"
      IPADDR=10.1.1.1
      NETMASK=255.255.255.255
      ONBOOT="yes"
      EOF
      

    3. Enable your new virtual network interface.

      root@my-configurable-instance:~# ifup eth0:0
      

  6. Check that your virtual machine instance interface is up by pinging 10.1.1.1 from inside your instance.

    user@my-configurable-instance:~$ ping 10.1.1.1
    

    You can also try pinging the interface from another instance in your project.

Set a static source IP address

In addition to setting a static target internal IP for a virtual machine, you can also set up the instance to use the static internal IP as its source IP when communicating with other virtual machines.

This section only sets up the instance to use the IP as a source address (source IP). To have the VPC network route packets with this destination IP to the instance, also set a static target IP address.

To set the source IP as the static internal IP on a Linux instance:

# Change to root first
user@my-configurable-instance:~$ sudo su -
# Get the next hop route of your packets
root@my-configurable-instance:~# route -n
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.240.0.1      0.0.0.0         255.255.255.255 UH    0      0        0 eth0
# Add a route, where 10.0.0.0/16 is the address range the route applies to;
# replace this range with an address range of your choice.
# This means that all instances in this range will see the source IP of your
# instance as 10.1.1.1
root@my-configurable-instance:~# ip route add 10.0.0.0/16 dev eth0:0 via 10.240.0.1 src 10.1.1.1

To test this, use the instance to ping an IP address in the 10.0.0.0/16 range on the same network. For example, the following tcpdump output, captured on 10.0.0.2, shows a request made from an instance with a static internal IP of 10.1.1.1:

root@my-configurable-instance-2:~# tcpdump -npi eth0 -vv icmp or arp
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
17:59:05.374093 IP (tos 0x0, ttl 64, id 59993, offset 0, flags [DF], proto ICMP (1), length 84)
    10.1.1.1 > 10.0.0.2: ICMP echo request, id 4319, seq 1, length 64
17:59:05.374126 IP (tos 0x0, ttl 64, id 55671, offset 0, flags [none], proto ICMP (1), length 84)
    10.0.0.2 > 10.1.1.1: ICMP echo reply, id 4319, seq 1, length 64
17:59:06.375432 IP (tos 0x0, ttl 64, id 60166, offset 0, flags [DF], proto ICMP (1), length 84)
    10.1.1.1 > 10.0.0.2: ICMP echo request, id 4319, seq 2, length 64
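Note that the route added with ip route does not survive a reboot. One way to restore it at boot on Debian-based images, assuming the eth0:0 stanza added earlier is the last entry in /etc/network/interfaces, is a post-up hook:

```shell
# A post-up line attaches to the preceding iface stanza and re-adds
# the source-IP route whenever eth0:0 is brought up.
root@my-configurable-instance:~# cat <<EOF >>/etc/network/interfaces
post-up ip route add 10.0.0.0/16 dev eth0:0 via 10.240.0.1 src 10.1.1.1
EOF
```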

Configure an instance as a network proxy

You can design your VPC network so that only one instance has external access, and all other instances in the VPC network use that instance as a proxy server to the outside world. This is useful if you want to control access into or out of your VPC network, or reduce the cost of paying for multiple external IP addresses.

This example describes how to set up a network proxy on VM instances that use a Debian image. It uses a gateway instance as a Squid proxy server, but this is only one way of setting up a proxy server.

To set up a Squid proxy server:

  1. Set up one instance with an external (static or ephemeral) IP address. For this example, name your instance gateway-instance.
  2. Set up one or more instances without external IP addresses by specifying gcloud compute instances create ... --no-address. For this example, call this instance hidden-instance.
  3. Learn how to connect from one instance to another, because you cannot connect directly to your internal-only instances.
  4. Add a firewall rule to allow TCP traffic on port 3128:

    gcloud compute firewall-rules create FIREWALL_RULE --network NETWORK --allow tcp:3128
    
  5. Install Squid on gateway-instance, and configure it to allow access from any machines on the VPC network (RFC1918, RFC4193, and RFC4291 IP spaces). This assumes that gateway-instance and hidden-instance are both connected to the same VPC network, which enables them to connect to each other.

    user@gateway-instance:~$ sudo apt-get install squid3
    

    Enable any machine on the local network to use the Squid3 server. The following sed commands uncomment and enable the acl localnet src entries in the Squid config files for local networks and machines.

    user@gateway-instance:~$ sudo sed -i 's:#\(http_access allow localnet\):\1:' /etc/squid3/squid.conf
    

    user@gateway-instance:~$ sudo sed -i 's:#\(http_access deny to_localhost\):\1:' /etc/squid3/squid.conf
    

    user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src 10.0.0.0/8.*\):\1:' /etc/squid3/squid.conf
    

    user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src 172.16.0.0/12.*\):\1:' /etc/squid3/squid.conf
    

    user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src 192.168.0.0/16.*\):\1:' /etc/squid3/squid.conf
    

    user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src fc00\:\:/7.*\):\1:' /etc/squid3/squid.conf
    

    user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src fe80\:\:/10.*\):\1:' /etc/squid3/squid.conf
    

    # Prevent proxy access to the metadata server. Piping through sudo tee
    # is needed because a plain >> redirection would run without root privileges.
    user@gateway-instance:~$ cat <<EOF | sudo tee -a /etc/squid3/squid.conf > /dev/null
    acl to_metadata dst 169.254.169.254
    http_access deny to_metadata
    EOF
    

    # Start Squid
    user@gateway-instance:~$ sudo service squid3 start
    

  6. Configure hidden-instance to use gateway-instance as its proxy. Use ssh to connect into hidden-instance and define its proxy URL addresses to point to gateway-instance on port 3128 (the default Squid configuration) as shown here:

    user@gateway-instance:~$ ssh hidden-instance
    

    user@hidden-instance:~$ sudo -s
    

    root@hidden-instance:~# echo "export http_proxy=\"http://gateway-instance.$(dnsdomainname):3128\"" >> /etc/profile.d/proxy.sh
    

    root@hidden-instance:~# echo "export https_proxy=\"http://gateway-instance.$(dnsdomainname):3128\"" >> /etc/profile.d/proxy.sh
    

    root@hidden-instance:~# echo "export ftp_proxy=\"http://gateway-instance.$(dnsdomainname):3128\"" >> /etc/profile.d/proxy.sh
    

    root@hidden-instance:~# echo "export no_proxy=169.254.169.254,metadata,metadata.google.internal" >> /etc/profile.d/proxy.sh
    

    Update sudoers to pass these environment variables through.

    root@hidden-instance:~# cp /etc/sudoers /tmp/sudoers.new
    

    root@hidden-instance:~# chmod 640 /tmp/sudoers.new
    

    root@hidden-instance:~# echo "Defaults env_keep += \"ftp_proxy http_proxy https_proxy no_proxy\"" >>/tmp/sudoers.new
    

    root@hidden-instance:~# chmod 440 /tmp/sudoers.new
    

    root@hidden-instance:~# visudo -c -f /tmp/sudoers.new && cp /tmp/sudoers.new /etc/sudoers
    

  7. Exit the root shell, load the variables, and run apt-get on hidden-instance. It should now work using gateway-instance as a proxy. If gateway-instance were not serving as a proxy, apt-get would not work because hidden-instance has no direct connection to the Internet.

    root@hidden-instance:~# exit
    

    user@hidden-instance:~$ source /etc/profile.d/proxy.sh
    

    user@hidden-instance:~$ sudo apt-get update
    
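To confirm that requests actually traverse the proxy, make a request from hidden-instance and look for it in the Squid access log on gateway-instance:

```shell
# On hidden-instance: any HTTP request now goes through the proxy.
user@hidden-instance:~$ curl -sI http://www.google.com | head -1

# On gateway-instance: the same request should appear in the access log.
user@gateway-instance:~$ sudo tail -1 /var/log/squid3/access.log
```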

Set up an external HTTP connection to an instance

The default firewall rules do not allow HTTP or HTTPS connections to your instances. However, it is fairly simple to add a rule that does allow them. Note that an instance must have an external IP address before it can receive traffic from outside its VPC network.

You can add a firewall rule to allow HTTP or HTTPS connections using the gcloud command-line tool or the Google Cloud Platform Console. You can also add a firewall rule through the API.

Console

You can use the Cloud Platform Console to create an overall firewall rule for all instances on the VPC network, or you can allow individual instances access to HTTP and HTTPS connections by selecting the respective option when you create that instance. The latter option is described first, because it provides more control over individual instances.

  1. In the Cloud Platform Console, go to the VM Instances page.

    Go to the VM Instances page

  2. Click the Create instance button.
  3. In the Firewall section, select Allow HTTP traffic and Allow HTTPS traffic.
  4. Click the Create button to create the instance.

When you select these checkboxes, Compute Engine automatically creates a default-http or default-https rule that applies to all instances with the http-server or https-server tag, respectively. Your new instance is also tagged with the appropriate tag depending on your checkbox selection.

If you already have existing default-http and default-https firewall rules, you can apply the firewall rule to existing instances by enabling the Allow HTTP or Allow HTTPS options on the instance's details page.

  1. Go to the VM instances page.
  2. Click the name of the desired instance.
  3. Click the Edit button at the top of the page.
  4. Scroll down to the Firewalls section.
  5. Check the Allow HTTP or Allow HTTPS options under your desired VPC network.
  6. Click Save.

In a similar manner, you can also disable external HTTP or HTTPS access for an instance by unchecking one or both checkboxes.

Tagging specific instances for HTTP and HTTPS traffic, rather than creating an overall firewall rule that applies to all instances, limits the security implications of allowing external traffic to all virtual machines in a project. However, if you would like to create a firewall rule that allows HTTP or HTTPS traffic to all virtual machine instances, you can create your own firewall rule:

  1. Go to the VPC networks page.
  2. Select the VPC network where you would like to apply the firewall rule.
  3. Under the Firewall rules section, click Add firewall rule.
  4. Name your firewall rule, and add tcp:80 in the Protocols & Ports box, or tcp:443 for HTTPS traffic.
  5. Click Create.
`gcloud` command-line tool

If you want to allow HTTP and HTTPS traffic to all virtual machines in a project, the following command creates a firewall rule that allows incoming HTTP and HTTPS requests from anywhere to any instance connected to this VPC network.

gcloud compute firewall-rules create FIREWALL_RULE --allow tcp:80,tcp:443

**Example**

gcloud compute firewall-rules create sample-http \
 --description "Incoming http and https allowed." \
 --allow tcp:80,tcp:443
gcloud compute firewall-rules describe sample-http
allowed:
- IPProtocol: tcp
  ports:
  - '80'
  - '443'
creationTimestamp: '2014-06-13T13:27:12.206-07:00'
id: '5057780722612413546'
kind: compute#firewall
name: sample-http
network: https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/global/networks/default
selfLink: https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/global/firewalls/sample-http
sourceRanges:
- 0.0.0.0/0
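With the rule in place, you can verify the connection from outside the VPC network. This sketch assumes a hypothetical instance named www-instance that has an external IP address and is serving HTTP on port 80:

```shell
# Look up the instance's external IP, then request port 80 from outside.
EXTERNAL_IP=$(gcloud compute instances describe www-instance \
    --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
curl -I "http://$EXTERNAL_IP/"
```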

Configure an instance as a NAT gateway

This example shows setting up the gateway in a legacy network. Adjust the network ranges shown if you're using a VPC network.

You can create more complicated networking scenarios by making changes to the routes collection. This section describes how to set up a network address translation (NAT) gateway instance that can route traffic from internal-only virtual machine instances to the Internet. This lets you use one external IP address to send traffic from multiple virtual machine instances while exposing only a single virtual machine to the Internet.

  1. To start, create a legacy network to host your virtual machine instances for this scenario. In this example, the legacy network range is 10.240.0.0/16 with a gateway of 10.240.0.1, but you can select your own IPv4 range and gateway address. You can also create a VPC network instead.

    gcloud compute networks create gce-network \
        --mode legacy \
        --range 10.240.0.0/16
    
  2. Create firewall rules to allow SSH connections and internal traffic in the new network you just created.

    gcloud compute firewall-rules create gce-network-allow-ssh --allow tcp:22 --network gce-network
    gcloud compute firewall-rules create gce-network-allow-internal --allow tcp:1-65535,udp:1-65535,icmp \
        --source-ranges 10.240.0.0/16 --network gce-network
    
  3. Create a virtual machine to act as a NAT gateway on gce-network.

    gcloud compute instances create nat-gateway --network gce-network --can-ip-forward \
        --zone us-central1-a \
        --image-family debian-8 \
        --image-project debian-cloud \
        --tags nat
    
  4. Tag any existing virtual machine instances without an external IP address with the no-ip tag so that they use the gateway instance, or create a new virtual machine without an external IP address and tag it with no-ip.

    • Add tags to an existing instance.

      gcloud compute instances add-tags existing-instance --tags no-ip
      
    • Alternatively, create a new virtual machine without an external IP address.

      gcloud compute instances create example-instance --network gce-network --no-address \
          --zone us-central1-a \
          --image-family debian-8 \
          --image-project debian-cloud \
          --tags no-ip
      
  5. Create a route to send traffic destined to the Internet through your gateway instance.

    gcloud compute routes create no-ip-internet-route --network gce-network \
        --destination-range 0.0.0.0/0 \
        --next-hop-instance nat-gateway \
        --next-hop-instance-zone us-central1-a \
        --tags no-ip --priority 800
    

    Setting a priority on this route ensures that it wins if there are any other conflicting routes. 1000 is the default priority; a value lower than 1000 takes precedence.

  6. Next, connect to your gateway instance and configure iptables to NAT internal traffic to the Internet.

    gcloud compute ssh nat-gateway --zone us-central1-a
    

    On your instance, configure iptables:

    $ sudo sysctl -w net.ipv4.ip_forward=1
    

    $ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    

    The first command enables IP forwarding in the kernel. The second command masquerades packets received from internal instances so that they appear to be sent from the NAT gateway instance.

  7. (Optional) If you want these settings to persist across future reboots:

    $ echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf > /dev/null
    

    $ sudo apt-get install iptables-persistent
    
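To verify the gateway, connect to example-instance (through an instance that has an external IP address, since example-instance does not) and make an outbound request. This check relies on the third-party service ifconfig.me, so treat it as a sketch:

```shell
# From example-instance: this succeeds only if the no-ip route and the
# MASQUERADE rule on nat-gateway are working.
user@example-instance:~$ curl -s http://ifconfig.me
# The printed address should be nat-gateway's external IP.
```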

Set up an instance as a VPN gateway

You can use Strongswan VPN software to set up a VPN gateway on one of your instances. For most users, Google recommends that you use Cloud VPN instead of Strongswan. With Cloud VPN, you don't need to create and configure an instance to run VPN software. Use Strongswan in cases where Cloud VPN doesn't provide required functionality.

  1. Create a VPC network that your on-premises network will connect to.

    gcloud compute networks create vpn-network --mode custom
    
  2. Create a subnet with an IP range that doesn't overlap with your on-premises subnet.

    gcloud compute networks subnets create vpn-subnet \
        --network vpn-network \
        --region us-central1 \
        --range 10.0.0.0/24
    
  3. Create an instance in the vpn-subnet subnet. This instance will be your VPN gateway.

    gcloud compute instances create vpn-gateway --can-ip-forward \
        --subnet vpn-subnet \
        --zone us-central1-a \
        --tags vpn-gateway
    
  4. Look up and record your VPN gateway's internal and external IP addresses.

    gcloud compute instances describe --zone us-central1-a vpn-gateway
    

    The external IP address is the value of the natIP field. The internal IP address is the value of the networkIP field, such as 10.0.0.2.

  5. Create an instance that communicates with clients in your on-premises network through the VPN gateway.

    gcloud compute instances create test-vpn \
        --subnet vpn-subnet \
        --tags vpn \
        --zone us-central1-a
    
  6. Create a route in the vpn-network network to route traffic through vpn-gateway if it is destined for your on-premises network.

    gcloud compute routes create vpnnetwork-to-gateway \
        --destination-range [ON_PREM_IP_RANGE] \
        --next-hop-address [VPN_GATEWAY_INTERNAL_IP] \
        --network vpn-network \
        --tags vpn
    

    The [VPN_GATEWAY_INTERNAL_IP] value is the internal IP address of your VPN gateway (the value of the networkIP field).

  7. Add the following firewall rules to your VPC network to accept incoming traffic.

    gcloud compute firewall-rules create ssh --source-ranges 0.0.0.0/0 \
        --allow tcp:22 \
        --network vpn-network
    
    gcloud compute firewall-rules create allow-internal \
        --source-ranges 10.0.0.0/24 \
        --allow tcp:1-65535,udp:1-65535,icmp \
        --network vpn-network
    
    gcloud compute firewall-rules create allow-ipsec-nat \
        --source-ranges [ON_PREM_VPN_GATEWAY_EXTERNAL_IP]/32 \
        --allow udp:4500,udp:500 \
        --network vpn-network \
        --target-tags vpn-gateway
    
    gcloud compute firewall-rules create from-onprem \
        --source-ranges [ON_PREM_NETWORK_ADDRESS_SPACE] \
        --allow tcp:1-65535,udp:1-65535,icmp \
        --network vpn-network \
        --target-tags vpn
    

    Create firewall rules in your on-premises network to accept incoming traffic from the VPC network.

  8. Connect to your VPN gateway instance.

  9. Install and configure Strongswan, the VPN software.

    From the home directory, create a file named ipsec.conf. Populate it with the following contents, replacing the placeholders with your environment's values:

    conn myconn
      authby=psk
      auto=start
      dpdaction=hold
      esp=aes128-sha1-modp2048!
      forceencaps=yes
      ike=aes128-sha1-modp2048!
      keyexchange=ikev2
      mobike=no
      type=tunnel
      left=%any
      leftid=[VPN_GATEWAY_EXTERNAL_IP_ADDRESS]
      leftsubnet=10.0.0.0/24
      leftauth=psk
      leftikeport=4500
      right=[ON_PREM_EXTERNAL_IP_ADDRESS]
      rightsubnet=[ON_PREM_ADDRESS_SPACE]
      rightauth=psk
      rightikeport=4500
    

    Then, run the following commands, replacing [secret-key] with a secret key (a string value):

    $ sudo apt-get update
    

    $ sudo apt-get install strongswan -y
    

    $ echo "%any : PSK \"[secret-key]\"" | sudo tee /etc/ipsec.secrets > /dev/null
    

    $ sudo sysctl -w net.ipv4.ip_forward=1
    

    $ sudo sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/g' /etc/sysctl.conf
    

    $ sudo cp ipsec.conf /etc
    

    $ sudo ipsec restart
    

    You must also configure your on-premises VPN gateway to successfully establish a VPN tunnel.

    If your on-premises gateway machine is running a Debian-based operating system, you can use the same steps to install and configure Strongswan. For example, make a copy of your ipsec.conf file and swap the left and right IDs and subnets.

  10. Test your VPN tunnel by pinging an on-premises machine from the test-vpn instance:

    gcloud compute ssh test-vpn --command 'ping -c 3 [ON_PREM_INTERNAL_ADDRESS]'
    

Troubleshooting

If you are experiencing issues with your VPN setup based on the instructions above, try these tips to troubleshoot your setup:

  1. Check the status of your connection:

    $ sudo ipsec status
    

    If myconn isn't listed, start up the connection:

    $ sudo ipsec up myconn
    

  2. Determine whether the two VPN endpoints are able to communicate at all.

    Use netcat to send VPN-like traffic (UDP, port 4500). Run the following command on your local VPN endpoint:

    $ echo | nc -u [vpn-vm-gateway-external-address] 4500
    

    Run tcpdump on the receiving end to confirm that your Compute Engine instance can receive the packet on port 4500:

    $ sudo tcpdump -nn -n host [public-ip-of-local-VPN-gateway-machine] -i any
    

  3. Turn on more verbose logging by adding the following lines to your ipsec.conf files:

    config setup
      charondebug="ike 3, mgr 3, chd 3, net 3"
    
    conn myconn
      authby=psk
      auto=start
      ...
    

    Next, retry your connection. Although the connection should still fail, you can check the log for errors. The log file should be located at /var/log/charon.log on your Compute Engine instance.

What's next

  • See the VPC Overview for information on VPC networks.
  • See Using VPC for instructions on creating and modifying VPC networks.
