Special Configurations for VM Instances

This page describes special networking configurations of Compute Engine virtual machine (VM) instances, such as the following:

  • Configuring an instance as a network proxy.
  • Configuring an instance as a NAT gateway.
  • Configuring an instance as a VPN gateway.
  • Building high availability and high bandwidth NAT gateways.

Configure an instance as a network proxy

You can design your VPC network so that only one instance has external access, and all other instances in the VPC network use that instance as a proxy server to the outside world. This is useful if you want to control access into or out of your VPC network, or reduce the cost of paying for multiple external IP addresses.

This particular example discusses how to set up a network proxy on VM instances that use a Debian image. It uses a gateway instance as a Squid proxy server but this is only one way of setting up a proxy server.

To set up a Squid proxy server:

  1. Set up one instance with an external (static or ephemeral) IP address. For this example, name your instance gateway-instance.
  2. Set up one or more instances without external IP addresses by specifying gcloud compute instances create ... --no-address. For this example, call this instance hidden-instance.
  3. Learn how to connect from one instance to another, because you cannot connect directly to your internal-only instances.
  4. Add a firewall rule to allow TCP traffic on port 3128:

    gcloud compute firewall-rules create [FIREWALL_RULE] --network [NETWORK] --allow tcp:3128
    

  5. Install Squid on gateway-instance, and configure it to allow access from any machines on the VPC network (RFC1918, RFC4193, and RFC4291 IP spaces). This assumes that gateway-instance and hidden-instance are both connected to the same VPC network, which enables them to connect to each other.

    user@gateway-instance:~$ sudo apt-get install squid3
    

    Enable any machine on the local network to use the Squid3 server. The following sed commands uncomment and enable the acl localnet src entries in the Squid config files for local networks and machines.

    user@gateway-instance:~$ sudo sed -i 's:#\(http_access allow localnet\):\1:' /etc/squid3/squid.conf
    

    user@gateway-instance:~$ sudo sed -i 's:#\(http_access deny to_localhost\):\1:' /etc/squid3/squid.conf
    

    user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src 10.0.0.0/8.*\):\1:' /etc/squid3/squid.conf
    

    user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src 172.16.0.0/12.*\):\1:' /etc/squid3/squid.conf
    

    user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src 192.168.0.0/16.*\):\1:' /etc/squid3/squid.conf
    

    user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src fc00\:\:/7.*\):\1:' /etc/squid3/squid.conf
    

    user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src fe80\:\:/10.*\):\1:' /etc/squid3/squid.conf
    

    # Prevent proxy access to metadata server
    user@gateway-instance:~$ sudo tee -a /etc/squid3/squid.conf <<EOF
    acl to_metadata dst 169.254.169.254
    http_access deny to_metadata
    EOF
    

    # Start Squid
    user@gateway-instance:~$ sudo service squid3 start
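
    You can optionally confirm that Squid is listening on TCP port 3128 before configuring any clients. This is a quick sanity check; the ss utility ships with current Debian images:

    # Confirm Squid is listening on port 3128
    user@gateway-instance:~$ sudo ss -lntp | grep 3128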
    

  6. Configure hidden-instance to use gateway-instance as its proxy. Use ssh to connect into hidden-instance and define its proxy URL addresses to point to gateway-instance on port 3128 (the default Squid configuration) as shown here:

    user@gateway-instance:~$ ssh hidden-instance
    

    user@hidden-instance:~$ sudo -s
    

    root@hidden-instance:~# echo "export http_proxy=\"http://gateway-instance.$(dnsdomainname):3128\"" >> /etc/profile.d/proxy.sh
    

    root@hidden-instance:~# echo "export https_proxy=\"http://gateway-instance.$(dnsdomainname):3128\"" >> /etc/profile.d/proxy.sh
    

    root@hidden-instance:~# echo "export ftp_proxy=\"http://gateway-instance.$(dnsdomainname):3128\"" >> /etc/profile.d/proxy.sh
    

    root@hidden-instance:~# echo "export no_proxy=169.254.169.254,metadata,metadata.google.internal" >> /etc/profile.d/proxy.sh
    

    Update sudoers to pass these env variables through.

    root@hidden-instance:~# cp /etc/sudoers /tmp/sudoers.new
    

    root@hidden-instance:~# chmod 640 /tmp/sudoers.new
    

    root@hidden-instance:~# echo "Defaults env_keep += \"ftp_proxy http_proxy https_proxy no_proxy\"" >>/tmp/sudoers.new
    

    root@hidden-instance:~# chmod 440 /tmp/sudoers.new
    

    root@hidden-instance:~# visudo -c -f /tmp/sudoers.new && cp /tmp/sudoers.new /etc/sudoers
    

  7. Exit sudo, load the variables, and run apt-get on hidden-instance. It should now work using gateway-instance as a proxy. If gateway-instance were not serving as a proxy, apt-get would not work because hidden-instance has no direct connection to the Internet.

    root@hidden-instance:~# exit
    

    user@hidden-instance:~$ source /etc/profile.d/proxy.sh
    

    user@hidden-instance:~$ sudo apt-get update
    

Set up an external HTTP connection to an instance

The default firewall rules do not allow HTTP or HTTPS connections to your instances. However, it is fairly simple to add a rule that does allow them. Note that an instance must have an external IP address before it can receive traffic from outside its VPC network.

You can add a firewall rule to allow HTTP or HTTPS connections using the gcloud command-line tool or the Google Cloud Platform Console. You can also add a firewall rule through the API.

Console

You can use the GCP Console to create an overall firewall rule for all instances on the VPC network, or you can allow individual instances access to HTTP and HTTPS connections by selecting the respective option when you create that instance. The latter option is described first, because it provides more control over individual instances.

  1. In the GCP Console, go to the VM Instances page.

  2. Click the Create button.
  3. In the Firewall section, select Allow HTTP traffic and Allow HTTPS traffic.
  4. Click the Create button to create the instance.

By selecting these checkboxes, the VPC network automatically creates a default-http or default-https rule that applies to all instances with either the http-server or https-server tags. Your new instance is also tagged with the appropriate tag depending on your checkbox selection.

If you already have existing default-http and default-https firewall rules, you can apply the firewall rule to existing instances by enabling the Allow HTTP or Allow HTTPS options on the instance's details page.

  1. Go to the VM instances page.
  2. Click the name of the desired instance.
  3. Click the Edit button at the top of the page.
  4. Scroll down to the Firewalls section.
  5. Check the Allow HTTP or Allow HTTPS options under your desired VPC network.
  6. Click Save.

In a similar manner, you can also disable external HTTP or HTTPS access for an instance by unchecking one or both checkboxes.
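
The same tagging can also be done from the command line. As a sketch (assuming the default-http rule and http-server tag described above already exist in your project), you can attach or remove the tag on an existing instance:

# Allow external HTTP traffic to one instance by adding the http-server tag
gcloud compute instances add-tags [INSTANCE_NAME] --zone [ZONE] --tags http-server

# Disallow it again by removing the tag
gcloud compute instances remove-tags [INSTANCE_NAME] --zone [ZONE] --tags http-server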

By allowing specific instances to be tagged for HTTP and HTTPS traffic rather than creating an overall firewall rule that applies to all instances, GCP limits the possible security implications of allowing external traffic to all virtual machines in a project. However, if you would like to create a firewall rule that allows HTTP or HTTPS traffic to all virtual machine instances, you can create your own firewall rule:

  1. Go to the VPC networks page.
  2. Select the VPC network where you would like to apply the firewall rule.
  3. Under the Firewall rules section, click Add firewall rule.
  4. Name your firewall rule, and add tcp:80 in the Protocols & Ports box, or tcp:443 for HTTPS traffic.
  5. Click Create.
gcloud command-line tool

If you want to allow HTTP and HTTPS traffic to all virtual machines in a project, the following command creates a firewall rule that allows incoming HTTP and HTTPS requests from anywhere to any instance connected to this VPC network.

gcloud compute firewall-rules create FIREWALL_RULE --allow tcp:80,tcp:443

**Example**

gcloud compute firewall-rules create sample-http \
 --description "Incoming http and https allowed." \
 --allow tcp:80,tcp:443
gcloud compute firewall-rules describe sample-http
allowed:
- IPProtocol: tcp
  ports:
  - '80'
  - '443'
creationTimestamp: '2014-06-13T13:27:12.206-07:00'
id: '5057780722612413546'
kind: compute#firewall
name: sample-http
network: https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/global/networks/default
selfLink: https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/global/firewalls/sample-http
sourceRanges:
- 0.0.0.0/0
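
If you prefer the API mentioned earlier, the equivalent firewalls.insert request looks roughly like the following sketch; the request body mirrors the describe output above, and [PROJECT_ID] is a placeholder:

# Create the same rule through the Compute Engine REST API (sketch)
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/global/firewalls" \
  -d '{
    "name": "sample-http",
    "network": "https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/global/networks/default",
    "allowed": [{"IPProtocol": "tcp", "ports": ["80", "443"]}],
    "sourceRanges": ["0.0.0.0/0"]
  }'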

Set up an instance as a VPN gateway

You can use Strongswan VPN software to set up a VPN gateway on one of your instances. For most users, Google recommends that you use Cloud VPN instead of Strongswan. With Cloud VPN, you don't need to create and configure an instance to run VPN software. Use Strongswan in cases where Cloud VPN doesn't provide required functionality.

  1. Create a VPC network that your on-premises network will connect to.

    gcloud compute networks create vpn-network --subnet-mode custom
    

  2. Create a subnet with an IP range that doesn't overlap with your on-premises subnet.

    gcloud compute networks subnets create vpn-subnet \
        --network vpn-network \
        --region us-central1 \
        --range 10.0.0.0/24
    

  3. Create an instance in the vpn-subnet subnet. This instance will be your VPN gateway.

    gcloud compute instances create vpn-gateway --can-ip-forward \
        --subnet vpn-subnet \
        --zone us-central1-a \
        --tags vpn-gateway
    

  4. Look up and record your VPN gateway's internal and external IP addresses.

    gcloud compute instances describe --zone us-central1-a vpn-gateway
    

    The external IP address is the value of the natIP field. The internal IP address is the value of the networkIP field, such as 10.0.0.2.
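
    As a convenience, you can extract these two values directly with --format. This sketch assumes the gateway has a single network interface:

    # External (natIP) address of the VPN gateway
    gcloud compute instances describe vpn-gateway --zone us-central1-a \
        --format='value(networkInterfaces[0].accessConfigs[0].natIP)'

    # Internal (networkIP) address of the VPN gateway
    gcloud compute instances describe vpn-gateway --zone us-central1-a \
        --format='value(networkInterfaces[0].networkIP)'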

  5. Create an instance that communicates with clients in your on-premises network through the VPN gateway.

    gcloud compute instances create test-vpn \
        --subnet vpn-subnet \
        --tags vpn \
        --zone us-central1-a
    

  6. Create a route in the vpn-network network to route traffic through vpn-gateway if it is destined for your on-premises network.

    gcloud compute routes create vpnnetwork-to-gateway \
        --destination-range [ON_PREM_IP_RANGE] \
        --next-hop-address [VPN_GATEWAY_INTERNAL_IP] \
        --network vpn-network \
        --tags vpn
    

    The [VPN_GATEWAY_INTERNAL_IP] value is the internal IP address of your VPN gateway (the value of the networkIP field).

  7. Add the following firewall rules to your VPC network to accept incoming traffic.

    gcloud compute firewall-rules create ssh --source-ranges 0.0.0.0/0 \
        --allow tcp:22 \
        --network vpn-network
    

    gcloud compute firewall-rules create allow-internal \
        --source-ranges 10.0.0.0/24 \
        --allow tcp:1-65535,udp:1-65535,icmp \
        --network vpn-network
    

    gcloud compute firewall-rules create allow-ipsec-nat \
        --source-ranges [ON_PREM_VPN_GATEWAY_EXTERNAL_IP]/32 \
        --allow udp:4500,udp:500 \
        --network vpn-network \
        --target-tags vpn-gateway
    

    gcloud compute firewall-rules create from-onprem \
        --source-ranges [ON_PREM_NETWORK_ADDRESS_SPACE] \
        --allow tcp:1-65535,udp:1-65535,icmp \
        --network vpn-network \
        --target-tags vpn
    

    Create firewall rules in your on-premises network to accept incoming traffic from the VPC network.
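
    What those on-premises rules look like depends on your equipment. As a rough sketch for a Linux-based gateway that uses iptables (assuming 10.0.0.0/24 is the VPC subnet range used above):

    # Accept IKE and IPsec NAT-T traffic from the VPN gateway instance
    sudo iptables -A INPUT -p udp -m multiport --dports 500,4500 -j ACCEPT
    # Forward traffic arriving from the VPC subnet range
    sudo iptables -A FORWARD -s 10.0.0.0/24 -j ACCEPT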

  8. Connect to your VPN gateway instance.

  9. Install and configure Strongswan, the VPN software.

    From the home directory, create a file named ipsec.conf. Populate it with the following contents, replacing the placeholders with your environment's values:

    conn myconn
      authby=psk
      auto=start
      dpdaction=hold
      esp=aes128-sha1-modp2048!
      forceencaps=yes
      ike=aes128-sha1-modp2048!
      keyexchange=ikev2
      mobike=no
      type=tunnel
      left=%any
      leftid=[VPN_GATEWAY_EXTERNAL_IP_ADDRESS]
      leftsubnet=10.0.0.0/24
      leftauth=psk
      leftikeport=4500
      right=[ON_PREM_EXTERNAL_IP_ADDRESS]
      rightsubnet=[ON_PREM_ADDRESS_SPACE]
      rightauth=psk
      rightikeport=4500
    

    Then, run the following commands, replacing [secret-key] with a secret key (a string value):

    $ sudo apt-get update
    

    $ sudo apt-get install strongswan -y
    

    $ echo "%any : PSK \"[secret-key]\"" | sudo tee /etc/ipsec.secrets > /dev/null
    

    $ sudo sysctl -w net.ipv4.ip_forward=1
    

    $ sudo sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/g' /etc/sysctl.conf
    

    $ sudo cp ipsec.conf /etc
    

    $ sudo ipsec restart
    

    You must also configure your on-premises VPN gateway to successfully establish a VPN tunnel.

    If your on-premises gateway machine is running a Debian-based operating system, you can use the same steps to install and configure Strongswan. For example, make a copy of your ipsec.conf file and switch the left and right IDs and subnets.
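
    A minimal sketch of that mirrored ipsec.conf for the on-premises side, using the same placeholder values as above:

    conn myconn
      authby=psk
      auto=start
      dpdaction=hold
      esp=aes128-sha1-modp2048!
      forceencaps=yes
      ike=aes128-sha1-modp2048!
      keyexchange=ikev2
      mobike=no
      type=tunnel
      left=%any
      leftid=[ON_PREM_EXTERNAL_IP_ADDRESS]
      leftsubnet=[ON_PREM_ADDRESS_SPACE]
      leftauth=psk
      leftikeport=4500
      right=[VPN_GATEWAY_EXTERNAL_IP_ADDRESS]
      rightsubnet=10.0.0.0/24
      rightauth=psk
      rightikeport=4500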

  10. Test your VPN tunnel by pinging an on-premises machine from the test-vpn instance:

    gcloud compute ssh test-vpn --command 'ping -c 3 [ON_PREM_INTERNAL_ADDRESS]'
    

Troubleshooting

If you are experiencing issues with your VPN setup based on the instructions above, try these troubleshooting tips:

  1. Check the status of your connection:

    $ sudo ipsec status
    

    If myconn isn't listed, start up the connection:

    $ sudo ipsec up myconn
    

  2. Determine whether the two VPN endpoints are able to communicate at all.

    Use netcat to send VPN-like traffic (UDP, port 4500). Run the following command on your local VPN endpoint:

    $ echo | nc -u [vpn-vm-gateway-external-address] 4500
    

    Run tcpdump on the receiving end to determine that your VM instance can receive the packet on port 4500:

    $ tcpdump -nn -n host [public-ip-of-local-VPN-gateway-machine] -i any
    

  3. Turn on more verbose logging by adding the following lines to your ipsec.conf files:

    config setup
      charondebug="ike 3, mgr 3, chd 3, net 3"

    conn myconn
      authby=psk
      auto=start
      ...

    Next, retry your connection. Although the connection should still fail, you can check the log for errors. The log file should be located at /var/log/charon.log on your VM instance.
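
    For example, you can watch the log while retrying the connection:

    $ sudo tail -f /var/log/charon.log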

Configure an instance as a NAT gateway

This example shows setting up the gateway in a legacy network. Adjust the network ranges shown if you're using a VPC network.

You can create more complicated networking scenarios by making changes to the routes collection. This section describes how you can set up a network address translation (NAT) gateway instance that can route traffic from internal-only virtual machine instances to the Internet. This allows you to use one external IP address to send traffic from multiple virtual machine instances while exposing only a single virtual machine to the Internet.

  1. To start, create a legacy network to host your virtual machine instances for this scenario. In this example, the legacy network range used is 10.240.0.0/16 with a gateway of 10.240.0.1. However, you can select your own IPv4 range and gateway address as well. You can also create a VPC network instead.

    gcloud compute networks create gce-network \
        --subnet-mode legacy \
        --range 10.240.0.0/16
    

  2. Create firewall rules to allow SSH connections and internal traffic in the new network you just created.

    gcloud compute firewall-rules create gce-network-allow-ssh --allow tcp:22 --network gce-network
    
    gcloud compute firewall-rules create gce-network-allow-internal --allow tcp:1-65535,udp:1-65535,icmp \
        --source-ranges 10.240.0.0/16 --network gce-network
    

  3. Create a virtual machine to act as a NAT gateway on gce-network.

    gcloud compute instances create nat-gateway --network gce-network --can-ip-forward \
        --zone us-central1-a \
        --image-family debian-8 \
        --image-project debian-cloud \
        --tags nat
    

  4. Tag any virtual machine instances without an external IP address that will use the gateway instance with the tag no-ip, or create a new virtual machine without an external IP address and tag the instance with the no-ip tag.

    • Add tags to an existing instance.

    gcloud compute instances add-tags existing-instance --tags no-ip
    

    • Alternatively, create a new virtual machine without an external IP address.

    gcloud compute instances create example-instance --network gce-network --no-address \
            --zone us-central1-a \
            --image-family debian-8 \
            --image-project debian-cloud \
            --tags no-ip
    

  5. Create a route to send traffic destined for the Internet through your gateway instance.

    gcloud compute routes create no-ip-internet-route --network gce-network \
        --destination-range 0.0.0.0/0 \
        --next-hop-instance nat-gateway \
        --next-hop-instance-zone us-central1-a \
        --tags no-ip --priority 800
    

    Setting the priority of this route ensures that it wins over any other conflicting routes. The default priority is 1000, and a value lower than 1000 takes precedence.

  6. Next, log onto your gateway instance and configure iptables to NAT internal traffic to the Internet.

    gcloud compute ssh nat-gateway --zone us-central1-a
    

    On your instance, configure iptables:

    $ sudo sysctl -w net.ipv4.ip_forward=1
    

    $ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    

    The first sudo command tells the kernel that you want to allow IP forwarding. The second sudo command masquerades packets received from internal instances as if they were sent from the NAT gateway instance.

  7. (Optional) If you want these settings to persist across future reboots:

    $ echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf > /dev/null
    

    $ sudo apt-get install iptables-persistent
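
    During installation, iptables-persistent offers to save the current rules. If you skip that prompt or change the rules later, you can write them out manually; this is a sketch, assuming the package's default IPv4 rules file at /etc/iptables/rules.v4:

    # Save the current IPv4 iptables rules so they are restored at boot
    $ sudo sh -c "iptables-save > /etc/iptables/rules.v4"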
    

Build high availability and high bandwidth NAT gateways

This section describes how to set up multiple NAT gateways with Equal Cost Multi-Path (ECMP) routing and autohealing enabled for a more resilient and high-bandwidth deployment.

GCP uses RFC 1918 private IP addresses for virtual machines. If these VMs need access to resources on the public internet, a NAT is required. A single NAT gateway architecture is sufficient for simple scenarios. However, higher throughput or higher availability requires a more resilient architecture.

Configuring the gateways

When multiple routes have the same priority, GCP uses ECMP routing to distribute traffic. In this case, you create several NAT gateways to receive parts of the traffic through ECMP. The NAT gateways then forward the traffic to external hosts using their public IP addresses.

The following diagram shows this configuration.

[Diagram: multiple NAT gateway configuration]

For higher resiliency, you place each gateway in a separate managed instance group of size 1 and attach a simple health check so that the gateways automatically restart if they fail. The gateways are in separate instance groups so that each one can have a static external IP attached to its instance template. You provision three n1-standard-2 NAT gateways in this example, but you can use any other number or size of gateway that you want. For example, n1-standard-2 instances are capped at 4 Gbps of network traffic; if you need to handle a higher volume of traffic, you might choose n1-standard-8 instances.

  1. Create a VPC network (if needed). If you're not adding these gateways to an existing VPC, create a VPC network and subnet for them. If you are adding them to an existing VPC, skip to the second step and modify the regions as appropriate for your environment.

    1. Using Cloud Shell, create a custom VPC associated with your GCP project. This VPC allows you to use non-default IP addressing, but does not include any default firewall rules:

      gcloud compute networks create example-vpc --subnet-mode custom
      

    2. Create a subnet within this VPC, and specify a region and IP range. For this tutorial, use 10.0.1.0/24 and the us-east1 region:

      gcloud compute networks subnets create example-east \
          --network example-vpc --range 10.0.1.0/24 --region us-east1
      

  2. Reserve and store three static IP addresses.

    1. Reserve and store an address named nat-1 in the us-east1 region:

      gcloud compute addresses create nat-1 --region us-east1

      nat_1_ip=$(gcloud compute addresses describe nat-1 \
          --region us-east1 --format='value(address)')

    2. Reserve and store an address named nat-2 in us-east1:

      gcloud compute addresses create nat-2 --region us-east1

      nat_2_ip=$(gcloud compute addresses describe nat-2 \
          --region us-east1 --format='value(address)')

    3. Reserve and store an address named nat-3 in us-east1:

      gcloud compute addresses create nat-3 --region us-east1
      nat_3_ip=$(gcloud compute addresses describe nat-3 \
          --region us-east1 --format='value(address)')
      

  3. Create three instance templates with reserved IPs.

    1. Copy the startup config:

      gsutil cp gs://nat-gw-template/startup.sh .
      

      If you cannot access the startup script, copy it from the Startup script section.

    2. Create a nat-1 instance template:

      gcloud compute instance-templates create nat-1 \
          --machine-type n1-standard-2 --can-ip-forward --tags natgw \
          --metadata-from-file=startup-script=startup.sh --address $nat_1_ip
      

    3. Create a nat-2 instance template:

      gcloud compute instance-templates create nat-2 \
          --machine-type n1-standard-2 --can-ip-forward --tags natgw \
          --metadata-from-file=startup-script=startup.sh  --address $nat_2_ip
      

    4. Create a nat-3 instance template:

      gcloud compute instance-templates create nat-3 \
          --machine-type n1-standard-2 --can-ip-forward --tags natgw \
          --metadata-from-file=startup-script=startup.sh --address $nat_3_ip
      

      The n1-standard-2 machine type has two vCPUs and can use up to 4 Gbps of network bandwidth. If you need more bandwidth, you might want to choose a different machine type. Bandwidth scales at 2 Gbps per vCPU, up to 16 Gbps on an 8-vCPU machine.

  4. Create a health check to monitor responsiveness:

    gcloud compute health-checks create http nat-health-check --check-interval 30 \
        --healthy-threshold 1 --unhealthy-threshold 5 --request-path /health-check
    

    gcloud compute firewall-rules create "natfirewall" \
        --allow tcp:80 --target-tags natgw \
        --source-ranges "209.85.152.0/22","209.85.204.0/22","35.191.0.0/16"
    

    This health check ensures that a gateway that fails and can't respond to HTTP traffic is restarted. The accompanying firewall rule allows the health check probes from Google's health-checking source ranges to reach the gateways on port 80.

  5. Create an instance group for each NAT gateway:

    gcloud compute instance-groups managed create nat-1 --size=1 --template=nat-1 --zone=us-east1-b
    gcloud compute instance-groups managed create nat-2 --size=1 --template=nat-2 --zone=us-east1-c
    gcloud compute instance-groups managed create nat-3 --size=1 --template=nat-3 --zone=us-east1-d
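
    Once the gateway instances are up, you can optionally spot-check one of them from an SSH session on that instance. The startup script in this example serves the health endpoint on port 80 and returns 200 when the gateway can reach the internet, 503 otherwise:

    # Run on a NAT gateway instance; prints the HTTP status of the local health check
    curl -s -o /dev/null -w "%{http_code}\n" http://localhost/health-check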
    

  6. Set up autohealing to restart unresponsive NAT gateways:

    gcloud beta compute instance-groups managed set-autohealing nat-1 \
        --health-check nat-health-check --initial-delay 120 --zone us-east1-b
    nat_1_instance=$(gcloud compute instances list |awk '$1 ~ /^nat-1/ { print $1 }')
    gcloud beta compute instance-groups managed set-autohealing nat-2 \
        --health-check nat-health-check --initial-delay 120 --zone us-east1-c
    nat_2_instance=$(gcloud compute instances list |awk '$1 ~ /^nat-2/ { print $1 }')
    gcloud beta compute instance-groups managed set-autohealing nat-3 \
        --health-check nat-health-check --initial-delay 120 --zone us-east1-d
    nat_3_instance=$(gcloud compute instances list |awk '$1 ~ /^nat-3/ { print $1 }')
    

  7. Add default routes to your instances:

    gcloud compute routes create natroute1 --destination-range 0.0.0.0/0 \
        --tags no-ip --priority 800 --next-hop-instance-zone us-east1-b \
        --next-hop-instance $nat_1_instance
    gcloud compute routes create natroute2 --destination-range 0.0.0.0/0 \
        --tags no-ip --priority 800 --next-hop-instance-zone us-east1-c \
        --next-hop-instance $nat_2_instance
    gcloud compute routes create natroute3 --destination-range 0.0.0.0/0 \
        --tags no-ip --priority 800 --next-hop-instance-zone us-east1-d \
        --next-hop-instance $nat_3_instance
    

  8. Tag the instances that you want to use the NAT:

    gcloud compute instances add-tags natted-servers --tags no-ip
    

  9. Test NAT functionality. With your gateways configured and your guest VMs tagged, ping external hosts without giving your VMs external IPs, as in this example:

    ping 8.8.8.8

    Example output:

    PING 8.8.8.8 (8.8.8.8): 56 data bytes
    64 bytes from 8.8.8.8: icmp_seq=0 ttl=52 time=0.618 ms
    64 bytes from 8.8.8.8: icmp_seq=1 ttl=52 time=0.325 ms
    64 bytes from 8.8.8.8: icmp_seq=2 ttl=52 time=0.443 ms
    64 bytes from 8.8.8.8: icmp_seq=3 ttl=52 time=0.314 ms
    64 bytes from 8.8.8.8: icmp_seq=4 ttl=52 time=0.386 ms
    

Issues to consider

This configuration provides three NAT gateways in the us-east1 region, each capable of 2 Gbps. ECMP load balancing isn't perfect, though, and an individual flow is not spread across multiple links.

  • A Terraform module for this configuration is also available for automating deployments.
  • This configuration is best for ephemeral or non-stateful outbound links. If the size of the NAT gateway pool changes, TCP connections might be rebalanced, which could result in resetting an established connection.
  • The gateway instances are not automatically updated, so if the default Debian installation presents a security risk, you will need to update them manually.
  • These instances are all in the us-east1 region. If your VMs are in other zones, you might get better performance by moving gateways closer to those zones.
  • Bandwidth per gateway is up to 2 Gbps per core unidirectional. During a gateway failure, traffic is distributed to the remaining gateways, but because running flows are not reprogrammed, traffic does not immediately resettle when the gateway comes back online. So make sure you allow enough overhead when sizing.
  • To be alerted of unexpected results, use Stackdriver to monitor the managed instance groups and network traffic.

Startup script: startup.sh

Startup script referenced in step 3a:

#!/bin/bash
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

cat <<EOF > /usr/local/sbin/health-check-server.py
#!/usr/bin/python
from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer
import subprocess

PORT_NUMBER = 80
PING_HOST = "www.google.com"

def connectivityCheck():
  try:
    subprocess.check_call(["ping", "-c", "1", PING_HOST])
    return True
  except subprocess.CalledProcessError as e:
    return False

#This class handles any incoming request
class myHandler(BaseHTTPRequestHandler):
  def do_GET(self):
    if self.path == '/health-check':
      if connectivityCheck():
        self.send_response(200)
      else:
        self.send_response(503)
    else:
      self.send_response(404)

try:
  server = HTTPServer(("", PORT_NUMBER), myHandler)
  print "Started httpserver on port " , PORT_NUMBER
  #Wait forever for incoming http requests
  server.serve_forever()

except KeyboardInterrupt:
  print "^C received, shutting down the web server"
  server.socket.close()
EOF

nohup python /usr/local/sbin/health-check-server.py >/dev/null 2>&1 &

What's next

  • See the VPC Overview for information on VPC networks.
  • See Using VPC for instructions on creating and modifying VPC networks.
