Configuring VMs for networking use cases

This page describes special networking configurations of Compute Engine virtual machine (VM) instances, such as the following:

  • Setting up an external HTTP connection to a VM
  • Configuring a VM as a network proxy
  • Configuring a VM as a VPN gateway
  • Configuring a VM as a NAT gateway
  • Building high availability and high bandwidth NAT gateways

Setting up an external HTTP connection to a VM

The default firewall rules do not allow HTTP or HTTPS connections to your instances. However, it is fairly simple to add a rule that does allow them. Note that a VM must have an external (static or ephemeral) IP address before it can receive traffic from outside its Virtual Private Cloud (VPC) network.

You can add a firewall rule to allow HTTP or HTTPS connections using the gcloud command-line tool or the Google Cloud Console. You can also add a firewall rule through the API.


You can use the Cloud Console to create an overall firewall rule for all instances on the VPC network, or you can allow individual instances access to HTTP and HTTPS connections by selecting the respective option when you create that instance. The latter option is described first, because it provides more control over individual instances.

  1. In the Cloud Console, go to the VM Instances page.


  2. Click Create instance.
  3. In the Firewall section, select Allow HTTP traffic and Allow HTTPS traffic.
  4. Click Create to create the instance.

When you select these checkboxes, Compute Engine creates a default-http or default-https rule that applies to all instances with the http-server or https-server tag, respectively. Your new instance is also tagged with the appropriate tag, depending on your checkbox selection.
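
If you prefer to manage these tag-scoped rules yourself, a roughly equivalent rule can be created with gcloud. This is a sketch, not the exact rule the Console generates: the rule name is illustrative, and [NETWORK] is a placeholder for your VPC network.

```
gcloud compute firewall-rules create allow-http-tagged \
    --network [NETWORK] \
    --allow tcp:80 \
    --target-tags http-server
```

Only instances carrying the http-server tag receive the allowed traffic; use tcp:443 and the https-server tag for the HTTPS counterpart.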

If you already have existing default-http and default-https firewall rules, you can apply the firewall rule to existing instances by enabling the Allow HTTP or Allow HTTPS options on the instance's details page.

  1. Go to the VM instances page.
  2. Click the name of the desired instance.
  3. Click the Edit button at the top of the page.
  4. Scroll down to the Firewalls section.
  5. Check the Allow HTTP or Allow HTTPS options under your desired VPC network.
  6. Click Save.

In a similar manner, you can also disable external HTTP or HTTPS access for a VM by unchecking one or both checkboxes.

By allowing specific instances to be tagged for HTTP and HTTPS traffic rather than creating an overall firewall rule that applies to all instances, Google Cloud limits the possible security implications of allowing external traffic to all virtual machines in a project. However, if you would like to create a firewall rule that allows HTTP or HTTPS traffic to all virtual machine instances, you can create your own firewall rule:

  1. Go to the VPC networks page.
  2. Select the VPC network where you would like to apply the firewall rule.
  3. Under the Firewall rules section, click Add firewall rule.
  4. Name your firewall rule, and add tcp:80 in the Protocols & Ports box, or tcp:443 for HTTPS traffic.
  5. Click Create.
gcloud command-line tool

If you want to allow HTTP and HTTPS traffic to all virtual machines in a project, the following command creates a firewall rule that allows incoming HTTP and HTTPS requests from anywhere to any instance connected to this VPC network.

gcloud compute firewall-rules create FIREWALL_RULE --allow tcp:80,tcp:443


For example:

gcloud compute firewall-rules create sample-http \
 --description "Incoming http and https allowed." \
 --allow tcp:80,tcp:443
gcloud compute firewall-rules describe sample-http
allowed:
- IPProtocol: tcp
  ports:
  - '80'
  - '443'
creationTimestamp: '2014-06-13T13:27:12.206-07:00'
id: '5057780722612413546'
kind: compute#firewall
name: sample-http

Configuring a VM as a network proxy

You can design your VPC network so that only one instance has external access, and all other instances in the VPC network use that instance as a proxy server to the outside world. This is useful if you want to control access into or out of your VPC network, or reduce the cost of paying for multiple external IP addresses.

This particular example discusses how to set up a network proxy on VM instances that use a Debian image. It uses a gateway instance as a Squid proxy server, but this is only one way of setting up a proxy server.

To set up a Squid proxy server:

  1. Set up one instance with an external (static or ephemeral) IP address. For this example, name your instance gateway-instance.
  2. Set up one or more instances without external IP addresses by specifying gcloud compute instances create ... --no-address. For this example, call this instance hidden-instance.
  3. Learn how to connect from one instance to another because you will not be able to connect directly into your internal-only instances.
  4. Add a firewall rule to allow TCP traffic on port 3128:

    gcloud compute firewall-rules create [FIREWALL_RULE] --network [NETWORK] --allow tcp:3128
  5. Install Squid on gateway-instance, and configure it to allow access from any machines on the VPC network (RFC 1918, RFC 4193, and RFC 4291 IP spaces). This assumes that gateway-instance and hidden-instance are both connected to the same VPC network, which enables them to connect to each other.

    user@gateway-instance:~$ sudo apt-get install squid3

    Enable any machine on the local network to use the Squid3 server. The following sed commands uncomment and enable the http_access and acl localnet src entries in the Squid config file for local networks and machines.

    user@gateway-instance:~$ sudo sed -i 's:#\(http_access allow localnet\):\1:' /etc/squid/squid.conf
    user@gateway-instance:~$ sudo sed -i 's:#\(http_access deny to_localhost\):\1:' /etc/squid/squid.conf
     user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src 10.0.0.0/8.*\):\1:' /etc/squid/squid.conf
     user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src 172.16.0.0/12.*\):\1:' /etc/squid/squid.conf
     user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src 192.168.0.0/16.*\):\1:' /etc/squid/squid.conf
    user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src fc00\:\:/7.*\):\1:' /etc/squid/squid.conf
    user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src fe80\:\:/10.*\):\1:' /etc/squid/squid.conf
     # Prevent proxy access to the metadata server
     user@gateway-instance:~$ sudo tee -a /etc/squid/squid.conf <<'EOF'
     acl to_metadata dst 169.254.169.254
     http_access deny to_metadata
     EOF

     # Start Squid
     user@gateway-instance:~$ sudo service squid start
  6. Configure hidden-instance to use gateway-instance as its proxy. Use ssh to connect into hidden-instance and define its proxy URL addresses to point to gateway-instance on port 3128 (the default Squid configuration) as shown here:

    user@gateway-instance:~$ ssh hidden-instance
    user@hidden-instance:~$ sudo -s
    root@hidden-instance:~# echo "export http_proxy=\"http://gateway-instance.$(dnsdomainname):3128\"" >> /etc/profile.d/
    root@hidden-instance:~# echo "export https_proxy=\"http://gateway-instance.$(dnsdomainname):3128\"" >> /etc/profile.d/
    root@hidden-instance:~# echo "export ftp_proxy=\"http://gateway-instance.$(dnsdomainname):3128\"" >> /etc/profile.d/
    root@hidden-instance:~# echo "export no_proxy=,metadata," >> /etc/profile.d/

    Update sudoers to pass these env variables through.

    root@hidden-instance:~# cp /etc/sudoers /tmp/
    root@hidden-instance:~# chmod 640 /tmp/
     root@hidden-instance:~# echo "Defaults env_keep += \"ftp_proxy http_proxy https_proxy no_proxy\"" >>/tmp/
    root@hidden-instance:~# chmod 440 /tmp/
    root@hidden-instance:~# visudo -c -f /tmp/ && cp /tmp/ /etc/sudoers
  7. Exit sudo, load the variables, and run apt-get on hidden-instance. It should now work, using gateway-instance as a proxy. If gateway-instance were not serving as a proxy, apt-get would not work because hidden-instance has no direct connection to the Internet.

    root@hidden-instance:~# exit
    user@hidden-instance:~$ source ~/.profile
    user@hidden-instance:~$ sudo apt-get update
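
The sed invocations in step 5 all rely on the same substitution: strip the leading # from a matching acl localnet src line. A quick way to sanity-check that pattern locally, using a throwaway file rather than the real squid.conf:

```shell
# Demonstrate the uncomment pattern from step 5 on a sample config line.
conf=$(mktemp)
echo '#acl localnet src 10.0.0.0/8' > "$conf"
sed -i 's:#\(acl localnet src.*\):\1:' "$conf"
result=$(cat "$conf")
echo "$result"    # the leading '#' is gone, so the ACL is now active
rm -f "$conf"
```

On the gateway, the same substitution is applied in place to /etc/squid/squid.conf.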

Configuring a VM as a VPN gateway

You can use Strongswan VPN software to set up a VPN gateway on one of your instances. For most users, we recommend that you use Cloud VPN instead of Strongswan. With Cloud VPN, you don't need to create and configure a VM to run VPN software. Use Strongswan in cases where Cloud VPN doesn't provide required functionality.

  1. Create a VPC network that your on-premises network will connect to.

    gcloud compute networks create vpn-network --subnet-mode custom
  2. Create a subnet with an IP range that doesn't overlap with your on-premises subnet.

    gcloud compute networks subnets create vpn-subnet \
        --network vpn-network \
        --region us-central1 \
        --range \
  3. Create a VM instance in the vpn-subnet subnet. This instance will be your VPN gateway.

    gcloud compute instances create vpn-gateway --can-ip-forward \
        --subnet vpn-subnet \
        --zone us-central1-a \
        --tags vpn-gateway
  4. Look up and record your VPN gateway's internal and external IP addresses.

    gcloud compute instances describe --zone us-central1-a vpn-gateway

     The external IP address is the value of the natIP field. The internal IP address is the value of the networkIP field.

  5. Create a VM instance that communicates with clients in your on-premises network through the VPN gateway.

    gcloud compute instances create test-vpn \
        --subnet vpn-subnet \
        --tags vpn \
        --zone us-central1-a
  6. Create a route in the vpn-network network to route traffic through vpn-gateway if it is destined for your on-premises network.

    gcloud compute routes create vpnnetwork-to-gateway \
        --destination-range [ON_PREM_IP_RANGE] \
        --next-hop-address [VPN_GATEWAY_INTERNAL_IP] \
        --network vpn-network \
        --tags vpn

    The [VPN_GATEWAY_INTERNAL_IP] value is the internal IP address of your VPN gateway (the value of the networkIP field).

  7. Add the following firewall rules to your VPC network to accept incoming traffic.

    gcloud compute firewall-rules create ssh --source-ranges \
        --allow tcp:22 \
        --network vpn-network
    gcloud compute firewall-rules create allow-internal \
        --source-ranges \
        --allow tcp:1-65535,udp:1-65535,icmp \
        --network vpn-network
    gcloud compute firewall-rules create allow-ipsec-nat \
        --source-ranges [ON_PREM_VPN_GATEWAY_EXTERNAL_IP]/32 \
        --allow udp:4500,udp:500 \
        --network vpn-network \
        --target-tags vpn-gateway
    gcloud compute firewall-rules create from-onprem \
        --source-ranges [ON_PREM_NETWORK_ADDRESS_SPACE] \
        --allow tcp:1-65535,udp:1-65535,icmp \
        --network vpn-network \
        --target-tags vpn

    Create firewall rules in your on-premises network to accept incoming traffic from the VPC network.

  8. Connect to your VPN gateway instance.

  9. Install and configure Strongswan, the VPN software.

    From the home directory, create a file named ipsec.conf. Populate it with the following contents, replacing the placeholders with your environment's values:

    conn myconn

    Then, run the following commands, replacing [secret-key] with a secret key (a string value):

    $ sudo apt-get update
    $ sudo apt-get install strongswan -y
    $ echo "%any : PSK \"[secret-key]\"" | sudo tee /etc/ipsec.secrets > /dev/null
    $ sudo sysctl -w net.ipv4.ip_forward=1
    $ sudo sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/g' /etc/sysctl.conf
    $ sudo cp ipsec.conf /etc
    $ sudo ipsec restart

    You must also configure your on-premises VPN gateway to successfully establish a VPN tunnel.

    If your on-premises gateway machine is running a Debian-based operating system, you can use the same steps to install and configure Strongswan. For example, make a copy of your ipsec.conf file and switch the left and right IDs and subnets.

  10. Test your VPN tunnel by pinging an on-premises machine from the test-vpn instance:

    gcloud compute ssh test-vpn --command 'ping -c 3 [ON_PREM_INTERNAL_ADDRESS]'
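
Step 9 leaves the body of ipsec.conf to you, and the listing above shows only the conn myconn header. As a rough sketch (not the exact configuration from this guide), a PSK-based site-to-site stanza for this topology might look like the following, where every bracketed value is a placeholder for your environment:

```
conn myconn
  auto=start
  type=tunnel
  authby=psk
  left=%defaultroute
  leftid=[VPN_GATEWAY_EXTERNAL_IP]
  leftsubnet=[VPN_SUBNET_RANGE]
  right=[ON_PREM_GATEWAY_EXTERNAL_IP]
  rightsubnet=[ON_PREM_IP_RANGE]
  ike=aes256-sha256-modp2048
  esp=aes256-sha256
```

On the on-premises gateway, swap the left and right values, as step 9 suggests.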


If you are experiencing issues with your VPN setup based on the instructions above, try these tips to troubleshoot your setup:

  1. Check the status of your connection:

    $ sudo ipsec status

    If myconn isn't listed, start up the connection:

    $ sudo ipsec up myconn
  2. Determine whether the two VPN endpoints are able to communicate at all.

    Use netcat to send VPN-like traffic (UDP, port 4500). Run the following command on your local VPN endpoint:

    $ echo | nc -u [vpn-vm-gateway-external-address] 4500

    Run tcpdump on the receiving end to determine that your VM instance can receive the packet on port 4500:

    $ tcpdump -nn -n host [public-ip-of-local-VPN-gateway-machine] -i any
  3. Turn on more verbose logging by adding the following lines to your ipsec.conf files:

    config setup
      charondebug="ike 3, mgr 3, chd 3, net 3"
    conn myconn

    Next, retry your connection. Although the connection should still fail, you can check the log for errors. The log file should be located at /var/log/charon.log on your VM instance.

Configuring a VM as a NAT gateway

You can create more complicated networking scenarios by making changes to the routes collection. This section describes how you can set up a network address translation (NAT) gateway instance that can route traffic from internal-only virtual machine instances to the Internet. This lets you use one external IP address to send traffic from multiple virtual machine instances while exposing only a single virtual machine to the Internet.

  1. To start, create a VPC network to host your virtual machine instances for this scenario.

    gcloud compute networks create custom-network1 \
        --subnet-mode custom
  2. Create a subnet for the us-central1 region.

    gcloud compute networks subnets create subnet-us-central \
        --network custom-network1 \
        --region us-central1 \
  3. Create firewall rules to allow SSH connections in the new network that you just created.

    gcloud compute firewall-rules create custom-network1-allow-ssh \
        --allow tcp:22 \
        --network custom-network1
    gcloud compute firewall-rules create custom-network1-allow-internal \
        --allow tcp:1-65535,udp:1-65535,icmp \
        --source-ranges \
        --network custom-network1
  4. Create a virtual machine to act as a NAT gateway on custom-network1.

    gcloud compute instances create nat-gateway --network custom-network1 \
        --subnet subnet-us-central \
        --can-ip-forward \
        --zone us-central1-a \
        --image-family debian-9 \
        --image-project debian-cloud \
        --tags nat
  5. Tag any virtual machine instances without an external IP address that will use the gateway instance with the tag no-ip, or create a new virtual machine without an external IP address and tag the instance with the no-ip tag.

    • Add tags to an existing instance.
    gcloud compute instances add-tags existing-instance --tags no-ip
    • Alternatively, create a new virtual machine without an external IP address.
    gcloud compute instances create example-instance --network custom-network1 \
        --subnet subnet-us-central \
        --no-address \
        --zone us-central1-a \
        --image-family debian-9 \
        --image-project debian-cloud \
        --tags no-ip
  6. Create a route to send traffic destined to the Internet through your gateway instance.

    gcloud compute routes create no-ip-internet-route \
        --network custom-network1 \
        --destination-range \
        --next-hop-instance nat-gateway \
        --next-hop-instance-zone us-central1-a \
        --tags no-ip --priority 800

     Setting the priority of this route ensures that it wins over any other conflicting routes; 1000 is the default priority, and a value lower than 1000 takes precedence.

  7. Next, log on to your gateway instance and configure iptables to NAT internal traffic to the Internet.

    gcloud compute ssh nat-gateway --zone us-central1-a

    On your instance, configure iptables:

    $ sudo sysctl -w net.ipv4.ip_forward=1
    $ sudo iptables -t nat -A POSTROUTING -o $(/sbin/ifconfig | head -1 | awk -F: {'print $1'}) -j MASQUERADE

    The first sudo command tells the kernel that you want to allow IP forwarding. The second sudo command masquerades packets received from internal instances as if they were sent from the NAT gateway instance.

    To inspect your iptables NAT rules, use the list option:

    $ sudo iptables -v -L -t nat

    Check that the output is similar to the following example:

    Chain PREROUTING (policy ACCEPT 5 packets, 3924 bytes)
     pkts bytes target     prot opt in     out     source               destination
    Chain INPUT (policy ACCEPT 5 packets, 3924 bytes)
     pkts bytes target     prot opt in     out     source               destination
    Chain OUTPUT (policy ACCEPT 64 packets, 4164 bytes)
     pkts bytes target     prot opt in     out     source               destination
    Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination
     64  4164 MASQUERADE  all  --  any    eth0    anywhere             anywhere
  8. (Optional) If you want these settings to persist across future reboots:

    $ echo "net.ipv4.ip_forward=1" | sudo tee /etc/sysctl.d/20-natgw.conf
    $ sudo apt-get install iptables-persistent
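
The POSTROUTING command in step 7 extracts the interface name by parsing ifconfig output, which is brittle on images where net-tools is absent or the output format differs. A small sketch of an alternative that parses `ip route` output instead; the function name and the canned route line are illustrative:

```shell
# default_iface reads "ip route" output on stdin and prints the device
# that carries the default route.
default_iface() {
  awk '/^default/ { for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1) }'
}

# On a real gateway you would feed it live data:
#   ip -o -4 route show to default | default_iface
# Here, a canned line like the one a typical GCE VM prints:
iface=$(echo 'default via 10.128.0.1 dev eth0 proto dhcp' | default_iface)
echo "$iface"
```

The result can then replace the ifconfig pipeline, for example: sudo iptables -t nat -A POSTROUTING -o "$iface" -j MASQUERADE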

Building high availability and high bandwidth NAT gateways

This section describes how to set up multiple NAT gateways with Equal Cost Multi-Path (ECMP) routing and autohealing enabled for a more resilient and high-bandwidth deployment.

Google Cloud uses RFC 1918 private IP addresses for virtual machines. If these VMs need access to resources on the public internet, a NAT is required. A single NAT gateway architecture is sufficient for simple scenarios. However, higher throughput or higher availability requires a more resilient architecture.

Configuring the gateways

In instances where multiple routes have the same priority, Google Cloud uses ECMP routing to distribute traffic. In this case, you create several NAT gateways to receive parts of the traffic through ECMP. The NAT gateways then forward the traffic to external hosts with their public IP addresses.

The following diagram shows this configuration.

multiple gateway configuration

For higher resiliency, you place each gateway in a separate managed instance group of size 1 and attach a simple health check so that the gateways restart automatically if they fail. The gateways are in separate instance groups, each with its own instance template, so that each gateway can keep its own static external IP address. You provision three n1-standard-2 NAT gateways in this example, but you can use any other number or size of gateway that you want. For example, n1-standard-2 instances are capped at 4 Gbps of network traffic; if you need to handle a higher volume of traffic, you might choose n1-standard-8.

  1. Create a VPC network (if needed). If you're not adding these gateways to an existing VPC, create a VPC network and subnet for them. If you are adding them to an existing VPC, skip to the second step and modify the regions as appropriate for your environment.

    1. Using Cloud Shell, create a custom VPC associated with your Google Cloud project. This VPC allows you to use non-default IP addressing, but does not include any default firewall rules:

      gcloud compute networks create example-vpc --subnet-mode custom
    2. Create a subnet within this VPC, and specify a region and IP range. For this tutorial, use the us-east1 region:

      gcloud compute networks subnets create example-east \
          --network example-vpc --range --region us-east1
  2. Reserve and store three static IP addresses.

    1. Reserve and store an address named nat-1 in the us-east1 region:

      gcloud compute addresses create nat-1 --region us-east1
      nat_1_ip=$(gcloud compute addresses describe nat-1 \
          --region us-east1 --format='value(address)')
    2. Reserve and store an address named nat-2 in us-east1:

      gcloud compute addresses create nat-2 --region us-east1
      nat_2_ip=$(gcloud compute addresses describe nat-2 \
          --region us-east1 --format='value(address)')
    3. Reserve and store an address named nat-3 in us-east1:

      gcloud compute addresses create nat-3 --region us-east1
      nat_3_ip=$(gcloud compute addresses describe nat-3 \
          --region us-east1 --format='value(address)')
  3. Create three instance templates with reserved IPs.

    1. Copy the startup config:

      gsutil cp gs://nat-gw-template/ .

      If you cannot access the startup script, copy it from the Startup script section.

    2. Create a nat-1 instance template:

      gcloud compute instance-templates create nat-1 \
          --machine-type n1-standard-2 --can-ip-forward --tags natgw \
          --address $nat_1_ip
    3. Create a nat-2 instance template:

      gcloud compute instance-templates create nat-2 \
          --machine-type n1-standard-2 --can-ip-forward --tags natgw \
          --address $nat_2_ip
    4. Create a nat-3 instance template:

      gcloud compute instance-templates create nat-3 \
          --machine-type n1-standard-2 --can-ip-forward --tags natgw \
          --address $nat_3_ip

      The n1-standard-2 machine type has two vCPUs and can use up to 4 Gbps of network bandwidth. If you need more bandwidth, you might want to choose a different machine type. Bandwidth scales at 2 Gbps per vCPU, up to 16 Gbps on an 8-vCPU machine type.

  4. Create a health check to monitor responsiveness:

    gcloud compute health-checks create http nat-health-check --check-interval 30 \
        --healthy-threshold 1 --unhealthy-threshold 5 --request-path /health-check
    gcloud compute firewall-rules create "natfirewall" \
        --allow tcp:80 --target-tags natgw \
        --source-ranges "",""

     If a system fails and can't respond to HTTP traffic, it is restarted. The accompanying firewall rule lets the health-check probes reach the gateways on TCP port 80.

  5. Create a VM instance group for each NAT gateway:

    gcloud compute instance-groups managed create nat-1 --size=1 --template=nat-1 --zone=us-east1-b
    gcloud compute instance-groups managed create nat-2 --size=1 --template=nat-2 --zone=us-east1-c
    gcloud compute instance-groups managed create nat-3 --size=1 --template=nat-3 --zone=us-east1-d
  6. Set up autohealing to restart unresponsive NAT gateways:

    gcloud compute instance-groups managed update nat-1 \
        --health-check nat-health-check --initial-delay 120 --zone us-east1-b
    nat_1_instance=$(gcloud compute instances list --format='value(name)' --filter='name ~nat-1')
    gcloud compute instance-groups managed update nat-2 \
        --health-check nat-health-check --initial-delay 120 --zone us-east1-c
    nat_2_instance=$(gcloud compute instances list --format='value(name)' --filter='name ~nat-2')
    gcloud compute instance-groups managed update nat-3 \
        --health-check nat-health-check --initial-delay 120 --zone us-east1-d
    nat_3_instance=$(gcloud compute instances list --format='value(name)' --filter='name ~nat-3')
  7. Add default routes to your VPC network, which apply to instances that use the NAT:

    gcloud compute routes create natroute1 --destination-range \
        --tags no-ip --priority 800 --next-hop-instance-zone us-east1-b \
        --next-hop-instance $nat_1_instance
    gcloud compute routes create natroute2 --destination-range \
        --tags no-ip --priority 800 --next-hop-instance-zone us-east1-c \
        --next-hop-instance $nat_2_instance
    gcloud compute routes create natroute3 --destination-range \
        --tags no-ip --priority 800 --next-hop-instance-zone us-east1-d \
        --next-hop-instance $nat_3_instance
  8. Tag the instances that you want to use the NAT:

    gcloud compute instances add-tags natted-servers --tags no-ip
  9. Test NAT functionality. With your gateways configured and your guest VMs tagged, ping external hosts without giving your VMs external IPs.


    Example output:

    PING ( 56 data bytes
    64 bytes from icmp_seq=0 ttl=52 time=0.618 ms
    64 bytes from icmp_seq=1 ttl=52 time=0.325 ms
    64 bytes from icmp_seq=2 ttl=52 time=0.443 ms
    64 bytes from icmp_seq=3 ttl=52 time=0.314 ms
    64 bytes from icmp_seq=4 ttl=52 time=0.386 ms
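
Steps 2 through 7 repeat near-identical commands for nat-1, nat-2, and nat-3. A loop can generate those commands for review before you run them; the zone_for helper is illustrative, and its mapping mirrors the zones used in the steps above:

```shell
# Map each gateway index to the zone used for it in the steps above.
zone_for() {
  case "$1" in
    1) echo us-east1-b ;;
    2) echo us-east1-c ;;
    3) echo us-east1-d ;;
  esac
}

# Print (rather than run) the per-gateway commands so they can be reviewed.
for i in 1 2 3; do
  echo "gcloud compute addresses create nat-$i --region us-east1"
  echo "gcloud compute instance-groups managed create nat-$i --size=1 --template=nat-$i --zone=$(zone_for "$i")"
done
```

After verifying the printed commands, run them individually or pipe the output to a shell.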

Issues to consider

This configuration provides three NAT gateways in the us-east1 region, each capable of 2 Gbps. ECMP load balancing isn't perfect, though, and an individual flow is not spread across multiple links.

  • A Terraform module for this configuration is also available for automating deployments.
  • This configuration is best for ephemeral or non-stateful outbound links. If the size of the NAT gateway pool changes, TCP connections might be rebalanced, which could result in resetting an established connection.
  • The nodes are not automatically updated, so if the default Debian installation presents a security risk, you will need to update the instances manually.
  • These instances are all in the us-east1 region. If your VMs are in other zones, you might get better performance by moving gateways closer to those zones.
  • Bandwidth per gateway is up to 2 Gbps per core unidirectional. During a gateway failure, traffic is distributed to the remaining gateways, but because running flows are not reprogrammed, traffic does not immediately resettle when the gateway comes back online. So make sure you allow enough overhead when sizing.
  • To be alerted of unexpected results, use Monitoring to monitor the managed instance groups and network traffic.

Startup script

The following startup script is referenced in step 3a:

#!/bin/bash
# Enable IP forwarding and NAT for traffic from internal instances.
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/20-natgw.conf
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Write a minimal HTTP health-check server (Python 2, as shipped on the
# Debian images used above). The script filename is assumed here; the
# original listing omits it.
cat <<EOF > /usr/local/sbin/health-check-server.py
from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer
import subprocess

PORT_NUMBER = 80               # must match the health check and firewall rule
PING_HOST = "www.google.com"   # assumed connectivity target

def connectivityCheck():
  try:
    subprocess.check_call(["ping", "-c", "1", PING_HOST])
    return True
  except subprocess.CalledProcessError:
    return False

# This class handles any incoming request
class myHandler(BaseHTTPRequestHandler):
  def do_GET(self):
    if self.path == '/health-check':
      if connectivityCheck():
        self.send_response(200)
      else:
        self.send_response(503)
    else:
      self.send_response(404)

try:
  server = HTTPServer(("", PORT_NUMBER), myHandler)
  print "Started httpserver on port " , PORT_NUMBER
  # Wait forever for incoming http requests
  server.serve_forever()
except KeyboardInterrupt:
  print "^C received, shutting down the web server"
  server.socket.close()
EOF

nohup python /usr/local/sbin/health-check-server.py >/dev/null 2>&1 &

Migrate an instance-based NAT gateway to Cloud NAT

If you have an instance-based NAT gateway but would like to migrate to Cloud NAT, do the following:

  1. Configure Cloud NAT in the same region where you have the instance-based gateway.
  2. Delete the static route or routes sending packets to the instance-based NAT gateway. Note that you still should have a default gateway route for your network.
  3. Confirm that traffic is flowing through Cloud NAT.
  4. Once you have confirmed that everything is working, delete your instance-based NAT gateways.
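
Step 1 can be done with a Cloud Router plus a Cloud NAT configuration. A sketch with placeholder values; the router and NAT configuration names are illustrative:

```
gcloud compute routers create nat-router \
    --network [NETWORK] \
    --region [REGION]
gcloud compute routers nats create nat-config \
    --router nat-router \
    --region [REGION] \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```

This configuration lets Cloud NAT allocate external IP addresses automatically and serve all subnet ranges in the region.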
