This page describes special networking configurations of Compute Engine virtual machine (VM) instances, such as the following:
- Setting up an external HTTP connection to a VM
- Configuring a VM as a network proxy
- Configuring a VM as a VPN gateway
- Configuring a VM as a NAT gateway
- Building high availability and high bandwidth NAT gateways
Setting up an external HTTP connection to a VM
The default firewall rules do not allow HTTP or HTTPS connections to your instances. However, it is fairly simple to add a rule that does allow them. Note that a VM must have an external (static or ephemeral) IP address before it can receive traffic from outside its Virtual Private Cloud (VPC) network.
You can use the Cloud Console to create an overall firewall rule for all instances on the VPC network, or you can allow individual instances access to HTTP and HTTPS connections by selecting the respective option when you create that instance. The latter option is described first, because it provides more control over individual instances.
- In the Cloud Console, go to the VM Instances page.
- Click Create instance.
- In the Firewall section, select Allow HTTP traffic and Allow HTTPS traffic.
- Click Create to create the instance.
By selecting these checkboxes, the VPC network automatically creates a default-allow-http or default-allow-https rule that applies to all instances with either the http-server or https-server tags. Your new instance is also tagged with the appropriate tag depending on your checkbox selection.
If you already have existing default-allow-http and default-allow-https firewall rules, you can apply the firewall rule to existing instances by enabling the Allow HTTP or Allow HTTPS options on the instance's details page.
- Go to the VM instances page.
- Click the name of the desired instance.
- Click the Edit button at the top of the page.
- Scroll down to the Firewalls section.
- Check the Allow HTTP or Allow HTTPS options under your desired VPC network.
- Click Save.
In a similar manner, you can also disable external HTTP or HTTPS access for a VM by unchecking one or both checkboxes.
By allowing specific instances to be tagged for HTTP and HTTPS traffic rather than creating an overall firewall rule that applies to all instances, Google Cloud limits the possible security implications of allowing external traffic to all virtual machines in a project. However, if you would like to create a firewall rule that allows HTTP or HTTPS traffic to all virtual machine instances, you can create your own firewall rule:
- Go to the VPC networks page.
- Select the VPC network where you would like to apply the firewall rule.
- Under the Firewall rules section, click Add firewall rule.
- Name your firewall rule, and add tcp:80 in the Protocols & Ports box for HTTP traffic, or tcp:443 for HTTPS traffic.
- Click Create.
If you want to allow HTTP and HTTPS traffic to all virtual machines in a project, the following command creates a firewall rule that allows incoming HTTP and HTTPS requests from anywhere to any instance connected to this VPC network.
gcloud compute firewall-rules create FIREWALL_RULE --allow tcp:80,tcp:443
gcloud compute firewall-rules create sample-http \
 --description "Incoming http and https allowed." \
 --allow tcp:80,tcp:443
gcloud compute firewall-rules describe sample-http

allowed:
- IPProtocol: tcp
  ports:
  - '80'
  - '443'
creationTimestamp: '2014-06-13T13:27:12.206-07:00'
id: '5057780722612413546'
kind: compute#firewall
name: sample-http
network: https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/global/networks/default
selfLink: https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/global/firewalls/sample-http
sourceRanges:
- 0.0.0.0/0
Configuring a VM as a network proxy
You can design your VPC network so that only one instance has external access, and all other instances in the VPC network use that instance as a proxy server to the outside world. This is useful if you want to control access into or out of your VPC network, or reduce the cost of paying for multiple external IP addresses.
This example discusses how to set up a network proxy on VM instances that use a Debian image. It uses a gateway instance as a Squid proxy server, but this is only one way of setting up a proxy server.
To set up a Squid proxy server:
- Set up one instance with an external (static or ephemeral) IP address. For this example, name your instance gateway-instance.
- Set up one or more instances without external IP addresses by specifying gcloud compute instances create ... --no-address. For this example, call this instance hidden-instance.
- Learn how to connect from one instance to another because you will not be able to connect directly into your internal-only instances.
Add a firewall to allow tcp traffic on port 3128:
gcloud compute firewall-rules create [FIREWALL_RULE] --network [NETWORK] --allow tcp:3128
Install Squid on gateway-instance, and configure it to allow access from any machines on the VPC network (RFC 1918, RFC 4193, and RFC 4291 IP spaces). This assumes that gateway-instance and hidden-instance are both connected to the same VPC network, which enables them to connect to each other.
user@gateway-instance:~$ sudo apt-get install squid3
Enable any machine on the local network to use the Squid3 server. The following sed commands uncomment and enable the acl localnet src entries in the Squid config files for local networks and machines.
user@gateway-instance:~$ sudo sed -i 's:#\(http_access allow localnet\):\1:' /etc/squid/squid.conf
user@gateway-instance:~$ sudo sed -i 's:#\(http_access deny to_localhost\):\1:' /etc/squid/squid.conf
user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src 10.0.0.0/8.*\):\1:' /etc/squid/squid.conf
user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src 172.16.0.0/12.*\):\1:' /etc/squid/squid.conf
user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src 192.168.0.0/16.*\):\1:' /etc/squid/squid.conf
user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src fc00\:\:/7.*\):\1:' /etc/squid/squid.conf
user@gateway-instance:~$ sudo sed -i 's:#\(acl localnet src fe80\:\:/10.*\):\1:' /etc/squid/squid.conf
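Each sed command above strips the leading `#` from a matching line while leaving the rest of the line intact. As a quick illustration (the sample line below is a stand-in for the real squid.conf entry), you can run the same substitution on a single line:

```shell
# Apply the same uncomment substitution to a sample squid.conf line
line='#acl localnet src 10.0.0.0/8'
echo "$line" | sed 's:#\(acl localnet src 10.0.0.0/8.*\):\1:'
# prints: acl localnet src 10.0.0.0/8
```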
# Prevent proxy access to metadata server
user@gateway-instance:~$ sudo tee -a /etc/squid/squid.conf <<'EOF'
acl to_metadata dst 169.254.169.254
http_access deny to_metadata
EOF
# Start Squid
user@gateway-instance:~$ sudo service squid start
Configure hidden-instance to use gateway-instance as its proxy. Use ssh to connect to hidden-instance and define its proxy URL addresses to point to gateway-instance on port 3128 (the default Squid configuration) as shown here:
user@gateway-instance:~$ ssh hidden-instance
user@hidden-instance:~$ sudo -s
root@hidden-instance:~# echo "export http_proxy=\"http://gateway-instance.$(dnsdomainname):3128\"" >> /etc/profile.d/proxy.sh
root@hidden-instance:~# echo "export https_proxy=\"http://gateway-instance.$(dnsdomainname):3128\"" >> /etc/profile.d/proxy.sh
root@hidden-instance:~# echo "export ftp_proxy=\"http://gateway-instance.$(dnsdomainname):3128\"" >> /etc/profile.d/proxy.sh
root@hidden-instance:~# echo "export no_proxy=169.254.169.254,metadata,metadata.google.internal" >> /etc/profile.d/proxy.sh
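To verify the variables before relying on them, you can source a copy of the profile script and echo the values. The file below is written to a temporary path, and the hostname is a placeholder, not a value taken from your network:

```shell
# Write a sample proxy profile (placeholder hostname) and source it
proxy_sh=$(mktemp)
cat > "$proxy_sh" <<'EOF'
export http_proxy="http://gateway-instance.example.internal:3128"
export no_proxy=169.254.169.254,metadata,metadata.google.internal
EOF
. "$proxy_sh"
echo "$http_proxy"   # the proxy URL every HTTP client should now use
echo "$no_proxy"     # hosts that must bypass the proxy (metadata server)
```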
Update sudoers to pass these env variables through.
root@hidden-instance:~# cp /etc/sudoers /tmp/sudoers.new
root@hidden-instance:~# chmod 640 /tmp/sudoers.new
root@hidden-instance:~# echo "Defaults env_keep += \"ftp_proxy http_proxy https_proxy no_proxy\"" >>/tmp/sudoers.new
root@hidden-instance:~# chmod 440 /tmp/sudoers.new
root@hidden-instance:~# visudo -c -f /tmp/sudoers.new && cp /tmp/sudoers.new /etc/sudoers
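The escaping in the echo above is the part most likely to go wrong: the inner quotes must survive into the sudoers file so that env_keep receives one quoted list. This sketch prints the exact line that should land in /etc/sudoers:

```shell
# Print the sudoers line produced by the escaped echo above
echo "Defaults env_keep += \"ftp_proxy http_proxy https_proxy no_proxy\""
# prints: Defaults env_keep += "ftp_proxy http_proxy https_proxy no_proxy"
```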
Exit sudo, load the variables, and run apt-get on hidden-instance. It should now work using gateway-instance as a proxy. If gateway-instance were not serving as a proxy, apt-get would not work because hidden-instance has no direct connection to the Internet.
user@hidden-instance:~$ source ~/.profile
user@hidden-instance:~$ sudo apt-get update
Configuring a VM as a VPN gateway
You can use Strongswan VPN software to set up a VPN gateway on one of your instances. For most users, we recommend that you use Cloud VPN instead of Strongswan. With Cloud VPN, you don't need to create and configure a VM to run VPN software. Use Strongswan in cases where Cloud VPN doesn't provide required functionality.
Create a VPC network that your on-premises network will connect to.
gcloud compute networks create vpn-network --subnet-mode custom
Create a subnet with an IP range that doesn't overlap with your on-premises subnet.
gcloud compute networks subnets create vpn-subnet \
 --network vpn-network \
 --region us-central1 \
 --range 10.0.0.0/24
Create a VM instance in the
vpn-subnetsubnet. This instance will be your VPN gateway.
gcloud compute instances create vpn-gateway --can-ip-forward \
 --subnet vpn-subnet \
 --zone us-central1-a \
 --tags vpn-gateway
Look up and record your VPN gateway's internal and external IP addresses.
gcloud compute instances describe --zone us-central1-a vpn-gateway
The external IP address is the value of the natIP field. The internal IP address is the value of the networkIP field, such as 10.0.0.2.
Create a VM instance that communicates with clients in your on-premises network through the VPN gateway.
gcloud compute instances create test-vpn \
 --subnet vpn-subnet \
 --tags vpn \
 --zone us-central1-a
Create a route in the vpn-network network to route traffic through vpn-gateway if it is destined for your on-premises network.

gcloud compute routes create vpnnetwork-to-gateway \
 --destination-range [ON_PREM_IP_RANGE] \
 --next-hop-address [VPN_GATEWAY_INTERNAL_IP] \
 --network vpn-network \
 --tags vpn

The [VPN_GATEWAY_INTERNAL_IP] value is the internal IP address of your VPN gateway (the value of the networkIP field).
Add the following firewall rules to your VPC network to accept incoming traffic.
gcloud compute firewall-rules create ssh \
 --source-ranges 0.0.0.0/0 \
 --allow tcp:22 \
 --network vpn-network

gcloud compute firewall-rules create allow-internal \
 --source-ranges 10.0.0.0/24 \
 --allow tcp:1-65535,udp:1-65535,icmp \
 --network vpn-network

gcloud compute firewall-rules create allow-ipsec-nat \
 --source-ranges [ON_PREM_VPN_GATEWAY_EXTERNAL_IP]/32 \
 --allow udp:4500,udp:500 \
 --network vpn-network \
 --target-tags vpn-gateway

gcloud compute firewall-rules create from-onprem \
 --source-ranges [ON_PREM_NETWORK_ADDRESS_SPACE] \
 --allow tcp:1-65535,udp:1-65535,icmp \
 --network vpn-network \
 --target-tags vpn
Create firewall rules in your on-premises network to accept incoming traffic from the VPC network.
Connect to your VPN gateway instance.
Install and configure Strongswan, the VPN software.
From the home directory, create a file named ipsec.conf. Populate it with the following contents, replacing the placeholders with your environment's values:

conn myconn
  authby=psk
  auto=start
  dpdaction=hold
  esp=aes128-sha1-modp2048!
  forceencaps=yes
  ike=aes128-sha1-modp2048!
  keyexchange=ikev2
  mobike=no
  type=tunnel
  left=%any
  leftid=[VPN_GATEWAY_EXTERNAL_IP_ADDRESS]
  leftsubnet=10.0.0.0/24
  leftauth=psk
  leftikeport=4500
  right=[ON_PREM_EXTERNAL_IP_ADDRESS]
  rightsubnet=[ON_PREM_ADDRESS_SPACE]
  rightauth=psk
  rightikeport=4500
Then, run the following commands, replacing [secret-key] with a secret key (a string value):
$ sudo apt-get update
$ sudo apt-get install strongswan -y
$ echo "%any : PSK \"[secret-key]\"" | sudo tee /etc/ipsec.secrets > /dev/null
$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/g' /etc/sysctl.conf
$ sudo cp ipsec.conf /etc
$ sudo ipsec restart
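For reference, the echo | tee pipeline above writes a single line to /etc/ipsec.secrets. A sketch of the line it produces, with 'example-psk' standing in for your real secret key:

```shell
# Build the PSK entry the way the pipeline above does ('example-psk' is a placeholder)
secret='example-psk'
echo "%any : PSK \"$secret\""
# prints: %any : PSK "example-psk"
```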
You must also configure your on-premises VPN gateway to successfully establish a VPN tunnel.
If your on-premises gateway machine is running a Debian-based operating system, you can use the same steps to install and configure Strongswan. For example, make a copy of your ipsec.conf file and switch the left and right IDs and subnets.

Test your VPN tunnel by pinging an on-premises machine from the test-vpn instance:

gcloud compute ssh test-vpn --command 'ping -c 3 [ON_PREM_INTERNAL_ADDRESS]'
If you are experiencing issues with your VPN setup based on the instructions above, try these tips to troubleshoot your setup:
Check the status of your connection:
$ sudo ipsec status
If myconn isn't listed, start the connection:
$ sudo ipsec up myconn
Determine whether the two VPN endpoints are able to communicate at all.
Use netcat to send VPN-like traffic (UDP, port 4500). Run the following command on your local VPN endpoint:
$ echo | nc -u [vpn-vm-gateway-external-address] 4500
Use tcpdump on the receiving end to determine that your VM instance can receive the packet on port 4500:
$ tcpdump -nn -n host [public-ip-of-local-VPN-gateway-machine] -i any
Turn on more verbose logging by adding the following lines to your ipsec.conf file:

config setup
  charondebug="ike 3, mgr 3, chd 3, net 3"

conn myconn
  authby=psk
  auto=start
  ...
Next, retry your connection. Although the connection should still fail, you can check the log for errors. The log file should be located at /var/log/charon.log on your VM instance.
Configuring a VM as a NAT gateway
You can create more complicated networking scenarios by making changes to the routes collection. This section describes how you can set up a network address translation (NAT) gateway instance that can route traffic from internal-only virtual machine instances to the Internet. This lets you use one external IP address to send traffic from multiple virtual machine instances while exposing only a single virtual machine to the Internet.
To start, create a VPC network to host your virtual machine instances for this scenario.
gcloud compute networks create custom-network1 \
 --subnet-mode custom
Create a subnet for the us-central1 region.

gcloud compute networks subnets create subnet-us-central \
 --network custom-network1 \
 --region us-central1 \
 --range 192.168.1.0/24
Create firewall rules to allow ssh connections in the new network you just created.
gcloud compute firewall-rules create custom-network1-allow-ssh \
 --allow tcp:22 \
 --network custom-network1

gcloud compute firewall-rules create custom-network1-allow-internal \
 --allow tcp:1-65535,udp:1-65535,icmp \
 --source-ranges 192.168.1.0/24 \
 --network custom-network1
Create a virtual machine to act as a NAT gateway on custom-network1.

gcloud compute instances create nat-gateway --network custom-network1 \
 --subnet subnet-us-central \
 --can-ip-forward \
 --zone us-central1-a \
 --image-family debian-9 \
 --image-project debian-cloud \
 --tags nat
Tag any existing virtual machine instances without an external IP address that will use the gateway instance with the tag no-ip, or create a new virtual machine without an external IP address and tag the instance with the no-ip tag.
- Add tags to an existing instance.
gcloud compute instances add-tags existing-instance --tags no-ip
- Alternatively, create a new virtual machine without an external IP address.
gcloud compute instances create example-instance --network custom-network1 \
 --subnet subnet-us-central \
 --no-address \
 --zone us-central1-a \
 --image-family debian-9 \
 --image-project debian-cloud \
 --tags no-ip
Create a route to send traffic destined to the Internet through your gateway instance.
gcloud compute routes create no-ip-internet-route \
 --network custom-network1 \
 --destination-range 0.0.0.0/0 \
 --next-hop-instance nat-gateway \
 --next-hop-instance-zone us-central1-a \
 --tags no-ip --priority 800
Setting the priority of this route ensures that it takes precedence over any conflicting routes. The default priority is 1000, and lower values take precedence over higher ones.
Next, log on to your gateway instance and configure iptables to NAT internal traffic to the Internet.
gcloud compute ssh nat-gateway --zone us-central1-a
On your instance, configure iptables:
$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
The first sudo command tells the kernel that you want to allow IP forwarding. The second sudo command masquerades packets received from internal instances as if they were sent from the NAT gateway instance.
To inspect your iptables NAT rules, use the list option:
$ sudo iptables -v -L -t nat
Check that the output is similar to the following example:
Chain PREROUTING (policy ACCEPT 5 packets, 3924 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain INPUT (policy ACCEPT 5 packets, 3924 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 64 packets, 4164 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target      prot opt in     out     source               destination
   64  4164 MASQUERADE  all  --  any    eth0    anywhere             anywhere
(Optional) If you want these settings to persist across future reboots:
$ echo "net.ipv4.ip_forward=1" | sudo tee /etc/sysctl.d/20-natgw.conf
$ sudo apt-get install iptables-persistent
Building high availability and high bandwidth NAT gateways
This section describes how to set up multiple NAT gateways with Equal Cost Multi-Path (ECMP) routing and autohealing enabled for a more resilient and high-bandwidth deployment.
Google Cloud uses RFC 1918 private IP addresses for virtual machines. If these VMs need access to resources on the public internet, a NAT is required. A single NAT gateway architecture is sufficient for simple scenarios. However, higher throughput or higher availability requires a more resilient architecture.
Configuring the gateways
In instances where multiple routes have the same priority, Google Cloud uses ECMP routing to distribute traffic. In this case, you create several NAT gateways to receive parts of the traffic through ECMP. The NAT gateways then forward the traffic to external hosts with their public IP addresses.
The following diagram shows this configuration.
For higher resiliency, you place each gateway in a separate managed instance
group of size 1 and attach a simple health check to ensure that the gateways
automatically restart if they fail. The gateways are in separate instance
groups so that each can have its own static external IP attached to its
instance template.
You provision three n1-standard-2 NAT gateways in this example, but you can
use any other number or size of gateway that you want. For example,
n1-standard-2 instances are capped at 4 Gbps of network traffic; if you need
to handle a higher volume of traffic, you might choose larger instances.
Create a VPC network (if needed). If you're not adding these gateways to an existing VPC, create a VPC network and subnet for them. If you are adding them to an existing VPC, skip to the second step and modify the regions as appropriate for your environment.
Using Cloud Shell, create a custom VPC associated with your Google Cloud project. This VPC allows you to use non-default IP addressing, but does not include any default firewall rules:
gcloud compute networks create example-vpc --subnet-mode custom
Create a subnet within this VPC, and specify a region and IP range. For this tutorial, use 10.0.1.0/24 and the us-east1 region.

gcloud compute networks subnets create example-east \
 --network example-vpc --range 10.0.1.0/24 --region us-east1
Reserve and store three static IP addresses.

Reserve and store an address named nat-1:

gcloud compute addresses create nat-1 --region us-east1
nat_1_ip=$(gcloud compute addresses describe nat-1 \
 --region us-east1 --format='value(address)')

Reserve and store an address named nat-2:

gcloud compute addresses create nat-2 --region us-east1
nat_2_ip=$(gcloud compute addresses describe nat-2 \
 --region us-east1 --format='value(address)')

Reserve and store an address named nat-3:

gcloud compute addresses create nat-3 --region us-east1
nat_3_ip=$(gcloud compute addresses describe nat-3 \
 --region us-east1 --format='value(address)')
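The commands above all use the same shell pattern: run gcloud with --format='value(address)' so that only the bare address is printed, then capture stdout into a variable with $( ). The mechanics can be sketched with a stub function standing in for the gcloud call (the address below is from the documentation range 203.0.113.0/24, not a real reservation):

```shell
# Stub in place of: gcloud compute addresses describe nat-1 --region us-east1 --format='value(address)'
fake_describe() { echo "203.0.113.7"; }
nat_1_ip=$(fake_describe)   # capture the single-value output
echo "$nat_1_ip"
# prints: 203.0.113.7
```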
Copy the startup config:
gsutil cp gs://nat-gw-template/startup.sh .
If you cannot access the startup script, copy it from the Startup script section.
Create three instance templates, one for each reserved IP address:

gcloud compute instance-templates create nat-1 \
 --machine-type n1-standard-2 --can-ip-forward --tags natgw \
 --metadata-from-file=startup-script=startup.sh --address $nat_1_ip

gcloud compute instance-templates create nat-2 \
 --machine-type n1-standard-2 --can-ip-forward --tags natgw \
 --metadata-from-file=startup-script=startup.sh --address $nat_2_ip

gcloud compute instance-templates create nat-3 \
 --machine-type n1-standard-2 --can-ip-forward --tags natgw \
 --metadata-from-file=startup-script=startup.sh --address $nat_3_ip
The n1-standard-2 machine type has two vCPUs and can use up to 4 Gbps of network bandwidth. If you need more bandwidth, you might want to choose a different host. Bandwidth scales at 2 Gbps per vCPU, up to 16 Gbps on an 8-vCPU host.
Create a health check to monitor responsiveness:
gcloud compute health-checks create http nat-health-check --check-interval 30 \
 --healthy-threshold 1 --unhealthy-threshold 5 --request-path /health-check
Create a firewall rule that allows the health-check probes to reach the gateways:

gcloud compute firewall-rules create "natfirewall" \
 --allow tcp:80 --target-tags natgw \
 --source-ranges "130.211.0.0/22","35.191.0.0/16"

If a system fails and can't respond to HTTP traffic, it is restarted.
Create a VM instance group for each NAT gateway:
gcloud compute instance-groups managed create nat-1 --size=1 --template=nat-1 --zone=us-east1-b
gcloud compute instance-groups managed create nat-2 --size=1 --template=nat-2 --zone=us-east1-c
gcloud compute instance-groups managed create nat-3 --size=1 --template=nat-3 --zone=us-east1-d
Set up autohealing to restart unresponsive NAT gateways:
gcloud compute instance-groups managed update nat-1 \
 --health-check nat-health-check --initial-delay 120 --zone us-east1-b
nat_1_instance=$(gcloud compute instances list --format='value(name)' --filter='name ~nat-1')
gcloud compute instance-groups managed update nat-2 \
 --health-check nat-health-check --initial-delay 120 --zone us-east1-c
nat_2_instance=$(gcloud compute instances list --format='value(name)' --filter='name ~nat-2')
gcloud compute instance-groups managed update nat-3 \
 --health-check nat-health-check --initial-delay 120 --zone us-east1-d
nat_3_instance=$(gcloud compute instances list --format='value(name)' --filter='name ~nat-3')
Add default routes to your VPC network, which apply to instances that use the NAT:
gcloud compute routes create natroute1 --destination-range 0.0.0.0/0 \
 --tags no-ip --priority 800 --next-hop-instance-zone us-east1-b \
 --next-hop-instance $nat_1_instance
gcloud compute routes create natroute2 --destination-range 0.0.0.0/0 \
 --tags no-ip --priority 800 --next-hop-instance-zone us-east1-c \
 --next-hop-instance $nat_2_instance
gcloud compute routes create natroute3 --destination-range 0.0.0.0/0 \
 --tags no-ip --priority 800 --next-hop-instance-zone us-east1-d \
 --next-hop-instance $nat_3_instance
Tag the instances that you want to use the NAT:
gcloud compute instances add-tags natted-servers --tags no-ip
Test NAT functionality. With your gateways configured and your guest VMs tagged, ping external hosts without giving your VMs external IPs, as in this example:
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=52 time=0.618 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=52 time=0.325 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=52 time=0.443 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=52 time=0.314 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=52 time=0.386 ms
Issues to consider
This configuration provides three NAT gateways in the us-east1 region, each capable of up to 2 Gbps per vCPU. ECMP load balancing isn't perfect, though, and an individual flow is not spread across multiple links.
- A Terraform module for this configuration is also available for automating deployments.
- This configuration is best for ephemeral or non-stateful outbound links. If the size of the NAT gateway pool changes, TCP connections might be rebalanced, which could result in resetting an established connection.
- The nodes are not automatically updated, so you must apply Debian security updates to them manually.
- These instances are all in the us-east1 region. If your VMs are in other zones, you might get better performance by moving gateways closer to those zones.
- Bandwidth per gateway is up to 2 Gbps per core unidirectional. During a gateway failure, traffic is distributed to the remaining gateways, but because running flows are not reprogrammed, traffic does not immediately resettle when the gateway comes back online. So make sure you allow enough overhead when sizing.
- To be alerted of unexpected results, use Stackdriver to monitor the managed instance groups and network traffic.
Startup script: startup.sh
Startup script referenced in step 3a:
#!/bin/bash
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/20-natgw.conf
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

cat <<EOF > /usr/local/sbin/health-check-server.py
#!/usr/bin/python
from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer
import subprocess

PORT_NUMBER = 80
PING_HOST = "www.google.com"

def connectivityCheck():
  try:
    subprocess.check_call(["ping", "-c", "1", PING_HOST])
    return True
  except subprocess.CalledProcessError as e:
    return False

# This class handles any incoming request
class myHandler(BaseHTTPRequestHandler):
  def do_GET(self):
    if self.path == '/health-check':
      if connectivityCheck():
        self.send_response(200)
      else:
        self.send_response(503)
    else:
      self.send_response(404)

try:
  server = HTTPServer(("", PORT_NUMBER), myHandler)
  print "Started httpserver on port " , PORT_NUMBER
  # Wait forever for incoming http requests
  server.serve_forever()
except KeyboardInterrupt:
  print "^C received, shutting down the web server"
  server.socket.close()
EOF

nohup python /usr/local/sbin/health-check-server.py >/dev/null 2>&1 &
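The startup script uses a heredoc (cat <<EOF > file) to write the health-check server to disk before launching it with nohup. A minimal, runnable sketch of that write-then-execute pattern, with a trivial script standing in for the Python server:

```shell
# Write a stand-in script via heredoc, then execute it
demo=$(mktemp)
cat <<'EOF' > "$demo"
#!/bin/sh
echo health-ok
EOF
sh "$demo"
# prints: health-ok
```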
Migrate an instance-based NAT gateway to Cloud NAT
If you have an instance-based NAT gateway but would like to migrate to Cloud NAT, do the following:
- Configure Cloud NAT in the same region where you have the instance-based gateway.
- Delete the static route or routes that send packets to the instance-based NAT gateway. Note that you should still have a default gateway route for your network.
- Traffic should start flowing through Cloud NAT.
- Once you have confirmed that everything is working, delete your instance-based NAT gateways.