Cloud NAT

Cloud NAT (network address translation) allows Google Cloud Platform (GCP) virtual machine (VM) instances to connect to the Internet even if they don't have external, public IP addresses. Cloud NAT implements outbound NAT, which allows your GCP VMs to reach the Internet. It does not implement inbound NAT. Hosts outside the network can only respond to connections initiated by your GCP VMs; they cannot initiate their own connections to your VMs via NAT.

Cloud NAT is a regional resource. You can configure it to allow traffic from all ranges of all subnets in a region, from specific subnets in the region only, or from specific primary and secondary CIDR ranges only.

Example with a GCP network containing three subnets in two regions:

  • us-east1
    • subnet 1 (10.240.0.0/16)
    • subnet 2 (172.16.0.0/16)
  • europe-west1
    • subnet 3 (192.168.1.0/24)

The VMs in subnet 1 (10.240.0.0/16) and subnet 3 (192.168.1.0/24) do not have external IP addresses, but need to fetch periodic updates from an external server at 203.0.113.1.

The VMs in subnet 2 (172.16.0.0/16) do not need to get these updates and should not be allowed to connect to the Internet at all, even with NAT.

[Figure: Cloud NAT]

To achieve this setup, first configure one NAT gateway per region per network:

  1. Configure NAT-GW-US-EAST for subnet 1 (10.240.0.0/16).
  2. Configure NAT-GW-EU for subnet 3 (192.168.1.0/24).
  3. Configure each NAT gateway with the subnetwork(s) it should translate traffic for.
    • subnet 1 (10.240.0.0/16)
    • subnet 3 (192.168.1.0/24)
  4. Do not configure NAT for subnet 2. This isolates it from the Internet, which is required for this example.
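The gateway-to-subnet layout described in these steps can be sketched as plain data (a hypothetical Python representation for illustration only; actual configuration is done through the Cloud Console or gcloud):

```python
# Hypothetical sketch of the NAT gateway layout described above.
# One gateway per region per network; subnet 2 (172.16.0.0/16) is
# deliberately absent, so its VMs have no NAT path to the Internet.
nat_gateways = {
    "NAT-GW-US-EAST": {"region": "us-east1", "subnets": ["10.240.0.0/16"]},
    "NAT-GW-EU": {"region": "europe-west1", "subnets": ["192.168.1.0/24"]},
}

def subnet_has_nat(cidr: str) -> bool:
    """True if some gateway is configured to translate this subnet."""
    return any(cidr in gw["subnets"] for gw in nat_gateways.values())
```

Here `subnet_has_nat("10.240.0.0/16")` is True while `subnet_has_nat("172.16.0.0/16")` is False, matching the isolation requirement for subnet 2.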

Cloud NAT benefits

Security

You can provision your application servers without public IP addresses. These servers can access the Internet for updates and patches, and in some cases for bootstrapping as well. The NAT IP addresses can be whitelisted on the external servers.

High Availability

Cloud NAT is a managed service that provides high availability without user management or intervention. With automatically allocated NAT IPs, this is handled for you. If you manually allocate IP addresses, allocate an even number of them for high availability.

Scalability

Cloud NAT scales seamlessly with the number of VMs and the volume of network traffic, and it supports autoscaled managed instance groups. The network bandwidth available to each VM is not affected by the number of VMs that use a NAT gateway and is similar to that of VMs with external IP addresses.

Logging

[Not yet available] For NAT traffic, you can trace the connections, bandwidth, etc. for compliance, debugging, analytics, and accounting purposes.

Cloud NAT features

Cloud NAT allows VMs without an external IP address to access the Internet. Cloud NAT allows outbound connections only. Inbound traffic is allowed only if it is in response to a connection initiated by a VM.

NAT type

There are many ways of classifying NATs, based on different RFCs. As per RFC 5128, Cloud NAT is an "Endpoint-Independent Mapping, Endpoint-Dependent Filtering" NAT. This means that if a VM tries to contact different destinations using the same source IP:port, then these connections are allocated the same NAT IP:port, because the mapping is endpoint independent. Response packets from the Internet are allowed through only if they are from an IP:port where a VM previously sent packets because the filtering is endpoint dependent.

Using the terminology in RFC 3489, Cloud NAT is a Port Restricted Cone NAT. It's not a symmetric NAT because the mapping is endpoint independent.
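A minimal model can make these two properties concrete (an illustrative sketch, not Cloud NAT's actual implementation; the addresses and port numbers are invented):

```python
# Minimal model of an Endpoint-Independent Mapping, Endpoint-Dependent
# Filtering NAT (a sketch, not Cloud NAT's real implementation).
class ConeNat:
    def __init__(self, nat_ip: str):
        self.nat_ip = nat_ip
        self.next_port = 34000
        self.mapping = {}    # (src_ip, src_port) -> (nat_ip, nat_port)
        self.allowed = set() # (ext_endpoint, remote_endpoint) seen outbound

    def outbound(self, src, dst):
        # Endpoint-independent mapping: the NAT IP:port depends only on
        # the internal source, not on the destination.
        if src not in self.mapping:
            self.mapping[src] = (self.nat_ip, self.next_port)
            self.next_port += 1
        ext = self.mapping[src]
        self.allowed.add((ext, dst))
        return ext

    def inbound_allowed(self, remote, ext):
        # Endpoint-dependent filtering: only remotes the VM already
        # sent packets to may send packets back in.
        return (ext, remote) in self.allowed

nat = ConeNat("192.0.2.50")
vm = ("10.240.0.3", 24000)
ext1 = nat.outbound(vm, ("203.0.113.1", 80))
ext2 = nat.outbound(vm, ("198.51.100.7", 443))
# Same NAT IP:port for both destinations (endpoint-independent mapping).
# Replies pass only from endpoints the VM contacted (endpoint-dependent
# filtering), e.g. 203.0.113.1:80 is allowed, an unknown remote is not.
```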

NAT traversal

Because the NAT mapping is endpoint independent, Cloud NAT supports widely used NAT traversal protocols like STUN and TURN, which allow clients behind NATs to communicate with each other.

  • STUN (Session Traversal Utilities for NAT, RFC 5389) allows direct communication between the peers once the communication channel is established. This is lightweight, but works only when the mapping is endpoint independent.
  • TURN (Traversal Using Relays around NAT, RFC 5766) requires all communication to happen via a relay server with a public IP address to which both peers connect. TURN is more robust and works for all types of NAT, but consumes a lot of bandwidth and resources on the TURN server.

Note that the servers, configuration, and protocols required to use STUN/TURN are not supplied by Cloud NAT. Customers need to provide their own STUN/TURN servers.

NAT timeouts

The RFCs recommend higher values but don't enforce them. Cloud NAT uses slightly lower defaults, which you can override:

  • UDP Mapping Idle Timeout: 30s (RFC 4787 REQ-5)
  • ICMP Mapping Idle Timeout: 30s (RFC 5508 REQ-2)
  • TCP Established Connection Idle Timeout: 1200s (RFC 5382 REQ-5)
  • TCP Transitory Connection Idle Timeout: 30s (RFC 5382 REQ-5)

Bandwidth limitations

A VM with Cloud NAT has as much external bandwidth as a VM with an external IP.

Cases where NAT will not be performed on traffic

GCP forwards traffic using Cloud NAT only when there are no other matching routes or paths for the traffic. Cloud NAT is not used in the following cases, even if it is configured:

  • You configure an external IP on a VM's interface.

    If you configure an external IP on a VM's interface, IP packets with the VM's internal IP as the source IP will use the VM's external IP to reach the Internet. NAT will not be performed on such packets.

    Making a VM accessible via a load balancer external IP does not prevent a VM from using NAT, as long as the VM network interface itself does not have an external IP address.

  • Cloud NAT does not process traffic destined for Google Services. Instead, configure Private Google Access to handle traffic between VMs without an external IP address and Google Services. See Private Google Access below for details.

  • You create static routes having non-RFC 1918 destination IP ranges. Such packets go to the configured next hop without using NAT.

  • You deploy your own NAT gateway using static routes.

    1. On the VM sending the traffic, the static route will come into play and send the Internet bound traffic to the custom NAT gateway instance.
    2. On the custom NAT gateway instance, the egress packets (after your own NAT) will NOT use Cloud NAT, because the NAT gateway instance has an external IP address, and VMs with external IPs cannot participate in Cloud NAT.

Translation example

In this example, instance 10.240.0.3 needs to fetch an update from external server 203.0.113.1.

[Figure: Cloud NAT translation example]

Each of the instances in subnet 1 has been allocated 64 ports for NAT translation, using NAT IP 192.0.2.50 or 192.0.2.60.

In the example, suppose instance 10.240.0.3 has been allocated port range 34000 to 34063 for NAT IP 192.0.2.50.

A flow from this instance has:

  • Source IP: 10.240.0.3 (instance IP)
  • Source port: 24000 (instance port)
  • Destination IP: 203.0.113.1 (Update server)
  • Destination port: 80 (Update service port)

Since you have configured this subnet for NAT and the associated NAT-GW-US-EAST has NAT IP 192.0.2.50, this flow is translated to:

  • Source IP: 192.0.2.50 (NAT IP)
  • Source port: 34022 (NAT port, i.e., one of the ports allocated to this instance)
  • Destination IP: 203.0.113.1 (Update server)
  • Destination Port: 80 (Update service port)

The flow is then sent to the Internet after the translation. When the response comes back, it has the following characteristics:

  • Source IP: 203.0.113.1 (Update server)
  • Source port: 80 (Update service port)
  • Destination IP: 192.0.2.50 (NAT IP)
  • Destination Port: 34022 (NAT port - one of the ports allocated to this VM)

This packet is translated and given to the instance. To the instance, the packet looks like this:

  • Source IP: 203.0.113.1 (Update server)
  • Source port: 80 (Update service port)
  • Destination IP: 10.240.0.3 (instance IP)
  • Destination port: 24000 (instance port)

Because the NAT mapping is endpoint independent, a packet sent from this instance to a different destination can use the same NAT IP address and NAT port at the same time. This allows the VM to hold more simultaneous connections than the number of NAT ports assigned to it, provided those connections are to different destinations.
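The translation steps in the example above can be sketched as a pair of rewrites (illustrative Python using the example's addresses, not Cloud NAT code):

```python
# Illustrative sketch of the egress/ingress rewrites in the example above.
def translate_egress(pkt: dict, nat_ip: str, nat_port: int) -> dict:
    """Rewrite the source IP:port of an outbound packet to the NAT IP:port."""
    return {**pkt, "src_ip": nat_ip, "src_port": nat_port}

def translate_ingress(pkt: dict, vm_ip: str, vm_port: int) -> dict:
    """Rewrite the destination IP:port of a reply back to the VM's IP:port."""
    return {**pkt, "dst_ip": vm_ip, "dst_port": vm_port}

# Outbound flow from the instance, rewritten by the gateway:
flow = {"src_ip": "10.240.0.3", "src_port": 24000,
        "dst_ip": "203.0.113.1", "dst_port": 80}
on_wire = translate_egress(flow, "192.0.2.50", 34022)

# Reply from the update server, rewritten back toward the instance:
reply = {"src_ip": "203.0.113.1", "src_port": 80,
         "dst_ip": "192.0.2.50", "dst_port": 34022}
to_vm = translate_ingress(reply, "10.240.0.3", 24000)
```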

IP address allocation

Specifying IP addresses in NAT config

A Cloud NAT IP pool contains one or more IP addresses used to translate the internal addresses of instances. See Number of NAT ports and connections for a detailed explanation of the relationship between NAT IP addresses, ports, and instances (VMs).

You can choose between two modes for NAT pool IP allocation:

  • Recommended: Configure auto-allocation of IPs by a NAT gateway

    • If you do not specify the IP addresses for NAT, the system uses auto-allocation. Allowing GCP to auto allocate is the best way of ensuring that enough IPs are always available for NAT, irrespective of the number of VMs created.
    • The trade-off is that whitelisting the NAT IPs on external servers becomes harder, because some IPs may be freed when they are no longer required, and new IPs may be allocated later if needed.

OR

  • Configure specific NAT pool IP(s) to be used by a NAT gateway

    • You manually specify one or more NAT IPs to be used by the NAT gateway. These IPs have to be reserved static IP addresses. You can specify one or more IPs. If specifying more than one IP, you must specify a number that is a multiple of two (2, 4, 6, 8,...).
    • In this case, no auto-allocation of IPs is performed.
    • If there are not enough NAT IPs to allocate NAT ports to all VMs configured to use NAT, some VMs are not able to use NAT. Cloud Router logs and status indicate when more NAT IPs are required.

Cloud NAT: Under the hood

Cloud NAT is a distributed, fully managed, software-defined service, not an instance or appliance-based solution.

[Figure: NAT using Cloud Router]

Google Cloud NAT's architecture differs from traditional NAT proxy solutions: there are no NAT proxy instances in the path from instance to destination. Instead, each VM is allocated a globally unique set of NAT IPs and associated port ranges, which are used by Andromeda, Google's network virtualization stack, to perform NAT.

Cloud NAT uses the Cloud Router resource to group the NAT gateway configurations together. Cloud NAT is not actually performed by the Cloud Router. If a Cloud Router is used for both BGP and NAT, the NAT gateways do not consume any resources on the Cloud Router and will not affect the router's performance.

[Figure: Traditional NAT vs. Cloud NAT]

As a result, with Cloud NAT there are no choke points in the path between your internal VMs and their external destinations. This leads to better scalability, performance, throughput, and availability.

Number of NAT ports and connections

Every Cloud NAT IP address has 64K (65,536) ports available for TCP and another 64K for UDP. Of these, the first 1,024 well-known ports are not used by Cloud NAT, leaving 64,512 available ports per NAT IP address. By default, each VM with Cloud NAT gets 64 ports from a NAT IP address's 64,512 ports, so a single NAT IP address can support up to 1,008 VMs. If you have between 1 and 1,008 VMs, you need 1 NAT IP address; for 1,009 to 2,016 VMs, you need 2 NAT IP addresses; and so on.

If you are using VMs with Google Kubernetes Engine (GKE) containers, the per-VM number of ports is the number of ports available to all containers on that VM.

If you allocate more ports per VM, say 4096, then you need a NAT IP address for every 15 VMs.
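The arithmetic above can be checked with a short sketch:

```python
import math

# 64,512 usable ports per NAT IP after excluding the 1,024 well-known ports.
USABLE_PORTS_PER_IP = 65536 - 1024

def nat_ips_needed(num_vms: int, ports_per_vm: int = 64) -> int:
    """NAT IP addresses required to give every VM its port allocation."""
    vms_per_ip = USABLE_PORTS_PER_IP // ports_per_vm
    return math.ceil(num_vms / vms_per_ip)
```

With the default 64 ports, `nat_ips_needed(1008)` is 1 and `nat_ips_needed(1009)` is 2; with 4,096 ports per VM, one NAT IP covers 64512 // 4096 = 15 VMs.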

With the default 64 ports, a VM can open 64 TCP and 64 UDP connections to a given destination IP address and port. It can open another 64 TCP and 64 UDP connections to a different destination IP address and port, and so on, as long as the VM has fewer than 64,000 connections in total.

If you want to open 1500 connections simultaneously to a given destination IP address and port, you will need to allocate 1500 ports to that VM. This is because all the fields of the IP packets from the destination to the VM will be identical, except for the NAT port which can be different. To know which connection a packet belongs to, the NAT port must be different for each connection.

However, if you want to open 1500 connections to 1500 different destination IP:ports, then you just need 1 port. All the response packets for the 1500 connections (from different destinations to the VM) will look different even if we use the same NAT port.
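Put differently, the minimum number of NAT ports a VM needs equals the largest number of simultaneous connections it holds to any single destination IP:port. A sketch of this counting argument:

```python
from collections import Counter

def min_nat_ports(flows) -> int:
    """flows: list of (dst_ip, dst_port) for simultaneous connections.
    Flows to the same destination each need a distinct NAT source port;
    flows to different destinations can reuse one port."""
    counts = Counter(flows)
    return max(counts.values(), default=0)

# 1500 connections to one destination IP:port -> 1500 NAT ports needed.
same_dest = [("203.0.113.1", 80)] * 1500
# 1500 connections to 1500 distinct destinations -> 1 NAT port suffices.
many_dests = [(f"198.51.100.{i % 250}", 1000 + i) for i in range(1500)]
```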

You can set the number of ports per VM to be anything from 2 through 57344 (=56K). With static NAT IP address allocation, you need to make sure that enough NAT IP addresses exist to cover all the VM instances that need NAT.

Cloud NAT with Google Kubernetes Engine

You can support Cloud NAT for Google Kubernetes Engine (GKE) containers by configuring Cloud NAT to NAT-translate all ranges in the subnet.

Both Nodes and Pods can use Cloud NAT. If you do not want Pods to be able to use NAT, you can create a Cluster Network Policy to prevent it.

[Figure: Cloud NAT with GKE]

In the example above, you want your containers to be NAT-translated. To enable NAT for all the containers and the GKE node, you must choose all the IP ranges of Subnet 1 as the NAT candidates. It is not possible to NAT only container1 or container2.

Cloud NAT does not affect the total bandwidth of individual VMs. Each VM has the same external bandwidth whether using Cloud NAT or an external IP address.

When using Cloud NAT, each VM is allocated a certain number of ports that can be used for outgoing connections. Containers on a VM cannot use more than this number of ports collectively. If a VM has, for example, 64 NAT ports, all the containers together cannot have more than 64 simultaneous connections to the same external IP address and port pair. See Number of NAT ports and connections for more details.

Cloud NAT with other GCP services

Shared VPC

Shared VPC enables multiple projects belonging to a single organization to use a single shared common network. In a Shared VPC setup, there is a Shared VPC host project where the shared network is configured by a Network Admin. The host project is associated with one or more service projects, where resources can be created and attached to the shared network configured in the host project.

You have several choices for customizing Cloud NAT deployments with Shared VPC based on your use cases.

Use Case 1:

You want a centralized Cloud NAT gateway that can be used by VMs of the host project as well as all of the service projects in a shared network (within a region).

To achieve this, create the Cloud NAT gateway as part of the shared network configured in the host project. Note that one Cloud NAT gateway is required per network per region, so if you have multiple regions, one NAT gateway is required in each of those regions for the network. All VMs in this shared network and region (in the host and service projects) are able to use this Cloud NAT gateway.

[Figure: Cloud NAT use case 1]

In the example above, Network 1 is the shared network created in host project A and shared by instances in host project A as well as in service projects B and C. The Cloud NAT gateway NAT-GW-1 is configured in the host project for Network 1 (for region us-central1). Instances from projects A, B, and C that are part of Network 1 in region us-central1 are able to use this NAT gateway to access the update server.

Use Case 2:

You want a centralized Cloud NAT gateway for all of your VMs in a network in the host project and its associated service projects, except for one network in a service project for which you want to configure its own NAT gateway.

[Figure: Cloud NAT use case 2]

In the example above, instances using Network 1 and belonging to projects A, B, and C in region us-central1 use NAT-GW-1 to access the update server. Instances in Project C belonging to Network 2 in region us-central1 use NAT-GW-2 to access the log server.

VPC Network Peering

VPC Network Peering allows GCP VPCs to peer with each other so that VMs in each of these peered VPC networks can communicate. Although certain configurations like internal load balancing rules are imported into a peered network, NAT configuration is not. For example, if network N1 peers with network N2, and N1 has a NAT gateway, only subnetworks in N1 can use the NAT gateway to reach the Internet. If N2 also wants its subnets to use NAT, then another NAT gateway has to be independently configured in network N2.

Multiple network interfaces

Cloud NAT fully supports multiple network interface VMs, with or without alias IP ranges.

Firewall rules

Firewall rules are applied at the VM. For egress traffic, firewall rules are applied before NAT is performed. For ingress traffic, firewall rules are applied after NAT IP addresses have been translated back to internal IPs. Consequently, NAT IP addresses should not be used in egress or ingress firewall rules.

Private Google Access

Cloud NAT does not support access to Google Services, but all subnets in a region where you enable Cloud NAT are automatically enabled for Private Google Access. Private Google Access allows VMs without an external IP address to reach most Google services.

Using Private Google Access along with Cloud NAT does not change the existing behavior of Private Google Access:

  • If the VM has a public IP, that is used to access Google Services.
  • If the VM only has an internal IP, that is used.
  • If the packet's source IP matches an alias IP range of the VM, then the alias IP is used.

Note that the public IPs configured for Cloud NAT are never used, thus avoiding the overhead of NAT.

What's next
