Cloud NAT

Cloud NAT (network address translation) allows Google Cloud Platform (GCP) virtual machine (VM) instances and Google Kubernetes Engine (GKE) clusters to connect to the Internet if they don't have external, public IP addresses. Cloud NAT implements outbound NAT, which allows your instances to reach the Internet. It does not implement inbound NAT. Hosts outside the network can only respond to connections initiated by your instances; they cannot initiate their own connections to your instances via NAT.

Cloud NAT is a regional resource. You can configure it to allow traffic from all ranges of all subnets in a region, from specific subnets in the region only, or from specific primary and secondary CIDR ranges only.

Example with a GCP network containing three subnets in two regions:

  • us-east1
    • subnet 1
    • subnet 2
  • europe-west1
    • subnet 3

The instances in subnet 1 and subnet 3 do not have external IP addresses, but they need to fetch periodic updates from an external update server.

The instances in subnet 2 do not need these updates and should not be allowed to connect to the Internet at all, even with NAT.

Figure: Cloud NAT

To achieve this setup, configure one NAT gateway for each network in each region:

  1. Configure NAT-GW-US-EAST to serve subnet 1 in us-east1.
  2. Configure NAT-GW-EU to serve subnet 3 in europe-west1.
  3. On each NAT gateway, configure the subnets it should translate traffic for:
    • subnet 1 on NAT-GW-US-EAST
    • subnet 3 on NAT-GW-EU
  4. Do not configure NAT for subnet 2. This isolates it from the Internet, which is required for this example.
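The steps above can be sketched with gcloud. Every resource name here (router, gateway, network, subnet) is hypothetical; each NAT gateway is hosted on a Cloud Router in its region, and the `--nat-custom-subnet-ip-ranges` flag limits translation to the named subnets:

```shell
# Create a Cloud Router and NAT gateway in us-east1 for subnet 1
gcloud compute routers create nat-router-us-east \
    --network=my-network --region=us-east1
gcloud compute routers nats create nat-gw-us-east \
    --router=nat-router-us-east --region=us-east1 \
    --nat-custom-subnet-ip-ranges=subnet-1 \
    --auto-allocate-nat-external-ips

# Create a Cloud Router and NAT gateway in europe-west1 for subnet 3
gcloud compute routers create nat-router-eu \
    --network=my-network --region=europe-west1
gcloud compute routers nats create nat-gw-eu \
    --router=nat-router-eu --region=europe-west1 \
    --nat-custom-subnet-ip-ranges=subnet-3 \
    --auto-allocate-nat-external-ips
```

Because subnet 2 is not listed in either gateway's custom subnet ranges, its instances get no NAT and stay isolated from the Internet.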

Cloud NAT benefits

Security

You can provision your application servers without public IP addresses. These servers can access the Internet for updates and patches, and in some cases, for bootstrapping. The NAT IP addresses can be whitelisted on the Internet servers.

High availability

Cloud NAT is a managed service that provides high availability without user management and intervention. This is handled for both automatically allocated and manually allocated IP addresses.

Scalability

Cloud NAT scales seamlessly with the number of instances and the volume of network traffic. Cloud NAT supports autoscaled managed instance groups. The network bandwidth available for each instance is not affected by the number of instances that use a NAT gateway and is similar to instances with an external IP address.

Monitoring

[Not yet available] For NAT traffic, you can monitor usage information, such as connections and bandwidth, for compliance, debugging, analytics, and accounting purposes.

Cloud NAT features

Cloud NAT allows instances without an external IP address to access the Internet. Cloud NAT allows outbound connections only. Inbound traffic is allowed only if it is in response to a connection initiated by an instance.

NAT type

There are many ways of classifying NATs, based on different RFCs. As per RFC 5128, Cloud NAT is an "Endpoint-Independent Mapping, Endpoint-Dependent Filtering" NAT. This means that if an instance tries to contact different destinations using the same source IP:port, then these connections are allocated the same NAT IP:port, because the mapping is endpoint independent. Response packets from the Internet are allowed through only if they are from an IP:port where an instance previously sent packets because the filtering is endpoint dependent.

Using the terminology in RFC 3489, Cloud NAT is a Port Restricted Cone NAT. It's not a symmetric NAT because the mapping is endpoint independent.

NAT traversal

Because the NAT mapping is endpoint independent, Cloud NAT supports widely used NAT traversal protocols like STUN and TURN, which allow clients behind NATs to communicate with each other.

  • STUN (Session Traversal Utilities for NAT, RFC 5389) allows direct communication between the peers once the communication channel is established. This is lightweight, but works only when the mapping is endpoint independent.
  • TURN (Traversal Using Relays around NAT, RFC 5766) requires all communication to happen via a relay server with a public IP address to which both peers connect. TURN is more robust and works for all types of NAT, but consumes a lot of bandwidth and resources on the TURN server.

Note that Cloud NAT does not supply the servers, configuration, or protocols required to use STUN/TURN; you must provide your own STUN/TURN servers.

NAT timeouts

The RFCs recommend higher values but do not enforce them. Cloud NAT uses somewhat lower default values, which you can override:

  • UDP Mapping Idle Timeout: 30s (RFC 4787 REQ-5)
  • ICMP Mapping Idle Timeout: 30s (RFC 5508 REQ-2)
  • TCP Established Connection Idle Timeout: 1200s (RFC 5382 REQ-5)
  • TCP Transitory Connection Idle Timeout: 30s (RFC 5382 REQ-5)
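These defaults can be overridden per gateway. A hedged sketch, reusing the hypothetical router and gateway names from earlier; the timeout flags accept duration values such as `60s`:

```shell
# Raise each timeout above its Cloud NAT default
# (router and gateway names are hypothetical)
gcloud compute routers nats update nat-gw-us-east \
    --router=nat-router-us-east --region=us-east1 \
    --udp-idle-timeout=60s \
    --icmp-idle-timeout=60s \
    --tcp-established-idle-timeout=1800s \
    --tcp-transitory-idle-timeout=60s
```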

Bandwidth limitations

An instance with Cloud NAT has as much external bandwidth as an instance with an external IP.

Cases where NAT is not performed on traffic

GCP forwards traffic using Cloud NAT only when there are no other matching routes or paths for the traffic. Cloud NAT is not used in the following cases, even if it is configured:

  • You configure an external IP on a VM or cluster.

    If you configure an external IP on an instance's interface, IP packets use the instance's external IP to reach the Internet. NAT will not be performed on such packets.

    Making an instance accessible via a load balancer external IP does not prevent an instance from using NAT, as long as the instance network interface itself does not have an external IP address.

  • Cloud NAT does not process traffic destined for Google Services. Instead, configure Private Google Access to handle traffic between instances without an external IP address and Google Services. See Private Google Access below for details.

  • You create static routes having non-internal destination IP ranges. The packets will go to the configured next hop without using NAT.

  • You deploy your own NAT gateway using static routes.

    1. On the instance sending the traffic, the static route comes into play and sends the Internet-bound traffic to the custom NAT gateway instance.
    2. On the custom NAT gateway instance, after NAT, the egress packets do NOT use Cloud NAT because the NAT gateway instance has an external IP address, and instances with external IP cannot participate in Cloud NAT.

Translation example

In this example, an instance in subnet 1 needs to download an update from an external update server.

Figure: Cloud NAT translation example

Each instance in subnet 1 has been allocated 64 ports for NAT translation, drawn from the gateway's NAT IP addresses.

In the example, suppose the instance has been allocated the port range 34000 to 34063 on one of the NAT IPs.

A flow from this instance has:

  • Source IP: the instance's internal IP
  • Source port: 24000 (instance port)
  • Destination IP: the update server's IP
  • Destination port: 80 (update service port)

Because you have configured this subnet for NAT, the associated gateway, NAT-GW-US-EAST, translates this flow to:

  • Source IP: the NAT IP
  • Source port: 34022 (NAT port, one of the ports allocated to this instance)
  • Destination IP: the update server's IP
  • Destination port: 80 (update service port)

The flow is then sent to the Internet after the translation. The response has the following characteristics:

  • Source IP: the update server's IP
  • Source port: 80 (update service port)
  • Destination IP: the NAT IP
  • Destination port: 34022 (NAT port, one of the ports allocated to this instance)

This packet is translated and given to the instance. To the instance, the packet looks like this:

  • Source IP: the update server's IP
  • Source port: 80 (update service port)
  • Destination IP: the instance's internal IP
  • Destination port: 24000 (instance port)

Because the same NAT IP address and NAT port can be used concurrently for flows to different destinations, the instance can sustain more simultaneous connections than the number of NAT ports assigned to it.

IP address allocation

Specifying IP addresses in NAT config

A Cloud NAT IP pool contains one or more IP addresses used to translate the internal addresses of instances. See Number of NAT ports and connections for a detailed explanation of the relationship between NAT IP addresses, ports, and instances.

You can choose between two modes for NAT pool IP allocation:

  • Recommended: Configure auto-allocation of IPs by a NAT gateway

    • If you do not specify IP addresses for NAT, the system uses auto-allocation. Letting GCP allocate IPs automatically is the best way to ensure that enough IPs are always available for NAT, regardless of the number of VMs created.
    • The trade-off is that adding NAT IPs to an "allow" list on destination servers becomes harder: automatically allocated IPs may be released when they are no longer required, and new IPs may be allocated later as needed.
    • GCP reserves auto-allocated IP addresses in your project automatically. These addresses count against the project's static IP address quotas.


  • Configure specific NAT pool IP(s) to be used by a NAT gateway

    • You manually specify one or more NAT IPs for the gateway to use. These IPs must be reserved static IP addresses.
    • In this case, no auto-allocation of IPs is performed.
    • If there are not enough NAT IPs to allocate NAT ports to all instances configured to use NAT, some instances cannot use NAT. Cloud Router logs and status indicate when more NAT IPs are required.
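Manual allocation can be sketched with gcloud; the address, router, and gateway names below are hypothetical. The address must be reserved as a static regional external IP before the gateway can use it:

```shell
# Reserve a static external IP address, then assign it to the gateway
# instead of using auto-allocation (all resource names hypothetical)
gcloud compute addresses create nat-ip-1 --region=us-east1
gcloud compute routers nats update nat-gw-us-east \
    --router=nat-router-us-east --region=us-east1 \
    --nat-external-ip-pool=nat-ip-1
```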

Cloud NAT: Under the hood

Cloud NAT is a distributed, fully managed, software-defined service, not an instance or appliance-based solution.

Figure: Cloud NAT

Cloud NAT's architecture differs from traditional NAT proxy solutions: there are no NAT proxy instances in the path from an instance to its destination. Instead, each instance is allocated a globally unique set of NAT IPs and associated port ranges, which Andromeda, Google's network virtualization stack, uses to perform NAT.

Figure: Traditional NAT vs. Cloud NAT

As a result, with Cloud NAT, there are no choke points in the path between your internal instances and external destinations. This leads to better scalability, performance, throughput and availability.

Number of NAT ports and connections

Every Cloud NAT IP address has 64K (65,536) ports available for TCP and another 64K for UDP. Of these, the first 1,024 well-known ports are not used by Cloud NAT, leaving 64,512 usable ports per NAT IP address. By default, each instance using Cloud NAT gets 64 ports from a NAT IP address's 64,512 ports, so a single NAT IP address can support up to 1,008 instances. For 1 to 1,008 instances you need one NAT IP address, for 1,009 to 2,016 instances you need two, and so on.

If you use VM instances as Google Kubernetes Engine (GKE) nodes, the per-VM number of ports is shared by all containers on that VM.

If you allocate more ports per VM, say 4096, then you need a NAT IP address for every 15 VMs.
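The arithmetic above can be checked with a quick shell sketch (integer division rounds down, so 64,512 / 4,096 = 15 VMs per NAT IP, not 15.75):

```shell
usable=$((65536 - 1024))          # 64512 usable ports per NAT IP
per_vm=64                         # default port allocation per VM
echo "VMs per NAT IP: $((usable / per_vm))"

# NAT IPs needed for a given VM count (ceiling division)
vms=1500
per_ip=$((usable / per_vm))
echo "NAT IPs for $vms VMs: $(( (vms + per_ip - 1) / per_ip ))"

# With a larger per-VM allocation, fewer VMs fit on each NAT IP
per_vm=4096
echo "VMs per NAT IP at 4096 ports: $((usable / per_vm))"
```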

With the default 64 ports, a VM can open 64 TCP and 64 UDP connections to a given destination IP address and port. It can open another 64 TCP and 64 UDP connections to a different destination IP address and port, and so on, as long as the VM has fewer than 64,000 total connections.

If you want to open 1500 connections simultaneously to a given destination IP address and port, you will need to allocate 1500 ports to that VM. This is because all the fields of the IP packets from the destination to the VM will be identical, except for the NAT port which can be different. To know which connection a packet belongs to, the NAT port must be different for each connection.

However, if you want to open 1500 connections to 1500 different destination IP:ports, then you just need 1 port. All the response packets for the 1500 connections (from different destinations to the VM) will look different even if we use the same NAT port.

You can set the number of ports per VM to be anything from 2 through 57344 (=56K). With static NAT IP address allocation, you need to make sure that enough NAT IP addresses exist to cover all the VM instances that need NAT.
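To support the 1,500-simultaneous-connections case above, you could raise the per-VM allocation to a value of at least 1,500, for example 2,048. The gateway and router names below are hypothetical:

```shell
# Allocate 2048 NAT ports to each VM served by this gateway
# (2048 >= 1500; router and gateway names are hypothetical)
gcloud compute routers nats update nat-gw-us-east \
    --router=nat-router-us-east --region=us-east1 \
    --min-ports-per-vm=2048
```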

Cloud NAT with Google Kubernetes Engine

You can support Cloud NAT for Google Kubernetes Engine (GKE) containers by configuring Cloud NAT to NAT-translate all ranges in the subnet.

Both Nodes and Pods can use Cloud NAT. If you do not want Pods to be able to use NAT, you can create a Cluster Network Policy to prevent it.

Figure: Cloud NAT with GKE

In the example above, you want your containers to be NAT-translated. To enable NAT for all the containers and the GKE node, you must choose all the IP ranges of Subnet 1 as the NAT candidates. It is not possible to NAT only container1 or container2.

Cloud NAT does not affect the total bandwidth of individual VM nodes. Each node has the same external bandwidth whether using Cloud NAT or an external IP address.

When using Cloud NAT, each VM node is allocated a certain number of ports that can be used for outgoing connections. Containers on a node cannot use more than this number of ports collectively. If a node has, for example, 64 NAT ports, all the containers together cannot have more than 64 simultaneous connections to the same external IP address and port pair. See Number of NAT ports and connections for more details.

Cloud NAT with other GCP services

Shared VPC

Shared VPC enables multiple projects belonging to a single organization to use a single shared common network. In a Shared VPC setup, there is a Shared VPC host project where the shared network is configured by a Network Admin. The Shared VPC host project is associated with one or more service projects where resources can be created and attached to the shared network configured in the Shared VPC host project.

You have several choices for customizing Cloud NAT deployments with Shared VPC based on your use cases.

Use Case 1:

You want a centralized Cloud NAT gateway that can be used by instances of the host project as well as all of the service projects in a shared network (within a region).

To achieve this, create the Cloud NAT gateway as part of the shared network configured in the host project. One Cloud NAT gateway is required per network per region, so if you use multiple regions, you need one NAT gateway in each region for the network. All instances in this shared network and region (in both the host and service projects) can use this Cloud NAT gateway.

Figure: Cloud NAT use case 1

In the example above, Network 1 is the shared network created in host Project A and shared by instances in host Project A as well as in service Projects B and C. The Cloud NAT gateway, NAT-GW-1, is configured in the host project for Network 1 in region us-central1. Instances from Projects A, B, and C that are part of Network 1 in region us-central1 are able to use this NAT gateway to access the update server.

Use Case 2: You want a centralized Cloud NAT gateway for all associated instances in a Shared VPC network. You want a separate gateway in an unrelated network in a service project.

Figure: Cloud NAT use case 2

In the example above, instances using Network 1 and belonging to Projects A, B, and C in region us-central1 use NAT-GW-1 to access the update server. Instances in Project C belonging to Network 2 in region us-central1 use NAT-GW-2 to access the log server.

VPC Network Peering

VPC Network Peering allows GCP VPCs to peer with each other so that instances in each of these peered VPC networks can communicate. Although certain configurations like internal load balancing rules are imported into a peered network, NAT configuration is not. For example, if network N1 peers with network N2, and N1 has a NAT gateway, only subnetworks in N1 can use the NAT gateway to reach the Internet. If N2 also wants its subnets to use NAT, then another NAT gateway has to be independently configured in network N2.

Multiple network interfaces

Cloud NAT supports instances with multiple network interfaces, with or without alias IP ranges.

Firewall rules

Firewall rules are applied to instances. For egress traffic, firewall rules are applied before NAT is performed. For ingress traffic, firewall rules are applied after NAT IP addresses are translated to instance internal IPs. For this reason, do not create ingress firewall rules whose sources are external NAT IP addresses or egress firewall rules whose destinations are external NAT IP addresses.

Private Google Access

Cloud NAT never applies to traffic sent to the public IP addresses for Google APIs and services. Requests sent to Google APIs and services never use the external IPs configured for Cloud NAT as their sources.

When you enable Cloud NAT, GCP automatically enables Private Google Access for the subnets to which NAT applies in the region. Private Google Access allows VM instances without external IP addresses to reach the public IPs for certain Google APIs and services.

Using Private Google Access along with Cloud NAT does not change the behavior of Private Google Access:

  • If the VM has an external IP address, it can access the public IPs for Google APIs and services directly as long as the Internet access requirements are met. Private Google Access does not apply to VMs that have external IP addresses.
  • If the VM only has an internal IP, it can access the public IPs for Google APIs and services only if Private Google Access is enabled for its subnet and the requirements for Private Google Access are met.
  • If the VM sends a packet with a source set to one of its configured Alias IPs: Since an Alias IP is always an internal IP, packets sent with an Alias IP source can only be delivered to Google APIs and services if Private Google Access is configured.
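For subnets that are not served by Cloud NAT (like subnet 2 in the earlier example) but whose internal-only instances still need to reach Google APIs and services, Private Google Access can also be enabled manually. The subnet name below is hypothetical:

```shell
# Enable Private Google Access on a subnet without Cloud NAT
# (subnet name is hypothetical)
gcloud compute networks subnets update subnet-2 \
    --region=us-east1 \
    --enable-private-ip-google-access
```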
