This document describes how to set up an egress NAT gateway for Anthos clusters on bare metal. This gateway provides persistent, deterministic SNAT IP addresses for egress traffic from your clusters. When you run workloads that send egress user traffic outside of your clusters, your customers want to identify this traffic by using a few deterministic IP addresses. This allows your customers to establish IP-based security measures, such as allowlisting policies. There is no charge to use this feature while it is in preview.
The egress NAT gateway is enabled using two custom resources. The NetworkGatewayGroup custom resource specifies floating IP addresses that can be configured on the network interface of a Node that is chosen to act as a gateway. The EgressNATPolicy custom resource lets you specify egress routing policies to control the traffic on the egress gateway.
If you do not set up an egress NAT gateway, or if egress traffic does not meet traffic selection rules, egress traffic from a given Pod to a destination outside your cluster is masqueraded to the IP address of the node where the Pod is running. In this scenario, there is no guarantee that all egress traffic from a particular Pod will have the same source IP address or will masquerade to the same Node IP address.
Egress NAT gateway is an advanced networking offering built on top of Dataplane V2.
How the egress NAT gateway works
The egress traffic selection logic is based on a namespace selector, a Pod selector, and a set of destination IP address ranges in CIDR block notation. To illustrate how the egress NAT gateway works, let's consider the flow of a packet from a Pod to an external consumer and the corresponding response. Assume the Node subnet has IP addresses in the 192.168.1.0/24 CIDR block.
The following diagram shows the network architecture for egress traffic through a gateway node.
The packet flow through the egress NAT gateway might look like this:

1. Egress traffic is generated from a Pod with IP address 10.10.10.1 on a Node in the cluster. The traffic's destination address is an endpoint outside of the cluster.
2. If the traffic matches an egress rule, the eBPF program routes the egress traffic to the gateway Node, instead of directly masquerading with the Node IP address.
3. The gateway Node receives the egress traffic.
4. The gateway Node masquerades the originating traffic's source IP address, 10.10.10.1, with the source egress IP address, 192.168.1.100, specified in the EgressNATPolicy custom resource.
5. Return traffic comes back to the gateway Node with destination 192.168.1.100.
6. The gateway Node matches the conntrack entry of the return traffic with that of the original egress traffic and rewrites the destination IP address to 10.10.10.1.
7. Traffic destined for 10.10.10.1 is treated as in-cluster traffic, routed to the original Node, and delivered back to the original Pod.
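To make the flow above concrete, the following sketch shows how the two custom resources cooperate: a NetworkGatewayGroup that owns the egress IP address 192.168.1.100 from the example, and an EgressNATPolicy that selects the Pod's traffic and references that gateway. The policy name, labels, and destination CIDR here are illustrative assumptions, not values from a real cluster.

```yaml
# Illustrative sketch only: a gateway that owns the egress IP used in the
# flow above, and a policy that selects matching Pod traffic for SNAT.
kind: NetworkGatewayGroup
apiVersion: networking.gke.io/v1
metadata:
  namespace: kube-system
  name: default
spec:
  floatingIPs:
  - 192.168.1.100        # becomes the source IP of the masqueraded traffic
---
kind: EgressNATPolicy
apiVersion: networking.gke.io/v1
metadata:
  name: egress-example   # hypothetical name
spec:
  sources:
  - namespaceSelector:
      matchLabels:
        user: alice      # illustrative labels
    podSelector:
      matchLabels:
        role: frontend
  action: SNAT
  destinations:
  - cidr: 203.0.113.0/24 # illustrative external destination range
  gatewayRef:
    name: default
    namespace: kube-system
```

The full specification for both resources is described in the sections that follow.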
Configure floating IP addresses for Node traffic
The Network Gateway Group custom resource is a bundled component of Anthos clusters on bare metal. The resource manages a list of one or more floating IP addresses to use for egress traffic from Nodes in your cluster. Participating Nodes are determined by the specified namespace. The Network Gateway Group makes a floating IP address available at all times on a best-effort basis. If a Node using a floating IP address goes down, the advanced network operator moves the assigned IP address to the next available Node. All workload egress traffic using that IP address will move as well.
Include the Network Gateway Group details (annotation and spec) in the cluster configuration file when you create a new 1.10.8 cluster.
NetworkGatewayGroup custom resource
You enable the Network Gateway Group by setting the advancedNetworking field to true in the cluster configuration file when you create a cluster, as shown in the following example:
```yaml
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: cluster1
  namespace: cluster-cluster1
spec:
  clusterNetwork:
    ...
    advancedNetworking: true
  ...
```
When you create the NetworkGatewayGroup custom resource, set its namespace to the cluster namespace and specify a list of floating IP addresses, as shown in the following example:
```yaml
kind: NetworkGatewayGroup
apiVersion: networking.gke.io/v1
metadata:
  namespace: cluster-cluster1
  name: default
spec:
  floatingIPs:
  - 192.168.1.100
  - 192.168.1.101
  - 192.168.1.102
```
The advanced networking operator assigns the floating IPs to Nodes based on the following criteria:
- Node subnet - the floating IP address has to match the Node's subnet.
- Node role (control plane, worker) - worker Nodes take precedence over control plane Nodes when assigning floating IP addresses.
- Whether a Node has a floating IP address - the operator prioritizes assignments to Nodes that do not have a floating IP assigned already.
The address-to-Node mapping can be found in the status section when you get the NetworkGatewayGroup object. Note that the NetworkGatewayGroup object is in the kube-system namespace. If a gateway Node is down, the advanced network operator assigns its floating IP addresses to the next available Node.
Verify the gateway configuration
After you have applied your gateway configuration changes, you can use kubectl to check the status of the gateway and retrieve the floating IP addresses specified for the gateway. Use the following command to check the status of the NetworkGatewayGroup and see how the floating IP addresses are allocated:
kubectl -n kube-system get networkgatewaygroups.networking.gke.io default -o yaml
The response for a cluster with two nodes, worker1 and worker2, might look like this:
```yaml
kind: NetworkGatewayGroup
apiVersion: networking.gke.io/v1
metadata:
  namespace: kube-system
  name: default
spec:
  floatingIPs:
  - 192.168.1.100
  - 192.168.1.101
  - 192.168.1.102
status:
  nodes:
    worker1: Up
    worker2: Up # or Down
  floatingIPs:
    192.168.1.100: worker1
    192.168.1.101: worker2
    192.168.1.102: worker1
```
Set traffic selection rules
The EgressNATPolicy custom resource specifies traffic selection rules and assigns a deterministic IP address for egress traffic that leaves the cluster. When specifying the custom resource, the egress field (with at least one rule) and the egressSourceIP field are required. Use kubectl apply to create the EgressNATPolicy custom resource. The following sections provide details and examples for defining the specification.
Specify egress routing rules
The EgressNATPolicy custom resource lets you specify the following rules for egress traffic:

You must specify one or more egress traffic selection rules in the egress section.

- Each rule consists of a namespaceSelector and a podSelector.
- Selection is based on a namespace label, namespaceSelector.matchLabels.user, and a Pod label, podSelector.matchLabels.role.
- If a Pod matches any of the rules (matching uses an OR relationship), it is selected for egress traffic.

Specify allowed destination addresses in the destinationCIDRs section.

- destinationCIDRs takes a list of CIDR blocks.
- If outgoing traffic from a Pod has a destination IP address that falls within the range of any of the specified CIDR blocks, it is selected for egress traffic.
In the following example, egress traffic from a Pod is permitted when the following criteria are met:

- The Pod is labeled with role: frontend.
- The Pod is in a namespace labeled either user: alice or user: paul.
- The Pod is communicating to IP addresses in the 188.8.131.52/24 CIDR block.
```yaml
kind: EgressNATPolicy
apiVersion: networking.gke.io/v1
metadata:
  name: egress
spec:
  sources:
  - namespaceSelector:
      matchLabels:
        user: alice
    podSelector:
      matchLabels:
        role: frontend
  - namespaceSelector:
      matchLabels:
        user: paul
    podSelector:
      matchLabels:
        role: frontend
  action: SNAT
  destinations:
  - cidr: 188.8.131.52/24
  gatewayRef:
    name: default
    namespace: kube-system
```
For more information about using labels, refer to Labels and Selectors in the Kubernetes documentation.
Get a source IP address for egress traffic
The EgressNATPolicy custom resource (policy) uses the gatewayRef.name and gatewayRef.namespace values to find a NetworkGatewayGroup object (gateway). The policy uses one of the gateway's floating IP addresses as the source IP address for egress traffic. If there are multiple floating IP addresses in the matching gateway, the policy uses the first IP address in the floatingIPs list and ignores any other IP addresses. For the example gateway, the first address in the floatingIPs list is 192.168.1.100. Having invalid fields or values in the gatewayRef section will result in failure to apply the policy.
Multiple egress policies and multiple gateway objects
As described in the previous section, each EgressNATPolicy object (policy) uses the first IP address in the floatingIPs list from the gateway object that matches its gatewayRef.name and gatewayRef.namespace. You can create multiple policies, and if you intend to use different IP addresses, you need to create multiple NetworkGatewayGroup objects and refer to them respectively.
A limitation of NetworkGatewayGroup in Anthos clusters on bare metal for this release is that only the "default" object is reconciled for floating IP allocations. Additionally, only the default gateway reports the allocation status for all the floating IP addresses. The default gateway is defined as the NetworkGatewayGroup with the name default in the kube-system namespace. Therefore, when creating multiple gateway objects, you need to make sure that the IP addresses in the non-default gateways are also specified in the default gateway manifest. Otherwise, they will not be allocated as floating IPs and therefore can't be used by the egress policies.
To set up multiple egress policies and multiple gateway objects:
Verify the default gateway object (name: default) with kubectl to get the allocation status of the floating IP addresses:
kubectl -n kube-system get NetworkGatewayGroup.networking.gke.io default -o yaml
The response for a cluster with two nodes, worker1 and worker2, might look like this:
```yaml
kind: NetworkGatewayGroup
apiVersion: networking.gke.io/v1
metadata:
  namespace: kube-system
  name: default
spec:
  floatingIPs:
  - 192.168.1.100
  - 192.168.1.101
  - 192.168.1.102
status:
  ...
  floatingIPs:
    192.168.1.100: worker1
    192.168.1.101: worker2
    192.168.1.102: worker1
```
After verifying the status of the default gateway, create additional gateway objects in the kube-system namespace to "track" each floating IP. Note that these new gateway objects don't report allocation status; that status is available only in the default gateway object.
```yaml
kind: NetworkGatewayGroup
apiVersion: networking.gke.io/v1
metadata:
  namespace: kube-system
  name: gateway1
spec:
  floatingIPs:
  - 192.168.1.100
---
kind: NetworkGatewayGroup
apiVersion: networking.gke.io/v1
metadata:
  namespace: kube-system
  name: gateway2
spec:
  floatingIPs:
  - 192.168.1.101
---
kind: NetworkGatewayGroup
apiVersion: networking.gke.io/v1
metadata:
  namespace: kube-system
  name: gateway3
spec:
  floatingIPs:
  - 192.168.1.102
```
Create multiple policies that refer to the "secondary" gateway objects, such as gateway1, created in the preceding step:
```yaml
kind: EgressNATPolicy
apiVersion: networking.gke.io/v1
metadata:
  name: egress1
spec:
  ...
  gatewayRef:
    name: gateway1
    namespace: kube-system
---
kind: EgressNATPolicy
apiVersion: networking.gke.io/v1
metadata:
  name: egress2
spec:
  ...
  gatewayRef:
    name: gateway2
    namespace: kube-system
---
kind: EgressNATPolicy
apiVersion: networking.gke.io/v1
metadata:
  name: egress3
spec:
  ...
  gatewayRef:
    name: gateway3
    namespace: kube-system
```
Egress traffic selection rules and network policies
The egress NAT gateway is compatible with network policy APIs. Network policies are assessed first and take precedence over the traffic selection rules of the egress NAT gateway. For example, if the egress traffic triggers a network policy resulting in the packet being dropped, egress gateway rules won't check the packet. Only when the network policy allows the packet to egress will the egress traffic selection rules be evaluated to decide how the traffic is handled, either using the egress NAT gateway or directly masquerading with the IP address of the Node where the Pod is running.
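To illustrate this precedence, consider the following standard Kubernetes NetworkPolicy; the namespace and labels are hypothetical. Because it denies all egress from the selected Pods, packets from those Pods are dropped before the egress NAT gateway's traffic selection rules are ever consulted:

```yaml
# Hypothetical example: denies all egress from Pods labeled role: frontend
# in my-namespace. Packets dropped by this policy never reach the egress
# NAT gateway's traffic selection rules.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-frontend-egress
  namespace: my-namespace
spec:
  podSelector:
    matchLabels:
      role: frontend
  policyTypes:
  - Egress
  # No egress rules are listed, so all egress from matching Pods is denied.
```

Removing this policy, or adding egress rules that allow the traffic, lets matching packets proceed to the egress traffic selection rules described earlier.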
Limitations

The current limitations for the egress NAT gateway include:

- The egress NAT gateway is only enabled for IPv4 mode.
- Egress IP addresses must be in the same Layer 2 domain as Node IP addresses for this preview.
- Upgrades are not supported for clusters that have been configured to use the preview of the egress NAT gateway.