Configuring an IP masquerade agent


This page explains how to configure clusters created in Google Kubernetes Engine (GKE) Standard mode to perform IP masquerade with the ip-masq-agent. For more information about IP masquerading in GKE Autopilot mode, see Use Egress NAT Policy to configure IP masquerade in Autopilot clusters.

Before you begin

Before you start, make sure you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI.
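If you use the gcloud CLI, one way to enable the API (assuming billing is set up and your project is configured as the gcloud default) is:

gcloud services enable container.googleapis.com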

Checking ip-masq-agent status

This section shows you how to:

  • Determine whether your cluster has an ip-masq-agent DaemonSet.
  • Check the ip-masq-agent ConfigMap resource.

Checking the ip-masq-agent DaemonSet

To check if your cluster is running the ip-masq-agent DaemonSet, use the Google Cloud CLI or the Google Cloud console.

gcloud

  1. Get the credentials for your cluster:

    gcloud container clusters get-credentials CLUSTER_NAME
    

    Replace CLUSTER_NAME with the name of your cluster.

  2. Search for the ip-masq-agent in the kube-system namespace:

    kubectl get daemonsets/ip-masq-agent -n kube-system
    

    If the ip-masq-agent DaemonSet exists, then the output is similar to the following:

    NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    ip-masq-agent   3         3         3       3            3           <none>          13d
    

    If the ip-masq-agent DaemonSet does not exist, then the output is similar to the following:

    Error from server (NotFound): daemonsets.apps "ip-masq-agent" not found
    

Console

  1. In the Google Cloud console, go to the Workloads page.

  2. For Filter, do the following:

    1. Click to clear the Is system object: False filter.
    2. Filter the following properties:
      • Name: ip-masq-agent.
      • Cluster: the name of your cluster.

    If the ip-masq-agent DaemonSet exists, a row for the DaemonSet appears in the table. If the ip-masq-agent DaemonSet does not exist, no rows are displayed.

To create the ip-masq-agent ConfigMap and deploy the ip-masq-agent DaemonSet, refer to Configuring and deploying the ip-masq-agent.

Checking the ip-masq-agent ConfigMap

To check if your cluster is running the ip-masq-agent ConfigMap, use the Google Cloud CLI or the Google Cloud console.

gcloud

  1. Get the credentials for your cluster:

    gcloud container clusters get-credentials CLUSTER_NAME
    

    Replace CLUSTER_NAME with the name of your cluster.

  2. Describe the ip-masq-agent ConfigMap in the kube-system namespace:

    kubectl describe configmaps/ip-masq-agent -n kube-system
    

    If the ip-masq-agent ConfigMap exists, then the output is similar to the following:

    Name:         ip-masq-agent
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  <none>
    
    Data
    ====
    config:
    ----
    nonMasqueradeCIDRs:
      - 198.15.5.92/24
      - 10.0.0.0/8
    masqLinkLocal: false
    resyncInterval: 60s
    
    BinaryData
    ====
    
    Events:  <none>
    

    If the ip-masq-agent ConfigMap does not exist, then the output is similar to the following:

    Error from server (NotFound): configmaps "ip-masq-agent" not found
    

Console

  1. In the Google Cloud console, go to the Configuration page.

  2. For Filter, do the following:

    1. Click to clear the Is system object: False filter.
    2. Filter the following properties:
      • Name: ip-masq-agent.
      • Cluster: the name of your cluster.

    If the ip-masq-agent ConfigMap exists, a row for the ConfigMap appears in the table. If the ip-masq-agent ConfigMap does not exist, no rows are displayed.

If your cluster already has the ip-masq-agent ConfigMap, edit it rather than creating a new one, and then deploy the ip-masq-agent DaemonSet.

Configuring and deploying the ip-masq-agent

This section shows you how to create or edit the ip-masq-agent ConfigMap and how to deploy or delete the ip-masq-agent DaemonSet. To determine which tasks you need to perform, you must first determine whether your cluster already has the ip-masq-agent ConfigMap and ip-masq-agent DaemonSet.

Creating the ip-masq-agent ConfigMap

The following steps show how to create the ip-masq-agent ConfigMap. If your cluster already has the ip-masq-agent ConfigMap, edit an existing ip-masq-agent ConfigMap instead.

  1. Create a configuration file using the following template and save it locally. You can use any name for the local copy of this configuration file.

    nonMasqueradeCIDRs:
      - CIDR_1
      - CIDR_2
    masqLinkLocal: false
    resyncInterval: SYNC_INTERVAL
    

    Replace the following:

    • CIDR_1 and CIDR_2: the IP address ranges in CIDR format. When packets are sent to these destinations, your cluster does not masquerade IP address sources and preserves source Pod IP addresses. If you need more than two CIDRs, add more entries to the nonMasqueradeCIDRs list following the same format. At minimum, the nonMasqueradeCIDRs property should include the node and Pod IP address ranges of your cluster.

    • SYNC_INTERVAL: the interval at which each ip-masq-agent Pod re-reads the ip-masq-agent ConfigMap and writes any changes to its local /etc/config/ip-masq-agent file. The format is an integer followed by a time unit, such as 60s (seconds) or 500ms (milliseconds). If unspecified, the default is 60s.

    Set masqLinkLocal to false (the default) unless you need to enable masquerading for packets sent to link local IPv4 addresses. For more information, see Masquerading to link-local destinations.
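    For example, a completed config file for a cluster whose nodes use 10.128.0.0/9 and whose Pods use 10.4.0.0/14 (hypothetical ranges; substitute your cluster's actual node and Pod ranges) might look like the following:

    nonMasqueradeCIDRs:
      - 10.128.0.0/9   # node IP address range (hypothetical)
      - 10.4.0.0/14    # Pod IP address range (hypothetical)
    masqLinkLocal: false
    resyncInterval: 60s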

  2. Create the ConfigMap resource:

    kubectl create configmap ip-masq-agent \
       --namespace=kube-system \
       --from-file=config=LOCAL_CONFIG_FILE_PATH
    

    Replace LOCAL_CONFIG_FILE_PATH with the path to the config file you created in the previous step.

  3. Describe the ip-masq-agent ConfigMap in the kube-system namespace:

    kubectl describe configmaps/ip-masq-agent -n kube-system
    

    The output is similar to the following:

    Name:         ip-masq-agent
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  <none>
    
    Data
    ====
    config:
    ----
    nonMasqueradeCIDRs:
      - 198.15.5.92/24
      - 10.0.0.0/8
    masqLinkLocal: false
    resyncInterval: 60s
    
    BinaryData
    ====
    Events:  <none>
    
    

    This output includes the config parameter with your configuration changes. You can now deploy the ip-masq-agent DaemonSet.

Editing an existing ip-masq-agent ConfigMap

You can modify the contents of an existing ip-masq-agent ConfigMap by completing the following steps:

  1. Open the ConfigMap in a text editor:

    kubectl edit configmap ip-masq-agent --namespace=kube-system
    
  2. Edit the content of the ConfigMap file:

    apiVersion: v1
    data:
      config: |
        nonMasqueradeCIDRs:
          - CIDR_1
          - CIDR_2
        masqLinkLocal: false
        resyncInterval: SYNC_INTERVAL
    kind: ConfigMap
    metadata:
      name: ip-masq-agent
      namespace: kube-system
    

    Replace the following:

    • CIDR_1 and CIDR_2: the IP address ranges in CIDR format. When packets are sent to these destinations, your cluster does not masquerade IP address sources and preserves source Pod IP addresses. If you need more than two CIDRs, add more entries to the nonMasqueradeCIDRs list following the same format. At minimum, the nonMasqueradeCIDRs property should include the node and Pod IP address ranges of your cluster.

    • SYNC_INTERVAL: the interval at which each ip-masq-agent Pod re-reads the ip-masq-agent ConfigMap and writes any changes to its local /etc/config/ip-masq-agent file. The format is an integer followed by a time unit, such as 60s (seconds) or 500ms (milliseconds). If unspecified, the default is 60s.

    Set masqLinkLocal to false (the default) unless you need to enable masquerading for packets sent to link local IPv4 addresses. For more information, see Masquerading to link-local destinations.

  3. Describe the ip-masq-agent ConfigMap in the kube-system namespace:

    kubectl describe configmaps/ip-masq-agent -n kube-system
    

    The output is similar to the following:

    Name:         ip-masq-agent
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  <none>
    
    Data
    ====
    config:
    ----
    nonMasqueradeCIDRs:
      - 198.15.5.92/24
      - 10.0.0.0/8
    masqLinkLocal: false
    resyncInterval: 60s
    
    BinaryData
    ====
    
    Events:  <none>
    

    This output includes the config parameter which matches the configuration value from the file you created.

Deploying the ip-masq-agent DaemonSet

After you create or edit your ip-masq-agent ConfigMap, deploy the ip-masq-agent DaemonSet.

  1. Save the following manifest as a YAML file:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: ip-masq-agent
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          k8s-app: ip-masq-agent
      template:
        metadata:
          labels:
            k8s-app: ip-masq-agent
        spec:
          hostNetwork: true
          containers:
          - name: ip-masq-agent
            image: k8s.gcr.io/networking/ip-masq-agent:v2.7.0
            args:
                # The masq-chain must be IP-MASQ
                - --masq-chain=IP-MASQ
                # To non-masquerade reserved IP ranges by default,
                # uncomment the following line.
                # - --nomasq-all-reserved-ranges
            securityContext:
              privileged: true
            volumeMounts:
              - name: config-volume
                mountPath: /etc/config
          volumes:
            - name: config-volume
              configMap:
                name: ip-masq-agent
                optional: true
                items:
                  - key: config
                    path: ip-masq-agent
          tolerations:
          - effect: NoSchedule
            operator: Exists
          - effect: NoExecute
            operator: Exists
          - key: "CriticalAddonsOnly"
            operator: "Exists"
    

    This manifest creates a volume named config-volume, which is mounted into the container at /etc/config as specified by the container's volumeMounts entry.

    If you need to edit this manifest, consider the following conditions:

    • The volume name can be anything but must match the container's volumeMount name.

    • The ConfigMap name must match the name of the configMap referenced in the config-volume Volume in the Pod.

    • The name of the chain (--masq-chain) must be IP-MASQ. Otherwise, GKE does not override the default masquerading rules.

    • DaemonSet Pods read from the ip-masq-agent file. The ip-masq-agent file content is the value of the config key in the ConfigMap.

    • To treat reserved IP ranges as non-masquerade destinations by default, uncomment the - --nomasq-all-reserved-ranges line in the args section.

  2. Deploy the DaemonSet:

    kubectl apply -f LOCAL_FILE_PATH
    

    Replace LOCAL_FILE_PATH with the path to the file you created in the previous step.
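    After you apply the manifest, you can verify that the DaemonSet rolled out and that its Pods read the expected configuration. For example (assuming kubectl is still pointed at your cluster):

    kubectl rollout status daemonset/ip-masq-agent -n kube-system
    kubectl exec daemonset/ip-masq-agent -n kube-system -- \
        cat /etc/config/ip-masq-agent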

Deleting the ip-masq-agent

This section shows you how to delete the ip-masq-agent DaemonSet and the ip-masq-agent ConfigMap.

Deleting the ip-masq-agent DaemonSet

If you manually created the ip-masq-agent DaemonSet, you can delete it by running the following command:

kubectl delete daemonsets ip-masq-agent -n kube-system

Deleting the ip-masq-agent ConfigMap

To completely delete the ip-masq-agent ConfigMap, run the following command:

kubectl delete configmap ip-masq-agent -n kube-system

Troubleshooting

Use the following steps to troubleshoot masquerading issues:

  • Confirm the status of the ip-masq-agent. If the ConfigMap is not defined, traffic to the default non-masquerade destinations is not masqueraded and preserves the Pod IP address; traffic to all other destinations is masqueraded and uses the node IP address.
  • Confirm that the destination the Pod is trying to reach is included in the nonMasqueradeCIDRs list in the ConfigMap. If the destination is not in nonMasqueradeCIDRs, traffic to it is masqueraded and uses the node IP address.
  • Confirm that the destination allows traffic from both the node and Pod IP address ranges.
  • If the destination is not reachable from the node or the Pod, run a connectivity test.
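To inspect the masquerade rules that the agent actually programmed, you can list the IP-MASQ chain in the nat table on a node. For example, with a node debug session (NODE_NAME is a placeholder for one of your node names):

kubectl debug node/NODE_NAME -it --image=ubuntu -- \
    chroot /host iptables -t nat -L IP-MASQ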

What's next