Southbound networking patterns



The Apigee runtime uses VPC Network Peering to route traffic to backend services. This interaction is shown in the simplified diagram below:

When creating an Apigee organization, customers must provide a network that Apigee can peer with:

curl -H "$AUTH" -X POST \
  -H "Content-Type:application/json" \
  "https://apigee.googleapis.com/v1/organizations?parent=projects/$PROJECT_ID" \
  -d '{
    "authorizedNetwork": "NETWORK_NAME"
  }'

Because the target (backend) applications reside in the peered network, Apigee can reach their IP addresses and route traffic to them.

Many enterprises have multiple networks where they have deployed their applications. However, Apigee supports peering with only one network. This document explores a strategy for customers who require Apigee to send traffic to multiple networks.


Apigee proposes creating a new network (or reusing an existing one) to serve as a transit network. The transit network acts as a hub for all other networks, and Apigee becomes one of the spokes of this hub.

This is illustrated in the following image:

In this example, the enterprise has workloads in network 1 and network 2 (on Google Cloud) and network 3 (running on-premises).

The customer creates (or reuses) a network called transit. All the networks peer with this network—including the one reserved for Apigee.
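As a sketch, the hub-and-spoke peerings described above could be created with gcloud. The network names (transit, network-1) and peering names below are assumptions for illustration:

```shell
# Create the transit (hub) network. Names here are illustrative.
gcloud compute networks create transit --subnet-mode=custom

# Peer a spoke network with the hub. A VPC peering must be
# created from both sides before it becomes ACTIVE.
gcloud compute networks peerings create transit-to-net1 \
  --network=transit --peer-network=network-1
gcloud compute networks peerings create net1-to-transit \
  --network=network-1 --peer-network=transit
```

Note that VPC Network Peering is not transitive, which is why the sections that follow route traffic through a MIG or an internal load balancer inside the transit network rather than relying on peering alone.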

There are cases where a customer may not want Apigee's network to peer directly with the transit network. In such cases, the customer provisions a new network (called apim) and peers that network with Apigee. The apim network is then peered with the transit network.

This is illustrated in the following image:

Getting traffic to the transit VPC

In the case where the Apigee network does not peer directly with the transit network, set up a Compute Engine managed instance group (MIG) with iptables rules that forward all traffic to the transit network, as shown in the following image:
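A forwarder MIG of this kind might be provisioned as follows. The template name, network and subnet names, region, and group size are assumptions; the startup script simply enables IP forwarding and NAT on each VM:

```shell
# Instance template whose VMs forward packets (all names are illustrative).
gcloud compute instance-templates create apim-forwarder \
  --network=apim --subnet=apim-subnet --region=us-west1 \
  --can-ip-forward \
  --metadata=startup-script='#! /bin/bash
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -j MASQUERADE'

# Managed instance group built from the template.
gcloud compute instance-groups managed create apim-forwarders \
  --template=apim-forwarder --size=2 --region=us-west1
```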

The rest of this document explores strategies on how Apigee can send traffic to workloads running in network 1, network 2, and so on, from the transit network.


Use iptables to route traffic

This option uses iptables rules on a Compute Engine VM to route traffic to different backends. In the Apigee API proxy, the target endpoint would contain ILB_IP1:Port1, ILB_IP1:Port2, and so on. Each port uniquely identifies a different backend service.

On the Compute Engine virtual machine (VM), the customer sets up iptables rules to route Port1 to DESTINATION_IP1:DestPort (in the RFC 1918 network), and so on. Each time a new target is onboarded, the iptables rules must be updated.


In this example, API calls coming to Port1 will be routed to DESTINATION_IP1:DestPort:

# enable IP forwarding on the VM
sysctl -w net.ipv4.ip_forward=1

# change the source IP to the (NAT) VM IP
iptables -t nat -A POSTROUTING -j MASQUERADE

# change the destination IP to the real target IP and port
iptables -t nat -A PREROUTING -p tcp --dport Port1 -j DNAT --to-destination DESTINATION_IP1:DestPort
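Onboarding a second target then means appending another PREROUTING rule. DESTINATION_IP2 and DestPort2 below are placeholders in the same style as above:

```shell
# route API calls arriving on Port2 to a second backend
iptables -t nat -A PREROUTING -p tcp --dport Port2 -j DNAT --to-destination DESTINATION_IP2:DestPort2

# verify the NAT rules currently in place
iptables -t nat -L PREROUTING -n
```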


Use internal load balancer to route traffic

This option uses Compute Engine's internal load balancer (L7) to route requests to various backends. The internal load balancer (ILB) supports path-based (or hostname + path-based) routing. An example URL map is as follows:

defaultService: DEFAULT_BACKEND_SERVICE
hostRules:
- hosts:
  - '*'
  pathMatcher: path-matcher-1
id: '3331829996983273293'
kind: compute#urlMap
name: apigee-l7-ilb
pathMatchers:
- defaultService: DEFAULT_BACKEND_SERVICE
  name: path-matcher-1
  pathRules:
  - paths:
    - /products/*
    service: PRODUCTS_BACKEND_SERVICE
  - paths:
    - /catalog/*
    service: CATALOG_BACKEND_SERVICE

In the Apigee API proxy, the target endpoint would contain the ILB_IP and Port (most likely 443) if the project is peered directly with the transit network.

The ILB in the transit project will forward traffic to various networks based on URL path routing (or hostname + path-based routing). A Compute Engine MIG with iptables is used again to bridge traffic to different networks.
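As a sketch, a URL map like the one shown earlier can be managed declaratively with gcloud. The file name and region below are assumptions:

```shell
# Import a URL map definition for the regional (internal) load balancer.
gcloud compute url-maps import apigee-l7-ilb \
  --source=url-map.yaml \
  --region=us-west1
```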