Google Kubernetes Engine (GKE) provides an agile, scalable platform for container orchestration and for deploying a microservices-based architecture. Apigee Edge makes it easy for you to expose microservices at scale to internal teams, partners, external developers, and customers. This solution describes three patterns for how Apigee Edge can complement GKE deployments to manage the rollout of services running in your clusters. It assumes that you are familiar with Kubernetes concepts such as pods, deployments, Ingress, and Services.
Kubernetes is great when it comes to packaging and managing your applications. However, you might need to expose APIs for internal and external consumption.
Internally, you want to manage entitlements and access for the various teams and applications in your organization. Externally, you might want to enable partners and third-party developers to build on your APIs. Exposing APIs to external consumers at scale can involve weeks or even months of work: architecting authentication and authorization, developing consumer onboarding portals and lifecycle management, managing policies, and possibly monetizing your APIs.
To help with these tasks, you can use Apigee Edge. The Apigee platform provides a rich feature set for API management that makes these and other tasks possible out of the box, accelerating the design and rollout of your APIs.
What is Apigee Edge?
Apigee Edge is a platform that fronts your APIs by letting you create API proxies that map public endpoints to your backend services. The proxies let you manage security, rate limiting and quotas, message transformation, API composition, and policy management. The proxies also help you manage change; you can update or extend your backend services without affecting the public-facing APIs.
Apigee Edge also provides analytics for API traffic. A customizable portal provides access to tools and information for exploring, testing, and consuming APIs.
Deployment options for Apigee Edge
Apigee Edge supports the following deployment models:
Apigee Cloud as a software as a service (SaaS) offering. The cloud model is subscription based and gives you access to Apigee Edge as a service. In this model, all Apigee services are deployed in Apigee's cloud. This frees you from the need to provision, deploy, and maintain hardware to operate the Apigee Edge infrastructure.
Private cloud or on-premises deployment using an infrastructure as a service (IaaS) offering. In this model, the Apigee Edge infrastructure is hosted in your data center. You run the entire API platform in your own data center, including API services, analytics services, and developer services. You also manage message processing and keystores. As with any on-premises deployment, you monitor performance, scale up and down, deal with outages and downtime, update and manage software versions, and replace hardware as it fails.
Customers who deploy API management in the Apigee cloud typically go live with their digital initiatives much faster than those who deploy in the private cloud. Acquiring, provisioning, and deploying hardware, as well as configuring software deployment and providing the training required to deploy and manage API management software, can all delay deployments to private clouds.
For details about running Apigee Edge on a private cloud, see the Apigee documentation.
Using Edge Microgateway
Edge Microgateway is a lightweight, low-footprint gateway that you can install and deploy to work with your backend services. Edge Microgateway functions as the glue between your backend (for example, a GKE cluster) and the Apigee Edge server. Some of the patterns discussed in this solution use the Edge Microgateway.
This solution focuses on patterns that you can use when you want to deploy your microservices in GKE and plan to use an Apigee Edge deployment either on Apigee cloud or on a private cloud.
Comparing Istio (service mesh) and Apigee (API management)
Istio is a service mesh for connecting and monitoring microservices. A service mesh like Istio and an API management platform like Apigee Edge have some functional overlap, and customers sometimes wonder whether they should use only one or the other.
In general, these products complement each other for workloads deployed using Kubernetes. Istio and Apigee serve separate purposes: Istio is for service management and Apigee is for API management. Istio enables service-to-service communication, traffic routing and splitting, fault tolerance, and security within a cluster. In other words, a service mesh such as Istio is business-function agnostic. In contrast, an API management product like Apigee lets you define how you want your APIs exposed, managed, and consumed.
The following table outlines the role that each of these plays in a microservices-based architecture. The list is not exhaustive. It's included here to give you a view of the key differences and to reiterate that in many architectures, both elements are useful and complementary.
| Apigee (API management) | Istio (service mesh) |
|---|---|
| Defines how APIs are exposed, managed, and consumed | Enables service-to-service communication within a cluster |
| Security, rate limiting, quotas, message transformation, and API composition | Traffic routing and splitting, fault tolerance, and security between services |
| API analytics and a developer portal | Monitoring of microservices |
Pattern 1: Using Apigee Gateway with GKE Ingress for load balancing
In the first architectural pattern, you deploy microservices in GKE pods as services, and you deploy the Apigee API gateway in Apigee's cloud using a SaaS deployment model. The following figure shows this pattern, which uses an API gateway with GKE Ingress and Google Cloud Armor.
In this pattern, you have to create an API proxy in Apigee and define the policies that govern the API, such as security, traffic management, and mediation. You set the backend for the proxy to the GKE Ingress proxy that fronts the microservices that are running in GKE.
We recommend that you allow (whitelist) IP address ranges that you trust to reach your resources in Google Cloud. In this pattern, you have to create a BackendConfig resource to secure access to the GKE Ingress. The BackendConfig resource uses Google Cloud Armor and allows access only to the Apigee Edge Gateway's IP addresses, which helps prevent unauthorized access to the cluster.
- This pattern uses the Apigee Edge API gateway platform (not just the lightweight Edge Microgateway) and therefore uses the complete Apigee feature set.
- You can use a single gateway across your Kubernetes cluster or clusters, and across other resources or apps running on Google Cloud, on-premises, or on other public clouds.
- The architecture requires an extra network hop from the Apigee API gateway to the GKE Ingress load balancer.
Implementing pattern 1
To implement this pattern, you perform these tasks:
- Create the cluster.
- Deploy microservices and configure the GKE Ingress proxy.
- Create the API proxy in Apigee.
Create the cluster
You start by creating a GKE cluster. We recommend that you create a cluster that has at least three nodes, each with two vCPUs. The nodes should use a Container-Optimized OS image. (You can use Ubuntu if that's a better match for your business requirements.)
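For example, a cluster that meets these recommendations could be created with a `gcloud` command like the following sketch; the cluster name and zone are placeholders, `n1-standard-2` is one machine type that provides two vCPUs per node, and Container-Optimized OS is GKE's default node image:

```shell
# Placeholder name and zone; adjust for your project.
# n1-standard-2 nodes each provide 2 vCPUs.
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --num-nodes 3 \
    --machine-type n1-standard-2
```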
To get started with administering the cluster and preparing for the following steps, you retrieve your credentials for the `kubectl` tool. For example, you can use a `gcloud` command like the following, substituting appropriate values for the placeholders:

```shell
gcloud container clusters get-credentials [CLUSTER_NAME] \
    --zone [ZONE_NAME] \
    --project [PROJECT_NAME]
```
Deploy microservices and configure the GKE Ingress proxy
You deploy your microservices as you would any microservice with Kubernetes. Make sure that you enable HTTP Load Balancing when you create the cluster (as shown in the following figure), because you need a load balancer to be able to configure an Ingress resource.
The following figure shows how you can use the Cloud Console to enable an HTTP(S) load balancer in the GKE cluster.
The following example shows a simple Ingress resource YAML file that configures the proxy to point to a web backend. This Ingress proxy defines rules for routing API traffic to your backend services or microservices.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: web
    servicePort: 8080
```
In this pattern, you use Google Cloud Armor to help with IPv4-based and IPv6-based access controls. You use these controls to allow the Apigee Gateway IP addresses to communicate with the GKE cluster (that is, with the Ingress proxy).
A BackendConfig resource holds configuration information that's specific to Cloud Load Balancing. Kubernetes Ingress and Service objects don't offer a way to configure provider-specific features like Google Cloud Armor; the BackendConfig resource provides a way for you to add that configuration.

To add this configuration, you create a BackendConfig resource, create a Service, and associate the Service's port with the BackendConfig resource. Finally, you map the Ingress and Service resources appropriately.
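These steps can be sketched as follows. This is a minimal example, not a production configuration: the resource names, the Cloud Armor policy name `apigee-allowlist`, and the `app: web` selector are assumptions, and the Cloud Armor policy must already exist and permit only the Apigee Edge gateway's IP ranges.

```yaml
# Hypothetical BackendConfig that references an existing
# Google Cloud Armor security policy.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: apigee-backendconfig
spec:
  securityPolicy:
    name: apigee-allowlist
---
# Attach the BackendConfig to the Service that backs the Ingress.
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    cloud.google.com/backend-config: '{"default": "apigee-backendconfig"}'
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 8080
    targetPort: 8080
```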
Create the API proxy in Apigee
When the GKE cluster setup is complete, you create an API proxy using the Apigee web console. You configure the API proxy with the following details:
- The proxy endpoints.
- The target endpoints. (These should point to the GKE Ingress resource's IP address.) We recommend that you use static IPs for production workloads.
- Policies that are appropriate for your enterprise.
For a complete list of policies, see the policy reference in the Apigee documentation.
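The static-IP recommendation above can be implemented by reserving a global address and referencing it from the Ingress by annotation, so the target endpoint you configure in Apigee doesn't change if the Ingress is re-created. The address name `web-static-ip` here is a placeholder:

```yaml
# First reserve the address (placeholder name "web-static-ip"):
#   gcloud compute addresses create web-static-ip --global
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: web-static-ip
spec:
  backend:
    serviceName: web
    servicePort: 8080
```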
To summarize, one of the key benefits of this pattern is that you can take advantage of the entire array of policies available with the Apigee API management suite. For example, you can define policies such as quotas to control the number of requests that an API proxy allows over a period of time. Or you can configure a spike arrest to throttle the number of requests processed by an API proxy and sent to a backend; this helps protect against performance lags and downtime.
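As an illustration, a minimal Apigee SpikeArrest policy that smooths traffic to roughly 30 requests per second might look like the following sketch; the policy name is arbitrary:

```xml
<!-- Hypothetical spike arrest policy; attach it to a proxy flow
     in the Apigee console or proxy bundle. -->
<SpikeArrest name="Spike-Arrest-1">
  <DisplayName>Spike Arrest</DisplayName>
  <Rate>30ps</Rate>
</SpikeArrest>
```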
Pattern 2: Using Edge Microgateway as a service
In the second pattern, you use Apigee Edge Microgateway as a Kubernetes Service. In this pattern, you install a lightweight Edge Microgateway in the Kubernetes cluster. All API traffic is directed to this gateway, which works in conjunction with Apigee Edge to enforce security and other policies. Edge Microgateway is responsible for talking to the services within the cluster.
The GKE Ingress proxy can be internal facing or external facing, depending on whether the services need to be consumed by external clients. The following figure shows this pattern.
- The lightweight gateway can be used by both internal and external clients.
- There's reduced latency between the pods and Edge Microgateway as compared to pattern 1, because all the components are in the same network.
- Edge Microgateway is a lightweight gateway and doesn't offer the full functionality of Apigee Edge API gateway.
Implementing pattern 2
To implement this pattern, you start by performing some of the steps from pattern 1:
- Create the cluster.
- Deploy microservices and configure the GKE Ingress proxy.
You then perform these additional steps:
Install Edge Microgateway for Kubernetes
To set up Edge Microgateway in the GKE cluster, you download the latest Edge Microgateway for Kubernetes release and run it. This tool lets you configure Edge Microgateway resources in your cluster. You can run the following commands in a terminal on your computer or in Cloud Shell. Make sure that you have the Kubernetes config file set up so that the `kubectl` command can access your cluster.
```shell
curl -L https://raw.githubusercontent.com/apigee-internal/microgateway/master/kubernetes/release/downloadEdgeMicrok8s.sh | sh -
```
The shell script downloads the latest Edge Microgateway release, which consists of the following:
- A top-level directory for the release.
- Configuration (YAML) files, in the `install/kubernetes` directory, that you deploy to Kubernetes.
- Sample applications.
- The `edgemicroctl` binary in the `bin` directory. You need this binary in order to inject Edge Microgateway into your Kubernetes cluster as a service.
After you download and run the shell file, you run the following command to set the `$PATH` environment variable to include the `bin` subdirectory that was created by the shell command:
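The exact path depends on where the release was extracted; assuming it was extracted into the current working directory, the command looks like this:

```shell
# Append the release's bin subdirectory (which contains edgemicroctl)
# to the search path for this shell session.
export PATH="$PATH:$PWD/bin"
```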
Create Edge Microgateway foundational resources in GKE
To set up Edge Microgateway, you apply the YAML configuration files that are in the `install/kubernetes` directory. This creates a Namespace for the Edge Microgateway Service (called `edgemicro-system`) and the necessary cluster roles.
You apply the configuration using the following command on your computer or in Cloud Shell:

```shell
kubectl apply -f install/kubernetes/edgemicro.yaml
```
Validate the setup
You can make sure that the Edge Microgateway Service is deployed in the `edgemicro-system` namespace by running the following command:

```shell
kubectl get svc -n edgemicro-system
```
If the service is running, it returns the service name and type.
Generate a key and secret
To complete the configuration of Edge Microgateway in your Kubernetes cluster, you need to provide it with the credentials that allow it to connect to your Apigee organization.
To do this, you use the Edge Microgateway command-line tool (`edgemicro`), which lets you generate a key and secret. You can then run the `edgemicroctl` command (shown later) to generate the Kubernetes objects that configure the Edge Microgateway instance on your Kubernetes cluster.
You use the following commands to install the `edgemicro` tool in a local environment or in a Docker container and then initialize it:

```shell
npm install edgemicro -g
edgemicro init
```
You generate a key and secret by running the following `edgemicro` command with your Apigee organization, environment, user, and password details. (These details are available in the Apigee console.)

```shell
edgemicro configure -o [ORGANIZATION] -e [ENV] -u [USER] -p [PASSWORD]
```
This command generates a YAML file named `org-env-config.yaml` that contains configuration details about your Edge Microgateway setup.
Deploy Edge Microgateway
You use the `edgemicroctl` command to complete the deployment of Edge Microgateway to your Kubernetes cluster. Substitute the values you collected earlier. For `[CONFIG_FILE_PATH]`, use the path of the `org-env-config.yaml` file that you generated by running the `edgemicro configure` command in the previous step.

```shell
kubectl apply -f <(edgemicroctl \
    -org=[ORGANIZATION] \
    -env=[ENV] \
    -key=[EDGEMICRO_KEY] \
    -sec=[EDGEMICRO_SECRET] \
    -conf=[CONFIG_FILE_PATH])
```
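Before setting up the Ingress, you can confirm that the deployment succeeded by checking that the `edge-microgateway` Service and its pods exist. The Service name matches what the Ingress rule targets; the `app=edge-microgateway` label selector is an assumption, so verify the labels against your own cluster:

```shell
# Verify that the Edge Microgateway Service and its pods are running.
kubectl get svc edge-microgateway
kubectl get pods -l app=edge-microgateway
```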
Set up the Ingress object
The next step is to set up an Ingress resource with a rule that maps the Edge Microgateway Service as a backend, as shown in the architecture diagram for this pattern.
To create the Ingress resource, you run the following command:
```shell
cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: edge-microgateway-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: edge-microgateway
          servicePort: 8000
EOF
```
To summarize, in this pattern, the Ingress proxy routes API requests to Edge Microgateway, which in turn routes incoming requests to your microservices. This approach lets the gateway enforce policies (rate limiting, spike arrest, and others) that you define during the design of your APIs and of the Kubernetes cluster.
Pattern 3: Using Edge Microgateway as a sidecar
In pattern 3, you use the sidecar pattern to deploy Edge Microgateway in a separate container but in the same pod as your microservice. This approach is also used by other frameworks such as Istio. The Edge Microgateway sidecar interacts with Apigee Edge for its configuration. In order for external clients to talk to the services, you must deploy a GKE Ingress resource.
The following figure shows this pattern.
- The lightweight gateway can be used by both internal and external clients.
- There's minimal latency, because the gateway is in the same pod as the microservice.
- The system scales as the pods are scaled, given that the Edge Microgateway runs as a container in every pod.
- This pattern offers flexibility, because it decouples policy management across microservices. For example, service A might need API-key-based authentication, but service B might not.
- Edge Microgateway is a lightweight gateway and does not offer the full functionality of Apigee Edge API gateway.
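As an example of the per-service flexibility noted above, a client calling a key-protected service sends the key in the `x-api-key` header, which is the header that Edge Microgateway checks by default. The hostname, path, and key here are placeholders:

```shell
# Call a key-protected service through its Edge Microgateway sidecar.
curl -H "x-api-key: [YOUR_API_KEY]" http://[INGRESS_IP]/servicea
```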
Implementing pattern 3
To implement this pattern, you start by following the same steps as for pattern 2:
- Create the cluster.
- Install Edge Microgateway for Kubernetes.
- Create Edge Microgateway foundational resources in GKE and validate the setup.
- Generate a key and secret.
After you've performed these steps, you perform the following step that's unique to this pattern:
Inject Edge Microgateway into a pod
To inject Edge Microgateway into a service pod as a sidecar, you run the following command. Substitute the values you collected earlier. For `[CONFIG_FILE_PATH]`, use the path of the `org-env-config.yaml` file that you generated by running the `edgemicro configure` command earlier. For `[SERVICE_DEPLOYMENT_FILE]`, use the path to the deployment file of the service whose pod will get the sidecar.

```shell
kubectl apply -f <(edgemicroctl \
    -org=[ORGANIZATION] \
    -env=[ENV] \
    -key=[EDGEMICRO_KEY] \
    -sec=[EDGEMICRO_SECRET] \
    -conf=[CONFIG_FILE_PATH] \
    -svc=[SERVICE_DEPLOYMENT_FILE])
```
To summarize, the difference between the command you run here and the one that you run for pattern 2 is the service (`svc`) parameter. In pattern 3, the YAML file that you specify adds Edge Microgateway as a sidecar proxy to the pods for the Service that you select using the `svc` parameter. With sidecar deployments, an Apigee API proxy for your service is created for you automatically; you don't need to create an Edge Microgateway-aware proxy.
For information on injecting Edge Microgateway as a sidecar automatically, see Deploy Edge Microgateway as a sidecar proxy in the Apigee documentation.
- Try out the Apigee platform for yourself by signing up for a free trial.
- Learn about Apigee by browsing through the Apigee documentation, which addresses topics such as:
- Managing API versions
- Policy best practices
- Importing OpenAPI Specifications to create API proxies
- Fault handling
- Explore setting up the Apigee adapter for Istio.
- Learn more about Google Cloud Armor.
- For more information about Edge Microgateway, see the Edge Microgateway documentation.
- Try out other Google Cloud features for yourself. Have a look at our tutorials.