This page applies to Apigee, but not to Apigee hybrid.
View Apigee Edge documentation.
This document explains how to configure active health checks for Apigee when you use Private Service Connect (PSC) for northbound network routing (traffic from clients to Apigee). Active health checks help prevent loss of network traffic during a regional failure.
Overview
If you plan to use PSC for Apigee northbound network routing, follow the instructions in this document to configure active health check. At this time, PSC does not support active health check monitoring. To work around this limitation of PSC, you can modify the Apigee installation configuration to use a managed instance group (MIG), which does provide active health check capability.
You could use outlier detection for health monitoring instead; however, during regional failures you may periodically lose some traffic, because outlier detection uses live traffic as its health indicator: it periodically re-routes part of the live traffic to the failed region to check its health.
Figure 1 shows the proposed architecture. A service endpoint connects to the service attachment in the Apigee instance, and a MIG proxies traffic to the service endpoint. You enable health check monitoring on the MIG.
Figure 1: MIG-based health check approach
Prerequisites
You can apply the technique described in this document to Apigee installations whether or not they use VPC peering. However, for a VPC-peered installation, the active health check technique described here applies only if you are using PSC for your routing configuration.
Before performing the steps in this section:
- For non-VPC peering installations:
- Complete Apigee provisioning Steps 1 through 6 for subscription-based or Pay-as-you-go installations. At this time, the only option is to perform these steps using the command-line interface.
- Skip Step 7: Configure routing, and do the following steps instead.
- For VPC peering installations that use PSC for routing:
- Complete Apigee provisioning Steps 1 through 7 for subscription-based or Pay-as-you-go installations. At this time, the only option is to perform these steps using the command-line interface.
- Skip Step 8: Configure routing, and do the following steps instead.
1. Configure a PSC service endpoint for the Apigee service attachment
In this step, you create a PSC Service Endpoint that points to the service attachment in the Apigee instance:
- Get the service attachment from the Apigee instance you created previously:
curl -i -X GET -H "Authorization: Bearer $AUTH" \
  "https://apigee.googleapis.com/v1/organizations/$PROJECT_ID/instances"
In the following sample output, note the serviceAttachment value:
{
  "instances": [
    {
      "name": "us-west1",
      "location": "us-west1",
      "host": "10.82.192.2",
      "port": "443",
      "createdAt": "1645731488019",
      "lastModifiedAt": "1646504754219",
      "diskEncryptionKeyName": "projects/my-project/locations/us-west1/keyRings/us-west1/cryptoKeys/dek",
      "state": "ACTIVE",
      "peeringCidrRange": "SLASH_22",
      "runtimeVersion": "1-7-0-20220228-190814",
      "ipRange": "10.82.192.0/22,10.82.196.0/28",
      "consumerAcceptList": [
        "875609189304"
      ],
      "serviceAttachment": "projects/bfac74a67a320c43a12p-tp/regions/us-west1/serviceAttachments/apigee-us-west1-crw1"
    }
  ]
}
- Create a PSC Service Endpoint that points to the service attachment that you obtained from the instance response body in the previous step, as explained in Create a Private Service Connect endpoint.
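The linked page shows how to create the endpoint in the Cloud console. As a rough command-line alternative, the following gcloud sketch does the same thing; the address and endpoint names (apigee-psc-endpoint-ip, apigee-psc-endpoint) are hypothetical, and PROJECT_ID, REGION, VPC_NAME, SUBNET_NAME, and SERVICE_ATTACHMENT are placeholders for your own values:
# Reserve an internal IP address in your VPC subnet for the endpoint.
gcloud compute addresses create apigee-psc-endpoint-ip \
  --project PROJECT_ID --region REGION --subnet SUBNET_NAME
# Create the PSC endpoint: a forwarding rule that targets the Apigee service attachment.
gcloud compute forwarding-rules create apigee-psc-endpoint \
  --project PROJECT_ID --region REGION \
  --network VPC_NAME --address apigee-psc-endpoint-ip \
  --target-service-attachment SERVICE_ATTACHMENT
Note the IP address assigned to the endpoint; you use it as the service endpoint IP in the next section.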
2. Configure a MIG that points to the service endpoint
In this step, you create a MIG that proxies traffic to the service endpoint. You can then enable active health check on the MIG.
2A. Enable Private Google Access for a subnet of your VPC network
To enable Private Google Access for a subnet of your VPC network, follow the steps listed in Enabling Private Google Access.
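If you prefer the command line, the following sketch enables Private Google Access on a subnet (SUBNET_NAME, REGION, and PROJECT_ID are placeholders for your own values); the linked instructions remain the authoritative reference:
gcloud compute networks subnets update SUBNET_NAME \
  --project PROJECT_ID \
  --region REGION \
  --enable-private-ip-google-access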
2B. Set environment variables
The instructions in this section use environment variables to refer to repeatedly used strings. We recommend that you set these before continuing:
MIG_NAME=YOUR_MIG_NAME # A name you provide for the MIG
VPC_NAME=default # If you are using a shared VPC, use the shared VPC name
VPC_SUBNET=default # Private Google Access must be enabled for this subnet
REGION=RUNTIME_REGION # The same region as your Apigee runtime instance
SERVICE_ENDPOINT_IP=YOUR_SERVICE_ENDPOINT_IP # The endpoint IP of the service endpoint you just created
You'll use these variables several times during the remaining processes. If you wish to configure multiple regions, then create variables with values specific for each region.
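For example, for a runtime instance in us-west1, the variables might look like the following (these values are purely illustrative; substitute your own):
MIG_NAME=apigee-mig-us-west1 # Example MIG name for this region
VPC_NAME=default
VPC_SUBNET=default
REGION=us-west1 # Same region as the Apigee runtime instance
SERVICE_ENDPOINT_IP=10.100.0.5 # Example IP of the PSC service endpoint in this region
Repeat the same set of variables with region-specific values for each additional region you configure.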
2C. Create a managed instance group
In this step, you create and configure a managed instance group (MIG).
- Create an instance template by executing the following command:
gcloud compute instance-templates create $MIG_NAME \
  --project $PROJECT_ID \
  --region $REGION \
  --network $VPC_NAME \
  --subnet $VPC_SUBNET \
  --tags=https-server,apigee-mig-proxy,gke-apigee-proxy \
  --machine-type e2-medium --image-family debian-12 \
  --image-project debian-cloud --boot-disk-size 20GB \
  --no-address \
  --metadata ENDPOINT=$SERVICE_ENDPOINT_IP,startup-script-url=gs://apigee-5g-saas/apigee-envoy-proxy-release/latest/conf/startup-script.sh
As you can see from this command, machines are of type e2-medium. They run Debian 12 and have 20 GB of disk. The startup-script.sh script configures the MIG to route inbound traffic from the load balancer to the Apigee instance.
- Create a managed instance group by executing the following command:
gcloud compute instance-groups managed create $MIG_NAME \
  --project $PROJECT_ID --base-instance-name apigee-mig \
  --size 2 --template $MIG_NAME --region $REGION
- Configure autoscaling for the group by executing the following command:
gcloud compute instance-groups managed set-autoscaling $MIG_NAME \
  --project $PROJECT_ID --region $REGION --max-num-replicas 3 \
  --target-cpu-utilization 0.75 --cool-down-period 90
- Define a named port by executing the following command:
gcloud compute instance-groups managed set-named-ports $MIG_NAME \
  --project $PROJECT_ID --region $REGION --named-ports https:443
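Optionally, you can confirm that the MIG has created its instances before moving on. The following verification command is not part of the required steps; it is a sketch that uses the variables set earlier and lists the instances so you can check that their status is RUNNING:
gcloud compute instance-groups managed list-instances $MIG_NAME \
  --project $PROJECT_ID --region $REGION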
3. Configure the load balancer with health check monitoring
In the following steps you configure a load balancer with health check monitoring.
3A. Create an SSL certificate and key for the load balancer
You only need to create the credentials once, whether you are installing in a single region or in multiple regions. In a later step, you will associate these credentials with the load balancer's target HTTPS proxy.
You can create the credentials with:
- Your own certificate from a certificate authority
- A Google-managed SSL certificate
- A self-signed certificate (not recommended for production).
For more information on creating and using SSL certificates for Google Cloud load balancer, see SSL certificates and SSL certificate overview.
In the following example, we create a Google-managed SSL certificate:
- Create these environment variables:
CERTIFICATE_NAME=YOUR_CERT_NAME
DOMAIN_HOSTNAME=YOUR_DOMAIN_HOSTNAME
Set DOMAIN_HOSTNAME to a valid domain hostname that you have registered. In a later step, you will obtain the load balancer's IP address and update the domain A record to point to that address. For example, a domain hostname might look like this: foo.example.com.
- Execute the gcloud compute ssl-certificates create command:
gcloud compute ssl-certificates create $CERTIFICATE_NAME \
  --domains=$DOMAIN_HOSTNAME \
  --project $PROJECT_ID \
  --global
The certificate can take up to an hour to be provisioned. To check the status of the provisioning, execute this command:
gcloud compute ssl-certificates describe $CERTIFICATE_NAME \
  --global \
  --format="get(name,managed.status,managed.domainStatus)"
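If you want to wait for provisioning from a script, a minimal polling sketch (assuming a bash shell and the variables set above) might look like this; it simply re-runs the describe command until the managed status reports ACTIVE:
until [ "$(gcloud compute ssl-certificates describe $CERTIFICATE_NAME --global \
    --format='get(managed.status)')" = "ACTIVE" ]; do
  echo "Certificate not yet ACTIVE; checking again in 60 seconds..."
  sleep 60
done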
3B. Create a health check
- Create a health check:
gcloud compute health-checks create https HEALTH_CHECK_NAME \
  --project $PROJECT_ID --port 443 --global \
  --request-path /healthz/ingress
You'll use this health check to ensure that the backend service is running. For configuring more advanced health checks against a specific proxy, see Performing health checks.
- Create a backend service:
gcloud compute backend-services create PROXY_BACKEND_NAME \
  --project $PROJECT_ID \
  --protocol HTTPS \
  --health-checks HEALTH_CHECK_NAME \
  --port-name https \
  --timeout 302s \
  --connection-draining-timeout 300s \
  --global
- Add the MIG to your backend service with the following command:
gcloud compute backend-services add-backend PROXY_BACKEND_NAME \
  --project $PROJECT_ID --instance-group $MIG_NAME \
  --instance-group-region $REGION \
  --balancing-mode UTILIZATION --max-utilization 0.8 --global
- Create a load balancing URL map with the following command:
gcloud compute url-maps create MIG_PROXY_MAP_NAME \
  --project $PROJECT_ID --default-service PROXY_BACKEND_NAME
- Create a load balancing target HTTPS proxy with the following command:
gcloud compute target-https-proxies create MIG_HTTPS_PROXY_NAME \
  --project $PROJECT_ID --url-map MIG_PROXY_MAP_NAME \
  --ssl-certificates $CERTIFICATE_NAME
3C. Get a reserved IP address and create firewall rules
You must assign an IP address to the load balancer and then create rules that allow the load balancer to access the MIG. You only need to do this step once, whether you are installing in a single region or in multiple regions.
- Reserve an IP address for the load balancer:
gcloud compute addresses create ADDRESSES_NAME \
  --project $PROJECT_ID \
  --ip-version=IPV4 \
  --global
- Create a global forwarding rule with the following command:
gcloud compute forwarding-rules create FORWARDING_RULE_NAME \
  --project $PROJECT_ID --address ADDRESSES_NAME --global \
  --target-https-proxy MIG_HTTPS_PROXY_NAME --ports 443
- Get the reserved IP address by executing the following command:
gcloud compute addresses describe ADDRESSES_NAME \
  --project $PROJECT_ID --format="get(address)" --global
- Important step: Go to the site, DNS host, or ISP where your DNS records are managed, and make sure your domain's DNS record resolves to the IP address of the Google Cloud load balancer. This address is the IP value returned in the last step. For more detail, see Update the DNS A and AAAA records to point to the load balancer's IP address.
- Create a firewall rule that lets the load balancer access the MIG by using the following command:
gcloud compute firewall-rules create FIREWALL_RULE_NAME \
  --description "Allow incoming from GLB on TCP port 443 to Apigee Proxy" \
  --project $PROJECT_ID --network $VPC_NAME --allow=tcp:443 \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 --target-tags=gke-apigee-proxy
Note that the IP address ranges 130.211.0.0/22 and 35.191.0.0/16 are the source IP address ranges for Google Load Balancing. This firewall rule allows Google Cloud Load Balancing to make health check requests to the MIG.
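After the firewall rule is in place, you can optionally confirm that the load balancer's active health checks are passing. This verification is not part of the required steps; it is a sketch that uses the backend service name from the commands above:
gcloud compute backend-services get-health PROXY_BACKEND_NAME \
  --project $PROJECT_ID --global
Each MIG instance should eventually report healthState: HEALTHY; it can take a few minutes after the startup script finishes for the /healthz/ingress probes to succeed.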
Apigee provisioning is complete. Go to Deploy a sample proxy.