Google offers various network edge services that can augment the capabilities and security of services based outside of Google Cloud, for example, services running in an on-premises data center or in another public cloud.
These network edge services let you do the following:
- Accept and route external customer traffic globally with a single Anycast VIP.
- Enable pre-configured web application firewall (WAF) rules and apply allowlists and denylists to incoming traffic with Google Cloud Armor.
- Control user access to your applications and resources with Identity-Aware Proxy (IAP).
- Optimize content delivery and end-user latency with Cloud CDN.
To bring these benefits to your private, on-premises, or multi-cloud services, you deploy an external HTTP(S) load balancer to receive traffic from the public internet. The external HTTP(S) load balancer forwards traffic to a middle proxy that Traffic Director configures. This middle proxy routes traffic to your on-premises environment or non-Google Cloud environment by using Cloud VPN or Cloud Interconnect.
This tutorial walks you through an end-to-end example that uses Google Cloud Armor at Google's edge to selectively allow clients to access an on-premises service privately. Clients are allowed access based on their IP address.
You complete the following tasks:
- Deploy Envoy as a middle proxy in a managed instance group (MIG). This Envoy proxy is automatically connected to Traffic Director.
- Create a simulated private, on-premises virtual machine (VM) instance. In a real-world example, you would probably already have an on-premises VM.
- Configure Traffic Director to route all requests that reach the middle proxy to the simulated on-premises VM.
- Create an external HTTP(S) load balancer to receive traffic from the public internet and forward it to the middle proxy.
- Attach a Google Cloud Armor security policy to the external HTTP(S) load balancer.
After you complete these tasks, you can optionally explore additional edge services and advanced traffic management features.
Before setting up your middle proxy, complete the following tasks:
- Review Prepare to set up Traffic Director with Envoy and complete all the prerequisite tasks, which include granting the required permissions and roles and enabling the Traffic Director API.
- Ensure that your private on-premises endpoints are reachable from within your Google Cloud Virtual Private Cloud (VPC) network through Cloud VPN or Cloud Interconnect. The example used in this tutorial only routes traffic to an endpoint within Google Cloud, so you don't need to configure hybrid connectivity to follow along. In a real-world deployment scenario, configuring hybrid connectivity would be required.
- Ensure that your VPC subnet CIDR ranges do not conflict with your remote CIDR ranges. When IP addresses overlap, subnet routes are prioritized over remote connectivity.
- For this demonstration, obtain the necessary permissions to create and update Google Cloud Armor security policies. The permissions might vary if you want to use a different service, such as Cloud CDN.
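If you manage access with gcloud, granting a role that includes the security policy permissions might look like the following sketch; PROJECT_ID and USER_EMAIL are placeholders, and roles/compute.securityAdmin is one role that covers creating and updating security policies:
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/compute.securityAdmin"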
Deploy the middle proxy on Compute Engine VMs
This section describes how to deploy Envoy as a middle proxy on Compute Engine to receive traffic from the external load balancer and forward it to your remote destination.
Create the instance template for the middle proxy
An instance template specifies configuration for VMs within a managed instance group (MIG).
Use the following command to create the instance template. VMs created from this template run an Envoy proxy that is automatically connected to Traffic Director:
gcloud compute instance-templates create td-middle-proxy \
    --service-proxy=enabled \
    --tags=allow-hc
To customize your Envoy deployment, for example to specify the network name, set a log path, or enable tracing, see the Automated Envoy deployment option guide.
Create the MIG for the middle proxy
Create the MIG based on the template. In this example, you can keep the instance group size at 1 to deploy a single instance. For production usage, enable autoscaling (for example, based on CPU utilization) to avoid creating a bottleneck if your middle proxy needs to handle a lot of traffic.
gcloud compute instance-groups managed create td-middle-proxy-us-central1-a \
    --zone=us-central1-a \
    --template=td-middle-proxy \
    --size=1
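If you later enable autoscaling for production use, a minimal sketch based on CPU utilization might look like the following; the replica limit and utilization target are illustrative values, not recommendations:
gcloud compute instance-groups managed set-autoscaling td-middle-proxy-us-central1-a \
    --zone=us-central1-a \
    --max-num-replicas=5 \
    --target-cpu-utilization=0.6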
For future configurations and verification, identify and store the internal IP address of the instance in the MIDDLE_PROXY_IP environment variable:
MIDDLE_PROXY_IP=$(gcloud compute instances list \
    --filter="name~'td-middle-proxy-us-central1-a-.*'" \
    --zones=us-central1-a \
    --format="value(networkInterfaces.networkIP)")
In this example, you create the MIG that contains VM instances running Envoy in us-central1-a. Later in this tutorial, you create an external load balancer to handle public internet traffic from your clients.
Because the external load balancer can automatically route traffic to the region that is closest to your clients, and to the MIG within that region, consider creating MIGs in multiple regions, as in the example that follows. For a full list of Google Cloud's available regions and zones, see Regions and zones.
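For example, a second MIG based on the same instance template could be created in another region like this; the zone shown is an arbitrary choice for illustration:
gcloud compute instance-groups managed create td-middle-proxy-europe-west1-b \
    --zone=europe-west1-b \
    --template=td-middle-proxy \
    --size=1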
Deploy the simulated on-premises service
This section describes how to deploy a hybrid connectivity network endpoint group (NEG). In a production deployment, this NEG would contain an endpoint (IP:port) that resolves to your on-premises server. In this example, you create a Compute Engine VM that is reachable on an IP:port in your VPC network. This VM acts as your simulated on-premises server.
Create the simulated on-premises VM
Deploy a single VM instance to simulate a private, on-premises server:
gcloud compute instances create on-prem-vm \
    --zone=us-central1-a \
    --metadata startup-script='#! /bin/bash
## Installs apache and a custom homepage
sudo su -
apt-get update
apt-get install -y apache2
cat <<EOF > /var/www/html/index.html
<html><body><h1>Hello world from on-premises!</h1></body></html>
EOF'
Identify and store its internal IP address for future configurations and verification. The server on this VM listens for inbound requests on port 80:
ON_PREM_IP=$(gcloud compute instances describe on-prem-vm \
    --zone=us-central1-a \
    --format="value(networkInterfaces.networkIP)" | sed "s/['\[\]]*//g")
Create the NEG
Create the NEG for this demonstration setup by specifying the non-gcp-private-ip-port network endpoint type. Add the IP address and port for your simulated on-premises VM as an endpoint to this NEG. From the previous step, the IP address is stored in the ON_PREM_IP environment variable.
Create the NEG:
gcloud compute network-endpoint-groups create td-on-prem-neg \
    --network-endpoint-type=non-gcp-private-ip-port \
    --zone=us-central1-a
Add the simulated on-premises VM's IP:port to your new NEG:
gcloud compute network-endpoint-groups update td-on-prem-neg \
    --zone=us-central1-a \
    --add-endpoint="ip=$ON_PREM_IP,port=80"
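Optionally, confirm that the endpoint was recorded by listing the NEG's endpoints:
gcloud compute network-endpoint-groups list-network-endpoints td-on-prem-neg \
    --zone=us-central1-a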
Configure Traffic Director with Cloud Load Balancing components
This section shows how to configure Traffic Director and enable your middle proxy to forward traffic to your private, on-premises service. You configure the following components:
- A health check. This health check behaves slightly differently from the health checks configured for other NEG types.
- A backend service. For more information, see the Backend services overview.
- A routing rule map. This step includes creating a forwarding rule, a target proxy, and a URL map. For more information, see Routing rule maps.
Create the health check
Health checks verify that your endpoints are healthy and able to receive requests. Health checking for this type of NEG relies on Envoy's distributed health checking mechanism; other NEG types use Google Cloud's centralized health checking system. Create the health check:
gcloud compute health-checks create http td-on-prem-health-check
Create the backend service
Create a backend service with the INTERNAL_SELF_MANAGED load-balancing scheme to be used with Traffic Director. When you create this backend service, specify the health check that you previously created:
gcloud compute backend-services create td-on-prem-backend-service \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --health-checks=td-on-prem-health-check
Add the NEG that you created earlier as the backend of this backend service:
gcloud compute backend-services add-backend td-on-prem-backend-service \
    --global \
    --network-endpoint-group=td-on-prem-neg \
    --network-endpoint-group-zone=us-central1-a \
    --balancing-mode=RATE \
    --max-rate-per-endpoint=5
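Optionally, confirm that the NEG is attached by describing the backend service; the format expression simply narrows the output to the backends field:
gcloud compute backend-services describe td-on-prem-backend-service \
    --global \
    --format="get(backends)"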
Create the routing rule map
The routing rule map defines how Traffic Director routes traffic to your backend service.
Create a URL map that uses the backend service defined previously:
gcloud compute url-maps create td-hybrid-url-map \
    --default-service=td-on-prem-backend-service
Create a file called target_proxy.yaml with the following contents:
name: td-hybrid-proxy
proxyBind: true
urlMap: global/urlMaps/td-hybrid-url-map
Use the import command to create the target HTTP proxy (for more information, see Target proxies for Traffic Director):
gcloud compute target-http-proxies import td-hybrid-proxy \
    --source=target_proxy.yaml
Create a forwarding rule that references this target HTTP proxy. Set the forwarding rule's IP address to 0.0.0.0. Setting the rule's IP address to 0.0.0.0 routes traffic based on the inbound port, HTTP hostname, and path information configured in the URL map. The IP address specified in the HTTP request is ignored.
gcloud compute forwarding-rules create td-hybrid-forwarding-rule \
    --global \
    --load-balancing-scheme=INTERNAL_SELF_MANAGED \
    --address=0.0.0.0 \
    --target-http-proxy=td-hybrid-proxy \
    --ports=8080 \
    --network=default
Verify that the middle proxy can route requests to the simulated on-premises service
Traffic Director is now configured to route traffic through the middle proxy to your simulated private, on-premises service. You can verify this configuration by creating a test client VM, logging into that VM, and sending a request to the middle proxy that is running Envoy. After verifying the configuration, delete the test client VM.
Get the IP address of the middle proxy. You only need this information for the verification step:
gcloud compute instances list
Write down or otherwise note the internal IP address of the instance in the td-middle-proxy-us-central1-a MIG.
Create a test client instance:
gcloud compute instances create test-client \
    --zone=us-central1-a
Use ssh to log in to the test client:
gcloud compute ssh test-client --zone=us-central1-a
Send a request to the middle proxy VM, substituting the IP address that you obtained previously for MIDDLE_PROXY_IP.
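For example, assuming the port 8080 that you configured on the Traffic Director forwarding rule, and with MIDDLE_PROXY_IP replaced by the address that you noted:
curl http://MIDDLE_PROXY_IP:8080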
You should see the following output:
Hello world from on-premises!
Exit the test client VM. After you exit, you can delete the VM:
gcloud compute instances delete test-client \
    --zone=us-central1-a
Deploy the external HTTP(S) load balancer
In this section, you deploy an external HTTP(S) load balancer that sends incoming traffic to the middle proxy. This deployment is a standard external HTTP(S) load balancer setup.
Reserve an external IP address
Create a global static external IP address (external-lb-vip) to which external clients will send traffic. You retrieve this external IP address during the verification step later in this tutorial.
gcloud compute addresses create external-lb-vip \
    --ip-version=IPV4 \
    --global
Set up the external HTTP load balancer
Configure the external load balancer to route internet customer traffic to your already-configured middle proxy.
Create a health check that is used to determine whether the MIG that runs the middle proxy is healthy and able to receive traffic:
gcloud compute health-checks create tcp tcp-basic-check \
    --port=8080
Create a firewall rule to allow health checking. You re-use the allow-hc tag here to apply the firewall rule to the middle proxy VMs:
gcloud compute firewall-rules create fw-allow-health-checks \
    --network=default \
    --action=ALLOW \
    --direction=INGRESS \
    --source-ranges=35.191.0.0/16,130.211.0.0/22 \
    --target-tags=allow-hc \
    --rules=tcp
Create a backend service:
gcloud compute backend-services create td-middle-proxy-backend-service \
    --protocol=HTTP \
    --health-checks=tcp-basic-check \
    --global
Add the middle proxy MIG as the backend to this backend service:
gcloud compute backend-services add-backend td-middle-proxy-backend-service \
    --instance-group=td-middle-proxy-us-central1-a \
    --instance-group-zone=us-central1-a \
    --global
Create a URL map to route the incoming requests to the middle proxy as the default backend service:
gcloud compute url-maps create lb-map-http \
    --default-service=td-middle-proxy-backend-service
Create a target HTTP proxy so that requests to the external load balancer's forwarding rule virtual IP address (VIP) are handled according to the URL map:
gcloud compute target-http-proxies create http-lb-proxy \
    --url-map=lb-map-http
Create a global forwarding rule to route incoming requests to the target HTTP proxy:
gcloud compute forwarding-rules create http-forwarding-rule \
    --address=external-lb-vip \
    --global \
    --load-balancing-scheme=EXTERNAL \
    --target-http-proxy=http-lb-proxy \
    --ports=80
Set the MIG's named port
Set a named port for the instance group to allow your middle proxy to receive HTTP traffic from the external load balancer:
gcloud compute instance-groups managed set-named-ports td-middle-proxy-us-central1-a \
    --named-ports=http:8080 \
    --zone=us-central1-a
Verify the external HTTP(S) load balancer configuration
In this step, you verify that the external load balancer is set up correctly.
You should be able to send a request to the load balancer's VIP and get a response from the simulated on-premises VM:
PUBLIC_VIP=$(gcloud compute addresses describe external-lb-vip \
    --format="get(address)" \
    --global)
Send a curl request to the public IP address and verify that you receive the Hello world response from the simulated on-premises service.
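For example, reusing the PUBLIC_VIP variable set in the previous step:
curl http://$PUBLIC_VIP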
You see the following output:
Hello world from on-premises!
Enable Google Cloud Armor
Configure Google Cloud Armor security policies to allow access to your service only from CLIENT_IP_RANGE, which should include the public IP address of the client device that you intend to test with (for example, your client's public IP address expressed as a /32 range). These policies are applied on the backend service of the external load balancer (in this example, td-middle-proxy-backend-service, which points at the middle proxy). For more information about the permissions required to set these rules, see Configure Google Cloud Armor security policies.
Create the Google Cloud Armor security policy:
gcloud compute security-policies create external-clients-policy \
    --description="policy for external clients"
Update the default rule of the security policy to deny all traffic:
gcloud compute security-policies rules update 2147483647 \
    --security-policy=external-clients-policy \
    --action="deny-404"
Add a higher priority rule to allow traffic from a specific IP range:
gcloud compute security-policies rules create 1000 \
    --security-policy=external-clients-policy \
    --description="allow traffic from CLIENT_IP_RANGE" \
    --src-ip-ranges="CLIENT_IP_RANGE" \
    --action="allow"
Attach the Google Cloud Armor security policies to your backend service:
gcloud compute backend-services update td-middle-proxy-backend-service \
    --security-policy=external-clients-policy
Issue a curl request to the external HTTP(S) load balancer's public virtual IP address. If your client device's IP address is within the allowed CLIENT_IP_RANGE specified previously, you should receive the expected response.
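For example, reuse the same request from the allowed client; substitute the literal load balancer address for $PUBLIC_VIP if you are testing from a machine where the variable is not set:
curl http://$PUBLIC_VIP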
You see the following output:
Hello world from on-premises!
Issue the same curl request from a different client device whose IP address lies outside of CLIENT_IP_RANGE, or update your security policy rule to no longer include your client IP address. You should now receive a 404 Not Found error.
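If you don't have a second client device, one way to simulate the denied case is to update rule 1000 so that it no longer matches your address and then repeat the request; the range shown is the RFC 5737 documentation range, used here only as an illustration:
gcloud compute security-policies rules update 1000 \
    --security-policy=external-clients-policy \
    --src-ip-ranges="192.0.2.0/24"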
My on-premises service is not accessible through the global external HTTP(S) load balancer IP address
Assuming that your on-premises service is already reachable from the Google Cloud VMs that run Envoy, follow these steps to troubleshoot your setup:
Make sure that the Google Cloud Envoy MIG is reported as healthy. In the Google Cloud Console, go to Network services > Load balancing and click the lb-map-http URL map to view its details. You should be able to see that 1/1 of the instances in the td-middle-proxy-us-central1-a MIG is healthy.
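As an alternative to the console, you can check backend health from the command line:
gcloud compute backend-services get-health td-middle-proxy-backend-service \
    --global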
If it’s not healthy, check if a firewall rule has been configured to allow ingress health-check traffic to the Google Cloud VMs that are running Envoy:
gcloud compute firewall-rules describe fw-allow-health-checks
You should see the following output:
allowed:
- IPProtocol: tcp
...
direction: INGRESS
disabled: false
...
sourceRanges:
- 130.211.0.0/22
- 35.191.0.0/16
targetTags:
- allow-hc
To find more complex routing policies for your on-premises or multi-cloud services, see Traffic Director's advanced traffic management features.
To deploy Traffic Director, see the Setup guide overview.
To enable additional network edge services, see the following guides: