Google offers a variety of network edge services that can augment the capabilities and security of services running outside of Google Cloud, for example, in an on-premises data center or in another public cloud. These network edge services let you:
- Accept and route external customer traffic globally with a single Anycast VIP
- Reduce server load by terminating TLS traffic at the edge with an external HTTP(S) load balancer and Google-managed SSL certificates
- Enable pre-configured web application firewall (WAF) rules and apply allow and deny lists to incoming traffic with Google Cloud Armor
- Control user access to your applications and resources with Identity-Aware Proxy
- Optimize content delivery and end-user latency with Cloud CDN
To bring these benefits to your private, on-premises, or multi-cloud services, you deploy an external HTTP(S) load balancer to receive traffic from the public internet. The HTTP(S) load balancer forwards traffic to a Traffic Director-configured middle proxy that routes the traffic to your on-premises environment or to another cloud environment by using Cloud VPN or Cloud Interconnect.
This tutorial walks you through an end-to-end example that uses Google Cloud Armor at Google's edge to selectively allow clients to access an on-premises service privately. Clients are allowed access based on their IP address.
You complete the following tasks:
- Deploying Envoy as a middle proxy in a managed instance group. This Envoy proxy is automatically connected to Traffic Director.
- Creating a simulated private, on-premises VM. In a real example, you would probably already have an on-premises VM.
- Configuring Traffic Director to route all requests that reach the middle proxy to the simulated on-premises VM
- Creating an external HTTP(S) load balancer to receive traffic from the public Internet and forward it to the middle proxy
- Attaching a Google Cloud Armor security policy to the external HTTP(S) load balancer
After you complete these tasks, you can optionally explore additional edge services and advanced traffic management features.
Before setting up your middle proxy, complete the following tasks:
- Review Preparing for Traffic Director setup and complete all prerequisite tasks.
- Ensure that your private on-premises endpoints are reachable from within your Google Cloud VPC network through Cloud VPN or Cloud Interconnect. The example in this guide routes traffic to an endpoint within Google Cloud, so you don't need to configure hybrid connectivity to follow along. In a real-world deployment, hybrid connectivity would be required.
- Ensure that your VPC subnet CIDR ranges do not conflict with your remote CIDR ranges. Subnet routes are prioritized over remote connectivity when IP addresses overlap.
- For this demonstration, obtain the necessary permissions to create and update Google Cloud Armor security policies. The permissions might vary if you want to use a different service, such as Cloud CDN.
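The CIDR-overlap prerequisite above can be checked programmatically by comparing each VPC subnet range against each remote range. The following sketch is an addition to this guide, not part of the official setup; it defines a small shell helper that shells out to Python's standard `ipaddress` module, which is assumed to be available as `python3`:

```shell
# Hypothetical helper: prints "overlap" if two CIDR ranges intersect, "ok" otherwise.
# Assumes python3 with the standard ipaddress module is installed.
overlaps() {
  python3 -c 'import ipaddress, sys
a, b = sys.argv[1], sys.argv[2]
print("overlap" if ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b)) else "ok")' "$1" "$2"
}

overlaps 10.128.0.0/20 10.0.0.0/8      # a default us-central1 subnet vs. a 10/8 remote range -> overlap
overlaps 10.128.0.0/20 192.168.0.0/16  # disjoint ranges -> ok
```

If any pair prints `overlap`, renumber one of the ranges before setting up hybrid connectivity, because subnet routes take priority over remote routes for overlapping addresses.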
Deploying the middle proxy on Compute Engine VMs
This section describes how to deploy Envoy as a middle proxy on Compute Engine to receive traffic from the external load balancer and forward it to your remote destination.
Creating the instance template for the middle proxy
An instance template specifies the configuration for VMs within a managed instance group. The following command creates an instance template for VM instances that run an Envoy proxy connected to Traffic Director:
gcloud compute instance-templates create td-middle-proxy \
  --service-proxy=enabled \
  --tags=allow-hc
To customize your Envoy deployment, for example, to specify the network name, set a log path, or enable tracing, see the Automated Envoy deployment option guide.
Creating the managed instance group for the middle proxy
Create the managed instance group based on the template. In this example, you can keep the instance group size of 1 to deploy a single instance. For production use, enable autoscaling, for example, based on CPU utilization, to avoid creating a bottleneck if your middle proxy needs to handle a lot of traffic.
gcloud compute instance-groups managed create td-middle-proxy-us-central1-a \
  --zone us-central1-a \
  --template=td-middle-proxy \
  --size=1
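For production, the autoscaling mentioned above can be enabled on the group with the `set-autoscaling` command. The replica counts and CPU target below are illustrative assumptions, not recommendations from this guide:

```shell
# Sketch: autoscale the Envoy middle-proxy group on CPU utilization.
# The min/max replica counts and the 75% CPU target are example values only.
gcloud compute instance-groups managed set-autoscaling td-middle-proxy-us-central1-a \
  --zone us-central1-a \
  --min-num-replicas 1 \
  --max-num-replicas 5 \
  --target-cpu-utilization 0.75
```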
In this example, you create the managed instance group, containing VM instances that run Envoy, in us-central1-a. Later in this guide, you create an external load balancer to handle public internet traffic from your clients. Because the external load balancer can automatically route traffic to the region that is closest to your clients, and to the managed instance group within that region, you might want to create multiple managed instance groups. For a full list of Google Cloud's available regions and zones, see the Regions and zones documentation.
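For example, a second middle-proxy group could be created in another region from the same instance template. The europe-west1-b zone below is an illustrative choice, not part of the original walkthrough:

```shell
# Sketch: an additional Envoy middle-proxy group in a second region (example zone).
gcloud compute instance-groups managed create td-middle-proxy-europe-west1-b \
  --zone europe-west1-b \
  --template=td-middle-proxy \
  --size=1
```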
Deploying the simulated on-premises service
This section describes how to deploy a hybrid connectivity network endpoint group (NEG). In a production deployment, this NEG would contain an endpoint (IP:port) that resolves to your on-premises server. In this example, you just create a Compute Engine VM that is reachable on an IP address and port. This VM acts as your simulated on-premises server.
Creating the simulated on-premises VM
Deploy a single VM instance to simulate a private, on-premises server.
gcloud compute instances create on-prem-vm \
  --zone us-central1-a \
  --metadata startup-script='#! /bin/bash
## Installs apache and a custom homepage
sudo su -
apt-get update
apt-get install -y apache2
cat <<EOF > /var/www/html/index.html
<html><body><h1>Hello world from on-premises!</h1></body></html>
EOF'
Identify and store its internal IP address for future configurations and verification. The server on this VM listens for inbound requests on port 80.
ON_PREM_IP=$(gcloud compute instances describe on-prem-vm \
  --zone=us-central1-a \
  --format="value(networkInterfaces[0].networkIP)")
Creating the network endpoint group
Create the network endpoint group (NEG) for this demonstration setup by using the non-gcp-private-ip-port network endpoint type. Add the IP address and port for your simulated on-premises VM as an endpoint to this NEG. Following the previous step, the IP address is stored in the ON_PREM_IP environment variable.
First, create the NEG:
gcloud compute network-endpoint-groups create td-on-prem-neg \
  --network-endpoint-type non-gcp-private-ip-port \
  --zone us-central1-a
Next, add the IP:port to your newly-created NEG.
gcloud compute network-endpoint-groups update td-on-prem-neg \
  --zone=us-central1-a \
  --add-endpoint="ip=$ON_PREM_IP,port=80"
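To confirm that the endpoint was registered, you can optionally list the NEG's endpoints. This verification step is an addition to the guide:

```shell
# List the endpoints in the NEG; the simulated on-premises IP with port 80 should appear.
gcloud compute network-endpoint-groups list-network-endpoints td-on-prem-neg \
  --zone us-central1-a
```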
Configuring Traffic Director with Google Cloud load balancing components
This section shows how to configure Traffic Director and enable your middle proxy to forward traffic to your private, on-premises service. You configure the following components:
- A health check. This health check behaves slightly differently from the health checks configured for other NEG types, as described below.
- A backend service. For more information on backend services, see Backend services overview.
- A routing rule map. This includes creating a forwarding rule, a target proxy, and a URL map. For more information, read Using forwarding rules and Using URL maps.
Creating the health check
Health checks verify that your endpoints are healthy and able to receive requests. Health checking for this type of NEG relies on Envoy's distributed health checking mechanism, while other NEG types use Google Cloud's centralized health checking system.
gcloud compute health-checks create http td-on-prem-health-check
Creating the backend service
Create a backend service with the INTERNAL_SELF_MANAGED load balancing scheme to be used with Traffic Director. When you create this backend service, specify the health check that you just created.
gcloud compute backend-services create td-on-prem-backend-service \
  --global \
  --load-balancing-scheme=INTERNAL_SELF_MANAGED \
  --health-checks=td-on-prem-health-check
Next, add the NEG that you created earlier as the backend of this backend service.
gcloud compute backend-services add-backend td-on-prem-backend-service \
  --global \
  --network-endpoint-group=td-on-prem-neg \
  --network-endpoint-group-zone=us-central1-a \
  --balancing-mode=RATE \
  --max-rate-per-endpoint=5
Creating the routing rule map
The routing rule map defines how Traffic Director routes traffic to your backend service.
Create a URL map that uses the backend service defined above.
gcloud compute url-maps create td-hybrid-url-map \
  --default-service td-on-prem-backend-service
Create a file called target_proxy.yaml with the following contents:
name: td-hybrid-proxy
proxyBind: true
urlMap: global/urlMaps/td-hybrid-url-map
Then, use the import command to create the target HTTP proxy. For more information, see Target proxies for Traffic Director.
gcloud compute target-http-proxies import td-hybrid-proxy \
  --source target_proxy.yaml
Create a forwarding rule that references the target HTTP proxy. Set the forwarding rule's IP address to 0.0.0.0. Setting the rule's address to 0.0.0.0 routes traffic based on the inbound port, HTTP hostname, and path information configured in the URL map. The IP address specified in the HTTP request is ignored.
gcloud compute forwarding-rules create td-hybrid-forwarding-rule \
  --global \
  --load-balancing-scheme=INTERNAL_SELF_MANAGED \
  --address=0.0.0.0 \
  --target-http-proxy=td-hybrid-proxy \
  --ports=8080 \
  --network=default
Verifying that the middle proxy can route requests to the simulated on-premises service
Traffic Director is now configured to route traffic through the middle proxy to your simulated private, on-premises service. You can verify this configuration by creating a test client VM, logging in to that VM, and sending a request to the middle proxy that is running Envoy. After you verify the configuration, delete the test client VM.
First, get the IP address of the middle proxy. You only need this information for the verification step:
gcloud compute instances list
Write down or otherwise note the IP address of the instance in the td-middle-proxy-us-central1-a managed instance group.
Next, create a test client instance:
gcloud compute instances create test-client --zone us-central1-a
Use ssh to log in to the test client:
gcloud compute ssh test-client --zone us-central1-a
Finally, send a request to the middle proxy VM on port 8080, replacing MIDDLE_PROXY_IP with the IP address that you noted earlier:
curl http://MIDDLE_PROXY_IP:8080
You should see the following output:
Hello world from on-premises!
Exit the test client VM. After you exit, you can delete the VM:
gcloud compute instances delete test-client \
  --zone us-central1-a
Deploying the external HTTP load balancer
In this section, you deploy an external HTTP(S) load balancer that sends incoming traffic to the middle proxy. This is a standard external HTTP(S) load balancer setup.
Reserve an external IP address
Create a global static external IP address (external-lb-vip) to which external clients will send traffic. You retrieve this external IP address during the verification step later in this guide.
gcloud compute addresses create external-lb-vip \
  --ip-version=IPV4 \
  --global
Setting up the external HTTP load balancer
Configure the external load balancer to route internet customer traffic to your already-configured middle proxy.
Create a health check that is used to determine whether the managed instance group that runs the middle proxy is healthy and able to receive traffic.
gcloud compute health-checks create tcp tcp-basic-check \
  --port 8080
Create a firewall rule to allow health checking. Note that you reuse the allow-hc tag here to apply the firewall rule to the middle proxy VMs.
gcloud compute firewall-rules create fw-allow-health-checks \
  --network default \
  --action ALLOW \
  --direction INGRESS \
  --source-ranges 35.191.0.0/16,130.211.0.0/22 \
  --target-tags allow-hc \
  --rules tcp
Create a backend service.
gcloud compute backend-services create td-middle-proxy-backend-service \
  --protocol HTTP \
  --health-checks tcp-basic-check \
  --global
Add the middle proxy managed instance group as the backend to this backend service.
gcloud compute backend-services add-backend td-middle-proxy-backend-service \
  --instance-group=td-middle-proxy-us-central1-a \
  --instance-group-zone=us-central1-a \
  --global
Create a URL map to route the incoming requests to the middle proxy as the default backend service.
gcloud compute url-maps create lb-map-http \
  --default-service td-middle-proxy-backend-service
Create a target HTTP proxy so that requests to the external load balancer's forwarding rule VIP are handled according to the URL map.
gcloud compute target-http-proxies create http-lb-proxy \
  --url-map lb-map-http
Create a global forwarding rule to route incoming requests to the target HTTP proxy.
gcloud compute forwarding-rules create http-forwarding-rule \
  --address=external-lb-vip \
  --global \
  --load-balancing-scheme=EXTERNAL \
  --target-http-proxy=http-lb-proxy \
  --ports=80
Setting the managed instance group's named port
Set a named port for the instance group to allow your middle proxy to receive HTTP traffic from the external load balancer.
gcloud compute instance-groups managed set-named-ports td-middle-proxy-us-central1-a \
  --named-ports http:8080 \
  --zone us-central1-a
Verifying the external HTTP load balancer configuration
In this step, you verify that the external load balancer is set up correctly. You should be able to send a request to the load balancer's VIP and get a response from the simulated on-premises VM.
PUBLIC_VIP=$(gcloud compute addresses describe external-lb-vip \
  --format="get(address)" \
  --global)
Send a curl request to the public IP address, and verify that you receive the Hello world message:
curl http://$PUBLIC_VIP
You see the following output:
Hello world from on-premises!
Enabling Google Cloud Armor
Configure Google Cloud Armor security policies to allow access to your service only from CLIENT_IP_RANGE, which should include the public IP address of the client device that you intend to test with; for example, "192.0.2.0/24". These policies are applied to the backend service of the external load balancer (in this example, td-middle-proxy-backend-service, which points at the middle proxy). For more information on the permissions required to set these rules, see Configuring Google Cloud Armor security policies.
Create the Google Cloud Armor security policy.
gcloud compute security-policies create external-clients-policy \
  --description "policy for external clients"
Update the default rule of the security policy to deny all traffic.
gcloud compute security-policies rules update 2147483647 \
  --security-policy external-clients-policy \
  --action "deny-404"
Add a higher priority rule to allow traffic from a specific IP range.
gcloud compute security-policies rules create 1000 \
  --security-policy external-clients-policy \
  --description "allow traffic from CLIENT_IP_RANGE" \
  --src-ip-ranges "CLIENT_IP_RANGE" \
  --action "allow"
Attach the Google Cloud Armor security policies to your backend service.
gcloud compute backend-services update td-middle-proxy-backend-service \
  --security-policy external-clients-policy
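To confirm the attachment, you can optionally read back the backend service's securityPolicy field. This extra verification step is an assumption of this rewrite, not part of the original flow:

```shell
# Print the security policy attached to the backend service;
# the output should reference external-clients-policy.
gcloud compute backend-services describe td-middle-proxy-backend-service \
  --global \
  --format="get(securityPolicy)"
```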
Send a curl request to the external HTTP(S) load balancer's public virtual IP address:
curl http://$PUBLIC_VIP
You should still receive the expected response if your client device's IP address is within the allowed CLIENT_IP_RANGE specified above.
You see the following output:
Hello world from on-premises!
Issue the same curl request from a different client device whose IP address lies outside of CLIENT_IP_RANGE, or update your security policy rule so that it no longer includes your client IP address. You should now receive a 404 Not Found error.
My on-premises service is not accessible through the global external HTTP(S) load balancer IP address
Assuming that your on-premises service is already accessible from the Google Cloud VMs that are running Envoy, follow these steps to troubleshoot your setup.
Make sure that the Envoy managed instance group is reported as healthy. In the Google Cloud Console, go to Network services > Load balancing, and then click the URL map lb-map-http to view its details. You should see that 1/1 of the instances in td-middle-proxy-us-central1-a is healthy.
If it is not healthy, check whether a firewall rule has been configured to allow ingress health check traffic to the Google Cloud VMs that are running Envoy:
gcloud compute firewall-rules describe fw-allow-health-checks
You should see output similar to the following:
allowed:
- IPProtocol: tcp
...
direction: INGRESS
disabled: false
...
sourceRanges:
- 130.211.0.0/22
- 35.191.0.0/16
targetTags:
- allow-hc
- Establish a connection between Google Cloud and your on-premises network using a hybrid connectivity product. Replace the simulated on-premises backend VM with your real on-premises or multi-cloud service.
- Enable additional network edge services, such as Cloud CDN and Identity-Aware Proxy.
- Explore Traffic Director's advanced traffic management features for more complex routing policies for your on-premises or multi-cloud services.