Set up network edge services with hybrid connectivity network endpoint groups
Google offers various network edge services that can augment the capabilities and security of services based outside of Google Cloud, for example, services running in an on-premises data center or in another public cloud.
These network edge services let you do the following:
- Accept and route external customer traffic globally with a single Anycast VIP.
- Reduce server load by terminating TLS traffic at the edge with an external Application Load Balancer and Google-managed SSL certificates.
- Enable pre-configured web application firewall (WAF) rules and apply allowlists and denylists to incoming traffic with Google Cloud Armor.
- Control user access to your applications and resources with Identity-Aware Proxy (IAP).
- Optimize content delivery and end-user latency with Cloud CDN.
To bring these benefits to your private, on-premises, or multi-cloud services, you deploy an external Application Load Balancer to receive traffic from the public internet. The external Application Load Balancer forwards traffic to a middle proxy that Cloud Service Mesh configures. This middle proxy routes traffic to your on-premises environment or non-Google Cloud environment by using Cloud VPN or Cloud Interconnect.
This tutorial walks you through an end-to-end example that uses Google Cloud Armor at Google's edge to selectively allow clients to access an on-premises service privately. Clients are allowed access based on their IP address.
You complete the following tasks:
- Deploy Envoy as a middle proxy in a managed instance group (MIG). This Envoy proxy is automatically connected to Cloud Service Mesh.
- Create a simulated private, on-premises virtual machine (VM) instance. In a real-world example, you would probably already have an on-premises VM.
- Configure Cloud Service Mesh to route all requests that reach the middle proxy to the simulated on-premises VM.
- Create an external Application Load Balancer to receive traffic from the public internet and forward it to the middle proxy.
- Attach a Google Cloud Armor security policy to the external Application Load Balancer.
After you complete these tasks, you can optionally explore additional edge services and advanced traffic management features.
Prerequisites
Before setting up your middle proxy, complete the following tasks:
- Review Prepare to set up Cloud Service Mesh with Envoy and complete all the prerequisite tasks, which include granting the required permissions and roles and enabling the Cloud Service Mesh API.
- Ensure that your private on-premises endpoints are reachable from within your Google Cloud Virtual Private Cloud (VPC) network through Cloud VPN or Cloud Interconnect. The example used in this tutorial only routes traffic to an endpoint within Google Cloud, so you don't need to configure hybrid connectivity to follow along. In a real-world deployment scenario, configuring hybrid connectivity would be required.
- Ensure that your VPC subnet CIDR ranges don't conflict with your remote CIDR ranges. When IP addresses overlap, subnet routes are prioritized over remote connectivity. (A quick way to list your subnet ranges follows this list.)
- For this demonstration, obtain the necessary permissions to create and update Google Cloud Armor security policies. The permissions might vary if you want to use a different service, such as Cloud CDN.
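One quick way to list your existing subnet ranges, assuming you are working in the default network used elsewhere in this tutorial (adjust the network name for your environment), is:

gcloud compute networks subnets list \
  --network=default \
  --format="table(name,region,ipCidrRange)"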
Deploy the middle proxy on Compute Engine VMs
This section describes how to deploy Envoy as a middle proxy on Compute Engine to receive traffic from the external load balancer and forward it to your remote destination.
Create the instance template for the middle proxy
An instance template specifies configuration for VMs within a managed instance group (MIG).
Use the following command to create an instance template; VM instances created from this template run an Envoy proxy that is connected to Cloud Service Mesh:
gcloud compute instance-templates create td-middle-proxy \
  --service-proxy=enabled \
  --tags=allow-hc
To customize your Envoy deployment, such as by specifying the network name, setting a log path, or enabling tracing, see the Automated Envoy deployment option guide.
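As a rough illustration of such customization, the following sketch assumes that the --service-proxy flag accepts tracing, access-log, and network sub-keys as described in that guide; the template name is hypothetical, and you should confirm the exact key names in the Automated Envoy deployment option guide before relying on them:

# Hypothetical customized template; the --service-proxy sub-key names are assumptions
gcloud compute instance-templates create td-middle-proxy-custom \
  --service-proxy=enabled,tracing=ON,access-log=/var/log/envoy/access.log,network=default \
  --tags=allow-hc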
Create the MIG for the middle proxy
Create the MIG based on the template. In this example, you can keep the instance group size at 1 to deploy a single instance. For production use, enable autoscaling, for example based on CPU utilization, to avoid creating a bottleneck if your middle proxy needs to handle a lot of traffic.
gcloud compute instance-groups managed create td-middle-proxy-us-central1-a \
  --zone=us-central1-a \
  --template=td-middle-proxy \
  --size=1
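If you opt into autoscaling for production, a minimal sketch based on CPU utilization (the replica limit and target utilization below are illustrative values, not recommendations) looks like this:

# Autoscale the middle-proxy MIG on CPU utilization; tune the limits for your workload
gcloud compute instance-groups managed set-autoscaling td-middle-proxy-us-central1-a \
  --zone=us-central1-a \
  --max-num-replicas=5 \
  --target-cpu-utilization=0.6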
For future configurations and verification, identify and store the internal IP address of the instance in MIDDLE_PROXY_IP:

MIDDLE_PROXY_IP=$(gcloud compute instances list \
  --filter="name~'td-middle-proxy-us-central1-a-.*'" \
  --zones=us-central1-a \
  --format="value(networkInterfaces.networkIP)")
In this example, you create the MIG that contains VM instances running Envoy in us-central1-a. Later in this tutorial, you create an external load balancer to handle public internet traffic from your clients.

Because the external load balancer can automatically route traffic to the region that is closest to your clients, and to the MIG within that region, consider creating multiple MIGs. For a full list of Google Cloud's available regions and zones, see Regions and zones.
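For example, a second middle-proxy MIG in another region can reuse the same instance template; the region and zone below are illustrative, and you would also add the new MIG as a backend of the external load balancer's backend service later in this tutorial:

# Illustrative second MIG in a different region, created from the same template
gcloud compute instance-groups managed create td-middle-proxy-europe-west1-b \
  --zone=europe-west1-b \
  --template=td-middle-proxy \
  --size=1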
Deploy the simulated on-premises service
This section describes deploying a hybrid connectivity network endpoint group (NEG). In a production deployment, this NEG would contain an endpoint (IP:port) that resolves to your on-premises server. In this example, you create a Compute Engine VM that is reachable on an IP:port. This VM acts as your simulated on-premises server.
Create the simulated on-premises VM
Deploy a single VM instance to simulate a private, on-premises server:
gcloud compute instances create on-prem-vm \
  --zone=us-central1-a \
  --metadata startup-script='#! /bin/bash
## Installs apache and a custom homepage
sudo su -
apt-get update
apt-get install -y apache2
cat <<EOF > /var/www/html/index.html
<html><body><h1>Hello world from on-premises!</h1></body></html>
EOF'
Identify and store its internal IP address for future configurations and verification. The server on this VM listens for inbound requests on port 80:

ON_PREM_IP=$(gcloud compute instances describe on-prem-vm \
  --zone=us-central1-a \
  --format="value(networkInterfaces.networkIP)" | sed "s/['\[\]]*//g")
Create the NEG
Create the NEG for this demonstration setup by specifying the non-gcp-private-ip-port network endpoint type. Add the IP address and port for your simulated on-premises VM as an endpoint to this NEG. From the previous step, the IP address is stored in the ON_PREM_IP environment variable.
Create the NEG:
gcloud compute network-endpoint-groups create td-on-prem-neg \
  --network-endpoint-type=non-gcp-private-ip-port \
  --zone=us-central1-a
Add the IP:port to your new NEG:

gcloud compute network-endpoint-groups update td-on-prem-neg \
  --zone=us-central1-a \
  --add-endpoint="ip=$ON_PREM_IP,port=80"
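As an optional check, you can list the NEG's endpoints to confirm that the IP:port was added:

gcloud compute network-endpoint-groups list-network-endpoints td-on-prem-neg \
  --zone=us-central1-a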
Configure Cloud Service Mesh with Cloud Load Balancing components
This section shows how to configure Cloud Service Mesh and enable your middle proxy to forward traffic to your private, on-premises service. You configure the following components:
- A health check. This health check behaves slightly differently from the health checks configured for other NEG types.
- A backend service. For more information, see the Backend services overview.
- A routing rule map. This step includes creating a forwarding rule, a target proxy, and a URL map. For more information, see Routing rule maps.
Create the health check
Health checks verify that your endpoints are healthy and able to receive requests. Health checking for this type of NEG relies on Envoy's distributed health checking mechanism, whereas other NEG types use Google Cloud's centralized health checking system. Create the health check:

gcloud compute health-checks create http td-on-prem-health-check
Create the backend service
Create a backend service with the INTERNAL_SELF_MANAGED load-balancing scheme to be used with Cloud Service Mesh. When you create this backend service, specify the health check that you previously created:

gcloud compute backend-services create td-on-prem-backend-service \
  --global \
  --load-balancing-scheme=INTERNAL_SELF_MANAGED \
  --health-checks=td-on-prem-health-check
Add the NEG that you created earlier as the backend of this backend service:
gcloud compute backend-services add-backend td-on-prem-backend-service \
  --global \
  --network-endpoint-group=td-on-prem-neg \
  --network-endpoint-group-zone=us-central1-a \
  --balancing-mode=RATE \
  --max-rate-per-endpoint=5
Create the routing rule map
The routing rule map defines how Cloud Service Mesh routes traffic to your backend service.
Create a URL map that uses the backend service defined previously:
gcloud compute url-maps create td-hybrid-url-map \
  --default-service=td-on-prem-backend-service
Create a file called target_proxy.yaml with the following contents:

name: td-hybrid-proxy
proxyBind: true
urlMap: global/urlMaps/td-hybrid-url-map
Use the import command to create the target HTTP proxy (for more information, see Target proxies for Cloud Service Mesh):

gcloud compute target-http-proxies import td-hybrid-proxy \
  --source=target_proxy.yaml
Create a forwarding rule that references this target HTTP proxy. Set the forwarding rule's IP address to 0.0.0.0. Setting the rule's IP address to 0.0.0.0 routes traffic based on the inbound port, HTTP hostname, and path information configured in the URL map. The IP address specified in the HTTP request is ignored.

gcloud compute forwarding-rules create td-hybrid-forwarding-rule \
  --global \
  --load-balancing-scheme=INTERNAL_SELF_MANAGED \
  --address=0.0.0.0 \
  --target-http-proxy=td-hybrid-proxy \
  --ports=8080 \
  --network=default
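As an optional check, you can describe the resources you just created to confirm the routing configuration, for example:

gcloud compute url-maps describe td-hybrid-url-map

gcloud compute forwarding-rules describe td-hybrid-forwarding-rule \
  --global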
Verify that the middle proxy can route requests to the simulated on-premises service
Cloud Service Mesh is now configured to route traffic through the middle proxy to your simulated private, on-premises service. You can verify this configuration by creating a test client VM, logging into that VM, and sending a request to the middle proxy that is running Envoy. After verifying the configuration, delete the test client VM.
Get the IP address of the middle proxy. You only need this information for the verification step:
gcloud compute instances list
Write down or otherwise note the IP address of the instance in the td-middle-proxy-us-central1-a MIG.

Create a test client instance:
gcloud compute instances create test-client \
  --zone=us-central1-a
Use ssh to sign in to the test client:

gcloud compute ssh test-client --zone=us-central1-a
Send a request to the middle proxy VM, substituting the IP address that you obtained previously for MIDDLE_PROXY_IP:

curl $MIDDLE_PROXY_IP:8080
You should see the following output:
Hello world from on-premises!
Exit the test client VM. After you exit, you can delete the VM:
gcloud compute instances delete test-client \
  --zone=us-central1-a
Deploy the external Application Load Balancer
In this section, you deploy an external Application Load Balancer that sends incoming traffic to the middle proxy. This deployment is a standard external Application Load Balancer setup.
Reserve an external IP address
Create a global static external IP address (external-lb-vip) to which external clients will send traffic. You retrieve this external IP address during the verification step later in this tutorial.

gcloud compute addresses create external-lb-vip \
  --ip-version=IPV4 \
  --global
Set up the external HTTP load balancer
Configure the external load balancer to route internet customer traffic to your already-configured middle proxy.
Create a health check that is used to determine whether the MIG that runs the middle proxy is healthy and able to receive traffic:
gcloud compute health-checks create tcp tcp-basic-check \
  --port=8080
Create a firewall rule to allow health checking. You reuse the allow-hc tag here to apply the firewall rule to the middle proxy VMs:

gcloud compute firewall-rules create fw-allow-health-checks \
  --network=default \
  --action=ALLOW \
  --direction=INGRESS \
  --source-ranges=35.191.0.0/16,130.211.0.0/22 \
  --target-tags=allow-hc \
  --rules=tcp
Create a backend service:
gcloud compute backend-services create td-middle-proxy-backend-service \
  --protocol=HTTP \
  --health-checks=tcp-basic-check \
  --global
Add the middle proxy MIG as the backend to this backend service:
gcloud compute backend-services add-backend td-middle-proxy-backend-service \
  --instance-group=td-middle-proxy-us-central1-a \
  --instance-group-zone=us-central1-a \
  --global
Create a URL map to route the incoming requests to the middle proxy as the default backend service:
gcloud compute url-maps create lb-map-http \
  --default-service=td-middle-proxy-backend-service
Create a target HTTP proxy so that requests to the external load balancer's forwarding rule virtual IP address (VIP) are handled according to the URL map:
gcloud compute target-http-proxies create http-lb-proxy \
  --url-map=lb-map-http
Create a global forwarding rule to route incoming requests to the target HTTP proxy:
gcloud compute forwarding-rules create http-forwarding-rule \
  --address=external-lb-vip \
  --global \
  --load-balancing-scheme=EXTERNAL \
  --target-http-proxy=http-lb-proxy \
  --ports=80
Set the MIG's named port
Set a named port for the instance group to allow your middle proxy to receive HTTP traffic from the external load balancer:
gcloud compute instance-groups managed set-named-ports td-middle-proxy-us-central1-a \
  --named-ports=http:8080 \
  --zone=us-central1-a
Verify the external Application Load Balancer configuration
In this step, you verify that the external load balancer is set up correctly.
You should be able to send a request to the load balancer's VIP and get a response from the simulated on-premises VM:
PUBLIC_VIP=$(gcloud compute addresses describe external-lb-vip \
  --format="get(address)" \
  --global)
Issue a curl request to the external IP address (PUBLIC_VIP) and verify that you receive the Hello world message:

curl $PUBLIC_VIP
You see the following output:
Hello world from on-premises!
Enable Google Cloud Armor
Configure Google Cloud Armor security policies to allow access to your service only from CLIENT_IP_RANGE, which should include the external IP address of the client device that you intend to test with, for example, "192.0.2.0/24".

These policies are applied on the backend service of the external load balancer (in this example, td-middle-proxy-backend-service, which points at the middle proxy). For more information about the permissions required to set these rules, see Configure Google Cloud Armor security policies.
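If you're unsure which public IP address your client presents, one option, assuming the client has outbound internet access and you're comfortable querying a third-party echo service, is to look it up from the client device and use it with a /32 suffix as CLIENT_IP_RANGE:

curl -s https://checkip.amazonaws.com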
Create the Google Cloud Armor security policy:
gcloud compute security-policies create external-clients-policy \
  --description="policy for external clients"
Update the default rule of the security policy to deny all traffic:
gcloud compute security-policies rules update 2147483647 \
  --security-policy=external-clients-policy \
  --action="deny-404"
Add a higher priority rule to allow traffic from a specific IP range:
gcloud compute security-policies rules create 1000 \
  --security-policy=external-clients-policy \
  --description="allow traffic from CLIENT_IP_RANGE" \
  --src-ip-ranges="CLIENT_IP_RANGE" \
  --action="allow"
Attach the Google Cloud Armor security policy to your backend service:
gcloud compute backend-services update td-middle-proxy-backend-service \
  --security-policy=external-clients-policy
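As an optional check, confirm that the policy is attached by inspecting the backend service's securityPolicy field:

gcloud compute backend-services describe td-middle-proxy-backend-service \
  --global \
  --format="value(securityPolicy)"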
Final verification
Issue a curl request to the external Application Load Balancer's public virtual IP address. If your client device's IP address is within the allowed CLIENT_IP_RANGE specified previously, you should receive the expected response.

curl $PUBLIC_VIP
You see the following output:
Hello world from on-premises!
Issue the same curl request from a different client device whose IP address lies outside of CLIENT_IP_RANGE, or update your security policy rule to no longer include your client IP address. You should now receive a 404 Not Found error.
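If you'd rather check the HTTP status code than read the response body, a small curl variation prints only the code, which should be 200 when your IP address is allowed and 404 when the security policy denies it:

curl -s -o /dev/null -w "%{http_code}\n" $PUBLIC_VIP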
Troubleshooting
The following instructions describe how to fix problems with your configuration.
My on-premises service is not accessible through the global external Application Load Balancer IP address
Assuming that your on-premises service is already accessible on the Google Cloud VMs where the Envoys are running, follow these steps to troubleshoot your setup:
Make sure that the Google Cloud Envoy MIG is reported as healthy. In the Google Cloud console, go to Network services > Load balancing and click the lb-map-http URL map to view its details. You should be able to see that 1/1 of the instances in td-middle-proxy-us-central1-a is healthy.

If it's not healthy, check whether a firewall rule has been configured to allow ingress health-check traffic to the Google Cloud VMs that are running Envoy:
gcloud compute firewall-rules describe fw-allow-health-checks
You should see the following output:
allowed:
- IPProtocol: tcp
...
direction: INGRESS
disabled: false
...
sourceRanges:
- 130.211.0.0/22
- 35.191.0.0/16
targetTags:
- allow-hc
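As an alternative to the console, you can also check the health of the external load balancer's backends from the command line:

gcloud compute backend-services get-health td-middle-proxy-backend-service \
  --global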
What's next
To explore more complex routing policies for your on-premises or multi-cloud services, see Cloud Service Mesh's advanced traffic management features.
To deploy Cloud Service Mesh, see the Setup guide overview.
To enable additional network edge services, see the following guides: