This example demonstrates how to redirect all requests from port 80 to their respective HTTPS services on port 443.
To learn how to set up HTTP-to-HTTPS redirect for external load balancing, see Setting up HTTP-to-HTTPS redirect for external HTTP(S) load balancers.
To use HTTP-to-HTTPS redirects with a shared IP address, you must create two load balancers, one for HTTPS traffic and another for HTTP traffic. Each load balancer has its own forwarding rule and URL map but shares the same IP address. For the HTTP load balancer, you don't need to configure a backend because the frontend redirects traffic to the HTTPS load balancer's backend.
At a high level, to redirect HTTP traffic to HTTPS, you must do the following:
- Create a normal internal HTTPS load balancer with a reserved, shared internal IP address.
- Test the internal HTTPS load balancer to make sure that it's working.
- Redirect traffic to the internal HTTPS load balancer.
To do this, you must add a partial internal HTTP load balancer that has a
frontend but no backends. The frontend receives requests and then redirects
them to the internal HTTPS load balancer. It does this by using the following:
- A forwarding rule with the same reserved internal IP address that your HTTPS load balancer uses (the address reserved in the first step)
- A target HTTP proxy
- A URL map that redirects traffic to the HTTPS load balancer
As shown in the following diagram, the internal HTTPS load balancer is a normal load balancer with the expected load balancer components.
The HTTP load balancer has the same IP address as the HTTPS load balancer, a redirect instruction in the URL map, and no backend.
Creating the internal HTTPS load balancer
This process is similar to setting up an internal HTTP(S) load balancer.
The setup for Internal HTTP(S) Load Balancing has two parts:
- Performing prerequisite tasks, such as ensuring that required accounts have the correct permissions and preparing the Virtual Private Cloud (VPC) network.
- Setting up the load balancer resources.
Before following this guide, familiarize yourself with the following:
- Internal HTTP(S) Load Balancing overview, including the Limitations section
- VPC firewall rules overview
Permissions
To follow this guide, you must be able to create instances and modify a network in a project. You must be either a project owner or editor, or you must have all of the following Compute Engine IAM roles.
Task | Required role
---|---
Create networks, subnets, and load balancer components | Network Admin
Add and remove firewall rules | Security Admin
Create instances | Instance Admin
For more information, see the Compute Engine IAM documentation.
Configuring the network and subnets
You need a VPC network with two subnets: one for the load balancer's backends and the other for the load balancer's proxies. An internal HTTP(S) load balancer is regional. Traffic within the VPC network is routed to the load balancer if the traffic's source is in a subnet in the same region as the load balancer.
This example uses the following VPC network, region, and subnets:
- Network: A custom-mode VPC network named lb-network.
- Subnet for backends: A subnet named backend-subnet in the us-west1 region uses 10.1.2.0/24 for its primary IP range.
- Subnet for proxies: A subnet named proxy-only-subnet in the us-west1 region uses 10.129.0.0/23 for its primary IP range.
Configuring the network and subnet for backends
Console
- Go to the VPC network page in the Google Cloud Console.
- Click Create VPC network.
- For the Name, enter lb-network.
- In the Subnets section:
  - Set the Subnet creation mode to Custom.
  - In the New subnet section, enter the following information:
    - Name: backend-subnet
    - Region: us-west1
    - IP address range: 10.1.2.0/24
  - Click Done.
- Click Create.
gcloud
Create the custom VPC network with the gcloud compute networks create command:

gcloud compute networks create lb-network --subnet-mode=custom

Create a subnet in the lb-network network in the us-west1 region with the gcloud compute networks subnets create command:

gcloud compute networks subnets create backend-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=us-west1
API
Make a POST request to the networks.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

{
  "routingConfig": {
    "routingMode": "REGIONAL"
  },
  "name": "lb-network",
  "autoCreateSubnetworks": false
}
Make a POST request to the subnetworks.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks

{
  "name": "backend-subnet",
  "network": "projects/project-id/global/networks/lb-network",
  "ipCidrRange": "10.1.2.0/24",
  "region": "projects/project-id/regions/us-west1"
}
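To confirm that the network and subnet were created as expected, you can describe the subnet. This is an optional check, assuming an authenticated gcloud session for your project:

```shell
# Optional check: confirm the backend subnet's primary range and network.
gcloud compute networks subnets describe backend-subnet \
    --region=us-west1 \
    --format="value(ipCidrRange,network)"
```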
Configuring the proxy-only subnet
The proxy-only subnet is for all internal HTTP(S) load balancers in the us-west1 region.
Console
If you're using the Google Cloud Console, you can wait and create the proxy-only subnet later on the Load balancing page.
gcloud
Create the proxy-only subnet with the gcloud compute networks subnets create command:

gcloud compute networks subnets create proxy-only-subnet \
    --purpose=INTERNAL_HTTPS_LOAD_BALANCER \
    --role=ACTIVE \
    --region=us-west1 \
    --network=lb-network \
    --range=10.129.0.0/23
API
Create the proxy-only subnet with the subnetworks.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks

{
  "name": "proxy-only-subnet",
  "ipCidrRange": "10.129.0.0/23",
  "network": "projects/project-id/global/networks/lb-network",
  "region": "projects/project-id/regions/us-west1",
  "purpose": "INTERNAL_HTTPS_LOAD_BALANCER",
  "role": "ACTIVE"
}
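You can verify the proxy-only subnet the same way; its purpose and role distinguish it from ordinary subnets. An optional check, assuming an authenticated gcloud session:

```shell
# Optional check: purpose should be INTERNAL_HTTPS_LOAD_BALANCER, role ACTIVE.
gcloud compute networks subnets describe proxy-only-subnet \
    --region=us-west1 \
    --format="value(purpose,role)"
```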
Configuring firewall rules
This example uses the following firewall rules:
- fw-allow-ssh. An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the system from which you initiate SSH sessions. This example uses the target tag allow-ssh.
- fw-allow-health-check. An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems (in 130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag load-balanced-backend.
- fw-allow-proxies. An ingress rule, applicable to the instances being load balanced, that allows TCP traffic on ports 80, 443, and 8080 from the internal HTTP(S) load balancer's managed proxies. This example uses the target tag load-balanced-backend.
Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.
The target tags define the backend instances. Without the target tags, the firewall rules apply to all of your backend instances in the VPC network. When you create the backend VMs, make sure to include the specified target tags, as shown in Creating a managed instance group.
Console
- Go to the Firewall rules page in the Google Cloud Console.
- Click Create firewall rule to create the rule to allow incoming SSH connections:
  - Name: fw-allow-ssh
  - Network: lb-network
  - Direction of traffic: Ingress
  - Action on match: Allow
  - Targets: Specified target tags
  - Target tags: allow-ssh
  - Source filter: IP ranges
  - Source IP ranges: 0.0.0.0/0
  - Protocols and ports:
    - Choose Specified protocols and ports.
    - Select the tcp checkbox, and then enter 22 for the port number.
- Click Create.
- Click Create firewall rule a second time to create the rule to allow Google Cloud health checks:
  - Name: fw-allow-health-check
  - Network: lb-network
  - Direction of traffic: Ingress
  - Action on match: Allow
  - Targets: Specified target tags
  - Target tags: load-balanced-backend
  - Source filter: IP ranges
  - Source IP ranges: 130.211.0.0/22 and 35.191.0.0/16
  - Protocols and ports:
    - Choose Specified protocols and ports.
    - Select the tcp checkbox, and then enter 80 for the port number. As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use tcp:80 for the protocol and port, Google Cloud can use HTTP on port 80 to contact your VMs, but it cannot use HTTPS on port 443 to contact them.
- Click Create.
- Click Create firewall rule a third time to create the rule to allow the load balancer's proxy servers to connect to the backends:
  - Name: fw-allow-proxies
  - Network: lb-network
  - Direction of traffic: Ingress
  - Action on match: Allow
  - Targets: Specified target tags
  - Target tags: load-balanced-backend
  - Source filter: IP ranges
  - Source IP ranges: 10.129.0.0/23
  - Protocols and ports:
    - Choose Specified protocols and ports.
    - Select the tcp checkbox, and then enter 80, 443, 8080 for the port numbers.
- Click Create.
gcloud
Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

gcloud compute firewall-rules create fw-allow-ssh \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22

Create the fw-allow-health-check rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers; however, you can configure a narrower set of ports to meet your needs.

gcloud compute firewall-rules create fw-allow-health-check \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=load-balanced-backend \
    --rules=tcp

Create the fw-allow-proxies rule to allow the internal HTTP(S) load balancer's proxies to connect to your backends. Set source-ranges to the allocated range of your proxy-only subnet, for example, 10.129.0.0/23.

gcloud compute firewall-rules create fw-allow-proxies \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=source-range \
    --target-tags=load-balanced-backend \
    --rules=tcp:80,tcp:443,tcp:8080
API
Create the fw-allow-ssh firewall rule by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-ssh",
  "network": "projects/project-id/global/networks/lb-network",
  "sourceRanges": [
    "0.0.0.0/0"
  ],
  "targetTags": [
    "allow-ssh"
  ],
  "allowed": [
    {
      "IPProtocol": "tcp",
      "ports": [
        "22"
      ]
    }
  ],
  "direction": "INGRESS"
}
Create the fw-allow-health-check firewall rule by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-health-check",
  "network": "projects/project-id/global/networks/lb-network",
  "sourceRanges": [
    "130.211.0.0/22",
    "35.191.0.0/16"
  ],
  "targetTags": [
    "load-balanced-backend"
  ],
  "allowed": [
    {
      "IPProtocol": "tcp"
    }
  ],
  "direction": "INGRESS"
}
Create the fw-allow-proxies firewall rule, which allows TCP traffic from the proxy-only subnet, by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-proxies",
  "network": "projects/project-id/global/networks/lb-network",
  "sourceRanges": [
    "10.129.0.0/23"
  ],
  "targetTags": [
    "load-balanced-backend"
  ],
  "allowed": [
    {
      "IPProtocol": "tcp",
      "ports": [
        "80"
      ]
    },
    {
      "IPProtocol": "tcp",
      "ports": [
        "443"
      ]
    },
    {
      "IPProtocol": "tcp",
      "ports": [
        "8080"
      ]
    }
  ],
  "direction": "INGRESS"
}
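Whichever method you used, you can list the rules to confirm that all three exist and carry the expected target tags. An optional check, assuming an authenticated gcloud session:

```shell
# Optional check: list the ingress rules created for lb-network.
gcloud compute firewall-rules list \
    --filter="network:lb-network" \
    --format="table(name,targetTags.list(),sourceRanges.list())"
```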
Creating the internal HTTPS load balancer resources
This example uses virtual machine (VM) backends on Compute Engine in a managed instance group. You can instead use other supported backend types.
Creating an instance template with an HTTP server
gcloud
gcloud compute instance-templates create l7-ilb-backend-template \
    --region=us-west1 \
    --network=lb-network \
    --subnet=backend-subnet \
    --tags=allow-ssh,load-balanced-backend \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://169.254.169.254/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2'
Creating a managed instance group in the zone
gcloud
gcloud compute instance-groups managed create l7-ilb-backend \
    --zone=us-west1-a \
    --size=2 \
    --template=l7-ilb-backend-template
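To confirm that the group created both instances, you can list them. An optional check (instance names are generated from the group name):

```shell
# Optional check: both managed instances should reach status RUNNING.
gcloud compute instance-groups managed list-instances l7-ilb-backend \
    --zone=us-west1-a
```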
Creating an HTTP health check
gcloud
gcloud compute health-checks create http l7-ilb-basic-check \
    --region=us-west1 \
    --use-serving-port
Creating a backend service
gcloud
gcloud compute backend-services create l7-ilb-backend-service \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=HTTP \
    --health-checks=l7-ilb-basic-check \
    --health-checks-region=us-west1 \
    --region=us-west1
Adding backends to the backend service
gcloud
gcloud compute backend-services add-backend l7-ilb-backend-service \
    --balancing-mode=UTILIZATION \
    --instance-group=l7-ilb-backend \
    --instance-group-zone=us-west1-a \
    --region=us-west1
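After the backends are added and the startup script has finished, the health check should report the instances as healthy. An optional check, assuming an authenticated gcloud session:

```shell
# Optional check: each instance should report healthState: HEALTHY.
gcloud compute backend-services get-health l7-ilb-backend-service \
    --region=us-west1
```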
Creating a URL map
gcloud
gcloud compute url-maps create l7-ilb-service-url-map \
    --default-service=l7-ilb-backend-service \
    --region=us-west1
Creating a regional SSL certificate
The following example shows how to use gcloud to create a self-managed SSL certificate. For more information, see Using self-managed SSL certificates and Using Google-managed SSL certificates.
gcloud
gcloud compute ssl-certificates create CERTIFICATE_NAME \
    --certificate=CERTIFICATE_FILE \
    --private-key=PRIVATE_KEY_FILE \
    --region=us-west1
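If you don't have a certificate yet, you can generate a self-signed pair to use as CERTIFICATE_FILE and PRIVATE_KEY_FILE. This is a sketch for test setups only; the common name test.example.com is a placeholder, and the curl tests later in this guide use -k, which skips certificate verification:

```shell
# Generate a self-signed test certificate and key (placeholder CN).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout /tmp/l7-ilb-key.pem \
    -out /tmp/l7-ilb-cert.pem \
    -subj "/CN=test.example.com"
```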
Using the regional SSL certificate to create a target proxy
gcloud
gcloud compute target-https-proxies create l7-ilb-https-proxy \
    --url-map=l7-ilb-service-url-map \
    --region=us-west1 \
    --ssl-certificates=l7-ilb-cert
Reserving a shared IP address for the forwarding rules
gcloud
The address resource requires a name; l7-ilb-ip below is an example name.

gcloud beta compute addresses create l7-ilb-ip \
    --addresses=10.1.2.99 \
    --region=us-west1 \
    --subnet=backend-subnet \
    --purpose=SHARED_LOADBALANCER_VIP
Creating an HTTPS forwarding rule
gcloud
gcloud compute forwarding-rules create l7-ilb-https-forwarding-rule \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=lb-network \
    --subnet=backend-subnet \
    --address=10.1.2.99 \
    --ports=443 \
    --region=us-west1 \
    --target-https-proxy=l7-ilb-https-proxy \
    --target-https-proxy-region=us-west1
Testing the HTTPS load balancer
At this point, the load balancer (including the backend service, URL map, and forwarding rule) is ready. Test the load balancer to verify that it works as expected.
Create a client VM instance to test connectivity.
gcloud
gcloud compute instances create l7-ilb-client-us-west1-a \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --network=lb-network \
    --subnet=backend-subnet \
    --zone=us-west1-a \
    --tags=allow-ssh
Connect to the client VM via SSH.
gcloud
gcloud compute ssh l7-ilb-client-us-west1-a \
    --zone=us-west1-a
Use curl to connect to the HTTPS load balancer.
curl -k -s 'https://10.1.2.99:443'
Sample output:
Page served from: l7-ilb-backend-850t
Send 100 requests and verify they are all load balanced.
{
RESULTS=
for i in {1..100}
do
RESULTS="$RESULTS:$(curl -k -s 'https://test.example.com:443' --connect-to test.example.com:443:10.1.2.99:443)"
done
echo "***"
echo "*** Results of load-balancing to 10.1.2.99: "
echo "***"
echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
echo
}
Sample output:
***
*** Results of load-balancing to 10.1.2.99:
***
51 l7-ilb-backend-https-850t
49 l7-ilb-backend-https-w11t
100 Page served from
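The tally works because each response is appended to RESULTS with a : separator; the pipeline then splits on :, drops empty lines, and counts the unique lines. Here is a local sketch of the same pipeline fed with canned responses instead of live curl output (vm-a and vm-b are hypothetical backend names):

```shell
# Feed the counting pipeline canned data instead of live curl output.
RESULTS=":Page served from: vm-a:Page served from: vm-b:Page served from: vm-a"
echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
```

Because the response text itself contains a colon, the "Page served from" prefix and the VM name are counted as separate lines, which is why the sample output shows per-backend counts alongside a total.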
Redirecting traffic to your HTTPS load balancer
The HTTP load balancer has a shared IP address that redirects traffic from port 80 to 443.
Creating a new URL map for redirecting traffic
Create a YAML file with the traffic redirect configuration.
echo "defaultService: regions/us-west1/backendServices/l7-ilb-backend-service
kind: compute#urlMap
name: l7-ilb-redirect-url-map
hostRules:
- hosts:
  - '*'
  pathMatcher: matcher1
pathMatchers:
- name: matcher1
  defaultUrlRedirect:
    hostRedirect: 10.1.2.99:443
    pathRedirect: /
    redirectResponseCode: PERMANENT_REDIRECT
    httpsRedirect: True
    stripQuery: True" > /tmp/url_map.yaml
Import the YAML file to a new URL map.
gcloud
gcloud compute url-maps import l7-ilb-redirect-url-map \
    --source /tmp/url_map.yaml \
    --region us-west1
Creating the HTTP load balancer's target proxy
gcloud
gcloud compute target-http-proxies create l7-ilb-http-proxy \
    --url-map=l7-ilb-redirect-url-map \
    --url-map-region=us-west1 \
    --region=us-west1
Creating a new forwarding rule and the shared IP address
gcloud
gcloud compute forwarding-rules create l7-ilb-forwarding-rule \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=lb-network \
    --subnet=backend-subnet \
    --address=10.1.2.99 \
    --ports=80 \
    --region=us-west1 \
    --target-http-proxy=l7-ilb-http-proxy \
    --target-http-proxy-region=us-west1
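Both forwarding rules should now share the same internal IP address. An optional check, assuming an authenticated gcloud session:

```shell
# Optional check: both rules should print 10.1.2.99.
gcloud compute forwarding-rules describe l7-ilb-https-forwarding-rule \
    --region=us-west1 --format="value(IPAddress)"
gcloud compute forwarding-rules describe l7-ilb-forwarding-rule \
    --region=us-west1 --format="value(IPAddress)"
```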
Testing the traffic redirect
Connect to your client VM.
gcloud
gcloud compute ssh l7-ilb-client-us-west1-a \
    --zone=us-west1-a
Send an HTTP request to 10.1.2.99 on port 80, and expect a traffic redirect.
curl -L -k 10.1.2.99
Sample output:
Page served from: l7-ilb-backend-w11t
You can add -vvv to see more details.
curl -L -k 10.1.2.99 -vvv
* Rebuilt URL to: 10.1.2.99/
* Trying 10.1.2.99...
* TCP_NODELAY set
* Connected to 10.1.2.99 (10.1.2.99) port 80 (#0)
> GET / HTTP/1.1
> Host: 10.1.2.99
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 308 Permanent Redirect
< location: https://10.1.2.99:443/
< date: Fri, 07 Aug 2020 05:07:18 GMT
< via: 1.1 google
< content-length: 0
<
* Curl_http_done: called premature == 0
* Connection #0 to host 10.1.2.99 left intact
* Issue another request to this URL: 'https://10.1.2.99:443/'
* Trying 10.1.2.99...
* TCP_NODELAY set
* Connected to 10.1.2.99 (10.1.2.99) port 443 (#1)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
...
...
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: O=Google TESTING; CN=test_cert_1
* start date: Jan 1 00:00:00 2015 GMT
* expire date: Jan 1 00:00:00 2025 GMT
* issuer: O=Google TESTING; CN=Intermediate CA
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x561a6b0e3ea0)
> GET / HTTP/1.1
> Host: 10.1.2.99
> User-Agent: curl/7.52.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 200
< date: Fri, 07 Aug 2020 05:07:18 GMT
< server: Apache/2.4.25 (Debian)
< last-modified: Thu, 06 Aug 2020 13:30:21 GMT
< etag: "2c-5ac357d7a47ec"
< accept-ranges: bytes
< content-length: 44
< content-type: text/html
< via: 1.1 google
<
Page served from: l7-ilb-backend-https-w11t
* Curl_http_done: called premature == 0
* Connection #1 to host 10.1.2.99 left intact
What's next
To learn how Internal HTTP(S) Load Balancing works, see Internal HTTP(S) Load Balancing overview.
To manage the proxy-only subnet resource required by Internal HTTP(S) Load Balancing, see Proxy-only subnet for internal HTTP(S) load balancers.