Set up HTTP-to-HTTPS redirect for internal HTTP(S) load balancers

This example demonstrates how to redirect all requests from port 80 to the corresponding HTTPS service on port 443.

To learn how to set up HTTP-to-HTTPS redirect for external load balancing, see Setting up HTTP-to-HTTPS redirect for external HTTP(S) load balancers.

To use HTTP-to-HTTPS redirects with a shared IP address, you must create two load balancers, one for HTTPS traffic and another for HTTP traffic. Each load balancer has its own forwarding rule and URL map but shares the same IP address. For the HTTP load balancer, you don't need to configure a backend because the frontend redirects traffic to the HTTPS load balancer's backend.

At a high level, to redirect HTTP traffic to HTTPS, you must do the following:

  1. Create a normal internal HTTPS load balancer with a reserved, shared internal IP address.
  2. Test the internal HTTPS load balancer to make sure that it's working.
  3. Redirect traffic to the internal HTTPS load balancer.
    To do this, you must add a partial internal HTTP load balancer that has a frontend. The frontend receives requests and then redirects them to the internal HTTPS load balancer. It does this by using the following:
    • A forwarding rule that uses the same reserved internal IP address as your HTTPS load balancer (reserved in step 1)
    • A target HTTP proxy
    • A URL map that redirects traffic to the HTTPS load balancer

As shown in the following diagram, the internal HTTPS load balancer is a normal load balancer with the expected load balancer components.

The HTTP load balancer has the same IP address as the HTTPS load balancer and a redirect instruction in the URL map.

Internal HTTP-to-HTTPS redirect configuration

Creating the internal HTTPS load balancer

This process is similar to setting up an internal HTTP(S) load balancer.

The setup for Internal HTTP(S) Load Balancing has two parts:

  • Perform prerequisite tasks, such as ensuring that required accounts have the correct permissions and preparing the Virtual Private Cloud (VPC) network.
  • Set up the load balancer resources.

Before following this guide, familiarize yourself with internal HTTP(S) load balancing concepts and terminology.

Permissions

To follow this guide, you must be able to create instances and modify a network in a project. You must be either a project owner or editor, or you must have all of the following Compute Engine IAM roles:

  • Create networks, subnets, and load balancer components: Compute Network Admin
  • Add and remove firewall rules: Compute Security Admin
  • Create instances: Compute Instance Admin
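
If you manage the project's IAM policy, you can grant these roles with gcloud. The following is a sketch: USER_EMAIL is a placeholder, and the role ID for Compute Instance Admin is assumed to be roles/compute.instanceAdmin.v1; verify the exact role IDs for your setup.

# Grant each required role to the user who follows this guide.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/compute.networkAdmin"

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/compute.securityAdmin"

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/compute.instanceAdmin.v1"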

For more information, see the Compute Engine IAM roles documentation.

Configure the network and subnets

You need a VPC network with two subnets: one for the load balancer's backends and the other for the load balancer's proxies. An internal HTTP(S) load balancer is regional. Traffic within the VPC network is routed to the load balancer if the traffic's source is in a subnet in the same region as the load balancer.

This example uses the following VPC network, region, and subnets:

  • Network. The network is a custom-mode VPC network named lb-network.

  • Subnet for backends. A subnet named backend-subnet in the us-west1 region uses 10.1.2.0/24 for its primary IP range.

  • Subnet for proxies. A subnet named proxy-only-subnet in the us-west1 region uses 10.129.0.0/23 for its primary IP range.

To demonstrate global access, this example also creates a second test client VM in a different region and subnet:

  • Region: europe-west1
  • Subnet: europe-subnet, with primary IP address range 10.3.4.0/24

Console

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click Create VPC network.

  3. For Name, enter lb-network.

  4. In the Subnets section, set the Subnet creation mode to Custom.

  5. Create a subnet for the load balancer's backends. In the New subnet section, enter the following information:

    • Name: backend-subnet
    • Region: us-west1
    • IP address range: 10.1.2.0/24
  6. Click Done.

  7. Click Add subnet.

  8. Create a subnet to demonstrate global access. In the New subnet section, enter the following information:

    • Name: europe-subnet
    • Region: europe-west1
    • IP address range: 10.3.4.0/24
  9. Click Done.

  10. Click Create.

gcloud

  1. Create the custom VPC network with the gcloud compute networks create command:

    gcloud compute networks create lb-network --subnet-mode=custom
    
  2. Create a subnet in the lb-network network in the us-west1 region with the gcloud compute networks subnets create command:

    gcloud compute networks subnets create backend-subnet \
        --network=lb-network \
        --range=10.1.2.0/24 \
        --region=us-west1
    
  3. Create a subnet in the lb-network network in the europe-west1 region with the gcloud compute networks subnets create command:

    gcloud compute networks subnets create europe-subnet \
        --network=lb-network \
        --range=10.3.4.0/24 \
        --region=europe-west1
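
    Optionally, list the subnets to confirm their names, regions, and ranges:

    gcloud compute networks subnets list \
        --network=lb-network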
    

API

Make a POST request to the networks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

{
 "routingConfig": {
   "routingMode": "REGIONAL"
 },
 "name": "lb-network",
 "autoCreateSubnetworks": false
}

Make a POST request to the subnetworks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks

{
 "name": "backend-subnet",
 "network": "projects/PROJECT_ID/global/networks/lb-network",
 "ipCidrRange": "10.1.2.0/24",
 "region": "projects/PROJECT_ID/regions/us-west1",
}

Make a POST request to the subnetworks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/europe-west1/subnetworks

{
 "name": "europe-subnet",
 "network": "projects/PROJECT_ID/global/networks/lb-network",
 "ipCidrRange": "10.3.4.0/24",
 "region": "projects/PROJECT_ID/regions/europe-west1",
}

Configure the proxy-only subnet

This proxy-only subnet is for all regional Envoy-based load balancers in the us-west1 region of the lb-network network.

Console

If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.

If you want to create the proxy-only subnet now, use the following steps:

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click the name of the VPC network: lb-network.

  3. Click Add subnet.

  4. For Name, enter proxy-only-subnet.

  5. For Region, select us-west1.

  6. Set Purpose to Regional Managed Proxy.

  7. For IP address range, enter 10.129.0.0/23.

  8. Click Add.

gcloud

Create the proxy-only subnet with the gcloud compute networks subnets create command.

gcloud compute networks subnets create proxy-only-subnet \
  --purpose=REGIONAL_MANAGED_PROXY \
  --role=ACTIVE \
  --region=us-west1 \
  --network=lb-network \
  --range=10.129.0.0/23
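
To confirm the subnet's purpose, role, and range after creation, you can describe it:

gcloud compute networks subnets describe proxy-only-subnet \
    --region=us-west1 \
    --format="get(purpose,role,ipCidrRange)"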

API

Create the proxy-only subnet with the subnetworks.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks

{
  "name": "proxy-only-subnet",
  "ipCidrRange": "10.129.0.0/23",
  "network": "projects/PROJECT_ID/global/networks/lb-network",
  "region": "projects/PROJECT_ID/regions/us-west1",
  "purpose": "REGIONAL_MANAGED_PROXY",
  "role": "ACTIVE"
}

Configure firewall rules

This example uses the following firewall rules:

  • fw-allow-ssh. An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the system from which you initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs that the firewall rule applies to.

  • fw-allow-health-check. An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems (in 130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag load-balanced-backend to identify the VMs that the firewall rule applies to.

  • fw-allow-proxies. An ingress rule, applicable to the instances being load balanced, that allows TCP traffic on ports 80, 443, and 8080 from the internal HTTP(S) load balancer's managed proxies. This example uses the target tag load-balanced-backend to identify the VMs that the firewall rule applies to.

Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.

The target tags define the backend instances. Without the target tags, the firewall rules apply to all of your backend instances in the VPC network. When you create the backend VMs, make sure to include the specified target tags, as shown in Creating a managed instance group.

Console

  1. In the Google Cloud console, go to the Firewall rules page.

    Go to Firewall rules

  2. Click Create firewall rule to create the rule to allow incoming SSH connections:

    • Name: fw-allow-ssh
    • Network: lb-network
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: allow-ssh
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 0.0.0.0/0
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 22 for the port number.
  3. Click Create.

  4. Click Create firewall rule a second time to create the rule to allow Google Cloud health checks:

    • Name: fw-allow-health-check
    • Network: lb-network
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: load-balanced-backend
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 80 for the port number.
        As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use tcp:80 for the protocol and port, Google Cloud can use HTTP on port 80 to contact your VMs, but it cannot use HTTPS on port 443 to contact them.
  5. Click Create.

  6. Click Create firewall rule a third time to create the rule to allow the load balancer's proxy servers to connect to the backends:

    • Name: fw-allow-proxies
    • Network: lb-network
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: load-balanced-backend
    • Source filter: IPv4 ranges
    • Source IPv4 ranges: 10.129.0.0/23
    • Protocols and ports:
      • Choose Specified protocols and ports.
      • Select the TCP checkbox, and then enter 80, 443, 8080 for the port numbers.
  7. Click Create.

gcloud

  1. Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

    gcloud compute firewall-rules create fw-allow-ssh \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-ssh \
        --rules=tcp:22
    
  2. Create the fw-allow-health-check rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers; however, you can configure a narrower set of ports to meet your needs.

    gcloud compute firewall-rules create fw-allow-health-check \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --target-tags=load-balanced-backend \
        --rules=tcp
    
  3. Create the fw-allow-proxies rule to allow the internal HTTP(S) load balancer's proxies to connect to your backends. Set source-ranges to the allocated ranges of your proxy-only subnet, for example, 10.129.0.0/23.

    gcloud compute firewall-rules create fw-allow-proxies \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --source-ranges=10.129.0.0/23 \
        --target-tags=load-balanced-backend \
        --rules=tcp:80,tcp:443,tcp:8080
    

API

Create the fw-allow-ssh firewall rule by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
 "name": "fw-allow-ssh",
 "network": "projects/PROJECT_ID/global/networks/lb-network",
 "sourceRanges": [
   "0.0.0.0/0"
 ],
 "targetTags": [
   "allow-ssh"
 ],
 "allowed": [
  {
    "IPProtocol": "tcp",
    "ports": [
      "22"
    ]
  }
 ],
"direction": "INGRESS"
}

Create the fw-allow-health-check firewall rule by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
 "name": "fw-allow-health-check",
 "network": "projects/PROJECT_ID/global/networks/lb-network",
 "sourceRanges": [
   "130.211.0.0/22",
   "35.191.0.0/16"
 ],
 "targetTags": [
   "load-balanced-backend"
 ],
 "allowed": [
   {
     "IPProtocol": "tcp"
   }
 ],
 "direction": "INGRESS"
}

Create the fw-allow-proxies firewall rule to allow TCP traffic from the proxy-only subnet by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
 "name": "fw-allow-proxies",
 "network": "projects/PROJECT_ID/global/networks/lb-network",
 "sourceRanges": [
   "10.129.0.0/23"
 ],
 "targetTags": [
   "load-balanced-backend"
 ],
 "allowed": [
   {
     "IPProtocol": "tcp",
     "ports": [
       "80"
     ]
   },
 {
     "IPProtocol": "tcp",
     "ports": [
       "443"
     ]
   },
   {
     "IPProtocol": "tcp",
     "ports": [
       "8080"
     ]
   }
 ],
 "direction": "INGRESS"
}
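
Whichever interface you used, you can then list the rules to confirm that all three exist in lb-network with the intended source ranges and target tags:

gcloud compute firewall-rules list \
    --filter="network:lb-network"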

Creating the internal HTTPS load balancer resources

This example uses virtual machine (VM) backends on Compute Engine in a managed instance group. You can instead use other supported backend types.

Creating an instance template with an HTTP server

gcloud

gcloud compute instance-templates create l7-ilb-backend-template \
    --region=us-west1 \
    --network=lb-network \
    --subnet=backend-subnet \
    --tags=allow-ssh,load-balanced-backend \
    --image-family=debian-10 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
    sudo apt-get update
    sudo apt-get install apache2 -y
    sudo a2ensite default-ssl
    sudo a2enmod ssl
    vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/name)"
    sudo echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
    sudo systemctl restart apache2'

Creating a managed instance group in the zone

gcloud

gcloud compute instance-groups managed create l7-ilb-backend \
    --zone=us-west1-a \
    --size=2 \
    --template=l7-ilb-backend-template
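
The two instances can take a minute or two to start. To check their status before continuing, list them:

gcloud compute instance-groups managed list-instances l7-ilb-backend \
    --zone=us-west1-a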

Creating an HTTP health check

gcloud

gcloud compute health-checks create http l7-ilb-basic-check \
     --region=us-west1 \
     --use-serving-port

Creating a backend service

gcloud

gcloud compute backend-services create l7-ilb-backend-service \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=HTTP \
    --health-checks=l7-ilb-basic-check \
    --health-checks-region=us-west1 \
    --region=us-west1

Adding backends to the backend service

gcloud

gcloud compute backend-services add-backend l7-ilb-backend-service \
    --balancing-mode=UTILIZATION \
    --instance-group=l7-ilb-backend \
    --instance-group-zone=us-west1-a \
    --region=us-west1
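
Before continuing, you can confirm that the backends report HEALTHY; newly created instances often take a minute or two to pass their first health check:

gcloud compute backend-services get-health l7-ilb-backend-service \
    --region=us-west1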

Creating a URL map

gcloud

gcloud compute url-maps create l7-ilb-service-url-map \
    --default-service=l7-ilb-backend-service \
    --region=us-west1

Creating a regional SSL certificate

The following example shows how to use gcloud to create a self-managed SSL certificate. Because the target proxy created later references a certificate named l7-ilb-cert, use that name here. For more information, see Using self-managed SSL certificates and Using Google-managed SSL certificates.
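
If you don't already have a certificate and key, you can generate a self-signed pair for testing. This is a sketch only; the subject name is an arbitrary test value, and self-signed certificates are not suitable for production:

# Generate a test private key and self-signed certificate (valid for 365 days).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout PRIVATE_KEY_FILE \
    -out CERTIFICATE_FILE \
    -subj "/CN=test.example.com"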

gcloud

gcloud compute ssl-certificates create l7-ilb-cert \
    --certificate=CERTIFICATE_FILE \
    --private-key=PRIVATE_KEY_FILE \
    --region=us-west1

Using the regional SSL certificate to create a target proxy

gcloud

gcloud compute target-https-proxies create l7-ilb-https-proxy \
    --url-map=l7-ilb-service-url-map \
    --region=us-west1 \
    --ssl-certificates=l7-ilb-cert

Reserving a shared IP address for the forwarding rules

gcloud

gcloud compute addresses create NAME \
    --addresses=10.1.2.99 \
    --region=us-west1 \
    --subnet=backend-subnet \
    --purpose=SHARED_LOADBALANCER_VIP
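
You can verify that the address was reserved with the shared-VIP purpose, which is what lets the HTTP and HTTPS forwarding rules reference the same IP address later:

gcloud compute addresses describe NAME \
    --region=us-west1 \
    --format="get(address,purpose)"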

Creating an HTTPS forwarding rule

gcloud

gcloud compute forwarding-rules create l7-ilb-https-forwarding-rule \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=lb-network \
    --subnet=backend-subnet \
    --address=10.1.2.99 \
    --ports=443 \
    --region=us-west1 \
    --target-https-proxy=l7-ilb-https-proxy \
    --target-https-proxy-region=us-west1

Testing the HTTPS load balancer

At this point, the load balancer (including the backend service, URL map, and forwarding rule) is ready. Test the load balancer to verify that it's working as expected.

Create a client VM instance to test connectivity.

gcloud

gcloud compute instances create l7-ilb-client-us-west1-a \
    --image-family=debian-10 \
    --image-project=debian-cloud \
    --network=lb-network \
    --subnet=backend-subnet \
    --zone=us-west1-a \
    --tags=allow-ssh

Connect to the client VM via SSH.

gcloud

gcloud compute ssh l7-ilb-client-us-west1-a \
    --zone=us-west1-a

Use curl to connect to the HTTPS load balancer.

curl -k -s 'https://test.example.com:443' --connect-to test.example.com:443:10.1.2.99:443

Sample output:

Page served from: l7-ilb-backend-850t

Send 100 requests and verify they are all load balanced.

{
  RESULTS=
  for i in {1..100}
  do
      RESULTS="$RESULTS:$(curl -k -s 'https://test.example.com:443' --connect-to test.example.com:443:10.1.2.99:443)"
  done
  echo "***"
  echo "*** Results of load-balancing to 10.1.2.99: "
  echo "***"
  echo "$RESULTS" | tr ':' '\n' | grep -Ev "^$" | sort | uniq -c
  echo
}

Sample output:

***
*** Results of load-balancing to 10.1.2.99:
***
     51  l7-ilb-backend-https-850t
     49  l7-ilb-backend-https-w11t
    100 Page served from

Redirecting traffic to your HTTPS load balancer

The HTTP load balancer shares the HTTPS load balancer's IP address and redirects traffic from port 80 to port 443.

Creating a new URL map for redirecting traffic

Create a YAML file (/tmp/url_map.yaml) with the traffic redirect configuration.

defaultService: regions/us-west1/backendServices/l7-ilb-backend-service
kind: compute#urlMap
name: l7-ilb-redirect-url-map
hostRules:
- hosts:
  - '*'
  pathMatcher: matcher1
pathMatchers:
- name: matcher1
  defaultUrlRedirect:
    hostRedirect: 10.1.2.99:443
    pathRedirect: /
    redirectResponseCode: PERMANENT_REDIRECT
    httpsRedirect: True

Import the YAML file to a new URL map.

gcloud

gcloud compute url-maps import l7-ilb-redirect-url-map \
    --source=/tmp/url_map.yaml \
    --region=us-west1
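
To confirm the import, describe the URL map and check that the redirect settings match the YAML file:

gcloud compute url-maps describe l7-ilb-redirect-url-map \
    --region=us-west1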

Creating the HTTP load balancer's target proxy

gcloud

gcloud compute target-http-proxies create l7-ilb-http-proxy \
    --url-map=l7-ilb-redirect-url-map \
    --region=us-west1

Creating a new forwarding rule with the shared IP address

gcloud

gcloud compute forwarding-rules create l7-ilb-forwarding-rule \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=lb-network \
    --subnet=backend-subnet \
    --address=10.1.2.99 \
    --ports=80 \
    --region=us-west1 \
    --target-http-proxy=l7-ilb-http-proxy \
    --target-http-proxy-region=us-west1
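
At this point, both forwarding rules reference the address 10.1.2.99, one on port 80 and one on port 443. You can confirm this with a filtered listing (the IPAddress filter key assumes the default gcloud output schema):

gcloud compute forwarding-rules list \
    --regions=us-west1 \
    --filter="IPAddress=10.1.2.99"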

Testing the traffic redirect

Connect to your client VM.

gcloud

gcloud compute ssh l7-ilb-client-us-west1-a \
    --zone=us-west1-a

Send an HTTP request to 10.1.2.99 on port 80, and expect the traffic to be redirected.

curl -L -k 10.1.2.99

Sample output:

Page served from: l7-ilb-backend-w11t

You can add -vvv to see more details.

curl -L -k 10.1.2.99 -vvv

* Rebuilt URL to: 10.1.2.99/
*   Trying 10.1.2.99...
* TCP_NODELAY set
* Connected to 10.1.2.99 (10.1.2.99) port 80 (#0)
> GET / HTTP/1.1
> Host: 10.1.2.99
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 308 Permanent Redirect
< location: https://10.1.2.99:443/
< date: Fri, 07 Aug 2020 05:07:18 GMT
< via: 1.1 google
< content-length: 0
<
* Curl_http_done: called premature == 0
* Connection #0 to host 10.1.2.99 left intact
* Issue another request to this URL: 'https://10.1.2.99:443/'
*   Trying 10.1.2.99...
* TCP_NODELAY set
* Connected to 10.1.2.99 (10.1.2.99) port 443 (#1)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
...
...
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
*  subject: O=Google TESTING; CN=test_cert_1
*  start date: Jan  1 00:00:00 2015 GMT
*  expire date: Jan  1 00:00:00 2025 GMT
*  issuer: O=Google TESTING; CN=Intermediate CA
*  SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x561a6b0e3ea0)
> GET / HTTP/1.1
> Host: 10.1.2.99
> User-Agent: curl/7.52.1
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 200
< date: Fri, 07 Aug 2020 05:07:18 GMT
< server: Apache/2.4.25 (Debian)
< last-modified: Thu, 06 Aug 2020 13:30:21 GMT
< etag: "2c-5ac357d7a47ec"
< accept-ranges: bytes
< content-length: 44
< content-type: text/html
< via: 1.1 google
<
Page served from: l7-ilb-backend-https-w11t
* Curl_http_done: called premature == 0
* Connection #1 to host 10.1.2.99 left intact
