ILB as Next-Hop with Tags

1. Introduction

With the internal TCP/UDP load balancer (ILB) as next hop feature, you can integrate third-party appliances in a highly available, scale-out manner. To do this, you configure a custom static route whose next hop is the ILB, which load-balances traffic for the destination prefix across a pool of health-checked third-party VM appliances. This functionality has been enhanced with the capabilities below, giving you more options for selecting next hops and for keeping these appliances highly available. Refer to the internal TCP/UDP Load Balancing documentation for more details.

  • Specify VIP as next-hop: Previously, you could only reference an internal TCP/UDP load balancer next hop by the forwarding rule's name and the load balancer's region; now you can also use the internal IP address associated with the forwarding rule as the next hop. This virtual IP address can be learned across peered networks without having to export the custom route over the peering.
  • Use Network Tags: You can now specify a network tag so that an ILB as next-hop route only applies to client instances that carry that tag. This lets you control which client instances receive which tagged ILB as next-hop route, and therefore which set of appliances handles their traffic. You no longer need to segregate client instances into separate VPCs, each pointing at its preferred ILB fronting a set of appliances. Note: tagged routes are not exported or imported via VPC Network Peering.
  • Configure Multiple Routes to the same destination prefix: With tags you can specify multiple routes to the same destination, each with a different internal load balancer as next hop. While ECMP is still not supported (same destination prefix, same tags, different next hops), you can use different tags or different priorities for these routes to the same destination (see the example after this list).
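
For illustration only, a tagged route that points at an ILB VIP looks like the example below. The resource names and tag here are placeholders; the codelab builds the real resources step by step in the following sections.

# Hypothetical example: route 0.0.0.0/0 through an ILB VIP for tagged clients only.
gcloud compute routes create example-ilbnh-route \
    --network=example-vpc \
    --destination-range=0.0.0.0/0 \
    --next-hop-ilb=10.0.0.10 \
    --tags=example-tag \
    --priority=800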

What you'll learn

  • ILB as Next-Hop benefits

What you'll need

  • Knowledge of deploying instances and configuring networking components

2. Test Environment

Managed instance groups of single-NIC Linux VMs will be deployed in two regions and configured to SNAT all outbound traffic to the Internet (North-South outbound flow), with regional failover triggered manually. East-West connectivity through ILB as next-hop routes, with symmetric hashing and support for all protocols, is also demonstrated.

(Topology diagram: hub VPC with NAT-GW managed instance groups behind internal load balancers in Regions A and B, peered with the spoke1 and spoke2 VPCs.)

Self-paced environment setup

  1. Sign in to the Google Cloud Console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.


  • The Project name is the display name for this project's participants. It is a character string not used by Google APIs, and you can update it at any time.
  • The Project ID must be unique across all Google Cloud projects and is immutable (it cannot be changed after it has been set). The Cloud Console auto-generates a unique string; usually you don't care what it is. In most codelabs you'll need to reference the Project ID (typically identified as PROJECT_ID). If you don't like the generated one, you can generate another random one, or try your own and see if it's available; it is then "frozen" after the project is created.
  • There is a third value, a Project Number which some APIs use. Learn more about all three of these values in the documentation.
  2. Next, you'll need to enable billing in the Cloud Console in order to use Cloud resources/APIs. Running through this codelab shouldn't cost much, if anything at all. To shut down resources so you don't incur billing beyond this tutorial, follow any "clean-up" instructions found at the end of the codelab. New users of Google Cloud are eligible for the $300 USD Free Trial program.

Start Cloud Shell

While Google Cloud can be operated remotely from your laptop, in this codelab you will be using Google Cloud Shell, a command line environment running in the Cloud.

From the GCP Console, click the Cloud Shell icon on the top-right toolbar.


It should only take a few moments to provision and connect to the environment. When it is finished, you are connected to a Cloud Shell session and ready to run the commands in the next sections.


This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5 GB home directory and runs on Google Cloud, greatly improving network performance and authentication. All of your work in this lab can be done with just a browser.

3. Before you begin

This codelab requires a single project and the ability to create multiple Virtual Private Cloud (VPC) networks.

Set up the project and environment variables

Using Cloud Shell, make sure that your project ID is set and define the environment variables used throughout the codelab:

gcloud config set project [project]
export PROJECT_ID=`gcloud config list --format="value(core.project)"`
export region_a=us-central1
export zone_a=us-central1-a
export region_b=us-west2
export zone_b=us-west2-a
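
Optionally, confirm the active project and the variables before continuing; this is just a quick sanity check:

gcloud config list --format="value(core.project)"
echo $region_a $zone_a $region_b $zone_b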

4. Create VPC networks and subnets

VPC Network

From Cloud Shell create the hub VPC:

gcloud compute networks create hub-vpc --subnet-mode custom

From Cloud Shell create the spoke1 vpc:

gcloud compute networks create spoke1-vpc --subnet-mode custom

From Cloud Shell create the spoke2 vpc:

gcloud compute networks create spoke2-vpc --subnet-mode custom
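
Optionally, verify that the three VPC networks were created:

gcloud compute networks list --filter="name~vpc"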

Subnets

From Cloud Shell create the hub subnet in Region A:

gcloud compute networks subnets create hub-subnet-a \
    --network hub-vpc --range 10.0.0.0/24 --region $region_a

From Cloud Shell create the spoke1 subnet in Region A:

gcloud compute networks subnets create spoke1-subnet1 \
    --network spoke1-vpc --range 192.168.0.0/24 --region $region_a

From Cloud Shell create the spoke2 subnet in Region A:

gcloud compute networks subnets create spoke2-subnet1 \
    --network spoke2-vpc --range 192.168.1.0/24 --region $region_a
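
Optionally, confirm the subnets and their IP ranges in Region A:

gcloud compute networks subnets list --regions=$region_a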

Firewall Rules

From Cloud Shell, create the firewall rules:

gcloud compute firewall-rules create hub-vpc-web-ping-dns \
    --network hub-vpc --allow tcp:80,tcp:443,icmp,udp:53 \
    --source-ranges 10.0.0.0/24,10.0.1.0/24,192.168.0.0/24,192.168.1.0/24

gcloud compute firewall-rules create spoke1-vpc-web-ping-dns \
    --network spoke1-vpc --allow tcp:80,tcp:443,icmp,udp:53 \
    --source-ranges 10.0.0.0/24,10.0.1.0/24,192.168.0.0/24,192.168.1.0/24

gcloud compute firewall-rules create spoke2-vpc-web-ping-dns \
    --network spoke2-vpc --allow tcp:80,tcp:443,icmp,udp:53 \
    --source-ranges 10.0.0.0/24,10.0.1.0/24,192.168.0.0/24,192.168.1.0/24

gcloud compute firewall-rules create hub-vpc-health-checks \
    --network hub-vpc --allow tcp:80 --target-tags natgw \
    --source-ranges 130.211.0.0/22,35.191.0.0/16

From Cloud Shell, create the firewall rules for SSH access. You can skip this step if rules allowing SSH are already in place. If you prefer to use Identity-Aware Proxy for TCP forwarding (recommended), follow these steps to enable SSH.

gcloud compute firewall-rules create hub-vpc-allow-ssh \
    --network hub-vpc --allow tcp:22

gcloud compute firewall-rules create spoke1-vpc-allow-ssh \
    --network spoke1-vpc --allow tcp:22

gcloud compute firewall-rules create spoke2-vpc-allow-ssh \
    --network spoke2-vpc --allow tcp:22
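
Optionally, list the firewall rules created so far to confirm the allowed protocols and source ranges:

gcloud compute firewall-rules list --filter="name~vpc"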

VPC Network Peering

From Cloud Shell create the VPC Network Peerings:

gcloud compute networks peerings create hub-to-spoke1 \
    --network hub-vpc --peer-network spoke1-vpc \
    --peer-project $PROJECT_ID \
    --export-custom-routes

gcloud compute networks peerings create hub-to-spoke2 \
    --network hub-vpc --peer-network spoke2-vpc \
    --peer-project $PROJECT_ID \
    --export-custom-routes

gcloud compute networks peerings create spoke1-to-hub \
    --network spoke1-vpc --peer-network hub-vpc \
    --peer-project $PROJECT_ID \
    --import-custom-routes

gcloud compute networks peerings create spoke2-to-hub \
    --network spoke2-vpc --peer-network hub-vpc \
    --peer-project $PROJECT_ID \
    --import-custom-routes
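
Optionally, verify that the hub-side peerings are ACTIVE and configured to export custom routes:

gcloud compute networks peerings list --network=hub-vpc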

5. Create NAT-GWs and Load Balancing Resources in Region A

From Cloud Shell create the instance template in Region A:

gcloud compute instance-templates create \
    hub-natgw-region-a-template \
    --network hub-vpc \
    --subnet hub-subnet-a --region $region_a \
    --machine-type n1-standard-2 --can-ip-forward \
    --tags natgw \
    --metadata startup-script='#! /bin/bash
# Enable IP forwarding:
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/20-iptables.conf
# iptables configuration
iptables -t nat -F
sudo iptables -t nat -A POSTROUTING ! -d 192.168.0.0/16 -j MASQUERADE
iptables-save
# Use a web server to pass the health check for this example.
# You should use a more complete test in production.
apt-get update
apt-get install apache2 tcpdump -y
a2ensite default-ssl
a2enmod ssl
echo "Example web page to pass health check" | \
tee /var/www/html/index.html
systemctl restart apache2'

Create a Managed Instance Group

From Cloud Shell create the health-check:

gcloud compute health-checks create http natgw-ilbnhop-health-check \
    --port=80

From Cloud Shell create the managed instance group hub-natgw-region-a-mig:

gcloud compute instance-groups managed create \
    hub-natgw-region-a-mig \
    --region $region_a --size=2 \
    --template=hub-natgw-region-a-template \
    --health-check=natgw-ilbnhop-health-check \
    --initial-delay 15
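
Optionally, confirm that the two NAT-GW instances were created and are running (instance names include a random suffix):

gcloud compute instance-groups managed list-instances \
    hub-natgw-region-a-mig --region $region_a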

Create the Load Balancing Resources

From Cloud Shell create the backend services and add the Managed Instance Group:

gcloud compute backend-services create hub-natgw-region-a-be \
    --load-balancing-scheme=internal \
    --protocol=tcp --region $region_a \
    --health-checks=natgw-ilbnhop-health-check

gcloud compute backend-services add-backend \
    hub-natgw-region-a-be \
    --instance-group=hub-natgw-region-a-mig \
    --instance-group-region=$region_a

From Cloud Shell create the forwarding rule for the Internal TCP/UDP Load Balancer:

gcloud compute forwarding-rules create \
    hub-natgw-region-a \
    --load-balancing-scheme=internal \
    --network=hub-vpc \
    --subnet=hub-subnet-a \
    --address=10.0.0.10 \
    --ip-protocol=TCP \
    --ports=all \
    --allow-global-access \
    --backend-service=hub-natgw-region-a-be \
    --backend-service-region=$region_a
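
Optionally, confirm that the forwarding rule was created with the expected VIP (10.0.0.10):

gcloud compute forwarding-rules describe hub-natgw-region-a \
    --region $region_a --format="value(IPAddress)"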

From Cloud Shell create the ILB as Next-Hop routes with the pre-defined network tag:

gcloud compute routes create spoke1-natgw-region-a \
    --network=spoke1-vpc \
    --destination-range=0.0.0.0/0 \
    --next-hop-ilb=10.0.0.10 \
    --tags=ilbanh-region-a \
    --priority 800

gcloud compute routes create spoke2-natgw-region-a \
    --network=spoke2-vpc \
    --destination-range=0.0.0.0/0 \
    --next-hop-ilb=10.0.0.10 \
    --tags=ilbanh-region-a --priority 800

Deploy the client instances to test connectivity

Create the client instances in the spoke1 and spoke2 VPCs:

gcloud compute instances create spoke1-client \
    --subnet=spoke1-subnet1 --no-address --zone $zone_a \
    --tags=ilbanh-region-a \
    --metadata startup-script='#! /bin/bash
apt-get update
apt-get install tcpdump -y'

gcloud compute instances create spoke2-client \
    --subnet=spoke2-subnet1 --no-address --zone $zone_a \
    --tags=ilbanh-region-a \
    --metadata startup-script='#! /bin/bash
apt-get update
apt-get install tcpdump -y'
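
The validation steps below use SSH from the Cloud Console. If you prefer to connect from Cloud Shell instead, and assuming the Identity-Aware Proxy setup mentioned earlier is in place, you can open a session like this:

gcloud compute ssh spoke1-client --zone $zone_a --tunnel-through-iap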

6. Validate North-South and East-West Traffic Flows

From Cloud Shell, make sure that the NAT-GW VMs are running and write down the assigned external IPs as per the sample output below (external IPs are redacted):

gcloud compute instances list --filter="status:RUNNING AND name~natgw"
NAME                         ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
hub-natgw-region-a-mig-zrkn  us-central1-b  n1-standard-2               10.0.0.5     [external-ip]     RUNNING
hub-natgw-region-a-mig-5f6x  us-central1-f  n1-standard-2               10.0.0.6     [external-ip]     RUNNING

From Cloud Shell, confirm that the backends of the L4 Internal Load Balancer are healthy:

gcloud compute backend-services get-health hub-natgw-region-a-be --region $region_a

---
backend: https://www.googleapis.com/compute/v1/projects/vpc-test-third-party/regions/us-central1/instanceGroups/hub-natgw-region-a-mig
status:
  healthStatus:
  - forwardingRule: https://www.googleapis.com/compute/v1/projects/vpc-test-third-party/regions/us-central1/forwardingRules/hub-natgw-region-a
    forwardingRuleIp: 10.0.0.10
    healthState: HEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/vpc-test-third-party/zones/us-central1-b/instances/hub-natgw-region-a-mig-zrkn
    ipAddress: 10.0.0.5
    port: 80
  - forwardingRule: https://www.googleapis.com/compute/v1/projects/vpc-test-third-party/regions/us-central1/forwardingRules/hub-natgw-region-a
    forwardingRuleIp: 10.0.0.10
    healthState: HEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/vpc-test-third-party/zones/us-central1-f/instances/hub-natgw-region-a-mig-5f6x
    ipAddress: 10.0.0.6
    port: 80
  kind: compute#backendServiceGroupHealth

From Cloud Shell, verify that the ILB as Next-Hop routes were added to the spoke VPCs with the expected priority, targeting the ILB VIP:

gcloud compute routes list --filter="name~natgw" \
     --format='table(name,network,destRange,Priority,nextHopIlb,tags)'
NAME                   NETWORK     DEST_RANGE  PRIORITY  NEXT_HOP_ILB  TAGS
spoke1-natgw-region-a  spoke1-vpc  0.0.0.0/0   800       10.0.0.10     ['ilbanh-region-a']
spoke2-natgw-region-a  spoke2-vpc  0.0.0.0/0   800       10.0.0.10     ['ilbanh-region-a']

Navigate to the Cloud Console, establish SSH connections to the NAT-GWs in different tabs, and start tcpdump using the command below:

sudo tcpdump -n net 192.168.0.0/16

Navigate to the Cloud Console, establish a new SSH connection to the spoke1-client VM, and ping the spoke2-client internal IP:

ping [spoke2-client-ip]
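
If you need to look up the spoke2-client internal IP first, one way to do so from Cloud Shell is:

gcloud compute instances describe spoke2-client --zone $zone_a \
    --format="value(networkInterfaces[0].networkIP)"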

Switch to the NAT-GW SSH windows and verify that you can see the ICMP packets as per the sample below:

16:51:28.411260 IP 192.168.0.2 > 192.168.1.2: ICMP echo request, id 1684, seq 492, length 64
16:51:28.411676 IP 192.168.1.2 > 192.168.0.2: ICMP echo reply, id 1684, seq 492, length 64

You should be able to successfully ping the client VM, which demonstrates that East-West traffic between the spoke VPCs is being forwarded through the NAT-GW appliances via the tagged ILB as next-hop routes. Because the destination falls inside 192.168.0.0/16, the traffic is not masqueraded, so the original source IP is preserved in the capture.

Stop the tcpdump captures on the NAT-GW VMs and watch the iptables NAT statistics with the command below:

watch sudo iptables -t nat -nvL

Switch to the spoke1-client VM again and run the command below multiple times, which displays the public source IP being used to connect to the website:

curl ifconfig.io

You should see both NAT-GW external IPs appearing as source IPs, which demonstrates that Internal TCP/UDP Load Balancing is distributing the traffic based on the default session affinity (5-tuple hashing). Switch back to the NAT-GW VMs to confirm that the MASQUERADE packet counters increased:

Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
  105 11442 MASQUERADE  all  --  *      *       0.0.0.0/0           !192.168.0.0/16

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
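
The default session affinity (NONE) hashes each connection on the 5-tuple, which is why requests are spread across both NAT-GWs. If your appliances required a given client to always use the same backend, you could change the session affinity on the backend service; this is optional and not part of this codelab, for example:

# Optional (not part of the lab): pin each client IP to one backend.
gcloud compute backend-services update hub-natgw-region-a-be \
    --region $region_a --session-affinity=CLIENT_IP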

7. Create NAT-GWs and Load Balancing Resources in Region B

From Cloud Shell create the hub subnet in Region B:

gcloud compute networks subnets create hub-subnet-b \
    --network hub-vpc --range 10.0.1.0/24 --region $region_b

From Cloud Shell create the instance template in Region B:

gcloud compute instance-templates create \
    hub-natgw-region-b-template \
    --network hub-vpc \
    --subnet hub-subnet-b --region $region_b \
    --machine-type n1-standard-2 --can-ip-forward \
    --tags natgw \
    --metadata startup-script='#! /bin/bash
# Enable IP forwarding:
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/20-iptables.conf
# iptables configuration
iptables -t nat -F
sudo iptables -t nat -A POSTROUTING ! -d 192.168.0.0/16 -j MASQUERADE
iptables-save
# Use a web server to pass the health check for this example.
# You should use a more complete test in production.
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
echo "Example web page to pass health check" | \
tee /var/www/html/index.html
systemctl restart apache2'

Create a Managed Instance Group

From Cloud Shell create the managed instance group hub-natgw-region-b-mig:

gcloud compute instance-groups managed create \
    hub-natgw-region-b-mig \
    --region $region_b --size=2 \
    --template=hub-natgw-region-b-template \
    --health-check=natgw-ilbnhop-health-check \
    --initial-delay 15

Create the Load Balancing Resources

From Cloud Shell create the backend services and add the Managed Instance Group:

gcloud compute backend-services create hub-natgw-region-b-be \
    --load-balancing-scheme=internal \
    --protocol=tcp --region $region_b \
    --health-checks=natgw-ilbnhop-health-check

gcloud compute backend-services add-backend \
    hub-natgw-region-b-be \
    --instance-group=hub-natgw-region-b-mig \
    --instance-group-region=$region_b

From Cloud Shell create the forwarding rule for the Internal TCP/UDP Load Balancer:

gcloud compute forwarding-rules create \
    hub-natgw-region-b \
    --load-balancing-scheme=internal \
    --network=hub-vpc \
    --subnet=hub-subnet-b \
    --address=10.0.1.10 \
    --ip-protocol=TCP \
    --ports=all \
    --allow-global-access \
    --backend-service=hub-natgw-region-b-be \
    --backend-service-region=$region_b

From Cloud Shell create the ILB as Next-Hop routes, using the same network tag but a lower priority (900):

gcloud compute routes create spoke1-natgw-region-b \
    --network=spoke1-vpc \
    --destination-range=0.0.0.0/0 \
    --next-hop-ilb=10.0.1.10 \
    --tags=ilbanh-region-a \
    --priority 900

gcloud compute routes create spoke2-natgw-region-b \
    --network=spoke2-vpc \
    --destination-range=0.0.0.0/0 \
    --next-hop-ilb=10.0.1.10 \
    --tags=ilbanh-region-a \
    --priority 900

8. Validate Regional Failover using ILB as Next-Hop with Tags

From Cloud Shell, make sure that the NAT-GW VMs are running and write down the assigned external IPs as per the sample output below (external IPs are redacted):

gcloud compute instances list --filter="status:RUNNING AND name~natgw"
NAME                         ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
hub-natgw-region-a-mig-zrkn  us-central1-b  n1-standard-2               10.0.0.5     [public-ip]     RUNNING
hub-natgw-region-a-mig-5f6x  us-central1-f  n1-standard-2               10.0.0.6     [public-ip]     RUNNING
hub-natgw-region-b-mig-t3wh  us-west2-b     n1-standard-2               10.0.1.2     [public-ip]     RUNNING
hub-natgw-region-b-mig-3mmv  us-west2-a     n1-standard-2               10.0.1.3     [public-ip]     RUNNING

From Cloud Shell, confirm that the backends of the Region B L4 Internal Load Balancer are healthy:

gcloud compute backend-services get-health hub-natgw-region-b-be --region $region_b

---
backend: https://www.googleapis.com/compute/v1/projects/vpc-test-third-party/regions/us-west2/instanceGroups/hub-natgw-region-b-mig
status:
  healthStatus:
  - forwardingRule: https://www.googleapis.com/compute/v1/projects/vpc-test-third-party/regions/us-west2/forwardingRules/hub-natgw-region-b
    forwardingRuleIp: 10.0.1.10
    healthState: HEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/vpc-test-third-party/zones/us-west2-a/instances/hub-natgw-region-b-mig-3mmv
    ipAddress: 10.0.1.3
    port: 80
  - forwardingRule: https://www.googleapis.com/compute/v1/projects/vpc-test-third-party/regions/us-west2/forwardingRules/hub-natgw-region-b
    forwardingRuleIp: 10.0.1.10
    healthState: HEALTHY
    instance: https://www.googleapis.com/compute/v1/projects/vpc-test-third-party/zones/us-west2-b/instances/hub-natgw-region-b-mig-t3wh
    ipAddress: 10.0.1.2
    port: 80
  kind: compute#backendServiceGroupHealth

From Cloud Shell, verify that the ILB as Next-Hop routes for both regions are present in the spoke VPCs with the expected priorities and ILB VIPs:

gcloud compute routes list --filter="name~natgw" \
     --format='table(name,network,destRange,Priority,nextHopIlb,tags)'
NAME                   NETWORK     DEST_RANGE  PRIORITY  NEXT_HOP_ILB  TAGS
spoke1-natgw-region-a  spoke1-vpc  0.0.0.0/0   800       10.0.0.10     ['ilbanh-region-a']
spoke1-natgw-region-b  spoke1-vpc  0.0.0.0/0   900       10.0.1.10     ['ilbanh-region-a']
spoke2-natgw-region-a  spoke2-vpc  0.0.0.0/0   800       10.0.0.10     ['ilbanh-region-a']
spoke2-natgw-region-b  spoke2-vpc  0.0.0.0/0   900       10.0.1.10     ['ilbanh-region-a']

We can now validate regional failover by deleting the higher-priority route and verifying the behavior. In the current release, the route is not removed automatically if all of the ILB's backends become unhealthy, so consider using Cloud Functions to automate this process. Switch to the spoke1-client VM and run the command below, which repeatedly sends a curl request (with a one-second connection timeout) and reports the external IP being used:

while true; do echo -n `date` && echo -n ' - ' && curl ifconfig.io --connect-timeout 1; done

Only the external IPs assigned to the NAT-GWs in Region A should be displayed, since the Region A route has the higher priority. Leave the curl command running, then switch to Cloud Shell and delete the route to the ILB in Region A to observe the result:

gcloud -q compute routes delete spoke1-natgw-region-a

You should start seeing the external IPs assigned to the NAT-GWs in Region B, likely with minimal downtime, which demonstrates that the regional failover was successful.
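
Optionally, confirm that only the Region B default route now remains for spoke1-vpc:

gcloud compute routes list --filter="name~natgw AND network~spoke1"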

9. Cleanup steps

From Cloud Shell, remove the ILB as Next-Hop routes and ILB resources:

gcloud -q compute routes delete spoke1-natgw-region-b

gcloud -q compute routes delete spoke2-natgw-region-a

gcloud -q compute routes delete spoke2-natgw-region-b

gcloud -q compute forwarding-rules delete hub-natgw-region-a \
    --region $region_a

gcloud -q compute backend-services delete hub-natgw-region-a-be \
    --region $region_a

gcloud -q compute instance-groups managed delete hub-natgw-region-a-mig \
    --region $region_a

gcloud -q compute instance-templates delete hub-natgw-region-a-template

gcloud -q compute forwarding-rules delete hub-natgw-region-b \
    --region $region_b

gcloud -q compute backend-services delete hub-natgw-region-b-be \
    --region $region_b

gcloud -q compute instance-groups managed delete hub-natgw-region-b-mig \
    --region $region_b

gcloud -q compute instance-templates delete hub-natgw-region-b-template

gcloud -q compute health-checks delete natgw-ilbnhop-health-check

From Cloud Shell, delete the client instances:

gcloud -q compute instances delete spoke1-client \
    --zone=$zone_a

gcloud -q compute instances delete spoke2-client \
    --zone=$zone_a

From Cloud Shell, delete the VPC Peerings, firewall rules, subnets and VPCs:

gcloud -q compute networks peerings delete spoke2-to-hub \
    --network spoke2-vpc

gcloud -q compute networks peerings delete spoke1-to-hub \
    --network spoke1-vpc

gcloud -q compute networks peerings delete hub-to-spoke1 \
    --network hub-vpc

gcloud -q compute networks peerings delete hub-to-spoke2 \
    --network hub-vpc

gcloud -q compute firewall-rules delete spoke2-vpc-web-ping-dns

gcloud -q compute firewall-rules delete spoke1-vpc-web-ping-dns

gcloud -q compute firewall-rules delete hub-vpc-web-ping-dns

gcloud -q compute firewall-rules delete hub-vpc-health-checks

gcloud -q compute firewall-rules delete hub-vpc-allow-ssh

gcloud -q compute firewall-rules delete spoke1-vpc-allow-ssh

gcloud -q compute firewall-rules delete spoke2-vpc-allow-ssh

gcloud -q compute networks subnets delete spoke1-subnet1 \
    --region $region_a

gcloud -q compute networks subnets delete spoke2-subnet1 \
    --region $region_a

gcloud -q compute networks subnets delete hub-subnet-a \
    --region $region_a

gcloud -q compute networks subnets delete hub-subnet-b \
    --region $region_b

gcloud -q compute networks delete spoke1-vpc

gcloud -q compute networks delete spoke2-vpc

gcloud -q compute networks delete hub-vpc

10. Congratulations!

Congratulations on completing the codelab.

What we've covered

  • ILB as Next-Hop benefits