Installing F5 BIG-IP ADC for Anthos GKE on-prem using manual load balancing

By Gregory Coward, Solution Architect, F5 Networks

This tutorial shows how to set up the F5 BIG-IP Application Delivery Controller (ADC) before you integrate it with GKE on-prem using manual load-balancing mode. If you're interested in installing F5 BIG-IP ADC on GKE on-prem to automatically provision L4 load-balancing services, see Installing F5 BIG-IP ADC for Anthos GKE on-prem.

F5 is a leading provider of ADC services. The F5 BIG-IP platform provides various services to help you enhance the security, availability, and performance of your apps. These services include L7 load balancing, network firewalling, web application firewall (WAF) protection, DNS services, and more. For Anthos GKE on-prem, BIG-IP provides external access and L3/4 load-balancing services.

When to choose manual load-balancer mode

When deployed in integrated mode, Anthos uses a version of F5 Container Ingress Services (CIS) to automatically provision L4 load-balancing services on the BIG-IP platform. CIS continues to monitor and update BIG-IP when the GKE on-prem cluster is modified. However, CIS comes with limitations.

At the time of publication, you cannot add L7 services such as F5 Advanced WAF or Access Policy Manager (F5 APM) to the virtual IP address endpoints when the environment is deployed using the integrated mode. This limitation is due to the nature of CIS: any modifications made to the CIS-managed BIG-IP partitions are overwritten by the CIS controller the next time it updates the configuration.

By deploying the GKE on-prem environment using the manual load-balancer mode, on the other hand, you create the required virtual servers and related BIG-IP resources prior to deploying GKE on-prem. This type of deployment lets you customize and secure the BIG-IP hosted environment endpoints. The trade-off is that as the environment changes, for example when cluster node instances are added or removed, you need to manually update the BIG-IP.

Objectives

  • Learn about the BIG-IP architecture.
  • Configure the BIG-IP for GKE on-prem external endpoints.
  • Create virtual servers.

Costs

This tutorial uses the following billable components of Google Cloud:

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

Before you begin

  1. Obtain an F5 BIG-IP Application Delivery Controller and license. The F5 BIG-IP ADC is available on various hardware platforms and as virtual editions. Regardless of the platform you use, the solution is supported, and the following configuration process applies.

    There are three types of licenses for F5 BIG-IP.

    License type  | Expiration date | Throughput limitations       | Use case
    Production    | No              | Up to 40 Gbps on GKE on-prem | Production
    Evaluation    | Yes, 45 days    | 25 Mbps - 10 Gbps            | Proof of concept, demonstration
    Demonstration | Yes, 30 days    | 1 Mbps                       | Proof of concept, demonstration
  2. Activate a license key for BIG-IP.

  3. Make sure your environment meets the following minimum system requirements:

    • 8 vCPUs that aren't shared with other VMs on that system
    • 16 GB of memory that isn't shared with other VMs on that system

Architecture

There are two common scenarios for deploying BIG-IP ADC with GKE on-prem clusters. Because the BIG-IP acts as a proxy for external access to the clusters, it's common to deploy a BIG-IP with three or more interfaces, as illustrated in the following diagram.

Architecture of BIG-IP deployment.

In the preceding diagram, separate interfaces serve internal private- and external public-facing traffic independently. This architecture provides better visibility for monitoring and troubleshooting, and increased throughput.

Architecture of BIG-IP deployment in two-armed mode.

While the two-armed configuration shown in the preceding diagram isn't considered a best practice, you might need this type of configuration if you're integrating into an existing environment with a predefined network architecture.

Setting up your environment

  1. Follow the instructions to set up BIG-IP virtual edition deployed on VMware ESXi 6.5.

    The OVF template requires configuring four interfaces. The fourth interface is designated for HA heartbeat traffic between BIG-IP pairs. For this three-arm deployment, assign the internal network, gke-node, to the fourth interface as a placeholder.

    Interfaces that you configure in the OVF template.

  2. After the VM boots, use the F5 BIG-IP's Setup utility for initial configuration. The setup utility walks you through the following configuration tasks:

    1. From a workstation with network access to the gke-mgmt interface, go to https://management_IP_address, where management_IP_address is the address that you configured for your device.
    2. When prompted, enter the default username admin and the default password admin.
    3. Click Log in.
  3. To install a license, in the Base Registration Key field, enter your key. The type of license dictates the BIG-IP's services and bandwidth limits.

  4. To enhance performance when working with GKE clusters, set the Management (MGMT) plane provisioning to Large.

  5. To provide L3/4 load balancing to the GKE on-prem environment, set the Local Traffic (LTM) module to Nominal.

    VM setup utility with local traffic configuration.

  6. On the Host and User Information page, provide the hostname (the FQDN of the BIG-IP system), and update the admin and root account passwords.

  7. On the Networking page, you walk through configuring the BIG-IP's basic networking. The utility creates the internal (gke-node) and external (gke-external) interfaces, VLANs, and self-IP addresses.

    Networking window.

    The fourth interface deployed by VMware is left unconfigured.

    4th interface unconfigured.

Additional configuration

After the Setup utility completes, you have a functioning BIG-IP with a management plane interface attached to the gke-mgmt VMware network and two data plane interfaces attached to the VMware networks gke-node and gke-external.

Before you deploy GKE on-prem, more configuration of the BIG-IP is required.

  • Create an administrative partition for each admin and user cluster you intend to expose and access.

    Initially, you define two partitions: one for the admin cluster, and one for the first user cluster. Don't use cluster partitions for anything else. Each of the clusters must have a partition that is for the sole use of that cluster.
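If you prefer the command line, you can create the same partitions with tmsh, the BIG-IP shell. The partition names gke-admin and gke-user below are placeholders; substitute names that match your environment.

```shell
# Create one administrative partition per cluster (names are placeholders).
tmsh create auth partition gke-admin
tmsh create auth partition gke-user

# Verify the partitions exist.
tmsh list auth partition
```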

F5 BIG-IP account permissions

The existing Administrator role provides enough permissions for use with GKE on-prem. For more information, see User roles. You can also learn how to create additional users.

Configuring the BIG-IP for GKE on-prem external endpoints

Before deploying GKE on-prem, you must configure the BIG-IP with six virtual servers (VIPs) corresponding to the following GKE on-prem endpoints:

  • Admin partition

    • VIP for admin cluster control plane (port exposed: 443)
    • VIP for admin cluster ingress controller (port exposed: 443)
    • VIP for admin cluster ingress controller (port exposed: 80)
    • VIP for user cluster control plane (port exposed: 443)
  • User partition

    • VIP for user cluster ingress controller (port exposed: 443)
    • VIP for user cluster ingress controller (port exposed: 80)

Perform the following steps from both the admin and user partitions to create node objects on the BIG-IP for each host specified in the corresponding host configuration files.

Create node object

GKE on-prem clusters can run with one of two load-balancing modes: integrated or manual. For manual mode, cluster nodes (both admin and user clusters) must be assigned static IP addresses. These addresses are in turn used to configure node objects on the BIG-IP system. You will create a node object for each GKE on-prem cluster node. The nodes are added to backend pools that are then associated with virtual servers.
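If you later need to script these updates, for example when cluster nodes are added or removed, the BIG-IP iControl REST API accepts node objects as JSON posted to /mgmt/tm/ltm/node. The following sketch only builds the request payloads; the node names, IP addresses, and partition name are placeholder values, and the HTTP call itself is omitted.

```python
# Sketch: build iControl REST payloads for BIG-IP node objects, one per
# GKE on-prem cluster node. Substitute the static IPs from your host
# configuration files; these values are placeholders.

def node_payload(name, address, partition):
    """Return the JSON body for a POST to /mgmt/tm/ltm/node."""
    return {"name": name, "address": address, "partition": partition}

# Placeholder admin cluster nodes in a partition named gke-admin.
admin_nodes = {"admin-node-1": "10.0.10.21", "admin-node-2": "10.0.10.22"}
payloads = [node_payload(n, ip, "gke-admin") for n, ip in admin_nodes.items()]
```

Each payload can then be sent with any HTTP client authenticated to the BIG-IP management address.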

  1. To log in to the BIG-IP management console, go to the management IP address that you configured during installation.
  2. Click the Administrative partition that you previously created.
  3. Go to Local Traffic > Nodes > Node List.
  4. Click Create.
  5. Enter a name and IP address for each cluster host and click Finished.
  6. Repeat these steps for each admin cluster member.
  7. Repeat these steps for each user cluster member, but click the User partition instead of the Administrative partition.

    Configuration of partitions.
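The same node objects can also be created with tmsh. The node names, IP addresses, and partition paths below are placeholders; use the static IP addresses from your host configuration files.

```shell
# Admin cluster node objects (placeholder names and addresses).
tmsh create ltm node /gke-admin/admin-node-1 address 10.0.10.21
tmsh create ltm node /gke-admin/admin-node-2 address 10.0.10.22

# User cluster node objects go in the user partition.
tmsh create ltm node /gke-user/user-node-1 address 10.0.20.21
```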

Create backend pools

You create a backend pool for each required VIP, six in total.

  1. In the BIG-IP management console, click adminpart, the admin partition that you previously created.
  2. Go to Local Traffic > Pools > Pool List.
  3. Click Create.
  4. In the Configuration drop-down list, click Advanced.
  5. In the Name field, enter Istio-80-pool.
  6. To verify pool member accessibility, under Health Monitor, click tcp. Optional: Because this is a manual configuration, you can also use more advanced monitors as appropriate for your deployment.
  7. For Action on Service Down, click Reject.

  8. For this tutorial, in the Load Balancing Method drop-down list, click Round Robin.

  9. In the New Members section, click Node List and then select the previously created node.

  10. In the Service Port field, enter the appropriate nodePort from the gkectl configuration file.

  11. Click Add.

  12. Repeat steps 9 through 11 for each cluster node instance.

    Configuration of cluster node.

  13. Click Finished.

  14. Repeat all of these steps in this section for the remaining required admin cluster VIPs.

  15. Repeat all of these steps in this section for each user cluster pool, except in step 1, click userpart instead of adminpart.
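The preceding pool configuration can also be expressed as a single tmsh command. The partition, pool members, and service port below are placeholders; the member port must match the nodePort from the gkectl configuration file.

```shell
# Create the HTTP ingress pool (placeholder members and nodePort 30243).
tmsh create ltm pool /gke-admin/istio-80-pool \
    monitor tcp \
    service-down-action reject \
    load-balancing-mode round-robin \
    members add { /gke-admin/admin-node-1:30243 /gke-admin/admin-node-2:30243 }
```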

Create virtual servers

You create a total of six virtual servers on the BIG-IP: four in the admin partition and two in the user partition. The virtual servers correspond to the VIPs required to deploy GKE on-prem.

  1. In the BIG-IP management console, click the Admin partition that you previously created.
  2. Go to Local Traffic > Virtual Servers > Virtual Server List.
  3. Click Create.
  4. In the Name field, enter istio-ingress-80.
  5. In the Destination Address/Mask field, enter the IP address for the VIP. For this tutorial, use the HTTP ingress VIP in the gkectl configuration file.
  6. In the Service Port field, enter the appropriate listener port for the VIP. For this tutorial, use port 80.

    Configuration of virtual servers.

    There are several configuration options for enhancing your app's endpoint, such as associating protocol-specific profiles, certificate profiles, and WAF policies.

  7. For Source Address Translation, click Auto Map.

  8. For Default Pool, select the appropriate pool that you previously created.

  9. Click Finished.

  10. Repeat these steps to create the remaining required admin cluster VIPs.

  11. Repeat these steps to create the user cluster virtual servers, but select the User partition.

  12. Create and download an archive (UCS file) of the current configuration.
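For reference, the virtual server and the configuration archive can also be created with tmsh. The VIP address 203.0.113.10 is a placeholder; use the HTTP ingress VIP from the gkectl configuration file.

```shell
# Create the HTTP ingress virtual server (placeholder VIP address).
tmsh create ltm virtual /gke-admin/istio-ingress-80 \
    destination 203.0.113.10:80 \
    ip-protocol tcp \
    pool /gke-admin/istio-80-pool \
    source-address-translation { type automap }

# Archive the configuration; the file is saved under /var/local/ucs.
tmsh save sys ucs pre-gke-deploy.ucs
```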

What's next