Installing F5 BIG-IP ADC for GKE on-prem

By Gregory Coward, Solution Architect, F5 Networks

This document shows how to install and configure the F5 BIG-IP Application Delivery Controller (ADC) before you integrate the ADC with GKE on-prem. If you're interested in installing F5 BIG-IP ADC using manual load-balancing mode on GKE on-prem, see Installing F5 BIG-IP ADC for GKE on-prem using manual load balancing.

F5 is a leading provider of ADC services. The F5 BIG-IP platform provides various services to help you enhance the security, availability, and performance of your apps. These services include L7 load balancing, network firewalling, web application firewalling (WAF), and DNS services. For GKE on-prem, the BIG-IP provides external access and L3/4 load-balancing services.

Why F5?

Among its various services and features, BIG-IP provides container ingress services (CIS). CIS provides platform-native integrations between BIG-IP devices and container orchestration platforms like Kubernetes. This integration makes it possible to dynamically allocate BIG-IP L4-L7 services in container orchestration environments.

Anthos uses a version of CIS to automatically provision L4 load-balancing services on the BIG-IP platform while providing external access to apps deployed on GKE on-prem clusters.

After you configure BIG-IP, you can integrate with GKE on-prem for ingress as part of the default installation.

Objectives

  • Learn about BIG-IP high availability.
  • Set up BIG-IP.
  • Configure BIG-IP before you deploy it to GKE on-prem.

Costs

This tutorial uses the following billable components of Google Cloud:

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

Before you begin

  1. GKE on-prem automatically installs F5 BIG-IP container ingress services (CIS). Each version of GKE on-prem relies on a different version of CIS. Ensure that your F5 BIG-IP load balancer version supports the F5 BIG-IP CIS version that comes with GKE on-prem by consulting the F5 BIG-IP Controller/Platform compatibility matrix.
  2. Obtain an F5 BIG-IP Application Delivery Controller and license. The F5 BIG-IP ADC is available in various hardware platforms and virtual editions. Regardless of the platform you use, the solution is supported, and the following configuration process is applicable.

    There are three types of licenses for F5 BIG-IP.

    License type  | Expiration date | Throughput limitations       | Use case
    Production    | No              | Up to 40 Gbps on GKE on-prem | Production
    Evaluation    | Yes, 45 days    | 25 Mbps - 10 Gbps            | Proof of concept, demonstration
    Demonstration | Yes, 30 days    | 1 Mbps                       | Proof of concept, demonstration
  3. Activate a license key for BIG-IP.

  4. Make sure your environment meets the following minimum system requirements:

    • 8 vCPUs that aren't shared with other virtual machines on that host
    • 16 GB of memory that isn't shared with other virtual machines on that host
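
As a quick check for step 1, you can confirm the TMOS version running on your BIG-IP device before you consult the compatibility matrix. Assuming you have SSH access to the device's management interface, run the following command:

    tmsh show sys version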

Architecture

There are two common scenarios for deploying BIG-IP ADC with GKE on-prem clusters. Because the BIG-IP acts as a proxy for external access to the clusters, it's common to deploy a BIG-IP with three or more interfaces, as illustrated in the following diagram.

Architecture of BIG-IP deployment.

In the preceding diagram, separate interfaces independently serve internal (private) and external (public-facing) traffic. This architecture provides better visibility for monitoring and troubleshooting, and increased throughput.

Although uncommon, you can deploy BIG-IP in a two-armed mode, where only one interface serves data plane traffic, as illustrated in the following diagram.

Architecture of BIG-IP deployment in two-armed mode.

While the preceding configuration isn't considered a best practice, if you're integrating into an existing environment with a pre-defined network architecture, you might need this type of configuration.

BIG-IP high availability

To ensure availability of your applications, it's a best practice to deploy the BIG-IP in either an active-standby or active-active device service cluster. The BIG-IP uses device service clustering (DSC) to synchronize configuration data between cluster group members and to provide automatic failover if the active device fails.

You can configure the BIG-IP for DSC as part of the initial setup or at any time afterward by running the Config Sync/HA utility. That said, there are important considerations for DSC and how the BIG-IP interacts with GKE on-prem.

F5 CIS and Config Sync

As noted earlier, the BIG-IP uses F5 Container Ingress Services (CIS) to integrate with GKE on-prem. CIS deploys a lightweight controller into the environment to monitor the partition it manages for configuration changes. If CIS discovers changes, the controller reapplies its own configuration to the BIG-IP system.

F5 doesn't typically recommend making configuration changes to objects in any partition managed by F5 CIS by any other means, including by syncing configuration from another device or service group. However, because GKE on-prem uses NodePort mode only, you can deploy the BIG-IP systems in an active-standby cluster and take advantage of F5’s native configuration auto-sync capabilities.
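
For example, after the device group exists, you could enable automatic configuration sync on it from tmsh. This is a sketch; the device group name gke-ha-group is a placeholder for your own:

    tmsh modify cm device-group gke-ha-group auto-sync enabled
    tmsh save sys config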

Controller instance to BIG-IP ratio

BIG-IP systems use two types of IP addresses: floating and non-floating. A non-floating IP address always remains with the BIG-IP device where it was defined. Floating IP addresses are owned by whichever BIG-IP is the active device. In the event of a failover, these addresses float from the formerly active to the newly active BIG-IP.

While F5 typically recommends deploying one controller instance per BIG-IP device to further enhance availability, you can deploy a single CIS instance pointing to the BIG-IP cluster's floating self-IP address. When changes to the GKE on-prem infrastructure occur and are updated on the active BIG-IP by the controller, the standby device receives the updates automatically through configuration synchronization.
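
To confirm which device is currently active and which self-IP addresses float between the pair, you can run the following tmsh commands on either device:

    tmsh show cm failover-status
    tmsh list net self

Floating self-IP addresses are those assigned to a floating traffic group (such as traffic-group-1) in the output.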

Setting up your environment

  1. Follow the instructions to set up BIG-IP virtual edition deployed on VMware ESXi 6.5.

    The OVF template requires configuring four interfaces. The fourth interface is designated for HA heartbeat traffic between BIG-IP pairs. For this three-arm deployment, assign the fourth interface to the internal network, gke-node.

    Interfaces you need to configure the OVF template.

  2. After the VM boots, use the F5 BIG-IP's setup utility for initial configuration. The setup utility walks you through the following configuration tasks:

    1. From a workstation with network access to the gke-mgmt network, go to https://management_IP_address, where management_IP_address is the address that you configured for your device.
    2. When prompted, enter the default username admin and the password admin.
    3. Click Log in.
  3. To install a license, in the Base Registration Key field, enter your key. The type of license dictates the BIG-IP's services and bandwidth limits.

  4. To enhance performance when working with GKE clusters, set the Management (MGMT) plane provisioning to Large.

  5. To provide L3/4 load balancing to the GKE on-prem environment, set the Local Traffic (LTM) module to Nominal.

    VM setup utility with local traffic configuration.

  6. In the Host and User Information window, provide the hostname (FQDN) of the BIG-IP system, and update the admin and root account passwords.

  7. In the Networking window, you walk through configuring the BIG-IP's basic networking. The utility creates the internal (gke-node) and external (gke-external) interfaces, VLANs, and self-IP addresses.

    Networking window.

    The fourth interface deployed by VMware is left unconfigured.

    4th interface unconfigured.
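
If you prefer the command line, the module provisioning from steps 4 and 5 can also be set from tmsh. A sketch, assuming an SSH session to the management interface:

    tmsh modify sys provision ltm level nominal
    tmsh save sys config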

Additional configuration

After the Setup utility completes, you have a functioning BIG-IP with a management plane interface attached to the gke-mgmt VMware network and two data plane interfaces attached to VMware networks, gke-node and gke-external.

Before you deploy GKE on-prem, more configuration of BIG-IP is required.

  • Create an administrative partition for each admin and user cluster you intend to expose and access. Initially, you define two partitions: one for the admin cluster, and one for the first user cluster. Don't use cluster partitions for anything else. Each of the clusters must have a partition that is for the sole use of that cluster.
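
For example, the two initial partitions can be created from tmsh. The partition names gke-admin and gke-user-1 here are placeholders; use names that match your clusters:

    tmsh create auth partition gke-admin
    tmsh create auth partition gke-user-1
    tmsh save sys config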

F5 BIG-IP account permissions

The existing Administrator role provides enough permissions for use with GKE on-prem. For more information, see User roles. You can also learn how to create additional users.
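
As a sketch, you could create a dedicated user with the Administrator role for GKE on-prem to use. The username gke-bigip-admin is a placeholder, and you should substitute your own password:

    tmsh create auth user gke-bigip-admin password 'REPLACE_WITH_PASSWORD' partition-access add { all-partitions { role admin } }
    tmsh save sys config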

Anthos-specific system optimization

  1. Use an SSH utility, such as PuTTY or Cygwin, to establish an SSH connection to the BIG-IP's management interface.
  2. Allocate more memory to the restjavad process:

    tmsh modify sys db restjavad.useextramb value true
    

    You need access to tmsh (Traffic Management Shell) to run this command. The restjavad process provides control-plane access to the BIG-IP system through an HTTP REST API.

  3. Restart the restjavad process:

    bigstart restart restjavad
    
  4. If you're using BIG-IP version 13.1 or later, skip this step. Otherwise, allocate more memory to Tomcat:

    tmsh modify /sys db provision.tomcat.extramb value 20
    
  5. Restart the Tomcat service:

    tmsh restart /sys service tomcat
    

Modifying CIS-managed virtual servers

As noted earlier, CIS is deployed to the GKE on-prem environment to monitor configuration changes and update the BIG-IP system, effectively overwriting the relevant user partition. As a result of this integration, manual modifications made to a virtual server within a CIS-managed cluster are overwritten.

While not typically recommended, there can be situations, such as using custom protocol profiles or persistence settings, where you might use BIG-IP system functionality that isn't natively supported by the CIS controller. To accomplish this, you can whitelist a CIS-configured virtual server. Whitelisting tells the Controller to merge its configuration into the existing virtual server instead of overwriting it.

To whitelist a particular CIS-managed virtual server, complete the following steps:

  1. Establish an SSH connection to the BIG-IP's management interface.
  2. Access the TMOS command-line shell:

    tmsh
    
  3. Change to your user partition, replacing partition-name with the name of the partition that CIS manages:

    cd /partition-name
    
  4. Whitelist a virtual server:

    modify ltm virtual virtual-server-name metadata add { cccl-whitelist { value 1 }}
    

    Replace the following:

    • virtual-server-name: the name of the virtual server you want to whitelist.
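
To confirm that the metadata was applied, you can list the virtual server's configuration from the same tmsh session, where virtual-server-name is the same placeholder as in the previous step:

    list ltm virtual virtual-server-name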

At this point, your BIG-IP is configured and ready to provide L3/4 load-balancing services for your GKE on-prem environment. You can now proceed with deploying GKE on-prem.

What's next