This tutorial shows how to set up the F5 BIG-IP Application Delivery Controller (ADC) before you integrate it with Anthos clusters on VMware using the manual load-balancing mode. If you're interested in installing F5 BIG-IP ADC on Anthos clusters on VMware to automatically provision L4 load-balancing services, see Installing F5 BIG-IP ADC for Anthos clusters on VMware.
F5 is a leading provider of ADC services. The F5 BIG-IP platform provides various services to help you enhance the security, availability, and performance of your apps. These services include L7 load balancing, network firewalling, web application firewalling (WAF), DNS services, and more. For Anthos clusters on VMware, BIG-IP provides external access and L3/4 load-balancing services.
When to choose manual load-balancer mode
When deployed in integrated mode, Anthos uses a version of F5 container ingress services (CIS) to automatically provision L4 load-balancing services on the BIG-IP platform. CIS continues to monitor and update BIG-IP when the Anthos clusters on VMware cluster is modified. However, CIS comes with limitations.
At the time of publication, you cannot add L7 services such as F5 Advanced WAF or Access Policy Manager (F5 APM) to the virtual IP address endpoints when the environment is deployed using the integrated mode. This limitation is due to the nature of CIS: any modifications made to the BIG-IP partitions are overwritten by the CIS controller when it's updated.
By deploying the Anthos clusters on VMware environment using the manual load-balancer mode, on the other hand, you create the required virtual servers and related BIG-IP resources prior to deploying Anthos clusters on VMware. This type of deployment lets you customize and secure the BIG-IP hosted environment endpoints. The trade-off is that as the environment changes, for example when cluster node instances are added or removed, you need to manually update the BIG-IP.
Objectives
- Learn about the BIG-IP architecture.
- Configure the BIG-IP for Anthos clusters on VMware external endpoints.
- Create virtual servers.
Costs
In this document, you use billable components of Google Cloud. To generate a cost estimate based on your projected usage, use the pricing calculator.
Before you begin
Obtain an F5 BIG-IP Application Delivery Controller and license. The F5 BIG-IP ADC is available in various hardware platforms and virtual editions. Regardless of the platform you use, the solution is supported, and the following configuration process is applicable.
There are three types of licenses for F5 BIG-IP.
| License type | Production | Evaluation | Demonstration |
|---|---|---|---|
| Expiration date | No | Yes, 45 days | Yes, 30 days |
| Throughput limitations | Up to 40 Gbps on Anthos clusters on VMware | 25 Mbps - 10 Gbps | 1 Mbps |
| Use case | Production | Proof of concept, demonstration | Proof of concept, demonstration |

- If you have a production license, you can use that license.
- For the purpose of this tutorial, you can request a free trial license key.
- If your throughput requirements are greater than the 1 Mbps provided by the trial license, you can request an evaluation license from F5.
- If the BIG-IP system contains an evaluation or demonstration license, the BIG-IP system stops processing traffic when the license expires.
Make sure your environment meets the following minimum system requirements:
- 8 vCPUs that aren't shared with other guests on that host
- 16 GB of memory that isn't shared with other guests on that host
Architecture
There are two common scenarios for deploying the BIG-IP ADC with Anthos clusters on VMware clusters. Because the BIG-IP acts as a proxy for external access to the clusters, it's common to deploy a BIG-IP with three or more interfaces, as illustrated in the following diagram.
In the preceding diagram, separate interfaces serve internal private- and external public-facing traffic independently. This architecture provides better visibility for monitoring and troubleshooting, and increased throughput.
While this configuration isn't considered a best practice, if you're integrating into an existing environment with a pre-defined network architecture, you might need this type of configuration.
Setting up your environment
Follow the instructions to set up BIG-IP virtual edition deployed on VMware ESXi 6.5.
The OVF template requires configuring four interfaces. The fourth interface is designated for HA heartbeat traffic between BIG-IP pairs. For this three-arm deployment, assign the internal network, `gke-node`.

After the VM boots, use the F5 BIG-IP's Setup utility for initial configuration. The Setup utility walks you through the following configuration tasks:
- From a network-accessible workstation on which you configured the `gke-mgmt` interface, go to `https://management_IP_address`, where `management_IP_address` is the address you configured for your device.
- When prompted, enter the default username `admin` and the password `admin`.
- Click Log in.
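The Setup utility is a GUI workflow, but the same credentials also work against the BIG-IP iControl REST API, which is useful if you later want to script parts of the configuration in this tutorial. The following Python sketch, using the requests library, shows token-based authentication; the management address and default credentials are placeholders for your own values.

```python
# A minimal sketch of authenticating to the BIG-IP iControl REST API.
# The management address and credentials below are placeholders.
import requests

BIGIP = "https://management_IP_address"   # placeholder: your gke-mgmt address
CREDS = {"username": "admin", "password": "admin", "loginProviderName": "tmos"}

# verify=False skips the BIG-IP's default self-signed certificate.
resp = requests.post(f"{BIGIP}/mgmt/shared/authn/login", json=CREDS, verify=False)
resp.raise_for_status()
token = resp.json()["token"]["token"]

# Subsequent calls pass the token in the X-F5-Auth-Token header.
headers = {"X-F5-Auth-Token": token}
version = requests.get(f"{BIGIP}/mgmt/tm/sys/version", headers=headers, verify=False)
print(version.json())
```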
To install a license, in the Base Registration Key field, enter your key. The type of license dictates the BIG-IP's services and bandwidth limits.
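If you want to automate licensing instead of using the Setup utility, a registration key can also be installed through iControl REST. The following sketch assumes a placeholder key, address, and credentials, and automatic activation requires the BIG-IP to reach the F5 licensing servers.

```python
# A hedged sketch of installing a registration key through iControl REST.
# YOUR_REGISTRATION_KEY is a placeholder for your own key.
import requests

BIGIP = "https://management_IP_address"   # placeholder management address
AUTH = ("admin", "admin")                 # placeholder credentials

resp = requests.post(
    f"{BIGIP}/mgmt/tm/sys/license",
    auth=AUTH,
    verify=False,  # default self-signed certificate
    json={"command": "install", "registrationKey": "YOUR_REGISTRATION_KEY"},
)
resp.raise_for_status()
print(resp.json())
```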
To enhance performance when working with GKE clusters, set the Management (MGMT) plane provisioning to Large.
To provide L3/4 load balancing to the Anthos clusters on VMware environment, set the Local Traffic (LTM) module provisioning to Nominal.
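The provisioning level can also be set programmatically. The following sketch applies the Nominal level to the LTM module through iControl REST; the address and credentials are placeholders, and the management-plane setting above is still made in the GUI.

```python
# A minimal sketch that sets LTM provisioning to "nominal" through
# iControl REST, equivalent to the Resource Provisioning page in the GUI.
import requests

BIGIP = "https://management_IP_address"   # placeholder management address
AUTH = ("admin", "admin")                 # placeholder credentials

resp = requests.patch(
    f"{BIGIP}/mgmt/tm/sys/provision/ltm",
    auth=AUTH,
    verify=False,  # default self-signed certificate
    json={"level": "nominal"},
)
resp.raise_for_status()
```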
On the Host and User Information page, you provide the hostname (FQDN) of the BIG-IP system and update the admin and root account passwords.
On the Networking page, you walk through configuring the BIG-IP's basic networking. The utility creates the internal (`gke-node`) and external (`gke-external`) interfaces, VLANs, and self IP addresses. The fourth interface deployed by VMware is left unconfigured.
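If you prefer to create the data-plane networking objects yourself rather than with the Setup utility, the following sketch shows the general shape of the VLAN and self IP definitions through iControl REST. The interface numbers, VLAN names, and IP addresses are examples only and must match your own environment.

```python
# A hedged sketch of the networking objects the Setup utility creates:
# a VLAN bound to a data-plane interface and a self IP on that VLAN.
# Interface numbers, names, and addresses below are examples only.
import requests

BIGIP = "https://management_IP_address"   # placeholder management address
AUTH = ("admin", "admin")                 # placeholder credentials

def post(path, body):
    r = requests.post(f"{BIGIP}{path}", auth=AUTH, verify=False, json=body)
    r.raise_for_status()
    return r.json()

# Internal (node-facing) VLAN on interface 1.1 and its self IP.
post("/mgmt/tm/net/vlan", {"name": "gke-node", "interfaces": [{"name": "1.1", "untagged": True}]})
post("/mgmt/tm/net/self", {"name": "gke-node-self", "address": "10.1.1.5/24", "vlan": "gke-node"})

# External VLAN on interface 1.2 and its self IP.
post("/mgmt/tm/net/vlan", {"name": "gke-external", "interfaces": [{"name": "1.2", "untagged": True}]})
post("/mgmt/tm/net/self", {"name": "gke-external-self", "address": "203.0.113.5/24", "vlan": "gke-external"})
```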
Additional configuration
After the Setup utility completes, you have a functioning BIG-IP with a management plane interface attached to the `gke-mgmt` VMware network and two data plane interfaces attached to the `gke-node` and `gke-external` VMware networks.
Before you deploy Anthos clusters on VMware, more configuration of the BIG-IP is required.
Create an administrative partition for each admin and user cluster you intend to expose and access.
Initially, you define two partitions: one for the admin cluster, and one for the first user cluster. Don't use cluster partitions for anything else. Each of the clusters must have a partition that is for the sole use of that cluster.
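As a scripted alternative to the GUI, the following sketch creates the two partitions through iControl REST. The names adminpart and userpart match the names used later in this tutorial; the address and credentials are placeholders.

```python
# A minimal sketch that creates the two administrative partitions
# through iControl REST.
import requests

BIGIP = "https://management_IP_address"   # placeholder management address
AUTH = ("admin", "admin")                 # placeholder credentials

for name in ("adminpart", "userpart"):
    r = requests.post(
        f"{BIGIP}/mgmt/tm/auth/partition",
        auth=AUTH,
        verify=False,  # default self-signed certificate
        json={"name": name},
    )
    r.raise_for_status()
```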
F5 BIG-IP account permissions
The existing Administrator role provides enough permissions for use with Anthos clusters on VMware. For more information, see User roles. You can also learn how to create additional users.
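If you choose to create a dedicated account for Anthos clusters on VMware instead of using the built-in admin account, the following sketch shows one way to do it through iControl REST; the username and password are placeholders.

```python
# A hedged sketch of creating a dedicated BIG-IP user with the
# Administrator role across all partitions. Username and password
# are placeholders.
import requests

BIGIP = "https://management_IP_address"   # placeholder management address
AUTH = ("admin", "admin")                 # placeholder credentials

resp = requests.post(
    f"{BIGIP}/mgmt/tm/auth/user",
    auth=AUTH,
    verify=False,  # default self-signed certificate
    json={
        "name": "anthos-admin",           # example username
        "password": "CHANGE_ME",          # placeholder password
        "partitionAccess": [{"name": "all-partitions", "role": "admin"}],
    },
)
resp.raise_for_status()
```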
Configuring the BIG-IP for Anthos clusters on VMware external endpoints
Before deploying Anthos clusters on VMware, you must configure the BIG-IP with virtual servers (VIPs) corresponding to the following Anthos clusters on VMware endpoints:
Admin partition
- VIP for admin cluster control plane (port exposed: 443)
- VIP for admin cluster add-on (port exposed: 8443)
- VIP for user control plane (port exposed: 443)
User partition
- VIP for user cluster ingress controller (port exposed: 443)
- VIP for user cluster ingress controller (port exposed: 80)
Perform the following steps from both the admin and user partitions to create node objects on the BIG-IP for each host specified in the corresponding host configuration files.
Create node objects
Anthos clusters on VMware clusters can run with one of two load-balancing modes: integrated or manual. For manual mode, cluster nodes (both admin and user clusters) must be assigned static IP addresses. These addresses are in turn used to configure node objects on the BIG-IP system. You will create a node object for each Anthos clusters on VMware cluster node. The nodes are added to backend pools that are then associated with virtual servers.
- To log in to the BIG-IP management console, go to the IP address. The address is provided during the installation.
- Click the Administrative partition that you previously created.
- Go to Local Traffic > Nodes > Node List.
- Click Create.
- Enter a name and IP address for each cluster host and click Finished.
- Repeat these steps for each admin cluster member.
Repeat these steps for each user cluster member, but click the User partition instead of the Administrative partition.
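If you have many cluster nodes, you can script the preceding steps with iControl REST instead of creating each node in the GUI. In the following sketch, the node names, static IP addresses, and partition names are examples; use the addresses from your own host configuration files.

```python
# A hedged sketch that scripts the preceding node-object steps.
# Node names and addresses are examples; use your own static IPs.
import requests

BIGIP = "https://management_IP_address"   # placeholder management address
AUTH = ("admin", "admin")                 # placeholder credentials

NODES = {
    "adminpart": {"admin-node-1": "10.1.1.11", "admin-node-2": "10.1.1.12"},
    "userpart": {"user-node-1": "10.1.1.21", "user-node-2": "10.1.1.22"},
}

for partition, nodes in NODES.items():
    for name, address in nodes.items():
        r = requests.post(
            f"{BIGIP}/mgmt/tm/ltm/node",
            auth=AUTH,
            verify=False,  # default self-signed certificate
            json={"name": name, "address": address, "partition": partition},
        )
        r.raise_for_status()
```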
Create backend pools
You create a backend pool for each required VIP, seven in total.
- In the BIG-IP management console, click adminpart for the admin partition that you previously created.
- Go to Local Traffic > Pools > Pool List.
- Click Create.
- In the Configuration drop-down list, click Advanced.
- In the Name field, enter `Istio-80-pool`.
- To verify the pool member accessibility, under Health Monitor, click tcp. Optional: Because this is a manual configuration, you can also take advantage of more advanced monitors as appropriate for your deployment.
- For Action on Service Down, click Reject.
- For this tutorial, in the Load Balancing Method drop-down list, click Round Robin.
- In the New Members section, click Node List and then select the previously created node.
- In the Service Port field, enter the appropriate `nodePort` from the gkectl configuration file.
- Click Add.
- Repeat the New Members, Service Port, and Add steps for each remaining cluster node instance.
- Click Finished.
- Repeat all of these steps in this section for the remaining required admin cluster VIPs.
- Repeat all of these steps in this section for each user cluster pool, except in the first step, click userpart instead of adminpart.
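The preceding pool configuration can also be expressed as a single iControl REST call. The following sketch mirrors the GUI steps for Istio-80-pool; the node names, partition, and nodePort value are examples, and the nodePort must come from your gkectl configuration file.

```python
# A hedged sketch of creating the Istio-80-pool backend pool, mirroring
# the GUI steps above. Member names and the nodePort are examples.
import requests

BIGIP = "https://management_IP_address"   # placeholder management address
AUTH = ("admin", "admin")                 # placeholder credentials
PARTITION = "userpart"
NODE_PORT = 30880                         # example nodePort for the HTTP ingress service

# Members reference the node objects created earlier (example names).
members = [{"name": f"user-node-{i}:{NODE_PORT}", "partition": PARTITION} for i in (1, 2)]

resp = requests.post(
    f"{BIGIP}/mgmt/tm/ltm/pool",
    auth=AUTH,
    verify=False,  # default self-signed certificate
    json={
        "name": "Istio-80-pool",
        "partition": PARTITION,
        "monitor": "/Common/tcp",
        "loadBalancingMode": "round-robin",
        "serviceDownAction": "reset",  # "reset" corresponds to the GUI's Reject action (assumption)
        "members": members,
    },
)
resp.raise_for_status()
```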
Create virtual servers
You create a total of seven virtual servers on the BIG-IP: five for the admin clusters and two for the user clusters. The virtual servers correspond to the VIPs required to deploy Anthos clusters on VMware.
- In the BIG-IP management console, click the Admin partition that you previously created.
- Go to Local Traffic > Virtual Servers > Virtual Server List.
- Click Create.
- In the Name field, enter `istio-ingress-80`.
- In the Destination Address/Mask field, enter the IP address for the VIP. For this tutorial, use the HTTP ingress VIP in the gkectl configuration file.
- In the Service Port field, enter the appropriate listener port for the VIP. For this tutorial, use port `80`.
  There are several configuration options for enhancing your app's endpoint, such as associating protocol-specific profiles, certificate profiles, and WAF policies.
- For Source Address Translation, click Auto Map.
- For Default Pool, select the appropriate pool that you previously created.
- Click Finished.
- Repeat these steps to create the remaining required admin cluster VIPs.
- Repeat these steps to create the user cluster virtual servers, but select the User partition.
- Create and download an archive of the current configuration.
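As with the pools, the virtual servers can be scripted. The following sketch creates the istio-ingress-80 virtual server through iControl REST and then saves a UCS archive of the configuration; the VIP address, partition, and archive name are examples.

```python
# A hedged sketch of creating the istio-ingress-80 virtual server and
# saving a UCS archive. The VIP address and names are examples.
import requests

BIGIP = "https://management_IP_address"   # placeholder management address
AUTH = ("admin", "admin")                 # placeholder credentials
PARTITION = "userpart"
INGRESS_VIP = "203.0.113.10"              # example: HTTP ingress VIP from the gkectl configuration file

resp = requests.post(
    f"{BIGIP}/mgmt/tm/ltm/virtual",
    auth=AUTH,
    verify=False,  # default self-signed certificate
    json={
        "name": "istio-ingress-80",
        "partition": PARTITION,
        "destination": f"/{PARTITION}/{INGRESS_VIP}:80",
        "ipProtocol": "tcp",
        "pool": f"/{PARTITION}/Istio-80-pool",
        "sourceAddressTranslation": {"type": "automap"},
    },
)
resp.raise_for_status()

# Save a UCS archive of the running configuration (also downloadable
# from System > Archives in the GUI).
resp = requests.post(
    f"{BIGIP}/mgmt/tm/sys/ucs",
    auth=AUTH,
    verify=False,
    json={"command": "save", "name": "pre-anthos-install.ucs"},
)
resp.raise_for_status()
```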
What's next
To further enhance the security and performance of the external-facing VIPs, consider the following:
Learn more about F5 BIG-IP Application Services.
Learn more about BIG-IP configurations and capabilities.
Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center.