GKE On-Prem uses Kubernetes networking concepts like Service and Ingress. This document describes how GKE On-Prem networking is configured out of the box.
Cluster Service operations and Island Mode
GKE On-Prem uses an Island Mode configuration, in which Pods can talk directly to each other within a cluster but cannot be reached from outside the cluster. This configuration forms an "island" within the network that is not connected to the external network. Clusters use BGP (via the Calico CNI plugin) to form a full node-to-node mesh across the cluster nodes, allowing each Pod to reach other Pods within the cluster directly.
All egress traffic from Pods to targets outside the cluster is NATed to the node IP address. GKE On-Prem includes an L7 load balancer with an Envoy-based ingress controller that handles Ingress object rules for ClusterIP Services deployed within the cluster. The ingress controller itself is exposed as a NodePort Service in the cluster.
The ingress NodePort Service can be reached through an L3/L4 F5 load balancer. The installation configures a virtual IP address (VIP), with ports 80 and 443, on the load balancer. The VIP points to the ports in the NodePort Service for the ingress controller. This is how external clients can access Services in the cluster.
User clusters can run Services of type LoadBalancer as long as a loadBalancerIP field is configured in the Service's specification. In the loadBalancerIP field, you provide the VIP that you want to use. This VIP is configured on F5, pointing to the NodePorts of the Service.
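For reference, a minimal sketch of such a Service is shown below. The name my-app, the container port, and the [SERVICE_VIP] placeholder are illustrative assumptions, not values from your environment; a fuller example appears in the section on accessing a web application via IP address later in this document.
# Minimal sketch of a Service of type LoadBalancer (names, ports, and the VIP are placeholders)
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  loadBalancerIP: [SERVICE_VIP]  # routable VIP to be configured on F5
  selector:
    app: my-app
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080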
As an alternative to using the F5 load balancer, you can enable manual load balancing mode. If you choose to use manual load balancing, you cannot run Services of type LoadBalancer. Instead, you can create Services of type NodePort and manually configure your load balancer to use them as backends (see the sketch below). You can also expose Services to outside clients by using an Ingress object.
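As a hedged illustration of manual load balancing mode, the following is a minimal sketch of a Service of type NodePort. The name my-app and the port numbers are assumptions for illustration; your external load balancer would be configured, outside the cluster, to forward traffic to the chosen nodePort on the node IP addresses.
# Minimal sketch of a NodePort Service for manual load balancing mode (names and ports are placeholders)
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30080  # configure your load balancer to send traffic to this port on the node IPs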
Networking architecture
Figure: GKE On-Prem networking.
- Node IP addresses
- DHCP-assigned or statically assigned IP addresses for the nodes (also called virtual machines or VMs). The addresses must be routable within the data center. If you choose static assignment, you assign the IPs manually.
- Pod CIDR block
- Non-routable CIDR block for all Pods in the cluster. From this range, a smaller /24 range is assigned to each node. For an N-node cluster, make sure this block is large enough to provide N /24 blocks; for example, a /16 Pod CIDR block yields 256 /24 ranges, enough for up to 256 nodes.
- Services CIDR block
- In Island Mode, the Services CIDR block, like the Pod CIDR block, is used only within the cluster. It can be any private CIDR block that does not overlap with the node IPs, the VIPs, or the Pod CIDR block, and you can share the same block among clusters. The size of the block determines the number of Services the cluster can have. One Service IP is needed for the ingress Service itself, and ten or more IPs are needed for Kubernetes Services such as cluster DNS.
- Services VIPs
- N routable IP addresses to be configured on F5 for L4 ingress when you expose a Service. These VIPs are the same as the loadBalancerIP values you specify when you create Services of type LoadBalancer.
- Control plane VIP
- A routable IP address to configure on F5 load balancer for the Kubernetes API server.
- Ingress VIP
- A routable IP address to configure on the F5 load balancer for L7 ingress in conjunction with the Envoy proxies running on each node.
network configuration
These parameters are captured under the network field of the cluster configuration file. Below is an example of a network field that specifies most of the parameters:
# Example of a network section with static node IPs
network:
  clusterip:
    servicecidr: 10.96.232.0/24
    podcidr: 192.168.0.0/16
  nodeip:
    mode: static
    addressblock:
      hostconfig:
        dns: 8.8.8.8
        tod: 192.138.210.214
      blocks:
      - netmask: 255.255.252.0
        gateway: 10.116.232.0
        ips:
        - ip: 10.116.232.23
          hostname: host1.enterprise.net
        - ip: 10.116.232.65
          hostname: host2.enterprise.net
        - ip: 10.116.232.66
          hostname: host3.enterprise.net
  loadbalancer:
    controlplaneip: 10.115.231.45
    ingressip: 10.115.231.54
    kind: F5BigIP
    f5bigip:
      server: 10.113.24.12
      username: # encoded value
      password: # encoded value
      partition: admin-partition
The network field consists of three subfields, clusterip, nodeip, and loadbalancer, which configure different networking settings for the cluster. The nodeip.blocks subfield is optional; you only need to specify it if nodeip.mode is set to static.
If nodes are configured to get IPs through DHCP, you would configure the network field as follows:
# Example of a network section using DHCP for node IPs
network:
  clusterip:
    servicecidr: 10.96.232.0/24
    podcidr: 192.168.0.0/16
  nodeip:
    mode: dhcp
  loadbalancer:
    controlplaneip: 10.115.231.45
    ingressip: 10.115.231.54
    kind: F5BigIP
    f5bigip:
      server: 10.113.24.12
      username: admin
      password: secret
      partition: user-partition
Example: Access a web application via URL
Suppose that you have a guestbook web application running in your cluster as a Deployment named frontend. You want to connect to the application using a URL, www.guestbook.com, so you need some way of mapping the URL to the Deployment running in your cluster. You can do this using a Kubernetes Ingress object.
To begin, create a wildcard DNS entry for *.guestbook.com that points to the cluster's existing ingress VIP:
*.guestbook.com A [INGRESS_VIP]
Next, you need to create a Service for the frontend Deployment. Running kubectl expose creates a Service that logically groups the Deployment's Pods and provides them with a common IP address within the cluster:
kubectl expose deployment frontend
This creates a Service of type ClusterIP, like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: guestbook
  name: frontend
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: guestbook
  type: ClusterIP
You need to map the URL, www.guestbook.com, to the frontend Service you just created. Applying the Ingress below creates that mapping:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
  labels:
    app: guestbook
spec:
  rules:
  - host: www.guestbook.com
    http:
      paths:
      - backend:
          serviceName: frontend # name of the frontend Service
          servicePort: 80
Now, visiting www.guestbook.com opens the web application in your browser.
Here's how this works under the hood:
- Because you created that wildcard DNS entry, when you visit the URL, you're accessing the cluster's ingress VIP.
- The cluster looks for the correct Ingress object based on the hostname, which in this case is www.guestbook.com (a second hostname can be routed the same way; see the sketch after this list).
- The ingress controller forwards the traffic to a frontend Pod.
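To illustrate the hostname-based routing, here is a hedged sketch of a second Ingress that reuses the same wildcard DNS entry and ingress VIP to route a different hostname to another Service. The hostname api.guestbook.com and the Service name backend are hypothetical, introduced only for illustration.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api
  labels:
    app: guestbook
spec:
  rules:
  - host: api.guestbook.com  # hypothetical hostname, already covered by the *.guestbook.com wildcard entry
    http:
      paths:
      - backend:
          serviceName: backend # hypothetical ClusterIP Service
          servicePort: 80
Because *.guestbook.com already points to the ingress VIP, no additional DNS or load balancer changes are needed; the ingress controller selects the Ingress rule whose host matches the request.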
Example: Access a web application via IP address
If your application is not a web application, or if you have networking constraints, you might prefer to create a VIP specifically for your service. You can do this using a Kubernetes Service of type LoadBalancer.
The Service below creates a VIP specifically for the guestbook application:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: guestbook
  name: frontend
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: guestbook
  type: LoadBalancer
  loadBalancerIP: [IP_ADDRESS]
After you apply this Service, the VIP appears in your F5 console, and the console's Pools menu lists the IP addresses of the nodes. Visiting the VIP loads the application.