This page explains how to create a Compute Engine internal TCP/UDP load balancer on Google Kubernetes Engine.
Overview
Internal TCP/UDP Load Balancing makes your cluster's services accessible to
applications outside of your cluster that use the same
VPC network and are located in the same Google Cloud
region. For example, suppose you have a cluster in the us-west1
region and you
need to make one of its services accessible to Compute Engine VM
instances running in that region on the same VPC network.
You can create an internal TCP/UDP load balancer by creating a Service resource with a type: LoadBalancer specification and an annotation. The annotation depends on the version of your GKE cluster. For GKE versions 1.17 and later, use the annotation networking.gke.io/load-balancer-type: "Internal". For earlier versions, use the annotation cloud.google.com/load-balancer-type: "Internal".
Without Internal TCP/UDP Load Balancing, you would need to set up an external load balancer and firewall rules to make the application accessible outside the cluster.
Internal TCP/UDP Load Balancing creates an internal IP address for the Service that receives traffic from clients in the same VPC network and compute region. If you enable global access, clients in any region of the same VPC network can access the Service. In addition, clients in a VPC network connected to the LoadBalancer network using VPC Network Peering can also access the Service.
Pricing
You are charged according to Compute Engine's pricing model. For more information, see Load balancing and forwarding rules pricing and the Compute Engine page on the Google Cloud pricing calculator.
Before you begin
Before you start, make sure you have performed the following tasks:
- Ensure that you have enabled the Google Kubernetes Engine API.
- Ensure that you have installed the Cloud SDK.
- Set up default gcloud settings using one of the following methods:
  - Using gcloud init, if you want to be walked through setting defaults.
  - Using gcloud config, to individually set your project ID, zone, and region.
Using gcloud init
If you receive the error One of [--zone, --region] must be supplied: Please specify location, complete this section.
- Run gcloud init and follow the directions:

  gcloud init

  If you are using SSH on a remote server, use the --console-only flag to prevent the command from launching a browser:

  gcloud init --console-only

- Follow the instructions to authorize gcloud to use your Google Cloud account.
- Create a new configuration or select an existing one.
- Choose a Google Cloud project.
- Choose a default Compute Engine zone for zonal clusters or a region for regional or Autopilot clusters.
Using gcloud config
- Set your default project ID:
gcloud config set project PROJECT_ID
- If you are working with zonal clusters, set your default compute zone:
gcloud config set compute/zone COMPUTE_ZONE
- If you are working with Autopilot or regional clusters, set your default compute region:
gcloud config set compute/region COMPUTE_REGION
- Update gcloud to the latest version:
gcloud components update
Create a Deployment
The following manifest describes a Deployment that runs 3 replicas of a Hello World app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
The source code and Dockerfile for this sample app are available on GitHub.
Because no PORT environment variable is specified, the containers listen on the default port, 8080.
To create the Deployment, create the file my-deployment.yaml
from the manifest,
and then run the following command in your shell or terminal window:
kubectl apply -f my-deployment.yaml
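Optionally, you can check that the three replicas are running before continuing. The -l flag filters Pods by the app: hello label from the manifest:

kubectl get pods -l app=hello

All three Pods should reach the Running status within a short time.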
Create an internal TCP load balancer
The following sections explain how to create an internal TCP load balancer using a Service.
Writing the Service configuration file
The following is an example of a Service that creates an internal TCP load balancer:
apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
Minimum Service requirements
Your manifest must contain the following:
- A name for the Service, in this case ilb-service.
- An annotation that specifies an internal TCP/UDP load balancer. The annotation depends on the version of your GKE cluster. For GKE versions 1.17 and later, use the annotation networking.gke.io/load-balancer-type: "Internal". For earlier versions, use the annotation cloud.google.com/load-balancer-type: "Internal".
- The type: LoadBalancer.
- A spec: selector field to specify the Pods the Service should target, for example, app: hello.
- The port, the port over which the Service is exposed, and targetPort, the port on which the containers are listening.
Deploying the Service
To create the internal TCP load balancer, create the file my-service.yaml
from
the manifest, and then run the following command in your shell or terminal window:
kubectl apply -f my-service.yaml
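Provisioning the load balancer can take a minute or two. If you want to watch for the IP address to be assigned, you can add the --watch flag:

kubectl get service ilb-service --watch

Press Ctrl+C to stop watching once an IP address appears in the EXTERNAL-IP column.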
Inspecting the Service
After deployment, inspect the Service to verify that it has been configured successfully.
Get detailed information about the Service:
kubectl get service ilb-service --output yaml
In the output, you can see the internal load balancer's IP address under status.loadBalancer.ingress. Notice that this is different from the value of clusterIP. In this example, the load balancer's IP address is 10.128.15.193:
apiVersion: v1
kind: Service
metadata:
  ...
  labels:
    app: hello
  name: ilb-service
  ...
spec:
  clusterIP: 10.0.9.121
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30835
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.128.15.193
Any Pod that has the label app: hello
is a member of this Service. These
are the Pods that can be the final recipients of requests sent to your
internal load balancer.
Clients call the Service by using the loadBalancer
IP address and the TCP
port specified in the port
field of the Service manifest. The request is
forwarded to one of the member Pods on the TCP port specified in the
targetPort
field. So for the preceding example, a client calls the Service
at 10.128.15.193 on TCP port 80. The request is forwarded to one of the
member Pods on TCP port 8080. Note that the member Pod must have a container
listening on port 8080.
The nodePort
value of 30835 is extraneous; it is not relevant to your internal
load balancer.
Viewing the load balancer's forwarding rule
An internal load balancer is implemented as a forwarding rule. The forwarding rule has a backend service, which has an instance group.
The internal load balancer address, 10.128.15.193
in the preceding example, is
the same as the forwarding rule address. To see the forwarding rule that
implements your internal load balancer, start by listing all of the forwarding
rules in your project:
gcloud compute forwarding-rules list --filter="loadBalancingScheme=INTERNAL"
In the output, look for the forwarding rule that has the same address as your
internal load balancer, 10.128.15.193
in this example.
NAME ... IP_ADDRESS ... TARGET
...
aae3e263abe0911e9b32a42010a80008 10.128.15.193 us-central1/backendServices/aae3e263abe0911e9b32a42010a80008
The output shows the associated backend service, aae3e263abe0911e9b32a42010a80008 in this example.
Describe the backend service:
gcloud compute backend-services describe aae3e263abe0911e9b32a42010a80008 --region us-central1
The output shows the associated instance group, k8s-ig--2328fa39f4dc1b75
in
this example:
backends:
- balancingMode: CONNECTION
  group: .../us-central1-a/instanceGroups/k8s-ig--2328fa39f4dc1b75
...
kind: compute#backendService
loadBalancingScheme: INTERNAL
name: aae3e263abe0911e9b32a42010a80008
...
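If you want to see which nodes are currently backing the load balancer, you can list the members of that instance group. The instance group name and zone below are taken from the example output above; substitute the values from your own project:

gcloud compute instance-groups list-instances k8s-ig--2328fa39f4dc1b75 \
    --zone us-central1-a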
How the Service abstraction works
When a packet is handled by your forwarding rule, the packet gets forwarded to one of your cluster nodes. When the packet arrives at the cluster node, the addresses and port are as follows:
Destination IP address | Forwarding rule, 10.128.15.193 in this example |
Destination TCP port | Service port field, 80 in this example |
Note that the forwarding rule (that is, your internal load balancer) does
not change the destination IP address or destination port. Instead,
iptables
rules on the cluster node route the packet to an appropriate Pod. The iptables
rules change the destination IP address to a Pod IP address and the destination
port to the targetPort
value of the Service, 8080 in this example.
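If you want to see these rules yourself, you can connect to a cluster node with SSH and dump the NAT table. This is only a rough sketch: the chain names are generated by kube-proxy and vary between clusters, and the grep relies on kube-proxy tagging its rules with the Service name in a comment:

sudo iptables-save -t nat | grep ilb-service

The matching entries include the DNAT rules that rewrite the destination to a Pod IP address and port 8080.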
Verifying the internal TCP load balancer
SSH into a VM instance, and run the following command:
curl load-balancer-ip
Where load-balancer-ip is your LoadBalancer Ingress
IP address.
The response shows the output of hello-app
:
Hello, world!
Version: 2.0.0
Hostname: hello-app-77b45987f7-pw54n
Running the command from outside the same VPC network or outside the same region results in a timeout error. If you configure global access, clients in any region in the same VPC network can access the load balancer.
Cleaning up
You can delete the Deployment and Service using kubectl delete
or
Cloud Console.
kubectl
Delete the Deployment
To delete the Deployment, run the following command:
kubectl delete deployment hello-app
Delete the Service
To delete the Service, run the following command:
kubectl delete service ilb-service
Console
Delete the Deployment
To delete the Deployment, perform the following steps:
Visit the Google Kubernetes Engine Workloads menu in Cloud Console.
Select the Deployment you want to delete, then click Delete.
When prompted to confirm, select the Delete Horizontal Pod Autoscaler associated with selected Deployment checkbox, then click Delete.
Delete the Service
To delete the Service, perform the following steps:
Visit the Google Kubernetes Engine Services menu in Cloud Console.
Select the Service you want to delete, then click Delete.
When prompted to confirm, click Delete.
Using internal TCP/UDP load balancer subsetting (Preview)
Internal load balancer subsetting for GKE improves the scalability of internal TCP/UDP load balancers by partitioning backends into smaller, overlapping groups. With subsetting, you can configure internal TCP/UDP load balancers on clusters with more than 250 nodes.
You can enable subsetting when you create a cluster or by editing an existing cluster.
Architecture
Subsetting changes how internal TCP/UDP load balancers are deployed. Without subsetting, the GKE controller places all nodes of a cluster into one or more zonal unmanaged instance groups, which are shared by all internal load balancers in the GKE cluster. For example, all internal TCP/UDP load balancers in a 40-node GKE cluster share the same 40 nodes as backends.
With internal TCP/UDP load balancer subsetting, the GKE controller places nodes into GCE_VM_IP zonal network endpoint groups (NEGs).
Unlike instance groups, nodes can be
members of more than one zonal NEG, and each of the zonal NEGs can be referenced
by an internal TCP/UDP load balancer. The GKE controller creates a NEG
for each service using a subset of the GKE nodes as members. For
example, a 40-node GKE cluster might have one internal TCP/UDP load balancer
with 25 nodes in a backend zonal NEG and another internal TCP/UDP load balancer with 25
nodes in a different backend zonal NEG.
When you enable subsetting for your cluster, the GKE controller automatically determines how to subset nodes based on externalTrafficPolicy.
When externalTrafficPolicy=Cluster, the maximum number of backends in the subset is 25 nodes. The GKE controller selects the nodes to be members of a subset at random.

When externalTrafficPolicy=Local, the maximum number of backends in the subset is 250 nodes. The GKE controller selects nodes to be members of a subset at random from the nodes that host the Pods for the service. If the backend Pods for the service are scheduled across more than 250 nodes, then a maximum of 250 nodes and the Pods running on them receive traffic from the internal load balancer.
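For example, if you want the subset drawn only from nodes that run the Service's Pods, you could set externalTrafficPolicy: Local in the Service manifest. The sketch below is based on the earlier ilb-service example; keep in mind that externalTrafficPolicy also changes how traffic is distributed and whether client IP addresses are preserved, not just how subsetting behaves:

apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP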
Requirements and limitations
Subsetting for GKE has the following requirements and limitations:
- You can enable subsetting in new and existing clusters in GKE versions 1.18 and later.
- The cluster must have the HttpLoadBalancing add-on enabled. This add-on is enabled by default. A cluster that has this add-on disabled cannot use subsetting; a quick way to check the add-on's status is shown after this list.
- Requires Cloud SDK version 322.0.0 or later.
- Subsetting cannot be used with Autopilot clusters.
- Quotas for Network Endpoint Groups apply. Google Cloud creates one NEG per internal TCP/UDP load balancer per zone.
- Subsetting cannot be disabled once it is enabled in a cluster.
- Subsetting cannot be used with the annotation to share backend services, alpha.cloud.google.com/load-balancer-backend-share.
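If you are not sure whether the HttpLoadBalancing add-on is enabled on an existing cluster, one way to check is to describe the cluster and inspect its add-on configuration. CLUSTER_NAME and ZONE are placeholders for your own values, and the output shown is only what you would typically expect:

gcloud container clusters describe CLUSTER_NAME \
    --zone ZONE \
    --format="yaml(addonsConfig.httpLoadBalancing)"

If the output reports disabled: true, re-enable the add-on before enabling subsetting.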
Enabling internal load balancer subsetting in a new cluster
To create a cluster with internal load balancer subsetting enabled, specify the --enable-l4-ilb-subsetting option:

gcloud beta container clusters create CLUSTER_NAME \
    --cluster-version VERSION \
    --enable-l4-ilb-subsetting \
    {--zone ZONE_NAME | --region REGION_NAME}
Replace the following:
- CLUSTER_NAME: the name of the new cluster.
- VERSION: the GKE version, which must be 1.18 or later. You can also use the --release-channel option to select a release channel. The release channel must have a default version of 1.18 or later.
- ZONE_NAME or REGION_NAME: the location for the cluster. These arguments are mutually exclusive. For more information, see Types of clusters.
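For example, the following command creates a hypothetical cluster named subsetting-cluster in the us-central1-c zone. The cluster name, version, and zone are illustrative only; use a 1.18 or later version that is available in your chosen location:

gcloud beta container clusters create subsetting-cluster \
    --cluster-version 1.18 \
    --enable-l4-ilb-subsetting \
    --zone us-central1-c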
Enabling internal load balancer subsetting in an existing cluster
To enable internal load balancer subsetting in an existing cluster, specify the --enable-l4-ilb-subsetting option:
gcloud beta container clusters update CLUSTER_NAME \
--enable-l4-ilb-subsetting
Replace the following:
- CLUSTER_NAME: the name of the cluster.
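For example, for a hypothetical cluster named subsetting-cluster, with the location taken from your gcloud defaults unless you also pass --zone or --region:

gcloud beta container clusters update subsetting-cluster \
    --enable-l4-ilb-subsetting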
Verifying internal load balancer subsetting
To verify that internal load balancer subsetting is working correctly for your cluster, perform the following steps:
Deploy a workload.
The following manifest describes a Deployment that runs a sample web application container image. Save the manifest as ilb-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ilb-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ilb-deployment
  template:
    metadata:
      labels:
        app: ilb-deployment
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:1.0
Apply the manifest to your cluster:
kubectl apply -f ilb-deployment.yaml
Create a Service.
The following manifest describes a Service that creates an internal load balancer on TCP port 8080. Save the manifest as ilb-svc.yaml:

apiVersion: v1
kind: Service
metadata:
  name: ilb-svc
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: ilb-deployment
  ports:
  - name: tcp-port
    protocol: TCP
    port: 8080
    targetPort: 8080
Apply the manifest to your cluster:
kubectl apply -f ilb-svc.yaml
Inspect the Service:
kubectl get svc ilb-svc -o=jsonpath="{.metadata.annotations.cloud\.google\.com/neg-status}"
The output is similar to the following:
{"network_endpoint_groups":{"0":"k8s2-knlc4c77-default-ilb-svc-ua5ugas0"},"zones":["us-central1-c"]}
The response indicates that GKE has created a network endpoint group named k8s2-knlc4c77-default-ilb-svc-ua5ugas0. This annotation is present in Services of type LoadBalancer that use GKE subsetting and is not present in Services that do not use subsetting.
Troubleshooting
To determine the list of nodes in a subset for a service, use the following command:
gcloud compute network-endpoint-groups list-network-endpoints NEG_NAME \
--zone ZONE
Replace the following:
- NEG_NAME: the name of the network endpoint group created by the GKE controller.
- ZONE: the zone of the network endpoint group to operate on.
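For example, using the NEG name and zone reported in the neg-status annotation earlier on this page:

gcloud compute network-endpoint-groups list-network-endpoints \
    k8s2-knlc4c77-default-ilb-svc-ua5ugas0 \
    --zone us-central1-c

The output lists the node endpoints that are members of the subset for that Service.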
Service parameters
For more information about the load balancer parameters you can configure, see Configuring TCP/UDP load balancing. Internal LoadBalancer Services also support the following additional parameters:
Feature | Summary | Service Field | GKE Version Support |
---|---|---|---|
Load Balancer Subnet | Specifies from which subnet the load balancer should automatically provision an IP | metadata:annotations: networking.gke.io/internal-load-balancer-subnet | Beta in GKE 1.17+ and 1.16.8-gke.10+; GA in GKE 1.17.9-gke.600+ |
Global Access | Allows the TCP/UDP load balancer VIP to be accessible by clients across GCP regions | metadata:annotations: networking.gke.io/internal-load-balancer-allow-global-access | Beta in GKE 1.16+; GA in GKE 1.17.9-gke.600+ |
Load balancer subnet
By default, GKE will deploy an internal TCP/UDP load balancer
using the node subnet range. The subnet can be user-specified on a per-Service
basis using the networking.gke.io/internal-load-balancer-subnet
annotation.
This is useful for separately firewalling the internal load balancer IPs from
node IPs or for sharing the same Service subnet across multiple
GKE clusters. This parameter is only relevant for internal TCP/UDP LoadBalancer Services.
The subnet must exist before it is referenced by the Service resource, because GKE does not manage the lifecycle of the subnet itself. The subnet must also be in the same VPC network and region as the GKE cluster. In the following example, the subnet is created out of band from GKE:
gcloud compute networks subnets create gke-vip-subnet \
--network=default \
--range=10.23.0.0/24 \
--region=us-central1
The following Service definition uses the internal-load-balancer-subnet annotation to reference the subnet by name. By default, an available IP address from the subnet is chosen automatically. You can also specify the loadBalancerIP, but it must be part of the referenced subnet.
There are multiple ways to share this internal load balancer subnet to achieve different use cases:
- Multiple subnets for groups of Services in the same cluster
- A single subnet for all Services in a cluster
- A single subnet shared across multiple clusters and multiple Services
apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
    networking.gke.io/internal-load-balancer-subnet: "gke-vip-subnet"
  labels:
    app: hello
spec:
  type: LoadBalancer
  loadBalancerIP: 10.23.0.15
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
Global access
Global access is an optional parameter for internal LoadBalancer Services that allows clients from any region in your VPC network to access the internal TCP/UDP load balancer. Without global access, clients must be in the same region as the load balancer. Backend instances must still be located in the same region as the load balancer.
Global access is enabled per Service using the following annotation: networking.gke.io/internal-load-balancer-allow-global-access: "true".
Global access is not supported with legacy networks. Normal inter-region traffic costs apply when using global access across regions. Refer to Network pricing for information about network pricing for egress between regions. Global access is available in Beta on GKE clusters 1.16+ and GA on 1.17.9-gke.600+.
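For example, the following manifest, adapted from the earlier ilb-service example, combines the internal load balancer annotation with the global access annotation:

apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP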
Shared IP (Beta)
The Internal TCP/UDP Load Balancer allows the sharing of a Virtual IP address
amongst multiple forwarding rules.
This is useful for expanding the number of simultaneous ports on the same IP or
for accepting UDP and TCP traffic on the same IP. It allows up to a maximum of
50 exposed ports per IP address. Shared IPs are supported natively on
GKE clusters with internal LoadBalancer Services.
When deploying, the Service's loadBalancerIP
field is used to indicate
which IP should be shared across Services.
Limitations
A shared IP for multiple load balancers has the following limitations and capabilities:
- Each Service (or forwarding rule) can have a maximum of five ports.
- A maximum of ten Services (forwarding rules) can share an IP address. This results in a maximum of 50 ports per shared IP.
- Protocol/port tuples cannot overlap between Services that share the same IP.
- A combination of TCP-only and UDP-only Services is supported on the same shared IP, however you cannot expose both TCP and UDP ports in the same Service.
Enabling Shared IP
To enable internal LoadBalancer Services to share a common IP address, follow these steps:
- Create a static internal IP with --purpose SHARED_LOADBALANCER_VIP. An IP address must be created with this purpose to enable its ability to be shared. If you create the static internal IP address in a Shared VPC, you must create the IP address in the same service project as the instance that will use the IP address, even though the value of the IP address will come from the range of available IPs in a selected shared subnet of the Shared VPC network. Refer to reserving a static internal IP on the Provisioning Shared VPC page for more information.
- Deploy up to ten internal LoadBalancer Services using this static IP in the loadBalancerIP field. The internal TCP/UDP load balancers are reconciled by the GKE service controller and deploy using the same frontend IP.
The following example demonstrates how this is done to support multiple TCP and UDP ports against the same internal load balancer IP.
Create a static IP in the same region as your GKE cluster. The subnet must be the same subnet that the load balancer uses, which by default is the same subnet that is used by the GKE cluster node IPs.
gcloud
If you are reserving a static IP in the project containing the network, run the command:
gcloud compute addresses create IP_ADDR_NAME \
    --project PROJECT_ID \
    --subnet SUBNET \
    --region REGION \
    --purpose SHARED_LOADBALANCER_VIP
If you are reserving a static IP in the service project of a Shared VPC, run the command:
gcloud compute addresses create IP_ADDR_NAME \
    --project SERVICE_PROJECT_ID \
    --subnet projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET \
    --region REGION \
    --purpose SHARED_LOADBALANCER_VIP
Replace the following:
- IP_ADDR_NAME with a name for the IP address object
- SERVICE_PROJECT_ID with the ID of the service project
- PROJECT_ID with the ID of your project (single project)
- HOST_PROJECT_ID with the ID of the Shared VPC host project
- REGION with the region containing the shared subnet
- SUBNET with the name of the shared subnet
Save the following TCP Service configuration to a file named tcp-service.yaml and then deploy it to your cluster. It uses the shared IP 10.128.2.98.

apiVersion: v1
kind: Service
metadata:
  name: tcp-service
  namespace: default
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.128.2.98
  selector:
    app: myapp
  ports:
  - name: 8001-to-8001
    protocol: TCP
    port: 8001
    targetPort: 8001
  - name: 8002-to-8002
    protocol: TCP
    port: 8002
    targetPort: 8002
  - name: 8003-to-8003
    protocol: TCP
    port: 8003
    targetPort: 8003
  - name: 8004-to-8004
    protocol: TCP
    port: 8004
    targetPort: 8004
  - name: 8005-to-8005
    protocol: TCP
    port: 8005
    targetPort: 8005
Apply this Service definition against your cluster:
kubectl apply -f tcp-service.yaml
Save the following UDP Service configuration to a file named udp-service.yaml and then deploy it. It also uses the shared IP 10.128.2.98.

apiVersion: v1
kind: Service
metadata:
  name: udp-service
  namespace: default
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.128.2.98
  selector:
    app: my-udp-app
  ports:
  - name: 9001-to-9001
    protocol: UDP
    port: 9001
    targetPort: 9001
  - name: 9002-to-9002
    protocol: UDP
    port: 9002
    targetPort: 9002
Apply this file against your cluster:
kubectl apply -f udp-service.yaml
Validate that the VIP is shared amongst the load balancer forwarding rules by listing them and filtering for the static IP. The output shows a UDP and a TCP forwarding rule, both listening across seven different ports on the shared IP address 10.128.2.98.

gcloud compute forwarding-rules list | grep 10.128.2.98

ab4d8205d655f4353a5cff5b224a0dde  us-west1  10.128.2.98  UDP  us-west1/backendServices/ab4d8205d655f4353a5cff5b224a0dde
acd6eeaa00a35419c9530caeb6540435  us-west1  10.128.2.98  TCP  us-west1/backendServices/acd6eeaa00a35419c9530caeb6540435
All ports (Preview)
Internal forwarding rules support either up to five ports per forwarding rule or the optional parameter --ports=ALL, which forwards all ports on the forwarding rule.
Requirements
The all ports feature on GKE has the following requirements and limitations:
- Only supported when --enable-l4-ilb-subsetting is enabled.
- Only supported for internal load balancer Services.
- Supports any number of ports across a maximum of 100 contiguous port ranges.
The GKE controller automatically enables all ports on the forwarding rule when a service has more than five ports. For example, the following service manifest has six ports configured across two contiguous ranges:
apiVersion: v1
kind: Service
metadata:
  name: all-ports
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 8081
    targetPort: 8081
    name: 8081-to-8081
    protocol: TCP
  - port: 8082
    targetPort: 8082
    name: 8082-to-8082
    protocol: TCP
  - port: 8083
    targetPort: 8083
    name: 8083-to-8083
    protocol: TCP
  - port: 9001
    targetPort: 9001
    name: 9001-to-9001
    protocol: TCP
  - port: 9002
    targetPort: 9002
    name: 9002-to-9002
    protocol: TCP
  - port: 9003
    targetPort: 9003
    name: 9003-to-9003
    protocol: TCP
The GKE controller enables all ports on the forwarding rule because the service has more than five ports. However, the GKE controller only creates firewall rules for the ports specified in the service. All other ports are blocked by VPC firewall rules.
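If you want to confirm that all ports are enabled, you can describe the forwarding rule created for the Service. Find its name with gcloud compute forwarding-rules list, as shown earlier on this page; FORWARDING_RULE_NAME and REGION below are placeholders:

gcloud compute forwarding-rules describe FORWARDING_RULE_NAME \
    --region REGION

For an all-ports internal forwarding rule, the output is expected to show allPorts: true rather than a list of individual ports.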
Restrictions for internal TCP/UDP load balancers
- For clusters running Kubernetes version 1.7.3 and earlier, you could only use internal TCP/UDP load balancers with auto-mode subnets. With Kubernetes version 1.7.4 and later, you can use internal load balancers with custom-mode subnets in addition to auto-mode subnets.
- For clusters running Kubernetes 1.7.X or later, while the clusterIP remains unchanged, internal TCP/UDP load balancers cannot use reserved IP addresses. The spec.loadBalancerIP field can still be defined using an unused IP address to assign a specific internal IP. Changes made to ports, protocols, or session affinity may cause these IP addresses to change.
Restrictions for internal UDP load balancers
- Internal UDP load balancers do not support using sessionAffinity: ClientIP.
Limits
A Kubernetes service with type: LoadBalancer
and the
networking.gke.io/load-balancer-type: Internal
annotation creates an internal
load balancer that targets the Kubernetes service. The number of such services
is limited by the number of internal forwarding rules that you can create in a
VPC network. For details, see Per network limits.
The maximum number of nodes in a GKE cluster with an internal load balancer is 250. To remove this limitation, enable internal load balancer subsetting. Otherwise, if you have autoscaling enabled for your cluster, you must ensure that autoscaling does not scale your cluster beyond 250 nodes.
For more information about VPC limits, see Quotas and limits.
What's next
- Read the GKE network overview.
- Learn more about Compute Engine load balancers.
- Learn about Alias IPs.
- Learn about IP masquerade agent.
- Learn about configuring authorized networks.