Node autoscaler not working due to Pod CIDR range

Problem

Changes to nodes are not possible. Some or all of the following operations fail:

  • Node autoscaling does not scale up
  • Nodes cannot be added manually
  • New node pools cannot be created
  • Node pools cannot be upgraded

Environment

Google Kubernetes Engine (GKE)

Solution

The cluster needs to be recreated with a larger Pod CIDR range (a smaller number after the slash), so that more nodes fit within the Pod address space.
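
As a rough sizing aid when choosing the new range, here is a minimal Python sketch that inverts the formula explained under Cause below; it assumes the GKE behaviour of reserving a /24 (256 pod IPs) per node, and the function name is purely illustrative:

    import math

    def required_pod_cidr_prefix(target_nodes: int) -> int:
        # Smallest pod range (largest number after the slash) that still
        # fits target_nodes, given 2^(24 - prefix) nodes per the formula.
        return 24 - math.ceil(math.log2(target_nodes))

    print(required_pod_cidr_prefix(100))   # 17 -> a /17 pod range allows 128 nodes
    print(required_pod_cidr_prefix(500))   # 15 -> a /15 pod range allows 512 nodes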

Cause

When creating a Google Kubernetes Engine cluster, you can define the Pod address range, i.e. the number of IPs that will be available for Pods. This range indirectly limits the maximum number of nodes that can be added to the cluster. Use this formula to find the maximum number of nodes across all node pools:

x = number after the slash (the prefix length)
max number of nodes = 2^(32-x) / 2^8 = 2^(24-x)

Even though the number of pods per node can be changed (110 by default), cluster creation takes into account the hard-coded value of 256 pod IPs per node (a /24 per node, i.e. 2^8 = 256 addresses).
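
To make the relationship concrete, here is a minimal Python sketch of the same calculation (the hard-coded 256 pod IPs per node is the only assumption; the function name is made up for this article):

    def max_nodes(pod_cidr_prefix: int) -> int:
        # Total IPs in the pod range divided by the 256 (2^8) IPs
        # that cluster creation reserves per node.
        pod_ips = 2 ** (32 - pod_cidr_prefix)
        return pod_ips // 256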

Example
Pod CIDR is /18
max number of nodes = 2^(32-18) / 256 = 2^14 / 256 = 64

Other examples
Pod CIDR    Max nodes
/16         256
/18         64
/19         32
/21         8
/24         1
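
Running the same arithmetic over these prefixes reproduces the table above (self-contained Python check):

    for prefix in (16, 18, 19, 21, 24):
        print(f"/{prefix}: {2 ** (24 - prefix)} nodes")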