Kubernetes resource quotas are a tool for administrators to ensure a fair share of resources among different users. A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption in a single namespace.
Hierarchy Controller extends the concept of per-namespace resource quotas to support hierarchical namespaces. A HierarchicalResourceQuota object limits the aggregate resource consumption across all the namespaces in a subtree, allowing administrators to limit resource consumption across multiple related namespaces.
When hierarchical resource quotas are enabled, Hierarchy Controller installs two validating admission webhooks: one to enforce the resource consumption limits and the other to validate the hierarchical resource quotas themselves.
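After enabling the feature, you can confirm that the webhooks are registered. This sketch simply lists all validating webhook configurations on the cluster; the exact names of the Hierarchy Controller entries may vary by version:

```shell
# List all validating admission webhooks on the cluster; the two
# Hierarchy Controller webhooks should appear among them once the
# feature is enabled (names vary by version).
kubectl get validatingwebhookconfigurations
```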
Enable hierarchical resource quotas
Hierarchical resource quotas are provided by Hierarchy Controller. To enable hierarchical resource quotas, follow these steps:
Install Hierarchy Controller, using Config Sync 1.6.2 or later.
In the configuration file for the ConfigManagement Operator, in the
spec.hierarchyController object, set the value of
enableHierarchicalResourceQuota to true:

# config-management.yaml
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  hierarchyController:
    enabled: true
    # Set to true to enable hierarchical resource quotas:
    enableHierarchicalResourceQuota: true
  # ...other fields...
Apply the configuration:
kubectl apply -f config-management.yaml
After about a minute, Hierarchy Controller and hierarchical resource quotas become usable on your cluster.
To verify that hierarchical resource quotas are enabled, follow these steps:
Create a HierarchicalResourceQuota object in any namespace, such as the following:

cat > example-hrq.yaml <<EOF
apiVersion: hierarchycontroller.configmanagement.gke.io/v1alpha1
kind: HierarchicalResourceQuota
metadata:
  name: example-hrq
spec:
  hard:
    configmaps: "1"
EOF
kubectl apply -f example-hrq.yaml -n default
Verify that a new ResourceQuota object called gke-hc-hrq is created in the namespace, with the same spec.hard of 1 configmap, for example:

kubectl describe resourcequota gke-hc-hrq -n default
Output:
Name:       gke-hc-hrq
Namespace:  default
Resource    Used  Hard
--------    ----  ----
configmaps  0     1
Clean up:
kubectl delete hrq -n default example-hrq
Ensure that the automatically created object is removed:
kubectl get resourcequota gke-hc-hrq -n default
Output:
Error from server (NotFound): resourcequotas "gke-hc-hrq" not found
Using hierarchical resource quotas
Setting quotas
Setting a HierarchicalResourceQuota is the same as setting a regular ResourceQuota, but with a different apiVersion and kind; therefore, you can set limits on resources in the spec.hard field as you would in a ResourceQuota.
Consider a team called team-a that owns a service called service-a and has a subteam called team-b, all of which are represented by hierarchical namespaces as follows:
kubectl hns tree team-a
Output:
team-a
├── service-a
└── team-b
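For reference, a hierarchy like this could be built with the hns kubectl plugin. This is a sketch, assuming the kubectl-hns plugin is installed and the namespaces don't exist yet:

```shell
# Create the root namespace, then two subnamespaces under it
# (requires the kubectl-hns plugin).
kubectl create namespace team-a
kubectl hns create service-a -n team-a
kubectl hns create team-b -n team-a
```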
If you want to limit the number of configmaps in team-a, but without limiting the number in any descendants, you can create a regular ResourceQuota as follows:
cat > team-a-rq.yaml <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
name: team-a-rq
namespace: team-a
spec:
hard:
configmaps: "1"
EOF
kubectl apply -f team-a-rq.yaml
By contrast, to limit the total number of configmaps in team-a and its descendants combined, replace the apiVersion and kind in the previous example:
cat > team-a-hrq.yaml <<EOF
# Modify the following two lines:
apiVersion: hierarchycontroller.configmanagement.gke.io/v1alpha1
kind: HierarchicalResourceQuota
# Everything below this line remains the same
metadata:
name: team-a-hrq
namespace: team-a
spec:
hard:
configmaps: "1"
EOF
kubectl apply -f team-a-hrq.yaml
The first attempt to create a configmap in any of these three namespaces succeeds. For example, we might choose to create the configmap in one of the child namespaces:
kubectl create configmap config-1 --from-literal key=value -n team-b
Output:
configmap/config-1 created
But any further attempt to create new configmaps in any of the three namespaces fails, including in the sibling or parent namespaces:
kubectl create configmap config-2 --from-literal key=value -n service-a
kubectl create configmap config-2 --from-literal key=value -n team-a
Output for both:
Error from server (Forbidden): admission webhook "resourcesquotasstatus.hierarchycontroller.configmanagement.gke.io" denied the request: exceeded hierarchical quota in namespace "team-a": "team-a-hrq", requested: configmaps=1, used: configmaps=1, limited: configmaps=1
Inspect quotas
To view the current limits and usage of the HierarchicalResourceQuota, use the kubectl describe command, as you would for a regular resource quota:
kubectl describe hrq team-a-hrq -n team-a
Output:
# ...other fields...
Spec:
Hard:
Configmaps: 1
Status:
Hard:
Configmaps: 1
Used:
Configmaps: 1
Update namespace hierarchy
A namespace is always subject to any HierarchicalResourceQuota in its ancestors. Modifying the namespace hierarchy triggers a recalculation of the usage of any affected quotas.
Remove a namespace from a subtree with hierarchical quotas
When a namespace is moved out of a subtree with hierarchical quotas in its ancestors, it is no longer subject to these quotas, and its resources are removed from the quotas' usages.
For example, if team-b is removed from the preceding subtree, there will be no limits on configmap consumption in team-b. The hierarchical quota's usage resets to 0, which means that team-a and service-a can now consume one more configmap in total.
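One way to perform such a removal is to make team-b a root namespace with the hns plugin. This is a sketch, assuming team-b is a full namespace whose parent can be changed:

```shell
# Detach team-b from team-a, making it a root namespace; its resources
# are then no longer counted against team-a's hierarchical quota
# (requires the kubectl-hns plugin).
kubectl hns set team-b --root
```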
Add a namespace to a subtree with hierarchical quotas
When a namespace is added to a subtree with hierarchical quotas, it is subject to the hierarchical quotas, and its resource usages are added to the quotas' usages.
For example, if another namespace is added to the previous subtree, no further configmap consumption is allowed in the newly added namespace. Similarly, any existing configmap usage in the newly added namespace is added to the hierarchical quota's usage.
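Such a move could be done with the hns plugin as follows; team-c here is a hypothetical namespace name used for illustration:

```shell
# Make team-c a child of team-a; any existing configmaps in team-c now
# count against team-a's hierarchical quota (requires the kubectl-hns
# plugin).
kubectl hns set team-c --parent team-a
```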
Hierarchical quotas do not prevent you from moving a new namespace into a subtree, even if the usage of the new namespace exceeds the limit in the hierarchical quotas. However, if a limit is exceeded, further resource consumption is blocked until either the usage drops below the limit or the limit is raised. This is similar to the behavior of a Kubernetes ResourceQuota when a limit is imposed that's lower than the existing usage in the namespace.
General rules
Hierarchical resource quotas behave similarly to Kubernetes resource quotas in corner cases. For example:
- If multiple hierarchical resource quotas apply to the same namespace, the most restrictive resource limits are respected.
- If you create a limit that is lower than the amount of already consumed resources, existing resources are not deleted, but any future resource consumption is forbidden until the usage drops below the limit or the limit is raised.
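As an illustration of the first rule, consider a sketch with two quotas in the earlier subtree; the limits here are hypothetical:

```yaml
# With both objects applied, team-b is limited to 1 configmap by
# team-b-hrq, even though team-a-hrq would allow up to 3 across the
# whole subtree: the most restrictive applicable limit wins.
apiVersion: hierarchycontroller.configmanagement.gke.io/v1alpha1
kind: HierarchicalResourceQuota
metadata:
  name: team-a-hrq
  namespace: team-a
spec:
  hard:
    configmaps: "3"
---
apiVersion: hierarchycontroller.configmanagement.gke.io/v1alpha1
kind: HierarchicalResourceQuota
metadata:
  name: team-b-hrq
  namespace: team-b
spec:
  hard:
    configmaps: "1"
```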
Troubleshooting
InternalError
when consuming resources
When you consume resources, for example when creating a configmap, your request might stop responding for 10 seconds, and you get the following error message:
Error from server (InternalError): Internal error occurred: resource quota evaluates timeout
You are not expected to see this error message unless the
gke-hc-controller-manager
Pod is in a bad state.
To fix the issue, administrators with the required permissions can directly delete the Pod whose name begins with the gke-hc-controller-manager- prefix in the hnc-system namespace. The Pod restarts automatically. Before the Pod is ready, be aware of the following:
- No resource consumption is subject to hierarchical quotas.
- Creating or updating HierarchicalResourceQuota objects fails.
- If hierarchical observability is enabled, Pods are created without labels applied.
If this doesn't fix the issue, report it to us for analysis, preferably with logs that you can retrieve by running the following:
kubectl logs -n hnc-system deployment/gke-hc-controller-manager -c manager
What's next
- Observe hierarchical workloads.
- Learn more about common tasks that you might want to use HNC to accomplish in the HNC User Guide: How-to.