Kubernetes resource quotas
are a tool for administrators to ensure a fair share of resources among
different users. A resource quota, defined by a
ResourceQuota object, provides
constraints that limit aggregate resource consumption in a single namespace.

Hierarchy Controller extends the concept of per-namespace resource quotas to
support hierarchical namespaces. A
HierarchicalResourceQuota object limits the aggregate resource
consumption across all the namespaces in a subtree, allowing administrators to
limit resource consumption across multiple related namespaces.
When hierarchical resource quotas are enabled, Hierarchy Controller installs two validating admission webhooks: one to enforce the resource consumption limits and the other to validate the hierarchical resource quotas themselves.
Enabling hierarchical resource quotas
Hierarchical resource quotas are provided by Hierarchy Controller. To enable hierarchical resource quotas, follow these steps:
Install Hierarchy Controller, using Config Sync 1.6.2 or later.
In the configuration file for the Config Sync Operator, in the
spec.hierarchyController object, set the value of
enableHierarchicalResourceQuota to true:

```yaml
# config-management.yaml
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  hierarchyController:
    enabled: true
    # Set to true to enable hierarchical resource quotas:
    enableHierarchicalResourceQuota: true
  # ...other fields...
```
Apply the configuration:
```shell
kubectl apply -f config-management.yaml
```
After about a minute, Hierarchy Controller and hierarchical resource quotas become usable on your cluster.
To verify that hierarchical resource quotas are enabled, follow these steps:
Create a HierarchicalResourceQuota object in any namespace, such as the following:
```shell
cat > example-hrq.yaml <<EOF
apiVersion: hierarchycontroller.configmanagement.gke.io/v1alpha1
kind: HierarchicalResourceQuota
metadata:
  name: example-hrq
spec:
  hard:
    configmaps: "1"
EOF

kubectl apply -f example-hrq.yaml -n default
```
Verify that a new ResourceQuota object called
gke-hc-hrq is created in the namespace, with the same
spec.hard as the HierarchicalResourceQuota:

```shell
kubectl describe resourcequota gke-hc-hrq -n default
```

The output is similar to the following:
```
Name:       gke-hc-hrq
Namespace:  default
Resource    Used  Hard
--------    ----  ----
configmaps  0     1
```
Delete the HierarchicalResourceQuota:

```shell
kubectl delete hrq -n default example-hrq
```
Ensure that the automatically created object is removed:
```shell
kubectl get resourcequota gke-hc-hrq -n default
```

```
Error from server (NotFound): resourcequotas "gke-hc-hrq" not found
```
Using hierarchical resource quotas
Setting a HierarchicalResourceQuota is the same as setting a regular
ResourceQuota, but with a different
apiVersion and kind; therefore, you
can set limits on resources in the
spec.hard field as you would in a ResourceQuota.
Consider a team called
team-a that owns a service called
service-a and has a subteam called
team-b, all of which are represented by hierarchical namespaces:

```shell
kubectl hns tree team-a
```

The output is similar to the following:

```
team-a
├── service-a
└── team-b
```
If you want to limit the number of
configmaps in
team-a, but without
limiting the number in any descendants, you can create a regular
ResourceQuota:

```shell
cat > team-a-rq.yaml <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-rq
  namespace: team-a
spec:
  hard:
    configmaps: "1"
EOF

kubectl apply -f team-a-rq.yaml
```
By contrast, to limit the total number of
configmaps in
team-a and its
descendants combined, replace the
apiVersion and kind in the previous
example:

```shell
cat > team-a-hrq.yaml <<EOF
# Modify the following two lines:
apiVersion: hierarchycontroller.configmanagement.gke.io/v1alpha1
kind: HierarchicalResourceQuota
# Everything below this line remains the same
metadata:
  name: team-a-hrq
  namespace: team-a
spec:
  hard:
    configmaps: "1"
EOF

kubectl apply -f team-a-hrq.yaml
```
The first attempt to create a
configmap in any of these three namespaces
succeeds. For example, we might choose to create the
configmap in one of the child namespaces:

```shell
kubectl create configmap config-1 --from-literal key=value -n team-b
```
But any further attempt to create a configmap in any of the three namespaces fails, including in the sibling or parent namespaces:

```shell
kubectl create configmap config-2 --from-literal key=value -n service-a
kubectl create configmap config-2 --from-literal key=value -n team-a
```
Output for both:
```
Error from server (Forbidden): admission webhook "resourcesquotasstatus.hierarchycontroller.configmanagement.gke.io" denied the request: exceeded hierarchical quota in namespace "team-a": "team-a-hrq", requested: configmaps=1, used: configmaps=1, limited: configmaps=1
```
To view the current limits and usage of the
HierarchicalResourceQuota, use the
kubectl describe command as you would to view a regular resource quota:

```shell
kubectl describe hrq team-a-hrq -n team-a
```

The output is similar to the following:

```
# ...other fields...
Spec:
  Hard:
    Configmaps:  1
Status:
  Hard:
    Configmaps:  1
  Used:
    Configmaps:  1
```
Updating namespace hierarchy
A namespace is always subject to any
HierarchicalResourceQuota in its
ancestors. Modifying the namespace hierarchy triggers a recalculation of the
usage of any affected quotas.
Removing a namespace from a subtree with hierarchical quotas
When a namespace is moved out of a subtree with hierarchical quotas in its ancestors, it is no longer subject to these quotas, and its resources are removed from the quotas' usages.
For example, if
team-b is removed from the preceding subtree, its
configmap consumption no longer counts against the quota on
team-a. The hierarchical quota usage drops back to
0, which means that
team-a and service-a can again consume one
configmap in total.
Adding a namespace to a subtree with hierarchical quotas
When a namespace is added to a subtree with hierarchical quotas, it is subject to the hierarchical quotas, and its resource usages are added to the quotas' usages.
For example, if another namespace is added to the previous subtree, no further
configmap consumption is allowed in the newly added namespace. Similarly, any
configmap usage in the newly added namespace is added to the
hierarchical quota's usage.
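Assuming the kubectl hns plugin used earlier on this page is available, the two hierarchy changes described above can be sketched as follows; the namespace names mirror the team-a example:

```shell
# Move team-b out of team-a's subtree by making it a root namespace;
# its configmap usage is then removed from team-a's quota usage.
kubectl hns set team-b --root

# Move team-b back under team-a; its configmap usage is added back to
# the hierarchical quota's usage.
kubectl hns set team-b --parent team-a
```

These commands change only the hierarchy; as noted below, a move succeeds even if it causes a quota's usage to exceed its limit.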
Hierarchical quotas do not prevent you from moving a new namespace into a
subtree, even if the usage of the new namespace exceeds the limit in
hierarchical quotas. However, if a limit is exceeded, further resource usage
is banned until either the usage drops below the limit, or until the limit
is raised. This is similar to the behavior of Kubernetes
ResourceQuota when a
limit is imposed that's lower than the existing usage in the namespace.
Hierarchical resource quotas behave similarly to Kubernetes resource quotas in corner cases. For example:
- If multiple hierarchical resource quotas apply to the same namespace, the most restrictive resource limits are respected.
- If you create a limit that is lower than the amount of already consumed resources, existing resources are not deleted, but any future resource consumption is forbidden until the usage drops below the limit or the limit is raised.
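As a rough sketch of the first behavior: a request is admitted only if it fits under every applicable quota, so the most restrictive limit wins. The following is illustrative shell, not the actual webhook logic; the admit function and the "used:hard" encoding are made up for this example:

```shell
# Decide whether a request for `requested` more units of a resource is
# allowed, given one or more applicable quotas encoded as "used:hard".
admit() {
  local requested=$1; shift
  local quota used hard
  for quota in "$@"; do
    used=${quota%%:*}
    hard=${quota##*:}
    # Deny if this quota's limit would be exceeded.
    if (( used + requested > hard )); then
      echo denied
      return 0
    fi
  done
  echo admitted
}

# configmaps: one ancestor quota allows 5, a closer one allows only 1.
admit 1 0:5 0:1   # admitted: fits under both limits
admit 1 1:5 1:1   # denied: the limit of 1 is already fully used
```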
InternalError when consuming resources
When you consume resources, for example, when creating a
configmap, the
request might stop responding for 10 seconds, and you get the following error:

```
Error from server (InternalError): Internal error occurred: resource quota evaluates timeout
```
You are not expected to see this error message unless the
gke-hc-controller-manager Pod is in a bad state.

To fix the issue, administrators with permission can delete the Pod with the
gke-hc-controller-manager- prefix in the
hnc-system namespace directly.
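For example, the Pod can be found and deleted as follows (a sketch: the actual Pod name carries a generated suffix, shown here as POD_NAME):

```shell
# Find the controller-manager Pod in the hnc-system namespace.
kubectl get pods -n hnc-system

# Delete it by name; its Deployment recreates it automatically.
# Replace POD_NAME with the Pod whose name starts with
# gke-hc-controller-manager-.
kubectl delete pod -n hnc-system POD_NAME
```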
The Pod restarts automatically. Before the Pod is ready, be aware of the
following:
- No resource consumption is subject to hierarchical quotas.
- If hierarchical observability is enabled, Pods are created without applying labels.
If this doesn't fix the issue, report it to us for analysis, preferably with logs that you can get by using the following:
```shell
kubectl logs -n hnc-system deployment/gke-hc-controller-manager -c manager
```
- Observe hierarchical workloads.
- Learn more about common tasks that you might want to use HNC to accomplish in the HNC User Guide: How-to.