Troubleshoot Policy Controller

This page shows you how to resolve issues with Policy Controller.

General tips

Stop Policy Controller

If Policy Controller is causing issues in your cluster, you can stop Policy Controller while you investigate the issue.
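The exact procedure depends on how Policy Controller was installed. As a generic Kubernetes sketch, assuming the default gatekeeper-system namespace and the standard Gatekeeper Deployment names (which your installation may override), you can scale the Policy Controller Deployments to zero replicas:

```shell
# Assumption: Policy Controller runs as the standard Gatekeeper Deployments
# in the gatekeeper-system namespace; adjust the names for your installation.
# Note: if an operator manages these Deployments, it may scale them back up.
kubectl scale deployment gatekeeper-controller-manager --replicas=0 -n gatekeeper-system
kubectl scale deployment gatekeeper-audit --replicas=0 -n gatekeeper-system
```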

Examine metrics

Examining the Policy Controller metrics can help you to diagnose issues with Policy Controller.
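As a sketch, assuming metrics are served on port 8888 (the Gatekeeper default; verify the port for your installation), you can port-forward to the Policy Controller Deployment and inspect the metrics directly:

```shell
# Assumption: Prometheus metrics are exposed on port 8888, the Gatekeeper default.
kubectl port-forward -n gatekeeper-system deployment/gatekeeper-controller-manager 8888:8888 &

# Gatekeeper metric names are prefixed with "gatekeeper_".
curl -s http://localhost:8888/metrics | grep '^gatekeeper_'
```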

Verify installation

You can verify if Policy Controller and the constraint template library were installed successfully.
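For a quick check, assuming the default gatekeeper-system namespace, you can confirm that the Policy Controller Pods are running and that the constraint templates are present:

```shell
# All Policy Controller Pods should be in the Running state.
kubectl get pods -n gatekeeper-system

# Lists the installed constraint templates, including the template library.
kubectl get constrainttemplates
```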

Constraint not enforced

Check if your constraint is being enforced

If you are concerned that your constraint is not being enforced, you can check the status of your constraint and its constraint template. To check the status, run the following command:

kubectl describe CONSTRAINT_TEMPLATE_NAME CONSTRAINT_NAME

Replace the following:

  • CONSTRAINT_TEMPLATE_NAME: the name of the constraint template that you want to check. For example, K8sNoExternalServices.
  • CONSTRAINT_NAME: the name of the constraint that you want to check.

    If needed, run kubectl get constraints to see which constraints are installed on your system, and kubectl get constrainttemplates to see which constraint templates are installed.
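For example, using the constraint shown in the following output, the command looks like this:

```shell
# Describes the no-internet-services constraint, which is an instance of
# the K8sNoExternalServices constraint template kind.
kubectl describe k8snoexternalservices no-internet-services
```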

In the output of the kubectl describe command, take note of the values in the metadata.generation and status.byPod.observedGeneration fields. In the following example, these are the Generation and Observed Generation values:

Name:         no-internet-services
Namespace:
Labels:       app.kubernetes.io/managed-by=configmanagement.gke.io
              configsync.gke.io/declared-version=v1beta1
Annotations:  config.k8s.io/owning-inventory: config-management-system_root-sync
              configmanagement.gke.io/cluster-name: acm-cluster
              configmanagement.gke.io/managed: enabled
              configmanagement.gke.io/source-path: quickstart/config-sync/policies/no-ext-services.yaml
              configmanagement.gke.io/token: bcc0610f505f8c07c8115d9989c69721624ee614
              configsync.gke.io/declared-fields: {"f:metadata":{"f:annotations":{},"f:labels":{}},"f:spec":{"f:parameters":{"f:internalCIDRs":{}}}}
              configsync.gke.io/git-context:
                {"repo":"https://github.com/GoogleCloudPlatform/anthos-config-management-samples","branch":"main","rev":"HEAD"}
              configsync.gke.io/manager: :root
              configsync.gke.io/resource-id: constraints.gatekeeper.sh_k8snoexternalservices_no-internet-services
API Version:  constraints.gatekeeper.sh/v1beta1
Kind:         K8sNoExternalServices
Metadata:
  Creation Timestamp:  2021-12-03T19:00:06Z
  Generation:          1
  Managed Fields:
    API Version:  constraints.gatekeeper.sh/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:config.k8s.io/owning-inventory:
          f:configmanagement.gke.io/cluster-name:
          f:configmanagement.gke.io/managed:
          f:configmanagement.gke.io/source-path:
          f:configmanagement.gke.io/token:
          f:configsync.gke.io/declared-fields:
          f:configsync.gke.io/git-context:
          f:configsync.gke.io/manager:
          f:configsync.gke.io/resource-id:
        f:labels:
          f:app.kubernetes.io/managed-by:
          f:configsync.gke.io/declared-version:
      f:spec:
        f:parameters:
          f:internalCIDRs:
    Manager:      configsync.gke.io
    Operation:    Apply
    Time:         2022-02-15T17:13:20Z
    API Version:  constraints.gatekeeper.sh/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
    Manager:         gatekeeper
    Operation:       Update
    Time:            2021-12-03T19:00:08Z
  Resource Version:  41460953
  UID:               ac80849d-a644-4c5c-8787-f73e90b2c988
Spec:
  Parameters:
    Internal CID Rs:
Status:
  Audit Timestamp:  2022-02-15T17:21:51Z
  By Pod:
    Constraint UID:       ac80849d-a644-4c5c-8787-f73e90b2c988
    Enforced:             true
    Id:                   gatekeeper-audit-5d4d474f95-746x4
    Observed Generation:  1
    Operations:
      audit
      status
    Constraint UID:       ac80849d-a644-4c5c-8787-f73e90b2c988
    Enforced:             true
    Id:                   gatekeeper-controller-manager-76d777ddb8-g24dh
    Observed Generation:  1
    Operations:
      webhook
  Total Violations:  0
Events:              <none>

If every Policy Controller Pod reports an observedGeneration value equal to the metadata.generation value (as in the preceding example), then your constraint is likely enforced. However, if these values match and you are still experiencing problems with your constraint being enforced, see the following section for tips. If only some of the values match, or if some Pods aren't listed, then the status of your constraint is unknown: the constraint might be inconsistently enforced across Policy Controller's Pods, or not enforced at all. If none of the values match, then your constraint is not enforced.
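Instead of reading the full describe output, you can extract just the fields to compare with a JSONPath query; for example, for the constraint shown in the preceding output:

```shell
# Prints metadata.generation, then each Pod's ID and observedGeneration,
# so you can compare the values at a glance.
kubectl get k8snoexternalservices no-internet-services \
  -o jsonpath='generation: {.metadata.generation}{"\n"}{range .status.byPod[*]}{.id}: {.observedGeneration}{"\n"}{end}'
```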

Constraint not enforced, but audit results reported

If the observedGeneration values described in the preceding section match, and audit results reported on the constraint show the expected violations (for pre-existing objects, not for inbound requests), but the constraint is still not enforced, then the problem is likely related to the webhook. The webhook might be experiencing one of the following issues:

  • The Policy Controller webhook Pod might not be operational. Kubernetes debugging techniques might help you to resolve issues with the webhook Pod.
  • There could be a firewall between the API server and the webhook service. Refer to your firewall provider's documentation for details on how to fix the firewall.
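To start debugging the webhook Pod, assuming the default gatekeeper-system namespace and Deployment name, you can check its state and recent logs:

```shell
# Check whether the webhook Pods are running and ready.
kubectl get pods -n gatekeeper-system

# Inspect recent logs from the webhook Deployment for errors.
kubectl logs -n gatekeeper-system deployment/gatekeeper-controller-manager --tail=50
```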

Referential constraint not enforced

If your constraint is a referential constraint, make sure the necessary resources are being cached. For details on how to cache resources, see Configure Policy Controller for referential constraints.
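As an illustration, the upstream Gatekeeper API caches resources through a Config object. A minimal sketch that caches Service objects follows; your Policy Controller version may manage this resource differently, so follow the page referenced above for the supported procedure.

```shell
# Sketch only: assumes the upstream config.gatekeeper.sh/v1alpha1 API.
# This caches Service objects so referential constraints can inspect them.
kubectl apply -f - <<'EOF'
apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: gatekeeper-system
spec:
  sync:
    syncOnly:
    - group: ""
      version: "v1"
      kind: "Service"
EOF
```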

Check the syntax of the constraint template

If you wrote your own constraint template, and it's not enforced, there might be an error in the constraint template syntax.

You can review the template by using the following command:

kubectl describe constrainttemplate CONSTRAINT_TEMPLATE_NAME

Replace CONSTRAINT_TEMPLATE_NAME with the name of the template that you want to investigate. Errors should be reported in the status field.
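To pull out only the reported errors, assuming the template status follows the per-Pod byPod layout shown earlier for constraints, you can use a JSONPath query:

```shell
# Prints any errors reported per Pod for the constraint template.
kubectl get constrainttemplate CONSTRAINT_TEMPLATE_NAME \
  -o jsonpath='{range .status.byPod[*]}{.id}: {.errors}{"\n"}{end}'
```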