Syncing to a read-only repo

This quickstart shows you how to get started with Config Sync on a new cluster, using the foo-corp example repo to bootstrap a cluster with a set of configs. In this quickstart, you do not need write access to the repo. Imagine that a compliance team in your organization is responsible for creating the configs, and that each cluster is required to sync to the repo.

After you complete this quickstart, you can follow an advanced quickstart about writing, testing, and syncing configs.

Before you begin

  1. Create a cluster.

  2. Set up the kubectl command to authenticate to the cluster and create a ClusterRoleBinding to make yourself a cluster administrator, using the following commands. Use your cluster name where you see [MY-CLUSTER], and use your Google Cloud account's email address where you see [USER_ACCOUNT]. Depending on how you configured the gcloud command on your local system, you may need to add the --project and --zone flags.

    gcloud container clusters get-credentials [MY-CLUSTER]
    kubectl create clusterrolebinding cluster-admin-binding \
      --clusterrole cluster-admin --user [USER_ACCOUNT]
  3. Install the nomos command onto your local system.

  4. Install the Config Sync Operator onto the cluster you just created.
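Before moving on, you can confirm that the command-line tools the steps above rely on are actually available. A quick sketch (it only checks that the commands are on your PATH, not that they are configured):

```shell
# Check that the CLIs used in this quickstart are installed.
missing=0
for tool in gcloud kubectl nomos; do
  if ! command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: not installed"
    missing=$((missing + 1))
  fi
done
echo "tools missing: $missing"
```

If any tool is reported missing, revisit the corresponding setup step before continuing.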

Configure your cluster

Create a file config-management.yaml and copy the below YAML file into it. Because the repo is world-readable, secretType is set to none. For an explanation of the fields, see Configuration for the Git repository.

apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  # clusterName is required and must be unique among all managed clusters
  clusterName: my-cluster
  git:
    syncRepo: https://github.com/GoogleCloudPlatform/csp-config-management/
    syncBranch: 1.0.0
    secretType: none
    policyDir: "foo-corp"

Apply the configuration to your cluster:

kubectl apply -f config-management.yaml
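The file-creation step can also be scripted. A minimal sketch that writes the file with a here-document and sanity-checks it, assuming the foo-corp example repo URL used by this quickstart:

```shell
# Write the ConfigManagement resource to config-management.yaml.
# The syncRepo value assumes the world-readable foo-corp example repo.
cat > config-management.yaml <<'EOF'
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  clusterName: my-cluster
  git:
    syncRepo: https://github.com/GoogleCloudPlatform/csp-config-management/
    syncBranch: 1.0.0
    secretType: none
    policyDir: "foo-corp"
EOF

# Sanity check: the file should point at the foo-corp policy directory.
grep -q 'policyDir: "foo-corp"' config-management.yaml && echo "config looks OK"
```

Scripting the file this way makes the configuration easy to reproduce across clusters; remember that clusterName must be unique per cluster.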

If the command succeeds, Kubernetes updates the Config Sync Operator on your cluster to begin syncing your cluster's configuration from the repository. To verify that the Config Sync Operator is running, list all Pods running in the config-management-system namespace:

kubectl get pods -n config-management-system


NAME                                   READY     STATUS    RESTARTS   AGE
git-importer-6bc498bc8b-bdv8f          3/3     Running   0          5m42s
monitor-8665bd4df4-6ghxv               1/1     Running   0          26m
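When scripting this check, you can parse the pod listing rather than reading it by eye. A sketch that counts pods whose STATUS column is not Running, using the sample output above in place of a live cluster:

```shell
# Sample output from `kubectl get pods -n config-management-system`.
listing='NAME                                   READY     STATUS    RESTARTS   AGE
git-importer-6bc498bc8b-bdv8f          3/3     Running   0          5m42s
monitor-8665bd4df4-6ghxv               1/1     Running   0          26m'

# Count rows (skipping the header) whose third column is not Running.
not_ready=$(printf '%s\n' "$listing" | awk 'NR > 1 && $3 != "Running" { n++ } END { print n + 0 }')
echo "pods not Running: $not_ready"
```

Against a live cluster you would pipe the kubectl output into the same awk filter instead of using the saved listing.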

Examine your cluster and repo

The foo-corp repo includes configs in the cluster/ and namespaces/ directories. These configs are applied as soon as the Config Sync Operator is configured to read from the repo.

All objects managed by Config Sync have the app.kubernetes.io/managed-by label set to configmanagement.gke.io.

List namespaces managed by Config Sync:

kubectl get ns -l "app.kubernetes.io/managed-by=configmanagement.gke.io"


NAME               STATUS   AGE
audit              Active   4m
shipping-dev       Active   4m
shipping-prod      Active   4m
shipping-staging   Active   4m

Examine the configs that caused these namespaces to be created, such as namespaces/audit/namespace.yaml and namespaces/online/shipping-app-backend/shipping-dev/namespace.yaml.
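A namespace config such as namespaces/audit/namespace.yaml is an ordinary Kubernetes Namespace object; a minimal sketch of its likely shape (anything beyond the kind and name is an assumption):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: audit
```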

List ClusterRoles managed by Config Sync:

kubectl get clusterroles -l "app.kubernetes.io/managed-by=configmanagement.gke.io"


NAME               AGE
namespace-reader   6m52s
pod-creator        6m52s

Examine the configs that declare these ClusterRoles:

  • cluster/namespace-reader-clusterrole.yaml
  • cluster/pod-creator-clusterrole.yaml

You can examine other objects, such as Roles and PodSecurityPolicies, in the same way.
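To repeat this inspection for several kinds at once, you can build the same label query in a loop. A sketch that prints the command to run for each kind (the label value is assumed from the managed-by convention used elsewhere in this quickstart):

```shell
# Label selector assumed for objects managed by Config Sync.
selector='app.kubernetes.io/managed-by=configmanagement.gke.io'

# Print the kubectl query for each kind you want to inspect.
for kind in roles rolebindings podsecuritypolicies; do
  echo "kubectl get $kind --all-namespaces -l $selector"
done
```

Copy and run whichever of the printed commands you need; cluster-scoped kinds such as PodSecurityPolicies simply ignore the --all-namespaces flag.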

Attempt to manually modify a managed object

If you manually modify a Kubernetes object that is managed by Config Sync, that object's configuration is automatically updated to match the object's config in your repo. To test this, delete the shipping-dev namespace.

kubectl delete namespace shipping-dev

If you check immediately, the namespace may be missing, but within a few seconds, it exists again. For example:

kubectl get ns shipping-dev


Error from server (NotFound): namespaces "shipping-dev" not found

Seconds later:

kubectl get ns shipping-dev


NAME           STATUS   AGE
shipping-dev   Active   3s
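In a script you would not want to race the Operator between these two states; a small retry helper can wait for the namespace to reappear. A sketch (the helper name and attempt count are made up for illustration):

```shell
# Retry a command once per second until it succeeds or attempts run out.
retry() {
  attempts=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep 1
  done
}

# Usage against a live cluster (not run here):
#   retry 30 kubectl get ns shipping-dev
```

The helper returns success as soon as the wrapped command does, so a script can continue the moment Config Sync has recreated the object.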

Cleaning up

After you finish the exercises in this topic, you can clean up by deleting the cluster you used for testing.

What's next