This quickstart shows you how to get started with Anthos Config Management on a new cluster, using the foo-corp example repo to bootstrap a cluster with a set of configs. In this quickstart, you do not need write access to the repo. Imagine that a compliance team in your organization is responsible for creating the configs, and that each cluster is required to sync to the repo.
After you complete this quickstart, you can follow an advanced quickstart about writing, testing, and syncing configs.
Before you begin
- Sign in to your Google Account. If you don't already have one, sign up for a new account.
- In the Cloud Console, on the project selector page, select or create a Cloud project.
- Make sure that billing is enabled for your Cloud project. Learn how to confirm billing is enabled for your project.
- Enable the Anthos API.
- Install and initialize the Cloud SDK.
Anthos Config Management requires an active Anthos entitlement. For more information, see Pricing for Anthos.
Set up the `kubectl` command to authenticate to the cluster, then create a RoleBinding that makes you a cluster administrator, using the following commands. Use your cluster name where you see `[MY-CLUSTER]`, and use your Google Cloud account's email address where you see `[USER_ACCOUNT]`. Depending on how you configured the `gcloud` command on your local system, you may need to add the `--zone` or `--region` flag to the `get-credentials` command.

```shell
gcloud container clusters get-credentials [MY-CLUSTER]
kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin --user [USER_ACCOUNT]
```
Configure your cluster
Create a file named `config-management.yaml` and copy the following YAML into it. See the installation instructions for an explanation of the fields. Because the repo is world-readable, no credentials are needed, so `secretType` is set to `none`.

```yaml
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  # clusterName is required and must be unique among all managed clusters
  clusterName: my-cluster
  git:
    syncRepo: https://github.com/GoogleCloudPlatform/csp-config-management/
    syncBranch: 1.0.0
    secretType: none
    policyDir: "foo-corp"
```
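For comparison, syncing a private repo would require credentials instead of `secretType: none`. A hypothetical sketch using SSH authentication (the repo URL, branch, and directory below are assumptions, not part of this quickstart; the SSH key must be stored in a `git-creds` Secret in the `config-management-system` namespace):

```yaml
# Hypothetical: syncing a private repo over SSH instead of the public foo-corp repo.
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  clusterName: my-cluster
  git:
    syncRepo: git@github.com:example-org/private-config-repo.git  # assumed private repo
    syncBranch: master
    secretType: ssh  # reads the key from the git-creds Secret
    policyDir: "configs"
```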
Apply the configuration to your cluster:

```shell
kubectl apply -f config-management.yaml
```
If the command succeeds, Kubernetes updates the Config Management Operator on your
cluster to begin syncing your cluster's configuration from the repository.
To verify that the Config Management Operator is running, list all Pods running in the `config-management-system` namespace:

```shell
kubectl get pods -n config-management-system
```

Output:

```
NAME                            READY   STATUS    RESTARTS   AGE
git-importer-5f8bdb59bd-7nn5m   2/2     Running   0          2m
monitor-58c48fbc66-ggrmd        1/1     Running   0          2m
syncer-7bbfd7686b-dxb45         1/1     Running   0          2m
```
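If you script this check (for example, in a smoke test), you can count the running Pods from the command's output. A minimal sketch using `awk`, operating on captured output shaped like the above (the sample text below is illustrative, not live cluster output):

```shell
# Sample "kubectl get pods -n config-management-system" output (illustrative pod hashes).
sample_output='NAME                            READY   STATUS    RESTARTS   AGE
git-importer-5f8bdb59bd-7nn5m   2/2     Running   0          2m
monitor-58c48fbc66-ggrmd        1/1     Running   0          2m
syncer-7bbfd7686b-dxb45         1/1     Running   0          2m'

# Count rows whose STATUS column (field 3) is "Running", skipping the header row.
running=$(printf '%s\n' "$sample_output" | awk 'NR > 1 && $3 == "Running" { n++ } END { print n }')
echo "$running"   # → 3
```

In a live check, you would pipe `kubectl get pods -n config-management-system` directly into the same `awk` filter.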
Examine your cluster and repo
The foo-corp repo includes configs organized into directories. These configs are applied as soon as the Config Management Operator is configured to read from the repo.
All objects managed by Anthos Config Management have the `app.kubernetes.io/managed-by` label set to `configmanagement.gke.io`.
List Namespaces managed by Anthos Config Management:

```shell
kubectl get ns -l app.kubernetes.io/managed-by=configmanagement.gke.io
```

Output:

```
NAME               STATUS   AGE
audit              Active   4m
shipping-dev       Active   4m
shipping-prod      Active   4m
shipping-staging   Active   4m
```
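Each of these Namespaces corresponds to a config in the repo. A minimal sketch of what such a config looks like (the file path is illustrative; the exact layout in foo-corp may differ):

```yaml
# namespaces/audit/namespace.yaml (illustrative path)
apiVersion: v1
kind: Namespace
metadata:
  name: audit
```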
Examine the configs in the repo that caused these Namespaces to be created.
List ClusterRoles managed by Anthos Config Management:

```shell
kubectl get clusterroles -l app.kubernetes.io/managed-by=configmanagement.gke.io
```

Output:

```
NAME               AGE
namespace-reader   6m52s
pod-creator        6m52s
```
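These ClusterRoles are declared in the repo as ordinary RBAC configs. An illustrative sketch of what a `namespace-reader` config might contain (the rules shown are an assumption based on the name, not the repo's exact contents):

```yaml
# Illustrative ClusterRole granting read-only access to Namespaces.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-reader
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list", "watch"]
```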
Examine the ClusterRole configs in the repo that declare these objects.
You can examine other objects, such as Roles and PodSecurityPolicies, in the same way.
Attempt to manually modify a managed object
If you manually modify a Kubernetes object that is managed by Anthos Config Management, the object is automatically reverted to match its config in your repo. To test this, delete the `shipping-dev` Namespace:

```shell
kubectl delete namespace shipping-dev
```
If you check immediately, the Namespace may be missing, but within a few seconds it exists again:

```shell
kubectl get ns shipping-dev
```

```
Error from server (NotFound): namespaces "shipping-dev" not found
```

```shell
kubectl get ns shipping-dev
```

```
NAME           STATUS   AGE
shipping-dev   Active   3s
```
After you finish the exercises in this topic, you can clean up by deleting the cluster you used for testing.