This page introduces you to the architecture of Config Sync, including the hosted components that run in Google Cloud and the open source components that run on your Google Kubernetes Engine (GKE) Enterprise edition cluster. Learning about the architecture can give you a deeper understanding of Config Sync and might help you debug and fix issues that you encounter.
The architecture of Config Sync has changed over time:
- In Config Sync version 1.5.1, a new multi-repo mode was added. This mode uses a more scalable multi-tenant architecture with a reconciler for each RootSync and RepoSync object, allowing independent permission management, independent scaling, and multiple independent fault domains.
- In Config Sync version 1.18.0, the auto-upgrade feature was added. With auto-upgrades enabled, the ConfigManagement Operator and the ConfigManagement object are removed from the cluster, and the Fleet (GKE Hub) service manages the open source components on the cluster directly. With auto-upgrades disabled, the ConfigManagement Operator is still used.
- In Config Sync version 1.19.0, the optional mono-repo mode was removed. This mode used a simpler architecture with fewer components, but it was hard to scale and didn't support multi-tenancy. For those reasons, it was replaced by the newer multi-repo mode. For more information, see migrate to multi-repo mode.
- In Config Sync version 1.20.0, the ConfigManagement Operator and the ConfigManagement object were removed completely, even with auto-upgrades disabled. The Fleet (GKE Hub) service now manages the open source components on the cluster directly.
The following sections show the architecture of Config Sync, including its components and dependencies, both in Google Cloud and on your Google Kubernetes Engine (GKE) Enterprise edition cluster, for various versions of Config Sync:
1.20.0 and later
In Config Sync version 1.20.0 and later, the Fleet service manages the Config Sync components on your cluster directly, without the legacy ConfigManagement Operator or ConfigManagement object. You can configure the Fleet service to upgrade Config Sync automatically, or you can manually upgrade Config Sync as needed.
This architecture is also used for Config Sync version 1.18.0 and later, when automatic upgrades are enabled.
Installing Config Sync involves multiple steps, and each step deploys additional components on your cluster:
- Enabling Config Sync on your cluster adds the following components:
  - The ConfigSync custom resource definition (CRD).
  - A ConfigSync object named config-sync.
  - The Reconciler Manager in a Deployment named reconciler-manager.
  - The ResourceGroup Controller in a Deployment named resource-group-controller-manager.
  - The OpenTelemetry Collector in a Deployment named otel-collector.
  - Optional: The Config Sync Admission Webhook in a Deployment named admission-webhook. The Config Sync Admission Webhook is only installed if you enable drift prevention.

  These resources and objects are automatically created when you install Config Sync and shouldn't be modified directly.
- Creating RootSync and RepoSync objects adds the following components (see the example manifest after this list):
  - For each RootSync object, a reconciler Deployment named root-reconciler-ROOTSYNC_NAME. For the RootSync object named root-sync, the reconciler Deployment is named root-reconciler.
  - For each RepoSync object, a reconciler Deployment named ns-reconciler-REPOSYNC_NAMESPACE-REPOSYNC_NAME-REPOSYNC_NAME_LENGTH. For the RepoSync object named repo-sync, the reconciler Deployment is named ns-reconciler-REPOSYNC_NAMESPACE.
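For example, the following minimal RootSync manifest is a sketch (the name, repository URL, and auth setting are placeholders) that shows how the object name maps to the reconciler Deployment name:

```yaml
# Sketch of a RootSync object; the repository details are placeholders.
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: platform-config            # results in a Deployment named root-reconciler-platform-config
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  sourceType: git
  git:
    repo: https://example.com/your-org/platform-config.git   # placeholder repository
    branch: main
    dir: /
    auth: none
```

Applying a manifest like this would produce a reconciler Deployment named root-reconciler-platform-config, following the naming rule in the preceding list.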
1.19.x and earlier
In Config Sync version 1.19.x and earlier, when using manual upgrades, the Fleet service manages the ConfigManagement Operator, which in turn manages the Config Sync components on your cluster.
In Config Sync version 1.18.x and later, when using automatic upgrades, the 1.20.0 and later architecture is used.
Installing Config Sync involves multiple steps, and each step deploys additional components on your cluster:
- Enabling Config Sync on your cluster adds the following components:
  - The ConfigManagement Operator in a Deployment named config-management-operator.
  - The ConfigManagement custom resource definition (CRD).
  - A ConfigManagement object named config-management.
  - The Reconciler Manager in a Deployment named reconciler-manager.
  - The ResourceGroup Controller in a Deployment named resource-group-controller-manager.
  - The OpenTelemetry Collector in a Deployment named otel-collector.
  - Optional: The Config Sync Admission Webhook in a Deployment named admission-webhook. The Config Sync Admission Webhook is only installed if you enable drift prevention.

  Most of these resources and objects are automatically created when you install Config Sync and shouldn't be modified directly. However, the ConfigManagement object has some fields that are allowed to be modified directly. For more information, see ConfigManagement fields.
- Creating RootSync and RepoSync objects adds the following components:
  - For each RootSync object, a reconciler Deployment named root-reconciler-ROOTSYNC_NAME. For the RootSync object named root-sync, the reconciler Deployment is named root-reconciler.
  - For each RepoSync object, a reconciler Deployment named ns-reconciler-REPOSYNC_NAMESPACE-REPOSYNC_NAME-REPOSYNC_NAME_LENGTH. For the RepoSync object named repo-sync, the reconciler Deployment is named ns-reconciler-REPOSYNC_NAMESPACE.
Config Sync Deployments, Pods, and containers
The following table provides more information about the Config Sync Deployments, Pods, and containers:
| Deployment name | Deployment namespace | Deployment description | Replica count | Pod name regular expression | Container count | Container names |
|---|---|---|---|---|---|---|
| config-management-operator | config-management-system | The ConfigManagement Operator runs on clusters with Config Sync version 1.19.x and earlier installed, when using manual upgrades. It watches the ConfigManagement object and manages the other Config Sync components, like the Reconciler Manager and OpenTelemetry Collector. In Config Sync version 1.20.0 and later, the ConfigManagement Operator was replaced by an extension of the Fleet (GKE Hub) service. | 1 | config-management-operator-.* | 1 | manager |
| reconciler-manager | config-management-system | The Reconciler Manager runs on every cluster with Config Sync enabled in the ConfigManagement object. It watches RootSync and RepoSync objects and manages a reconciler Deployment for each one. | 1 | reconciler-manager-.* | 2 | reconciler-manager, otel-agent |
| root-reconciler | config-management-system | A root reconciler Deployment is created for every RootSync object. | 1 | root-reconciler-.* | 3 - 5¹ | reconciler, otel-agent, git-sync, helm-sync, oci-sync, gcenode-askpass-sidecar, hydration-controller |
| ns-reconciler | config-management-system | A namespace reconciler Deployment is created for every RepoSync object. | 1 | ns-reconciler-.* | 3 - 5¹ | reconciler, otel-agent, git-sync, helm-sync, oci-sync, gcenode-askpass-sidecar, hydration-controller |
| otel-collector | config-management-monitoring | The OpenTelemetry Collector runs on every cluster with Config Sync enabled in the ConfigManagement object. It collects metrics from the Config Sync components running under the config-management-system and resource-group-system namespaces, and exports these metrics to Prometheus and Cloud Monitoring. | 1 | otel-collector-.* | 1 | otel-collector |
| resource-group-controller-manager | resource-group-system | The ResourceGroup Controller runs on every cluster with Config Sync enabled in the ConfigManagement object. It watches ResourceGroup objects and updates them with the current reconciliation status of each object in their inventory. A ResourceGroup object is created for every RootSync and RepoSync object to inventory the list of objects applied by the reconciler from the source of truth. | 1 | resource-group-controller-manager-.* | 2 | manager, otel-agent |
| admission-webhook | config-management-system | The Config Sync Admission Webhook runs on each cluster with drift prevention enabled in the ConfigManagement object. It monitors Kubernetes API requests and prevents modification or deletion of resources managed by Config Sync. The Config Sync Admission Webhook is disabled by default. | 2 | admission-webhook-.* | 1 | admission-webhook |
¹ For details about when these containers are created, see Reconciler containers.
Key components
The following sections explore important Config Sync components in greater detail.
Fleet service and the ConfigSync object
In Config Sync version 1.20.0 and later, as well as version 1.18.0 and later with automatic upgrades enabled, the GKE Fleet service manages the Config Sync components on your cluster directly.
The Fleet service also manages the ConfigSync object on your cluster. The Fleet service updates the ConfigSync object's spec based on your inputs to the Google Cloud API, and updates its status to reflect the status of the Config Sync components.
To make changes to your Config Sync installation configuration, you must use the Google Cloud API. However, you can use either the Google Cloud API or the Kubernetes API to monitor the configuration and health of your Config Sync installation.
ConfigManagement Operator and the ConfigManagement object
In Config Sync version 1.19.x and earlier, when using manual upgrades, the GKE Fleet service manages the ConfigManagement Operator, which in turn manages the Config Sync components on your cluster.
To make changes to your Config Sync installation configuration, you primarily use the Google Cloud API. However, you can also use the Kubernetes API to make some changes to the ConfigManagement object. For more information, see ConfigManagement fields.
The ConfigManagement Operator watches the ConfigManagement object for spec changes, manages the Config Sync components on your cluster to reflect the spec, and updates the ConfigManagement object's status to reflect the status of the Config Sync components.
Because the ConfigManagement Operator installs some components that require cluster-admin permissions, the ConfigManagement Operator itself requires cluster-admin permissions as well.
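As an illustration of this relationship, the following sketch shows the general shape of the ConfigManagement object; the spec.cluster field is one example of a directly editable field, and the exact set of editable fields depends on your version (see ConfigManagement fields):

```yaml
# Illustrative sketch only; edit the ConfigManagement object directly only for
# the fields that the documentation for your version allows.
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  cluster: my-cluster   # example editable field: cluster name used by cluster selectors
```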
Reconciler Manager and reconcilers
The Reconciler Manager is responsible for creating and managing the individual reconcilers that ensure your cluster configuration stays in sync.
The Reconciler Manager creates a root reconciler for every RootSync object and a namespace reconciler for every RepoSync object. Config Sync uses this design instead of sharing a single monolithic reconciler because it improves reliability by reducing single points of failure and allows individual reconcilers to be scaled independently.
Root and namespace reconcilers automatically fetch configs from your source of truth and apply them to enforce the state that you want within your cluster.
The Reconciler Manager controls the lifecycle of each root reconciler and namespace reconciler.
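Because each reconciler is its own Deployment, you can tune each one independently. For example, the following sketch, which assumes the spec.override.resources fields available on RootSync and RepoSync objects in recent versions, raises the resource requests for a single root reconciler without affecting any other reconciler:

```yaml
# Sketch only; repository details are placeholders and the override fields
# assume a recent Config Sync version.
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceType: git
  git:
    repo: https://example.com/your-org/platform-config.git   # placeholder repository
    branch: main
    auth: none
  override:
    resources:
    - containerName: reconciler    # tune only this reconciler's main container
      cpuRequest: 500m
      memoryRequest: 512Mi
```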
Reconciler containers
The specific containers deployed in the reconciler Pods depend on the configuration choices that you make. The following table explains what each of these reconciler containers does and the condition that causes Config Sync to create it:
| Container name | Description | Condition |
|---|---|---|
| reconciler | Handles syncing and drift remediation. | Always enabled. |
| otel-agent | Receives metrics from the other reconciler containers and sends them to the OpenTelemetry Collector. | Always enabled. |
| git-sync | Pulls configs from your Git repository to a local directory that the reconciler container can read. | Enabled when spec.sourceType is git. |
| helm-sync | Pulls and renders Helm charts from your chart repository to a local directory that the reconciler container can read. | Enabled when spec.sourceType is helm. |
| oci-sync | Pulls OCI images containing your configs from your container registry to a local directory that the reconciler container can read. | Enabled when spec.sourceType is oci. |
| gcenode-askpass-sidecar | Caches Git credentials from the GKE metadata service for use by the git-sync container. | Enabled when spec.sourceType is git and spec.git.auth is gcenode or gcpserviceaccount. |
| hydration-controller | Handles building Kustomize configurations to a local directory that the reconciler container can read. | Enabled when the source includes a kustomization.yaml file. |
As shown in the preceding table, you can typically expect a container count of three to five within each reconciler Pod. The reconciler and otel-agent containers are always present. The type that you specify for your source of truth dictates which sync container is added. Additionally, the hydration-controller and gcenode-askpass-sidecar containers are created if you make the configuration changes described in the table.
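For example, a RootSync object that syncs from an OCI image, sketched below with placeholder registry details, causes Config Sync to add the oci-sync container instead of git-sync:

```yaml
# Sketch only; the image reference is a placeholder.
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: oci-example                # hypothetical name
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  sourceType: oci                  # selects the oci-sync container
  oci:
    image: us-docker.pkg.dev/your-project/your-repo/configs:latest   # placeholder image
    dir: /
    auth: none
```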
ResourceGroup Controller and ResourceGroup objects
The root and namespace reconcilers create a ResourceGroup inventory object for each RootSync and RepoSync object that you set up. Each ResourceGroup object contains a list of the objects synced to the cluster from the source of truth by the reconciler for that RootSync or RepoSync object. The ResourceGroup Controller then watches all the objects in the ResourceGroup object and updates the status of the ResourceGroup object with the current reconciliation status of the synced objects. This lets you check the status of the ResourceGroup object for an overview of the sync status, instead of having to query the status of every individual object yourself.
ResourceGroup objects have the same name and namespace as their corresponding RootSync or RepoSync object. For example, for the RootSync object named root-sync in the namespace config-management-system, the corresponding ResourceGroup object is also named root-sync in the config-management-system namespace.
Don't create or modify ResourceGroup objects, as this can interfere with the operation of Config Sync.
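To illustrate what the inventory looks like, the following is an approximate, read-only excerpt of a ResourceGroup object; the API group and field names are an assumption based on the kpt ResourceGroup CRD and might differ between versions:

```yaml
# Approximate illustration only; don't create or edit ResourceGroup objects.
apiVersion: kpt.dev/v1alpha1       # assumed API group
kind: ResourceGroup
metadata:
  name: root-sync                  # matches the RootSync name
  namespace: config-management-system
spec:
  resources:                       # inventory of objects applied by the reconciler
  - group: apps
    kind: Deployment
    namespace: default
    name: hello-app                # placeholder object
status:
  resourceStatuses:                # reconciliation status for each inventoried object
  - group: apps
    kind: Deployment
    namespace: default
    name: hello-app
    status: Current
```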
Admission Webhook
The Config Sync Admission Webhook is created when you enable drift prevention. Drift prevention proactively intercepts modification requests, ensuring they align with the source of truth before allowing changes.
If you don't enable drift prevention, Config Sync still uses a self-healing mechanism to revert configuration drift. With self-healing, Config Sync continuously monitors managed objects and automatically reverses any changes that deviate from the intended state.
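When the Fleet service manages your installation, drift prevention is typically turned on through the configSync section of your fleet configuration. The following sketch assumes the apply-spec format used by the gcloud config-management commands; check the reference for your version before relying on the exact field names:

```yaml
# Sketch of a fleet-level apply spec that enables drift prevention (field names assumed).
applySpecVersion: 1
spec:
  configSync:
    enabled: true
    preventDrift: true   # deploys the Config Sync Admission Webhook
```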
RootSync and RepoSync objects
RootSync objects configure Config Sync to create a root reconciler that watches the specified source of truth and applies objects from that source to the cluster. By default, the root reconciler for each RootSync object has cluster-admin permission. With this default permission, root reconcilers can sync both cluster-scoped and namespace-scoped resources. If needed, you can change these permissions by configuring the spec.override.roleRefs fields. RootSync objects are designed for use by cluster admins.
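For example, the following sketch restricts a root reconciler to a named ClusterRole instead of the default cluster-admin binding; the object name, repository, and role reference are placeholders:

```yaml
# Sketch only; assumes a Config Sync version that supports spec.override.roleRefs.
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: team-root-sync             # hypothetical name
  namespace: config-management-system
spec:
  sourceType: git
  git:
    repo: https://example.com/your-org/team-config.git   # placeholder repository
    branch: main
    auth: none
  override:
    roleRefs:
    - kind: ClusterRole
      name: edit                   # example built-in ClusterRole
```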
RepoSync objects configure Config Sync to create a namespace reconciler that watches the specified source and applies objects from that source to a specific namespace in the cluster. Namespace reconcilers can sync any namespace-scoped resources in that namespace with custom, user-specified permissions. RepoSync objects are designed for use by namespace tenants.
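A minimal RepoSync manifest for a tenant namespace might look like the following sketch; the namespace, repository, and auth settings are placeholders, and the reconciler's permissions are specified separately, as noted above:

```yaml
# Sketch only; repository details are placeholders.
apiVersion: configsync.gke.io/v1beta1
kind: RepoSync
metadata:
  name: repo-sync
  namespace: tenant-a              # the namespace this reconciler syncs into (placeholder)
spec:
  sourceType: git
  git:
    repo: https://example.com/your-org/tenant-a-config.git   # placeholder repository
    branch: main
    dir: /
    auth: none
```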
How the Fleet service manages RootSync objects
When you install Config Sync with the Google Cloud console, Google Cloud CLI, Config Connector, or Terraform, Config Sync is managed by the Fleet service, based on your inputs to the Google Cloud API.
When your Config Sync installation is managed by the Fleet service, you can optionally also have it manage your initial RootSync object, named root-sync. This lets you bootstrap GitOps on your cluster without needing to manually apply anything to the cluster directly. If you decide not to have the Fleet service manage your initial RootSync object, you can still apply whatever RootSync and RepoSync objects you want directly to the cluster.
The RootSync object named root-sync is created based on your inputs to the Google Cloud API, specifically the spec.configSync section of the config-management apply API. Because this API only exposes a subset of the RootSync fields, those fields are considered managed in the root-sync object, while the other fields are considered unmanaged. Managed fields can only be edited using the Google Cloud API. The unmanaged fields can be edited using kubectl or any other Kubernetes client.
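As an illustration, a fleet apply-spec like the following sketch populates the managed fields of the root-sync object, such as the repository, branch, and directory; the field names are assumed from the config-management apply flow, so check the reference for your version:

```yaml
# Sketch of spec.configSync inputs that the Fleet service maps onto the managed
# fields of the root-sync RootSync object. Values are placeholders.
applySpecVersion: 1
spec:
  configSync:
    enabled: true
    sourceFormat: unstructured
    syncRepo: https://example.com/your-org/platform-config.git   # placeholder repository
    syncBranch: main
    policyDir: /
    secretType: none
```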
Additional RootSync and RepoSync objects
To create additional RootSync or RepoSync objects, you can use the kubectl command-line tool or another Kubernetes client. You can also use the initial root-sync object to manage additional RootSync or RepoSync objects with GitOps, by adding their YAML manifests to the source of truth that the root-sync object is configured to sync from. This method can't be used to manage the configuration of the initial root-sync object, because some of its fields are managed by the Fleet service. To manage the root-sync object with GitOps, use Config Connector or Terraform. To learn more about creating additional RootSync and RepoSync objects, see Configure syncing from more than one source of truth.
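For example, to manage an additional RootSync object with GitOps, you could commit a manifest like the following sketch to the source of truth that root-sync syncs from; the name and repository are placeholders:

```yaml
# Stored in the root-sync source of truth so that the root reconciler applies it.
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: security-policies          # hypothetical additional RootSync
  namespace: config-management-system
spec:
  sourceType: git
  git:
    repo: https://example.com/your-org/security-policies.git   # placeholder repository
    branch: main
    dir: /
    auth: none
```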
What's next
- You might want to monitor Config Sync components or check their logs. For an introduction, see Use monitoring and logs.
- Learn about the Resource requests for Config Sync components.