Running outside of Google Cloud
If your cluster is not running inside Google Cloud, you must manually configure values for the `project_id` and `location` labels. We recommend the following:
- Set `project_id` based on how this cluster fits in your multi-tenant monitoring model. Your service account must be configured with the correct permissions for your chosen `project_id`.
- Set `location` based on the closest Google Cloud region to your deployment.
You can't rewrite these labels using a relabeling rule.
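As a minimal sketch, assuming self-deployed collection with a standard Prometheus configuration file, these labels can be set as external labels; the project ID and region below are placeholder values:

```yaml
# Prometheus configuration fragment (self-deployed collection).
# "example-project" and "us-central1" are placeholder values.
global:
  external_labels:
    project_id: example-project   # project that owns the ingested metrics
    location: us-central1         # closest Google Cloud region to this deployment
```

Because these labels cannot be rewritten later, set them before the collector starts sending data.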
Having more than 1,000 projects in your organization
A metrics scope officially supports a maximum of 375 projects. You can add up to 1,000 projects to a metrics scope, but configurations above 375 projects are unsupported.
If you have more than 1,000 projects, the recommended workaround is to configure your collectors to use a central `project_id` instead of the ID of the project they are running in. Metrics from all your projects are then stored in Monarch under that central project ID, and you can put the central project into a metrics scope.
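Assuming self-deployed collection, this workaround amounts to setting the same `project_id` external label on every cluster, regardless of which project the cluster actually runs in; the project and region names below are placeholders:

```yaml
# Apply this fragment to the Prometheus configuration on every cluster.
# "central-metrics-project" is a placeholder for your central project ID.
global:
  external_labels:
    project_id: central-metrics-project  # same central project on all clusters
    location: us-east1                   # region closest to this cluster
```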
If you use this approach, then be aware of the following potential drawbacks:
- You lose some multi-tenancy granularity by doing this, as permissions can only be set at a per-project level. You might want to logically group projects into a few categories and use a different central project for each.
- The `project_id` value of Google Cloud system metrics cannot be overridden. This workaround will not let you see free Google Kubernetes Engine metrics in the central project, as those metrics stay within each originating project.
- Using a central project might complicate your use of Rules and ClusterRules, because those rules are scoped to the project in which they are installed, and you are unlikely to have the same set of cluster and namespace names in each project. You might have to use GlobalRules instead.
Manually locating data in a single Google Cloud region
By default, Managed Service for Prometheus stores data in the Google Cloud region from which the data originates, and queries are naturally global, meaning you do not have to geographically co-locate data in order to query data across multiple Google Cloud regions.
In most situations, this default behavior is sufficient. However, there might be situations where you want to store all metric data in a single Google Cloud region, for example, if you are in a highly regulated environment.
To store all metric data in a single region, configure your collectors to use a single `location` value instead of the auto-detected location of the cluster they are running in.
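As a sketch, again assuming self-deployed collection, you would pin the `location` external label to one region on every cluster; `us-central1` below is a placeholder:

```yaml
# Apply the same fixed location on every cluster to keep all data in one region.
global:
  external_labels:
    location: us-central1  # single storage region, overriding auto-detection
```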
Storing data in a single Google Cloud region might complicate your use of Rules and ClusterRules, as those are scoped to the location in which they are installed, and you are unlikely to have the same set of cluster and namespace names in each Google Cloud region. You might have to use GlobalRules instead.
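For reference, a GlobalRules resource evaluates against all data in the project rather than being scoped to one cluster or namespace. The manifest below is a hypothetical sketch; the rule name and expression are illustrative only:

```yaml
# Hypothetical GlobalRules example: a recording rule evaluated globally,
# not scoped to the installing cluster or namespace.
apiVersion: monitoring.googleapis.com/v1
kind: GlobalRules
metadata:
  name: example-global-rules
spec:
  groups:
  - name: example
    interval: 30s
    rules:
    - record: job:up:sum
      expr: sum by (job) (up)
```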