This document lists the quotas and limits that apply to Cloud Monitoring.
A quota restricts how much of a particular shared Google Cloud resource your Google Cloud project can use, including hardware, software, and network components.
Quotas are part of a system that does the following:
- Monitors your use or consumption of Google Cloud products and services.
- Restricts your consumption of those resources for reasons including ensuring fairness and reducing spikes in usage.
- Maintains configurations that automatically enforce prescribed restrictions.
- Provides a means to make or request changes to the quota.
When a quota is exceeded, in most cases, the system immediately blocks access to the relevant Google resource, and the task that you're trying to perform fails. In most cases, quotas apply to each Google Cloud project and are shared across all applications and IP addresses that use that Google Cloud project.
To increase or decrease most quotas, use the Google Cloud console. For more information, see Requesting a higher quota.
There are also limits on Monitoring resources. These limits are unrelated to the quota system. Limits cannot be changed unless otherwise stated.
| Limit | Value |
|---|---|
| Custom metric descriptors per project¹ | 10,000 |
| Labels per metric descriptor | 30 |
| String length for a label key | 100 |
| String length for a label value | 1,024 |
| Time series included in a write request² | 200 |
| Rate at which data can be written to a single time series³ | One point every 5 seconds |
| Histogram buckets per custom distribution metric | 200 |
| Workload, Prometheus, and external⁴ metric descriptors per project | 25,000 |
| Active time series from custom metrics per monitored resource⁵ | 200,000 |
| Active time series from workload metrics per monitored resource⁵ | 200,000 |
| Active time series from Prometheus metrics per monitored resource⁵ | 1,000,000 |
| Active time series from external metrics per monitored resource⁵ | 200,000 |
¹ This limit is imposed by Cloud Monitoring. Other services might impose lower maximum values. Custom metrics are those written to the custom.googleapis.com prefix.
² You can write only one data point for each time series in a request, so this limit also functions as the maximum number of points that can be written per request.
³ The Cloud Monitoring API requires that the end times of points written to a time series be at least 5 seconds apart. You can batch-write points to a time series, provided that the data points are written in order.
⁴ External metrics are those written to the external.googleapis.com prefix.
⁵ A time series is active if you have written data points to it within the previous 24 hours. The limit specified in the row is the total number of active time series for a single monitored resource (for example, a single gce_instance VM or a single k8s_container container) across all user-defined metrics within that row (custom, workload, Prometheus, or external). An exception is the global monitored resource, for which the limit applies to each user-defined metric separately. This is a system-wide safety limit.
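The per-request write limits above can be checked client-side before sending data. The following is a minimal sketch: the numeric limits come from this page, but the data shapes (pairs of series key and point end time) are illustrative stand-ins, not a real client API.

```python
MAX_SERIES_PER_REQUEST = 200  # time series per write request
MIN_POINT_SPACING_S = 5       # seconds between point end times in one series

def validate_write_request(series_points):
    """series_points: list of (series_key, end_time_seconds) pairs,
    one pair per data point in a single write request."""
    if len(series_points) > MAX_SERIES_PER_REQUEST:
        raise ValueError("more than 200 time series in one write request")
    seen = set()
    for key, _ in series_points:
        if key in seen:
            # Only one point per time series is allowed in a request.
            raise ValueError(f"multiple points for series {key!r} in one request")
        seen.add(key)

def validate_point_spacing(end_times):
    """end_times: successive point end times (seconds) for one time series."""
    for earlier, later in zip(end_times, end_times[1:]):
        if later - earlier < MIN_POINT_SPACING_S:
            raise ValueError("point end times must be at least 5 seconds apart")
```

A request that fails either check would be rejected by the API anyway; validating locally just surfaces the error before the network call.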
Monitoring API quotas and limits
| Limit | Value |
|---|---|
| Limits to API usage | See the Quotas dashboard. For an API, click All Quotas to see your quota. |
| Lifetime of API page tokens | 24 hours |
About Monitoring API quotas
The Monitoring API has quota limits for the rates of time-series ingestion requests and time-series queries. Ingestion requests are calls that write time-series data, and queries are calls that retrieve time-series data. There are also internal limits on other Monitoring API endpoints; these endpoints aren't intended to handle high rates of requests.
To reduce the number of API requests you issue when your services write time-series data, use one API request to write data for multiple time series. We recommend that you write at least 10 objects per request.
For more information about batching API requests, see the reference for the timeSeries.create method.
If, after batching your API requests, you still require higher Monitoring API quota limits, contact Google Cloud Support.
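The batching advice above can be sketched as a small helper that groups time-series objects into write requests, respecting the 200-series request limit documented on this page (the helper name and data shapes are illustrative):

```python
MAX_SERIES_PER_REQUEST = 200  # write-request limit from this page

def batch_time_series(series, batch_size=MAX_SERIES_PER_REQUEST):
    """Split a list of time-series objects into batches, each small
    enough to send in a single write request."""
    if not 1 <= batch_size <= MAX_SERIES_PER_REQUEST:
        raise ValueError("batch_size must be between 1 and 200")
    return [series[i:i + batch_size] for i in range(0, len(series), batch_size)]
```

Each resulting batch would then be written with one API call; keeping batches well above the recommended 10 objects per request minimizes request counts.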
The other limits are fixed, as detailed on this page.
For more information, see Working with quotas.
Data retention
Metric data points older than the retention period are deleted from time series.
| Limit | Value |
|---|---|
| Retention of data points from custom, external, and agent metric types¹ | 24 months |
| Retention of data points from process-health metric types | 24 hours |
| Retention of data points from all other metric types² | 6 weeks |
¹ Metric data is stored for 6 weeks at its original sampling frequency, then it is down-sampled to 10-minute intervals for extended storage.
² Google Cloud Managed Service for Prometheus metric data is stored for 1 week at its original sampling frequency, then it is down-sampled to 1-minute intervals for the next 5 weeks, then it is down-sampled to 10-minute intervals for extended storage.
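The two down-sampling schedules in footnotes 1 and 2 can be expressed as a lookup from data age to stored resolution. This is an illustration of the schedules described above, not an API:

```python
WEEK_S = 7 * 24 * 3600  # seconds in one week

def stored_resolution(age_seconds, prometheus=False):
    """Return the stored resolution for a data point of the given age,
    following the down-sampling schedules in footnotes 1 and 2."""
    if prometheus:
        if age_seconds < 1 * WEEK_S:
            return "original"      # first week: original sampling frequency
        if age_seconds < 6 * WEEK_S:
            return "1-minute"      # next 5 weeks: 1-minute intervals
        return "10-minute"         # extended storage: 10-minute intervals
    if age_seconds < 6 * WEEK_S:
        return "original"          # first 6 weeks: original sampling frequency
    return "10-minute"             # extended storage: 10-minute intervals
```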
Limits for groups
| Limit | Value |
|---|---|
| Number of resource groups per metrics scope | 500 |
| Maximum number of groups included in an email report¹ | 10 |
¹ When you configure Cloud Monitoring email reports, you can request information on utilization of your resource groups. Due to a limitation in the email reporter, the generated reports include information for only 10 groups.
Monitored project limits
Cloud Monitoring officially supports up to 375 Google Cloud projects per metrics scope.
You can add up to 1,000 Google Cloud projects per metrics scope, but you might experience performance issues, especially when querying custom metrics or historical data. Cloud Monitoring guarantees performant queries and charts only for 375 Google Cloud projects per metrics scope.
To raise your limit on Google Cloud projects per metrics scope, request an increase of the "Monitored Projects / Monitoring Metrics Scope" quota. For more details, see the documentation about managing your quota.
Limits on creating and updating metric descriptors
Cloud Monitoring enforces a per-minute rate limit on creating new metrics, on adding new metric labels to existing metrics, and on deleting metrics. This rate limit is usually only hit when first integrating with Cloud Monitoring, for example when you migrate an existing, mature Prometheus deployment to Cloud Monitoring. This is not a rate limit on ingesting data points. This rate limit only applies when creating never-before-seen metrics or when adding new labels to existing metrics.
This limit is fixed, but any backlog resolves automatically as the new metrics and metric labels are created at the per-minute rate.
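For bulk integrations that hit this per-minute creation limit, a simple client-side retry with exponential backoff is usually enough to drain the backlog. The following is a hedged sketch, where `create_descriptor` and `RateLimitError` are hypothetical stand-ins for your client library's call and its quota error:

```python
import time

class RateLimitError(Exception):
    """Stand-in for the quota-exceeded error your client library raises."""

def create_with_backoff(create_descriptor, descriptor,
                        max_attempts=5, base_delay_s=1.0, sleep=time.sleep):
    """Retry descriptor creation with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(max_attempts):
        try:
            return create_descriptor(descriptor)
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: propagate the quota error
            sleep(base_delay_s * 2 ** attempt)
```

The `sleep` parameter is injectable only to make the helper testable; in production the default `time.sleep` applies.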
Limits for alerting and uptime checks
| Limit | Value | Category |
|---|---|---|
| Alerting policies (sum of metric and log) per metrics scope² | 500 | Metric, Log |
| Conditions per alerting policy | 6 | Metric |
| Maximum time period that a metric-absence condition evaluates³ | 24 hours | Metric |
| Maximum time period that a metric-threshold condition evaluates³ | 23 hours 30 minutes | Metric |
| Maximum length of the filter used in a metric-threshold condition | 2,048 Unicode characters | Metric |
| Maximum number of time series monitored by a forecast condition | 64 | Metric |
| Minimum forecast window | 1 hour (3,600 seconds) | Metric |
| Maximum forecast window | 7 days (604,800 seconds) | Metric |
| Notification channels per alerting policy | 16 | Metric, Log |
| Maximum rate of notifications | 1 notification every 5 minutes for each log-based alert | Log |
| Maximum number of notifications | 20 notifications a day for each log-based alert | Log |
| Maximum number of simultaneously open incidents per alerting policy | 1,000 | Metric |
| Period after which an incident with no new data is automatically closed | 7 days | Metric |
| Maximum duration of an incident if not manually closed | 7 days | Log |
| Retention of closed incidents | 13 months | Not applicable |
| Retention of open incidents | Indefinite | Not applicable |
| Notification channels per metrics scope | 4,000 | Not applicable |
| Maximum number of alerting policies per snooze | 16 | Metric, Log |
| Retention of a snooze | 13 months | Not applicable |
| Uptime checks per metrics scope⁴ | 100 | Not applicable |
| Maximum number of ICMP pings per public uptime check | 3 | Not applicable |
² Apigee and Apigee hybrid are deeply integrated with Cloud Monitoring. The alerting limit for all Apigee subscription levels—Standard, Enterprise, and Enterprise Plus—is the same as for Cloud Monitoring: 500 per metrics scope.
³ The maximum time period that a condition evaluates is the sum of the alignment period and the duration window values. For example, if the alignment period is set to 15 hours, and the duration window is set to 15 hours, then 30 hours of data is required to evaluate the condition.
⁴ This limit applies to the number of uptime-check configurations. Each uptime-check configuration includes the time interval between testing the status of the specified resource. See Managing uptime checks for more information.
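The footnote-3 arithmetic can be checked directly: the period a metric-threshold condition evaluates is the alignment period plus the duration window, and it must fit within the 23-hour-30-minute cap from the table. A small sketch of that check (function name is illustrative):

```python
MAX_THRESHOLD_EVAL_S = 23 * 3600 + 30 * 60  # 23 hours 30 minutes, in seconds

def evaluated_period_ok(alignment_s, duration_s):
    """True if alignment period + duration window fits within the cap
    on the time period a metric-threshold condition evaluates."""
    return alignment_s + duration_s <= MAX_THRESHOLD_EVAL_S
```

With a 15-hour alignment period and a 15-hour duration window, as in the footnote's example, the 30-hour evaluated period exceeds the cap and the condition would not be accepted.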
Limits for charting
| Limit | Value |
|---|---|
| Dashboards per metrics scope | 1,000 |
| Charts on a dashboard | 40 |
| Lines on a chart | 300 |
| Number of SLOs per service | 500 |