This page explains how to use GKE usage metering to understand the usage profiles of Google Kubernetes Engine (GKE) clusters, and tie usage to individual teams or business units within your organization. GKE usage metering has no impact on billing for your project; it lets you understand resource usage at a granular level.
Overview
GKE usage metering tracks information about the resource requests and actual resource usage of your cluster's workloads. Currently, GKE usage metering tracks information about CPU, GPU, TPU, memory, storage, and optionally network egress. You can differentiate resource usage by using Kubernetes namespaces, labels, or a combination of both.
Data is stored in BigQuery, where you can query it directly or export it for analysis with external tools such as Looker Studio.
GKE usage metering is helpful for scenarios such as the following:
- Tracking per-tenant resource requests and actual resource consumption in a multi-tenant cluster where each tenant operates within a given namespace.
- Determining the resource consumption of a workload running in a given cluster, by assigning a unique label to the Kubernetes objects associated with the workload.
- Identifying workloads whose resource requests differ significantly from their actual resource consumption, so that you can more efficiently allocate resources for each workload.
Before you begin
Before you start, make sure you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
Limitations
You can use the sample BigQuery queries and Looker Studio template to join GKE usage metering data with exported Google Cloud billing data in BigQuery. This lets you estimate a cost breakdown by cluster, namespace, and labels.
GKE usage metering data is purely advisory, and doesn't affect your Google Cloud bill. For billing data, your Google Cloud billing invoice is the sole source of truth.
The following limitations apply:
- Special contract discounts or credits are not accounted for.
- Resources created outside the scope of GKE are not tracked by namespace or label.
- Only labels from Pod and PersistentVolumeClaim objects are tracked by usage reporting.
- Only dynamically provisioned PersistentVolumes are supported.
- Only pd-standard and pd-ssd disk types are supported. GKE usage metering might include costs for regional versions of both disk types under the same SKU.
- Looker Studio does not support the visualization of machine types capable of bursting.
- You can only export data to a BigQuery dataset that is in the same project as your cluster.
- Don't use ports 27304, 47082, and 47083; these ports are reserved for network egress tracking.
- Custom StorageClass objects are not supported.
- Network egress metering is not supported for Windows Server nodes.
- Network egress metering is not supported for Shared VPC or VPC Network Peering.
- Network egress metering is not supported for clusters with more than 150 nodes.
Prerequisites
Before you use GKE usage metering, you must meet the following prerequisites:
- To track actual resource consumption, the cluster must use GKE 1.14.2-gke.3 or later.
- If you are using E2 or N2 machine types, the cluster version must be GKE 1.15.11-gke.9 or later.
- Billing export for BigQuery is enabled. Charges are associated with BigQuery usage.
- Version 250.0.0 or later of the gcloud CLI is required. Run gcloud --version to check.
- The BigQuery API must be enabled in your Google Cloud project. If you first enabled GKE after July 2018, the API is already enabled.
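For example, you might check the installed gcloud version and enable the BigQuery API from the command line. This is a minimal sketch; bigquery.googleapis.com is the standard service name for the BigQuery API:

gcloud --version
gcloud services enable bigquery.googleapis.com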
Enable GKE usage metering
To enable GKE usage metering, you first create a BigQuery dataset for either a single cluster, multiple clusters in the project, or the entire project. For more information about choosing a mapping between datasets and clusters, see Choosing one or more BigQuery datasets.
Next, you enable GKE usage metering when creating a new cluster or by modifying an existing cluster.
Optionally, you can create a Looker Studio dashboard to visualize the resource usage of your clusters.
Create the BigQuery dataset
To use GKE usage metering for clusters in your Google Cloud project, you first create the BigQuery dataset, and then configure clusters to use it. You can use a single BigQuery dataset to store information about resource usage for multiple clusters in the same project.
See Creating datasets for more details. Set the Default table expiration for the dataset to Never so that the table doesn't expire. If a table expires, it is recreated automatically as an empty table.
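As an illustration, you could create such a dataset with the bq tool. This is a sketch; the dataset name gke_usage_metering and the location are placeholders, and omitting a default table expiration means tables in the dataset never expire:

bq mk --location=US --dataset PROJECT_ID:gke_usage_metering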
Enable GKE usage metering for a cluster
You can enable GKE usage metering on a new or existing cluster by using either the gcloud CLI or the Google Cloud console.
Enabling GKE usage metering also enables resource consumption metering by default. To selectively disable resource consumption metering while continuing to track resource requests, see the gcloud instructions for enabling GKE usage metering in this topic.
Network egress metering is disabled by default. To enable it, see the caveats and instructions in Optional: Enabling network egress metering in this topic.
Create a new cluster
You can create a cluster by using the gcloud CLI or the Google Cloud console.
gcloud
To create a cluster with GKE usage metering enabled, run the following command:
gcloud container clusters create CLUSTER_NAME \
--resource-usage-bigquery-dataset RESOURCE_USAGE_DATASET
Replace the following:
- CLUSTER_NAME: the name of your GKE cluster.
- RESOURCE_USAGE_DATASET: the name of your BigQuery dataset.
Resource consumption metering is enabled by default. To disable it and only track resource requests, add the --no-enable-resource-consumption-metering flag to the preceding command. You also need to modify the example queries in the rest of this topic so that they do not query for resource consumption.
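For example, the following command creates a cluster that reports only resource requests to a dataset; the cluster name my-cluster and dataset name gke_usage_metering are placeholders:

gcloud container clusters create my-cluster \
    --resource-usage-bigquery-dataset gke_usage_metering \
    --no-enable-resource-consumption-metering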
If needed, the required tables are created within the BigQuery dataset when the cluster starts.
Console
To create a cluster with GKE usage metering enabled:
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click add_box Create.
From the navigation pane, under Cluster, click Features.
Select Enable GKE usage metering.
Enter the name of your BigQuery dataset.
Optional: select Enable network egress metering after reviewing the caveats and instructions in Optional: Enabling network egress metering.
Continue configuring your cluster, then click Create.
Configure an existing cluster
gcloud
To enable GKE usage metering on an existing cluster, run the following command:
gcloud container clusters update CLUSTER_NAME \
--resource-usage-bigquery-dataset RESOURCE_USAGE_DATASET
Resource consumption metering is enabled by default. To disable it and only track resource requests, add the --no-enable-resource-consumption-metering flag to the preceding command. You also need to modify the example queries in the rest of this topic so that they do not query for resource consumption.
You can also change the dataset an existing cluster uses to store its usage metering data by changing the value of the --resource-usage-bigquery-dataset flag.
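For example, to point an existing cluster at a different dataset (the cluster and dataset names are placeholders):

gcloud container clusters update my-cluster \
    --resource-usage-bigquery-dataset new_usage_metering_dataset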
If needed, a table is created within the BigQuery dataset when the cluster is updated.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
Next to the cluster you want to modify, click more_vert Actions, then click edit Edit.
Under Features, click edit Edit next to GKE usage metering.
Select Enable GKE usage metering.
Enter the name of the BigQuery dataset.
Optional: select Enable network egress metering after reviewing the caveats and instructions in Optional: Enabling network egress metering.
Click Save Changes.
Optional: Enable network egress metering
By default, network egress data is not collected or exported. Measuring network egress requires a network metering agent (NMA) running on each node. The NMA runs as a privileged Pod, consumes some resources on the node (CPU, memory, and disk space), and enables the nf_conntrack_acct sysctl flag on the kernel (for connection tracking flow accounting).
If you are comfortable with these caveats, you can enable network egress tracking for use with GKE usage metering. To enable network egress tracking, include the --enable-network-egress-metering option when creating or updating your cluster, or select Enable network egress metering when enabling GKE usage metering in the Google Cloud console.
To disable network egress metering, add the --no-enable-network-egress-metering flag when updating your cluster with the command line. Alternatively, clear Enable network egress metering in the GKE usage metering section of the cluster in the Google Cloud console.
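For example, with a placeholder cluster name:

gcloud container clusters update my-cluster --enable-network-egress-metering
gcloud container clusters update my-cluster --no-enable-network-egress-metering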
Verify that GKE usage metering is enabled
To verify that GKE usage metering is enabled on a cluster, and to confirm which BigQuery dataset stores the cluster's resource usage data, run the following command:
gcloud container clusters describe CLUSTER_NAME \
--format="value(resourceUsageExportConfig)"
The output is empty if GKE usage metering is not enabled, and otherwise shows the BigQuery dataset used by the cluster, as in the following example output:
bigqueryDestination={u'datasetId': u'test_usage_metering_dataset'}
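To see the full export configuration, including whether consumption metering and network egress metering are enabled, you can also request YAML output. This is a sketch; the exact field names in the output can vary by GKE version:

gcloud container clusters describe CLUSTER_NAME \
    --format="yaml(resourceUsageExportConfig)"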
Choose one or more BigQuery datasets
A dataset can hold GKE usage metering data for one or more clusters in your project. Whether you use one or many datasets depends on your security needs:
- A single dataset for the entire project simplifies administration.
- A dataset per cluster lets you delegate granular access to the datasets.
- A dataset per related group of clusters lets you find the right mix of simplicity and granularity for your needs.
Visualize GKE usage metering data using a Looker Studio dashboard
You can visualize your GKE usage metering data using a Looker Studio dashboard. This lets you filter your data by cluster name, namespace, or label. You can also adjust the reporting period dynamically. If you have experience with Looker Studio and BigQuery, you can create a customized dashboard. You can also clone a dashboard that we created specifically for GKE usage metering.
You can use the dashboard to visualize resource requests and consumption on your clusters over time.
Prerequisites
- Enable Exporting Google Cloud billing data to BigQuery if it is not already enabled. During this process, you create a dataset, but the table within the dataset can take up to 5 hours to appear and start populating. When the table appears, its name is gcp_billing_export_v1_BILLING_ACCOUNT_ID.
- Enable GKE usage metering on at least one cluster in the project. Note the name you chose for the BigQuery dataset.
- Enable Looker Studio if it's not already enabled.
- Gather the following information, which is needed to configure the dashboard:
  - Cloud Billing export dataset ID and data table
  - GKE usage metering dataset ID
- Ensure that you have version 2.0.58 or later of the BigQuery CLI. To check the version, run bq version. To update it, run gcloud components update.
Run the commands in this section in a Linux terminal or in Cloud Shell.
Create the BigQuery cost breakdown table
Download one of the following query templates:
- If you enabled consumption metering, download this template.
- If you didn't enable consumption metering, download this template.
If you are using Cloud Shell, copy this file into the directory where you run the following commands.
Run the following command to set environment variables:
export GCP_BILLING_EXPORT_TABLE_FULL_PATH=YOUR_BILLING_EXPORT_TABLE_PATH
export USAGE_METERING_PROJECT_ID=YOUR_USAGE_METERING_PROJECT_ID
export USAGE_METERING_DATASET_ID=YOUR_USAGE_METERING_DATASET_ID
export USAGE_METERING_START_DATE=YOUR_USAGE_METERING_START_DATE
export COST_BREAKDOWN_TABLE_ID=YOUR_COST_BREAKDOWN_TABLE_ID
export USAGE_METERING_QUERY_TEMPLATE=YOUR_TEMPLATE_PATH
export USAGE_METERING_QUERY=YOUR_RENDERED_QUERY_PATH
Replace the following:
- YOUR_BILLING_EXPORT_TABLE_PATH: the path to your generated billing export table. This table has a name similar to PROJECT_ID.DATASET_ID.gcp_billing_export_v1_xxxx.
- YOUR_USAGE_METERING_PROJECT_ID: the name of your Google Cloud project.
- YOUR_USAGE_METERING_DATASET_ID: the name of the dataset you created in BigQuery, such as all_billing_data.
- YOUR_USAGE_METERING_START_DATE: the start date of your query, in the form YYYY-MM-DD.
- YOUR_COST_BREAKDOWN_TABLE_ID: the name of a new table that you chose, such as usage_metering_cost_breakdown. This table is used as input to Looker Studio.
- YOUR_TEMPLATE_PATH: the name of the query template you downloaded, either usage_metering_query_template_request_and_consumption.sql or usage_metering_query_template_request_only.sql.
- YOUR_RENDERED_QUERY_PATH: the path for the rendered query that you choose, such as cost_breakdown_query.sql.
As an example, your environment variables might resemble the following:
export GCP_BILLING_EXPORT_TABLE_FULL_PATH=my-billing-project.all_billing_data.gcp_billing_export_v1_xxxx
export USAGE_METERING_PROJECT_ID=my-billing-project
export USAGE_METERING_DATASET_ID=all_billing_data
export USAGE_METERING_START_DATE=2022-05-01
export COST_BREAKDOWN_TABLE_ID=usage_metering_cost_breakdown
export USAGE_METERING_QUERY_TEMPLATE=usage_metering_query_template_request_only.sql
export USAGE_METERING_QUERY=cost_breakdown_query.sql
Render the query from the template:
sed \
    -e "s/\${fullGCPBillingExportTableID}/$GCP_BILLING_EXPORT_TABLE_FULL_PATH/" \
    -e "s/\${projectID}/$USAGE_METERING_PROJECT_ID/" \
    -e "s/\${datasetID}/$USAGE_METERING_DATASET_ID/" \
    -e "s/\${startDate}/$USAGE_METERING_START_DATE/" \
    "$USAGE_METERING_QUERY_TEMPLATE" \
    > "$USAGE_METERING_QUERY"
Create a new cost breakdown table that refreshes every 24 hours:
bq query \
    --project_id=$USAGE_METERING_PROJECT_ID \
    --use_legacy_sql=false \
    --destination_table=$USAGE_METERING_DATASET_ID.$COST_BREAKDOWN_TABLE_ID \
    --schedule='every 24 hours' \
    --display_name="GKE Usage Metering Cost Breakdown Scheduled Query" \
    --replace=true \
    "$(cat $USAGE_METERING_QUERY)"
For more information about scheduling queries, see Set up scheduled queries.
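Optionally, you might first run the rendered query once, without scheduling, to verify that it produces data. This is a sketch that reuses the environment variables set earlier and writes the same destination table a single time:

bq query \
    --project_id=$USAGE_METERING_PROJECT_ID \
    --use_legacy_sql=false \
    --destination_table=$USAGE_METERING_DATASET_ID.$COST_BREAKDOWN_TABLE_ID \
    --replace=true \
    "$(cat $USAGE_METERING_QUERY)"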
Create the BigQuery data source
- In Looker Studio, go to Data Sources.
- Click add Create, and then click Data source.
- Select BigQuery.
- Name your data source. From the toolbar, click the words Untitled Data Source to replace the text with a descriptive name.
- Select Custom Query and then select your project ID.
Paste the following query into the Query Editor:
SELECT * FROM `USAGE_METERING_PROJECT_ID.USAGE_METERING_DATASET_ID.COST_BREAKDOWN_TABLE_ID`
Click Connect.
Create the Looker Studio dashboard
- Copy the GKE usage metering dashboard into your project.
- Click more_vert More options, and then click Make a copy.
- In the Copy this report dialog, from the New data source list, select the data source that you created.
- Click Copy report.
The dashboard is created, and you can access it at any time in the list of Looker Studio reports for your project.
Use the Looker Studio dashboard
The dashboard contains multiple reports:
- Usage breakdown
- This report shows the overall cluster usage ratio among all clusters sending usage metering data to the same BigQuery data source. It also includes a breakdown by resource type, such as CPU, memory, or network egress, per namespace. You can limit the report data to one or more individual clusters or namespaces.
- Usage breakdown with unallocated resources
- This report is similar to the usage breakdown report, but spreads unallocated resources proportionally across all namespaces. Unallocated resources include idle resources and any resources that are not currently allocated by GKE usage metering to specific tenants.
- Cost trends: drill down by namespace
- Usage trends among all clusters sending usage metering data to the same BigQuery data source by namespace. You can select one or more individual clusters, namespaces, resources, or SKUs.
- Cost trends: drill down by label
- Cost trends among all clusters sending usage metering data to the same BigQuery data source. You can select one or more individual clusters, resources, label names, or label values.
- Consumption-based Metering
- Consumption trends among all clusters sending usage metering data to the same BigQuery data source. You can select one or more individual namespaces, label keys, or label values. This report is only populated if resource consumption metering is enabled on at least one cluster.
You can change pages using the navigation menu. You can change the timeframe for a page using the date picker. To share the report with members of your organization, or to revoke access, click person_add_alt Share Report.
After you copy the report into your project, you can customize it by using the Looker Studio report editor. Even if the report template provided by Google changes, your copy is unaffected.
Explore GKE usage metering data using BigQuery
To view data about resource requests using BigQuery, query the gke_cluster_resource_usage table within the relevant BigQuery dataset.
To view data about actual resource consumption, query the gke_cluster_resource_consumption table. Network egress consumption data remains in the gke_cluster_resource_usage table, because there is no concept of resource requests for egress.
For more information about using queries in BigQuery, see Running queries. The fields in the schema are stable, though more fields may be added in the future.
These queries are simple examples. Customize your query to find the data you need.
Query for resource requests
SELECT
cluster_name,
labels,
usage
FROM
  `CLUSTER_GCP_PROJECT.USAGE_METERING_DATASET.gke_cluster_resource_usage`
WHERE
namespace="NAMESPACE"
Query for resource consumption
SELECT
cluster_name,
labels,
usage
FROM
  `CLUSTER_GCP_PROJECT.USAGE_METERING_DATASET.gke_cluster_resource_consumption`
WHERE
namespace="NAMESPACE"
Replace the following:
- CLUSTER_GCP_PROJECT: the name of the Google Cloud project that contains the cluster that you want to query.
- USAGE_METERING_DATASET: the name of your usage metering dataset.
- NAMESPACE: the name of your namespace.
More examples
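As one illustration (a sketch that uses only fields from the schema described in the next section; the table path placeholders match the earlier queries), the following query sums requested CPU seconds per namespace since a given start date:

SELECT
  namespace,
  SUM(usage.amount) AS requested_cpu_seconds
FROM
  `CLUSTER_GCP_PROJECT.USAGE_METERING_DATASET.gke_cluster_resource_usage`
WHERE
  resource_name="cpu"
  AND start_time >= TIMESTAMP("2022-05-01")
GROUP BY
  namespace
ORDER BY
  requested_cpu_seconds DESC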
GKE usage metering schema in BigQuery
The following table describes the schema for the GKE usage metering tables in the BigQuery dataset. If your cluster is running a version of GKE that supports resource consumption metering and resource requests, an additional table is created with the same schema.
Field | Type | Description |
---|---|---|
cluster_location | STRING | The name of the Compute Engine zone or region in which the GKE cluster resides. |
cluster_name | STRING | The name of the GKE cluster. |
namespace | STRING | The Kubernetes namespace from which the usage is generated. |
resource_name | STRING | The name of the resource, such as "cpu", "memory", and "storage". |
sku_id | STRING | The SKU ID of the underlying Google Cloud resource. |
start_time | TIMESTAMP | The UNIX timestamp of when the usage began. |
end_time | TIMESTAMP | The UNIX timestamp of when the usage ended. |
fraction | FLOAT | The fraction of a cloud resource used by the usage. For a dedicated cloud resource used solely by a single namespace, the fraction is always 1.0. For resources shared among multiple namespaces, the fraction is the requested amount divided by the total capacity of the underlying cloud resource. |
cloud_resource_size | INTEGER | The size of the underlying Google Cloud resource. For example, an n1-standard-2 instance has 2 vCPUs. |
labels.key | STRING | The key of a Kubernetes label associated with the usage. |
labels.value | STRING | The value of a Kubernetes label associated with the usage. |
project.id | STRING | The ID of the project in which the GKE cluster resides. |
usage.amount | FLOAT | The quantity of usage.unit used. |
usage.unit | STRING | The base unit in which resource usage is measured. For example, the base unit for standard storage is byte-seconds. |
The units for GKE usage metering must be interpreted in the following way:
- The CPU usage.unit is seconds, which is the total CPU time that a Pod requested or utilized. For example, if two Pods each request 30 CPUs and run for 15 minutes, the aggregate amount in the request table is 54,000 seconds (2 Pods * 30 CPUs * 15 minutes * 60 seconds/minute).
- The memory usage.unit is byte-seconds, which is the integral of memory over time that a Pod requested or utilized. For example, if two Pods each request 30 GiB and run for 15 minutes, the aggregate amount in the request table is 5.798e+13 byte-seconds (2 Pods * 30 GiB * 15 minutes * 60 seconds/minute * 1073741824 bytes/GiB).
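For example, to report requested memory as GiB-hours instead of byte-seconds, you could divide the aggregated amount accordingly. This is a sketch; the table path placeholders match the earlier queries:

SELECT
  namespace,
  SUM(usage.amount) / (1073741824 * 3600) AS requested_gib_hours
FROM
  `CLUSTER_GCP_PROJECT.USAGE_METERING_DATASET.gke_cluster_resource_usage`
WHERE
  resource_name="memory"
GROUP BY
  namespace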
Understanding when GKE usage metering data is written to BigQuery
GKE usage metering writes usage records to BigQuery when either of the following conditions is met:
- The Pod phase changes to succeeded or failed, or the Pod is deleted.
- The hourly schedule's timestamp to write records is reached while the Pod is still running.
GKE usage metering generates an hourly schedule where it writes Pod usage records to BigQuery for all currently running Pods. The schedule's timestamp is not the same across all clusters.
If you have multiple Pods running at that timestamp, you'll find multiple usage records with the same end_time. These usage records' end_time indicates the hourly schedule's timestamp. Also, if you have Pods that have been running for multiple hours, you have a set of usage records with an end_time that matches the start_time of another set of usage records.
Disable GKE usage metering
gcloud
To disable GKE usage metering on a cluster, run the following command:
gcloud container clusters update CLUSTER_NAME \
--clear-resource-usage-bigquery-dataset
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
Next to the cluster you want to modify, click more_vert Actions, then click edit Edit.
Under Features, click edit Edit next to GKE usage metering.
Clear Enable GKE usage metering.
Click Save Changes.