JOBS_TIMELINE_BY_FOLDER view

The INFORMATION_SCHEMA.JOBS_TIMELINE_BY_FOLDER view contains near real-time BigQuery metadata by timeslice for all jobs submitted in the parent folder of the current project, including the jobs in subfolders under it. This view contains currently running and completed jobs.

Required permissions

To query the INFORMATION_SCHEMA.JOBS_TIMELINE_BY_FOLDER view, you need the bigquery.jobs.listAll Identity and Access Management (IAM) permission for the parent folder. Each of the following predefined IAM roles includes the required permission:

  • Folder Admin
  • BigQuery Admin

For more information about BigQuery permissions, see Access control with IAM.

Schema

When you query the INFORMATION_SCHEMA.JOBS_TIMELINE_BY_* views, the query results contain one row for every second of execution of every BigQuery job. Each period starts on a whole-second boundary and lasts exactly one second.

The INFORMATION_SCHEMA.JOBS_TIMELINE_BY_* view has the following schema:

Column name Data type Value
period_start TIMESTAMP Start time of this period.
period_slot_ms INTEGER Slot milliseconds consumed in this period.
period_shuffle_ram_usage_ratio FLOAT Shuffle usage ratio in the selected time period.
project_id STRING (Clustering column) ID of the project.
project_number INTEGER Number of the project.
folder_numbers REPEATED INTEGER Number IDs of the folders that contain the project, starting with the folder that immediately contains the project, followed by the folder that contains the child folder, and so forth. For example, if `folder_numbers` is `[1, 2, 3]`, then folder `1` immediately contains the project, folder `2` contains `1`, and folder `3` contains `2`.
user_email STRING (Clustering column) Email address or service account of the user who ran the job.
job_id STRING ID of the job. For example, bquxjob_1234.
job_type STRING The type of the job. Can be QUERY, LOAD, EXTRACT, COPY, or null. Job type null indicates an internal job, such as script job statement evaluation or materialized view refresh.
statement_type STRING The type of query statement, if valid. For example, SELECT, INSERT, UPDATE, or DELETE.
job_creation_time TIMESTAMP (Partitioning column) Creation time of this job. Partitioning is based on the UTC time of this timestamp.
job_start_time TIMESTAMP Start time of this job.
job_end_time TIMESTAMP End time of this job.
state STRING Running state of the job at the end of this period. Valid states include PENDING, RUNNING, and DONE.
reservation_id STRING Name of the primary reservation assigned to this job at the end of this period, if applicable.
total_bytes_processed INTEGER Total bytes processed by the job.
error_result RECORD Details of error (if any) as an ErrorProto.
cache_hit BOOLEAN Whether the query results of this job were from a cache.
period_estimated_runnable_units INTEGER Units of work that can be scheduled immediately in this period. Additional slots for these units of work accelerate your query, provided no other query in the reservation needs additional slots.
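
Because the view contains one row for each second a job executes, you can reconstruct an individual job's slot consumption over time directly from this schema. The following minimal sketch assumes the job ran in the US region and uses a placeholder job ID:

SELECT
  period_start,
  period_slot_ms,
  state
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_FOLDER
WHERE
  -- Placeholder job ID; replace with the ID of the job you want to inspect.
  job_id = "bquxjob_1234"
ORDER BY
  period_start;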

Data retention

This view contains currently running jobs and the job history of the past 180 days.
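
Because job_creation_time is the partitioning column (see the schema above), filtering on it limits how much of that job history a query scans. The following minimal sketch restricts results to jobs created in the past 7 days; the interval is an arbitrary choice:

SELECT
  job_id,
  period_start,
  period_slot_ms
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_FOLDER
WHERE
  -- Scan only the partitions for jobs created in the last 7 days.
  job_creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
ORDER BY
  period_start DESC;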

Scope and syntax

Queries against this view must include a region qualifier. The following table explains the region scope for this view:

View name: [PROJECT_ID.]`region-REGION`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_FOLDER
Resource scope: Folder that contains the specified project
Region scope: REGION

Replace the following:

  • Optional: PROJECT_ID: the ID of your Google Cloud project. If not specified, the default project is used.
  • REGION: any dataset region name. For example, region-us.
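
For example, a query that names the project explicitly and targets the US multi-region would look like the following sketch; myproject is a placeholder project ID:

SELECT
  job_id,
  period_start,
  state
FROM
  `myproject`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_FOLDER
LIMIT 10;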

Examples

The following examples show how to query the INFORMATION_SCHEMA.JOBS_TIMELINE_BY_FOLDER view.

Get the number of unique jobs

The following query displays the number of unique jobs running per minute in the specified folder:

SELECT
  TIMESTAMP_TRUNC(period_start, MINUTE) AS per_start,
  COUNT(DISTINCT job_id) AS unique_jobs
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_FOLDER,
  UNNEST(folder_numbers) f
WHERE
  -- Replace my_folder_number with the numeric ID of the folder to filter on.
  my_folder_number = f
GROUP BY
  per_start
ORDER BY
  per_start DESC;

The result is similar to the following:

+---------------------------+---------------------------------+
|  per_start                |  unique_jobs                    |
+---------------------------+---------------------------------+
|  2019-10-10 00:04:00 UTC  |  5                              |
|  2019-10-10 00:03:00 UTC  |  2                              |
|  2019-10-10 00:02:00 UTC  |  3                              |
|  2019-10-10 00:01:00 UTC  |  4                              |
|  2019-10-10 00:00:00 UTC  |  4                              |
+---------------------------+---------------------------------+

Calculate the slot-time used

The following query displays the slot-time used per minute in the specified folder:

SELECT
  TIMESTAMP_TRUNC(period_start, MINUTE) AS per_start,
  SUM(period_slot_ms) AS slot_ms
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_FOLDER,
  UNNEST(folder_numbers) f
WHERE
  -- Replace my_folder_number with the numeric ID of the folder to filter on.
  my_folder_number = f
  -- Replace "my reservation id" with the name of your reservation.
  AND reservation_id = "my reservation id"
  -- Excluding the SCRIPT statement type avoids double counting slot usage that
  -- parent script jobs report for their child jobs.
  AND statement_type != "SCRIPT"
GROUP BY
  per_start
ORDER BY
  per_start DESC;

The result is similar to the following:

+---------------------------+---------------------------------+
|  per_start                |  slot_ms                        |
+---------------------------+---------------------------------+
|  2019-10-10 00:04:00 UTC  |  500                            |
|  2019-10-10 00:03:00 UTC  |  1000                           |
|  2019-10-10 00:02:00 UTC  |  3000                           |
|  2019-10-10 00:01:00 UTC  |  4000                           |
|  2019-10-10 00:00:00 UTC  |  4000                           |
+---------------------------+---------------------------------+
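
Estimate pending work per period

The period_estimated_runnable_units column indicates how much work could have been scheduled immediately if more slots had been available. As a rough sketch, the following query averages that value per minute for a folder; as in the earlier examples, my_folder_number is a placeholder for the folder's numeric ID:

SELECT
  TIMESTAMP_TRUNC(period_start, MINUTE) AS per_start,
  AVG(period_estimated_runnable_units) AS avg_runnable_units
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_FOLDER,
  UNNEST(folder_numbers) f
WHERE
  -- Replace my_folder_number with the numeric ID of the folder to filter on.
  my_folder_number = f
GROUP BY
  per_start
ORDER BY
  per_start DESC;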