Usage
view: my_view { derived_table: { ... } }
Hierarchy
derived_table
Default Value
None
Definition
A derived table can be treated as if it were a normal table in your database. You can create either of the following types of derived tables:
- A native derived table, which you define with the explore_source parameter
- A SQL-based derived table, which you define with a raw SQL query in the sql parameter
Certain edge cases won't permit the use of the sql parameter. In such cases, Looker supports the following parameters for defining a SQL query for PDTs:
- create_process
- sql_create
Whether you use the sql, create_process, or sql_create parameter, you are defining the derived table with a SQL query, so all of these are considered SQL-based derived tables.
The derived_table parameter begins a section of LookML where you define how a derived table should be calculated, what indexes or keys it should use, and when it should be regenerated.
Derived tables can be temporary derived tables that are calculated ad hoc as users request data from them, or they can be persistent derived tables (PDTs) that are persisted on your database using a persistence strategy.
If your admin has enabled the Incremental PDTs Labs feature, and if your dialect supports them, you can use incremental PDTs in your project. An incremental PDT is a PDT that is built incrementally by appending fresh data to the table, instead of rebuilding the table in its entirety. See the Incremental PDTs documentation page for more information on incremental PDTs.
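As an illustration, here is a minimal sketch of an incremental native derived table, reusing the order Explore from the examples later on this page. The view name, the datagroup name (orders_datagroup), and the created_date field are assumptions for this sketch; your dialect must also support incremental PDTs (see the dialect tables later on this page).
view: daily_order_facts {
  derived_table: {
    explore_source: order {
      column: order_date { field: order.created_date }
      column: daily_orders { field: order.count }
    }
    datagroup_trigger: orders_datagroup
    # When the datagroup triggers, fresh data is appended from the
    # increment key forward, and the previous 3 periods are rebuilt
    # to pick up late-arriving rows.
    increment_key: "order_date"
    increment_offset: 3
  }
}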
See the Derived tables in Looker documentation page for more information on derived tables.
Derived table subparameters
This section lists the derived table subparameters.
Parameter | Description |
---|---|
cluster_keys: ["column_name", "column_name", ...] | Specifies that a PDT be clustered by one or more fields in BigQuery and Snowflake. |
create_process: { sql_step: SQL query ;; } | Specifies an ordered sequence of steps to create a PDT on a database dialect that requires custom DDL commands. |
datagroup_trigger: datagroup_name | Specifies the datagroup to use for the PDT rebuilding policy. |
distribution: "column_name" | Sets the distribution key of a PDT that is built in Redshift or Aster. |
distribution_style: all | even | Sets the distribution style of a PDT that is built in Redshift. |
explore_source: explore_name { # Desired subparameters (described on the explore_source page) } | Specifies a native derived table based on an Explore. |
increment_key: "column_name" | Specifies the time increment for which fresh data should be queried and appended to an incremental PDT. |
increment_offset: N | Specifies the number of previous time periods (at the increment key granularity) that are rebuilt to account for late-arriving data for an incremental PDT. |
indexes: ["column_name", "column_name", ...] | Sets the indexes of a PDT built in a traditional database (for example, MySQL or Postgres) or an interleaved sort key in Redshift. |
interval_trigger: "N (seconds | minutes | hours)" | Specifies a rebuild schedule for a PDT. |
materialized_view: yes | no | Creates a materialized view on your database for a derived table. |
partition_keys: ["column_name", "column_name", ...] | Specifies that a PDT be partitioned by one or more fields in Presto, or by a single date/time field in BigQuery. |
persist_for: "N (seconds | minutes | hours)" | Sets the maximum age of a PDT before it is regenerated. |
publish_as_db_view: yes | no | Creates a stable database view for the PDT to enable querying the table outside of Looker. |
sortkeys: ["column_name", "column_name", ...] | Sets the sort keys of a PDT that is built in Redshift. |
sql: SQL query ;; | Declares the SQL query for a derived table. |
sql_create: { SQL query ;; } | Defines a SQL CREATE statement to create a PDT on a database dialect that requires custom DDL commands. |
sql_trigger_value: SQL query ;; | Specifies the condition that causes a PDT to be regenerated. |
table_compression: GZIP | SNAPPY | Specifies the table compression to use for a PDT in Amazon Athena. |
table_format: PARQUET | ORC | AVRO | JSON | TEXTFILE | Specifies the table format to use for a PDT in Amazon Athena. |
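To show how several of these subparameters fit together, here is a hedged sketch of a SQL-based PDT on a BigQuery connection that rebuilds daily and is partitioned and clustered. The view, table, and column names are hypothetical.
view: customer_day_facts {
  derived_table: {
    sql:
      SELECT
        customer_id,
        DATE(created_at) AS order_date,
        COUNT(*) AS order_count
      FROM orders
      GROUP BY 1, 2 ;;
    # Rebuild the table whenever this query returns a new value,
    # which in this case is once per day.
    sql_trigger_value: SELECT CURRENT_DATE() ;;
    # BigQuery-specific storage options.
    partition_keys: ["order_date"]
    cluster_keys: ["customer_id"]
  }
}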
Examples
Create a customer_order_facts native derived table with the explore_source parameter:
view: customer_order_facts {
  derived_table: {
    explore_source: order {
      column: customer_id { field: order.customer_id }
      column: lifetime_orders { field: order.count }
      column: lifetime_spend { field: order.total_spend }
    }
  }
}
Create a customer_order_facts derived table with the sql parameter:
view: customer_order_facts {
  derived_table: {
    sql:
      SELECT
        customer_id,
        COUNT(*) AS lifetime_orders,
        SUM(total) AS lifetime_spend
      FROM order
      GROUP BY customer_id ;;
  }
}
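For a dialect that requires custom DDL commands, a create_process version of the same table might look like the following sketch. The orders_datagroup name is an assumption, and the CREATE TABLE statement should be adapted to your dialect; ${SQL_TABLE_NAME} is used here to refer to the table that Looker generates for the PDT.
view: customer_order_facts {
  derived_table: {
    datagroup_trigger: orders_datagroup
    create_process: {
      sql_step:
        CREATE TABLE ${SQL_TABLE_NAME} AS
        SELECT
          customer_id,
          COUNT(*) AS lifetime_orders,
          SUM(total) AS lifetime_spend
        FROM order
        GROUP BY customer_id ;;
    }
  }
}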
Things to consider
Avoid overusing derived tables to create pure SQL solutions
Users who are particularly SQL savvy often use derived tables to solve problems with complex SQL queries, the results of which can then be exposed to users. While this approach is sometimes necessary, it misses out on the modular, reusable nature of LookML and can lock users into rigid ways of exploring their data.
Our general suggestion is to think about the underlying purpose of your SQL query and then convert that logic into LookML, rather than copying and pasting existing queries into a derived table. If you need assistance creating analyses that don't rely on derived tables, Looker support analysts are here to help with best practices.
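As an illustration, here is a hedged sketch of that approach: the lifetime_orders and lifetime_spend values from the earlier examples can often be modeled as ordinary measures on the base view rather than precomputed in a derived table. The sketch assumes an order table with a total column, matching the earlier examples.
view: order {
  # Existing dimensions omitted for brevity.

  # Equivalent to COUNT(*) in the derived table example.
  measure: count {
    type: count
  }

  # Equivalent to SUM(total) in the derived table example.
  measure: total_spend {
    type: sum
    sql: ${TABLE}.total ;;
  }
}
These measures can then be grouped by any dimension at query time, rather than being fixed to the customer_id grain of the derived table.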
Supported database dialects for derived tables
Supported database dialects for temporary derived tables
For Looker to support derived tables in your Looker project, your database dialect must also support them. The following table shows which dialects support derived tables in the latest release of Looker:
Dialect | Supported? |
---|---|
Actian Avalanche | Yes |
Amazon Athena | Yes |
Amazon Aurora MySQL | Yes |
Amazon Redshift | Yes |
Apache Druid | Yes |
Apache Druid 0.13+ | Yes |
Apache Druid 0.18+ | Yes |
Apache Hive 2.3+ | Yes |
Apache Hive 3.1.2+ | Yes |
Apache Spark 3+ | Yes |
ClickHouse | Yes |
Cloudera Impala 3.1+ | Yes |
Cloudera Impala 3.1+ with Native Driver | Yes |
Cloudera Impala with Native Driver | Yes |
DataVirtuality | Yes |
Databricks | Yes |
Denodo 7 | Yes |
Denodo 8 | Yes |
Dremio | Yes |
Dremio 11+ | Yes |
Exasol | Yes |
Firebolt | Yes |
Google BigQuery Legacy SQL | Yes |
Google BigQuery Standard SQL | Yes |
Google Cloud PostgreSQL | Yes |
Google Cloud SQL | Yes |
Google Spanner | Yes |
Greenplum | Yes |
HyperSQL | Yes |
IBM Netezza | Yes |
MariaDB | Yes |
Microsoft Azure PostgreSQL | Yes |
Microsoft Azure SQL Database | Yes |
Microsoft Azure Synapse Analytics | Yes |
Microsoft SQL Server 2008+ | Yes |
Microsoft SQL Server 2012+ | Yes |
Microsoft SQL Server 2016 | Yes |
Microsoft SQL Server 2017+ | Yes |
MongoBI | Yes |
MySQL | Yes |
MySQL 8.0.12+ | Yes |
Oracle | Yes |
Oracle ADWC | Yes |
PostgreSQL 9.5+ | Yes |
PostgreSQL pre-9.5 | Yes |
PrestoDB | Yes |
PrestoSQL | Yes |
SAP HANA 2+ | Yes |
SingleStore | Yes |
SingleStore 7+ | Yes |
Snowflake | Yes |
Teradata | Yes |
Trino | Yes |
Vector | Yes |
Vertica | Yes |
Supported database dialects for PDTs
For Looker to support PDTs in your Looker project, your database dialect must also support them.
To support any type of PDTs (either LookML-based or SQL-based), the dialect must support writes to the database, among other requirements. There are some read-only database configurations that don't allow persistence to work (most commonly Postgres hot-swap replica databases). In these cases, you can use temporary derived tables instead.
The following table shows the dialects that support persistent SQL-based derived tables in the latest release of Looker:
Dialect | Supported? |
---|---|
Actian Avalanche | Yes |
Amazon Athena | Yes |
Amazon Aurora MySQL | Yes |
Amazon Redshift | Yes |
Apache Druid | No |
Apache Druid 0.13+ | No |
Apache Druid 0.18+ | No |
Apache Hive 2.3+ | Yes |
Apache Hive 3.1.2+ | Yes |
Apache Spark 3+ | Yes |
ClickHouse | No |
Cloudera Impala 3.1+ | Yes |
Cloudera Impala 3.1+ with Native Driver | Yes |
Cloudera Impala with Native Driver | Yes |
DataVirtuality | No |
Databricks | Yes |
Denodo 7 | No |
Denodo 8 | No |
Dremio | No |
Dremio 11+ | No |
Exasol | Yes |
Firebolt | No |
Google BigQuery Legacy SQL | Yes |
Google BigQuery Standard SQL | Yes |
Google Cloud PostgreSQL | Yes |
Google Cloud SQL | Yes |
Google Spanner | No |
Greenplum | Yes |
HyperSQL | No |
IBM Netezza | Yes |
MariaDB | Yes |
Microsoft Azure PostgreSQL | Yes |
Microsoft Azure SQL Database | Yes |
Microsoft Azure Synapse Analytics | Yes |
Microsoft SQL Server 2008+ | Yes |
Microsoft SQL Server 2012+ | Yes |
Microsoft SQL Server 2016 | Yes |
Microsoft SQL Server 2017+ | Yes |
MongoBI | No |
MySQL | Yes |
MySQL 8.0.12+ | Yes |
Oracle | Yes |
Oracle ADWC | Yes |
PostgreSQL 9.5+ | Yes |
PostgreSQL pre-9.5 | Yes |
PrestoDB | Yes |
PrestoSQL | Yes |
SAP HANA 2+ | Yes |
SingleStore | Yes |
SingleStore 7+ | Yes |
Snowflake | Yes |
Teradata | Yes |
Trino | Yes |
Vector | Yes |
Vertica | Yes |
To support persistent native derived tables (which have LookML-based queries), the dialect must also support a CREATE TABLE DDL function. The following table lists the dialects that support persistent native (LookML-based) derived tables in the latest release of Looker:
Dialect | Supported? |
---|---|
Actian Avalanche | Yes |
Amazon Athena | Yes |
Amazon Aurora MySQL | Yes |
Amazon Redshift | Yes |
Apache Druid | No |
Apache Druid 0.13+ | No |
Apache Druid 0.18+ | No |
Apache Hive 2.3+ | Yes |
Apache Hive 3.1.2+ | Yes |
Apache Spark 3+ | Yes |
ClickHouse | No |
Cloudera Impala 3.1+ | Yes |
Cloudera Impala 3.1+ with Native Driver | Yes |
Cloudera Impala with Native Driver | Yes |
DataVirtuality | No |
Databricks | Yes |
Denodo 7 | No |
Denodo 8 | No |
Dremio | No |
Dremio 11+ | No |
Exasol | Yes |
Firebolt | No |
Google BigQuery Legacy SQL | Yes |
Google BigQuery Standard SQL | Yes |
Google Cloud PostgreSQL | Yes |
Google Cloud SQL | No |
Google Spanner | No |
Greenplum | Yes |
HyperSQL | No |
IBM Netezza | Yes |
MariaDB | Yes |
Microsoft Azure PostgreSQL | Yes |
Microsoft Azure SQL Database | Yes |
Microsoft Azure Synapse Analytics | Yes |
Microsoft SQL Server 2008+ | Yes |
Microsoft SQL Server 2012+ | Yes |
Microsoft SQL Server 2016 | Yes |
Microsoft SQL Server 2017+ | Yes |
MongoBI | No |
MySQL | Yes |
MySQL 8.0.12+ | Yes |
Oracle | Yes |
Oracle ADWC | Yes |
PostgreSQL 9.5+ | Yes |
PostgreSQL pre-9.5 | Yes |
PrestoDB | Yes |
PrestoSQL | Yes |
SAP HANA 2+ | Yes |
SingleStore | Yes |
SingleStore 7+ | Yes |
Snowflake | Yes |
Teradata | Yes |
Trino | Yes |
Vector | Yes |
Vertica | Yes |
PDTs are not supported for Snowflake connections that use OAuth.
Supported database dialects for incremental PDTs
For Looker to support incremental PDTs in your Looker project, your database dialect must support Data Definition Language (DDL) commands that enable deleting and inserting rows.
The following table shows which dialects support incremental PDTs in the latest release of Looker:
Dialect | Supported? |
---|---|
Actian Avalanche | No |
Amazon Athena | No |
Amazon Aurora MySQL | No |
Amazon Redshift | Yes |
Apache Druid | No |
Apache Druid 0.13+ | No |
Apache Druid 0.18+ | No |
Apache Hive 2.3+ | No |
Apache Hive 3.1.2+ | No |
Apache Spark 3+ | No |
ClickHouse | No |
Cloudera Impala 3.1+ | No |
Cloudera Impala 3.1+ with Native Driver | No |
Cloudera Impala with Native Driver | No |
DataVirtuality | No |
Databricks | Yes |
Denodo 7 | No |
Denodo 8 | No |
Dremio | No |
Dremio 11+ | No |
Exasol | No |
Firebolt | No |
Google BigQuery Legacy SQL | No |
Google BigQuery Standard SQL | Yes |
Google Cloud PostgreSQL | Yes |
Google Cloud SQL | No |
Google Spanner | No |
Greenplum | Yes |
HyperSQL | No |
IBM Netezza | No |
MariaDB | No |
Microsoft Azure PostgreSQL | Yes |
Microsoft Azure SQL Database | No |
Microsoft Azure Synapse Analytics | Yes |
Microsoft SQL Server 2008+ | No |
Microsoft SQL Server 2012+ | No |
Microsoft SQL Server 2016 | No |
Microsoft SQL Server 2017+ | No |
MongoBI | No |
MySQL | Yes |
MySQL 8.0.12+ | Yes |
Oracle | No |
Oracle ADWC | No |
PostgreSQL 9.5+ | Yes |
PostgreSQL pre-9.5 | Yes |
PrestoDB | No |
PrestoSQL | No |
SAP HANA 2+ | No |
SingleStore | No |
SingleStore 7+ | No |
Snowflake | Yes |
Teradata | No |
Trino | No |
Vector | No |
Vertica | Yes |