This document describes how to create and use tables in BigQuery. After creating a table, you can:
- Control access to your table data
- Get information about your tables
- List the tables in a dataset
- Get table metadata using meta-tables
For more information on managing tables, including updating table properties, copying a table, and deleting a table, see Managing Tables.
Table limitations
BigQuery tables are subject to the following limitations:
- Table names must be unique per dataset.
- The BigQuery web UI supports copying only one table at a time.
- When copying tables, the destination dataset must reside in the same location as the table being copied. For example, you cannot copy a table from an EU-based dataset to a US-based dataset.
- When copying multiple source tables to a destination table using the CLI or API, all source tables must have identical schemas.
- You can only delete one table at a time using the BigQuery web UI, the command-line tool, and the API.
- When exporting table data, the only supported destination is Google Cloud Storage.
- As you approach 50,000 or more tables in a dataset, enumerating them becomes slower. Enumeration performance suffers whether you use an API call, the BigQuery web UI, or the `__TABLES_SUMMARY__` meta-table. To improve web UI performance, you can use the `?minimal` parameter to limit the load operation to 30,000 tables per project. You add the parameter to the BigQuery web UI URL in the following format: `https://bigquery.cloud.google.com/queries/[PROJECT_NAME]?minimal`.
Creating a table
You can create a table in BigQuery:
- Manually, using the BigQuery web UI or the command-line tool's `bq mk` command
- Programmatically, by calling the `tables.insert` API method
- From query results
- By defining a table that references an external data source
- When you load data
When you create a table in BigQuery, the table name must be unique per dataset. The table name can:
- Contain up to 1,024 characters
- Contain letters (upper or lower case), numbers, and underscores
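As an illustration, the naming rules above can be checked with a simple regular expression. This is a sketch, not an official validator; the function name is hypothetical.

```python
import re

# Documented rules: up to 1,024 characters, consisting only of
# letters (upper or lower case), numbers, and underscores.
_TABLE_NAME_RE = re.compile(r"^[A-Za-z0-9_]{1,1024}$")

def is_valid_table_name(name):
    """Return True if `name` satisfies the documented naming rules."""
    return bool(_TABLE_NAME_RE.match(name))
```

For example, `is_valid_table_name("usa_names_2023")` returns `True`, while a name containing a hyphen or exceeding 1,024 characters returns `False`.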
Required permissions
To create a table, you must have `WRITER` access at the dataset level, or you must be assigned a project-level IAM role that includes `bigquery.tables.create` permissions. The following predefined, project-level IAM roles include `bigquery.tables.create` permissions:

In addition, because the `bigquery.user` role has `bigquery.datasets.create` permissions, a user assigned to the `bigquery.user` role can create tables in any dataset that user creates. When a user assigned to the `bigquery.user` role creates a dataset, that user is given `OWNER` access to the dataset. `OWNER` access to a dataset gives the user full control over it.
For more information on IAM roles and permissions in BigQuery, see Access Control. For more information on dataset-level roles, see Primitive roles for datasets.
Creating an empty table without a schema definition
When you create an empty table without a schema definition, you can define the schema when you load data into it, or for supported data formats, you can use schema auto-detection. You can also manually specify the schema before you load data.
To create an empty table without a schema definition:
Web UI
Click the down arrow icon next to your dataset name in the navigation and click Create new table.
On the Create table page:
- For Source Data click Create empty table.
- For Destination Table choose your dataset and enter the table name in the Destination table name field.
- In the Schema section, delete the existing entry by clicking the delete (X) icon to the right of the entry. Deleting the entry creates an empty table with no schema definition.
- In the Options section, leave the default values.
  - Partitioning: `None`
  - Encryption Type: `Default`. This option is for customer-managed encryption keys. By default, BigQuery encrypts customer content stored at rest.
- Click Create Table.
Command-line
Use the `mk` command with the `--table` flag (or `-t` shortcut). Optional parameters include `--expiration`, `--description`, `--time_partitioning_type`, `--destination_kms_key`, and `--label`. If you are creating a table in a project other than your default project, add the project ID to the dataset in the following format: `[PROJECT_ID]:[DATASET]`.

`--time_partitioning_type` and `--destination_kms_key` are not demonstrated here. For more information on `--time_partitioning_type`, see ingestion-time partitioned tables or partitioned tables. For more information on `--destination_kms_key`, see customer-managed encryption keys.
To create an empty table in an existing dataset, enter:
bq mk --table --expiration [INTEGER] --description "[DESCRIPTION]" --label [KEY:VALUE, KEY:VALUE] [PROJECT_ID]:[DATASET].[TABLE]
Where:
- `[INTEGER]` is the default lifetime (in seconds) for the table. The minimum value is 3600 seconds (one hour). The expiration time evaluates to the current time plus the integer value. If you set the expiration time when you create a table, the dataset's default table expiration setting is ignored.
- `[DESCRIPTION]` is a description of the table in quotes.
- `[KEY:VALUE]` is the key:value pair that represents a label. You can enter multiple labels using a comma-separated list.
- `[PROJECT_ID]` is your project ID.
- `[DATASET]` is a dataset in your project.
- `[TABLE]` is the name of the table you're creating.
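The expiration arithmetic described above can be sketched as follows (using epoch seconds; the helper name is hypothetical):

```python
import time

def expiration_epoch(expiration_seconds, now=None):
    # The table's expiration time is its creation time plus the
    # --expiration value; the minimum accepted value is 3600 seconds.
    assert expiration_seconds >= 3600, "minimum expiration is one hour"
    now = time.time() if now is None else now
    return now + expiration_seconds

# A table created at t=1,000,000,000 with --expiration 3600
# expires one hour later, at t=1,000,003,600.
print(expiration_epoch(3600, now=1_000_000_000))
```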
Examples:
Enter the following command to create a table named `mytable` in `mydataset` in your default project. The table expiration is set to 3600 seconds (1 hour), the description is set to `This is my table`, and the label is set to `organization:development`. The command uses the `-t` shortcut instead of `--table`.
bq mk -t --expiration 3600 --description "This is my table" --label organization:development mydataset.mytable
Enter the following command to create a table named `mytable` in `mydataset` in `myotherproject`. The table expiration is set to 3600 seconds (1 hour), the description is set to `This is my table`, and the label is set to `organization:development`. The command uses the `-t` shortcut instead of `--table`.
bq mk -t --expiration 3600 --description "This is my table" --label organization:development myotherproject:mydataset.mytable
After the table is created, you can update the table's expiration, description, and labels.
API
Call the `tables.insert` method with a defined table resource.
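A minimal table resource for `tables.insert` needs only the `tableReference`. The sketch below builds such a request body as a plain dictionary; the project, dataset, and table names are placeholders.

```python
import json

def make_table_resource(project_id, dataset_id, table_id):
    # Minimal body for the tables.insert API method: a table resource
    # containing only the required tableReference. Optional fields such
    # as schema, expirationTime, description, and labels are omitted.
    return {
        "tableReference": {
            "projectId": project_id,
            "datasetId": dataset_id,
            "tableId": table_id,
        }
    }

body = make_table_resource("myproject", "mydataset", "mytable")
print(json.dumps(body, indent=2))
```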
Python
For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.
Creating an empty table with a schema definition
When you create an empty table with a schema definition, you can:
- Enter the schema using the web UI
- Provide the schema inline using the command-line tool
- Submit a JSON schema file using the command-line tool
- Provide the schema in a table resource when calling the API's `tables.insert` method.
For more information on specifying a table schema, see Specifying a Schema.
After the table is created, you can load data into it or populate it by writing query results to it.
To create an empty table with a schema definition:
Web UI
- Click the down arrow icon next to your dataset name in the navigation and click Create new table.
- On the Create Table page, in the Source Data section, click Create empty table.
- On the Create Table page, in the Destination Table section:
  - For Table name, choose the appropriate dataset, and in the table name field, enter the name of the table you're creating.
  - Verify that Table type is set to Native table.
- In the Schema section, manually enter the schema definition. You can enter schema information manually by:
  - Clicking Edit as text and entering the table schema as a JSON array
  - Using Add Field to input the schema
- In the Options section, leave the default values.
- Click Create Table.
Command-line
Use the `mk` command with the `--table` or `-t` flag. You can supply table schema information inline or via a JSON schema file. Optional parameters include `--expiration`, `--description`, `--time_partitioning_type`, `--destination_kms_key`, and `--label`. If you are creating a table in a project other than your default project, add the project ID to the dataset in the following format: `[PROJECT_ID]:[DATASET]`.

`--time_partitioning_type` and `--destination_kms_key` are not demonstrated here. For more information on `--time_partitioning_type`, see ingestion-time partitioned tables or partitioned tables. For more information on `--destination_kms_key`, see customer-managed encryption keys.
To create an empty table in an existing dataset with a schema definition, enter:
bq mk --table --expiration [INTEGER] --description "[DESCRIPTION]" --label [KEY:VALUE, KEY:VALUE] [PROJECT_ID]:[DATASET].[TABLE] [SCHEMA]
Where:
- `[INTEGER]` is the default lifetime (in seconds) for the table. The minimum value is 3600 seconds (one hour). The expiration time evaluates to the current time plus the integer value. If you set the expiration time when you create a table, the dataset's default table expiration setting is ignored.
- `[DESCRIPTION]` is a description of the table in quotes.
- `[KEY:VALUE]` is the key:value pair that represents a label. You can enter multiple labels using a comma-separated list.
- `[PROJECT_ID]` is your project ID.
- `[DATASET]` is a dataset in your project.
- `[TABLE]` is the name of the table you're creating.
- `[SCHEMA]` is an inline schema definition in the format `[FIELD]:[DATA_TYPE],[FIELD]:[DATA_TYPE]` or the path to the JSON schema file on your local machine.
When you specify the schema on the command line, you cannot include a `RECORD` (`STRUCT`) type, you cannot include a column description, and you cannot specify the column's mode. All modes default to `NULLABLE`. To include descriptions, modes, and `RECORD` types, supply a JSON schema file instead.
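For illustration, the following sketch writes a JSON schema file of the kind described above, including a mode, a description, and a `RECORD` type. The field names are made up.

```python
import json

# Hypothetical schema: a REQUIRED string, a NULLABLE float with a
# description, and a RECORD with one nested field -- the three things
# an inline schema definition cannot express.
schema = [
    {"name": "qtr", "type": "STRING", "mode": "REQUIRED"},
    {"name": "sales", "type": "FLOAT", "mode": "NULLABLE",
     "description": "Quarterly sales figure"},
    {"name": "region", "type": "RECORD", "mode": "NULLABLE",
     "fields": [{"name": "country", "type": "STRING", "mode": "NULLABLE"}]},
]

with open("/tmp/myschema.json", "w") as f:
    json.dump(schema, f, indent=2)
```

The resulting file can then be passed as the `[SCHEMA]` argument to `bq mk`.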
Examples:
Enter the following command to create a table using an inline schema definition.
This command creates a table named `mytable` in `mydataset` in your default project. The table expiration is set to 3600 seconds (1 hour), the description is set to `This is my table`, and the label is set to `organization:development`. The command uses the `-t` shortcut instead of `--table`. The schema is specified inline as `qtr:STRING,sales:FLOAT,year:STRING`.
bq mk -t --expiration 3600 --description "This is my table" --label organization:development mydataset.mytable qtr:STRING,sales:FLOAT,year:STRING
Enter the following command to create a table using a JSON schema file. This command creates a table named `mytable` in `mydataset` in your default project. The table expiration is set to 3600 seconds (1 hour), the description is set to `This is my table`, and the label is set to `organization:development`. The path to the schema file is `/tmp/myschema.json`.
bq mk --table --expiration 3600 --description "This is my table" --label organization:development mydataset.mytable /tmp/myschema.json
Enter the following command to create a table using a JSON schema file. This command creates a table named `mytable` in `mydataset` in `myotherproject`. The table expiration is set to 3600 seconds (1 hour), the description is set to `This is my table`, and the label is set to `organization:development`. The path to the schema file is `/tmp/myschema.json`.
bq mk --table --expiration 3600 --description "This is my table" --label organization:development myotherproject:mydataset.mytable /tmp/myschema.json
After the table is created, you can update the table's expiration, description, and labels. You can also modify the schema definition.
API
Call the `tables.insert` method with a defined table resource.
C#
For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.
Go
For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.
Java
For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.
Node.js
For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.
PHP
For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.
Python
For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.
Ruby
For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.
Creating a table from a query result
To create a table from a query result, write the results to a destination table.
Web UI
- Go to the BigQuery web UI.
- Click the Compose query button.
- Enter a valid BigQuery SQL query in the New Query text area.
- Click Show Options.
- In the Destination Table section, click Select Table.
- In the Select Destination Table dialog:
  - For Project, choose the project where the destination table will be created.
  - For Dataset, choose the dataset that will store the table.
  - In the Table ID field, enter a table name. The name must be unique in the destination dataset. The table name can be up to 1,024 characters long and can contain only a-z, A-Z, 0-9, or _ (the underscore character).
  - Click OK.
- In the Destination Table section, for Write Preference, choose one of the following:
  - Write if empty: Writes the query results to the table only if the table is empty.
  - Append to table: Appends the query results to an existing table.
  - Overwrite table: Overwrites an existing table with the same name using the query results.
- For Processing Location, click Unspecified and choose your data's location. You can leave the processing location set to unspecified if your data is in the `US` or `EU` multi-region location. When your data is in the `US` or the `EU`, the processing location is automatically detected.
- Click Run query. This creates a query job that writes the query results to the table you specified.
Alternately, if you forget to specify a destination table before running your query, you can copy the temporary table to a permanent table by clicking the Save as Table button in the results window.
CLI
Enter the `bq query` command and specify the `--destination_table` flag to create a permanent table based on the query results. Specify the `--use_legacy_sql=false` flag to use standard SQL syntax. To write the query results to a table that is not in your default project, add the project ID to the dataset name in the following format: `[PROJECT_ID]:[DATASET]`.

Supply the `--location` flag and set the value to your location.

To control the write disposition for an existing destination table, specify one of the following optional flags:

- `--append_table`: If the destination table exists, the query results are appended to it.
- `--replace`: If the destination table exists, it is overwritten with the query results.

bq --location=[LOCATION] query --destination_table [PROJECT_ID]:[DATASET].[TABLE] --use_legacy_sql=false '[QUERY]'
Where:
- `[LOCATION]` is the name of the location used to process the query. The `--location` flag is optional if your data is in the `US` or the `EU` multi-region location. For example, if you are using BigQuery in the Tokyo region, set the flag's value to `asia-northeast1`. You can set a default value for the location using the .bigqueryrc file.
- `[PROJECT_ID]` is your project ID.
- `[DATASET]` is the name of the dataset that contains the table to which you are writing the query results.
- `[TABLE]` is the name of the table to which you're writing the query results.
- `[QUERY]` is a query in standard SQL syntax.
If no write disposition flag is specified, the default behavior is to write the results to the table only if it is empty. If the table exists and it is not empty, the following error is returned: `BigQuery error in query operation: Error processing job '[PROJECT_ID]:bqjob_123abc456789_00000e1234f_1': Already Exists: Table [PROJECT_ID]:[DATASET].[TABLE]`.
Examples:
Enter the following command to write query results to a destination table named `mytable` in `mydataset`. The dataset is in your default project. Since no write disposition flag is specified in the command, the table must be new or empty. Otherwise, an `Already exists` error is returned. The query retrieves data from the USA Name Data public dataset.
bq --location=US query --destination_table mydataset.mytable --use_legacy_sql=false 'SELECT name,number FROM `bigquery-public-data.usa_names.usa_1910_current` WHERE gender = "M" ORDER BY number DESC'
Enter the following command to use query results to overwrite a destination table named `mytable` in `mydataset`. The dataset is in your default project. The command uses the `--replace` flag to overwrite the destination table.
bq --location=US query --destination_table mydataset.mytable --replace --use_legacy_sql=false 'SELECT name,number FROM `bigquery-public-data.usa_names.usa_1910_current` WHERE gender = "M" ORDER BY number DESC'
Enter the following command to append query results to a destination table named `mytable` in `mydataset`. The dataset is in `myotherproject`, not your default project. The command uses the `--append_table` flag to append the query results to the destination table.
bq --location=US query --destination_table myotherproject:mydataset.mytable --append_table --use_legacy_sql=false 'SELECT name,number FROM `bigquery-public-data.usa_names.usa_1910_current` WHERE gender = "M" ORDER BY number DESC'
API
To save query results to a permanent table, call the `jobs.insert` method, configure a query job, and include a value for the `configuration.query.destinationTable` property. To control the write disposition for an existing destination table, configure the `configuration.query.writeDisposition` property.

Specify your location in the `location` property in the `jobReference` section of the job resource.
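As a sketch, the relevant portion of a `jobs.insert` request body can be built as a plain dictionary. The project, dataset, and table names are placeholders, and the helper function is hypothetical.

```python
def make_query_job_body(query, project_id, dataset_id, table_id,
                        location="US", write_disposition="WRITE_EMPTY"):
    # Query job configuration for jobs.insert: destinationTable names
    # the permanent table, writeDisposition controls behavior when the
    # table already exists (WRITE_EMPTY, WRITE_APPEND, or
    # WRITE_TRUNCATE), and location belongs in the jobReference section.
    return {
        "jobReference": {"projectId": project_id, "location": location},
        "configuration": {
            "query": {
                "query": query,
                "useLegacySql": False,
                "destinationTable": {
                    "projectId": project_id,
                    "datasetId": dataset_id,
                    "tableId": table_id,
                },
                "writeDisposition": write_disposition,
            }
        },
    }
```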
Java
For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.
To save query results to a permanent table, set the destination table to the desired TableId in a QueryJobConfiguration.
Python
To save query results to a permanent table, create a QueryJobConfig and set the destination to the desired TableReference. Pass the job configuration to the query method.
Creating a table that references an external data source
An external data source (also known as a federated data source) is a data source that you can query directly even though the data is not stored in BigQuery. Instead of loading or streaming the data, you create a table that references the external data source.
BigQuery offers support for querying data directly from:
You can query data in a supported external data source by creating a temporary or permanent table that references data stored in the external data source. For more information on working with external data sources, see:
Creating a table when you load data
When you load data into BigQuery, you can load data into a new table or partition, you can append data to an existing table or partition, or you can overwrite a table or partition. You do not need to create an empty table before loading data into it. You can create the new table and load your data at the same time.
When you load data into BigQuery, you can supply the table or partition schema, or for supported data formats, you can use schema auto-detection.
For more information on loading data, see Introduction to Loading Data into BigQuery.
Controlling access to tables
You cannot assign access controls directly to tables. You can control table access by configuring access controls at the dataset level or at the project level.
Dataset-level access controls specify the operations users, groups, and service accounts are allowed to perform on tables in that specific dataset. If you assign only dataset-level permissions, you must also assign a primitive or predefined, project-level role that provides access to the project, for example, `bigquery.user`.
Instead of granting access to individual datasets, you can assign predefined, project-level IAM roles that grant permissions to all table data in all datasets in a project.
You can also create IAM custom roles. If you create a custom role, the permissions you grant depend on the table operations you want the user, group, or service account to be able to perform.
For more information on roles and permissions, see:
- Understanding Roles in the IAM documentation
- BigQuery Access Controls
- Assigning access controls to datasets.
For information on working with table data, see Managing Table Data.
Using tables
Getting information about tables
You can get information about tables using the BigQuery web UI, using the `bq show` CLI command, or by calling the `tables.get` API method.
Required permissions
To get information about tables, you must be assigned the `READER` role on the dataset, or you must be assigned a project-level IAM role that includes `bigquery.tables.get` permissions. If you are granted `bigquery.tables.get` permissions at the project level, you can get information about all tables in the project. The following predefined, project-level IAM roles include `bigquery.tables.get` permissions:

In addition, a user assigned the `bigquery.user` role has `bigquery.datasets.create` permissions. This allows a user assigned to the `bigquery.user` role to get information about tables in any dataset that user creates. When a user assigned to the `bigquery.user` role creates a dataset, that user is given `OWNER` access to the dataset. `OWNER` access to a dataset gives the user full control over it and all the tables in it.
For more information on IAM roles and permissions in BigQuery, see Access Control. For more information on dataset-level roles, see Primitive roles for datasets.
Getting table information
To get information about tables:
Web UI
In the navigation pane, click the down arrow icon to the left of your dataset to expand it, or double-click the dataset name. This displays the tables and views in the dataset.
Click the table name.
Click Details. The Table Details page displays the table's description and table information.
Click the Schema tab to view the table's schema definition.
CLI
Issue the `bq show` command to display all table information. Use the `--schema` flag to display only table schema information. The `--format` flag can be used to control the output.

If you are getting information about a table in a project other than your default project, add the project ID to the dataset in the following format: `[PROJECT_ID]:[DATASET]`.
bq show --schema --format=prettyjson [PROJECT_ID]:[DATASET].[TABLE]
Where:
- `[PROJECT_ID]` is your project ID.
- `[DATASET]` is the name of the dataset.
- `[TABLE]` is the name of the table.
Examples:
Enter the following command to display all information about `mytable` in `mydataset`. `mydataset` is in your default project.
bq show --format=prettyjson mydataset.mytable
Enter the following command to display all information about `mytable` in `mydataset`. `mydataset` is in `myotherproject`, not your default project.
bq show --format=prettyjson myotherproject:mydataset.mytable
Enter the following command to display only schema information about `mytable` in `mydataset`. `mydataset` is in `myotherproject`, not your default project.
bq show --schema --format=prettyjson myotherproject:mydataset.mytable
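Because the `prettyjson` output is JSON, it can be post-processed with standard tools. The sketch below extracts the column names from a captured schema; the sample schema string is hypothetical, standing in for real `bq show --schema` output.

```python
import json

# Hypothetical output of `bq show --schema --format=prettyjson ...`,
# captured to a string. The schema output is a JSON array of field
# objects, each with at least "name" and "type" keys.
schema_json = """
[
  {"name": "qtr", "type": "STRING", "mode": "NULLABLE"},
  {"name": "sales", "type": "FLOAT", "mode": "NULLABLE"}
]
"""

# Pull out just the column names.
columns = [field["name"] for field in json.loads(schema_json)]
print(columns)
```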
API
Call the `tables.get` method and provide any relevant parameters.
Python
For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.
Listing tables in a dataset
You can list tables in datasets using the BigQuery web UI, using the `bq ls` CLI command, or by calling the `tables.list` API method.
Required permissions
To list tables in a dataset, you must be assigned the `READER` role on the dataset, or you must be assigned a project-level IAM role that includes `bigquery.tables.list` permissions. If you are granted `bigquery.tables.list` permissions at the project level, you can list tables in any dataset in the project. All predefined, project-level IAM roles include `bigquery.tables.list` permissions except for `bigquery.jobUser`.
For more information on IAM roles and permissions in BigQuery, see Access Control. For more information on dataset-level roles, see Primitive roles for datasets.
Listing tables
To list the tables in a dataset:
Web UI
- In the web UI, in the navigation pane, click the down arrow icon to the left of your dataset to expand it, or double-click the dataset name. This displays the tables and views in the dataset.
- Scroll through the list to see the tables in the dataset. Tables and views are identified by different icons.
Command-line
Issue the `bq ls` command. The `--format` flag can be used to control the output. If you are listing tables in a project other than your default project, add the project ID to the dataset in the following format: `[PROJECT_ID]:[DATASET]`.
bq ls --format=pretty [PROJECT_ID]:[DATASET]
Where:
- `[PROJECT_ID]` is your project ID.
- `[DATASET]` is the name of the dataset.
When you run the command, the `Type` field displays either `TABLE` or `VIEW`.
For example:
+-------------------------+-------+----------------------+-------------------+
| tableId                 | Type  | Labels               | Time Partitioning |
+-------------------------+-------+----------------------+-------------------+
| mytable                 | TABLE | department:shipping  |                   |
| myview                  | VIEW  |                      |                   |
+-------------------------+-------+----------------------+-------------------+
Examples:
Enter the following command to list tables in dataset `mydataset` in your default project.
bq ls --format=pretty mydataset
Enter the following command to list tables in dataset `mydataset` in `myotherproject`.
bq ls --format=pretty myotherproject:mydataset
API
To list tables using the API, call the `tables.list` method.
C#
For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.
Go
For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.
Java
For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.
Node.js
For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.
PHP
For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.
Python
For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.
Ruby
For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.
Getting table metadata using meta-tables
BigQuery offers some special tables whose contents represent metadata, such as the list of tables and views in a dataset. These "meta-tables" are read-only. To access metadata about the tables and views in a dataset, use the `__TABLES_SUMMARY__` meta-table in a query's `SELECT` statement. You can run the query using the BigQuery web UI, using the command-line tool's `bq query` command, or by calling the `jobs.insert` API method and configuring a query job.

A query that uses the `__TABLES_SUMMARY__` meta-table looks like the following:
SELECT [FIELD] FROM [DATASET].__TABLES_SUMMARY__
Where:
- `[DATASET]` is the name of your dataset.
- `[FIELD]` is one of the following:
| Value | Description |
|---|---|
| `project_id` | Name of the project. |
| `dataset_id` | Name of the dataset. |
| `table_id` | Name of the table or view. |
| `creation_time` | The time at which the table or view was created, in milliseconds since January 1, 1970 UTC. |
| `type` | An integer representing the table type: a regular table (1) or a view (2). |
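Since `creation_time` is in milliseconds since the Unix epoch and `type` is an integer code, query results often need a small amount of decoding. A sketch (the helper name is hypothetical):

```python
from datetime import datetime, timezone

# Integer codes from the `type` column of __TABLES_SUMMARY__.
TABLE_TYPES = {1: "TABLE", 2: "VIEW"}

def decode_row(creation_time_ms, type_code):
    """Convert a __TABLES_SUMMARY__ row's raw fields to readable values."""
    created = datetime.fromtimestamp(creation_time_ms / 1000, tz=timezone.utc)
    return created, TABLE_TYPES.get(type_code, "UNKNOWN")

# Example: the shakespeare table's creation_time from the sample output.
created, kind = decode_row(1335916045099, 1)
print(created.isoformat(), kind)
```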
Required permissions
To run a query job that uses the `__TABLES_SUMMARY__` meta-table, you must have `bigquery.jobs.create` permissions. The following predefined, project-level IAM roles include `bigquery.jobs.create` permissions:

You must also be assigned the `READER` role at the dataset level, or you must be assigned a project-level IAM role that includes `bigquery.tables.getData` permissions. All predefined, project-level IAM roles except `bigquery.user` and `bigquery.jobUser` have `bigquery.tables.getData` permissions.
Meta-table limitations
Meta-tables are subject to the following limitations:
- Generally, `__TABLES_SUMMARY__` is reasonably fast for datasets with up to a few thousand tables. For larger datasets, `__TABLES_SUMMARY__` becomes increasingly slow and may exceed available resources.
- Meta-tables cannot be used with the `tables.insert` method.
- Meta-tables cannot be used as a destination table.
- Meta-tables do not support table decorators in legacy SQL.
- Meta-tables do not appear when you list the tables in a dataset.
Meta-table examples
The following query retrieves all metadata for the `publicdata.samples` dataset.
Web UI
#standardSQL
SELECT * FROM `publicdata.samples.__TABLES_SUMMARY__`
Command-line
bq --location=US query --use_legacy_sql=false 'SELECT * FROM `publicdata.samples.__TABLES_SUMMARY__`'
The output looks like the following:
+------------+------------+-----------------+---------------+------+
| project_id | dataset_id | table_id        | creation_time | type |
+------------+------------+-----------------+---------------+------+
| publicdata | samples    | github_nested   | 1348782587310 | 1    |
| publicdata | samples    | github_timeline | 1335915950690 | 1    |
| publicdata | samples    | gsod            | 1335916040125 | 1    |
| publicdata | samples    | natality        | 1335916045005 | 1    |
| publicdata | samples    | shakespeare     | 1335916045099 | 1    |
| publicdata | samples    | trigrams        | 1335916127449 | 1    |
| publicdata | samples    | wikipedia       | 1335916132870 | 1    |
+------------+------------+-----------------+---------------+------+
The following query lists all tables and views in the `publicdata.samples` dataset.
Web UI
#standardSQL
SELECT table_id FROM `publicdata.samples.__TABLES_SUMMARY__`
Command-line
bq --location=US query --use_legacy_sql=false 'SELECT table_id FROM `publicdata.samples.__TABLES_SUMMARY__`'
The output looks like the following:
+-----------------+
| table_id        |
+-----------------+
| github_nested   |
| github_timeline |
| gsod            |
| natality        |
| shakespeare     |
| trigrams        |
| wikipedia       |
+-----------------+
The following query lists the type for each table in the `publicdata.samples` dataset.
Web UI
#standardSQL
SELECT table_id, type FROM `publicdata.samples.__TABLES_SUMMARY__`
Command-line
bq --location=US query --use_legacy_sql=false 'SELECT table_id, type FROM `publicdata.samples.__TABLES_SUMMARY__`'
The output looks like the following:
+-----------------+------+
| table_id        | type |
+-----------------+------+
| github_nested   | 1    |
| github_timeline | 1    |
| gsod            | 1    |
| natality        | 1    |
| shakespeare     | 1    |
| trigrams        | 1    |
| wikipedia       | 1    |
+-----------------+------+
Next steps
- For more information on datasets, see Introduction to Datasets.
- For more information on handling table data, see Managing Table Data.
- For more information on specifying table schemas, see Specifying a Schema.
- For more information on modifying table schemas, see Modifying Table Schemas.
- For more information on managing tables, see Managing Tables.