public interface BigQuery extends Service<BigQueryOptions>
An interface for Google Cloud BigQuery. See Also: Google Cloud BigQuery
Implements
com.google.cloud.Service<com.google.cloud.bigquery.BigQueryOptions>
Methods
cancel(JobId jobId)
public abstract boolean cancel(JobId jobId)
Sends a job cancel request. This call will return immediately. The job status can then be checked using either #getJob(JobId, JobOption...) or #getJob(String, JobOption...).
If the location of the job is not "US" or "EU", the jobId must specify the job location.
Example of cancelling a job.
String jobName = "my_job_name";
JobId jobId = JobId.of(jobName);
boolean success = bigquery.cancel(jobId);
if (success) {
  // job was cancelled
} else {
  // job was not found
}
Parameter
| Name | Description |
| --- | --- |
| jobId | JobId |
Returns
| Type | Description |
| --- | --- |
| boolean | true if cancel was requested successfully, false if the job was not found |
cancel(String jobId)
public abstract boolean cancel(String jobId)
Sends a job cancel request. This call will return immediately. The job status can then be checked using either #getJob(JobId, JobOption...) or #getJob(String, JobOption...).
If the location of the job is not "US" or "EU", #cancel(JobId) must be used instead.
Example of cancelling a job.
String jobName = "my_job_name";
boolean success = bigquery.cancel(jobName);
if (success) {
  // job was cancelled
} else {
  // job was not found
}
Parameter
| Name | Description |
| --- | --- |
| jobId | String |
Returns
| Type | Description |
| --- | --- |
| boolean | true if cancel was requested successfully, false if the job was not found |
create(DatasetInfo datasetInfo, BigQuery.DatasetOption[] options)
public abstract Dataset create(DatasetInfo datasetInfo, BigQuery.DatasetOption[] options)
Creates a new dataset.
Example of creating a dataset.
String datasetName = "my_dataset_name";
Dataset dataset = null;
DatasetInfo datasetInfo = DatasetInfo.newBuilder(datasetName).build();
try {
  dataset = bigquery.create(datasetInfo);
  // the dataset was created
} catch (BigQueryException e) {
  // the dataset was not created
}
Parameters
| Name | Description |
| --- | --- |
| datasetInfo | DatasetInfo |
| options | DatasetOption[] |
Returns
| Type | Description |
| --- | --- |
| Dataset | |
create(JobInfo jobInfo, BigQuery.JobOption[] options)
public abstract Job create(JobInfo jobInfo, BigQuery.JobOption[] options)
Creates a new job.
Example of loading a newline-delimited JSON file with textual fields from GCS to a table.
String datasetName = "my_dataset_name";
String tableName = "my_table_name";
String sourceUri = "gs://cloud-samples-data/bigquery/us-states/us-states.json";
TableId tableId = TableId.of(datasetName, tableName);
// Table field definition
Field[] fields =
    new Field[] {
      Field.of("name", LegacySQLTypeName.STRING),
      Field.of("post_abbr", LegacySQLTypeName.STRING)
    };
// Table schema definition
Schema schema = Schema.of(fields);
LoadJobConfiguration configuration =
    LoadJobConfiguration.builder(tableId, sourceUri)
        .setFormatOptions(FormatOptions.json())
        .setCreateDisposition(CreateDisposition.CREATE_IF_NEEDED)
        .setSchema(schema)
        .build();
// Load the table
Job loadJob = bigquery.create(JobInfo.of(configuration));
loadJob = loadJob.waitFor();
// Check the table
System.out.println("State: " + loadJob.getStatus().getState());
return ((StandardTableDefinition) bigquery.getTable(tableId).getDefinition()).getNumRows();
Example of creating a query job.
String query = "SELECT field FROM my_dataset_name.my_table_name";
Job job = null;
JobConfiguration jobConfiguration = QueryJobConfiguration.of(query);
JobInfo jobInfo = JobInfo.of(jobConfiguration);
try {
  job = bigquery.create(jobInfo);
} catch (BigQueryException e) {
  // the job was not created
}
Parameters
| Name | Description |
| --- | --- |
| jobInfo | JobInfo |
| options | JobOption[] |
Returns
| Type | Description |
| --- | --- |
| Job | |
create(RoutineInfo routineInfo, BigQuery.RoutineOption[] options)
public abstract Routine create(RoutineInfo routineInfo, BigQuery.RoutineOption[] options)
Creates a new routine.
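A sketch of creating a SQL scalar function routine, in the style of the other examples on this page (all identifiers and the function body are placeholders):
String datasetName = "my_dataset_name";
String routineName = "my_routine_name";
RoutineId routineId = RoutineId.of(datasetName, routineName);
// Define a single INT64 argument named "x"
RoutineArgument argument =
    RoutineArgument.newBuilder()
        .setName("x")
        .setDataType(StandardSQLDataType.newBuilder("INT64").build())
        .build();
RoutineInfo routineInfo =
    RoutineInfo.newBuilder(routineId)
        .setRoutineType("SCALAR_FUNCTION")
        .setLanguage("SQL")
        .setBody("x * 3")
        .setArguments(Collections.singletonList(argument))
        .build();
Routine routine = bigquery.create(routineInfo);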
Parameters
| Name | Description |
| --- | --- |
| routineInfo | RoutineInfo |
| options | RoutineOption[] |
Returns
| Type | Description |
| --- | --- |
| Routine | |
create(TableInfo tableInfo, BigQuery.TableOption[] options)
public abstract Table create(TableInfo tableInfo, BigQuery.TableOption[] options)
Creates a new table.
Example of creating a table.
String datasetName = "my_dataset_name";
String tableName = "my_table_name";
String fieldName = "string_field";
TableId tableId = TableId.of(datasetName, tableName);
// Table field definition
Field field = Field.of(fieldName, LegacySQLTypeName.STRING);
// Table schema definition
Schema schema = Schema.of(field);
TableDefinition tableDefinition = StandardTableDefinition.of(schema);
TableInfo tableInfo = TableInfo.newBuilder(tableId, tableDefinition).build();
Table table = bigquery.create(tableInfo);
Parameters
| Name | Description |
| --- | --- |
| tableInfo | TableInfo |
| options | TableOption[] |
Returns
| Type | Description |
| --- | --- |
| Table | |
createConnection()
public abstract Connection createConnection()
Creates a new BigQuery query connection used for executing queries (not the same as BigQuery connection properties). It uses the BigQuery Storage Read API for high-throughput queries by default. This overload creates a Connection with default ConnectionSettings for query execution, where defaults are set for numBufferedRows (20000), useReadApi (true), and useLegacySql (false).
Example of creating a query connection.
Connection connection = bigquery.createConnection();
Returns
| Type | Description |
| --- | --- |
| Connection | |
createConnection(@NonNull ConnectionSettings connectionSettings)
public abstract Connection createConnection(@NonNull ConnectionSettings connectionSettings)
Creates a new BigQuery query connection used for executing queries (not the same as BigQuery connection properties). It uses the BigQuery Storage Read API for high-throughput queries by default.
Example of creating a query connection.
ConnectionSettings connectionSettings =
    ConnectionSettings.newBuilder()
        .setRequestTimeout(10L)
        .setMaxResults(100L)
        .setUseQueryCache(true)
        .build();
Connection connection = bigquery.createConnection(connectionSettings);
Parameter
| Name | Description |
| --- | --- |
| connectionSettings | @org.checkerframework.checker.nullness.qual.NonNull com.google.cloud.bigquery.ConnectionSettings |
Returns
| Type | Description |
| --- | --- |
| Connection | |
delete(DatasetId datasetId, BigQuery.DatasetDeleteOption[] options)
public abstract boolean delete(DatasetId datasetId, BigQuery.DatasetDeleteOption[] options)
Deletes the requested dataset.
Example of deleting a dataset, even if non-empty.
String projectId = "my_project_id";
String datasetName = "my_dataset_name";
DatasetId datasetId = DatasetId.of(projectId, datasetName);
boolean deleted = bigquery.delete(datasetId, DatasetDeleteOption.deleteContents());
if (deleted) {
  // the dataset was deleted
} else {
  // the dataset was not found
}
Parameters
| Name | Description |
| --- | --- |
| datasetId | DatasetId |
| options | DatasetDeleteOption[] |
Returns
| Type | Description |
| --- | --- |
| boolean | true if the dataset was deleted, false if it was not found |
delete(JobId jobId)
public abstract boolean delete(JobId jobId)
Deletes the requested job.
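A sketch of deleting a job, following the pattern of the other delete examples (the job name is a placeholder):
String jobName = "my_job_name";
JobId jobId = JobId.of(jobName);
boolean deleted = bigquery.delete(jobId);
if (deleted) {
  // the job was deleted
} else {
  // the job was not found
}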
Parameter
| Name | Description |
| --- | --- |
| jobId | JobId |
Returns
| Type | Description |
| --- | --- |
| boolean | true if the job was deleted, false if it was not found |
delete(ModelId modelId)
public abstract boolean delete(ModelId modelId)
Deletes the requested model.
Example of deleting a model.
String projectId = "my_project_id";
String datasetName = "my_dataset_name";
String modelName = "my_model_name";
ModelId modelId = ModelId.of(projectId, datasetName, modelName);
boolean deleted = bigquery.delete(modelId);
if (deleted) {
  // the model was deleted
} else {
  // the model was not found
}
Parameter
| Name | Description |
| --- | --- |
| modelId | ModelId |
Returns
| Type | Description |
| --- | --- |
| boolean | true if the model was deleted, false if it was not found |
delete(RoutineId routineId)
public abstract boolean delete(RoutineId routineId)
Deletes the requested routine.
Example of deleting a routine.
String projectId = "my_project_id";
String datasetName = "my_dataset_id";
String routineName = "my_routine_id";
RoutineId routineId = RoutineId.of(projectId, datasetName, routineName);
boolean deleted = bigquery.delete(routineId);
if (deleted) {
  // the routine was deleted
} else {
  // the routine was not found
}
Parameter
| Name | Description |
| --- | --- |
| routineId | RoutineId |
Returns
| Type | Description |
| --- | --- |
| boolean | true if the routine was deleted, false if it was not found |
delete(TableId tableId)
public abstract boolean delete(TableId tableId)
Deletes the requested table.
Example of deleting a table.
String projectId = "my_project_id";
String datasetName = "my_dataset_name";
String tableName = "my_table_name";
TableId tableId = TableId.of(projectId, datasetName, tableName);
boolean deleted = bigquery.delete(tableId);
if (deleted) {
  // the table was deleted
} else {
  // the table was not found
}
Parameter
| Name | Description |
| --- | --- |
| tableId | TableId |
Returns
| Type | Description |
| --- | --- |
| boolean | true if the table was deleted, false if it was not found |
delete(String datasetId, BigQuery.DatasetDeleteOption[] options)
public abstract boolean delete(String datasetId, BigQuery.DatasetDeleteOption[] options)
Deletes the requested dataset.
Example of deleting a dataset from its id, even if non-empty.
String datasetName = "my_dataset_name";
boolean deleted = bigquery.delete(datasetName, DatasetDeleteOption.deleteContents());
if (deleted) {
  // the dataset was deleted
} else {
  // the dataset was not found
}
Parameters
| Name | Description |
| --- | --- |
| datasetId | String |
| options | DatasetDeleteOption[] |
Returns
| Type | Description |
| --- | --- |
| boolean | true if the dataset was deleted, false if it was not found |
delete(String datasetId, String tableId) (deprecated)
public abstract boolean delete(String datasetId, String tableId)
Deprecated. Now that BigQuery datasets contain multiple resource types, this invocation is ambiguous. Please use a more strongly typed version of #delete that leverages a non-ambiguous resource type identifier such as TableId.
Deletes the requested table.
Parameters
| Name | Description |
| --- | --- |
| datasetId | String |
| tableId | String |
Returns
| Type | Description |
| --- | --- |
| boolean | true if the table was deleted, false if it was not found |
getDataset(DatasetId datasetId, BigQuery.DatasetOption[] options)
public abstract Dataset getDataset(DatasetId datasetId, BigQuery.DatasetOption[] options)
Returns the requested dataset or null if not found.
Example of getting a dataset.
String projectId = "my_project_id";
String datasetName = "my_dataset_name";
DatasetId datasetId = DatasetId.of(projectId, datasetName);
Dataset dataset = bigquery.getDataset(datasetId);
Parameters
| Name | Description |
| --- | --- |
| datasetId | DatasetId |
| options | DatasetOption[] |
Returns
| Type | Description |
| --- | --- |
| Dataset | |
getDataset(String datasetId, BigQuery.DatasetOption[] options)
public abstract Dataset getDataset(String datasetId, BigQuery.DatasetOption[] options)
Returns the requested dataset or null if not found.
Example of getting a dataset.
String datasetName = "my_dataset";
Dataset dataset = bigquery.getDataset(datasetName);
Parameters
| Name | Description |
| --- | --- |
| datasetId | String |
| options | DatasetOption[] |
Returns
| Type | Description |
| --- | --- |
| Dataset | |
getIamPolicy(TableId tableId, BigQuery.IAMOption[] options)
public abstract Policy getIamPolicy(TableId tableId, BigQuery.IAMOption[] options)
Gets the IAM policy for a specified table.
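A sketch of reading a table's IAM policy, in the style of the other examples on this page (identifiers are placeholders):
String datasetName = "my_dataset_name";
String tableName = "my_table_name";
TableId tableId = TableId.of(datasetName, tableName);
Policy policy = bigquery.getIamPolicy(tableId);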
Parameters
| Name | Description |
| --- | --- |
| tableId | TableId |
| options | IAMOption[] |
Returns
| Type | Description |
| --- | --- |
| com.google.cloud.Policy | |
getJob(JobId jobId, BigQuery.JobOption[] options)
public abstract Job getJob(JobId jobId, BigQuery.JobOption[] options)
Returns the requested job or null if not found. If the location of the job is not "US" or "EU", the jobId must specify the job location.
Example of getting a job.
String jobName = "my_job_name";
JobId jobIdObject = JobId.of(jobName);
Job job = bigquery.getJob(jobIdObject);
if (job == null) {
  // job was not found
}
Parameters
| Name | Description |
| --- | --- |
| jobId | JobId |
| options | JobOption[] |
Returns
| Type | Description |
| --- | --- |
| Job | |
getJob(String jobId, BigQuery.JobOption[] options)
public abstract Job getJob(String jobId, BigQuery.JobOption[] options)
Returns the requested job or null if not found. If the location of the job is not "US" or "EU", #getJob(JobId, JobOption...) must be used instead.
Example of getting a job.
String jobName = "my_job_name";
Job job = bigquery.getJob(jobName);
if (job == null) {
  // job was not found
}
Parameters
| Name | Description |
| --- | --- |
| jobId | String |
| options | JobOption[] |
Returns
| Type | Description |
| --- | --- |
| Job | |
getModel(ModelId tableId, BigQuery.ModelOption[] options)
public abstract Model getModel(ModelId tableId, BigQuery.ModelOption[] options)
Returns the requested model or null if not found.
Example of getting a model.
String projectId = "my_project_id";
String datasetName = "my_dataset_name";
String modelName = "my_model_name";
ModelId modelId = ModelId.of(projectId, datasetName, modelName);
Model model = bigquery.getModel(modelId);
Parameters
| Name | Description |
| --- | --- |
| tableId | ModelId |
| options | ModelOption[] |
Returns
| Type | Description |
| --- | --- |
| Model | |
getModel(String datasetId, String modelId, BigQuery.ModelOption[] options)
public abstract Model getModel(String datasetId, String modelId, BigQuery.ModelOption[] options)
Returns the requested model or null if not found.
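A sketch of getting a model by dataset and model name, analogous to the ModelId-based example above (names are placeholders):
String datasetName = "my_dataset_name";
String modelName = "my_model_name";
Model model = bigquery.getModel(datasetName, modelName);
if (model == null) {
  // model was not found
}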
Parameters
| Name | Description |
| --- | --- |
| datasetId | String |
| modelId | String |
| options | ModelOption[] |
Returns
| Type | Description |
| --- | --- |
| Model | |
getQueryResults(JobId jobId, BigQuery.QueryResultsOption[] options)
public abstract QueryResponse getQueryResults(JobId jobId, BigQuery.QueryResultsOption[] options)
Returns results of the query associated with the provided job.
Users are encouraged to use Job#getQueryResults(QueryResultsOption...) instead.
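A sketch of polling for results by job id; the completion check assumes QueryResponse#getCompleted(), and the names are placeholders:
String jobName = "my_job_name";
JobId jobId = JobId.of(jobName);
QueryResponse response = bigquery.getQueryResults(jobId);
if (response.getCompleted()) {
  // the query has finished; results can now be listed
}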
Parameters
| Name | Description |
| --- | --- |
| jobId | JobId |
| options | QueryResultsOption[] |
Returns
| Type | Description |
| --- | --- |
| QueryResponse | |
getRoutine(RoutineId routineId, BigQuery.RoutineOption[] options)
public abstract Routine getRoutine(RoutineId routineId, BigQuery.RoutineOption[] options)
Returns the requested routine or null if not found.
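A sketch of getting a routine, in the style of the other examples on this page (names are placeholders):
String datasetName = "my_dataset_name";
String routineName = "my_routine_name";
RoutineId routineId = RoutineId.of(datasetName, routineName);
Routine routine = bigquery.getRoutine(routineId);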
Parameters
| Name | Description |
| --- | --- |
| routineId | RoutineId |
| options | RoutineOption[] |
Returns
| Type | Description |
| --- | --- |
| Routine | |
getRoutine(String datasetId, String routineId, BigQuery.RoutineOption[] options)
public abstract Routine getRoutine(String datasetId, String routineId, BigQuery.RoutineOption[] options)
Returns the requested routine or null if not found.
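A sketch of getting a routine by dataset and routine name (names are placeholders):
String datasetName = "my_dataset_name";
String routineName = "my_routine_name";
Routine routine = bigquery.getRoutine(datasetName, routineName);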
Parameters
| Name | Description |
| --- | --- |
| datasetId | String |
| routineId | String |
| options | RoutineOption[] |
Returns
| Type | Description |
| --- | --- |
| Routine | |
getTable(TableId tableId, BigQuery.TableOption[] options)
public abstract Table getTable(TableId tableId, BigQuery.TableOption[] options)
Returns the requested table or null if not found.
Example of getting a table.
String projectId = "my_project_id";
String datasetName = "my_dataset_name";
String tableName = "my_table_name";
TableId tableId = TableId.of(projectId, datasetName, tableName);
Table table = bigquery.getTable(tableId);
Parameters
| Name | Description |
| --- | --- |
| tableId | TableId |
| options | TableOption[] |
Returns
| Type | Description |
| --- | --- |
| Table | |
getTable(String datasetId, String tableId, BigQuery.TableOption[] options)
public abstract Table getTable(String datasetId, String tableId, BigQuery.TableOption[] options)
Returns the requested table or null if not found.
Example of getting a table.
String datasetName = "my_dataset_name";
String tableName = "my_table_name";
Table table = bigquery.getTable(datasetName, tableName);
Parameters
| Name | Description |
| --- | --- |
| datasetId | String |
| tableId | String |
| options | TableOption[] |
Returns
| Type | Description |
| --- | --- |
| Table | |
insertAll(InsertAllRequest request)
public abstract InsertAllResponse insertAll(InsertAllRequest request)
Sends an insert all request.
Example of inserting rows into a table without running a load job.
String datasetName = "my_dataset_name";
String tableName = "my_table_name";
TableId tableId = TableId.of(datasetName, tableName);
// Values of the row to insert
Map<String, Object> rowContent = new HashMap<>();
rowContent.put("booleanField", true);
// Bytes are passed in base64
rowContent.put("bytesField", "Cg0NDg0="); // 0xA, 0xD, 0xD, 0xE, 0xD in base64
// Records are passed as a map
Map<String, Object> recordsContent = new HashMap<>();
recordsContent.put("stringField", "Hello, World!");
rowContent.put("recordField", recordsContent);
InsertAllResponse response =
    bigquery.insertAll(
        InsertAllRequest.newBuilder(tableId)
            .addRow("rowId", rowContent)
            // More rows can be added in the same RPC by invoking .addRow() on the builder
            .build());
if (response.hasErrors()) {
  // If any of the insertions failed, this lets you inspect the errors
  for (Entry<Long, List<BigQueryError>> entry : response.getInsertErrors().entrySet()) {
    // inspect row error
  }
}
Parameter
| Name | Description |
| --- | --- |
| request | InsertAllRequest |
Returns
| Type | Description |
| --- | --- |
| InsertAllResponse | |
listDatasets(BigQuery.DatasetListOption[] options)
public abstract Page<Dataset> listDatasets(BigQuery.DatasetListOption[] options)
Lists the project's datasets. This method returns partial information on each dataset: (Dataset#getDatasetId(), Dataset#getFriendlyName() and Dataset#getGeneratedId()). To get complete information use either #getDataset(String, DatasetOption...) or #getDataset(DatasetId, DatasetOption...).
Example of listing datasets, specifying the page size.
// List datasets in the default project
Page<Dataset> datasets = bigquery.listDatasets(DatasetListOption.pageSize(100));
for (Dataset dataset : datasets.iterateAll()) {
  // do something with the dataset
}
Parameter
| Name | Description |
| --- | --- |
| options | DatasetListOption[] |
Returns
| Type | Description |
| --- | --- |
| Page<Dataset> | |
listDatasets(String projectId, BigQuery.DatasetListOption[] options)
public abstract Page<Dataset> listDatasets(String projectId, BigQuery.DatasetListOption[] options)
Lists the datasets in the provided project. This method returns partial information on each dataset: (Dataset#getDatasetId(), Dataset#getFriendlyName() and Dataset#getGeneratedId()). To get complete information use either #getDataset(String, DatasetOption...) or #getDataset(DatasetId, DatasetOption...).
Example of listing datasets in a project, specifying the page size.
String projectId = "my_project_id";
// List datasets in a specified project
Page<Dataset> datasets = bigquery.listDatasets(projectId, DatasetListOption.pageSize(100));
for (Dataset dataset : datasets.iterateAll()) {
  // do something with the dataset
}
Parameters
| Name | Description |
| --- | --- |
| projectId | String |
| options | DatasetListOption[] |
Returns
| Type | Description |
| --- | --- |
| Page<Dataset> | |
listJobs(BigQuery.JobListOption[] options)
public abstract Page<Job> listJobs(BigQuery.JobListOption[] options)
Lists the jobs.
Example of listing jobs, specifying the page size.
Page<Job> jobs = bigquery.listJobs(JobListOption.pageSize(100));
for (Job job : jobs.iterateAll()) {
  // do something with the job
}
Parameter
| Name | Description |
| --- | --- |
| options | JobListOption[] |
Returns
| Type | Description |
| --- | --- |
| Page<Job> | |
listModels(DatasetId datasetId, BigQuery.ModelListOption[] options)
public abstract Page<Model> listModels(DatasetId datasetId, BigQuery.ModelListOption[] options)
Lists the models in the dataset.
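A sketch of listing models with a page size, following the other list examples (names are placeholders):
String projectId = "my_project_id";
String datasetName = "my_dataset_name";
DatasetId datasetId = DatasetId.of(projectId, datasetName);
Page<Model> models = bigquery.listModels(datasetId, ModelListOption.pageSize(100));
for (Model model : models.iterateAll()) {
  // do something with the model
}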
Parameters
| Name | Description |
| --- | --- |
| datasetId | DatasetId |
| options | ModelListOption[] |
Returns
| Type | Description |
| --- | --- |
| Page<Model> | |
listModels(String datasetId, BigQuery.ModelListOption[] options)
public abstract Page<Model> listModels(String datasetId, BigQuery.ModelListOption[] options)
Lists the models in the dataset.
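A sketch of listing models by dataset name (the name is a placeholder):
String datasetName = "my_dataset_name";
Page<Model> models = bigquery.listModels(datasetName, ModelListOption.pageSize(100));
for (Model model : models.iterateAll()) {
  // do something with the model
}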
Parameters
| Name | Description |
| --- | --- |
| datasetId | String |
| options | ModelListOption[] |
Returns
| Type | Description |
| --- | --- |
| Page<Model> | |
listPartitions(TableId tableId)
public abstract List<String> listPartitions(TableId tableId)
Lists the partition ids present in the partitioned table.
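A sketch of listing a partitioned table's partition ids (names are placeholders):
String datasetName = "my_dataset_name";
String tableName = "my_partitioned_table_name";
TableId tableId = TableId.of(datasetName, tableName);
List<String> partitionIds = bigquery.listPartitions(tableId);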
Parameter
| Name | Description |
| --- | --- |
| tableId | TableId |
Returns
| Type | Description |
| --- | --- |
| List<String> | A list of the partition ids present in the partitioned table |
listRoutines(DatasetId datasetId, BigQuery.RoutineListOption[] options)
public abstract Page<Routine> listRoutines(DatasetId datasetId, BigQuery.RoutineListOption[] options)
Lists the routines in the specified dataset.
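A sketch of listing routines with a page size, following the other list examples (names are placeholders):
String projectId = "my_project_id";
String datasetName = "my_dataset_name";
DatasetId datasetId = DatasetId.of(projectId, datasetName);
Page<Routine> routines = bigquery.listRoutines(datasetId, RoutineListOption.pageSize(100));
for (Routine routine : routines.iterateAll()) {
  // do something with the routine
}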
Parameters
| Name | Description |
| --- | --- |
| datasetId | DatasetId |
| options | RoutineListOption[] |
Returns
| Type | Description |
| --- | --- |
| Page<Routine> | |
listRoutines(String datasetId, BigQuery.RoutineListOption[] options)
public abstract Page<Routine> listRoutines(String datasetId, BigQuery.RoutineListOption[] options)
Lists the routines in the specified dataset.
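A sketch of listing routines by dataset name (the name is a placeholder):
String datasetName = "my_dataset_name";
Page<Routine> routines = bigquery.listRoutines(datasetName, RoutineListOption.pageSize(100));
for (Routine routine : routines.iterateAll()) {
  // do something with the routine
}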
Parameters
| Name | Description |
| --- | --- |
| datasetId | String |
| options | RoutineListOption[] |
Returns
| Type | Description |
| --- | --- |
| Page<Routine> | |
listTableData(TableId tableId, BigQuery.TableDataListOption[] options)
public abstract TableResult listTableData(TableId tableId, BigQuery.TableDataListOption[] options)
Lists the table's rows.
Example of listing table rows, specifying the page size.
String datasetName = "my_dataset_name";
String tableName = "my_table_name";
TableId tableIdObject = TableId.of(datasetName, tableName);
// This example reads the result 100 rows per RPC call. If there's no need
// to limit the number, simply omit the option.
TableResult tableData =
    bigquery.listTableData(tableIdObject, TableDataListOption.pageSize(100));
for (FieldValueList row : tableData.iterateAll()) {
  // do something with the row
}
Parameters
| Name | Description |
| --- | --- |
| tableId | TableId |
| options | TableDataListOption[] |
Returns
| Type | Description |
| --- | --- |
| TableResult | |
listTableData(TableId tableId, Schema schema, BigQuery.TableDataListOption[] options)
public abstract TableResult listTableData(TableId tableId, Schema schema, BigQuery.TableDataListOption[] options)
Lists the table's rows. If the schema is not null, it is available to the FieldValueList iterated over.
Example of listing table rows with schema.
Schema schema =
    Schema.of(
        Field.of("word", LegacySQLTypeName.STRING),
        Field.of("word_count", LegacySQLTypeName.STRING),
        Field.of("corpus", LegacySQLTypeName.STRING),
        Field.of("corpus_date", LegacySQLTypeName.STRING));
TableResult tableData =
    bigquery.listTableData(TableId.of("bigquery-public-data", "samples", "shakespeare"), schema);
FieldValueList row = tableData.getValues().iterator().next();
System.out.println(row.get("word").getStringValue());
Parameters
| Name | Description |
| --- | --- |
| tableId | TableId |
| schema | Schema |
| options | TableDataListOption[] |
Returns
| Type | Description |
| --- | --- |
| TableResult | |
listTableData(String datasetId, String tableId, BigQuery.TableDataListOption[] options)
public abstract TableResult listTableData(String datasetId, String tableId, BigQuery.TableDataListOption[] options)
Lists the table's rows.
Example of listing table rows, specifying the page size.
String datasetName = "my_dataset_name";
String tableName = "my_table_name";
// This example reads the result 100 rows per RPC call. If there's no need
// to limit the number, simply omit the option.
TableResult tableData =
    bigquery.listTableData(datasetName, tableName, TableDataListOption.pageSize(100));
for (FieldValueList row : tableData.iterateAll()) {
  // do something with the row
}
Parameters
| Name | Description |
| --- | --- |
| datasetId | String |
| tableId | String |
| options | TableDataListOption[] |
Returns
| Type | Description |
| --- | --- |
| TableResult | |
listTableData(String datasetId, String tableId, Schema schema, BigQuery.TableDataListOption[] options)
public abstract TableResult listTableData(String datasetId, String tableId, Schema schema, BigQuery.TableDataListOption[] options)
Lists the table's rows. If the schema is not null, it is available to the FieldValueList iterated over.
Example of listing table rows with schema.
String datasetName = "my_dataset_name";
String tableName = "my_table_name";
Schema schema = ...;
String field = "field";
TableResult tableData = bigquery.listTableData(datasetName, tableName, schema);
for (FieldValueList row : tableData.iterateAll()) {
row.get(field);
}
Parameters
| Name | Description |
| --- | --- |
| datasetId | String |
| tableId | String |
| schema | Schema |
| options | TableDataListOption[] |
Returns
| Type | Description |
| --- | --- |
| TableResult | |
listTables(DatasetId datasetId, BigQuery.TableListOption[] options)
public abstract Page<Table> listTables(DatasetId datasetId, BigQuery.TableListOption[] options)
Lists the tables in the dataset. This method returns partial information on each table: (Table#getTableId(), Table#getFriendlyName(), Table#getGeneratedId() and type, which is part of Table#getDefinition()). To get complete information use either #getTable(TableId, TableOption...) or #getTable(String, String, TableOption...).
Example of listing the tables in a dataset.
String projectId = "my_project_id";
String datasetName = "my_dataset_name";
DatasetId datasetId = DatasetId.of(projectId, datasetName);
Page<Table> tables = bigquery.listTables(datasetId, TableListOption.pageSize(100));
for (Table table : tables.iterateAll()) {
  // do something with the table
}
Parameters
| Name | Description |
| --- | --- |
| datasetId | DatasetId |
| options | TableListOption[] |
Returns
| Type | Description |
| --- | --- |
| Page<Table> | |
listTables(String datasetId, BigQuery.TableListOption[] options)
public abstract Page<Table> listTables(String datasetId, BigQuery.TableListOption[] options)
Lists the tables in the dataset. This method returns partial information on each table: (Table#getTableId(), Table#getFriendlyName(), Table#getGeneratedId() and type, which is part of Table#getDefinition()). To get complete information use either #getTable(TableId, TableOption...) or #getTable(String, String, TableOption...).
Example of listing the tables in a dataset, specifying the page size.
String datasetName = "my_dataset_name";
Page<Table> tables = bigquery.listTables(datasetName, TableListOption.pageSize(100));
for (Table table : tables.iterateAll()) {
  // do something with the table
}
Parameters
| Name | Description |
| --- | --- |
| datasetId | String |
| options | TableListOption[] |
Returns
| Type | Description |
| --- | --- |
| Page<Table> | |
query(QueryJobConfiguration configuration, BigQuery.JobOption[] options)
public abstract TableResult query(QueryJobConfiguration configuration, BigQuery.JobOption[] options)
Runs the query associated with the request, using an internally-generated random JobId.
If the location of the job is not "US" or "EU", #query(QueryJobConfiguration, JobId, JobOption...) must be used instead.
This method cannot be used in conjunction with QueryJobConfiguration#dryRun() queries. Since dry-run queries are not actually executed, there's no way to retrieve results.
Example of running a query.
// BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
String query = "SELECT corpus FROM `bigquery-public-data.samples.shakespeare` GROUP BY corpus;";
QueryJobConfiguration queryConfig = QueryJobConfiguration.newBuilder(query).build();
// Print the results.
for (FieldValueList row : bigquery.query(queryConfig).iterateAll()) {
  for (FieldValue val : row) {
    System.out.printf("%s,", val.toString());
  }
  System.out.printf("%n");
}
This method supports query-related preview features via environment variables (enabled by setting the QUERY_PREVIEW_ENABLED environment variable to "TRUE"). Specifically, this method supports:
- Stateless queries: query execution without corresponding job metadata
The behaviour of these preview features is controlled by the BigQuery service as well.
Parameters
| Name | Description |
| --- | --- |
| configuration | QueryJobConfiguration |
| options | JobOption[] |
Returns
| Type | Description |
| --- | --- |
| TableResult | |
Exceptions
| Type | Description |
| --- | --- |
| InterruptedException | upon failure |
| JobException | upon failure |
query(QueryJobConfiguration configuration, JobId jobId, BigQuery.JobOption[] options)
public abstract TableResult query(QueryJobConfiguration configuration, JobId jobId, BigQuery.JobOption[] options)
Runs the query associated with the request, using the given JobId.
If the location of the job is not "US" or "EU", the jobId must specify the job location.
This method cannot be used in conjunction with QueryJobConfiguration#dryRun() queries. Since dry-run queries are not actually executed, there's no way to retrieve results.
See #query(QueryJobConfiguration, JobOption...) for examples on populating a QueryJobConfiguration.
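For jobs outside "US" and "EU", a location-bearing JobId might be built as follows (query, job id, and region are illustrative):
String query = "SELECT field FROM my_dataset_name.my_table_name";
QueryJobConfiguration queryConfig = QueryJobConfiguration.newBuilder(query).build();
// Attach an explicit job location to the job id
JobId jobId = JobId.newBuilder().setJob("my_job_id").setLocation("asia-northeast1").build();
TableResult result = bigquery.query(queryConfig, jobId);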
Parameters
| Name | Description |
| --- | --- |
| configuration | QueryJobConfiguration |
| jobId | JobId |
| options | JobOption[] |
Returns
| Type | Description |
| --- | --- |
| TableResult | |
Exceptions
| Type | Description |
| --- | --- |
| InterruptedException | upon failure |
| JobException | upon failure |
setIamPolicy(TableId tableId, Policy policy, BigQuery.IAMOption[] options)
public abstract Policy setIamPolicy(TableId tableId, Policy policy, BigQuery.IAMOption[] options)
Sets the IAM policy for a specified table.
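A sketch of granting a role on a table by reading, modifying, and writing back its policy (the role and member are illustrative):
String datasetName = "my_dataset_name";
String tableName = "my_table_name";
TableId tableId = TableId.of(datasetName, tableName);
Policy policy = bigquery.getIamPolicy(tableId);
policy =
    policy.toBuilder()
        .addIdentity(Role.of("roles/bigquery.dataViewer"), Identity.user("user@example.com"))
        .build();
Policy updatedPolicy = bigquery.setIamPolicy(tableId, policy);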
Parameters
| Name | Description |
| --- | --- |
| tableId | TableId |
| policy | com.google.cloud.Policy |
| options | IAMOption[] |
Returns
| Type | Description |
| --- | --- |
| com.google.cloud.Policy | |
testIamPermissions(TableId table, List<String> permissions, BigQuery.IAMOption[] options)
public abstract List<String> testIamPermissions(TableId table, List<String> permissions, BigQuery.IAMOption[] options)
Tests whether the caller holds specific permissions on a BigQuery table. The returned list represents the subset of granted permissions.
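A sketch of checking a single permission (the permission string and names are illustrative):
String datasetName = "my_dataset_name";
String tableName = "my_table_name";
TableId tableId = TableId.of(datasetName, tableName);
List<String> granted =
    bigquery.testIamPermissions(tableId, Collections.singletonList("bigquery.tables.getData"));
// granted contains "bigquery.tables.getData" only if the caller holds it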
Parameters
| Name | Description |
| --- | --- |
| table | TableId |
| permissions | List<String> |
| options | IAMOption[] |
Returns
| Type | Description |
| --- | --- |
| List<String> | The subset of the requested permissions that the caller holds |
update(DatasetInfo datasetInfo, BigQuery.DatasetOption[] options)
public abstract Dataset update(DatasetInfo datasetInfo, BigQuery.DatasetOption[] options)
Updates dataset information.
Example of updating a dataset by changing its description.
String datasetName = "my_dataset_name";
String newDescription = "new_description";
Dataset oldDataset = bigquery.getDataset(datasetName);
DatasetInfo datasetInfo = oldDataset.toBuilder().setDescription(newDescription).build();
Dataset newDataset = bigquery.update(datasetInfo);
Parameters
| Name | Description |
| --- | --- |
| datasetInfo | DatasetInfo |
| options | DatasetOption[] |
Returns
| Type | Description |
| --- | --- |
| Dataset | |
update(ModelInfo modelInfo, BigQuery.ModelOption[] options)
public abstract Model update(ModelInfo modelInfo, BigQuery.ModelOption[] options)
Updates model information.
Example of updating a model by changing its description.
String datasetName = "my_dataset_name";
String modelName = "my_model_name";
String newDescription = "new_description";
Model beforeModel = bigquery.getModel(datasetName, modelName);
ModelInfo modelInfo = beforeModel.toBuilder().setDescription(newDescription).build();
Model afterModel = bigquery.update(modelInfo);
Example of updating a model by changing its expiration.
String datasetName = "my_dataset_name";
String modelName = "my_model_name";
Model beforeModel = bigquery.getModel(datasetName, modelName);
// Set model to expire 5 days from now.
long expirationMillis = DateTime.now().plusDays(5).getMillis();
ModelInfo modelInfo = beforeModel.toBuilder().setExpirationTime(expirationMillis).build();
Model afterModel = bigquery.update(modelInfo);
Parameters
| Name | Description |
| --- | --- |
| modelInfo | ModelInfo |
| options | ModelOption[] |
Returns
| Type | Description |
| --- | --- |
| Model | |
update(RoutineInfo routineInfo, BigQuery.RoutineOption[] options)
public abstract Routine update(RoutineInfo routineInfo, BigQuery.RoutineOption[] options)
Updates routine information.
Parameters

| Name | Description |
|---|---|
| routineInfo | RoutineInfo |
| options | RoutineOption[] |

Returns

| Type | Description |
|---|---|
| Routine | |
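Unlike the other update methods, no example accompanies this one. A minimal sketch in the same style, assuming the routine's description is the field being changed (the dataset and routine names are placeholders):

```java
String datasetName = "my_dataset_name";
String routineName = "my_routine_name";
String newDescription = "new_description";
Routine beforeRoutine = bigquery.getRoutine(datasetName, routineName);
RoutineInfo routineInfo = beforeRoutine.toBuilder().setDescription(newDescription).build();
Routine afterRoutine = bigquery.update(routineInfo);
```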
update(TableInfo tableInfo, BigQuery.TableOption[] options)
public abstract Table update(TableInfo tableInfo, BigQuery.TableOption[] options)
Updates table information.
Example of updating a table by changing its description.
```java
String datasetName = "my_dataset_name";
String tableName = "my_table_name";
String newDescription = "new_description";
Table beforeTable = bigquery.getTable(datasetName, tableName);
TableInfo tableInfo = beforeTable.toBuilder().setDescription(newDescription).build();
Table afterTable = bigquery.update(tableInfo);
```
Example of updating a table by changing its expiration.
```java
String datasetName = "my_dataset_name";
String tableName = "my_table_name";
Table beforeTable = bigquery.getTable(datasetName, tableName);

// Set the table to expire 5 days from now.
long expirationMillis = DateTime.now().plusDays(5).getMillis();
TableInfo tableInfo = beforeTable.toBuilder().setExpirationTime(expirationMillis).build();
Table afterTable = bigquery.update(tableInfo);
```
Parameters

| Name | Description |
|---|---|
| tableInfo | TableInfo |
| options | TableOption[] |

Returns

| Type | Description |
|---|---|
| Table | |
writer(JobId jobId, WriteChannelConfiguration writeChannelConfiguration)
public abstract TableDataWriteChannel writer(JobId jobId, WriteChannelConfiguration writeChannelConfiguration)
Returns a channel to write data to be inserted into a BigQuery table. The data format and other options can be configured using the WriteChannelConfiguration parameter. If the job is not in "US" or "EU", the jobId must specify the job location.
Example of creating a channel with which to write to a table.
```java
String datasetName = "my_dataset_name";
String tableName = "my_table_name";
String csvData = "StringValue1\nStringValue2\n";
String location = "us";
TableId tableId = TableId.of(datasetName, tableName);
WriteChannelConfiguration writeChannelConfiguration =
    WriteChannelConfiguration.newBuilder(tableId).setFormatOptions(FormatOptions.csv()).build();
// The location must be specified; other fields can be auto-detected.
JobId jobId = JobId.newBuilder().setLocation(location).build();
TableDataWriteChannel writer = bigquery.writer(jobId, writeChannelConfiguration);
// Write data to the writer
try {
  writer.write(ByteBuffer.wrap(csvData.getBytes(Charsets.UTF_8)));
} finally {
  writer.close();
}
// Get the load job
Job job = writer.getJob();
job = job.waitFor();
LoadStatistics stats = job.getStatistics();
return stats.getOutputRows();
```
Parameters

| Name | Description |
|---|---|
| jobId | JobId |
| writeChannelConfiguration | WriteChannelConfiguration |

Returns

| Type | Description |
|---|---|
| TableDataWriteChannel | |
writer(WriteChannelConfiguration writeChannelConfiguration)
public abstract TableDataWriteChannel writer(WriteChannelConfiguration writeChannelConfiguration)
Returns a channel to write data to be inserted into a BigQuery table. Data format and other options can be configured using the WriteChannelConfiguration parameter. If the job is not in "US" or "EU", #writer(JobId, WriteChannelConfiguration) must be used instead.
Example of creating a channel with which to write to a table.
```java
String datasetName = "my_dataset_name";
String tableName = "my_table_name";
String csvData = "StringValue1\nStringValue2\n";
TableId tableId = TableId.of(datasetName, tableName);
WriteChannelConfiguration writeChannelConfiguration =
    WriteChannelConfiguration.newBuilder(tableId).setFormatOptions(FormatOptions.csv()).build();
TableDataWriteChannel writer = bigquery.writer(writeChannelConfiguration);
// Write data to the writer
try {
  writer.write(ByteBuffer.wrap(csvData.getBytes(Charsets.UTF_8)));
} finally {
  writer.close();
}
// Get the load job
Job job = writer.getJob();
job = job.waitFor();
LoadStatistics stats = job.getStatistics();
return stats.getOutputRows();
```
Example of writing a local file to a table.
```java
String datasetName = "my_dataset_name";
String tableName = "my_table_name";
Path csvPath = FileSystems.getDefault().getPath(".", "my-data.csv");
String location = "us";
TableId tableId = TableId.of(datasetName, tableName);
WriteChannelConfiguration writeChannelConfiguration =
    WriteChannelConfiguration.newBuilder(tableId).setFormatOptions(FormatOptions.csv()).build();
// The location must be specified; other fields can be auto-detected.
JobId jobId = JobId.newBuilder().setLocation(location).build();
TableDataWriteChannel writer = bigquery.writer(jobId, writeChannelConfiguration);
// Write data to the writer
try (OutputStream stream = Channels.newOutputStream(writer)) {
  Files.copy(csvPath, stream);
}
// Get the load job
Job job = writer.getJob();
job = job.waitFor();
LoadStatistics stats = job.getStatistics();
return stats.getOutputRows();
```
Parameter

| Name | Description |
|---|---|
| writeChannelConfiguration | WriteChannelConfiguration |

Returns

| Type | Description |
|---|---|
| TableDataWriteChannel | |