public static final class BigQueryDestination.Builder extends GeneratedMessageV3.Builder<BigQueryDestination.Builder> implements BigQueryDestinationOrBuilder
A BigQuery destination for exporting assets to.
Protobuf type google.cloud.asset.v1p7beta1.BigQueryDestination
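A builder instance is normally obtained from BigQueryDestination.newBuilder() and populated with the setters documented below. The following is a minimal, hedged sketch; the project, dataset, and table names are hypothetical placeholders.

```java
import com.google.cloud.asset.v1p7beta1.BigQueryDestination;

public class BigQueryDestinationExample {
  public static void main(String[] args) {
    // Build a destination that exports the snapshot to a hypothetical dataset/table,
    // overwriting the table if it already exists (force = true).
    BigQueryDestination destination =
        BigQueryDestination.newBuilder()
            .setDataset("projects/my-project/datasets/my_dataset") // placeholder names
            .setTable("my_asset_snapshot")
            .setForce(true)
            .build();

    System.out.println(destination);
  }
}
```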
Inheritance
Object > AbstractMessageLite.Builder<MessageType,BuilderType> > AbstractMessage.Builder<BuilderType> > GeneratedMessageV3.Builder > BigQueryDestination.Builder
Implements
BigQueryDestinationOrBuilder
Static Methods
getDescriptor()
public static final Descriptors.Descriptor getDescriptor()
Returns

| Type | Description |
| --- | --- |
| Descriptor | |
Methods
addRepeatedField(Descriptors.FieldDescriptor field, Object value)
public BigQueryDestination.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)
Parameters

| Name | Description |
| --- | --- |
| field | FieldDescriptor |
| value | Object |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | |
build()
public BigQueryDestination build()
Returns

| Type | Description |
| --- | --- |
| BigQueryDestination | |
buildPartial()
public BigQueryDestination buildPartial()
Returns

| Type | Description |
| --- | --- |
| BigQueryDestination | |
clear()
public BigQueryDestination.Builder clear()
Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | |
clearDataset()
public BigQueryDestination.Builder clearDataset()
Required. The BigQuery dataset in format "projects/projectId/datasets/datasetId", to which the snapshot result should be exported. If this dataset does not exist, the export call returns an INVALID_ARGUMENT error.
string dataset = 1 [(.google.api.field_behavior) = REQUIRED];
Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | This builder for chaining. |
clearField(Descriptors.FieldDescriptor field)
public BigQueryDestination.Builder clearField(Descriptors.FieldDescriptor field)
Parameter

| Name | Description |
| --- | --- |
| field | FieldDescriptor |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | |
clearForce()
public BigQueryDestination.Builder clearForce()
If the destination table already exists and this flag is TRUE, the table will be overwritten by the contents of the assets snapshot. If the flag is FALSE or unset and the destination table already exists, the export call returns an INVALID_ARGUMENT error.

bool force = 3;

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | This builder for chaining. |
clearOneof(Descriptors.OneofDescriptor oneof)
public BigQueryDestination.Builder clearOneof(Descriptors.OneofDescriptor oneof)
Parameter

| Name | Description |
| --- | --- |
| oneof | OneofDescriptor |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | |
clearPartitionSpec()
public BigQueryDestination.Builder clearPartitionSpec()
[partition_spec] determines whether to export to partitioned table(s) and how to partition the data.

If [partition_spec] is unset, or [partition_spec.partition_key] is unset or PARTITION_KEY_UNSPECIFIED, the snapshot results will be exported to non-partitioned table(s). [force] will decide whether to overwrite existing table(s).

If [partition_spec] is specified, the snapshot results will first be written to partitioned table(s) with two additional timestamp columns, readTime and requestTime, one of which will be the partition key. Secondly, if any destination table already exists, the export will first try to update the existing table's schema as necessary by appending additional columns. Then, if [force] is TRUE, the corresponding partition will be overwritten by the snapshot results (data in different partitions will remain intact); if [force] is unset or FALSE, the data will be appended. An error will be returned if the schema update or data append fails.

.google.cloud.asset.v1p7beta1.PartitionSpec partition_spec = 4;

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | |
clearSeparateTablesPerAssetType()
public BigQueryDestination.Builder clearSeparateTablesPerAssetType()
If this flag is TRUE, the snapshot results will be written to one or multiple tables, each of which contains results of one asset type. The [force] and [partition_spec] fields will apply to each of them.

Field [table] will be concatenated with "_" and the asset type names (see https://cloud.google.com/asset-inventory/docs/supported-asset-types for supported asset types) to construct per-asset-type table names, in which all non-alphanumeric characters like "." and "/" will be substituted by "_". Example: if field [table] is "mytable" and the snapshot results contain "storage.googleapis.com/Bucket" assets, the corresponding table name will be "mytable_storage_googleapis_com_Bucket". If any of these tables does not exist, a new table with the concatenated name will be created.

When [content_type] in the ExportAssetsRequest is RESOURCE, the schema of each table will include RECORD-type columns mapped to the nested fields in the Asset.resource.data field of that asset type (up to the 15 nested levels BigQuery supports; see https://cloud.google.com/bigquery/docs/nested-repeated#limitations). Fields nested more than 15 levels deep will be stored as a JSON-format string in a child column of their parent RECORD column.

If an error occurs when exporting to any table, the whole export call will return an error, but the export results that have already succeeded will persist. Example: if exporting to table_type_A succeeds while exporting to table_type_B fails during one export call, the results in table_type_A will persist and there will not be partial results persisting in a table.

bool separate_tables_per_asset_type = 5;

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | This builder for chaining. |
clearTable()
public BigQueryDestination.Builder clearTable()
Required. The BigQuery table to which the snapshot result should be written. If this table does not exist, a new table with the given name will be created.
string table = 2 [(.google.api.field_behavior) = REQUIRED];
Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | This builder for chaining. |
clone()
public BigQueryDestination.Builder clone()
Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | |
getDataset()
public String getDataset()
Required. The BigQuery dataset in format "projects/projectId/datasets/datasetId", to which the snapshot result should be exported. If this dataset does not exist, the export call returns an INVALID_ARGUMENT error.
string dataset = 1 [(.google.api.field_behavior) = REQUIRED];
Returns

| Type | Description |
| --- | --- |
| String | The dataset. |
getDatasetBytes()
public ByteString getDatasetBytes()
Required. The BigQuery dataset in format "projects/projectId/datasets/datasetId", to which the snapshot result should be exported. If this dataset does not exist, the export call returns an INVALID_ARGUMENT error.
string dataset = 1 [(.google.api.field_behavior) = REQUIRED];
Returns

| Type | Description |
| --- | --- |
| ByteString | The bytes for dataset. |
getDefaultInstanceForType()
public BigQueryDestination getDefaultInstanceForType()
Returns

| Type | Description |
| --- | --- |
| BigQueryDestination | |
getDescriptorForType()
public Descriptors.Descriptor getDescriptorForType()
Returns

| Type | Description |
| --- | --- |
| Descriptor | |
getForce()
public boolean getForce()
If the destination table already exists and this flag is TRUE, the table will be overwritten by the contents of the assets snapshot. If the flag is FALSE or unset and the destination table already exists, the export call returns an INVALID_ARGUMENT error.

bool force = 3;

Returns

| Type | Description |
| --- | --- |
| boolean | The force. |
getPartitionSpec()
public PartitionSpec getPartitionSpec()
[partition_spec] determines whether to export to partitioned table(s) and how to partition the data.

If [partition_spec] is unset, or [partition_spec.partition_key] is unset or PARTITION_KEY_UNSPECIFIED, the snapshot results will be exported to non-partitioned table(s). [force] will decide whether to overwrite existing table(s).

If [partition_spec] is specified, the snapshot results will first be written to partitioned table(s) with two additional timestamp columns, readTime and requestTime, one of which will be the partition key. Secondly, if any destination table already exists, the export will first try to update the existing table's schema as necessary by appending additional columns. Then, if [force] is TRUE, the corresponding partition will be overwritten by the snapshot results (data in different partitions will remain intact); if [force] is unset or FALSE, the data will be appended. An error will be returned if the schema update or data append fails.

.google.cloud.asset.v1p7beta1.PartitionSpec partition_spec = 4;

Returns

| Type | Description |
| --- | --- |
| PartitionSpec | The partitionSpec. |
getPartitionSpecBuilder()
public PartitionSpec.Builder getPartitionSpecBuilder()
[partition_spec] determines whether to export to partitioned table(s) and how to partition the data.

If [partition_spec] is unset, or [partition_spec.partition_key] is unset or PARTITION_KEY_UNSPECIFIED, the snapshot results will be exported to non-partitioned table(s). [force] will decide whether to overwrite existing table(s).

If [partition_spec] is specified, the snapshot results will first be written to partitioned table(s) with two additional timestamp columns, readTime and requestTime, one of which will be the partition key. Secondly, if any destination table already exists, the export will first try to update the existing table's schema as necessary by appending additional columns. Then, if [force] is TRUE, the corresponding partition will be overwritten by the snapshot results (data in different partitions will remain intact); if [force] is unset or FALSE, the data will be appended. An error will be returned if the schema update or data append fails.

.google.cloud.asset.v1p7beta1.PartitionSpec partition_spec = 4;

Returns

| Type | Description |
| --- | --- |
| PartitionSpec.Builder | |
getPartitionSpecOrBuilder()
public PartitionSpecOrBuilder getPartitionSpecOrBuilder()
[partition_spec] determines whether to export to partitioned table(s) and how to partition the data.

If [partition_spec] is unset, or [partition_spec.partition_key] is unset or PARTITION_KEY_UNSPECIFIED, the snapshot results will be exported to non-partitioned table(s). [force] will decide whether to overwrite existing table(s).

If [partition_spec] is specified, the snapshot results will first be written to partitioned table(s) with two additional timestamp columns, readTime and requestTime, one of which will be the partition key. Secondly, if any destination table already exists, the export will first try to update the existing table's schema as necessary by appending additional columns. Then, if [force] is TRUE, the corresponding partition will be overwritten by the snapshot results (data in different partitions will remain intact); if [force] is unset or FALSE, the data will be appended. An error will be returned if the schema update or data append fails.

.google.cloud.asset.v1p7beta1.PartitionSpec partition_spec = 4;

Returns

| Type | Description |
| --- | --- |
| PartitionSpecOrBuilder | |
getSeparateTablesPerAssetType()
public boolean getSeparateTablesPerAssetType()
If this flag is TRUE, the snapshot results will be written to one or multiple tables, each of which contains results of one asset type. The [force] and [partition_spec] fields will apply to each of them.

Field [table] will be concatenated with "_" and the asset type names (see https://cloud.google.com/asset-inventory/docs/supported-asset-types for supported asset types) to construct per-asset-type table names, in which all non-alphanumeric characters like "." and "/" will be substituted by "_". Example: if field [table] is "mytable" and the snapshot results contain "storage.googleapis.com/Bucket" assets, the corresponding table name will be "mytable_storage_googleapis_com_Bucket". If any of these tables does not exist, a new table with the concatenated name will be created.

When [content_type] in the ExportAssetsRequest is RESOURCE, the schema of each table will include RECORD-type columns mapped to the nested fields in the Asset.resource.data field of that asset type (up to the 15 nested levels BigQuery supports; see https://cloud.google.com/bigquery/docs/nested-repeated#limitations). Fields nested more than 15 levels deep will be stored as a JSON-format string in a child column of their parent RECORD column.

If an error occurs when exporting to any table, the whole export call will return an error, but the export results that have already succeeded will persist. Example: if exporting to table_type_A succeeds while exporting to table_type_B fails during one export call, the results in table_type_A will persist and there will not be partial results persisting in a table.

bool separate_tables_per_asset_type = 5;

Returns

| Type | Description |
| --- | --- |
| boolean | The separateTablesPerAssetType. |
getTable()
public String getTable()
Required. The BigQuery table to which the snapshot result should be written. If this table does not exist, a new table with the given name will be created.
string table = 2 [(.google.api.field_behavior) = REQUIRED];
Returns

| Type | Description |
| --- | --- |
| String | The table. |
getTableBytes()
public ByteString getTableBytes()
Required. The BigQuery table to which the snapshot result should be written. If this table does not exist, a new table with the given name will be created.
string table = 2 [(.google.api.field_behavior) = REQUIRED];
Returns

| Type | Description |
| --- | --- |
| ByteString | The bytes for table. |
hasPartitionSpec()
public boolean hasPartitionSpec()
[partition_spec] determines whether to export to partitioned table(s) and how to partition the data.

If [partition_spec] is unset, or [partition_spec.partition_key] is unset or PARTITION_KEY_UNSPECIFIED, the snapshot results will be exported to non-partitioned table(s). [force] will decide whether to overwrite existing table(s).

If [partition_spec] is specified, the snapshot results will first be written to partitioned table(s) with two additional timestamp columns, readTime and requestTime, one of which will be the partition key. Secondly, if any destination table already exists, the export will first try to update the existing table's schema as necessary by appending additional columns. Then, if [force] is TRUE, the corresponding partition will be overwritten by the snapshot results (data in different partitions will remain intact); if [force] is unset or FALSE, the data will be appended. An error will be returned if the schema update or data append fails.

.google.cloud.asset.v1p7beta1.PartitionSpec partition_spec = 4;

Returns

| Type | Description |
| --- | --- |
| boolean | Whether the partitionSpec field is set. |
internalGetFieldAccessorTable()
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()
Returns

| Type | Description |
| --- | --- |
| FieldAccessorTable | |
isInitialized()
public final boolean isInitialized()
Returns

| Type | Description |
| --- | --- |
| boolean | |
mergeFrom(BigQueryDestination other)
public BigQueryDestination.Builder mergeFrom(BigQueryDestination other)
Parameter

| Name | Description |
| --- | --- |
| other | BigQueryDestination |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | |
mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
public BigQueryDestination.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
Parameters

| Name | Description |
| --- | --- |
| input | CodedInputStream |
| extensionRegistry | ExtensionRegistryLite |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | |

Exceptions

| Type | Description |
| --- | --- |
| IOException | |
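As a rough sketch of how this overload can repopulate a builder from serialized bytes; the field values below are placeholders, and the round-trip through toByteArray() is only for illustration.

```java
import com.google.cloud.asset.v1p7beta1.BigQueryDestination;
import com.google.protobuf.CodedInputStream;
import com.google.protobuf.ExtensionRegistryLite;
import java.io.IOException;

public class MergeFromExample {
  public static void main(String[] args) throws IOException {
    // Serialize a message, then merge its bytes back into a fresh builder.
    byte[] bytes =
        BigQueryDestination.newBuilder()
            .setDataset("projects/my-project/datasets/my_dataset") // placeholder
            .setTable("my_table")
            .build()
            .toByteArray();

    BigQueryDestination.Builder builder = BigQueryDestination.newBuilder();
    builder.mergeFrom(
        CodedInputStream.newInstance(bytes), ExtensionRegistryLite.getEmptyRegistry());

    System.out.println(builder.getTable()); // prints "my_table"
  }
}
```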
mergeFrom(Message other)
public BigQueryDestination.Builder mergeFrom(Message other)
Parameter

| Name | Description |
| --- | --- |
| other | Message |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | |
mergePartitionSpec(PartitionSpec value)
public BigQueryDestination.Builder mergePartitionSpec(PartitionSpec value)
[partition_spec] determines whether to export to partitioned table(s) and how to partition the data.

If [partition_spec] is unset, or [partition_spec.partition_key] is unset or PARTITION_KEY_UNSPECIFIED, the snapshot results will be exported to non-partitioned table(s). [force] will decide whether to overwrite existing table(s).

If [partition_spec] is specified, the snapshot results will first be written to partitioned table(s) with two additional timestamp columns, readTime and requestTime, one of which will be the partition key. Secondly, if any destination table already exists, the export will first try to update the existing table's schema as necessary by appending additional columns. Then, if [force] is TRUE, the corresponding partition will be overwritten by the snapshot results (data in different partitions will remain intact); if [force] is unset or FALSE, the data will be appended. An error will be returned if the schema update or data append fails.

.google.cloud.asset.v1p7beta1.PartitionSpec partition_spec = 4;

Parameter

| Name | Description |
| --- | --- |
| value | PartitionSpec |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | |
mergeUnknownFields(UnknownFieldSet unknownFields)
public final BigQueryDestination.Builder mergeUnknownFields(UnknownFieldSet unknownFields)
Parameter

| Name | Description |
| --- | --- |
| unknownFields | UnknownFieldSet |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | |
setDataset(String value)
public BigQueryDestination.Builder setDataset(String value)
Required. The BigQuery dataset in format "projects/projectId/datasets/datasetId", to which the snapshot result should be exported. If this dataset does not exist, the export call returns an INVALID_ARGUMENT error.
string dataset = 1 [(.google.api.field_behavior) = REQUIRED];
Parameter

| Name | Description |
| --- | --- |
| value | String The dataset to set. |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | This builder for chaining. |
setDatasetBytes(ByteString value)
public BigQueryDestination.Builder setDatasetBytes(ByteString value)
Required. The BigQuery dataset in format "projects/projectId/datasets/datasetId", to which the snapshot result should be exported. If this dataset does not exist, the export call returns an INVALID_ARGUMENT error.
string dataset = 1 [(.google.api.field_behavior) = REQUIRED];
Parameter

| Name | Description |
| --- | --- |
| value | ByteString The bytes for dataset to set. |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | This builder for chaining. |
setField(Descriptors.FieldDescriptor field, Object value)
public BigQueryDestination.Builder setField(Descriptors.FieldDescriptor field, Object value)
Parameters

| Name | Description |
| --- | --- |
| field | FieldDescriptor |
| value | Object |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | |
setForce(boolean value)
public BigQueryDestination.Builder setForce(boolean value)
If the destination table already exists and this flag is TRUE, the table will be overwritten by the contents of the assets snapshot. If the flag is FALSE or unset and the destination table already exists, the export call returns an INVALID_ARGUMENT error.

bool force = 3;

Parameter

| Name | Description |
| --- | --- |
| value | boolean The force to set. |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | This builder for chaining. |
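A short, hedged sketch of the force flag's effect when re-exporting to an existing table; the dataset and table names are placeholders.

```java
import com.google.cloud.asset.v1p7beta1.BigQueryDestination;

public class ForceFlagExample {
  public static void main(String[] args) {
    // force = true: an existing destination table will be overwritten by the snapshot.
    BigQueryDestination overwrite =
        BigQueryDestination.newBuilder()
            .setDataset("projects/my-project/datasets/my_dataset") // placeholder
            .setTable("existing_table")
            .setForce(true)
            .build();

    // force unset (defaults to false): exporting to an already-existing table makes
    // the export call return INVALID_ARGUMENT instead of overwriting it.
    BigQueryDestination failIfExists = overwrite.toBuilder().clearForce().build();

    System.out.println(overwrite.getForce());    // true
    System.out.println(failIfExists.getForce()); // false
  }
}
```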
setPartitionSpec(PartitionSpec value)
public BigQueryDestination.Builder setPartitionSpec(PartitionSpec value)
[partition_spec] determines whether to export to partitioned table(s) and how to partition the data.

If [partition_spec] is unset, or [partition_spec.partition_key] is unset or PARTITION_KEY_UNSPECIFIED, the snapshot results will be exported to non-partitioned table(s). [force] will decide whether to overwrite existing table(s).

If [partition_spec] is specified, the snapshot results will first be written to partitioned table(s) with two additional timestamp columns, readTime and requestTime, one of which will be the partition key. Secondly, if any destination table already exists, the export will first try to update the existing table's schema as necessary by appending additional columns. Then, if [force] is TRUE, the corresponding partition will be overwritten by the snapshot results (data in different partitions will remain intact); if [force] is unset or FALSE, the data will be appended. An error will be returned if the schema update or data append fails.

.google.cloud.asset.v1p7beta1.PartitionSpec partition_spec = 4;

Parameter

| Name | Description |
| --- | --- |
| value | PartitionSpec |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | |
setPartitionSpec(PartitionSpec.Builder builderForValue)
public BigQueryDestination.Builder setPartitionSpec(PartitionSpec.Builder builderForValue)
[partition_spec] determines whether to export to partitioned table(s) and how to partition the data.

If [partition_spec] is unset, or [partition_spec.partition_key] is unset or PARTITION_KEY_UNSPECIFIED, the snapshot results will be exported to non-partitioned table(s). [force] will decide whether to overwrite existing table(s).

If [partition_spec] is specified, the snapshot results will first be written to partitioned table(s) with two additional timestamp columns, readTime and requestTime, one of which will be the partition key. Secondly, if any destination table already exists, the export will first try to update the existing table's schema as necessary by appending additional columns. Then, if [force] is TRUE, the corresponding partition will be overwritten by the snapshot results (data in different partitions will remain intact); if [force] is unset or FALSE, the data will be appended. An error will be returned if the schema update or data append fails.

.google.cloud.asset.v1p7beta1.PartitionSpec partition_spec = 4;

Parameter

| Name | Description |
| --- | --- |
| builderForValue | PartitionSpec.Builder |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | |
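A minimal sketch of supplying a partition spec through the builder overload above. It assumes PartitionSpec.PartitionKey.READ_TIME is available in this version of the API, and the dataset and table names are placeholders.

```java
import com.google.cloud.asset.v1p7beta1.BigQueryDestination;
import com.google.cloud.asset.v1p7beta1.PartitionSpec;

public class PartitionSpecExample {
  public static void main(String[] args) {
    // Export to tables partitioned on the readTime timestamp column; force = true
    // means the matching partition is overwritten while other partitions stay intact.
    BigQueryDestination destination =
        BigQueryDestination.newBuilder()
            .setDataset("projects/my-project/datasets/my_dataset") // placeholder
            .setTable("partitioned_snapshot")
            .setPartitionSpec(
                PartitionSpec.newBuilder()
                    .setPartitionKey(PartitionSpec.PartitionKey.READ_TIME)) // assumed enum value
            .setForce(true)
            .build();

    System.out.println(destination.hasPartitionSpec()); // true
  }
}
```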
setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
public BigQueryDestination.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
Parameters

| Name | Description |
| --- | --- |
| field | FieldDescriptor |
| index | int |
| value | Object |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | |
setSeparateTablesPerAssetType(boolean value)
public BigQueryDestination.Builder setSeparateTablesPerAssetType(boolean value)
If this flag is TRUE, the snapshot results will be written to one or multiple tables, each of which contains results of one asset type. The [force] and [partition_spec] fields will apply to each of them.

Field [table] will be concatenated with "_" and the asset type names (see https://cloud.google.com/asset-inventory/docs/supported-asset-types for supported asset types) to construct per-asset-type table names, in which all non-alphanumeric characters like "." and "/" will be substituted by "_". Example: if field [table] is "mytable" and the snapshot results contain "storage.googleapis.com/Bucket" assets, the corresponding table name will be "mytable_storage_googleapis_com_Bucket". If any of these tables does not exist, a new table with the concatenated name will be created.

When [content_type] in the ExportAssetsRequest is RESOURCE, the schema of each table will include RECORD-type columns mapped to the nested fields in the Asset.resource.data field of that asset type (up to the 15 nested levels BigQuery supports; see https://cloud.google.com/bigquery/docs/nested-repeated#limitations). Fields nested more than 15 levels deep will be stored as a JSON-format string in a child column of their parent RECORD column.

If an error occurs when exporting to any table, the whole export call will return an error, but the export results that have already succeeded will persist. Example: if exporting to table_type_A succeeds while exporting to table_type_B fails during one export call, the results in table_type_A will persist and there will not be partial results persisting in a table.

bool separate_tables_per_asset_type = 5;

Parameter

| Name | Description |
| --- | --- |
| value | boolean The separateTablesPerAssetType to set. |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | This builder for chaining. |
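A rough illustration of the per-asset-type naming described above; the dataset name is a placeholder, and the table name and asset type come from the field documentation's own example.

```java
import com.google.cloud.asset.v1p7beta1.BigQueryDestination;

public class SeparateTablesExample {
  public static void main(String[] args) {
    // With this flag set, each asset type goes to its own table. For example, results
    // for "storage.googleapis.com/Bucket" would land in "mytable_storage_googleapis_com_Bucket".
    BigQueryDestination destination =
        BigQueryDestination.newBuilder()
            .setDataset("projects/my-project/datasets/my_dataset") // placeholder
            .setTable("mytable")
            .setSeparateTablesPerAssetType(true)
            .build();

    System.out.println(destination.getSeparateTablesPerAssetType()); // true
  }
}
```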
setTable(String value)
public BigQueryDestination.Builder setTable(String value)
Required. The BigQuery table to which the snapshot result should be written. If this table does not exist, a new table with the given name will be created.
string table = 2 [(.google.api.field_behavior) = REQUIRED];
Parameter

| Name | Description |
| --- | --- |
| value | String The table to set. |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | This builder for chaining. |
setTableBytes(ByteString value)
public BigQueryDestination.Builder setTableBytes(ByteString value)
Required. The BigQuery table to which the snapshot result should be written. If this table does not exist, a new table with the given name will be created.
string table = 2 [(.google.api.field_behavior) = REQUIRED];
Parameter

| Name | Description |
| --- | --- |
| value | ByteString The bytes for table to set. |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | This builder for chaining. |
setUnknownFields(UnknownFieldSet unknownFields)
public final BigQueryDestination.Builder setUnknownFields(UnknownFieldSet unknownFields)
Parameter

| Name | Description |
| --- | --- |
| unknownFields | UnknownFieldSet |

Returns

| Type | Description |
| --- | --- |
| BigQueryDestination.Builder | |