Cloud Data Loss Prevention (DLP) V2 API - Class Google::Cloud::Dlp::V2::StorageConfig::TimespanConfig (v0.20.0)

Reference documentation and code samples for the Cloud Data Loss Prevention (DLP) V2 API class Google::Cloud::Dlp::V2::StorageConfig::TimespanConfig.

Configuration of the timespan of the items to include in scanning. Currently only supported when inspecting Cloud Storage and BigQuery.
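For example, a minimal sketch of building a TimespanConfig and attaching it to a StorageConfig; the 24-hour window below is an illustrative choice, not a requirement.

```ruby
require "google/cloud/dlp/v2"

# Restrict scanning to items modified in the last 24 hours (illustrative window).
now = Time.now.to_i

timespan = Google::Cloud::Dlp::V2::StorageConfig::TimespanConfig.new(
  start_time: Google::Protobuf::Timestamp.new(seconds: now - 24 * 60 * 60),
  end_time:   Google::Protobuf::Timestamp.new(seconds: now)
)

# A TimespanConfig is supplied as the timespan_config of a StorageConfig.
storage_config = Google::Cloud::Dlp::V2::StorageConfig.new(timespan_config: timespan)
```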

Inherits

  • Object

Extended By

  • Google::Protobuf::MessageExts::ClassMethods

Includes

  • Google::Protobuf::MessageExts

Methods

#enable_auto_population_of_timespan_config

def enable_auto_population_of_timespan_config() -> ::Boolean
Returns
  • (::Boolean) — When the job is started by a JobTrigger, the service automatically determines a valid start_time to avoid scanning files that have not been modified since the last time the JobTrigger executed. The start_time is based on the time of the last execution of the JobTrigger, or on the timespan end_time used in that run.

#enable_auto_population_of_timespan_config=

def enable_auto_population_of_timespan_config=(value) -> ::Boolean
Parameter
  • value (::Boolean) — When the job is started by a JobTrigger, the service automatically determines a valid start_time to avoid scanning files that have not been modified since the last time the JobTrigger executed. The start_time is based on the time of the last execution of the JobTrigger, or on the timespan end_time used in that run.
Returns
  • (::Boolean) — When the job is started by a JobTrigger, the service automatically determines a valid start_time to avoid scanning files that have not been modified since the last time the JobTrigger executed. The start_time is based on the time of the last execution of the JobTrigger, or on the timespan end_time used in that run.
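For example, a short sketch of opting in to automatic timespan population; the field name follows the accessors above, and the setting only has an effect when the job is started by a JobTrigger.

```ruby
require "google/cloud/dlp/v2"

# Let the service derive start_time from the JobTrigger's previous run
# rather than supplying an explicit window.
timespan = Google::Cloud::Dlp::V2::StorageConfig::TimespanConfig.new(
  enable_auto_population_of_timespan_config: true
)

timespan.enable_auto_population_of_timespan_config # => true
```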

#end_time

def end_time() -> ::Google::Protobuf::Timestamp
Returns
  • (::Google::Protobuf::Timestamp) — Exclude files, tables, or rows newer than this value. If not set, no upper time limit is applied.

#end_time=

def end_time=(value) -> ::Google::Protobuf::Timestamp
Parameter
  • value (::Google::Protobuf::Timestamp) — Exclude files, tables, or rows newer than this value. If not set, no upper time limit is applied.
Returns
  • (::Google::Protobuf::Timestamp) — Exclude files, tables, or rows newer than this value. If not set, no upper time limit is applied.

#start_time

def start_time() -> ::Google::Protobuf::Timestamp
Returns
  • (::Google::Protobuf::Timestamp) — Exclude files, tables, or rows older than this value. If not set, no lower time limit is applied.

#start_time=

def start_time=(value) -> ::Google::Protobuf::Timestamp
Parameter
  • value (::Google::Protobuf::Timestamp) — Exclude files, tables, or rows older than this value. If not set, no lower time limit is applied.
Returns
  • (::Google::Protobuf::Timestamp) — Exclude files, tables, or rows older than this value. If not set, no lower time limit is applied.
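For example, a sketch of setting an explicit scan window through these accessors; the dates are placeholders, and the timestamps are built directly with Google::Protobuf::Timestamp.new as seconds since the Unix epoch.

```ruby
require "google/cloud/dlp/v2"

timespan = Google::Cloud::Dlp::V2::StorageConfig::TimespanConfig.new

# Scan only items modified during January 2024 (placeholder dates).
timespan.start_time = Google::Protobuf::Timestamp.new(seconds: Time.utc(2024, 1, 1).to_i)
timespan.end_time   = Google::Protobuf::Timestamp.new(seconds: Time.utc(2024, 2, 1).to_i)
```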

#timestamp_field

def timestamp_field() -> ::Google::Cloud::Dlp::V2::FieldId
Returns
  • (::Google::Cloud::Dlp::V2::FieldId) — Specification of the field containing the timestamp of scanned items. Used for data sources like Datastore and BigQuery.

    For BigQuery

    If this value is not specified and the table was modified between the given start and end times, the entire table will be scanned. If this value is specified, then rows are filtered based on the given start and end times. Rows with a NULL value in the provided BigQuery column are skipped. Valid data types of the provided BigQuery column are: INTEGER, DATE, TIMESTAMP, and DATETIME.

    If your BigQuery table is partitioned at ingestion time, you can use any of the following pseudo-columns as your timestamp field. When used with Cloud DLP, these pseudo-column names are case sensitive.

    • _PARTITIONTIME
    • _PARTITIONDATE
    • _PARTITION_LOAD_TIME

    For Datastore

    If this value is specified, then entities are filtered based on the given start and end times. If an entity does not contain the provided timestamp property or contains empty or invalid values, then it is included. The only valid data type for the provided timestamp property is TIMESTAMP.

    See the known issue related to this operation.
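For example, a sketch of pointing timestamp_field at the _PARTITIONTIME pseudo-column of an ingestion-time partitioned BigQuery table, as described above.

```ruby
require "google/cloud/dlp/v2"

# Filter rows of an ingestion-time partitioned table by partition time.
# Pseudo-column names are case sensitive.
timespan = Google::Cloud::Dlp::V2::StorageConfig::TimespanConfig.new(
  timestamp_field: Google::Cloud::Dlp::V2::FieldId.new(name: "_PARTITIONTIME")
)
```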

#timestamp_field=

def timestamp_field=(value) -> ::Google::Cloud::Dlp::V2::FieldId
Parameter
  • value (::Google::Cloud::Dlp::V2::FieldId) — Specification of the field containing the timestamp of scanned items. Used for data sources like Datastore and BigQuery.

    For BigQuery

    If this value is not specified and the table was modified between the given start and end times, the entire table will be scanned. If this value is specified, then rows are filtered based on the given start and end times. Rows with a NULL value in the provided BigQuery column are skipped. Valid data types of the provided BigQuery column are: INTEGER, DATE, TIMESTAMP, and DATETIME.

    If your BigQuery table is partitioned at ingestion time, you can use any of the following pseudo-columns as your timestamp field. When used with Cloud DLP, these pseudo-column names are case sensitive.

    • _PARTITIONTIME
    • _PARTITIONDATE
    • _PARTITION_LOAD_TIME

    For Datastore

    If this value is specified, then entities are filtered based on the given start and end times. If an entity does not contain the provided timestamp property or contains empty or invalid values, then it is included. The only valid data type for the provided timestamp property is TIMESTAMP.

    See the known issue related to this operation.

Returns
  • (::Google::Cloud::Dlp::V2::FieldId) — Specification of the field containing the timestamp of scanned items. Used for data sources like Datastore and BigQuery.

    For BigQuery

    If this value is not specified and the table was modified between the given start and end times, the entire table will be scanned. If this value is specified, then rows are filtered based on the given start and end times. Rows with a NULL value in the provided BigQuery column are skipped. Valid data types of the provided BigQuery column are: INTEGER, DATE, TIMESTAMP, and DATETIME.

    If your BigQuery table is partitioned at ingestion time, you can use any of the following pseudo-columns as your timestamp field. When used with Cloud DLP, these pseudo-column names are case sensitive.

    • _PARTITIONTIME
    • _PARTITIONDATE
    • _PARTITION_LOAD_TIME

    For Datastore

    If this value is specified, then entities are filtered based on the given start and end times. If an entity does not contain the provided timestamp property or contains empty or invalid values, then it is included. The only valid data type for the provided timestamp property is TIMESTAMP.

    See the known issue related to this operation.
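Putting the pieces together, a hedged end-to-end sketch of a BigQuery inspect job that filters rows by a timestamp column over an explicit window; the project, dataset, table, and column names are placeholders, and the call assumes default application credentials.

```ruby
require "google/cloud/dlp/v2"

# Placeholder identifiers; replace with your own project, dataset, table, and column.
timespan = Google::Cloud::Dlp::V2::StorageConfig::TimespanConfig.new(
  timestamp_field: Google::Cloud::Dlp::V2::FieldId.new(name: "update_time"),
  start_time: Google::Protobuf::Timestamp.new(seconds: Time.utc(2024, 1, 1).to_i),
  end_time:   Google::Protobuf::Timestamp.new(seconds: Time.utc(2024, 2, 1).to_i)
)

storage_config = Google::Cloud::Dlp::V2::StorageConfig.new(
  big_query_options: Google::Cloud::Dlp::V2::BigQueryOptions.new(
    table_reference: Google::Cloud::Dlp::V2::BigQueryTable.new(
      project_id: "my-project", dataset_id: "my_dataset", table_id: "my_table"
    )
  ),
  timespan_config: timespan
)

client = Google::Cloud::Dlp::V2::DlpService::Client.new
job = client.create_dlp_job(
  parent: "projects/my-project/locations/global",
  inspect_job: Google::Cloud::Dlp::V2::InspectJobConfig.new(
    storage_config: storage_config,
    inspect_config: Google::Cloud::Dlp::V2::InspectConfig.new(
      info_types: [Google::Cloud::Dlp::V2::InfoType.new(name: "EMAIL_ADDRESS")]
    )
  )
)
job.name # resource name of the newly created DlpJob
```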