gcloud datastream streams update

NAME
gcloud datastream streams update - updates a Datastream stream
SYNOPSIS
gcloud datastream streams update (STREAM : --location=LOCATION) [--display-name=DISPLAY_NAME] [--state=STATE] [--update-labels=[KEY=VALUE,…]] [--update-mask=UPDATE_MASK] [--backfill-none     | --backfill-all --mysql-excluded-objects=MYSQL_EXCLUDED_OBJECTS     | --oracle-excluded-objects=ORACLE_EXCLUDED_OBJECTS     | --postgresql-excluded-objects=POSTGRESQL_EXCLUDED_OBJECTS     | --sqlserver-excluded-objects=SQLSERVER_EXCLUDED_OBJECTS] [--clear-labels     | --remove-labels=[KEY,…]] [--destination=DESTINATION --bigquery-destination-config=BIGQUERY_DESTINATION_CONFIG     | --gcs-destination-config=GCS_DESTINATION_CONFIG] [--force     | --validate-only] [--source=SOURCE --mysql-source-config=MYSQL_SOURCE_CONFIG     | --oracle-source-config=ORACLE_SOURCE_CONFIG     | --postgresql-source-config=POSTGRESQL_SOURCE_CONFIG     | --sqlserver-source-config=SQLSERVER_SOURCE_CONFIG] [GCLOUD_WIDE_FLAG]
DESCRIPTION
Update a Datastream stream. If the update succeeds, the response body contains a newly created Operation resource. To get the operation result, run: gcloud datastream operations describe OPERATION
EXAMPLES
To update a stream with a new source and new display name:

gcloud datastream streams update STREAM --location=us-central1 --display-name=my-stream --source=source --update-mask=display_name,source

To update a stream's state to RUNNING:

gcloud datastream streams update STREAM --location=us-central1 --state=RUNNING --update-mask=state

To update a stream's oracle source config:

gcloud datastream streams update STREAM --location=us-central1 --oracle-source-config=good_oracle_cp.json --update-mask=oracle_source_config.include_objects
POSITIONAL ARGUMENTS
Stream resource - The stream to update. The arguments in this group can be used to specify the attributes of this resource. (NOTE) Some attributes are not given arguments in this group but can be set in other ways.

To set the project attribute:

  • provide the argument stream on the command line with a fully specified name;
  • provide the argument --project on the command line;
  • set the property core/project.

This must be specified.

STREAM
ID of the stream or fully qualified identifier for the stream.

To set the stream attribute:

  • provide the argument stream on the command line.

This positional argument must be specified if any of the other arguments in this group are specified.

--location=LOCATION
The Cloud location for the stream.

To set the location attribute:

  • provide the argument stream on the command line with a fully specified name;
  • provide the argument --location on the command line.
FLAGS
--display-name=DISPLAY_NAME
Friendly name for the stream.
--state=STATE
Stream state. Can be set to "RUNNING" or "PAUSED".
--update-labels=[KEY=VALUE,…]
List of label KEY=VALUE pairs to update. If a label exists, its value is modified. Otherwise, a new label is created.

Keys must start with a lowercase character and contain only hyphens (-), underscores (_), lowercase characters, and numbers. Values must contain only hyphens (-), underscores (_), lowercase characters, and numbers.

--update-mask=UPDATE_MASK
Used to specify the fields to be overwritten in the stream resource by the update. If the update mask is used, then a field will be overwritten only if it is in the mask. If the user does not provide a mask then all fields will be overwritten. This is a comma-separated list of fully qualified names of fields, written as snake_case or camelCase. Example: "display_name, source_config.oracle_source_config".
At most one of these can be specified:
--backfill-none
Do not automatically backfill any objects. This flag is equivalent to selecting the Manual backfill type in the Google Cloud console.
--backfill-all
Automatically backfill objects included in the stream source configuration. Specific objects can be excluded. This flag is equivalent to selecting the Automatic backfill type in the Google Cloud console.
At most one of these can be specified:
--mysql-excluded-objects=MYSQL_EXCLUDED_OBJECTS
Path to a YAML (or JSON) file containing the MySQL data sources to avoid backfilling.

The JSON file is formatted as follows, with camelCase field naming:

  {
    "mysqlDatabases": [
      {
        "database": "sample_database",
        "mysqlTables": [
          {
            "table": "sample_table",
            "mysqlColumns": [
              {
                "column": "sample_column"
              }
            ]
          }
        ]
      }
    ]
  }
--oracle-excluded-objects=ORACLE_EXCLUDED_OBJECTS
Path to a YAML (or JSON) file containing the Oracle data sources to avoid backfilling.

The JSON file is formatted as follows, with camelCase field naming:

  {
    "oracleSchemas": [
      {
        "schema": "SAMPLE",
        "oracleTables": [
          {
            "table": "SAMPLE_TABLE",
            "oracleColumns": [
              {
                "column": "COL"
              }
            ]
          }
        ]
      }
    ]
  }
--postgresql-excluded-objects=POSTGRESQL_EXCLUDED_OBJECTS
Path to a YAML (or JSON) file containing the PostgreSQL data sources to avoid backfilling.

The JSON file is formatted as follows, with camelCase field naming:

  {
    "postgresqlSchemas": [
      {
        "schema": "SAMPLE",
        "postgresqlTables": [
          {
            "table": "SAMPLE_TABLE",
            "postgresqlColumns": [
              {
                "column": "COL"
              }
            ]
          }
        ]
      }
    ]
  }
--sqlserver-excluded-objects=SQLSERVER_EXCLUDED_OBJECTS
Path to a YAML (or JSON) file containing the SQL Server data sources to avoid backfilling.

The JSON file is formatted as follows, with camelCase field naming:

  {
    "schemas": [
      {
        "schema": "SAMPLE",
        "tables": [
          {
            "table": "SAMPLE_TABLE",
            "columns": [
              {
                "column": "COL"
              }
            ]
          }
        ]
      }
    ]
  }
At most one of these can be specified:
--clear-labels
Remove all labels. If --update-labels is also specified then --clear-labels is applied first.

For example, to remove all labels:

gcloud datastream streams update STREAM --location=us-central1 --clear-labels

To remove all existing labels and create two new labels, foo and baz:

gcloud datastream streams update STREAM --location=us-central1 --clear-labels --update-labels foo=bar,baz=qux
--remove-labels=[KEY,…]
List of label keys to remove. If a label does not exist it is silently ignored. If --update-labels is also specified then --update-labels is applied first.
Connection profile resource - Resource ID of the destination connection profile. This represents a Cloud resource. (NOTE) Some attributes are not given arguments in this group but can be set in other ways.

To set the project attribute:

  • provide the argument --destination on the command line with a fully specified name;
  • provide the argument --project on the command line;
  • set the property core/project.

To set the location attribute:

  • provide the argument --destination on the command line with a fully specified name;
  • provide the argument --location on the command line.
--destination=DESTINATION
ID of the connection_profile or fully qualified identifier for the connection_profile.

To set the connection_profile attribute:

  • provide the argument --destination on the command line.
At most one of these can be specified:
--bigquery-destination-config=BIGQUERY_DESTINATION_CONFIG
Path to a YAML (or JSON) file containing the configuration for Google BigQuery Destination Config.

The YAML (or JSON) file should be formatted as follows:

BigQuery configuration with source hierarchy datasets and merge mode (merge mode is the default):

{
  "sourceHierarchyDatasets": {
    "datasetTemplate": {
      "location": "us-central1",
      "datasetIdPrefix": "my_prefix",
      "kmsKeyName": "projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{cryptoKey}"
    }
  },
  "merge": {},
  "dataFreshness": "3600s"
}

BigQuery configuration with source hierarchy datasets and append only mode:

{
  "sourceHierarchyDatasets": {
    "datasetTemplate": {
      "location": "us-central1",
      "datasetIdPrefix": "my_prefix",
      "kmsKeyName": "projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{cryptoKey}"
    }
  },
  "appendOnly": {}
}

BigQuery configuration with single target dataset and merge mode:

{
  "singleTargetDataset": {
    "datasetId": "projectId:my_dataset"
  },
  "merge": {},
  "dataFreshness": "3600s"
}
--gcs-destination-config=GCS_DESTINATION_CONFIG
Path to a YAML (or JSON) file containing the configuration for Google Cloud Storage Destination Config.

The JSON file is formatted as follows:

  {
    "path": "some/path",
    "fileRotationMb": 5,
    "fileRotationInterval": "15s",
    "avroFileFormat": {}
  }
At most one of these can be specified:
--force
Update the stream without validating it.
--validate-only
Only validate the stream, but do not update any resources. The default is false.
Connection profile resource - Resource ID of the source connection profile. This represents a Cloud resource. (NOTE) Some attributes are not given arguments in this group but can be set in other ways.

To set the project attribute:

  • provide the argument --source on the command line with a fully specified name;
  • provide the argument --project on the command line;
  • set the property core/project.

To set the location attribute:

  • provide the argument --source on the command line with a fully specified name;
  • provide the argument --location on the command line.
--source=SOURCE
ID of the connection_profile or fully qualified identifier for the connection_profile.

To set the connection_profile attribute:

  • provide the argument --source on the command line.
At most one of these can be specified:
--mysql-source-config=MYSQL_SOURCE_CONFIG
Path to a YAML (or JSON) file containing the configuration for MySQL Source Config.

The JSON file is formatted as follows, with camelCase field naming:

  {
    "includeObjects": {},
    "excludeObjects": {
      "mysqlDatabases": [
        {
          "database": "sample_database",
          "mysqlTables": [
            {
              "table": "sample_table",
              "mysqlColumns": [
                {
                  "column": "sample_column"
                }
              ]
            }
          ]
        }
      ]
    }
  }
--oracle-source-config=ORACLE_SOURCE_CONFIG
Path to a YAML (or JSON) file containing the configuration for Oracle Source Config.

The JSON file is formatted as follows, with camelCase field naming:

  {
    "includeObjects": {},
    "excludeObjects": {
      "oracleSchemas": [
        {
          "schema": "SAMPLE",
          "oracleTables": [
            {
              "table": "SAMPLE_TABLE",
              "oracleColumns": [
                {
                  "column": "COL"
                }
              ]
            }
          ]
        }
      ]
    }
  }
--postgresql-source-config=POSTGRESQL_SOURCE_CONFIG
Path to a YAML (or JSON) file containing the configuration for PostgreSQL Source Config.

The JSON file is formatted as follows, with camelCase field naming:

  {
    "includeObjects": {},
    "excludeObjects": {
      "postgresqlSchemas": [
        {
          "schema": "SAMPLE",
          "postgresqlTables": [
            {
              "table": "SAMPLE_TABLE",
              "postgresqlColumns": [
                {
                  "column": "COL"
                }
              ]
            }
          ]
        }
      ]
    },
    "replicationSlot": "SAMPLE_REPLICATION_SLOT",
    "publication": "SAMPLE_PUBLICATION"
  }
--sqlserver-source-config=SQLSERVER_SOURCE_CONFIG
Path to a YAML (or JSON) file containing the configuration for SQL Server Source Config.

The JSON file is formatted as follows, with camelCase field naming:

  {
    "includeObjects": {},
    "excludeObjects": {
      "schemas": [
        {
          "schema": "SAMPLE",
          "tables": [
            {
              "table": "SAMPLE_TABLE",
              "columns": [
                {
                  "column": "COL"
                }
              ]
            }
          ]
        }
      ]
    },
    "maxConcurrentCdcTasks": 2,
    "maxConcurrentBackfillTasks": 10,
    "transactionLogs": {}
  }

Use either "transactionLogs" or "changeTables" to select the CDC method.
GCLOUD WIDE FLAGS
These flags are available to all commands: --access-token-file, --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.

Run $ gcloud help for details.

NOTES
This variant is also available:
gcloud beta datastream streams update