NAME
    gcloud dataflow jobs run - runs a job from the specified path

SYNOPSIS
    gcloud dataflow jobs run JOB_NAME --gcs-location=GCS_LOCATION
        [--additional-experiments=[ADDITIONAL_EXPERIMENTS,…]]
        [--dataflow-kms-key=DATAFLOW_KMS_KEY] [--disable-public-ips]
        [--enable-streaming-engine] [--max-workers=MAX_WORKERS]
        [--network=NETWORK] [--num-workers=NUM_WORKERS]
        [--parameters=[PARAMETERS,…]] [--region=REGION_ID]
        [--service-account-email=SERVICE_ACCOUNT_EMAIL]
        [--staging-location=STAGING_LOCATION] [--subnetwork=SUBNETWORK]
        [--worker-machine-type=WORKER_MACHINE_TYPE]
        [--worker-region=WORKER_REGION | --worker-zone=WORKER_ZONE
          | --zone=ZONE]
        [GCLOUD_WIDE_FLAG …]

DESCRIPTION
    Runs a job from the specified path.
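
    For example, a minimal invocation that launches a template stored in
    Cloud Storage might look like the following sketch (the job name,
    bucket, and template path are placeholders, not real resources):

        $ gcloud dataflow jobs run my-job \
            --gcs-location=gs://my-bucket/templates/my-template \
            --region=us-central1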

POSITIONAL ARGUMENTS
    JOB_NAME
        The unique name to assign to the job.

REQUIRED FLAGS
    --gcs-location=GCS_LOCATION
        The Google Cloud Storage location of the job template to run. (Must
        be a URL beginning with 'gs://'.)

OPTIONAL FLAGS
    --additional-experiments=[ADDITIONAL_EXPERIMENTS,…]
        Additional experiments to pass to the job. These experiments are
        appended to any experiments already set by the template.
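
        Experiments are passed as a comma-separated list. A sketch appending
        one experiment (the experiment name is illustrative; check the
        Dataflow documentation for currently supported experiments):

            $ gcloud dataflow jobs run my-job \
                --gcs-location=gs://my-bucket/templates/my-template \
                --additional-experiments=enable_stackdriver_agent_metrics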

    --dataflow-kms-key=DATAFLOW_KMS_KEY
        The Cloud KMS key to protect the job resources.
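
        Cloud KMS keys are referenced by their full resource name. A sketch
        with placeholder project, key ring, and key names:

            $ gcloud dataflow jobs run my-job \
                --gcs-location=gs://my-bucket/templates/my-template \
                --dataflow-kms-key=projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key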

    --disable-public-ips
        The Cloud Dataflow workers must not use public IP addresses.
        Overrides the default dataflow/disable_public_ips property value for
        this command invocation.

    --enable-streaming-engine
        Enable Streaming Engine for the streaming job. Overrides the default
        dataflow/enable_streaming_engine property value for this command
        invocation.

    --max-workers=MAX_WORKERS
        The maximum number of workers to run.
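
        As an illustration, a streaming template might be launched with
        Streaming Engine on, public worker IPs off, and autoscaling capped
        at ten workers (all names are placeholders):

            $ gcloud dataflow jobs run my-streaming-job \
                --gcs-location=gs://my-bucket/templates/my-streaming-template \
                --enable-streaming-engine \
                --disable-public-ips \
                --max-workers=10

        Note that workers without public IP addresses generally need a
        subnetwork with Private Google Access enabled to reach Google APIs.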

    --network=NETWORK
        The Compute Engine network for launching instances to run your
        pipeline. (See the combined example under --subnetwork below.)

    --num-workers=NUM_WORKERS
        The initial number of workers to use.

    --parameters=[PARAMETERS,…]
        The parameters to pass to the job.
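
        Parameters are comma-separated KEY=VALUE pairs whose names are
        defined by the template itself. A sketch assuming a template that
        accepts hypothetical inputFile and outputPath parameters:

            $ gcloud dataflow jobs run my-job \
                --gcs-location=gs://my-bucket/templates/my-template \
                --parameters=inputFile=gs://my-bucket/input.txt,outputPath=gs://my-bucket/output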

    --region=REGION_ID
        Region ID of the job's regional endpoint. Defaults to
        'us-central1'.

    --service-account-email=SERVICE_ACCOUNT_EMAIL
        The service account to run the workers as.
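
        The value is the email address of an IAM service account, as in
        this sketch (the account and project are placeholders):

            $ gcloud dataflow jobs run my-job \
                --gcs-location=gs://my-bucket/templates/my-template \
                --service-account-email=dataflow-worker@my-project.iam.gserviceaccount.com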

    --staging-location=STAGING_LOCATION
        The Google Cloud Storage location to stage temporary files. (Must
        be a URL beginning with 'gs://'.)

    --subnetwork=SUBNETWORK
        The Compute Engine subnetwork for launching instances to run your
        pipeline.
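
        A sketch combining --network and --subnetwork to place workers on a
        custom VPC (the network and subnetwork names are placeholders; a
        full subnetwork resource URL is also accepted):

            $ gcloud dataflow jobs run my-job \
                --gcs-location=gs://my-bucket/templates/my-template \
                --network=my-vpc \
                --subnetwork=regions/us-central1/subnetworks/my-subnet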

    --worker-machine-type=WORKER_MACHINE_TYPE
        The type of machine to use for workers. Defaults to
        server-specified.

    Worker location options. At most one of these can be specified:

      --worker-region=WORKER_REGION
          The region to run the workers in.

      --worker-zone=WORKER_ZONE
          The zone to run the workers in.

      --zone=ZONE
          (DEPRECATED) The zone to run the workers in.

          The --zone option is deprecated; use --worker-region or
          --worker-zone instead.
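
    Because these options are mutually exclusive, pick exactly one. A
    sketch pinning workers to a single region (names are placeholders):

        $ gcloud dataflow jobs run my-job \
            --gcs-location=gs://my-bucket/templates/my-template \
            --worker-region=us-east1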

GCLOUD WIDE FLAGS
    These flags are available to all commands: --access-token-file,
    --account, --billing-project, --configuration, --flags-file,
    --flatten, --format, --help, --impersonate-service-account,
    --log-http, --project, --quiet, --trace-token, --user-output-enabled,
    --verbosity.

    Run $ gcloud help for details.

NOTES
    This variant is also available:

        $ gcloud beta dataflow jobs run