Request the job status. To request the status of a job, we recommend using `projects.locations.jobs.messages.list` with a regional endpoint. Using `projects.jobs.messages.list` is not recommended, as you can only request the status of jobs that are running in `us-central1`.
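As a quick illustration of the recommended form, the following hypothetical step calls the regional method and passes the job's region explicitly through `location`; all values are placeholders.

```yaml
# Minimal sketch of the recommended regional call; all values are placeholders.
- getJobMessages:
    call: googleapis.dataflow.v1b3.projects.locations.jobs.messages.list
    args:
      jobId: "my-job-id"
      location: "us-east1"       # the region where the job runs
      projectId: "my-project-id"
    result: listResult
```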
Arguments
| Parameters | Description |
|---|---|
| `jobId` | Required. The job to get messages about. |
| `location` | Required. The regional endpoint that contains the job specified by `job_id`. |
| `projectId` | Required. A project ID. |
| `endTime` | Return only messages with timestamps < `end_time`. The default is now (i.e. return up to the latest messages available). |
| `minimumImportance` | Filter to only get messages with importance >= level. Enum type; the possible values are listed in the following table, and a filtering sketch follows it. |
| `pageSize` | If specified, determines the maximum number of messages to return. If unspecified, the service may choose an appropriate default, or may return an arbitrarily large number of results. |
| `pageToken` | If supplied, this should be the value of `next_page_token` returned by an earlier call. This will cause the next page of results to be returned. (A pagination sketch follows the subworkflow snippets below.) |
| `startTime` | If specified, return only messages with timestamps >= `start_time`. The default is the job creation time (i.e. the beginning of messages). |

Values for `minimumImportance`:

| Value | Description |
|---|---|
| `JOB_MESSAGE_IMPORTANCE_UNKNOWN` | The message importance isn't specified, or is unknown. |
| `JOB_MESSAGE_DEBUG` | The message is at the 'debug' level: typically only useful for software engineers working on the code the job is running. Typically, Dataflow pipeline runners do not display log messages at this level by default. |
| `JOB_MESSAGE_DETAILED` | The message is at the 'detailed' level: somewhat verbose, but potentially useful to users. Typically, Dataflow pipeline runners do not display log messages at this level by default. These messages are displayed by default in the Dataflow monitoring UI. |
| `JOB_MESSAGE_BASIC` | The message is at the 'basic' level: useful for keeping track of the execution of a Dataflow pipeline. Typically, Dataflow pipeline runners display log messages at this level by default, and these messages are displayed by default in the Dataflow monitoring UI. |
| `JOB_MESSAGE_WARNING` | The message is at the 'warning' level: indicating a condition pertaining to a job which may require human intervention. Typically, Dataflow pipeline runners display log messages at this level by default, and these messages are displayed by default in the Dataflow monitoring UI. |
| `JOB_MESSAGE_ERROR` | The message is at the 'error' level: indicating a condition preventing a job from succeeding. Typically, Dataflow pipeline runners display log messages at this level by default, and these messages are displayed by default in the Dataflow monitoring UI. |
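As a sketch of how the time and importance filters combine, the following hypothetical step requests only warning-level and higher messages from a one-day window. The IDs and timestamps are placeholders, and the use of RFC 3339 strings for `startTime` and `endTime` is an assumption based on the underlying REST API.

```yaml
# Sketch: fetch only WARNING-and-above messages within a time window.
# All IDs are placeholders; RFC 3339 timestamp strings are assumed.
- listWarnings:
    call: googleapis.dataflow.v1b3.projects.locations.jobs.messages.list
    args:
      jobId: "my-job-id"
      location: "us-east1"
      projectId: "my-project-id"
      minimumImportance: JOB_MESSAGE_WARNING
      startTime: "2024-05-01T00:00:00Z"
      endTime: "2024-05-02T00:00:00Z"
    result: warningMessages
```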
Raised exceptions
| Exceptions | Description |
|---|---|
| `ConnectionError` | In case of a network problem (such as DNS failure or refused connection). |
| `HttpError` | If the response status is >= 400 (excluding 429 and 503). |
| `TimeoutError` | If a long-running operation takes longer to finish than the specified timeout limit. |
| `TypeError` | If an operation or function receives an argument of the wrong type. |
| `ValueError` | If an operation or function receives an argument of the right type but an inappropriate value. For example, a negative timeout. |
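These exceptions surface as Workflows runtime errors, so a calling workflow can handle them with the standard `try`/`except` pattern. Below is a minimal sketch; the step names and placeholder IDs are made up, and the choice to handle only 404 specially is illustrative.

```yaml
# Sketch: catch connector errors around the list call.
# Argument values are placeholders; only 404 is handled specially here.
- listMessages:
    try:
      call: googleapis.dataflow.v1b3.projects.locations.jobs.messages.list
      args:
        jobId: "my-job-id"
        location: "us-east1"
        projectId: "my-project-id"
      result: listResult
    except:
      as: e
      steps:
        - knownErrors:
            switch:
              # Errors without the HttpError tag are network-level problems.
              - condition: ${not("HttpError" in e.tags)}
                return: "Connection problem."
              # HTTP errors carry the response status in e.code.
              - condition: ${e.code == 404}
                return: "Job or location not found."
        - unhandledError:
            raise: ${e}
```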
Response
If successful, the response contains an instance of `ListJobMessagesResponse`.
Subworkflow snippet
Some fields might be optional or required. To identify required fields, refer to the API documentation.
YAML
```yaml
- list:
    call: googleapis.dataflow.v1b3.projects.locations.jobs.messages.list
    args:
      jobId: ...
      location: ...
      projectId: ...
      endTime: ...
      minimumImportance: ...
      pageSize: ...
      pageToken: ...
      startTime: ...
    result: listResult
```
JSON
[ { "list": { "call": "googleapis.dataflow.v1b3.projects.locations.jobs.messages.list", "args": { "jobId": "...", "location": "...", "projectId": "...", "endTime": "...", "minimumImportance": "...", "pageSize": "...", "pageToken": "...", "startTime": "..." }, "result": "listResult" } } ]