The Workflows connector defines the built-in
functions that can be used to access other Google Cloud products within a
workflow.
This page provides an overview of the Vertex AI connector.
There is no need to import or load connector libraries in a workflow—connectors
work out of the box when used in a call step.
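As an illustrative sketch (not taken from this reference), a call step that invokes a Vertex AI connector function might look like the following; the module path `googleapis.aiplatform.v1` and the argument names follow the connector's general naming pattern and should be verified against the connector reference:

```yaml
main:
  params: [args]
  steps:
    - predict:
        call: googleapis.aiplatform.v1.projects.locations.endpoints.predict
        args:
          endpoint: ${"projects/" + args.project + "/locations/us-central1/endpoints/" + args.endpoint_id}
          body:
            instances: ${args.instances}
        result: prediction
    - return_result:
        return: ${prediction}
```

No import or library declaration is needed for the connector itself; the `call` field alone identifies the function.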
Vertex AI API
Train high-quality custom machine learning models with minimal machine learning expertise and effort.
To learn more, see the Vertex AI API documentation.
Cancels a BatchPredictionJob. Starts asynchronous cancellation on the
BatchPredictionJob. The server makes the best effort to cancel the job,
but success is not guaranteed. Clients can use
JobService.GetBatchPredictionJob or other methods to check whether the
cancellation succeeded or whether the job completed despite
cancellation. On a successful cancellation, the BatchPredictionJob is
not deleted; instead, its BatchPredictionJob.state is set to CANCELLED.
Any files already outputted by the job are not deleted.
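For example, a workflow could request cancellation and then check the job state to see whether the cancellation took effect. The method paths below are assumed from the REST resource names; verify them against the connector reference:

```yaml
- cancel_job:
    call: googleapis.aiplatform.v1.projects.locations.batchPredictionJobs.cancel
    args:
      name: ${job_name}
- check_state:
    call: googleapis.aiplatform.v1.projects.locations.batchPredictionJobs.get
    args:
      name: ${job_name}
    result: job
- log_state:
    call: sys.log
    args:
      text: ${"State after cancel request: " + job.state}
```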
Cancels a CustomJob. Starts asynchronous cancellation on the CustomJob.
The server makes a best effort to cancel the job, but success is not
guaranteed. Clients can use JobService.GetCustomJob or other methods to
check whether the cancellation succeeded or whether the job completed
despite cancellation. On successful cancellation, the CustomJob is not
deleted; instead it becomes a job with a CustomJob.error value with a
google.rpc.Status.code of 1, corresponding to Code.CANCELLED, and
CustomJob.state is set to CANCELLED.
Perform an online explanation. If deployed_model_id is specified, the
corresponding DeployedModel must have explanation_spec populated. If
deployed_model_id is not specified, all DeployedModels must have
explanation_spec populated.
Perform an online prediction with an arbitrary HTTP payload. The
response includes the following HTTP headers: *
X-Vertex-AI-Endpoint-Id: ID of the Endpoint that served this
prediction. * X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's
DeployedModel that served this prediction.
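A hedged sketch of such a raw prediction call, assuming the connector exposes a rawPredict method with a google.api.HttpBody-shaped request body (base64.encode and json.encode are Workflows standard-library functions):

```yaml
- raw_predict:
    call: googleapis.aiplatform.v1.projects.locations.endpoints.rawPredict
    args:
      endpoint: ${endpoint_name}
      body:
        httpBody:
          contentType: "application/json"
          # HttpBody.data carries base64-encoded bytes
          data: ${base64.encode(json.encode(payload))}
    result: response
```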
Searches the nearest entities under a FeatureView. Search only works for
indexable feature views; if a feature view isn't indexable, an
INVALID_ARGUMENT error is returned.
Batch reads Feature values from a Featurestore. This API enables batch
reading Feature values, where each read instance in the batch may read
Feature values of entities from one or more EntityTypes. Point-in-time
correctness is guaranteed for Feature values of each read instance as of
each instance's read timestamp.
Sets the access control policy on the specified resource. Replaces any
existing policy. Can return NOT_FOUND, INVALID_ARGUMENT, and
PERMISSION_DENIED errors.
Returns permissions that a caller has on the specified resource. If the
resource does not exist, this will return an empty set of permissions,
not a NOT_FOUND error. Note: This operation is designed to be used for
building permission-aware UIs and command-line tools, not for
authorization checking. This operation may "fail open" without
warning.
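For instance, a permission-aware workflow step might check a caller's permissions like this; the resource type and permission string below are illustrative only:

```yaml
- check_permissions:
    call: googleapis.aiplatform.v1.projects.locations.featurestores.testIamPermissions
    args:
      resource: ${featurestore_name}
      body:
        permissions:
          - aiplatform.featurestores.get
    result: perms
```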
Deletes Feature values from a Featurestore. The progress of the deletion is
tracked by the returned operation. The deleted feature values are
guaranteed to be invisible to subsequent read operations after the
operation is marked as successfully done. If a delete feature values
operation fails, the feature values returned from reads and exports may
be inconsistent. If consistency is required, the caller must retry the
same delete request again and wait until the new operation returned is
marked as successfully done.
Imports Feature values into the Featurestore from a source storage. The
progress of the import is tracked by the returned operation. The
imported features are guaranteed to be visible to subsequent read
operations after the operation is marked as successfully done. If an
import operation fails, the Feature values returned from reads and
exports may be inconsistent. If consistency is required, the caller must
retry the same import request again and wait until the new operation
returned is marked as successfully done. There are also scenarios where
the caller can cause inconsistency. - Source data for import contains
multiple distinct Feature values for the same entity ID and timestamp. -
Source is modified during an import. This includes adding, updating, or
removing source data and/or metadata. Examples of updating metadata
include but are not limited to changing storage location, storage class,
or retention policy. - Online serving cluster is under-provisioned.
Reads Feature values of a specific entity of an EntityType. For reading
feature values of multiple entities of an EntityType, please use
StreamingReadFeatureValues.
Sets the access control policy on the specified resource. Replaces any
existing policy. Can return NOT_FOUND, INVALID_ARGUMENT, and
PERMISSION_DENIED errors.
Returns permissions that a caller has on the specified resource. If the
resource does not exist, this will return an empty set of permissions,
not a NOT_FOUND error. Note: This operation is designed to be used for
building permission-aware UIs and command-line tools, not for
authorization checking. This operation may "fail open" without
warning.
Writes Feature values of one or more entities of an EntityType. The
Feature values are merged into existing entities if any. The Feature
values to be written must have a timestamp within the online storage
retention.
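A sketch of writing a single entity's feature values; the entity ID, feature names, and value types below are made up for illustration:

```yaml
- write_values:
    call: googleapis.aiplatform.v1.projects.locations.featurestores.entityTypes.writeFeatureValues
    args:
      entityType: ${entity_type_name}
      body:
        payloads:
          - entityId: "user_123"
            featureValues:
              age:
                int64Value: 42
              country:
                stringValue: "DE"
```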
Cancels a HyperparameterTuningJob. Starts asynchronous cancellation on
the HyperparameterTuningJob. The server makes a best effort to cancel
the job, but success is not guaranteed. Clients can use
JobService.GetHyperparameterTuningJob or other methods to check whether
the cancellation succeeded or whether the job completed despite
cancellation. On successful cancellation, the HyperparameterTuningJob is
not deleted; instead it becomes a job with a
HyperparameterTuningJob.error value with a google.rpc.Status.code of 1,
corresponding to Code.CANCELLED, and HyperparameterTuningJob.state is
set to CANCELLED.
Adds a set of Artifacts and Executions to a Context. If any of the
Artifacts or Executions have already been added to a Context, they are
simply skipped.
Adds a set of Contexts as children to a parent Context. If any of the
child Contexts have already been added to the parent Context, they are
simply skipped. If this call would create a cycle or cause any Context
to have more than 10 parents, the request will fail with an
INVALID_ARGUMENT error.
Adds Events to the specified Execution. An Event indicates whether an
Artifact was used as an input or output for an Execution. If an Event
already exists between the Execution and the Artifact, the Event is
skipped.
Obtains the set of input and output Artifacts for this Execution, in the
form of LineageSubgraph that also contains the Execution and connecting
Events.
Searches all of the resources in automl.googleapis.com,
datalabeling.googleapis.com and ml.googleapis.com that can be migrated
to the given Vertex AI location.
Pauses a ModelDeploymentMonitoringJob. If the job is running, the server
makes a best effort to cancel the job, and sets
ModelDeploymentMonitoringJob.state to 'PAUSED'.
Copies an already existing Vertex AI Model into the specified Location.
The source Model must exist in the same Project. When copying custom
Models, users are responsible for ensuring that the Model.metadata content
is region-agnostic, and that any resources (e.g. files) it depends on
remain accessible.
Deletes a Model version. A Model version can only be deleted if there are
no DeployedModels created from it. Deleting the only version in the
Model is not allowed. Use DeleteModel for deleting the Model instead.
Exports a trained, exportable Model to a location specified by the user.
A Model is considered to be exportable if it has at least one supported
export format.
Sets the access control policy on the specified resource. Replaces any
existing policy. Can return NOT_FOUND, INVALID_ARGUMENT, and
PERMISSION_DENIED errors.
Returns permissions that a caller has on the specified resource. If the
resource does not exist, this will return an empty set of permissions,
not a NOT_FOUND error. Note: This operation is designed to be used for
building permission-aware UIs and command-line tools, not for
authorization checking. This operation may "fail open" without
warning.
Cancels a NasJob. Starts asynchronous cancellation on the NasJob. The
server makes a best effort to cancel the job, but success is not
guaranteed. Clients can use JobService.GetNasJob or other methods to
check whether the cancellation succeeded or whether the job completed
despite cancellation. On successful cancellation, the NasJob is not
deleted; instead it becomes a job with a NasJob.error value with a
google.rpc.Status.code of 1, corresponding to Code.CANCELLED, and
NasJob.state is set to CANCELLED.
Sets the access control policy on the specified resource. Replaces any
existing policy. Can return NOT_FOUND, INVALID_ARGUMENT, and
PERMISSION_DENIED errors.
Returns permissions that a caller has on the specified resource. If the
resource does not exist, this will return an empty set of permissions,
not a NOT_FOUND error. Note: This operation is designed to be used for
building permission-aware UIs and command-line tools, not for
authorization checking. This operation may "fail open" without
warning.
Starts asynchronous cancellation on a long-running operation. The server
makes a best effort to cancel the operation, but success is not
guaranteed. If the server doesn't support this method, it returns
google.rpc.Code.UNIMPLEMENTED. Clients can use Operations.GetOperation
or other methods to check whether the cancellation succeeded or whether
the operation completed despite cancellation. On successful
cancellation, the operation is not deleted; instead, it becomes an
operation with an Operation.error value with a google.rpc.Status.code of
1, corresponding to Code.CANCELLED.
Deletes a long-running operation. This method indicates that the client
is no longer interested in the operation result. It does not cancel the
operation. If the server doesn't support this method, it returns
google.rpc.Code.UNIMPLEMENTED.
Gets the latest state of a long-running operation. Clients can use this
method to poll the operation result at intervals as recommended by the
API service.
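When polling manually, a loop such as the following can be used; sys.sleep and map.get are Workflows standard-library functions, and the operations.get path is assumed from the REST resource name:

```yaml
- poll:
    call: googleapis.aiplatform.v1.projects.locations.operations.get
    args:
      name: ${operation_name}
    result: op
- check_done:
    switch:
      - condition: ${map.get(op, "done") == true}
        next: finished
    next: wait
- wait:
    call: sys.sleep
    args:
      seconds: 10
    next: poll
- finished:
    return: ${op}
```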
Waits until the specified long-running operation is done or reaches at
most a specified timeout, returning the latest state. If the operation
is already done, the latest state is immediately returned. If the
timeout specified is greater than the default HTTP/RPC timeout, the
HTTP/RPC timeout is used. If the server does not support this method, it
returns google.rpc.Code.UNIMPLEMENTED. Note that this method is on a
best-effort basis. It may return the latest state before the specified
timeout (including immediately), meaning even an immediate response is
no guarantee that the operation is done.
Batch cancels PipelineJobs. First, the server checks whether all the jobs
are in non-terminal states, skipping the jobs that have already
terminated. If the operation fails, none of the pipeline jobs are
cancelled. The server polls the states of all the pipeline jobs
periodically to check the cancellation status. This operation returns an
LRO (long-running operation).
Batch deletes PipelineJobs. The operation is atomic: if it fails, none of
the PipelineJobs are deleted; if it succeeds, all of the PipelineJobs are
deleted.
Cancels a PipelineJob. Starts asynchronous cancellation on the
PipelineJob. The server makes a best effort to cancel the pipeline, but
success is not guaranteed. Clients can use
PipelineService.GetPipelineJob or other methods to check whether the
cancellation succeeded or whether the pipeline completed despite
cancellation. On successful cancellation, the PipelineJob is not
deleted; instead it becomes a pipeline with a PipelineJob.error value
with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED,
and PipelineJob.state is set to CANCELLED.
Perform an online prediction with an arbitrary HTTP payload. The
response includes the following HTTP headers: *
X-Vertex-AI-Endpoint-Id: ID of the Endpoint that served this
prediction. * X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's
DeployedModel that served this prediction.
Updates an active or paused Schedule. When the Schedule is updated, new
runs will be scheduled starting from the updated next execution time
after the update time based on the time_specification in the updated
Schedule. All unstarted runs before the update time will be skipped,
while already created runs will NOT be paused or canceled.
Pauses a Schedule, setting Schedule.state to 'PAUSED'. While the schedule
is paused, no new runs are created. Already created runs will NOT be
paused or canceled.
Resumes a paused Schedule to start scheduling new runs, setting
Schedule.state to 'ACTIVE'. Only a paused Schedule can be resumed. When
the Schedule is resumed, new runs will be scheduled starting from the
next execution time after the current time, based on the
time_specification in the Schedule. If Schedule.catchUp is set to true,
all missed runs will be scheduled for backfill first.
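As a sketch, resuming with backfill enabled might look like this; the catchUp request field and method path are assumed from the description above and should be checked against the connector reference:

```yaml
- resume_schedule:
    call: googleapis.aiplatform.v1.projects.locations.schedules.resume
    args:
      name: ${schedule_name}
      body:
        catchUp: true
```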
Checks whether a Trial should stop. Returns a long-running
operation. When the operation is successful, it will contain a
CheckTrialEarlyStoppingStateResponse.
Lists the Pareto-optimal Trials for a multi-objective Study, or the
optimal Trials for a single-objective Study. For the definition of Pareto
optimality, see https://en.wikipedia.org/wiki/Pareto_efficiency.
Adds one or more Trials to a Study, with parameter values suggested by
Vertex AI Vizier. Returns a long-running operation associated with the
generation of Trial suggestions. When this long-running operation
succeeds, it will contain a SuggestTrialsResponse.
Reads multiple TensorboardTimeSeries' data. The data point number limit
is 1000 for scalars, 100 for tensors and blob references. If the number
of data points stored is less than the limit, all data is returned.
Otherwise, that many data points are randomly sampled from the time
series and returned.
Reads a TensorboardTimeSeries' data. By default, if the number of data
points stored is less than 1000, all data is returned. Otherwise, 1000
data points are randomly selected from the time series and returned.
This limit can be changed via max_data_points, but can't be greater than
10,000.
Gets bytes of TensorboardBlobs. This allows reading blob data stored in
the consumer project's Cloud Storage bucket without users having to
obtain Cloud Storage access permission.
Cancels a TrainingPipeline. Starts asynchronous cancellation on the
TrainingPipeline. The server makes a best effort to cancel the pipeline,
but success is not guaranteed. Clients can use
PipelineService.GetTrainingPipeline or other methods to check whether
the cancellation succeeded or whether the pipeline completed despite
cancellation. On successful cancellation, the TrainingPipeline is not
deleted; instead it becomes a pipeline with a TrainingPipeline.error
value with a google.rpc.Status.code of 1, corresponding to
Code.CANCELLED, and TrainingPipeline.state is set to CANCELLED.
Cancels a TuningJob. Starts asynchronous cancellation on the TuningJob.
The server makes a best effort to cancel the job, but success is not
guaranteed. Clients can use GenAiTuningService.GetTuningJob or other
methods to check whether the cancellation succeeded or whether the job
completed despite cancellation. On successful cancellation, the
TuningJob is not deleted; instead it becomes a job with a
TuningJob.error value with a google.rpc.Status.code of 1, corresponding
to Code.CANCELLED, and TuningJob.state is set to CANCELLED.
Cancels a BatchPredictionJob. Starts asynchronous cancellation on the
BatchPredictionJob. The server makes the best effort to cancel the job,
but success is not guaranteed. Clients can use
JobService.GetBatchPredictionJob or other methods to check whether the
cancellation succeeded or whether the job completed despite
cancellation. On a successful cancellation, the BatchPredictionJob is
not deleted; instead, its BatchPredictionJob.state is set to CANCELLED.
Any files already outputted by the job are not deleted.
Cancels a CustomJob. Starts asynchronous cancellation on the CustomJob.
The server makes a best effort to cancel the job, but success is not
guaranteed. Clients can use JobService.GetCustomJob or other methods to
check whether the cancellation succeeded or whether the job completed
despite cancellation. On successful cancellation, the CustomJob is not
deleted; instead it becomes a job with a CustomJob.error value with a
google.rpc.Status.code of 1, corresponding to Code.CANCELLED, and
CustomJob.state is set to CANCELLED.
Perform an online explanation. If deployed_model_id is specified, the
corresponding DeployedModel must have explanation_spec populated. If
deployed_model_id is not specified, all DeployedModels must have
explanation_spec populated.
Perform an online prediction with an arbitrary HTTP payload. The
response includes the following HTTP headers: *
X-Vertex-AI-Endpoint-Id: ID of the Endpoint that served this
prediction. * X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's
DeployedModel that served this prediction.
Sets the access control policy on the specified resource. Replaces any
existing policy. Can return NOT_FOUND, INVALID_ARGUMENT, and
PERMISSION_DENIED errors.
Returns permissions that a caller has on the specified resource. If the
resource does not exist, this will return an empty set of permissions,
not a NOT_FOUND error. Note: This operation is designed to be used for
building permission-aware UIs and command-line tools, not for
authorization checking. This operation may "fail open" without
warning.
Sets the access control policy on the specified resource. Replaces any
existing policy. Can return NOT_FOUND, INVALID_ARGUMENT, and
PERMISSION_DENIED errors.
Returns permissions that a caller has on the specified resource. If the
resource does not exist, this will return an empty set of permissions,
not a NOT_FOUND error. Note: This operation is designed to be used for
building permission-aware UIs and command-line tools, not for
authorization checking. This operation may "fail open" without
warning.
Searches the nearest entities under a FeatureView. Search only works for
indexable feature views; if a feature view isn't indexable, an
INVALID_ARGUMENT error is returned.
Sets the access control policy on the specified resource. Replaces any
existing policy. Can return NOT_FOUND, INVALID_ARGUMENT, and
PERMISSION_DENIED errors.
Bidirectional streaming RPC to fetch feature values under a FeatureView.
Requests may not have a one-to-one mapping to responses and responses
may be returned out-of-order to reduce latency.
Returns permissions that a caller has on the specified resource. If the
resource does not exist, this will return an empty set of permissions,
not a NOT_FOUND error. Note: This operation is designed to be used for
building permission-aware UIs and command-line tools, not for
authorization checking. This operation may "fail open" without
warning.
Batch reads Feature values from a Featurestore. This API enables batch
reading Feature values, where each read instance in the batch may read
Feature values of entities from one or more EntityTypes. Point-in-time
correctness is guaranteed for Feature values of each read instance as of
each instance's read timestamp.
Sets the access control policy on the specified resource. Replaces any
existing policy. Can return NOT_FOUND, INVALID_ARGUMENT, and
PERMISSION_DENIED errors.
Returns permissions that a caller has on the specified resource. If the
resource does not exist, this will return an empty set of permissions,
not a NOT_FOUND error. Note: This operation is designed to be used for
building permission-aware UIs and command-line tools, not for
authorization checking. This operation may "fail open" without
warning.
Deletes Feature values from a Featurestore. The progress of the deletion is
tracked by the returned operation. The deleted feature values are
guaranteed to be invisible to subsequent read operations after the
operation is marked as successfully done. If a delete feature values
operation fails, the feature values returned from reads and exports may
be inconsistent. If consistency is required, the caller must retry the
same delete request again and wait until the new operation returned is
marked as successfully done.
Imports Feature values into the Featurestore from a source storage. The
progress of the import is tracked by the returned operation. The
imported features are guaranteed to be visible to subsequent read
operations after the operation is marked as successfully done. If an
import operation fails, the Feature values returned from reads and
exports may be inconsistent. If consistency is required, the caller must
retry the same import request again and wait until the new operation
returned is marked as successfully done. There are also scenarios where
the caller can cause inconsistency. - Source data for import contains
multiple distinct Feature values for the same entity ID and timestamp. -
Source is modified during an import. This includes adding, updating, or
removing source data and/or metadata. Examples of updating metadata
include but are not limited to changing storage location, storage class,
or retention policy. - Online serving cluster is under-provisioned.
Reads Feature values of a specific entity of an EntityType. For reading
feature values of multiple entities of an EntityType, please use
StreamingReadFeatureValues.
Sets the access control policy on the specified resource. Replaces any
existing policy. Can return NOT_FOUND, INVALID_ARGUMENT, and
PERMISSION_DENIED errors.
Returns permissions that a caller has on the specified resource. If the
resource does not exist, this will return an empty set of permissions,
not a NOT_FOUND error. Note: This operation is designed to be used for
building permission-aware UIs and command-line tools, not for
authorization checking. This operation may "fail open" without
warning.
Writes Feature values of one or more entities of an EntityType. The
Feature values are merged into existing entities if any. The Feature
values to be written must have a timestamp within the online storage
retention.
Cancels a HyperparameterTuningJob. Starts asynchronous cancellation on
the HyperparameterTuningJob. The server makes a best effort to cancel
the job, but success is not guaranteed. Clients can use
JobService.GetHyperparameterTuningJob or other methods to check whether
the cancellation succeeded or whether the job completed despite
cancellation. On successful cancellation, the HyperparameterTuningJob is
not deleted; instead it becomes a job with a
HyperparameterTuningJob.error value with a google.rpc.Status.code of 1,
corresponding to Code.CANCELLED, and HyperparameterTuningJob.state is
set to CANCELLED.
Adds a set of Artifacts and Executions to a Context. If any of the
Artifacts or Executions have already been added to a Context, they are
simply skipped.
Adds a set of Contexts as children to a parent Context. If any of the
child Contexts have already been added to the parent Context, they are
simply skipped. If this call would create a cycle or cause any Context
to have more than 10 parents, the request will fail with an
INVALID_ARGUMENT error.
Adds Events to the specified Execution. An Event indicates whether an
Artifact was used as an input or output for an Execution. If an Event
already exists between the Execution and the Artifact, the Event is
skipped.
Obtains the set of input and output Artifacts for this Execution, in the
form of LineageSubgraph that also contains the Execution and connecting
Events.
Searches all of the resources in automl.googleapis.com,
datalabeling.googleapis.com and ml.googleapis.com that can be migrated
to the given Vertex AI location.
Pauses a ModelDeploymentMonitoringJob. If the job is running, the server
makes a best effort to cancel the job, and sets
ModelDeploymentMonitoringJob.state to 'PAUSED'.
Lists ModelMonitoringJobs. Callers may choose to read across multiple
Monitors, as per AIP-159, by using '-' (the hyphen or dash character) as
a wildcard character in place of the modelMonitor ID in the parent.
Format:
projects/{project_id}/locations/{location}/modelMonitors/-/modelMonitoringJobs
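A hedged sketch of such a wildcard listing across all Monitors; the v1beta1 module path is assumed, since model monitoring is a beta surface:

```yaml
- list_monitoring_jobs:
    call: googleapis.aiplatform.v1beta1.projects.locations.modelMonitors.modelMonitoringJobs.list
    args:
      parent: ${"projects/" + project + "/locations/us-central1/modelMonitors/-"}
    result: jobs
```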
Copies an already existing Vertex AI Model into the specified Location.
The source Model must exist in the same Project. When copying custom
Models, users are responsible for ensuring that the Model.metadata content
is region-agnostic, and that any resources (e.g. files) it depends on
remain accessible.
Deletes a Model version. A Model version can only be deleted if there are
no DeployedModels created from it. Deleting the only version in the
Model is not allowed. Use DeleteModel for deleting the Model instead.
Exports a trained, exportable Model to a location specified by the user.
A Model is considered to be exportable if it has at least one supported
export format.
Sets the access control policy on the specified resource. Replaces any
existing policy. Can return NOT_FOUND, INVALID_ARGUMENT, and
PERMISSION_DENIED errors.
Returns permissions that a caller has on the specified resource. If the
resource does not exist, this will return an empty set of permissions,
not a NOT_FOUND error. Note: This operation is designed to be used for
building permission-aware UIs and command-line tools, not for
authorization checking. This operation may "fail open" without
warning.
Sets the access control policy on the specified resource. Replaces any
existing policy. Can return NOT_FOUND, INVALID_ARGUMENT, and
PERMISSION_DENIED errors.
Returns permissions that a caller has on the specified resource. If the
resource does not exist, this will return an empty set of permissions,
not a NOT_FOUND error. Note: This operation is designed to be used for
building permission-aware UIs and command-line tools, not for
authorization checking. This operation may "fail open" without
warning.
Starts asynchronous cancellation on a long-running operation. The server
makes a best effort to cancel the operation, but success is not
guaranteed. If the server doesn't support this method, it returns
google.rpc.Code.UNIMPLEMENTED. Clients can use Operations.GetOperation
or other methods to check whether the cancellation succeeded or whether
the operation completed despite cancellation. On successful
cancellation, the operation is not deleted; instead, it becomes an
operation with an Operation.error value with a google.rpc.Status.code of
1, corresponding to Code.CANCELLED.
Deletes a long-running operation. This method indicates that the client
is no longer interested in the operation result. It does not cancel the
operation. If the server doesn't support this method, it returns
google.rpc.Code.UNIMPLEMENTED.
Gets the latest state of a long-running operation. Clients can use this
method to poll the operation result at intervals as recommended by the
API service.
Waits until the specified long-running operation is done or reaches at
most a specified timeout, returning the latest state. If the operation
is already done, the latest state is immediately returned. If the
timeout specified is greater than the default HTTP/RPC timeout, the
HTTP/RPC timeout is used. If the server does not support this method, it
returns google.rpc.Code.UNIMPLEMENTED. Note that this method is on a
best-effort basis. It may return the latest state before the specified
timeout (including immediately), meaning even an immediate response is
no guarantee that the operation is done.
Batch cancels PipelineJobs. First, the server checks whether all the jobs
are in non-terminal states, skipping the jobs that have already
terminated. If the operation fails, none of the pipeline jobs are
cancelled. The server polls the states of all the pipeline jobs
periodically to check the cancellation status. This operation returns an
LRO (long-running operation).
Batch deletes PipelineJobs. The operation is atomic: if it fails, none of
the PipelineJobs are deleted; if it succeeds, all of the PipelineJobs are
deleted.
Cancels a PipelineJob. Starts asynchronous cancellation on the
PipelineJob. The server makes a best effort to cancel the pipeline, but
success is not guaranteed. Clients can use
PipelineService.GetPipelineJob or other methods to check whether the
cancellation succeeded or whether the pipeline completed despite
cancellation. On successful cancellation, the PipelineJob is not
deleted; instead it becomes a pipeline with a PipelineJob.error value
with a google.rpc.Status.code of 1, corresponding to Code.CANCELLED,
and PipelineJob.state is set to CANCELLED.
Perform an online prediction with an arbitrary HTTP payload. The
response includes the following HTTP headers: *
X-Vertex-AI-Endpoint-Id: ID of the Endpoint that served this
prediction. * X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's
DeployedModel that served this prediction.
Updates an active or paused Schedule. When the Schedule is updated, new
runs will be scheduled starting from the updated next execution time
after the update time based on the time_specification in the updated
Schedule. All unstarted runs before the update time will be skipped,
while already created runs will NOT be paused or canceled.
Pauses a Schedule, setting Schedule.state to 'PAUSED'. While the schedule
is paused, no new runs are created. Already created runs will NOT be
paused or canceled.
Resumes a paused Schedule to start scheduling new runs, setting
Schedule.state to 'ACTIVE'. Only a paused Schedule can be resumed. When
the Schedule is resumed, new runs will be scheduled starting from the
next execution time after the current time, based on the
time_specification in the Schedule. If Schedule.catchUp is set to true,
all missed runs will be scheduled for backfill first.
Checks whether a Trial should stop. Returns a long-running
operation. When the operation is successful, it will contain a
CheckTrialEarlyStoppingStateResponse.
Lists the Pareto-optimal Trials for a multi-objective Study, or the
optimal Trials for a single-objective Study. For the definition of Pareto
optimality, see https://en.wikipedia.org/wiki/Pareto_efficiency.
Adds one or more Trials to a Study, with parameter values suggested by
Vertex AI Vizier. Returns a long-running operation associated with the
generation of Trial suggestions. When this long-running operation
succeeds, it will contain a SuggestTrialsResponse.
Reads multiple TensorboardTimeSeries' data. The data point number limit
is 1000 for scalars, 100 for tensors and blob references. If the number
of data points stored is less than the limit, all data is returned.
Otherwise, that many data points are randomly sampled from the time
series and returned.
Reads a TensorboardTimeSeries' data. By default, if the number of data
points stored is less than 1000, all data is returned. Otherwise, 1000
data points are randomly selected from the time series and returned.
This limit can be changed via max_data_points, but can't be greater than
10,000.
Gets bytes of TensorboardBlobs. This allows reading blob data stored in
the consumer project's Cloud Storage bucket without users having to
obtain Cloud Storage access permission.
Cancels a TrainingPipeline. Starts asynchronous cancellation on the
TrainingPipeline. The server makes a best effort to cancel the pipeline,
but success is not guaranteed. Clients can use
PipelineService.GetTrainingPipeline or other methods to check whether
the cancellation succeeded or whether the pipeline completed despite
cancellation. On successful cancellation, the TrainingPipeline is not
deleted; instead it becomes a pipeline with a TrainingPipeline.error
value with a google.rpc.Status.code of 1, corresponding to
Code.CANCELLED, and TrainingPipeline.state is set to CANCELLED.
Cancels a TuningJob. Starts asynchronous cancellation on the TuningJob.
The server makes a best effort to cancel the job, but success is not
guaranteed. Clients can use GenAiTuningService.GetTuningJob or other
methods to check whether the cancellation succeeded or whether the job
completed despite cancellation. On successful cancellation, the
TuningJob is not deleted; instead it becomes a job with a
TuningJob.error value with a google.rpc.Status.code of 1, corresponding
to Code.CANCELLED, and TuningJob.state is set to CANCELLED.
Last updated 2024-11-19 UTC.