GoogleApi__HttpBody
Message that represents an arbitrary HTTP body. It should only be used for payload formats that can't be represented as JSON, such as raw binary or an HTML page. This message can be used both in streaming and non-streaming API methods in the request as well as the response. It can be used as a top-level request field, which is convenient if one wants to extract parameters from either the URL or HTTP template into the request fields and also wants access to the raw HTTP body. Example:

```proto
message GetResourceRequest {
  // A unique request id.
  string request_id = 1;

  // The raw HTTP body is bound to this field.
  google.api.HttpBody http_body = 2;
}

service ResourceService {
  rpc GetResource(GetResourceRequest) returns (google.api.HttpBody);
  rpc UpdateResource(google.api.HttpBody) returns (google.protobuf.Empty);
}
```

Example with streaming methods:

```proto
service CaldavService {
  rpc GetCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody);
  rpc UpdateCalendar(stream google.api.HttpBody) returns (stream google.api.HttpBody);
}
```

Use of this type only changes how the request and response bodies are handled; all other features will continue to work unchanged.

Fields | |
---|---|
contentType |
The HTTP Content-Type header value specifying the content type of the body. |
data |
The HTTP request/response body as raw binary. |
extensions[] |
Application specific response metadata. Must be set in the first response for streaming APIs. |
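For orientation, here is a minimal sketch of what an HttpBody message might look like when serialized as JSON, assuming the usual convention that proto bytes fields are base64-encoded on the wire; the payload itself is purely illustrative.

```python
import base64
import json

# Hypothetical raw payload that cannot be expressed as structured JSON.
raw_png = b"\x89PNG\r\n\x1a\n...truncated..."

# Sketch of an HttpBody message as a JSON object: `data` carries the raw
# bytes (base64-encoded) and `contentType` carries the HTTP Content-Type.
http_body = {
    "contentType": "image/png",
    "data": base64.b64encode(raw_png).decode("ascii"),
    # `extensions` holds application-specific response metadata; for
    # streaming APIs it must be set in the first response.
    "extensions": [],
}

print(json.dumps(http_body, indent=2))
```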
GoogleCloudMlV1_AutomatedStoppingConfig_DecayCurveAutomatedStoppingConfig
(No description provided)Fields | |
---|---|
useElapsedTime |
If true, measurement.elapsed_time is used as the x-axis of each trial's decay curve. Otherwise, Measurement.steps will be used as the x-axis. |
GoogleCloudMlV1_AutomatedStoppingConfig_MedianAutomatedStoppingConfig
The median automated stopping rule stops a pending trial if the trial's best objective_value is strictly below the median 'performance' of all completed trials reported up to the trial's last measurement. Currently, 'performance' refers to the running average of the objective values reported by the trial in each measurement.Fields | |
---|---|
useElapsedTime |
If true, the median automated stopping rule applies to measurement.use_elapsed_time, which means the elapsed_time field of the current trial's latest measurement is used to compute the median objective value for each completed trial. |
GoogleCloudMlV1_HyperparameterOutput_HyperparameterMetric
An observed value of a metric.Fields | |
---|---|
objectiveValue |
The objective value at this training step. |
trainingStep |
The global training step for this metric. |
GoogleCloudMlV1_Measurement_Metric
A message representing a metric in the measurement.Fields | |
---|---|
metric |
Required. Metric name. |
value |
Required. The value for this metric. |
GoogleCloudMlV1_StudyConfigParameterSpec_CategoricalValueSpec
(No description provided)Fields | |
---|---|
values[] |
Must be specified if type is CATEGORICAL. |
GoogleCloudMlV1_StudyConfigParameterSpec_DiscreteValueSpec
(No description provided)Fields | |
---|---|
values[] |
Must be specified if type is DISCRETE. |
GoogleCloudMlV1_StudyConfigParameterSpec_DoubleValueSpec
(No description provided)Fields | |
---|---|
maxValue |
Must be specified if type is DOUBLE. |
minValue |
Must be specified if type is DOUBLE. |
GoogleCloudMlV1_StudyConfigParameterSpec_IntegerValueSpec
(No description provided)Fields | |
---|---|
maxValue |
Must be specified if type is INTEGER. |
minValue |
Must be specified if type is INTEGER. |
GoogleCloudMlV1_StudyConfigParameterSpec_MatchingParentCategoricalValueSpec
Represents the spec to match categorical values from parent parameter.Fields | |
---|---|
values[] |
Matches values of the parent parameter with type 'CATEGORICAL'. All values must exist in the categorical_value_spec of the parent parameter. |
GoogleCloudMlV1_StudyConfigParameterSpec_MatchingParentDiscreteValueSpec
Represents the spec to match discrete values from parent parameter.Fields | |
---|---|
values[] |
Matches values of the parent parameter with type 'DISCRETE'. All values must exist in the discrete_value_spec of the parent parameter. |
GoogleCloudMlV1_StudyConfigParameterSpec_MatchingParentIntValueSpec
Represents the spec to match integer values from parent parameter.Fields | |
---|---|
values[] |
Matches values of the parent parameter with type 'INTEGER'. All values must lie within the integer_value_spec of the parent parameter. |
GoogleCloudMlV1_StudyConfig_MetricSpec
Represents a metric to optimize.Fields | |
---|---|
goal |
Required. The optimization goal of the metric. |
Enum type. Can be one of the following: | |
GOAL_TYPE_UNSPECIFIED |
Goal Type will default to maximize. |
MAXIMIZE |
Maximize the goal metric. |
MINIMIZE |
Minimize the goal metric. |
metric |
Required. The name of the metric. |
GoogleCloudMlV1_StudyConfig_ParameterSpec
Represents a single parameter to optimize.Fields | |
---|---|
categoricalValueSpec |
The value spec for a 'CATEGORICAL' parameter. |
childParameterSpecs[] |
A child node is active if the parameter's value matches the child node's matching_parent_values. If two items in child_parameter_specs have the same name, they must have disjoint matching_parent_values. |
discreteValueSpec |
The value spec for a 'DISCRETE' parameter. |
doubleValueSpec |
The value spec for a 'DOUBLE' parameter. |
integerValueSpec |
The value spec for an 'INTEGER' parameter. |
parameter |
Required. The parameter name must be unique amongst all ParameterSpecs. |
parentCategoricalValues |
(No description provided) |
parentDiscreteValues |
(No description provided) |
parentIntValues |
(No description provided) |
scaleType |
How the parameter should be scaled. Leave unset for categorical parameters. |
Enum type. Can be one of the following: | |
SCALE_TYPE_UNSPECIFIED |
By default, no scaling is applied. |
UNIT_LINEAR_SCALE |
Scales the feasible space to (0, 1) linearly. |
UNIT_LOG_SCALE |
Scales the feasible space logarithmically to (0, 1). The entire feasible space must be strictly positive. |
UNIT_REVERSE_LOG_SCALE |
Scales the feasible space "reverse" logarithmically to (0, 1). The result is that values close to the top of the feasible space are spread out more than points near the bottom. The entire feasible space must be strictly positive. |
type |
Required. The type of the parameter. |
Enum type. Can be one of the following: | |
PARAMETER_TYPE_UNSPECIFIED |
You must specify a valid type. Using this unspecified type will result in an error. |
DOUBLE |
Type for real-valued parameters. |
INTEGER |
Type for integral parameters. |
CATEGORICAL |
The parameter is categorical, with a value chosen from the categories field. |
DISCRETE |
The parameter is real valued, with a fixed set of feasible points. If type==DISCRETE, feasible_points must be provided, and {min_value, max_value} will be ignored. |
GoogleCloudMlV1_Trial_Parameter
A message representing a parameter to be tuned. Contains the name of the parameter and the suggested value to use for this trial.Fields | |
---|---|
floatValue |
Must be set if ParameterType is DOUBLE or DISCRETE. |
intValue |
Must be set if ParameterType is INTEGER |
parameter |
The name of the parameter. |
stringValue |
Must be set if ParameterType is CATEGORICAL. |
GoogleCloudMlV1__AcceleratorConfig
Represents a hardware accelerator request config. Note that the AcceleratorConfig can be used in both Jobs and Versions. Learn more about accelerators for training and accelerators for online prediction.Fields | |
---|---|
count |
The number of accelerators to attach to each machine running the job. |
type |
The type of accelerator to use. |
Enum type. Can be one of the following: | |
ACCELERATOR_TYPE_UNSPECIFIED |
Unspecified accelerator type. Default to no GPU. |
NVIDIA_TESLA_K80 |
Nvidia Tesla K80 GPU. |
NVIDIA_TESLA_P100 |
Nvidia Tesla P100 GPU. |
NVIDIA_TESLA_V100 |
Nvidia V100 GPU. |
NVIDIA_TESLA_P4 |
Nvidia Tesla P4 GPU. |
NVIDIA_TESLA_T4 |
Nvidia T4 GPU. |
NVIDIA_TESLA_A100 |
Nvidia A100 GPU. |
TPU_V2 |
TPU v2. |
TPU_V3 |
TPU v3. |
TPU_V2_POD |
TPU v2 POD. |
TPU_V3_POD |
TPU v3 POD. |
TPU_V4_POD |
TPU v4 POD. |
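To make the AcceleratorConfig fields concrete, here is a hedged sketch of how such a config might appear in a JSON-style request body; the accelerator type and count shown are arbitrary choices, and serializing the int64 count as a string is only an assumption about the JSON encoding.

```python
# Illustrative accelerator request attached to a replica pool of a job or
# to a model version.
accelerator_config = {
    "type": "NVIDIA_TESLA_T4",  # one of the enum values listed above
    "count": "2",               # int64 fields are commonly encoded as strings
}

print(accelerator_config)
```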
GoogleCloudMlV1__AddTrialMeasurementRequest
The request message for the AddTrialMeasurement service method.Fields | |
---|---|
measurement |
Required. The measurement to be added to a trial. |
GoogleCloudMlV1__AutoScaling
Options for automatically scaling a model.Fields | |
---|---|
maxNodes |
The maximum number of nodes to scale this model under load. The actual value will depend on resource quota and availability. |
metrics[] |
MetricSpec contains the specifications to use to calculate the desired nodes count. |
minNodes |
Optional. The minimum number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed. Therefore, the cost of operating this model will be at least |
GoogleCloudMlV1__AutomatedStoppingConfig
Configuration for Automated Early Stopping of Trials. If no implementation_config is set, automated early stopping will not be run.Fields | |
---|---|
decayCurveStoppingConfig |
(No description provided) |
medianAutomatedStoppingConfig |
(No description provided) |
GoogleCloudMlV1__BuiltInAlgorithmOutput
Represents output related to a built-in algorithm Job.Fields | |
---|---|
framework |
Framework on which the built-in algorithm was trained. |
modelPath |
The Cloud Storage path to the |
pythonVersion |
Python version on which the built-in algorithm was trained. |
runtimeVersion |
AI Platform runtime version on which the built-in algorithm was trained. |
GoogleCloudMlV1__Capability
(No description provided)Fields | |
---|---|
availableAccelerators[] |
Available accelerators for the capability. |
type |
(No description provided) |
Enum type. Can be one of the following: | |
TYPE_UNSPECIFIED |
(No description provided) |
TRAINING |
(No description provided) |
BATCH_PREDICTION |
(No description provided) |
ONLINE_PREDICTION |
(No description provided) |
GoogleCloudMlV1__CheckTrialEarlyStoppingStateMetatdata
This message will be placed in the metadata field of a google.longrunning.Operation associated with a CheckTrialEarlyStoppingState request.Fields | |
---|---|
createTime |
The time at which the operation was submitted. |
study |
The name of the study that the trial belongs to. |
trial |
The trial name. |
GoogleCloudMlV1__CheckTrialEarlyStoppingStateResponse
The message will be placed in the response field of a completed google.longrunning.Operation associated with a CheckTrialEarlyStoppingState request.Fields | |
---|---|
endTime |
The time at which operation processing completed. |
shouldStop |
True if the Trial should stop. |
startTime |
The time at which the operation was started. |
GoogleCloudMlV1__CompleteTrialRequest
The request message for the CompleteTrial service method.Fields | |
---|---|
finalMeasurement |
Optional. If provided, it will be used as the completed trial's final_measurement; otherwise, the service will auto-select a previously reported measurement as the final measurement. |
infeasibleReason |
Optional. A human readable reason why the trial was infeasible. This should only be provided if trial_infeasible is true. |
trialInfeasible |
Optional. True if the trial cannot be run with the given Parameter, and final_measurement will be ignored. |
GoogleCloudMlV1__Config
(No description provided)Fields | |
---|---|
tpuServiceAccount |
The service account Cloud ML uses to run on TPU nodes. |
GoogleCloudMlV1__ContainerPort
Represents a network port in a single container. This message is a subset of the Kubernetes ContainerPort v1 core specification.Fields | |
---|---|
containerPort |
Number of the port to expose on the container. This must be a valid port number: 0 < PORT_NUMBER < 65536. |
GoogleCloudMlV1__ContainerSpec
Specification of a custom container for serving predictions. This message is a subset of the Kubernetes Container v1 core specification.Fields | |
---|---|
args[] |
Immutable. Specifies arguments for the command that runs when the container starts. This overrides the container's CMD. |
command[] |
Immutable. Specifies the command that runs when the container starts. This overrides the container's ENTRYPOINT. |
env[] |
Immutable. List of environment variables to set in the container. After the container starts running, code running in the container can read these environment variables. Additionally, the command and args fields can reference these variables. Later entries in this list can also reference earlier entries. For example, the following example sets the variable |
image |
URI of the Docker image to be used as the custom container for serving predictions. This URI must identify an image in Artifact Registry and begin with the hostname |
ports[] |
Immutable. List of ports to expose from the container. AI Platform Prediction sends any prediction requests that it receives to the first port on this list. AI Platform Prediction also sends liveness and health checks to this port. If you do not specify this field, it defaults to the following value: |
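Pulling the ContainerSpec and EnvVar fields together, the following is a hedged sketch of a serving container specification; the image URI, file paths, variable names, and port number are all illustrative placeholders rather than values defined by this API.

```python
# Illustrative container spec for serving predictions from a custom container.
container_spec = {
    # Assumed Artifact Registry image path; replace with a real image.
    "image": "us-central1-docker.pkg.dev/example-project/example-repo/predictor:latest",
    "command": ["python3", "server.py"],      # overrides the image ENTRYPOINT
    "args": ["--model-dir", "$(MODEL_DIR)"],  # may reference env vars below
    "env": [
        {"name": "MODEL_DIR", "value": "/models/current"},
        # Later entries may reference earlier ones using $(VARIABLE_NAME).
        {"name": "CONFIG_PATH", "value": "$(MODEL_DIR)/config.yaml"},
    ],
    # The first listed port receives prediction, liveness, and health requests.
    "ports": [{"containerPort": 8080}],  # port number is an assumption
}
```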
GoogleCloudMlV1__DiskConfig
Represents the config of disk options.Fields | |
---|---|
bootDiskSizeGb |
Size in GB of the boot disk (default is 100GB). |
bootDiskType |
Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive). |
GoogleCloudMlV1__EncryptionConfig
Represents a custom encryption key configuration that can be applied to a resource.Fields | |
---|---|
kmsKeyName |
The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource, such as a training job. It has the following format: |
GoogleCloudMlV1__EnvVar
Represents an environment variable to be made available in a container. This message is a subset of the Kubernetes EnvVar v1 core specification.Fields | |
---|---|
name |
Name of the environment variable. Must be a valid C identifier and must not begin with the prefix |
value |
Value of the environment variable. Defaults to an empty string. In this field, you can reference environment variables set by AI Platform Prediction and environment variables set earlier in the same env field as where this message occurs. You cannot reference environment variables set in the Docker image. In order for environment variables to be expanded, reference them by using the following syntax: $(VARIABLE_NAME) Note that this differs from Bash variable expansion, which does not use parentheses. If a variable cannot be resolved, the reference in the input string is used unchanged. To avoid variable expansion, you can escape this syntax with |
GoogleCloudMlV1__ExplainRequest
Request for explanations to be issued against a trained model.Fields | |
---|---|
httpBody |
Required. The explanation request body. |
GoogleCloudMlV1__ExplanationConfig
Message holding configuration options for explaining model predictions. There are three feature attribution methods supported for TensorFlow models: integrated gradients, sampled Shapley, and XRAI. Learn more about feature attributions.Fields | |
---|---|
integratedGradientsAttribution |
Attributes credit by computing the Aumann-Shapley value taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365 |
sampledShapleyAttribution |
An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features. |
xraiAttribution |
Attributes credit by computing the XRAI taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 Currently only implemented for models with natural image inputs. |
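As a brief illustration of how one of these attribution methods might be selected in an explanationConfig body, here is a hedged sketch; exactly one method would be set, and the step and path counts are arbitrary starting values.

```python
# Illustrative explanation config: pick one attribution method.
explanation_config = {
    "integratedGradientsAttribution": {"numIntegralSteps": 50},
    # Alternatively (not combined with the above):
    # "sampledShapleyAttribution": {"numPaths": 10},
    # "xraiAttribution": {"numIntegralSteps": 50},
}
```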
GoogleCloudMlV1__GetConfigResponse
Returns service account information associated with a project.Fields | |
---|---|
config |
(No description provided) |
serviceAccount |
The service account Cloud ML uses to access resources in the project. |
serviceAccountProject |
The project number for service_account. |
GoogleCloudMlV1__HyperparameterOutput
Represents the result of a single hyperparameter tuning trial from a training job. The TrainingOutput object that is returned on successful completion of a training job with hyperparameter tuning includes a list of HyperparameterOutput objects, one for each successful trial.Fields | |
---|---|
allMetrics[] |
All recorded object metrics for this trial. This field is not currently populated. |
builtInAlgorithmOutput |
Details related to built-in algorithms jobs. Only set for trials of built-in algorithms jobs that have succeeded. |
endTime |
Output only. End time for the trial. |
finalMetric |
The final objective metric seen for this trial. |
hyperparameters |
The hyperparameters given to this trial. |
isTrialStoppedEarly |
True if the trial is stopped early. |
startTime |
Output only. Start time for the trial. |
state |
Output only. The detailed state of the trial. |
Enum type. Can be one of the following: | |
STATE_UNSPECIFIED |
The job state is unspecified. |
QUEUED |
The job has been just created and processing has not yet begun. |
PREPARING |
The service is preparing to run the job. |
RUNNING |
The job is in progress. |
SUCCEEDED |
The job completed successfully. |
FAILED |
The job failed. error_message should contain the details of the failure. |
CANCELLING |
The job is being cancelled. error_message should describe the reason for the cancellation. |
CANCELLED |
The job has been cancelled. error_message should describe the reason for the cancellation. |
trialId |
The trial id for these results. |
webAccessUris |
URIs for accessing interactive shells (one URI for each training node). Only available if this trial is part of a hyperparameter tuning job and the job's training_input.enable_web_access is true. |
GoogleCloudMlV1__HyperparameterSpec
Represents a set of hyperparameters to optimize.Fields | |
---|---|
algorithm |
Optional. The search algorithm specified for the hyperparameter tuning job. Uses the default AI Platform hyperparameter tuning algorithm if unspecified. |
Enum type. Can be one of the following: | |
ALGORITHM_UNSPECIFIED |
The default algorithm used by the hyperparameter tuning service. This is a Bayesian optimization algorithm. |
GRID_SEARCH |
Simple grid search within the feasible space. To use grid search, all parameters must be INTEGER, CATEGORICAL, or DISCRETE. |
RANDOM_SEARCH |
Simple random search within the feasible space. |
enableTrialEarlyStopping |
Optional. Indicates if the hyperparameter tuning job enables auto trial early stopping. |
goal |
Required. The type of goal to use for tuning. Available types are MAXIMIZE and MINIMIZE. Defaults to MAXIMIZE. |
Enum type. Can be one of the following: | |
GOAL_TYPE_UNSPECIFIED |
Goal Type will default to maximize. |
MAXIMIZE |
Maximize the goal metric. |
MINIMIZE |
Minimize the goal metric. |
hyperparameterMetricTag |
Optional. The TensorFlow summary tag name to use for optimizing trials. For current versions of TensorFlow, this tag name should exactly match what is shown in TensorBoard, including all scopes. For versions of TensorFlow prior to 0.12, this should be only the tag passed to tf.Summary. By default, "training/hptuning/metric" will be used. |
maxFailedTrials |
Optional. The number of failed trials that need to be seen before failing the hyperparameter tuning job. You can specify this field to override the default failing criteria for AI Platform hyperparameter tuning jobs. Defaults to zero, which means the service decides when a hyperparameter job should fail. |
maxParallelTrials |
Optional. The number of training trials to run concurrently. You can reduce the time it takes to perform hyperparameter tuning by adding trials in parallel. However, each trial only benefits from the information gained in completed trials. That means that a trial does not get access to the results of trials running at the same time, which could reduce the quality of the overall optimization. Each trial will use the same scale tier and machine types. Defaults to one. |
maxTrials |
Optional. How many training trials should be attempted to optimize the specified hyperparameters. Defaults to one. |
params[] |
Required. The set of parameters to tune. |
resumePreviousJobId |
Optional. The prior hyperparameter tuning job id that users hope to continue with. The job id will be used to find the corresponding vizier study guid and resume the study. |
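Combining the HyperparameterSpec fields above with the ParameterSpec fields documented later in this reference, here is a hedged sketch of a tuning configuration as it might appear under trainingInput.hyperparameters; the metric tag, parameter names, and bounds are invented for illustration.

```python
# Illustrative hyperparameter tuning spec.
hyperparameters = {
    "goal": "MAXIMIZE",
    "hyperparameterMetricTag": "accuracy",   # assumed TensorFlow summary tag
    "maxTrials": 20,
    "maxParallelTrials": 2,
    "enableTrialEarlyStopping": True,
    "params": [
        {
            "parameterName": "learning_rate",
            "type": "DOUBLE",
            "minValue": 1e-4,
            "maxValue": 1e-1,
            "scaleType": "UNIT_LOG_SCALE",
        },
        {
            "parameterName": "hidden_units",
            "type": "DISCRETE",
            "discreteValues": [64, 128, 256],
        },
    ],
}
```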
GoogleCloudMlV1__IntegratedGradientsAttribution
Attributes credit by computing the Aumann-Shapley value taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365Fields | |
---|---|
numIntegralSteps |
Number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range. |
GoogleCloudMlV1__Job
Represents a training or prediction job.Fields | |
---|---|
createTime |
Output only. When the job was created. |
endTime |
Output only. When the job processing was completed. |
errorMessage |
Output only. The details of a failure or a cancellation. |
etag |
|
jobId |
Required. The user-specified id of the job. |
jobPosition |
Output only. This field only takes effect when the job is in the QUEUED state. If it's positive, it indicates the job's position in the job scheduler. It's 0 when the job is already scheduled. |
labels |
Optional. One or more labels that you can add, to organize your jobs. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels. |
predictionInput |
Input parameters to create a prediction job. |
predictionOutput |
The current prediction job result. |
startTime |
Output only. When the job processing was started. |
state |
Output only. The detailed state of a job. |
Enum type. Can be one of the following: | |
STATE_UNSPECIFIED |
The job state is unspecified. |
QUEUED |
The job has been just created and processing has not yet begun. |
PREPARING |
The service is preparing to run the job. |
RUNNING |
The job is in progress. |
SUCCEEDED |
The job completed successfully. |
FAILED |
The job failed. error_message should contain the details of the failure. |
CANCELLING |
The job is being cancelled. error_message should describe the reason for the cancellation. |
CANCELLED |
The job has been cancelled. error_message should describe the reason for the cancellation. |
trainingInput |
Input parameters to create a training job. |
trainingOutput |
The current training job result. |
GoogleCloudMlV1__ListJobsResponse
Response message for the ListJobs method.Fields | |
---|---|
jobs[] |
The list of jobs. |
nextPageToken |
Optional. Pass this token as the page_token field of the request for a subsequent call. |
GoogleCloudMlV1__ListLocationsResponse
(No description provided)Fields | |
---|---|
locations[] |
Locations where at least one type of CMLE capability is available. |
nextPageToken |
Optional. Pass this token as the page_token field of the request for a subsequent call. |
GoogleCloudMlV1__ListModelsResponse
Response message for the ListModels method.Fields | |
---|---|
models[] |
The list of models. |
nextPageToken |
Optional. Pass this token as the page_token field of the request for a subsequent call. |
GoogleCloudMlV1__ListOptimalTrialsResponse
The response message for the ListOptimalTrials method.Fields | |
---|---|
trials[] |
The Pareto-optimal trials for a multi-objective study, or the optimal trial for a single-objective study. See https://en.wikipedia.org/wiki/Pareto_efficiency for the definition of Pareto optimality. |
GoogleCloudMlV1__ListStudiesResponse
(No description provided)Fields | |
---|---|
studies[] |
The studies associated with the project. |
GoogleCloudMlV1__ListTrialsResponse
The response message for the ListTrials method.Fields | |
---|---|
trials[] |
The trials associated with the study. |
GoogleCloudMlV1__ListVersionsResponse
Response message for the ListVersions method.Fields | |
---|---|
nextPageToken |
Optional. Pass this token as the page_token field of the request for a subsequent call. |
versions[] |
The list of versions. |
GoogleCloudMlV1__Location
(No description provided)Fields | |
---|---|
capabilities[] |
Capabilities available in the location. |
name |
(No description provided) |
GoogleCloudMlV1__ManualScaling
Options for manually scaling a model.Fields | |
---|---|
nodes |
The number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed, so the cost of operating this model will be proportional to |
GoogleCloudMlV1__Measurement
A message representing a measurement.Fields | |
---|---|
elapsedTime |
Output only. Time that the trial has been running at the point of this measurement. |
metrics[] |
Provides a list of metrics that act as inputs into the objective function. |
stepCount |
The number of steps a machine learning model has been trained for. Must be non-negative. |
GoogleCloudMlV1__MetricSpec
MetricSpec contains the specifications to use to calculate the desired nodes count when autoscaling is enabled.Fields | |
---|---|
name |
The metric name. |
Enum type. Can be one of the following: | |
METRIC_NAME_UNSPECIFIED |
Unspecified MetricName. |
CPU_USAGE |
CPU usage. |
GPU_DUTY_CYCLE |
GPU duty cycle. |
target |
Target specifies the target value for the given metric; once the real metric deviates from the threshold by a certain percentage, the node count changes. |
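As a short sketch of how AutoScaling and MetricSpec fit together in a version body, consider the following; the node counts and the CPU target value are assumptions chosen for illustration, not documented defaults.

```python
# Illustrative autoscaling block: keep at least one node warm, scale up to
# five nodes, and target roughly 60% CPU usage per node.
auto_scaling = {
    "minNodes": 1,
    "maxNodes": 5,
    "metrics": [
        {"name": "CPU_USAGE", "target": 60},
    ],
}
```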
GoogleCloudMlV1__Model
Represents a machine learning solution. A model can have multiple versions, each of which is a deployed, trained model ready to receive prediction requests. The model itself is just a container.Fields | |
---|---|
defaultVersion |
Output only. The default version of the model. This version will be used to handle prediction requests that do not specify a version. You can change the default version by calling projects.models.versions.setDefault. |
description |
Optional. The description specified for the model when it was created. |
etag |
|
labels |
Optional. One or more labels that you can add, to organize your models. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels. Note that this field is not updatable for mls1* models. |
name |
Required. The name specified for the model when it was created. The model name must be unique within the project it is created in. |
onlinePredictionConsoleLogging |
Optional. If true, online prediction nodes send |
onlinePredictionLogging |
Optional. If true, online prediction access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each request. Note that logs may incur a cost, especially if your project receives prediction requests at a high queries per second rate (QPS). Estimate your costs before enabling this option. Default is false. |
regions[] |
Optional. The list of regions where the model is going to be deployed. Only one region per model is supported. Defaults to 'us-central1' if nothing is set. See the available regions for AI Platform services. Note: * No matter where a model is deployed, it can always be accessed by users from anywhere, both for online and batch prediction. * The region for a batch prediction job is set by the region field when submitting the batch prediction job and does not take its value from this field. |
GoogleCloudMlV1__OperationMetadata
Represents the metadata of the long-running operation.Fields | |
---|---|
createTime |
The time the operation was submitted. |
endTime |
The time operation processing completed. |
isCancellationRequested |
Indicates whether a request to cancel this operation has been made. |
labels |
The user labels, inherited from the model or the model version being operated on. |
modelName |
Contains the name of the model associated with the operation. |
operationType |
The operation type. |
Enum type. Can be one of the following: | |
OPERATION_TYPE_UNSPECIFIED |
Unspecified operation type. |
CREATE_VERSION |
An operation to create a new version. |
DELETE_VERSION |
An operation to delete an existing version. |
DELETE_MODEL |
An operation to delete an existing model. |
UPDATE_MODEL |
An operation to update an existing model. |
UPDATE_VERSION |
An operation to update an existing version. |
UPDATE_CONFIG |
An operation to update project configuration. |
projectNumber |
Contains the project number associated with the operation. |
startTime |
The time operation processing started. |
version |
Contains the version associated with the operation. |
GoogleCloudMlV1__ParameterSpec
Represents a single hyperparameter to optimize.Fields | |
---|---|
categoricalValues[] |
Required if type is CATEGORICAL. |
discreteValues[] |
Required if type is DISCRETE. |
maxValue |
Required if type is DOUBLE or INTEGER. |
minValue |
Required if type is DOUBLE or INTEGER. |
parameterName |
Required. The parameter name must be unique amongst all ParameterConfigs in a HyperparameterSpec message. E.g., "learning_rate". |
scaleType |
Optional. How the parameter should be scaled to the hypercube. Leave unset for categorical parameters. Some kind of scaling is strongly recommended for real or integral parameters (e.g., UNIT_LINEAR_SCALE). |
Enum type. Can be one of the following: | |
NONE |
By default, no scaling is applied. |
UNIT_LINEAR_SCALE |
Scales the feasible space to (0, 1) linearly. |
UNIT_LOG_SCALE |
Scales the feasible space logarithmically to (0, 1). The entire feasible space must be strictly positive. |
UNIT_REVERSE_LOG_SCALE |
Scales the feasible space "reverse" logarithmically to (0, 1). The result is that values close to the top of the feasible space are spread out more than points near the bottom. The entire feasible space must be strictly positive. |
type |
Required. The type of the parameter. |
Enum type. Can be one of the following: | |
PARAMETER_TYPE_UNSPECIFIED |
You must specify a valid type. Using this unspecified type will result in an error. |
DOUBLE |
Type for real-valued parameters. |
INTEGER |
Type for integral parameters. |
CATEGORICAL |
The parameter is categorical, with a value chosen from the categories field. |
DISCRETE |
The parameter is real valued, with a fixed set of feasible points. If type==DISCRETE, feasible_points must be provided, and {min_value, max_value} will be ignored. |
GoogleCloudMlV1__PredictRequest
Request for predictions to be issued against a trained model.Fields | |
---|---|
httpBody |
Required. The prediction request body. Refer to the request body details section for more information on how to structure your request. |
GoogleCloudMlV1__PredictionInput
Represents input parameters for a prediction job.Fields | |
---|---|
batchSize |
Optional. Number of records per batch, defaults to 64. The service will buffer batch_size number of records in memory before invoking one Tensorflow prediction call internally. So take the record size and memory available into consideration when setting this parameter. |
dataFormat |
Required. The format of the input data files. |
Enum type. Can be one of the following: | |
DATA_FORMAT_UNSPECIFIED |
Unspecified format. |
JSON |
Each line of the file is a JSON dictionary representing one record. |
TEXT |
Deprecated. Use JSON instead. |
TF_RECORD |
The source file is a TFRecord file. Currently available only for input data. |
TF_RECORD_GZIP |
The source file is a GZIP-compressed TFRecord file. Currently available only for input data. |
CSV |
Values are comma-separated rows, with keys in a separate file. Currently available only for output data. |
inputPaths[] |
Required. The Cloud Storage location of the input data files. May contain wildcards. |
maxWorkerCount |
Optional. The maximum number of workers to be used for parallel processing. Defaults to 10 if not specified. |
modelName |
Use this field if you want to use the default version for the specified model. The string must use the following format: |
outputDataFormat |
Optional. Format of the output data files, defaults to JSON. |
Enum type. Can be one of the following: | |
DATA_FORMAT_UNSPECIFIED |
Unspecified format. |
JSON |
Each line of the file is a JSON dictionary representing one record. |
TEXT |
Deprecated. Use JSON instead. |
TF_RECORD |
The source file is a TFRecord file. Currently available only for input data. |
TF_RECORD_GZIP |
The source file is a GZIP-compressed TFRecord file. Currently available only for input data. |
CSV |
Values are comma-separated rows, with keys in a separate file. Currently available only for output data. |
outputPath |
Required. The output Google Cloud Storage location. |
region |
Required. The Google Compute Engine region to run the prediction job in. See the available regions for AI Platform services. |
runtimeVersion |
Optional. The AI Platform runtime version to use for this batch prediction. If not set, AI Platform will pick the runtime version used during the CreateVersion request for this model version, or choose the latest stable version when model version information is not available such as when the model is specified by uri. |
signatureName |
Optional. The name of the signature defined in the SavedModel to use for this job. Please refer to SavedModel for information about how to use signatures. Defaults to DEFAULT_SERVING_SIGNATURE_DEF_KEY, which is "serving_default". |
uri |
Use this field if you want to specify a Google Cloud Storage path for the model to use. |
versionName |
Use this field if you want to specify a version of the model to use. The string is formatted the same way as |
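For reference, here is a hedged sketch of a batch prediction input using these fields; the bucket paths are placeholders, the modelName format is an assumption (the documented format string is truncated above), and the string encoding of the int64 counts is likewise assumed.

```python
# Illustrative predictionInput for a batch prediction job.
prediction_input = {
    "dataFormat": "JSON",
    "inputPaths": ["gs://example-bucket/inputs/*.json"],
    "outputPath": "gs://example-bucket/outputs/",
    "region": "us-central1",
    "modelName": "projects/example-project/models/example_model",  # assumed format
    "maxWorkerCount": "10",  # int64 values commonly serialized as strings
    "batchSize": "64",
}
```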
GoogleCloudMlV1__PredictionOutput
Represents results of a prediction job.Fields | |
---|---|
errorCount |
The number of data instances which resulted in errors. |
nodeHours |
Node hours used by the batch prediction job. |
outputPath |
The output Google Cloud Storage location provided at the job creation time. |
predictionCount |
The number of generated predictions. |
GoogleCloudMlV1__ReplicaConfig
Represents the configuration for a replica in a cluster.Fields | |
---|---|
acceleratorConfig |
Represents the type and number of accelerators used by the replica. Learn about restrictions on accelerator configurations for training. |
containerArgs[] |
Arguments to the entrypoint command. The following rules apply for container_command and container_args: - If you do not supply command or args: The defaults defined in the Docker image are used. - If you supply a command but no args: The default EntryPoint and the default Cmd defined in the Docker image are ignored. Your command is run without any arguments. - If you supply only args: The default Entrypoint defined in the Docker image is run with the args that you supplied. - If you supply a command and args: The default Entrypoint and the default Cmd defined in the Docker image are ignored. Your command is run with your args. It cannot be set if custom container image is not provided. Note that this field and [TrainingInput.args] are mutually exclusive, i.e., both cannot be set at the same time. |
containerCommand[] |
The command with which the replica's custom container is run. If provided, it will override default ENTRYPOINT of the docker image. If not provided, the docker image's ENTRYPOINT is used. It cannot be set if custom container image is not provided. Note that this field and [TrainingInput.args] are mutually exclusive, i.e., both cannot be set at the same time. |
diskConfig |
Represents the configuration of disk options. |
imageUri |
The Docker image to run on the replica. This image must be in Container Registry. Learn more about configuring custom containers. |
tpuTfVersion |
The AI Platform runtime version that includes a TensorFlow version matching the one used in the custom container. This field is required if the replica is a TPU worker that uses a custom container. Otherwise, do not specify this field. This must be a runtime version that currently supports training with TPUs. Note that the version of TensorFlow included in a runtime version may differ from the numbering of the runtime version itself, because it may have a different patch version. In this field, you must specify the runtime version (TensorFlow minor version). For example, if your custom container runs TensorFlow |
GoogleCloudMlV1__RequestLoggingConfig
Configuration for logging request-response pairs to a BigQuery table. Online prediction requests to a model version and the responses to these requests are converted to raw strings and saved to the specified BigQuery table. Logging is constrained by BigQuery quotas and limits. If your project exceeds BigQuery quotas or limits, AI Platform Prediction does not log request-response pairs, but it continues to serve predictions. If you are using continuous evaluation, you do not need to specify this configuration manually. Setting up continuous evaluation automatically enables logging of request-response pairs.Fields | |
---|---|
bigqueryTableName |
Required. Fully qualified BigQuery table name in the following format: "project_id.dataset_name.table_name". The specified table must already exist, and the "Cloud ML Service Agent" for your project must have permission to write to it. The table must have the following schema: model (STRING, REQUIRED), model_version (STRING, REQUIRED), time (TIMESTAMP, REQUIRED), raw_data (STRING, REQUIRED), raw_prediction (STRING, NULLABLE), groundtruth (STRING, NULLABLE). |
samplingPercentage |
Percentage of requests to be logged, expressed as a fraction from 0 to 1. For example, if you want to log 10% of requests, enter 0.1. |
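A minimal sketch of a request-response logging configuration follows; the table name is a placeholder for an existing BigQuery table with the schema described above.

```python
# Log roughly 10% of online prediction requests to an existing BigQuery table.
request_logging_config = {
    "bigqueryTableName": "example_project.prediction_logs.requests",  # placeholder
    "samplingPercentage": 0.1,
}
```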
GoogleCloudMlV1__RouteMap
Specifies HTTP paths served by a custom container. AI Platform Prediction sends requests to these paths on the container; the custom container must run an HTTP server that responds to these requests with appropriate responses. Read Custom container requirements for details on how to create your container image to meet these requirements.Fields | |
---|---|
health |
HTTP path on the container to send health checks to. AI Platform Prediction intermittently sends GET requests to this path on the container's IP address and port to check that the container is healthy. Read more about health checks. For example, if you set this field to |
predict |
HTTP path on the container to send prediction requests to. AI Platform Prediction forwards requests sent using projects.predict to this path on the container's IP address and port. AI Platform Prediction then returns the container's response in the API response. For example, if you set this field to |
GoogleCloudMlV1__SampledShapleyAttribution
An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.Fields | |
---|---|
numPaths |
The number of feature permutations to consider when approximating the Shapley values. |
GoogleCloudMlV1__Scheduling
All parameters related to scheduling of training jobs.Fields | |
---|---|
maxRunningTime |
Optional. The maximum job running time, expressed in seconds. The field can contain up to nine fractional digits, terminated by s. |
maxWaitTime |
Optional. The maximum job wait time, expressed in seconds. The field can contain up to nine fractional digits, terminated by s. |
priority |
Optional. Job scheduling will be based on this priority, which is in the range [0, 1000]. The bigger the number, the higher the priority. Defaults to 0 if not set. If there are multiple jobs requesting the same type of accelerators, higher-priority jobs will be scheduled before lower-priority ones. |
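A small sketch of a scheduling block is shown below, under the assumption that durations use the google.protobuf.Duration JSON form (seconds with an s suffix); the specific values are arbitrary.

```python
# Illustrative scheduling options for a training job.
scheduling = {
    "maxRunningTime": "86400s",  # assumed Duration JSON form: seconds + "s"
    "maxWaitTime": "3600s",
    "priority": 100,
}
```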
GoogleCloudMlV1__Study
A message representing a Study.Fields | |
---|---|
createTime |
Output only. Time at which the study was created. |
inactiveReason |
Output only. A human readable reason why the Study is inactive. This should be empty if a study is ACTIVE or COMPLETED. |
name |
Output only. The name of a study. |
state |
Output only. The detailed state of a study. |
Enum type. Can be one of the following: | |
STATE_UNSPECIFIED |
The study state is unspecified. |
ACTIVE |
The study is active. |
INACTIVE |
The study is stopped due to an internal error. |
COMPLETED |
The study is done when the service exhausts the parameter search space or max_trial_count is reached. |
studyConfig |
Required. Configuration of the study. |
GoogleCloudMlV1__StudyConfig
Represents configuration of a study.Fields | |
---|---|
algorithm |
The search algorithm specified for the study. |
Enum type. Can be one of the following: | |
ALGORITHM_UNSPECIFIED |
The default algorithm used by the Cloud AI Platform Vizier service. |
GAUSSIAN_PROCESS_BANDIT |
Gaussian Process Bandit. |
GRID_SEARCH |
Simple grid search within the feasible space. To use grid search, all parameters must be INTEGER, CATEGORICAL, or DISCRETE. |
RANDOM_SEARCH |
Simple random search within the feasible space. |
automatedStoppingConfig |
Configuration for automated stopping of unpromising Trials. |
metrics[] |
Metric specs for the study. |
parameters[] |
Required. The set of parameters to tune. |
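Combining the StudyConfig, MetricSpec, ParameterSpec, and AutomatedStoppingConfig messages above, here is a hedged sketch of a study configuration; the metric and parameter names, bounds, and categories are invented for illustration.

```python
# Illustrative Vizier study configuration.
study_config = {
    "algorithm": "ALGORITHM_UNSPECIFIED",  # let the service pick its default
    "metrics": [
        {"metric": "val_accuracy", "goal": "MAXIMIZE"},
    ],
    "parameters": [
        {
            "parameter": "dropout",
            "type": "DOUBLE",
            "doubleValueSpec": {"minValue": 0.0, "maxValue": 0.5},
            "scaleType": "UNIT_LINEAR_SCALE",
        },
        {
            "parameter": "optimizer",
            "type": "CATEGORICAL",
            "categoricalValueSpec": {"values": ["adam", "sgd"]},
        },
    ],
    "automatedStoppingConfig": {
        "medianAutomatedStoppingConfig": {"useElapsedTime": False},
    },
}
```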
GoogleCloudMlV1__SuggestTrialsMetadata
Metadata field of a google.longrunning.Operation associated with a SuggestTrialsRequest.Fields | |
---|---|
clientId |
The identifier of the client that is requesting the suggestion. |
createTime |
The time operation was submitted. |
study |
The name of the study that the trial belongs to. |
suggestionCount |
The number of suggestions requested. |
GoogleCloudMlV1__SuggestTrialsRequest
The request message for the SuggestTrial service method.Fields | |
---|---|
clientId |
Required. The identifier of the client that is requesting the suggestion. If multiple SuggestTrialsRequests have the same |
suggestionCount |
Required. The number of suggestions requested. |
GoogleCloudMlV1__SuggestTrialsResponse
This message will be placed in the response field of a completed google.longrunning.Operation associated with a SuggestTrials request.Fields | |
---|---|
endTime |
The time at which operation processing completed. |
startTime |
The time at which the operation was started. |
studyState |
The state of the study. |
Enum type. Can be one of the following: | |
STATE_UNSPECIFIED |
The study state is unspecified. |
ACTIVE |
The study is active. |
INACTIVE |
The study is stopped due to an internal error. |
COMPLETED |
The study is done when the service exhausts the parameter search space or max_trial_count is reached. |
trials[] |
A list of trials. |
GoogleCloudMlV1__TrainingInput
Represents input parameters for a training job. When using the gcloud command to submit your training job, you can specify the input parameters as command-line arguments and/or in a YAML configuration file referenced from the --config command-line argument. For details, see the guide to submitting a training job.Fields | |
---|---|
args[] |
Optional. Command-line arguments passed to the training application when it starts. If your job uses a custom container, then the arguments are passed to the container's |
enableWebAccess |
Optional. Whether you want AI Platform Training to enable interactive shell access to training containers. If set to |
encryptionConfig |
Optional. Options for using customer-managed encryption keys (CMEK) to protect resources created by a training job, instead of using Google's default encryption. If this is set, then all resources created by the training job will be encrypted with the customer-managed encryption key that you specify. Learn how and when to use CMEK with AI Platform Training. |
evaluatorConfig |
Optional. The configuration for evaluators. You should only set |
evaluatorCount |
Optional. The number of evaluator replicas to use for the training job. Each replica in the cluster will be of the type specified in evaluatorType. |
evaluatorType |
Optional. Specifies the type of virtual machine to use for your training job's evaluator nodes. The supported values are the same as those described in the entry for masterType. |
hyperparameters |
Optional. The set of Hyperparameters to tune. |
jobDir |
Optional. A Google Cloud Storage path in which to store training outputs and other data needed for training. This path is passed to your TensorFlow program as the '--job-dir' command-line argument. The benefit of specifying this field is that Cloud ML validates the path for use in training. |
masterConfig |
Optional. The configuration for your master worker. You should only set |
masterType |
Optional. Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. |
network |
Optional. The full name of the Compute Engine network to which the Job is peered. For example, |
packageUris[] |
Required. The Google Cloud Storage location of the packages with the training program and any additional dependencies. The maximum number of package URIs is 100. |
parameterServerConfig |
Optional. The configuration for parameter servers. You should only set |
parameterServerCount |
Optional. The number of parameter server replicas to use for the training job. Each replica in the cluster will be of the type specified in parameterServerType. |
parameterServerType |
Optional. Specifies the type of virtual machine to use for your training job's parameter server. The supported values are the same as those described in the entry for masterType. |
pythonModule |
Required. The Python module name to run after installing the packages. |
pythonVersion |
Optional. The version of Python used in training. You must either specify this field or specify |
region |
Required. The region to run the training job in. See the available regions for AI Platform Training. |
runtimeVersion |
Optional. The AI Platform runtime version to use for training. You must either specify this field or specify |
scaleTier |
Required. Specifies the machine types, the number of replicas for workers and parameter servers. |
Enum type. Can be one of the following: | |
BASIC |
A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets. |
STANDARD_1 |
Many workers and a few parameter servers. |
PREMIUM_1 |
A large number of workers with many parameter servers. |
BASIC_GPU |
A single worker instance with a GPU. |
BASIC_TPU |
A single worker instance with a Cloud TPU. |
CUSTOM |
The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You must set TrainingInput.masterType to specify the type of machine to use for your master node. This is the only required setting. * You may set TrainingInput.workerCount to specify the number of workers to use. If you specify one or more workers, you must also set TrainingInput.workerType to specify the type of machine to use for your worker nodes. * You may set TrainingInput.parameterServerCount to specify the number of parameter servers to use. If you specify one or more parameter servers, you must also set TrainingInput.parameterServerType to specify the type of machine to use for your parameter servers. Note that all of your workers must use the same machine type, which can be different from your parameter server type and master type. Your parameter servers must likewise use the same machine type, which can be different from your worker type and master type. |
scheduling |
Optional. Scheduling options for a training job. |
serviceAccount |
Optional. The email address of a service account to use when running the training application. You must have the |
useChiefInTfConfig |
Optional. Use |
workerConfig |
Optional. The configuration for workers. You should only set |
workerCount |
Optional. The number of worker replicas to use for the training job. Each replica in the cluster will be of the type specified in workerType. |
workerType |
Optional. Specifies the type of virtual machine to use for your training job's worker nodes. The supported values are the same as those described in the entry for masterType. |
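Putting several TrainingInput fields together, here is a hedged sketch of a Job body for a custom training run; the project, bucket, module, machine types, and runtime version are placeholders or assumptions, and the comment about projects.jobs.create only indicates where such a body would typically be sent.

```python
import json

# Illustrative Job resource for a custom training run.
job = {
    "jobId": "example_training_job_001",
    "trainingInput": {
        "scaleTier": "CUSTOM",
        "masterType": "n1-standard-8",   # assumed machine type
        "workerType": "n1-standard-8",
        "workerCount": "2",              # int64 fields as strings in JSON
        "region": "us-central1",
        "packageUris": ["gs://example-bucket/packages/trainer-0.1.tar.gz"],
        "pythonModule": "trainer.task",
        "runtimeVersion": "2.11",        # assumed runtime version
        "pythonVersion": "3.7",
        "jobDir": "gs://example-bucket/job-dir",
        "args": ["--epochs", "10"],
    },
}

# This body would typically be POSTed to projects.jobs.create; printing it
# here just shows the JSON shape.
print(json.dumps(job, indent=2))
```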
GoogleCloudMlV1__TrainingOutput
Represents results of a training job. Output only.Fields | |
---|---|
builtInAlgorithmOutput |
Details related to built-in algorithms jobs. Only set for built-in algorithms jobs. |
completedTrialCount |
The number of hyperparameter tuning trials that completed successfully. Only set for hyperparameter tuning jobs. |
consumedMLUnits |
The amount of ML units consumed by the job. |
hyperparameterMetricTag |
The TensorFlow summary tag name used for optimizing hyperparameter tuning trials. See |
isBuiltInAlgorithmJob |
Whether this job is a built-in Algorithm job. |
isHyperparameterTuningJob |
Whether this job is a hyperparameter tuning job. |
trials[] |
Results for individual Hyperparameter trials. Only set for hyperparameter tuning jobs. |
webAccessUris |
Output only. URIs for accessing interactive shells (one URI for each training node). Only available if training_input.enable_web_access is true. |
GoogleCloudMlV1__Trial
A message representing a trial.Fields | |
---|---|
clientId |
Output only. The identifier of the client that originally requested this trial. |
endTime |
Output only. Time at which the trial's status changed to COMPLETED. |
finalMeasurement |
The final measurement containing the objective value. |
infeasibleReason |
Output only. A human readable string describing why the trial is infeasible. This should only be set if trial_infeasible is true. |
measurements[] |
A list of measurements that are strictly lexicographically ordered by their induced tuples (steps, elapsed_time). These are used for early stopping computations. |
name |
Output only. Name of the trial assigned by the service. |
parameters[] |
The parameters of the trial. |
startTime |
Output only. Time at which the trial was started. |
state |
The detailed state of a trial. |
Enum type. Can be one of the following: | |
STATE_UNSPECIFIED |
The trial state is unspecified. |
REQUESTED |
Indicates that a specific trial has been requested, but it has not yet been suggested by the service. |
ACTIVE |
Indicates that the trial has been suggested. |
COMPLETED |
Indicates that the trial is done, and either has a final_measurement set, or is marked as trial_infeasible. |
STOPPING |
Indicates that the trial should stop according to the service. |
trialInfeasible |
Output only. If true, the parameters in this trial are not attempted again. |
GoogleCloudMlV1__Version
Represents a version of the model. Each version is a trained model deployed in the cloud, ready to handle prediction requests. A model can have multiple versions. You can get information about all of the versions of a given model by calling projects.models.versions.list.Fields | |
---|---|
acceleratorConfig |
Optional. Accelerator config for using GPUs for online prediction (beta). Only specify this field if you have specified a Compute Engine (N1) machine type in the machineType field. |
autoScaling |
Automatically scale the number of nodes used to serve the model in response to increases and decreases in traffic. Care should be taken to ramp up traffic according to the model's ability to scale or you will start seeing increases in latency and 429 response codes. |
container |
Optional. Specifies a custom container to use for serving predictions. If you specify this field, then |
createTime |
Output only. The time the version was created. |
deploymentUri |
The Cloud Storage URI of a directory containing trained model artifacts to be used to create the model version. See the guide to deploying models for more information. The total number of files under this directory must not exceed 1000. During projects.models.versions.create, AI Platform Prediction copies all files from the specified directory to a location managed by the service. From then on, AI Platform Prediction uses these copies of the model artifacts to serve predictions, not the original files in Cloud Storage, so this location is useful only as a historical record. If you specify container, then this field is optional. Otherwise, it is required. Learn how to use this field with a custom container. |
description |
Optional. The description specified for the version when it was created. |
errorMessage |
Output only. The details of a failure or a cancellation. |
etag |
|
explanationConfig |
Optional. Configures explainability features on the model's version. Some explanation features require additional metadata to be loaded as part of the model payload. |
framework |
Optional. The machine learning framework AI Platform uses to train this version of the model. Valid values are TENSORFLOW, SCIKIT_LEARN, and XGBOOST. |
Enum type. Can be one of the following: | |
FRAMEWORK_UNSPECIFIED |
Unspecified framework. Assigns a value based on the file suffix. |
TENSORFLOW |
Tensorflow framework. |
SCIKIT_LEARN |
Scikit-learn framework. |
XGBOOST |
XGBoost framework. |
isDefault |
Output only. If true, this version will be used to handle prediction requests that do not specify a version. You can change the default version by calling projects.models.versions.setDefault. |
labels |
Optional. One or more labels that you can add, to organize your model versions. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels. Note that this field is not updatable for mls1* models. |
lastMigrationModelId |
Output only. The AI Platform (Unified) |
lastMigrationTime |
Output only. The last time this version was successfully migrated to AI Platform (Unified). |
lastUseTime |
Output only. The time the version was last used for prediction. |
machineType |
Optional. The type of machine on which to serve the model. Currently only applies to online prediction service. To learn about valid values for this field, read Choosing a machine type for online prediction. If this field is not specified and you are using a regional endpoint, then the machine type defaults to |
manualScaling |
Manually select the number of nodes to use for serving the model. You should generally use |
name |
Required. The name specified for the version when it was created. The version name must be unique within the model it is created in. |
packageUris[] |
Optional. Cloud Storage paths ( |
predictionClass |
Optional. The fully qualified name (module_name.class_name) of a class that implements the Predictor interface described in this reference field. The module containing this class should be included in a package provided to the |
pythonVersion |
Required. The version of Python used in prediction. The following Python versions are available: * Python '3.7' is available when |
requestLoggingConfig |
Optional. Only specify this field in a projects.models.versions.patch request. Specifying it in a projects.models.versions.create request has no effect. Configures the request-response pair logging on predictions from this Version. |
routes |
Optional. Specifies paths on a custom container's HTTP server where AI Platform Prediction sends certain requests. If you specify this field, then you must also specify the container field. |
runtimeVersion |
Required. The AI Platform runtime version to use for this deployment. For more information, see the runtime version list and how to manage runtime versions. |
serviceAccount |
Optional. Specifies the service account for resource access control. If you specify this field, then you must also specify either the |
state |
Output only. The state of a version. |
Enum type. Can be one of the following: | |
UNKNOWN |
The version state is unspecified. |
READY |
The version is ready for prediction. |
CREATING |
The version is being created. New UpdateVersion and DeleteVersion requests will fail if a version is in the CREATING state. |
FAILED |
The version failed to be created, possibly cancelled. error_message should contain the details of the failure. |
DELETING |
The version is being deleted. New UpdateVersion and DeleteVersion requests will fail if a version is in the DELETING state. |
UPDATING |
The version is being updated. New UpdateVersion and DeleteVersion requests will fail if a version is in the UPDATING state. |
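Finally, here is a hedged sketch of a minimal version body as it might be passed to projects.models.versions.create, using fields documented above; the artifact path, runtime version, and machine type are placeholders or assumptions.

```python
# Illustrative model version using a SavedModel exported to Cloud Storage.
version = {
    "name": "v1",
    "deploymentUri": "gs://example-bucket/model/",  # placeholder artifact path
    "framework": "TENSORFLOW",
    "runtimeVersion": "2.11",                       # assumed runtime version
    "pythonVersion": "3.7",
    "machineType": "n1-standard-2",                 # assumed machine type
    "autoScaling": {"minNodes": 1},
}
```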
GoogleCloudMlV1__XraiAttribution
Attributes credit by computing the XRAI taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 Currently only implemented for models with natural image inputs.Fields | |
---|---|
numIntegralSteps |
Number of steps for approximating the path integral. A good starting value is 50; gradually increase it until the sum-to-diff property is met within the desired error range. |
GoogleIamV1__AuditConfig
Specifies the audit configuration for a service. The configuration determines which permission types are logged, and what identities, if any, are exempted from logging. An AuditConfig must have one or more AuditLogConfigs. If there are AuditConfigs for both allServices and a specific service, the union of the two AuditConfigs is used for that service: the log_types specified in each AuditConfig are enabled, and the exempted_members in each AuditLogConfig are exempted. Example Policy with multiple AuditConfigs:

```json
{
  "audit_configs": [
    {
      "service": "allServices",
      "audit_log_configs": [
        {
          "log_type": "DATA_READ",
          "exempted_members": ["user:jose@example.com"]
        },
        { "log_type": "DATA_WRITE" },
        { "log_type": "ADMIN_READ" }
      ]
    },
    {
      "service": "sampleservice.googleapis.com",
      "audit_log_configs": [
        { "log_type": "DATA_READ" },
        {
          "log_type": "DATA_WRITE",
          "exempted_members": ["user:aliya@example.com"]
        }
      ]
    }
  ]
}
```

For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ logging. It also exempts jose@example.com from DATA_READ logging, and aliya@example.com from DATA_WRITE logging.
Fields | |
---|---|
auditLogConfigs[] |
The configuration for logging of each type of permission. |
service |
Specifies a service that will be enabled for audit logging. For example, |
GoogleIamV1__AuditLogConfig
Provides the configuration for logging a type of permission. Example: { "audit_log_configs": [ { "log_type": "DATA_READ", "exempted_members": [ "user:jose@example.com" ] }, { "log_type": "DATA_WRITE" } ] } This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting jose@example.com from DATA_READ logging.Fields | |
---|---|
exemptedMembers[] |
Specifies the identities that do not cause logging for this type of permission. Follows the same format of Binding.members. |
logType |
The log type that this config enables. |
Enum type. Can be one of the following: | |
LOG_TYPE_UNSPECIFIED |
Default case. This value should never be used. |
ADMIN_READ |
Admin reads. Example: CloudIAM getIamPolicy |
DATA_WRITE |
Data writes. Example: CloudSQL Users create |
DATA_READ |
Data reads. Example: CloudSQL Users list |
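Putting GoogleIamV1__AuditConfig and GoogleIamV1__AuditLogConfig together, the fragment below reproduces the merging example from the AuditConfig description as the auditConfigs portion of a Policy in its REST (camelCase) form. It is a plain data sketch; the service name and exempted members are the same illustrative values used above.

```python
# auditConfigs fragment of an IAM Policy, mirroring the example above:
# allServices enables DATA_READ, DATA_WRITE, and ADMIN_READ (exempting jose@),
# while sampleservice.googleapis.com also exempts aliya@ from DATA_WRITE.
audit_configs = [
    {
        "service": "allServices",
        "auditLogConfigs": [
            {"logType": "DATA_READ", "exemptedMembers": ["user:jose@example.com"]},
            {"logType": "DATA_WRITE"},
            {"logType": "ADMIN_READ"},
        ],
    },
    {
        "service": "sampleservice.googleapis.com",
        "auditLogConfigs": [
            {"logType": "DATA_READ"},
            {"logType": "DATA_WRITE", "exemptedMembers": ["user:aliya@example.com"]},
        ],
    },
]
```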
GoogleIamV1__Binding
Associates members, or principals, with a role.
Fields | |
---|---|
condition |
The condition that is associated with this binding. If the condition evaluates to |
members[] |
Specifies the principals requesting access for a Google Cloud resource. |
role |
Role that is assigned to the list of |
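For concreteness, a single binding in the REST representation is just an object with role, members, and an optional condition. The sketch below grants a role to two principals and attaches a time-based condition; the role name and principals are placeholders, and the condition mirrors the example in the GoogleIamV1__Policy section below.

```python
# One IAM binding: a role, the principals it applies to, and an optional
# CEL condition that must evaluate to true for the binding to apply.
binding = {
    "role": "roles/ml.developer",  # placeholder predefined role
    "members": [
        "user:data-scientist@example.com",
        "serviceAccount:trainer@my-project.iam.gserviceaccount.com",
    ],
    "condition": {
        "title": "expirable access",
        "description": "Does not grant access after Sep 2020",
        "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')",
    },
}
```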
GoogleIamV1__Policy
An Identity and Access Management (IAM) policy, which specifies access controls for Google Cloud resources. A Policy is a collection of bindings. A binding binds one or more members, or principals, to a single role. Principals can be user accounts, service accounts, Google groups, and domains (such as G Suite). A role is a named list of permissions; each role can be an IAM predefined role or a user-created custom role. For some types of Google Cloud resources, a binding can also specify a condition, which is a logical expression that allows access to a resource only if the expression evaluates to true. A condition can add constraints based on attributes of the request, the resource, or both. To learn which resources support conditions in their IAM policies, see the IAM documentation.
JSON example: { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin", "members": [ "user:mike@example.com", "group:admins@example.com", "domain:google.com", "serviceAccount:my-project-id@appspot.gserviceaccount.com" ] }, { "role": "roles/resourcemanager.organizationViewer", "members": [ "user:eve@example.com" ], "condition": { "title": "expirable access", "description": "Does not grant access after Sep 2020", "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')" } } ], "etag": "BwWWja0YfJA=", "version": 3 }
YAML example: bindings: - members: - user:mike@example.com - group:admins@example.com - domain:google.com - serviceAccount:my-project-id@appspot.gserviceaccount.com role: roles/resourcemanager.organizationAdmin - members: - user:eve@example.com role: roles/resourcemanager.organizationViewer condition: title: expirable access description: Does not grant access after Sep 2020 expression: request.time < timestamp('2020-10-01T00:00:00.000Z') etag: BwWWja0YfJA= version: 3
For a description of IAM and its features, see the IAM documentation.
Fields | |
---|---|
auditConfigs[] |
Specifies cloud audit logging configuration for this policy. |
bindings[] |
Associates a list of |
etag |
|
version |
Specifies the format of the policy. Valid values are |
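Because the etag protects against concurrent edits, the usual pattern is read-modify-write: fetch the policy, change its bindings locally, and write it back with the etag that was read. Below is a minimal sketch, assuming the google-api-python-client discovery bindings for ml v1; the model resource, role, and member are placeholders. Note that policies containing conditional bindings must use version 3.

```python
from googleapiclient import discovery

RESOURCE = "projects/my-project/models/my_model"  # placeholder model resource

ml = discovery.build("ml", "v1")

# 1. Read the current policy; the response carries the etag.
policy = ml.projects().models().getIamPolicy(resource=RESOURCE).execute()

# 2. Modify it locally: append a binding for a placeholder role and member.
policy.setdefault("bindings", []).append({
    "role": "roles/ml.viewer",
    "members": ["user:eve@example.com"],
})

# 3. Write it back. Because the body includes the etag read in step 1,
#    the write fails if someone else changed the policy in the meantime.
updated = ml.projects().models().setIamPolicy(
    resource=RESOURCE, body={"policy": policy}
).execute()
print(updated.get("etag"))
```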
GoogleIamV1__SetIamPolicyRequest
Request message for the SetIamPolicy method.
Fields | |
---|---|
policy |
REQUIRED: The complete policy to be applied to the |
updateMask |
OPTIONAL: A FieldMask specifying which fields of the policy to modify. Only the fields in the mask will be modified. If no mask is provided, the following default mask is used: |
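The request body therefore wraps the policy itself, optionally with an updateMask limiting which policy fields the call may change. The snippet below only sketches the body shape; the values are placeholders and the mask string is an assumption about the comma-separated FieldMask encoding, so check it against the default mask for your call.

```python
# Shape of a SetIamPolicy request body: the complete policy to apply, plus
# an optional FieldMask (comma-separated field paths) restricting the write.
set_iam_policy_body = {
    "policy": {
        "bindings": [
            {"role": "roles/ml.viewer", "members": ["user:eve@example.com"]},
        ],
        "etag": "BwWWja0YfJA=",  # etag previously read via getIamPolicy
    },
    "updateMask": "bindings,etag",  # assumption: only bindings and etag change
}
```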
GoogleIamV1__TestIamPermissionsRequest
Request message for the TestIamPermissions method.
Fields | |
---|---|
permissions[] |
The set of permissions to check for the |
GoogleIamV1__TestIamPermissionsResponse
Response message for the TestIamPermissions method.
Fields | |
---|---|
permissions[] |
A subset of |
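A typical use is capability checking in a client: ask which of a handful of permissions the caller holds on a resource and enable actions accordingly. The sketch below assumes the google-api-python-client discovery bindings for ml v1; the permission strings are illustrative and should be checked against the IAM permissions reference.

```python
from googleapiclient import discovery

RESOURCE = "projects/my-project/models/my_model"  # placeholder model resource

ml = discovery.build("ml", "v1")

# Illustrative permissions to check; verify exact names in the IAM reference.
requested = ["ml.models.predict", "ml.models.getIamPolicy"]

response = ml.projects().models().testIamPermissions(
    resource=RESOURCE, body={"permissions": requested}
).execute()

# The response lists only the subset of permissions the caller holds;
# a missing "permissions" key means none of them were granted.
granted = set(response.get("permissions", []))
print("caller may call predict:", "ml.models.predict" in granted)
```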
GoogleLongrunning__ListOperationsResponse
The response message for Operations.ListOperations.Fields | |
---|---|
nextPageToken |
The standard List next-page token. |
operations[] |
A list of operations that matches the specified filter in the request. |
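Paging works the same way as other List methods: pass each nextPageToken back as pageToken until the response stops returning one. A minimal sketch is below, assuming the google-api-python-client discovery bindings for ml v1 and the standard Operations.ListOperations parameter names; the project is a placeholder.

```python
from googleapiclient import discovery

PARENT = "projects/my-project"  # placeholder project

ml = discovery.build("ml", "v1")


def iter_operations(parent):
    """Yields every operation under the parent, following nextPageToken."""
    page_token = None
    while True:
        response = ml.projects().operations().list(
            name=parent, pageToken=page_token
        ).execute()
        for op in response.get("operations", []):
            yield op
        page_token = response.get("nextPageToken")
        if not page_token:
            return


for op in iter_operations(PARENT):
    print(op["name"], op.get("done", False))
```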
GoogleLongrunning__Operation
This resource represents a long-running operation that is the result of a network API call.Fields | |
---|---|
done |
If the value is |
error |
The error result of the operation in case of failure or cancellation. |
metadata |
Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any. |
name |
The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the |
response |
The normal, successful response of the operation. If the original method returns no data on success, such as |
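Long-running calls (for example versions.create) return an Operation immediately; clients then poll it until done is true and read either error (a google.rpc.Status) or response. A minimal polling sketch is below, assuming the google-api-python-client discovery bindings for ml v1; the operation name and poll interval are placeholders.

```python
import time

from googleapiclient import discovery

# Placeholder operation name as returned by a long-running call.
OPERATION_NAME = "projects/my-project/operations/create_my_model_v1"

ml = discovery.build("ml", "v1")


def wait_for_operation(name, poll_seconds=30):
    """Polls an Operation until done, then returns its response or raises."""
    while True:
        op = ml.projects().operations().get(name=name).execute()
        if op.get("done"):
            if "error" in op:  # google.rpc.Status on failure or cancellation
                raise RuntimeError(
                    "operation failed: {code} {message}".format(
                        code=op["error"].get("code"),
                        message=op["error"].get("message"),
                    )
                )
            return op.get("response", {})
        time.sleep(poll_seconds)
```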
GoogleRpc__Status
The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
Fields | |
---|---|
code |
The status code, which should be an enum value of google.rpc.Code. |
details[] |
A list of messages that carry the error details. There is a common set of message types for APIs to use. |
message |
A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client. |
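When such a status surfaces (for example as Operation.error above), code is a numeric value of the google.rpc.Code enum and message is a developer-facing string. The helper below just pretty-prints one; its code-to-name table covers only a few well-known canonical codes and is not exhaustive.

```python
# A few well-known google.rpc.Code values (not exhaustive).
CANONICAL_CODES = {
    0: "OK",
    3: "INVALID_ARGUMENT",
    5: "NOT_FOUND",
    7: "PERMISSION_DENIED",
    9: "FAILED_PRECONDITION",
    13: "INTERNAL",
}


def describe_status(status):
    """Formats a google.rpc.Status dict (code, message, details) for logs."""
    code = status.get("code", 0)
    name = CANONICAL_CODES.get(code, "CODE_{}".format(code))
    details = status.get("details", [])
    return "{}: {} ({} detail message(s))".format(
        name, status.get("message", ""), len(details)
    )


print(describe_status({"code": 5, "message": "Field: name Error: model not found"}))
```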
GoogleType__Expr
Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec. Example (Comparison): title: "Summary size limit" description: "Determines if a summary is less than 100 chars" expression: "document.summary.size() < 100" Example (Equality): title: "Requestor is owner" description: "Determines if requestor is the document owner" expression: "document.owner == request.auth.claims.email" Example (Logic): title: "Public documents" description: "Determine whether the document should be publicly visible" expression: "document.type != 'private' && document.type != 'internal'" Example (Data Manipulation): title: "Notification string" description: "Create a notification string with a timestamp." expression: "'New message received at ' + string(document.create_time)" The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information.Fields | |
---|---|
description |
Optional. Description of the expression. This is a longer text which describes the expression, e.g. when hovered over it in a UI. |
expression |
Textual representation of an expression in Common Expression Language syntax. |
location |
Optional. String indicating the location of the expression for error reporting, e.g. a file name and a position in the file. |
title |
Optional. Title for the expression, i.e. a short string describing its purpose. This can be used, for example, in UIs which allow entering the expression. |
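In this API the most common place an Expr appears is as the condition on an IAM binding. The sketch below builds the same expiring-access condition used in the GoogleIamV1__Policy example; the expression string is CEL and is evaluated by IAM, not by the client.

```python
from datetime import datetime, timezone

# Build an Expr for an expiring-access IAM condition (CEL syntax).
expiry = datetime(2020, 10, 1, tzinfo=timezone.utc)
condition = {
    "title": "expirable access",
    "description": "Does not grant access after Sep 2020",
    "expression": "request.time < timestamp('{}')".format(
        expiry.strftime("%Y-%m-%dT%H:%M:%S.000Z")
    ),
}

# The resulting dict can be attached to a binding's "condition" field.
print(condition["expression"])
```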