ApplicationConfigs

Message storing the graph of the application.

JSON representation
{
  "nodes": [
    {
      object (Node)
    }
  ],
  "monitoringConfig": {
    object (MonitoringConfig)
  }
}
Fields
nodes[]

object (Node)

A list of nodes in the application graph.

monitoringConfig

object (MonitoringConfig)

Monitoring-related configuration for this application.
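
As an illustration, an application graph with a stream input node feeding an occupancy count node could look like the following sketch. The node names are placeholders and the occupancy count processor name is assumed; only builtin:stream-input is taken from this reference.

{
  "nodes": [
    {
      "name": "stream-input",
      "displayName": "Stream Input",
      "processor": "builtin:stream-input"
    },
    {
      "name": "occupancy-count",
      "displayName": "Occupancy Count",
      "processor": "builtin:occupancy-counting",
      "nodeConfig": {
        "occupancyCountConfig": {
          "enablePeopleCounting": true
        }
      },
      "parents": [
        {
          "parentNode": "stream-input"
        }
      ]
    }
  ],
  "monitoringConfig": {
    "enabled": true
  }
}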

Node

Message describing node object.

JSON representation
{
  "name": string,
  "displayName": string,
  "nodeConfig": {
    object (ProcessorConfig)
  },
  "processor": string,
  "parents": [
    {
      object (InputEdge)
    }
  ],
  "outputAllOutputChannelsToStream": boolean
}
Fields
name

string

Required. A unique name for the node.

displayName

string

A user-friendly display name for the node.

nodeConfig

object (ProcessorConfig)

Node config.

processor

string

The processor name, referring to the chosen processor resource.

parents[]

object (InputEdge)

Parent nodes. An input node should not have a parent node. For V1 Alpha1/Beta, only the media warehouse node can have multiple parents; all other node types can have only one parent.

outputAllOutputChannelsToStream

boolean

By default, the output of the node is only available to downstream nodes. To consume the direct output from an application node, the output must first be sent to Vision AI Streams.

By setting outputAllOutputChannelsToStream to true, App Platform automatically sends all the outputs of the current node to Vision AI Stream resources (one stream per output channel). The output stream resources are created by App Platform automatically during deployment and deleted after application undeployment. Note that this config applies to all Application Instances.

The output stream can be overridden at the instance level by configuring the outputResources section of the Instance resource: producerNode should be the current node, outputResourceBinding should be the output channel name (or left blank if the processor has only one output channel), and outputResource should be the target output stream.
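
As an illustration, an instance-level override using the three fields above could look like the following fragment of an Instance resource; all resource and channel names are placeholders, and the exact shape of the Instance resource is documented separately.

"outputResources": [
  {
    "producerNode": "occupancy-count",
    "outputResourceBinding": "annotated_stream",
    "outputResource": "projects/my-project/locations/us-central1/clusters/my-cluster/streams/my-output-stream"
  }
]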

ProcessorConfig

JSON representation
{

  // Union field processor_config can be only one of the following:
  "videoStreamInputConfig": {
    object (VideoStreamInputConfig)
  },
  "aiEnabledDevicesInputConfig": {
    object (AIEnabledDevicesInputConfig)
  },
  "mediaWarehouseConfig": {
    object (MediaWarehouseConfig)
  },
  "personBlurConfig": {
    object (PersonBlurConfig)
  },
  "occupancyCountConfig": {
    object (OccupancyCountConfig)
  },
  "personVehicleDetectionConfig": {
    object (PersonVehicleDetectionConfig)
  },
  "vertexAutomlVisionConfig": {
    object (VertexAutoMLVisionConfig)
  },
  "vertexAutomlVideoConfig": {
    object (VertexAutoMLVideoConfig)
  },
  "vertexCustomConfig": {
    object (VertexCustomConfig)
  },
  "generalObjectDetectionConfig": {
    object (GeneralObjectDetectionConfig)
  },
  "bigQueryConfig": {
    object (BigQueryConfig)
  },
  "personalProtectiveEquipmentDetectionConfig": {
    object (PersonalProtectiveEquipmentDetectionConfig)
  }
  // End of list of possible types for union field processor_config.
}
Fields

Union field processor_config.

processor_config can be only one of the following:

videoStreamInputConfig

object (VideoStreamInputConfig)

Configs of stream input processor.

aiEnabledDevicesInputConfig

object (AIEnabledDevicesInputConfig)

Config of AI-enabled input devices.

mediaWarehouseConfig

object (MediaWarehouseConfig)

Configs of media warehouse processor.

personBlurConfig

object (PersonBlurConfig)

Configs of person blur processor.

occupancyCountConfig

object (OccupancyCountConfig)

Configs of occupancy count processor.

personVehicleDetectionConfig

object (PersonVehicleDetectionConfig)

Configs of Person Vehicle Detection processor.

vertexAutomlVisionConfig

object (VertexAutoMLVisionConfig)

Configs of Vertex AutoML vision processor.

vertexAutomlVideoConfig

object (VertexAutoMLVideoConfig)

Configs of Vertex AutoML video processor.

vertexCustomConfig

object (VertexCustomConfig)

Configs of Vertex Custom processor.

generalObjectDetectionConfig

object (GeneralObjectDetectionConfig)

Configs of General Object Detection processor.

bigQueryConfig

object (BigQueryConfig)

Configs of BigQuery processor.

personalProtectiveEquipmentDetectionConfig

object (PersonalProtectiveEquipmentDetectionConfig)

Configs of the personal protective equipment detection processor.

VideoStreamInputConfig

Message describing Video Stream Input Config. This message should only be used as a placeholder for the builtin:stream-input processor; the actual stream binding should be specified using the corresponding API.

JSON representation
{
  "streams": [
    string
  ],
  "streamsWithAnnotation": [
    {
      object (StreamWithAnnotation)
    }
  ]
}
Fields
streams[]
(deprecated)

string

streamsWithAnnotation[]
(deprecated)

object (StreamWithAnnotation)

AIEnabledDevicesInputConfig

Message describing AI-enabled Devices Input Config.

MediaWarehouseConfig

Message describing MediaWarehouseConfig.

JSON representation
{
  "corpus": string,
  "region": string,
  "ttl": string
}
Fields
corpus

string

Resource name of the Media Warehouse corpus. Format: projects/${project_id}/locations/${location_id}/corpora/${corpus_id}

region
(deprecated)

string

Deprecated.

ttl

string (Duration format)

The duration for which all media assets, associated metadata, and search documents can exist.

A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".
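
For example, a configuration that stores assets in a placeholder corpus and retains them for 30 days could look like:

{
  "corpus": "projects/my-project/locations/us-central1/corpora/my-corpus",
  "ttl": "2592000s"
}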

PersonBlurConfig

Message describing PersonBlurConfig.

JSON representation
{
  "personBlurType": enum (PersonBlurType),
  "facesOnly": boolean
}
Fields
personBlurType

enum (PersonBlurType)

Person blur type.

facesOnly

boolean

Whether to blur only faces rather than the whole object in the processor.

PersonBlurType

Type of Person Blur

Enums
PERSON_BLUR_TYPE_UNSPECIFIED Person blur type unspecified.
FULL_OCCULUSION Person blur type: full occlusion.
BLUR_FILTER Person blur type: blur filter.
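
For example, a person blur configuration that applies a blur filter to faces only (values are illustrative) could look like:

{
  "personBlurType": "BLUR_FILTER",
  "facesOnly": true
}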

OccupancyCountConfig

Message describing OccupancyCountConfig.

JSON representation
{
  "enablePeopleCounting": boolean,
  "enableVehicleCounting": boolean,
  "enableDwellingTimeTracking": boolean
}
Fields
enablePeopleCounting

boolean

Whether to count the appearances of people; output counts will have 'people' as the key.

enableVehicleCounting

boolean

Whether to count the appearances of vehicles; output counts will have 'vehicle' as the key.

enableDwellingTimeTracking

boolean

Whether to track each individual object's loitering time inside the scene or a specific zone.
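
For example, an occupancy count configuration that counts both people and vehicles and tracks loitering time (values are illustrative) could look like:

{
  "enablePeopleCounting": true,
  "enableVehicleCounting": true,
  "enableDwellingTimeTracking": true
}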

PersonVehicleDetectionConfig

Message describing PersonVehicleDetectionConfig.

JSON representation
{
  "enablePeopleCounting": boolean,
  "enableVehicleCounting": boolean
}
Fields
enablePeopleCounting

boolean

At least one of the enablePeopleCounting and enableVehicleCounting fields must be set to true. Whether to count the appearances of people; output counts will have 'people' as the key.

enableVehicleCounting

boolean

Whether to count the appearances of vehicles; output counts will have 'vehicle' as the key.

VertexAutoMLVisionConfig

Message of configurations of Vertex AutoML Vision Processors.

JSON representation
{
  "confidenceThreshold": number,
  "maxPredictions": integer
}
Fields
confidenceThreshold

number

Only entities with a score higher than the threshold will be returned. A value of 0.0 means all detected entities are returned.

maxPredictions

integer

At most this many predictions will be returned per output frame. A value of 0 means all detected entities are returned.
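
For example, a configuration that keeps only predictions scoring above 0.5 and returns at most 10 predictions per frame (thresholds are illustrative) could look like:

{
  "confidenceThreshold": 0.5,
  "maxPredictions": 10
}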

VertexAutoMLVideoConfig

Message describing VertexAutoMLVideoConfig.

JSON representation
{
  "confidenceThreshold": number,
  "blockedLabels": [
    string
  ],
  "maxPredictions": integer,
  "boundingBoxSizeLimit": number
}
Fields
confidenceThreshold

number

Only entities with a score higher than the threshold will be returned. A value of 0.0 means all detected entities are returned.

blockedLabels[]

string

Labels specified in this field won't be returned.

maxPredictions

integer

At most this many predictions will be returned per output frame. A value of 0 means all detected entities are returned.

boundingBoxSizeLimit

number

Only bounding boxes whose size is larger than this limit will be returned. Object Tracking only. A value of 0.0 means all detected entities are returned.
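
For example, a configuration that filters low-confidence results, blocks a placeholder label, and limits tracked bounding boxes (all values are illustrative) could look like:

{
  "confidenceThreshold": 0.6,
  "blockedLabels": [
    "background"
  ],
  "maxPredictions": 5,
  "boundingBoxSizeLimit": 0.01
}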

VertexCustomConfig

Message describing VertexCustomConfig.

JSON representation
{
  "maxPredictionFps": integer,
  "dedicatedResources": {
    object (DedicatedResources)
  },
  "postProcessingCloudFunction": string,
  "attachApplicationMetadata": boolean
}
Fields
maxPredictionFps

integer

The maximum prediction frames per second. This attribute sets how fast the operator sends prediction requests to the Vertex AI endpoint. The default value is 0, which means there is no max prediction FPS limit; the operator then sends prediction requests at the input FPS.

dedicatedResources

object (DedicatedResources)

A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.

postProcessingCloudFunction

string

If not empty, the prediction result will be sent to the specified Cloud Function for post processing.

  • The Cloud Function will receive AppPlatformCloudFunctionRequest where the annotations field will be the JSON format of proto PredictResponse.
  • The Cloud Function should return AppPlatformCloudFunctionResponse with PredictResponse stored in the annotations field.
  • To drop the prediction output, simply clear the payload field in the returned AppPlatformCloudFunctionResponse.

attachApplicationMetadata

boolean

If true, the prediction request received by the custom model will also contain metadata with the following schema:

'appPlatformMetadata': {
  'ingestionTime': DOUBLE; (UNIX timestamp)
  'application': STRING;
  'instanceId': STRING;
  'node': STRING;
  'processor': STRING;
}
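
For example, a configuration that caps predictions at 5 FPS, forwards results to a placeholder Cloud Function, and attaches application metadata could look like (the function URL is illustrative):

{
  "maxPredictionFps": 5,
  "postProcessingCloudFunction": "https://us-central1-my-project.cloudfunctions.net/my-post-processor",
  "attachApplicationMetadata": true
}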

DedicatedResources

A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration.

JSON representation
{
  "machineSpec": {
    object (MachineSpec)
  },
  "minReplicaCount": integer,
  "maxReplicaCount": integer,
  "autoscalingMetricSpecs": [
    {
      object (AutoscalingMetricSpec)
    }
  ]
}
Fields
machineSpec

object (MachineSpec)

Required. Immutable. The specification of a single machine used by the prediction.

minReplicaCount

integer

Required. Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1.

If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.

maxReplicaCount

integer

Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, will use minReplicaCount as the default value.

The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (maxReplicaCount * number of cores in the selected machine type) and (maxReplicaCount * number of GPUs per replica in the selected machine type).

autoscalingMetricSpecs[]

object (AutoscalingMetricSpec)

Immutable. The metric specifications that override a resource utilization metric (CPU utilization, accelerator duty cycle, and so on) target value (defaults to 60 if not set). At most one entry is allowed per metric.

If machineSpec.accelerator_count is above 0, autoscaling will be based on both the CPU utilization and the accelerator duty cycle metrics: it scales up when either metric exceeds its target value, and scales down when both metrics are under their target values. The default target value is 60 for both metrics.

If machineSpec.accelerator_count is 0, autoscaling will be based on the CPU utilization metric only, with a default target value of 60 if not explicitly set.

For example, in the case of Online Prediction, if you want to override target CPU utilization to 80, you should set autoscalingMetricSpecs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscalingMetricSpecs.target to 80.
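
Putting that together, a DedicatedResources sketch that overrides the CPU utilization target to 80 (machine type and replica counts are illustrative) could look like:

{
  "machineSpec": {
    "machineType": "n1-standard-2"
  },
  "minReplicaCount": 1,
  "maxReplicaCount": 3,
  "autoscalingMetricSpecs": [
    {
      "metricName": "aiplatform.googleapis.com/prediction/online/cpu/utilization",
      "target": 80
    }
  ]
}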

MachineSpec

Specification of a single machine.

JSON representation
{
  "machineType": string,
  "acceleratorType": enum (AcceleratorType),
  "acceleratorCount": integer
}
Fields
machineType

string

Immutable. The type of the machine.

See the list of machine types supported for prediction.

See the list of machine types supported for custom training.

For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.

acceleratorType

enum (AcceleratorType)

Immutable. The type of accelerator(s) that may be attached to the machine as per acceleratorCount.

acceleratorCount

integer

The number of accelerators to attach to the machine.

AcceleratorType

Represents a hardware accelerator type.

Enums
ACCELERATOR_TYPE_UNSPECIFIED Unspecified accelerator type, which means no accelerator.
NVIDIA_TESLA_K80 Nvidia Tesla K80 GPU.
NVIDIA_TESLA_P100 Nvidia Tesla P100 GPU.
NVIDIA_TESLA_V100 Nvidia Tesla V100 GPU.
NVIDIA_TESLA_P4 Nvidia Tesla P4 GPU.
NVIDIA_TESLA_T4 Nvidia Tesla T4 GPU.
NVIDIA_TESLA_A100 Nvidia Tesla A100 GPU.
TPU_V2 TPU v2.
TPU_V3 TPU v3.

AutoscalingMetricSpec

The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.

JSON representation
{
  "metricName": string,
  "target": integer
}
Fields
metricName

string

Required. The resource metric name. Supported metrics:

  • For Online Prediction:
  • aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle
  • aiplatform.googleapis.com/prediction/online/cpu/utilization
target

integer

The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.

GeneralObjectDetectionConfig

Message of configurations for General Object Detection processor.

BigQueryConfig

Message of configurations for BigQuery processor.

JSON representation
{
  "table": string,
  "cloudFunctionMapping": {
    string: string,
    ...
  },
  "createDefaultTableIfNotExists": boolean
}
Fields
table

string

BigQuery table resource for Vision AI Platform to ingest annotations to.

cloudFunctionMapping

map (key: string, value: string)

Data schema. By default, the Vision AI Application will try to write annotations to the target BigQuery table using the following schema:

  • ingestion_time: TIMESTAMP, the ingestion time of the original data.

  • application: STRING, name of the application which produces the annotation.

  • instance: STRING, Id of the instance which produces the annotation.

  • node: STRING, name of the application graph node which produces the annotation.

  • annotation: STRING or JSON, the actual annotation protobuf converted to a JSON string, with bytes fields encoded as base64 strings. It can be written to either a STRING or a JSON type column.

To forward annotation data to an existing BigQuery table, the customer needs to make sure the table schema is compatible.

The map maps an application node name to its corresponding Cloud Function endpoint, which transforms the annotations directly into google.cloud.bigquery.storage.v1.AppendRowsRequest (only avro_rows or proto_rows should be set). If configured, annotations produced by the corresponding application node will be sent to the Cloud Function first before being forwarded to BigQuery.

If the default table schema doesn't fit, the customer can transform the annotation output from the Vision AI Application into an arbitrary BigQuery table schema with a Cloud Function.

  • The Cloud Function will receive AppPlatformCloudFunctionRequest where the annotations field will be the JSON format of the Vision AI annotation.
  • The Cloud Function should return AppPlatformCloudFunctionResponse with the AppendRowsRequest stored in the annotations field.
  • To drop the annotation, simply clear the annotations field in the returned AppPlatformCloudFunctionResponse.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

createDefaultTableIfNotExists

boolean

If true, App Platform will create the BigQuery dataset and table with the default schema if the specified table doesn't exist. This doesn't work if any Cloud Function customized schema is specified, since the system doesn't know your desired schema. A JSON column will be used in the default table created by App Platform.
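
For example, a configuration that writes annotations to a placeholder table with the default schema and creates the table if it doesn't exist could look like:

{
  "table": "projects/my-project/datasets/vision_ai/tables/annotations",
  "createDefaultTableIfNotExists": true
}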

PersonalProtectiveEquipmentDetectionConfig

Message describing PersonalProtectiveEquipmentDetectionConfig.

JSON representation
{
  "enableFaceCoverageDetection": boolean,
  "enableHeadCoverageDetection": boolean,
  "enableHandsCoverageDetection": boolean
}
Fields
enableFaceCoverageDetection

boolean

Whether to enable face coverage detection.

enableHeadCoverageDetection

boolean

Whether to enable head coverage detection.

enableHandsCoverageDetection

boolean

Whether to enable hands coverage detection.

InputEdge

Message describing one edge pointing into a node.

JSON representation
{
  "parentNode": string,
  "parentOutputChannel": string,
  "connectedInputChannel": string
}
Fields
parentNode

string

The name of the parent node.

parentOutputChannel

string

The connected output artifact of the parent node. It can be omitted if the target processor has only one output artifact.

connectedInputChannel

string

The connected input channel of the current node's processor. It can be omitted if the target processor has only one input channel.
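
For example, a node consuming the annotated output of a placeholder upstream node could declare its parents as follows (channel names are illustrative):

"parents": [
  {
    "parentNode": "occupancy-count",
    "parentOutputChannel": "annotated_stream",
    "connectedInputChannel": "asset_input"
  }
]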

MonitoringConfig

Monitoring-related configuration for an application.

JSON representation
{
  "enabled": boolean
}
Fields
enabled

boolean

Whether this application has monitoring enabled.