- JSON representation
- Node
- ProcessorConfig
- VideoStreamInputConfig
- AIEnabledDevicesInputConfig
- MediaWarehouseConfig
- PersonBlurConfig
- PersonBlurType
- OccupancyCountConfig
- PersonVehicleDetectionConfig
- VertexAutoMLVisionConfig
- VertexAutoMLVideoConfig
- VertexCustomConfig
- DedicatedResources
- MachineSpec
- AcceleratorType
- AutoscalingMetricSpec
- GeneralObjectDetectionConfig
- BigQueryConfig
- PersonalProtectiveEquipmentDetectionConfig
- InputEdge
- MonitoringConfig
Message storing the graph of the application.
JSON representation

```json
{
  "nodes": [ { object (Node) } ],
  "monitoringConfig": { object (MonitoringConfig) }
}
```

| Field | Description |
|---|---|
| `nodes[]` | A list of nodes in the application graph. |
| `monitoringConfig` | Monitoring-related configuration for this application. |
Node
Message describing node object.
JSON representation

```json
{
  "name": string,
  "displayName": string,
  "nodeConfig": { object (ProcessorConfig) },
  "processor": string,
  "parents": [ { object (InputEdge) } ],
  "outputAllOutputChannelsToStream": boolean
}
```

| Field | Description |
|---|---|
| `name` | Required. A unique name for the node. |
| `displayName` | A user-friendly display name for the node. |
| `nodeConfig` | Node config. |
| `processor` | Name of the chosen processor resource that this node refers to. |
| `parents[]` | Parent nodes. An input node must not have a parent node. For V1 Alpha1/Beta, only a media warehouse node can have multiple parents; other node types have exactly one parent. |
| `outputAllOutputChannelsToStream` | By default, the output of a node is available only to downstream nodes. To consume a node's direct output, the output must first be sent to Vision AI Streams. Setting `outputAllOutputChannelsToStream` to true causes App Platform to automatically send all outputs of the current node to Vision AI Stream resources (one stream per output channel). App Platform creates the output stream resources automatically during deployment and deletes them after the application is undeployed. Note that this configuration applies to all application instances; the output stream can be overridden at the instance level. |
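As an illustration, a single node entry might look like the following sketch; the node name, display name, parent node, and processor value are hypothetical placeholders, not values defined by this reference.

```json
{
  "name": "occupancy-count-node",
  "displayName": "Occupancy Count",
  "nodeConfig": {
    "occupancyCountConfig": {
      "enablePeopleCounting": true
    }
  },
  "processor": "builtin:occupancy-counting",
  "parents": [
    { "parentNode": "stream-input-node" }
  ],
  "outputAllOutputChannelsToStream": false
}
```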
ProcessorConfig
JSON representation

```json
{
  // Union field processor_config can be only one of the following:
  "videoStreamInputConfig": { object (VideoStreamInputConfig) },
  "aiEnabledDevicesInputConfig": { object (AIEnabledDevicesInputConfig) },
  "mediaWarehouseConfig": { object (MediaWarehouseConfig) },
  "personBlurConfig": { object (PersonBlurConfig) },
  "occupancyCountConfig": { object (OccupancyCountConfig) },
  "personVehicleDetectionConfig": { object (PersonVehicleDetectionConfig) },
  "vertexAutomlVisionConfig": { object (VertexAutoMLVisionConfig) },
  "vertexAutomlVideoConfig": { object (VertexAutoMLVideoConfig) },
  "vertexCustomConfig": { object (VertexCustomConfig) },
  "generalObjectDetectionConfig": { object (GeneralObjectDetectionConfig) },
  "bigQueryConfig": { object (BigQueryConfig) },
  "personalProtectiveEquipmentDetectionConfig": { object (PersonalProtectiveEquipmentDetectionConfig) }
  // End of list of possible types for union field processor_config.
}
```

Union field `processor_config` can be only one of the following:

| Field | Description |
|---|---|
| `videoStreamInputConfig` | Config of the stream input processor. |
| `aiEnabledDevicesInputConfig` | Config of AI-enabled input devices. |
| `mediaWarehouseConfig` | Config of the media warehouse processor. |
| `personBlurConfig` | Config of the person blur processor. |
| `occupancyCountConfig` | Config of the occupancy count processor. |
| `personVehicleDetectionConfig` | Config of the person/vehicle detection processor. |
| `vertexAutomlVisionConfig` | Config of the Vertex AutoML Vision processor. |
| `vertexAutomlVideoConfig` | Config of the Vertex AutoML Video processor. |
| `vertexCustomConfig` | Config of the Vertex custom processor. |
| `generalObjectDetectionConfig` | Config of the general object detection processor. |
| `bigQueryConfig` | Config of the BigQuery processor. |
| `personalProtectiveEquipmentDetectionConfig` | Config of the personal protective equipment detection processor. |
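Because `processor_config` is a union, a populated `nodeConfig` sets exactly one of these fields. A minimal sketch, assuming a Vertex AutoML Vision node with illustrative values:

```json
{
  "vertexAutomlVisionConfig": {
    "confidenceThreshold": 0.5,
    "maxPredictions": 10
  }
}
```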
VideoStreamInputConfig
Message describing Video Stream Input Config. This message should only be used as a placeholder for the builtin:stream-input processor; the actual stream binding should be specified using the corresponding API.
JSON representation

```json
{
  "streams": [ string ],
  "streamsWithAnnotation": [ { object (StreamWithAnnotation) } ]
}
```

| Field | Description |
|---|---|
| `streams[]` | |
| `streamsWithAnnotation[]` | |
AIEnabledDevicesInputConfig
Message describing AI-enabled Devices Input Config.
MediaWarehouseConfig
Message describing MediaWarehouseConfig.
JSON representation

```json
{
  "corpus": string,
  "region": string,
  "ttl": string
}
```

| Field | Description |
|---|---|
| `corpus` | Resource name of the Media Warehouse corpus. Format: `projects/${project_id}/locations/${locationId}/corpora/${corpus_id}` |
| `region` | Deprecated. |
| `ttl` | The duration for which all media assets, associated metadata, and search documents can exist. A duration in seconds with up to nine fractional digits, ending with '`s`'. Example: `"3.5s"`. |
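A minimal sketch of a media warehouse config, using the corpus format above with hypothetical project, location, and corpus IDs, and a one-day TTL:

```json
{
  "corpus": "projects/my-project/locations/us-central1/corpora/my-corpus",
  "ttl": "86400s"
}
```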
PersonBlurConfig
Message describing PersonBlurConfig.
JSON representation

```json
{
  "personBlurType": enum (PersonBlurType),
  "facesOnly": boolean
}
```

| Field | Description |
|---|---|
| `personBlurType` | Person blur type. |
| `facesOnly` | Whether to blur only faces, rather than the whole object, in the processor. |
PersonBlurType
Type of person blur.

| Enum | Description |
|---|---|
| `PERSON_BLUR_TYPE_UNSPECIFIED` | Person blur type unspecified. |
| `FULL_OCCULUSION` | Person blur type: full occlusion. |
| `BLUR_FILTER` | Person blur type: blur filter. |
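For example, a person blur node that applies a blur filter to faces only might be configured as follows (values illustrative):

```json
{
  "personBlurType": "BLUR_FILTER",
  "facesOnly": true
}
```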
OccupancyCountConfig
Message describing OccupancyCountConfig.
JSON representation

```json
{
  "enablePeopleCounting": boolean,
  "enableVehicleCounting": boolean,
  "enableDwellingTimeTracking": boolean
}
```

| Field | Description |
|---|---|
| `enablePeopleCounting` | Whether to count the appearances of people; output counts will have 'people' as the key. |
| `enableVehicleCounting` | Whether to count the appearances of vehicles; output counts will have 'vehicle' as the key. |
| `enableDwellingTimeTracking` | Whether to track each individual object's dwelling time inside the scene or a specific zone. |
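A sketch of an occupancy count config that counts people and tracks dwelling time but ignores vehicles:

```json
{
  "enablePeopleCounting": true,
  "enableVehicleCounting": false,
  "enableDwellingTimeTracking": true
}
```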
PersonVehicleDetectionConfig
Message describing PersonVehicleDetectionConfig.
JSON representation

```json
{
  "enablePeopleCounting": boolean,
  "enableVehicleCounting": boolean
}
```

| Field | Description |
|---|---|
| `enablePeopleCounting` | Whether to count the appearances of people; output counts will have 'people' as the key. At least one of `enablePeopleCounting` and `enableVehicleCounting` must be set to true. |
| `enableVehicleCounting` | Whether to count the appearances of vehicles; output counts will have 'vehicle' as the key. |
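Since at least one of the two flags must be true, a valid config enabling both could look like:

```json
{
  "enablePeopleCounting": true,
  "enableVehicleCounting": true
}
```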
VertexAutoMLVisionConfig
Message of configurations of Vertex AutoML Vision Processors.
JSON representation

```json
{
  "confidenceThreshold": number,
  "maxPredictions": integer
}
```

| Field | Description |
|---|---|
| `confidenceThreshold` | Only entities with a score higher than the threshold will be returned. A value of 0.0 means all detected entities are returned. |
| `maxPredictions` | At most this many predictions will be returned per output frame. A value of 0 means all detected entities are returned. |
VertexAutoMLVideoConfig
Message describing VertexAutoMLVideoConfig.
JSON representation

```json
{
  "confidenceThreshold": number,
  "blockedLabels": [ string ],
  "maxPredictions": integer,
  "boundingBoxSizeLimit": number
}
```

| Field | Description |
|---|---|
| `confidenceThreshold` | Only entities with a score higher than the threshold will be returned. A value of 0.0 means all detected entities are returned. |
| `blockedLabels[]` | Labels specified in this field won't be returned. |
| `maxPredictions` | At most this many predictions will be returned per output frame. A value of 0 means all detected entities are returned. |
| `boundingBoxSizeLimit` | Only bounding boxes whose size is larger than this limit will be returned. Object tracking only. A value of 0.0 means all detected entities are returned. |
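A sketch combining these fields; the threshold, prediction cap, and box size limit are illustrative values, and "background" is a hypothetical label name:

```json
{
  "confidenceThreshold": 0.6,
  "blockedLabels": [ "background" ],
  "maxPredictions": 5,
  "boundingBoxSizeLimit": 0.01
}
```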
VertexCustomConfig
Message describing VertexCustomConfig.
JSON representation

```json
{
  "maxPredictionFps": integer,
  "dedicatedResources": { object (DedicatedResources) },
  "postProcessingCloudFunction": string,
  "attachApplicationMetadata": boolean
}
```

| Field | Description |
|---|---|
| `maxPredictionFps` | The maximum prediction frames per second. This attribute sets how fast the operator sends prediction requests to the Vertex AI endpoint. The default value is 0, which means there is no maximum prediction FPS limit; the operator sends prediction requests at the input FPS. |
| `dedicatedResources` | A description of resources that are dedicated to the DeployedModel and that need a higher degree of manual configuration. |
| `postProcessingCloudFunction` | If not empty, the prediction result will be sent to the specified Cloud Function for post-processing. The Cloud Function will receive an `AppPlatformCloudFunctionRequest` whose `annotations` field is the JSON form of the `PredictResponse` proto. The Cloud Function should return an `AppPlatformCloudFunctionResponse` with the `PredictResponse` stored in its `annotations` field. To drop the prediction output, simply clear the `payload` field in the returned `AppPlatformCloudFunctionResponse`. |
| `attachApplicationMetadata` | If true, the prediction request received by the custom model will also contain metadata with the following schema: `'appPlatformMetadata': { 'ingestionTime': DOUBLE; (UNIX timestamp) 'application': STRING; 'instanceId': STRING; 'node': STRING; 'processor': STRING; }` |
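Putting these together, a Vertex custom processor config might look like the following sketch; the machine type and the Cloud Function resource name are hypothetical placeholders:

```json
{
  "maxPredictionFps": 5,
  "dedicatedResources": {
    "machineSpec": { "machineType": "n1-standard-4" },
    "minReplicaCount": 1,
    "maxReplicaCount": 1
  },
  "postProcessingCloudFunction": "projects/my-project/locations/us-central1/functions/my-post-processor",
  "attachApplicationMetadata": true
}
```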
DedicatedResources
A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration.
JSON representation

```json
{
  "machineSpec": { object (MachineSpec) },
  "minReplicaCount": integer,
  "maxReplicaCount": integer,
  "autoscalingMetricSpecs": [ { object (AutoscalingMetricSpec) } ]
}
```

| Field | Description |
|---|---|
| `machineSpec` | Required. Immutable. The specification of a single machine used by the prediction. |
| `minReplicaCount` | Required. Immutable. The minimum number of machine replicas this DeployedModel will always be deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed. |
| `maxReplicaCount` | Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, `minReplicaCount` is used as the default. The value of this field impacts the charge against Vertex CPU and GPU quotas: specifically, you will be charged for (maxReplicaCount * number of cores in the selected machine type) and (maxReplicaCount * number of GPUs per replica in the selected machine type). |
| `autoscalingMetricSpecs[]` | Immutable. The metric specifications that override a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on) target value (default 60 if not set). At most one entry is allowed per metric. If `machineSpec.acceleratorCount` is above 0, autoscaling is based on both the CPU utilization and the accelerator's duty cycle metrics, scaling up when either metric exceeds its target and scaling down when both are under their targets. If `machineSpec.acceleratorCount` is 0, autoscaling is based on CPU utilization alone. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, you should set the spec's `metricName` to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and its `target` to 80. |
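For instance, a CPU-only deployment that keeps at least one replica warm and can scale to four might be specified as follows (machine type illustrative):

```json
{
  "machineSpec": { "machineType": "n1-standard-8" },
  "minReplicaCount": 1,
  "maxReplicaCount": 4
}
```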
MachineSpec
Specification of a single machine.
JSON representation

```json
{
  "machineType": string,
  "acceleratorType": enum (AcceleratorType),
  "acceleratorCount": integer
}
```

| Field | Description |
|---|---|
| `machineType` | Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For `DeployedModel` this field is optional, and the default value is `n1-standard-2`. |
| `acceleratorType` | Immutable. The type of accelerator(s) that may be attached to the machine, as per `acceleratorCount`. |
| `acceleratorCount` | The number of accelerators to attach to the machine. |
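A sketch of a machine spec attaching a single GPU; the machine type and accelerator pairing are illustrative and should be validated against the supported combinations:

```json
{
  "machineType": "n1-standard-4",
  "acceleratorType": "NVIDIA_TESLA_T4",
  "acceleratorCount": 1
}
```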
AcceleratorType
Represents a hardware accelerator type.
| Enum | Description |
|---|---|
| `ACCELERATOR_TYPE_UNSPECIFIED` | Unspecified accelerator type, which means no accelerator. |
| `NVIDIA_TESLA_K80` | Nvidia Tesla K80 GPU. |
| `NVIDIA_TESLA_P100` | Nvidia Tesla P100 GPU. |
| `NVIDIA_TESLA_V100` | Nvidia Tesla V100 GPU. |
| `NVIDIA_TESLA_P4` | Nvidia Tesla P4 GPU. |
| `NVIDIA_TESLA_T4` | Nvidia Tesla T4 GPU. |
| `NVIDIA_TESLA_A100` | Nvidia Tesla A100 GPU. |
| `TPU_V2` | TPU v2. |
| `TPU_V3` | TPU v3. |
AutoscalingMetricSpec
The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
JSON representation

```json
{
  "metricName": string,
  "target": integer
}
```

| Field | Description |
|---|---|
| `metricName` | Required. The resource metric name. Supported metrics for Online Prediction: `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` and `aiplatform.googleapis.com/prediction/online/cpu/utilization`. |
| `target` | The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided. |
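The CPU utilization override mentioned under `autoscalingMetricSpecs[]` above would then be written as:

```json
{
  "metricName": "aiplatform.googleapis.com/prediction/online/cpu/utilization",
  "target": 80
}
```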
GeneralObjectDetectionConfig
Message of configurations for General Object Detection processor.
BigQueryConfig
Message of configurations for BigQuery processor.
JSON representation

```json
{
  "table": string,
  "cloudFunctionMapping": {
    string: string,
    ...
  },
  "createDefaultTableIfNotExists": boolean
}
```

| Field | Description |
|---|---|
| `table` | BigQuery table resource for Vision AI Platform to ingest annotations to. |
| `cloudFunctionMapping` | Data schema: by default, the Vision AI application will try to write annotations to the target BigQuery table using a default schema. To forward annotation data to an existing BigQuery table, the customer needs to ensure the compatibility of the schema. This map maps an application node name to its corresponding Cloud Function endpoint, which transforms the annotations directly into a `google.cloud.bigquery.storage.v1.AppendRowsRequest` (only `avro_rows` or `proto_rows` should be set). If the default table schema doesn't fit, the customer can use a Cloud Function to transform the annotation output from the Vision AI application into an arbitrary BigQuery table schema. An object containing a list of `"key": value` pairs. Example: `{ "name": "wrench", "mass": "1.3kg", "count": "3" }`. |
| `createDefaultTableIfNotExists` | If true, App Platform will create the BigQuery dataset and the BigQuery table with the default schema if the specified table doesn't exist. This doesn't work if any Cloud Function customized schema is specified, since the system doesn't know your desired schema. A JSON column will be used in the default table created by App Platform. |
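A sketch of a BigQuery config that relies on the default schema and lets App Platform create the table; the table resource path uses hypothetical project, dataset, and table IDs:

```json
{
  "table": "projects/my-project/datasets/vision_ai/tables/annotations",
  "createDefaultTableIfNotExists": true
}
```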
PersonalProtectiveEquipmentDetectionConfig
Message describing PersonalProtectiveEquipmentDetectionConfig.
JSON representation

```json
{
  "enableFaceCoverageDetection": boolean,
  "enableHeadCoverageDetection": boolean,
  "enableHandsCoverageDetection": boolean
}
```

| Field | Description |
|---|---|
| `enableFaceCoverageDetection` | Whether to enable face coverage detection. |
| `enableHeadCoverageDetection` | Whether to enable head coverage detection. |
| `enableHandsCoverageDetection` | Whether to enable hands coverage detection. |
InputEdge
Message describing one edge pointing into a node.
JSON representation

```json
{
  "parentNode": string,
  "parentOutputChannel": string,
  "connectedInputChannel": string
}
```

| Field | Description |
|---|---|
| `parentNode` | The name of the parent node. |
| `parentOutputChannel` | The connected output artifact of the parent node. It can be omitted if the target processor has only one output artifact. |
| `connectedInputChannel` | The connected input channel of the current node's processor. It can be omitted if the target processor has only one input channel. |
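For example, an edge wiring a stream input node into a downstream processor might look like this sketch, with all names hypothetical; both channel fields could be omitted if the processors involved each have a single output and input channel:

```json
{
  "parentNode": "stream-input-node",
  "parentOutputChannel": "video-output",
  "connectedInputChannel": "video-input"
}
```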
MonitoringConfig
Monitoring-related configuration for an application.
JSON representation

```json
{
  "enabled": boolean
}
```

| Field | Description |
|---|---|
| `enabled` | Whether this application has monitoring enabled. |