Enumerations

BlockType

static

number

Type of a block (text, image, etc.) as identified by OCR.

Value

UNKNOWN

Unknown block type.

TEXT

Regular text block.

TABLE

Table block.

PICTURE

Image block.

RULER

Horizontal/vertical line box.

BARCODE

Barcode block.

BreakType

static

number

Enum to denote the type of break found (new line, space, etc.).

Value

UNKNOWN

Unknown break label type.

SPACE

Regular space.

SURE_SPACE

Sure space (very wide).

EOL_SURE_SPACE

Line-wrapping break.

HYPHEN

End-line hyphen that is not present in text; does not co-occur with SPACE, LEADER_SPACE, or LINE_BREAK.

LINE_BREAK

Line break that ends a paragraph.
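
As a sketch of how these break types are typically consumed, the function below rebuilds plain text from a fullTextAnnotation by walking its pages, blocks, paragraphs, words, and symbols (a hierarchy defined by TextAnnotation and only partly shown in this section). It assumes break types surface as enum names rather than numbers, and it ignores isPrefix for brevity:

  // Rebuild plain text from a fullTextAnnotation using DetectedBreak types.
  function rebuildText(fullTextAnnotation) {
    let text = '';
    for (const page of fullTextAnnotation.pages || []) {
      for (const block of page.blocks || []) {
        for (const paragraph of block.paragraphs || []) {
          for (const word of paragraph.words || []) {
            for (const symbol of word.symbols || []) {
              text += symbol.text;
              const brk = symbol.property && symbol.property.detectedBreak;
              if (!brk) continue;
              if (brk.type === 'SPACE' || brk.type === 'SURE_SPACE') text += ' ';
              if (brk.type === 'EOL_SURE_SPACE' || brk.type === 'LINE_BREAK') text += '\n';
              // HYPHEN breaks are not present in the text, so nothing is appended.
            }
          }
        }
      }
    }
    return text;
  }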

Likelihood

static

number

A bucketized representation of likelihood, which is intended to give clients highly stable results across model upgrades.

Value

UNKNOWN

Unknown likelihood.

VERY_UNLIKELY

It is very unlikely that the image belongs to the specified vertical.

UNLIKELY

It is unlikely that the image belongs to the specified vertical.

POSSIBLE

It is possible that the image belongs to the specified vertical.

LIKELY

It is likely that the image belongs to the specified vertical.

VERY_LIKELY

It is very likely that the image belongs to the specified vertical.

Type

static

number

Type of image feature.

Value

TYPE_UNSPECIFIED

Unspecified feature type.

FACE_DETECTION

Run face detection.

LANDMARK_DETECTION

Run landmark detection.

LOGO_DETECTION

Run logo detection.

LABEL_DETECTION

Run label detection.

TEXT_DETECTION

Run OCR.

DOCUMENT_TEXT_DETECTION

Run dense text document OCR. Takes precedence when both DOCUMENT_TEXT_DETECTION and TEXT_DETECTION are present.

SAFE_SEARCH_DETECTION

Run computer vision models to compute image safe-search properties.

IMAGE_PROPERTIES

Compute a set of image properties, such as the image's dominant colors.

CROP_HINTS

Run crop hints.

WEB_DETECTION

Run web detection.
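
A small, hypothetical features list using the values above; whether type is passed as the enum name or its number depends on the client version, and maxResults simply caps the results returned for that feature:

  const features = [
    {type: 'LABEL_DETECTION', maxResults: 10},
    {type: 'SAFE_SEARCH_DETECTION'},
    // DOCUMENT_TEXT_DETECTION takes precedence if TEXT_DETECTION is also listed.
    {type: 'DOCUMENT_TEXT_DETECTION'},
  ];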

Type

static

number

Face landmark (feature) type. Left and right are defined from the vantage of the viewer of the image without considering mirror projections typical of photos. So, LEFT_EYE, typically, is the person's right eye.

Value

UNKNOWN_LANDMARK

Unknown face landmark detected. Should not be filled.

LEFT_EYE

Left eye.

RIGHT_EYE

Right eye.

LEFT_OF_LEFT_EYEBROW

Left of left eyebrow.

RIGHT_OF_LEFT_EYEBROW

Right of left eyebrow.

LEFT_OF_RIGHT_EYEBROW

Left of right eyebrow.

RIGHT_OF_RIGHT_EYEBROW

Right of right eyebrow.

MIDPOINT_BETWEEN_EYES

Midpoint between eyes.

NOSE_TIP

Nose tip.

UPPER_LIP

Upper lip.

LOWER_LIP

Lower lip.

MOUTH_LEFT

Mouth left.

MOUTH_RIGHT

Mouth right.

MOUTH_CENTER

Mouth center.

NOSE_BOTTOM_RIGHT

Nose, bottom right.

NOSE_BOTTOM_LEFT

Nose, bottom left.

NOSE_BOTTOM_CENTER

Nose, bottom center.

LEFT_EYE_TOP_BOUNDARY

Left eye, top boundary.

LEFT_EYE_RIGHT_CORNER

Left eye, right corner.

LEFT_EYE_BOTTOM_BOUNDARY

Left eye, bottom boundary.

LEFT_EYE_LEFT_CORNER

Left eye, left corner.

RIGHT_EYE_TOP_BOUNDARY

Right eye, top boundary.

RIGHT_EYE_RIGHT_CORNER

Right eye, right corner.

RIGHT_EYE_BOTTOM_BOUNDARY

Right eye, bottom boundary.

RIGHT_EYE_LEFT_CORNER

Right eye, left corner.

LEFT_EYEBROW_UPPER_MIDPOINT

Left eyebrow, upper midpoint.

RIGHT_EYEBROW_UPPER_MIDPOINT

Right eyebrow, upper midpoint.

LEFT_EAR_TRAGION

Left ear tragion.

RIGHT_EAR_TRAGION

Right ear tragion.

LEFT_EYE_PUPIL

Left eye pupil.

RIGHT_EYE_PUPIL

Right eye pupil.

FOREHEAD_GLABELLA

Forehead glabella.

CHIN_GNATHION

Chin gnathion.

CHIN_LEFT_GONION

Chin left gonion.

CHIN_RIGHT_GONION

Chin right gonion.
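
A sketch of how a particular landmark might be looked up on a FaceAnnotation; it assumes each landmark entry carries a type matching the names above (surfaced as the enum name) and a position with x, y, z coordinates:

  function findLandmark(faceAnnotation, landmarkType) {
    // Remember LEFT_*/RIGHT_* are from the viewer's vantage point.
    const landmark = (faceAnnotation.landmarks || [])
      .find((l) => l.type === landmarkType);
    return landmark ? landmark.position : null;
  }

  // e.g. findLandmark(face, 'NOSE_TIP') returns the nose tip position or null.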

Abstract types

AnnotateImageRequest

static

Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested features.

Properties

Parameter

image

Object

The image to be processed.

This object should have the same structure as Image

features

Array of Object

Requested features.

This object should have the same structure as Feature

imageContext

Object

Additional context that may accompany the image.

This object should have the same structure as ImageContext

See also

google.cloud.vision.v1.AnnotateImageRequest definition in proto format
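
A minimal sketch of an object shaped like AnnotateImageRequest, using the fields above; the gs:// URI and chosen features are placeholders, and the source field name (imageUri) comes from the ImageSource message, which is not shown in this section:

  const request = {
    image: {source: {imageUri: 'gs://my-bucket/photo.jpg'}},  // same structure as Image
    features: [                                               // same structure as Feature
      {type: 'LABEL_DETECTION', maxResults: 5},
      {type: 'WEB_DETECTION'},
    ],
    // imageContext is optional; see ImageContext for language hints, crop hints params, etc.
  };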

AnnotateImageResponse

static

Response to an image annotation request.

Properties

Parameter

faceAnnotations

Array of Object

If present, face detection has completed successfully.

This object should have the same structure as FaceAnnotation

landmarkAnnotations

Array of Object

If present, landmark detection has completed successfully.

This object should have the same structure as EntityAnnotation

logoAnnotations

Array of Object

If present, logo detection has completed successfully.

This object should have the same structure as EntityAnnotation

labelAnnotations

Array of Object

If present, label detection has completed successfully.

This object should have the same structure as EntityAnnotation

textAnnotations

Array of Object

If present, text (OCR) detection or document (OCR) text detection has completed successfully.

This object should have the same structure as EntityAnnotation

fullTextAnnotation

Object

If present, text (OCR) detection or document (OCR) text detection has completed successfully. This annotation provides the structural hierarchy for the OCR detected text.

This object should have the same structure as TextAnnotation

safeSearchAnnotation

Object

If present, safe-search annotation has completed successfully.

This object should have the same structure as SafeSearchAnnotation

imagePropertiesAnnotation

Object

If present, image properties were extracted successfully.

This object should have the same structure as ImageProperties

cropHintsAnnotation

Object

If present, crop hints have completed successfully.

This object should have the same structure as CropHintsAnnotation

webDetection

Object

If present, web detection has completed successfully.

This object should have the same structure as WebDetection

error

Object

If set, represents the error message for the operation. Note that filled-in image annotations are guaranteed to be correct, even when error is set.

This object should have the same structure as Status

See also

google.cloud.vision.v1.AnnotateImageResponse definition in proto format
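
A sketch of how a single response might be inspected, assuming the camelCase field names listed above; the error check comes first because partial annotations may accompany an error and are still valid:

  function summarize(response) {
    if (response.error) {
      // Status message; any annotations that are present remain correct.
      console.error('annotation error:', response.error.message);
    }
    for (const label of response.labelAnnotations || []) {
      console.log('label:', label.description, label.score);
    }
    if (response.safeSearchAnnotation) {
      console.log('safe search:', response.safeSearchAnnotation);
    }
  }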

BatchAnnotateImagesRequest

static

Multiple image annotation requests are batched into a single service call.

Property

Parameter

requests

Array of Object

Individual image annotation requests for this batch.

This object should have the same structure as AnnotateImageRequest

See also

google.cloud.vision.v1.BatchAnnotateImagesRequest definition in proto format
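
As a rough sketch of how such a batch might be assembled and sent: the ImageAnnotatorClient constructor and batchAnnotateImages method shown here exist in recent @google-cloud/vision releases, but older versions expose the call differently, and the bucket URIs are placeholders:

  const vision = require('@google-cloud/vision');
  const client = new vision.ImageAnnotatorClient();

  const batchRequest = {
    requests: [
      {image: {source: {imageUri: 'gs://my-bucket/a.jpg'}},
       features: [{type: 'LABEL_DETECTION'}]},
      {image: {source: {imageUri: 'gs://my-bucket/b.jpg'}},
       features: [{type: 'FACE_DETECTION'}]},
    ],
  };

  client.batchAnnotateImages(batchRequest).then(([batchResponse]) => {
    // responses[] is ordered the same as requests[].
    batchResponse.responses.forEach((response, i) => {
      console.log(`request ${i}:`, (response.labelAnnotations || []).length, 'labels');
    });
  });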

BatchAnnotateImagesResponse

static

Response to a batch image annotation request.

Property

Parameter

responses

Array of Object

Individual responses to image annotation requests within the batch.

This object should have the same structure as AnnotateImageResponse

See also

google.cloud.vision.v1.BatchAnnotateImagesResponse definition in proto format

Block

static

Logical element on the page.

Properties

Parameter

property

Object

Additional information detected for the block.

This object should have the same structure as TextProperty

boundingBox

Object

The bounding box for the block. The vertices are in the order of top-left, top-right, bottom-right, bottom-left. When a rotation of the bounding box is detected the rotation is represented as around the top-left corner as defined when the text is read in the 'natural' orientation. For example:

 when the text is horizontal it might look like:
   0----1
   |    |
   3----2
 when it's rotated 180 degrees around the top-left corner it becomes:
   2----3
   |    |
   1----0
and the vertex order will still be (0, 1, 2, 3).

This object should have the same structure as BoundingPoly

paragraphs

Array of Object

List of paragraphs in this block (if this block is of type text).

This object should have the same structure as Paragraph

blockType

number

Detected block type (text, image, etc.) for this block.

The number should be among the values of BlockType

See also

google.cloud.vision.v1.Block definition in proto format
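
A sketch that walks the blocks of each page and reports their type, bounding-box corners, and paragraph count; it assumes blockType surfaces as the enum name ('TEXT', 'TABLE', and so on):

  function describeBlocks(fullTextAnnotation) {
    for (const page of fullTextAnnotation.pages || []) {
      for (const block of page.blocks || []) {
        const corners = (block.boundingBox.vertices || [])
          .map((v) => `(${v.x},${v.y})`)
          .join(' ');
        console.log(block.blockType, corners,
                    (block.paragraphs || []).length, 'paragraphs');
      }
    }
  }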

BoundingPoly

static

A bounding polygon for the detected image annotation.

Property

Parameter

vertices

Array of Object

The bounding polygon vertices.

This object should have the same structure as Vertex

See also

google.cloud.vision.v1.BoundingPoly definition in proto format

ColorInfo

static

Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image.

Properties

Parameter

color

Object

RGB components of the color.

This object should have the same structure as Color

score

number

Image-specific score for this color. Value in range [0, 1].

pixelFraction

number

The fraction of pixels the color occupies in the image. Value in range [0, 1].

See also

google.cloud.vision.v1.ColorInfo definition in proto format

CropHint

static

Single crop hint that is used to generate a new crop when serving an image.

Properties

Parameter

boundingPoly

Object

The bounding polygon for the crop region. The coordinates of the bounding box are in the original image's scale, as returned in ImageParams.

This object should have the same structure as BoundingPoly

confidence

number

Confidence of this being a salient region. Range [0, 1].

importanceFraction

number

Fraction of importance of this salient region with respect to the original image.

See also

google.cloud.vision.v1.CropHint definition in proto format

CropHintsAnnotation

static

Set of crop hints that are used to generate new crops when serving images.

Property

Parameter

cropHints

Array of Object

This object should have the same structure as CropHint

See also

google.cloud.vision.v1.CropHintsAnnotation definition in proto format

CropHintsParams

static

Parameters for crop hints annotation request.

Property

Parameter

aspectRatios

Array of number

Aspect ratios in floats, representing the ratio of the width to the height of the image. For example, if the desired aspect ratio is 4/3, the corresponding float value should be 1.33333. If not specified, the best possible crop is returned. The number of provided aspect ratios is limited to a maximum of 16; any aspect ratios provided after the 16th are ignored.

See also

google.cloud.vision.v1.CropHintsParams definition in proto format
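
A sketch of a request carrying crop hints parameters, and of reading the resulting hints from the response; the aspect ratios and URI are placeholders:

  const request = {
    image: {source: {imageUri: 'gs://my-bucket/banner.jpg'}},
    features: [{type: 'CROP_HINTS'}],
    imageContext: {cropHintsParams: {aspectRatios: [1.77778, 1.0]}},  // 16:9 and square
  };

  function printCropHints(response) {
    for (const hint of response.cropHintsAnnotation.cropHints || []) {
      console.log('confidence:', hint.confidence,
                  'importance:', hint.importanceFraction,
                  'vertices:', hint.boundingPoly.vertices);
    }
  }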

DetectedBreak

static

Detected start or end of a structural component.

Properties

Parameter

type

number

The number should be among the values of BreakType

isPrefix

boolean

True if break prepends the element.

See also

google.cloud.vision.v1.TextAnnotation.DetectedBreak definition in proto format

DetectedLanguage

static

Detected language for a structural component.

Properties

Parameter

languageCode

string

The BCP-47 language code, such as "en-US" or "sr-Latn". For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.

confidence

number

Confidence of detected language. Range [0, 1].

See also

google.cloud.vision.v1.TextAnnotation.DetectedLanguage definition in proto format

DominantColorsAnnotation

static

Set of dominant colors and their corresponding scores.

Property

Parameter

colors

Array of Object

RGB color values with their score and pixel fraction.

This object should have the same structure as ColorInfo

See also

google.cloud.vision.v1.DominantColorsAnnotation definition in proto format
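
A sketch of reading the dominant colors out of an annotation response; it assumes the ImageProperties annotation exposes this structure under a dominantColors field, which is not spelled out in this section:

  function printDominantColors(response) {
    const colors = response.imagePropertiesAnnotation.dominantColors.colors || [];
    for (const entry of colors) {
      const {red, green, blue} = entry.color;  // RGB channels of the Color message
      console.log(`r=${red} g=${green} b=${blue}`,
                  'score:', entry.score, 'pixelFraction:', entry.pixelFraction);
    }
  }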

EntityAnnotation

static

Set of detected entity features.

Properties

Parameter

mid

string

Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.

locale

string

The language code for the locale in which the entity textual description is expressed.

description

string

Entity textual description, expressed in its locale language.

score

number

Overall score of the result. Range [0, 1].

confidence

number

The accuracy of the entity detection in an image. For example, for an image in which the "Eiffel Tower" entity is detected, this field represents the confidence that there is a tower in the query image. Range [0, 1].

topicality

number

The relevancy of the ICA (Image Content Annotation) label to the image. For example, the relevancy of "tower" is likely higher to an image containing the detected "Eiffel Tower" than to an image containing a detected distant towering building, even though the confidence that there is a tower in each image may be the same. Range [0, 1].

boundingPoly

Object

Image region to which this entity belongs. Currently not produced for LABEL_DETECTION features. For TEXT_DETECTION (OCR), boundingPolys are produced for the entire text detected in an image region, followed by boundingPolys for each word within the detected text.

This object should have the same structure as BoundingPoly

locations

Array of Object

The location information for the detected entity. Multiple LocationInfo elements can be present because one location may indicate the location of the scene in the image, and another location may indicate the location of the place where the image was taken. Location information is usually present for landmarks.

This object should have the same structure as LocationInfo

properties

Array of Object

Some entities may have optional user-supplied Property (name/value) fields, such as a score or string that qualifies the entity.

This object should have the same structure as Property

See also

google.cloud.vision.v1.EntityAnnotation definition in proto format
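
A sketch of reading landmark entities, which usually carry location information; it assumes each LocationInfo holds a latLng with latitude and longitude fields (LocationInfo itself is not shown in this section):

  function printLandmarkEntities(response) {
    for (const entity of response.landmarkAnnotations || []) {
      const location = (entity.locations || [])[0];
      const latLng = location && location.latLng;
      console.log(entity.description, 'score:', entity.score,
                  latLng ? `${latLng.latitude}, ${latLng.longitude}` : '(no location)');
    }
  }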

FaceAnnotation

static

A face annotation object contains the results of face detection.

Properties

Parameter

boundingPoly

Object

The bounding polygon around the face. The coordinates of the bounding box are in the original image's scale, as returned in ImageParams. The bounding box is computed to "frame" the face in accordance with human expectations. It is based on the landmarker results. Note that one or more x and/or y coordinates may not be generated in the BoundingPoly (the polygon will be unbounded) if only a partial face appears in the image to be annotated.

This object should have the same structure as BoundingPoly

fdBoundingPoly

Object

The fd_bounding_poly bounding polygon is tighter than the boundingPoly, and encloses only the skin part of the face. Typically, it is used to eliminate the face from any image analysis that detects the "amount of skin" visible in an image. It is not based on the landmarker results, only on the initial face detection, hence the fd (face detection) prefix.

This object should have the same structure as BoundingPoly

landmarks

Array of Object

Detected face landmarks.

This object should have the same structure as Landmark

rollAngle

number

Roll angle, which indicates the amount of clockwise/anti-clockwise rotation of the face relative to the image vertical about the axis perpendicular to the face. Range [-180,180].

panAngle

number

Yaw angle, which indicates the leftward/rightward angle that the face is pointing relative to the vertical plane perpendicular to the image. Range [-180,180].

tiltAngle

number

Pitch angle, which indicates the upwards/downwards angle that the face is pointing relative to the image's horizontal plane. Range [-180,180].

detectionConfidence

number

Detection confidence. Range [0, 1].

landmarkingConfidence

number

Face landmarking confidence. Range [0, 1].

joyLikelihood

number

Joy likelihood.

The number should be among the values of Likelihood

sorrowLikelihood

number

Sorrow likelihood.

The number should be among the values of Likelihood

angerLikelihood

number

Anger likelihood.

The number should be among the values of Likelihood

surpriseLikelihood

number

Surprise likelihood.

The number should be among the values of Likelihood

underExposedLikelihood

number

Under-exposed likelihood.

The number should be among the values of Likelihood

blurredLikelihood

number

Blurred likelihood.

The number should be among the values of Likelihood

headwearLikelihood

number

Headwear likelihood.

The number should be among the values of Likelihood

See also

google.cloud.vision.v1.FaceAnnotation definition in proto format
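
A sketch summarizing each detected face using the fields above; likelihood fields may surface as enum names (e.g. 'VERY_LIKELY') or as numbers depending on the client version:

  function summarizeFaces(response) {
    for (const face of response.faceAnnotations || []) {
      console.log('detection confidence:', face.detectionConfidence,
                  'roll/pan/tilt:', face.rollAngle, face.panAngle, face.tiltAngle,
                  'joy:', face.joyLikelihood,
                  'headwear:', face.headwearLikelihood);
    }
  }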

Feature

static

Users describe the type of Google Cloud Vision API tasks to perform over images by using Features. Each Feature indicates a type of image detection task to perform. Features encode the Cloud Vision API vertical to operate on and the number of top-scoring results to return.

Properties

Parameter

type

number

The feature type.

The number should be among the values of Type

maxResults

number

Maximum number of results of this type.

See also

google.cloud.vision.v1.Feature definition in proto format

Image

static

Client image to perform Google Cloud Vision API tasks over.

Properties

Parameter

content

string

Image content, represented as a stream of bytes. Note: as with all bytes fields, protobuffers use a pure binary representation, whereas JSON representations use base64.

source

Object

Google Cloud Storage image location. If both content and source are provided for an image, content takes precedence and is used to perform the image annotation request.
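
A sketch of the two ways an Image can be supplied; the local path and bucket URI are placeholders, and the source field name (imageUri) comes from the ImageSource message, which is not shown here:

  const fs = require('fs');

  // Inline bytes: base64 in JSON representations, raw bytes in protobuf.
  const inlineImage = {content: fs.readFileSync('photo.jpg').toString('base64')};

  // Cloud Storage location: ignored if content is also set.
  const storedImage = {source: {imageUri: 'gs://my-bucket/photo.jpg'}};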