API documentation for the vision_v1p2beta1.types package.
Classes
AnnotateFileResponse
Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.
AnnotateImageRequest
Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested features.
AnnotateImageResponse
Response to an image annotation request.
AsyncAnnotateFileRequest
An offline file annotation request.
AsyncAnnotateFileResponse
The response for a single offline file annotation request.
AsyncBatchAnnotateFilesRequest
Multiple async file annotation requests are batched into a single service call.
AsyncBatchAnnotateFilesResponse
Response to an async batch file annotation request.
BatchAnnotateImagesRequest
Multiple image annotation requests are batched into a single service call.
BatchAnnotateImagesResponse
Response to a batch image annotation request.
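For orientation, a minimal sketch of how these request and response types fit together, assuming a recent proto-plus based release of google-cloud-vision (older releases spell the Feature field `type` instead of `type_`) and a hypothetical local image file:

```python
from google.cloud import vision_v1p2beta1 as vision

client = vision.ImageAnnotatorClient()

# Build a single AnnotateImageRequest: an Image plus the Features to run on it.
with open("photo.jpg", "rb") as f:  # hypothetical local file
    image = vision.types.Image(content=f.read())

request = vision.types.AnnotateImageRequest(
    image=image,
    features=[
        vision.types.Feature(type_=vision.types.Feature.Type.LABEL_DETECTION, max_results=10),
        vision.types.Feature(type_=vision.types.Feature.Type.FACE_DETECTION),
    ],
)

# One or more requests are batched into a single service call.
batch_response = client.batch_annotate_images(requests=[request])
for response in batch_response.responses:  # one AnnotateImageResponse per request
    for label in response.label_annotations:
        print(label.description, label.score)
```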
Block
Logical element on the page.
BoundingPoly
A bounding polygon for the detected image annotation.
ColorInfo
Color information consists of RGB channels, score, and the fraction of the image that the color occupies.
CropHint
Single crop hint that is used to generate a new crop when serving an image.
CropHintsAnnotation
Set of crop hints that are used to generate new crops when serving images.
CropHintsParams
Parameters for crop hints annotation request.
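A hedged sketch of requesting crop hints with a target aspect ratio, passing CropHintsParams through an ImageContext (the gs:// URI is a placeholder; import style assumes a recent google-cloud-vision release):

```python
from google.cloud import vision_v1p2beta1 as vision

client = vision.ImageAnnotatorClient()
image = vision.types.Image(
    source=vision.types.ImageSource(image_uri="gs://my-bucket/photo.jpg")  # placeholder URI
)

context = vision.types.ImageContext(
    crop_hints_params=vision.types.CropHintsParams(aspect_ratios=[16 / 9])
)
request = vision.types.AnnotateImageRequest(
    image=image,
    features=[vision.types.Feature(type_=vision.types.Feature.Type.CROP_HINTS)],
    image_context=context,
)

response = client.batch_annotate_images(requests=[request]).responses[0]
for hint in response.crop_hints_annotation.crop_hints:  # CropHint messages
    print(hint.confidence, [(v.x, v.y) for v in hint.bounding_poly.vertices])
```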
DominantColorsAnnotation
Set of dominant colors and their corresponding scores.
EntityAnnotation
Set of detected entity features.
FaceAnnotation
A face annotation object contains the results of face detection.
Feature
The type of Google Cloud Vision API detection to perform, and the
maximum number of results to return for that type. Multiple
Feature
objects can be specified in the features
list.
GcsDestination
The Google Cloud Storage location where the output will be written to.
GcsSource
The Google Cloud Storage location where the input will be read from.
Image
Client image to perform Google Cloud Vision API tasks over.
ImageAnnotationContext
If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.
ImageContext
Image context and/or feature-specific parameters.
ImageProperties
Stores image properties, such as dominant colors.
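To see how ColorInfo and DominantColorsAnnotation surface in a response, a sketch using the IMAGE_PROPERTIES feature (bucket path is a placeholder; recent client release assumed):

```python
from google.cloud import vision_v1p2beta1 as vision

client = vision.ImageAnnotatorClient()
request = vision.types.AnnotateImageRequest(
    image=vision.types.Image(
        source=vision.types.ImageSource(image_uri="gs://my-bucket/photo.jpg")  # placeholder
    ),
    features=[vision.types.Feature(type_=vision.types.Feature.Type.IMAGE_PROPERTIES)],
)
response = client.batch_annotate_images(requests=[request]).responses[0]

# DominantColorsAnnotation holds ColorInfo entries: RGB color, score, pixel fraction.
for color_info in response.image_properties_annotation.dominant_colors.colors:
    c = color_info.color
    print(f"rgb({c.red}, {c.green}, {c.blue})", color_info.score, color_info.pixel_fraction)
```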
ImageSource
External image source (Google Cloud Storage or web URL image location).
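Either a Cloud Storage object or a publicly accessible web URL can back an Image; a brief sketch (URIs are placeholders):

```python
from google.cloud import vision_v1p2beta1 as vision

# Image backed by a Cloud Storage object.
gcs_image = vision.types.Image(
    source=vision.types.ImageSource(gcs_image_uri="gs://my-bucket/photo.jpg")  # placeholder
)

# Image backed by a public web URL.
web_image = vision.types.Image(
    source=vision.types.ImageSource(image_uri="https://example.com/photo.jpg")  # placeholder
)
```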
InputConfig
The desired input location and metadata.
LatLongRect
Rectangle determined by min and max LatLng
pairs.
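A LatLongRect is typically supplied through ImageContext to bias location-sensitive features such as LANDMARK_DETECTION; a sketch assuming the LatLng message from googleapis-common-protos and placeholder coordinates:

```python
from google.cloud import vision_v1p2beta1 as vision
from google.type import latlng_pb2

context = vision.types.ImageContext(
    lat_long_rect=vision.types.LatLongRect(
        min_lat_lng=latlng_pb2.LatLng(latitude=37.0, longitude=-122.6),
        max_lat_lng=latlng_pb2.LatLng(latitude=38.0, longitude=-121.6),
    )
)
```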
Likelihood
A bucketized representation of likelihood, which is intended to give clients highly stable results across model upgrades.
Values:
UNKNOWN (0): Unknown likelihood.
VERY_UNLIKELY (1): It is very unlikely that the image belongs to the specified vertical.
UNLIKELY (2): It is unlikely that the image belongs to the specified vertical.
POSSIBLE (3): It is possible that the image belongs to the specified vertical.
LIKELY (4): It is likely that the image belongs to the specified vertical.
VERY_LIKELY (5): It is very likely that the image belongs to the specified vertical.
LocationInfo
Detected entity location information.
NormalizedVertex
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
OperationMetadata
Contains metadata for the BatchAnnotateImages operation.
OutputConfig
The desired output location and metadata.
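GcsSource/InputConfig and GcsDestination/OutputConfig come together in the offline (async) file path; a sketch for a PDF, assuming a recent client release and placeholder bucket paths:

```python
from google.cloud import vision_v1p2beta1 as vision

client = vision.ImageAnnotatorClient()

request = vision.types.AsyncAnnotateFileRequest(
    input_config=vision.types.InputConfig(
        gcs_source=vision.types.GcsSource(uri="gs://my-bucket/document.pdf"),  # placeholder
        mime_type="application/pdf",
    ),
    features=[vision.types.Feature(type_=vision.types.Feature.Type.DOCUMENT_TEXT_DETECTION)],
    output_config=vision.types.OutputConfig(
        gcs_destination=vision.types.GcsDestination(uri="gs://my-bucket/output/"),  # placeholder
        batch_size=2,  # pages per output JSON file
    ),
)

# The batch call returns a long-running operation; the annotations themselves
# are written to the GcsDestination rather than returned inline.
operation = client.async_batch_annotate_files(requests=[request])
operation.result(timeout=300)
```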
Page
Detected page from OCR.
Paragraph
Structural unit of text representing a number of words in a certain order.
Position
A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.
Property
A Property
consists of a user-supplied name/value pair.
SafeSearchAnnotation
Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
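Each SafeSearchAnnotation field is a Likelihood value, so a response can be compared against the enum directly; a sketch (placeholder URI, recent client release assumed):

```python
from google.cloud import vision_v1p2beta1 as vision

client = vision.ImageAnnotatorClient()
request = vision.types.AnnotateImageRequest(
    image=vision.types.Image(
        source=vision.types.ImageSource(image_uri="gs://my-bucket/photo.jpg")  # placeholder
    ),
    features=[vision.types.Feature(type_=vision.types.Feature.Type.SAFE_SEARCH_DETECTION)],
)
safe = client.batch_annotate_images(requests=[request]).responses[0].safe_search_annotation

# Each field (adult, violence, ...) carries a Likelihood enum value.
if safe.adult >= vision.types.Likelihood.LIKELY or safe.violence >= vision.types.Likelihood.LIKELY:
    print("Image flagged by Safe Search")
```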
Symbol
A single symbol representation.
TextAnnotation
TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is as follows: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may further have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextAnnotation.TextProperty message definition below for more detail.
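A sketch of walking that hierarchy to rebuild the detected text symbol by symbol (placeholder URI, recent client release assumed):

```python
from google.cloud import vision_v1p2beta1 as vision

client = vision.ImageAnnotatorClient()
request = vision.types.AnnotateImageRequest(
    image=vision.types.Image(
        source=vision.types.ImageSource(image_uri="gs://my-bucket/scan.png")  # placeholder
    ),
    features=[vision.types.Feature(type_=vision.types.Feature.Type.DOCUMENT_TEXT_DETECTION)],
)
response = client.batch_annotate_images(requests=[request]).responses[0]

# TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol
for page in response.full_text_annotation.pages:
    for block in page.blocks:
        for paragraph in block.paragraphs:
            for word in paragraph.words:
                text = "".join(symbol.text for symbol in word.symbols)
                print(text)
```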
TextDetectionParams
Parameters for text detections. This is used to control TEXT_DETECTION and DOCUMENT_TEXT_DETECTION features.
Vertex
A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
WebDetection
Relevant information for the image from the Internet.
WebDetectionParams
Parameters for web detection request.
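WebDetectionParams is also passed through ImageContext; a sketch enabling geo results for a WEB_DETECTION request (placeholder URI, recent client release assumed):

```python
from google.cloud import vision_v1p2beta1 as vision

client = vision.ImageAnnotatorClient()
request = vision.types.AnnotateImageRequest(
    image=vision.types.Image(
        source=vision.types.ImageSource(image_uri="gs://my-bucket/photo.jpg")  # placeholder
    ),
    features=[vision.types.Feature(type_=vision.types.Feature.Type.WEB_DETECTION)],
    image_context=vision.types.ImageContext(
        web_detection_params=vision.types.WebDetectionParams(include_geo_results=True)
    ),
)
web = client.batch_annotate_images(requests=[request]).responses[0].web_detection

for entity in web.web_entities:
    print(entity.description, entity.score)
```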
Word
A word representation.