Package types (2.3.2)

API documentation for the vision_v1p2beta1.types package.

Classes

AnnotateFileResponse

Response to a single file annotation request. A file may contain one or more images, which individually have their own responses.

AnnotateImageRequest

Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested features.
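
For orientation, a minimal sketch of building one such request is shown below; the file name is a placeholder, and the type_/Feature.Type spellings follow the proto-plus conventions of the 2.x clients, so verify them against your installed version:

    from google.cloud import vision_v1p2beta1 as vision

    # Read raw image bytes from a hypothetical local file.
    with open("local-photo.jpg", "rb") as f:
        image = vision.Image(content=f.read())

    # One request = one image plus the features to run on it.
    request = vision.AnnotateImageRequest(
        image=image,
        features=[vision.Feature(type_=vision.Feature.Type.LABEL_DETECTION)],
    )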

AnnotateImageResponse

Response to an image annotation request.

face_annotations (Sequence[google.cloud.vision_v1p2beta1.types.FaceAnnotation]): If present, face detection has completed successfully.
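
As a sketch of consuming such a response, assuming face detection was requested and response is an AnnotateImageResponse returned by the client:

    # Each FaceAnnotation carries a confidence score and a bounding polygon.
    for face in response.face_annotations:
        print(face.detection_confidence)
        print([(v.x, v.y) for v in face.bounding_poly.vertices])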

AsyncAnnotateFileRequest

An offline file annotation request.

input_config (google.cloud.vision_v1p2beta1.types.InputConfig): Required. Information about the input file.

AsyncAnnotateFileResponse

The response for a single offline file annotation request.

output_config (google.cloud.vision_v1p2beta1.types.OutputConfig): The output location and metadata from AsyncAnnotateFileRequest.

AsyncBatchAnnotateFilesRequest

Multiple async file annotation requests are batched into a single service call.
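
A sketch of an offline PDF annotation call follows; the bucket URIs are placeholders, and the InputConfig/OutputConfig values (mime_type, batch_size) should be adjusted for your files:

    from google.cloud import vision_v1p2beta1 as vision

    client = vision.ImageAnnotatorClient()

    file_request = vision.AsyncAnnotateFileRequest(
        input_config=vision.InputConfig(
            gcs_source=vision.GcsSource(uri="gs://my-bucket/docs/report.pdf"),  # hypothetical path
            mime_type="application/pdf",
        ),
        features=[vision.Feature(type_=vision.Feature.Type.DOCUMENT_TEXT_DETECTION)],
        output_config=vision.OutputConfig(
            gcs_destination=vision.GcsDestination(uri="gs://my-bucket/ocr-output/"),  # hypothetical path
            batch_size=20,  # pages of output per JSON file
        ),
    )

    # The call returns a long-running operation; result() blocks until it completes.
    operation = client.async_batch_annotate_files(requests=[file_request])
    result = operation.result(timeout=300)  # AsyncBatchAnnotateFilesResponse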

AsyncBatchAnnotateFilesResponse

Response to an async batch file annotation request.

responses (Sequence[google.cloud.vision_v1p2beta1.types.AsyncAnnotateFileResponse]): The list of file annotation responses, one for each request in AsyncBatchAnnotateFilesRequest.

BatchAnnotateImagesRequest

Multiple image annotation requests are batched into a single service call.

BatchAnnotateImagesResponse

Response to a batch image annotation request.

responses (Sequence[google.cloud.vision_v1p2beta1.types.AnnotateImageResponse]): Individual responses to image annotation requests within the batch.
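
A sketch of submitting a batch and walking the per-image responses; the request is built as in the AnnotateImageRequest entry, and label_annotations assumes LABEL_DETECTION was among the requested features:

    from google.cloud import vision_v1p2beta1 as vision

    client = vision.ImageAnnotatorClient()
    batch_response = client.batch_annotate_images(requests=[request])

    # responses is parallel to the submitted requests.
    for image_response in batch_response.responses:
        if image_response.error.message:
            print("annotation failed:", image_response.error.message)
            continue
        for label in image_response.label_annotations:
            print(label.description, label.score)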

Block

Logical element on the page.

property (google.cloud.vision_v1p2beta1.types.TextAnnotation.TextProperty): Additional information detected for the block.

BoundingPoly

A bounding polygon for the detected image annotation.

vertices (Sequence[google.cloud.vision_v1p2beta1.types.Vertex]): The bounding polygon vertices.

ColorInfo

Color information consists of RGB channels, score, and the fraction of the image that the color occupies.

CropHint

Single crop hint that is used to generate a new crop when serving an image.

CropHintsAnnotation

Set of crop hints that are used to generate new crops when serving images.

CropHintsParams

Parameters for a crop hints annotation request.

aspect_ratios (Sequence[float]): Aspect ratios in floats, representing the ratio of the width to the height of the image. For example, if the desired aspect ratio is 4/3, the corresponding float value should be 1.33333. If not specified, the best possible crop is returned. The number of provided aspect ratios is limited to a maximum of 16; any aspect ratios provided after the 16th are ignored.
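
A sketch of passing the 4:3 example above through an ImageContext; the image variable is assumed to be built as in the Image entry:

    from google.cloud import vision_v1p2beta1 as vision

    context = vision.ImageContext(
        crop_hints_params=vision.CropHintsParams(aspect_ratios=[1.33333])  # roughly 4:3
    )
    request = vision.AnnotateImageRequest(
        image=image,
        features=[vision.Feature(type_=vision.Feature.Type.CROP_HINTS)],
        image_context=context,
    )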

DominantColorsAnnotation

Set of dominant colors and their corresponding scores.

colors (Sequence[google.cloud.vision_v1p2beta1.types.ColorInfo]): RGB color values with their score and pixel fraction.

EntityAnnotation

Set of detected entity features.

mid (str): Opaque entity ID. Some IDs may be available in the Google Knowledge Graph Search API (https://developers.google.com/knowledge-graph/).

FaceAnnotation

A face annotation object contains the results of face detection.

Feature

The type of Google Cloud Vision API detection to perform, and the maximum number of results to return for that type. Multiple Feature objects can be specified in the features list.
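
For example, several feature types can be requested in one pass, optionally capping the number of results per type (the values below are arbitrary):

    from google.cloud import vision_v1p2beta1 as vision

    features = [
        vision.Feature(type_=vision.Feature.Type.LABEL_DETECTION, max_results=10),
        vision.Feature(type_=vision.Feature.Type.FACE_DETECTION, max_results=5),
        vision.Feature(type_=vision.Feature.Type.WEB_DETECTION),
    ]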

GcsDestination

The Google Cloud Storage location to which the output will be written.

GcsSource

The Google Cloud Storage location from which the input will be read.

Image

Client image to perform Google Cloud Vision API tasks over.

content (bytes): Image content, represented as a stream of bytes. Note: as with all bytes fields, protocol buffers use a pure binary representation, whereas JSON representations use base64.
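
A sketch of the two ways to supply image data, either inline bytes in content or a reference through source (the file name and URI are placeholders):

    from google.cloud import vision_v1p2beta1 as vision

    # Inline bytes: read the file and pass its raw content.
    with open("local-photo.jpg", "rb") as f:
        inline_image = vision.Image(content=f.read())

    # Reference by URI instead of embedding the bytes.
    remote_image = vision.Image(
        source=vision.ImageSource(image_uri="gs://my-bucket/photos/photo.jpg")
    )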

ImageAnnotationContext

If an image was produced from a file (e.g. a PDF), this message gives information about the source of that image.

ImageContext

Image context and/or feature-specific parameters.

lat_long_rect (google.cloud.vision_v1p2beta1.types.LatLongRect): Not used.

ImageProperties

Stores image properties, such as dominant colors.

dominant_colors (google.cloud.vision_v1p2beta1.types.DominantColorsAnnotation): If present, dominant color detection has completed successfully.

ImageSource

External image source (Google Cloud Storage or web URL image location).

InputConfig

The desired input location and metadata.

gcs_source (google.cloud.vision_v1p2beta1.types.GcsSource): The Google Cloud Storage location to read the input from.

LatLongRect

Rectangle determined by min and max LatLng pairs.

min_lat_lng (google.type.latlng_pb2.LatLng): Min lat/long pair.

Likelihood

A bucketized representation of likelihood, which is intended to give clients highly stable results across model upgrades.

LocationInfo

Detected entity location information.

lat_lng (google.type.latlng_pb2.LatLng): Lat/long location coordinates.

NormalizedVertex

A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.

OperationMetadata

Contains metadata for the BatchAnnotateImages operation.

state (google.cloud.vision_v1p2beta1.types.OperationMetadata.State): Current state of the batch operation.

OutputConfig

The desired output location and metadata.

gcs_destination (google.cloud.vision_v1p2beta1.types.GcsDestination): The Google Cloud Storage location to write the output(s) to.

Page

Detected page from OCR.

property (google.cloud.vision_v1p2beta1.types.TextAnnotation.TextProperty): Additional information detected on the page.

Paragraph

Structural unit of text representing a number of words in a certain order.

Position

A 3D position in the image, used primarily for face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image.

Property

A Property consists of a user-supplied name/value pair.

name (str): Name of the property.

SafeSearchAnnotation

Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).

Symbol

A single symbol representation.

property (google.cloud.vision_v1p2beta1.types.TextAnnotation.TextProperty): Additional information detected for the symbol.

TextAnnotation

TextAnnotation contains a structured representation of OCR-extracted text. The hierarchy of an OCR-extracted text structure is: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may have its own properties. Properties describe detected languages, breaks, and so on. Refer to the TextAnnotation.TextProperty message definition for more detail.
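
A sketch of walking that hierarchy, assuming DOCUMENT_TEXT_DETECTION was requested and annotation is the full_text_annotation field of an AnnotateImageResponse:

    # TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol
    for page in annotation.pages:
        for block in page.blocks:
            for paragraph in block.paragraphs:
                for word in paragraph.words:
                    text = "".join(symbol.text for symbol in word.symbols)
                    print(text, word.confidence)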

TextDetectionParams

Parameters for text detection. This is used to control the TEXT_DETECTION and DOCUMENT_TEXT_DETECTION features.

Vertex

A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.

WebDetection

Relevant information for the image from the Internet.

web_entities (Sequence[google.cloud.vision_v1p2beta1.types.WebDetection.WebEntity]): Deduced entities from similar images on the Internet.

WebDetectionParams

Parameters for a web detection request.

include_geo_results (bool): Whether to include results derived from the geo information in the image.
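
A sketch of enabling geo-derived web results; the web_detection_params field on ImageContext is assumed here from the matching v1 surface, and image is built as in the Image entry:

    from google.cloud import vision_v1p2beta1 as vision

    request = vision.AnnotateImageRequest(
        image=image,
        features=[vision.Feature(type_=vision.Feature.Type.WEB_DETECTION)],
        image_context=vision.ImageContext(
            web_detection_params=vision.WebDetectionParams(include_geo_results=True)
        ),
    )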

Word

A word representation.

property (google.cloud.vision_v1p2beta1.types.TextAnnotation.TextProperty): Additional information detected for the word.