The Client Library for Python v0.25.1 makes some significant changes to the design of previous client libraries. These changes can be summarized as follows:
Consolidation of modules into fewer types
Replacing untyped parameters with strongly-typed classes and enumerations
This topic details the changes that you must make to your Python code for the Cloud Vision API in order to use the v0.25.1 Python client library.
Running previous versions of the client library
You are not required to upgrade your Python client library to v0.25.1. If you want to continue using a previous version of the Python client library and do not want to migrate your code, you should pin the version of the Python client library used by your app. To pin a specific library version, edit the requirements.txt file as shown:
google-cloud-vision==0.25
Removed Modules
The following modules were removed from the Python Client Library v0.25.1 package.
google.cloud.vision.annotations
google.cloud.vision.batch
google.cloud.vision.client
google.cloud.vision.color
google.cloud.vision.crop_hint
google.cloud.vision.entity
google.cloud.vision.face
google.cloud.vision.feature
google.cloud.vision.geometry
google.cloud.vision.image
google.cloud.vision.likelihood
google.cloud.vision.safe_search
google.cloud.vision.text
google.cloud.vision.web
Required Code Changes
Imports
Include the new google.cloud.vision.types module in order to access the new types in the Python Client Library v0.25.1. The types module contains the new classes that are required for creating requests, such as types.Image.

Additionally, the new google.cloud.vision.enums module contains the enumerations useful for parsing and understanding API responses, such as enums.Likelihood.UNLIKELY and enums.FaceAnnotation.Landmark.Type.LEFT_EYE.
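For example, a v0.25.1 application that uses both modules typically starts with imports along these lines (a minimal sketch; client construction is covered in the next section):

from google.cloud import vision
from google.cloud.vision import enums
from google.cloud.vision import types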
Create a client
The Client class has been replaced with the ImageAnnotatorClient class. Replace references to the Client class with ImageAnnotatorClient.
Previous versions of the client libraries:
old_client = vision.Client()
Python Client Library v0.25.1:
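A minimal sketch of creating the new client, assuming the vision module is imported as shown above:

client = vision.ImageAnnotatorClient()

The examples below assume that client refers to an ImageAnnotatorClient constructed this way.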
Constructing objects that represent image content
To identify image content from a local file, from a Google Cloud Storage URI, or from a web URI, use the new Image class.
Constructing objects that represent image content from a local file
The following example shows the new way to represent image content from a local file.
Previous versions of the client libraries:
with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)
Python Client Library v0.25.1:
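A sketch of the v0.25.1 equivalent, assuming the types module is imported as shown above and file_name refers to a local image file:

# Read the file's bytes and wrap them in the new types.Image class.
with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = types.Image(content=content)

Note that the image content is read the same way as before; only the construction of the image object changes.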
Constructing objects that represent image content from a URI
The following example shows the new way to represent image content from a Google Cloud Storage URI or a web URI. uri is the URI to an image file on Google Cloud Storage or on the web.
Previous versions of the client libraries:
image = old_client.image(source_uri=uri)
Python Client Library v0.25.1:
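A sketch of the v0.25.1 equivalent: construct an empty types.Image and point its source.image_uri field at the file:

# The URI can refer to a Google Cloud Storage object or a web image.
image = types.Image()
image.source.image_uri = uri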
Making requests and processing responses
With the Python Client Library v0.25.1, API methods such as face_detection belong to the ImageAnnotatorClient object, as opposed to the Image object.

The returned values are different for several methods, as explained below. In particular, bounding box vertices are now stored in bounding_poly.vertices as opposed to bounds.vertices, and the coordinates of each vertex are stored in vertex.x and vertex.y as opposed to vertex.x_coordinate and vertex.y_coordinate.

The bounding box change affects face_detection, logo_detection, text_detection, document_text_detection, and crop_hints.
Making a face detection request and processing the response
Emotion likelihoods are now returned as enumerations stored in face.surprise_likelihood as opposed to face.emotions.surprise. The names of likelihood labels can be recovered by importing google.cloud.vision.enums.Likelihood.
Previous versions of the client libraries:
with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)
faces = image.detect_faces()

for face in faces:
    print('anger: {}'.format(face.emotions.anger))
    print('joy: {}'.format(face.emotions.joy))
    print('surprise: {}'.format(face.emotions.surprise))

    vertices = (['({},{})'.format(bound.x_coordinate, bound.y_coordinate)
                 for bound in face.bounds.vertices])

    print('face bounds: {}'.format(','.join(vertices)))
Python Client Library v0.25.1:
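A sketch of the v0.25.1 equivalent, assuming client and image are constructed as shown above. The likelihood_name tuple is an illustrative helper that maps Likelihood enumeration values (small integers) to printable labels:

response = client.face_detection(image=image)
faces = response.face_annotations

# Printable names for the values of google.cloud.vision.enums.Likelihood.
likelihood_name = ('UNKNOWN', 'VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE',
                   'LIKELY', 'VERY_LIKELY')

for face in faces:
    print('anger: {}'.format(likelihood_name[face.anger_likelihood]))
    print('joy: {}'.format(likelihood_name[face.joy_likelihood]))
    print('surprise: {}'.format(likelihood_name[face.surprise_likelihood]))

    vertices = (['({},{})'.format(vertex.x, vertex.y)
                 for vertex in face.bounding_poly.vertices])

    print('face bounds: {}'.format(','.join(vertices)))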
Making a label detection request and processing the response
Previous versions of the client libraries:
with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)
labels = image.detect_labels()

for label in labels:
    print(label.description)
Python Client Library v0.25.1:
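A sketch of the v0.25.1 equivalent; the labels are read from the label_annotations field of the response:

response = client.label_detection(image=image)
labels = response.label_annotations

for label in labels:
    print(label.description)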
Making a landmark detection request and processing the response
Landmark locations' latitude and longitude are now stored in location.lat_lng.latitude and location.lat_lng.longitude, as opposed to location.latitude and location.longitude.

Previous versions of the client libraries:
with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)
landmarks = image.detect_landmarks()

for landmark in landmarks:
    print(landmark.description, landmark.score)
    for location in landmark.locations:
        print('Latitude {}'.format(location.latitude))
        print('Longitude {}'.format(location.longitude))
Python Client Library v0.25.1:
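A sketch of the v0.25.1 equivalent, reading the coordinates from the nested lat_lng field:

response = client.landmark_detection(image=image)
landmarks = response.landmark_annotations

for landmark in landmarks:
    print(landmark.description, landmark.score)
    for location in landmark.locations:
        print('Latitude {}'.format(location.lat_lng.latitude))
        print('Longitude {}'.format(location.lat_lng.longitude))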
Making a logo detection request and processing the response
Previous versions of the client libraries:
with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)
logos = image.detect_logos()

for logo in logos:
    print(logo.description, logo.score)
Python Client Library v0.25.1:
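A sketch of the v0.25.1 equivalent:

response = client.logo_detection(image=image)
logos = response.logo_annotations

for logo in logos:
    print(logo.description, logo.score)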
Making a SafeSearch detection request and processing the response
SafeSearch likelihoods are now returned as enumerations. The names of likelihood labels can be recovered by importing google.cloud.vision.enums.Likelihood.

Previous versions of the client libraries:
with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)
safe = image.detect_safe_search()

print('Safe search:')
print('adult: {}'.format(safe.adult))
print('medical: {}'.format(safe.medical))
print('spoofed: {}'.format(safe.spoof))
print('violence: {}'.format(safe.violence))
Python Client Library v0.25.1:
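A sketch of the v0.25.1 equivalent. The annotation fields now hold Likelihood enumeration values; as in the face example, likelihood_name is an illustrative helper for printing them:

response = client.safe_search_detection(image=image)
safe = response.safe_search_annotation

# Printable names for the values of google.cloud.vision.enums.Likelihood.
likelihood_name = ('UNKNOWN', 'VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE',
                   'LIKELY', 'VERY_LIKELY')

print('Safe search:')
print('adult: {}'.format(likelihood_name[safe.adult]))
print('medical: {}'.format(likelihood_name[safe.medical]))
print('spoofed: {}'.format(likelihood_name[safe.spoof]))
print('violence: {}'.format(likelihood_name[safe.violence]))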
Making a text detection request and processing the response
Previous versions of the client libraries:
with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)
texts = image.detect_text()

for text in texts:
    print('\n"{}"'.format(text.description))

    vertices = (['({},{})'.format(bound.x_coordinate, bound.y_coordinate)
                 for bound in text.bounds.vertices])

    print('bounds: {}'.format(','.join(vertices)))
Python Client Library v0.25.1:
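A sketch of the v0.25.1 equivalent, with the bounding box now read from bounding_poly.vertices:

response = client.text_detection(image=image)
texts = response.text_annotations

for text in texts:
    print('\n"{}"'.format(text.description))

    vertices = (['({},{})'.format(vertex.x, vertex.y)
                 for vertex in text.bounding_poly.vertices])

    print('bounds: {}'.format(','.join(vertices)))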
Making a document text detection request and processing the response
Previous versions of the client libraries:
with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)
document = image.detect_full_text()

for page in document.pages:
    for block in page.blocks:
        block_words = []
        for paragraph in block.paragraphs:
            block_words.extend(paragraph.words)

        block_symbols = []
        for word in block_words:
            block_symbols.extend(word.symbols)

        block_text = ''
        for symbol in block_symbols:
            block_text = block_text + symbol.text

        print('Block Content: {}'.format(block_text))
        print('Block Bounds:\n {}'.format(block.bounding_box))
Python Client Library v0.25.1:
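A sketch of the v0.25.1 equivalent; the document is read from the full_text_annotation field of the response, and the page/block/paragraph traversal is otherwise unchanged:

response = client.document_text_detection(image=image)
document = response.full_text_annotation

for page in document.pages:
    for block in page.blocks:
        block_words = []
        for paragraph in block.paragraphs:
            block_words.extend(paragraph.words)

        block_symbols = []
        for word in block_words:
            block_symbols.extend(word.symbols)

        block_text = ''
        for symbol in block_symbols:
            block_text = block_text + symbol.text

        print('Block Content: {}'.format(block_text))
        print('Block Bounds:\n {}'.format(block.bounding_box))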
Making an image properties request and processing the response
Dominant color information is now stored in props.dominant_colors.colors as opposed to props.colors.
Previous versions of the client libraries:
with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)
props = image.detect_properties()

for color in props.colors:
    print('fraction: {}'.format(color.pixel_fraction))
    print('\tr: {}'.format(color.color.red))
    print('\tg: {}'.format(color.color.green))
    print('\tb: {}'.format(color.color.blue))
    print('\ta: {}'.format(color.color.alpha))
Python Client Library v0.25.1:
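A sketch of the v0.25.1 equivalent, iterating over props.dominant_colors.colors:

response = client.image_properties(image=image)
props = response.image_properties_annotation

for color in props.dominant_colors.colors:
    print('fraction: {}'.format(color.pixel_fraction))
    print('\tr: {}'.format(color.color.red))
    print('\tg: {}'.format(color.color.green))
    print('\tb: {}'.format(color.color.blue))
    print('\ta: {}'.format(color.color.alpha))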
Making a web detection request and processing the response
Previous versions of the client libraries:
with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)
notes = image.detect_web()

if notes.pages_with_matching_images:
    print('\n{} Pages with matching images retrieved'.format(
        len(notes.pages_with_matching_images)))

    for page in notes.pages_with_matching_images:
        print('Score : {}'.format(page.score))
        print('Url   : {}'.format(page.url))

if notes.full_matching_images:
    print('\n{} Full Matches found: '.format(
        len(notes.full_matching_images)))

    for image in notes.full_matching_images:
        print('Score: {}'.format(image.score))
        print('Url  : {}'.format(image.url))

if notes.partial_matching_images:
    print('\n{} Partial Matches found: '.format(
        len(notes.partial_matching_images)))

    for image in notes.partial_matching_images:
        print('Score: {}'.format(image.score))
        print('Url  : {}'.format(image.url))

if notes.web_entities:
    print('\n{} Web entities found: '.format(len(notes.web_entities)))

    for entity in notes.web_entities:
        print('Score      : {}'.format(entity.score))
        print('Description: {}'.format(entity.description))
Python Client Library v0.25.1:
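A sketch of the v0.25.1 equivalent; the annotations are read from the web_detection field of the response, and the fields of the returned object keep the same names as before:

response = client.web_detection(image=image)
notes = response.web_detection

if notes.pages_with_matching_images:
    print('\n{} Pages with matching images retrieved'.format(
        len(notes.pages_with_matching_images)))

    for page in notes.pages_with_matching_images:
        print('Score : {}'.format(page.score))
        print('Url   : {}'.format(page.url))

if notes.full_matching_images:
    print('\n{} Full Matches found: '.format(
        len(notes.full_matching_images)))

    for image in notes.full_matching_images:
        print('Score: {}'.format(image.score))
        print('Url  : {}'.format(image.url))

if notes.partial_matching_images:
    print('\n{} Partial Matches found: '.format(
        len(notes.partial_matching_images)))

    for image in notes.partial_matching_images:
        print('Score: {}'.format(image.score))
        print('Url  : {}'.format(image.url))

if notes.web_entities:
    print('\n{} Web entities found: '.format(len(notes.web_entities)))

    for entity in notes.web_entities:
        print('Score      : {}'.format(entity.score))
        print('Description: {}'.format(entity.description))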
Making a crop hints request and processing the response
Previous versions of the client libraries:
with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)
hints = image.detect_crop_hints(aspect_ratios=[1.77])

for n, hint in enumerate(hints):
    print('\nCrop Hint: {}'.format(n))

    vertices = (['({},{})'.format(bound.x_coordinate, bound.y_coordinate)
                 for bound in hint.bounds.vertices])

    print('bounds: {}'.format(','.join(vertices)))
Python Client Library v0.25.1:
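A sketch of the v0.25.1 equivalent, with the aspect ratios passed through a CropHintsParams inside an ImageContext (see the note below):

crop_hints_params = types.CropHintsParams(aspect_ratios=[1.77])
image_context = types.ImageContext(crop_hints_params=crop_hints_params)

response = client.crop_hints(image=image, image_context=image_context)
hints = response.crop_hints_annotation.crop_hints

for n, hint in enumerate(hints):
    print('\nCrop Hint: {}'.format(n))

    vertices = (['({},{})'.format(vertex.x, vertex.y)
                 for vertex in hint.bounding_poly.vertices])

    print('bounds: {}'.format(','.join(vertices)))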
Note that the aspect ratios need to be passed in through a CropHintsParams and an ImageContext.