Migrating to Python Client Library v0.25.1

The Client Library for Python v0.25.1 includes some significant design changes compared to previous client libraries. These changes can be summarized as follows:

  • Consolidation of modules into fewer types

  • Replacing untyped parameters with strongly-typed classes and enumerations

This topic describes the changes that you need to make to your Cloud Vision API Python code in order to use the v0.25.1 Python client library.

Running previous versions of the client library

You are not required to upgrade your Python client library to v0.25.1. If you want to continue using a previous version of the Python client library and do not want to migrate your code, pin the version of the Python client library used by your app. To pin the library version, edit the requirements.txt file as shown:

google-cloud-vision==0.25

Deprecated Modules

The following modules are deprecated in the Python Client Library v0.25.1 package.

  • google.cloud.vision.annotations

  • google.cloud.vision.batch

  • google.cloud.vision.client

  • google.cloud.vision.color

  • google.cloud.vision.crop_hint

  • google.cloud.vision.entity

  • google.cloud.vision.face

  • google.cloud.vision.feature

  • google.cloud.vision.geometry

  • google.cloud.vision.image

  • google.cloud.vision.likelihood

  • google.cloud.vision.safe_search

  • google.cloud.vision.text

  • google.cloud.vision.web

Required Code Changes

Imports

Import the new google.cloud.vision.types module to access the new types in the Python Client Library v0.25.1.

The types module contains the new classes that are required for creating requests, such as types.Image.

from google.cloud import vision
from google.cloud.vision import types

Additionally, the new google.cloud.vision.enums module contains the enumerations useful for parsing and understanding API responses, such as enums.Likelihood.UNLIKELY and enums.FaceAnnotation.Landmark.Type.LEFT_EYE.
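
For example, because the enumeration constants are integer-valued, you can compare them directly against the likelihood fields returned in a response. The following is a minimal sketch, assuming face is a FaceAnnotation taken from a face_detection response (shown later in this topic):

from google.cloud.vision import enums

# `face` is assumed to be a FaceAnnotation from a face_detection response.
# enums.Likelihood.LIKELY is an integer-valued constant, so it can be
# compared against integer likelihood fields such as surprise_likelihood.
if face.surprise_likelihood >= enums.Likelihood.LIKELY:
    print('This face is likely or very likely to be surprised.')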

Create a client

The Client class has been replaced with the ImageAnnotatorClient class. Replace references to the Client class with ImageAnnotatorClient.

Previous versions of the client libraries:

old_client = vision.Client()

Python Client Library v0.25.1:

client = vision.ImageAnnotatorClient()

Constructing objects that represent image content

To represent image content from a local file, from a Google Cloud Storage URI, or from a web URI, use the new Image class.

Constructing objects that represent image content from a local file

The following example shows the new way to represent image content from a local file.

Previous versions of the client libraries:

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

Python Client Library v0.25.1:

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = types.Image(content=content)

Constructing objects that represent image content from a URI

The following example shows the new way to represent image content from a Google Cloud Storage URI or a web URI. uri is the URI to an image file on Google Cloud Storage or on the web.

Previous versions of the client libraries:

image = old_client.image(source_uri=uri)

Python Client Library v0.25.1:

image = types.Image()
image.source.image_uri = uri

Making requests and processing responses

With the Python Client Library v0.25.1, API methods such as face_detection belong to the ImageAnnotatorClient object rather than to Image objects.

The returned values are different for several methods as explained below.

In particular, bounding box vertices are now stored in bounding_poly.vertices as opposed to bounds.vertices. The coordinates of each vertex are stored in vertex.x and vertex.y as opposed to vertex.x_coordinate and vertex.y_coordinate.

The bounding box change affects face_detection, logo_detection, text_detection, document_text_detection, and crop_hints.
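
For example, code that formats an annotation's vertices changes as shown in this minimal sketch, where annotation is a placeholder for any single annotation returned by one of the methods listed above:

# Previous versions of the client libraries:
# vertices = ['({},{})'.format(v.x_coordinate, v.y_coordinate)
#             for v in annotation.bounds.vertices]

# Python Client Library v0.25.1:
vertices = ['({},{})'.format(v.x, v.y)
            for v in annotation.bounding_poly.vertices]
print('bounds: {}'.format(','.join(vertices)))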

Making a face detection request and processing the response

Emotion likelihoods are now returned as enumerations stored in face.surprise_likelihood as opposed to face.emotions.surprise. The names of likelihood labels can be recovered by importing google.cloud.vision.enums.Likelihood.

Previous versions of the client libraries:

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

faces = image.detect_faces()

for face in faces:
    print('anger: {}'.format(face.emotions.anger))
    print('joy: {}'.format(face.emotions.joy))
    print('surprise: {}'.format(face.emotions.surprise))

    vertices = (['({},{})'.format(bound.x_coordinate, bound.y_coordinate)
                for bound in face.bounds.vertices])

    print('face bounds: {}'.format(','.join(vertices)))

Python Client Library v0.25.1:

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = types.Image(content=content)

response = client.face_detection(image=image)
faces = response.face_annotations

# Names of likelihoods from google.cloud.vision.enums
likelihood_name = ('UNKNOWN', 'VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE',
                   'LIKELY', 'VERY_LIKELY')
print('Faces:')

for face in faces:
    print('anger: {}'.format(likelihood_name[face.anger_likelihood]))
    print('joy: {}'.format(likelihood_name[face.joy_likelihood]))
    print('surprise: {}'.format(likelihood_name[face.surprise_likelihood]))

    vertices = (['({},{})'.format(vertex.x, vertex.y)
                for vertex in face.bounding_poly.vertices])

    print('face bounds: {}'.format(','.join(vertices)))

Making a label detection request and processing the response

Previous versions of the client libraries:

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

labels = image.detect_labels()

for label in labels:
    print(label.description)

Python Client Library v0.25.1:

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = types.Image(content=content)

response = client.label_detection(image=image)
labels = response.label_annotations
print('Labels:')

for label in labels:
    print(label.description)

Making a landmark detection request and processing the response

Landmark locations' latitude and longitude are now stored in location.lat_lng.latitude and location.lat_lng.longitude, as opposed to location.latitude and location.longitude.

Previous versions of the client libraries:

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

landmarks = image.detect_landmarks()

for landmark in landmarks:
    print(landmark.description, landmark.score)
    for location in landmark.locations:
        print('Latitude: {}'.format(location.latitude))
        print('Longitude: {}'.format(location.longitude))

Python Client Library v0.25.1:

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = types.Image(content=content)

response = client.landmark_detection(image=image)
landmarks = response.landmark_annotations
print('Landmarks:')

for landmark in landmarks:
    print(landmark.description)
    for location in landmark.locations:
        lat_lng = location.lat_lng
        print('Latitude: {}'.format(lat_lng.latitude))
        print('Longitude: {}'.format(lat_lng.longitude))

Making a logo detection request and processing the response

Previous versions of the client libraries:

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

logos = image.detect_logos()

for logo in logos:
    print(logo.description, logo.score)

Python Client Library v0.25.1:

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = types.Image(content=content)

response = client.logo_detection(image=image)
logos = response.logo_annotations
print('Logos:')

for logo in logos:
    print(logo.description)

Making a safe search detection request and processing the response

Safe search likelihoods are now returned as enumerations. The names of likelihood labels can be recovered by importing google.cloud.vision.enums.Likelihood.

Previous versions of the client libraries:

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

safe = image.detect_safe_search()
print('Safe search:')
print('adult: {}'.format(safe.adult))
print('medical: {}'.format(safe.medical))
print('spoofed: {}'.format(safe.spoof))
print('violence: {}'.format(safe.violence))

Python Client Library v0.25.1:

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = types.Image(content=content)

response = client.safe_search_detection(image=image)
safe = response.safe_search_annotation

# Names of likelihoods from google.cloud.vision.enums
likelihood_name = ('UNKNOWN', 'VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE',
                   'LIKELY', 'VERY_LIKELY')
print('Safe search:')

print('adult: {}'.format(likelihood_name[safe.adult]))
print('medical: {}'.format(likelihood_name[safe.medical]))
print('spoofed: {}'.format(likelihood_name[safe.spoof]))
print('violence: {}'.format(likelihood_name[safe.violence]))

Making a text detection request and processing the response

Previous versions of the client libraries:

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

texts = image.detect_text()

for text in texts:
    print('\n"{}"'.format(text.description))

    vertices = (['({},{})'.format(bound.x_coordinate, bound.y_coordinate)
                for bound in text.bounds.vertices])

    print('bounds: {}'.format(','.join(vertices)))

Python Client Library v0.25.1:

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = types.Image(content=content)

response = client.text_detection(image=image)
texts = response.text_annotations
print('Texts:')

for text in texts:
    print('\n"{}"'.format(text.description))

    vertices = (['({},{})'.format(vertex.x, vertex.y)
                for vertex in text.bounding_poly.vertices])

    print('bounds: {}'.format(','.join(vertices)))

Making a document text detection request and processing the response

Previous versions of the client libraries:

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

document = image.detect_full_text()

for page in document.pages:
    for block in page.blocks:
        block_words = []
        for paragraph in block.paragraphs:
            block_words.extend(paragraph.words)

        block_symbols = []
        for word in block_words:
            block_symbols.extend(word.symbols)

        block_text = ''
        for symbol in block_symbols:
            block_text = block_text + symbol.text

        print('Block Content: {}'.format(block_text))
        print('Block Bounds:\n {}'.format(block.bounding_box))

Python Client Library v0.25.1:

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = types.Image(content=content)

response = client.document_text_detection(image=image)
document = response.full_text_annotation

for page in document.pages:
    for block in page.blocks:
        block_words = []
        for paragraph in block.paragraphs:
            block_words.extend(paragraph.words)

        block_symbols = []
        for word in block_words:
            block_symbols.extend(word.symbols)

        block_text = ''
        for symbol in block_symbols:
            block_text = block_text + symbol.text

        print('Block Content: {}'.format(block_text))
        print('Block Bounds:\n {}'.format(block.bounding_box))

Making an image properties request and processing the response

Dominant color information is now stored in props.dominant_colors.colors as opposed to props.colors.

Previous versions of the client libraries:

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

props = image.detect_properties()

for color in props.colors:
    print('fraction: {}'.format(color.pixel_fraction))
    print('\tr: {}'.format(color.color.red))
    print('\tg: {}'.format(color.color.green))
    print('\tb: {}'.format(color.color.blue))
    print('\ta: {}'.format(color.color.alpha))

Python Client Library v0.25.1:

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = types.Image(content=content)

response = client.image_properties(image=image)
props = response.image_properties_annotation
print('Properties:')

for color in props.dominant_colors.colors:
    print('fraction: {}'.format(color.pixel_fraction))
    print('\tr: {}'.format(color.color.red))
    print('\tg: {}'.format(color.color.green))
    print('\tb: {}'.format(color.color.blue))
    print('\ta: {}'.format(color.color.alpha))

Making a web detection request and processing the response

Previous versions of the client libraries:

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

notes = image.detect_web()

if notes.pages_with_matching_images:
    print('\n{} Pages with matching images retrieved'.format(
        len(notes.pages_with_matching_images)))

    for page in notes.pages_with_matching_images:
        print('Score : {}'.format(page.score))
        print('Url   : {}'.format(page.url))

if notes.full_matching_images:
    print('\n{} Full Matches found: '.format(
        len(notes.full_matching_images)))

    for image in notes.full_matching_images:
        print('Score:  {}'.format(image.score))
        print('Url  : {}'.format(image.url))

if notes.partial_matching_images:
    print('\n{} Partial Matches found: '.format(
        len(notes.partial_matching_images)))

    for image in notes.partial_matching_images:
        print('Score: {}'.format(image.score))
        print('Url  : {}'.format(image.url))

if notes.web_entities:
    print('\n{} Web entities found: '.format(len(notes.web_entities)))

    for entity in notes.web_entities:
        print('Score      : {}'.format(entity.score))
        print('Description: {}'.format(entity.description))

Python Client Library v0.25.1:

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = types.Image(content=content)

response = client.web_detection(image=image)
notes = response.web_detection

if notes.pages_with_matching_images:
    print('\n{} Pages with matching images retrieved'.format(
        len(notes.pages_with_matching_images)))

    for page in notes.pages_with_matching_images:
        print('Url   : {}'.format(page.url))

if notes.full_matching_images:
    print('\n{} Full Matches found: '.format(
        len(notes.full_matching_images)))

    for image in notes.full_matching_images:
        print('Url  : {}'.format(image.url))

if notes.partial_matching_images:
    print('\n{} Partial Matches found: '.format(
        len(notes.partial_matching_images)))

    for image in notes.partial_matching_images:
        print('Url  : {}'.format(image.url))

if notes.web_entities:
    print('\n{} Web entities found: '.format(len(notes.web_entities)))

    for entity in notes.web_entities:
        print('Score      : {}'.format(entity.score))
        print('Description: {}'.format(entity.description))

Making a crop hints request and processing the response

Previous versions of the client libraries:

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

hints = image.detect_crop_hints(aspect_ratios=[1.77])

for n, hint in enumerate(hints):
    print('\nCrop Hint: {}'.format(n))

    vertices = (['({},{})'.format(bound.x_coordinate, bound.y_coordinate)
                for bound in hint.bounds.vertices])

    print('bounds: {}'.format(','.join(vertices)))

Python Client Library v0.25.1:

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = types.Image(content=content)

crop_hints_params = types.CropHintsParams(aspect_ratios=[1.77])
image_context = types.ImageContext(crop_hints_params=crop_hints_params)

response = client.crop_hints(image=image, image_context=image_context)
hints = response.crop_hints_annotation.crop_hints

for n, hint in enumerate(hints):
    print('\nCrop Hint: {}'.format(n))

    vertices = (['({},{})'.format(vertex.x, vertex.y)
                for vertex in hint.bounding_poly.vertices])

    print('bounds: {}'.format(','.join(vertices)))

Note that the aspect ratios must now be passed in through a CropHintsParams object, which is wrapped in an ImageContext.
