Migrating to the Python Client Library v0.25.1

The Python client library v0.25.1 includes some significant changes to how previous client libraries were designed. These changes can be summarized as follows:

  • Consolidation of modules into fewer types

  • Replacement of untyped parameters with strongly typed classes and enums

This topic details the changes that you will need to make to your Python code for the Cloud Vision API client libraries in order to use the v0.25.1 Python client library.

Running prior versions of the client library

You are not required to upgrade your Python client library to v0.25.1. If you want to continue using a prior version of the Python client library and do not want to migrate your code, you should specify the version of the Python client library that your app uses. To specify a specific library version, edit the requirements.txt file as shown below:

google-cloud-vision==0.25
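
If it helps to confirm which version is actually installed in your environment, the following minimal sketch (using the standard pkg_resources module from setuptools, not part of the Vision library itself) prints the version your app will load:

import pkg_resources

# Prints the installed google-cloud-vision version, for example '0.25.0'.
print(pkg_resources.get_distribution('google-cloud-vision').version)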

Removed modules

The following modules have been removed from the Python client library v0.25.1 package.

  • google.cloud.vision.annotations

  • google.cloud.vision.batch

  • google.cloud.vision.client

  • google.cloud.vision.color

  • google.cloud.vision.crop_hint

  • google.cloud.vision.entity

  • google.cloud.vision.face

  • google.cloud.vision.feature

  • google.cloud.vision.geometry

  • google.cloud.vision.image

  • google.cloud.vision.likelihood

  • google.cloud.vision.safe_search

  • google.cloud.vision.text

  • google.cloud.vision.web

Required code changes

Imports

Include the new google.cloud.vision.types module in order to access the new types in the Python client library v0.25.1.

The types module contains the new classes that are required to create requests, such as types.Image.

from google.cloud import vision

Additionally, the new google.cloud.vision.enums module contains enums that are useful for parsing and understanding API responses, such as enums.Likelihood.UNLIKELY and enums.FaceAnnotation.Landmark.Type.LEFT_EYE.
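
For example, likelihood values in a response can be compared against the enum members directly. The snippet below is a minimal sketch that assumes the enums module described above exposes integer-valued members; the samples later in this topic instead map the integer likelihood value onto a hand-built tuple of label names.

from google.cloud.vision import enums

# Likelihood members are integer-valued, so response fields such as
# face.surprise_likelihood can be compared against them directly.
surprise = enums.Likelihood.VERY_LIKELY  # stands in for face.surprise_likelihood
if surprise == enums.Likelihood.VERY_LIKELY:
    print('surprise is very likely')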

Create a client

The Client class has been replaced by the ImageAnnotatorClient class. Replace references to the Client class with ImageAnnotatorClient.

Previous versions of the client libraries

old_client = vision.Client()

Python client library v0.25.1

client = vision.ImageAnnotatorClient()

Constructing objects that represent image content

To identify image content from a local file, a Google Cloud Storage URI, or a web URI, use the new Image class.

Constructing objects that represent image content from a local file

The following example shows the new way to represent image content from a local file.

Previous versions of the client libraries

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

Python client library v0.25.1

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = vision.Image(content=content)

Constructing objects that represent image content from a URI

The following example shows the new way to represent image content from a Google Cloud Storage URI or a web URI. uri is the URI to an image file on Google Cloud Storage or on the web.

Previous versions of the client libraries

image = old_client.image(source_uri=uri)

Python client library v0.25.1

image = vision.Image()
image.source.image_uri = uri
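
The uri variable is not defined in the snippets above. Hypothetical placeholder values, for either source, might look like the following:

# Hypothetical placeholder URIs; substitute your own bucket or web address.
uri = 'gs://your-bucket-name/path/to/image.jpg'      # Google Cloud Storage URI
# uri = 'https://example.com/path/to/image.jpg'      # web URI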

Making requests and processing responses

In the Python client library v0.25.1, API methods such as face_detection belong to the ImageAnnotatorClient object rather than to the Image object.

The values returned by several methods have changed, as described below.

Most notably, bounding box vertices are now stored in bounding_poly.vertices rather than bounds.vertices, and the coordinates of each vertex are stored in vertex.x and vertex.y rather than vertex.x_coordinate and vertex.y_coordinate.

The bounding box change affects face_detection, logo_detection, text_detection, document_text_detection, and crop_hints.
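
Because the new vertex layout is the same for every one of these methods, a small helper like the following (hypothetical, not part of the library) can format any bounding_poly returned by v0.25.1; it mirrors the inline pattern used in the samples below.

def format_vertices(bounding_poly):
    """Formats a v0.25.1 bounding_poly as '(x1,y1),(x2,y2),...'."""
    return ','.join(
        '({},{})'.format(vertex.x, vertex.y)
        for vertex in bounding_poly.vertices)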

Make a face detection request and process the response

Emotion likelihoods are now returned as enums, stored in face.surprise_likelihood rather than face.emotions.surprise. The names of the likelihood labels can be recovered by importing google.cloud.vision.enums.Likelihood.

Previous versions of the client libraries

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

faces = image.detect_faces()

for face in faces:
    print('anger: {}'.format(face.emotions.anger))
    print('joy: {}'.format(face.emotions.joy))
    print('surprise: {}'.format(face.emotions.surprise))

    vertices = (['({},{})'.format(bound.x_coordinate, bound.y_coordinate)
                for bound in face.bounds.vertices])

    print('face bounds: {}'.format(','.join(vertices)))

Python client library v0.25.1

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = vision.Image(content=content)

response = client.face_detection(image=image)
faces = response.face_annotations

# Names of likelihood from google.cloud.vision.enums
likelihood_name = ('UNKNOWN', 'VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE',
                   'LIKELY', 'VERY_LIKELY')
print('Faces:')

for face in faces:
    print('anger: {}'.format(likelihood_name[face.anger_likelihood]))
    print('joy: {}'.format(likelihood_name[face.joy_likelihood]))
    print('surprise: {}'.format(likelihood_name[face.surprise_likelihood]))

    vertices = (['({},{})'.format(vertex.x, vertex.y)
                for vertex in face.bounding_poly.vertices])

    print('face bounds: {}'.format(','.join(vertices)))

if response.error.message:
    raise Exception(
        '{}\nFor more info on error messages, check: '
        'https://cloud.google.com/apis/design/errors'.format(
            response.error.message))

Make a label detection request and process the response

Previous versions of the client libraries

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

labels = image.detect_labels()

for label in labels:
    print(label.description)

Python client library v0.25.1

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = vision.Image(content=content)

response = client.label_detection(image=image)
labels = response.label_annotations
print('Labels:')

for label in labels:
    print(label.description)

if response.error.message:
    raise Exception(
        '{}\nFor more info on error messages, check: '
        'https://cloud.google.com/apis/design/errors'.format(
            response.error.message))

Make a landmark detection request and process the response

The latitude and longitude of landmark locations are now stored in location.lat_lng.latitude and location.lat_lng.longitude rather than location.latitude and location.longitude.

Previous versions of the client libraries

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

landmarks = image.detect_landmarks()

for landmark in landmarks:
    print(landmark.description, landmark.score)
    for location in landmark.locations:
        print('Latitude {}'.format(location.latitude))
        print('Longitude {}'.format(location.longitude))

Python client library v0.25.1

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = vision.Image(content=content)

response = client.landmark_detection(image=image)
landmarks = response.landmark_annotations
print('Landmarks:')

for landmark in landmarks:
    print(landmark.description)
    for location in landmark.locations:
        lat_lng = location.lat_lng
        print('Latitude {}'.format(lat_lng.latitude))
        print('Longitude {}'.format(lat_lng.longitude))

if response.error.message:
    raise Exception(
        '{}\nFor more info on error messages, check: '
        'https://cloud.google.com/apis/design/errors'.format(
            response.error.message))

Make a logo detection request and process the response

Previous versions of the client libraries

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

logos = image.detect_logos()

for logo in logos:
    print(logo.description, logo.score)

Python client library v0.25.1

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = vision.Image(content=content)

response = client.logo_detection(image=image)
logos = response.logo_annotations
print('Logos:')

for logo in logos:
    print(logo.description)

if response.error.message:
    raise Exception(
        '{}\nFor more info on error messages, check: '
        'https://cloud.google.com/apis/design/errors'.format(
            response.error.message))

Make a safe search detection request and process the response

Safe search likelihoods are now returned as enums. The names of the likelihood labels can be recovered by importing google.cloud.vision.enums.Likelihood.

Previous versions of the client libraries

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

safe = image.detect_safe_search()
print('Safe search:')
print('adult: {}'.format(safe.adult))
print('medical: {}'.format(safe.medical))
print('spoofed: {}'.format(safe.spoof))
print('violence: {}'.format(safe.violence))

Python client library v0.25.1

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = vision.Image(content=content)

response = client.safe_search_detection(image=image)
safe = response.safe_search_annotation

# Names of likelihood from google.cloud.vision.enums
likelihood_name = ('UNKNOWN', 'VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE',
                   'LIKELY', 'VERY_LIKELY')
print('Safe search:')

print('adult: {}'.format(likelihood_name[safe.adult]))
print('medical: {}'.format(likelihood_name[safe.medical]))
print('spoofed: {}'.format(likelihood_name[safe.spoof]))
print('violence: {}'.format(likelihood_name[safe.violence]))
print('racy: {}'.format(likelihood_name[safe.racy]))

if response.error.message:
    raise Exception(
        '{}\nFor more info on error messages, check: '
        'https://cloud.google.com/apis/design/errors'.format(
            response.error.message))

Make a text detection request and process the response

Previous versions of the client libraries

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

texts = image.detect_text()

for text in texts:
    print('\n"{}"'.format(text.description))

    vertices = (['({},{})'.format(bound.x_coordinate, bound.y_coordinate)
                for bound in text.bounds.vertices])

    print('bounds: {}'.format(','.join(vertices)))

Python client library v0.25.1

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = vision.Image(content=content)

response = client.text_detection(image=image)
texts = response.text_annotations
print('Texts:')

for text in texts:
    print('\n"{}"'.format(text.description))

    vertices = (['({},{})'.format(vertex.x, vertex.y)
                for vertex in text.bounding_poly.vertices])

    print('bounds: {}'.format(','.join(vertices)))

if response.error.message:
    raise Exception(
        '{}\nFor more info on error messages, check: '
        'https://cloud.google.com/apis/design/errors'.format(
            response.error.message))

Make a document text detection request and process the response

Previous versions of the client libraries

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

document = image.detect_full_text()

for page in document.pages:
    for block in page.blocks:
        block_words = []
        for paragraph in block.paragraphs:
            block_words.extend(paragraph.words)

        block_symbols = []
        for word in block_words:
            block_symbols.extend(word.symbols)

        block_text = ''
        for symbol in block_symbols:
            block_text = block_text + symbol.text

        print('Block Content: {}'.format(block_text))
        print('Block Bounds:\n {}'.format(block.bounding_box))

Python client library v0.25.1

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = vision.Image(content=content)

response = client.document_text_detection(image=image)

for page in response.full_text_annotation.pages:
    for block in page.blocks:
        print('\nBlock confidence: {}\n'.format(block.confidence))

        for paragraph in block.paragraphs:
            print('Paragraph confidence: {}'.format(
                paragraph.confidence))

            for word in paragraph.words:
                word_text = ''.join([
                    symbol.text for symbol in word.symbols
                ])
                print('Word text: {} (confidence: {})'.format(
                    word_text, word.confidence))

                for symbol in word.symbols:
                    print('\tSymbol: {} (confidence: {})'.format(
                        symbol.text, symbol.confidence))

if response.error.message:
    raise Exception(
        '{}\nFor more info on error messages, check: '
        'https://cloud.google.com/apis/design/errors'.format(
            response.error.message))

Make an image properties request and process the response

Dominant color information is now stored in props.dominant_colors.colors rather than props.colors.

Previous versions of the client libraries

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

props = image.detect_properties()

for color in props.colors:
    print('fraction: {}'.format(color.pixel_fraction))
    print('\tr: {}'.format(color.color.red))
    print('\tg: {}'.format(color.color.green))
    print('\tb: {}'.format(color.color.blue))
    print('\ta: {}'.format(color.color.alpha))

Python client library v0.25.1

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = vision.Image(content=content)

response = client.image_properties(image=image)
props = response.image_properties_annotation
print('Properties:')

for color in props.dominant_colors.colors:
    print('fraction: {}'.format(color.pixel_fraction))
    print('\tr: {}'.format(color.color.red))
    print('\tg: {}'.format(color.color.green))
    print('\tb: {}'.format(color.color.blue))
    print('\ta: {}'.format(color.color.alpha))

if response.error.message:
    raise Exception(
        '{}\nFor more info on error messages, check: '
        'https://cloud.google.com/apis/design/errors'.format(
            response.error.message))

Make a web detection request and process the response

Previous versions of the client libraries

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

notes = image.detect_web()

if notes.pages_with_matching_images:
    print('\n{} Pages with matching images retrieved'.format(
        len(notes.pages_with_matching_images)))

    for page in notes.pages_with_matching_images:
        print('Score : {}'.format(page.score))
        print('Url   : {}'.format(page.url))

if notes.full_matching_images:
    print ('\n{} Full Matches found: '.format(
           len(notes.full_matching_images)))

    for image in notes.full_matching_images:
        print('Score:  {}'.format(image.score))
        print('Url  : {}'.format(image.url))

if notes.partial_matching_images:
    print ('\n{} Partial Matches found: '.format(
           len(notes.partial_matching_images)))

    for image in notes.partial_matching_images:
        print('Score: {}'.format(image.score))
        print('Url  : {}'.format(image.url))

if notes.web_entities:
    print ('\n{} Web entities found: '.format(len(notes.web_entities)))

    for entity in notes.web_entities:
        print('Score      : {}'.format(entity.score))
        print('Description: {}'.format(entity.description))

Python client library v0.25.1

with io.open(path, 'rb') as image_file:
    content = image_file.read()

image = vision.Image(content=content)

response = client.web_detection(image=image)
annotations = response.web_detection

if annotations.best_guess_labels:
    for label in annotations.best_guess_labels:
        print('\nBest guess label: {}'.format(label.label))

if annotations.pages_with_matching_images:
    print('\n{} Pages with matching images found:'.format(
        len(annotations.pages_with_matching_images)))

    for page in annotations.pages_with_matching_images:
        print('\n\tPage url   : {}'.format(page.url))

        if page.full_matching_images:
            print('\t{} Full Matches found: '.format(
                   len(page.full_matching_images)))

            for image in page.full_matching_images:
                print('\t\tImage url  : {}'.format(image.url))

        if page.partial_matching_images:
            print('\t{} Partial Matches found: '.format(
                   len(page.partial_matching_images)))

            for image in page.partial_matching_images:
                print('\t\tImage url  : {}'.format(image.url))

if annotations.web_entities:
    print('\n{} Web entities found: '.format(
        len(annotations.web_entities)))

    for entity in annotations.web_entities:
        print('\n\tScore      : {}'.format(entity.score))
        print(u'\tDescription: {}'.format(entity.description))

if annotations.visually_similar_images:
    print('\n{} visually similar images found:\n'.format(
        len(annotations.visually_similar_images)))

    for image in annotations.visually_similar_images:
        print('\tImage url    : {}'.format(image.url))

if response.error.message:
    raise Exception(
        '{}\nFor more info on error messages, check: '
        'https://cloud.google.com/apis/design/errors'.format(
            response.error.message))

Make a crop hints request and process the response

Previous versions of the client libraries

with io.open(file_name, 'rb') as image_file:
    content = image_file.read()

image = old_client.image(content=content)

hints = image.detect_crop_hints(aspect_ratios=[1.77])

for n, hint in enumerate(hints):
    print('\nCrop Hint: {}'.format(n))

    vertices = (['({},{})'.format(bound.x_coordinate, bound.y_coordinate)
                for bound in hint.bounds.vertices])

    print('bounds: {}'.format(','.join(vertices)))

Python client library v0.25.1

with io.open(path, 'rb') as image_file:
    content = image_file.read()
image = vision.Image(content=content)

crop_hints_params = vision.CropHintsParams(aspect_ratios=[1.77])
image_context = vision.ImageContext(
    crop_hints_params=crop_hints_params)

response = client.crop_hints(image=image, image_context=image_context)
hints = response.crop_hints_annotation.crop_hints

for n, hint in enumerate(hints):
    print('\nCrop Hint: {}'.format(n))

    vertices = (['({},{})'.format(vertex.x, vertex.y)
                for vertex in hint.bounding_poly.vertices])

    print('bounds: {}'.format(','.join(vertices)))

if response.error.message:
    raise Exception(
        '{}\nFor more info on error messages, check: '
        'https://cloud.google.com/apis/design/errors'.format(
            response.error.message))

Note that the aspect ratios need to be passed in through CropHintsParams and ImageContext.