Edge containers tutorial

After you create an AutoML Vision Edge model and export it to a Google Cloud Storage bucket, you can use RESTful services with your AutoML Vision Edge model and TF Serving Docker images.

What you'll build

Docker containers help you deploy Edge models easily on different devices. You can run an Edge model by calling a REST API from the container in any language you like, with no need to install dependencies or find the right TensorFlow version.

In this tutorial, you will walk step by step through running an Edge model on a device using a Docker container.

Specifically, this tutorial walks you through three steps:

  1. Get a pre-built container.
  2. Run the container with an Edge model to start a REST API.
  3. Make predictions.

Many devices have only CPUs, while some have GPUs that can speed up predictions. This tutorial therefore covers both the pre-built CPU and GPU containers.

Objectives

In this introductory, end-to-end walkthrough, you will use code samples to:

  1. Get the Docker container.
  2. Start a REST API using the Docker container and the Edge model.
  3. Make predictions to get analysis results.

Before you begin

To complete this tutorial, you must:

  1. Train an exportable Edge model. Follow the Edge device model quickstart to train an Edge model.
  2. Export the AutoML Vision Edge model. This model will be served as a REST API together with a container.
  3. Install Docker. This is the required software to run Docker containers.
  4. (Optional) Install NVIDIA Docker and drivers. This is an optional step if your device has a GPU and you want faster predictions.
  5. Prepare test images. These images will be sent in requests to get analysis results.

The next section walks through exporting the model and installing the necessary software.

Export the AutoML Vision Edge model

After training an Edge model, you can export it to different devices.

The containers support TensorFlow models, which are named saved_model.pb on export.

To export an AutoML Vision Edge model for containers, select the Container tab in the UI, and then export the model to ${YOUR_MODEL_PATH} on Google Cloud Storage. This exported model will later be served as a REST API together with a container.

Export to container option

To download the exported model locally, run the following command:

gsutil cp ${YOUR_MODEL_PATH} ${YOUR_LOCAL_MODEL_PATH}/saved_model.pb

where:

  • ${YOUR_MODEL_PATH} - the model location on Google Cloud Storage (for example, gs://my-bucket-vcm/models/edge/ICN4245971651915048908/2020-01-20_01-27-14-064_tf-saved-model/).
  • ${YOUR_LOCAL_MODEL_PATH} - the local path where you want to store the downloaded model (for example, /tmp).

Install Docker

Docker is software used to deploy and run applications inside containers.

Install Docker Community Edition (CE) on your system. You will use it to serve the Edge model as a REST API.

Install NVIDIA driver and NVIDIA Docker (Optional - for GPU only)

Some devices have GPUs that provide faster predictions. We provide a GPU Docker container that supports NVIDIA GPUs.

To run the GPU container, you must install the NVIDIA driver and NVIDIA Docker on your system.

Run model inference with CPUs

This section provides step-by-step instructions for running model inference with a CPU container. You will use the installed Docker to get and run the CPU container to serve the exported Edge model as a REST API, and then send requests with test images to the REST API to get analysis results.

Pull the Docker image

First, you will use Docker to get the pre-built CPU container. The pre-built CPU container already has the whole environment for serving exported Edge models, but it does not contain any Edge model yet.

The pre-built CPU container is stored in Google Container Registry. Before requesting the container, set an environment variable for its location in Google Container Registry:

export CPU_DOCKER_GCS_PATH=gcr.io/cloud-devrel-public-resources/gcloud-container-1.14.0:latest

After setting the environment variable for the Container Registry path, run the following command line to get the CPU container:

sudo docker pull ${CPU_DOCKER_GCS_PATH}

Run the Docker container

After getting the container, you will run this CPU container to serve Edge model inference with a REST API.

Before starting the CPU container, you must set system variables:

  • ${CONTAINER_NAME} - a string indicating the name of the container when it runs, for example CONTAINER_NAME=automl_high_accuracy_model_cpu.
  • ${PORT} - a number indicating the port on your device that will accept REST API calls later, for example PORT=8501.

After setting the variables, run Docker in the command line to serve Edge model inference with a REST API:

sudo docker run --rm --name ${CONTAINER_NAME} -p ${PORT}:8501 -v ${YOUR_LOCAL_MODEL_PATH}:/tmp/mounted_model/0001 -t ${CPU_DOCKER_GCS_PATH}

Once the container is running successfully, the REST API is ready to serve at http://localhost:${PORT}/v1/models/default:predict. The following section details how to send prediction requests to this location.
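Before sending predictions, you can confirm that the model has loaded by querying TF Serving's model-status endpoint (GET /v1/models/default). A minimal sketch using only the Python standard library; `model_status_url` and `model_is_ready` are illustrative helpers, and the port is whatever you chose for ${PORT}:

```python
import json
import urllib.request


def model_status_url(port, model_name="default"):
    """Builds the TF Serving model-status URL for a locally running container."""
    return "http://localhost:{}/v1/models/{}".format(port, model_name)


def model_is_ready(port):
    """Returns True if the served model reports state AVAILABLE."""
    with urllib.request.urlopen(model_status_url(port)) as resp:
        payload = json.load(resp)
    states = [v.get("state") for v in payload.get("model_version_status", [])]
    return "AVAILABLE" in states
```

For example, `model_is_ready(8501)` should return True once the container has finished loading the mounted model.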

Send prediction requests

Now that the container is running successfully, you can send prediction requests for test images to the REST API.

Command line

The command-line request body contains base64-encoded image_bytes and a string key to identify the given image. See the Base64 encoding topic for more information about image encoding. The format of the request JSON file is:

/tmp/request.json
{
  "instances":
  [
    {
      "image_bytes":
      {
        "b64": "/9j/7QBEUGhvdG9zaG9...base64-encoded-image-content...fXNWzvDEeYxxxzj/Coa6Bax//Z"
      },
      "key": "your-chosen-image-key"
    }
  ]
}
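Rather than pasting base64 content by hand, you can generate the request file programmatically. A minimal sketch; `build_request_body` and `write_request_file` are illustrative helpers, not part of the AutoML tooling:

```python
import base64
import json


def build_request_body(image_bytes, image_key):
    """Builds the JSON request body shown above for one image."""
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    return {"instances": [{"image_bytes": {"b64": encoded}, "key": image_key}]}


def write_request_file(image_path, image_key, output_path="/tmp/request.json"):
    """Reads a local image and writes the request body to a JSON file."""
    with open(image_path, "rb") as f:
        body = build_request_body(f.read(), image_key)
    with open(output_path, "w") as f:
        json.dump(body, f)
```

Calling `write_request_file` with the path to a local test image and your chosen key produces a /tmp/request.json in the format above.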

After creating a local JSON request file, you can send the prediction request.

Use the following command to send the prediction request:

curl -X POST -d @/tmp/request.json http://localhost:${PORT}/v1/models/default:predict

Response

You should see output similar to the following:

{
  "predictions": [
    {
      "detection_multiclass_scores": [
        [0.00233048, 0.00207388, 0.00123361, 0.0052332, 0.00132892, 0.00333592],
        [0.00233048, 0.00207388, 0.00123361, 0.0052332, 0.00132892, 0.00333592],
        [0.00240907, 0.00173151, 0.00134027, 0.00287125, 0.00130472, 0.00242674],
        [0.00227344, 0.00124374, 0.00147101, 0.00377446, 0.000997812, 0.0029003],
        [0.00132903, 0.000844955, 0.000537515, 0.00474253, 0.000508994, 0.00130466],
        [0.00233048, 0.00207388, 0.00123361, 0.0052332, 0.00132892, 0.00333592],
        [0.00110534, 0.000204086, 0.000247836, 0.000553966, 0.000193745, 0.000359297],
        [0.00112912, 0.000443578, 0.000779897, 0.00203282, 0.00069508, 0.00188044],
        [0.00271052, 0.00163364, 0.00138229, 0.00314173, 0.00164038, 0.00332257],
        [0.00227907, 0.00217116, 0.00190553, 0.00321552, 0.00233933, 0.0053153],
        [0.00271052, 0.00163364, 0.00138229, 0.00314173, 0.00164038, 0.00332257],
        [0.00250274, 0.000489146, 0.000879943, 0.00355569, 0.00129834, 0.00355521],
        [0.00227344, 0.00124374, 0.00147101, 0.00377446, 0.000997812, 0.0029003],
        [0.00241205, 0.000786602, 0.000896335, 0.00187016, 0.00106838, 0.00193021],
        [0.00132069, 0.00173706, 0.00389183, 0.00536761, 0.00387135, 0.00752795],
        [0.0011555, 0.00025022, 0.000221372, 0.000536889, 0.000187278, 0.000306398],
        [0.00150242, 0.000391901, 0.00061205, 0.00158429, 0.000300348, 0.000788659],
        [0.00181362, 0.000169843, 0.000458032, 0.000690967, 0.000296295, 0.000412017],
        [0.00101465, 0.000184, 0.000148445, 0.00068599, 0.000111818, 0.000290155],
        [0.00128508, 0.00108775, 0.00298983, 0.00174832, 0.00143594, 0.00285548],
        [0.00137702, 0.000480384, 0.000831485, 0.000920624, 0.000405788, 0.000735044],
        [0.00103369, 0.00152704, 0.000892937, 0.00269693, 0.00214335, 0.00588191],
        [0.0013324, 0.000647604, 0.00293729, 0.00156629, 0.00195253, 0.00239435],
        [0.0022423, 0.00141206, 0.0012649, 0.00450748, 0.00138766, 0.00249887],
        [0.00125939, 0.0002428, 0.000370741, 0.000917137, 0.00024578, 0.000412881],
        [0.00135186, 0.000139147, 0.000525713, 0.00103053, 0.000366062, 0.000844955],
        [0.00127548, 0.000989318, 0.00256863, 0.00162545, 0.00311682, 0.00439551],
        [0.00112912, 0.000443578, 0.000779897, 0.00203282, 0.00069508, 0.00188044],
        [0.00114602, 0.00107747, 0.00145, 0.00320849, 0.00211915, 0.00331426],
        [0.00148326, 0.00059548, 0.00431389, 0.00164703, 0.00311947, 0.00268343],
        [0.00154313, 0.000925034, 0.00770769, 0.00252789, 0.00489518, 0.00352332],
        [0.00135094, 0.00042069, 0.00088492, 0.000987828, 0.000755847, 0.00144881],
        [0.0015994, 0.000540197, 0.00163212, 0.00140327, 0.00114474, 0.0026556],
        [0.00150502, 0.000138223, 0.000343591, 0.000529736, 0.000173837, 0.000381887],
        [0.00127372, 0.00066787, 0.00149515, 0.00272799, 0.00110033, 0.00370145],
        [0.00144503, 0.000365585, 0.00318581, 0.00126475, 0.00212631, 0.00204816],
        [0.00132069, 0.00173706, 0.00389183, 0.00536761, 0.00387135, 0.00752795],
        [0.00198057, 0.000307024, 0.000573188, 0.00147268, 0.000757724, 0.0017142],
        [0.00157535, 0.000590324, 0.00190055, 0.00170627, 0.00138417, 0.00246152],
        [0.00177169, 0.000364572, 0.00183856, 0.000767767, 0.00121492, 0.000916481]
      ],
      "detection_classes":
        [5.0, 2.0, 5.0, 5.0, 5.0, 4.0, 5.0, 5.0, 4.0, 4.0, 2.0, 4.0, 4.0, 4.0, 5.0, 4.0, 5.0,
        5.0, 4.0, 5.0, 5.0, 4.0, 5.0, 2.0, 4.0, 4.0, 5.0, 4.0, 4.0, 5.0, 5.0, 5.0, 5.0, 4.0, 4.0,
        5.0, 4.0, 4.0, 5.0, 5.0],
      "num_detections": 40.0,
      "image_info": [320, 320, 1, 0, 320, 320],
      "detection_boxes": [
        [0.457792, 0.0, 0.639324, 0.180828],
        [0.101111, 0.0, 0.89904, 0.995376],
        [0.447649, 0.314644, 0.548206, 0.432875],
        [0.250341, 0.733411, 0.3419, 0.847185],
        [0.573936, 0.0933048, 0.766472, 0.208054],
        [0.490438, 0.194659, 0.825894, 0.563959],
        [0.619383, 0.57948, 0.758244, 0.694948],
        [0.776185, 0.554518, 0.841549, 0.707129],
        [0.101111, 0.0, 0.89904, 0.995376],
        [0.431243, 0.0917888, 0.850772, 0.617123],
        [0.250883, 0.13572, 0.780518, 0.817881],
        [0.327646, 0.878977, 0.607503, 0.989904],
        [0.573936, 0.0933048, 0.766472, 0.208054],
        [0.37792, 0.460952, 0.566977, 0.618865],
        [0.373325, 0.575019, 0.463646, 0.642949],
        [0.27251, 0.0714827, 0.790764, 0.77176],
        [0.725154, 0.561221, 0.849777, 0.702165],
        [0.37549, 0.558988, 0.460575, 0.626821],
        [0.265563, 0.248368, 0.785451, 0.977509],
        [0.605674, 0.597553, 0.760419, 0.744799],
        [0.400611, 0.327271, 0.487579, 0.424036],
        [0.48632, 0.980808, 0.606008, 0.997468],
        [0.542414, 0.0588853, 0.752879, 0.200775],
        [0.490438, 0.194659, 0.825894, 0.563959],
        [0.368839, 0.115654, 0.741839, 0.587659],
        [0.467101, 0.985155, 0.588853, 0.997708],
        [0.755204, 0.561319, 0.836475, 0.676249],
        [0.409855, 0.302322, 0.773464, 0.587772],
        [0.351938, 0.934163, 0.587043, 0.99954],
        [0.27758, 0.72402, 0.334137, 0.846945],
        [0.29875, 0.601199, 0.381122, 0.679323],
        [0.64637, 0.566566, 0.767553, 0.67331],
        [0.372612, 0.596795, 0.457588, 0.666544],
        [0.438422, 0.989558, 0.578529, 0.998366],
        [0.586531, 0.499894, 0.879711, 0.845526],
        [0.608476, 0.644501, 0.759154, 0.827037],
        [0.352501, 0.477601, 0.710863, 0.948605],
        [0.466184, 0.953443, 0.668056, 0.996681],
        [0.547756, 0.00152373, 0.722814, 0.150687],
        [0.759639, 0.548476, 0.866864, 0.722007]
      ],
      "detection_scores":
        [0.877304, 0.839354, 0.824509, 0.579912, 0.461549, 0.306151, 0.268687, 0.197998, 0.181444,
        0.17856, 0.152705, 0.148958, 0.14726, 0.135506, 0.128483, 0.12234, 0.105697, 0.105665,
        0.0941569, 0.0891062, 0.0845169, 0.0810551, 0.0794339, 0.0784486, 0.0771784, 0.0770716,
        0.075339, 0.0716749, 0.0715761, 0.07108, 0.0705339, 0.0693555, 0.0677402, 0.0644643,
        0.0631491, 0.062369, 0.0619523, 0.060859, 0.0601122, 0.0589799],
      "detection_classes_as_text":
        ["Tomato", "Salad", "Tomato", "Tomato", "Tomato", "Seafood", "Tomato", "Tomato", "Seafood",
        "Seafood", "Salad", "Seafood", "Seafood", "Seafood", "Tomato", "Seafood", "Tomato",
        "Tomato", "Seafood", "Tomato", "Tomato", "Seafood", "Tomato", "Salad", "Seafood",
        "Seafood", "Tomato", "Seafood", "Seafood", "Tomato", "Tomato", "Tomato", "Tomato",
        "Seafood", "Seafood", "Tomato", "Seafood", "Seafood", "Tomato", "Tomato"],
      "key": "1"
    }
  ]
}
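The response interleaves several parallel arrays, ordered by descending detection_scores. A small helper (illustrative, not part of the API) can zip them back into per-detection records and drop low-confidence boxes:

```python
def top_detections(prediction, score_threshold=0.5):
    """Pairs each detection's label, score, and box, keeping confident ones.

    `prediction` is one entry of the response's "predictions" list.
    """
    results = []
    for label, score, box in zip(
        prediction["detection_classes_as_text"],
        prediction["detection_scores"],
        prediction["detection_boxes"],
    ):
        if score >= score_threshold:
            results.append({"label": label, "score": score, "box": box})
    return results
```

With the sample response above, a threshold of 0.5 keeps the first four detections.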

Python

To learn how to install and use the client library for AutoML Vision Object Detection, see AutoML Vision Object Detection client libraries. For more information, see the AutoML Vision Object Detection Python API reference documentation.

To authenticate to AutoML Vision Object Detection, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import base64
import cv2
import json

import requests

def preprocess_image(image_file_path, max_width, max_height):
    """Preprocesses input images for AutoML Vision Edge models.

    Args:
        image_file_path: Path to a local image for the prediction request.
        max_width: The max width for preprocessed images. The max width is 640
            (1024) for AutoML Vision Image Classification (Object Detection)
            models.
        max_height: The max height for preprocessed images. The max height is
            480 (1024) for AutoML Vision Image Classification (Object
            Detection) models.
    Returns:
        The preprocessed encoded image bytes.
    """
    # cv2 is used to read, resize and encode images.
    encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 85]
    im = cv2.imread(image_file_path)
    [height, width, _] = im.shape
    if height > max_height or width > max_width:
        ratio = max(height / float(max_height), width / float(max_width))
        new_height = int(height / ratio + 0.5)
        new_width = int(width / ratio + 0.5)
        resized_im = cv2.resize(
            im, (new_width, new_height), interpolation=cv2.INTER_AREA
        )
        _, processed_image = cv2.imencode(".jpg", resized_im, encode_param)
    else:
        _, processed_image = cv2.imencode(".jpg", im, encode_param)
    return base64.b64encode(processed_image).decode("utf-8")

def container_predict(image_file_path, image_key, port_number=8501):
    """Sends a prediction request to TFServing docker container REST API.

    Args:
        image_file_path: Path to a local image for the prediction request.
        image_key: Your chosen string key to identify the given image.
        port_number: The port number on your device to accept REST API calls.
    Returns:
        The response of the prediction request.
    """
    # AutoML Vision Edge models preprocess the input images.
    # The max width and height for AutoML Vision Image Classification and
    # Object Detection models are 640*480 and 1024*1024 respectively. The
    # example here is for Image Classification models.
    encoded_image = preprocess_image(
        image_file_path=image_file_path, max_width=640, max_height=480
    )

    # This example shows prediction with one image. You can extend it to
    # predict with a batch of images indicated by different keys, so that
    # each response can be matched to its corresponding image.
    instances = {
        "instances": [{"image_bytes": {"b64": str(encoded_image)}, "key": image_key}]
    }

    # This example sends requests on the same server where you started the
    # Docker container. To send requests to another server, change
    # localhost to that server's IP address.
    url = "http://localhost:{}/v1/models/default:predict".format(port_number)

    response = requests.post(url, data=json.dumps(instances))
    print(response.json())
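The resizing in preprocess_image only involves simple arithmetic, so it can be checked without OpenCV. Below is a dependency-free sketch of the intended scaling, which fits an image inside max_width × max_height while preserving the aspect ratio; `fit_within` is an illustrative helper, not part of the sample above:

```python
def fit_within(width, height, max_width, max_height):
    """Scales (width, height) to fit inside (max_width, max_height).

    Preserves the aspect ratio; returns the dimensions unchanged if the
    image already fits.
    """
    if width <= max_width and height <= max_height:
        return width, height
    # The larger overshoot ratio determines how much to shrink both sides.
    ratio = max(width / float(max_width), height / float(max_height))
    return int(width / ratio + 0.5), int(height / ratio + 0.5)
```

For example, `fit_within(2048, 1024, 640, 480)` yields `(640, 320)`.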

Run model inference with GPU containers (optional)

This section shows how to run model inference with a GPU container. The process is very similar to running inference with a CPU; the key differences are the GPU container path and how the GPU container is started.

Pull the Docker image

First, you will use Docker to get the pre-built GPU container. The pre-built GPU container already has the environment for serving exported Edge models with GPUs, but it does not contain any Edge model or drivers yet.

The pre-built GPU container is stored in Google Container Registry. Before requesting the container, set an environment variable for its location in Google Container Registry:

export GPU_DOCKER_GCS_PATH=gcr.io/cloud-devrel-public-resources/gcloud-container-1.14.0-gpu:latest

Run the following command line to get the GPU container:

sudo docker pull ${GPU_DOCKER_GCS_PATH}

Run the Docker container

This step runs the GPU container to serve Edge model inference with a REST API. You must install the NVIDIA driver and Docker as mentioned above. You must also set the following system variables:

  • ${CONTAINER_NAME} - a string indicating the name of the container when it runs, for example CONTAINER_NAME=automl_high_accuracy_model_gpu.
  • ${PORT} - a number indicating the port on your device that will accept REST API calls later, for example PORT=8502.

After setting the variables, run Docker in the command line to serve Edge model inference with a REST API:

sudo docker run --runtime=nvidia --rm --name "${CONTAINER_NAME}" -v \
${YOUR_LOCAL_MODEL_PATH}:/tmp/mounted_model/0001 -p \
${PORT}:8501 -t ${GPU_DOCKER_GCS_PATH}

Once the container is running successfully, the REST API is ready to serve at http://localhost:${PORT}/v1/models/default:predict. The following section details how to send prediction requests to this location.

Send prediction requests

Now that the container is running successfully, you can send prediction requests for test images to the REST API.

Command line

The command-line request body contains base64-encoded image_bytes and a string key to identify the given image. See the Base64 encoding topic for more information about image encoding. The format of the request JSON file is:

/tmp/request.json
{
  "instances":
  [
    {
      "image_bytes":
      {
        "b64": "/9j/7QBEUGhvdG9zaG9...base64-encoded-image-content...fXNWzvDEeYxxxzj/Coa6Bax//Z"
      },
      "key": "your-chosen-image-key"
    }
  ]
}

After creating a local JSON request file, you can send the prediction request.

Use the following command to send the prediction request:

curl -X POST -d @/tmp/request.json http://localhost:${PORT}/v1/models/default:predict

Response

You should see output similar to the following:

{
  "predictions": [
    {
      "detection_multiclass_scores": [
        [0.00233048, 0.00207388, 0.00123361, 0.0052332, 0.00132892, 0.00333592],
        [0.00233048, 0.00207388, 0.00123361, 0.0052332, 0.00132892, 0.00333592],
        [0.00240907, 0.00173151, 0.00134027, 0.00287125, 0.00130472, 0.00242674],
        [0.00227344, 0.00124374, 0.00147101, 0.00377446, 0.000997812, 0.0029003],
        [0.00132903, 0.000844955, 0.000537515, 0.00474253, 0.000508994, 0.00130466],
        [0.00233048, 0.00207388, 0.00123361, 0.0052332, 0.00132892, 0.00333592],
        [0.00110534, 0.000204086, 0.000247836, 0.000553966, 0.000193745, 0.000359297],
        [0.00112912, 0.000443578, 0.000779897, 0.00203282, 0.00069508, 0.00188044],
        [0.00271052, 0.00163364, 0.00138229, 0.00314173, 0.00164038, 0.00332257],
        [0.00227907, 0.00217116, 0.00190553, 0.00321552, 0.00233933, 0.0053153],
        [0.00271052, 0.00163364, 0.00138229, 0.00314173, 0.00164038, 0.00332257],
        [0.00250274, 0.000489146, 0.000879943, 0.00355569, 0.00129834, 0.00355521],
        [0.00227344, 0.00124374, 0.00147101, 0.00377446, 0.000997812, 0.0029003],
        [0.00241205, 0.000786602, 0.000896335, 0.00187016, 0.00106838, 0.00193021],
        [0.00132069, 0.00173706, 0.00389183, 0.00536761, 0.00387135, 0.00752795],
        [0.0011555, 0.00025022, 0.000221372, 0.000536889, 0.000187278, 0.000306398],
        [0.00150242, 0.000391901, 0.00061205, 0.00158429, 0.000300348, 0.000788659],
        [0.00181362, 0.000169843, 0.000458032, 0.000690967, 0.000296295, 0.000412017],
        [0.00101465, 0.000184, 0.000148445, 0.00068599, 0.000111818, 0.000290155],
        [0.00128508, 0.00108775, 0.00298983, 0.00174832, 0.00143594, 0.00285548],
        [0.00137702, 0.000480384, 0.000831485, 0.000920624, 0.000405788, 0.000735044],
        [0.00103369, 0.00152704, 0.000892937, 0.00269693, 0.00214335, 0.00588191],
        [0.0013324, 0.000647604, 0.00293729, 0.00156629, 0.00195253, 0.00239435],
        [0.0022423, 0.00141206, 0.0012649, 0.00450748, 0.00138766, 0.00249887],
        [0.00125939, 0.0002428, 0.000370741, 0.000917137, 0.00024578, 0.000412881],
        [0.00135186, 0.000139147, 0.000525713, 0.00103053, 0.000366062, 0.000844955],
        [0.00127548, 0.000989318, 0.00256863, 0.00162545, 0.00311682, 0.00439551],
        [0.00112912, 0.000443578, 0.000779897, 0.00203282, 0.00069508, 0.00188044],
        [0.00114602, 0.00107747, 0.00145, 0.00320849, 0.00211915, 0.00331426],
        [0.00148326, 0.00059548, 0.00431389, 0.00164703, 0.00311947, 0.00268343],
        [0.00154313, 0.000925034, 0.00770769, 0.00252789, 0.00489518, 0.00352332],
        [0.00135094, 0.00042069, 0.00088492, 0.000987828, 0.000755847, 0.00144881],
        [0.0015994, 0.000540197, 0.00163212, 0.00140327, 0.00114474, 0.0026556],
        [0.00150502, 0.000138223, 0.000343591, 0.000529736, 0.000173837, 0.000381887],
        [0.00127372, 0.00066787, 0.00149515, 0.00272799, 0.00110033, 0.00370145],
        [0.00144503, 0.000365585, 0.00318581, 0.00126475, 0.00212631, 0.00204816],
        [0.00132069, 0.00173706, 0.00389183, 0.00536761, 0.00387135, 0.00752795],
        [0.00198057, 0.000307024, 0.000573188, 0.00147268, 0.000757724, 0.0017142],
        [0.00157535, 0.000590324, 0.00190055, 0.00170627, 0.00138417, 0.00246152],
        [0.00177169, 0.000364572, 0.00183856, 0.000767767, 0.00121492, 0.000916481]
      ],
      "detection_classes":
        [5.0, 2.0, 5.0, 5.0, 5.0, 4.0, 5.0, 5.0, 4.0, 4.0, 2.0, 4.0, 4.0, 4.0, 5.0, 4.0, 5.0,
        5.0, 4.0, 5.0, 5.0, 4.0, 5.0, 2.0, 4.0, 4.0, 5.0, 4.0, 4.0, 5.0, 5.0, 5.0, 5.0, 4.0, 4.0,
        5.0, 4.0, 4.0, 5.0, 5.0],
      "num_detections": 40.0,
      "image_info": [320, 320, 1, 0, 320, 320],
      "detection_boxes": [
        [0.457792, 0.0, 0.639324, 0.180828],
        [0.101111, 0.0, 0.89904, 0.995376],
        [0.447649, 0.314644, 0.548206, 0.432875],
        [0.250341, 0.733411, 0.3419, 0.847185],
        [0.573936, 0.0933048, 0.766472, 0.208054],
        [0.490438, 0.194659, 0.825894, 0.563959],
        [0.619383, 0.57948, 0.758244, 0.694948],
        [0.776185, 0.554518, 0.841549, 0.707129],
        [0.101111, 0.0, 0.89904, 0.995376],
        [0.431243, 0.0917888, 0.850772, 0.617123],
        [0.250883, 0.13572, 0.780518, 0.817881],
        [0.327646, 0.878977, 0.607503, 0.989904],
        [0.573936, 0.0933048, 0.766472, 0.208054],
        [0.37792, 0.460952, 0.566977, 0.618865],
        [0.373325, 0.575019, 0.463646, 0.642949],
        [0.27251, 0.0714827, 0.790764, 0.77176],
        [0.725154, 0.561221, 0.849777, 0.702165],
        [0.37549, 0.558988, 0.460575, 0.626821],
        [0.265563, 0.248368, 0.785451, 0.977509],
        [0.605674, 0.597553, 0.760419, 0.744799],
        [0.400611, 0.327271, 0.487579, 0.424036],
        [0.48632, 0.980808, 0.606008, 0.997468],
        [0.542414, 0.0588853, 0.752879, 0.200775],
        [0.490438, 0.194659, 0.825894, 0.563959],
        [0.368839, 0.115654, 0.741839, 0.587659],
        [0.467101, 0.985155, 0.588853, 0.997708],
        [0.755204, 0.561319, 0.836475, 0.676249],
        [0.409855, 0.302322, 0.773464, 0.587772],
        [0.351938, 0.934163, 0.587043, 0.99954],
        [0.27758, 0.72402, 0.334137, 0.846945],
        [0.29875, 0.601199, 0.381122, 0.679323],
        [0.64637, 0.566566, 0.767553, 0.67331],
        [0.372612, 0.596795, 0.457588, 0.666544],
        [0.438422, 0.989558, 0.578529, 0.998366],
        [0.586531, 0.499894, 0.879711, 0.845526],
        [0.608476, 0.644501, 0.759154, 0.827037],
        [0.352501, 0.477601, 0.710863, 0.948605],
        [0.466184, 0.953443, 0.668056, 0.996681],
        [0.547756, 0.00152373, 0.722814, 0.150687],
        [0.759639, 0.548476, 0.866864, 0.722007]
      ],
      "detection_scores":
        [0.877304, 0.839354, 0.824509, 0.579912, 0.461549, 0.306151, 0.268687, 0.197998, 0.181444,
        0.17856, 0.152705, 0.148958, 0.14726, 0.135506, 0.128483, 0.12234, 0.105697, 0.105665,
        0.0941569, 0.0891062, 0.0845169, 0.0810551, 0.0794339, 0.0784486, 0.0771784, 0.0770716,
        0.075339, 0.0716749, 0.0715761, 0.07108, 0.0705339, 0.0693555, 0.0677402, 0.0644643,
        0.0631491, 0.062369, 0.0619523, 0.060859, 0.0601122, 0.0589799],
      "detection_classes_as_text":
        ["Tomato", "Salad", "Tomato", "Tomato", "Tomato", "Seafood", "Tomato", "Tomato", "Seafood",
        "Seafood", "Salad", "Seafood", "Seafood", "Seafood", "Tomato", "Seafood", "Tomato",
        "Tomato", "Seafood", "Tomato", "Tomato", "Seafood", "Tomato", "Salad", "Seafood",
        "Seafood", "Tomato", "Seafood", "Seafood", "Tomato", "Tomato", "Tomato", "Tomato",
        "Seafood", "Seafood", "Tomato", "Seafood", "Seafood", "Tomato", "Tomato"],
      "key": "1"
    }
  ]
}

Python

To learn how to install and use the client library for AutoML Vision Object Detection, see AutoML Vision Object Detection client libraries. For more information, see the AutoML Vision Object Detection Python API reference documentation.

To authenticate to AutoML Vision Object Detection, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import base64
import cv2
import json

import requests

def preprocess_image(image_file_path, max_width, max_height):
    """Preprocesses input images for AutoML Vision Edge models.

    Args:
        image_file_path: Path to a local image for the prediction request.
        max_width: The max width for preprocessed images. The max width is 640
            (1024) for AutoML Vision Image Classification (Object Detection)
            models.
        max_height: The max height for preprocessed images. The max height is
            480 (1024) for AutoML Vision Image Classification (Object
            Detection) models.
    Returns:
        The preprocessed encoded image bytes.
    """
    # cv2 is used to read, resize and encode images.
    encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 85]
    im = cv2.imread(image_file_path)
    [height, width, _] = im.shape
    if height > max_height or width > max_width:
        ratio = max(height / float(max_height), width / float(max_width))
        new_height = int(height / ratio + 0.5)
        new_width = int(width / ratio + 0.5)
        resized_im = cv2.resize(
            im, (new_width, new_height), interpolation=cv2.INTER_AREA
        )
        _, processed_image = cv2.imencode(".jpg", resized_im, encode_param)
    else:
        _, processed_image = cv2.imencode(".jpg", im, encode_param)
    return base64.b64encode(processed_image).decode("utf-8")

def container_predict(image_file_path, image_key, port_number=8501):
    """Sends a prediction request to TFServing docker container REST API.

    Args:
        image_file_path: Path to a local image for the prediction request.
        image_key: Your chosen string key to identify the given image.
        port_number: The port number on your device to accept REST API calls.
    Returns:
        The response of the prediction request.
    """
    # AutoML Vision Edge models preprocess the input images.
    # The max width and height for AutoML Vision Image Classification and
    # Object Detection models are 640*480 and 1024*1024 respectively. The
    # example here is for Image Classification models.
    encoded_image = preprocess_image(
        image_file_path=image_file_path, max_width=640, max_height=480
    )

    # This example shows prediction with one image. You can extend it to
    # predict with a batch of images indicated by different keys, so that
    # each response can be matched to its corresponding image.
    instances = {
        "instances": [{"image_bytes": {"b64": str(encoded_image)}, "key": image_key}]
    }

    # This example sends requests on the same server where you started the
    # Docker container. To send requests to another server, change
    # localhost to that server's IP address.
    url = "http://localhost:{}/v1/models/default:predict".format(port_number)

    response = requests.post(url, data=json.dumps(instances))
    print(response.json())

Summary

In this tutorial, you learned how to run Edge models using CPU or GPU Docker containers. You can now deploy this container-based solution on more devices.

What's next