Exporting Edge models

After you have created (trained) a model, you can export your custom model.

After exporting your model, you can deploy it to a device.

Using the ExportModel API, you can export an image classification model to a Google Cloud Storage location in generic TensorFlow Lite format, Edge TPU compiled TensorFlow Lite format, Core ML format, general TensorFlow format, or TensorFlow.js for Web format.

Using curl

To make it more convenient to run the curl samples in this topic, set the following environment variable.

export ENDPOINT="automl.googleapis.com"
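
The curl samples below also reference PROJECT_ID, MODEL_ID, and USER_GCS_PATH. As a sketch, you could set them with placeholder values like the following (the values shown are examples, not real resources; substitute your own before running the samples):

```shell
# Placeholder values -- replace with your own project ID, model ID,
# and Cloud Storage destination before running the curl samples.
export PROJECT_ID="my-project-id"
export MODEL_ID="my-model-id"
export USER_GCS_PATH="gs://my-bucket/my-directory/"
```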

Export to devices

TensorFlow Lite models

Web UI

  1. Open the AutoML Vision UI and click the lightbulb icon in the left navigation bar to display the available models.

    To view the models for a different project, select the project from the drop-down list in the upper right of the title bar.

  2. Click the row for the model you want to use to label your images.

  3. Click the Test & Use tab just below the title bar.

  4. In the Use your model section, select the TF Lite option, then select Export to export your Edge TF Lite model.

    Export TF Lite model

Integrated UI

  1. Open the Vision Dashboard and click the lightbulb icon in the left navigation bar to display the available models.

    To view the models for a different project, select the project from the drop-down list in the upper right of the title bar.

  2. Click the row for the model you want to use to label your images.

  3. Click the Test & Use tab just below the title bar.

  4. In the Use your model section, select the TF Lite option, then select Export to export your Edge TF Lite model.

    updated Export TF Lite model image

Command-line

In the "model_format" field specify "tflite" (default).

curl \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json" \
  https://${ENDPOINT}/v1beta1/projects/${PROJECT_ID}/locations/us-central1/models/${MODEL_ID}:export \
  -d '{
        "output_config": {
          "model_format": "tflite",
          "gcs_destination": {
            "output_uri_prefix": "'"${USER_GCS_PATH}"'"
          }
        }
      }'

As a result, you will see a folder in the directory you provided (${USER_GCS_PATH}). The folder is named with a timestamp and the model type, in the format YYYY-MM-DD_hh-mm-ss-sss_${model-type} (for example, 2018-10-26_18-16-30-458_tflite).

Timestamp folder example

The folder contains a TensorFlow Lite model named model.tflite, a label file named dict.txt, and a tflite_metadata.json file.

Each line in the label file dict.txt represents a label of the predictions returned by the TensorFlow Lite model, in the same order they were requested. For example, the dict.txt for the flowers dataset is as follows:

daisy
dandelion
roses
sunflowers
tulips
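
To illustrate how the label file lines up with model output, here is a small shell sketch (assuming the flowers dict.txt above and 0-based prediction indices, which is an assumption of this example) that looks up the label for a given prediction index:

```shell
# Recreate the sample dict.txt shown above.
printf 'daisy\ndandelion\nroses\nsunflowers\ntulips\n' > dict.txt

# The model returns one score per line of dict.txt; a (0-based)
# prediction index of 2 corresponds to line 3 of the file.
INDEX=2
sed -n "$((INDEX + 1))p" dict.txt   # prints "roses"
```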

Using the exported model

After exporting your model to a Google Cloud Storage bucket you can deploy your AutoML Vision Edge model on Android devices, iOS devices, or Raspberry Pi 3.

Core ML models

Web UI

  1. Open the AutoML Vision UI and click the lightbulb icon in the left navigation bar to display the available models.

    To view the models for a different project, select the project from the drop-down list in the upper right of the title bar.

  2. Click the row for the model you want to use to label your images.

  3. Click the Test & Use tab just below the title bar.

  4. In the Use your model section, select the Core ML option, then click Export to export your on-device Core ML model.

    Export Core ML model

Integrated UI

  1. Open the Vision Dashboard and click the lightbulb icon in the left navigation bar to display the available models.

    To view the models for a different project, select the project from the drop-down list in the upper right of the title bar.

  2. Click the row for the model you want to use to label your images.

  3. Click the Test & Use tab just below the title bar.

  4. In the Use your model section, select the Core ML option, then click Export to export your on-device Core ML model.

    updated export Core ML model screenshot

Command-line

In the "model_format" field specify "core-ml".

curl \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json" \
  https://${ENDPOINT}/v1beta1/projects/${PROJECT_ID}/locations/us-central1/models/${MODEL_ID}:export \
  -d '{
        "output_config": {
          "model_format": "core-ml",
          "gcs_destination": {
            "output_uri_prefix": "'"${USER_GCS_PATH}"'"
          }
        }
      }'

As a result, you will see a folder in the directory you provided (${USER_GCS_PATH}). The folder is named with a timestamp and the model type, in the format YYYY-MM-DD_hh-mm-ss-sss_${model-type} (for example, 2018-10-26_18-16-30-458_core-ml).

Timestamp folder example

The folder contains a Core ML model named model.mlmodel and a label file named dict.txt.

Each line in the label file dict.txt represents a label of the predictions returned by the Core ML model, in the same order they were requested. For example, the dict.txt for the flowers dataset is as follows:

daisy
dandelion
roses
sunflowers
tulips

Using the exported model

After exporting your model to a Google Cloud Storage bucket you can deploy your AutoML Vision Edge model on iOS devices.

Export to a container

Web UI

  1. Open the AutoML Vision UI and click the lightbulb icon in the left navigation bar to display the available models.

    To view the models for a different project, select the project from the drop-down list in the upper right of the title bar.

  2. Click the row for the model you want to use to label your images.

  3. Click the Test & Use tab just below the title bar.

  4. In the Use your model section, select the Container option, then click Export to export your TensorFlow model for use in a container.

    Export container image

Integrated UI

  1. Open the Vision Dashboard and click the lightbulb icon in the left navigation bar to display the available models.

    To view the models for a different project, select the project from the drop-down list in the upper right of the title bar.

  2. Click the row for the model you want to use to label your images.

  3. Click the Test & Use tab just below the title bar.

  4. In the Use your model section, select the Container option, then click Export to export your TensorFlow model for use in a container.

    updated export container model screenshot

Command-line

In the "model_format" field specify "tf-saved-model".

curl \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json" \
  https://${ENDPOINT}/v1beta1/projects/${PROJECT_ID}/locations/us-central1/models/${MODEL_ID}:export \
  -d '{
        "output_config": {
          "model_format": "tf-saved-model",
          "gcs_destination": {
            "output_uri_prefix": "'"${USER_GCS_PATH}"'"
          }
        }
      }'

As a result, you will see a folder in the directory you provided (${USER_GCS_PATH}). The folder is named with a timestamp and the model type, in the format YYYY-MM-DD_hh-mm-ss-sss_${model-type} (for example, 2018-10-26_18-16-30-458_tf-saved-model).

Timestamp folder example

The folder contains a TensorFlow model named saved_model.pb.

Using the exported model

After exporting your model to a Google Cloud Storage bucket you can use your exported model to make predictions in a Docker image. See the Containers tutorial for instructions on deployment to a container.

Export to Edge TPU

Web UI

  1. Open the AutoML Vision UI and click the lightbulb icon in the left navigation bar to display the available models.

    To view the models for a different project, select the project from the drop-down list in the upper right of the title bar.

  2. Click the row for the model you want to use to label your images.

  3. Click the Test & Use tab just below the title bar.

  4. In the Use your model section select the Edge devices option.

    Export TF Lite model

  5. From the dropdown menu, select Edge TPU. After selecting the Edge TPU option, click Export to export your Edge TPU model.

    Export TF Lite model dropdown

Integrated UI

  1. Open the Vision Dashboard and click the lightbulb icon in the left navigation bar to display the available models.

    To view the models for a different project, select the project from the drop-down list in the upper right of the title bar.

  2. Select the row for the model you want to use to label your images.

  3. Select the Test & Use tab just below the title bar.

  4. In the Use your model section select the Edge devices option.

    updated export TF Lite model

Command-line

curl \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json" \
  https://${ENDPOINT}/v1beta1/projects/${PROJECT_ID}/locations/us-central1/models/${MODEL_ID}:export \
  -d '{
        "output_config": {
          "gcs_destination": {
            "output_uri_prefix": "'"${USER_GCS_PATH}"'"
          },
          "params": {
            "model-format": "edgetpu_tflite"
          }
        }
      }'

As a result, you will see a folder in the directory you provided (${USER_GCS_PATH}). The folder is named with a timestamp and the model type, in the format YYYY-MM-DD_hh-mm-ss-sss_${model-type} (for example, 2018-10-26_18-16-30-458_edgetpu-tflite).

Timestamp folder example

The folder contains a TensorFlow Lite model named edgetpu_model.tflite, a label file named dict.txt, and a tflite_metadata.json file.

Each line in the label file dict.txt represents a label of the predictions returned by the TensorFlow Lite model, in the same order they were requested. For example, the dict.txt for the flowers dataset is as follows:

daisy
dandelion
roses
sunflowers
tulips

Using the exported model

For more information on how to deploy to Edge TPU, see Coral's official documentation.

Export for Web

Web UI

  1. Open the AutoML Vision UI and select the lightbulb icon in the side navigation bar to display the available models.

    To view the models for a different project, select the project from the drop-down list in the upper right of the title bar.

  2. Select the row for the model you want to use to label your images.

  3. Select the Test & Use tab just below the title bar.

  4. In the Use your model section, select the TensorFlow.js option, then select Export to export your web-ready TensorFlow.js model.

    Export TensorFlow.js screenshot

Integrated UI

  1. Open the Vision Dashboard and select the lightbulb icon in the side navigation bar to display the available models.

    To view the models for a different project, select the project from the drop-down list in the upper right of the title bar.

  2. Select the row for the model you want to use to label your images.

  3. Select the Test & Use tab just below the title bar.

  4. In the Use your model section, select the TensorFlow.js option, then select Export to export your web-ready TensorFlow.js model.

    updated export TensorFlow.js model screenshot

REST & CMD LINE

Before using any of the request data below, make the following replacements:

  • project-id: your GCP project ID.
  • model-id: the ID of your model, from the response when you created the model. The ID is the last element of the name of your model. For example:
    • model name: projects/project-id/locations/location-id/models/IOD4412217016962778756
    • model id: IOD4412217016962778756
  • output-storage-bucket: a Google Cloud Storage bucket/directory to save output files to, expressed in the following form: gs://bucket/directory/. The requesting user must have write permission to the bucket.
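
Since the model ID is just the last path element of the model name, one way to extract it in a shell is a parameter expansion like the following sketch (using the example model name above; MODEL_NAME and MODEL_ID are illustrative variable names, not part of the API):

```shell
# Example model name from the list above.
MODEL_NAME="projects/project-id/locations/location-id/models/IOD4412217016962778756"

# Strip everything up to and including the final "/" to get the model ID.
MODEL_ID="${MODEL_NAME##*/}"
echo "$MODEL_ID"   # prints "IOD4412217016962778756"
```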

HTTP method and URL:

POST https://automl.googleapis.com/v1beta1/projects/project-id/locations/us-central1/models/model-id:export

Request JSON body:

{
  "outputConfig": {
    "modelFormat": "tf_js",
    "gcsDestination": {
      "outputUriPrefix": "cloud-storage-bucket/"
    }
  }
}
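
The curl and PowerShell options below expect this body saved in a file named request.json. One way to create it is with a heredoc, keeping the placeholder bucket path (which you would replace with your own gs:// destination):

```shell
# Write the export request body to request.json; replace
# "cloud-storage-bucket/" with your own Cloud Storage destination.
cat > request.json <<'EOF'
{
  "outputConfig": {
    "modelFormat": "tf_js",
    "gcsDestination": {
      "outputUriPrefix": "cloud-storage-bucket/"
    }
  }
}
EOF
```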

To send your request, choose one of these options:

curl

Save the request body in a file called request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
https://automl.googleapis.com/v1beta1/projects/project-id/locations/us-central1/models/model-id:export

PowerShell

Save the request body in a file called request.json, and execute the following command:

$cred = gcloud auth application-default print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://automl.googleapis.com/v1beta1/projects/project-id/locations/us-central1/models/model-id:export" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

{
  "name": "projects/project-id/locations/us-central1/operations/operation-id",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.automl.v1beta1.OperationMetadata",
    "createTime": "2019-07-22T21:23:21.643041Z",
    "updateTime": "2019-07-22T21:23:21.643041Z",
    "exportModelDetails": {
      "outputInfo": {
        "gcsOutputDirectory": "cloud-storage-bucket/model-export/${icn/iod}/${model-type}-YYYY-MM-DDThh:mm:ss.sssZ"
      }
    }
  }
}

As a result, you will see a folder in the bucket directory you provided (output-storage-bucket). The folder is named in the format model-export/${icn/iod}/${model-type}-YYYY-MM-DDThh:mm:ss.sssZ (for example, tf_js-edge_model-2019-10-03T17:24:46.999Z).

The folder contains binary files (.bin), a label file named dict.txt, and a model.json file.

Tensorflow.js folder example