This page describes how to use Vertex AI to export your image and video AutoML Edge models to Cloud Storage.
For information about exporting tabular models, see Exporting an AutoML tabular model.
Introduction
After you have trained an AutoML Edge model, you can, in some cases, export the model in different formats, depending on how you want to use it. The exported model files are saved in a Cloud Storage bucket, and they can be used for prediction in the environment of your choosing.
You cannot use an Edge model in Vertex AI to serve predictions; you must deploy the Edge model to an external device to get predictions.
Export a model
Use the following code samples to identify an AutoML Edge model, specify an output file storage location, and then send the export model request.
Image
Select the tab below for your objective:
Classification
Trained AutoML Edge image classification models can be exported in the following formats:
- TF Lite - Export your model as a TF Lite package to run your model on edge or mobile devices.
- Edge TPU TF Lite - Export your model as a TF Lite package to run your model on Edge TPU devices.
- Container - Export your model as a TF Saved Model to run on a Docker container.
- Core ML - Export an .mlmodel file to run your model on iOS and macOS devices.
- Tensorflow.js - Export your model as a TensorFlow.js package to run your model in the browser and in Node.js.
Select the tab below for your language or environment:
Console
- In the Google Cloud console, in the Vertex AI section, go to the Models page.
- Click the version number of the AutoML Edge model you want to export to open its details page.
- Click Export.
- In the Export model side window, specify the location in Cloud Storage to store Edge model export output.
- Click Export.
- Click Done to close the Export model side window.
REST
Before using any of the request data, make the following replacements:
- LOCATION: Your project's location.
- PROJECT: Your project ID.
- MODEL_ID: The ID number of the trained AutoML Edge model you are exporting.
- EXPORT_FORMAT: The type of Edge model you are exporting. For this objective the options are:
  - tflite (TF Lite) - Export your model as a TF Lite package to run your model on edge or mobile devices.
  - edgetpu-tflite (Edge TPU TF Lite) - Export your model as a TF Lite package to run your model on Edge TPU devices.
  - tf-saved-model (Container) - Export your model as a TF Saved Model to run on a Docker container.
  - core-ml (Core ML) - Export an .mlmodel file to run your model on iOS and macOS devices.
  - tf-js (Tensorflow.js) - Export your model as a TensorFlow.js package to run your model in the browser and in Node.js.
- OUTPUT_BUCKET: The path to the Cloud Storage bucket directory where you want to store your Edge model files.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID:export
Request JSON body:
{
  "outputConfig": {
    "exportFormatId": "EXPORT_FORMAT",
    "artifactDestination": {
      "outputUriPrefix": "gs://OUTPUT_BUCKET/"
    }
  }
}
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID:export"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID:export" | Select-Object -Expand Content
The response contains information about specifications as well as the OPERATION_ID.
You can get the status of the export operation to see when it finishes.
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Node.js
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
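The export can also be issued from Python. The following is a minimal, hedged sketch, assuming the google-cloud-aiplatform SDK is installed and you have a trained AutoML Edge model; the helper that builds the output configuration mirrors the REST request body shown above, and the project, location, model ID, and bucket names are placeholders:

```python
import json

def build_output_config(export_format: str, output_bucket: str) -> dict:
    """Build the outputConfig body used by the models.export REST call."""
    return {
        "outputConfig": {
            "exportFormatId": export_format,
            "artifactDestination": {"outputUriPrefix": f"gs://{output_bucket}/"},
        }
    }

def export_edge_model(project: str, location: str, model_id: str,
                      export_format: str, output_bucket: str):
    """Export a trained AutoML Edge model with the Vertex AI SDK.

    The SDK is imported lazily so the helper above can be used without it.
    """
    from google.cloud import aiplatform  # pip install google-cloud-aiplatform

    aiplatform.init(project=project, location=location)
    model = aiplatform.Model(model_name=model_id)
    # export_model blocks until the export operation completes when sync=True.
    return model.export_model(
        export_format_id=export_format,
        artifact_destination=f"gs://{output_bucket}/",
        sync=True,
    )

if __name__ == "__main__":
    # Print the equivalent REST body for a TF Lite export as a sanity check.
    print(json.dumps(build_output_config("tflite", "my-bucket"), indent=2))
```

The `build_output_config` helper is only a cross-check against the REST body; the actual export runs through `Model.export_model`, which requires Application Default Credentials as described above.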
Object detection
Trained AutoML Edge image object detection models can be exported in the following formats:
- TF Lite - Export your model as a TF Lite package to run your model on edge or mobile devices.
- Container - Export your model as a TF Saved Model to run on a Docker container.
- Tensorflow.js - Export your model as a TensorFlow.js package to run your model in the browser and in Node.js.
Select the tab below for your language or environment:
Console
- In the Google Cloud console, in the Vertex AI section, go to the Models page.
- Click the version number of the AutoML Edge model you want to export to open its details page.
- Select the Deploy & Test tab to view the available export formats.
- Select your desired export model format from the Use your edge-optimized model section.
- In the Export model side window, specify the location in Cloud Storage to store Edge model export output.
- Click Export.
- Click Done to close the Export model side window.
REST
Before using any of the request data, make the following replacements:
- LOCATION: Your project's location.
- PROJECT: Your project ID.
- MODEL_ID: The ID number of the trained AutoML Edge model you are exporting.
- EXPORT_FORMAT: The type of Edge model you are exporting. For this objective the options are:
  - tflite (TF Lite) - Export your model as a TF Lite package to run your model on edge or mobile devices.
  - tf-saved-model (Container) - Export your model as a TF Saved Model to run on a Docker container.
  - tf-js (Tensorflow.js) - Export your model as a TensorFlow.js package to run your model in the browser and in Node.js.
- OUTPUT_BUCKET: The path to the Cloud Storage bucket directory where you want to store your Edge model files.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID:export
Request JSON body:
{
  "outputConfig": {
    "exportFormatId": "EXPORT_FORMAT",
    "artifactDestination": {
      "outputUriPrefix": "gs://OUTPUT_BUCKET/"
    }
  }
}
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID:export"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID:export" | Select-Object -Expand Content
The response contains information about specifications as well as the OPERATION_ID.
You can get the status of the export operation to see when it finishes.
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Node.js
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
Video
Select the tab below for your objective:
Action recognition
Trained AutoML Edge video action recognition models can be exported in the saved model format.
Select the tab below for your language or environment:
Console
- In the Google Cloud console, in the Vertex AI section, go to the Models page.
- Click the version number of the AutoML Edge model you want to export to open its details page.
- Click Export.
- In the Export model side window, specify the location in Cloud Storage to store Edge model export output.
- Click Export.
- Click Done to close the Export model side window.
REST
Before using any of the request data, make the following replacements:
- LOCATION: Region where the Model is stored. For example, us-central1.
- MODEL_ID: The ID number of the trained AutoML Edge model you are exporting.
- EXPORT_FORMAT: The type of Edge model you are exporting. For video action recognition, the model option is:
  - tf-saved-model (Container) - Export your model as a TF Saved Model to run on a Docker container.
- OUTPUT_BUCKET: The path to the Cloud Storage bucket directory where you want to store your Edge model files.
- PROJECT_NUMBER: Your project's automatically generated project number.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID:export
Request JSON body:
{
  "outputConfig": {
    "exportFormatId": "EXPORT_FORMAT",
    "artifactDestination": {
      "outputUriPrefix": "gs://OUTPUT_BUCKET/"
    }
  }
}
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID:export"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID:export" | Select-Object -Expand Content
The response contains information about specifications as well as the OPERATION_ID.
You can get the status of the export operation to see when it finishes.
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.
Classification
Trained AutoML Edge video classification models can only be exported in the saved model format.
Select the tab below for your language or environment:
Console
- In the Google Cloud console, in the Vertex AI section, go to the Models page.
- Click the version number of the AutoML Edge model you want to export to open its details page.
- Click Export.
- In the Export model side window, specify the location in Cloud Storage to store Edge model export output.
- Click Export.
- Click Done to close the Export model side window.
REST
Before using any of the request data, make the following replacements:
- LOCATION: Region where the Model is stored. For example, us-central1.
- MODEL_ID: The ID number of the trained AutoML Edge model you are exporting.
- EXPORT_FORMAT: The type of Edge model you are exporting. For video classification, the model option is:
  - tf-saved-model (Container) - Export your model as a TF Saved Model to run on a Docker container.
- OUTPUT_BUCKET: The path to the Cloud Storage bucket directory where you want to store your Edge model files.
- PROJECT_NUMBER: Your project's automatically generated project number.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID:export
Request JSON body:
{
  "outputConfig": {
    "exportFormatId": "EXPORT_FORMAT",
    "artifactDestination": {
      "outputUriPrefix": "gs://OUTPUT_BUCKET/"
    }
  }
}
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID:export"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID:export" | Select-Object -Expand Content
The response contains information about specifications as well as the OPERATION_ID.
{
  "name": "projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID/operations/OPERATION_ID",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.aiplatform.v1.ExportModelOperationMetadata",
    "genericMetadata": {
      "createTime": "2020-10-12T20:53:40.130785Z",
      "updateTime": "2020-10-12T20:53:40.130785Z"
    },
    "outputInfo": {
      "artifactOutputUri": "gs://OUTPUT_BUCKET/model-MODEL_ID/EXPORT_FORMAT/YYYY-MM-DDThh:mm:ss.sssZ"
    }
  }
}
You can get the status of the export operation to see when it finishes.
Object tracking
Trained AutoML Edge video object tracking models can be exported in the following formats:
- TF Lite - Export your model as a TensorFlow Lite package to run your model on edge or mobile devices.
- Container - Export your model as a TensorFlow Saved Model to run on a Docker container.
Select the tab below for your language or environment:
Console
- In the Google Cloud console, in the Vertex AI section, go to the Models page.
- Click the version number of the AutoML Edge model you want to export to open its details page.
- Click Export.
- In the Export model side window, specify the location in Cloud Storage to store Edge model export output.
- Click Export.
- Click Done to close the Export model side window.
REST
Before using any of the request data, make the following replacements:
- LOCATION: Region where the Model is stored. For example, us-central1.
- MODEL_ID: The ID number of the trained AutoML Edge model you are exporting.
- EXPORT_FORMAT: The type of Edge model you are exporting. For video object tracking models, the options are:
  - tflite (TF Lite) - Export your model as a TF Lite package to run your model on edge or mobile devices.
  - edgetpu-tflite (Edge TPU TF Lite) - Export your model as a TF Lite package to run your model on Edge TPU devices.
  - tf-saved-model (Container) - Export your model as a TF Saved Model to run on a Docker container.
- OUTPUT_BUCKET: The path to the Cloud Storage bucket directory where you want to store your Edge model files.
- PROJECT_NUMBER: Your project's automatically generated project number.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID:export
Request JSON body:
{
  "outputConfig": {
    "exportFormatId": "EXPORT_FORMAT",
    "artifactDestination": {
      "outputUriPrefix": "gs://OUTPUT_BUCKET/"
    }
  }
}
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID:export"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID:export" | Select-Object -Expand Content
The response contains information about specifications as well as the OPERATION_ID.
{
  "name": "projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID/operations/OPERATION_ID",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.aiplatform.v1.ExportModelOperationMetadata",
    "genericMetadata": {
      "createTime": "2020-10-12T20:53:40.130785Z",
      "updateTime": "2020-10-12T20:53:40.130785Z"
    },
    "outputInfo": {
      "artifactOutputUri": "gs://OUTPUT_BUCKET/model-MODEL_ID/EXPORT_FORMAT/YYYY-MM-DDThh:mm:ss.sssZ"
    }
  }
}
You can get the status of the export operation to see when it finishes.
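The status check described in the next section can be wrapped in a simple polling loop. A minimal sketch, with the HTTP call injected as a callable so the retry logic is independent of any particular client library; the poll interval and limit are arbitrary choices, not values from this page:

```python
import time

def wait_for_operation(fetch, poll_seconds: float = 10.0, max_polls: int = 60) -> dict:
    """Poll a long-running operation until it reports done.

    fetch: a zero-argument callable returning the operation resource as a
    dict, e.g. the parsed JSON of GET .../operations/OPERATION_ID.
    """
    for _ in range(max_polls):
        op = fetch()
        if op.get("done"):
            # A finished operation carries either "response" or "error".
            if "error" in op:
                raise RuntimeError(f"Export failed: {op['error']}")
            return op
        time.sleep(poll_seconds)
    raise TimeoutError("Export operation did not finish in time")
```

In practice, `fetch` would issue the authenticated GET request shown in the next section and parse its JSON body.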
Get status of the operation
Image
Use the following code to get the status of the export operation. This code is the same for all objectives:
REST
Before using any of the request data, make the following replacements:
- LOCATION: Your project's location.
- PROJECT: Your project ID.
- OPERATION_ID: The ID of the target operation. This ID is typically contained in the response to the original request.
HTTP method and URL:
GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/operations/OPERATION_ID
To send your request, choose one of these options:
curl
Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/operations/OPERATION_ID"
PowerShell
Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/operations/OPERATION_ID" | Select-Object -Expand Content
{
  "name": "projects/PROJECT/locations/LOCATION/models/MODEL_ID/operations/OPERATION_ID",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.aiplatform.v1.ExportModelOperationMetadata",
    "genericMetadata": {
      "createTime": "2020-10-12T20:53:40.130785Z",
      "updateTime": "2020-10-12T20:53:40.793983Z"
    },
    "outputInfo": {
      "artifactOutputUri": "gs://OUTPUT_BUCKET/model-MODEL_ID/EXPORT_FORMAT/YYYY-MM-DDThh:mm:ss.sssZ"
    }
  },
  "done": true,
  "response": {
    "@type": "type.googleapis.com/google.cloud.aiplatform.v1.ExportModelResponse"
  }
}
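Once the operation reports "done": true, the artifactOutputUri in its metadata points at the exported files. A small sketch for pulling that path out of a parsed status response; the field names match the response shown above:

```python
import json

def artifact_uri(operation_json: str) -> str:
    """Return the Cloud Storage prefix of the exported model files.

    Raises ValueError if the export operation has not finished yet.
    """
    op = json.loads(operation_json)
    if not op.get("done"):
        raise ValueError("Export operation is still running")
    return op["metadata"]["outputInfo"]["artifactOutputUri"]
```

The returned prefix is where the output files described later on this page are written.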
Video
REST
Before using any of the request data, make the following replacements:
- PROJECT_NUMBER: Your project's automatically generated project number.
- LOCATION: Region where the Model is stored. For example, us-central1.
- OPERATION_ID: The ID of your operation.
HTTP method and URL:
GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER/locations/LOCATION/operations/OPERATION_ID
To send your request, choose one of these options:
curl
Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER/locations/LOCATION/operations/OPERATION_ID"
PowerShell
Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_NUMBER/locations/LOCATION/operations/OPERATION_ID" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
{
  "name": "projects/PROJECT_NUMBER/locations/LOCATION/models/MODEL_ID/operations/OPERATION_ID",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.aiplatform.v1.ExportModelOperationMetadata",
    "genericMetadata": {
      "createTime": "2020-10-12T20:53:40.130785Z",
      "updateTime": "2020-10-12T20:53:40.793983Z"
    },
    "outputInfo": {
      "artifactOutputUri": "gs://OUTPUT_BUCKET/model-MODEL_ID/EXPORT_FORMAT/YYYY-MM-DDThh:mm:ss.sssZ"
    }
  },
  "done": true,
  "response": {
    "@type": "type.googleapis.com/google.cloud.aiplatform.v1.ExportModelResponse"
  }
}
Output files
Image
Select the tab below for your model format:
TF Lite
The OUTPUT_BUCKET you specified in the request determines where the output files are stored, in a directory with the following format:
- gs://OUTPUT_BUCKET/model-MODEL_ID/tflite/YYYY-MM-DDThh:mm:ss.sssZ/
Files:
- model.tflite: A file containing a version of the model that is ready to be used with TensorFlow Lite.
Edge TPU
The OUTPUT_BUCKET you specified in the request determines where the output files are stored, in a directory with the following format:
- gs://OUTPUT_BUCKET/model-MODEL_ID/edgetpu-tflite/YYYY-MM-DDThh:mm:ss.sssZ/
Files:
- edgetpu_model.tflite: A file containing a version of the model for TensorFlow Lite, passed through the Edge TPU compiler to be compatible with the Edge TPU.
Container
The OUTPUT_BUCKET you specified in the request determines where the output files are stored, in a directory with the following format:
- gs://OUTPUT_BUCKET/model-MODEL_ID/tf-saved-model/YYYY-MM-DDThh:mm:ss.sssZ/
Files:
- saved_model.pb: A protocol buffer file containing the graph definition and the weights of the model.
Core ML
The OUTPUT_BUCKET you specified in the request determines where the output files are stored, in a directory with the following format:
- gs://OUTPUT_BUCKET/model-MODEL_ID/core-ml/YYYY-MM-DDThh:mm:ss.sssZ/
Files:
- dict.txt: A label file. Each line in the label file dict.txt represents a label of the predictions returned by the model, in the same order they were requested.
  Sample dict.txt:
  roses
  daisy
  tulips
  dandelion
  sunflowers
- model.mlmodel: A file specifying a Core ML model.
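Since each line of dict.txt corresponds, in order, to one model output index, a label lookup can be sketched as follows (the sample labels are the ones shown above):

```python
def load_labels(dict_txt: str) -> list[str]:
    """Parse a dict.txt label file: one label per line, in model output order."""
    return [line.strip() for line in dict_txt.splitlines() if line.strip()]

# The i-th score produced by the model corresponds to labels[i].
labels = load_labels("roses\ndaisy\ntulips\ndandelion\nsunflowers\n")
```

In a real application the string would come from reading the exported dict.txt file from the output bucket.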
Tensorflow.js
The OUTPUT_BUCKET you specified in the request determines where the output files are stored, in a directory with the following format:
- gs://OUTPUT_BUCKET/model-MODEL_ID/tf-js/YYYY-MM-DDThh:mm:ss.sssZ/
Files:
- dict.txt: A label file. Each line in the label file dict.txt represents a label of the predictions returned by the model, in the same order they were requested.
  Sample dict.txt:
  roses
  daisy
  tulips
  dandelion
  sunflowers
- group1-shard1of3.bin: A binary file.
- group1-shard2of3.bin: A binary file.
- group1-shard3of3.bin: A binary file.
- model.json: A JSON file representation of a model.
  Sample model.json (shortened for clarity):
  {
    "format": "graph-model",
    "generatedBy": "2.4.0",
    "convertedBy": "TensorFlow.js Converter v1.7.0",
    "userDefinedMetadata": {
      "signature": {
        "inputs": {
          "image:0": {
            "name": "image:0",
            "dtype": "DT_FLOAT",
            "tensorShape": { "dim": [ { "size": "1" }, { "size": "224" }, { "size": "224" }, { "size": "3" } ] }
          }
        },
        "outputs": {
          "scores:0": {
            "name": "scores:0",
            "dtype": "DT_FLOAT",
            "tensorShape": { "dim": [ { "size": "1" }, { "size": "5" } ] }
          }
        }
      }
    },
    "modelTopology": {
      "node": [
        {
          "name": "image",
          "op": "Placeholder",
          "attr": {
            "dtype": { "type": "DT_FLOAT" },
            "shape": { "shape": { "dim": [ { "size": "1" }, { "size": "224" }, { "size": "224" }, { "size": "3" } ] } }
          }
        },
        {
          "name": "mnas_v4_a_1/feature_network/feature_extractor/Mean/reduction_indices",
          "op": "Const",
          "attr": {
            "value": { "tensor": { "dtype": "DT_INT32", "tensorShape": { "dim": [ { "size": "2" } ] } } },
            "dtype": { "type": "DT_INT32" }
          }
        },
        ...
        {
          "name": "scores",
          "op": "Identity",
          "input": [ "Softmax" ],
          "attr": { "T": { "type": "DT_FLOAT" } }
        }
      ],
      "library": {},
      "versions": {}
    },
    "weightsManifest": [
      {
        "paths": [ "group1-shard1of3.bin", "group1-shard2of3.bin", "group1-shard3of3.bin" ],
        "weights": [
          { "name": "mnas_v4_a_1/feature_network/feature_extractor/Mean/reduction_indices", "shape": [ 2 ], "dtype": "int32" },
          { "name": "mnas_v4_a/output/fc/tf_layer/kernel", "shape": [ 1280, 5 ], "dtype": "float32" },
          ...
          { "name": "mnas_v4_a_1/feature_network/lead_cell_17/op_0/conv2d_0/Conv2D_weights", "shape": [ 1, 1, 320, 1280 ], "dtype": "float32" },
          { "name": "mnas_v4_a_1/feature_network/cell_14/op_0/expand_0/Conv2D_bn_offset", "shape": [ 1152 ], "dtype": "float32" }
        ]
      }
    ]
  }
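The weightsManifest in model.json lists the shard files the TensorFlow.js runtime will fetch alongside it, so a quick integrity check that every referenced shard was actually copied from the output bucket can be sketched as follows (manifest keys as in the sample above; the file set is a placeholder):

```python
import json

def missing_shards(model_json: str, available: set[str]) -> list[str]:
    """Return weightsManifest shard paths absent from the available file set."""
    manifest = json.loads(model_json)["weightsManifest"]
    needed = [path for group in manifest for path in group["paths"]]
    return [path for path in needed if path not in available]

# Hypothetical manifest with the shard names from the sample above.
sample = ('{"weightsManifest": [{"paths": ["group1-shard1of3.bin", '
          '"group1-shard2of3.bin", "group1-shard3of3.bin"], "weights": []}]}')
```

In practice `available` would be the set of file names downloaded next to model.json.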
Video
Select the tab below for your model format:
TF Lite
The OUTPUT_BUCKET you specified in the request determines where the output files are stored, in a directory with the following format:
- gs://OUTPUT_BUCKET/model-MODEL_ID/tflite/YYYY-MM-DDThh:mm:ss.sssZ/
Files:
- model.tflite: A file containing a version of the model that is ready to be used with TensorFlow Lite.
- frozen_inference_graph.pb: A serialized protocol buffer file containing the graph definition and the weights of the model.
- label_map.pbtxt: A label map file that maps each of the used labels to an integer value.
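The label_map.pbtxt file is a small text-format protocol buffer. Assuming it uses the common layout of repeated item { id: ... name: "..." } entries (an assumption about the file layout, not something this page specifies), a dependency-free parse can be sketched as:

```python
import re

def parse_label_map(pbtxt: str) -> dict[int, str]:
    """Extract id -> name pairs from a label_map.pbtxt-style text proto.

    Assumes simple entries like: item { id: 1 name: "cat" }. This is a
    hypothetical sample layout; real exports may order fields differently
    or include extra fields, in which case a proper textproto parser is safer.
    """
    entries = {}
    for block in re.findall(r"item\s*\{(.*?)\}", pbtxt, re.DOTALL):
        id_match = re.search(r"id:\s*(\d+)", block)
        name_match = re.search(r'name:\s*"([^"]*)"', block)
        if id_match and name_match:
            entries[int(id_match.group(1))] = name_match.group(1)
    return entries
```

The resulting mapping lets you translate the integer class IDs in model output back into label strings.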
Edge TPU
The OUTPUT_BUCKET you specified in the request determines where the output files are stored, in a directory with the following format:
- gs://OUTPUT_BUCKET/model-MODEL_ID/edgetpu-tflite/YYYY-MM-DDThh:mm:ss.sssZ/
Files:
- edgetpu_model.tflite: A file containing a version of the model for TensorFlow Lite, passed through the Edge TPU compiler to be compatible with the Edge TPU.
- label_map.pbtxt: A label map file that maps each of the used labels to an integer value.
Container
The OUTPUT_BUCKET you specified in the request determines where the output files are stored, in a directory with the following format:
- gs://OUTPUT_BUCKET/model-MODEL_ID/tf-saved-model/YYYY-MM-DDThh:mm:ss.sssZ/
Files:
- frozen_inference_graph.pb: A serialized protocol buffer file containing the graph definition and the weights of the model.
- label_map.pbtxt: A label map file that maps each of the used labels to an integer value.
- saved_model/saved_model.pb: The file stores the actual TensorFlow program, or model, and a set of named signatures, each identifying a function that accepts tensor inputs and produces tensor outputs.
- saved_model/variables/: The variables directory contains a standard training checkpoint.