This page describes how to use Vertex AI to export your image and video AutoML Edge models to Cloud Storage.
For information about exporting tabular models, see Exporting an AutoML tabular model.
Introduction
After you have trained an AutoML Edge model, you can, in some cases, export the model in different formats, depending on how you want to use it. The exported model files are saved in a Cloud Storage bucket, and they can be used for prediction in the environment of your choosing.
You cannot use an Edge model in Vertex AI to serve predictions; you must deploy an Edge model to an external device to get predictions.
Export a model
Use the following code samples to identify an AutoML Edge model, specify an output file storage location, and then send the export model request.
Image
Select the tab below for your objective:
Classification
Trained AutoML Edge image classification models can be exported in the following formats:
- TF Lite - Export your model as a TF Lite package to run your model on edge or mobile devices.
- Edge TPU TF Lite - Export your model as a TF Lite package to run your model on Edge TPU devices.
- Container - Export your model as a TF Saved Model to run on a Docker container.
- Core ML - Export an .mlmodel file to run your model on iOS and macOS devices.
- TensorFlow.js - Export your model as a TensorFlow.js package to run your model in the browser and in Node.js.
Select the tab below for your language or environment:
Console
- In the Google Cloud console, in the Vertex AI section, go to the Models page.
- Click the version number of the AutoML Edge model you want to export to open its details page.
- Click Export.
- In the Export model side window, specify the location in Cloud Storage to store Edge model export output.
- Click Export.
- Click Done to close the Export model side window.
REST
Before using any of the request data, make the following replacements:
- LOCATION: Your project's location.
- PROJECT: Your project ID.
- MODEL_ID: The ID number of the trained AutoML Edge model you are exporting.
- EXPORT_FORMAT: The type of Edge model you are exporting. For this objective the options are:
  - tflite (TF Lite) - Export your model as a TF Lite package to run your model on edge or mobile devices.
  - edgetpu-tflite (Edge TPU TF Lite) - Export your model as a TF Lite package to run your model on Edge TPU devices.
  - tf-saved-model (Container) - Export your model as a TF Saved Model to run on a Docker container.
  - core-ml (Core ML) - Export an .mlmodel file to run your model on iOS and macOS devices.
  - tf-js (TensorFlow.js) - Export your model as a TensorFlow.js package to run your model in the browser and in Node.js.
- OUTPUT_BUCKET: The path to the Cloud Storage bucket directory where you want to store your Edge model files.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID:export
Request JSON body:
{
  "outputConfig": {
    "exportFormatId": "EXPORT_FORMAT",
    "artifactDestination": {
      "outputUriPrefix": "gs://OUTPUT_BUCKET/"
    }
  }
}
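If you are scripting the request, the body can be generated programmatically. A minimal Python sketch, where the format ID and bucket name are placeholders and `build_export_body` is a hypothetical helper (not part of any SDK):

```python
import json

# Build the export request body from the placeholder values described above.
# EXPORT_FORMAT must be one of the format IDs listed for your objective.
def build_export_body(export_format, output_bucket):
    return {
        "outputConfig": {
            "exportFormatId": export_format,
            "artifactDestination": {
                # Trailing slash: the export writes its files under this prefix.
                "outputUriPrefix": f"gs://{output_bucket}/",
            },
        }
    }

body = build_export_body("tflite", "OUTPUT_BUCKET")
print(json.dumps(body, indent=2))
```

You can write the result to `request.json` and pass it to `curl -d @request.json` exactly as shown below.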
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID:export"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID:export" | Select-Object -Expand Content
The response contains information about specifications as well as the OPERATION_ID.
You can get the status of the export operation to see when it finishes.
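The status is available at a standard long-running operation URL built from the operation name returned in the export response. A sketch of constructing that URL, where the function name is mine and the operation name shown is a placeholder:

```python
# Build the URL used to poll an export operation's status. The operation
# name comes from the "name" field of the export response; the region
# prefix must match the operation's location.
def operation_status_url(location, operation_name):
    # Send a GET request to this URL with an
    # "Authorization: Bearer $(gcloud auth print-access-token)" header;
    # the export is finished when the response contains "done": true.
    return f"https://{location}-aiplatform.googleapis.com/v1/{operation_name}"

url = operation_status_url(
    "us-central1",
    "projects/PROJECT/locations/us-central1/models/MODEL_ID/operations/OPERATION_ID",
)
```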
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Node.js
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Vertex AI SDK for Python
To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Vertex AI SDK for Python API reference documentation.
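As a minimal sketch of driving the export through the SDK's `Model.export_model` method, assuming `google-cloud-aiplatform` is installed and Application Default Credentials are configured; the wrapper function and the format ID set below are this sketch's own, not part of the SDK:

```python
# Hedged sketch: export an AutoML Edge image classification model with the
# Vertex AI SDK for Python. The model ID and bucket URI are placeholders.

EDGE_IMAGE_CLASSIFICATION_FORMATS = {
    "tflite", "edgetpu-tflite", "tf-saved-model", "core-ml", "tf-js",
}

def export_edge_model(project, location, model_id, export_format, bucket_uri):
    # Validate before touching the API so a typo fails fast.
    if export_format not in EDGE_IMAGE_CLASSIFICATION_FORMATS:
        raise ValueError(f"unsupported export format: {export_format!r}")
    from google.cloud import aiplatform  # deferred: needs credentials
    aiplatform.init(project=project, location=location)
    model = aiplatform.Model(model_name=model_id)
    # With sync=True, export_model blocks until the operation completes.
    return model.export_model(
        export_format_id=export_format,
        artifact_destination=bucket_uri,  # e.g. "gs://OUTPUT_BUCKET/"
        sync=True,
    )
```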
Object detection
Trained AutoML Edge image object detection models can be exported in the following formats:
- TF Lite - Export your model as a TF Lite package to run your model on edge or mobile devices.
- Container - Export your model as a TF Saved Model to run on a Docker container.
- TensorFlow.js - Export your model as a TensorFlow.js package to run your model in the browser and in Node.js.
Select the tab below for your language or environment:
Console
- In the Google Cloud console, in the Vertex AI section, go to the Models page.
- Click the version number of the AutoML Edge model you want to export to open its details page.
- Select the Deploy & Test tab to view the available export formats.
- Select your desired export model format from the Use your edge-optimized model section.
- In the Export model side window, specify the location in Cloud Storage to store Edge model export output.
- Click Export.
- Click Done to close the Export model side window.
REST
Before using any of the request data, make the following replacements:
- LOCATION: Your project's location.
- PROJECT: Your project ID.
- MODEL_ID: The ID number of the trained AutoML Edge model you are exporting.
- EXPORT_FORMAT: The type of Edge model you are exporting. For this objective the options are:
  - tflite (TF Lite) - Export your model as a TF Lite package to run your model on edge or mobile devices.
  - tf-saved-model (Container) - Export your model as a TF Saved Model to run on a Docker container.
  - tf-js (TensorFlow.js) - Export your model as a TensorFlow.js package to run your model in the browser and in Node.js.
- OUTPUT_BUCKET: The path to the Cloud Storage bucket directory where you want to store your Edge model files.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID:export
Request JSON body:
{
  "outputConfig": {
    "exportFormatId": "EXPORT_FORMAT",
    "artifactDestination": {
      "outputUriPrefix": "gs://OUTPUT_BUCKET/"
    }
  }
}
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID:export"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/LOCATION/models/MODEL_ID:export" | Select-Object -Expand Content
The response contains information about specifications as well as the OPERATION_ID.
You can get the status of the export operation to see when it finishes.
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Node.js
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
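After an export operation finishes, the model files sit under the Cloud Storage prefix you specified. A minimal sketch of listing them, assuming the `google-cloud-storage` client library is installed and credentials are configured; both helper functions below are this sketch's own:

```python
# Hedged sketch: list the exported Edge model files under an output prefix.

def split_gcs_uri(uri):
    """Split "gs://bucket/prefix/" into (bucket, prefix)."""
    if not uri.startswith("gs://"):
        raise ValueError(f"not a gs:// URI: {uri!r}")
    bucket, _, prefix = uri[len("gs://"):].partition("/")
    return bucket, prefix

def list_exported_files(uri):
    bucket, prefix = split_gcs_uri(uri)
    from google.cloud import storage  # deferred: needs credentials
    client = storage.Client()
    return [blob.name for blob in client.list_blobs(bucket, prefix=prefix)]
```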