Edge device model quickstart

This quickstart walks you through the process of:

  • Copying a set of images into Google Cloud Storage.
  • Creating a CSV listing the images and their labels.
  • Using AutoML Vision to create your dataset, train a custom AutoML Vision Edge model, and make a prediction.
  • Exporting and deploying your AutoML Vision Edge model.

Before you begin

Set up your project

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. In the GCP Console, go to the Manage resources page and select or create a project.

    Go to the Manage resources page

  3. Make sure that billing is enabled for your project.

    Learn how to enable billing

  4. Enable the Cloud AutoML and Storage APIs.

    Enable the APIs

  5. Install the gcloud command line tool.
  6. Follow the instructions to create a service account and download a key file for that account.
  7. Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path to the service account key file that you downloaded when you created the service account.
    export GOOGLE_APPLICATION_CREDENTIALS=key-file
  8. Set the PROJECT_ID environment variable to your Project ID.
    export PROJECT_ID=your-project-id
    The AutoML API calls and resource names include your Project ID in them. The PROJECT_ID environment variable provides a convenient way to specify the ID.
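As a quick illustration of how the Project ID appears in resource names, the sketch below builds the project/location path under which AutoML resources are addressed (`example-project` is a placeholder, and `us-central1` is the location AutoML Vision uses):

```shell
# Placeholder project ID -- replace with your own.
PROJECT_ID=example-project

# AutoML resources live under a projects/<id>/locations/<region> path.
LOCATION_PATH="projects/${PROJECT_ID}/locations/us-central1"

echo "${LOCATION_PATH}"
```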
  9. Add yourself and your service account to the AutoML Editor IAM role.
    1. Replace your-userid@your-domain with your user account.
    2. Replace service-account-name with the name of your new service account, for example service-account1@myproject.iam.gserviceaccount.com.
    gcloud auth login
    gcloud projects add-iam-policy-binding $PROJECT_ID \
       --member="user:your-userid@your-domain" \
       --role="roles/automl.admin"
    gcloud projects add-iam-policy-binding $PROJECT_ID \
       --member="serviceAccount:service-account-name" \
       --role="roles/automl.editor"
    
  10. Allow the AutoML service account (custom-vision@appspot.gserviceaccount.com) to access your Google Cloud project resources:
    gcloud projects add-iam-policy-binding $PROJECT_ID \
      --member="serviceAccount:custom-vision@appspot.gserviceaccount.com" \
      --role="roles/ml.admin"
    gcloud projects add-iam-policy-binding $PROJECT_ID \
      --member="serviceAccount:custom-vision@appspot.gserviceaccount.com" \
      --role="roles/storage.admin"
    gcloud projects add-iam-policy-binding $PROJECT_ID \
      --member="serviceAccount:custom-vision@appspot.gserviceaccount.com" \
      --role="roles/serviceusage.serviceUsageAdmin"
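The three bindings above all follow the same pattern, so you can generate them in a loop. This sketch only prints the commands rather than executing them, so you can review the output before running it in Cloud Shell:

```shell
PROJECT_ID=example-project
SA="custom-vision@appspot.gserviceaccount.com"

# Roles the AutoML service account needs on your project.
for role in roles/ml.admin roles/storage.admin roles/serviceusage.serviceUsageAdmin; do
  # Print each binding command instead of executing it.
  echo "gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${SA} --role=${role}"
done
```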

Create a Cloud Storage bucket

Use Cloud Shell, a browser-based Linux command line connected to your GCP Console project, to create your Cloud Storage bucket:

  1. Open Cloud Shell.

  2. Create a Google Cloud Storage bucket. The bucket name must be in the format: project-id-vcm. The following command creates a storage bucket in the us-central1 region named project-id-vcm. For a complete list of available regions, see the Bucket Locations page.

    gsutil mb -p project-id -c regional -l us-central1 gs://project-id-vcm/

    Recommended file structure for your Cloud Storage files:

    gs://project-id-vcm/dataset-name/images/image-name.jpg
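Later commands in this quickstart reference a `${BUCKET}` environment variable. A small sketch that derives it from your Project ID, following the required `project-id-vcm` naming format (`example-project` is a placeholder):

```shell
# Placeholder project ID -- replace with your own.
PROJECT_ID=example-project

# The bucket name must follow the project-id-vcm format.
BUCKET="${PROJECT_ID}-vcm"

echo "gs://${BUCKET}/"
```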

Copy the sample images into your bucket

Next, copy the flower dataset used in this TensorFlow blog post. The images are stored in a public Cloud Storage bucket, so you can copy them directly from there to your own bucket.

  1. In your Cloud Shell session, enter:

    gsutil -m cp -R gs://cloud-ml-data/img/flower_photos/ gs://${BUCKET}/img/
    

    The file copying takes about 20 minutes to complete.

Create the CSV file

The sample dataset contains a CSV file with all of the image locations and the labels for each image. You'll use that to create your own CSV file:

  1. Update the CSV file to point to the files in your own bucket:

    gsutil cat gs://${BUCKET}/img/flower_photos/all_data.csv | sed "s:cloud-ml-data:${BUCKET}:" > all_data.csv
    
  2. Copy the CSV file into your bucket:

    gsutil cp all_data.csv gs://${BUCKET}/csv/
    
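To see what the `sed` rewrite in step 1 does, you can run it on a single sample row locally. The path below is illustrative rather than a specific file from the dataset:

```shell
BUCKET=example-project-vcm

# A sample row in the style of the public dataset's CSV.
row="gs://cloud-ml-data/img/flower_photos/daisy/100080576_f52e8ee070_n.jpg,daisy"

# The same substitution used above swaps the public bucket for yours.
echo "$row" | sed "s:cloud-ml-data:${BUCKET}:"
```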

Create your dataset

Visit the AutoML Vision UI to begin the process of creating your dataset and training your model.

When prompted, make sure to select the project that you used for your Cloud Storage bucket.

  1. From the AutoML Vision page, click New Dataset:

    New dataset button in console

  2. Specify a name for this dataset. Click the + sign to continue.

    New dataset name field

  3. Specify the Cloud Storage URI of your CSV file. For this quickstart, the CSV file is at gs://your-project-123-vcm/csv/all_data.csv. Make sure to replace your-project-123 with your specific project ID.

    • Be sure to check Enable multi-label classification if you have images that are labeled with more than one label.
  4. Click Create Dataset. The import process takes a few minutes. When it completes, you are taken to a page with details on all of the images identified for your dataset, both labeled and unlabeled. You can filter images by label by selecting a label under Filter labels. If you are using the flower dataset, you will see a warning alert notifying you of repeated images or of images with multiple labels (if multi-label is not enabled).

    Filtering by label example

    • You can add additional images and update labels for new and existing images after you have imported a CSV file.
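For reference, each row of the import CSV is a Cloud Storage image URI followed by one or more labels; rows with more than one label require multi-label classification to be enabled. The rows below are illustrative, not real files from the dataset:

```shell
# Write a tiny illustrative CSV (paths and labels are made up).
cat > sample_import.csv <<'EOF'
gs://example-project-vcm/img/flower_photos/daisy/img_001.jpg,daisy
gs://example-project-vcm/img/flower_photos/mixed/img_002.jpg,daisy,sunflowers
EOF

# Report which rows carry more than one label.
awk -F, 'NF > 2 { print $1 " has " (NF-1) " labels" }' sample_import.csv
```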

Train your model

  1. Once your dataset has been created and processed, select the Train tab to initiate model training.

    select train tab

  2. Select TRAIN NEW MODEL to continue.

    Train new model option

    This will open a pop-up window with training options.

  3. In the training pop-up window, select Edge as the Model type, then select the model optimized for Best trade-off, and set your node hour budget.

    Train Edge model

  4. Select "Start training" to begin model training.

    Training is initiated for your model and should take about an hour. Training might finish earlier than the node hour budget you selected. The service will email you once training has completed, or if any errors occur.

Once training is complete, you can refer to evaluation metrics, as well as test and use the model.

Select the Evaluate tab to get more details on F1, Precision, and Recall scores.

Select an individual label under Filter labels to get details on true positives, false negatives and false positives.
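The F1 score shown on the Evaluate tab is the harmonic mean of precision and recall. A quick sketch of the formula with illustrative numbers (not real model output):

```shell
# Illustrative precision and recall values.
precision=0.90
recall=0.80

# F1 = 2 * P * R / (P + R)
awk -v p="$precision" -v r="$recall" 'BEGIN { printf "%.3f\n", 2*p*r/(p+r) }'
```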

Make a Prediction

Once training is complete, your model is automatically deployed.

Select the Predict tab for instructions on sending an image to your model for a prediction. You can also refer to Annotating images for examples.
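If you call the deployed model over REST rather than through the UI, the request body carries the image as base64-encoded bytes. The payload shape below is a sketch of the AutoML Vision predict request as I understand it; confirm it against the sample request shown on the Predict tab. A placeholder string stands in for real image bytes:

```shell
# Base64-encode the image (placeholder bytes here; use your image file in practice).
IMAGE_B64=$(printf 'image-bytes' | base64)

# Build the JSON request body for the predict endpoint.
cat <<EOF
{
  "payload": {
    "image": { "imageBytes": "${IMAGE_B64}" }
  }
}
EOF
```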

Export and Deploy the Edge model

The final step in using an AutoML Vision Edge model is to export (optimize and download) and deploy (use) your model.

There are multiple ways you can export and deploy your models to use for prediction on Edge devices.

In this quickstart you will use TensorFlow Lite (TF Lite) as an example. TF Lite models are easy to use and cover a wide set of use cases.

  1. Under Use your Edge model, select the TFLite tab.

    Export TF Lite model

  2. Select Export to export a TF Lite package into your Cloud Storage bucket. The export process typically takes several minutes.

  3. After the export completes, select the Cloud Storage link to open the destination folder in the Google Cloud Platform Console.

In the Google Cloud Storage destination location you will find a folder named with the timestamp and model format, containing the following files:

  • a TF Lite model file (model.tflite)
  • a label dictionary file (dict.txt)
  • a metadata file (tflite_metadata.json)

What's Next

With these files, you can follow tutorials to deploy on Android devices, iOS devices, or Raspberry Pi 3.

Other options to use the model:

  • You can also export the model as TensorFlow SavedModel and use it with a Docker container in the Container tab. See the container tutorial on how to export to a container.
  • You can export the model for running on Edge TPU in the Edge devices tab. You can follow this Edge TPU official documentation to use the model.
  • You can check Format model for Core ML (iOS / macOS) before training to train a Core ML supported model. After training, you can export the model in the Core ML tab and follow the Core ML official documentation.