This quickstart walks you through the process of:

  • Copying a set of images into Google Cloud Storage.
  • Creating a CSV listing the images and their labels.
  • Using AutoML Vision to create your dataset, and train and deploy your model.

Before you begin

Set up your project

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. In the GCP Console, go to the Manage resources page and select or create a new project.

    Go to the Manage resources page

  3. Make sure that billing is enabled for your project.

    Learn how to enable billing

  4. Enable the Cloud AutoML and Storage APIs.

    Enable the APIs

  5. Install the gcloud command line tool.
  6. Follow the instructions to create a service account and download a key file for that account.
  7. Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path to the service account key file that you downloaded when you created the service account.
  8. Set the PROJECT_ID environment variable to your Project ID.
    export PROJECT_ID=your-project-id
    The AutoML API calls and resource names include your Project ID in them. The PROJECT_ID environment variable provides a convenient way to specify the ID.
  9. Add yourself and your service account to the AutoML Editor IAM role.
    1. Replace your-userid@your-domain with your user account.
    2. Replace service-account-name with the name of your new service account, for example service-account1@myproject.iam.gserviceaccount.com.
    gcloud auth login
    gcloud projects add-iam-policy-binding $PROJECT_ID \
       --member="user:your-userid@your-domain" \
       --role="roles/automl.editor"
    gcloud projects add-iam-policy-binding $PROJECT_ID \
       --member="serviceAccount:service-account-name" \
       --role="roles/automl.editor"
  10. Allow the AutoML service account to access your Google Cloud project resources:
    gcloud projects add-iam-policy-binding $PROJECT_ID \
      --member="serviceAccount:custom-vision@appspot.gserviceaccount.com" \
      --role="roles/ml.admin"
    gcloud projects add-iam-policy-binding $PROJECT_ID \
      --member="serviceAccount:custom-vision@appspot.gserviceaccount.com" \
      --role="roles/storage.admin"
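
Taken together, the setup above boils down to a couple of exported variables that later commands rely on. A minimal recap sketch, using placeholder values (`your-project-123` and the key file path are illustrative only):

```shell
# Placeholder values -- substitute your own project ID and key file path
export PROJECT_ID=your-project-123
export GOOGLE_APPLICATION_CREDENTIALS=$HOME/keys/service-account-key.json

# AutoML resource names embed the project ID and region, e.g.:
echo "projects/${PROJECT_ID}/locations/us-central1"
```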

Create a Cloud Storage bucket

Use Cloud Shell, a browser-based Linux command line connected to your GCP Console project, to create your Cloud Storage bucket:

  1. Open Cloud Shell.

  2. Create a Google Cloud Storage bucket. The bucket name must be in the format: project-id-vcm. The following command creates a storage bucket in the us-central1 region named project-id-vcm, using the PROJECT_ID environment variable you set earlier. For a complete list of available regions, see the Bucket Locations page.

    gsutil mb -p ${PROJECT_ID} -c regional -l us-central1 gs://${PROJECT_ID}-vcm/
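
The commands in the rest of this quickstart reference the bucket through a `${BUCKET}` environment variable, so it helps to derive it from the project ID once (`your-project-123` below is a placeholder):

```shell
# Derive the bucket name once and reuse it in later gsutil commands
export PROJECT_ID=your-project-123   # placeholder project ID
export BUCKET=${PROJECT_ID}-vcm
echo $BUCKET
```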

    Recommended file structure for your Cloud Storage files:

      gs://project-id-vcm/
        csv/   (CSV files listing image URIs and labels)
        img/   (image files)

Copy the sample images into your bucket

Next, copy the flower dataset used in this TensorFlow blog post. The images are stored in a public Cloud Storage bucket, so you can copy them directly from there to your own bucket.

  1. In your Cloud Shell session, enter:

    gsutil -m cp -R gs://cloud-ml-data/img/flower_photos/ gs://${BUCKET}/img/

    The file copying takes about 20 minutes to complete.

Create the CSV file

The sample dataset contains a CSV file with all of the image locations and the labels for each image. You'll use that to create your own CSV file:

  1. Update the CSV file to point to the files in your own bucket:

    gsutil cat gs://${BUCKET}/img/flower_photos/all_data.csv | sed "s:cloud-ml-data:${BUCKET}:" > all_data.csv
  2. Copy the CSV file into your bucket:

    gsutil cp all_data.csv gs://${BUCKET}/csv/
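
To see what the sed substitution above does, here is the same transformation applied to a single hypothetical CSV row (the file name and bucket are placeholders):

```shell
# One sample row, rewritten from the public bucket to your own bucket
BUCKET=your-project-123-vcm   # placeholder bucket name
echo "gs://cloud-ml-data/img/flower_photos/daisy/example.jpg,daisy" \
  | sed "s:cloud-ml-data:${BUCKET}:"
# → gs://your-project-123-vcm/img/flower_photos/daisy/example.jpg,daisy
```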

Create your dataset

Visit the AutoML Vision UI to begin the process of creating your dataset and training your model.

When prompted, make sure to select the project that you used for your Cloud Storage bucket.

  1. From the AutoML Vision page, click NEW DATASET:

  2. Specify a name for this dataset. Click the + sign to continue.

  3. Specify the Cloud Storage URI of your CSV file. For this quickstart, the CSV file is at gs://your-project-123-vcm/csv/all_data.csv. Make sure to replace your-project-123 with your specific project ID.

    • Be sure to check Enable multi-label classification if you have images that are labeled with more than one label.
  4. Click CREATE DATASET. The import process takes a few minutes. When it completes, you are taken to the next page, which shows details on all of the images identified for your dataset, both labeled and unlabeled. You can filter images by label by selecting a label under Filter labels. If you are using the flower dataset, you will see a warning alert notifying you of repeated images or of images with multiple labels (if multi-label is not enabled).

    • You can add additional images and update labels for new and existing images after you have imported a CSV file.
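
For reference, rows in the import CSV pair an image URI with its label; when multi-label classification is enabled, a row may carry several comma-separated labels. A hypothetical sketch of a few rows (the paths and labels are placeholders, not the real dataset contents):

```shell
# Print a few illustrative CSV rows (single-label and multi-label)
cat <<'EOF'
gs://your-project-123-vcm/img/flower_photos/daisy/example1.jpg,daisy
gs://your-project-123-vcm/img/flower_photos/roses/example2.jpg,roses
gs://your-project-123-vcm/img/flower_photos/tulips/example3.jpg,tulips,red
EOF
```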

Train your model

Once your dataset has been created and processed, click the Train tab to initiate model training.

Click START TRAINING to continue.

Training is initiated for your model. Training your model should take about 10 minutes for this dataset. The service will email you once training has completed, or if any errors occur.

Once training is complete, your model is automatically deployed.

You can click the Evaluate tab to get more details on F1, Precision, and Recall scores. Click on each label under Filter labels to get details on true positives, false negatives and false positives.
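
The scores on the Evaluate tab relate to the per-label counts in the standard way: precision = TP / (TP + FP), recall = TP / (TP + FN), and F1 is their harmonic mean. A toy calculation with assumed counts (30 true positives, 10 false positives, 5 false negatives):

```shell
# Assumed toy counts for one label -- not real evaluation output
TP=30; FP=10; FN=5
precision=$(awk -v tp=$TP -v fp=$FP 'BEGIN { printf "%.3f", tp/(tp+fp) }')
recall=$(awk -v tp=$TP -v fn=$FN 'BEGIN { printf "%.3f", tp/(tp+fn) }')
f1=$(awk -v p=$precision -v r=$recall 'BEGIN { printf "%.3f", 2*p*r/(p+r) }')
echo "precision=$precision recall=$recall f1=$f1"
# → precision=0.750 recall=0.857 f1=0.800
```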

Make a Prediction

Click the Predict tab for instructions on sending an image to your model for a prediction. You can also refer to Annotating images for examples.
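
The Predict tab has the authoritative instructions, but the request body for an online prediction can be sketched locally. This assumes the JSON shape used by the AutoML Vision REST API, where imageBytes carries the base64-encoded image; the file below is a stand-in created purely for illustration:

```shell
# Build a prediction request body from a local image
# (a fake file is created here so the example is self-contained)
printf 'fake-image-bytes' > /tmp/flower.jpg
IMG_B64=$(base64 < /tmp/flower.jpg | tr -d '\n')   # inline base64, no line wraps
cat > /tmp/request.json <<EOF
{ "payload": { "image": { "imageBytes": "${IMG_B64}" } } }
EOF
cat /tmp/request.json
```

The resulting body would then be POSTed to your model's :predict endpoint with an authenticated request.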
