Deploy Edge to Android tutorial

What you will build

In this tutorial you will download an exported custom TensorFlow Lite model created using AutoML Vision Edge. You will then run a pre-made Android app that uses the model to identify images of flowers.

End product mobile screenshot
Image credit: Felipe Venâncio, "from my mother's garden" (CC BY 2.0, image shown in app).

Objectives

In this introductory, end-to-end walkthrough you will use code to:

  • Run a pre-trained model in an Android app using the TFLite interpreter.

Before you begin

Train a model from AutoML Vision Edge

Before you can deploy a model to an edge device, you must first train and export a TF Lite model from AutoML Vision Edge by following the Edge device model quickstart.

After completing the quickstart you should have exported trained model files: a TF Lite file, a label file, and a metadata file, as shown below.

cloud storage tf lite model files

Install TensorFlow

Before you begin the tutorial you need to install two pieces of software: TensorFlow and the Pillow imaging library.

If you have a working Python installation, run the following commands to install them:

pip install --upgrade "tensorflow==1.7.*"
pip install Pillow

Reference the official TensorFlow documentation if you encounter issues with this process.

Clone the Git repository

Using the command line, clone the Git repository with the following command:

git clone https://github.com/googlecodelabs/tensorflow-for-poets-2

Navigate to the local clone of the repository (the tensorflow-for-poets-2 directory). You will run all of the following code samples from this directory:

cd tensorflow-for-poets-2

Set up the Android app

Install Android Studio

If necessary, install Android Studio 3.0+ locally.

Open the project with Android Studio

Open the project in Android Studio by taking the following steps:

  1. Open Android Studio Android Studio start icon. After it loads select Android Studio open project icon "Open an existing Android Studio project" from this popup:

    Android Studio open project popup

  2. In the file selector, choose tensorflow-for-poets-2/android/tflite from your working directory.

  3. The first time you open the project you will get a "Gradle Sync" popup asking whether to use the Gradle wrapper. Select "OK".

    Android Studio Gradle Sync popup

Test run the app

The app can run either on a real Android device or in the Android Studio emulator.

Set up an Android device

You can't load the app from Android Studio onto your phone unless you activate "developer mode" and "USB debugging".

To complete this one-time setup process, follow these instructions.

Set up the emulator with camera access (optional)

If you choose to use an emulator instead of a real Android device, Android Studio makes setting up the emulator easy.

Since this app uses the camera, set up the emulator's camera to use your computer's camera instead of the default test pattern.

To set up the emulator's camera, create a new device in the "Android Virtual Device Manager", which you can access with this button Virtual device manager icon. From the main AVD Manager page, select "Create Virtual Device":

Android Studio create a virtual device option

Then on the "Verify Configuration" page, the last page of the virtual device setup, select "Show Advanced Settings":

Android Studio Show Advanced Settings option

With the advanced settings displayed, you can set both camera sources to use the host computer's webcam:

Android Studio choose camera source option

Run the original app

Before making any changes to the app, run the version that comes with the repository.

To start the build and install process, run a Gradle sync.

Gradle sync icon

After running a Gradle sync, select play Android Studio play icon.

After selecting the play button you will need to select your device from this popup:

select device popup window

After you select your device, you must allow the TensorFlow demo to access your camera and files:

allow camera access windows

Now that the app is installed, click the app icon Android Studio app icon to launch it. This version of the app uses the standard MobileNet, pre-trained on the 1000 ImageNet categories.

It should look something like this:

run test app

Run the customized app

The default app setup classifies images into one of the 1000 ImageNet classes using the standard MobileNet without retraining.

Now make modifications so the app will use a model created by AutoML Vision Edge for your custom image categories.

Add your model files to the project

The demo project is configured to search for a graph.lite file and a labels.txt file in the android/tflite/app/src/main/assets/ directory.

Replace the two original files with your versions by running the following commands:

cp [Downloads]/model.tflite android/tflite/app/src/main/assets/graph.lite
cp [Downloads]/dict.txt android/tflite/app/src/main/assets/labels.txt

Modify your app

This app uses a float model, while the model created by AutoML Vision Edge is a quantized one. You will make some code changes to enable the app to use the model.

Change the data type of labelProbArray and filterLabelProbArray from float to byte, both in the class member definitions and in the ImageClassifier initializer:

private byte[][] labelProbArray = null;
private byte[][] filterLabelProbArray = null;

labelProbArray = new byte[1][labelList.size()];
filterLabelProbArray = new byte[FILTER_STAGES][labelList.size()];

Allocate imgData as an int8 (byte) buffer in the ImageClassifier initializer. Because each pixel channel now takes one byte instead of a four-byte float, the 4 * factor is dropped from the allocation size:

    imgData =
        ByteBuffer.allocateDirect(
            DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);

Cast the labelProbArray values from int8 to float in printTopKLabels():

  private String printTopKLabels() {
    for (int i = 0; i < labelList.size(); ++i) {
      sortedLabels.add(
          new AbstractMap.SimpleEntry<>(labelList.get(i), (float)labelProbArray[0][i]));
      if (sortedLabels.size() > RESULTS_TO_SHOW) {
        sortedLabels.poll();
      }
    }
    String textToShow = "";
    final int size = sortedLabels.size();
    for (int i = 0; i < size; ++i) {
      Map.Entry<String, Float> label = sortedLabels.poll();
      textToShow = String.format("\n%s: %4.2f",label.getKey(),label.getValue()) + textToShow;
    }
    return textToShow;
  }

Run your app

In Android Studio run a Gradle sync so the build system can find your files:

Gradle sync icon

After running a Gradle sync, select play Android Studio play icon to start the build and install process as before.

It should look something like this:

End product mobile screenshot
Image credit: Felipe Venâncio, "from my mother's garden" (CC BY 2.0, image shown in app).

You can hold the power and volume-down buttons together to take a screenshot.

Test the updated app by pointing the camera at different pictures of flowers to see if they are correctly classified.

How does it work?

Now that you have the app running, examine the TensorFlow Lite-specific code.

TensorFlow-Android AAR

This app uses a pre-compiled TFLite Android Archive (AAR). This AAR is hosted on jcenter.

The following lines in the module's build.gradle file add the newest version of the AAR from the TensorFlow Bintray Maven repository to the project.

build.gradle

repositories {
    maven {
        url 'https://google.bintray.com/tensorflow'
    }
}

dependencies {
    // ...
    compile 'org.tensorflow:tensorflow-lite:+'
}

You use the following block to instruct the Android Asset Packaging Tool that .lite or .tflite assets should not be compressed. This is important, as the .lite file will be memory-mapped, and that will not work when the file is compressed.

build.gradle

android {
    aaptOptions {
        noCompress "tflite"
        noCompress "lite"
    }
}

Using the TFLite Java API

The code that interfaces with TFLite is contained in ImageClassifier.java.

Setup

The first block of interest is the constructor for the ImageClassifier:

ImageClassifier.java

ImageClassifier(Activity activity) throws IOException {
    tflite = new Interpreter(loadModelFile(activity));
    labelList = loadLabelList(activity);
    imgData =
        ByteBuffer.allocateDirect(
            4 * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
    imgData.order(ByteOrder.nativeOrder());
    labelProbArray = new float[1][labelList.size()];
    Log.d(TAG, "Created a Tensorflow Lite Image Classifier.");
}

There are a few lines of particular interest.

The following line creates the TFLite interpreter:

ImageClassifier.java

tflite = new Interpreter(loadModelFile(activity));

This line instantiates a TFLite interpreter. The interpreter works similarly to a tf.Session (for those familiar with TensorFlow, outside of TFLite). You pass the interpreter a MappedByteBuffer containing the model. The local function loadModelFile creates a MappedByteBuffer containing the activity's graph.lite asset file.
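
The loadModelFile helper is not shown here; the following is a minimal sketch of how such a helper can memory-map the graph.lite asset. It is illustrative only, and the exact code in the repository may differ in details:

// Sketch: memory-map the graph.lite asset into a MappedByteBuffer.
// Assumes imports for AssetFileDescriptor, FileInputStream, MappedByteBuffer,
// and FileChannel; details may differ from the repository's loadModelFile.
private MappedByteBuffer loadModelFile(Activity activity) throws IOException {
    AssetFileDescriptor fileDescriptor = activity.getAssets().openFd("graph.lite");
    FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}

Memory-mapping keeps the model out of the Java heap and lets the interpreter read the weights directly, which is why the asset must not be compressed (see the aaptOptions block above).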

The following lines create the input data buffer:

ImageClassifier.java

imgData = ByteBuffer.allocateDirect(
    4 * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);

This byte buffer is sized to contain the image data once converted to float. The interpreter can accept float arrays directly as input, but the ByteBuffer is more efficient as it avoids extra copies in the interpreter.
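
To make this concrete, the following sketch shows roughly how a convertBitmapToByteBuffer method can fill imgData with normalized float values. The 128f mean and scale constants are an assumption based on common MobileNet float preprocessing; the repository's implementation may differ:

// Sketch: write the bitmap's pixels into imgData as normalized floats.
// Assumes the bitmap has already been scaled to DIM_IMG_SIZE_X x DIM_IMG_SIZE_Y;
// the 128f constants are assumed preprocessing values, not taken from the repository.
private void convertBitmapToByteBuffer(Bitmap bitmap) {
    imgData.rewind();
    int[] intValues = new int[DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y];
    bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
    for (int pixel : intValues) {
        // Extract the R, G, and B channels and normalize each to roughly [-1, 1].
        imgData.putFloat((((pixel >> 16) & 0xFF) - 128f) / 128f);
        imgData.putFloat((((pixel >> 8) & 0xFF) - 128f) / 128f);
        imgData.putFloat(((pixel & 0xFF) - 128f) / 128f);
    }
}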

The following lines load the label list and create the output buffer:

labelList = loadLabelList(activity);
//...
labelProbArray = new float[1][labelList.size()];

The output buffer is a float array with one element for each label; the model writes its output probabilities into this array.
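
For reference, the loadLabelList helper can be implemented by reading labels.txt from the app's assets line by line, as in the following sketch. The asset name matches the labels.txt file you copied earlier; error handling in the repository code may differ:

// Sketch: read labels.txt from the assets into an ordered list of label names.
// Illustrative only; the repository's loadLabelList may differ in details.
private List<String> loadLabelList(Activity activity) throws IOException {
    List<String> labelList = new ArrayList<>();
    BufferedReader reader =
        new BufferedReader(new InputStreamReader(activity.getAssets().open("labels.txt")));
    String line;
    while ((line = reader.readLine()) != null) {
        labelList.add(line);
    }
    reader.close();
    return labelList;
}

The order of the lines in labels.txt therefore determines which index in labelProbArray corresponds to which label.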

Run the model

The second block of interest is the classifyFrame method. This method takes a Bitmap as input, runs the model, and returns the text to print in the app.

ImageClassifier.java

String classifyFrame(Bitmap bitmap) {
 // ...
 convertBitmapToByteBuffer(bitmap);
 // ...
 tflite.run(imgData, labelProbArray);
 // ...
 String textToShow = printTopKLabels();
 // ...
}

This method does three things. First, it converts and copies the input Bitmap to the imgData ByteBuffer for input to the model. Then it calls the interpreter's run method, passing the input buffer and the output array as arguments. The interpreter sets the values in the output array to the probability calculated for each class. The input and output nodes are defined by the arguments to the toco conversion step that created the .lite model file earlier.
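
To see how this fits into the app, a caller such as the demo's camera fragment grabs a frame as a Bitmap and displays whatever classifyFrame returns. The snippet below is purely illustrative; textureView, classifier, and showToast are assumed names and not necessarily those used in the repository:

// Hypothetical call site: capture a frame, classify it, and show the result.
// 224 x 224 is the input size used by the standard MobileNet model.
Bitmap bitmap = textureView.getBitmap(224, 224);
String textToShow = classifier.classifyFrame(bitmap);
bitmap.recycle();
showToast(textToShow);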

What's next?

You've now completed a walkthrough of an Android flower classification app using an Edge model. You ran the original image classification app, modified it to use your custom Edge model, and then examined the TensorFlow Lite-specific code to understand the underlying functionality.