What you will build
In this tutorial you will download an exported custom TensorFlow Lite model created using AutoML Vision Edge. You will then run a pre-made Android app that uses the model to identify images of flowers.
Objectives
In this introductory, end-to-end walkthrough you will use code to:
- Run a pre-trained model in an Android app using the TFLite interpreter.
Before you begin
Train a model from AutoML Vision Edge
Before you can deploy a model to an edge device you must train and export a TF Lite model from AutoML Vision Edge following the Edge device model quickstart.
After completing the quickstart you should have exported trained model files: a TF Lite file, a label file, and a metadata file, as shown below.
Install TensorFlow
Before you begin the tutorial you need to install several pieces of software:
- Install TensorFlow 1.7
- Install Pillow
If you have a working Python installation, run the following commands to download this software:
pip install --upgrade "tensorflow==1.7.*"
pip install PILLOW
Reference the official TensorFlow documentation if you encounter issues with this process.
Clone the Git repository
Using the command line, clone the Git repository with the following command:
git clone https://github.com/googlecodelabs/tensorflow-for-poets-2
Navigate to the directory of the local clone of the repository (the tensorflow-for-poets-2 directory). You will run all of the following code samples from this directory:
cd tensorflow-for-poets-2
Set up the Android app
Install Android Studio
If necessary, install Android Studio 3.0+ locally.
Open the project with Android Studio
Open a project with Android Studio by taking the following steps:
Open Android Studio. After it loads, select "Open an existing Android Studio project" from this popup:
In the file selector, choose tensorflow-for-poets-2/android/tflite from your working directory. The first time you open the project you will get a "Gradle Sync" popup asking about using the Gradle wrapper. Select "OK".
Test run the app
The app can either run on a real Android device or in the Android Studio Emulator.
Set up an Android device
You can't load the app from Android Studio onto your phone unless you enable "developer mode" and "USB debugging".
To complete this one-time setup process, follow these instructions.
Set up the emulator with camera access (optional)
If you choose to use an emulator instead of a real Android device, Android Studio makes setting up the emulator easy.
Since this app uses the camera, set up the emulator's camera to use your computer's camera instead of the default test pattern.
To set up the emulator's camera you need to create a new device in the "Android Virtual Device Manager", which you can open from the Android Studio toolbar. From the main AVD Manager page, select "Create Virtual Device":
Then on the "Verify Configuration" page, the last page of the virtual device setup, select "Show Advanced Settings":
With the advanced settings displayed, you can set both camera sources to use the host computer's webcam:
Run the original app
Before making any changes to the app, run the version that comes with the repository.
To start the build and install process, run a Gradle sync.
After running a Gradle sync, select the play button.
After selecting the play button you will need to select your device from this popup:
After you select your device you must allow the Tensorflow Demo to access your camera and files:
Now that the app is installed, click the app icon to launch it. This version of the app uses the standard MobileNet, pre-trained on the 1000 ImageNet categories.
It should look something like this:
Run the customized app
The default app setup classifies images into one of the 1000 ImageNet classes using the standard MobileNet without retraining.
Now make modifications so the app will use a model created by AutoML Vision Edge for your custom image categories.
Add your model files to the project
The demo project is configured to search for a graph.lite and a labels.txt file in the android/tflite/app/src/main/assets/ directory.
Replace the two original files with your versions with the following commands:
cp [Downloads]/model.tflite android/tflite/app/src/main/assets/graph.lite
cp [Downloads]/dict.txt android/tflite/app/src/main/assets/labels.txt
Modify your app
This app uses a float model, while the model created by AutoML Vision Edge is a quantized one. You will make some code changes to enable the app to use the model.
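The difference matters because a quantized model stores activations as 8-bit integers rather than 32-bit floats. As a rough standalone illustration (this is not code from the app; the scale and zero point below are assumed typical values for a probability output, where real = scale * (q - zero_point)):

```java
// Sketch: mapping a quantized model's 8-bit outputs back to floats.
// Assumes a common output quantization for probabilities:
// scale = 1/255, zero point = 0.
public class QuantizationSketch {
    static final float SCALE = 1.0f / 255.0f;
    static final int ZERO_POINT = 0;

    // Java bytes are signed, so mask with 0xff to recover the
    // unsigned 0..255 value before scaling.
    public static float dequantize(byte q) {
        int unsigned = q & 0xff;
        return (unsigned - ZERO_POINT) * SCALE;
    }
}
```

This is why the output arrays switch from float to byte below: the interpreter writes raw quantized values, and the app converts them when displaying results.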
Change the data type of labelProbArray and filterLabelProbArray from float to byte, in the class member definitions and in the ImageClassifier initializer.
private byte[][] labelProbArray = null;
private byte[][] filterLabelProbArray = null;
labelProbArray = new byte[1][labelList.size()];
filterLabelProbArray = new byte[FILTER_STAGES][labelList.size()];
Allocate imgData based on the int8 type, in the ImageClassifier initializer.
imgData = ByteBuffer.allocateDirect(
    DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
Convert the values of labelProbArray from byte to float, in printTopKLabels():
private String printTopKLabels() {
  for (int i = 0; i < labelList.size(); ++i) {
    // Mask with 0xff so the signed Java byte is read as an unsigned
    // quantized value before converting to float.
    sortedLabels.add(
        new AbstractMap.SimpleEntry<>(labelList.get(i), (float) (labelProbArray[0][i] & 0xff)));
    if (sortedLabels.size() > RESULTS_TO_SHOW) {
      sortedLabels.poll();
    }
  }
  String textToShow = "";
  final int size = sortedLabels.size();
  for (int i = 0; i < size; ++i) {
    Map.Entry<String, Float> label = sortedLabels.poll();
    textToShow = String.format("\n%s: %4.2f", label.getKey(), label.getValue()) + textToShow;
  }
  return textToShow;
}
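The top-K selection in printTopKLabels() relies on a bounded min-heap (sortedLabels is a PriorityQueue in the app). The same idea can be sketched outside the app; the class and method names below are illustrative, not part of ImageClassifier:

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

// Standalone sketch of the top-K selection used in printTopKLabels():
// a min-heap ordered by probability keeps only the k highest-scoring
// labels; poll() evicts the current minimum whenever the heap overflows.
public class TopKSketch {
    public static List<String> topK(String[] labels, float[] probs, int k) {
        PriorityQueue<Map.Entry<String, Float>> heap =
            new PriorityQueue<>(k + 1, (a, b) -> Float.compare(a.getValue(), b.getValue()));
        for (int i = 0; i < labels.length; i++) {
            heap.add(new AbstractMap.SimpleEntry<>(labels[i], probs[i]));
            if (heap.size() > k) {
                heap.poll(); // drop the smallest entry
            }
        }
        // Drain the heap; inserting at index 0 yields highest-first order.
        List<String> result = new ArrayList<>();
        while (!heap.isEmpty()) {
            result.add(0, heap.poll().getKey());
        }
        return result;
    }
}
```

Prepending each drained entry reverses the heap's smallest-first order, which is why the app builds its display string back-to-front in the same way.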
Run your app
In Android Studio run a Gradle sync so the build system can find your files:
After running a Gradle sync, select the play button to start the build and install process as before.
It should look something like this:
You can hold the power and volume-down buttons together to take a screenshot.
Test the updated app by pointing the camera at different pictures of flowers to see if they are correctly classified.
How does it work?
Now that you have the app running, examine the TensorFlow Lite-specific code.
TensorFlow-Android AAR
This app uses a pre-compiled TFLite Android Archive (AAR). This AAR is hosted on jcenter.
The following lines in the module's build.gradle file include the newest version of the AAR from the TensorFlow bintray maven repository in the project.
repositories {
    maven {
        url 'https://google.bintray.com/tensorflow'
    }
}

dependencies {
    // ...
    compile 'org.tensorflow:tensorflow-lite:+'
}
You use the following block to instruct the Android Asset Packaging Tool that .lite or .tflite assets should not be compressed. This is important, as the .lite file will be memory-mapped, and that will not work when the file is compressed.
android {
    aaptOptions {
        noCompress "tflite"
        noCompress "lite"
    }
}
Using the TFLite Java API
The code that interfaces with TFLite is all contained in ImageClassifier.java.
Setup
The first block of interest is the constructor for the ImageClassifier:
ImageClassifier(Activity activity) throws IOException {
    tflite = new Interpreter(loadModelFile(activity));
    labelList = loadLabelList(activity);
    imgData = ByteBuffer.allocateDirect(
        4 * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
    imgData.order(ByteOrder.nativeOrder());
    labelProbArray = new float[1][labelList.size()];
    Log.d(TAG, "Created a Tensorflow Lite Image Classifier.");
}
There are a few lines of particular interest.
The following line creates the TFLite interpreter:
tflite = new Interpreter(loadModelFile(activity));
This line instantiates a TFLite interpreter. The interpreter works similarly to a tf.Session (for those familiar with TensorFlow outside of TFLite). You pass the interpreter a MappedByteBuffer containing the model.
The local function loadModelFile creates a MappedByteBuffer containing the activity's graph.lite asset file.
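loadModelFile itself uses Android asset APIs, but the memory-mapping idea can be sketched outside Android with java.nio. The class and file below are illustrative, not part of the app; on Android the file comes from an AssetFileDescriptor rather than a Path:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Non-Android sketch of what loadModelFile does: map a file's bytes
// straight into memory as a read-only MappedByteBuffer, rather than
// reading them through the heap.
public class MmapSketch {
    public static MappedByteBuffer map(Path modelPath) throws IOException {
        try (FileChannel channel = FileChannel.open(modelPath, StandardOpenOption.READ)) {
            // The mapped buffer remains valid after the channel is closed.
            return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
        }
    }
}
```

Mapping only works if the bytes on disk are the model itself, which is why the aaptOptions block above must keep the asset uncompressed.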
The following lines create the input data buffer:
imgData = ByteBuffer.allocateDirect(
    4 * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
This byte buffer is sized to contain the image data once converted to float.
The interpreter can accept float arrays directly as input, but the ByteBuffer
is more efficient as it avoids extra copies in the interpreter.
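The sizing arithmetic can be checked in isolation. This sketch assumes the demo's MobileNet defaults (a single 224x224 RGB image at 4 bytes per float value); the class name is illustrative:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch of the imgData sizing arithmetic. A quantized model would
// drop the leading factor of 4, since each channel value is one byte.
public class BufferSizeSketch {
    static final int DIM_BATCH_SIZE = 1;
    static final int DIM_IMG_SIZE_X = 224;
    static final int DIM_IMG_SIZE_Y = 224;
    static final int DIM_PIXEL_SIZE = 3; // R, G, B

    public static ByteBuffer allocateFloatInput() {
        ByteBuffer buf = ByteBuffer.allocateDirect(
            4 * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
        buf.order(ByteOrder.nativeOrder()); // match the interpreter's byte layout
        return buf;
    }
}
```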
The following lines load the label list and create the output buffer:
labelList = loadLabelList(activity);
//...
labelProbArray = new float[1][labelList.size()];
The output buffer is a float array with one element for each label, into which the model writes the output probabilities.
Run the model
The second block of interest is the classifyFrame method. This method takes a Bitmap as input, runs the model, and returns the text to print in the app.
String classifyFrame(Bitmap bitmap) {
    // ...
    convertBitmapToByteBuffer(bitmap);
    // ...
    tflite.run(imgData, labelProbArray);
    // ...
    String textToShow = printTopKLabels();
    // ...
}
This method does three things. First, the method converts and copies the input Bitmap to the imgData ByteBuffer for input to the model.
Then, the method calls the interpreter's run method, passing the input buffer
and the output array as arguments. The interpreter sets the values in
the output array to the probability calculated for each class.
The input and output nodes are defined by the arguments to the toco conversion step that created the .lite model file earlier.
What Next?
You've now completed a walkthrough of an Android flower classification app using an Edge model. You tested an image classification app before making modifications to it and getting sample annotations. You then examined TensorFlow Lite-specific code to understand the underlying functionality.
- Learn more about TFLite from the official documentation and the code repository.
- Try the quantized version of this demo app, for a more powerful model in a smaller package.
- Try some other TFLite ready models including a speech hot-word detector and an on-device version of smart-reply.
- Learn more about TensorFlow in general with TensorFlow's getting started documentation.