Building an ML Android app for custom object detection or custom image classification using TFLite
What is TensorFlow Lite?
TensorFlow Lite (TFLite) is the lightweight version of Google’s open-source machine learning framework, TensorFlow. It is designed to run machine learning models on mobile, embedded, and IoT devices. It works with a huge range of hardware, from tiny microcontrollers to powerful mobile phones, and it enables on-device machine learning inference with low latency and a small binary size.
TensorFlow Lite consists of two main components:
- The TensorFlow Lite interpreter, which runs specially optimized models on many different hardware types, including mobile phones, embedded Linux devices, and microcontrollers.
- The TensorFlow Lite converter, which converts TensorFlow models into an efficient form for use by the interpreter, and can introduce optimizations to improve binary size and performance.
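The converter step can be sketched in a few lines of Python. This is a minimal illustration assuming TensorFlow 2.x is installed; the tiny `tf.function` here is a stand-in for a real trained model (in practice you would convert your own SavedModel or Keras model):

```python
import tensorflow as tf

# A toy TF function standing in for a trained model
# (assumption for illustration only).
@tf.function(input_signature=[tf.TensorSpec([1, 3], tf.float32)])
def model_fn(x):
    return tf.nn.relu(x)

# Convert the model to the TFLite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [model_fn.get_concrete_function()])
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimizations
tflite_model = converter.convert()

# Save the flatbuffer for deployment on-device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `.tflite` file is what gets bundled into the mobile app and executed by the interpreter.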
Machine learning at the edge
TensorFlow Lite is designed to make it easy to perform machine learning on devices, “at the edge” of the network, instead of sending data back and forth to a server. For developers, performing machine learning on-device can help improve:
- Latency: there’s no round-trip to a server
- Privacy: no data needs to leave the device
- Connectivity: an Internet connection isn’t required
- Power consumption: network connections are power-hungry
To read more on TFLite, go to its official page here.
How to use TFLite to build an Android app for custom object detection?
TFLite models can be deployed on mobile and edge devices, such as Android and iOS phones, Raspberry Pi, and other IoT devices. We can build apps with machine learning modules using the TFLite model. This is done in 3 steps:
- First, train a machine learning model for object detection, image classification, text recognition, speech recognition, gesture recognition, etc.
- Second, convert the trained model to a TFLite model.
- Finally, deploy the TFLite model on a mobile device.
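The three steps above can be sketched end-to-end in Python. A toy model stands in for a real trained detector here, and the `tf.lite.Interpreter` call simulates the on-device part: it is the same runtime API the Android app invokes (via the TFLite Java/Kotlin bindings):

```python
import numpy as np
import tensorflow as tf

# Step 1 (stand-in): a toy "trained model" -- in practice this would be
# a real object detection or image classification model you trained.
@tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
def toy_model(x):
    return tf.reduce_sum(x, axis=1)

# Step 2: convert the trained model to a TFLite model.
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [toy_model.get_concrete_function()])
tflite_bytes = converter.convert()

# Step 3 (simulated on desktop): run inference with the TFLite interpreter,
# mirroring what the deployed mobile app does with the same flatbuffer.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.ones((1, 4), dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
print(result)  # sum of four ones -> [4.]
```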
NOTE: You can choose any model from the TF1 & TF2 model zoos for training here. Moreover, you can also download pre-trained TFLite models from the TensorFlow Hub. To learn more about TensorFlow and TensorFlow Lite, go to its official page here.
We can use either the TF1 or the TF2 model zoo to select a model for training. TensorFlow is working on adding more TFLite support and functionality for TensorFlow 2.
Currently, TFLite supports only SSD models for object detection (excluding EfficientDet).
IMAGE CLASSIFICATION vs OBJECT DETECTION
In image classification, each image has exactly one label, while in object detection, an image can have one or more labeled objects, as demonstrated below.
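The difference also shows up in the models’ output tensors. As a rough illustration (the shapes below are typical of a softmax classifier and of an SSD-style TFLite detection model, not taken from a specific model):

```python
import numpy as np

# Image classification: one probability vector per image,
# one entry per class.
num_classes = 5
classification_out = np.zeros((1, num_classes), dtype=np.float32)

# Object detection (e.g. an SSD TFLite model): several tensors per image --
# bounding boxes, class indices, and confidence scores, one row per detection.
max_detections = 10
boxes = np.zeros((1, max_detections, 4), dtype=np.float32)  # [ymin, xmin, ymax, xmax]
classes = np.zeros((1, max_detections), dtype=np.float32)
scores = np.zeros((1, max_detections), dtype=np.float32)
```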
BUILD AN OBJECT DETECTION APP
Objective: Train an ML model for custom object detection, convert it to a TFLite model using TensorFlow, and finally deploy it on mobile devices using the sample TFLite object detection app from TensorFlow’s GitHub.
Here are 2 tutorials on how to train a model and create an app for custom object detection using Google Colab, one for each TensorFlow version (TF 1.x & TF 2.x).
BUILD AN IMAGE CLASSIFICATION APP
Objective: Train an ML model for custom image classification, convert it to a TFLite model using TensorFlow, and deploy it on mobile devices using the sample TFLite image classification app from TensorFlow’s GitHub.
Here are 2 tutorials on how to train a model and create an app for custom image classification using Google Colab and Teachable Machine.