https://blog.tensorflow.org/2020/09/introducing-tensorflow-lite-task-library.html?hl=es
Posted by Lu Wang, Chen Cen, Arun Venkatesan, Khanh LeViet
Overview
Running inference with TensorFlow Lite models on mobile devices involves much more than just interacting with a model: it also requires extra code to handle complex logic, such as data conversion, pre/post-processing, loading associated files, and more.
Today, we are introducing the TensorFlow Lite Task Library, a set of powerful and easy-to-use model interfaces that handle most of the pre- and post-processing and other complex logic on your behalf. The Task Library comes with support for popular machine learning tasks, including Image Classification and Segmentation, Object Detection, and Natural Language Processing. The model interfaces are specifically designed for each task to achieve the best performance and usability: inference on pretrained and custom models for supported tasks can now be done with just five lines of code! The Task Library has been widely used in production in many Google products.
Supported ML Tasks
The TensorFlow Lite Task Library currently supports six ML tasks covering Vision and NLP use cases. Here is a brief introduction to each of them.
- ImageClassifier
Image classification is a common use of machine learning to identify what an image represents. For example, we might want to know what type of animal appears in a given picture. The ImageClassifier API supports common image processing and configuration options. It also allows displaying labels in specific supported locales and filtering results based on a label allowlist or denylist.
- ObjectDetector
Object detectors can identify which of a known set of objects might be present and provide information about their positions within a given image or video stream. The ObjectDetector API supports image processing options similar to those of ImageClassifier. Its output is a list of the top-k detected objects, each with a label, bounding box, and probability.
- ImageSegmenter
Image segmenters predict whether each pixel of an image is associated with a certain class. This is in contrast to object detection, which detects objects in rectangular regions, and image classification, which classifies the overall image. Besides image processing, ImageSegmenter also supports two types of output masks: category mask and confidence mask.
- NLClassifier & BertNLClassifier
NLClassifier classifies input text into different categories. This versatile API can be configured to load any TFLite model with text input and score output.
BertNLClassifier is similar to NLClassifier, except that this API is specially tailored for BERT-based models that require WordPiece or SentencePiece tokenization outside the TFLite model.
- BertQuestionAnswerer
BertQuestionAnswerer loads a BERT model and answers questions based on the content of a given passage. It currently supports MobileBERT and ALBERT. Similar to BertNLClassifier, BertQuestionAnswerer encapsulates complex tokenization processing for the input text. You can simply pass contexts and questions as strings to BertQuestionAnswerer.
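To make the usage pattern concrete, here is a rough sketch of what an ImageClassifier call might look like on Android. It mirrors the segmentation example later in this post; the model path, option values, and the assumption that the Task Library vision dependency is on the classpath are illustrative, not prescribed by the library.

```java
// Sketch only: assumes the Task Library vision artifact is available and
// "path/to/model.tflite" is a compatible classification model with metadata.
ImageClassifierOptions options =
    ImageClassifierOptions.builder().setMaxResults(3).build();
ImageClassifier imageClassifier =
    ImageClassifier.createFromFileAndOptions(context, "path/to/model.tflite", options);

// Classify a bitmap; each result holds candidate labels with scores.
TensorImage image = TensorImage.fromBitmap(bitmap);
List<Classifications> results = imageClassifier.classify(image);
```

The other vision and text APIs follow the same create-then-infer shape, differing mainly in their option builders and result types.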
Supported Models
The Task Library is compatible with a number of known sources of pretrained models.
The Task Library also supports custom models that fit the model compatibility requirements of each Task API. The associated files (e.g., label maps and vocab files) and processing parameters, if applicable, should be properly populated into the Model Metadata. See the documentation on the TensorFlow website for each API for more details.
Run inference with the Task Library
The Task Library works cross-platform and is supported in Java, C++ (experimental), and Swift (experimental). Running inference with the Task Library can be as easy as writing a few lines of code. For example, you can use the DeepLab v3 TFLite model to segment an airplane image (Figure 1) on Android as follows:
// Create the API from a model file and options
String modelPath = "path/to/model.tflite";
ImageSegmenterOptions options = ImageSegmenterOptions.builder().setOutputType(OutputType.CONFIDENCE_MASK).build();
ImageSegmenter imageSegmenter = ImageSegmenter.createFromFileAndOptions(context, modelPath, options);
// Segment an image
TensorImage image = TensorImage.fromBitmap(bitmap);
List<Segmentation> results = imageSegmenter.segment(image);
Figure 1. ImageSegmenter input image.
Figure 2. Segmented mask.
You can then use the colored labels and category mask in the results to construct the segmented mask image as shown in Figure 2.
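As a rough illustration of that step, here is a minimal, self-contained sketch in plain Java. The class name, color table, and array layout are made up for this example; the Task Library itself does not provide this helper. It maps a flat category mask (one class index per pixel) to ARGB pixel values, which could then be handed to Bitmap.createBitmap on Android.

```java
// Minimal sketch: convert a flat category mask into ARGB pixel values
// using a per-class color table. Colors and layout here are illustrative.
public class MaskColorizer {
    // One ARGB color per class index; index 0 = transparent background,
    // index 1 = semi-transparent red overlay.
    static final int[] CLASS_COLORS = {0x00000000, 0x80FF0000};

    static int[] colorize(int[] categoryMask) {
        int[] pixels = new int[categoryMask.length];
        for (int i = 0; i < categoryMask.length; i++) {
            pixels[i] = CLASS_COLORS[categoryMask[i]];
        }
        return pixels;
    }

    public static void main(String[] args) {
        // A tiny 2x2 mask: background, airplane, airplane, background.
        int[] mask = {0, 1, 1, 0};
        int[] pixels = colorize(mask);
        System.out.println(pixels[1] == 0x80FF0000); // prints "true"
    }
}
```

On Android, the resulting pixel array could be turned into an overlay with something like Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888) and drawn over the input image.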
Swift is supported for the three text APIs. To perform question answering in iOS with the SQuAD v1 TFLite model on a given context and question, you could run:
let modelPath = "path/to/model.tflite"
// Create the API from a model file
let mobileBertAnswerer = TFLBertQuestionAnswerer.mobilebertQuestionAnswerer(modelPath: modelPath)
let context = """
The Amazon rainforest, alternatively, the Amazon Jungle, also known in \
English as Amazonia, is a moist broadleaf tropical rainforest in the \
Amazon biome that covers most of the Amazon basin of South America. This \
basin encompasses 7,000,000 square kilometers (2,700,000 square miles), of \
which 5,500,000 square kilometers (2,100,000 square miles) are covered by \
the rainforest. This region includes territory belonging to nine nations.
"""
let question = "Where is Amazon rainforest?"
// Answer a question
let answers = mobileBertAnswerer.answer(context: context, question: question)
// answers[0].text could be "South America."
Build a Task API for your use case
If your use case is not supported by the existing Task libraries, you can leverage the Task API infrastructure to build your own custom C++/Android/iOS inference APIs. See this guide for more details.
Future Work
We will continue improving the user experience for the Task Library. Here is the roadmap for the near future:
- Improve the usability of the C++ Task Library, such as providing prebuilt binaries and creating user-friendly workflows for users who want to build from source code.
- Publish reference examples using the Task Library.
- Enable more machine learning use cases via new task types.
- Improve cross-platform support and enable more tasks for iOS.
Feedback
We would love to hear your feedback and suggestions for new use cases to be supported in the Task Library. Please email tflite@tensorflow.org or create a TensorFlow Lite support GitHub issue.
Acknowledgments
This work would not have been possible without the efforts of:
- Cédric Deltheil and Maxime Brénon, the main contributors for the Task Library Vision API.
- Chen Cen, the main contributor for the Task Library native/Android/iOS infrastructure and Text API.
- Xunkai and YoungSeok Yoon, the main contributors for the dev infra and releasing process.
We would like to thank Tian Lin, Sijia Ma, YoungSeok Yoon, Yuqi Li, Hsiu Wang, Qifei Wang, Alec Go, Christine Kaeser-Chen, Yicheng Fan, Elizabeth Kemp, Willi Gierke, Arun Venkatesan, Amy Jang, Mike Liang, Denis Brulé, Gaurav Nemade, Khanh LeViet, Luiz Gustavo Martins, Shuangfeng Li, Jared Duke, Erik Vee, Sarah Sirajuddin, and Tim Davis for their active support of this work.