Building a TinyML Application with TF Micro and SensiML
May 07, 2021

A guest post by Chris Knorowski, SensiML CTO

TinyML reduces the complexity of adding AI to the edge, enabling new applications where streaming data back to the cloud is prohibitive. Some examples of applications that are making use of TinyML right now are:

  • Visual and audio wake words that trigger an action when a person is detected in an image or a keyword is spoken.
  • Predictive maintenance on industrial machines using sensors to continuously monitor for anomalous behavior.
  • Gesture and activity detection for medical, consumer, and agricultural devices, such as gait analysis, fall detection or animal health monitoring.

One common factor for all these applications is the low cost and power usage of the hardware they run on. Sure, we can detect audio and visual wake words or analyze sensor data for predictive maintenance on a desktop computer. But, for a lot of these applications to be viable, the hardware needs to be inexpensive and power efficient (so it can run on batteries for an extended time).

Fortunately, the hardware is now getting to the point where running real-time analytics is possible. It is crazy to think about, but the Arm Cortex-M4 processor can do more FFTs per second than the Pentium 4 processor while using orders of magnitude less power. Similar gains in power/performance have been made in sensors and wireless communication. TinyML allows us to take advantage of these advances in hardware to create all sorts of novel applications that simply were not possible before.

At SensiML our goal is to empower developers to rapidly add AI to their own edge devices, allowing their applications to autonomously transform raw sensor data into meaningful insight. We have taken years of lessons learned in creating products that rely on edge optimized machine learning and distilled that knowledge into a single framework, the SensiML Analytics Toolkit, which provides an end-to-end development platform spanning data collection, labeling, algorithm development, firmware generation, and testing.

So what does it take to build a TinyML application?

Building a TinyML application touches on skill sets ranging from hardware engineering, embedded programming, software engineering, machine learning, data science and domain expertise about the application you are building. The steps required to build the application can be broken into four parts:

  1. Collecting and annotating data
  2. Applying signal preprocessing
  3. Training a classification algorithm
  4. Creating firmware optimized for the resource budget of an edge device

This tutorial will walk you through all the steps, and by the end of it you will have created an edge optimized TinyML application for the Arduino Nano 33 BLE Sense that is capable of recognizing different boxing punches in real-time using the Gyroscope and Accelerometer sensor data from the onboard IMU sensor.

Gesture recognition using TinyML. Male punching a punching bag

What you need to get started

We will use the SensiML Analytics Toolkit to handle collecting and annotating sensor data, creating a sensor preprocessing pipeline, and generating the firmware. We will use TensorFlow to train our machine learning model and TensorFlow Lite Micro for inferencing. Before you start, we recommend signing up for SensiML Community Edition to get access to the SensiML Analytics Toolkit.

The Software

  • SensiML Analytics Toolkit (Data Capture Lab, Analytics Studio, and Open Gateway)
  • TensorFlow (via a Google Colab notebook) for model training
  • TensorFlow Lite Micro for on-device inferencing
  • Visual Studio Code with the PlatformIO plugin for building the firmware

The Hardware

  • Arduino Nano 33 BLE Sense
  • Adafruit Li-Ion Backpack Add-On (optional)
  • Lithium-Ion Polymer Battery (3.7 V, 100 mAh)
  • Zebra Byte Case
  • Glove and Double Sided Tape

The Arduino Nano 33 BLE Sense has an Arm Cortex-M4 microcontroller running at 64 MHz with 1 MB of flash memory and 256 KB of RAM. If you are used to working with cloud or mobile hardware, this may seem tiny, but many applications can run in such a resource-constrained environment.
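To put that budget in perspective, here is a quick back-of-the-envelope sketch (in Python) of how much RAM a raw sensor buffer consumes. The 100 Hz sample rate and 2-second window are illustrative assumptions, not values taken from this tutorial.

# Rough RAM budget for buffering raw IMU data on the Nano 33 BLE Sense.
# Assumptions (not from the tutorial): 6 channels (3-axis accel + 3-axis gyro),
# Int16 samples, 100 Hz sample rate, 2-second analysis window.
CHANNELS = 6
BYTES_PER_SAMPLE = 2          # Int16
SAMPLE_RATE_HZ = 100
WINDOW_SECONDS = 2

buffer_bytes = CHANNELS * BYTES_PER_SAMPLE * SAMPLE_RATE_HZ * WINDOW_SECONDS
print(f"Raw data buffer: {buffer_bytes} bytes (~{buffer_bytes / 1024:.1f} KB)")
print(f"Fraction of 256 KB RAM: {buffer_bytes / (256 * 1024):.2%}")

Even a generous multi-second window of raw IMU data uses only a few kilobytes, which is why gesture recognition fits comfortably on this class of device.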

The Nano 33 BLE Sense also has a variety of onboard sensors which can be used in your TinyML applications. For this tutorial, we are using the motion sensor which is a 9-axis IMU (accelerometer, gyroscope, magnetometer).

For wireless power, we used the Adafruit Li-Ion Battery Pack. If you do not have the battery pack, you can still walk through this tutorial using a suitably long micro USB cable to power the board, though collecting gesture data is not quite as fun when you are tethered. See the images below for how to hook up the battery to the Nano 33 BLE Sense.

Nano 33 BLE Sense
battery connected to boxing glove

Building Your Data Set

For every machine learning project, the quality of the final product depends on the quality of your data set. Time-series data, unlike image and audio data, is typically unique to each application, so you often need to collect and annotate your own datasets. The next part of this tutorial will walk you through how to connect to the Nano 33 BLE Sense, stream data wirelessly over BLE, and label the data so it can be used to train a TensorFlow model.

For this project we are going to collect data for 5 different gestures as well as some data for negative cases which we will label as Unknown. The 5 boxing gestures we are going to collect data for are Jab, Overhand, Cross, Hook, and Uppercut.

boxing gestures

We will also collect data with both the right and left glove, giving us a total of 10 different classes. To simplify things, we will build two separate models: one for the right glove and one for the left. This tutorial will focus on the left glove.

Streaming sensor data from the Nano 33 over BLE

The first challenge of a TinyML project is often to figure out how to get data off of the sensor. Depending on your needs you may choose Wi-Fi, BLE, Serial, or LoRaWAN. Alternatively, you may find storing data to an internal SD card and transferring the files after is the best way to collect data. For this tutorial, we will take advantage of the onboard BLE radio to stream sensor data from the Nano 33 BLE Sense.

We are going to use the SensiML Open Gateway running on our computer to retrieve the sensor data. To download and launch the gateway open a terminal and run the following commands:

git clone https://github.com/sensiml/open-gateway
cd open-gateway
pip3 install -r requirements.txt
python3 app.py

The gateway should now be running on your machine.

Gateway

Next, we need to connect the gateway server to the Nano 33 BLE Sense. Make sure you have flashed the Data Collection Firmware to your Nano 33. This firmware implements the Simple Streaming Interface specification, which exposes two topics used for streaming data: the /config topic returns a JSON description of the sensor configuration, and the /stream topic streams raw sensor data as a byte array of Int16 values.
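To make the stream format concrete, here is a minimal Python sketch of how a client could unpack one chunk of the /stream byte array into Int16 samples. The 6-channel layout (accelerometer X/Y/Z followed by gyroscope X/Y/Z) and little-endian byte order are assumptions for illustration; the actual channel count and order are described by the /config JSON.

import struct

# Assumed layout (check the /config JSON for the real one): interleaved
# little-endian Int16 samples for 6 channels ->
# AccelX, AccelY, AccelZ, GyroX, GyroY, GyroZ.
CHANNELS = 6

def decode_chunk(chunk: bytes):
    """Return one tuple of Int16 values per sample in the chunk."""
    n_values = len(chunk) // 2                      # 2 bytes per Int16
    values = struct.unpack(f"<{n_values}h", chunk)  # little-endian Int16
    return [values[i:i + CHANNELS] for i in range(0, n_values, CHANNELS)]

# Example with fabricated bytes (two samples of six channels each):
fake_chunk = struct.pack("<12h", *range(12))
for sample in decode_chunk(fake_chunk):
    print(sample)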

To configure the gateway to connect to your sensor:

  1. Go to the gateway address in your browser (defaults to localhost:5555)
  2. Click on the Home Tab
  3. Set Device Mode: Data Capture
  4. Set Connection Type: BLE
  5. Click the Scan button, and select the device named Nano 33 DCL
  6. Click the Connect to Device button
SensiML Gateway

The gateway will pull the configuration from your device, and be ready to start forwarding sensor data. You can verify it is working by going to the Test Stream tab and clicking the Start Stream button.

Setting up the Data Capture Lab Project

Now that we can stream data, the next step is to record and label the boxing gestures. To do that we will use the SensiML Data Capture Lab. If you haven’t already done so, download and install the Data Capture Lab to record sensor data.

We have created a template project to get you started. The project is prepopulated with the gesture labels and metadata information, along with some pre-recorded example gestures files. To add this project to your account:

  1. Download and unzip the Boxing Glove Gestures Demo Project
  2. Open the Data Capture Lab
  3. Click Upload Project
  4. Click Browse which will open the file explorer window
  5. Navigate to the Boxing Glove Gestures Demo folder you just unzipped and select the Boxing Glove Gestures Demo.dclproj file
  6. Click Upload
SensiML Data Capture Lab

Connecting to the Gateway

After uploading the project, you can start capturing sensor data. For this tutorial we will be streaming data to the Data Capture Lab from the gateway over TCP/IP. To connect to the Nano 33 BLE Sense from the Data Capture Lab through the gateway:

  1. Open the Project Boxing Glove Gestures Demo
  2. Click Switch Modes -> Capture Mode
  3. Select Connection Method: Wi-Fi
  4. Click the Find Devices button
  5. Enter the IP Address of your gateway machine, and the port the server is running on (typically 127.0.0.1:5555)
  6. Click Add Device
  7. Select the newly added device
  8. Click the Connect button
wi-fi connection

You should see sensor data streaming across the screen. If you are having trouble with this step, see the full documentation here for troubleshooting.

boxing gesture data streaming

Capturing Boxing Gesture Sensor Data

The Data Capture Lab can also play videos that have been recorded alongside your sensor data. If you want to capture videos and sync them up with sensor data see the documentation here. This can be extremely helpful during the annotation phase to help interpret what is happening at a given point in the time-series sensor waveforms.

Now that data is streaming into the Data Capture Lab, we can begin capturing our gesture data set.

  1. Select “Jab” from the Label dropdown in the Capture Properties screen. (this will be the name of the file)
  2. Select the Metadata which captures the context (subject, glove, experience, etc.)
  3. Then click the Begin Recording button to start recording the sensor data
  4. Perform several “Jab” gestures
  5. Click the Stop Recording button when you are finished

After you hit stop recording, the captured data will be saved locally and synced with the cloud project. You can view the file by going to the Project Explorer and double-clicking on the newly created file.

GIF showing male boxing while program collects data

The following video walks through capturing sensor data.

Annotating Sensor Data

To classify sensor data in real-time, you need to decide how much and which portion of the sensor stream to feed to the classifier. On edge devices, it gets even more difficult as you are limited to a small buffer of data due to the limited RAM. Identifying the right segmentation algorithm for an application can save on battery life by limiting the number of classifications performed as well as improving the accuracy by identifying the start and end of a gesture.

Segmentation algorithms work by taking the input from the sensor and buffering the data until they determine a new segment has been found. At that point, they pass the data buffer down to the rest of the pipeline. The simplest segmentation algorithm is a sliding window, which continually feeds a fixed-size chunk of data to the classifier. However, the sliding window has drawbacks for discrete gesture recognition: it performs classifications even when no event is occurring, which wastes battery, and it risks splitting events across multiple windows, which can lower accuracy.
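To make that trade-off concrete, here is a minimal Python sketch of the sliding-window approach described above. The window and step sizes are arbitrary illustrative values, not the parameters used by the SensiML segmenter.

import numpy as np

def sliding_windows(stream, window_size=200, step=100):
    # Yield fixed-size chunks of the sensor stream. Every window is passed
    # to the classifier, even when nothing is happening -- exactly the
    # drawback described above: wasted inferences, and gestures that can
    # be split across window boundaries.
    for start in range(0, len(stream) - window_size + 1, step):
        yield stream[start:start + window_size]

# Fabricated 6-channel stream, 1000 samples long, just to show the shapes.
stream = np.zeros((1000, 6), dtype=np.int16)
for i, window in enumerate(sliding_windows(stream)):
    print(f"window {i}: shape {window.shape}")  # each window goes to the classifier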

Segmenting in the Data Capture Lab

We identify events in the Data Capture Lab by creating Segments around the events in your sensor data. Segments define where an event is located and are displayed as a pair of blue and red lines when you open a file.

The Data Capture Lab has two methods for labeling your events: Manual and Auto. In Manual mode, you drag and drop a segment onto the graph to identify an event in your sensor data. Auto mode uses a segmentation algorithm to automatically detect events based on customizable parameters. For this tutorial, we are going to use a segmentation algorithm in Auto mode. The segmentation algorithms we use for determining events are also compiled as part of the firmware, so the on-device model is fed the same segments of data it was trained against.

We have already created a segmentation algorithm for this project based on the dataset we have collected so far. To perform automatic event detection on a newly captured data file:

  1. Select the file from the Project Explorer
  2. Click on the Detect Segments button
  3. The segmentation algorithm will be run against the capture and the segments it finds will be added to the file
GIF of auto-segmentation

Note: If the events are not matching the real segments in your file, you may need to adjust the parameters of the segmentation algorithm.

Labeling Events in the Data Capture Lab

Keep in mind that automatic event detection only detects that an event has occurred; it does not determine what type of event it was. You will need to apply a label to each detected event. To do that:

  1. Select one or more of the segments from the graph
  2. Click the Edit button (or press Ctrl+E)
  3. Specify which label is associated with that event
  4. Repeat steps 1-3 for all segments in the capture
  5. Click Save
GIF labeling data

Building a TinyML Model

We are going to use Google Colab to train our machine learning model using the data we collected from the Nano 33 BLE Sense in the previous section. Colab provides a Jupyter notebook that allows us to run our TensorFlow training in a web browser. Open the Google Colab notebook and follow along to train your model.
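If you want a feel for what the notebook does before opening it, the sketch below shows the general pattern: train a small Keras classifier on fixed-length IMU segments and convert it to a TensorFlow Lite flatbuffer that TensorFlow Lite Micro can run. The input shape, layer sizes, and random training data here are placeholders for illustration, not the actual notebook code.

import numpy as np
import tensorflow as tf

# Placeholder shapes: 6 IMU channels, 100-sample segments, 6 classes
# (Jab, Overhand, Cross, Hook, Uppercut, Unknown). The real values come
# from the SensiML pipeline and the Colab notebook.
NUM_CHANNELS, SEGMENT_LEN, NUM_CLASSES = 6, 100, 6

x_train = np.random.randn(32, SEGMENT_LEN, NUM_CHANNELS).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=(32,))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEGMENT_LEN, NUM_CHANNELS)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, verbose=0)

# Convert to a TFLite flatbuffer; this is the artifact TensorFlow Lite Micro runs.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
open("gesture_model.tflite", "wb").write(tflite_model)
print(f"Model size: {len(tflite_model)} bytes")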

Offline Model Validation

After saving the model, go to the Analytics Studio to perform offline validation. To test the model against any of the captured data files:

  1. Open the Boxing Glove Gestures Demo project in the Summary Tab
    SensiML analytics studio
  2. Go to Test Model Tab
  3. Select your model from the Model Name dropdown
  4. Select one or more of the capture files by clicking on them
  5. Click the Compute Accuracy Button to classify the captures using the selected model
compute accuracy
When you click the Compute Accuracy button, the segmentation algorithm, preprocessing steps, and TensorFlow model are compiled into a single Knowledge Pack. Then the classification results and accuracy for each of the captures you selected are computed using the compiled Knowledge Pack. Click the Results button for an individual capture to see the classifications for all of the detected events and how they compare with the ground truth labels.
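If you export those per-capture results, a small script like this one can summarize how the predicted classes compare with the ground-truth labels. This is a hedged sketch: the label names come from this tutorial, but the result lists here are fabricated placeholders rather than an actual Analytics Studio export format.

from collections import Counter

# Hypothetical predicted vs. ground-truth labels for one capture's segments.
ground_truth = ["Jab", "Jab", "Hook", "Cross", "Uppercut", "Jab"]
predicted    = ["Jab", "Hook", "Hook", "Cross", "Uppercut", "Jab"]

correct = sum(gt == p for gt, p in zip(ground_truth, predicted))
print(f"Accuracy: {correct / len(ground_truth):.2%}")

# Simple confusion counts: which true labels were predicted as what.
confusion = Counter(zip(ground_truth, predicted))
for (gt, p), count in sorted(confusion.items()):
    print(f"true={gt:<9} predicted={p:<9} count={count}")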

Deploy and Test on the Nano 33 BLE Sense

Downloading the model as firmware

Now that you have validated the model offline, it's time to see how it performs at the edge. To do that, we will download the model and flash it to the Nano 33 BLE Sense.

  1. Go to the Download Model tab of the Analytics Studio
  2. Select the HW Platform: Arduino CortexM4
  3. Select Format: Library
  4. Click the Download button
  5. The compiled library file should download to your computer
download knowledge pack

Flashing the Firmware

After downloading the library, we will build and upload the firmware to the Nano 33 BLE Sense. For this step, you will need the Nano 33 Knowledge Pack Firmware. To compile the firmware, we are using Visual Studio Code with the PlatformIO plugin. To compile your model and flash the Nano 33 BLE Sense with this firmware:

  1. Open your terminal and run
    git clone https://github.com/sensiml/nano33_knowledge_pack/
  2. Unzip the downloaded Knowledge Pack.
  3. In the folder, you will find the following directories:
    knowledgepack_project/
    libsensiml/
  4. Copy the files from libsensiml to nano33_knowledge_pack/lib/sensiml. You will overwrite the files included in the repository.
  5. Copy the files from knowledgepack_project to nano33_knowledge_pack/src/
  6. Switch to the PlatformIO extension tab in VS Code
  7. Connect your Nano 33 BLE Sense to your computer using the micro USB cable.
  8. Click Upload and Monitor under nano33ble_with_tensorflow in the PlatformIO tab.
    Upload and Monitor tab

When the device restarts, your model will be running automatically. The video below walks through these steps.

Viewing Classification Results

To see the classification results in real-time connect to your device over BLE using the Android TestApp or the SensiML Open Gateway. The device will show up with the name Nano33 SensiML KP when you scan for devices. We have trained two models, one for the left glove and one for the right glove. You can see a demo of both models running at the same time in the following video.
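As a rough sketch of what a custom results viewer could look like, the Python snippet below uses the bleak library to scan for the device and subscribe to a notification characteristic. The characteristic UUID is a placeholder, not SensiML's actual UUID; the real one is defined by the Knowledge Pack firmware, so treat this as an illustrative assumption.

import asyncio
from bleak import BleakScanner, BleakClient

# Placeholder UUID -- the real characteristic UUID comes from the
# SensiML Knowledge Pack firmware, not from this sketch.
RESULT_CHAR_UUID = "00000000-0000-0000-0000-000000000000"

def on_result(_, data: bytearray):
    print("classification result bytes:", data.hex())

async def main():
    devices = await BleakScanner.discover()
    target = next((d for d in devices if d.name and "SensiML" in d.name), None)
    if target is None:
        print("Nano33 SensiML KP not found")
        return
    async with BleakClient(target.address) as client:
        await client.start_notify(RESULT_CHAR_UUID, on_result)
        await asyncio.sleep(30)  # print incoming results for 30 seconds

asyncio.run(main())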

Conclusion

We hope this blog has given you the tools you need to start building an end-to-end TinyML application using TensorFlow Lite for Microcontrollers and the SensiML Analytics Toolkit. For more tutorials and examples of TinyML applications, check out the application examples in our documentation. Follow us on LinkedIn or get in touch with us; we love hearing about all of the amazing TinyML applications the community is working on!
