VisionAir: Using Federated Learning to estimate Air Quality using the TensorFlow API for Java
February 19, 2020
A guest article by Harshita Diddee, Divyanshu Sharma, Shivani Jindal and Shivam Grover

User-data privacy is critical when training machine learning models. Federated Learning is a useful technique that lets smartphones learn a shared prediction model while keeping all the training data on device, thus preserving user privacy. We apply this technique to learn models that estimate air quality from photos taken by a user's smartphone. In Delhi (India), air quality routinely degrades, and a severe Air Quality Index (AQI) rating is no longer shocking news to most residents. Government-established AQI monitoring stations cover less than 5% of the geographical area, which is not sufficient for the public to get an accurate real-time estimate of the air they are breathing.

Toward the goal of building an inexpensive, on-device AQI estimation app, we built VisionAir while participating in the Marconi Society's Celestini Program India. VisionAir is a privacy-preserving Android application that allows a user to estimate the AQI of a region from an image the user takes. We specifically estimate the Airborne Particulate Matter (PM 2.5) value, which adversely impacts human health. In this article, we describe an update on the project in which we leverage Federated Learning to build a shared model, and we share a new open-source dataset.

To build the model, we curated and open-sourced a large dataset of photos of the sky horizon in Delhi collected from smartphones. Prior work on estimating air quality (PM 2.5) using image analysis does not always take the privacy of user data (such as images and geo-tag locations) into account. By contrast, VisionAir deploys the model on-device for both inference and training. VisionAir achieves on-device training of the deep learning model by using the TensorFlow API for Java. While on-device training enables privacy, a limitation is that each user ends up training their own model without leveraging other users' data. By using Federated Learning, VisionAir consistently improves model performance by incorporating crowdsourced inputs into training without compromising the privacy of any user.

Deep Learning Model

VisionAir uses a neural network that takes image features, meteorological data, and past AQI data as input to estimate the PM 2.5 of an image. The model is optimized for on-device training by feeding only the most relevant features into a shallow net with only 2 hidden layers. The relevant features computed from the user's image include haze degree, contrast, and entropy. For details, you can see our previous article.
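As a rough illustration of one of these features, the entropy of an image can be computed from its grayscale pixel histogram. This is a minimal sketch; the exact feature definitions used in VisionAir may differ:

```python
import math
from collections import Counter

def image_entropy(pixels):
    """Shannon entropy (in bits) of a list of 8-bit grayscale pixel values.

    Higher entropy roughly corresponds to more visible texture and detail;
    a hazy sky tends to have lower entropy than a clear one.
    """
    total = len(pixels)
    counts = Counter(pixels)
    entropy = 0.0
    for count in counts.values():
        p = count / total
        entropy -= p * math.log2(p)
    return entropy

# A uniform patch carries no information; a 50/50 two-tone patch carries 1 bit.
flat_patch = [128] * 64
two_tone = [0] * 32 + [255] * 32
print(image_entropy(flat_patch))  # 0.0
print(image_entropy(two_tone))    # 1.0
```

In the real pipeline the pixel list would come from the captured photo (e.g., a decoded bitmap), and the entropy value would be fed into the shallow net alongside the other features.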

Building an open source dataset

First, we needed a dataset of images annotated with the above mentioned features. To collect diverse and temporally relevant data over multiple locations, we built an Android application that captures an HDR and Non-HDR image every 5 minutes (see documentation here).
Large database of images with corresponding labels: location, time of capture, and distance from the nearest Government Central Pollution Control Board (CPCB) sensor.
Our dataset includes a large set of HDR images because they retain more information about scene lighting than Non-HDR images and compensate for information loss due to post-processing on the user's smartphone. This enables consistent predictions for the same scene across multiple smartphones (even ones not represented in the training data).

The VisionAir dataset has been open sourced to enable further research in this field. It consists of approximately 4,000 Non-HDR and 1,200 HDR images annotated with the corresponding image features, weather, and temporal data. It also includes meteorological parameters such as wind speed, wind direction, temperature, humidity, and precipitation, along with the time of day.

Assigning labels to images in training data

Depending on where the image was taken, labels were assigned as follows. If there was a Central Pollution Control Board (CPCB) station within a 1 kilometer radius of where the photo was taken, we used the CPCB station's recorded PM 2.5 AQI reading as the label for that image. Otherwise, we used an accurate, high-quality portable sensor called AirVeda, which gave us the PM 2.5 AQI reading corresponding to that image. We had calibrated the AirVeda sensor and validated that CPCB and AirVeda readings match at the same location for the same time of day.
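The labeling rule above can be sketched as a simple distance check. The station coordinates, readings, and helper names below are hypothetical; only the 1 km fallback logic reflects the procedure described:

```python
import math

CPCB_RADIUS_KM = 1.0  # use a CPCB station only if it is within 1 km

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def pm25_label(photo_loc, cpcb_stations, airveda_reading):
    """Label a photo with the nearest in-range CPCB reading, else AirVeda.

    cpcb_stations: list of (lat, lon, pm25_reading) tuples.
    """
    in_range = [
        (haversine_km(photo_loc[0], photo_loc[1], lat, lon), pm25)
        for lat, lon, pm25 in cpcb_stations
        if haversine_km(photo_loc[0], photo_loc[1], lat, lon) <= CPCB_RADIUS_KM
    ]
    if in_range:
        return min(in_range)[1]  # reading from the nearest station
    return airveda_reading

# Hypothetical example: one station; the second photo is too far from it.
stations = [(28.61, 77.21, 180.0)]
print(pm25_label((28.61, 77.21), stations, 150.0))  # 180.0 (CPCB in range)
print(pm25_label((28.70, 77.40), stations, 150.0))  # 150.0 (AirVeda fallback)
```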

Training the VisionAir Model On-Device with TensorFlow's API for Java

Federated Learning maintains two instances of the model: a global model which is updated on the server and a client model updated by using on-device training.

Global Model

We developed the ML model to estimate AQI from image features and other meteorological parameters in Python using TensorFlow. The model was trained on 4,000 images from the VisionAir dataset. We exported the trained model (the metagraph and the checkpoint file) and loaded it with the TensorFlow API for Java in our Android application. The metagraph file contains the model architecture, while the checkpoint file contains the learned weights. Using these files, helper functions in the TensorFlow API for Java allow us to restore the model's graph on the client device. For a detailed analysis of the code used to develop this entire on-device and federated architecture for VisionAir, you may find this useful.

We measured model performance in terms of Root Mean Square Error (RMSE) and used three cross-validation sets to achieve unbiased results over training data of 2,800 images and test data of 700 images collected over the months June-August in Delhi. The global model achieved an RMSE of 9.05 ppm. Based on the Indian Standard for AQI Categorisation, the typical range of AQI in June-August is moderate (101-200), suggesting that the trained model's predictions may deviate from the ground-truth value by around 9.05 ppm, which is acceptable. We would need additional data collection over other months to capture the data distribution of the entire year.

Client Model

The global model is pushed to all client devices. We use the TensorFlow API for Java to train the model on each client device: we run a few epochs locally and then store the resulting weights in a usable form. Every 24 hours, the extracted weights on the client device are sent to our federated averaging server. The application currently has approximately 100 users, and in every iteration of federated averaging the weights of 5 client devices are uploaded to the intermediate federated averaging server.
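The client-side update can be sketched with a toy one-feature linear model trained for a few local epochs by plain gradient descent. VisionAir's actual client training runs the restored TensorFlow graph on-device; the model, data, and hyperparameters here are hypothetical:

```python
def local_train(weight, bias, data, epochs=3, lr=0.01):
    """Run a few epochs of gradient descent on the client's local data.

    data: list of (feature, target) pairs. Returns the updated
    (weight, bias), ready to upload to the federated averaging server.
    """
    for _ in range(epochs):
        for x, y in data:
            pred = weight * x + bias
            err = pred - y
            # Gradients of the squared error w.r.t. weight and bias.
            weight -= lr * err * x
            bias -= lr * err
    return weight, bias

# A few local steps should shrink the prediction error on local data.
w, b = local_train(0.0, 0.0, [(1.0, 2.0)], epochs=3, lr=0.1)
```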

Federated Learning in VisionAir

In Federated Learning, the improvements in the client model weights are used to update the global model, thereby improving the overall estimation accuracy.
VisionAir implementing Federated Learning
To update the global model, we average the weights of the client device models using the algorithm described in this paper. Thus, the new weights of the global model are the averaged contributions of multiple client devices. This enables the global model to improve its AQI predictions for newer scenes (locations) and adapt to common patterns observed across client devices (such as seasonal changes in AQI).
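The averaging step itself reduces to an element-wise mean of the client weight vectors (the federated averaging paper weights each client by its number of local examples; the sketch below supports both the weighted and the uniform case):

```python
def federated_average(client_weights, client_sizes=None):
    """Combine client weight vectors into new global weights.

    client_weights: list of equal-length weight lists, one per client.
    client_sizes: optional per-client example counts for a weighted mean;
    if omitted, every client contributes equally.
    """
    n_clients = len(client_weights)
    if client_sizes is None:
        client_sizes = [1] * n_clients
    total = sum(client_sizes)
    n_weights = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_weights)
    ]

# Two clients with equal weighting: the global weights are the plain mean.
print(federated_average([[1.0, 2.0], [3.0, 4.0]]))  # [2.0, 3.0]
```

In VisionAir's setting, each round would average the weights uploaded by the 5 participating client devices to produce the next global model.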

Thereafter, the checkpoint file of the updated global model is pushed to the client devices as the new client model. We do not replace the metagraph file because the architecture of the global and client model remains the same.

One caveat in leveraging federated learning is tuning the hyperparameters to ensure that the client model does not perform worse than the global model. We are still investigating model performance while varying the number of epochs, the training batch size, and the number of active users who contribute to federated averaging in a single iteration.

Federated Learning has been significant in VisionAir’s quest to provide a privacy-preserving AQI estimation tool.

Acknowledgements

We are grateful for the funding and support provided by the Marconi Society's Celestini Program India, our mentor Dr. Aakanksha Chowdhery (Google Brain), and the well-equipped lab of Prof. Brejesh Lall at the Indian Institute of Technology.

VisionAir is available on the Google Play Store here. You may visit our website or medium publication to stay updated about our ongoing work and results.
