Using TensorFlow.js to deploy a radiology-based image search application for radiologists
March 24, 2020
A guest post by Erwin John T. Carpio, MD

As a doctor and radiologist from the Philippines, I’ve always wanted to learn how to develop and apply machine learning (ML from here on) models to my field of practice. However, machine learning was like a foreign language to me when I began. I had limited programming experience, and without a formal computer science background, I felt this field would be beyond my reach, especially when I tried to look at existing research. That soon changed: as I started my learning journey, I discovered this field was much more accessible than I initially thought.

RadLens: A web app for reverse image search

I’m currently focusing on creating a tool named RadLens to inspire practitioners like myself. I hope it helps others in my field to consider how they may use machine learning tools to assist in their day to day practice. It’s a work in progress (it is currently not FDA approved as a medical device and should not be used for diagnosis).

Of course, in any medical machine learning application, the most important things are collecting a large, diverse training dataset and performing a rigorous evaluation. For a clinical-grade application, this could mean many thousands of expertly graded images, or more. In my application, I focused on building a small tool as a proof of concept. I hoped the tool would assist with my work, although I would still ultimately be the physician responsible for making an accurate diagnosis.

My day to day work includes identifying and diagnosing fractures, which is why I first wanted to see if I could, as a minimum viable prototype, build a web app that could classify two fracture types of the forearm (Monteggia and Galeazzi). Aside from detecting actual fractures, however, radiologists are also trained to recognize various anatomic (normal) variants of the body so that we can differentiate these from actual pathology. One example is the foot ossicles: radiologists train to know the difference between these anatomic variants and actual fracture fragments.

We sometimes look up anatomic variants in reference textbooks to ensure that what we’re looking at is just an ossicle and not an actual fracture fragment. There are numerous foot ossicles, and it can be cumbersome to manually find the corresponding page in a textbook from memory. I decided to see if I could train a new ML model for the 2nd version of RadLens to detect several different foot ossicles I commonly see in my own practice.

I wanted to build web apps that could:
  1. Remember images of the two types of fractures for the 1st web app, and six types of foot ossicles/anatomic variants for the 2nd version.
  2. Use the device’s camera to scan, in real time, what I suspected to be a fracture (for the first web app) or a foot ossicle/anatomic variant (for the second web app).
  3. Automatically direct me to a Google Image Search so I could cross-reference the suspected “fracture” or “foot ossicle/anatomic variant” for myself.
The two versions or iterations of RadLens are detailed below.

RadLens (version 1) is a web app that uses your device’s camera to scan an x-ray image and tries to predict what type of fracture, if any, is shown in that image, without ever uploading images to the cloud, which preserves patient privacy. If one of the fracture types the system is trained on is found, it returns a score indicating how confident it is in the classification. Because all inference is performed on the local device, the process is fast: no server is required for classification. Importantly, RadLens also returns a link to a Google Image Search, so I can then browse other images of this fracture type and use them to aid my diagnosis.

The initial version of RadLens focused on classifying two types of fractures: Monteggia and Galeazzi fractures of the forearm. The user experience flow looks like this:
Using RadLens version 1.
Left: An X-ray image of the forearm is scanned using the phone's camera in real time.
Center: RadLens classifies the fracture type as Monteggia or Galeazzi.
Right: Click the hyperlink to browse a Google Image Search for images of the detected fracture type for cross reference.
The hyperlink to Google Image Search is important, especially as RadLens is trained on only a small dataset. It makes using this tool interactive, and I can assess the accuracy as I go and use any mistakes it may make to help guide my future work. I can explore similar cases and verify if the model is correct, or decide if additional training data needs to be gathered, or if other medical professionals need to be consulted.
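The hand-off from a model prediction to a Google Image Search link can be sketched in a few lines of JavaScript. This is an illustrative sketch, not the actual RadLens code: the prediction shape (`{ className, probability }`) matches what Teachable Machine's image library returns, but the confidence threshold and the label names are my own assumptions.

```javascript
// Pick the most probable class, or null if nothing is confident enough.
// The 0.8 threshold is an illustrative choice, not a clinical one.
function topPrediction(predictions, threshold = 0.8) {
  const best = predictions.reduce((a, b) =>
    b.probability > a.probability ? b : a
  );
  return best.probability >= threshold ? best : null;
}

// Build a Google Image Search URL for the predicted label,
// so the radiologist can cross-reference similar images.
function imageSearchUrl(label) {
  const query = encodeURIComponent(`${label} x-ray`);
  return `https://www.google.com/search?tbm=isch&q=${query}`;
}
```

For example, given predictions like `[{ className: 'Monteggia fracture', probability: 0.93 }, { className: 'Galeazzi fracture', probability: 0.07 }]`, `topPrediction` selects the Monteggia class, and `imageSearchUrl` produces a link to an image search for "Monteggia fracture x-ray" that the app can render as a hyperlink.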

Building RadLens

Instead of developing an AI model that is as accurate as a radiologist, I decided to focus on small models that help me search for references faster. To build the first version of RadLens (for fractures), my initial prototypes were coded in Python using TensorFlow, taking advantage of a technique called transfer learning, which lets you train a new model that reuses some of the features learned by another model trained on a large dataset. After experimenting with this approach for a while, I decided to go with something even simpler that had greater reach.

I discovered Teachable Machine, a website by Google that allows your computer to recognize your own images, sounds, and poses. You can even upload training data through the UI if you wish, and training happens live in the web browser. I used the models produced by Teachable Machine to create my 2nd prototype of RadLens (for foot ossicles/anatomic variants). Teachable Machine is excellent for building simple, interactive prototypes that can help communicate to radiologists how potential use cases of ML could facilitate their work. The process I chose for training ML models may not be suitable for building clinical-grade applications (for that, you will need a team of computer scientists and physicians working together with more data), but for my goal of helping other radiologists understand how ML can help with their day to day work, I find it extremely useful.

As an added bonus, using Teachable Machine meant that instead of using 2 programming languages for my project (Python and JavaScript), I could concentrate on using just one (JavaScript). Even better, inference is performed in the web browser with TensorFlow.js running on your laptop or phone; this means patient data remains private, and is never uploaded to a central server for classification, and inference time is faster too since there is no round trip time to the server.
A 2nd prototype of RadLens. If the app detects a foot X-ray, it will then ask you to zoom in on the image. Once zoomed in, the app will try to infer what that ossicle is. You can then start an image search for the predicted class. Base code for the RadLens prototypes is available on GitHub.
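The in-browser capture-and-classify loop described above can be sketched with the library that Teachable Machine exports use. This is a hedged sketch, not the actual RadLens source: it assumes the `@teachablemachine/image` script is loaded on the page (exposing the `tmImage` global) and that the exported `model.json` and `metadata.json` files sit in a hypothetical `./my_model/` directory.

```javascript
// Sketch of an in-browser prediction loop for a Teachable Machine
// image model. Nothing here leaves the device: the model files are
// static assets and frames are read straight from the local webcam.
async function startRadLens() {
  const base = './my_model/'; // hypothetical export directory
  const model = await tmImage.load(
    base + 'model.json',
    base + 'metadata.json'
  );

  // The library wraps getUserMedia; width, height, and flip (mirror).
  const webcam = new tmImage.Webcam(224, 224, true);
  await webcam.setup();
  await webcam.play();
  document.body.appendChild(webcam.canvas);

  async function loop() {
    webcam.update(); // grab the next camera frame
    const predictions = await model.predict(webcam.canvas);
    // predictions is an array of { className, probability } —
    // render the top class and its confidence in the UI here.
    window.requestAnimationFrame(loop);
  }
  window.requestAnimationFrame(loop);
}
```

Because `tmImage.load` fetches plain static files and `model.predict` runs on the device's own GPU or CPU via TensorFlow.js, the same exported model folder can be served from any static host, which is what keeps deployment simple and patient data local.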

Looking forward

Most of today’s ML solutions for healthcare come prepackaged, and while robust, they have many limitations. These systems have huge file sizes and limited deployment options, as the model must stay within a central IT system. In addition, they can be very expensive, so only large hospitals and clinics can afford them. Since they are already pre-trained and pre-packaged, it can be hard for local radiologists to retrain them for use cases more attuned to the local practice’s needs. I essentially want to put cost-effective ML into the palm of the local radiologist. Building a proof-of-concept system was more within reach than I initially thought.

In the future, I hope to further improve the web app by adding object detection to highlight the found fracture or ossicle with a visible bounding box on the image itself. Currently, the app performs image classification only, which detects the presence of a finding but does not show its location.

I have learned that the spread of ML as a technology can be both horizontal and vertical. Horizontal applications are broad and widespread. These are usually made possible through the efforts of larger teams of AI experts, as they traverse the wide canvas of computer vision in medicine. I hope to spark interest in the vertical spread of AI development as it becomes more customized to the individual use cases of specific radiologists around the world. I can think of no better way of doing that right now than with the web and TensorFlow.js, which easily enable people to try and experiment with the possibilities of using machine learning in their niche areas.