How Roboflow enables thousands of developers to use computer vision with TensorFlow.js
July 27, 2022

A guest post by Brad Dwyer, co-founder and CTO, Roboflow

Roboflow lets developers build their own computer vision applications, from data preparation and model training to deployment and active learning. Through building our own applications, we learned firsthand how tedious it can be to train and deploy a computer vision model. That’s why we launched Roboflow in January 2020 – we believe every developer should have computer vision available in their toolkit. Our mission is to remove any barriers that might prevent them from succeeding.

Our end-to-end computer vision platform simplifies the process of collecting images, creating datasets, training models, and deploying them to production. Over 100,000 developers build with Roboflow’s tools. TensorFlow.js makes up a core part of Roboflow's deployment stack that has now powered over 10,000 projects created by developers around the world.

As an early design decision, we decided that, in order to provide the best user experience, we needed to be able to run users' models directly in their web browser (along with our API, edge devices, and on-prem) instead of requiring a round-trip to our servers. The three primary concerns that motivated this decision were latency, bandwidth, and cost.

For example, Roboflow powers SpellTable's Codex feature which uses a computer vision model to identify Magic: The Gathering cards.


How Roboflow Uses TensorFlow.js

Whenever a user's model finishes training on Roboflow's backend, it is automatically converted to support several deployment targets; one of those targets is TensorFlow.js. While TensorFlow.js is not the only way to deploy a computer vision model with Roboflow, it powers several features of the platform, including:

roboflow.js

roboflow.js is a JavaScript SDK developers can use to integrate their trained model into a web app or Node.js app. Check out this video for a quick introduction:
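To give a feel for the developer experience, here is a minimal sketch of running a trained detection model in the browser. The publishable key, project slug, and version below are placeholders, and the exact method names should be checked against the roboflow.js documentation.

```ts
// Minimal sketch: in-browser inference with roboflow.js.
// The key, project slug, version, and method names are illustrative
// placeholders; consult the roboflow.js docs for the exact API surface.
declare const roboflow: any; // global provided by the roboflow.js <script> tag

async function detectOnImage(image: HTMLImageElement) {
  // Authenticate with a client-side ("publishable") key, then load the trained model
  const model = await roboflow
    .auth({ publishable_key: "YOUR_PUBLISHABLE_KEY" })
    .load({ model: "your-project-slug", version: 1 });

  // Run inference directly in the browser; no round-trip to a server
  const predictions = await model.detect(image);
  console.log(predictions); // e.g. bounding boxes, class names, confidences
}
```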

Inference Server

The Roboflow Inference Server is a cross-platform microservice that enables developers to self-host and serve their model on-prem. (Note: while not all of Roboflow’s inference servers are TensorFlow.js-based, it is one supported means of model deployment.)

The tfjs-node container runs via Docker; it is GPU-accelerated on any machine with CUDA and a compatible NVIDIA graphics card, and runs on the CPU on any Linux, Mac, or Windows device.
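As a rough illustration of how a self-hosted model might be consumed, the sketch below shows a Node.js client sending a base64-encoded image to a locally running inference server over HTTP. The port, route, and query parameters are assumptions for illustration; the actual interface is described in the Inference Server documentation.

```ts
// Sketch: querying a locally hosted inference server from Node.js.
// The URL, route, and parameters are hypothetical placeholders; see the
// Roboflow Inference Server documentation for the real HTTP interface.
import { readFileSync } from "fs";

async function detect(imagePath: string) {
  const imageBase64 = readFileSync(imagePath, { encoding: "base64" });

  const response = await fetch(
    "http://localhost:9001/your-model/1?api_key=YOUR_API_KEY", // placeholder URL
    {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: imageBase64, // base64-encoded image payload
    }
  );

  return response.json(); // predictions: boxes, classes, confidences
}
```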

Preview

Preview is an in-browser widget that lets developers seamlessly test their models on images, video, and webcam streams.

Label Assist

Label Assist is a model-assisted image labeling tool that lets developers use their previous model's predictions as the starting point for annotating additional images.

One way users leverage Label Assist is with in-browser predictions:

Why We Chose TensorFlow.js

Once we had decided we needed to run in the browser, TensorFlow.js was a clear choice.

Because TensorFlow.js runs in our users' browsers and on their own compute, we are able to provide ML-powered features to our full user base of over 100,000 developers, including those on our free Public plan. That simply wouldn't be feasible if we had to spin up a fleet of cloud-hosted GPUs.

Behind the Scenes

Implementing roboflow.js with TensorFlow.js was relatively straightforward.

We had to change a couple of layers in our neural network to ensure all of our ops were supported on the runtimes we wanted to use, integrate the tfjs-converter into our training pipeline, and port our pre-processing and post-processing code from Python to JavaScript. From there, it was smooth sailing.
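To give a sense of what that porting work looks like, the sketch below shows a typical piece of detection post-processing: filtering raw model outputs with non-max suppression via the TensorFlow.js API. This is an illustrative example, not Roboflow's actual post-processing code.

```ts
// Illustrative post-processing in TensorFlow.js (not Roboflow's actual code):
// filter raw detections with non-max suppression, the kind of step that gets
// ported from a Python training pipeline to JavaScript for in-browser use.
import * as tf from "@tensorflow/tfjs";

async function postProcess(boxes: tf.Tensor2D, scores: tf.Tensor1D) {
  // Keep at most 100 boxes, suppressing overlaps above 0.5 IoU
  // and dropping detections scoring below 0.4.
  const keep = await tf.image.nonMaxSuppressionAsync(boxes, scores, 100, 0.5, 0.4);
  return {
    boxes: tf.gather(boxes, keep),   // [numKept, 4] box coordinates
    scores: tf.gather(scores, keep), // [numKept] confidences
  };
}
```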

Once we'd built roboflow.js for our customers, we utilized it internally to power features like Preview, Label Assist, and one implementation of the Inference Server.

Try it Out

The easiest way to try roboflow.js is by using Preview on Roboflow Universe, where we host over 7,000 pre-trained models that our users have shared. Any of these models can be readily built into your applications for things like identifying playing cards, counting surfers, reading license plates, spotting bacteria under a microscope, and more.

On the Deployment tab of any project with a trained model, you can drop a video or use your webcam to run inference right in your browser. To see a live in-browser example, give this community-created mask detector a try by clicking the “Webcam” icon:

To train your own model for a custom use case, you can create a free Roboflow account to collect and label a dataset, then train and deploy it for use with roboflow.js in a single click. This enables you to use your model wherever you need it.

About Roboflow

Roboflow makes it easy for developers to use computer vision in their applications. Over 100,000 users have built with the company's end-to-end platform for image and video collection, organization, annotation, preprocessing, model training, and model deployment. Roboflow provides the tools for companies to improve their datasets and build more accurate computer vision models faster so their teams can focus on their domain problems without reinventing the wheel on vision infrastructure.

Browse datasets on Roboflow Universe

Get started in the Roboflow documentation

View all available Roboflow features
