By Jordan Grimstad
It’s been a year and a half since we introduced TensorFlow Hub, an open-source repository of ready-to-use, pre-trained models published by Google and DeepMind. Since then, we’ve published hundreds of models -- some general-purpose and fine-tunable to specific tasks, others more specialized -- to help you build faster, smarter ML applications even with little data or compute power.
At TensorFlow World, we made three major announcements: models on TF Hub are now available in new deployment formats such as TensorFlow.js and TensorFlow Lite, selected models now come with interactive visualizers right on the site, and pre-trained models from TF Hub work seamlessly with TensorFlow 2.0.
Let’s take a look at how all of this comes together, and explore some of the new features and models available.
Discovering Our New Model Formats
TensorFlow Hub now offers models in multiple deployment formats, helping you get started more quickly. We’ve added search features and visual cues that make it easy to find and download the right model for your use case.
When you search for a model, look for the badge in the upper-right corner of each model card, which indicates the available format. After you click into a model, you can see the deployment formats available and browse the documentation.
You can also search for models by deployment format -- try searching for “tfjs” or “tflite” to see a list of models with TensorFlow.js or TensorFlow Lite deployment formats, respectively.
With a wider variety of assets optimized for different deployment environments, TF Hub can now serve even more use cases.
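For example, once you’ve downloaded a TensorFlow Lite-format model from tfhub.dev, you can run it with the TensorFlow Lite interpreter. Here is a minimal sketch, assuming the model has been saved locally as model.tflite (the file name and the dummy input below are illustrative placeholders):
import numpy as np
import tensorflow as tf

# Load the downloaded TFLite model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input matching the model's expected shape and dtype.
input_data = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]["index"])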
Interactive Model Visualization
To help you check whether a model is suitable for the use case you have in mind, we now provide an embedded, interactive visualization tool for selected vision models. These visualizers can be found at the top of the model detail pages. You can upload your own test images to see how the model performs on your own data, and sample images are also provided so you can try the model right from the page.
You can see the model visualizer in action on the Mobile Mushroom Classifier from the Danish Mycological Society, and on The Metropolitan Museum of Art’s iMet Collection Attribute Classifier.
We hope this visualization tool will save you prototyping and development time by giving you a better understanding of a model’s performance and possible use cases early in your development process.
Using Pre-Trained TF Hub Models in TF2.0
If you haven’t used TF Hub before, we have many tutorials and demos that show you how to get started. The easiest way to familiarize yourself with what TF Hub can do is to use a pre-trained model that fits a specific task.
We recently published Text classification with TensorFlow Hub to demonstrate how you can use tf.keras and a pre-trained text embedding from the TF Hub repository to quickly and easily classify the sentiment of a movie review. This is how you build a Keras model in five lines using a pre-trained embedding:
import tensorflow as tf
import tensorflow_hub as hub

# Wrap a pre-trained text embedding from TF Hub in a trainable Keras layer,
# then add a small classification head on top.
model = tf.keras.Sequential()
model.add(hub.KerasLayer(
    "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1",
    input_shape=[], dtype=tf.string, trainable=True))
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.summary()
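To train the classifier, compile the model and fit it on labeled examples. Here is a minimal sketch, assuming the IMDB reviews dataset from TensorFlow Datasets as in the linked tutorial (the batch size and epoch count are illustrative):
import tensorflow_datasets as tfds

# Each example is a (review text, sentiment label) pair.
train_data = tfds.load("imdb_reviews", split="train", as_supervised=True)

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(train_data.shuffle(10000).batch(512), epochs=5)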
Another recently published demonstration, Fast Style Transfer for Arbitrary Styles, shows how to use a different pre-trained model from Magenta to enable fast artistic style transfer with only a few lines of code.
import tensorflow as tf
import tensorflow_hub as hub

# content_image and style_image: float32 tensors, shape [1, h, w, 3], values in [0, 1]
model = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")
stylized_image = model(tf.constant(content_image), tf.constant(style_image))[0]
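The content and style images need to be loaded as batched float tensors first. A minimal sketch, assuming local JPEG files (load_image is an illustrative helper, not part of the TF Hub API):
import tensorflow as tf

def load_image(path):
    # Decode to a float32 tensor in [0, 1] and add a batch dimension.
    img = tf.io.read_file(path)
    img = tf.image.decode_image(img, channels=3, dtype=tf.float32)
    return img[tf.newaxis, ...]

content_image = load_image("content.jpg")
style_image = load_image("style.jpg")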
How to Learn More
We would love to hear your feedback! Please give the new TF Hub a try, and file bugs or feature requests at our GitHub component. If you’re interested in publishing on TensorFlow Hub, please let us know here. We are looking for a small group of alpha testers to help us scale our publishing workflow.