Highlights from the TensorFlow Developer Summit, 2018
March 30, 2018
Posted by Sandeep Gupta, Product Manager for TensorFlow, on behalf of the TensorFlow team
Today, we’re holding the second TensorFlow Developer Summit at the Computer History Museum in Mountain View, CA! The event brings together over 500 TensorFlow users in person and thousands more tuning in to the livestream at TensorFlow events around the world. The day is filled with new product announcements along with technical talks from the TensorFlow team and guest speakers.
Machine learning is solving challenging problems that impact everyone around the world. Problems that we once thought were impossible or too complex are now solvable with this technology. Using TensorFlow, we’ve already seen great advancements in many different fields. For example:
Astrophysicists are using TensorFlow to analyze large amounts of data from the Kepler mission to discover new planets.
Scientists in Africa are using TensorFlow to detect diseases in cassava plants, improving yield for farmers.
See how researchers at PlantVillage (https://plantvillage.psu.edu/) of Penn State University and the International Institute of Tropical Agriculture (IITA) (http://www.iita.org/) are using ML and TensorFlow to help farmers detect diseases in cassava plants.
We’re excited to see these amazing uses of TensorFlow and are committed to making it accessible to more developers. This is why we’re pleased to announce new updates to TensorFlow that will help improve the developer experience!
We’re making TensorFlow easier to use
Researchers and developers want a simpler way of using TensorFlow. We’re integrating a more intuitive programming model for Python developers called eager execution that removes the distinction between the construction and execution of computational graphs. You can develop with eager execution and then use the same code to generate the equivalent graph for training at scale using the Estimator high-level API. We’re also announcing a new method for running Estimator models on multiple GPUs on a single machine. This allows developers to quickly scale their models with minimal code changes.
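To give a concrete flavor of the new programming model, here is a minimal sketch of eager execution, assuming TensorFlow 1.7 or later where tf.enable_eager_execution() is part of the core API:

import tensorflow as tf

# Enable eager execution before any ops are created (core API as of TF 1.7).
tf.enable_eager_execution()

# Operations run immediately and return concrete values; no graph or Session needed.
x = tf.constant([[2.0, 3.0]])
w = tf.constant([[1.0], [4.0]])
print(tf.matmul(x, w))  # tf.Tensor([[14.]], shape=(1, 1), dtype=float32)

The same model-building code can then be reused with the Estimator API when it’s time to train at scale.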
As machine learning models become more abundant and complex, we want to make it easier for developers to share, reuse, and debug them. To help developers share and reuse models, we’re announcing TensorFlow Hub, a library built to foster the publication and discovery of modules (self-contained pieces of TensorFlow graph) that can be reused across similar tasks. Modules contain weights that have been pre-trained on large datasets, and may be retrained and used in your own applications. By reusing a module, a developer can train a model using a smaller dataset, improve generalization, or simply speed up training. To make debugging models easier, we’re also releasing a new interactive graphical debugger plug-in as part of the TensorBoard visualization tool that helps you inspect and step through internal nodes of a computation graph in real-time.
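To illustrate what reuse looks like, here is a hedged sketch of loading a pre-trained text-embedding module with TensorFlow Hub; the module handle below is one example of a published module, and you would substitute whichever module fits your task:

import tensorflow as tf
import tensorflow_hub as hub

# Load a pre-trained text-embedding module by its handle and apply it to raw strings.
embed = hub.Module("https://tfhub.dev/google/nnlm-en-dim128/1")
embeddings = embed(["The quick brown fox.", "TensorFlow Hub modules are reusable."])

with tf.Session() as sess:
    # Hub modules ship with pre-trained variables and lookup tables that must be initialized.
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(embeddings).shape)  # e.g. (2, 128)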
Model training is only one part of the machine learning process and developers need a solution that works end-to-end to build real-world ML systems. Towards this end, we’re announcing the roadmap for TensorFlow Extended (TFX) along with the launch of TensorFlow Model Analysis, an open-source library that combines the power of TensorFlow and Apache Beam to compute and visualize evaluation metrics. The components of TFX that have been released thus far (including TensorFlow Model Analysis, TensorFlow Transform, Estimators, and TensorFlow Serving) are well integrated and let developers prepare data, train, validate, and deploy TensorFlow models in production.
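As one small illustration of how these pieces fit together, here is a hedged sketch of a TensorFlow Transform preprocessing function; the feature names are made up, and in a real pipeline this function is applied to the full dataset with Apache Beam:

import tensorflow_transform as tft

def preprocessing_fn(inputs):
    # Map raw features to the features the model will actually train on.
    # These transforms use full-pass statistics (means, ranges) computed over the whole dataset.
    return {
        "age_scaled": tft.scale_to_z_score(inputs["age"]),
        "income_scaled": tft.scale_to_0_1(inputs["income"]),
    }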
TensorFlow is available in more languages and platforms
Along with making TensorFlow easier to use, we’re announcing that developers can use TensorFlow in new languages. TensorFlow.js is a new ML framework for JavaScript developers. Machine learning in the browser using TensorFlow.js opens exciting new possibilities, including interactive ML and support for scenarios where all data remains client-side. It can be used to build and train models entirely in the browser, as well as import TensorFlow and Keras models trained offline for inference using WebGL acceleration. The Emoji Scavenger Hunt game is a fun example of an application built using TensorFlow.js.
We also have some exciting news for Swift programmers: TensorFlow for Swift will be open sourced this April. TensorFlow for Swift is not your typical language binding for TensorFlow. It integrates first-class compiler and language support, providing the full power of graphs with the usability of eager execution. The project is still in development, with more updates coming soon!
We’re also sharing the latest updates to TensorFlow Lite, TensorFlow’s lightweight, cross-platform solution for deploying trained ML models on mobile and other edge devices. In addition to existing support for Android and iOS, we’re announcing support for Raspberry Pi, increased support for ops/models (including custom ops), and describing how developers can easily use TensorFlow Lite in their own apps. The TensorFlow Lite core interpreter is now only 75 KB in size (vs. 1.1 MB for TensorFlow Mobile), and we’re seeing speedups of up to 3x when running quantized image classification models on TensorFlow Lite vs. TensorFlow.
For hardware support, TensorFlow now integrates with NVIDIA’s TensorRT. TensorRT is a library that optimizes deep learning models for inference and creates a runtime for deployment on GPUs in production environments. It brings a number of optimizations to TensorFlow and automatically selects platform-specific kernels to maximize throughput and minimize latency during inference on GPUs.
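To show roughly how the integration is used, here is a hedged sketch of optimizing a frozen graph with the tensorflow.contrib.tensorrt module; the file name and output node names are illustrative:

import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Load a frozen GraphDef of an already-trained model.
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    frozen_graph_def = tf.GraphDef()
    frozen_graph_def.ParseFromString(f.read())

# Replace TensorRT-compatible subgraphs with optimized TensorRT engine nodes.
trt_graph_def = trt.create_inference_graph(
    input_graph_def=frozen_graph_def,
    outputs=["logits"],                 # output node names of the model (illustrative)
    max_batch_size=8,                   # largest batch size expected at inference time
    max_workspace_size_bytes=1 << 30,   # GPU memory TensorRT may use during optimization
    precision_mode="FP16")              # also supports "FP32" and "INT8"

The returned GraphDef can then be imported and served like any other TensorFlow graph.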
For users who run TensorFlow on CPUs, our partnership with Intel has delivered integration with a highly optimized Intel MKL-DNN open source library for deep learning. When using Intel MKL-DNN, we observed up to 3x inference speedup on various Intel CPU platforms.
The list of platforms that run TensorFlow has grown to include Cloud TPUs, which were released in beta last month. The Google Cloud TPU team has already delivered a strong 1.6x increase in ResNet-50 performance since launch. These improvements will be available to TensorFlow users with the 1.8 release soon.
Enabling new applications and domains using TensorFlow
Many data analysis problems are solved using statistical and probabilistic methods. Beyond deep learning and neural network models, TensorFlow now provides state-of-the-art methods for Bayesian analysis via the TensorFlow Probability API. This library contains building blocks like probability distributions, sampling methods, and new metrics and losses. Many other classical ML methods also have increased support. As an example, boosted decision trees can be easily trained and deployed using pre-made high-level classes.
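Here is a hedged sketch of both ideas, assuming the tensorflow_probability pip package and the pre-made BoostedTreesClassifier estimator available in recent releases; the feature column is illustrative:

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Probabilistic building blocks: define a distribution, sample from it, score data under it.
normal = tfd.Normal(loc=0.0, scale=1.0)
samples = normal.sample(5)
log_probs = normal.log_prob(samples)

# Classical ML via a pre-made high-level class: a boosted-trees classifier.
# Boosted trees consume bucketized (or indicator) feature columns.
age = tf.feature_column.numeric_column("age")
age_buckets = tf.feature_column.bucketized_column(age, boundaries=[18, 30, 45, 60])
classifier = tf.estimator.BoostedTreesClassifier(
    feature_columns=[age_buckets],
    n_batches_per_layer=1)  # number of input batches used to build each layer of the trees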
Machine learning and TensorFlow have already helped solve challenging problems in many different fields. Another area where we see TensorFlow having a big impact is in genomics, which is why we’re releasing Nucleus, a library for reading, writing, and filtering common genomics file formats for use in TensorFlow. This, along with DeepVariant, an open-source TensorFlow based tool for genome variant discovery, will help spur new research and advances in genomics.
Expanding community resources and engagement
These updates to TensorFlow aim to benefit and grow the community of users and contributors — the thousands of people who play a part in making TensorFlow one of the most popular ML frameworks in the world. To continue to engage with the community and stay up-to-date with TensorFlow, we’ve launched the new official TensorFlow blog and the TensorFlow YouTube channel.
We’re also making it easier for our community to collaborate by launching new mailing lists and Special Interest Groups designed to support open-source work on specific projects. To see how you can be a part of the community, visit the TensorFlow Community page and as always, you can follow TensorFlow on Twitter for the latest news.
We’re incredibly thankful to everyone who has helped make TensorFlow a successful ML framework in the past two years. Thanks for attending, thanks for watching, and remember to use #MadeWithTensorFlow to share how you are solving impactful and challenging problems with machine learning and TensorFlow!