TensorFlow 2.0 is now available!
September 30, 2019
Posted by the TensorFlow Team

Earlier this year, we announced TensorFlow 2.0 in alpha at the TensorFlow Dev Summit. Today, we’re delighted to announce that the final release of TensorFlow 2.0 is now available! Learn how to install it here.

TensorFlow 2.0 is driven by the community telling us they want an easy-to-use platform that is both flexible and powerful, and which supports deployment to any platform. TensorFlow 2.0 provides a comprehensive ecosystem of tools for developers, enterprises, and researchers who want to push the state-of-the-art in machine learning and build scalable ML-powered applications.



Coding with TensorFlow 2.0

TensorFlow 2.0 makes development of ML applications much easier. With tight integration of Keras into TensorFlow, eager execution by default, and Pythonic function execution, TensorFlow 2.0 makes the experience of developing applications as familiar as possible for Python developers. For researchers pushing the boundaries of ML, we have invested heavily in TensorFlow’s low-level API: We now export all ops that are used internally, and we provide inheritable interfaces for crucial concepts such as variables and checkpoints. This allows you to build onto the internals of TensorFlow without having to rebuild TensorFlow.
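
As a minimal sketch of that workflow (the layer sizes and input shape here are purely illustrative), a Keras model in TensorFlow 2.0 is built, compiled, and even called eagerly like any other Python object:

    import tensorflow as tf

    # Keras is the recommended high-level API in TensorFlow 2.0.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(32,)),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    # Eager execution is on by default: ops run immediately and return
    # concrete values, so tensors can be inspected like ordinary Python objects.
    x = tf.random.normal([8, 32])
    print(model(x).shape)  # (8, 10)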

A flowchart of TensorFlow 2.0 (use tf.data to read data, Keras to train models, Distribution Strategies to distribute training, SavedModel to save, then deploy with TensorFlow Serving, TensorFlow Lite, or TensorFlow.js)
To be able to run models on a variety of runtimes, including the cloud, the web browser, Node.js, and mobile and embedded systems, we have standardized on the SavedModel file format. This allows you to run your models with TensorFlow, deploy them with TensorFlow Serving, use them on mobile and embedded systems with TensorFlow Lite, and train and run them in the browser or Node.js with TensorFlow.js.
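
As a rough sketch (the directory path and model are placeholders), exporting a model to the SavedModel format and reloading or converting it looks like this:

    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(32,))])

    # Export to the SavedModel format; the resulting directory is what
    # TensorFlow Serving, TensorFlow Lite, and TensorFlow.js converters consume.
    tf.saved_model.save(model, '/tmp/my_saved_model')

    # Reload the same artifact for further use in Python.
    reloaded = tf.saved_model.load('/tmp/my_saved_model')

    # For example, the same SavedModel can be converted for TensorFlow Lite.
    converter = tf.lite.TFLiteConverter.from_saved_model('/tmp/my_saved_model')
    tflite_model = converter.convert()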

For high-performance training scenarios, you can use the Distribution Strategy API to distribute training with minimal code changes and attain great out-of-the-box performance. It supports distributed training with Keras’ Model.fit and also supports custom training loops. Multi-GPU support is now available, and you can learn more about using GPUs on Google Cloud here. Cloud TPU support is coming in a future release. Check out the distributed training guide for more details.
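
The pattern, in a minimal sketch (synthetic data and a hypothetical model, just to make it runnable), is to create the model inside the strategy's scope and then call Model.fit as usual:

    import tensorflow as tf

    # Mirror variables across all local GPUs (falls back to CPU if none are found).
    strategy = tf.distribute.MirroredStrategy()

    with strategy.scope():
        # Variables created inside the scope are replicated on every device.
        model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
        model.compile(optimizer='adam', loss='mse')

    # Synthetic data, purely for illustration.
    dataset = tf.data.Dataset.from_tensor_slices(
        (tf.random.normal([256, 32]), tf.random.normal([256, 10]))).batch(32)

    # Model.fit splits each batch across replicas automatically.
    model.fit(dataset, epochs=2)
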
TensorFlow 2.0 also offers many performance improvements on GPUs. It delivers up to 3x faster training performance using mixed precision on Volta and Turing GPUs with just a few lines of code, as demonstrated on models such as ResNet-50 and BERT. TensorFlow 2.0 is tightly integrated with TensorRT and uses an improved API to deliver better usability and high performance during inference on NVIDIA T4 Cloud GPUs on Google Cloud.
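
Here is a rough sketch of those "few lines of code" using the Keras mixed precision policy API (note: in the 2.0 release this API lived in an experimental namespace; the call below reflects its later, stable location):

    import tensorflow as tf

    # Compute in float16 on Volta/Turing Tensor Cores while keeping variables in float32.
    # (Experimental in TensorFlow 2.0 as tf.keras.mixed_precision.experimental.set_policy;
    # stable in later releases as shown here.)
    tf.keras.mixed_precision.set_global_policy('mixed_float16')

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(32,)),
        tf.keras.layers.Dense(10),
        # Keep the final softmax in float32 for numerical stability.
        tf.keras.layers.Activation('softmax', dtype='float32'),
    ])
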
“Machine learning on NVIDIA GPUs and systems allows developers to solve problems that seemed impossible just a few years ago,” said Kari Briski, Senior Director of Accelerated Computing Software Product Management at NVIDIA. “TensorFlow 2.0 is packed with many great GPU acceleration features, and we can’t wait to see the amazing AI applications the community will create with these updated tools.”
Efficient access to training and validation data is paramount when building models in TensorFlow. We introduced TensorFlow Datasets, which provides a standard interface to a plethora of datasets spanning a variety of data types such as images, text, and video.
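
For example, loading a dataset from TensorFlow Datasets yields a ready-to-use tf.data.Dataset (MNIST is used here purely as an illustration):

    import tensorflow as tf
    import tensorflow_datasets as tfds

    # Download (if needed) and load MNIST as a tf.data.Dataset of (image, label) pairs.
    ds_train = tfds.load('mnist', split='train', as_supervised=True)

    # Standard input pipeline: normalize, shuffle, batch, prefetch.
    ds_train = (ds_train
                .map(lambda image, label: (tf.cast(image, tf.float32) / 255.0, label))
                .shuffle(10000)
                .batch(128)
                .prefetch(tf.data.experimental.AUTOTUNE))
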
While the traditional Session-based programming model is still supported, we recommend using regular Python development with eager execution. The tf.function decorator can be used to convert your code into graphs, which can then be executed remotely, serialized, and optimized for performance. This is complemented by AutoGraph, which converts regular Python control flow directly into TensorFlow control flow.
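
A minimal sketch of the idea (the function itself is hypothetical): decorate ordinary Python, and tf.function plus AutoGraph trace it into a graph, control flow included:

    import tensorflow as tf

    @tf.function  # traces the Python function into a TensorFlow graph
    def scale_down(x):
        # AutoGraph rewrites this Python while loop into graph control flow.
        while tf.reduce_sum(x) > 1.0:
            x = x / 2.0
        return x

    print(scale_down(tf.constant([2.0, 3.0])))  # executes as an optimized graph
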
And of course, if you have used TensorFlow 1.x and are looking for a migration guide to 2.0, we have published one here. The TensorFlow 2.0 release also includes an automatic conversion script to help get you started.
We have worked with many users inside Google and across the TensorFlow community to test TensorFlow 2.0 features and have been thrilled with the feedback. As an example, the Google News team launched a BERT-based language understanding model in TensorFlow 2.0 that significantly improved story coverage. TensorFlow 2.0 provides easy-to-use APIs with the flexibility to implement new ideas quickly, and model training and serving were seamlessly integrated into the team's existing infrastructure.
Also, ML isn’t just for Python developers: with TensorFlow.js, training and inference are available to JavaScript developers, and we continue to invest in Swift as a language for building models through the Swift for TensorFlow library.
There’s a lot to unpack here, so to assist, we’ve created a handy guide on how to be effective with all that’s new in TensorFlow 2.0. To make it even easier to get started with TensorFlow 2.0, we are releasing reference implementations of several commonly used ML models using the 2.0 API here.
Additionally, to learn how to build applications using TensorFlow 2.0, check out the online courses we have created together with deeplearning.ai and Udacity.
To get started quickly, try Google Cloud’s Deep Learning VM Images: pre-configured virtual machines that require no setup and help you build your TensorFlow 2.0 deep learning projects.

Learn More

Head over to tensorflow.org for more about TensorFlow 2.0, including how to download and get started with coding ML applications. Finally, for more exciting announcements, talks, and hands-on training and presentations about TensorFlow 2.0 and its ecosystem, please join us in person at TensorFlow World on Oct 28–31 in Santa Clara, CA. We hope to see you there!
