Building the Future of TensorFlow
October 20, 2022

Posted by the TensorFlow team

We’ve started planning the future of TensorFlow! In this article, we’d like to share our vision.

We open-sourced TensorFlow nearly seven years ago, on November 9, 2015. Since then, thanks to thousands of open-source contributors and our incredible community of Google Developer Experts, community organizers, researchers, and educators around the globe, TensorFlow has come to define its category. 

Today, TensorFlow is the most-used machine learning platform, adopted by millions of developers. It’s the 3rd most-starred software repository on GitHub (right behind Vue and React) and the most-downloaded machine learning package on PyPI. It has brought machine learning to the mobile ecosystem: TFLite now runs on four billion devices (maybe on yours, too!). TensorFlow has also brought machine learning to the Web: TensorFlow.js is now downloaded 170 thousand times weekly.

Across Google's product lineup, TensorFlow powers virtually all production machine learning, including Search, Gmail, YouTube, Maps, Play, Ads, Photos, and many more. Beyond Google, at other Alphabet companies, TensorFlow and Keras enable the machine intelligence in Waymo's self-driving cars. 

In the broader industry, TensorFlow powers machine learning systems at thousands of companies, including most of the largest machine learning users in the world – Apple, ByteDance, Netflix, Tencent, Twitter, and countless more. And in the research world, every month, Google Scholar is indexing over 3,000 new scientific publications that mention TensorFlow or Keras.

Today, our user base and developer ecosystem are larger than ever, and growing!

We see the growth of TensorFlow not just as an achievement to celebrate, but as an opportunity to go further and deliver more value for the machine learning community.

Our goal is to provide the best machine learning platform on the planet. Software that will become a new superpower in the toolbox of every developer. Software that will turn machine learning from a niche craft into an industry as mature as web development.

To achieve this, we listen to the needs of our users, anticipate new industry trends, iterate on our APIs, and work to make it increasingly easy for you to innovate at scale. In the same way that TensorFlow originally helped the rise of deep learning, we want to continue to facilitate the evolution of machine learning by giving you the platform that lets you push the boundaries of what's possible. Machine learning is evolving rapidly, and so is TensorFlow.

Today, we're excited to announce we've started working on the next iteration of TensorFlow that will enable the next decade of machine learning development. We are building on TensorFlow's class-leading capabilities, and focusing on four pillars.

Four pillars of TensorFlow

Fast and scalable

  • XLA Compilation. We are focusing on XLA compilation and aim to make most model training and inference workflows faster on GPU and CPU, building on XLA’s performance wins on TPU. We intend for XLA to become the industry-standard deep learning compiler, and we’ve opened it up to open-source collaboration as part of the OpenXLA initiative.
  • Distributed computing. We are investing in DTensor, a new API for large-scale model parallelism. DTensor unlocks the future of ultra-large model training and deployment and allows you to develop your model as if you were training on a single device, even while using multiple clients. DTensor will be unified with the tf.distribute API, allowing for flexible model and data parallelism.
  • Performance optimization. Besides compilation, we are also further investing in algorithmic performance optimization techniques such as mixed-precision and reduced-precision computation, which can deliver considerable speedups on GPUs and TPUs. (A short sketch of opting into XLA compilation and mixed precision follows this list.)
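To make the first and third bullets concrete, here is a minimal sketch of how you can already opt into XLA compilation and mixed precision in TensorFlow 2 today; the model and shapes are placeholders, and actual speedups depend on your model and hardware.

```python
import tensorflow as tf

# Mixed precision: compute in float16 while keeping float32 variables.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
    # Keep the final softmax in float32 for numerical stability.
    tf.keras.layers.Activation("softmax", dtype="float32"),
])

optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

# XLA compilation: jit_compile=True asks TensorFlow to compile the traced
# graph with XLA instead of executing it op by op.
@tf.function(jit_compile=True)
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```

Recent TensorFlow releases also accept `jit_compile=True` directly in `Model.compile()`, so Keras training loops can pick up XLA without a custom training step.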

Applied ML

  • New tools for CV and NLP. We are investing in our ecosystem for applied ML, in particular via the KerasCV and KerasNLP packages which offer modular and composable components for applied CV and NLP use cases, including a large array of state-of-the-art pretrained models. (A hedged usage sketch follows this list.)
  • Production grade solutions. We are expanding the TF Model Garden (GitHub) to cover a broad spectrum of ML tasks and domains. The Model Garden provides end-to-end production-grade modeling solutions. It has many reproducible canonical state-of-the-art (SOTA) model implementations for Computer Vision and Natural Language Processing (NLP). It also offers a training codebase to allow you to quickly run machine learning experiments using these models and export to standard TF serving formats.
  • Developer resources. We are adding more code examples, guides, and documentation for popular and emerging applied ML use cases. We aim to increasingly reduce the barrier to entry of ML and turn it into a tool in the hands of every developer.
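As a flavor of the workflow KerasNLP targets, here is a hedged sketch of fine-tuning a pretrained classifier from a preset. The packages are young and evolving, so the preset name and exact API details below are illustrative; check the current KerasNLP documentation, and note that the tiny dataset is a placeholder.

```python
import tensorflow as tf
import keras_nlp

# Tiny illustrative dataset of (text, label) pairs.
train_ds = tf.data.Dataset.from_tensor_slices(
    (["great movie", "terrible movie"], [1, 0])
).batch(2)

# Load a pretrained BERT-based classifier from a preset; preprocessing is
# bundled, so the model accepts raw strings.
classifier = keras_nlp.models.BertClassifier.from_preset(
    "bert_base_en_uncased",  # preset name is illustrative
    num_classes=2,
)

classifier.fit(train_ds, epochs=1)
print(classifier.predict(["what a fantastic film"]))
```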

Ready to deploy

  • Easier exporting. We are making it even easier to export to mobile (Android or iOS), edge (microcontrollers), server backends, or JavaScript. Exporting your model to TFLite and TF.js and optimizing its inference performance will be as easy as a call to `model.export()`. (A sketch of today's export path, which this will simplify, follows this list.)
  • C++ API for applications. We are developing a public TF2 C++ API for native server-side inference as part of a C++ application.
  • Deploy JAX models. We are making it easier for you to deploy models developed using JAX with TensorFlow Serving, and to mobile and the web with TensorFlow Lite and TensorFlow.js. 
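For comparison, here is a minimal sketch of the export path as it works in TensorFlow 2 today, which the planned `model.export()` call aims to collapse into a single step. The model and file paths are placeholders.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, input_shape=(4,)),
])

# Server-side serving: write a SavedModel that TensorFlow Serving can load.
model.save("/tmp/my_model")  # placeholder path

# Mobile/edge: convert the Keras model to a TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional quantization
tflite_bytes = converter.convert()
with open("/tmp/my_model.tflite", "wb") as f:
    f.write(tflite_bytes)
```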

Simplicity

  • NumPy API. As the field of ML has expanded over the last few years, TensorFlow’s API surface has also grown, not always in ways that are consistent or easy to understand. We are actively working on consolidating and simplifying these APIs. For example, we will be adopting the NumPy API standard for numerics. (A short sketch of the existing NumPy-compatible surface follows this list.)
  • Easier debugging. A framework isn't just its API surface, it's also its debugging experience. We aim to minimize the time-to-solution for developing any applied ML system by focusing on better debugging capabilities.
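TensorFlow already ships a NumPy-compatible API surface in `tf.experimental.numpy`, which is one step in this direction; the snippet below is a minimal sketch of using it, not a preview of the final standardized API.

```python
import tensorflow.experimental.numpy as tnp

# Opt into NumPy-style type promotion and operator semantics for TF tensors.
tnp.experimental_enable_numpy_behavior()

x = tnp.ones((3, 3), dtype=tnp.float32)
y = tnp.arange(3, dtype=tnp.float32)

# Broadcasting and operators follow NumPy rules, but run on TF's backend
# (and therefore on GPU/TPU where available).
z = (x @ y) + tnp.sqrt(y)
print(z)
```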

The future of TensorFlow will be 100% backwards-compatible

We want TensorFlow to serve as a bedrock foundation for the machine learning industry to build upon. We see API stability as our most important feature. Whether you are an engineer who relies on TensorFlow as part of a product or a builder of a TensorFlow ecosystem package, you should be able to upgrade to the latest TensorFlow version and immediately start benefiting from its new features and performance improvements – without fear that your existing codebase might break. As such, we commit to full backwards compatibility from TensorFlow 2 to the next version – your TensorFlow 2 code will run as-is. There will be no conversion script to run, no manual changes to apply.

Timeline

We plan to release a preview of the new TensorFlow capabilities in Q2 2023 and will release the production version later in the year. We will publish regular updates on our progress in the meantime. You can follow our progress via the TensorFlow blog, and on the TensorFlow YouTube channel.

Your feedback is welcome

We want to hear from you! For questions or feedback, please reach out via the TensorFlow forum.
