Posted by Jeremy Howard, fast.ai, and Chris Lattner, Distinguished Engineer
The TensorFlow team is constantly innovating on technologies to unlock cutting-edge research and power production-scale workloads. Some of our most valuable advances come from collaborations with leading research teams outside Google, especially researchers focused on creative deep learning research, high-quality educational materials, and developer-friendly, intuitive APIs.
The Swift for TensorFlow team has been collaborating with Jeremy Howard, creator of fast.ai and former president of Kaggle, since early 2019. In April, Jeremy and Chris Lattner co-taught two advanced sessions of Deep Learning from the Foundations. Twenty-eight notebooks were created for the course, and over a thousand students joined the live class, in person and online, to develop their deep learning skills. Today, we are excited to announce that a MOOC based on the recorded lectures, a companion fastai library, and all course materials are now publicly available!
In research earlier this year, Jeremy found that “Swift can match the performance of hand-tuned assembly code from numerical library vendors”. He said: “Swift for TensorFlow is the first serious effort I’ve seen to incorporate differentiable programming deep into the heart of a widely used language that is designed from the ground up for performance”. As a demonstration in the MOOC, Jeremy shows how to build a new image processing pipeline in Swift with exceptional performance, right in a Jupyter notebook.
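The course pipeline itself isn’t reproduced here, but the following minimal sketch gives a flavor of the kind of low-level numeric loop you can write in plain Swift and benchmark directly in a Swift Jupyter notebook. The `dot` function and its inputs are illustrative, not excerpts from the course materials.

```swift
// Illustrative only: a plain-Swift dot product over raw buffers,
// the sort of hot loop you can write and time in a notebook.
func dot(_ a: [Float], _ b: [Float]) -> Float {
    precondition(a.count == b.count, "vectors must have the same length")
    var total: Float = 0
    // withUnsafeBufferPointer gives direct access to the arrays' storage.
    a.withUnsafeBufferPointer { pa in
        b.withUnsafeBufferPointer { pb in
            for i in 0..<pa.count {
                total += pa[i] * pb[i]
            }
        }
    }
    return total
}

let x = [Float](repeating: 1.5, count: 1_000_000)
let y = [Float](repeating: 2.0, count: 1_000_000)
print(dot(x, y))
```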
To support the launch of the latest fast.ai curriculum, we’re also releasing Swift for TensorFlow v0.4, which includes:
- Support for automatic differentiation of functions with control flow at compile time (see the sketch after this list).
- A prototype new execution mode — Lazy Tensor — that has the potential to unlock higher performance on accelerators such as GPUs and TPUs.
- Many new activation functions and layers, as well as a collection of notebooks and tutorials.
- Several new models in our model garden, including ResNet, Transformer, and MiniGo.
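As a rough illustration of the first item above (not an excerpt from the release or the course), here is a minimal sketch of differentiating a function that branches, using Swift for TensorFlow’s `@differentiable` attribute and the `gradient(at:in:)` function. `leakyAbs` is a made-up example function.

```swift
import TensorFlow

// A made-up differentiable function that branches on its input.
// Differentiating through the `if` is the control-flow support mentioned above.
@differentiable
func leakyAbs(_ x: Float) -> Float {
    if x >= 0 {
        return x
    } else {
        return -0.1 * x
    }
}

let x: Float = -3.0
// On the negative branch, d/dx of -0.1 * x is -0.1.
let dydx = gradient(at: x, in: leakyAbs)
print(dydx)  // expected: -0.1
```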
To find out more about these new developments, and to see the full list of improvements, be sure to check out the release notes. Finally, we’ve updated all Colab tutorials and fast.ai notebooks to be compatible with this latest release. You can run the notebooks locally on Linux and macOS, or get started right in your browser using Google Colab.
Make sure to join our mailing list, as well as our open design meetings each Friday at 9:00am PT / 16:00 UTC.
Meeting recaps, questions, and discussion around upcoming features are also available for review. We are excited about the new capabilities that have been unlocked by Swift for TensorFlow, and cannot wait to see what you create!