https://blog.tensorflow.org/2019/02/mit-deep-learning-basics-introduction-tensorflow.html?hl=hi

Education


February 04, 2019 —
*Guest post by Lex Fridman*

As part of the MIT Deep Learning series of lectures and GitHub tutorials, we are covering the basics of using neural networks to solve problems in computer vision, natural language processing, games, autonomous driving, robotics, and beyond.

This blog post provides an overview of deep learning in **7 architectural paradigms** with links to TensorFlow tutorials for each. It ac…

MIT Deep Learning Basics: Introduction and Overview with TensorFlow

MIT Deep Learning series of courses (6.S091, 6.S093, 6.S094). Lecture videos and tutorials are open to all.


My favorite example of the former is Copernicus's publication, in 1543, of the heliocentric model.

Heliocentrism (1543) vs Geocentrism (6th century BC).

**Encoders** find patterns in raw data to form compact, useful representations.

**Decoders** generate high-resolution data from those representations. The generated data is either new examples or descriptive knowledge.

Dense encoders are used to map an already compact set of numbers on the input to a prediction: either a classification (discrete) or a regression (continuous).
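As a minimal NumPy sketch (illustrative only, not the lecture code — all names and sizes here are made up), a dense encoder is just a stack of affine maps and nonlinearities; a softmax over the final scores gives a discrete classification, while a linear output would give a regression:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b, activation=None):
    """One densely-connected layer: affine map plus optional nonlinearity."""
    z = x @ w + b
    return np.maximum(z, 0.0) if activation == "relu" else z

def softmax(z):
    """Turn raw scores (logits) into class probabilities."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy input: a batch of 4 examples with 8 features each.
x = rng.normal(size=(4, 8))

# Two dense layers mapping 8 features -> 16 hidden units -> 3 class scores.
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 3)), np.zeros(3)

h = dense(x, w1, b1, activation="relu")   # hidden representation
probs = softmax(dense(h, w2, b2))         # discrete prediction (classification)
print(probs.shape)        # (4, 3): one probability vector per example
```

Swapping the softmax for an identity output (and the cross-entropy loss for a squared error) turns the same architecture into a regressor.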

Error on the training and validation sets as the network learns.

Instead of using only densely-connected layers, they use convolutional layers (convolutional encoder). These networks are used for image classification, object detection, video action recognition, and any data that has some spatial invariance in its structure (e.g., speech audio).
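The core operation a convolutional layer performs can be sketched in a few lines of NumPy (an illustrative toy, not how a real framework implements it): the same small filter slides over the whole input, so a pattern is detected wherever it occurs.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution (really cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# An image that is dark (0) on the left and bright (1) on the right...
image = np.zeros((5, 5))
image[:, 2:] = 1.0
# ...and a small vertical-edge filter: it responds only where brightness jumps.
edge_filter = np.array([[-1.0, 1.0],
                        [-1.0, 1.0],
                        [-1.0, 1.0]])

response = conv2d(image, edge_filter)
print(response)  # nonzero only in the column where the edge sits
```

In a trained network the filter weights are learned rather than hand-written, and many filters run in parallel to build up a feature map.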

Classification predictions (right) of the morphing, generated handwritten digits (left).

Many variants of RNN modules, including LSTMs and GRUs, have been developed to help learn patterns in longer sequences. Applications include natural language modeling, speech recognition, speech generation, etc.
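The recurrence these modules share can be shown with a vanilla RNN cell (a NumPy sketch, deliberately simpler than an LSTM or GRU; all sizes are arbitrary): the same weights are applied at every time step, with the hidden state carrying information forward through the sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes for a toy vanilla RNN: 4-dim inputs, 8-dim hidden state.
input_dim, hidden_dim, seq_len = 4, 8, 5
w_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
w_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

def rnn_forward(inputs):
    """Run a vanilla RNN over a sequence, reusing the same weights at each step."""
    h = np.zeros(hidden_dim)
    states = []
    for x_t in inputs:  # one update per element of the sequence
        h = np.tanh(x_t @ w_xh + h @ w_hh + b_h)
        states.append(h)
    return np.stack(states)

sequence = rng.normal(size=(seq_len, input_dim))
states = rnn_forward(sequence)
print(states.shape)  # (5, 8): one hidden state per time step
```

LSTMs and GRUs replace the single `tanh` update with gated updates, which is what lets them preserve information over much longer sequences.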

Source: Text Generation with TensorFlow

Note that the encoder and decoder can be very different from each other. For example, an image captioning network may have a convolutional encoder (for an image input) and a recurrent decoder (for a natural language output). Applications include semantic segmentation, machine translation, etc.
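The structural idea — an encoder compresses the input into a representation, and a possibly very different decoder expands it — can be sketched as follows (an illustrative NumPy toy with random weights, nothing like a real captioning model):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Image" encoder: one dense layer pools 16 pixel features into a 6-dim code.
w_enc = rng.normal(scale=0.1, size=(16, 6))

# Recurrent decoder: unrolls that code into a sequence of 10-way token scores.
w_hh = rng.normal(scale=0.1, size=(6, 6))
w_out = rng.normal(scale=0.1, size=(6, 10))

def encode(image_features):
    """Compress raw features into a compact representation."""
    return np.tanh(image_features @ w_enc)

def decode(code, steps=4):
    """Greedily unroll the code into a short token sequence."""
    h, tokens = code, []
    for _ in range(steps):           # one output token per step
        h = np.tanh(h @ w_hh)
        tokens.append(int(np.argmax(h @ w_out)))
    return tokens

caption = decode(encode(rng.normal(size=16)))
print(len(caption))  # 4 token ids
```

The point is the interface: the encoder and decoder only meet at the compact code, so either side can be swapped for a different modality.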

Tutorial: Driving Scene Segmentation with TensorFlow

Since the ground truth data comes from the input data, no human effort is required. In other words, it’s self-supervised. Applications include unsupervised embeddings, image denoising, etc. But most importantly, its fundamental idea of “representation learning” is central to the generative models in the next section and all of deep learning.
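A minimal sketch of that self-supervision (a tied-weight linear autoencoder in NumPy, purely illustrative): the network squeezes the input through a low-dimensional bottleneck and is scored on how well it reconstructs the input itself — no labels appear anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tie the weights for simplicity: encode with W, decode with its transpose.
w = rng.normal(scale=0.1, size=(12, 3))   # 12-dim input -> 3-dim bottleneck

def reconstruct(x):
    code = x @ w          # encoder: compress to the bottleneck
    return code @ w.T     # decoder: expand back to input space

x = rng.normal(size=(5, 12))
x_hat = reconstruct(x)

# Self-supervised loss: the input itself is the target -- no human labels needed.
loss = np.mean((x - x_hat) ** 2)
print(x_hat.shape)  # (5, 12): same shape as the input
```

Training would adjust `w` to shrink this reconstruction error, forcing the bottleneck code to capture the most useful structure in the data — the "representation learning" idea.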

Over the past few years, many variants and improvements for GANs have been proposed, including the ability to generate images from a particular class, the ability to map images from one domain to another, and an incredible increase in the realism of generated images. See the lecture on Deep Learning State of the Art that touches on and contextualizes the rapid development of GANs. For example, take a look at three samples generated from a single category (fly agaric mushroom) by BigGAN (arXiv paper):
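The adversarial setup behind all of these variants can be shown in one forward pass (a linear NumPy toy with random weights — an illustrative sketch, nothing resembling BigGAN): the generator turns noise into samples, the discriminator scores real versus fake, and the two losses pull in opposite directions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear generator: 2-dim noise -> 4-dim "sample".
g_w = rng.normal(scale=0.5, size=(2, 4))
# Toy linear discriminator: 4-dim sample -> probability of being real.
d_w = rng.normal(scale=0.5, size=(4,))

noise = rng.normal(size=(8, 2))
fake = noise @ g_w                        # generator forward pass
real = rng.normal(loc=2.0, size=(8, 4))   # stand-in for real training data

p_real = sigmoid(real @ d_w)
p_fake = sigmoid(fake @ d_w)

# The discriminator wants p_real -> 1 and p_fake -> 0 ...
d_loss = -np.mean(np.log(p_real + 1e-9) + np.log(1 - p_fake + 1e-9))
# ... while the generator wants the discriminator fooled: p_fake -> 1.
g_loss = -np.mean(np.log(p_fake + 1e-9))
print(d_loss > 0, g_loss > 0)  # both losses are positive here
```

Training alternates gradient steps on these two losses; at equilibrium the generator's samples become indistinguishable from the real data.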

Images generated by BigGAN.

MIT DeepTraffic: Deep Reinforcement Learning Competition

Finally, many deep learning systems combine these architectures in complex ways to jointly learn from multi-modal data or to jointly solve multiple tasks. Many of these concepts are covered in the other lectures for the course, with more coming soon.

On a personal note, as I said in the comments, it’s humbling for me to have the opportunity to teach at MIT and exciting to be part of the AI and TensorFlow community. Thank you all for the support and great discussions over the past few years. It’s been an amazing ride. If you have suggestions for topics I should cover in future lectures, let me know (on Twitter or LinkedIn).
