MIT Introduction to Deep Learning
February 28, 2019
Guest post by MIT 6.S191 Introduction to Deep Learning

MIT Introduction to Deep Learning lectures and labs are open-source and free for everyone!

MIT 6.S191: Introduction to Deep Learning is an introductory course offered formally at MIT and open-sourced on its course website. The class consists of a series of foundational lectures on the fundamentals of neural networks and their applications to sequence modeling, computer vision, generative models, and reinforcement learning.

MIT’s official motto is “Mens et Manus” — Mind and Hand — so it’s no coincidence that we, too, are big believers in this philosophy. As the organizers and lecturers for MIT’s Introduction to Deep Learning, we wanted to develop a course that focused on both the conceptual foundation and the practical skills it takes to understand and implement deep learning algorithms. And we’re beyond excited to share what we’ve put together with you, here and now: a series of nine technical lectures and three TensorFlow software labs, designed to be accessible to a variety of technical backgrounds, free and open to all.
MIT’s Introduction to Deep Learning consists of technical lectures on state-of-the-art algorithms as well as applied software labs in TensorFlow.
Mens: Technical Lectures. We start from the very fundamentals of neural networks — the Perceptron, fully connected networks, and the backpropagation algorithm; journey through recurrent and convolutional neural networks, generative models, and deep reinforcement learning; and explore the expanding frontiers of modern deep learning research, concluding with a series of guest lectures from leading industry researchers. All lectures are free and open to all; check out the introduction below.
All lectures are available online for free — click here to watch!
Manus: TensorFlow Software Labs. We’ve designed three open-source, interactive TensorFlow software labs that cover the basics of TensorFlow; recurrent neural network models for music generation; computer vision and debiasing facial detection systems; and deep reinforcement learning. Labs are run in Google’s awesome Colaboratory environment (all you need to get started is a Google account!), and include “TODO” code blocks left for you to complete. We guide you through how to define and train deep learning models using TensorFlow’s Keras API and its new imperative execution style, as in the sketch below.
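As a minimal sketch of that workflow (our own illustration, not code from the labs; the layer sizes and data shapes here are assumptions), here is a small Keras model and one imperative training step:

import tensorflow as tf

# A small fully connected model defined with the Keras API.
# The layer sizes are illustrative, not the ones used in the labs.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10)
])

optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# One imperative training step: operations execute eagerly, and
# tf.GradientTape records them so gradients can be computed directly.
def train_step(x, y):
  with tf.GradientTape() as tape:
    logits = model(x)
    loss = loss_fn(y, logits)
  grads = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(grads, model.trainable_variables))
  return loss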

This blog highlights each of these three software labs and their accompanying lectures.
Gain practical experience with in-depth TensorFlow software labs.

Lab 1: Intro to TensorFlow & Music Generation

Designing the course and the labs to be accessible to as many people as possible was a big priority for us. So, Lecture 1 focuses on neural network fundamentals, and the first module in Lab 1 provides a clean introduction to TensorFlow, written in preparation for the upcoming release of TensorFlow 2.0.

Our introduction to TensorFlow exercises highlight a few key concepts in particular: how to execute computations using math operators, how to define neural network models, and how to use automatic differentiation to train networks with backpropagation.
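To make these concepts concrete, here is a tiny illustrative example of our own (not taken from the lab, and assuming TensorFlow 2.0-style eager execution) showing math operators and gradient computation with tf.GradientTape:

import tensorflow as tf

# Execute a computation with math operators (eager execution).
a = tf.constant(3.0)
b = tf.constant(4.0)
c = tf.sqrt(tf.square(a) + tf.square(b))  # evaluates immediately to 5.0

# Automatic differentiation: record operations on a tape, then ask for
# the gradient of the output with respect to the input.
x = tf.Variable(2.0)
with tf.GradientTape() as tape:
  y = x ** 3 + 2.0 * x
dy_dx = tape.gradient(y, x)  # 3*x^2 + 2 = 14.0 at x = 2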

Following the Intro to TensorFlow module, Lab 1’s second module dives right into building and applying a recurrent neural network (RNN) for music generation, designed to accompany Lecture 2 on deep sequence modeling. You’ll build an AI algorithm that can generate brand new, never-heard-before Irish folk music. Why Irish folk music, you may ask? Well, we think these cute dancing clovers (courtesy of Google) are reason enough.
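For a rough sense of the model you’ll build (our own illustrative sketch; the vocabulary size and layer dimensions are assumptions, not the lab’s exact values), a character-level RNN over ABC-notation text can be defined with a few Keras layers:

import tensorflow as tf

# Illustrative hyperparameters; the lab's actual values may differ.
vocab_size = 83      # unique characters in the ABC-notation corpus
embedding_dim = 256
rnn_units = 1024

# A character-level RNN: embed each character, run an LSTM over the
# sequence, and predict a distribution over the next character.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim),
    tf.keras.layers.LSTM(rnn_units, return_sequences=True),
    tf.keras.layers.Dense(vocab_size)  # logits over the next character
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

Generation then works by repeatedly sampling a character from the model’s predicted distribution and feeding it back in as the next input.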



You’ll fill in code blocks to define the RNN model, train the model using a dataset of Irish folk songs (in the ABC notation), use the learned model to generate a new song, and then play back what’s generated to hear how well your model performs. Check out this example song we generated:


Lab 2: Computer Vision: Debiasing Facial Detection Systems

Lab 2 accompanies lectures on deep computer vision and deep generative models. Part 1 provides continued practice with the implementation of fundamental neural network architectures through an example of convolutional neural networks (CNNs) for classification of handwritten digits in the famous MNIST dataset.
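A minimal CNN for this task might look like the following sketch (an assumed architecture in the spirit of the lab, not its exact one):

import tensorflow as tf

# A small CNN for 28x28 grayscale MNIST digits; the filter counts and
# layer sizes are illustrative assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10)  # logits for the 10 digit classes
])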

The second portion of this lab takes things a step further, and explores two prominent examples of applied deep learning: facial detection and algorithmic bias. Though it may be no surprise that neural networks perform really well at recognizing faces in images, there’s been a lot of attention recently on how some of this AI may suffer from hidden algorithmic bias. Actually, it turns out that deep learning itself can help fight this bias.

In recent work, we trained a model, based on a variational autoencoder (VAE), that learns both a specific task, like face detection, and the underlying structure of the training data. In turn, the algorithm uses this learned latent structure to uncover and minimize hidden biases. When applied to the facial detection task, our algorithm decreased categorical bias and maintained high overall accuracy compared to state-of-the-art models.

This software lab is inspired by this work: you’ll actually build this debiasing model and evaluate its efficacy in debiasing the facial detection task.
In addition to thinking about issues of algorithmic bias — and how to combat them — you’ll gain practical experience working with VAEs, an architecture that is often not highlighted in deep learning implementation tutorials. For example, you’ll fill out this code block that defines the loss function for the VAE used in the debiasing model:
import tensorflow as tf

def debiasing_loss_func(x, x_pred, y_label, y_logit, z_mu, z_logsigma, kl_weight=0.005):
  # reconstruction loss: per-pixel MSE, averaged over the image dimensions
  reconstruction_loss = tf.reduce_mean(tf.keras.losses.MSE(x, x_pred), axis=(1, 2))
  # classification loss: binary cross-entropy on the face/no-face logits
  classification_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=y_label, logits=y_logit)
  # KL divergence between the latent posterior and a unit Gaussian prior
  kl_loss = 0.5 * tf.reduce_sum(tf.exp(z_logsigma) + tf.square(z_mu) - 1.0 - z_logsigma, axis=1)

  # propagate debiasing gradients only on relevant datapoints (faces, where y_label == 1)
  gradient_mask = tf.cast(tf.equal(y_label, 1), tf.float32)

  # define the total debiasing loss as a combination of the three losses
  vae_loss = kl_weight * kl_loss + reconstruction_loss
  total_loss = tf.reduce_mean(classification_loss + gradient_mask * vae_loss)
  return total_loss
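A loss like this is then used inside an imperative training step. Here is a hedged sketch of what that can look like, where debiasing_vae is a hypothetical model (not the lab’s actual code) that returns the classification logits, latent parameters, and reconstruction for a batch:

optimizer = tf.keras.optimizers.Adam(learning_rate=5e-4)  # illustrative rate

def train_step(x, y_label):
  with tf.GradientTape() as tape:
    # hypothetical model call, for illustration only
    y_logit, z_mu, z_logsigma, x_pred = debiasing_vae(x)
    loss = debiasing_loss_func(x, x_pred, y_label, y_logit, z_mu, z_logsigma)
  grads = tape.gradient(loss, debiasing_vae.trainable_variables)
  optimizer.apply_gradients(zip(grads, debiasing_vae.trainable_variables))
  return loss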
Importantly, this approach can be applied beyond facial detection to any setting where we want to debias against imbalances that exist within the data.

If you’re excited to learn more, check out the paper: Amini, Soleimany, et al., “Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure”. AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, 2019.

Lab 3: Model-Free Reinforcement Learning

In the final lab, students explore a different class of learning problems than in the previous two labs, building on Lecture 5, which introduces foundational techniques in deep reinforcement learning.

Compared to the previous labs, which focused on supervised and unsupervised learning, reinforcement learning seeks to teach an agent how to act in the world so as to maximize its own reward. TensorFlow’s imperative execution provides a streamlined way to implement RL algorithms, which students program from the ground up in Lab 3.

We focus on two tasks, one in control (e.g., Cart-Pole) and one in games (e.g., Pong). Students are tasked with building a modular RL framework that learns these two very different environments using only a single “RL brain”.

Working with these baseline environments gives students a way to quickly prototype new algorithms, build a concrete understanding of how to implement RL training procedures, and reuse these ideas as templates in their final projects.
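As a hedged illustration of the kind of update such a framework performs (our own minimal policy-gradient sketch, not the lab’s actual code; the network sizes are assumptions), a single REINFORCE-style step in TensorFlow’s imperative style might look like:

import tensorflow as tf

# A hypothetical policy network mapping observations to action logits;
# the layer sizes are illustrative assumptions.
policy = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2)  # e.g., two actions in Cart-Pole
])
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

def policy_gradient_step(observations, actions, discounted_rewards):
  # One REINFORCE-style update: increase the log-probability of actions
  # that led to high discounted reward.
  with tf.GradientTape() as tape:
    logits = policy(observations)
    neg_log_prob = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=actions, logits=logits)
    loss = tf.reduce_mean(neg_log_prob * discounted_rewards)
  grads = tape.gradient(loss, policy.trainable_variables)
  optimizer.apply_gradients(zip(grads, policy.trainable_variables))
  return loss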

Summary

We would like to thank the TensorFlow team and our incredible group of TAs for their continued support in making this MIT course possible. The MIT 6.S191 software labs provide a great way to quickly gain practical experience with TensorFlow by implementing state-of-the-art techniques and algorithms introduced in lecture.

