https://blog.tensorflow.org/2018/04/mit-6s191-introduction-to-deep-learning.html

Community · Education


April 05, 2018 —
*Guest post by MIT 6.S191 Introduction to Deep Learning*

MIT 6.S191: Introduction to Deep Learning is an introductory course formally offered at MIT and open-sourced on the course website. The class consists of a series of foundational lectures on the fundamentals of neural networks and their applications to sequence modeling, computer vision, generative models, and reinforcement learning.

MIT 6.S191: Introduction to Deep Learning

MIT 6.S191: MIT’s official introductory course on deep learning algorithms and their applications

And so, we turned to TensorFlow. We designed two TensorFlow-based software labs, focusing on music generation with recurrent neural networks and pneumothorax detection in medical images, to complement the course lectures. The TensorFlow labs gave students an opportunity to apply the fundamentals to two interesting, relevant problems and to build and refine their TensorFlow skills.

The material in 6.S191 is designed to be as accessible as possible, for people from varying backgrounds and levels of experience and for both the MIT community and beyond.

Accordingly, the first lab takes students through TensorFlow basics — building and executing computation graphs, sessions, and common operations used in deep learning. This introduction also highlights some of the latest and greatest that TensorFlow has to offer: the imperative version of TensorFlow, Eager mode.
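As a quick illustration of the graph-then-execute workflow the lab introduces, here is a minimal sketch (written against the TF 1.x-era API the labs used; on modern TF 2.x installs, the graph-mode calls live under `tf.compat.v1`):

```python
import tensorflow as tf

tf1 = tf.compat.v1            # TF 1.x-style graph API on modern installs
tf1.disable_eager_execution()

# Graph mode: first build a computation graph...
a = tf1.constant(2.0)
b = tf1.constant(3.0)
c = a * b                     # no multiplication happens yet; c is a graph node

# ...then execute it inside a session.
with tf1.Session() as sess:
    result = sess.run(c)      # the graph actually runs here

# In Eager mode (also covered in the lab), the same ops would instead
# execute immediately and return concrete values, with no session needed.
```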

This background sets students up to build models in TensorFlow for music generation and for pneumothorax detection in chest x-rays.

RNNs are well suited for music generation, as they can capture temporal dependencies in time-series data like music. In this first lab, students work through encoding a dataset of music files, defining an RNN model in TensorFlow, and sampling from the model to generate new music that has never been heard before.

The model is based on a single long short-term memory (LSTM) cell, where the state vector tracks the temporal dependencies between consecutive notes. At each time step, a sequence of previous notes is fed into the cell, and the final output of the last unit in our LSTM is fed into a fully connected layer. Thus, we can output a probability distribution over the next note at time step t given the notes at all previous time steps. We visualize this process in the diagram below:
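The mechanics of that cell can be sketched in plain NumPy (hypothetical sizes; the lab builds the real model with TensorFlow ops): a state vector is threaded through time, and the final hidden state feeds a fully connected layer whose output is squashed into a distribution over notes.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM cell step: all four gates computed from [h_prev, x]."""
    z = np.concatenate([h_prev, x]) @ W + b
    H = h_prev.shape[0]
    i, f, o, g = (sigmoid(z[:H]), sigmoid(z[H:2*H]),
                  sigmoid(z[2*H:3*H]), np.tanh(z[3*H:]))
    c = f * c_prev + i * g        # state vector carries temporal context
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
n_notes, hidden = 12, 8           # hypothetical 12-note vocabulary
W = rng.normal(scale=0.1, size=(hidden + n_notes, 4 * hidden))
b = np.zeros(4 * hidden)
W_out = rng.normal(scale=0.1, size=(hidden, n_notes))  # fully connected layer

h, c = np.zeros(hidden), np.zeros(hidden)
for t in range(16):               # feed a sequence of one-hot encoded notes
    x = np.eye(n_notes)[rng.integers(n_notes)]
    h, c = lstm_step(x, h, c, W, b)

logits = h @ W_out                # final hidden state -> dense layer
probs = np.exp(logits) / np.exp(logits).sum()  # distribution over next note
```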

Predicting the likelihood of the next musical note in a sequence

The lab first goes through setting the relevant hyperparameters, defining placeholder variables, and initializing the weights for the RNN model. Students then work to define their own function `RNN(input_vec, weights, biases)` that takes in the corresponding input variables and builds a computation graph. The lab allows students to experiment with various loss functions, optimization schemes, and even accuracy metrics:

```
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=logits, labels=labels))
train_op = tf.train.AdamOptimizer(learning_rate).minimize(loss_op)
true_note = tf.argmax(output_vec, 1)  # identify the correct note
pred_note = tf.argmax(prediction, 1)  # identify the predicted note
correct_pred = tf.equal(pred_note, true_note)  # compare!
```

The lab guides students through feeding the trained model a seed (after all, it can’t predict any new notes without something to start with!), and then iteratively predicting each successive note using the trained RNN. This amounts to randomly sampling from the probability distribution over the next note that’s outputted by the RNN at each time step, and then using these samples to generate a new song.
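In outline, that generation loop looks like the following NumPy sketch, with a stub standing in for the trained RNN (the stub, seed, and vocabulary size here are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)
n_notes = 12  # hypothetical vocabulary size

def next_note_distribution(seq):
    # Stand-in for the trained RNN's softmax output; a real lab solution
    # would run the sequence through the LSTM model instead.
    logits = rng.normal(size=n_notes)
    return np.exp(logits) / np.exp(logits).sum()

def generate_song(seed, length):
    song = list(seed)                 # start from the seed notes
    for _ in range(length):
        probs = next_note_distribution(song)
        note = rng.choice(n_notes, p=probs)  # sample from the distribution
        song.append(int(note))
    return song

song = generate_song(seed=[0, 4, 7], length=32)
```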

As before, we gave students a guided structure for doing this, but defining the sampling was all on them.

To provide a sampling (pun intended) of generated songs, we went ahead and trained a model, then sampled from it to generate new songs. Listen in on an example generated with a trained model:

We took this lab a step beyond classification to try to address the notion of interpretability, using class activation mapping (CAM).

Since this is a real dataset, it is quite noisy. We wanted to have students work with real data so that they could get a sense of some of the challenges in curating and annotating data, particularly in the context of computer vision.

CNN architecture for pneumothorax detection

To provide some background, CAM is a method to visualize the regions of an image that a CNN “attends” to in its last convolutional layer. Note that CAM visualization pertains to architectures with a global average pooling layer before the final fully connected layer, where we output the spatial average of the feature map of each unit at the last convolutional layer.

CAMs effectively highlight the regions of the input image that are most discriminative for the network’s prediction.

In the context of our pneumothorax classifier, this amounts to highlighting the pixels in an x-ray that were most important in detecting (or not detecting) pneumothorax.

Class Activation Mapping on the final feature maps

The lab goes through the process of computing and visualizing CAMs in TensorFlow from start to finish. Students have to define a function for extracting the feature maps and weights for the CAM computation:

`(feature_maps, dense_weights) = extract_features_weights(model)`

Students feed the extracted `feature_maps` from the last convolutional layer and `dense_weights` from the fully connected layer into a function for computing the CAM, and then define the upsampling procedure.

Class activation map for a chest x-ray positive for pneumothorax
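Put together, the CAM computation itself is just a class-weighted sum of the final feature maps followed by upsampling. A NumPy sketch with hypothetical shapes (the lab performs these steps with TensorFlow ops):

```python
import numpy as np

def compute_cam(feature_maps, dense_weights, class_idx):
    """CAM = class-weighted sum of the last conv layer's feature maps."""
    # feature_maps: (H, W, C); dense_weights: (C, n_classes)
    w = dense_weights[:, class_idx]   # dense weights for the chosen class
    return feature_maps @ w           # (H, W) activation map

def upsample(cam, factor):
    # nearest-neighbour upsampling back toward input resolution
    return np.kron(cam, np.ones((factor, factor)))

rng = np.random.default_rng(0)
fmap = rng.normal(size=(7, 7, 64))    # hypothetical last-conv feature maps
w_dense = rng.normal(size=(64, 2))    # 2 classes: pneumothorax / not
cam = upsample(compute_cam(fmap, w_dense, class_idx=1), factor=32)
```

The upsampled map can then be overlaid on the input x-ray to visualize which regions drove the prediction.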

Perhaps the best part of this lab was the discussions it spawned. Students were left to mull over instances in which the model incorrectly classified the input x-ray, what the CAM looked like in those instances, and what changes could be made to the model to address these limitations. Building algorithms to “look” inside the brain of the neural networks piqued students’ curiosity and gave them a taste of the importance of interpretability in machine learning.

Watch all of the MIT 6.S191 lectures online for free!
