This is part of my “journal club for credit” series. You can see the other computational neuroscience papers in this post.
Unit: Deep Learning
- Perceptron
- Energy Based Neural Networks
- Training Networks
- Deep Learning
Papers
Deep Learning. By LeCun, Bengio, and Hinton in 2015.
Deep learning in neural networks: An overview. By Schmidhuber in 2015.
Other Useful References
- Deep Learning – Wikipedia
- Deep Learning Book (Under Development)
- Michael Nielsen EBook
What is deep learning?
In previous weeks we have introduced perceptrons, multilayer perceptrons (MLP), Hopfield neural networks, and Boltzmann machines.
But what is deep learning? I think it is really two things:
- The successful training of neural networks that involve more layers and perform better (higher classification accuracy, etc.) than previous implementations
- Just a rebranding of neural networks
Here is my summary of the history of deep learning; see both reviews above for extended details. In 2006, Deep Belief Networks (DBN) were introduced in two papers (Reducing the Dimensionality of Data with Neural Networks and A Fast Learning Algorithm for Deep Belief Nets). The idea of a DBN is to train a series of restricted Boltzmann machines (RBM). The first RBM is trained on the original data and produces hidden layer activations as its output. The second RBM takes the first RBM’s hidden layer activations as its input, and trains on that “data”. This is continued to the desired depth (a minimal code sketch of this layer-wise procedure follows below). At this point, the DBN can be used for unsupervised learning, or one can use it as pretraining for an MLP which will utilize backpropagation for supervised learning.
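To make the greedy layer-wise idea concrete, here is a minimal sketch in numpy. The RBM class, the CD-1 training rule, the layer sizes, and the random stand-in data are my own illustrative assumptions, not the exact setup from the 2006 papers.

```python
# A minimal sketch of greedy layer-wise DBN pretraining with numpy.
# Layer sizes, hyperparameters, and the random "data" are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def train(self, data, epochs=10, lr=0.1):
        """One-step contrastive divergence (CD-1)."""
        for _ in range(epochs):
            # Positive phase: hidden activations driven by the data.
            h_prob = self.hidden_probs(data)
            h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
            # Negative phase: one reconstruction step.
            v_recon = self.visible_probs(h_sample)
            h_recon = self.hidden_probs(v_recon)
            # Move weights toward the data statistics, away from the model's.
            self.W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
            self.b_v += lr * (data - v_recon).mean(axis=0)
            self.b_h += lr * (h_prob - h_recon).mean(axis=0)

# Greedy layer-wise pretraining: each RBM trains on the hidden
# activations produced by the RBM below it.
layer_sizes = [784, 500, 250, 30]         # e.g. an MNIST-sized input (assumed)
data = rng.random((100, layer_sizes[0]))  # stand-in for real training data
rbms = []
for n_vis, n_hid in zip(layer_sizes[:-1], layer_sizes[1:]):
    rbm = RBM(n_vis, n_hid)
    rbm.train(data)
    rbms.append(rbm)
    data = rbm.hidden_probs(data)         # becomes the "data" for the next RBM
```

In the supervised setting, the learned weights in `rbms` would then initialize the layers of an MLP, which is fine-tuned with backpropagation.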
So on the one hand, there were technical breakthroughs that enabled neural networks to utilize more layers than previous iterations and achieve state of the art performance. However, the actual components (RBMs and MLPs) have been around since the 1980s, so it would also be fair to deem deep learning a rebranding of neural networks.
Therefore, the neural network winter (mid 90s to mid 00s) officially ended in 2006. I propose calling 2006-2012 neural network spring. While interest in neural networks increased and new advances were made, the general machine learning community was not obsessed with deep learning. That changed in 2012, when the neural network summer began. This paper, presented at NIPS, revolutionized the computer vision community by cutting the error rate on ImageNet in half! The ImageNet challenge was viewed as a serious benchmark that all computer vision systems should address, and by blowing previous results out of the water, the revolution was complete.
So for now, enjoy neural network summer, but always remember, winter is coming.
Fundamental Questions
- When do extra layers help in a neural network? When do they hurt?
- Why was pretraining originally needed, but is no longer used in practice? Check out these papers for details: Glorot and Saxe.
- Learn about convolutional and recurrent neural networks. These are extremely popular right now!
Advanced Questions
- Do research on unsupervised learning! It is definitely less popular today, but all the big shots think it is the long-term future of neural networks.