ML Bookmarks
Contents
Special Nets
One Shot Learning
- Matching Networks for One Shot Learning - one-shot learning on ImageNet, from Google DeepMind
- The More You Know: Using Knowledge Graphs for Image Classification
- Prototypical Networks for Few-shot Learning
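The core idea behind Prototypical Networks is simple enough to sketch in a few lines: each class prototype is the mean of its support-set embeddings, and a query is assigned to the nearest prototype. A toy sketch with hypothetical 2-d embeddings in place of a learned embedding network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy support set: 5 shots per class, 2-d "embeddings" (in the paper these
# come from a learned embedding network; here they are random stand-ins).
support = {0: rng.normal(0.0, 0.1, size=(5, 2)),
           1: rng.normal(3.0, 0.1, size=(5, 2))}

# Prototype = mean of each class's support embeddings.
prototypes = {c: e.mean(axis=0) for c, e in support.items()}

# Classify a query point by Euclidean distance to the nearest prototype.
query = np.array([2.9, 3.1])
pred = min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))
print(pred)  # 1: the query sits near the class-1 cluster
```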
Reinforcement Learning
- Progressive Neural Networks - the progressive-networks approach is immune to catastrophic forgetting and can leverage prior knowledge via lateral connections to previously learned features.
- Reward Augmented Maximum Likelihood for Neural Structured Prediction (short summary) - a simple and computationally efficient approach to incorporating task reward into a maximum likelihood framework. The authors establish a connection between the log-likelihood and regularized expected-reward objectives, showing that at zero temperature they are approximately equivalent in the vicinity of the optimal solution.
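The RAML connection above can be illustrated with a toy sketch: instead of training only on the ground truth, candidate outputs are weighted by their exponentiated reward (normalized at temperature tau), and the log-likelihood is averaged under those weights. The rewards and log-probabilities below are made-up values, not from the paper:

```python
import numpy as np

# Toy rewards for three candidate outputs and a temperature tau.
rewards = np.array([1.0, 0.8, 0.2])
tau = 0.5

# Exponentiated-reward weights, normalized to a distribution over candidates.
weights = np.exp(rewards / tau)
weights /= weights.sum()

# Model log-likelihoods of the same candidates (toy values).
log_probs = np.array([-1.2, -0.9, -2.0])

# Reward-augmented ML objective: reward-weighted negative log-likelihood.
loss = -(weights * log_probs).sum()
print(loss)
```

As tau goes to zero the weights concentrate on the highest-reward output, recovering ordinary maximum likelihood on the best candidate.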
Building Neural Nets with Other Neural Nets
- HyperNetworks - explores the approach of using a small network (a hypernetwork) to generate the weights for a larger network.
- AdaNet: Adaptive Structural Learning of Artificial Neural Networks - simultaneously and adaptively learns both the structure of the network and its weights.
- A Roadmap towards Machine Intelligence - proposes fundamental properties that intelligent machines should have, focusing in particular on communication and learning.
- Neural Architecture Search with Reinforcement Learning - a broad search over available model architectures using an LSTM controller; the result is a new conv-net architecture and a new RNN cell.
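The hypernetwork idea above reduces to a small mapping: a learned layer embedding z is fed through a small network whose output is reshaped into the weight matrix of a larger main-network layer. A minimal sketch with hypothetical sizes (a single linear hyper-layer standing in for the small network):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out = 8, 4   # shape of the main-network layer (toy sizes)
z_dim = 3            # size of the per-layer embedding

# Hypernetwork parameters: one linear map from z to the flattened weights.
H = rng.normal(0, 0.1, size=(z_dim, d_in * d_out))
z = rng.normal(size=z_dim)          # learned embedding for this layer

# The hypernetwork generates the main layer's weight matrix.
W = (z @ H).reshape(d_in, d_out)

# The main network then uses the generated weights as usual.
x = rng.normal(size=d_in)
y = np.tanh(x @ W)
print(y.shape)  # (4,)
```

Only H and z are trained, so the parameter count can be much smaller than storing W directly.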
Other Topics
- Highway Networks - list of papers, code, etc.
- NIPS 2016 Tutorial: Generative Adversarial Networks
- Gumbel-Softmax in Categorical Variational Autoencoders - blog post; Categorical Reparameterization with Gumbel-Softmax is the original paper
- Survey of resampling techniques for improving classification performance in unbalanced datasets
- Speed/accuracy trade-offs for modern convolutional object detectors
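The Gumbel-Softmax trick linked above replaces a non-differentiable categorical sample with a temperature-controlled softmax over logits perturbed by Gumbel noise. A minimal numpy sketch (function name is my own, not from the paper):

```python
import numpy as np

def gumbel_softmax(logits, temperature, rng):
    """Draw one relaxed (approximately one-hot) categorical sample."""
    # Gumbel(0, 1) noise via the inverse-CDF transform of a uniform draw.
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    # Softmax of the perturbed logits; low temperature sharpens the sample.
    y = (logits + g) / temperature
    e = np.exp(y - y.max())
    return e / e.sum()

rng = np.random.default_rng(0)
logits = np.array([1.0, 2.0, 0.5])
sample = gumbel_softmax(logits, temperature=0.5, rng=rng)
print(sample)  # a probability vector that approaches one-hot as temperature -> 0
```

Because the sample is a smooth function of the logits, gradients can flow through it, which is what makes categorical VAEs trainable end to end.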
Optimization Techniques
- Recurrent Neural Network Regularization
- Recurrent Dropout without Memory Loss
- Styles of Truncated Backpropagation
- Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
- Hyperparameter optimization for Neural Networks - examples of many optimization approaches from the NeuPy library.
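Batch normalization, listed above, is easy to state concretely: normalize each feature over the batch to zero mean and unit variance, then apply a learned scale and shift. A forward-pass sketch (training-time statistics only; the running averages used at inference are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then scale and shift."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(1)
x = rng.normal(5.0, 2.0, size=(32, 4))   # batch of 32 examples, 4 features
out = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0))  # approximately zero per feature
```

With gamma = 1 and beta = 0 the output is simply the standardized batch; the learned parameters let the network undo the normalization where that helps.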
Papers on efficient backprop and neural network training
- Why Momentum Really Works
- Stochastic Gradient Descent Tricks (Léon Bottou)
- Efficient BackProp (Yann LeCun)
- Practical Recommendations for Gradient-Based Training of Deep Architectures (Yoshua Bengio)
- FitNets: Hints for Thin Deep Nets
- Sobolev Training for Neural Networks - training a network to match the target function's derivatives as well as its values.
- Learning to Learn by Gradient Descent by Gradient Descent - one network sets the parameters of another.
- The Marginal Value of Adaptive Gradient Methods in Machine Learning - an explanation of why SGD can generalize better than Adam and other adaptive methods.
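The momentum update discussed in "Why Momentum Really Works" is two lines: accumulate a decaying velocity of past gradients, then step along it. A sketch on the toy objective f(w) = w^2 / 2, whose gradient is simply w:

```python
# SGD with momentum on f(w) = 0.5 * w**2, so grad f(w) = w.
w, v = 5.0, 0.0        # initial parameter and velocity
lr, beta = 0.1, 0.9    # learning rate and momentum coefficient

for _ in range(300):
    grad = w
    v = beta * v + grad   # velocity: exponentially decaying sum of gradients
    w = w - lr * v        # step along the velocity, not the raw gradient
print(w)  # close to the minimum at w = 0
```

On ill-conditioned objectives the velocity averages out oscillating gradient components while reinforcing consistent ones, which is the intuition the article visualizes.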
Additional Resources
- A Few Useful Things to Know about Machine Learning - from the cs231n course.
Benchmarks
- DeepBench - benchmarking deep learning operations on different hardware.
Datasets
- e-Lab Video Data Set(s) - object tracking dataset.