Top Related Projects
Deep Learning papers reading roadmap for anyone who is eager to learn this amazing tech!
A curated list of awesome Deep Learning tutorials, projects and communities.
Recurrent Neural Network - A curated list of resources dedicated to RNN
TensorFlow - A curated list of dedicated resources http://tensorflow.org
A curated list of awesome Machine Learning frameworks, libraries and software.
Quick Overview
The "awesome-deep-learning-papers" repository is a curated list of the most cited deep learning papers since 2012. It aims to provide a comprehensive collection of influential research in the field of deep learning, organized by year and topic. The repository serves as a valuable resource for researchers, students, and practitioners in the field of artificial intelligence and machine learning.
Pros
- Comprehensive collection of highly cited and influential deep learning papers
- Well-organized structure, categorized by year and topic for easy navigation
- Built up through community contributions during its years of active curation (note: the list itself has not been maintained since 2017; see the notice below)
- Includes a "Top 100" list of the most cited papers for quick reference
Cons
- May not include all relevant papers, as it focuses on highly cited works
- Subjective selection process, which may exclude some important but less cited papers
- Requires regular maintenance to stay up-to-date with the rapidly evolving field
- Limited to papers published since 2012, potentially missing earlier foundational works
Getting Started
As this is a curated list of research papers rather than a code library, there is no code to install and no quick start guide. To use this resource (a small programmatic search sketch follows these steps):
- Visit the GitHub repository: https://github.com/terryum/awesome-deep-learning-papers
- Browse the papers by year or topic in the README.md file
- Click on the paper titles to access the original publications
- Consider starring the repository to stay updated with new additions
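If you would rather query the list programmatically, here is a minimal sketch. It assumes the raw README is reachable at the standard GitHub raw URL for the master branch; that URL pattern is an assumption, not something the repository documents.
import urllib.request

# Assumed raw-file URL; adjust if the default branch differs
RAW_README = ("https://raw.githubusercontent.com/terryum/"
              "awesome-deep-learning-papers/master/README.md")

def find_papers(keyword):
    # Download the README and keep the bullet entries mentioning the keyword
    with urllib.request.urlopen(RAW_README) as resp:
        readme = resp.read().decode("utf-8")
    return [line for line in readme.splitlines()
            if line.startswith("- ") and keyword.lower() in line.lower()]

for entry in find_papers("reinforcement"):
    print(entry)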
Competitor Comparisons
Deep Learning papers reading roadmap for anyone who is eager to learn this amazing tech!
Pros of Deep-Learning-Papers-Reading-Roadmap
- Provides a structured learning path for beginners to advanced practitioners
- Includes detailed explanations and summaries for each paper
- Offers a more comprehensive coverage of deep learning topics
Cons of Deep-Learning-Papers-Reading-Roadmap
- Less frequently updated compared to awesome-deep-learning-papers
- May be overwhelming for newcomers due to its extensive content
- Lacks some of the latest cutting-edge papers in the field
Code Comparison
While both repositories primarily focus on curating and organizing research papers, they don't contain significant code samples. However, Deep-Learning-Papers-Reading-Roadmap occasionally includes code snippets or links to implementations, such as:
# Example from a linked implementation
import torch
import torch.nn as nn

class SimpleNet(nn.Module):
    def __init__(self):
        super(SimpleNet, self).__init__()
        self.fc = nn.Linear(784, 10)  # map a flattened 28x28 image to 10 class scores

    def forward(self, x):
        return self.fc(x)
awesome-deep-learning-papers, on the other hand, typically doesn't include code snippets directly in the repository.
A curated list of awesome Deep Learning tutorials, projects and communities.
Pros of awesome-deep-learning
- Broader scope, covering various aspects of deep learning beyond just papers
- Includes practical resources like tutorials, datasets, and frameworks
- More frequently updated, with recent contributions
Cons of awesome-deep-learning
- Less focused on academic research and cutting-edge papers
- May be overwhelming for beginners due to the large amount of information
- Some sections lack detailed descriptions or explanations
Code comparison
While both repositories are primarily curated lists, awesome-deep-learning does include some code snippets in its README. For example:
# awesome-deep-learning (TensorFlow 1.x-style example)
import tensorflow as tf

hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()  # tf.Session is TF 1.x API; TF 2.x uses eager execution
print(sess.run(hello))
awesome-deep-learning-papers doesn't include code snippets, focusing instead on organizing and categorizing research papers.
Summary
awesome-deep-learning offers a comprehensive resource for practitioners and learners, covering a wide range of deep learning topics and tools. awesome-deep-learning-papers is more focused on academic research, providing a curated list of influential papers in the field. The choice between the two depends on whether you're looking for practical resources or a guide to important research in deep learning.
Recurrent Neural Network - A curated list of resources dedicated to RNN
Pros of awesome-rnn
- Focused specifically on Recurrent Neural Networks (RNNs), providing in-depth coverage of this topic
- Includes code implementations and tutorials, making it more practical for developers
- Regularly updated with new RNN-related papers and resources
Cons of awesome-rnn
- Limited scope compared to awesome-deep-learning-papers, which covers a broader range of deep learning topics
- May not provide as comprehensive an overview of the entire deep learning field
- Fewer overall resources due to its specialized focus
Code comparison
While awesome-deep-learning-papers doesn't typically include code snippets, awesome-rnn often provides implementation examples. Here's a sample from awesome-rnn:
import numpy as np

class RNN:
    # W_hh, W_xh, W_hy and the hidden state h are assumed to be
    # initialized elsewhere (e.g., as small random matrices)
    def step(self, x):
        self.h = np.tanh(np.dot(self.W_hh, self.h) + np.dot(self.W_xh, x))  # update hidden state
        y = np.dot(self.W_hy, self.h)  # compute the output
        return y
Both repositories primarily serve as curated lists of papers and resources rather than providing extensive code examples. The main difference lies in their focus and organization of content.
TensorFlow - A curated list of dedicated resources http://tensorflow.org
Pros of awesome-tensorflow
- Focused specifically on TensorFlow resources, making it more targeted for TensorFlow users
- Includes a wider variety of resource types, such as tutorials, videos, and projects
- More frequently updated with new TensorFlow-related content
Cons of awesome-tensorflow
- Limited scope compared to awesome-deep-learning-papers, which covers a broader range of deep learning topics
- May not provide as much depth in theoretical foundations of deep learning
- Less emphasis on academic research papers and cutting-edge developments
Code comparison
While both repositories primarily consist of curated lists rather than code, awesome-tensorflow does include some code snippets and examples. Here's a brief comparison:
awesome-tensorflow:
import tensorflow as tf

hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()  # TF 1.x API; removed in TensorFlow 2.x
print(sess.run(hello))
awesome-deep-learning-papers: No direct code examples provided, as it focuses on listing research papers.
A curated list of awesome Machine Learning frameworks, libraries and software.
Pros of awesome-machine-learning
- Broader scope, covering various ML topics beyond deep learning
- More comprehensive resource list, including books, courses, and frameworks
- Regular updates and contributions from the community
Cons of awesome-machine-learning
- Less focused on specific research papers
- May be overwhelming for beginners due to the vast amount of information
- Lacks curated lists of top papers in specific subfields
Code comparison
While both repositories primarily consist of curated lists rather than code, awesome-machine-learning does include some code snippets for certain libraries. For example:
awesome-machine-learning:
from sklearn import svm

X = [[0, 0], [1, 1]]  # two training samples
y = [0, 1]            # their class labels
clf = svm.SVC()       # support vector classifier (default RBF kernel)
clf.fit(X, y)
awesome-deep-learning-papers: No code snippets are provided, as it focuses on listing research papers.
Both repositories serve as valuable resources for machine learning enthusiasts and researchers. awesome-machine-learning offers a broader overview of the field with various resources, while awesome-deep-learning-papers provides a more focused collection of influential research papers in deep learning. The choice between the two depends on the user's specific needs and interests within the machine learning domain.
README
Awesome - Most Cited Deep Learning Papers
[Notice] This list is not being maintained anymore because of the overwhelming amount of deep learning papers published every day since 2017.
A curated list of the most cited deep learning papers (2012-2016)
We believe that there exist classic deep learning papers which are worth reading regardless of their application domain. Rather than providing an overwhelming number of papers, we would like to provide a curated list of awesome deep learning papers that are considered must-reads in certain research domains.
Background
Before this list, other awesome deep learning lists already existed, for example, Deep Vision and Awesome Recurrent Neural Networks. Also, after this list came out, another awesome list for deep learning beginners, called Deep Learning Papers Reading Roadmap, was created and has been loved by many deep learning researchers.
Although the Roadmap List includes lots of important deep learning papers, it feels overwhelming to read them all. As I mentioned in the introduction, I believe that seminal works can teach us lessons regardless of their application domain. Thus, I would like to introduce the top 100 deep learning papers here as a good starting point for getting an overview of deep learning research.
To get news on newly released papers every day, follow my Twitter or Facebook page!
Awesome list criteria
- A list of top 100 deep learning papers published from 2012 to 2016 is suggested.
- If a paper is added to the list, another paper (usually from the "More Papers from 2016" section) should be removed to keep the list at 100 papers. (Thus, removing papers is as important a contribution as adding them.)
- Papers that are important but did not make the list are collected in the More than Top 100 section.
- Please refer to the New Papers and Old Papers sections for papers published in the last 6 months or before 2012.
(Citation criteria)
- < 6 months : New Papers (by discussion)
- 2016 : +60 citations or "More Papers from 2016"
- 2015 : +200 citations
- 2014 : +400 citations
- 2013 : +600 citations
- 2012 : +800 citations
- ~2012 : Old Papers (by discussion)
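For illustration, the year-based thresholds above can be encoded as a small lookup table. This is a hypothetical helper, not part of the repository:
# Citation thresholds from the criteria above
THRESHOLDS = {2016: 60, 2015: 200, 2014: 400, 2013: 600, 2012: 800}

def meets_criteria(year, citations):
    # Years outside 2012-2016 fall into the New/Old Papers sections
    # and are decided by discussion rather than citation counts
    if year not in THRESHOLDS:
        return None
    return citations >= THRESHOLDS[year]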
Please note that we prefer seminal deep learning papers that can be applied to a variety of research areas over application papers. For that reason, some papers that meet the criteria may not be accepted while others are: acceptance depends on the impact of the paper, its applicability to other research, the scarcity of its research domain, and so on.
We need your contributions!
If you have any suggestions (missing papers, new papers, key researchers or typos), please feel free to edit the list and submit a pull request. (Please read the contributing guide for further instructions, though just letting me know the titles of papers can also be a big contribution to us.)
(Update) You can download all top-100 papers with this and collect all authors' names with this. A bib file for all top-100 papers is also available. Thanks, doodhwala, Sven and grepinsight!
- Can anyone contribute the code for obtaining the statistics of the authors of Top-100 papers? (A rough sketch follows below.)
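As a starting point, here is a sketch of how such author statistics could be collected. It assumes the top-100 entries follow the "Title (year), Authors [pdf]" pattern used throughout this README; the file path and the exact pattern are assumptions:
import re
from collections import Counter

def author_stats(readme_path="README.md"):
    counts = Counter()
    entry = re.compile(r"^- .+\((\d{4})\), (.+?) \[pdf\]")
    with open(readme_path, encoding="utf-8") as f:
        for line in f:
            m = entry.match(line.strip())
            if not m:
                continue
            authors = re.sub(r"\bet al\.?", "", m.group(2))  # drop "et al."
            for name in re.split(r",| and ", authors):
                name = name.strip(" .")
                if name:
                    counts[name] += 1
    return counts.most_common(10)

print(author_stats())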
Contents
- Understanding / Generalization / Transfer
- Optimization / Training Techniques
- Unsupervised / Generative Models
- Convolutional Neural Network Models
- Image Segmentation / Object Detection
- Image / Video / Etc
- Natural Language Processing / RNNs
- Speech / Other Domain
- Reinforcement Learning / Robotics
- More Papers from 2016
(More than Top 100)
- New Papers : Less than 6 months
- Old Papers : Before 2012
- HW / SW / Dataset : Technical reports
- Book / Survey / Review
- Video Lectures / Tutorials / Blogs
- Appendix: More than Top 100 : More papers not in the list
Understanding / Generalization / Transfer
- Distilling the knowledge in a neural network (2015), G. Hinton et al. [pdf]
- Deep neural networks are easily fooled: High confidence predictions for unrecognizable images (2015), A. Nguyen et al. [pdf]
- How transferable are features in deep neural networks? (2014), J. Yosinski et al. [pdf]
- CNN features off-the-Shelf: An astounding baseline for recognition (2014), A. Razavian et al. [pdf]
- Learning and transferring mid-Level image representations using convolutional neural networks (2014), M. Oquab et al. [pdf]
- Visualizing and understanding convolutional networks (2014), M. Zeiler and R. Fergus [pdf]
- Decaf: A deep convolutional activation feature for generic visual recognition (2014), J. Donahue et al. [pdf]
Optimization / Training Techniques
- Training very deep networks (2015), R. Srivastava et al. [pdf]
- Batch normalization: Accelerating deep network training by reducing internal covariate shift (2015), S. Ioffe and C. Szegedy [pdf]
- Delving deep into rectifiers: Surpassing human-level performance on imagenet classification (2015), K. He et al. [pdf]
- Dropout: A simple way to prevent neural networks from overfitting (2014), N. Srivastava et al. [pdf]
- Adam: A method for stochastic optimization (2014), D. Kingma and J. Ba [pdf]
- Improving neural networks by preventing co-adaptation of feature detectors (2012), G. Hinton et al. [pdf]
- Random search for hyper-parameter optimization (2012) J. Bergstra and Y. Bengio [pdf]
Unsupervised / Generative Models
- Pixel recurrent neural networks (2016), A. Oord et al. [pdf]
- Improved techniques for training GANs (2016), T. Salimans et al. [pdf]
- Unsupervised representation learning with deep convolutional generative adversarial networks (2015), A. Radford et al. [pdf]
- DRAW: A recurrent neural network for image generation (2015), K. Gregor et al. [pdf]
- Generative adversarial nets (2014), I. Goodfellow et al. [pdf]
- Auto-encoding variational Bayes (2013), D. Kingma and M. Welling [pdf]
- Building high-level features using large scale unsupervised learning (2013), Q. Le et al. [pdf]
Convolutional Neural Network Models
- Rethinking the inception architecture for computer vision (2016), C. Szegedy et al. [pdf]
- Inception-v4, inception-resnet and the impact of residual connections on learning (2016), C. Szegedy et al. [pdf]
- Identity Mappings in Deep Residual Networks (2016), K. He et al. [pdf]
- Deep residual learning for image recognition (2016), K. He et al. [pdf]
- Spatial transformer networks (2015), M. Jaderberg et al. [pdf]
- Going deeper with convolutions (2015), C. Szegedy et al. [pdf]
- Very deep convolutional networks for large-scale image recognition (2014), K. Simonyan and A. Zisserman [pdf]
- Return of the devil in the details: delving deep into convolutional nets (2014), K. Chatfield et al. [pdf]
- OverFeat: Integrated recognition, localization and detection using convolutional networks (2013), P. Sermanet et al. [pdf]
- Maxout networks (2013), I. Goodfellow et al. [pdf]
- Network in network (2013), M. Lin et al. [pdf]
- ImageNet classification with deep convolutional neural networks (2012), A. Krizhevsky et al. [pdf]
Image Segmentation / Object Detection
- You only look once: Unified, real-time object detection (2016), J. Redmon et al. [pdf]
- Fully convolutional networks for semantic segmentation (2015), J. Long et al. [pdf]
- Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks (2015), S. Ren et al. [pdf]
- Fast R-CNN (2015), R. Girshick [pdf]
- Rich feature hierarchies for accurate object detection and semantic segmentation (2014), R. Girshick et al. [pdf]
- Spatial pyramid pooling in deep convolutional networks for visual recognition (2014), K. He et al. [pdf]
- Semantic image segmentation with deep convolutional nets and fully connected CRFs (2015), L. Chen et al. [pdf]
- Learning hierarchical features for scene labeling (2013), C. Farabet et al. [pdf]
Image / Video / Etc
- Image Super-Resolution Using Deep Convolutional Networks (2016), C. Dong et al. [pdf]
- A neural algorithm of artistic style (2015), L. Gatys et al. [pdf]
- Deep visual-semantic alignments for generating image descriptions (2015), A. Karpathy and L. Fei-Fei [pdf]
- Show, attend and tell: Neural image caption generation with visual attention (2015), K. Xu et al. [pdf]
- Show and tell: A neural image caption generator (2015), O. Vinyals et al. [pdf]
- Long-term recurrent convolutional networks for visual recognition and description (2015), J. Donahue et al. [pdf]
- VQA: Visual question answering (2015), S. Antol et al. [pdf]
- DeepFace: Closing the gap to human-level performance in face verification (2014), Y. Taigman et al. [pdf]
- Large-scale video classification with convolutional neural networks (2014), A. Karpathy et al. [pdf]
- Two-stream convolutional networks for action recognition in videos (2014), K. Simonyan et al. [pdf]
- 3D convolutional neural networks for human action recognition (2013), S. Ji et al. [pdf]
Natural Language Processing / RNNs
- Neural Architectures for Named Entity Recognition (2016), G. Lample et al. [pdf]
- Exploring the limits of language modeling (2016), R. Jozefowicz et al. [pdf]
- Teaching machines to read and comprehend (2015), K. Hermann et al. [pdf]
- Effective approaches to attention-based neural machine translation (2015), M. Luong et al. [pdf]
- Conditional random fields as recurrent neural networks (2015), S. Zheng and S. Jayasumana. [pdf]
- Memory networks (2014), J. Weston et al. [pdf]
- Neural turing machines (2014), A. Graves et al. [pdf]
- Neural machine translation by jointly learning to align and translate (2014), D. Bahdanau et al. [pdf]
- Sequence to sequence learning with neural networks (2014), I. Sutskever et al. [pdf]
- Learning phrase representations using RNN encoder-decoder for statistical machine translation (2014), K. Cho et al. [pdf]
- A convolutional neural network for modeling sentences (2014), N. Kalchbrenner et al. [pdf]
- Convolutional neural networks for sentence classification (2014), Y. Kim [pdf]
- Glove: Global vectors for word representation (2014), J. Pennington et al. [pdf]
- Distributed representations of sentences and documents (2014), Q. Le and T. Mikolov [pdf]
- Distributed representations of words and phrases and their compositionality (2013), T. Mikolov et al. [pdf]
- Efficient estimation of word representations in vector space (2013), T. Mikolov et al. [pdf]
- Recursive deep models for semantic compositionality over a sentiment treebank (2013), R. Socher et al. [pdf]
- Generating sequences with recurrent neural networks (2013), A. Graves. [pdf]
Speech / Other Domain
- End-to-end attention-based large vocabulary speech recognition (2016), D. Bahdanau et al. [pdf]
- Deep speech 2: End-to-end speech recognition in English and Mandarin (2015), D. Amodei et al. [pdf]
- Speech recognition with deep recurrent neural networks (2013), A. Graves [pdf]
- Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups (2012), G. Hinton et al. [pdf]
- Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition (2012) G. Dahl et al. [pdf]
- Acoustic modeling using deep belief networks (2012), A. Mohamed et al. [pdf]
Reinforcement Learning / Robotics
- End-to-end training of deep visuomotor policies (2016), S. Levine et al. [pdf]
- Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection (2016), S. Levine et al. [pdf]
- Asynchronous methods for deep reinforcement learning (2016), V. Mnih et al. [pdf]
- Deep Reinforcement Learning with Double Q-Learning (2016), H. van Hasselt et al. [pdf]
- Mastering the game of Go with deep neural networks and tree search (2016), D. Silver et al. [pdf]
- Continuous control with deep reinforcement learning (2015), T. Lillicrap et al. [pdf]
- Human-level control through deep reinforcement learning (2015), V. Mnih et al. [pdf]
- Deep learning for detecting robotic grasps (2015), I. Lenz et al. [pdf]
- Playing atari with deep reinforcement learning (2013), V. Mnih et al. [pdf]
More Papers from 2016
- Layer Normalization (2016), J. Ba et al. [pdf]
- Learning to learn by gradient descent by gradient descent (2016), M. Andrychowicz et al. [pdf]
- Domain-adversarial training of neural networks (2016), Y. Ganin et al. [pdf]
- WaveNet: A Generative Model for Raw Audio (2016), A. Oord et al. [pdf] [web]
- Colorful image colorization (2016), R. Zhang et al. [pdf]
- Generative visual manipulation on the natural image manifold (2016), J. Zhu et al. [pdf]
- Texture networks: Feed-forward synthesis of textures and stylized images (2016), D. Ulyanov et al. [pdf]
- SSD: Single shot multibox detector (2016), W. Liu et al. [pdf]
- SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size (2016), F. Iandola et al. [pdf]
- EIE: Efficient inference engine on compressed deep neural network (2016), S. Han et al. [pdf]
- Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1 (2016), M. Courbariaux et al. [pdf]
- Dynamic memory networks for visual and textual question answering (2016), C. Xiong et al. [pdf]
- Stacked attention networks for image question answering (2016), Z. Yang et al. [pdf]
- Hybrid computing using a neural network with dynamic external memory (2016), A. Graves et al. [pdf]
- Google's neural machine translation system: Bridging the gap between human and machine translation (2016), Y. Wu et al. [pdf]
New Papers
Newly published papers (< 6 months) which are worth reading
- MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications (2017), Andrew G. Howard et al. [pdf]
- Convolutional Sequence to Sequence Learning (2017), Jonas Gehring et al. [pdf]
- A Knowledge-Grounded Neural Conversation Model (2017), Marjan Ghazvininejad et al. [pdf]
- Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour (2017), Priya Goyal et al. [pdf]
- TACOTRON: Towards end-to-end speech synthesis (2017), Y. Wang et al. [pdf]
- Deep Photo Style Transfer (2017), F. Luan et al. [pdf]
- Evolution Strategies as a Scalable Alternative to Reinforcement Learning (2017), T. Salimans et al. [pdf]
- Deformable Convolutional Networks (2017), J. Dai et al. [pdf]
- Mask R-CNN (2017), K. He et al. [pdf]
- Learning to discover cross-domain relations with generative adversarial networks (2017), T. Kim et al. [pdf]
- Deep voice: Real-time neural text-to-speech (2017), S. Arik et al. [pdf]
- PixelNet: Representation of the pixels, by the pixels, and for the pixels (2017), A. Bansal et al. [pdf]
- Batch renormalization: Towards reducing minibatch dependence in batch-normalized models (2017), S. Ioffe. [pdf]
- Wasserstein GAN (2017), M. Arjovsky et al. [pdf]
- Understanding deep learning requires rethinking generalization (2017), C. Zhang et al. [pdf]
- Least squares generative adversarial networks (2016), X. Mao et al. [pdf]
Old Papers
Classic papers published before 2012
- An analysis of single-layer networks in unsupervised feature learning (2011), A. Coates et al. [pdf]
- Deep sparse rectifier neural networks (2011), X. Glorot et al. [pdf]
- Natural language processing (almost) from scratch (2011), R. Collobert et al. [pdf]
- Recurrent neural network based language model (2010), T. Mikolov et al. [pdf]
- Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion (2010), P. Vincent et al. [pdf]
- Learning mid-level features for recognition (2010), Y. Boureau [pdf]
- A practical guide to training restricted boltzmann machines (2010), G. Hinton [pdf]
- Understanding the difficulty of training deep feedforward neural networks (2010), X. Glorot and Y. Bengio [pdf]
- Why does unsupervised pre-training help deep learning (2010), D. Erhan et al. [pdf]
- Learning deep architectures for AI (2009), Y. Bengio. [pdf]
- Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations (2009), H. Lee et al. [pdf]
- Greedy layer-wise training of deep networks (2007), Y. Bengio et al. [pdf]
- Reducing the dimensionality of data with neural networks (2006), G. Hinton and R. Salakhutdinov. [pdf]
- A fast learning algorithm for deep belief nets (2006), G. Hinton et al. [pdf]
- Gradient-based learning applied to document recognition (1998), Y. LeCun et al. [pdf]
- Long short-term memory (1997), S. Hochreiter and J. Schmidhuber. [pdf]
HW / SW / Dataset
- SQuAD: 100,000+ Questions for Machine Comprehension of Text (2016), Rajpurkar et al. [pdf]
- OpenAI gym (2016), G. Brockman et al. [pdf]
- TensorFlow: Large-scale machine learning on heterogeneous distributed systems (2016), M. Abadi et al. [pdf]
- Theano: A Python framework for fast computation of mathematical expressions (2016), R. Al-Rfou et al.
- Torch7: A matlab-like environment for machine learning, R. Collobert et al. [pdf]
- MatConvNet: Convolutional neural networks for matlab (2015), A. Vedaldi and K. Lenc [pdf]
- Imagenet large scale visual recognition challenge (2015), O. Russakovsky et al. [pdf]
- Caffe: Convolutional architecture for fast feature embedding (2014), Y. Jia et al. [pdf]
Book / Survey / Review
- On the Origin of Deep Learning (2017), H. Wang and Bhiksha Raj. [pdf]
- Deep Reinforcement Learning: An Overview (2017), Y. Li, [pdf]
- Neural Machine Translation and Sequence-to-sequence Models (2017): A Tutorial, G. Neubig. [pdf]
- Neural Network and Deep Learning (Book, Jan 2017), Michael Nielsen. [html]
- Deep learning (Book, 2016), Goodfellow et al. [html]
- LSTM: A search space odyssey (2016), K. Greff et al. [pdf]
- Tutorial on Variational Autoencoders (2016), C. Doersch. [pdf]
- Deep learning (2015), Y. LeCun, Y. Bengio and G. Hinton [pdf]
- Deep learning in neural networks: An overview (2015), J. Schmidhuber [pdf]
- Representation learning: A review and new perspectives (2013), Y. Bengio et al. [pdf]
Video Lectures / Tutorials / Blogs
(Lectures)
- CS231n, Convolutional Neural Networks for Visual Recognition, Stanford University [web]
- CS224d, Deep Learning for Natural Language Processing, Stanford University [web]
- Oxford Deep NLP 2017, Deep Learning for Natural Language Processing, University of Oxford [web]
(Tutorials)
- NIPS 2016 Tutorials, Long Beach [web]
- ICML 2016 Tutorials, New York City [web]
- ICLR 2016 Videos, San Juan [web]
- Deep Learning Summer School 2016, Montreal [web]
- Bay Area Deep Learning School 2016, Stanford [web]
(Blogs)
- OpenAI [web]
- Distill [web]
- Andrej Karpathy Blog [web]
- Colah's Blog [web]
- WildML [web]
- FastML [web]
- TheMorningPaper [web]
Appendix: More than Top 100
(2016)
- A character-level decoder without explicit segmentation for neural machine translation (2016), J. Chung et al. [pdf]
- Dermatologist-level classification of skin cancer with deep neural networks (2017), A. Esteva et al. [html]
- Weakly supervised object localization with multi-fold multiple instance learning (2017), R. Gokberk et al. [pdf]
- Brain tumor segmentation with deep neural networks (2017), M. Havaei et al. [pdf]
- Professor Forcing: A New Algorithm for Training Recurrent Networks (2016), A. Lamb et al. [pdf]
- Adversarially learned inference (2016), V. Dumoulin et al. [web][pdf]
- Understanding convolutional neural networks (2016), J. Koushik [pdf]
- Taking the human out of the loop: A review of bayesian optimization (2016), B. Shahriari et al. [pdf]
- Adaptive computation time for recurrent neural networks (2016), A. Graves [pdf]
- Densely connected convolutional networks (2016), G. Huang et al. [pdf]
- Region-based convolutional networks for accurate object detection and segmentation (2016), R. Girshick et al.
- Continuous deep q-learning with model-based acceleration (2016), S. Gu et al. [pdf]
- A thorough examination of the cnn/daily mail reading comprehension task (2016), D. Chen et al. [pdf]
- Achieving open vocabulary neural machine translation with hybrid word-character models (2016), M. Luong and C. Manning. [pdf]
- Very Deep Convolutional Networks for Natural Language Processing (2016), A. Conneau et al. [pdf]
- Bag of tricks for efficient text classification (2016), A. Joulin et al. [pdf]
- Efficient piecewise training of deep structured models for semantic segmentation (2016), G. Lin et al. [pdf]
- Learning to compose neural networks for question answering (2016), J. Andreas et al. [pdf]
- Perceptual losses for real-time style transfer and super-resolution (2016), J. Johnson et al. [pdf]
- Reading text in the wild with convolutional neural networks (2016), M. Jaderberg et al. [pdf]
- What makes for effective detection proposals? (2016), J. Hosang et al. [pdf]
- Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks (2016), S. Bell et al. [pdf]
- Instance-aware semantic segmentation via multi-task network cascades (2016), J. Dai et al. [pdf]
- Conditional image generation with pixelcnn decoders (2016), A. van den Oord et al. [pdf]
- Deep networks with stochastic depth (2016), G. Huang et al. [pdf]
- Consistency and Fluctuations For Stochastic Gradient Langevin Dynamics (2016), Yee Whye Teh et al. [pdf]
(2015)
- Ask your neurons: A neural-based approach to answering questions about images (2015), M. Malinowski et al. [pdf]
- Exploring models and data for image question answering (2015), M. Ren et al. [pdf]
- Are you talking to a machine? dataset and methods for multilingual image question (2015), H. Gao et al. [pdf]
- Mind's eye: A recurrent visual representation for image caption generation (2015), X. Chen and C. Zitnick. [pdf]
- From captions to visual concepts and back (2015), H. Fang et al. [pdf]
- Towards AI-complete question answering: A set of prerequisite toy tasks (2015), J. Weston et al. [pdf]
- Ask me anything: Dynamic memory networks for natural language processing (2015), A. Kumar et al. [pdf]
- Unsupervised learning of video representations using LSTMs (2015), N. Srivastava et al. [pdf]
- Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding (2015), S. Han et al. [pdf]
- Improved semantic representations from tree-structured long short-term memory networks (2015), K. Tai et al. [pdf]
- Character-aware neural language models (2015), Y. Kim et al. [pdf]
- Grammar as a foreign language (2015), O. Vinyals et al. [pdf]
- Trust Region Policy Optimization (2015), J. Schulman et al. [pdf]
- Beyond short snippets: Deep networks for video classification (2015) [pdf]
- Learning Deconvolution Network for Semantic Segmentation (2015), H. Noh et al. [pdf]
- Learning spatiotemporal features with 3d convolutional networks (2015), D. Tran et al. [pdf]
- Understanding neural networks through deep visualization (2015), J. Yosinski et al. [pdf]
- An Empirical Exploration of Recurrent Network Architectures (2015), R. Jozefowicz et al. [pdf]
- Deep generative image models using a laplacian pyramid of adversarial networks (2015), E. Denton et al. [pdf]
- Gated Feedback Recurrent Neural Networks (2015), J. Chung et al. [pdf]
- Fast and accurate deep network learning by exponential linear units (ELUs) (2015), D. Clevert et al. [pdf]
- Pointer networks (2015), O. Vinyals et al. [pdf]
- Visualizing and Understanding Recurrent Networks (2015), A. Karpathy et al. [pdf]
- Attention-based models for speech recognition (2015), J. Chorowski et al. [pdf]
- End-to-end memory networks (2015), S. Sukhbaatar et al. [pdf]
- Describing videos by exploiting temporal structure (2015), L. Yao et al. [pdf]
- A neural conversational model (2015), O. Vinyals and Q. Le. [pdf]
- Improving distributional similarity with lessons learned from word embeddings (2015), O. Levy et al. [pdf] (https://www.transacl.org/ojs/index.php/tacl/article/download/570/124)
- Transition-Based Dependency Parsing with Stack Long Short-Term Memory (2015), C. Dyer et al. [pdf]
- Improved Transition-Based Parsing by Modeling Characters instead of Words with LSTMs (2015), M. Ballesteros et al. [pdf]
- Finding function in form: Compositional character models for open vocabulary word representation (2015), W. Ling et al. [pdf]
(~2014)
- DeepPose: Human pose estimation via deep neural networks (2014), A. Toshev and C. Szegedy [pdf]
- Learning a Deep Convolutional Network for Image Super-Resolution (2014), C. Dong et al. [pdf]
- Recurrent models of visual attention (2014), V. Mnih et al. [pdf]
- Empirical evaluation of gated recurrent neural networks on sequence modeling (2014), J. Chung et al. [pdf]
- Addressing the rare word problem in neural machine translation (2014), M. Luong et al. [pdf]
- On the properties of neural machine translation: Encoder-decoder approaches (2014), K. Cho et al.
- Recurrent neural network regularization (2014), W. Zaremba et al. [pdf]
- Intriguing properties of neural networks (2014), C. Szegedy et al. [pdf]
- Towards end-to-end speech recognition with recurrent neural networks (2014), A. Graves and N. Jaitly. [pdf]
- Scalable object detection using deep neural networks (2014), D. Erhan et al. [pdf]
- On the importance of initialization and momentum in deep learning (2013), I. Sutskever et al. [pdf]
- Regularization of neural networks using dropconnect (2013), L. Wan et al. [pdf]
- Learning Hierarchical Features for Scene Labeling (2013), C. Farabet et al. [pdf]
- Linguistic Regularities in Continuous Space Word Representations (2013), T. Mikolov et al. [pdf]
- Large scale distributed deep networks (2012), J. Dean et al. [pdf]
- A Fast and Accurate Dependency Parser using Neural Networks (2014), D. Chen and C. Manning. [pdf]
Acknowledgement
Thank you for all your contributions. Please make sure to read the contributing guide before you make a pull request.
License
To the extent possible under law, Terry T. Um has waived all copyright and related or neighboring rights to this work.