Practical Resources: A Comprehensive Collection of RNN and LSTM Resources
Types of RNN
1) Plain Tanh Recurrent Neural Networks
2) Gated Recurrent Neural Networks (GRU)
3) Long Short-Term Memory (LSTM)
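To make the three variants concrete, here is a minimal NumPy sketch of the single-step update each cell type computes. This is an illustration only, not code from any of the resources below, and the parameter names (Wx, Wh, and the gate matrices in the p dict) are hypothetical.

```python
# Minimal, illustrative single-step updates for the three RNN cell types.
# Shapes: x is the input vector, h the hidden state, c the LSTM cell state.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def plain_tanh_step(x, h, Wx, Wh, b):
    # 1) Plain tanh RNN: h_t = tanh(Wx x_t + Wh h_{t-1} + b)
    return np.tanh(Wx @ x + Wh @ h + b)

def gru_step(x, h, p):
    # 2) GRU: update gate z, reset gate r, candidate state h_tilde
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h + p["bz"])
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h + p["br"])
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h) + p["bh"])
    return (1 - z) * h + z * h_tilde

def lstm_step(x, h, c, p):
    # 3) LSTM: input/forget/output gates plus a separate cell state c
    i = sigmoid(p["Wi"] @ x + p["Ui"] @ h + p["bi"])
    f = sigmoid(p["Wf"] @ x + p["Uf"] @ h + p["bf"])
    o = sigmoid(p["Wo"] @ x + p["Uo"] @ h + p["bo"])
    g = np.tanh(p["Wg"] @ x + p["Ug"] @ h + p["bg"])
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

The gated variants (GRU and LSTM) differ from the plain tanh cell mainly in the gates that control how much of the previous state is carried forward, which is the mechanism most of the tutorials below focus on.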
Tutorials
A Beginner’s Guide to Recurrent Networks and LSTMs
http://deeplearning4j.org/lstm.html
A Deep Dive into Recurrent Neural Nets
- blog (by Nikhil Buduma): http://nikhilbuduma.com/2015-01-11/a-deep-dive-into-recurrent-neural-networks/
Last time, we talked about the traditional feed-forward neural net and concepts that form the basis of deep learning. These ideas are extremely powerful! We saw how feed-forward convolutional neural networks have set records on many difficult tasks including handwritten digit recognition and object classification. And even today, feed-forward neural networks consistently outperform virtually all other approaches to solving classification tasks.
Long Short-Term Memory: Tutorial on LSTM Recurrent Networks
http://people.idsia.ch/~juergen/lstm/index.htm
LSTM implementation explained
http://apaszke.github.io/lstm-explained.html
Recurrent Neural Networks Tutorial
- Part 1 (Introduction to RNNs): http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
- Part 2 (Implementing a RNN using Python and Theano): http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-2-implementing-a-language-model-rnn-with-python-numpy-and-theano/
- Part 3 (Understanding the Backpropagation Through Time (BPTT) algorithm): http://www.wildml.com/2015/10/recurrent-neural-networks-tutorial-part-3-backpropagation-through-time-and-vanishing-gradients/
- Part 4 (Implementing a GRU/LSTM RNN): http://www.wildml.com/2015/10/recurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano/
Understanding LSTM Networks
- blog: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
- ZH (Chinese translation): http://www.jianshu.com/p/9dc9f41f0b29
Recurrent Neural Networks in DL4J
http://deeplearning4j.org/usingrnns.html
Learning RNN Hierarchies
Train RNN
A Simple Way to Initialize Recurrent Networks of Rectified Linear Units
- arxiv: http://arxiv.org/abs/1504.00941
- gitxiv: http://gitxiv.com/posts/7j5JXvP3kn5Jf8Waj/irnn-experiment-with-pixel-by-pixel-sequential-mnist
- github: https://github.com/fchollet/keras/blob/master/examples/mnist_irnn.py
- github: https://gist.github.com/GabrielPereyra/353499f2e6e407883b32
- blog(“Implementing Recurrent Neural Net using chainer!”): http://t-satoshi.blogspot.jp/2015/06/implementing-recurrent-neural-net-using.html
- reddit: https://www.reddit.com/r/MachineLearning/comments/31rinf/150400941_a_simple_way_to_initialize_recurrent/
- reddit: https://www.reddit.com/r/MachineLearning/comments/32tgvw/has_anyone_been_able_to_reproduce_the_results_in/
Sequence Level Training with Recurrent Neural Networks (ICLR 2016)
- arxiv: http://arxiv.org/abs/1511.06732
- github: https://github.com/facebookresearch/MIXER
- notes: https://www.evernote.com/shard/s189/sh/ada01a82-70a9-48d4-985c-20492ab91e84/8da92be19e704996dc2b929473abed46
Training Recurrent Neural Networks (PhD thesis)
- author: Ilya Sutskever
- thesis: https://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf
Deep learning for control using augmented Hessian-free optimization
- blog: https://studywolf.wordpress.com/2016/04/04/deep-learning-for-control-using-augmented-hessian-free-optimization/
- github: https://github.com/studywolf/blog/blob/master/train_AHF/train_hf.py
Hierarchical Conflict Propagation: Sequence Learning in a Recurrent Deep Neural Network
Recurrent Batch Normalization
- arxiv: http://arxiv.org/abs/1603.09025
- github: https://github.com/iassael/torch-bnlstm
- github: https://github.com/cooijmanstim/recurrent-batch-normalization
- github(“LSTM with Batch Normalization”): https://github.com/fchollet/keras/pull/2183
- notes: http://www.shortscience.org/paper?bibtexKey=journals/corr/CooijmansBLC16
Optimizing Performance of Recurrent Neural Networks on GPUs
- arxiv: http://arxiv.org/abs/1604.01946
- github: https://github.com/parallel-forall/code-samples/blob/master/posts/rnn/LSTM.cu
Learn To Execute Programs
Learning to Execute
- arXiv: http://arxiv.org/abs/1410.4615
- github: https://github.com/wojciechz/learning_to_execute
- github(Tensorflow): https://github.com/raindeer/seq2seq_experiments
Neural Programmer-Interpreters (Google DeepMind. ICLR 2016 Best Paper)
- arXiv: http://arxiv.org/abs/1511.06279
- project page: http://www-personal.umich.edu/~reedscot/iclr_project.html
- github: https://github.com/mokemokechicken/keras_npi
A Programmer-Interpreter Neural Network Architecture for Prefrontal Cognitive Control
Convolutional RNN: an Enhanced Model for Extracting Features from Sequential Data
Attention Models
Recurrent Models of Visual Attention (Google DeepMind. NIPS 2014)
- paper: http://arxiv.org/abs/1406.6247
- GitXiv: http://gitxiv.com/posts/ZEobCXSh23DE8a8mo/recurrent-models-of-visual-attention
- data: https://github.com/deepmind/mnist-cluttered
- code: https://github.com/Element-Research/rnn/blob/master/examples/recurrent-visual-attention.lua
- code: https://github.com/Element-Research/rnn/blob/master/scripts/evaluate-rva.lua
- blog: http://torch.ch/blog/2015/09/21/rmva.html
Show, Attend and Tell: Neural Image Caption Generation with Visual Attention
A Neural Attention Model for Abstractive Sentence Summarization (EMNLP 2015. Facebook AI Research)
- arXiv: http://arxiv.org/abs/1509.00685
- github: https://github.com/facebook/NAMAS
Effective Approaches to Attention-based Neural Machine Translation (EMNLP 2015)
Generating Images from Captions with Attention
- arxiv: http://arxiv.org/abs/1511.02793
- github: https://github.com/emansim/text2image
- demo: http://www.cs.toronto.edu/~emansim/cap2im.html
Attention and Memory in Deep Learning and NLP
Survey on the attention based RNN model and its applications in computer vision
Papers
Generating Sequences With Recurrent Neural Networks
- arxiv: http://arxiv.org/abs/1308.0850
- github: https://github.com/hardmaru/write-rnn-tensorflow
- github: https://github.com/szcom/rnnlib
- blog: http://blog.otoro.net/2015/12/12/handwriting-generation-demo-in-tensorflow/
Unsupervised Learning of Video Representations using LSTMs (ICML 2015)
- project: http://www.cs.toronto.edu/~nitish/unsupervised_video/
- paper: http://arxiv.org/abs/1502.04681
- code: http://www.cs.toronto.edu/~nitish/unsupervised_video/unsup_video_lstm.tar.gz
- github: https://github.com/emansim/unsupervised-videos
LSTM: A Search Space Odyssey
- paper: http://arxiv.org/abs/1503.04069
- notes: https://www.evernote.com/shard/s189/sh/48da42c5-8106-4f0d-b835-c203466bfac4/50d7a3c9a961aefd937fae3eebc6f540
- blog(“Dissecting the LSTM”): https://medium.com/jim-fleming/implementing-lstm-a-search-space-odyssey-7d50c3bacf93#.crg8pztop
- github: https://github.com/jimfleming/lstm_search
Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets
A Critical Review of Recurrent Neural Networks for Sequence Learning
- arXiv: http://arxiv.org/abs/1506.00019
- review: http://blog.terminal.com/a-thorough-and-readable-review-on-rnns/
Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks (Winner of MSCOCO image captioning challenge, 2015)
Visualizing and Understanding Recurrent Networks (ICLR 2016. Andrej Karpathy, Justin Johnson, Fei-Fei Li)
- paper: http://arxiv.org/abs/1506.02078
- slides: http://www.robots.ox.ac.uk/~seminars/seminars/Extra/2015_07_06_AndrejKarpathy.pdf
- github: https://github.com/karpathy/char-rnn
Grid Long Short-Term Memory
- arxiv: http://arxiv.org/abs/1507.01526
- github(Torch7): https://github.com/coreylynch/grid-lstm/
Depth-Gated LSTM
Deep Knowledge Tracing
- paper: https://web.stanford.edu/~cpiech/bio/papers/deepKnowledgeTracing.pdf
- github: https://github.com/chrispiech/DeepKnowledgeTracing
Top-down Tree Long Short-Term Memory Networks
Alternative structures for character-level RNNs (INRIA & Facebook AI Research. ICLR 2016)
- arXiv: http://arxiv.org/abs/1511.06303
- github: https://github.com/facebook/Conditional-character-based-RNN
Pixel Recurrent Neural Networks (Google DeepMind)
- arxiv: http://arxiv.org/abs/1601.06759
- notes(by Hugo Larochelle): https://www.evernote.com/shard/s189/sh/fdf61a28-f4b6-491b-bef1-f3e148185b18/aba21367d1b3730d9334ed91d3250848
Long Short-Term Memory-Networks for Machine Reading
Lipreading with Long Short-Term Memory
Associative Long Short-Term Memory
Representation of linguistic form and function in recurrent neural networks
Architectural Complexity Measures of Recurrent Neural Networks
Easy-First Dependency Parsing with Hierarchical Tree LSTMs
Training Input-Output Recurrent Neural Networks through Spectral Methods
Projects
NeuralTalk (Deprecated): a Python+numpy project for learning Multimodal Recurrent Neural Networks that describe images with sentences
NeuralTalk2: Efficient Image Captioning code in Torch, runs on GPU
char-rnn in Blocks
Project: pycaffe-recurrent
Using neural networks for password cracking
- blog: https://0day.work/using-neural-networks-for-password-cracking/
- github: https://github.com/gehaxelt/RNN-Passwords
torch-rnn: Efficient, reusable RNNs and LSTMs for torch
Deploying a model trained with GPU in Torch into JavaScript, for everyone to use
- blog: http://testuggine.ninja/blog/torch-conversion
- demo: http://testuggine.ninja/DRUMPF-9000/
- github: https://github.com/Darktex/char-rnn
LSTM implementation on Caffe
JNN: Java Neural Network Library
- intro: C2W model, LSTM-based Language Model, LSTM-based Part-Of-Speech-Tagger Model
- github: https://github.com/wlin12/JNN
LSTM-Autoencoder: Seq2Seq LSTM Autoencoder
RNN Language Model Variations
- intro: Standard LSTM, Gated Feedback LSTM, 1D-Grid LSTM
- github: https://github.com/cheng6076/mlm
keras-extra: Extra Layers for Keras to connect CNN with RNN
Blogs
Survey on Attention-based Models Applied in NLP
- blog: http://yanran.li/peppypapers/2015/10/07/survey-attention-model-1.html
Attention-based models were first proposed in the field of computer vision around mid-2014 and then spread into natural language processing. In this post, I will mainly focus on a list of attention-based models applied in natural language processing.
Survey on Advanced Attention-based Models
- blog: http://yanran.li/peppypapers/2015/10/07/survey-attention-model-2.html
In the previous post, I briefly introduced a list of papers applying attention-based models in natural language processing. Though slightly different, they are all soft alignment models. There are, however, actually two classes of alignment models: the soft one and the hard one. In fact, soft and hard alignment models appeared concurrently in computer vision around late 2014. Due to differences between CV and NLP (more precisely, image vs. language), hard alignment models are more difficult to transfer to NLP. In this post, I aim to introduce some advanced attention-based models, especially hard ones, which are not yet popular but will be.
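As a rough illustration of the soft/hard distinction discussed above, here is a hypothetical NumPy sketch (the dot-product scoring and all names are assumptions, not taken from the surveyed papers): soft alignment returns a differentiable weighted average over every position, while hard alignment commits to a single sampled position and therefore typically needs REINFORCE-style training.

```python
# Illustrative soft vs. hard alignment over a memory of T annotation vectors.
import numpy as np

def alignment_weights(query, memory):
    # memory: (T, d) array of annotations; query: (d,) vector.
    scores = memory @ query                   # dot-product scores, shape (T,)
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    return weights / weights.sum()

def soft_attention(query, memory):
    # Soft alignment: differentiable weighted average of all positions.
    return alignment_weights(query, memory) @ memory

def hard_attention(query, memory, rng=np.random.default_rng()):
    # Hard alignment: sample a single position; the selection itself
    # is non-differentiable.
    weights = alignment_weights(query, memory)
    t = rng.choice(len(memory), p=weights)
    return memory[t]
```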
Online Representation Learning in Recurrent Neural Language Models
http://www.marekrei.com/blog/online-representation-learning-in-recurrent-neural-language-models/
Fun with Recurrent Neural Nets: One More Dive into CNTK and TensorFlow
In a previous article I set about comparing Microsoft's Computational Network Toolkit for deep neural nets to Google's TensorFlow. I concluded that piece with a deep dive into how recurrent neura…
Materials to understand LSTM
https://medium.com/@shiyan/materials-to-understand-lstm-34387d6454c1#.4mt3bzoau
Understanding LSTM and its diagrams (★★★★★)
- blog: https://medium.com/@shiyan/understanding-lstm-and-its-diagrams-37e2f46f1714
- slides: https://github.com/shi-yan/FreeWill/blob/master/Docs/Diagrams/lstm_diagram.pptx
Persistent RNNs: 30 times faster RNN layers at small mini-batch sizes (Greg Diamos, Baidu Silicon Valley AI Lab)
http://svail.github.io/persistent_rnns/
All of Recurrent Neural Networks
https://medium.com/@jianqiangma/all-about-recurrent-neural-networks-9e5ae2936f6e#.q4s02elqg
Rolling and Unrolling RNNs
- blog: https://shapeofdata.wordpress.com/2016/04/27/rolling-and-unrolling-rnns/
A while back, I discussed Recurrent Neural Networks (RNNs), a type of artificial neural network in which some of the connections between neurons point “backwards”. When a sequence of in…
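The "rolling/unrolling" picture that post builds on can be summarized in a few lines; the sketch below is hypothetical (function and parameter names are assumptions, not code from the post).

```python
# "Unrolling" an RNN: the same recurrent step is replayed once per time step,
# so the cyclic ("rolled") graph becomes a feed-forward chain of layers that
# all share the same weights.
import numpy as np

def unroll(inputs, Wx, Wh, b, h0):
    """inputs: iterable of T input vectors; returns all T hidden states."""
    h, states = h0, []
    for x in inputs:                      # one copy of the layer per time step
        h = np.tanh(Wx @ x + Wh @ h + b)  # the "backwards" connection: h feeds back in
        states.append(h)
    return states
```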
Sequence prediction using recurrent neural networks (LSTM) with TensorFlow: LSTM regression using TensorFlow
- blog: http://mourafiq.com/2016/05/15/predicting-sequences-using-rnn-in-tensorflow.html
- github: https://github.com/mouradmourafiq/tensorflow-lstm-regression
Resources
Awesome Recurrent Neural Networks – A curated list of resources dedicated to RNN
- homepage: http://jiwonkim.org/awesome-rnn/
- github: https://github.com/kjw0612/awesome-rnn
Jürgen Schmidhuber’s page on Recurrent Neural Networks
http://people.idsia.ch/~juergen/rnn.html
Reading and Questions
Are there any recurrent convolutional neural network implementations out there?