# Chapter 2: Natural Language Processing

- [Recurrent Neural Network based Language Model](/advanced-deep-learning/di-er-zhang-ff1a-xu-lie-mo-xing/recurrent-neural-network-based-languaged-model.md)
- [Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation](/advanced-deep-learning/di-er-zhang-ff1a-xu-lie-mo-xing/learning-phrase-representations-using-rnn-encoder-decoder-for-statistical-machine-translation.md)
- [Neural Machine Translation by Jointly Learning to Align and Translate](/advanced-deep-learning/di-er-zhang-ff1a-xu-lie-mo-xing/neural-machine-translation-by-jointly-learning-to-align-and-translate.md)
- [Hierarchical Attention Networks for Document Classification](/advanced-deep-learning/di-er-zhang-ff1a-xu-lie-mo-xing/hierarchical-attention-networks-for-document-classification.md)
- [Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks](/advanced-deep-learning/di-er-zhang-ff1a-xu-lie-mo-xing/connectionist-temporal-classification-labelling-unsegmented-sequence-data-with-recurrent-neural-netw.md)
- [About Long Short Term Memory](/advanced-deep-learning/di-er-zhang-ff1a-xu-lie-mo-xing/about-long-short-term-memory.md)
- [Attention Is All You Need](/advanced-deep-learning/di-er-zhang-ff1a-xu-lie-mo-xing/attention-is-all-you-need.md)
- [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](/advanced-deep-learning/di-er-zhang-ff1a-xu-lie-mo-xing/bert-pre-training-of-deep-bidirectional-transformer-for-language-understanding.md)
