
Using deep learning in natural language processing: explaining Google's Neural Machine Translation

Description

PyData Amsterdam 2017

Recent advances in Natural Language Processing (NLP) use deep learning to improve performance. In September 2016, Google announced that Google Translate would shift from phrase-based to neural machine translation. Other fields of NLP are making a similar shift. This talk motivates and explains these algorithms and discusses implementations.

Recent advances in Natural Language Processing (NLP) use deep learning algorithms to improve performance. Google Translate has shifted to neural machine translation, Baidu's speech generation uses neural networks, and so do question-answering systems. These neural networks share common architectures: they exploit recurrent computation to traverse the input and the output. This talk motivates recurrent neural networks and discusses their architectures. The second half covers extensions such as attention mechanisms. Key words (covered in this order): RNN, seq2seq, attention, word vectors, data/model parallelism, low-precision inference, TPU.
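To make the "recurrent computation plus attention" idea concrete, here is a minimal NumPy sketch (not the talk's code): a vanilla RNN encodes a toy source sequence, and a decoder state then attends over the encoder states with dot-product scores. All names, dimensions, and the random toy data are illustrative assumptions.

```python
import numpy as np

def rnn_step(x, h, W_xh, W_hh, b_h):
    """One vanilla RNN step: mix the current input with the previous hidden state."""
    return np.tanh(x @ W_xh + h @ W_hh + b_h)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d_in, d_h = 4, 8  # toy input and hidden sizes (illustrative choice)
W_xh = rng.normal(scale=0.1, size=(d_in, d_h))
W_hh = rng.normal(scale=0.1, size=(d_h, d_h))
b_h = np.zeros(d_h)

# Encode a "sentence" of 5 toy token vectors, keeping every hidden state.
source = rng.normal(size=(5, d_in))
h = np.zeros(d_h)
encoder_states = []
for x in source:
    h = rnn_step(x, h, W_xh, W_hh, b_h)
    encoder_states.append(h)
encoder_states = np.stack(encoder_states)  # shape (5, d_h)

# Attention: the decoder state scores each encoder state (dot product here),
# and the context vector is the softmax-weighted sum of the encoder states.
decoder_state = h  # e.g. start the decoder from the final encoder state
scores = encoder_states @ decoder_state
weights = softmax(scores)
context = weights @ encoder_states
```

In a full seq2seq model the context vector would be fed into the decoder at every output step, which is exactly what lets the network "look back" at relevant source positions instead of compressing the whole sentence into one fixed vector.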
