In this talk, I will present my view on the recent dominance of neural networks in language modeling. I will explain in a simple way why neural networks became so popular, and describe my ideas on what the research goals could be for scientists who want to surpass them.
Tomáš Mikolov is a senior researcher at CIIRC, ČVUT. Previously, he was a research scientist at Facebook AI Research and Google Brain, where he developed popular open-source NLP projects such as word2vec and fastText. In 2010 he published the RNNLM toolkit, the first project that made it possible to correctly train large recurrent neural networks, producing state-of-the-art results on several standard benchmarks in language modeling, speech recognition, machine translation, and text generation.