[SAIF 2019] Day 2: Rational Recurrences For Empirical Natural Language Processing – Noah Smith – Samsung

Despite their often-discussed advantages, deep learning methods largely disregard theories of both learning and language. This makes their prediction behavior hard to understand and explain. In this talk, I will present a path toward more understandable (but still "deep") natural language processing models, without sacrificing accuracy. Rational recurrences comprise a family of recurrent neural networks that obey a particular set of rules about how to calculate hidden states, and hence correspond to parallelized weighted finite-state pattern matching. Many recently introduced models turn out to be members of this family, and the weighted finite-state view lets us derive some new ones. I'll introduce rational RNNs and present some of the ways we have used them in NLP. My collaborators on this work include Jesse Dodge, Hao Peng, Roy Schwartz, and Sam Thomson.
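To make the correspondence concrete, here is a minimal sketch (not the speaker's exact formulation) of the simplest kind of recurrence discussed in this line of work: a one-gate update `c_t = f_t * c_{t-1} + (1 - f_t) * u_t`. Unrolling it shows that the final state is a weighted sum over input positions, where each position's weight is exactly the score a tiny two-state weighted finite-state machine would assign after reading the rest of the sequence. All variable names below are illustrative assumptions, not from the talk.

```python
import numpy as np

# Hypothetical setup: random "content" vectors u and forget gates f in (0, 1).
rng = np.random.default_rng(0)
T, d = 6, 4
u = rng.normal(size=(T, d))
f = 1 / (1 + np.exp(-rng.normal(size=(T, d))))

# Recurrent view: sequential hidden-state update with c_0 = 0.
c = np.zeros(d)
for t in range(T):
    c = f[t] * c + (1 - f[t]) * u[t]

# Weighted finite-state view: an explicit sum over positions i, where
# position i contributes u_i weighted by (1 - f_i) * prod_{j > i} f_j --
# the weight of the unique accepting path that "matches" at position i.
c_wfsa = np.zeros(d)
for i in range(T):
    weight = (1 - f[i]) * np.prod(f[i + 1:], axis=0)
    c_wfsa += weight * u[i]

assert np.allclose(c, c_wfsa)
```

Because each per-position weight depends only on products of gates, the unrolled view can be computed with parallel prefix products rather than a strictly sequential loop, which is the sense in which such recurrences admit parallelized finite-state pattern matching.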

#SAIF
#SamsungAIForum
#SR
#SamsungResearch


<style>.embed-container { position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden; max-width: 100%; } .embed-container iframe, .embed-container object, .embed-container embed { position: absolute; top: 0; left: 0; width: 100%; height: 100%; }</style><div class="embed-container"><iframe src="https://www.youtube.com/embed/Pl2XddiPTno" frameborder="0" allowfullscreen></iframe></div>
