Representation Learning of Grounded Language and Knowledge - With and Without End-to-End Learning

Explore representation learning of grounded language and knowledge in this lecture by Yejin Choi of the University of Washington. The talk covers intelligent communication, types of knowledge, and end-to-end learning approaches, with practical applications in cooking instructions, biology wet-lab procedures, and engine oil changes. It examines the unique challenges of procedural language and the use of action graphs, along with unsupervised learning techniques and the role of knowledge in models. Recipe generation is framed as a form of machine translation, including attention mechanisms and neural checklist models, with comparisons across baselines and neural recipe examples. The lecture analyzes the limitations of end-to-end learning, explores dynamic networks and verb physics frames, and concludes with insights on reverse engineering commonsense knowledge and the intersection of finite state automata with RNN language models.