Friday, 7 October, 2016 - 10:45 to 12:15
Room: 

Multimodal Representations for Natural Language Meaning

In this talk, I discuss the recent research my group has been
conducting on multimodal representations for modeling natural language
meaning. This approach, Multimodal Semantic Simulation (MSS), assumes a
rich formal model of events and their participants, as well as a
modeling language for constructing 3D visualizations of objects and
events denoted by natural language expressions. The Dynamic Event
Model (DEM) encodes events as programs in a dynamic logic with an
operational semantics, while VoxML (Visual Object Concept Modeling
Language) serves as the platform for multimodal semantic simulations
in human-computer communication. Within the context of embodiment
and simulation
semantics, a rich extension of Generative Lexicon's qualia structure
has been developed into a notion of situational context, called a
habitat. Visual representations of concepts are called voxemes and are
stored in a voxicon. Together, the linked lexicon and voxicon form a
multimodal lexicon, which is accessed for natural language parsing,
generation, and simulation.
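
To make the linked lexicon/voxicon structure concrete, here is a
minimal sketch in Python, assuming simplified encodings of habitats,
voxemes, and lexical entries. All class and field names below
(Habitat, Voxeme, MultimodalLexicon, the geometry and affordances
fields, and the example values for "cup") are illustrative assumptions
based on the description above, not the actual VoxML or DEM data
structures.

    # Hypothetical sketch of a multimodal lexicon linking lexical
    # entries (lexicon) to visual object concepts (voxicon). All names
    # and fields are illustrative assumptions, not the VoxML spec.
    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class Habitat:
        """Situational context for an object: an assumed, simplified
        encoding of the extension of qualia structure described above."""
        intrinsic: Dict[str, str] = field(default_factory=dict)  # e.g. orientation axes
        extrinsic: Dict[str, str] = field(default_factory=dict)  # e.g. placement constraints

    @dataclass
    class Voxeme:
        """Visual representation of a concept, stored in the voxicon."""
        name: str                  # e.g. "cup"
        geometry: str              # pointer to a 3D model asset (assumed)
        habitats: List[Habitat] = field(default_factory=list)
        affordances: List[str] = field(default_factory=list)  # e.g. "grasp"

    @dataclass
    class LexicalEntry:
        """Symbolic lexical entry; `voxeme` links lexicon to voxicon."""
        lemma: str
        pos: str
        voxeme: Optional[str] = None   # key into the voxicon

    class MultimodalLexicon:
        """Linked lexicon + voxicon, queried during parsing,
        generation, and simulation."""

        def __init__(self) -> None:
            self.lexicon: Dict[str, LexicalEntry] = {}
            self.voxicon: Dict[str, Voxeme] = {}

        def add(self, entry: LexicalEntry,
                voxeme: Optional[Voxeme] = None) -> None:
            self.lexicon[entry.lemma] = entry
            if voxeme is not None:
                self.voxicon[voxeme.name] = voxeme
                entry.voxeme = voxeme.name

        def simulate(self, lemma: str) -> Optional[Voxeme]:
            """Resolve a word to its visual concept for simulation."""
            entry = self.lexicon.get(lemma)
            if entry is None or entry.voxeme is None:
                return None
            return self.voxicon.get(entry.voxeme)

    # Usage: link "cup" to a voxeme whose habitat constrains its
    # upright orientation (purely illustrative values).
    ml = MultimodalLexicon()
    cup = Voxeme(
        name="cup",
        geometry="assets/cup.obj",
        habitats=[Habitat(intrinsic={"up": "+Y"},
                          extrinsic={"on": "surface"})],
        affordances=["grasp", "fill", "drink_from"],
    )
    ml.add(LexicalEntry(lemma="cup", pos="noun"), cup)
    print(ml.simulate("cup").affordances)  # ['grasp', 'fill', 'drink_from']

The design point mirrored here is the indirection: the lexical entry
stores only a key into the voxicon, so the symbolic lexicon and the
visual voxicon can be maintained separately while remaining jointly
queryable during parsing, generation, and simulation.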

CV: 

James Pustejovsky holds the TJX Feldberg Chair in Computer Science at Brandeis University, where he directs the Lab for Linguistics and Computation, and chairs both the Program in Language and Linguistics and the Computational Linguistics MA Program. He has conducted research in computational linguistics, AI, lexical semantics, temporal reasoning, corpus linguistics, and language annotation. He has written several books on linguistics, computational semantics, computational linguistics, and corpus processing.