Simultaneous interpreting requires the concurrent execution of multiple processes: listening, comprehension, conversion of a message from one language to another, speech production, and self-monitoring. It demands the deployment of an impressive array of linguistic and cognitive control mechanisms that must coordinate the various brain systems implicated in handling these tasks. Indeed, we might argue that simultaneous interpreting is among the most demanding of all linguistic tasks. Given that we normally use only one language at a time, even when engaging in dense code-switching, using two at once (as successful simultaneous interpretation requires) is an extraordinary feat. How does the brain handle the challenge of juggling two languages? How is the extreme language control capacity required during interpreting implemented?
I will discuss a series of neuroimaging investigations of the cerebral networks involved in interpreting and of the structural brain changes that accompany increasing interpreting proficiency, together with the insights these can provide more broadly for neurocognitive theories of multilingual language control. The unique perspective afforded by examining language in action has led to novel hypotheses concerning the extent of the brain's language network and its relationship to phylogenetically older mechanisms of domain-general behavioural and cognitive control. In more recent work, I have begun to examine the neural basis of cross-modal (sign-oral) interpretation, with results that invite a significant re-evaluation and reinterpretation of the existing data. I will reflect on the methodological issues that may be in play and propose some approaches to addressing these challenges.
Alexis Hervais-Adelman is assistant professor of neural dynamics and human electrophysiology at the University of Geneva and a research associate at the Zurich Linguistics Center, University of Zurich. He holds a PhD in cognitive neuroscience from the University of Cambridge (2008), where he studied perceptual learning of degraded speech. His research investigates the neural mechanisms of language, with a focus on extreme language processing, such as simultaneous interpreting, and on degraded speech perception. His work uses neuroimaging to explore the brain networks involved in multilingualism and in challenging listening conditions. His applied research focuses on developing cognitive and non-invasive neurostimulation interventions to enhance speech comprehension for listeners with hearing difficulties. In recent years, his research has expanded to include evolutionary aspects of language, in part through studies of fetal brain development.