Eyetracked Multi-Modal Translation (EMMT)


EMMT (Eyetracked Multi-Modal Translation) is a simultaneous eye-tracking, 4-electrode EEG and audio corpus for multi-modal reading and translation scenarios.

About EMMT

We present EMMT, a dataset containing monocular eye movement recordings, audio data, and 4-electrode wearable electroencephalogram (EEG) data recorded from 43 participants while they were engaged in sight translation supported by an image.

The dataset can be found here: https://github.com/ufal/eyetracked-multi-modal-translation
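As a starting point, the repository can be cloned and its contents inspected programmatically. The sketch below is a minimal Python example; the directory and file names it refers to are hypothetical and depend on how the released data is actually organized, so adjust the paths after inspecting the repository.

```python
# Minimal sketch for obtaining and inspecting the EMMT data.
# First clone the repository:
#   git clone https://github.com/ufal/eyetracked-multi-modal-translation.git
# Assumption: recordings are distributed as per-participant files;
# the specific paths below are hypothetical placeholders.
import csv
from pathlib import Path

DATA_ROOT = Path("eyetracked-multi-modal-translation")  # cloned repo

# List the top-level contents to discover how the corpus is organized.
for entry in sorted(DATA_ROOT.iterdir()):
    print(entry.name)

# Example: read one hypothetical CSV of eye-tracking samples.
sample_file = DATA_ROOT / "data" / "participant_01_eyetracking.csv"
if sample_file.exists():
    with sample_file.open(newline="") as f:
        reader = csv.DictReader(f)
        for i, row in enumerate(reader):
            print(row)
            if i >= 4:  # show only the first few rows
                break
```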

The full description of the experiment design is given in our arXiv paper: EMMT: A simultaneous eye-tracking, 4-electrode EEG and audio corpus for multi-modal reading and translation scenarios

Data Browser

You can browse EMMT visually using this interface:

https://ufal-eeg-analyzer.streamlit.app/

Known limitations:

  • Synchronization between the modalities (eye tracking, EEG, and audio) is approximate and may not be perfectly aligned.

[Screenshot of EMMT Data Browser]