The Hausa Visual Question Answering Dataset 1.0 (HaVQA 1.0) is a multimodal dataset of images and text for the Hausa language. It supports research in visual question answering (VQA), visual question elicitation (VQE), and both text-only and multimodal machine translation (MMT).

The dataset contains 1,555 unique images and 12,044 gold-standard English-Hausa parallel sentences.
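
As a rough illustration of how such records might be consumed, the sketch below reads the question-answer pairs from a JSON file. The file name and field names (image_id, question_en, question_ha, answer_en, answer_ha) are assumptions for illustration only and may not match the actual HaVQA release format.

```python
import json

# Hypothetical file name; adjust to the actual HaVQA release.
DATA_PATH = "havqa_train.json"

with open(DATA_PATH, encoding="utf-8") as f:
    examples = json.load(f)

# Each record is assumed to pair an image with an English-Hausa
# question and answer (field names are assumptions).
for ex in examples[:3]:
    print(ex["image_id"])     # identifier of the associated image
    print(ex["question_en"])  # English question
    print(ex["question_ha"])  # Hausa translation of the question
    print(ex["answer_en"])    # English answer
    print(ex["answer_ha"])    # Hausa translation of the answer
```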
How to cite

@article{parida2023havqa,
  title={HaVQA: A Dataset for Visual Question Answering and Multimodal Research in Hausa Language},
  author={Parida, Shantipriya and Abdulmumin, Idris and Muhammad, Shamsuddeen Hassan and Bose, Aneesh and Kohli, Guneet Singh and Ahmad, Ibrahim Said and Kotwal, Ketan and Sarkar, Sayan Deb and Bojar, Ond{\v{r}}ej and Kakudi, Habeebah Adamu},
  journal={arXiv preprint arXiv:2305.17690},
  year={2023}
}