Optical Music Recognition (OMR) is a field that attempts to automatically understand written music. Its applications fall into two broad categories: those based on understanding the written music document itself, such as automatic re-typesetting of manuscripts in a notation editor, and those based on interpreting the document to recover the encoded musical content, such as MIDI output for indexing large music score archives. There is demand for working systems from composers and musicians, but more importantly also from digital musicology researchers and libraries. The comparison to OCR may seem obvious; however, OMR is significantly more difficult, mostly because music notation itself is a very complex visual language. Despite more than fifty years of the field's history, there is currently no satisfactory solution for anything other than high-quality scans of printed music, much less for manuscripts. There are now hopes of tackling OMR with deep learning, which has been shown to work well on the lower-level sub-problems, and certain applications of OMR also admit a promising end-to-end formulation; but as far as understanding the syntactic structure of music notation is concerned, there are still modeling problems for which deep learning has no answer. In this talk, we will introduce OMR, describe our ongoing work on evaluation, the MUSCIMA++ dataset, and musical symbol detection experiments, and discuss the outstanding challenges of OMR.