Handwritten Music Recognition (HMR) is the task of automatically recognizing the symbols in music manuscripts and converting them into a machine-readable symbolic format. The task resembles handwritten text recognition but is considerably harder, mainly because music notation is not strictly linear: vertical placement carries meaning, and symbols may be stacked on top of one another. Owing to this complexity, the task remains unsolved. HMR is useful, for example, in processing archived music documents: once recognized, a document can be searched by melody, matched against other musical works, or edited in notation software.

Unlike Optical Music Recognition (OMR) of printed scores, handwritten music exhibits extremely large variability in the appearance of symbols, which rules out classical computer-vision algorithms. We are thus reliant on statistical methods and machine learning, which in turn require large amounts of training data (especially for deep neural networks), and very little such data is available for handwritten music. Manual annotation of training data at the required scale is very expensive, so some fields have recently turned to so-called synthetic training data. Such data is produced by a computer simulation of the process that generates real-world data, which gives us complete and error-free information about the music it contains. For printed music, existing engraving tools can be used for data creation, but so far no widely used tool exists for synthesizing handwritten music. The aim of this project is to gradually develop such a tool and, in the course of development, make maximum use of it for exploring modern, data-hungry methods for the recognition itself.
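The key property of synthetic data described above can be sketched in a few lines: because the generator samples the ground-truth label sequence first and only then simulates the handwriting, every training example comes with a complete, error-free annotation by construction. The following Python sketch is purely illustrative; the symbol vocabulary and the jitter model standing in for handwriting variability are assumptions, not part of any real engraving tool.

```python
import random

# Illustrative symbol vocabulary (an assumption for this sketch).
VOCAB = ["clef.G", "note.quarter.C4", "note.half.E4", "rest.quarter", "barline"]

def sample_ground_truth(length, rng):
    """Sample the machine-readable label sequence first: this IS the
    flawless annotation, known before any 'image' is produced."""
    return [rng.choice(VOCAB) for _ in range(length)]

def render_synthetic(labels, rng):
    """Simulate the writing process: each symbol receives a jittered
    horizontal advance, vertical wobble, and size, standing in for
    strokes rendered onto a staff image."""
    x = 0.0
    placed = []
    for sym in labels:
        x += rng.uniform(8.0, 14.0)       # irregular horizontal spacing
        y = rng.gauss(0.0, 1.5)           # vertical wobble off the staff line
        scale = rng.uniform(0.8, 1.2)     # size variation between writers
        placed.append({"label": sym, "x": round(x, 2),
                       "y": round(y, 2), "scale": round(scale, 2)})
    return placed

def make_example(length=6, seed=0):
    """Return one (input, target) training pair."""
    rng = random.Random(seed)
    labels = sample_ground_truth(length, rng)
    image_like = render_synthetic(labels, rng)
    return image_like, labels

if __name__ == "__main__":
    x, y = make_example()
    print(len(x), len(y))
```

A real synthesizer would replace `render_synthetic` with actual glyph rendering onto a staff image, but the design point carries over: the label sequence is sampled first, so the annotation is exact regardless of how distorted the rendered output becomes.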