Over the decades, much research has been devoted to developing user-friendly music score editors. Despite these efforts, there is still no fully satisfactory solution. The emergence of tablet computers has opened new avenues for approaching this problem: with these devices, a musician can compose music on a digital score using an electronic pen and have it digitized automatically.
The task of recognizing online handwritten music notation is defined as the recognition of musical symbols at the time they are being written. The main difficulty to overcome is the great variability in how musical symbols are written.
The goal of the Handwritten Online Musical Symbols (HOMUS) dataset is to provide a reference corpus of 15200 samples for research on the recognition of online handwritten music notation. As in handwritten text, each musician has their own writing style. If reliable results are pursued, it is advisable to increase both the set of musical symbols and the number of different writing styles in the experiments.
The HOMUS dataset contains the following musical symbols:

| Category | Symbols |
|---|---|
| Note | whole, half, quarter, eighth, sixteenth, thirty-second, sixty-fourth |
| Rest | whole/half, quarter, eighth, sixteenth, thirty-second, sixty-fourth |
| Accidental | flat, sharp, natural, double sharp |
| Time signature | common time, cut time, 4-4, 2-2, 2-4, 3-4, 3-8, 6-8, 9-8, 12-8 |
| Clef | G-clef, C-clef, F-clef |
The HOMUS dataset contains 15200 text files, organized into a separate directory for each musician. The content of each file is described below.
The HOMUS dataset was built by 100 musicians from two music schools, the Escuela de Educandos Asociación Musical l'Avanç (El Campello, Spain) and the Conservatorio Superior de Música de Murcia "Manuel Massotti Littell" (Murcia, Spain), among whom were both music teachers and advanced students. To cover more possibilities, some of them were experienced in handwritten music composition, while others had little or no such experience. The musicians were explicitly asked not to draw the symbols in a perfect manner, but in their own particular style. Each musician drew the symbols listed in the table above four times, which resulted in 15200 samples spread over 38 templates (the eighth, sixteenth, thirty-second, and sixty-fourth note symbols are written twice: upright and inverted).

Each sample of the dataset stores the label and the strokes composing the symbol separately. The strokes consist of sets of points relative to a coordinate origin located at the first point drawn in the sample. Storing the data in this way covers all the possibilities considered: an image can be generated from the strokes, every single stroke can be extracted easily, and each individual symbol remains isolated. Since the pitch of a note is determined by its position on the staff, pitch does not need to be detected during classification; it can instead be assigned in a post-processing stage.
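To make the stroke-based storage concrete, the following is a minimal sketch of a parser for one sample. The on-disk layout assumed here is illustrative, not taken from the dataset specification: first line holds the label, each subsequent line holds one stroke, with points separated by `;` and coordinates by `,`.

```python
def parse_sample(text):
    """Parse one sample: return (label, strokes), where each stroke
    is a list of (x, y) integer tuples. Layout is an assumption:
    line 1 = label, each later line = one stroke, points ';'-separated,
    coordinates ','-separated."""
    lines = [ln for ln in text.strip().splitlines() if ln.strip()]
    label = lines[0]
    strokes = []
    for line in lines[1:]:
        points = []
        for pt in line.split(";"):
            pt = pt.strip()
            if pt:
                x, y = pt.split(",")
                points.append((int(x), int(y)))
        strokes.append(points)
    return label, strokes

# Illustrative two-stroke sample; the first point drawn is the origin,
# matching the relative-coordinate scheme described above.
sample = """Quarter-Note
0,0;2,5;4,11
10,3;10,20"""

label, strokes = parse_sample(sample)
print(label)          # Quarter-Note
print(len(strokes))   # 2
print(strokes[0][0])  # (0, 0) -- origin at the first drawn point
```

Because each stroke is kept as its own list, a single stroke can be extracted on its own, and the full point set can be rasterized to an image when needed.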
To create the dataset, a Samsung Galaxy Note 10.1 was used, and the symbols were written with its S-Pen stylus. This device was chosen among the available consumer tablets for how well it works with the pen. It has a resolution of 1280x800 (149 ppi) and a sampling period of 16 ms (60 Hz). An application was developed that asks the user to draw musical symbols on an empty staff. The staff consisted of five parallel lines with a line thickness of 3 and an equal staff line spacing of 14. These two values are provided as a reference for possible rescaling, since they are the features commonly used for this purpose in OMR systems.
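The staff line spacing can serve as the reference unit for such rescaling. The sketch below shows one way to do it; the `target_spacing` parameter is hypothetical and simply represents the staff spacing of whatever system the data is mapped onto.

```python
def rescale(strokes, source_spacing=14, target_spacing=28):
    """Rescale stroke coordinates from a staff with line spacing
    `source_spacing` (14 in HOMUS) to one with `target_spacing`
    (hypothetical target system). Strokes are lists of (x, y) tuples."""
    factor = target_spacing / source_spacing
    return [[(x * factor, y * factor) for (x, y) in stroke]
            for stroke in strokes]

strokes = [[(0, 0), (7, 14)]]
print(rescale(strokes))  # [[(0.0, 0.0), (14.0, 28.0)]]
```

Expressing distances in staff-space units in this way makes symbols drawn at different staff sizes directly comparable.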