A key challenge in the field of music information research is the development of efficient and reliable tools for music description, search, and retrieval. Conventional music description, search, and retrieval systems are based mainly on text metadata. However, music content and textual content are of a very different nature, which often makes purely textual information retrieval unsatisfactory. This project aims to study and develop innovative components for music description and information retrieval systems, based on semantic descriptors of musical content that are extracted automatically from music data using pattern recognition and machine learning techniques.
Nowadays, pattern recognition applications whose solutions are sequential in nature, such as machine translation and speech recognition, are usually conceived to work off-line. Nevertheless, their output is rarely error-free, so it requires expert supervision. Incorporating this supervision stage into the system's learning process has recently attracted growing interest: through an interactive interface, the system can improve its models from the user's corrections. In the present project, this approach is applicable to tasks such as music transcription and analysis.
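The interactive supervision loop described above can be sketched with a toy online classifier: the system makes a prediction, the expert corrects it when wrong, and each correction immediately updates the model. The perceptron-style update rule, the two-dimensional features, and the labels are all illustrative assumptions, not part of the project's actual design.

```python
# Minimal sketch of interactive model correction, assuming a toy
# binary linear classifier; features and labels are illustrative.

def predict(weights, features):
    """Linear score thresholded at 0: 1 = positive class, 0 = negative."""
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

def correct(weights, features, true_label, lr=0.5):
    """Perceptron-style update, applied only when the expert's label
    disagrees with the system's output (the interactive supervision step)."""
    predicted = predict(weights, features)
    if predicted != true_label:
        sign = 1 if true_label == 1 else -1
        weights = [w + lr * sign * x for w, x in zip(weights, features)]
    return weights

# Simulated session: the system labels items, the expert fixes errors,
# and every fix refines the model for subsequent predictions.
weights = [0.0, 0.0]
session = [([1.0, 0.2], 1), ([0.1, 1.0], 0), ([0.9, 0.3], 1)]
for features, expert_label in session:
    weights = correct(weights, features, expert_label)
```

After the simulated session, the model has absorbed both corrections and classifies similar items consistently with the expert's feedback.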
1. Extraction, analysis and processing of low-level descriptors from both symbolic music and digital audio.
2. High-level musical semantic attribute extraction (capable of describing similarity, music categories, expressiveness, and musical structure), based on the low-level descriptors from the previous point.
3. Design and implementation of pattern recognition and machine learning techniques for building semantic models of different aspects of music and sound. These models will incorporate techniques to deal with the multimodal nature of music content.
4. Development of prototypes for music mining, study, personalization, and information retrieval. Some of these prototypes will follow an interactive approach, where feedback from the expert user helps improve the underlying models.
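To make the first objective concrete, low-level descriptor extraction from digital audio can be sketched as frame-by-frame computation of simple features. The choice of descriptors here (RMS energy and zero-crossing rate), the frame size, and the synthetic test tone are illustrative assumptions, not the project's actual feature set.

```python
import math

# Sketch of frame-level low-level descriptor extraction from raw audio
# samples; the frame size and synthetic signal are illustrative only.

def frame_descriptors(samples, frame_size=256):
    """Return an (RMS energy, zero-crossing rate) pair per frame."""
    descriptors = []
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        # RMS energy: overall loudness of the frame.
        rms = math.sqrt(sum(x * x for x in frame) / frame_size)
        # Zero-crossing rate: fraction of sign changes, a crude
        # brightness/noisiness indicator.
        zcr = sum(
            1 for a, b in zip(frame, frame[1:]) if a * b < 0
        ) / (frame_size - 1)
        descriptors.append((rms, zcr))
    return descriptors

# A 440 Hz tone sampled at 8 kHz stands in for real audio input.
signal = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(1024)]
features = frame_descriptors(signal)
```

Each frame yields a small descriptor vector; real systems would add spectral features, but the per-frame pipeline structure is the same.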
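Objectives 2 and 3 build semantic models on top of such low-level descriptors. A minimal sketch is a nearest-centroid category model: average the descriptor vectors of each labeled category, then assign new items to the closest centroid. The category names and the two-dimensional descriptor vectors below are purely hypothetical.

```python
# Sketch of a semantic model (music-category prediction) built on
# low-level descriptor vectors; categories and data are illustrative.

def train_centroids(labeled_vectors):
    """Average the descriptor vectors of each category into a centroid."""
    sums, counts = {}, {}
    for vector, label in labeled_vectors:
        acc = sums.setdefault(label, [0.0] * len(vector))
        for i, x in enumerate(vector):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {
        label: [x / counts[label] for x in acc]
        for label, acc in sums.items()
    }

def classify(centroids, vector):
    """Assign the category whose centroid is nearest (squared Euclidean)."""
    return min(
        centroids,
        key=lambda label: sum(
            (a - b) ** 2 for a, b in zip(vector, centroids[label])
        ),
    )

# Toy training data: (descriptor vector, category) pairs.
training = [
    ([0.8, 0.1], "percussive"), ([0.9, 0.2], "percussive"),
    ([0.2, 0.7], "melodic"), ([0.1, 0.9], "melodic"),
]
centroids = train_centroids(training)
```

The same train/classify structure generalizes to richer models, and its centroids could be refined incrementally from expert corrections in the interactive prototypes of objective 4.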