The aim of this project was to work towards the automatic semantic description of digital music. The project aimed to help bridge the current semantic gap in music information and to apply the results to content-based music processing.

Main objectives:

1. The analysis and manipulation of low-level audio descriptors by spectral modeling analysis and synthesis techniques,

2. The extraction of high-level musical attributes from these low-level audio descriptors,

3. The study and development of pattern recognition and machine learning techniques for sequential data, used to build semantic models of different aspects of music, and

4. Based on these semantic models, the development of prototypes of next-generation systems for music mining, personalization, and postproduction.
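The project's own tools are not shown here, but objective 1 can be illustrated with a minimal sketch of one common low-level audio descriptor, the spectral centroid (the magnitude-weighted mean frequency of a short-time spectrum). This is a generic example using NumPy; the function name, frame size, and sample rate are illustrative, not taken from the project.

```python
import numpy as np

def spectral_centroid(frame, sample_rate):
    """Spectral centroid of one audio frame: the magnitude-weighted
    mean frequency, a common low-level timbre descriptor."""
    windowed = frame * np.hanning(len(frame))          # reduce spectral leakage
    magnitudes = np.abs(np.fft.rfft(windowed))         # magnitude spectrum
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    total = magnitudes.sum()
    if total == 0:
        return 0.0                                     # silent frame
    return float((freqs * magnitudes).sum() / total)

# A pure 440 Hz sine should yield a centroid near 440 Hz.
sr = 44100
t = np.arange(2048) / sr
tone = np.sin(2 * np.pi * 440 * t)
print(spectral_centroid(tone, sr))
```

Descriptors like this, computed frame by frame, form the low-level layer from which higher-level musical attributes (objective 2) can then be derived.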

As a tool for carrying out objectives 1 and 2, we implemented a distributed real-time concurrent programming language.

A number of lines of work are currently being developed within the project: music style recognition, melody recognition, melodic analysis and similarity, automatic transcription of digital audio, algorithmic composition, expressive music performance, style-based performer recognition, and cognitive state decoding.