The main research lines of our group are:
- Fast nearest neighbor search algorithms for metric spaces
Given a training set of objects labelled with classes, the
nearest neighbor (NN) classifier assigns an unknown sample to the class of
its nearest neighbor in the set. The nearest neighbor may be found by
exhaustive search (the brute-force approach) or with one of the many
fast NN search algorithms. Our group has developed several
algorithms for finding the nearest neighbor that do not require a vector
representation of the objects. These algorithms are especially suitable for
tasks where the distance between two objects is computationally expensive,
e.g. the Levenshtein (edit) distance between strings.
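
As an illustration, the following is a minimal Python sketch of the
brute-force approach: a 1-NN classifier over strings that uses the
Levenshtein distance as its metric. All names are illustrative
assumptions, not the group's actual implementation.

    def levenshtein(a: str, b: str) -> int:
        """Edit distance between strings a and b (dynamic programming)."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    def nn_classify(sample, training_set):
        """Brute-force 1-NN: scan every (object, label) pair in the set."""
        _, label = min(training_set,
                       key=lambda pair: levenshtein(sample, pair[0]))
        return label

    # Example usage
    train = [("house", "noun"), ("run", "verb"), ("walked", "verb")]
    print(nn_classify("housse", train))  # -> "noun"
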
- Classification rules based on neighborhood
The nearest neighbor (NN) rule and the k-NN rule define a neighborhood
around the sample. Although these rules are simple and have good
theoretical behaviour, in practice their results may be improved by using
other neighborhood definitions. Our group has developed one such
definition, the k-NSN rule, which uses the NN candidates visited by a
fast NN search algorithm to approximate the error rates of the k-NN rule
at the cost of a single 1-NN search. We are also currently working on
reducing the k-NN error rates with further alternative neighborhood
definitions.
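
For reference, here is a minimal Python sketch of the standard k-NN
voting rule that these alternative neighborhood definitions aim to
improve on (the k-NSN rule itself is not reproduced here); the example
metric and all names are illustrative assumptions.

    from collections import Counter

    def knn_classify(sample, training_set, k, dist):
        """k-NN rule: majority vote among the k nearest training objects.
        training_set holds (object, label) pairs; dist is a metric."""
        neighbours = sorted(training_set,
                            key=lambda pair: dist(sample, pair[0]))[:k]
        return Counter(label for _, label in neighbours).most_common(1)[0][0]

    # Example usage with a trivial one-dimensional metric
    train = [(1.0, "a"), (1.2, "a"), (4.9, "b"), (5.0, "b"), (5.1, "b")]
    print(knn_classify(4.8, train, k=3, dist=lambda x, y: abs(x - y)))  # -> "b"
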
- Combination of classifiers
Combining a group of classifiers often yields better results than any
single classifier. There are many ways of combining classifiers, such as
fusing their outputs, or using different training sets or different
feature sets. Recently, our group has developed several ways of training
weights for combining classifier outputs.
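
As an illustration, here is a minimal Python sketch of one such scheme,
weighted fusion of classifier outputs: each classifier produces per-class
scores, which are combined using one weight per classifier. The weights
are assumed to be already trained (e.g. on a validation set), and all
names are illustrative assumptions.

    import numpy as np

    def combine(outputs, weights):
        """Weighted fusion: outputs is a list of per-class score vectors,
        one per classifier; weights holds one non-negative weight each."""
        scores = np.average(np.asarray(outputs), axis=0, weights=weights)
        return int(np.argmax(scores))  # index of the winning class

    # Example usage: three classifiers voting over two classes
    outputs = [[0.7, 0.3], [0.4, 0.6], [0.2, 0.8]]
    print(combine(outputs, weights=[0.5, 0.3, 0.2]))  # -> 0
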
- Application of pattern recognition techniques to:
- handwritten character recognition
- classification of marble textures
- music genre recognition
- classification of music melodies
- robot vision and grasping