Third evaluation campaign

The third evaluation campaign, organized by the MIPRCV consortium, comprises a set of benchmarks whose tasks have been defined to evaluate and compare new techniques closely related to the main interests of the project. We invite the members of the consortium to participate in this third (and last) campaign. There will be six main tasks, some continuing from the first and second campaigns and some newly introduced.


Robot Vision - ImageCLEF: multimodal place classification (NEW)

Fourth edition of this benchmark within the ImageCLEF challenge. Participants will be asked to classify functional areas on the basis of image sequences captured by a perspective camera and a Kinect mounted on a mobile robot in an office environment. Participants will therefore have available both visual (RGB) images and depth images generated from 3D point clouds (an illustrative fusion sketch is given below).

Organizers: Ismael Garcia-Varea (PRHLT-ITI/UCLM group and IDIAP)
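
For illustration only, here is a minimal sketch of late fusion over the two modalities: one classifier is trained per modality and their class posteriors are averaged. Feature dimensions and data are synthetic placeholders, not the benchmark's.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_train, n_test, n_classes = 200, 50, 4
rgb_train = rng.normal(size=(n_train, 64))    # stand-in RGB descriptors
depth_train = rng.normal(size=(n_train, 32))  # stand-in depth descriptors
y_train = rng.integers(0, n_classes, n_train) # functional-area labels
rgb_test = rng.normal(size=(n_test, 64))
depth_test = rng.normal(size=(n_test, 32))

# One classifier per modality; fuse by averaging posterior probabilities.
clf_rgb = LogisticRegression(max_iter=1000).fit(rgb_train, y_train)
clf_depth = LogisticRegression(max_iter=1000).fit(depth_train, y_train)
posterior = (clf_rgb.predict_proba(rgb_test)
             + clf_depth.predict_proba(depth_test)) / 2.0
predicted_area = posterior.argmax(axis=1)     # one label per test frame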

MNIST benchmark: active/online learning

In this third edition of the campaign, the goal is to apply active and online learning methods. For the evaluation of the online task, results will be obtained by running common classifiers with a protocol inspired by the one used in the PASCAL challenge on active learning (a minimal predict-then-update sketch is given below).

Organizers: Ernest Valveny (PR-CVC group)
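
As an illustration of this kind of online protocol (a sketch under assumptions, not the official evaluation code): the classifier predicts each incoming digit before seeing its label, then updates itself with the true label. The MNIST data is stubbed with random arrays.

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 784))   # stand-in for 28x28 MNIST images
y = rng.integers(0, 10, 1000)      # stand-in digit labels

clf = SGDClassifier()
clf.partial_fit(X[:1], y[:1], classes=np.arange(10))  # seed with one sample

errors = 0
for x_i, y_i in zip(X[1:], y[1:]):
    x_i = x_i.reshape(1, -1)
    if clf.predict(x_i)[0] != y_i:  # predict first...
        errors += 1
    clf.partial_fit(x_i, [y_i])     # ...then learn from the revealed label
print(f"online error rate: {errors / (len(X) - 1):.3f}")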

Interactive video retrieval benchmark

Second edition of this benchmark, built from a set of videos of the TRECVID 2007 video retrieval collection. The videos were divided into shots, and a keyframe was selected for each shot; these keyframes were annotated manually. In this edition, two types of annotation files are provided, with one or two categories per keyframe.

Organizers: Ángeles López (CVDSP-UJI group)

Interactive image annotation benchmark

First edition of this benchmark, whose objective is to compare the performance of different strategies for image annotation. The goal is to assign words/tags to a given new image that describe, or are related to, the content of that image (a minimal baseline is sketched below). To this end, 10,000 images downloaded from the Internet are provided.

Organizers: Mauricio Villegas and Roberto Paredes (PRHLT-ITI group)
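
One common baseline for this kind of task, sketched here with synthetic features and made-up tags, is nearest-neighbour tag propagation: a new image inherits the most frequent tags of its visually closest training images.

import numpy as np
from collections import Counter
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(99, 128))  # stand-in visual descriptors
train_tags = [["beach", "sea"], ["city"], ["sea", "boat"]] * 33  # made-up tags

knn = NearestNeighbors(n_neighbors=5).fit(train_feats)

def annotate(query_feat, n_tags=3):
    # Return the n_tags most frequent tags among the 5 nearest images.
    _, idx = knn.kneighbors(query_feat.reshape(1, -1))
    votes = Counter(tag for i in idx[0] for tag in train_tags[i])
    return [tag for tag, _ in votes.most_common(n_tags)]

print(annotate(rng.normal(size=128)))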

Interactive sequence labeling benchmark

The aim of this benchmark is to find new search strategies for passive and active interactive sequence labeling. The corpus is a compilation of handwritten Spanish national identification numbers (DNI) taken from real forms (one interactive-correction step is sketched below).

Organizers: Vicente Alabau (PRHLT-ITI/UPV group)
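
As a rough sketch of one interactive step, with made-up posteriors (the actual protocol is defined by the benchmark): given per-position posteriors over digits for a DNI number, the system can ask the user to verify the least confident position first.

import numpy as np

rng = np.random.default_rng(0)
seq_len, n_digits = 8, 10
# Made-up per-position posterior probabilities over the ten digits.
posteriors = rng.dirichlet(np.ones(n_digits), size=seq_len)

hypothesis = posteriors.argmax(axis=1)  # current best digit per position
confidence = posteriors.max(axis=1)
ask_first = int(confidence.argmin())    # position to show the user first

print("hypothesis:", "".join(map(str, hypothesis)))
print(f"ask the user about position {ask_first} "
      f"(confidence {confidence[ask_first]:.2f})")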

BiModal handwritten benchmark

Third edition of this benchmark on on-line and off-line handwritten text. Several word instances are available in both the on-line and off-line modalities, and the goal is to classify each pair of on-line/off-line test samples into one of the classes in the vocabulary (a decision-fusion sketch is given below).

Organizers: Moisés Pastor (PRHLT-ITI/UPV group)
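
A minimal decision-fusion sketch, assuming each recognizer produces a score per vocabulary word for the same test pair (the scores and vocabulary below are synthetic): the pair is assigned the word maximizing a weighted combination of the two log-scores.

import numpy as np

rng = np.random.default_rng(0)
vocab = ["casa", "perro", "mesa", "libro"]  # made-up vocabulary
log_p_online = np.log(rng.dirichlet(np.ones(len(vocab))))   # on-line scores
log_p_offline = np.log(rng.dirichlet(np.ones(len(vocab))))  # off-line scores

alpha = 0.6  # relative weight of the on-line modality (tuned on held-out data)
fused = alpha * log_p_online + (1 - alpha) * log_p_offline
print("recognized word:", vocab[int(fused.argmax())])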


Click here to access the evaluation campaign

