This page gives information about the ImageCLEF 2004 cross-language
medical image retrieval task. The data for this image retrieval task has been
kindly donated by the University Hospitals of Geneva and can only be used under
the conditions specified in the CLEF copyright agreement.
The use of content-based image retrieval (CBIR) systems is becoming
an important factor in medical imaging research. The main goal of this campaign
is to compare CBIR systems and, in particular, to determine how associated
cross-language text can be used in combination with CBIR to improve retrieval
and ranking in this domain. We do not expect participants to need deep
clinical knowledge to perform well in this task, although understanding the
domain will help in self-evaluation prior to submission to ImageCLEF.
The main objectives of this task are exploratory; we aim to answer the
following questions:
(1) How can we estimate the confidence that the first visually retrieved images are relevant?
(2) How can these images be used for automatic query expansion?
(3) What strategies can be used for visual expansion?
(4) How can we compare and evaluate visual features and distance metrics?
(5) What success can be obtained with mediocre input?
(6) What benefits are there in using text and visual features combined?
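Objective (4), comparing visual features and distance metrics, can be illustrated with a minimal sketch. The feature vectors and the two metrics below (Euclidean distance and histogram intersection) are illustrative assumptions on our part, not part of the task definition:

```python
# Minimal sketch: two common distance/similarity measures on normalised
# grey-level histogram features. Illustrative only; the task does not
# prescribe any particular features or metrics.
import math

def euclidean(a, b):
    # L2 distance between two feature vectors; lower means more similar
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def histogram_intersection(a, b):
    # Similarity in [0, 1] for normalised histograms; higher means more similar
    return sum(min(x, y) for x, y in zip(a, b))

# Hypothetical normalised histograms of a query and a candidate image
query = [0.1, 0.4, 0.3, 0.2]
candidate = [0.2, 0.3, 0.3, 0.2]

print(euclidean(query, candidate))
print(histogram_intersection(query, candidate))
```

Note that the two measures need not agree on a ranking, which is exactly why such comparisons are worth evaluating.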
The goal is to find images that are similar with respect to
modality (CT, radiograph, MRI, ...), to the anatomic region shown
(lung, liver, head, ...) and, where applicable, to the radiologic protocol
(such as the use of a contrast agent). The first query step has to be
visual. Given the query image, the simplest submission is to find
visually similar images (e.g. by texture and colour). More advanced retrieval
methods may be tuned to features such as contrast and modality. The case notes
may also be used to refine the set of visually similar images to ensure they
match modality and anatomic region, e.g. through automatic query expansion.
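One possible combination of visual similarity and case-note text can be sketched as follows. The data structures, term-overlap scoring and weight are our own assumptions; the task does not prescribe any particular combination method:

```python
# Sketch: rank images by visual distance, then re-rank so that images whose
# case notes share terms (e.g. modality, anatomy) with the top visual hits
# move up. All names, structures and weights here are illustrative.

def combined_rank(query_dist, case_notes, seed_k=5, text_weight=0.5):
    """query_dist: {image_id: visual distance to the query};
    case_notes: {image_id: case-note text}."""
    ranked = sorted(query_dist, key=query_dist.get)
    # Collect terms from the case notes of the top visual hits
    seed_terms = set()
    for img in ranked[:seed_k]:
        seed_terms.update(case_notes.get(img, "").lower().split())
    # Final score = visual distance minus a bonus for shared case-note terms
    def score(img):
        shared = len(seed_terms & set(case_notes.get(img, "").lower().split()))
        return query_dist[img] - text_weight * shared
    return sorted(query_dist, key=score)

dists = {"a.jpg": 0.1, "b.jpg": 0.5, "c.jpg": 0.4}
notes = {"a.jpg": "CT lung", "b.jpg": "CT lung contrast", "c.jpg": "MRI head"}
print(combined_rank(dists, notes, seed_k=1))  # b.jpg overtakes c.jpg
```

Here `b.jpg` is visually the furthest from the query but shares modality and anatomy terms with the top visual hit, so the text bonus moves it ahead of `c.jpg`.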
Results submitted can be:
(1) visual retrieval only;
(2) query expansion;
(3) manual feedback from the first 20 result images, visual only;
(4) manual feedback from the first 20 results.
We plan to have query attributes on at least three dimensions:
1. Visual vs. textual
2. Automatic vs. manual (batch vs. user-generated)
3. Initial vs. expansion/feedback
Query images: the set of 26 images which will be used to evaluate
participating systems can be downloaded here [Zip], and an overview
of all image thumbnails on one sheet is also available.
To enable participation in the medical task by those without access
to their own CBIR system, we provide access to the
GIFT/Viper image retrieval system
via an HTTP link. The medical collection has been indexed and a test interface
is provided here. In
addition, for those interested in using CBIR techniques but who do not want to use
GIFT/Viper, a list of the top N images returned by GIFT/Viper for each test image can be
downloaded here. This can be used to
retrieve an initial set of images based on visual similarity; the case notes
can then be used to retrieve further images. For more information about using the
GIFT/Viper system in ImageCLEF, please contact Henning Mueller.
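The two-step workflow described above (take a pre-computed visual top-N list, then use case notes to pull in further images) can be sketched as below. The data structures and the simple term-overlap scoring are assumptions for illustration; the actual GIFT/Viper result format is not specified here:

```python
# Sketch of the two-step workflow: start from a pre-computed top-N visual
# result list, then use the case notes of those hits to retrieve further
# images by term overlap. Structures and scoring are illustrative only.

def expand_with_case_notes(top_n_ids, case_notes, all_ids, extra=10):
    # Gather terms from the case notes of the visually retrieved images
    terms = set()
    for img in top_n_ids:
        terms.update(case_notes.get(img, "").lower().split())
    # Score images outside the visual result set by shared case-note terms
    candidates = [i for i in all_ids if i not in top_n_ids]
    scored = sorted(
        candidates,
        key=lambda i: -len(terms & set(case_notes.get(i, "").lower().split())),
    )
    return list(top_n_ids) + scored[:extra]

notes = {
    "img1": "chest radiograph",
    "img2": "chest radiograph pa",
    "img3": "mri head t2",
}
print(expand_with_case_notes(["img1"], notes, ["img1", "img2", "img3"], extra=1))
```

In this toy example, `img2` is added to the result list because its case note shares the modality and anatomy terms of the visual hit, while `img3` shares none.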
For the selection of the query tasks, a radiologist familiar with the
database was asked to choose a number of topics (images only) that represent the
database well. He chose 30-35 images. Henning Mueller then ran queries with
these images to find further images in the database resembling the query, using
relevance feedback and also the textual data. When there were at least a few similar
images, the image was kept as a topic. The topics correspond to different
modalities, different anatomic regions and several radiologic protocols, such as
contrast agents or weightings for the MRI.