The goal of the interactive
task is not to compare participants' systems in a competitive environment, but
rather for participants to explore variations of their retrieval system within
a given scenario. There are at least four aspects of a Cross-Language image
retrieval system that could be investigated, including:
||How the CLIR system supports user query formulation for
images with English captions, particularly for users whose native language is
not English. This is also an opportunity to study how the images
themselves could be used as part of query formulation.
||Whether the CLIR system supports
query re-formulation, e.g. through positive and negative feedback, to
improve the user's search experience, and how this affects retrieval.
||Browsing the image collection. This might include
support for summarising the image result set by categorising images
according to pre-defined categories (which must also be translated), or visually
based on the images themselves (e.g. by shape, colour etc.). Browsing becomes
particularly important in a CLIR system when query translation fails and
returns irrelevant or no results.
||How well the CLIR system presents the retrieval results
to the user to enable selection of relevant images. This might include how the
system presents the caption to the user (particularly if they are not familiar
with English or some of the specific and colloquial language used in the
captions), and investigating the relationship between the image and caption for
retrieval.
Participants will compare two interactive Cross-Language image retrieval systems (one
intended as a baseline) that differ in the facilities provided for interactive
query refinement (this includes points 2 and 3 from above). For example, suppose the user is searching for a picture of an
arched bridge and starts with the query "bridge". Through query modification (e.g. query expansion based on the
captions), or perhaps browsing for similar images and using feedback based on
visual features, the user refines the query
until relevant images are found.
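To make caption-based expansion concrete, here is a minimal sketch in Python of Rocchio-style positive feedback; the whitespace tokenisation and raw term counts are simplistic stand-ins for whatever indexing and weighting a participating system actually uses:

    from collections import Counter

    def expand_query(query_terms, relevant_captions, top_k=5):
        # Rocchio-style positive feedback: add the top_k most frequent
        # terms from captions the user marked as relevant.
        counts = Counter()
        for caption in relevant_captions:
            counts.update(w.lower() for w in caption.split()
                          if w.lower() not in query_terms)
        expansion = [term for term, _ in counts.most_common(top_k)]
        return list(query_terms) + expansion

    # Example: the user marks two captions of arched bridges as relevant.
    print(expand_query(["bridge"],
                       ["Arched stone bridge over the river",
                        "Old arched bridge near St Andrews"]))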
As a CL image retrieval task,
the initial query should be in a language different from that of the collection (i.e.
not English) and translated into English for retrieval. The
simplest approach is to translate the query
and display only images to the user (assuming relevance can be based on the
image only and images are language independent), maybe using relevance feedback
on visual features only, enabling browsing, or categorising the images in some
way and allowing the user to narrow their search through selecting these
categories. Any text displayed to the user
must be translated into the user's source language. This might include
captions, summaries, pre-defined image categories etc.
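As an illustration of this simplest approach, here is a minimal sketch assuming a toy translate() lexicon and an in-memory caption index in place of a real MT resource and search engine (the image IDs are invented for the example):

    def translate(query, source_lang):
        # Placeholder for a real translation resource (dictionary, MT service).
        lexicon = {("puente", "es"): "bridge"}  # toy lexicon for illustration
        return lexicon.get((query, source_lang), query)

    def search(captions, english_query):
        # Return IDs of images whose English caption contains the query term;
        # only the images themselves would be shown to the user.
        return [img_id for img_id, caption in captions.items()
                if english_query.lower() in caption.lower()]

    captions = {"sa-0001": "Arched bridge over the River Tay",
                "sa-0002": "Fishing boats in the harbour"}
    print(search(captions, translate("puente", "es")))  # -> ['sa-0001']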
A minimum of 8 users (who can search with non-English queries) and 16 example images
(topics) are required for this task (we supply the topics), as described
below. Although the goal is to experiment with users who search with
non-English queries, we are willing to relax this condition if required (but
please discuss with Paul Clough).
|Scenario and example
Given an image (not including the caption) from the St Andrews
collection, the goal for the searcher is to find the same image again using
a Cross-Language image retrieval system. This aims to allow researchers to
study how users describe images and their methods of searching the collection
for particular images, e.g. browsing or by conducting specific searches.
The scenario models the
situation in which a user searches with a specific image in mind (perhaps they
have seen it before) but without knowing key information thereby requiring them
to describe the image instead, e.g. searching for a familiar painting whose
title and painter are unknown.
This task can be used to determine whether the retrieval system is
being used in the manner intended by the system designers, and how the
interface helps users reformulate and refine their search requests.
The following images have been selected for this task:
[Thumbnails of the 16 topic images, e.g. TOPIC 5 and TOPIC 9]
(NB: the black box in the larger version of these images is used to hide text
on the images.)
The scenario should be described to users before starting the
experiments. You can use some text like this:
In this task
we will show you 16 different images, one at a time, using two different
Cross-Language image retrieval systems. The pictures cover a variety of topics
and are taken from the St Andrews historic photographic collection. When we
show you each image, we will ask you to search the collection and try and find
that same image again. We will let you keep the image to refer to during your
search. This known-item search is aimed at modelling the scenario in which you
know the image you want from the collection, but don't have it to hand; you
know it exists in the collection but can't remember the exact person, location
or name of the object in the image. You can browse and search for the image any
way you want and you have a maximum of 5 minutes to find each image. You can
stop searching when you have found it. We want to observe how our system
supports this kind of task, what words/phrases you use to describe the images
and whether you are successful in finding the required images or not.
Please note that it is a good idea to let users search the collection prior to
starting the experiments to let them get a feel for its contents. More
information about the collection which you could give to users can be found
here. It is also a good idea to reiterate to users that
they can search using any part of the image, i.e. objects in the foreground and
background.
|Experiment instructions
The interactive ImageCLEF
task is run similarly to iCLEF,
using a similar experimental procedure. However, because of the type of
evaluation (i.e. whether known items are found or not), the experimental
procedure for the iCLEF 2004
(Q&A) task is also very relevant, and we make use of both iCLEF procedures.
Given the 16 topics shown above, participants should have each of the 8 users test each system with 8
topics. Users are given a maximum of 5 minutes only to find each image.
Topics and systems will be presented to the
user in combinations following a Latin-square design to ensure user/topic and
system/topic interactions are minimised. The
procedure given in iCLEF 2004 is to be followed.
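As an illustration only (not the official iCLEF matrix), the following sketch rotates topic order and alternates system order across the 8 users, so that each user searches 8 topics on each system:

    def assignments(n_users=8, n_topics=16):
        # Rotate topic order per user and alternate which system comes first,
        # spreading user/topic and system/topic pairings across the design.
        topics = list(range(1, n_topics + 1))
        half = n_topics // 2
        plan = []
        for u in range(n_users):
            rotated = topics[u:] + topics[:u]
            first, second = ("A", "B") if u % 2 == 0 else ("B", "A")
            plan.append({"user": u + 1, first: rotated[:half],
                         second: rotated[half:]})
        return plan

    for row in assignments():
        print(row)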
The session duration is slightly different than for iCLEF and participants should use the
following as a guideline:
|Tutorials (2 systems)
||30 minutes total
|Searching (system A, 8 topics)
||40 minutes (5 minutes per topic)
|Searching (system B, 8 topics)
||40 minutes (5 minutes per topic)
User questionnaires are a recommended way of obtaining feedback from the user about
their level of satisfaction with the system. There is no fixed questionnaire,
but you can use the questionnaires from
iCLEF 2003 to give you some ideas for ImageCLEF. These correspond to the
surveys suggested in the above procedure, but may need some modification to
suit the image retrieval task.
To measure the performance of this task, the following metrics will
be used: whether the user could find the intended image or not, the time taken
to find the image, the number of steps/iterations required to reach the
solution (e.g. the number of clicks or the number of queries), and the number
of images displayed to the user. For each topic, we require that you summarise
your system's results and provide us with this information.
These factors help to measure the
efficiency with which a cross-language image retrieval search could be
performed, e.g. how quickly, or with how many clicks, the
relevant image was found. Information about how useful the interface was for the user can
be obtained from a user questionnaire after the experiments.
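One possible way to record these measures per topic is sketched below; the field names are our own, not a prescribed submission format:

    from dataclasses import dataclass

    @dataclass
    class TopicResult:
        topic_id: int
        user_id: int
        system: str          # e.g. "A" (baseline) or "B"
        found: bool          # whether the user found the intended image
        time_seconds: float  # time taken (300 seconds maximum per topic)
        iterations: int      # steps, e.g. clicks or queries issued
        images_shown: int    # number of images displayed to the user

    example = TopicResult(topic_id=5, user_id=1, system="A", found=True,
                          time_seconds=142.0, iterations=4, images_shown=60)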
|What to submit
|Please provide us with a basic description of your two systems, the
language used for searching, and the main feature of your system which supports
query refinement, e.g. combining visual and textual features for relevance feedback.
For each topic, please state information for the measures given
above: whether the user found the image or not, the time taken to find the
image, the number of steps/iterations, and the number of images displayed to
the user. We will normalise some of these scores (e.g. the time taken) across
all submissions to compare systems.
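For illustration, one simple normalisation would convert times to z-scores across all submissions; the normalisation actually applied may differ:

    from statistics import mean, stdev

    def z_scores(times):
        # Standardise times so systems can be compared across submissions.
        m, s = mean(times), stdev(times)
        return [(t - m) / s for t in times]

    print(z_scores([142.0, 95.0, 300.0, 210.0]))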
Please submit your results by June 10th.
We follow the timetable given for ImageCLEF.