From: I. Wronsky
Subject: [help-GIFT] A layman experiments on GIFT
Date: Wed, 29 May 2002 21:07:53 +0300 (EEST)

A layman experiments on GIFT

Abstract: State of GIFT v0.1.8 on a specific domain of human 
          images is reported.

It is well known that humans are interested in looking
at pictures of humans. Content-based image retrieval
systems exist to aid in the image finding process, and
should therefore be successful in real-life settings where
both the content and the queries reflect true user interest.

To evaluate the current usability of GIFT in such a setting,
we collected several thousand images of one or more human
specimens, depicting the kinds of activities that are usually
perceived as interesting by the average human network user.
We base this assumption on the popularity and commercial
success of such images.

Due to their origin, many of the collected images had a natural
clustering, where each group contained images from the same
photographic source, featuring essentially just one situation
and the same background setup. We call these clusters "photo
sessions". Across different clusters, however, the photographing
equipment, scanner hardware, lighting conditions, resolutions,
postprocessing, objects, and so on, are naturally very diverse.
On the other hand, the pictures are semantically very alike:
humans are the primary object.

As it was our purpose to perform the evaluation as laymen, we
maintained our objectivity by skipping all GIFT-related technical
papers. That is, we remain blissfully ignorant of the feature
extraction mechanisms and the distance/matching metrics used.

It is now possible, unfortunately with little or no precision
and a good deal of subjectivity, to relate our trivial
findings.

The primary method offered by the Charmer GUI, "separating
normalisation", is seemingly unable to find cross-cluster images
with perceptual similarity. If, for example, a human in a certain
pose is presented, the query will not return various pictures
of other humans in similar poses, even though such pictures
exist in the database. Selecting more query images and excluding
some results does not particularly help, and the couple
of positive hits can be attributed to chance. The matching
is also relatively insensitive to clothing, hair style,
or other apparel which a human user might consider a
reasonable matching criterion.
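
For readers unfamiliar with the kind of feedback loop we were
exercising, here is a minimal Python sketch of Rocchio-style
relevance feedback over feature vectors. We stress that this is
a textbook scheme and an assumption on our part, not a claim
about what "separating normalisation" actually computes (we did
not read the papers, after all); all names in it are made up.

import numpy as np

def refine_query(query, positives, negatives,
                 alpha=1.0, beta=0.75, gamma=0.25):
    # Classic Rocchio update: move the query vector toward images
    # the user marked relevant, away from those marked irrelevant.
    # Illustrative only; not a description of GIFT's algorithm.
    q = alpha * np.asarray(query, dtype=float)
    if len(positives) > 0:
        q = q + beta * np.mean(positives, axis=0)
    if len(negatives) > 0:
        q = q - gamma * np.mean(negatives, axis=0)
    return q

def rank(database, query):
    # Rank named feature vectors by Euclidean distance to the query.
    return sorted((np.linalg.norm(np.asarray(v, dtype=float) - query), name)
                  for name, v in database.items())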

However, when presented with a query image belonging
to a cluster (photo session), GIFT is able to find many
images belonging to the same cluster, even if their
visual similarity is limited to the general atmosphere. It is
not known (to us) whether GIFT does this by relying on color
histograms, resolution, or even some signature left by the
scanner hardware or image postprocessing (we do not even
discount the possibility of GIFT cheating via the filenames,
as we have not verified the source code or bothered to
rename the images).
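
A crude way to probe our "mostly color" suspicion, which we did
not get around to, would be to compare images by coarse global
color histograms. The sketch below is our guess at the dominant
signal, not GIFT's verified feature set; the bin count and image
size are arbitrary choices.

from PIL import Image

def rgb_histogram(path, bins=8):
    # Coarse, normalised global RGB histogram of an image.
    img = Image.open(path).convert("RGB").resize((128, 128))
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in img.getdata():
        hist[(r // step) * bins * bins
             + (g // step) * bins
             + (b // step)] += 1
    total = float(sum(hist))
    return [h / total for h in hist]

def intersection(h1, h2):
    # Histogram intersection: 1.0 means identical color
    # distributions, 0.0 means disjoint ones.
    return sum(min(a, b) for a, b in zip(h1, h2))

If in-cluster neighbours score high under intersection() while
perceptually similar cross-cluster images score low, that would
support the color hypothesis.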

Success on in-cluster images can be useful if GIFT is evaluated,
for example, on the Corel datasets, where all images belonging
to the same class ("sunset") might share a similar color scheme
and general atmosphere. Unfortunately, when querying about humans
in a real-life human database, this capability is virtually
useless, because the images are already clustered into photo
sessions at their sources, and once one image is found, the rest
are usually nearby. In particular, they tend to share the same
filename prefix.
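
Indeed, the trivial baseline that makes in-cluster retrieval
rather uninteresting on this database can be written down
directly. The sketch below groups files by filename prefix;
the naming pattern is a guess at a typical scheme, not
something GIFT is known to use.

import os
import re
from collections import defaultdict

def cluster_by_prefix(directory):
    # Group files whose names share a common prefix before a
    # trailing number, e.g. session07_001.jpg -> "session07".
    # The regex encodes an assumed naming convention.
    clusters = defaultdict(list)
    for name in os.listdir(directory):
        match = re.match(r"(.+?)[-_]?\d+\.\w+$", name)
        prefix = match.group(1) if match else name
        clusters[prefix].append(name)
    return clusters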

Based on this, the layman performing an evaluation of GIFT
will most likely conclude that the current method relies
almost entirely on color, less on texture, and very little
on object shapes or relations whatsoever. It is an interesting
avenue for future work to design open-source systems that can
adequately utilize information about shapes and object relations.
Of additional practical interest is the applicability
of such systems in meaningful real-life settings.

---


PS: any feedback is welcome. ;) ;)




