
Re: [help-GIFT] A layman experiments on GIFT


From: MUELLER Henning
Subject: Re: [help-GIFT] A layman experiments on GIFT
Date: Thu, 06 Jun 2002 16:35:47 +0200

Hi,

thanks for doing this experiment with GIFT.

Is the database you ran the tests with freely available? It would
be interesting to compare GIFT with other systems and also to get
some quantitative results, not just qualitative ones. There is a
lot of literature on evaluating retrieval systems.
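
For instance, a minimal sketch of such a quantitative test, computing
precision at rank k (the file names and relevance judgements below are
made up for illustration, they are not part of GIFT):

    # Minimal sketch of a quantitative retrieval test: precision at
    # rank k, given a ranked result list and the set of relevant images.
    # File names and judgements here are hypothetical examples.

    def precision_at_k(ranked_results, relevant, k):
        """Fraction of the top-k results that are relevant."""
        top_k = ranked_results[:k]
        hits = sum(1 for image in top_k if image in relevant)
        return hits / k

    ranked = ["img_042.jpg", "img_007.jpg", "img_913.jpg", "img_100.jpg"]
    relevant = {"img_007.jpg", "img_100.jpg"}

    for k in (1, 2, 4):
        print(f"P@{k} = {precision_at_k(ranked, relevant, k):.2f}")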

As David pointed out, GIFT uses only global and local color and
texture features. There is no cheating with filenames or scanner
signatures in the queries, and no object or shape features are
used either.

Color and texture features are weighted equally, so the system does
not favor color for retrieval, as you suggested. Our eyes are simply
much better at finding similarities in color than in texture when
comparing images.
You can run queries using exclusively one of the four feature
groups, or any combination of them, to find out more about where
the similarity scores come from.
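
As an illustration only (this is a sketch of the idea, not GIFT's
actual scoring code, and the group names are just labels), equal
weighting of the four feature groups could look like this:

    # Sketch of equal weighting across feature groups; not GIFT's
    # actual scoring code. Each group contributes a similarity score
    # in [0, 1], and the active groups are averaged with equal weight,
    # so no single group (e.g. color) dominates by construction.

    FEATURE_GROUPS = ["global_color", "local_color",
                      "global_texture", "local_texture"]

    def combined_score(group_scores, active_groups=None):
        """Average the scores of the active feature groups equally."""
        groups = active_groups or FEATURE_GROUPS
        weight = 1.0 / len(groups)
        return sum(weight * group_scores[g] for g in groups)

    # Hypothetical per-group scores for one database image:
    scores = {"global_color": 0.8, "local_color": 0.6,
              "global_texture": 0.3, "local_texture": 0.5}

    print(combined_score(scores))                    # all four groups
    print(combined_score(scores, ["global_color"]))  # color only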

We would, of course, be happy to find developers for additional
features such as shape features or object features based on
segmentation. The features currently used are relatively simple,
and we know that they have shortcomings for certain query tasks.

We are really happy to get comments from people using GIFT!

Cheers, Henning

"I. Wronsky" wrote:
> 
> A layman experiments on GIFT
> 
> Abstract: The state of GIFT v0.1.8 on a specific domain of
>           human images is reported.
> 
> It is well known that humans are interested in looking
> at pictures of humans. Content-based image retrieval
> systems exist to aid the image-finding process, and should
> therefore be successful in real-life settings where
> both the content and the queries reflect true user interest.
> 
> To evaluate the current usability of GIFT in such a setting,
> we collected several thousand images of one or more human
> specimens, depicting activities that are usually perceived
> as interesting by the average human network user. We base
> this assumption on the popularity and commercial success of
> such images.
> 
> Due to their origin, many of the collected images fell into a
> natural clustering, where each group contains images from the same
> photographic source, featuring essentially a single situation
> and the same background setup. We call these clusters "photo sessions".
> Across different clusters, however, the camera equipment,
> scanner hardware, lighting conditions, resolutions, postprocessing,
> objects, and so on, are naturally very diverse. On the other hand,
> the pictures are semantically very alike: humans are the primary subject.
> 
> As it was our purpose to do the evaluation as laymen, we
> maintained our objectivity by skipping all GIFT-related technical
> papers. That is, we remain blissfully ignorant of the feature
> extraction mechanisms and the distance/matching metrics used.
> 
> It is now possible - unfortunately with very little or no
> precision, and lots of subjectivity - to relate our trivial
> findings.
> 
> The primary method offered by the Charmer GUI, "separating
> normalisation", is seemingly unable to find cross-cluster images
> with perceptual similarity. If, for example, a human in a certain
> pose is presented, the query will not return various pictures
> of other humans in similar poses, even though such images exist
> in the database. Selecting more query images and excluding
> some results does not particularly help, and the couple
> of positive hits can be attributed to chance. The matching
> is also relatively insensitive to clothing, hair style,
> or other apparel that a human user might consider a
> reasonable matching criterion.
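> 
> As a layman's mental model only (we have not read the papers, so
> this is almost certainly not what "separating normalisation"
> actually does), a Rocchio-style relevance feedback step over
> hypothetical feature vectors might be sketched as:
> 
>     # Layman's guess at relevance feedback: a Rocchio-style update
>     # over feature vectors. This is NOT GIFT's "separating
>     # normalisation"; it only illustrates the positive/negative idea.
>     import numpy as np
> 
>     def feedback_query(positives, negatives, alpha=1.0, beta=0.5):
>         """Combine positive and negative examples into one query vector."""
>         query = alpha * np.mean(positives, axis=0)
>         if len(negatives) > 0:
>             query -= beta * np.mean(negatives, axis=0)
>         return query
> 
>     # Hypothetical 4-dimensional feature vectors for marked images:
>     positives = np.array([[0.9, 0.1, 0.4, 0.2], [0.8, 0.2, 0.5, 0.1]])
>     negatives = np.array([[0.1, 0.9, 0.3, 0.7]])
>     print(feedback_query(positives, negatives))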
> 
> However, when presented with a query image belonging
> to a cluster (photo session), GIFT is able to find many
> images belonging to the same cluster, even when their
> visual similarity is limited to the general atmosphere. It
> is not known (to us) whether GIFT does this by relying on the
> color histograms, the resolution, or even some signature left
> by the scanner hardware or image postprocessing (we do not even
> discount the possibility of GIFT cheating with the filenames,
> as we have not verified the source code or bothered to
> rename the images).
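> 
> If it is the color histograms, a textbook histogram-intersection
> match (our guess at the flavour of technique, not code taken from
> GIFT's source) would look something like:
> 
>     # Textbook global color histogram intersection (Swain & Ballard
>     # style); a guess at the kind of matching that finds same-session
>     # images, not code from GIFT itself.
>     import numpy as np
> 
>     def color_histogram(pixels, bins=8):
>         """Flattened, normalised RGB histogram of an (N, 3) pixel array."""
>         hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
>                                  range=((0, 256),) * 3)
>         return hist.ravel() / hist.sum()
> 
>     def histogram_intersection(h1, h2):
>         """Similarity in [0, 1] between two normalised histograms."""
>         return np.minimum(h1, h2).sum()
> 
>     # Two hypothetical images as random pixel arrays:
>     rng = np.random.default_rng(0)
>     img_a = rng.integers(0, 256, size=(1000, 3))
>     img_b = rng.integers(0, 256, size=(1000, 3))
>     print(histogram_intersection(color_histogram(img_a),
>                                  color_histogram(img_b)))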
> 
> Success on in-cluster images can be useful if GIFT is evaluated,
> for example, on the Corel datasets, where all images belonging
> to the same class ("sunset") might share a similar color scheme
> and general atmosphere. Unfortunately, when querying for humans
> in a real-life human database, this capability is virtually useless,
> because the images are already clustered into photo sessions at
> their sources, and once one image is found, the rest are
> usually nearby. In particular, they tend to share the same
> filename prefix.
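> 
> One cheap sanity check along these lines (our own hypothetical
> script, not part of GIFT) is to count how many of the top results
> merely share the query's filename prefix:
> 
>     # Hypothetical sanity check: how many top results come from the
>     # same photo session as the query, using a shared filename prefix
>     # as a stand-in for session membership. Names below are made up.
> 
>     def session_prefix(filename, length=6):
>         """Crude session id: the first `length` characters of the name."""
>         return filename[:length]
> 
>     def in_session_fraction(query, ranked_results, k=10):
>         """Fraction of the top-k results sharing the query's prefix."""
>         prefix = session_prefix(query)
>         top_k = ranked_results[:k]
>         return sum(1 for r in top_k
>                    if session_prefix(r) == prefix) / len(top_k)
> 
>     results = ["sess01_03.jpg", "sess01_07.jpg",
>                "sess02_01.jpg", "sess01_11.jpg"]
>     print(in_session_fraction("sess01_01.jpg", results, k=4))  # 0.75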
> 
> Based on this, a layman performing an evaluation of GIFT
> will most likely conclude that the current method relies
> almost entirely on color, less on texture, and very little
> on object shapes or relations whatsoever. It is an interesting
> avenue for future work to design open-source systems that can
> adequately utilize information about shapes and object relations.
> Of additional practical interest is the applicability
> of such systems in meaningful real-life settings.
> 
> ---
> 
> PS. Any feedback is welcome. ;) ;)
> 
> _______________________________________________
> help-GIFT mailing list
> address@hidden
> http://mail.gnu.org/mailman/listinfo/help-gift

-- 
     ----------------------------------------------------------
     Henning Mueller, Computer Vision Group
     Computer Science Department, University of Geneva
     24, rue du General Dufour, CH-1211 Geneva 4, SWITZERLAND  
     Phone : +41(22)705 7633; fax: +41(22)705 7780
     address@hidden
     ----------------------------------------------------------


