

From: David Squire
Subject: Re: [help-GIFT] gift algorithms - separate normalization and CIDF
Date: Sat, 07 Jun 2003 09:00:50 +1000
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.3) Gecko/20030312

Mika Rummukainen wrote:

Hi there,

I've been wondering about a couple of questions for a while now.

Question 1:
How does GIFT perform the weighting of features with different algorithms?

From the articles "Content-based query of image databases, inspirations from
text retrieval: inverted files, frequency-based weights and relevance feedback"
and "Content-based query of image databases: inspirations from text retrieval"
(Pattern Recognition Letters 21) I managed to find some information, but I
assume the equations describing how the score of every image is calculated
(after relevance feedback) apply to CIDF only.

Now I'd like to know how separate normalization performs its weighting. In
"Strategies for positive and negative relevance feedback in image retrieval" I
found a "separately weighted feedback" - is this how separate normalization
works?
Question 2:
This makes me think that when GIFT uses separate normalization, it first uses
the CIDF algorithm to weight the features and then uses separate normalization
to further weight the features already weighted by CIDF. Am I completely on the
wrong path here?
All that "separate normalization" means is that the similarity score is first calculated for each feature group separately, each feature group score is normalized, and then they are added to get the final score, e.g.

FinalSimilarity = 0.25*ColourHistogramSimilarity + 0.25*ColourBlockSimilarity + 0.25*TextureHistogramSimilarity + 0.25*TextureBlockSimilarity

where all of ColourHistogramSimilarity, ColourBlockSimilarity, TextureHistogramSimilarity and TextureBlockSimilarity have been normalized to the range [0,1]. This prevents feature groups that have more features from completely dominating the total score.
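
As a concrete illustration of the weighted sum above, here is a minimal Python sketch (not GIFT's actual C++ code; the group names, the raw scores and the idea of normalizing by a maximum possible score are illustrative assumptions only):

# Minimal sketch of separate normalization (illustrative, not GIFT's code).
# group_scores maps each feature group to a raw similarity and the maximum
# score that group could produce, so each group can be normalized to [0,1].
def separate_normalization(group_scores, group_weights):
    final = 0.0
    for group, (raw, max_possible) in group_scores.items():
        normalized = raw / max_possible if max_possible > 0 else 0.0
        final += group_weights[group] * normalized
    return final

# Four equally weighted feature groups, as in the example above.
scores = {
    "colour_histogram":  (12.0, 40.0),
    "colour_blocks":     (300.0, 1000.0),  # many more features, larger raw sums
    "texture_histogram": (5.0, 20.0),
    "texture_blocks":    (150.0, 800.0),
}
weights = {group: 0.25 for group in scores}
print(separate_normalization(scores, weights))

Without the per-group normalization step, the block-based groups, which contain far more features and therefore produce much larger raw sums, would swamp the histogram groups in the final score.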

Question 3:
How is the similarity of images calculated with both algorithms if there is no relevance feedback?
All query similarities are calculated the same way, whether there is relevance feedback or not. A single image query simply has one relevant image.
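
To illustrate that last point, here is a toy Python sketch (the cosine helper and the feature vectors are assumptions for illustration, not GIFT's internals, and this is not necessarily how GIFT combines feedback images) in which a single-image query is just the one-element case of a relevance-feedback query:

# Toy sketch: one scoring path for queries with and without feedback.
def cosine(a, b):
    # Similarity between two sparse feature vectors (dicts of feature -> weight).
    shared = set(a) & set(b)
    num = sum(a[f] * b[f] for f in shared)
    den = (sum(v * v for v in a.values()) ** 0.5) * \
          (sum(v * v for v in b.values()) ** 0.5)
    return num / den if den else 0.0

def score_collection(query, features):
    # query: list of (image_id, relevance) pairs, relevance +1 or -1.
    # features: dict mapping image_id to its feature vector.
    return {
        candidate: sum(rel * cosine(features[img], features[candidate])
                       for img, rel in query)
        for candidate in features
    }

features = {
    "img_a": {"f1": 1.0, "f2": 0.5},
    "img_b": {"f2": 1.0, "f3": 0.7},
    "img_c": {"f1": 0.9, "f3": 0.2},
}

# No relevance feedback: a single-image query has exactly one relevant image.
print(score_collection([("img_a", +1)], features))

# With relevance feedback: the same code, just with more (image, relevance) pairs.
print(score_collection([("img_a", +1), ("img_b", -1)], features))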

Cheers,

David


--
Dr. David McG. Squire, Postgraduate Research Coordinator (Caulfield),
Computer Science and Software Engineering, Monash University, Australia
http://www.csse.monash.edu.au/~davids/





