
Re: [help-GIFT] patch-fu

From: David Squire
Subject: Re: [help-GIFT] patch-fu
Date: Thu, 17 Aug 2006 11:24:56 +0100
User-agent: Thunderbird (Macintosh/20060719)

Jonas Lindqvist wrote:
Is the "double conv[65536]" related stuff really needed for the performance improvements?

The feature-extractor is obviously capable (at least before these patches) of dealing with images of sizes other than 256 by 256.

That is not true of all aspects of it. See below.

If I were to alter to resize the images to something other than 256x256, how would that affect the results of a query to the giftserver?

The key thing that depends on image size is the recursive decomposition of the image into blocks (a la quad-tree). This in turn is reflected in the numbering of the features, and thus in the mapping from feature numbers to feature types (groups).

IMHO, the best reason for removing the resize to 256x256 step is that for non-square images this distorts textures. In practice, it seems not to have hurt us too much, but it is not right. It would be fairly easy to modify the code to handle non-square images, but some of the "clever" tricks might break.

Handling larger images raises several questions. First, should we ensure that we always do enough four-way decompositions that the smallest blocks are ~16x16? If so, this would imply that larger images would have features not present in smaller ones, and we would have to change the block feature indexing scheme.

I am not in favour of changes that make it harder to introduce such improvements (corrections) in future. IMHO, correctness and flexibility are more important than speed. Perhaps there could be a separate extractor specially optimized for 256x256 images for those who want it, but this should be introduced in parallel to the base code, not replacing it.

Just my thoughts. I still haven't had time to look at the recent patches.



On 17 Aug 2006, at 11:13, David Squire wrote:

Jonas Lindqvist wrote:

I feel an urge to ask some questions... (Perhaps silly, but anyway... I know I could probably find the answers by digging in the code a bit deeper, but I admit I'm lazy...)

* The function gabor_filter, in gabor.c, now uses a fixed array of 65536 doubles, instead of callocing the size indicated by the width and height that are passed as parameters to gabor_filter(...). Very well... Are the width and height always 256, or can they be 128*512 or 2*32768 or whatever?

That is not a change I would approve. The width and height are presently
always 256, but this was always intended to be a temporary measure.
Code that does not need this assumption should not make it. The code
should be kept as open for extension and generalization as possible.

* I guess that most modern CPUs have some kind of SSE2-ish features that gcc could use, but what would the effect of the patch be for an architecture that lacks it? (Something seriously old, pre MMX, or something else that perhaps one would not use for this application anyway...)

* Wouldn't memset be faster than looping and setting to zero?:
    for (i = 0; i < width*height; i++)
        conv[i]= 0;   /* needs to be zeroed */

calloc should handle this.

and isn't width*height always 65536?

See my comment above.

I've just got back from a few weeks away from the internet. Much catching up to do...



--
Dr David McG. Squire, Senior Lecturer, on sabbatical in 2006
Caulfield School of Information Technology, Monash University, Australia
CRICOS Provider No. 00008C

help-GIFT mailing list

