guile-devel

Re: Evolution & optimization of the module system


From: Ludovic Courtès
Subject: Re: Evolution & optimization of the module system
Date: Thu, 22 Feb 2007 10:20:55 +0100
User-agent: Gnus/5.110006 (No Gnus v0.6) Emacs/21.4 (gnu/linux)

Hi,

Kevin Ryde <address@hidden> writes:

> address@hidden (Ludovic Courtès) writes:
>>
>> Actually, `process-duplicates' is O(N*USES) _for each module used_.  So
>> the overall duplicate processing is really O(N*USES^2).  With the
>> patched version, the whole process is O(N*USES).  That can make quite a
>> difference when USES > 1.
>
> It should be ok, it's only hash table lookups, which are fast.  And N
> is normally pretty modest too.

I don't think so.  Remember: `module-import-interface' (used by
`process-duplicates'), in the current Guile, is _not_ a hash table
lookup; it's a traversal of the module's use list.  The patched version
is always USES times as fast as the current implementation.  So even
with USES <= 5, it does make a difference.
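
To illustrate, here is a simplified sketch (not the actual boot-9 code;
`module-import-table' stands for the hypothetical per-module table that
the patched version would maintain):

  ;; Current: each lookup walks the module's use list, O(USES) per symbol.
  (define (import-interface/use-list module sym)
    (or-map (lambda (iface)
              (and (module-local-variable iface sym) iface))
            (module-uses module)))

  ;; Patched: each lookup is a single hashq operation, O(1) per symbol.
  (define (import-interface/table module sym)
    (hashq-ref (module-import-table module) sym))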

The measurements in the `module-duplicates.scm' file I posted use
USES = 10000 by default, which is arguably unrealistic.  However, the
timings become too small to be meaningful when USES < 1000.

> Copying the table of 2000 core bindings into every module doesn't
> sound good, not if it's only for once-off duplicates checking.

I agree that it sounds like overkill at first sight.  ;-)  However, it
benefits both duplicate checking and variable lookup.
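
To make the idea concrete, here is a hedged sketch of what copying the
bindings at import time could look like (`import-bindings!' is a
made-up name, and the duplicate branch just warns instead of invoking
the real duplicate handlers):

  ;; Copy IFACE's exported bindings into IMPORT-TABLE once, at import
  ;; time.  Afterwards both variable lookup and duplicate detection
  ;; amount to a single hashq lookup.
  (define (import-bindings! import-table iface)
    (module-for-each
     (lambda (sym var)
       (let ((previous (hashq-ref import-table sym)))
         (if (and previous (not (eq? previous var)))
             ;; a different binding by that name was already imported;
             ;; this is where the duplicate handlers would kick in
             (format (current-error-port) "duplicate import: ~A~%" sym)
             (hashq-set! import-table sym var))))
     iface))

For instance, `(import-bindings! (make-hash-table) (resolve-interface
'(srfi srfi-1)))' fills the table with SRFI-1's bindings in a single
pass.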

> If you
> want you can check the existing innermost loops are good.  In
> process-duplicates var1 and var2 are almost always different (one of
> them #f usually), so getting that down to C with some sort of
> "hashq-intersection" or "hashq-for-each-intersection" would help a
> lot.  I'd predict throwing a little C at bottlenecks like that will be
> enough.

Yes, that would probably help a little.  However, I was trying to take
an algorithmic approach to the issue, being convinced that the most
important gains are to be obtained that way.  So I'd like to stick to
an algorithmic evaluation for now, and only then consider
"micro-optimizations".

> Another possibility would be to defer duplicates checking until the
> end of a define-module or use-modules form (or even until the end of
> the file), if mutual cross-checks can be done faster en-block, if you
> know what I mean.

That's already what happens: When `process-define-module' finishes, it
invokes `module-use-interfaces!', passing it all the imported modules.
If a `use-modules' form appears later in the source file, a new
duplicate processing stage occurs.  Currently, it doesn't make any
difference performance-wise, though, since `process-duplicates' only
handles one imported module at a time (and I can't think of any other
way to do it).
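
In rough pseudo-Guile, the current shape is something like this
(heavily simplified; the real code in boot-9.scm does much more, and
`check-duplicates-against!' is just an illustrative stand-in):

  ;; Compare MODULE's bindings against one imported interface.
  (define (check-duplicates-against! module iface)
    (module-for-each
     (lambda (sym var)
       (let ((other (module-local-variable iface sym)))
         (if (and other (not (eq? other var)))
             (format (current-error-port) "duplicate binding: ~A~%" sym))))
     module))

  (define (use-interfaces! module interfaces)
    ;; all the imported interfaces are passed at once, yet duplicate
    ;; processing still happens one interface at a time
    (for-each (lambda (iface)
                (check-duplicates-against! module iface))
              interfaces)
    (set-module-uses! module (append (module-uses module) interfaces)))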

> It could use a temporary combined hash if that
> helped (perhaps sharing bucket cells to save gc work).

What do you mean by "combined hash"?

> The particular
> "module-define!" you struck should obviously be only about USES many
> hash lookups (ie. about a dozen typically), most of the time, if
> that's not already the case.

Theoretically, yes.  However, that can only be the case if observers are
passed precise information about what changed in the observed module,
such as a description of the operation that led to the change (e.g.,
`define') and a list of affected bindings.  Currently, observers are
just notified that "something" changed, thus they have to run
`process-duplicates' in its entirety.
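
A hypothetical sketch of a finer-grained notification (this is not
Guile's current observer API; I'm assuming `module-observers' returns
the list of observer procedures):

  ;; Tell each observer what happened and which bindings are affected,
  ;; so it can re-check just those symbols instead of re-running
  ;; `process-duplicates' from scratch.
  (define (notify-observers module operation symbols)
    (for-each (lambda (observer)
                (observer module operation symbols))
              (module-observers module)))

  ;; e.g., after a run-time `module-define!' of NAME:
  ;;   (notify-observers module 'define (list name))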

Anyway, I'm not sure we should worry too much about "`module-define!' at
run-time".

> They're special in that we know there's no clashes between them.
> process-duplicates should ignore any ice-9 vs ice-9, if that doesn't
> happen already.

I'd prefer to optimize the general case first, and only resort to such
special-casing optimizations when all other recipes have failed.
Special-casing makes the code more complex and harder to work with, IMO.

Thanks,
Ludovic.



