guile-devel

Re: fmatch


From: stefan
Subject: Re: fmatch
Date: Fri, 7 May 2010 22:53:05 +0200
User-agent: KMail/1.12.4 (Linux/2.6.31.12-0.2-desktop; KDE/4.3.5; x86_64; ; )

> > The core of the matcher is a loop with a set of ifs arranged so that on
> > average it will do 2-3 checks and then find the instruction, so no named
> > labels.
> 
> You mean no “labels as values” (info "(gcc) Labels as Values"), right?

Yep!
 
> Hmmmm.  My first reaction is that I’d rather avoid complex VM
> instructions like this and instead focus on native compilation (AOT or
> JIT) when we feel like improving performance.
> 
> What do you think?

Well, I think that for plain pattern matching, sane compilation is the way
to go in the long run. For unification I'm not sure: code generated by
unwinding unification logic typically grows geometrically with the number
of words, and I'm not sure that is necessary. I think it is hard to beat a
tiny VM that does this well in C, and in a way that is a clean solution. Think
of debugging the generated code for errors: you would need special hacks on
the Scheme side to make that workable, and there is a lot of similar
overhead associated with bloated code. So my thinking is that for some time
we will need this tiny VM as one extra instruction, albeit a complex one.
So why not use it for plain matching as well? At some point we could write
this tiny VM in Scheme :-). Implementing it all in Scheme today would mean
I must work with a slow matcher, which is not ideal.
But to judge what to do: how far are we from a native Scheme compiler?

> [...]
> 
> > I also have similar code for use in Prolog-like matching, e.g.
> > unification + backtracking; I would expect at least the same speedup here.
> > Note that for this application a pure unwinding of the logic, similar to
> > the match macro, will yield very bloated code, while going for a targeted
> > virtual machine will make for lean and efficient code.
> >
> > I hope that in the end a Prolog implementation using this approach would
> > be 3-4 times slower than what you have with a more targeted platform like
> > GNU Prolog.
> 
> This all sounds like exciting work!

I have a matcher right now for the Einstein puzzle and it takes 90 ms, compared
to 16-20 ms for GNU Prolog on my machine (the same method is used). I think
that if I can dispatch more cleanly I could reduce it even further.

The code for the member function currently looks like this:

(def gp-member
     ( X (+ (X . L)) F     (if (F)
                               #t
                               (begin (gp-unwind *gp-fi*) (gp-next)) ))
     ( X (+ (Y . L)) F     (gp-member X L F)                )
     ( X (+ ())      F     #f                               ))

Here the + directive extension is used to indicate a unifying match.
*gp-fi* is implicitly bound to a frame so that we can undo any variable
bindings. I find it very nice to work with pattern matchers that can
handle both ordinary pattern matching and unifying pattern matching, so it is
quite useful to bundle these two things together.
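As a hedged sketch of the frame/unwind machinery (the names var, walk,
unify! and unwind! are hypothetical, not the actual gp-* implementation):
a trail records every variable that gets bound, and unwinding pops the trail
back to a saved frame pointer, resetting each variable on the way.

```scheme
;; Illustrative unification with a trail; not the actual gp-* code.
(define (var) (vector 'var #f))         ; unbound logic variable
(define (var? x) (and (vector? x) (eq? (vector-ref x 0) 'var)))

(define trail '())                      ; records bound variables

(define (walk x)                        ; follow bindings to a value
  (if (and (var? x) (vector-ref x 1))   ; (sketch: #f means unbound)
      (walk (vector-ref x 1))
      x))

(define (bind! v x)
  (vector-set! v 1 x)
  (set! trail (cons v trail)))

(define (unwind! frame)                 ; undo bindings back to frame
  (let loop ()
    (if (not (eq? trail frame))
        (begin
          (vector-set! (car trail) 1 #f)
          (set! trail (cdr trail))
          (loop)))))

(define (unify! a b)
  (let ((a (walk a)) (b (walk b)))
    (cond ((eq? a b) #t)
          ((var? a) (bind! a b) #t)
          ((var? b) (bind! b a) #t)
          ((and (pair? a) (pair? b))
           (and (unify! (car a) (car b))
                (unify! (cdr a) (cdr b))))
          (else (equal? a b)))))
```

Saving the trail before a clause and calling unwind! on failure is what
(gp-unwind *gp-fi*) does in the clause above, so backtracking costs only a
walk down the trail rather than copying any bindings.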


 
> > So in a sense this is a test of what you want: compare the speed
> > difference of a C-based versus a Scheme-based solution. Still, I expect
> > the match construct to be easier to hack and improve upon, and at some
> > point a direct match version will win because you can compile it natively.
> 
> Well, until 1.8, the focus in Guile had been to write the “hot spots” in
> C, for performance.  However, that doesn’t scale well and leads to a
> hard to maintain code base (we want to write Scheme in the first place,
> not C).

I agree with this experience.

> For 2.2 and beyond, I really think the focus should be on allowing hot
> spots to be written in Scheme, which means compiling Scheme code
> natively.  This would be beneficial to all Scheme code, not just this
> specific pattern matching construct.

This is clearly a good move. Hmm, OK, I see your point here. I could write
the whole thing out in Scheme directly. It would still be nice to have
an implementation in C and compare it with what you get when introducing this
code. Also, one should tackle things in the right order. So if I spend the
next two weeks writing a small Prolog implementation, should we wait until
after 2.2 to get the suggested speed, and live with a 15x performance hit in
the meantime? It is tempting to deliver that system and then spend the next
few years shooting it down into pure Scheme.

Also, I use this style of programming a lot. It would be cool to have a fast
implementation at the desk within a short timeframe.

At least it is a fun hack!

/Stefan