
Re: a plan for native compilation


From: Andy Wingo
Subject: Re: a plan for native compilation
Date: Thu, 22 Apr 2010 13:28:55 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/23.0.92 (gnu/linux)

Hi Ken,

On Wed 21 Apr 2010 19:02, Ken Raeburn <address@hidden> writes:

> On Apr 18, 2010, at 07:41, Andy Wingo wrote:
>> Specifically, we should make it so that there is nothing you would want
>> to go to a core file for. Compiling Scheme code to native code should
>> never produce code that segfaults at runtime. All errors would still be
>> handled by the catch/throw mechanism.
>
> Including a segfault in compiled Scheme code, caused by an
> application-supplied C procedure returning something that looks like one
> of the pointer-using SCM objects but is in reality just garbage? There
> *will* be core files.

Good point.

>>> * Debug info in native representations, handled by GDB and other
>>> debuggers. Okay, this is hard if we don't go via C code as an
>>> intermediate language, and probably even if we do. But we can probably
>>> at least map PC address ranges to function names and line numbers,
>>> stuff like that. Maybe we could do the more advanced stuff one format
>>> at a time, starting with DWARF.
>> 
>> We should be able to do this already; given that we map bytecode address
>> ranges to line numbers, and while the function is still on the stack you
>> can query it for whatever you like. Adding a map when generating native
>> code should be easy.
>
> I think for best results with GDB and other debuggers, it should be
> converted into whatever the native format is, DWARF or otherwise.

I agree that this would be nice, eventually. However, Guile's debugging
information is currently for Guile, not for GDB: it needs to be readable
by Guile itself. Using a native format would imply DWARF readers for
Guile, which would be nice, but also a pain.
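
To sketch what such a map could look like (the names and representation
here are made up for illustration, not Guile's actual debug-info
structures): just sorted address ranges paired with source locations,
and a lookup by program counter.

  ;; Illustrative only: a sorted list of (start end file line) entries
  ;; that the compiler could emit alongside the native code.
  (define source-map
    '((#x0000 #x0024 "foo.scm" 10)
      (#x0024 #x0058 "foo.scm" 12)
      (#x0058 #x0090 "foo.scm" 15)))

  ;; Map a program counter (offset into the code object) to (file line),
  ;; or #f if the address falls outside every recorded range.
  (define (addr->source addr)
    (let lp ((entries source-map))
      (cond ((null? entries) #f)
            ((and (>= addr (car (car entries)))
                  (< addr (cadr (car entries))))
             (cddr (car entries)))        ; => (file line)
            (else (lp (cdr entries))))))

  ;; (addr->source #x0030) => ("foo.scm" 12)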

>>> * Even for JIT compilation, but especially for AOT compilation,
>>> optimizations should only be enabled with careful consideration of
>>> concurrent execution. E.g., if "(while (not done) ....)" is supposed
>>> to work with a second thread altering "done", you may not be able to
>>> combine multiple cases of reading the value of any variable even when
>>> you can prove that the current thread doesn't alter the value in
>>> between.
>> 
>> Fortunately, Scheme programming style discourages global variables ;)
>> Reminds me of "spooky action at a distance". And when they are read, it
>> is always through an indirection, so we should be good.
>
> Who said global? It could be two procedures accessing a value in a
> shared outer scope, with one of them launched in a second thread,
> perhaps indirectly via a third procedure which the compiler couldn't
> examine at the time to know that it would create a thread.
>
> I'm not sure indirection helps -- unless you mean it disables that sort
> of optimization.

Variables which are never set may be copied when closures are made.
Variables which are set! need to be boxed, both because of continuations
and so that closures can just copy the box instead of the value. So there
is still an indirection, and the compiler handles it.
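
To make that concrete, here is a small sketch of the pattern Ken
described (no claims about a formal memory model here; the point is just
that because `done' is set!, every test in the loop goes through its box
rather than a value the compiler could cache):

  (use-modules (ice-9 threads))

  (define (run-until-done)
    (let ((done #f))                      ; set! below, so `done' is boxed
      (let ((worker (call-with-new-thread
                     (lambda ()
                       (while (not done)  ; each test re-reads the box
                         (usleep 1000))))))
        (sleep 1)
        (set! done #t)                    ; flip the flag from this thread
        (join-thread worker))))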

>> Better for emacs? Well I don't think we should over-sell speed, if
>> that's what you're getting at.
>
> Hey, you're the one who said, "Guile can implement Emacs Lisp better
> than Emacs can." :-) And specifically said that Emacs using Guile would
> be faster.

You caught me! ;)

Emacs using Guile will certainly be faster, once we get native
compilation. While we're just bytecode-based, though, I think it will be
much the same.

> The initial work, at least, wouldn't involve a rewrite of Lisp into
> Scheme. So we still need to support dynamic scoping of, well, just about
> anything.

Indeed.

>> Native-code compilation will make both Scheme and Elisp significantly
>> faster -- I think 4x would be a typical improvement, though one would
>> find 2x and 20x as well.
>
> For raw Scheme data processing, perhaps. Like I said, I'm concerned
> about how much of the performance of Emacs is tied to that of the Emacs
> C code (redisplay, buffer manipulation, etc) and that part probably
> wouldn't improve much if at all. So a 4x speedup of actual Emacs Lisp
> code becomes ... well, a much smaller speedup of Emacs overall.

Ah, a speedup to emacs itself! I was just talking about elisp ;-) It
certainly depends on what you're doing, I guess is the answer here. I
would like my Gnus to be faster, but I'm quite fine with just editing
source code and mail ;-)

>>> On my reasonably fast Mac desktop, Emacs takes about 3s to launch and
>>> load my .emacs file.
>> 
>> How long does emacs -Q take?
>
> Maybe about 1s less?

Good to know, thanks.

>>> I'm also pondering loading different Lisp files in two or three
>>> threads in parallel, when dependencies allow, but any manipulation of
>>> global variables has to be handled carefully, as do any load-time
>>> errors. (One thread blocks reading, while another executes
>>> already-loaded code... maybe more, to keep multiple cores busy at
>>> once.)
>> 
>> This is a little crazy ;-)
>
> Only a little?

:)


Well, I've spent the whole morning poking mail, which doesn't do much to
help Guile or Emacs. I'm going to see if I can focus on code in the next
two or three weeks, besides GHM organization obligations.

Happy hacking,

Andy
-- 
http://wingolog.org/



