
[Gcl-devel] Re: ALPHA native object relocation committed to gclcvs


From: Camm Maguire
Subject: [Gcl-devel] Re: ALPHA native object relocation committed to gclcvs
Date: 15 Apr 2005 11:13:59 -0400
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.2

Greetings!

Leon Bottou <address@hidden> writes:

> On Thursday 14 April 2005 05:41 pm, you wrote:
> > Greetings!  Happy to announce that native relocation is now also
> > enabled for the alpha following the mips .got-section-per-module
> > strategy.
> > OK, you may want to look at the alpha code now committed if you are
> > ever interested in alpha.
> 
> I will!
> 

Great!  Am hoping the last two, ia64 and hppa, will be simple as
well. 

For someone who really knows their stuff, which I don't, there is
further room for improvement in setting the HINT relocations and
possibly avoiding the full i-cache flush with imb() on load.  I have
no way of telling how important this might be.

There is also an annoying aspect of bfd: it defines
reloc_howto_type to be constant, which on alpha actually puts the
table into read-only memory, making your *(void **)& workaround
segfault.  We have our local copy of bfd, so I just removed the
const.  While GCL can also build with the external system bfd, which
is more modular, that makes the image less portable, as certain
heap-recreation steps require the same bfd lib to be present.  In
fact, we already have local bfd patches for macosx.  While I
originally sought out bfd as a way to externalize this nasty
relocation stuff, which has indeed succeeded on at least 6 arches,
there are a few disadvantages besides the local macosx patch and the
read-only howto issue mentioned above: 1) upstream bfd explicitly
states that it does not provide any stable or even versioned
binary-compatible api, 2) image portability, 3) memory fragmentation
and excess consumption (~1.5M) from moderate use of malloc/free on
each module load.  The latter would be easy to alleviate if one could
estimate the memory needs before loading, but it raises the question
of whether we really need to carry around all of bfd for the
(relatively) simple task at hand.



> > I can see we are thinking alike.  
> > One can't appear to beat gdb's add-symbol-file
> This is what I am using right now.
> 
> > > Dumping the image state is not done with an unexec trick, but
> > > by parsing an architecture independent dump file and reconstructing
> > > the corresponding lisp objects.  Yes this is slower.
> > > Modules are reloaded when the corresponding lisp object is reconstructed.
> > 
> > I see.  Why then do you need to relocate at all, as opposed to just dlopen? 
> >  
> 
> The lush compiler does not produce native code. 
> It generates C code, compiles it with gcc, and loads the .o file.
> To make this transparent, we need to replace and relink 
> compiled code on the fly whenever we recompile a function.
> This is too complicated for dlopen/dlclose.
> 
> Essentially we save the trouble of native code generation
> at the expense of a more complicated dynamic linker...
> 

OK, this is exactly the case with GCL too.  As mentioned previously,
axiom and acl2 load so many .o files that they greatly exceed the
maximum number of dlopen file handles available on most systems!  I
take it lush does not aim to implement ansi common lisp, otherwise
there is serious duplication between our projects!  ECL is the other
lisp-like program following suit, which differs primarily from GCL in
its goal of being embeddable.  I wonder how many existing uses people
have for such a feature.

Take care,

> > And likewise!  Still would be nice to avoid duplication of effort to
> > the extent possible.
> 
> Of course.
> Thanx again.
> 
> - L.
> 
> 
> 

-- 
Camm Maguire                                            address@hidden
==========================================================================
"The earth is but one country, and mankind its citizens."  --  Baha'u'llah



