[Gcl-devel] shared memory load address and gcl


From: Camm Maguire
Subject: [Gcl-devel] shared memory load address and gcl
Date: Tue, 02 Feb 2010 11:56:05 -0500

Greetings!  GCL, like many Lisp systems, manages memory by appending
pages to its .data section with sbrk as needed.  At configuration time,
GCL runs various checks to determine how many pages can be added this
way before running into some other obstacle, usually the shared memory
load address base.  On x86 Linux, this is at 0x40000000.  In addition,
GCL attempts to craft a linker script that lowers its .text section to
0 to make more room if needed.
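
For illustration, here is a minimal C sketch (not GCL's actual
configure test) of the kind of probe involved: it compares the current
sbrk break with an assumed shared-library load base and reports how
many pages could be appended below it.  The 0x40000000 base is the x86
Linux value mentioned above and is an assumption in this sketch, not
something queried from the system.

    /* Sketch only: estimate pages available between the current break
     * and an assumed shared-library load base (0x40000000 on x86 Linux). */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>

    #define SHLIB_BASE 0x40000000UL   /* assumed obstacle address */

    int main(void)
    {
        uintptr_t brk0 = (uintptr_t) sbrk(0);   /* current end of .data/heap */
        long page = sysconf(_SC_PAGESIZE);

        if (brk0 >= SHLIB_BASE) {
            printf("break %#lx already at or above the assumed base\n",
                   (unsigned long) brk0);
            return 1;
        }
        printf("break:           %#lx\n", (unsigned long) brk0);
        printf("pages available: %lu\n",
               (unsigned long) ((SHLIB_BASE - brk0) / (unsigned long) page));
        return 0;
    }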

My understanding is that these addresses are not configurable by the
user.  I would so love to be shown that this is incorrect.  Assuming it
is correct, on Debian BSD amd64 the porter box (asdfasdf, which has a
broken gdb, by the way) loads ld.so at a very low address of
~0x1000000, but other libraries at the higher 0x800000000.  Small test
programs have sbrk start beneath the former, while larger programs have
sbrk start in the gap between the two addresses.  There is more than
enough space for GCL in the latter area, but not in the former.  The
buildd apparently starts at 0x20000000 even for small programs.  On the
buildd, the linker-script lowering of the .text address to 0 works; on
asdfasdf it aborts, presumably because of the low ld.so address.
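
To see where a given machine actually puts things, a small probe along
these lines can be run on both the porter box and the buildd: it prints
the heap start and the load address of libc as reported by dladdr().
This is just a sketch under the assumption of a glibc/BSD-style
dladdr() (link with -ldl on glibc); the addresses it reports will of
course differ per machine and kernel.

    /* Sketch: compare the heap start (sbrk) with the libc load address. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <dlfcn.h>

    int main(void)
    {
        Dl_info info;
        void *heap = sbrk(0);

        if (dladdr((void *) &printf, &info))
            printf("libc (%s) loaded at %p\n", info.dli_fname, info.dli_fbase);
        printf("heap starts at       %p\n", heap);
        return 0;
    }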

1) Are any of these addresses configurable by the user?
2) Why are they different on these machines?  Are they standards of
the kernel, or intended to be so?
3) Do I need to plan to work around possible gaps in this area, as is
presently the case on asdfasdf?  This is the only machine I've ever
used with such a gap.

Thanks so much!

-- 
Camm Maguire                                        address@hidden
==========================================================================
"The earth is but one country, and mankind its citizens."  --  Baha'u'llah



