Re: [Gcl-devel] Memory management in 2.6.9


From: Camm Maguire
Subject: Re: [Gcl-devel] Memory management in 2.6.9
Date: Sat, 13 Jul 2013 09:33:00 -0400
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/23.4 (gnu/linux)

Greetings, and thank you very much for this heads-up!  In fact, I've
been just lucky with my brk probes so far, as the default Linux kernel
heuristic is so good.

BTW, on Linux it is runtime configurable via

/proc/sys/vm/overcommit_memory
/proc/sys/vm/overcommit_ratio

0 in /proc/sys/vm/overcommit_memory is the default heuristic, 1 means
always overcommit, and 2 essentially never.  The ratio defaults to 50:
the percentage of physical RAM which, when added to swap, provides the
ceiling for option 2.

As far as GCL goes, we need to know when to start decelerating the
allocations, as being too aggressive early on sets one up for a failure
to allocate at the end of some long calculation.  See
elementary-bounders in acl2, which is barely certifiable given the
memory on the Debian autobuilders.  On the other hand, one does not
wish to recompile simply because one uses a different machine with more
memory.

The goal is not to never run out of memory, but to essentially never
fail at brk when needed, and report an oom situation gracefully from
lisp in advance as the end approaches.  Alas, simply succeeding at brk
at image startup appears to provide precious little in terms of
guarantees later on as the memory is actually written.  ia64 turned out
to be especially bad, allowing a brk overcommit to 17 GB on a system
with 8 GB of RAM.

I'm obviously still open to suggestions.  We can trudge along as is with
some ugly special defines, BRK_DOES_NOT_GUARANTEE_ALLOCATION on ia64 and
kfbsd, MAX_BRK on hurd, but something more robust would be great.

Take care,


Bruce-Robert Fenn Pocock <address@hidden> writes:

> Not an expert by any means, but I believe the brk overcommitment behaviour is 
> a (compile or runtime) option on Linux, as well …
>
> On Jul 12, 2013 12:24 PM, "Camm Maguire" <address@hidden> wrote:
>
>     Greetings!  Largely to accommodate acl2's growing appetite, I've put in
>     some memory management enhancements in 2.6.9.  Among these is a dynamic
>     maxpage -- no longer a compile time constant, the executable will try to
>     manage memory according to the amount available at runtime.
>    
>     This of course does not pertain to Windows or Mac, where sbrk is
>     emulated.  (Don't really think sbrk needs mac emulation, but that will
>     have to wait.)
>    
>     I've run into a few issues with this across the available unix-like
>     systems, no surprise.  There does not appear to be a generic reliable
>     way to determine the available memory in advance without waiting for a
>     failure in the middle of some calculation.  The first attempt is using
>     brk, which is nice as it does not actually add pages to the process
>     until the memory is written.  Some systems, notably kfreebsd and perhaps
>     hurd, return success for brk calls beyond the physical memory of the
>     system.  Next I tried supplementing with sysconf (_SC_PHYS_PAGES).  This
>     actually returns -1 on ia64, which appears permissible from the
>     manpage.  On the bsd's it appears to work OK.
>    
>     Suggestions?
>    
>     Take care,
>     --
>     Camm Maguire                                       address@hidden
>     ==========================================================================
>     "The earth is but one country, and mankind its citizens."  --  Baha'u'llah
>    
>     _______________________________________________
>     Gcl-devel mailing list
>     address@hidden
>     https://lists.gnu.org/mailman/listinfo/gcl-devel
>

-- 
Camm Maguire                                        address@hidden
==========================================================================
"The earth is but one country, and mankind its citizens."  --  Baha'u'llah


