
Re: [Gcl-devel] GCL memory allocation and GC problems


From: Vadim V. Zhytnikov
Subject: Re: [Gcl-devel] GCL memory allocation and GC problems
Date: Wed, 14 Jan 2004 23:25:57 +0300
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; ru-RU; rv:1.5) Gecko/20031006

Camm Maguire writes:
"Vadim V. Zhytnikov" <address@hidden> writes:


Camm Maguire writes:

Greetings!
"Vadim V. Zhytnikov" <address@hidden> writes:


Camm Maguire writes:


Hi Vadim!  Finalizing the default holesize now and rereading your old
messages.  It seems as if you said no less than 1000 pages, and
approx. 1/10 MAXPAGE (here too, as well as growth maximum default) was
optimal.  I've looked at this, and it adds 2.5 Mb (on 6.2) to the
default build.  Is it worth it?  What do people think?  How often do
applications try to dramatically extend the core?
Take care,

Take a look at the (room) output for a new GCL build.
I see about 400 allocated pages for contiguous blocks.
Why?  Where do these pages come from?  This is quite

There are a few stray mallocs/alloc_contblocks which might be cleaned
up, but the vast bulk is 1) the bfd relocation table (~ 950k, 304
contiguous pages (due to page boundary alignment)), and 2) relocated
lisp object files, maybe ~150 pages.  We could redirect the former to
a static internal array, or simply mark the pages as t_other to save
gbc traversal time (as we know, contblock gc is the most expensive by
far -- perhaps a better algorithm can be found at some point).  I'm
reasonably confident we could do this in less memory, but the tradeoff
is in not having to support relocation code on 12 platforms :-).


OK.  But please compare the current CVS GCL and the GCL
right before the hole-size-related commit.  Where are these
contblocks in the latter image?


Let me clarify the misunderstanding.  When I first ran a fresh
gcl build with the new memory layout (faster growth, larger hole)
I noticed that right after start (room) showed more allocated contiguous
pages than the GCL permanently installed on my system.  My mistake was
deciding that this was some suspicious by-product of the new memory
parameters.  In fact my permanent GCL is quite old - September 2003.
Recent CVS GCL (before the new memory parameters) shows exactly the same
number of contiguous pages.  So the difference has nothing to do with the
recent changes.  But a noticeable difference between the current CVS and
the September one really does exist.  It can be seen both right after
start and in the way GCL allocates large numbers of cont blocks.
I've tried a simple test - creating a 20000-element list of large random
bignums with (set-gmp-allocate-relocatables nil).  The resulting
numbers with September's GCL are
  465/676      16661 contiguous blocks
while the same test with the current GCL gives
  796/1152     24284 contiguous blocks
A strange difference.  But let's leave this problem for a moment.
It is not crucially important, since we are now trying to avoid
cont block allocation as much as possible.

Is it worth increasing the hole size?  Judge for yourself.
Do you remember the (pass) test which we used for
cons page allocation?  I've tried the same test in several
CL implementations on an Athlon XP+ 2400.  These numbers are the
time taken by 10 (pass) calls:

clisp                                               -  12 sec
cmucl                                               -  12 sec
sbcl                                                -   8 sec
GCL current CVS with new growth and hole size 512   - 166 sec
The same but with old default hole size 128 (?)     - 785 sec
The same with 1024 hole size                        -  83 sec
The same but with 10000 hole size                   -  13 sec

I think that 785 seconds is really too much.

OK, so what?  Are these numbers _really_ important?  Maybe
2.5 Mb is more important?  Obviously there is no universal answer;
this is the usual space/speed tradeoff.  Let's consider the situation
from a practical point of view.  The main GCL customers - Maxima, ACL2
and Axiom - are large Lisp programs which are often used
for large computations.  Of course they can also be used
for something very simple, but even in such a situation is an extra 2.5 Mb
important?  In general I don't think so, taking into account typical
modern and not-so-modern hardware.  I can only imagine something
like a hand-held computer where this extra RAM may be of real
importance.  On the other hand I understand that a large
default hole size cuts some flexibility.  So let's make the initial
hole size a new configure option (it is not so hard, is it?)
with the default value 4*MAXPAGE/1024 or even 8*MAXPAGE/1024.





Here's a schematic of GCL's memory layout:

.text: main (0x804ad30 in my image)
all compiled C files in o/
...
user_match                    (0x81870a0 in my image)
.data:                        (0x81a53a0 in my image)
intermixed typed pages
(i.e. cons, string, contblock...)
(contblocks hold loaded compiled
lisp objects, area malloced by
external libs, and, formerly,
bignums)
heap_end                      (0x8451000 in my image)
hole (512 pages)
rb_pointer (relblock area)    (0x865102c  in my image)
core_end                      (0x871d000  in my image)

which corresponds to a 7Mb+ DRS reported by 'ps axvwm'.  What is
interesting, and beyond my understanding at the moment, is that the image
size on disk is only ~ 4.5Mb -- there must be some sort of compression
going on, I'd think.

Actually the image size behaves somewhat strangely for me.  I just built
from scratch the current CVS gcl and a few-days-old CVS gcl.
Current unstripped saved_gcl size - 6Mb.
Previous - 10Mb.  Why?


In any case, the hole is in the image, allocated in the .data section,
but is not among the typed pages in the Lisp heap.  In particular, the
hole is not kept in contblock pages.  Hope this helps.  Suggestions
for improvements as always most welcome.

Take care,


Finally, please explain this notorious 2.5 Mb to me.
Here is the information from /proc/<pid>/status for
GCL after start and execution of (gbc t)(gbc nil)(gbc 1):

                                     RSS      VM     Data

1) GCL with small hole size 128     2724K    8012K    732K
2) GCL with def hole size 512       2612K   10164K   2884K
3) GCL with 1000 hole size          2116K   48116K  40836K

What is harmful about larger VM and Data sizes?
Compare with the initial memory layout for cmucl:

                                     3928K    1.3M   1.3M

Maybe I misunderstand some UNIX VM concepts.

--
     Vadim V. Zhytnikov

      <address@hidden>
     <address@hidden>







