
[Gcl-devel] Re: exec-shield mmap & brk randomization

From: root
Subject: [Gcl-devel] Re: exec-shield mmap & brk randomization
Date: Tue, 18 Nov 2003 21:07:08 -0500

Ummm, I've read this 3 times and I understand that lisp (in fact every
lisp I've ever worked on) will break in this model. Lisp systems depend
on managing their own memory.

If I understand what you wrote it appears that in the worst case I
could get back a page at a time which could not be remapped into a
contiguous area! Please tell me that I'm wrong.

I've written a couple of garbage-collecting memory managers and I can
tell you that, while I could write code to manage lists of single-page
memory pools, it would be horribly inefficient.  If, in the worst
case, I can only guarantee a page of memory at a time, I'll be forced
to simulate page tables and play pointer games.

It appears that exec-shield has taken a "master-slave" relationship with
the application. The master does what it wants, unpredictably so, and
every application has to cope.

This will be my first, last, and only bitch about this but I claim that
exec-shield is broken by design and should be withdrawn. It is correct
by legal definition and wrong by philosophy. The point of a computer is
the application. Randomizing the behavior of the operating system means
that every application programmer will have to defend against the 
operating system. It is true that correct code will always work but 
any slight misunderstanding by the application programmer will lead
to random, non-repeatable crashes. This will give Linux a reputation
for being unreliable.

Sorry if this sounds aggressive. I'm just used to 30+ years of a
certain memory model and I don't deeply understand the implications
of this change. It appears to be an attempt to "fix" bad programmers'
code vulnerabilities. However, it then goes on to assume that those
same programmers can write rock-solid, standards-legal code under all
possible random layouts.

So, now that I'm through the denial and grief, let's move on to the coping.

Do you have a set of examples from the design to show me what expectations
will be violated? I read your text but it would be helpful to see actual
code that used to work and will no longer be guaranteed to work. Is there
a design document I can read to understand this better?

Lisp needs large contiguous areas to be efficient and, given that I
can now address gigabytes' worth of memory, there ought to be a way to
guarantee that I'll get them. Clearly allocating from &end to the top
of memory won't work.

What I'd like to achieve is allocating all of available memory above 
the loaded code as one contiguous block. I can then manage all of the
memory myself. Can you comment on these potential ideas?

Is there a way to get a map of all addressable memory? Such a map could
tell me which ranges of my address space the shared libraries sit in.
I could then route around the damage by allocating them as fixed lisp
objects that the garbage collector can't move. Alternatively we could
just copy the shared libraries out of the way and dynamically relink the
library calls.

Another strategy is to statically link everything. That way &end upward
will be unused. But I got a hint from your previous note that something
is about to break with static linking. How will static linking change?

Another strategy is to allocate really large data areas in the 

Is there a way to access the page tables? I can remap the pages in the
address space so the shared libraries are contiguous with the end of
my code within my address space, freeing up the rest of memory as a
contiguous block.

If all else fails can I allocate memory a page at a time, looping until
memory is exhausted, and then compute the memory map? This appears to
let me unrandomize the brk calls.

I guess the key question boils down to: how do I allocate very large
contiguous blocks of memory reliably in the new memory model?

