From: Jan Djärv
Subject: bug#9196: integer and memory overflow issues (e.g., cut-and-paste crashes Emacs)
Date: Fri, 29 Jul 2011 18:49:20 +0200
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:5.0) Gecko/20110624 Thunderbird/5.0



Paul Eggert wrote 2011-07-29 18:21:
First, thanks for taking a look at the patch.  And in response to your
comments ...

On 07/29/11 03:01, Jan Djärv wrote:

a crash would be fine by me.  Actually better than memory_full,
because the core is much more useful.

We can easily arrange to crash, by replacing "memory_full (SIZE_MAX)"
with "abort ()".  I can do that for places where that's preferred.  It
sounds like xgselect.c is one of those places, so I'll do that; please
let me know of any other places.

The problem with abort is that gcc may optimize calls to it so that they appear to have occurred elsewhere.

I systematically looked for all places where memory is being
allocated, and where size calculations might overflow during this, and
fixed them all.  It's better to have a systematic policy where all
such size calculations are checked.  Trying to decide heuristically
which size overflows don't need checking because they can "never
happen" would lead to further sources of maintenance error, and anyway
the idea that some size overflows can "never happen" is uncomfortably
close to the apocryphal story that "640K of memory should be enough
for anybody".
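
For concreteness, the kind of check being discussed looks roughly like this (just a sketch, using plain malloc rather than Emacs' own allocation wrappers, and with a made-up function name):

#include <stdint.h>
#include <stdlib.h>

/* Allocate room for NITEMS objects of ITEM_SIZE bytes each, refusing
   to allocate (rather than silently wrapping around) when the total
   size would overflow size_t.  */
void *
alloc_array (size_t nitems, size_t item_size)
{
  if (item_size != 0 && nitems > SIZE_MAX / item_size)
    return NULL;                /* caller reports memory_full, aborts, etc. */
  return malloc (nitems * item_size);
}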

It is also a maintenance burden to keep all these checks: more code that can go wrong, and it really adds nothing. The difference between "640K is enough" and 2 billion scroll bars is huge. If you create a scroll bar every tenth of a second, it will take approximately 6.8 years to reach 2 billion. Why is it important to check for that? I do think it is important to reflect on what the code does and how its limits are reached before adding more code that increases CPU and memory usage for no real benefit.
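
(A quick check of that figure, taking "2 billion" to mean 2^31: 2^31 scroll bars at one per 0.1 s is about 2.1 x 10^8 seconds, which is roughly 6.8 years.)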

In xgselect.c:

+    int gfds_size_max =
+      min (INT_MAX, min (PTRDIFF_MAX, SIZE_MAX) / sizeof *gfds);

Here a compile-time constant is recalculated inside a loop.

Since it's a constant, the runtime cost of calculating it is zero, so
there's no efficiency gain by moving it.

I think you are assuming compiler optimizations here. That may be true for gcc, but not for other compilers. A define states more clearly that this is a constant than a computed variable does.

Do you think it would be
clearer to hoist it out of the loop?

Yes, and by declaring it const.  But even better, a define.

If so, we can easily do that;
but there is something to be said for having the definition of the
constant near its use.


A define can be put anywhere.
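
For example, something along these lines (only a sketch, not the actual patch; XG_MIN and GFDS_SIZE_MAX are illustrative names, and Emacs' own min macro could be used instead):

#include <limits.h>
#include <stdint.h>
#include <glib.h>

#define XG_MIN(a, b) ((a) < (b) ? (a) : (b))

/* Largest number of GPollFD slots we can safely allocate.  This is a
   compile-time constant, so spell it as one instead of recomputing a
   variable inside the loop.  */
#define GFDS_SIZE_MAX \
  ((int) XG_MIN (INT_MAX, XG_MIN (PTRDIFF_MAX, SIZE_MAX) / sizeof (GPollFD)))

The loop body can then compare against GFDS_SIZE_MAX directly, with no per-iteration computation and no question of whether the compiler folds it.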

The xgselect.c change is also overengineering, IMHO.  The number checked
represents the number of file descriptor sources Glib is checking.
I can understand checking sizes for strings that come from external
sources, but only our own code adds file descriptor sources.  If some bug
causes the addition of 2 billion sources

2 billion sources is not always the limit.  On typical 32-bit hosts,
the current code stops working at 500 million sources, or 250 million
if one is worried about pointer subtraction.  And if future glib
versions change the associated struct, that 250-million limit could go
down further.  In cases like these, it's helpful to check the limit
even if the check can "never fail".
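
(Those figures presumably come from dividing the addressable limits by the size of a GPollFD, which is 8 bytes on typical 32-bit hosts: SIZE_MAX / 8 is about 536 million, and PTRDIFF_MAX / 8 about 268 million.)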

250 million is way more than any OS can handle.  What is the point of having Emacs check for something the OS simply can't handle anyway?  Please come up with a real scenario where this may actually fail before adding checks.  You aren't the one who has to take the time to update these checks when the code changes.  You are putting a maintenance burden on others without any real benefit.

        Jan D.




