
Re: GNU make 3.81beta4 released


From: Eli Zaretskii
Subject: Re: GNU make 3.81beta4 released
Date: Wed, 18 Jan 2006 21:30:37 +0200

> From: Eli Zaretskii <address@hidden>
> Date: Wed, 18 Jan 2006 06:24:14 -0500
> Cc: address@hidden
> 
> WaitForMultipleObjects that sub_proc.c uses to wait for child
> processes' demise is documented to be limited to a maximum of
> MAXIMUM_WAIT_OBJECTS objects.  MAXIMUM_WAIT_OBJECTS's value is 64
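For the record, here is a minimal sketch of one way around that
limit: wait for any of an arbitrary number of handles by polling
64-handle chunks with a short timeout.  This is my illustration, not
code from sub_proc.c, and the polling costs up to 100 ms of latency
per pass over all the chunks:

    #include <windows.h>

    /* Sketch only: return the index of a signaled handle among COUNT
       handles, even when COUNT exceeds MAXIMUM_WAIT_OBJECTS, by
       polling 64-handle chunks with a 100 ms timeout each.  */
    static DWORD
    wait_any (HANDLE *handles, DWORD count)
    {
      for (;;)
        {
          DWORD base;
          for (base = 0; base < count; base += MAXIMUM_WAIT_OBJECTS)
            {
              DWORD n = min (count - base, MAXIMUM_WAIT_OBJECTS);
              DWORD r = WaitForMultipleObjects (n, handles + base,
                                                FALSE, 100);
              if (r == WAIT_FAILED)
                return WAIT_FAILED;
              if (r < WAIT_OBJECT_0 + n)  /* some handle was signaled */
                return base + (r - WAIT_OBJECT_0);
              /* else WAIT_TIMEOUT: try the next chunk */
            }
        }
    }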

As I looked at this issue in the Windows-specific code in Make, I
asked myself how Make copes with a similar problem on Posix
platforms.  That is, what happens when "make -j" exceeds the limit on
the number of processes for the current user (or for the entire
system)?

Well, it turns out that Make doesn't seem to cope with this at all.  I
used "ulimit -S -u" to lower the limit on per-user processes to a
small value, and then ran one of the makefiles in the `parallelism'
script from the test suite.  Sure enough, Make said that vfork failed
with EAGAIN, and the respective rule's commands were not run.  That
isn't surprising, since I cannot find anything in the code that's
supposed to handle this situation.  From what I saw, when invoked with
"-j" with no argument and with no load average limitations, Make never
considers the machine load too high, so it goes on forking subsidiary
programs until it exceeds the limit.
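Here is a tiny stand-alone C program (my own demonstration, not Make
code) that shows the same failure mode; run it after lowering the
per-user process limit with "ulimit -S -u", and fork eventually fails
with EAGAIN, exactly as vfork did inside Make:

    #include <errno.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int
    main (void)
    {
      int n = 0;
      pid_t pid;

      /* Fork children until the per-user process limit is hit;
         nothing here throttles creation, just as in Make.  */
      while ((pid = fork ()) >= 0)
        {
          if (pid == 0)
            {
              sleep (30);       /* the child lingers, like a long job */
              _exit (0);
            }
          n++;
        }
      if (errno == EAGAIN)
        printf ("hit the process limit after %d children\n", n);
      else
        perror ("fork");
      while (wait (NULL) > 0)   /* reap the children */
        ;
      return 0;
    }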

Isn't this a bug?  Could it be fixed by looking at the values returned
by `getrlimit' when vfork returns EAGAIN and, if the failure seems to
be because the number of our forked jobs is near the limit, queuing
the job for later instead of failing it?  For that matter, should
Make look at the limit inside load_too_high and return non-zero if the
number of jobs is about to cross the limit?
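Something along these lines, perhaps.  Here `jobs_running' stands in
for Make's count of live children (the name is mine, not from the
source), RLIMIT_NPROC is not in Posix (though Linux and the BSDs have
it), and the soft limit counts all of the user's processes, not just
ours, so the test below is only an approximation:

    #include <sys/resource.h>

    /* Guess whether a vfork failure with EAGAIN means we are close to
       the per-user process limit; if so, the caller could queue the
       job for later instead of failing it.  */
    static int
    probably_out_of_processes (unsigned int jobs_running)
    {
    #ifdef RLIMIT_NPROC
      struct rlimit rl;

      if (getrlimit (RLIMIT_NPROC, &rl) == 0
          && rl.rlim_cur != RLIM_INFINITY
          && (rlim_t) jobs_running + 1 >= rl.rlim_cur)
        return 1;
    #endif
      return 0;
    }

A similar test inside load_too_high could let "-j" stop short of the
limit instead of running into it.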



