Re: [PATCH] tests: avoid intermittent ulimit -v failures


From: Pádraig Brady
Subject: Re: [PATCH] tests: avoid intermittent ulimit -v failures
Date: Wed, 16 Dec 2015 18:48:03 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.3.0

On 16/12/15 09:51, Pádraig Brady wrote:
> On 16/12/15 02:15, Pádraig Brady wrote:
>> I got the continuous integration going again with:
>>
>>   http://git.sv.gnu.org/gitweb/?p=hydra-recipes.git;a=commitdiff;h=f2f1c98b
>>
>> but then noticed a failure on i686 linux at:
>>
>>   FAIL: tests/misc/cut-huge-range.sh (exit: 1)
>>   ============================================
>>   cut: error while loading shared libraries:
>>   libc.so.6: failed to map segment from shared object
>>
>> I'm not sure about the attached, but it might address the issue.
>> If not we can increase the limit further.
> 
> Actually the issue might have been due to
> an extra fork/exec associated with the pipe,
> in which case this would be more appropriate:
> 
> diff --git a/tests/misc/cut-huge-range.sh b/tests/misc/cut-huge-range.sh
> index 633ca85..001bcde 100755
> --- a/tests/misc/cut-huge-range.sh
> +++ b/tests/misc/cut-huge-range.sh
> @@ -51,15 +51,15 @@ CUT_MAX=$(echo $SIZE_MAX | sed "$subtract_one")
> 
>  # From coreutils-8.10 through 8.20, this would make cut try to allocate
>  # a 256MiB bit vector.
> -(ulimit -v $vm && : | cut -b$CUT_MAX- > err 2>&1) || fail=1
> +(ulimit -v $vm && cut -b$CUT_MAX- /dev/null > err 2>&1) || fail=1
> 
>  # Up to and including coreutils-8.21, cut would allocate possibly needed
>  # memory upfront.  Subsequently extra memory is no longer needed.
> -(ulimit -v $vm && : | cut -b1-$CUT_MAX >> err 2>&1) || fail=1
> +(ulimit -v $vm && cut -b1-$CUT_MAX /dev/null >> err 2>&1) || fail=1
> 
>  # Explicitly disallow values above CUT_MAX
> -(ulimit -v $vm && : | returns_ 1 cut -b$SIZE_MAX 2>/dev/null) || fail=1
> -(ulimit -v $vm && : | returns_ 1 cut -b$SIZE_OFLOW 2>/dev/null) || fail=1
> +(ulimit -v $vm && returns_ 1 cut -b$SIZE_MAX /dev/null 2>/dev/null) || fail=1
> +(ulimit -v $vm && returns_ 1 cut -b$SIZE_OFLOW /dev/null 2>/dev/null) || fail=1
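
To make the fork/exec point concrete, here is a minimal standalone
sketch, not part of the patch; the vm value of 5000 KiB is arbitrary
(the real test computes its limit with get_min_ulimit_v_()) and the
actual threshold is system-dependent.  The pipe form forks an extra
process for the pipeline inside the limited subshell, whereas the
/dev/null form runs only cut under the limit; as the follow-up below
explains, this turned out to be a defensive change rather than the
root cause:

  # Hypothetical comparison; vm is an arbitrary tight limit in KiB.
  vm=5000

  # Pipe form: the shell forks an extra process for the ':' side of
  # the pipeline inside the limited subshell.
  (ulimit -v $vm && : | cut -b1- >/dev/null 2>&1)
  echo "pipe form exit status:   $?"

  # Direct form: no extra pipeline process; cut reads /dev/null.
  (ulimit -v $vm && cut -b1- /dev/null >/dev/null 2>&1)
  echo "direct form exit status: $?"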

Spending a few minutes testing rather than speculating
suggests my hunch about alignment was correct.
I could trigger the failure on my system when I tightened the VM constraint
in get_min_ulimit_v_() from 1000 to 1, and then tested with:

  make SHELL=/bin/dash TESTS=tests/misc/cut-huge-range.sh check

Subtle variations to that on my system would not trigger it.
With the get_min_ulimit_v_() patch applied (and using the correct
divisor of 1024 rather than 4), I could not trigger any issues
on my system in any of the tests that use get_min_ulimit_v_().
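
For context, ulimit -v takes its argument in 1 KiB units, which is
presumably why 1024 (bytes to KiB) rather than 4 is the right divisor.
The following is only a rough sketch of the general shape of such a
probing helper, not the actual coreutils get_min_ulimit_v_(); the
name, the probe range and the 1000 KiB step/headroom are illustrative:

  # Illustrative sketch only -- not the coreutils implementation.
  # Probe downwards in 1000 KiB steps for the smallest VM limit under
  # which "$@" still runs, then report that value plus one step of
  # headroom, so that run-to-run variation (e.g. in how libraries are
  # mapped) does not push a later invocation over the edge.
  probe_min_ulimit_v_()
  {
    local v prev_v=
    for v in $(seq 50000 -1000 1000); do
      if (ulimit -v $v && "$@") >/dev/null 2>&1; then
        prev_v=$v
      else
        break
      fi
    done
    test -n "$prev_v" || return 1
    echo $((prev_v + 1000))
  }

A test would run such a helper once, e.g.
vm=$(probe_min_ulimit_v_ cut -b1 /dev/null), and then reuse the
reported limit in each subshell, much as the quoted diff does with
$vm.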

I'll also merge in the above change for defensive reasons.

cheers,
Pádraig.


