Re: unsigned int for loop in bash
From: Bob Proulx
Subject: Re: unsigned int for loop in bash
Date: Sat, 1 Feb 2014 13:12:19 -0700
User-agent: Mutt/1.5.21 (2010-09-15)
Mathieu Malaterre wrote:
> I am getting a weird behavior in bash. Would it be possible for the
> next release of bash to not get a SIGSEGV?
> for i in {0..4294967295}; do
> echo $i
> done
That is one of those expressions that I see and my eyes go *WIDE* with
shock! The {X..Y} expression generates every value as a separate
argument before the loop ever runs. {0..4294967295} generates 2^32,
about 4.3 billion, arguments. And since those arguments are strings
they will take up a lot of memory.
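A quick way to see that these really are separate words, before we get
to the character-counting experiments below, is a simple word count
(just an illustration, not one of the measurements that follow):
$ echo {0..10} | wc -w
11
So {0..N} always produces N+1 separate words.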
Let's try some experiments to make it more obvious what it is doing.
$ echo {0..10}
0 1 2 3 4 5 6 7 8 9 10
$ echo {0..50}
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28
29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50
$ echo {0..100}
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28
29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80
81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100
$ echo {0..250}
...much too long for an email...redacted...
That is getting large very fast! Faster than linearly in the count of
numbers, because each number takes up more characters as it grows
larger in magnitude. Let's stop printing them and start counting them.
$ echo {0..100} | wc -c
294
$ echo {0..1000} | wc -c
3895
$ echo {0..10000} | wc -c
48896
$ echo {0..100000} | wc -c
588897
$ echo {0..1000000} | wc -c
6888898
$ time echo {0..10000000} | wc -c
78888899
real 0m20.800s
user 0m15.280s
sys 0m1.860s
To print up to 1M consumes 6,888,898 bytes and we are only a fraction of
the way to 4 gig! To print up to 10M consumes 78,888,899 bytes and
starts to take a long time to complete. On my machine the bash
process gets up to around 1.8G of RAM on that one and forces other
memory to swap out. There isn't much memory left for other processes
to use. With any larger values the machine starts to thrash swap and
slows down to disk I/O speeds. And we are only up to 10M, not yet
anywhere close to your 4,000+M request.
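As a rough back-of-the-envelope estimate (arithmetic, not a
measurement): you can check the counting above by hand, since for
{0..10000000} the digits add up to 68,888,898 bytes and there are
10,000,000 spaces plus a newline, which matches the 78,888,899 shown.
Doing the same sum for 0 through 4294967295, where almost every number
is ten digits long, gives roughly 41.8 billion bytes of digits plus
about 4.3 billion separators, on the order of 46 GB of text. And that
is before bash's per-word allocation and list overhead, which can
easily multiply the real memory cost several times over.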
The reason you are seeing a segfault is because you are trying to use
more memory than exists in your system. You are trying to use more
memory than exists in most supercomputers. You are trying to use an
astronomical amount of memory!
If you are adventuresome you might try to actually produce that many
arguments. I expect that it will crash due to being out of memory.
If you are lucky it will only produce a bash malloc error. If you are
running Linux with the OOM killer enabled then it might kill something
inappropriate and damage your system.
echo {0..4294967295} >/dev/null
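A somewhat safer way to run that experiment, assuming your system
supports ulimit -v, is to cap the shell's virtual memory in a subshell
first so the expansion dies with a clean allocation failure instead of
dragging the whole machine into swap:
$ ( ulimit -v 1000000; echo {0..100000000} >/dev/null )
The ulimit -v 1000000 limits the subshell to roughly 1 GB of address
space; the exact error you get back will depend on your bash build and
platform.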
This has been discussed previously. I suggest reading through this
thread. It includes good suggestions for alternative ways of looping
without consuming endless memory.
http://lists.gnu.org/archive/html/bug-bash/2011-11/msg00181.html
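For reference, a minimal sketch of the usual alternatives (assuming a
bash whose arithmetic is 64-bit, as it is on any modern build, so
4294967295 fits in an arithmetic expression):
# C-style arithmetic loop: the counter is computed one value at a time.
for ((i = 0; i <= 4294967295; i++)); do
  echo "$i"
done
# Or stream the numbers from seq(1) so the full list is never held in memory.
seq 0 4294967295 | while read -r i; do
  echo "$i"
done
Either form uses a small, constant amount of memory no matter how far
the count runs.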
Bob