Re: [Cvs-test-results] CVS trunk testing results (BSDI BSD/OS)
From: Larry Jones
Subject: Re: [Cvs-test-results] CVS trunk testing results (BSDI BSD/OS)
Date: Thu, 3 Nov 2005 15:45:32 -0500 (EST)
Derek Price writes:
>
> Hrm. Larry, could you provide a little more information about this
> failure? If you back out the most recent change to zlib.c, does this
> test pass on BSDI & AIX?
No, it still fails (at least on BSD/OS).
> Also, what, exactly, is causing the "memory
> exhausted" error? Is it a new bug or simply because the test attempts
> to commit a file of about 4MB in size and the whole thing is held in
> memory at once (twice? more? - client & server?)?
It appears to be the latter:
#0 xalloc_die () at xalloc-die.c:38
#1 0x804d8a9 in get_buffer_data () at buffer.c:108
#2 0x804d987 in buf_output (buf=0x80e81c0,
data=0x813d000 "a lot of data on a line to make a really big file once it
is copied, copied,\ncopied, the digital equivalent of a mile.\na lot of data on
a line to make a really big file once it is copied, copied,\ncopi"...,
len=7798784) at buffer.c:181
#3 0x8054f41 in send_to_server_via (via_buffer=0x80e81c0,
str=0x813d000 "a lot of data on a line to make a really big file once it is
copied, copied,\ncopied, the digital equivalent of a mile.\na lot of data on a
line to make a really big file once it is copied, copied,\ncopi"...,
len=7798784) at client.c:3108
#4 0x8054fa6 in send_to_server (
str=0x813d000 "a lot of data on a line to make a really big file once it is
copied, copied,\ncopied, the digital equivalent of a mile.\na lot of data on a
line to make a really big file once it is copied, copied,\ncopi"...,
len=7798784) at client.c:3130
#5 0x8056664 in send_modified (file=0x80e6440 "big_file",
short_pathname=0x80e6410 "big_file", vers=0x80e8380) at client.c:4418
#6 0x8056992 in send_fileproc (callerdat=0x8047920, finfo=0x8047798)
at client.c:4549
#7 0x8081f59 in do_file_proc (p=0x80e53e0, closure=0x804778c) at recurse.c:959
#8 0x806277d in walklist (list=0x80ec400, proc=0x8081eb0 <do_file_proc>,
closure=0x804778c) at hash.c:419
#9 0x8081daa in do_recursion (frame=0x8047884) at recurse.c:847
#10 0x8082792 in unroll_files_proc (p=0x80e5320, closure=0x8047884)
at recurse.c:1344
#11 0x806277d in walklist (list=0x80eb800, proc=0x8082644 <unroll_files_proc>,
closure=0x8047884) at hash.c:419
#12 0x80818a1 in start_recursion (fileproc=0x8056690 <send_fileproc>,
filesdoneproc=0x8056ad8 <send_filesdoneproc>,
direntproc=0x8056b0c <send_dirent_proc>,
dirleaveproc=0x8056bec <send_dirleave_proc>, callerdat=0x8047920, argc=1,
argv=0x80e63e0, local=0, which=1, aflag=0, locktype=0, update_preload=0x0,
dosrcs=0, repository_in=0x0) at recurse.c:448
#13 0x8056e8a in send_files (argc=1, argv=0x80e6330, local=0, aflag=0, flags=0)
at client.c:4960
#14 0x8057c2c in commit (argc=0, argv=0x80e8088) at commit.c:602
#15 0x807059b in main (argc=2, argv=0x80e8080) at main.c:1153
#16 0x804a749 in __start ()
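Roughly the pattern the backtrace implies (just a sketch, not the actual
buffer.c code): the client reads the whole working file into one block,
and buf_output() then copies it again into a chain of fixed-size buffers
before anything reaches the server, so the client alone holds about twice
the file's size at the peak.  The block size and names here are made up
for illustration:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

enum { BLOCK = 8192 };                  /* hypothetical block size */

struct block {
    struct block *next;
    size_t len;
    char data[BLOCK];
};

/* Copy LEN bytes into a chain of fixed-size blocks and return the head;
   an allocation failure in here corresponds to the xalloc_die() in the
   backtrace above. */
static struct block *queue_output(const char *p, size_t len)
{
    struct block *head = NULL, *tail = NULL;
    while (len > 0) {
        struct block *b = malloc(sizeof *b);
        if (b == NULL) {
            fputs("memory exhausted\n", stderr);
            exit(EXIT_FAILURE);
        }
        b->len = len < BLOCK ? len : BLOCK;
        memcpy(b->data, p, b->len);
        b->next = NULL;
        if (tail != NULL)
            tail->next = b;
        else
            head = b;
        p += b->len;
        len -= b->len;
        tail = b;
    }
    return head;
}

int main(void)
{
    size_t filesize = 7798784;          /* the len from frame #2 */
    char *contents = malloc(filesize);  /* first copy: the entire file */
    if (contents == NULL)
        return 1;
    memset(contents, 'a', filesize);    /* stand-in for reading big_file */
    struct block *queued = queue_output(contents, filesize);  /* second copy */
    return queued != NULL ? 0 : 1;      /* leaks ignored for brevity */
}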
However, it's failing about 4MB into the 7MB file, which worries me
since that's nowhere near my process limits. And, indeed, at this point
I can malloc() lots of memory with no problem, but pagealign_alloc()
fails:
(gdb) p pagealign_alloc(4096)
$6 = (void *) 0x0
(gdb) p malloc(4096)
$7 = 143335424
(gdb) p malloc(40960)
$8 = 143339520
So, I think there's a problem with the way pagealign_alloc() uses
mmap().
I note that on my system, malloc() claims to page align large requests
-- maybe we're trying too hard.
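For what it's worth, here's a sketch of the two approaches (illustrative
only, not gnulib's actual pagealign_alloc()): an mmap()-per-call allocator
creates a brand new anonymous mapping every time, and mappings can fail
for reasons that have nothing to do with how much heap is left, which
would match malloc() succeeding while pagealign_alloc() returns NULL.
Whether that's what BSD/OS is doing here is only a guess.

#include <stdint.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

/* mmap-based page-aligned allocation: one new anonymous mapping per
   call, independent of the malloc() heap. */
static void *pagealign_via_mmap(size_t size)
{
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANON, -1, 0);
    return p == MAP_FAILED ? NULL : p;
}

/* malloc-based alternative: over-allocate by a page and round up, on
   the theory that malloc() already page-aligns large requests anyway.
   (A real version would have to remember the raw pointer so the block
   can be freed later.) */
static void *pagealign_via_malloc(size_t size)
{
    size_t page = (size_t) sysconf(_SC_PAGESIZE);
    char *raw = malloc(size + page - 1);
    if (raw == NULL)
        return NULL;
    return (void *) (((uintptr_t) raw + page - 1) & ~((uintptr_t) (page - 1)));
}

int main(void)
{
    /* Mirrors the gdb experiment above: try both allocators for one page. */
    void *a = pagealign_via_mmap(4096);
    void *b = pagealign_via_malloc(4096);
    return (a != NULL && b != NULL) ? 0 : 1;
}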
-Larry Jones
My "C-" firmly establishes me on the cutting edge of the avant-garde.
-- Calvin