
Re: [Libunwind-devel] Testing time


From: Lassi Tuura
Subject: Re: [Libunwind-devel] Testing time
Date: Tue, 29 Mar 2011 17:51:16 +0200

Hi,

> I went ahead and applied a bunch of patches yesterday. I believe it
> covers most of the things that were discussed on the list recently. I
> still haven't had a chance to review Lassi's performance fixes patch.
> Both 32 and 64 bit x86 are looking good (the same 4 known failures).

I noticed most of the patches went in, thanks!

I am struggling a bit with backtrace(). I have one question and two issues.

1) The new slow_backtrace() does its own unw_getcontext(). Will this not 
introduce one extra stack level if slow_backtrace() isn't inlined into 
backtrace()? OTOH, if it's inlined, why split? In my builds it is always 
inlined.
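
To make 1) concrete, here is roughly the structure I am asking about; this is my
reading of the code, not a paste from the tree, so the details may be off:

#define UNW_LOCAL_ONLY
#include <libunwind.h>
#include <stdint.h>

static int
slow_backtrace (void **buffer, int size)
{
  unw_cursor_t cursor;
  unw_context_t uc;
  unw_word_t ip;
  int n = 0;

  /* The context is captured here, one call level below backtrace().
     If the compiler inlines this into backtrace() that extra frame
     disappears; if not, the caller sees one more stack level.  A fixed
     number of dropped leading frames can't be right in both cases. */
  unw_getcontext (&uc);
  if (unw_init_local (&cursor, &uc) < 0)
    return 0;

  while (n < size && unw_step (&cursor) > 0)
    {
      if (unw_get_reg (&cursor, UNW_REG_IP, &ip) < 0)
        break;
      buffer[n++] = (void *) (uintptr_t) ip;
    }
  return n;
}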

2) I still can't get our library to call into libunwind's backtrace() function. 
If I read the LD_DEBUG=symbols output correctly, libc.so is always searched for 
'backtrace' before libunwind.so. I think we need a distinct symbol. I checked 
this both with our library (LD_PRELOADed into executables, depends on 
libunwind) and with the libunwind 'make check' test programs. In all cases 
'backtrace' comes from libc.so.
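
For 2), what I mean by a distinct symbol is something along these lines. This is
only an illustration of the idea behind the second attached patch, not the patch
itself, and the implementation body is elided:

int
unw_backtrace (void **buffer, int size)
{
  /* ... libunwind-based implementation goes here (elided) ... */
  (void) buffer;
  (void) size;
  return 0;
}

/* Keep the glibc-compatible name as a weak alias.  Even when the dynamic
   linker resolves 'backtrace' from libc.so first, callers can still reach
   the libunwind version explicitly through unw_backtrace(). */
extern int backtrace (void **buffer, int size)
  __attribute__ ((weak, alias ("unw_backtrace")));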

3) Unfortunately the introduction of unw_step() into backtrace() to drop the 
first call level makes it ~15% slower than the version which just calls 
unw_tdep_trace(). I suggest instead we make unw_tdep_trace() internal to 
libunwind, and make it omit the first stack level.
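
The shape I have in mind for 3) is roughly the following. The unw_tdep_trace()
prototype and its in/out size argument are written from memory here, so treat
them as an assumption rather than the actual interface:

#define UNW_LOCAL_ONLY
#include <libunwind.h>

int
backtrace (void **buffer, int size)
{
  unw_cursor_t cursor;
  unw_context_t uc;
  int depth = size;

  unw_getcontext (&uc);
  if (unw_init_local (&cursor, &uc) < 0)
    return 0;

  /* No extra unw_step() here: a libunwind-internal unw_tdep_trace()
     would omit the first stack level (this function) itself, avoiding
     the ~15% penalty mentioned above. */
  if (unw_tdep_trace (&cursor, buffer, &depth) < 0)
    return 0;
  return depth;
}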

In my tests the current libunwind head is back to spending about 100-110 clock 
cycles per stack level. I'd like to get it back to 50-60 clock cycles. I've 
attached the patches I am currently testing: the performance fix patch adapted 
to the current libunwind head, a patch to re-export unw_backtrace() as a 
distinct symbol, and my workaround for 1) and 3) above. With these patches 
heavy stack tracing takes ~57 clock cycles per stack frame level (was ~55 with 
the previous patches).
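
For reference, the numbers come from measurements along these lines. This is
only a minimal sketch of the method, not the benchmark used for the figures
above, and the real measurements are taken from much deeper stacks; it calls
unw_backtrace() as re-exported by the second patch:

#include <stdint.h>
#include <stdio.h>

extern int unw_backtrace (void **buffer, int size);

static inline uint64_t
rdtsc (void)
{
  uint32_t lo, hi;
  __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
  return ((uint64_t) hi << 32) | lo;
}

int
main (void)
{
  enum { ITERS = 100000, MAXDEPTH = 128 };
  void *addrs[MAXDEPTH];
  uint64_t cycles = 0, frames = 0;
  int i;

  for (i = 0; i < ITERS; ++i)
    {
      uint64_t t0 = rdtsc ();
      int depth = unw_backtrace (addrs, MAXDEPTH);
      cycles += rdtsc () - t0;
      if (depth > 0)
        frames += depth;
    }

  printf ("%.1f clock cycles per stack frame level\n",
          (double) cycles / (double) frames);
  return 0;
}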

I get 6 'make check' test failures before and after each of the patches on a 
RHEL5-based system. The failing tests are {G,L}test-exc (no error message), 
{G,L}test-dyn1 (G: FAILURE: expected 13, not 1 frames below signal frame; L: 
segfault, no message), test-setjmp (segfault), and run-ptrace-misc (9 * unw_step() 
returned -8 for ip=<x> (start ip=<x>)). Once in a blue moon lt-test-async-sig 
hangs; the stack is the usual dynamic-linker one, shown in (a).

Note that there's something very fishy about the Gtest-trace.c test. Since with 
these patches tdep_trace() is no longer public, I used a standard unw_step() 
loop (sketched below). For the life of me I can't get the loop to return the 
top-most address on the stack. Both backtrace() and unw_backtrace() return one 
more frame than the unw_step() loop. The unw_step() loop also seems to return 
off-by-one IP addresses. See (b) for verbose output.
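
The loop itself is the standard unw_step() pattern; this is written from memory
rather than pasted from my modified Gtest-trace.c, so the formatting details
differ from the output in (b):

#define UNW_LOCAL_ONLY
#include <libunwind.h>
#include <stdio.h>

void
do_backtrace (void)
{
  unw_cursor_t cursor;
  unw_context_t uc;
  unw_word_t ip;
  int depth = 0;

  unw_getcontext (&uc);
  unw_init_local (&cursor, &uc);

  do
    {
      /* These are the raw IP values from the cursor, which come out one
         higher than the addresses backtrace() prints in (b)... */
      unw_get_reg (&cursor, UNW_REG_IP, &ip);
      printf ("\t #%-3d ip=%#lx\n", depth++, (unsigned long) ip);
    }
  /* ...and the loop ends as soon as unw_step() stops, which is how the
     top-most frame (0x400ad8 in (b)) never gets printed. */
  while (unw_step (&cursor) > 0);
}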

So it looks like we have an API issue with unw_step(): it doesn't return the 
last frame, and it returns off-by-one addresses.

Regards,
Lassi

(a)

$ pstack 31505
#0  0x00000037c1a0d4c4 in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00000037c1a08e35 in _L_lock_1127 () from /lib64/libpthread.so.0
#2  0x00000037c1a08d33 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00000037c0e0d6f0 in _dl_fini () from /lib64/ld-linux-x86-64.so.2
#4  0x00000037c12333a5 in exit () from /lib64/libc.so.6
#5  0x0000000000401314 in sighandler ()
#6  <signal handler called>
#7  0x00000037c1a08d2d in pthread_mutex_lock () from /lib64/libpthread.so.0
#8  0x00000037c13085ba in dl_iterate_phdr () from /lib64/libc.so.6
#9  0x00002b2f158fe195 in _ULx86_64_dwarf_find_proc_info ()
#10 0x00002b2f158fd57f in _ULx86_64_dwarf_find_save_locs ()
#11 0x00002b2f158fdb69 in _ULx86_64_dwarf_step ()
#12 0x00002b2f158f999f in _ULx86_64_step ()
#13 0x0000000000400ef8 in do_backtrace ()
#14 0x00000000004011bf in main ()

(b)
$ build/u/tests/Gtest-trace 1
Normal backtrace:
        normal trace:
         #0   ip=0x400bc8
         #1   ip=0x401371
         #2   ip=0x4013ab
         #3   ip=0x37c121d994

        via backtrace():
         #0   ip=0x400c90
         #1   ip=0x401370
         #2   ip=0x4013aa
         #3   ip=0x37c121d993
         #4   ip=0x400ad8

        via unw_backtrace():
         #0   ip=0x400cf0
         #1   ip=0x401370
         #2   ip=0x4013aa
         #3   ip=0x37c121d993
         #4   ip=0x400ad8

Backtrace across signal handler:
sighandler: got signal 15, sp=0x7fffa1b26fcc @ 37c12306f7
        normal trace:
         #0   ip=0x400bc8
         #1   ip=0x400f79
         #2   ip=0x37c12302d0
         #3   ip=0x37c12306f7
         #4   ip=0x401401
         #5   ip=0x37c121d994

        via backtrace():
         #0   ip=0x400c90
         #1   ip=0x400f78
         #2   ip=0x37c12302cf
         #3   ip=0x37c12306f7
         #4   ip=0x401400
         #5   ip=0x37c121d993
         #6   ip=0x400ad8

        via unw_backtrace():
         #0   ip=0x400cf0
         #1   ip=0x400f78
         #2   ip=0x37c12302cf
         #3   ip=0x37c12306f7
         #4   ip=0x401400
         #5   ip=0x37c121d993
         #6   ip=0x400ad8

Backtrace across signal handler on alternate stack:
sighandler: got signal 15, sp=0x16d2bc3c @ 37c12306f7
        normal trace:
         #0   ip=0x400bc8
         #1   ip=0x400f79
         #2   ip=0x37c12302d0
         #3   ip=0x37c12306f7
         #4   ip=0x40149e
         #5   ip=0x37c121d994

        via backtrace():
         #0   ip=0x400c90
         #1   ip=0x400f78
         #2   ip=0x37c12302cf
         #3   ip=0x37c12306f7
         #4   ip=0x40149d
         #5   ip=0x37c121d993
         #6   ip=0x400ad8

        via unw_backtrace():
         #0   ip=0x400cf0
         #1   ip=0x400f78
         #2   ip=0x37c12302cf
         #3   ip=0x37c12306f7
         #4   ip=0x40149d
         #5   ip=0x37c121d993
         #6   ip=0x400ad8
SUCCESS.

Attachment: 01-performance-optimisations.patch
Description: Binary data

Attachment: 02-alias-backtrace.patch
Description: Binary data

Attachment: 03-tdep-trace-internal.patch
Description: Binary data

