bug-gdb

Debugging multithreaded core dumps


From: Bockenek, Richard
Subject: Debugging multithreaded core dumps
Date: Tue, 26 Jun 2001 21:16:49 -0400

Has anyone had any luck debugging multithreaded (pthreads) core dumps on
Linux?
While running the program under gdb, the thread commands appear to work
normally: "info threads" shows all of the thread IDs, and "thread apply all
bt" shows backtraces of all threads.  However, when I run gdb on a core file
generated by the same program, I can't see thread info except for the
initial thread: "info threads" shows only the initial thread's ID, and
"thread apply all bt" backtraces only the initial thread's stack.

Questions:
1. My highest priority right now is to get a backtrace of the thread that
was running at the time of the crash.  Can anyone explain how to do this
using gdb or some other debugger?
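For reference, this is roughly what I'm doing with the core ("prog" and
"core" stand in for the real file names):

```
$ gdb ./prog core           # give gdb the executable *and* the core
(gdb) info threads          # here: only the initial thread is listed
(gdb) thread apply all bt   # backtraces only the initial thread's stack
```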

2. I would also like a general understanding of the differences between
running a program under gdb and normal program execution: e.g., any
differences in static data initialization, stack memory initialization,
heap management, or anything else that is relevant.  I ask because the
program usually crashes when soaked overnight, but does not crash when run
under gdb for days.

3. What gets written to the core file?  I noticed it's quite a bit smaller
than the ELF file and only about the size of the individual threads' code +
data + stack sizes (as displayed by top).
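In case it helps others answer, readelf (from GNU binutils) can list what the
kernel actually dumped; a sketch, assuming a recent enough readelf:

```
$ readelf -l core    # PT_LOAD program headers: the dumped memory regions
$ readelf -n core    # note segments, including register state
```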

4. While debugging, gdb sometimes consumes 80% or more of the CPU.  I've
never seen the program's %CPU usage remotely close to this when running
normally.

I guess that's enough questions for now.  Here are some particulars:

Linux   2.2.12 (Redhat 6.1)
egcs    1.1.2 
ld      2.9.1 
gdb     4.18
prog    ELF 32-bit LSB executable, Intel 80386, version 1, dynamically
linked 
        (uses shared libs), not stripped
core    ELF 32-bit LSB core file (signal 11), Intel 80386, version 1

Any help is greatly appreciated,
Richard



