Re: [Gluster-devel] client crash report


From: Pavan T C
Subject: Re: [Gluster-devel] client crash report
Date: Tue, 9 Aug 2011 22:43:53 +0530
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.17) Gecko/20110424 Lightning/1.0b2 Thunderbird/3.1.10

On Tuesday 09 August 2011 06:17 PM, Emmanuel Dreyfus wrote:
> Hi
>
> Here is a 3.2.2 client crash obtained after a few hours of running 4
> concurrent tar -xvzf jobs:
>
> Program terminated with signal 11, Segmentation fault.
> #0  0xbbb63960 in pthread_mutex_lock () from /usr/lib/libpthread.so.0

It might be obvious, but did you see which address caused the segfault?
You could find out by disassembling the code around 0xbbb63960.
At your gdb prompt, try:

(gdb) x/20i 0xbbb63960
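
If the disassembly alone doesn't make it obvious, the register and stack state of the crashing frame usually will. A minimal sketch, assuming a 32-bit x86 client (the 0xbb... addresses and /usr/lib/libc.so.12 suggest NetBSD/i386), where the mutex pointer is passed on the stack:

(gdb) frame 0
(gdb) info registers
(gdb) x/8wx $esp

If the pointer handed to pthread_mutex_lock turns out to be NULL or obviously bogus, that already points towards a freed or never-initialized ioc_inode.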

Also, print the ioc_inode structure that is passed to ioc_inode_destroy and check whether the values of its members make sense. You might have NULL values where you don't expect them; that could throw some light on the root cause.

(gdb) p *((ioc_inode_t *)0xb5af7100)
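
If the full dump is too noisy, the members worth checking first are the table pointer and the inode_lock, since the pthread_mutex_lock in frame #0 is most likely taking either the table lock or the per-inode lock. The member names below are an assumption based on the io-cache headers, so please verify them against your 3.2.2 tree:

(gdb) p ((ioc_inode_t *)0xb5af7100)->table
(gdb) p *((ioc_inode_t *)0xb5af7100)->table
(gdb) p &((ioc_inode_t *)0xb5af7100)->inode_lock

A NULL or garbage table pointer there would suggest the ioc_inode was already torn down (or never fully initialized) by the time ioc_forget ran.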

Pavan

> (gdb) bt
> #0  0xbbb63960 in pthread_mutex_lock () from /usr/lib/libpthread.so.0
> #1  0xba9308cb in ioc_inode_destroy (ioc_inode=0xb5af7100) at ioc-inode.c:229
> #2  0xba92a994 in ioc_forget (this=0xbb9c5000, inode=0xb986b3dc)
>      at io-cache.c:316
> #3  0xbbbaf32c in inode_table_prune (table=<value optimized out>)
>      at inode.c:330
> #4  0xbbbaf5e3 in inode_unref (inode=0xb986b3dc) at inode.c:457
> #5  0xbbb9dbf8 in loc_wipe (loc=0xbb9a800c) at xlator.c:1641
> #6  0xbba1b9b0 in free_fuse_state (state=0xbb9a8000) at fuse-helpers.c:86
> #7  0xbba2d979 in fuse_getattr (this=0xbb95f000, finh=0xbb9631c0,
>      msg=0xbb9631e8) at fuse-bridge.c:498
> #8  0xbba23186 in fuse_thread_proc (data=0xbb95f000) at fuse-bridge.c:3220
> #9  0xbbb6722b in pthread_setcancelstate () from /usr/lib/libpthread.so.0
> #10 0xbba97090 in swapcontext () from /usr/lib/libc.so.12
>
> A quick look at the code shows nothing obvious. I am posting it just in
> case it rings a bell for someone.




