
Re: [PATCH] monitor: Fix order in monitor_cleanup()


From: Markus Armbruster
Subject: Re: [PATCH] monitor: Fix order in monitor_cleanup()
Date: Mon, 19 Oct 2020 11:19:29 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)

Kevin Wolf <kwolf@redhat.com> writes:

> Am 14.10.2020 um 19:20 hat Alex Bennée geschrieben:
>> 
>> Kevin Wolf <kwolf@redhat.com> writes:
>> 
>> > We can only destroy Monitor objects after we're sure that they are not
>> > in use by the dispatcher coroutine any more. This fixes crashes like the
>> > following where we tried to destroy a monitor mutex while the dispatcher
>> > coroutine still holds it:
>> >
>> >  (gdb) bt
>> >  #0  0x00007fe541cf4bc5 in raise () at /lib64/libc.so.6
>> >  #1  0x00007fe541cdd8a4 in abort () at /lib64/libc.so.6
>> >  #2  0x000055c24e965327 in error_exit (err=16, msg=0x55c24eead3a0 <__func__.33> "qemu_mutex_destroy") at ../util/qemu-thread-posix.c:37
>> >  #3  0x000055c24e9654c3 in qemu_mutex_destroy (mutex=0x55c25133e0f0) at ../util/qemu-thread-posix.c:70
>> >  #4  0x000055c24e7cfaf1 in monitor_data_destroy_qmp (mon=0x55c25133dfd0) at ../monitor/qmp.c:439
>> >  #5  0x000055c24e7d23bc in monitor_data_destroy (mon=0x55c25133dfd0) at ../monitor/monitor.c:615
>> >  #6  0x000055c24e7d253a in monitor_cleanup () at ../monitor/monitor.c:644
>> >  #7  0x000055c24e6cb002 in qemu_cleanup () at ../softmmu/vl.c:4549
>> >  #8  0x000055c24e0d259b in main (argc=24, argv=0x7ffff66b0d58, envp=0x7ffff66b0e20) at ../softmmu/main.c:51
>> >
>> > Reported-by: Alex Bennée <alex.bennee@linaro.org>
>> > Signed-off-by: Kevin Wolf <kwolf@redhat.com>
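
For readers of the archive, here is a minimal sketch of the ordering the commit message describes. It uses plain pthreads as a stand-in for QEMU's dispatcher coroutine and monitor lock, so it is an analogy rather than the actual patch; all type, field, and function names other than the pthread API are illustrative. The point is the same: signal shutdown and wait for the worker to finish before destroying the state (and mutex) it uses. Destroying the mutex first is what produces the abort in the backtrace above (error_exit with err=16, i.e. EBUSY from qemu_mutex_destroy).

/*
 * Standalone analogy, not QEMU code: the general rule behind the fix is
 * "stop the worker before destroying the state it uses".  Destroying a
 * mutex that may still be held is undefined behaviour; QEMU's
 * qemu_mutex_destroy() turns the resulting error into the abort shown in
 * the backtrace.  All names here are illustrative.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

typedef struct {
    pthread_mutex_t lock;   /* stands in for the monitor's queue lock */
    bool shutdown;          /* stands in for the dispatcher shutdown flag */
} FakeMonitor;

static void *dispatcher(void *opaque)
{
    FakeMonitor *mon = opaque;

    for (;;) {
        pthread_mutex_lock(&mon->lock);
        bool done = mon->shutdown;
        /* a real dispatcher would pop and handle a queued request here */
        pthread_mutex_unlock(&mon->lock);
        if (done) {
            return NULL;
        }
        usleep(1000);
    }
}

int main(void)
{
    FakeMonitor mon = { .lock = PTHREAD_MUTEX_INITIALIZER };
    pthread_t tid;

    pthread_create(&tid, NULL, dispatcher, &mon);

    /* Correct order, as in the patch: shut the dispatcher down and wait
     * for it to finish *before* tearing down what it uses. */
    pthread_mutex_lock(&mon.lock);
    mon.shutdown = true;
    pthread_mutex_unlock(&mon.lock);
    pthread_join(tid, NULL);

    /* Only now is it safe to destroy the lock; doing this before the
     * join is the unsafe ordering the patch removes. */
    pthread_mutex_destroy(&mon.lock);

    printf("clean shutdown\n");
    return 0;
}

Build with something like cc -pthread sketch.c. The real patch deals with a coroutine rather than a thread, but the destroy-only-after-quiesce ordering is the same idea.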
>> 
>> LGTM:
>> 
>> Tested-by: Alex Bennée <alex.bennee@linaro.org>
>> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
>> 
>> Whose tree is going to take it?
>
> In theory Markus, but he's on vacation this week. As this seems to
> affect multiple people and we want to unblock testing quickly, I'll just
> take it through mine.

Thanks!



