
Re: Race with atexit functions in system emulation


From: Alex Bennée
Subject: Re: Race with atexit functions in system emulation
Date: Thu, 02 Jul 2020 08:49:58 +0100
User-agent: mu4e 1.5.3; emacs 28.0.50

Pavel Dovgalyuk <pavel.dovgaluk@gmail.com> writes:

> Is it true that semihosting can be used to access (read and write) host
> files from the guest?

It can - but in these test cases we are only using semihosting for
console output and for signalling an exit code at the end of the test. I
don't think that gets in the way of record/replay (aside from the exit
race described below).

> In such a case it can't be used with record/replay, for the following
> reasons:
> 1. We don't preserve modified files, so the execution result may change
> in future runs.
> 2. Even in the case when all the files are read-only, semihosting FDs
> can't be saved, so it can't be used with reverse debugging.

This raises a wider question: what is the best way to indicate support
(or lack of support) for a particular device to the user? Do we need a
"replay aware" list or annotation?

>
> On Wed, Jul 1, 2020 at 2:06 PM Alex Bennée <alex.bennee@linaro.org> wrote:
>
>> Hi,
>>
>> While running some TSAN tests I ran into the following race condition:
>>
>>   WARNING: ThreadSanitizer: data race (pid=1605)
>>     Write of size 4 at 0x55c437814d98 by thread T2 (mutexes: write M619):
>>       #0 replay_finish /home/alex.bennee/lsrc/qemu.git/replay/replay.c:393:17 (qemu-system-aarch64+0xc55116)
>>       #1 at_exit_wrapper() <null> (qemu-system-aarch64+0x368988)
>>       #2 handle_semihosting /home/alex.bennee/lsrc/qemu.git/target/arm/helper.c:9740:25 (qemu-system-aarch64+0x5e75b0)
>>       #3 arm_cpu_do_interrupt /home/alex.bennee/lsrc/qemu.git/target/arm/helper.c:9788:9 (qemu-system-aarch64+0x5e75b0)
>>       #4 cpu_handle_exception /home/alex.bennee/lsrc/qemu.git/accel/tcg/cpu-exec.c:504:13 (qemu-system-aarch64+0x4a4690)
>>       #5 cpu_exec /home/alex.bennee/lsrc/qemu.git/accel/tcg/cpu-exec.c:712:13 (qemu-system-aarch64+0x4a4690)
>>       #6 tcg_cpu_exec /home/alex.bennee/lsrc/qemu.git/cpus.c:1452:11 (qemu-system-aarch64+0x441157)
>>       #7 qemu_tcg_rr_cpu_thread_fn /home/alex.bennee/lsrc/qemu.git/cpus.c:1554:21 (qemu-system-aarch64+0x441157)
>>       #8 qemu_thread_start /home/alex.bennee/lsrc/qemu.git/util/qemu-thread-posix.c:521:9 (qemu-system-aarch64+0xe38bd0)
>>
>>     Previous read of size 4 at 0x55c437814d98 by main thread:
>>       #0 replay_mutex_lock /home/alex.bennee/lsrc/qemu.git/replay/replay-internal.c:217:9 (qemu-system-aarch64+0xc55c03)
>>       #1 os_host_main_loop_wait /home/alex.bennee/lsrc/qemu.git/util/main-loop.c:239:5 (qemu-system-aarch64+0xe5af4f)
>>       #2 main_loop_wait /home/alex.bennee/lsrc/qemu.git/util/main-loop.c:518:11 (qemu-system-aarch64+0xe5af4f)
>>       #3 qemu_main_loop /home/alex.bennee/lsrc/qemu.git/softmmu/vl.c:1664:9 (qemu-system-aarch64+0x5ce806)
>>       #4 main /home/alex.bennee/lsrc/qemu.git/softmmu/main.c:49:5 (qemu-system-aarch64+0xdbf8b7)
>>
>>     Location is global 'replay_mode' of size 4 at 0x55c437814d98 (qemu-system-aarch64+0x0000021a9d98)
>>
>> Basically we have a clash between semihosting wanting to do an exit
>> (which is useful for reporting test status) and the atexit() handlers
>> we register for cleanup, which race with the main loop still taking
>> the replay mutex as we go down. Ultimately I think this is harmless as
>> we are shutting down anyway, but I was wondering how we would clean
>> something like this up?
>>
>> Should we maybe defer the exit until the main loop has exited, via
>> some sort of vmstop? Or could we have an atexit handler that kills the
>> main thread?
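>>
>> One very rough sketch of the first option, assuming the semihosting
>> exit path could request a shutdown rather than calling exit() directly
>> (the headers are approximate and how the status gets back out of
>> main() is hand-waved here):
>>
>>   #include "qemu/osdep.h"
>>   #include "sysemu/runstate.h"
>>
>>   /* illustrative only: plumbing the status out is still TBD */
>>   static int semihosting_exit_status;
>>
>>   static void semihosting_request_exit(int code)
>>   {
>>       semihosting_exit_status = code;
>>
>>       /* ask the main loop to wind things down on the main thread
>>        * instead of running atexit() handlers from the vCPU thread */
>>       qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
>>   }
>>
>> That keeps the teardown on the main thread, but it does mean finding a
>> way to carry the guest's exit code through the normal shutdown path.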
>>
>> I should point out that linux-user has a fairly heavyweight
>> preexit_cleanup function to do this sort of tidying up. atexit() is
>> also fairly heavily used by other devices in system emulation.
>>
>> Ideas?
>>
>> --
>> Alex Bennée
>>


-- 
Alex Bennée
