qemu-devel

Re: qemu fuzz crash in virtio_net_queue_reset()


From: Alexander Bulekov
Subject: Re: qemu fuzz crash in virtio_net_queue_reset()
Date: Thu, 21 Mar 2024 15:24:03 -0400

On 240321 2208, Vladimir Sementsov-Ogievskiy wrote:
> On 21.03.24 18:01, Alexander Bulekov wrote:
> > On 240320 0024, Vladimir Sementsov-Ogievskiy wrote:
> > > Hi all!
> > > 
> > >  From fuzzing I've got a fuzz input which produces the following crash:
> > > 
> > > qemu-fuzz-x86_64: ../hw/net/virtio-net.c:134: void 
> > > flush_or_purge_queued_packets(NetClientState *): Assertion 
> > > `!virtio_net_get_subqueue(nc)->async_tx.elem' failed.
> > > ==2172308== ERROR: libFuzzer: deadly signal
> > >      #0 0x5bd8c748b5a1 in __sanitizer_print_stack_trace 
> > > (/home/vsementsov/work/src/qemu/yc7-fuzz/build/qemu-fuzz-x86_64+0x26f05a1)
> > >  (BuildId: b41827f440fd9feaa98c667dbdcc961abb2799ae)
> > >      #1 0x5bd8c73fde38 in fuzzer::PrintStackTrace() 
> > > (/home/vsementsov/work/src/qemu/yc7-fuzz/build/qemu-fuzz-x86_64+0x2662e38)
> > >  (BuildId: b41827f440fd9feaa98c667dbdcc961abb2799ae)
> > >      #2 0x5bd8c73e38b3 in fuzzer::Fuzzer::CrashCallback() 
> > > (/home/vsementsov/work/src/qemu/yc7-fuzz/build/qemu-fuzz-x86_64+0x26488b3)
> > >  (BuildId: b41827f440fd9feaa98c667dbdcc961abb2799ae)
> > >      #3 0x739eec84251f  (/lib/x86_64-linux-gnu/libc.so.6+0x4251f) 
> > > (BuildId: c289da5071a3399de893d2af81d6a30c62646e1e)
> > >      #4 0x739eec8969fb in __pthread_kill_implementation 
> > > nptl/./nptl/pthread_kill.c:43:17
> > >      #5 0x739eec8969fb in __pthread_kill_internal 
> > > nptl/./nptl/pthread_kill.c:78:10
> > >      #6 0x739eec8969fb in pthread_kill nptl/./nptl/pthread_kill.c:89:10
> > >      #7 0x739eec842475 in gsignal signal/../sysdeps/posix/raise.c:26:13
> > >      #8 0x739eec8287f2 in abort stdlib/./stdlib/abort.c:79:7
> > >      #9 0x739eec82871a in __assert_fail_base assert/./assert/assert.c:92:3
> > >      #10 0x739eec839e95 in __assert_fail assert/./assert/assert.c:101:3
> > >      #11 0x5bd8c995d9e2 in flush_or_purge_queued_packets 
> > > /home/vsementsov/work/src/qemu/yc7-fuzz/build/../hw/net/virtio-net.c:134:5
> > >      #12 0x5bd8c9918a5f in virtio_net_queue_reset 
> > > /home/vsementsov/work/src/qemu/yc7-fuzz/build/../hw/net/virtio-net.c:563:5
> > >      #13 0x5bd8c9b724e5 in virtio_queue_reset 
> > > /home/vsementsov/work/src/qemu/yc7-fuzz/build/../hw/virtio/virtio.c:2492:9
> > >      #14 0x5bd8c8bcfb7c in virtio_pci_common_write 
> > > /home/vsementsov/work/src/qemu/yc7-fuzz/build/../hw/virtio/virtio-pci.c:1372:13
> > >      #15 0x5bd8c9e19cf3 in memory_region_write_accessor 
> > > /home/vsementsov/work/src/qemu/yc7-fuzz/build/../softmmu/memory.c:492:5
> > >      #16 0x5bd8c9e19631 in access_with_adjusted_size 
> > > /home/vsementsov/work/src/qemu/yc7-fuzz/build/../softmmu/memory.c:554:18
> > >      #17 0x5bd8c9e17f3c in memory_region_dispatch_write 
> > > /home/vsementsov/work/src/qemu/yc7-fuzz/build/../softmmu/memory.c:1514:16
> > >      #18 0x5bd8c9ea3bbe in flatview_write_continue 
> > > /home/vsementsov/work/src/qemu/yc7-fuzz/build/../softmmu/physmem.c:2825:23
> > >      #19 0x5bd8c9e91aab in flatview_write 
> > > /home/vsementsov/work/src/qemu/yc7-fuzz/build/../softmmu/physmem.c:2867:12
> > >      #20 0x5bd8c9e91568 in address_space_write 
> > > /home/vsementsov/work/src/qemu/yc7-fuzz/build/../softmmu/physmem.c:2963:18
> > >      #21 0x5bd8c74c8a90 in __wrap_qtest_writeq 
> > > /home/vsementsov/work/src/qemu/yc7-fuzz/build/../tests/qtest/fuzz/qtest_wrappers.c:187:9
> > >      #22 0x5bd8c74dc4da in op_write 
> > > /home/vsementsov/work/src/qemu/yc7-fuzz/build/../tests/qtest/fuzz/generic_fuzz.c:487:13
> > >      #23 0x5bd8c74d942e in generic_fuzz 
> > > /home/vsementsov/work/src/qemu/yc7-fuzz/build/../tests/qtest/fuzz/generic_fuzz.c:714:17
> > >      #24 0x5bd8c74c016e in LLVMFuzzerTestOneInput 
> > > /home/vsementsov/work/src/qemu/yc7-fuzz/build/../tests/qtest/fuzz/fuzz.c:152:5
> > >      #25 0x5bd8c73e4e43 in fuzzer::Fuzzer::ExecuteCallback(unsigned char 
> > > const*, unsigned long) 
> > > (/home/vsementsov/work/src/qemu/yc7-fuzz/build/qemu-fuzz-x86_64+0x2649e43)
> > >  (BuildId: b41827f440fd9feaa98c667dbdcc961abb2799ae)
> > >      #26 0x5bd8c73cebbf in fuzzer::RunOneTest(fuzzer::Fuzzer*, char 
> > > const*, unsigned long) 
> > > (/home/vsementsov/work/src/qemu/yc7-fuzz/build/qemu-fuzz-x86_64+0x2633bbf)
> > >  (BuildId: b41827f440fd9feaa98c667dbdcc961abb2799ae)
> > >      #27 0x5bd8c73d4916 in fuzzer::FuzzerDriver(int*, char***, int 
> > > (*)(unsigned char const*, unsigned long)) 
> > > (/home/vsementsov/work/src/qemu/yc7-fuzz/build/qemu-fuzz-x86_64+0x2639916)
> > >  (BuildId: b41827f440fd9feaa98c667dbdcc961abb2799ae)
> > >      #28 0x5bd8c73fe732 in main 
> > > (/home/vsementsov/work/src/qemu/yc7-fuzz/build/qemu-fuzz-x86_64+0x2663732)
> > >  (BuildId: b41827f440fd9feaa98c667dbdcc961abb2799ae)
> > >      #29 0x739eec829d8f in __libc_start_call_main 
> > > csu/../sysdeps/nptl/libc_start_call_main.h:58:16
> > >      #30 0x739eec829e3f in __libc_start_main csu/../csu/libc-start.c:392:3
> > >      #31 0x5bd8c73c9484 in _start 
> > > (/home/vsementsov/work/src/qemu/yc7-fuzz/build/qemu-fuzz-x86_64+0x262e484)
> > >  (BuildId: b41827f440fd9feaa98c667dbdcc961abb2799ae)
> > > 
> > > 
> > > 
> > 
> > Hello Vladimir,
> > This looks like a similar crash.
> > https://gitlab.com/qemu-project/qemu/-/issues/1451
> > 
> > That issue has a qtest reproducer, so the crash can be reproduced
> > without running the fuzzer.
> 
> Right, looks very similar, thanks! Filed a year ago and still no news...
> It's not encouraging.
> 
> > 
> > The fuzzer should run fine under gdb, e.g.:
> > gdb ./qemu-fuzz-i386
> > r  --fuzz-target=generic-fuzz-virtio-net-pci-slirp 
> > ~/generic-fuzz-virtio-net-pci-slirp.crash-7707e14adea64d129be88faeb6ca57dab6118ec5
> > 
> 
> Yes, I tried this. But somehow when it crashes, qemu-fuzz just prints the
> backtrace and exits, so I can't debug the crash in gdb as usual. But anyway,
> the reproducer in GitLab is a better place to start.

Ah, that might be because of AddressSanitizer. It might help to set a
breakpoint on __asan::ReportGenericError per:
https://github.com/google/sanitizers/wiki/AddressSanitizerAndDebugger
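
For example (this particular crash is an assert() failure, so a breakpoint
on abort() would also catch it before libFuzzer's crash handler exits):

gdb ./qemu-fuzz-x86_64
break __asan::ReportGenericError
break abort
r --fuzz-target=generic-fuzz-virtio-net-pci-slirp \
    ~/generic-fuzz-virtio-net-pci-slirp.crash-7707e14adea64d129be88faeb6ca57dab6118ec5
bt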

-Alex

> 
> > There are instructions in docs/devel/fuzzing.rst for building
> > reproducers from fuzzer inputs in section "Building Crash Reproducers",
> > however those instructions might not always work and the input might
> > require some further tweaks to ensure that DMA activity does not extend
> > past the physical memory limits of a normal qemu system.
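
From memory, those steps look roughly like this (the exact environment
variable and script names should be double-checked against the doc):

# replay the crashing input while logging a serialized qtest trace
QTEST_LOG=1 FUZZ_SERIALIZE_QTEST=1 ./build/qemu-fuzz-x86_64 \
    --fuzz-target=generic-fuzz-virtio-net-pci-slirp \
    ../generic-fuzz-virtio-net-pci-slirp.crash-7707e14adea64d129be88faeb6ca57dab6118ec5 \
    &> /tmp/trace
# reorder the logged commands into a trace that replays linearly
scripts/oss-fuzz/reorder_fuzzer_qtest_trace.py /tmp/trace > /tmp/reordered
# replay against a regular (non-fuzzer) build; the machine arguments
# must match whatever the fuzz target configures
cat /tmp/reordered | ./qemu-system-x86_64 ... -qtest stdio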
> > 
> > Let me know if I can provide any other info
> > -Alex
> > 
> > > How to reproduce:
> > > ./configure --target-list=x86_64-softmmu --enable-debug --disable-docs 
> > > --cc=clang --cxx=clang++ --enable-fuzzing --enable-sanitizers 
> > > --enable-slirp
> > > make -j20 qemu-fuzz-x86_64
> > > ./build/qemu-fuzz-x86_64 --fuzz-target=generic-fuzz-virtio-net-pci-slirp 
> > > ../generic-fuzz-virtio-net-pci-slirp.crash-7707e14adea64d129be88faeb6ca57dab6118ec5
> > > 
> > > 
> > > This ...crash-7707... file is attached.
> > > 
> > > git-bisect points to 7dc6be52f4ead25e7da8fb758900bdcb527996f7
> > > "virtio-net: support queue reset" as the first bad commit. That is the
> > > commit which introduces the virtio_net_queue_reset() function.
> > > 
> > > 
> > > I'm a newbie at qemu fuzzing and don't know the virtio-net code, so I
> > > have no idea how to debug this further. I don't even know how to get a
> > > normal coredump file to open in gdb; the fuzzing process doesn't
> > > produce one...
> > > 
> > > 
> > > I tried searching for "async_tx.elem" in the git log and found two
> > > commits fixing similar crashes:
> > > 
> > >    bc5add1dadcc140fef9af4fe215167e796cd1a58 "vhost-vdpa: fix assert 
> > > !virtio_net_get_subqueue(nc)->async_tx.elem in virtio_net_reset"
> > > and
> > > 
> > >    5fe19fb81839ea42b592b409f725349cf3c73551 "net: use peer when purging 
> > > queue in qemu_flush_or_purge_queue_packets()"
> > > 
> > > but I failed to get a helpful idea out of them.
> > > 
> > > 
> > > 
> > > Could someone please help with this?
> > > 
> > > 
> > > -- 
> > > Best regards,
> > > Vladimir
> > 
> > 
> 
> -- 
> Best regards,
> Vladimir
> 


