
Re: [Bug 1886362] [NEW] Heap use-after-free in lduw_he_p through e1000e_write_to_rx_buffers


From: Jason Wang
Subject: Re: [Bug 1886362] [NEW] Heap use-after-free in lduw_he_p through e1000e_write_to_rx_buffers
Date: Tue, 14 Jul 2020 16:56:05 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.10.0


On 2020/7/10 6:37 PM, Li Qiang wrote:
Paolo Bonzini <pbonzini@redhat.com> wrote on Friday, July 10, 2020 at 1:36 AM:
On 09/07/20 17:51, Li Qiang wrote:
Maybe we should check whether the address is a RAM address in 'dma_memory_rw'?
But that is a hot path, so I'm not sure it is the right approach. I hope for more discussion.
Half of the purpose of dma-helpers.c (as opposed to the address_space_*
functions in exec.c) is exactly to support writes to MMIO.
Hi Paolo,

Could you please explain more about this (supporting writes to MMIO)?
I can only see the DMA helpers dealing with scatter-gather DMA, nothing related to MMIO.


Please refer to docs/devel/memory.rst.

The motivation of the memory API is to support modeling different memory regions. DMA to MMIO is allowed in hardware, so QEMU should emulate this behaviour.
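
To make the DMA-to-MMIO point concrete, here is a minimal sketch. It is
not the actual e1000e code: the descriptor layout and the "my_" names are
hypothetical, and it assumes the pci_dma_write() helper from hw/pci/pci.h
as found in QEMU 5.x.

#include "qemu/osdep.h"
#include "hw/pci/pci.h"

/* Hypothetical RX descriptor: the guest fully controls buf_addr. */
typedef struct {
    uint64_t buf_addr;
    uint16_t len;
} MyRxDescriptor;

static void my_write_to_rx_buffer(PCIDevice *dev, const MyRxDescriptor *d,
                                  const uint8_t *data, dma_addr_t size)
{
    /*
     * buf_addr may resolve to RAM (the normal case) or to an MMIO
     * region -- possibly one of this device's own BARs.  In the MMIO
     * case the memory API dispatches to that region's write callback,
     * so the device model can be re-entered before pci_dma_write()
     * returns.  Nothing in the helper restricts the target to RAM.
     */
    pci_dma_write(dev, d->buf_addr, data, size);
}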




This is especially true of dma_blk_io, which takes care of doing the DMA via a
bounce buffer, possibly in multiple steps and even blocking due to
cpu_register_map_client.

For dma_memory_rw this is not needed, so it only needs to handle
QEMUSGList, but I think the design should be the same.
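
As an illustration of the scatter-gather path referred to here, a sketch of
how a storage-style device would hand a guest address to dma-helpers.c and
let the helper worry about MMIO targets. Signatures are assumed to match
QEMU 5.x's include/sysemu/dma.h; the "my_" names are hypothetical.

#include "qemu/osdep.h"
#include "sysemu/dma.h"
#include "sysemu/block-backend.h"
#include "block/block.h"

static void my_dma_complete(void *opaque, int ret)
{
    QEMUSGList *sg = opaque;
    /* raise a completion interrupt, advance the ring, ... */
    qemu_sglist_destroy(sg);
}

static void my_start_read(BlockBackend *blk, DeviceState *dev,
                          AddressSpace *as, QEMUSGList *sg,
                          dma_addr_t guest_buf, dma_addr_t len,
                          int64_t offset)
{
    qemu_sglist_init(sg, dev, 1 /* alloc hint */, as);
    qemu_sglist_add(sg, guest_buf, len);   /* guest address, may be MMIO */

    /*
     * dma_blk_read() maps each sg entry; entries that cannot be mapped
     * directly (e.g. MMIO) go through a bounce buffer, possibly in
     * several steps, blocking via cpu_register_map_client() if needed.
     */
    dma_blk_read(blk, sg, offset, BDRV_SECTOR_SIZE, my_dma_complete, sg);
}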

However, this is indeed a nightmare for re-entrancy.  The easiest
solution is to delay processing of descriptors to a bottom half whenever
MMIO is doing something complicated.  This is also better for latency
because it will free the vCPU thread more quickly and leave the work to
the I/O thread.
Do you mean we define a per-e1000e bottom half, and trigger this bh in the
MMIO write or packet-send path?


Probably a TX bh.
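
To make this concrete, a minimal sketch of such a per-device TX bh using the
generic qemu_bh_new()/qemu_bh_schedule() API. The MyNICState type and the
"mynic_" names are hypothetical; this is not the actual e1000e patch.

#include "qemu/osdep.h"
#include "qemu/main-loop.h"
#include "exec/hwaddr.h"

typedef struct MyNICState {
    QEMUBH *tx_bh;          /* created once, e.g. at realize time */
    /* ... TX ring registers, etc. ... */
} MyNICState;

/*
 * Runs later in the main loop, outside the vCPU's MMIO dispatch.  Bottom
 * halves are serialized: scheduling an already-pending bh coalesces into
 * a single run, so descriptor processing cannot nest even if the DMA it
 * performs bounces back into our own MMIO handlers.
 */
static void mynic_tx_bh(void *opaque)
{
    MyNICState *s = opaque;
    /* walk the TX ring, DMA descriptors/payload, send packets ... */
    (void)s;
}

/* MMIO write handler for the "transmit tail" register: do not process
 * descriptors synchronously, just kick the bh. */
static void mynic_mmio_write(void *opaque, hwaddr addr,
                             uint64_t val, unsigned size)
{
    MyNICState *s = opaque;
    /* latch val into the register file ... */
    qemu_bh_schedule(s->tx_bh);
}

static void mynic_realize_bh(MyNICState *s)
{
    s->tx_bh = qemu_bh_new(mynic_tx_bh, s);
}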


So even if we trigger the MMIO write again, the second bh will not be
executed?


The bh is serialized, so there is no re-entrancy issue.

Thanks




Thanks,
Li Qiang

Paolo




