On 1/6/23 05:18, Akihiko Odaki wrote:
Recently MemReentrancyGuard was added to DeviceState to record that the
device is engaging in I/O. The network device backend needs to update it
when delivering a packet to a device.
This implementation follows what the bottom half does, but it does not
add a tracepoint for the case where the network device backend starts
delivering a packet to a device that is already engaged in I/O. This is
because such reentrancy happens frequently with
qemu_flush_queued_packets() and is insignificant.
This series consists of two patches. The first patch makes a bulk change
to add a new parameter to qemu_new_nic() and does not contain behavioral
changes.
The second patch actually implements MemReentrancyGuard update.
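For context, the bottom-half path the cover letter refers to wraps the
callback with the guard roughly like this (a paraphrased sketch of
util/async.c's aio_bh_call(); exact field and trace-point names may
differ):

    void aio_bh_call(QEMUBH *bh)
    {
        bool last_engaged_in_io = false;

        if (bh->reentrancy_guard) {
            /* Remember the previous state so nested calls can restore it. */
            last_engaged_in_io = bh->reentrancy_guard->engaged_in_io;
            if (last_engaged_in_io) {
                /* Re-entrant call into a device already doing I/O. */
                trace_reentrant_aio(bh->ctx, bh->name);
            }
            bh->reentrancy_guard->engaged_in_io = true;
        }

        bh->cb(bh->opaque);

        if (bh->reentrancy_guard) {
            bh->reentrancy_guard->engaged_in_io = last_engaged_in_io;
        }
    }

Presumably the net patch does the same dance around the NetReceive*
callback, minus the trace-point, per the cover letter.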
/me looks at the 'net' API.
So the NetReceive* handlers from NetClientInfo process the HW NIC
data flow, independently of the CPUs.
IIUC MemReentrancyGuard is supposed to protect against reentrancy abuse
from CPUs.
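If I read softmmu/memory.c right, the guard is only consulted on the
MemoryRegion dispatch path, i.e. in access_with_adjusted_size(), along
these lines (paraphrased from memory, the exact exclusion list may be
off):

    /* Do not allow more than one simultaneous access to
     * a device's IO regions. */
    if (mr->dev && !mr->disable_reentrancy_guard &&
        !mr->ram_device && !mr->ram) {
        if (mr->dev->mem_reentrancy_guard.engaged_in_io) {
            warn_report_once("Blocked re-entrant IO on MemoryRegion: %s",
                             memory_region_name(mr));
            return MEMTX_ACCESS_ERROR;
        }
        mr->dev->mem_reentrancy_guard.engaged_in_io = true;
        reentrancy_guard = true;
    }

So the check is tied to the MemoryRegion's owner device, not to where
the access originated.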
NetReceive* handlers aren't restricted to any particular API; they
just consume a blob of data. Looking at e1000_receive_iov(), this data
is filled into memory using the pci_dma_rw() API. pci_dma_rw() gets
the AddressSpace to use by calling pci_get_address_space(), which
returns PCIDevice::bus_master_as. Then we use dma_memory_rw(), followed
by address_space_rw(). Beh, I fail to see why there are reentrancy
checks on this NIC DMA HW path.
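To spell out the chain I'm looking at (quoted from the headers by
memory, so roughly what include/hw/pci/pci.h says, not verbatim):

    static inline AddressSpace *pci_get_address_space(PCIDevice *dev)
    {
        return &dev->bus_master_as;     /* per-device DMA AS, not a CPU AS */
    }

    static inline MemTxResult pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
                                         void *buf, dma_addr_t len,
                                         DMADirection dir, MemTxAttrs attrs)
    {
        return dma_memory_rw(pci_get_address_space(dev), addr, buf, len,
                             dir, attrs);
    }

and dma_memory_rw() eventually lands in address_space_rw() on that
bus_master_as.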
Maybe the MemoryRegion API isn't the correct place to check for
reentrancy abuse and we should do that at the AddressSpace level,
keeping DMA ASes clear and only protecting CPU ASes?
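Hand-waving, but something along these lines (purely hypothetical:
neither the flag nor the placement below exists today):

    /* Hypothetical: tag CPU-owned ASes when they are created ... */
    as->cpu_originated = true;          /* e.g. in cpu_address_space_init() */

    /* ... and arm the guard where the AS is still known, e.g. in
     * address_space_rw() / flatview dispatch, instead of per MemoryRegion: */
    if (as->cpu_originated && mr->dev &&
        mr->dev->mem_reentrancy_guard.engaged_in_io) {
        /* report / block the re-entrant access */
    }

That would keep bus_master_as (and other DMA ASes) out of the picture
while still catching CPU-triggered reentrancy.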