Re: [RFC/PATCH v0 12/12] gunyah: Documentation


From: Alex Bennée
Subject: Re: [RFC/PATCH v0 12/12] gunyah: Documentation
Date: Wed, 18 Oct 2023 16:54:08 +0100
User-agent: mu4e 1.11.22; emacs 29.1.50

Srivatsa Vaddagiri <quic_svaddagi@quicinc.com> writes:

(add VirtIO maintainer MST to CC)

> * Alex Bennée <alex.bennee@linaro.org> [2023-10-12 15:55:59]:
>
>> > Hi Phil,
>> >    We do want to see Gunyah support merged in Qemu at the earliest (as
>> > soon as the kernel driver is merged upstream, that is), so any dependent
>> > change in Qemu for Gunyah would be of much interest to us! I am not sure
>> > though if Quic can sign up for the entire "make cpustate accel agnostic"
>> > work. Can you point to your ongoing work that I could take a look at?
>> > Would that address virtio-pci becoming accelerator agnostic?
>> 
>> Why wouldn't virtio-pci be accelerator agnostic?
>
> I checked the usage of a few KVM APIs in virtio-pci.c. I think most of them
> are to do with the use of MSI and IRQFD. If, let's say, we are not
> supporting MSI, then I *think* the current virtio-pci should work just fine.
> It would use virtio_pci_notify -> pci_set_irq -> .. -> qemu_set_irq, which
> should land in gunyah_arm_gicv3_set_irq [Patch 7/12] AFAICT. Let me try
> getting virtio-pci working and then I can update this thread again!

Hmm, yeah, looking at the file the relationship between KVM and virtio-pci
is a bit tangled up. Fundamentally the reason I say virtio-pci should be
accelerator agnostic is that we use virtio-pci under TCG emulation with
no KVM at all. IOW, if all of the PCI emulation is done within QEMU, there
should be no limit on which accelerators can support it.
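
Roughly, the fallback path Srivatsa describes above boils down to something
like this (a sketch only, and only meaningful inside the QEMU tree: it is
modelled on kvm_arm_gicv3_set_irq() in hw/intc/arm_gicv3_kvm.c, and
gunyah_set_irq() below is a made-up placeholder for whatever the Gunyah
kernel driver interface ends up exposing):

  #include "qemu/osdep.h"
  #include "hw/intc/arm_gicv3_common.h"

  /*
   * Sketch: where qemu_set_irq() lands when an emulated device raises
   * a line with no in-hypervisor shortcut available, i.e. the chain
   * virtio_pci_notify() -> pci_set_irq() -> ... -> qemu_set_irq().
   */
  static void gunyah_arm_gicv3_set_irq(void *opaque, int irq, int level)
  {
      GICv3State *s = (GICv3State *)opaque;

      /* gunyah_set_irq() is a placeholder for the real driver call
       * that forwards the level change to the hypervisor's vGIC. */
      gunyah_set_irq(s, irq, level);
  }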

(warning! potentially incomplete understanding ahead)

However, as you have seen, there is an optimisation where KVM can take
over part of the PCI signalling path: instead of a synchronous
trap-and-exit to QEMU, guest doorbell writes are raised as events on an
ioeventfd which QEMU (or vhost) can consume asynchronously, and MSIs are
injected back into the guest via an irqfd without bouncing through QEMU
at all. I'm not sure what happens with real PCI buses and pass-through
sitting alongside virtual PCI devices.
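
For reference, on the KVM side the irqfd binding is just the KVM_IRQFD
ioctl on the VM fd; a minimal sketch (error paths trimmed, and assuming
'gsi' already has an MSI route set up):

  #include <string.h>
  #include <sys/eventfd.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /*
   * Sketch: bind an eventfd to a guest interrupt (GSI). Once bound, a
   * write to 'efd' makes the kernel inject the interrupt directly,
   * with no synchronous exit to the VMM.
   */
  static int bind_irqfd(int vm_fd, unsigned int gsi)
  {
      struct kvm_irqfd irqfd;
      int efd = eventfd(0, EFD_CLOEXEC);

      if (efd < 0) {
          return -1;
      }
      memset(&irqfd, 0, sizeof(irqfd));
      irqfd.fd = efd;
      irqfd.gsi = gsi;
      if (ioctl(vm_fd, KVM_IRQFD, &irqfd) < 0) {
          return -1;
      }
      return efd;
  }

Unbinding is the same ioctl with KVM_IRQFD_FLAG_DEASSIGN set in
irqfd.flags.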

I suspect other hypervisors might want to support a similar thing and
might even end up re-using the irqfd mechanism for signalling events
from the hypervisor to the VMM. So Philippe's suggestion to fix this
would be to properly abstract what's needed to set this up and then clean
up the hardwired KVM case with a generalised class representing the
in-[kernel|hypervisor] interface. Gunyah could then provide an
implementation of that interface if it ever supports injecting MSIs
directly.
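
Concretely I would imagine something like a QOM interface the accelerator
can optionally implement -- all the names below are made up for
illustration, nothing like this exists in the tree today:

  /*
   * Hypothetical QOM interface -- purely to illustrate the shape of
   * the abstraction; only InterfaceClass, AccelState and MSIMessage
   * are existing QEMU types.
   */
  #define TYPE_ACCEL_MSI_ROUTING "accel-msi-routing"

  typedef struct AccelMsiRoutingClass {
      InterfaceClass parent_class;

      /* Allocate a route for 'msg' in the [kernel|hypervisor],
       * returning a virq handle or a negative errno. */
      int (*add_msi_route)(AccelState *accel, MSIMessage msg);

      /* Bind an eventfd so that writes to it inject the routed MSI
       * without a synchronous exit to QEMU. */
      int (*add_irqfd)(AccelState *accel, int virq, int event_fd);
  } AccelMsiRoutingClass;

virtio-pci would then probe for the interface instead of calling
kvm_irqchip_add_msi_route() and friends directly, and Gunyah (or any
other accel) would just fill it in when it grows the capability.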

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


