Gustavo Romero <gustavo.romero@linaro.org> writes:
Hi Markus,
Thanks for your interest in the ivshmem-flat device.
Bill Mills (cc:ed) is the best person to answer your question,
so please find his answer below.
On 2/28/24 3:29 AM, Markus Armbruster wrote:
Gustavo Romero <gustavo.romero@linaro.org> writes:
[...]
This patchset introduces a new device, ivshmem-flat, which is similar to the
current ivshmem device but does not require a PCI bus. It implements the ivshmem
status and control registers as MMRs and the shared memory as a directly
accessible memory region in the VM memory layout. It's meant to be used on
machines like those with Cortex-M MCUs, which usually lack a PCI bus, e.g.,
lm3s6965evb and mps2-an385. Additionally, it has the benefit of requiring only a tiny
'device driver', which is helpful on some RTOSes, like Zephyr, that run on
resource-constrained targets.
The patchset includes a QTest for the ivshmem-flat device. However, it's also
possible to experiment with it in two ways:
(a) using two Cortex-M VMs running Zephyr; or
(b) using one aarch64 VM running Linux with the ivshmem PCI device and another
arm (Cortex-M) VM running Zephyr with the new ivshmem-flat device.
Please note that running the ivshmem-flat QTests requires the following patch,
which is not yet committed to the tree:
https://lists.nongnu.org/archive/html/qemu-devel/2023-11/msg03176.html
What problem are you trying to solve with ivshmem?
Shared memory is not a solution to any communication problem, it's
merely a building block for building such solutions: you invariably have
to layer some protocol on top. What do you intend to put on top of
ivshmem?
Actually, ivshmem is shared memory plus bi-directional notifications (in this case
a doorbell register and an IRQ).
Yes, ivshmem-doorbell supports interrupts. Doesn't change my argument.
This is the fundamental requirement for many types of communication, but our
interest is in the OpenAMP project [1].
All the OpenAMP project's communication is based on shared memory and
bi-directional notification. Often this is on an AMP SoC with Cortex-A cores
plus Cortex-M or Cortex-R cores. However, we are now expanding into PCIe-based
AMP. One example of this is an x86 host computer and a PCIe card with an Arm
SoC. Other examples include two systems, each with its own PCIe root complex,
connected via a non-transparent bridge.
The existing PCI-based ivshmem lets us model these types of systems in a simple,
generic way without worrying about the details of the RC/EP relationship or the
details of a specific non-transparent bridge. In fact, ivshmem looks to the
two (or more) systems like a non-transparent bridge with its own memory (and no
other memory access allowed).
Right now we are testing this with RPMsg between two QEMU systems, where both
systems are Cortex-A53 and both run Zephyr. [2]
We will expand this by switching one of the QEMU systems to either arm64 Linux
or x86 Linux.
So you want to simulate a heterogeneous machine by connecting multiple
qemu-system-FOO processes via ivshmem, correct?