From: Nicolin Chen
Subject: Multiple SMMUv3 instances on PCI Bus and PCI Host Bridge
Date: Fri, 4 Jun 2021 16:08:27 -0700
User-agent: Mutt/1.9.4 (2018-02-28)

Hello Eric, Yubo, and other QEMU developers,

I am having a problem with the linkage between vSMMU and the PCI host
bridge, using an ARM-VIRT (64-bit; ACPI) + SMMUv3 (nested translation)
setup.

First off, I am fairly new to QEMU, PCI, and ACPI, so some of my
thoughts/ideas below might not sound very reasonable to you.

My goal is to create two vSMMU instances at the QEMU level and link
them to different passthrough devices: each vSMMU carries a local
feature of mine that reads/writes through a VFIO mdev interface to
talk to the host OS, so my use case requires two separate vSMMU
instances in QEMU.

As we know, QEMU by default has only one PCI root bus (PCIE.0), which
links to a default vSMMU (let's call it vSMMU0). I also learned that
ARM-VIRT now has the PCI gpex feature. So I was planning to create one
extra host bridge (PCIE.128) and link it to a different instance
(vSMMU1) -- later on I could pass through different PCI devices to
either PCIE.0 or PCIE.128 for different mdev pathways.

As a first experiment, I then tried to add a PCI host bridge using the
following command line, which creates the single default vSMMU
instance:

/home/ubuntu/qemu-system-aarch64 \
    -machine virt,accel=kvm,gic-version=3,iommu=smmuv3 \
    -cpu host -smp cpus=1 -m 1G -nographic -monitor none -display none \
    -kernel /boot/Image -bios /usr/share/AAVMF/AAVMF_CODE.fd \
    -initrd /home/ubuntu/buildroot-20200422-aarch64-qemu-test-rootfs.cpio \
    -object memory-backend-ram,size=1G,id=m0 \
    -numa node,cpus=0,nodeid=0,memdev=m0 \
    -device pxb-pcie,id=pxb-pcie.128,bus=pcie.0,bus_nr=128,numa_node=0 \
    -device pcie-root-port,id=pcie.128,bus=pxb-pcie.128,slot=1,addr=0,io-reserve=0 \
    -device vfio-pci,host=0003:01:00.0,rombar=0,bus=pcie.128

However, I found that PCIE.128 was also attached to vSMMU0. It looks
like PCIE.128 treats the PCIE.0 root bus as its parent (both stay in
PCI segment 0), so it inherits the parent's vSMMU too.
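
For my own understanding, I wrote a tiny standalone model of how I
read the IORT that the virt machine builds (based on my reading of
hw/arm/virt-acpi-build.c, so please correct me if this is wrong): a
single root-complex node with one identity RID mapping that covers
0x0..0xFFFF, all routed to the one SMMUv3 node. Since the pxb-pcie
bus stays in PCI segment 0, every RID, including those on bus 128,
falls inside that single mapping:

#include <stdint.h>
#include <stdio.h>

/* Minimal model of one IORT ID mapping (ARM DEN 0049): an input RID
 * in [input_base, input_base + id_count] is routed to the output node. */
struct idmap {
    uint32_t input_base;   /* first RID covered */
    uint32_t id_count;     /* number of covered IDs, minus one */
    uint32_t output_base;  /* first output (stream) ID */
    const char *output;    /* node the IDs are routed to */
};

/* What I believe build_iort() emits today: one root-complex node with
 * a single identity mapping covering every RID in PCI segment 0. */
static const struct idmap rc_map = { 0x0, 0xFFFF, 0x0, "vSMMU0" };

static uint32_t rid(uint32_t bus, uint32_t dev, uint32_t fn)
{
    return (bus << 8) | (dev << 3) | fn;   /* PCI requester ID */
}

int main(void)
{
    /* One device on pcie.0 and one behind the pxb on bus 128: both
     * RIDs fall inside the single 0x0..0xFFFF mapping, so both devices
     * end up behind the same SMMU node. */
    uint32_t rids[] = { rid(0, 1, 0), rid(128, 0, 0) };

    for (int i = 0; i < 2; i++) {
        if (rids[i] - rc_map.input_base <= rc_map.id_count) {
            printf("RID 0x%04x -> %s\n", (unsigned)rids[i], rc_map.output);
        }
    }
    return 0;
}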

Then I tried another experiment with the following hack, hoping that
it would link vSMMU0 to PCIE.128 instead of PCIE.0:

@@ -385,13 +387,13 @@ build_iort(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
     /* fully coherent device */
     rc->memory_properties.cache_coherency = cpu_to_le32(1);
     rc->memory_properties.memory_flags = 0x3; /* CCA = CPM = DCAS = 1 */
-    rc->pci_segment_number = bus_num; /* MCFG pci_segment */
+    rc->pci_segment_number = cpu_to_le32(bus_num); /* MCFG pci_segment */

     /* Identity RID mapping covering the whole input RID range */
     idmap = &rc->id_mapping_array[0];
     idmap->input_base = 0;
     idmap->id_count = cpu_to_le32(0xFFFF);
-    idmap->output_base = 0;
+    idmap->output_base = cpu_to_le32(bus_num << 16);

Yet this was not successful either: the vSMMU instance did not get
attached to either PCIE.0 or PCIE.128.
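
If I read the IORT spec (ARM DEN 0049) right, that failure makes some
sense: output_base only offsets the IDs presented to the node named by
the mapping's output_reference, and my hack leaves output_reference
pointing at the same single SMMUv3 node. A standalone sketch of the
translation as I understand it (not QEMU code):

#include <stdint.h>
#include <stdio.h>

/* IORT ID-mapping translation as I understand it: the output ID is
 * (rid - input_base) + output_base, valid only when the RID falls
 * inside [input_base, input_base + id_count]. */
static uint32_t iort_out_id(uint32_t rid, uint32_t input_base,
                            uint32_t id_count, uint32_t output_base)
{
    if (rid - input_base > id_count) {     /* unsigned wrap also catches
                                            * rid < input_base */
        return UINT32_MAX;                 /* RID not covered */
    }
    return (rid - input_base) + output_base;
}

int main(void)
{
    uint32_t bus_num = 128;

    /* With the hack above, a device at 80:00.0 (RID 0x8000) yields
     * stream ID 0x808000 -- a different number, but still routed to
     * the same (and only) SMMUv3 node. */
    printf("out_id = 0x%x\n",
           (unsigned)iort_out_id(0x8000, 0x0, 0xFFFF, bus_num << 16));
    return 0;
}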

So I started to have some questions in mind:
(1) Can a PCI host bridge (PCIE.128) be attached to a different vSMMU,
    without following PCIE.0's SMMU setup?
(2) If the answer to (1) is yes, is there any way to have two pairs
    of PCI+vSMMU? (I sketched after these questions what I imagine
    the IORT would need to look like.)
(3) If the answer to (1) is no, how can I correctly change the IORT
    table to link vSMMU0 to PCIE.128?
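
For (2), in case it helps the discussion, this is the rough shape I
imagine the IORT (plus a second MCFG entry) would need -- purely my
guess from reading the spec, since today's virt machine generates only
the first line:

RC node (pci_segment 0) --idmap 0x0..0xFFFF--> SMMUv3 node 0 --> ITS group
RC node (pci_segment 1) --idmap 0x0..0xFFFF--> SMMUv3 node 1 --> ITS group

(And if pxb-pcie keeps its bus in segment 0, as it seems to, a second
segment would presumably need more than the pxb-pcie device alone.)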

Would it be possible for you to shed some light here?

Thanks
Nic


