qemu-devel

From: mwoodpatrick
Subject: Seeing a problem in multi cpu runs where memory mapped pcie device register reads are returning incorrect values
Date: Thu, 2 Jul 2020 04:59:52 -0700

Background
==========

I have a test environment that runs QEMU 4.2 with a plugin hosting two
copies of a PCIe device simulator, on a CentOS 7.5 host with an Ubuntu
18.04 guest. When running with a single QEMU CPU using:

    -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off \
    -device intel-iommu,intremap=on

Our tests run fine. 

But when running with multiple CPUs:

    -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off \
    -device intel-iommu,intremap=on -smp 2,sockets=1,cores=2

Some MMIO reads of the simulated device's BAR 0 registers, issued by our
device driver running in the guest, are returning incorrect values.

Running QEMU under gdb, I see that the read request reaches our simulated
device correctly and that the simulator returns the correct result. I have
tracked the return value all the way back up the call stack: the correct
value arrives at the KVM_EXIT_MMIO handling in kvm_cpu_exec
(qemu-4.2.0/accel/kvm/kvm-all.c:2365), but the value returned to the
device driver that initiated the read is 0.
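
For reference, the dispatch at that point looks roughly like this in my
4.2 tree (abridged from kvm_cpu_exec; "run" is the vCPU's kvm_run
structure, mmap'ed from the kernel, and attrs comes from
kvm_arch_post_run() earlier in the loop):

    case KVM_EXIT_MMIO:
        DPRINTF("handle_mmio\n");
        /* Called outside BQL */
        address_space_rw(&address_space_memory,
                         run->mmio.phys_addr, attrs,
                         run->mmio.data,
                         run->mmio.len,
                         run->mmio.is_write);
        ret = 0;
        break;

For a read, the device's reply lands in run->mmio.data, and the kernel
uses that buffer to complete the emulated load on the next KVM_RUN ioctl
for that vCPU, so the value has to survive intact in that shared page
until the vCPU re-enters the kernel.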

Question
========

Is anyone else running QEMU 4.2 in multi-CPU mode? Is anyone else seeing
incorrect reads from memory-mapped device registers when running in this
mode? I would appreciate any pointers on how best to debug the flow from
KVM_EXIT_MMIO back to the device driver running in the guest.
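
For anyone who wants to follow the same path, this is roughly what I plan
to try next (a sketch, assuming a 32-bit register read; the breakpoint is
the kvm-all.c:2365 site mentioned above). Note that with -smp 2 each vCPU
thread has its own kvm_run mapping, so the watchpoint has to be set in
the thread that actually took the MMIO exit:

    (gdb) break accel/kvm/kvm-all.c:2365
    (gdb) continue
    ...breakpoint hits in one of the vCPU threads...
    (gdb) print /x *(uint32_t *) run->mmio.data
    (gdb) watch -l *(uint32_t *) run->mmio.data
    (gdb) continue

If the watchpoint fires before the vCPU re-enters KVM_RUN, whatever
modified the buffer is the likely culprit.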