From: Mark Wood-Patrick
Subject: RE: Seeing a problem in multi cpu runs where memory mapped pcie device register reads are returning incorrect values
Date: Sun, 12 Jul 2020 17:54:15 +0000


From: Mark Wood-Patrick <mwoodpatrick@nvidia.com>
Sent: Wednesday, July 1, 2020 11:26 AM
To: qemu-devel@nongnu.org
Cc: Mark Wood-Patrick <mwoodpatrick@nvidia.com>
Subject: Seeing a problem in multi cpu runs where memory mapped pcie device register reads are returning incorrect values


Background

I have a test environment that runs QEMU 4.2 with a plugin hosting two copies of a PCIe device simulator, on a CentOS 7.5 host with an Ubuntu 18.04 guest. When running with a single QEMU CPU using:


     -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off -device intel-iommu,intremap=on


Our tests run fine. But when running with multiple CPUs:


    -cpu kvm64,+lahf_lm -M q35,kernel-irqchip=off -device intel-iommu,intremap=on -smp 2,sockets=1,cores=2


The values returned are correct all the way up the call stack, including at the KVM_EXIT_MMIO handling in kvm_cpu_exec (qemu-4.2.0/accel/kvm/kvm-all.c:2365), but the value delivered to the device driver that initiated the read is 0.
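
For reference, the code at that spot looks roughly like this (paraphrased from kvm_cpu_exec() in qemu-4.2.0/accel/kvm/kvm-all.c, so details may differ slightly from the exact source). For a read, address_space_rw() fills run->mmio.data with the value supplied by the device model, and KVM completes the guest's MMIO instruction on the next KVM_RUN; this is where we see the correct value:

     /* Paraphrased from kvm_cpu_exec(); called outside the BQL. For a
      * read, the device model fills run->mmio.data here and KVM uses it
      * to complete the guest instruction on the next KVM_RUN ioctl. */
     case KVM_EXIT_MMIO:
         DPRINTF("handle_mmio\n");
         address_space_rw(&address_space_memory,
                          run->mmio.phys_addr, attrs,
                          run->mmio.data,
                          run->mmio.len,
                          run->mmio.is_write);
         ret = 0;
         break;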


Question

Is anyone else running QEMU 4.2 in multi-CPU mode? Is anyone getting incorrect reads from memory-mapped device registers when running in this mode? I would appreciate any pointers on how best to debug the flow from KVM_EXIT_MMIO back to the device driver running in the guest.
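
To make the question concrete, the kind of instrumentation I have in mind is a debug print added right after the address_space_rw() call quoted above, along these lines (a rough sketch only; BAR_BASE and BAR_SIZE are placeholders for our device's BAR window, not QEMU symbols, and cpu/run are the locals already in scope in kvm_cpu_exec()):

     /* Hypothetical debug print: log reads that hit our device's BAR.
      * BAR_BASE/BAR_SIZE are placeholders, not QEMU definitions. */
     if (!run->mmio.is_write &&
         run->mmio.phys_addr >= BAR_BASE &&
         run->mmio.phys_addr < BAR_BASE + BAR_SIZE) {
         uint64_t val = 0;
         size_t n = run->mmio.len < sizeof(val) ? run->mmio.len : sizeof(val);
         memcpy(&val, run->mmio.data, n);
         fprintf(stderr, "cpu %d: MMIO read 0x%" PRIx64 " len %u -> 0x%" PRIx64 "\n",
                 cpu->cpu_index, (uint64_t)run->mmio.phys_addr,
                 (unsigned)run->mmio.len, val);
     }

Comparing that output against what the guest driver actually reads would hopefully narrow down where the value gets dropped.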
