From: Jonathan Cameron
Subject: Re: [PATCH v7 2/2] hw/acpi: Implement the SRAT GI affinity structure
Date: Wed, 6 Mar 2024 11:46:14 +0000

On Wed, 6 Mar 2024 10:33:17 +0000
Ankit Agrawal <ankita@nvidia.com> wrote:

> >> >> Jonathan, Alex, do you know how we may add tests that are dependent
> >> >> on the vfio-pci device?  
> >> >
> >> > There are none.
> >> >
> >> > This would require a host device always available for passthrough and
> >> > there is no simple solution for this problem. Such tests would need to
> >> > run in a nested environment under avocado: a pc/virt machine with an
> >> > igb device and use the PF and/or VFs to check device assignment in a
> >> > nested guest.
> >> >
> >> > PPC just introduced new tests to check nested guest support on two
> >> > different HV implementations. If you have time, please take a look
> >> > at tests/avocado/ppc_hv_tests.py for the framework.
> >> >
> >> > I will try to propose a new test when I am done with the reviews,
> >> > not before 9.0 soft freeze though.  
> >>
> >> Thanks for the information. As part of this patch, I'll leave out
> >> this test change then.  
> >
> > For BIOS table purposes it can be any PCI device. I've been testing
> > this with a virtio-net-pci but something like virtio-rng-pci will
> > do fine.  The table contents don't care whether it's vfio or not.  
> 
> Thanks, I was able to work this out with the virtio-rng-pci device.
> 
> > I can spin a test as part of the follow-up Generic Port series that
> > incorporates both and pushes the limits of the hmat code in general.
> > Current tests are too tame ;)  
> 
> Sure, that is fine by me.
> FYI, this is how the test change looked, in case you were wondering.

Looks good as a starting point.
Ideally I'd like HMAT + a few bandwidth and latency values
so we test that GIs work with that as well.

Think you'd just need
"-machine hmat=on "
// some values for cpu to local memory
"-numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-latency,latency=10 "
"-numa hmat-lb,initiator=0,target=0,hierarchy=memory,data-type=access-bandwidth,bandwidth=10G "
// some values for the GI node to main memory
"-numa hmat-lb,initiator=1,target=0,hierarchy=memory,data-type=access-latency,latency=200 "
"-numa hmat-lb,initiator=1,target=0,hierarchy=memory,data-type=access-bandwidth,bandwidth=5G"
