From: Igor Mammedov
Subject: Re: [Qemu-devel] [PATCH-for-4.2 v8 7/9] hw/arm/virt-acpi-build: Add PC-DIMM in SRAT
Date: Mon, 12 Aug 2019 15:47:16 +0200
On Fri, 9 Aug 2019 16:02:39 +0000
Shameerali Kolothum Thodi <address@hidden> wrote:
> Hi Igor,
>
> > -----Original Message-----
> > From: Qemu-devel
> > [mailto:qemu-devel-bounces+shameerali.kolothum.thodi=huawei.com@nongn
> > u.org] On Behalf Of Igor Mammedov
> > Sent: 06 August 2019 14:22
> > To: Shameerali Kolothum Thodi <address@hidden>
> > Cc: address@hidden; address@hidden;
> > address@hidden; address@hidden;
> > address@hidden; xuwei (O) <address@hidden>; Linuxarm
> > <address@hidden>; address@hidden; address@hidden;
> > address@hidden; address@hidden
> > Subject: Re: [Qemu-devel] [PATCH-for-4.2 v8 7/9] hw/arm/virt-acpi-build: Add
> > PC-DIMM in SRAT
> >
> > On Fri, 26 Jul 2019 11:45:17 +0100
> > Shameer Kolothum <address@hidden> wrote:
> >
> > > Generate Memory Affinity Structures for PC-DIMM ranges.
> > >
> > > Signed-off-by: Shameer Kolothum <address@hidden>
> > > Signed-off-by: Eric Auger <address@hidden>
> > > Reviewed-by: Igor Mammedov <address@hidden>
> > > ---
> > > hw/arm/virt-acpi-build.c | 9 +++++++++
> > > 1 file changed, 9 insertions(+)
> > >
> > > diff --git a/hw/arm/virt-acpi-build.c b/hw/arm/virt-acpi-build.c
> > > index 018b1e326d..75657caa36 100644
> > > --- a/hw/arm/virt-acpi-build.c
> > > +++ b/hw/arm/virt-acpi-build.c
> > > @@ -518,6 +518,7 @@ build_srat(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
> > >      int i, srat_start;
> > >      uint64_t mem_base;
> > >      MachineClass *mc = MACHINE_GET_CLASS(vms);
> > > +    MachineState *ms = MACHINE(vms);
> > >      const CPUArchIdList *cpu_list = mc->possible_cpu_arch_ids(MACHINE(vms));
> > >
> > >      srat_start = table_data->len;
> > > @@ -543,6 +544,14 @@ build_srat(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
> > >          }
> > >      }
> > >
> > > +    if (ms->device_memory) {
> > > +        numamem = acpi_data_push(table_data, sizeof *numamem);
> > > +        build_srat_memory(numamem, ms->device_memory->base,
> > > +                          memory_region_size(&ms->device_memory->mr),
> > > +                          nb_numa_nodes - 1,
> > > +                          MEM_AFFINITY_HOTPLUGGABLE | MEM_AFFINITY_ENABLED);
> > > +    }
> > > +
> > >      build_header(linker, table_data, (void *)(table_data->data + srat_start),
> > >                   "SRAT", table_data->len - srat_start, 3, NULL, NULL);
> > >  }
> >
> > missing entry in
> > tests/bios-tables-test-allowed-diff.h
>
> I can't find any SRAT file in tests/data/acpi/virt. arm/virt doesn't have many
> tests in bios-tables-test.c, so does it make any difference?
ACPI tests for arm/virt are new and have been enabled only since 4.1,
so it should now be trivial to add extra cases for the code you are adding.
Since you're touching SRAT here, I'd suggest enabling the 'numamem' and 'memhp'
tests with this series (for example, see:
test_acpi_piix4_tcg_numamem/test_acpi_piix4_tcg_memhp).
> > PS:
> > I don't really know what ARM guest kernel expects but on x86 we had to
> > enable
> > numa
> > for guest to figure out max_possible_pfn
> > (see: in linux.git: 8dd330300197 / ec941c5ffede).
>
> From whatever I can find, it doesn't look like there is any special handling of
> max_possible_pfn in the ARM64 world. The variable seems to be updated only
> in acpi_numa_memory_affinity_init():
>
> https://elixir.bootlin.com/linux/v5.3-rc3/source/drivers/acpi/numa.c#L298
The problem was that drivers (with stub DMA ops, in a guest booted with RAM
below 4 GiB) were breaking when they received RAM buffers above 4 GiB. To fix it
we needed to turn on swiotlb if the possible max PFN could be above 4 GiB.
That's where SRAT played its role: it lets the guest know what the possible
max PFN could be.
> Is there any way to test this in the guest to see whether this is actually a
> problem?
From my x86 experience:
1. for Linux:
   * start the guest with RAM that does not go over the 4 GiB PFN mark
     (for example with -m 1G) and native drivers (not virtio ones; see the
     linux.git commit message ec941c5ffede4)
   * hotplug RAM to go over the 4 GiB boundary
   * stress-test the drivers (that should trigger various issues;
     on x86 it was ATA and various USB drivers, leading to data corruption
     and a non-working mouse in guests)
2. for Windows guests, memory hotplug doesn't work at all unless NUMA is
   enabled.
Based on the above, I'd assume we need to turn on NUMA for ARM as well when
memhp is enabled, since SRAT is the only way of describing the max possible RAM
end to the guest OS.
> Thanks,
> Shameer
>
> > It's worth checking whether we might need a patch for turning on NUMA
> > (for how to do it in QEMU, see auto_enable_numa_with_memhp).
>