RE: [Question] Regarding PMU initialization within the QEMU for ARM VCPUs


From: Salil Mehta
Subject: RE: [Question] Regarding PMU initialization within the QEMU for ARM VCPUs
Date: Wed, 3 Jun 2020 13:48:10 +0000

Hi Andrew,

> From: Andrew Jones [mailto:drjones@redhat.com]
> Sent: Wednesday, June 3, 2020 1:16 PM
> To: Salil Mehta <salil.mehta@huawei.com>
> Cc: Peter Maydell <peter.maydell@linaro.org>; mst@redhat.com;
> qemu-devel@nongnu.org; eric.auger@redhat.com; qemu-arm@nongnu.org; Igor
> Mammedov <imammedo@redhat.com>
> Subject: Re: [Question] Regarding PMU initialization within the QEMU for ARM
> VCPUs
> 
> On Wed, Jun 03, 2020 at 11:45:22AM +0000, Salil Mehta wrote:
> > Hi Andrew,
> > Many thanks for the reply.
> >
> > > From: Andrew Jones [mailto:drjones@redhat.com]
> > > Sent: Wednesday, June 3, 2020 10:38 AM
> > > To: Salil Mehta <salil.mehta@huawei.com>
> > > Cc: qemu-devel@nongnu.org; qemu-arm@nongnu.org; Peter Maydell
> > > <peter.maydell@linaro.org>; Igor Mammedov <imammedo@redhat.com>;
> > > mst@redhat.com
> > > Subject: Re: [Question] Regarding PMU initialization within the QEMU for 
> > > ARM
> > > VCPUs
> > >
> > > On Mon, Jun 01, 2020 at 03:04:33PM +0000, Salil Mehta wrote:
> > > > Hello,
> > > > I could see below within function fdt_add_pmu_nodes() part of
> > > > hw/arm/virt.c during virt machine initialization time:
> > > >
> > > > Observation:
> > > > In the function below, PMU support is checked for each vcpu, and
> > > > if the PMU is among the vcpu's features it is initialized within
> > > > the host/KVM. But if even one vcpu is found not to support the
> > > > PMU, the loop exits and the PMU is not initialized for the rest
> > > > of the vcpus either.
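> > > >
> > > > For reference, the loop in question looks roughly like the below
> > > > (a paraphrase of the hw/arm/virt.c code, not an exact copy):
> > > >
> > > > static void fdt_add_pmu_nodes(const VirtMachineState *vms)
> > > > {
> > > >     CPUState *cpu;
> > > >     ARMCPU *armcpu;
> > > >
> > > >     CPU_FOREACH(cpu) {
> > > >         armcpu = ARM_CPU(cpu);
> > > >         if (!arm_feature(&armcpu->env, ARM_FEATURE_PMU)) {
> > > >             return;   /* premature exit on first vcpu without a PMU */
> > > >         }
> > > >         if (kvm_enabled()) {
> > > >             if (kvm_irqchip_in_kernel()) {
> > > >                 kvm_arm_pmu_set_irq(cpu, PPI(VIRTUAL_PMU_IRQ));
> > > >             }
> > > >             kvm_arm_pmu_init(cpu);
> > > >         }
> > > >     }
> > > >     [...]
> > > > }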
> > > >
> > > > Questions:
> > > > Q1. I am not sure what the logic is behind the premature exit,
> > > >     rather than continuing with further checks and initialization
> > > >     of the other VCPUs' PMUs?
> > >
> > > KVM requires all VCPUs to have a PMU if one does. If the ARM ARM
> > > says it's possible to have PMUs for only some CPUs, then, for TCG,
> > > the restriction could be relaxed. I expect it will take more than
> > > just removing the check for things to work though.
> >
> > Got it. Many thanks for this info.
> >
> > During virt machine init we take the cpu type from the
> > (-cpu <cpu-type>) option, and it applies uniformly to all of the
> > vcpus. Therefore, I can assume all of the processors are identical
> > for now. Combined with the KVM restriction you mentioned above,
> > this means that for the PMU we can only have an enable-for-all or
> > enable-for-none configuration across all of the vcpus being booted,
> > even though in various places we do have a per-vcpu check like the
> > one below:
> >
> > /* MADT */
> > static void
> > build_madt(GArray *table_data, BIOSLinker *linker, VirtMachineState *vms)
> > {
> > [...]
> >
> >     for (i = 0; i < vms->smp_cpus; i++) {
> >         AcpiMadtGenericCpuInterface *gicc = acpi_data_push(table_data,
> >                                                            sizeof(*gicc));
> >         [...]
> >
> >         if (arm_feature(&armcpu->env, ARM_FEATURE_PMU)) {  /* ---> this check */
> >             gicc->performance_interrupt =
> >                 cpu_to_le32(PPI(VIRTUAL_PMU_IRQ));
> >         }
> >  [...]
> > }
> >
> > Does a per-vcpu feature check for the PMU even make sense until we
> > allow heterogeneous processors or relax the PMU enablement to a
> > per-vcpu basis within KVM?
> 
> It may not be necessary now or ever to test more than one CPU for the
> PMU feature, but without a good reason to change it to a machine
> property I'd prefer we always do the N-1 pointless checks. The
> feature is a CPU feature, not a machine feature, so, IMO, it should
> remain something configured and tested at the CPU level, not the
> machine level.


I do understand this, and what you have said looks logical.

Yes, there is a reason for this (though I am not sure it is
convincing enough): having an added flag at the per-machine level
would help in supporting the ARM VCPU hotplug feature I am working on.

Here is a summary to give you the context of the vcpu hotplug design:

As per the design, at machvirt_init() time we pre-create the ARMCPU
objects along with the corresponding KVM vcpus at the host. The
initialized KVM vcpu context (struct kvm_parked_vcpus) for the
disabled vcpus is parked on the per-VM list kvm_parked_vcpus.

We create the ARMCPU objects (but these are not *realized* in the
qdev sense) to facilitate the GIC initialization (pre-sized with the
possible vcpus) with minimal change. After initialization of the
machine is complete, we release the ARMCPU objects for the disabled
vcpus (they will be re-created when a vcpu is hot plugged, and at
that point we re-attach the new ARMCPU object to the already parked
KVM vcpu context).
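
To make this concrete, the pre-create step would look conceptually
like the below. This is purely illustrative: both function names
(virt_precreate_possible_vcpus() and virt_park_disabled_vcpu()) are
made up for this sketch and are not the actual patch code.

static void virt_precreate_possible_vcpus(VirtMachineState *vms)
{
    MachineState *ms = MACHINE(vms);
    int n;

    for (n = 0; n < ms->smp.max_cpus; n++) {
        /*
         * ARMCPU object created (but not yet realized in the qdev sense),
         * mainly so that the GIC can be initialized pre-sized with all
         * possible vcpus.
         */
        Object *cpuobj = object_new(ms->cpu_type);

        if (n < vms->smp_cpus) {
            /* Enabled vcpu: realized as usual later in machvirt_init(). */
            continue;
        }

        /*
         * Disabled vcpu: create its KVM vcpu in the host now and park the
         * context on the per-VM kvm_parked_vcpus list.  The ARMCPU object
         * itself is released after machine init, and a fresh one is
         * re-attached to the parked context on hotplug.
         * (virt_park_disabled_vcpu() is a hypothetical helper.)
         */
        virt_park_disabled_vcpu(vms, ARM_CPU(cpuobj), n);
    }
}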

So we have a few options for releasing the ARMCPU objects:
1. The ARMCPU objects for the disabled vcpus are released in the
   context of the virt_machine_done() notifier.
2. Defer the release until a new vcpu object is hot plugged.
3. Never release them; keep reusing them and release them once
   at VM exit.

Each of the above approaches comes with its own pros and cons
(and I have prototyped each of them).

The first case looks cleaner, but the problem we are facing is that
after the ARMCPUs are released, UEFI later takes over and may call
QEMU's virt_acpi_build_update() again to update the ACPI tables
(which UEFI has patched). That ends up in the
build_dsdt()->build_madt() path and leads to a problem, since the
disabled ARMCPU objects are no longer available: they were released
earlier in the context of virt_machine_done().

The problem shows up in the MADT, which needs the per-vcpu PMU
feature check to decide whether to enable the PMU interrupt in the
GICC MADT entry.
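
Concretely, assuming the MADT loop resolves each vcpu by index with
qemu_get_cpu() (as the code quoted above effectively does), the
released vcpus hit something like this:

        ARMCPU *armcpu = ARM_CPU(qemu_get_cpu(i));

        /*
         * For a disabled vcpu whose ARMCPU object was already released
         * in the virt_machine_done() notifier, qemu_get_cpu(i) returns
         * NULL here, so the per-vcpu PMU feature check below can no
         * longer be made.
         */
        if (arm_feature(&armcpu->env, ARM_FEATURE_PMU)) {
            gicc->performance_interrupt = cpu_to_le32(PPI(VIRTUAL_PMU_IRQ));
        }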

Perhaps the problem is that we are going against some design
expectations and should not be releasing the ARMCPU objects at all,
since the firmware depends on them. And maybe UEFI and QEMU have
some sort of coupled design which does not help here.



> > > > Q2. Does it even make sense to have PMUs initialized for some
> > > >     vcpus and not for others, unless we have a heterogeneous system?
> > >
> > > I don't know, but it doesn't sound like a configuration I'd like
> > > to see.
> >
> >
> > Sure, but in the existing code we do exit prematurely after we
> > discover the first vcpu amongst the possible vcpus that does not
> > support the PMU feature. Doesn't that look abnormal as well?
> 
> Are you trying to configure heterogeneous mach-virt machines? Or machines
> that only provide PMUs to some CPUs?

No, I am not; rather, I am assuming identical/homogeneous processors.

> If not, then I'm not sure why this
> would be a problem. Indeed it's likely a pointless check and, instead of
> silently returning, it should output a warning or even assert. Otherwise,
> I don't see a problem with it, since we want to be sure we're dealing with
> the type of configuration we expect, i.e. one where each CPU has a PMU if
> any CPU has a PMU.

It is not exactly a problem; as you said, it is rather a pointless
check. But, as explained, I was trying to see whether we could have
a per-machine flag as a workaround for the PMU for now, and have the
vcpu hotplug design (the three approaches discussed above) debated
as part of the patches, which I have almost prepared.
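
For context, the workaround I had in mind is roughly the below, where
vms->pmu is a hypothetical machine-level flag (the name is purely
illustrative), latched once during machvirt_init() while the ARMCPU
objects still exist:

        /*
         * Hypothetical machine-level flag, set once while the ARMCPU
         * objects are still around, so that build_madt() no longer needs
         * to consult the per-vcpu ARMCPU object at ACPI build time.
         */
        if (vms->pmu) {
            gicc->performance_interrupt = cpu_to_le32(PPI(VIRTUAL_PMU_IRQ));
        }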

(Maybe I should float the ARM VCPU hotplug patches and let this
 discussion continue over there?)

Many thanks,
Salil.


