Re: [Qemu-ppc] [PATCH 2/7] spapr: Move handling of special NVLink numa node from reset to init

From: Greg Kurz
Subject: Re: [Qemu-ppc] [PATCH 2/7] spapr: Move handling of special NVLink numa node from reset to init
Date: Wed, 11 Sep 2019 09:33:48 +0200
On Wed, 11 Sep 2019 14:04:47 +1000
David Gibson <address@hidden> wrote:
> The number of NUMA nodes in the system is fixed from the command line.
> Therefore, there's no need to recalculate it at reset time, and we can
> determine the special gpu_numa_id value used for NVLink2 devices at init
> time.
>
> This simplifies the reset path a bit which will make further improvements
> easier.
>
> Signed-off-by: David Gibson <address@hidden>
> ---
Reviewed-by: Greg Kurz <address@hidden>
> hw/ppc/spapr.c | 21 +++++++++++----------
> 1 file changed, 11 insertions(+), 10 deletions(-)
>
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index c551001f86..e03e874d94 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -1737,16 +1737,6 @@ static void spapr_machine_reset(MachineState *machine)
> spapr_setup_hpt_and_vrma(spapr);
> }
>
> - /*
> - * NVLink2-connected GPU RAM needs to be placed on a separate NUMA node.
> - * We assign a new numa ID per GPU in spapr_pci_collect_nvgpu() which is
> - * called from vPHB reset handler so we initialize the counter here.
> - * If no NUMA is configured from the QEMU side, we start from 1 as GPU RAM
> - * must be equally distant from any other node.
> - * The final value of spapr->gpu_numa_id is going to be written to
> - * max-associativity-domains in spapr_build_fdt().
> - */
> - spapr->gpu_numa_id = MAX(1, machine->numa_state->num_nodes);
> qemu_devices_reset();
>
> /*
> @@ -2885,6 +2875,17 @@ static void spapr_machine_init(MachineState *machine)
>
> }
>
> + /*
> + * NVLink2-connected GPU RAM needs to be placed on a separate NUMA node.
> + * We assign a new numa ID per GPU in spapr_pci_collect_nvgpu() which is
> + * called from vPHB reset handler so we initialize the counter here.
> + * If no NUMA is configured from the QEMU side, we start from 1 as GPU RAM
> + * must be equally distant from any other node.
> + * The final value of spapr->gpu_numa_id is going to be written to
> + * max-associativity-domains in spapr_build_fdt().
> + */
> + spapr->gpu_numa_id = MAX(1, machine->numa_state->num_nodes);
> +
> if ((!kvm_enabled() || kvmppc_has_cap_mmu_radix()) &&
> ppc_type_check_compat(machine->cpu_type, CPU_POWERPC_LOGICAL_3_00, 0,
> spapr->max_compat_pvr)) {