From: Tao Xu
Subject: Re: [Qemu-ppc] [RFC PATCH] numa: add auto_enable_numa to fix broken check in spapr
Date: Mon, 5 Aug 2019 08:56:40 +0800
User-agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:60.0) Gecko/20100101 Thunderbird/60.8.0

On 8/2/2019 2:55 PM, David Gibson wrote:
On Thu, Aug 01, 2019 at 03:52:58PM +0800, Tao Xu wrote:
Introduce MachineClass::auto_enable_numa for machines that implicitly
create one NUMA node, and enable it for spapr to fix the broken check in
spapr_validate_node_memory(): spapr_populate_memory() creates an implicit
node and node info locally, while the validation still uses the global
nb_numa_nodes, which is 0.

Suggested-by: Igor Mammedov <address@hidden>
Suggested-by: Eduardo Habkost <address@hidden>
Signed-off-by: Tao Xu <address@hidden>

The change here looks fine so,

Acked-by: David Gibson <address@hidden>

However, I'm not following what check in spapr is broken and why.

Sorry, maybe I should update the commit message.

Because in spapr_populate_memory(), if the NUMA node count is 0:

    if (!nb_nodes) {
        nb_nodes = 1;
        ramnode.node_mem = machine->ram_size;
        nodes = &ramnode;
    }

it uses a local 'nb_nodes' set to 1 and updates the node info locally, but spapr_validate_node_memory() uses the global nb_numa_nodes:

    for (i = 0; i < nb_numa_nodes; i++) {
        if (numa_info[i].node_mem % SPAPR_MEMORY_BLOCK_SIZE) {

so the global count is 0 and the node_mem check is skipped.
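
To make the mismatch concrete, here is a minimal, self-contained sketch of the two paths described above; the globals, function names, and the block-size constant are simplified stand-ins for illustration, not the actual QEMU symbols:

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-ins for the real QEMU state; illustration only. */
    #define MEMORY_BLOCK_SIZE (256 * 1024 * 1024ULL)

    static int nb_numa_nodes;            /* global: stays 0 when no -numa is given */
    static uint64_t node_mem[1];         /* per-node memory size */

    static void populate_memory(uint64_t ram_size)
    {
        int nb_nodes = nb_numa_nodes;    /* local copy of the global count */

        if (!nb_nodes) {
            nb_nodes = 1;                /* implicit node exists only locally */
            node_mem[0] = ram_size;
        }
        printf("populate sees %d node(s)\n", nb_nodes);
    }

    static void validate_node_memory(void)
    {
        /* Uses the GLOBAL count, which is still 0, so the loop body never
         * runs and the alignment check is silently skipped. */
        for (int i = 0; i < nb_numa_nodes; i++) {
            if (node_mem[i] % MEMORY_BLOCK_SIZE) {
                printf("node %d is not block aligned\n", i);
            }
        }
        printf("validate sees %d node(s)\n", nb_numa_nodes);
    }

    int main(void)
    {
        populate_memory(1000 * 1024 * 1024ULL);  /* 1000 MiB: not block aligned */
        validate_node_memory();                  /* check skipped despite that */
        return 0;
    }

With auto_enable_numa set, numa_complete_configuration() creates the node up front, so both places see a count of 1 and the alignment check actually runs.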
---

This patch has a dependency on
https://patchwork.kernel.org/cover/11063235/
---
  hw/core/numa.c      | 9 +++++++--
  hw/ppc/spapr.c      | 9 +--------
  include/hw/boards.h | 1 +
  3 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/hw/core/numa.c b/hw/core/numa.c
index 75db35ac19..756d243d3f 100644
--- a/hw/core/numa.c
+++ b/hw/core/numa.c
@@ -580,9 +580,14 @@ void numa_complete_configuration(MachineState *ms)
       *   guest tries to use it with that drivers.
       *
       * Enable NUMA implicitly by adding a new NUMA node automatically.
+     *
+     * Or if MachineClass::auto_enable_numa is true and no NUMA nodes,
+     * assume there is just one node with whole RAM.
       */
-    if (ms->ram_slots > 0 && ms->numa_state->num_nodes == 0 &&
-        mc->auto_enable_numa_with_memhp) {
+    if (ms->numa_state->num_nodes == 0 &&
+        ((ms->ram_slots > 0 &&
+        mc->auto_enable_numa_with_memhp) ||
+        mc->auto_enable_numa)) {
              NumaNodeOptions node = { };
              parse_numa_node(ms, &node, &error_abort);
      }
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index f607ca567b..e50343f326 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -400,14 +400,6 @@ static int spapr_populate_memory(SpaprMachineState *spapr, void *fdt)
      hwaddr mem_start, node_size;
      int i, nb_nodes = machine->numa_state->num_nodes;
      NodeInfo *nodes = machine->numa_state->nodes;
-    NodeInfo ramnode;
-
-    /* No NUMA nodes, assume there is just one node with whole RAM */
-    if (!nb_nodes) {
-        nb_nodes = 1;
-        ramnode.node_mem = machine->ram_size;
-        nodes = &ramnode;
-    }

     for (i = 0, mem_start = 0; i < nb_nodes; ++i) {
          if (!nodes[i].node_mem) {
@@ -4369,6 +4361,7 @@ static void spapr_machine_class_init(ObjectClass *oc, void *data)
       */
      mc->numa_mem_align_shift = 28;
      mc->numa_mem_supported = true;
+    mc->auto_enable_numa = true;

     smc->default_caps.caps[SPAPR_CAP_HTM] = SPAPR_CAP_OFF;
      smc->default_caps.caps[SPAPR_CAP_VSX] = SPAPR_CAP_ON;
diff --git a/include/hw/boards.h b/include/hw/boards.h
index 2eb9a0b4e0..4a350b87d2 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -220,6 +220,7 @@ struct MachineClass {
      bool smbus_no_migration_support;
      bool nvdimm_supported;
      bool numa_mem_supported;
+    bool auto_enable_numa;

     HotplugHandler *(*get_hotplug_handler)(MachineState *machine,
                                             DeviceState *dev);
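
For reference, any other machine type that wants the same single-node fallback could opt in the same way spapr does above. A minimal sketch, assuming a hypothetical machine whose class_init is named my_machine_class_init (the machine itself is made up for illustration):

    /* Hypothetical machine opting in to the new flag; the machine and its
     * class_init function are invented for this example. */
    static void my_machine_class_init(ObjectClass *oc, void *data)
    {
        MachineClass *mc = MACHINE_CLASS(oc);

        mc->numa_mem_supported = true;
        /* With no -numa option on the command line, numa_complete_configuration()
         * now creates one node covering all of RAM before any validation runs. */
        mc->auto_enable_numa = true;
    }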




