

From: Gavin Shan
Subject: Re: [PATCH v2] hw/arm/virt: Expose empty NUMA nodes through ACPI
Date: Fri, 5 Nov 2021 23:47:37 +1100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.2.0

Hi Drew and Igor,

On 11/2/21 6:39 PM, Andrew Jones wrote:
> On Tue, Nov 02, 2021 at 10:44:08AM +1100, Gavin Shan wrote:
>>
>> Yeah, I agree. I don't have a strong reason to expose these empty nodes
>> for now. Please ignore the patch.
>>
>
> So was describing empty NUMA nodes on the command line ever a reasonable
> thing to do? What happens on x86 machine types when describing empty NUMA
> nodes? I'm starting to think that the solution all along was just to
> error out when a NUMA node has memory size = 0...


Sorry for the delay; I spent a few days looking into the Linux virtio-mem
driver. I'm afraid we still need this patch for ARM64. I don't think x86
has this issue, although I haven't experimented on x86. For example, with
the command lines below, the hot-added memory is put into node#0 instead
of node#2, which is wrong.

There are several bitmaps tracking node states in the Linux kernel. One of
them is @possible_map, which tracks the nodes that are available but don't
have to be online. @possible_map is derived from the following ACPI
structures (a small standalone sketch follows the list):

  ACPI_SRAT_TYPE_MEMORY_AFFINITY
  ACPI_SRAT_TYPE_GENERIC_AFFINITY
  ACPI_SIG_SLIT                          # only present when the optional
                                         # distance map is provided on the
                                         # QEMU side
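
To make the possible-vs-online distinction concrete, here is a small
standalone sketch (my own illustration, not kernel code; exactly how the
patch encodes the empty nodes in SRAT is glossed over): every node described
by one of the structures above becomes possible, even with zero memory,
while only nodes that currently own memory end up online.

  /*
   * Standalone sketch (not kernel code) of the possible-vs-online split:
   * any node described by an SRAT affinity structure is "possible", even
   * with zero memory; only nodes that currently own memory are "online".
   */
  #include <stdint.h>
  #include <stdio.h>

  struct srat_node_entry {
      int      node;   /* proximity domain, already mapped to a node id */
      uint64_t size;   /* 0 for an empty node described only for hotplug */
  };

  int main(void)
  {
      /* What the guest would see with the patch applied to the example
       * below: nodes 2 and 3 are described but have no memory yet. */
      struct srat_node_entry srat[] = {
          { .node = 0, .size = 512ULL << 20 },
          { .node = 1, .size = 512ULL << 20 },
          { .node = 2, .size = 0 },
          { .node = 3, .size = 0 },
      };
      uint64_t possible_map = 0, online_map = 0;

      for (unsigned i = 0; i < sizeof(srat) / sizeof(srat[0]); i++) {
          possible_map |= 1ULL << srat[i].node;
          if (srat[i].size)
              online_map |= 1ULL << srat[i].node;
      }

      printf("possible nodes: 0x%llx, online nodes: 0x%llx\n",
             (unsigned long long)possible_map,
             (unsigned long long)online_map);   /* 0xf vs 0x3 */
      return 0;
  }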

Note: Drew might ask why we have node#2 in "/sys/devices/system/node" again.
hw/arm/virt-acpi-build.c::build_srat() creates an additional node in the ACPI
SRAT table, and that node's PXM is 3 (ms->numa_state->num_nodes - 1) in this
case, but the Linux kernel assigns node#2 to it because logical node ids are
handed out in the order the PXMs are discovered.
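
A tiny standalone example of that renumbering (my own sketch, not the
kernel's acpi_map_pxm_to_node()): with SRAT entries only for PXMs 0, 1
and 3, logical node ids are handed out in discovery order, so PXM 3 ends
up as node#2.

  /*
   * Standalone sketch of first-seen PXM -> logical node numbering (an
   * illustration, not the kernel implementation).  SRAT here describes
   * PXMs 0, 1 and 3 only (PXM 3 being the extra hotpluggable region),
   * so the logical node ids come out as 0, 1 and 2.
   */
  #include <stdio.h>

  #define MAX_NODES 8

  static int pxm_of_node[MAX_NODES];
  static int nr_nodes;

  static int map_pxm_to_node(int pxm)
  {
      for (int n = 0; n < nr_nodes; n++)
          if (pxm_of_node[n] == pxm)
              return n;
      pxm_of_node[nr_nodes] = pxm;
      return nr_nodes++;
  }

  int main(void)
  {
      int pxms_in_srat[] = { 0, 1, 3 };   /* PXM 2 has no SRAT entry */

      for (unsigned i = 0; i < sizeof(pxms_in_srat) / sizeof(int); i++)
          printf("PXM %d -> node %d\n", pxms_in_srat[i],
                 map_pxm_to_node(pxms_in_srat[i]));
      return 0;
  }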

  /home/gavin/sandbox/qemu.main/build/qemu-system-aarch64 \
  -accel kvm -machine virt,gic-version=host               \
  -cpu host -smp 4,sockets=2,cores=2,threads=1            \
  -m 1024M,slots=16,maxmem=64G                            \
  -object memory-backend-ram,id=mem0,size=512M            \
  -object memory-backend-ram,id=mem1,size=512M            \
  -numa node,nodeid=0,cpus=0-1,memdev=mem0                \
  -numa node,nodeid=1,cpus=2-3,memdev=mem1                \
  -numa node,nodeid=2 -numa node,nodeid=3                 \
  -object memory-backend-ram,id=vmem0,size=512M           \
  -device virtio-mem-pci,id=vm0,memdev=vmem0,node=2,requested-size=0 \
  -object memory-backend-ram,id=vmem1,size=512M           \
  -device virtio-mem-pci,id=vm1,memdev=vmem1,node=3,requested-size=0
     :
  # ls  /sys/devices/system/node | grep node
  node0
  node1
  node2
  # cat /proc/meminfo | grep MemTotal\:
  MemTotal:        1003104 kB
  # cat /sys/devices/system/node/node0/meminfo | grep MemTotal\:
  Node 0 MemTotal: 524288 kB

  (qemu) qom-set vm0 requested-size 512M
  # cat /proc/meminfo | grep MemTotal\:
  MemTotal:        1527392 kB
  # cat /sys/devices/system/node/node0/meminfo | grep MemTotal\:
  Node 0 MemTotal: 1013652 kB
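
For what it's worth, my understanding of why the memory ends up on node#0
(an assumption on my side, sketched below with made-up names rather than
the actual virtio-mem/memory-hotplug code): node#2 was never marked
possible, so resolving the requested node fails and the hot-added memory
is accounted to an online node instead.

  /*
   * Standalone illustration (made-up names, not the virtio-mem driver) of
   * the fallback I believe happens: if the node requested for hot-added
   * memory was never marked possible, the request degenerates to
   * NUMA_NO_NODE and the memory is accounted to an online node instead
   * (node 0 here), which matches the numbers above.
   */
  #include <stdio.h>

  #define NR_NODES     4
  #define NUMA_NO_NODE (-1)

  static int node_possible[NR_NODES] = { 1, 1, 0, 0 }; /* only 0 and 1 in SRAT */
  static int node_online[NR_NODES]   = { 1, 1, 0, 0 };

  /* Resolve the node requested by the device; unknown nodes are rejected. */
  static int resolve_requested_node(int requested)
  {
      if (requested >= 0 && requested < NR_NODES && node_possible[requested])
          return requested;
      return NUMA_NO_NODE;
  }

  /* Pick the node that will actually account the hot-added memory. */
  static int pick_node_for_hotplug(int requested)
  {
      int nid = resolve_requested_node(requested);

      if (nid == NUMA_NO_NODE) {
          /* fall back to the first online node */
          for (int n = 0; n < NR_NODES; n++)
              if (node_online[n])
                  return n;
      }
      return nid;
  }

  int main(void)
  {
      printf("hot-added memory for requested node 2 goes to node %d\n",
             pick_node_for_hotplug(2));   /* prints 0 without the patch */
      node_possible[2] = 1;               /* what exposing the empty node fixes */
      printf("after exposing the empty node: node %d\n",
             pick_node_for_hotplug(2));   /* prints 2 */
      return 0;
  }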

Repeating the above test with the patch applied, the hot-added memory is
put into node#2, as the user expects.

  # ls  /sys/devices/system/node | grep node
  node0
  node1
  node2
  node3
  # cat /proc/meminfo | grep MemTotal\:
  MemTotal:        1003100 kB
  # cat /sys/devices/system/node/node2/meminfo | grep MemTotal\:
  Node 2 MemTotal: 0 kB

  (qemu) qom-set vm0 requested-size 512M
  # cat /proc/meminfo | grep MemTotal\:
  MemTotal:        1527388 kB
  # cat /sys/devices/system/node/node2/meminfo | grep MemTotal\:
  Node 2 MemTotal: 524288 kB

Thanks,
Gavin




