From: William Roche
Subject: Re: [PATCH v5 1/6] system/physmem: handle hugetlb correctly in qemu_ram_remap()
Date: Mon, 27 Jan 2025 22:16:02 +0100
User-agent: Mozilla Thunderbird

On 1/14/25 15:02, David Hildenbrand wrote:
On 10.01.25 22:14, William Roche wrote:
From: William Roche <william.roche@oracle.com>

The list of hwpoison pages used to remap the memory on reset
is based on the backend real page size. When dealing with
hugepages, we create a single entry for the entire page.

To correctly handle hugetlb, we must mmap(MAP_FIXED) a complete
hugetlb page; hugetlb pages cannot be partially mapped.

Co-developed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: William Roche <william.roche@oracle.com>
---

See my comments on the v4 version and my patch proposal.

I'm copying and answering your comments here:


On 1/14/25 14:56, David Hildenbrand wrote:
On 10.01.25 21:56, William Roche wrote:
On 1/8/25 22:34, David Hildenbrand wrote:
On 14.12.24 14:45, William Roche wrote:
From: William Roche <william.roche@oracle.com>
[...]
@@ -1286,6 +1286,10 @@ static void kvm_unpoison_all(void *param)
   void kvm_hwpoison_page_add(ram_addr_t ram_addr)
   {
       HWPoisonPage *page;
+    size_t page_size = qemu_ram_pagesize_from_addr(ram_addr);
+
+    if (page_size > TARGET_PAGE_SIZE)
+        ram_addr = QEMU_ALIGN_DOWN(ram_addr, page_size);

Is that part still required? I thought it would be sufficient (at least
in the context of this patch) to handle it all in qemu_ram_remap().

qemu_ram_remap() will calculate the range to process based on the
RAMBlock page size. IOW, the QEMU_ALIGN_DOWN() we do now in
qemu_ram_remap().

Or am I missing something?

(sorry if we discussed that already; if there is a good reason it might
make sense to state it in the patch description)

You are right, but at this patch level we still need to round up the

s/round up/align_down/

address and doing it here is small enough.

Let me explain.

qemu_ram_remap() in this patch here doesn't need an aligned addr. It
will compute the offset into the block and align that down.

The only case where we need the addr besides from that is the
error_report(), where I am not 100% sure if that is actually what we
want to print. We want to print something like ram_block_discard_range().


Note that ram_addr_t is a weird, separate address space. The alignment
does not have any guarantees / semantics there.


See ram_block_add() where we set
     new_block->offset = find_ram_offset(new_block->max_length);

independent of any other RAMBlock properties.

The only alignment we do is
     candidate = ROUND_UP(candidate, BITS_PER_LONG << TARGET_PAGE_BITS);

There is no guarantee that new_block->offset will be aligned to 1 GiB with
a 1 GiB hugetlb mapping.


Note that there is another conceptual issue in this function: offset
should be of type uint64_t, it's not really ram_addr_t, but an
offset into the RAMBlock.

Ok.


Of course, the code changes on patch 3/7 where we change both x86 and
ARM versions of the code to align the memory pointer correctly in both
cases.

Thinking about it more, we should never try aligning ram_addr_t, only
the offset into the memory block or the virtual address.

So please remove this ram_addr_t alignment from this patch, and look into
aligning the virtual address / offset for the other user. Again, aligning
ram_addr_t is not guaranteed to work correctly.


Thanks for the technical details.

Aligning the ram_addr_t value to the beginning of the page was useful to create a single entry in the hwpoison_page_list for a large page, but I understand that this use of ram_addr alignment may not always be accurate. Removing this alignment (without replacing it with something else) will end up creating several entries in this list for the same hugetlb page: when we lose a large page, we can receive several MCEs for the sub-page locations touched on that large page before the VM crashes. The recovery phase on reset then goes through the list to discard/remap all the entries, so the same hugetlb page can be treated several times. With a single entry per large page, this multiple discard/remap did not occur.

Now, it could be technically acceptable to discard/remap a hugetlb page several times. Other than being suboptimal and taking extra time, mapping or discarding the same page multiple times doesn't seem to be a problem. So we can leave the code like that, without complicating it with, for example, block and offset attributes in the hwpoison_page_list entries.


So the patch itself should probably be (- patch description):


diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 801cff16a5..8a47aa7258 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1278,7 +1278,7 @@ static void kvm_unpoison_all(void *param)

      QLIST_FOREACH_SAFE(page, &hwpoison_page_list, list, next_page) {
          QLIST_REMOVE(page, list);
-        qemu_ram_remap(page->ram_addr, TARGET_PAGE_SIZE);
+        qemu_ram_remap(page->ram_addr);
          g_free(page);
      }
  }
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index 638dc806a5..50a829d31f 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -67,7 +67,7 @@ typedef uintptr_t ram_addr_t;

  /* memory API */

-void qemu_ram_remap(ram_addr_t addr, ram_addr_t length);
+void qemu_ram_remap(ram_addr_t addr);
  /* This should not be used by devices.  */
  ram_addr_t qemu_ram_addr_from_host(void *ptr);
  ram_addr_t qemu_ram_addr_from_host_nofail(void *ptr);
diff --git a/system/physmem.c b/system/physmem.c
index 03d3618039..355588f5d5 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -2167,17 +2167,35 @@ void qemu_ram_free(RAMBlock *block)
  }

  #ifndef _WIN32
-void qemu_ram_remap(ram_addr_t addr, ram_addr_t length)
+/*
+ * qemu_ram_remap - remap a single RAM page
+ *
+ * @addr: address in ram_addr_t address space.
+ *
+ * This function will try remapping a single page of guest RAM identified by
+ * @addr, essentially discarding memory to recover from previously poisoned
+ * memory (MCE). The page size depends on the RAMBlock (i.e., hugetlb). @addr
+ * does not have to point at the start of the page.
+ *
+ * This function is only to be used during system resets; it will kill the
+ * VM if remapping failed.
+ */
+void qemu_ram_remap(ram_addr_t addr)
  {
      RAMBlock *block;
-    ram_addr_t offset;
+    uint64_t offset;
      int flags;
      void *area, *vaddr;
      int prot;
+    size_t page_size;

      RAMBLOCK_FOREACH(block) {
          offset = addr - block->offset;
          if (offset < block->max_length) {
+            /* Respect the pagesize of our RAMBlock */
+            page_size = qemu_ram_pagesize(block);
+            offset = QEMU_ALIGN_DOWN(offset, page_size);
+
              vaddr = ramblock_ptr(block, offset);
              if (block->flags & RAM_PREALLOC) {
                  ;
@@ -2191,21 +2209,22 @@ void qemu_ram_remap(ram_addr_t addr, ram_addr_t length)
                  prot = PROT_READ;
                  prot |= block->flags & RAM_READONLY ? 0 : PROT_WRITE;
                  if (block->fd >= 0) {
-                    area = mmap(vaddr, length, prot, flags, block->fd,
+                    area = mmap(vaddr, page_size, prot, flags, block->fd,
                                  offset + block->fd_offset);
                  } else {
                      flags |= MAP_ANONYMOUS;
-                    area = mmap(vaddr, length, prot, flags, -1, 0);
+                    area = mmap(vaddr, page_size, prot, flags, -1, 0);
                  }
                  if (area != vaddr) {
-                    error_report("Could not remap addr: "
-                                 RAM_ADDR_FMT "@" RAM_ADDR_FMT "",
-                                 length, addr);
+                    error_report("Could not remap RAM %s:%" PRIx64 " +%zx",
+                                 block->idstr, offset, page_size);
                      exit(1);
                  }
-                memory_try_enable_merging(vaddr, length);
-                qemu_ram_setup_dump(vaddr, length);
+                memory_try_enable_merging(vaddr, page_size);
+                qemu_ram_setup_dump(vaddr, page_size);
              }
+
+            break;
          }
      }
  }

I'll use your suggested changes, thanks.



