From: David Hildenbrand
Subject: Re: [PATCH v5 1/6] system/physmem: handle hugetlb correctly in qemu_ram_remap()
Date: Tue, 28 Jan 2025 19:41:47 +0100
User-agent: Mozilla Thunderbird

On 27.01.25 22:16, William Roche wrote:
On 1/14/25 15:02, David Hildenbrand wrote:
On 10.01.25 22:14, William Roche wrote:
From: William Roche <william.roche@oracle.com>

The list of hwpoison pages used to remap the memory on reset
is based on the backend real page size. When dealing with
hugepages, we create a single entry for the entire page.

To correctly handle hugetlb, we must mmap(MAP_FIXED) a complete
hugetlb page; hugetlb pages cannot be partially mapped.

Co-developed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: William Roche <william.roche@oracle.com>
---

See my comments on the v4 version and my patch proposal.

I'm copying and answering your comments here:


On 1/14/25 14:56, David Hildenbrand wrote:
On 10.01.25 21:56, William Roche wrote:
On 1/8/25 22:34, David Hildenbrand wrote:
On 14.12.24 14:45, William Roche wrote:
From: William Roche <william.roche@oracle.com>
[...]
@@ -1286,6 +1286,10 @@ static void kvm_unpoison_all(void *param)
    void kvm_hwpoison_page_add(ram_addr_t ram_addr)
    {
        HWPoisonPage *page;
+    size_t page_size = qemu_ram_pagesize_from_addr(ram_addr);
+
+    if (page_size > TARGET_PAGE_SIZE)
+        ram_addr = QEMU_ALIGN_DOWN(ram_addr, page_size);

Is that part still required? I thought it would be sufficient (at least
in the context of this patch) to handle it all in qemu_ram_remap().

qemu_ram_remap() will calculate the range to process based on the
RAMBlock page size. IOW, the QEMU_ALIGN_DOWN() we do now in
qemu_ram_remap().

Or am I missing something?

(sorry if we discussed that already; if there is a good reason it might
make sense to state it in the patch description)

You are right, but at this patch level we still need to round up the

s/round up/align_down/

address and doing it here is small enough.

Let me explain.

qemu_ram_remap() in this patch doesn't need an aligned addr. It
will compute the offset into the block and align that down.

The only case where we need the addr besides that is the
error_report(), where I am not 100% sure that is actually what we
want to print. We want to print something like
ram_block_discard_range() does.
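
For illustration, a minimal sketch of that offset handling (not the
actual patch code; the loop and error handling are simplified, and the
vaddr name is illustrative):

    RAMBLOCK_FOREACH(block) {
        ram_addr_t offset = addr - block->offset;
        if (offset < block->used_length) {
            /* align the offset within the block, not the ram_addr_t */
            size_t page_size = qemu_ram_pagesize(block);
            offset = QEMU_ALIGN_DOWN(offset, page_size);
            void *vaddr = ramblock_ptr(block, offset);
            /* ... remap/discard the whole backing page at vaddr ... */
        }
    }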


Note that ram_addr_t is a weird, separate address space. The alignment
does not have any guarantees / semantics there.


See ram_block_add() where we set
      new_block->offset = find_ram_offset(new_block->max_length);

independent of any other RAMBlock properties.

The only alignment we do is
      candidate = ROUND_UP(candidate, BITS_PER_LONG << TARGET_PAGE_BITS);

There is no guarantee that new_block->offset will be aligned to 1 GiB with
a 1 GiB hugetlb mapping.
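
(For example, with 4 KiB target pages and 64-bit longs that round-up is
only 64 << 12 = 256 KiB, far below a 1 GiB hugetlb page size.)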


Note that there is another conceptual issue in this function: offset
should be of type uint64_t; it's not really a ram_addr_t but an
offset into the RAMBlock.

Ok.


Of course, the code changes in patch 3/7, where we change both the x86
and ARM versions of the code to align the memory pointer correctly in
both cases.

Thinking about it more, we should never try aligning ram_addr_t, only
the offset into the memory block or the virtual address.

So please remove this ram_addr_t alignment from this patch, and look
into aligning the virtual address / offset for the other user. Again,
aligning ram_addr_t is not guaranteed to work correctly.
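
As a rough sketch of aligning the host virtual address instead
(illustrative only, not the patch; haddr is a hypothetical name for the
faulting host address, while qemu_ram_block_from_host() and
qemu_ram_pagesize() are existing helpers):

    ram_addr_t offset;
    RAMBlock *rb = qemu_ram_block_from_host(haddr, false, &offset);
    if (rb) {
        /* align the host virtual address down to the backing page size */
        size_t psize = qemu_ram_pagesize(rb);
        haddr = (void *)QEMU_ALIGN_DOWN((uintptr_t)haddr, psize);
    }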


Thanks for the technical details.

The ram_addr_t value alignment to the beginning of the page was useful
to create a single entry in the hwpoison_page_list for a large page, but
I understand that this use of ram_addr alignment may not always be accurate.
Removing this alignment (without replacing it with something else) will
end up creating several page entries in this list for the same hugetlb
page, because when we lose a large page we can receive several MCEs for
the sub-page locations touched on this large page before the VM crashes.
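
For context, the existing duplicate check in kvm_hwpoison_page_add()
only matches exact ram_addr values, so without the align-down each
poisoned sub-page address of the same hugetlb page gets its own entry
(roughly the current accel/kvm code):

    QLIST_FOREACH(page, &hwpoison_page_list, list) {
        if (page->ram_addr == ram_addr) {
            return;    /* already tracked, no new entry */
        }
    }
    page = g_new(HWPoisonPage, 1);
    page->ram_addr = ram_addr;
    QLIST_INSERT_HEAD(&hwpoison_page_list, page, list);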

Right, although the kernel will currently only report a single event IIRC. At least for hugetlb.

So the recovery phase on reset will go through the list to discard/remap
all the entries, and the same hugetlb page can be treated several times.
But when we have a single entry for a large page, this multiple
discard/remap does not occur.

Now, it could be technically acceptable to discard/remap a hugetlb page
several times. Other than not being optimal and taking time, the same
page being mapped or discarded multiple times doesn't seem to be a problem.
So we can leave the code like that without complicating it with, for
example, block and offset attributes on the hwpoison_page_list entries.

Right, this is something to optimize when it really becomes a problem I think.

--
Cheers,

David / dhildenb



