Re: [PATCH 2/2] softmmu/physmem: fix dirty memory bitmap memleak
From: David Hildenbrand
Subject: Re: [PATCH 2/2] softmmu/physmem: fix dirty memory bitmap memleak
Date: Thu, 31 Mar 2022 18:26:11 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.6.2
On 31.03.22 14:27, Peter Xu wrote:
> On Thu, Mar 31, 2022 at 10:37:39AM +0200, David Hildenbrand wrote:
>> On 25.03.22 16:40, Andrey Ryabinin wrote:
>>> The sequence of ram_block_add()/qemu_ram_free()/ram_block_add()
>>> function calls leads to leaking some memory.
>>>
>>> ram_block_add() calls dirty_memory_extend() to allocate bitmap blocks
>>> for new memory. These blocks only grow but never shrink. So the
>>> qemu_ram_free() restores the RAM size back to its original state but
>>> doesn't touch the dirty memory bitmaps.
>>>
>>> After qemu_ram_free() there is no way of knowing that we have
>>> allocated dirty memory bitmaps beyond current RAM size.
>>> So the next ram_block_add() will call dirty_memory_extend() again
>>> to allocate new bitmaps and rewrite pointers to bitmaps left after
>>> the first ram_block_add()/dirty_memory_extend() calls.
>>>
>>> Rework dirty_memory_extend() to be able to shrink dirty maps,
>>> also rename it to dirty_memory_resize(). And fix the leak by
>>> shrinking dirty memory maps on qemu_ram_free() if needed.
>>>
>>> Fixes: 5b82b703b69a ("memory: RCU ram_list.dirty_memory[] for safe RAM
>>> hotplug")
>>> Cc: qemu-stable@nongnu.org
>>> Signed-off-by: Andrey Ryabinin <arbn@yandex-team.com>
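>>
>> The grow-only behaviour described above, and the shrink the patch adds,
>> can be modeled with a tiny self-contained sketch. Everything here
>> (DirtyBlocks, dirty_memory_resize(), BLOCK_PAGES) is a simplified
>> stand-in for QEMU's ram_list.dirty_memory[] machinery, not the actual
>> implementation:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative only: pages covered by one bitmap block (made-up value). */
#define BLOCK_PAGES 4096UL

typedef struct {
    unsigned long **blocks;  /* one bitmap allocation per block */
    size_t num_blocks;
} DirtyBlocks;

/*
 * Resize the block array to cover exactly `pages` pages. Unlike the old
 * grow-only dirty_memory_extend(), this frees tail blocks that are no
 * longer needed, so a plug/unplug/plug cycle does not overwrite (and
 * thereby leak) live allocations.
 */
static void dirty_memory_resize(DirtyBlocks *d, size_t pages)
{
    size_t need = (pages + BLOCK_PAGES - 1) / BLOCK_PAGES;

    /* Shrink: free bitmap blocks beyond the new size. */
    for (size_t i = need; i < d->num_blocks; i++) {
        free(d->blocks[i]);
    }
    d->blocks = realloc(d->blocks, need * sizeof(*d->blocks));
    /* Grow: allocate zeroed bitmaps for newly covered pages. */
    for (size_t i = d->num_blocks; i < need; i++) {
        d->blocks[i] = calloc(BLOCK_PAGES / 8, 1);
    }
    d->num_blocks = need;
}
```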
>>
>> I looked at this a while ago and I think the problem is more involved,
>> because we might actually generate holes for which we can free the
>> bitmap. I think this patch improves the situation, though.
>>
>>
>> IIRC if you hotplug two dimms and then hotunplug only the latter, the
>
> I assume you meant "former"? :)
I remember it would have to be the one "plugged first" :)
>
>> bitmap for the first dimm will remain as long as the second dimm isn't
>> hotunplugged.
>
> IMHO it's fine to keep the dirty block for the unplugged hole. It'll be
> better if we could free it, but we can fix the memory leak first which
> seems to be more severe. The dirty memory isn't extremely large (a 32K
> ratio to mem size) if it's just kept idle, but frequent plug/unplug will
> leak unbounded host mem.
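>
> For reference, the ratio quoted above follows from keeping one dirty bit
> per target page, so each bitmap byte tracks 8 pages. A minimal sketch,
> assuming 4 KiB pages (the real granularity is TARGET_PAGE_SIZE, and the
> helper name is made up):

```c
#include <assert.h>

/* Guest RAM covered by one byte of dirty bitmap: 8 pages per byte. */
static unsigned long ram_bytes_per_bitmap_byte(unsigned long page_size)
{
    return page_size * 8;
}
```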
Oh, I see, thanks for clarifying.
--
Thanks,
David / dhildenb