


From: Marcel Apfelbaum
Subject: Re: [PATCH v2 8/9] util/mmap-alloc: Support RAM_NORESERVE via MAP_NORESERVE
Date: Mon, 8 Mar 2021 10:54:27 +0200

Hi David,

On Mon, Mar 8, 2021 at 10:45 AM David Hildenbrand <david@redhat.com> wrote:
On 07.03.21 15:11, Marcel Apfelbaum wrote:
> Hi David,
>
> On Sun, Mar 7, 2021 at 3:18 PM David Hildenbrand <david@redhat.com> wrote:
>
>     On 05.03.21 16:51, Peter Xu wrote:
>      > On Fri, Mar 05, 2021 at 04:44:36PM +0100, David Hildenbrand wrote:
>      >> On 05.03.21 16:42, Peter Xu wrote:
>      >>> On Fri, Mar 05, 2021 at 11:16:33AM +0100, David Hildenbrand wrote:
>      >>>> +#define OVERCOMMIT_MEMORY_PATH "/proc/sys/vm/overcommit_memory"
>      >>>> +static bool map_noreserve_effective(int fd, bool readonly, bool shared)
>      >>>> +{
>      >>>
>      >>> [...]
>      >>>
>      >>>> @@ -184,8 +251,7 @@ void *qemu_ram_mmap(int fd,
>      >>>>        size_t offset, total;
>      >>>>        void *ptr, *guardptr;
>      >>>> -    if (noreserve) {
>      >>>> -        error_report("Skipping reservation of swap space is not supported");
>      >>>> +    if (noreserve && !map_noreserve_effective(fd, shared, readonly)) {
>      >>>
>      >>> Need to switch "shared" & "readonly"?
>      >>
>      >> Indeed, interestingly it has the same effect (as we don't have anonymous
>      >> read-only memory in QEMU :) )
>      >
>      > But note there is still a "g_assert(!shared || fd >= 0);" inside.. :)
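For clarity, the mix-up Peter points at, restated from the hunks quoted above (declarations reduced to stubs so the two call sites can be compared; this is only a restatement of the review comment, not a new fix):

    #include <stdbool.h>

    /* Declared parameter order in the patch: fd, readonly, shared. */
    static bool map_noreserve_effective(int fd, bool readonly, bool shared);

    static void example(int fd, bool noreserve, bool readonly, bool shared)
    {
        /* As posted: the two bools land in the wrong parameters. */
        if (noreserve && !map_noreserve_effective(fd, shared, readonly)) {
            /* ... */
        }

        /* Matching the declared order (fd, readonly, shared): */
        if (noreserve && !map_noreserve_effective(fd, readonly, shared)) {
            /* ... */
        }
    }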
>
>     Aaaaaand, I just figured that we actually can create shared anonymous
>     memory in QEMU, simply via
>
>     -object memory-backend-ram,share=on
>
>     Introduced in 06329ccecfa0 ("mem: add share parameter to
>     memory-backend-ram"). That's also where we introduced the "shared" flag
>     for qemu_anon_ram_alloc().
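For context, a minimal sketch of what "shared anonymous memory" means at the mmap() level (the helper name is made up here; QEMU's qemu_anon_ram_alloc()/qemu_ram_mmap() path additionally handles alignment and a guard page):

    #include <stddef.h>
    #include <sys/mman.h>

    /* Anonymous memory, but MAP_SHARED instead of MAP_PRIVATE: no file or
     * fd involved, yet the pages behave like a shmem object (visible to
     * children after fork(), usable as the source of a second mapping). */
    static void *alloc_shared_anon(size_t size)
    {
        void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        return ptr == MAP_FAILED ? NULL : ptr;
    }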
>
>     That commit mentions a use case for "RDMA devices in order to remap
>     non-contiguous QEMU virtual addresses to a contiguous virtual address
>     range.". I fail to understand why that requires sharing RAM with child
>     processes.
>
>     Especially:
>
>     a) qemu_ram_is_shared() returned false before patch #1. RAM_SHARED is
>     never set.
>
>     b) qemu_ram_remap() does not work as expected?
>
>     c) ram_discard_range() is broken with shared anonymous memory. Instead
>     of MADV_DONTNEED we need MADV_REMOVE.
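A small sketch of the difference behind c), assuming a Linux MAP_SHARED|MAP_ANONYMOUS region (the helper name is illustrative, not QEMU's ram_discard_range()):

    #include <sys/mman.h>

    /* On a shared anonymous mapping, MADV_DONTNEED only drops this
     * process's page table entries; the pages stay allocated in the
     * backing shmem object. MADV_REMOVE (Linux, shmem/tmpfs-backed
     * memory) punches a hole and frees the backing pages as well. */
    static int discard_shared_anon(void *addr, size_t len)
    {
        return madvise(addr, len, MADV_REMOVE);
    }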
>
>     This looks like a partially broken feature and I wonder if there is an
>     actual user.
>
>     @Marcel, can you clarify if there is an actual use case for shared
>     anonymous memory in QEMU? I.e., if the original use case that required
>     that change is valid? (and why it wasn't able to just use proper shmem)
>
>
> As you correctly stated, the PVRDMA device requires remapping of
> non-contiguous QEMU virtual addresses to a contiguous virtual address range.
>
> In order to do so it calls
>       mremap (... , MREMAP_MAYMOVE | MREMAP_FIXED, ...)

Thanks - I was missing who remaps and how (for a second I thought in
another forked process).

docs/pvrdma.txt seems to describe the situation. Having to use anonymous
shared memory is a bit unfortunate.

I yet haven't figured out how it is valid to remap parts of RAMBlocks to
other locations via MREMAP_MAYMOVE. This sounds to me like we are
punching holes into RAMBlocks - that can't be right. 

Or maybe we are just shuffling around pages within a RAMBlock such that
we don't actually punch holes?

Indeed, we are adding a new mapping, but we leave the previous one in place.
The VM will continue to work with the "original" RAM while the host RDMA subsystem
will work with the re-mapped one.
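That behaviour lines up with mremap()'s old_size == 0 case, which creates a second mapping of the same pages and leaves the first one untouched; Linux only permits that for shareable mappings, which would explain the need for a shared backend. A hedged sketch, not the actual pvrdma code (names are made up):

    #define _GNU_SOURCE
    #include <sys/mman.h>

    /* Map the pages backing 'chunk' a second time at 'dest', leaving the
     * original mapping in place. With old_size == 0, mremap() duplicates
     * the mapping instead of moving it; the kernel requires the source to
     * be a shared (shareable) mapping for this to work. */
    static void *map_chunk_again(void *chunk, void *dest, size_t len)
    {
        return mremap(chunk, 0, len, MREMAP_MAYMOVE | MREMAP_FIXED, dest);
    }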

Thanks,
Marcel
 

Or does that happen when the source VM is stopped and won't ever run again?

--
Thanks,

David / dhildenb

