From: David Hildenbrand
Subject: Re: [qemu-s390x] [Qemu-devel] [PATCH v1 5/5] s390: do not call memory_region_allocate_system_memory() multiple times
Date: Thu, 18 Apr 2019 14:06:25 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.6.1

On 18.04.19 14:01, Igor Mammedov wrote:
> On Thu, 18 Apr 2019 13:24:43 +0200
> David Hildenbrand <address@hidden> wrote:
> 
>> On 18.04.19 11:38, Igor Mammedov wrote:
>>> On Tue, 16 Apr 2019 13:09:08 +0200
>>> Christian Borntraeger <address@hidden> wrote:
>>>   
>>>> This fails with more than 8 TB, e.g. "-m 9T":
>>>>
>>>> [pid 231065] ioctl(10, KVM_SET_USER_MEMORY_REGION, {slot=0, flags=0, guest_phys_addr=0, memory_size=0, userspace_addr=0x3ffc8500000}) = 0
>>>> [pid 231065] ioctl(10, KVM_SET_USER_MEMORY_REGION, {slot=0, flags=0, guest_phys_addr=0, memory_size=9895604649984, userspace_addr=0x3ffc8500000}) = -1 EINVAL (Invalid argument)
>>>>
>>>> It seems that the 2nd memslot gets the full size (and not 9 TB minus
>>>> the size of the first slot).
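
For reference, the EINVAL above is consistent with the kernel capping a
single memslot at KVM_MEM_MAX_NR_PAGES pages. A minimal sketch of that
check, paraphrased from virt/kvm/kvm_main.c (not the literal kernel code;
the helper name is made up):

#include <errno.h>

/* With 4 KiB pages, KVM_MEM_MAX_NR_PAGES caps one slot just below 8 TiB.
 * 9 TiB (9895604649984 bytes, as in the trace above) is 2415919104 pages,
 * which exceeds the cap, hence -EINVAL. */
#define SKETCH_PAGE_SHIFT       12
#define KVM_MEM_MAX_NR_PAGES    ((1UL << 31) - 1)

static int memslot_size_ok(unsigned long long memory_size)
{
    if ((memory_size >> SKETCH_PAGE_SHIFT) > KVM_MEM_MAX_NR_PAGES)
        return -EINVAL;
    return 0;
}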
>>>
>>> It turns out the MemoryRegion is rendered correctly into 2 parts (one
>>> per alias), but a follow-up flatview_simplify() collapses the adjacent
>>> ranges back into one big range.
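
A minimal sketch of the layout being described (one part per alias),
assuming hypothetical names and an assumed ~8 TiB per-slot cap; the real
patch may choose the chunk size differently:

#include "qemu/osdep.h"
#include "exec/memory.h"

#define SLOT_MAX_BYTES (8ULL << 40)   /* assumed per-memslot cap */

/* Carve one big RAM MemoryRegion into per-chunk aliases so that no
 * single KVM memslot exceeds the cap. */
static void map_ram_in_chunks(MemoryRegion *sysmem, MemoryRegion *ram,
                              uint64_t size)
{
    uint64_t offset = 0;
    int i = 0;

    while (size) {
        uint64_t chunk = MIN(size, SLOT_MAX_BYTES);
        MemoryRegion *alias = g_new(MemoryRegion, 1);
        char *name = g_strdup_printf("ram-chunk-%d", i++);

        /* Each alias is a window into the same underlying RAM block. */
        memory_region_init_alias(alias, NULL, name, ram, offset, chunk);
        memory_region_add_subregion(sysmem, offset, alias);
        g_free(name);

        offset += chunk;
        size -= chunk;
    }
}

Because every alias resolves to the same underlying RAM region, the
rendered ranges are adjacent, contiguous and share one MemoryRegion,
which is exactly what lets flatview_simplify() merge them back into one
oversized range.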
>>
>> That sounds dangerous. Imagine doing that at runtime (e.g. hotplugging
>> a DIMM): the KVM memory slot would temporarily be deleted to insert the
>> new, bigger one, and the guest would crash. This could happen if the
>> backing memory of two DIMMs were, by pure luck, allocated side by side
>> in user space.
>>
> 
> I'm not sure I fully get your concern, but if you look at can_merge(),
> it ensures that the ranges belong to the same MemoryRegion.
> 
> It's hard for me to say whether flatview_simplify() works as designed;
> the MemoryRegion code is quite complicated, so I'd defer to Paolo's
> opinion.
> 
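
For reference, a simplified paraphrase of can_merge() from QEMU's
memory.c, with the types reduced so the sketch is self-contained (the
real function also compares dirty_log_mask, romd_mode and readonly):

#include <stdbool.h>
#include <stdint.h>

typedef struct MemoryRegion MemoryRegion;   /* opaque in this sketch */

/* Reduced stand-in for QEMU's FlatRange. */
typedef struct {
    MemoryRegion *mr;              /* region backing this range */
    uint64_t start;                /* guest-physical start */
    uint64_t size;
    uint64_t offset_in_region;     /* offset within mr */
} FlatRangeSketch;

static bool can_merge_sketch(const FlatRangeSketch *r1,
                             const FlatRangeSketch *r2)
{
    return r1->start + r1->size == r2->start   /* adjacent in guest space */
        && r1->mr == r2->mr                    /* same MemoryRegion */
        && r1->offset_in_region + r1->size     /* contiguous within mr */
               == r2->offset_in_region;
}

So ranges backed by two different MemoryRegions are never collapsed; only
adjacent, contiguous views of the same region are.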

What I had in mind:

We have the Memory Region for memory devices (m->device_memory).

Assume the first DIMM is created, allocating memory in the user space
process:

[0x100000000 .. 0x200000000]. It is placed at offset 0 in m->device_memory.

The guest starts to run, and a second DIMM is hotplugged. Memory in the
user space process is allocated (by pure luck) at:

[0x200000000 .. 0x300000000]. It is placed at offset 0x100000000 in
m->device_memory.

Without looking at the code, I could imagine that both might be merged
into a single memory slot. That is my concern. Maybe it is not valid.
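
Spelled out as (hypothetical) memslot contents, using the real struct
from <linux/kvm.h>; the guest-physical base of m->device_memory is made
up for the sketch:

#include <linux/kvm.h>

#define DEVICE_MEMORY_BASE 0x400000000ULL   /* assumed base address */

/* First DIMM: offset 0 in m->device_memory, user space backing at
 * [0x100000000 .. 0x200000000]. */
struct kvm_userspace_memory_region dimm1 = {
    .slot            = 1,
    .guest_phys_addr = DEVICE_MEMORY_BASE,
    .memory_size     = 0x100000000ULL,      /* 4 GiB */
    .userspace_addr  = 0x100000000ULL,
};

/* Hotplugged DIMM: offset 0x100000000 in m->device_memory, user space
 * backing allocated (by pure luck) right behind the first DIMM at
 * [0x200000000 .. 0x300000000]. */
struct kvm_userspace_memory_region dimm2 = {
    .slot            = 2,
    .guest_phys_addr = DEVICE_MEMORY_BASE + 0x100000000ULL,
    .memory_size     = 0x100000000ULL,      /* 4 GiB */
    .userspace_addr  = 0x200000000ULL,
};

/* Both the guest-physical and the user space ranges line up back to
 * back. If the flat view merged them, slot 1 would have to be deleted
 * and re-inserted as one 8 GiB slot while the guest is running. */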

-- 

Thanks,

David / dhildenb


