From: David Hildenbrand
Subject: Re: [Qemu-ppc] [PATCH v3 06/22] memory-device: document MemoryDeviceClass
Date: Wed, 26 Sep 2018 10:49:09 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.0

On 26/09/2018 10:29, Igor Mammedov wrote:
> On Tue, 25 Sep 2018 17:10:43 +0200
> David Hildenbrand <address@hidden> wrote:
> 
>> On 25/09/2018 16:19, Igor Mammedov wrote:
>>> On Thu, 20 Sep 2018 12:32:27 +0200
>>> David Hildenbrand <address@hidden> wrote:
>>>   
>>>> Document the functions and when to not expect errors.
>>>>
>>>> Signed-off-by: David Hildenbrand <address@hidden>
>>>> ---
>>>>  include/hw/mem/memory-device.h | 13 +++++++++++++
>>>>  1 file changed, 13 insertions(+)
>>>>
>>>> diff --git a/include/hw/mem/memory-device.h b/include/hw/mem/memory-device.h
>>>> index f02b229837..d6853156ff 100644
>>>> --- a/include/hw/mem/memory-device.h
>>>> +++ b/include/hw/mem/memory-device.h
>>>> @@ -29,9 +29,22 @@ typedef struct MemoryDeviceState {
>>>>      Object parent_obj;
>>>>  } MemoryDeviceState;
>>>>  
>>>> +/**
>>>> + * MemoryDeviceClass:
>>>> + * @get_addr: The address of the @md in guest physical memory. "0" means
>>>> + * that no address has been specified by the user and that no address has
>>>> + * been assigned yet.
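
As an aside, a sketch of how a caller might consume the convention
documented above; resolve_addr() is a hypothetical helper, not part of
the series:

#include "qemu/osdep.h"
#include "hw/mem/memory-device.h"

/* Hypothetical helper, only to illustrate the "0 means unset"
 * convention documented in the patch above. */
static uint64_t resolve_addr(MemoryDeviceState *md)
{
    const MemoryDeviceClass *mdc = MEMORY_DEVICE_GET_CLASS(md);
    uint64_t addr = mdc->get_addr(md);

    if (addr == 0) {
        /* No address was specified by the user and none has been
         * assigned yet; the machine may pick a suitable range. */
    }
    return addr;
}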
>>> While looking at attempts to implement
>>>  '[Qemu-devel] [PATCH v12 6/6] tpm: add ACPI memory clear interface'
>>> on the QEMU side, where we are trying to clear RAM by walking the
>>> RAMBlock list:
>>>
>>> Maybe we are doing it backwards: instead of trying to deal with the host
>>> abstraction (RAMBlocks/MemoryRegions) and follow-up efforts to tag it with
>>> device model attributes
>>>   '[PATCH] RFC: mark non-volatile memory region'
>>> we should approach it the other way around, from the guest's point of
>>> view, like firmware would, i.e.
>>> make the TPM enumerate RAM devices and tell them to clear the related
>>> memory. A concrete device would know how to do it properly and/or how to
>>> ask its backend to do it, without the need to tag RAMBlocks with device
>>> model specific info.
>>> That would allow us to enumerate all RAM without trying to filter out
>>> non-RAM RAMBlocks, and would not pollute the host-specific
>>> RAMBlock/MemoryRegion with device-specific flags.
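
A sketch of what such enumeration could look like, assuming a new
clear_memory() callback on MemoryDeviceClass; neither the callback nor
tpm_clear_all_ram() exist today:

#include "qemu/osdep.h"
#include "qom/object.h"
#include "hw/mem/memory-device.h"

static int clear_one_memory_device(Object *obj, void *opaque)
{
    MemoryDeviceState *md = (MemoryDeviceState *)
        object_dynamic_cast(obj, TYPE_MEMORY_DEVICE);

    if (md) {
        const MemoryDeviceClass *mdc = MEMORY_DEVICE_GET_CLASS(md);

        /* clear_memory() is an assumed addition: the concrete device
         * (or its backend) knows how to clear its own RAM. */
        if (mdc->clear_memory) {
            mdc->clear_memory(md);
        }
    }
    return 0;
}

static void tpm_clear_all_ram(void)
{
    /* Walk the QOM composition tree instead of the RAMBlock list. */
    object_child_foreach_recursive(object_get_root(),
                                   clear_one_memory_device, NULL);
}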
>>
>> I agree; I also consider it a problem that many different mechanisms
>> in the code assume that any RAMBlock can e.g. be read/written, or,
>> as you said, try to resolve certain properties from RAMBlocks. If we
>> could let the devices handle the details (e.g. via memory-device class
>> callbacks or similar), we would move the logic to a place where we
>> actually know what is happening.
>>
>>>
>>> However, we can't do it right now, since built-in memory doesn't have
>>> a corresponding device model and we use MemoryRegions directly.
>>> Maybe we should use memory-devices and create a derivative 'ramchip'
>>> device to represent builtin RAM and use it instead.
>>
>> Yes, we would need some sort of internal DIMM (however, as you correctly
>> state, DIMM is the wrong term; it really is *some* kind of RAM device).
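
For illustration, a minimal QOM skeleton of such a builtin RAM device
might look like the following; TYPE_RAMCHIP and everything around it
are assumptions, no such type exists:

#include "qemu/osdep.h"
#include "hw/qdev-core.h"
#include "hw/mem/memory-device.h"

#define TYPE_RAMCHIP "ramchip"

typedef struct RamchipDevice {
    DeviceState parent_obj;
    /* addr, numa node, memory backend, ... would live here */
} RamchipDevice;

static void ramchip_class_init(ObjectClass *oc, void *data)
{
    DeviceClass *dc = DEVICE_CLASS(oc);

    /* Internal only: never available via -device/device_add. */
    dc->user_creatable = false;
    /* The MemoryDeviceClass callbacks (get_addr, get_region_size, ...)
     * would be filled in here as well. */
}

static const TypeInfo ramchip_info = {
    .name          = TYPE_RAMCHIP,
    .parent        = TYPE_DEVICE,
    .instance_size = sizeof(RamchipDevice),
    .class_init    = ramchip_class_init,
    .interfaces    = (InterfaceInfo[]) {
        { TYPE_MEMORY_DEVICE },
        { }
    },
};

static void ramchip_register_types(void)
{
    type_register_static(&ramchip_info);
}

type_init(ramchip_register_types)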
>>
>>>
>>> So if we were to do that, we would need to make a get_addr() return
>>> value of 0 valid (I suggest (uint64_t)-1 as the unset value) and/or
>>> make built-in memory work outside of the device_memory region as a
>>> shortcut implementation.
>>>   
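
In code, the suggested sentinel change might look like this sketch (the
MEMORY_DEVICE_ADDR_UNSET name is made up):

/* With builtin RAM typically living at guest address 0, 0 becomes a
 * valid get_addr() result, so "unset" needs a different marker. */
#define MEMORY_DEVICE_ADDR_UNSET ((uint64_t)-1)

static bool memory_device_addr_is_set(MemoryDeviceState *md)
{
    const MemoryDeviceClass *mdc = MEMORY_DEVICE_GET_CLASS(md);

    return mdc->get_addr(md) != MEMORY_DEVICE_ADDR_UNSET;
}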
>>
>> Something like that will certainly work, but it requires many other
>> changes. E.g. the device memory region has to be sized and set up
>> completely differently. But then we can change the default (we have to
>> watch out for compatibility handling when e.g. addr=0 is passed to
>> dimms on the cmdline).
> if we create a builtin RAM device (i.e. no CLI available for it) and
> gate logic by type in the memory-device handlers, then we probably won't
> have any conflicts with the dimm device.
> It's a bit hackish, but it would be internal to the memory-device
> helpers; then we would be able to call 'device_add' internally to
> replace creating MemoryRegions manually. That way we would keep
> compatibility with the old layout but add a device abstraction layer
> over builtin RAM.
> 
> I've contemplated switching to pc-dimm, and it would be ideal, but
> that's a lot more refactoring, most likely with guest ABI breakage and
> the introduction of multiple device_memory zones for pc-dimm.
> So we would probably end up keeping the old way of instantiating initial
> memory for old machine types and adding a new way for new ones.
> I'm not sure it's worth the effort considering the complexity it would
> bring. But I haven't actually tried to implement it, so I don't know the
> issues in detail; this is just my expectation of the problems we would
> face if we go this route.
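
A sketch of the type-gating described above, reusing the hypothetical
TYPE_RAMCHIP; the helper's shape is an assumption as well:

#include "qemu/osdep.h"
#include "qom/object.h"
#include "hw/mem/memory-device.h"

static void memory_device_assign_addr(MemoryDeviceState *md)
{
    if (object_dynamic_cast(OBJECT(md), TYPE_RAMCHIP)) {
        /* Builtin RAM keeps the traditional layout and bypasses
         * the device_memory region entirely. */
        return;
    }
    /* Normal path: pick a free range in the device_memory region
     * for dimm/nvdimm-style devices ... */
}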

I consider this the right way to do it. But as you mention, we have to
watch out for compatibility issues. Especially NUMA nodes might be
tricky - most probably we want one device per node. I also haven't
looked into the details.

Converting one machine at a time would be possible.

-- 

Thanks,

David / dhildenb


