From: David Gibson
Subject: Re: [Qemu-ppc] [RFC PATCH v2] spapr: Support ibm,dynamic-memory-v2 property
Date: Wed, 11 Apr 2018 14:21:43 +1000
User-agent: Mutt/1.9.2 (2017-12-15)

On Tue, Apr 10, 2018 at 09:45:21AM +0530, Bharata B Rao wrote:
> On Tue, Apr 10, 2018 at 01:02:45PM +1000, David Gibson wrote:
> > On Mon, Apr 09, 2018 at 11:55:38AM +0530, Bharata B Rao wrote:
> > > The new property ibm,dynamic-memory-v2 allows memory to be represented
> > > in a more compact manner in the device tree.
> > 
> > I still need to look at this in more detail, but to start with:
> > what's the rationale for this new format?
> > 
> > It's more compact, but why do we care?  The embedded people always
> > whinge about the size of the device tree, but I didn't think that was
> > really a concern with PAPR.
> 
> Here's a real example of how this has affected us earlier:
> 
> SLOF's CAS FDT buffer was initially 32K; it was increased to 64K to
> support 1TB of guest memory, and again to 2MB to support 16TB of guest
> memory.

Ah.. I hadn't thought of the CAS buffer, that's a legitimate concern.
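
For a rough sense of the numbers (a sketch only: the 256MB LMB size and the
24-byte cell layouts are assumptions based on the patch, and the field names
are illustrative), the following compares how much CAS FDT space the two
encodings need for a large guest:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/*
 * Assumed per-entry layouts (24 bytes each, illustrative names):
 *   v1 (ibm,dynamic-memory):    one cell per LMB
 *       { u64 base_addr; u32 drc_index; u32 reserved; u32 aa_index; u32 flags; }
 *   v2 (ibm,dynamic-memory-v2): one cell per set of contiguous LMBs
 *       { u32 seq_lmbs; u64 base_addr; u32 drc_index; u32 aa_index; u32 flags; }
 */
#define DRCONF_CELL_V1_SIZE 24
#define DRCONF_CELL_V2_SIZE 24
#define LMB_SIZE            (256ULL << 20)    /* assume 256MB LMBs */

int main(void)
{
    uint64_t maxmem = 16ULL << 40;            /* 16TB of hotpluggable memory */
    uint64_t nr_lmbs = maxmem / LMB_SIZE;     /* 65536 LMBs */
    uint64_t nr_sets = 2;                     /* e.g. one populated run + one empty run */

    printf("v1: %" PRIu64 " entries, ~%" PRIu64 " KB\n",
           nr_lmbs, nr_lmbs * DRCONF_CELL_V1_SIZE / 1024);
    printf("v2: %" PRIu64 " entries, %" PRIu64 " bytes\n",
           nr_sets, nr_sets * DRCONF_CELL_V2_SIZE);
    return 0;
}

With those assumptions, v1 needs on the order of 1.5MB for 16TB (which lines
up with the 2MB buffer figure above), while v2 stays at a handful of entries
as long as the LMBs fall into a few homogeneous runs.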

> With ibm,dynamic-memory-v2 we are less likely to hit such scenarios.
> 
> Also, theoretically it should be more efficient in the guest kernel
> to handle LMB-sets than individual LMBs.
> 
> We aren't there yet, but I believe grouping of LMBs should eventually
> help us do memory hotplug at set (or DIMM) granularity rather than at
> individual LMB granularity (again, a theoretical possibility).

Ok, sounds like it might be useful.
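
To make the grouping idea concrete (a sketch under assumptions, not the
patch's actual code: the structure and field names are invented here, and it
assumes adjacent LMBs get consecutive DRC indexes so only the first one needs
recording), adjacent LMBs with the same associativity index and flags can be
folded into a single set entry like this:

#include <stdint.h>
#include <stddef.h>

/* Illustrative per-LMB and per-set records for this sketch. */
typedef struct {
    uint64_t base_addr;
    uint32_t drc_index;
    uint32_t aa_index;
    uint32_t flags;
} LmbEntry;

typedef struct {
    uint32_t seq_lmbs;     /* number of contiguous LMBs in the set */
    uint64_t base_addr;    /* base address of the first LMB */
    uint32_t drc_index;    /* DRC index of the first LMB */
    uint32_t aa_index;
    uint32_t flags;
} LmbSet;

/*
 * Coalesce per-LMB entries (sorted by base_addr) into sets of adjacent
 * LMBs that share aa_index and flags.  Returns the number of sets
 * written to 'sets' (caller provides at least nr_lmbs slots).
 */
static size_t lmbs_to_sets(const LmbEntry *lmbs, size_t nr_lmbs,
                           uint64_t lmb_size, LmbSet *sets)
{
    size_t nr_sets = 0;

    for (size_t i = 0; i < nr_lmbs; i++) {
        LmbSet *cur = nr_sets ? &sets[nr_sets - 1] : NULL;

        if (cur &&
            lmbs[i].base_addr == cur->base_addr + cur->seq_lmbs * lmb_size &&
            lmbs[i].aa_index == cur->aa_index &&
            lmbs[i].flags == cur->flags) {
            cur->seq_lmbs++;                    /* extend the current set */
        } else {
            sets[nr_sets++] = (LmbSet) {        /* start a new set */
                .seq_lmbs  = 1,
                .base_addr = lmbs[i].base_addr,
                .drc_index = lmbs[i].drc_index,
                .aa_index  = lmbs[i].aa_index,
                .flags     = lmbs[i].flags,
            };
        }
    }
    return nr_sets;
}

A guest that understands the set granularity could then, in principle, plug
or unplug a whole set (or DIMM) at once instead of walking each LMB.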

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson
