qemu-devel

Re: [PATCH 2/2] hyperv/synic: Allocate as ram_device


From: Dr. David Alan Gilbert
Subject: Re: [PATCH 2/2] hyperv/synic: Allocate as ram_device
Date: Thu, 9 Jan 2020 13:28:21 +0000
User-agent: Mutt/1.13.0 (2019-11-30)

* Roman Kagan (address@hidden) wrote:
> On Thu, Jan 09, 2020 at 02:00:00PM +0100, Vitaly Kuznetsov wrote:
> > "Dr. David Alan Gilbert" <address@hidden> writes:
> > 
> > > And I think vhost-user will fail if you have too many sections - and
> > > the 16 sections from synic I think will blow the slots available.
> > >
> > 
> > SynIC is percpu, it will allocate two 4k pages for every vCPU the guest
> > has so we're potentially looking at hundreds of such regions.
> 
> Indeed.
> 
> I think my original idea to implement overlay pages word-for-word from the
> Hyper-V spec was a mistake, as it led to fragmentation and memslot
> waste.
> 
> I'll look into reworking it without actually mapping extra pages over
> the existing RAM, but achieving overlay semantics by just shoving the
> *content* of the "overlaid" memory somewhere.
> 
> That said, I haven't yet fully understood how the reported issue came
> about, and thus whether the proposed approach would resolve it too.

The problem happens when we end up with:

 a)  0-512k  RAM
 b)  512k +  synic
 c)  570kish-640k  RAM

the page alignment code rounds
  (a) to 0-2MB   - aligning to the hugepage it's in
  (b) leaves as is
  (c) aligns to 0-2MB

  it then tries to coalesce (c) with (a), notices (b) is in the way,
and fails.
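The alignment-and-coalesce failure described above can be sketched roughly as follows (a minimal illustration, not QEMU's actual alignment code; the region sizes are taken from the example layout and the synic size is assumed to be a pair of 4k pages):

```python
HUGEPAGE = 2 * 1024 * 1024  # 2 MiB

def align_section(start, size, page=HUGEPAGE):
    """Round a section out to the enclosing hugepage boundaries."""
    aligned_start = (start // page) * page
    aligned_end = -(-(start + size) // page) * page  # ceiling division
    return aligned_start, aligned_end

# Hypothetical layout from the example above:
ram_a = align_section(0, 512 * 1024)                 # (a) 0-512k RAM
synic_b = (512 * 1024, 512 * 1024 + 8 * 1024)        # (b) synic, left as is
ram_c = align_section(570 * 1024, 70 * 1024)         # (c) ~570k-640k RAM

# Both RAM sections round out to the same 0-2MiB hugepage...
assert ram_a == (0, HUGEPAGE)
assert ram_c == (0, HUGEPAGE)

# ...so coalescing (a) and (c) would have to swallow (b), which is not
# plain RAM, and the merge has to fail:
overlap = ram_a[1] > synic_b[0] and synic_b[1] > ram_a[0]
assert overlap  # (b) sits inside the aligned span
```

The point of the sketch is just that hugepage rounding makes the two RAM sections' aligned spans identical, so the non-RAM synic region is unavoidably inside the span they would need to merge across.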

Given the guest can put SynIC anywhere, I'm not sure that changing its
implementation would help here.
(And changing its implementation would probably break migration
compatibility).

Dave

> Thanks,
> Roman.
> 
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK



