Re: [PATCH 0/2] kvm: clear dirty bitmaps from all overlapping memslots


From: Igor Mammedov
Subject: Re: [PATCH 0/2] kvm: clear dirty bitmaps from all overlapping memslots
Date: Mon, 23 Sep 2019 18:15:12 +0200

On Mon, 23 Sep 2019 09:29:46 +0800
Peter Xu <address@hidden> wrote:

> On Fri, Sep 20, 2019 at 03:58:51PM +0200, Igor Mammedov wrote:
> > On Fri, 20 Sep 2019 20:19:51 +0800
> > Peter Xu <address@hidden> wrote:
> >   
> > > On Fri, Sep 20, 2019 at 12:21:20PM +0200, Paolo Bonzini wrote:  
> > > > A single ram_addr (representing a host-virtual address) could be aliased
> > > > to multiple guest physical addresses.  Since the KVM dirty page reporting
> > > > works on guest physical addresses, we need to clear all of the aliases
> > > > when a page is migrated, or there is a risk of losing writes to the
> > > > aliases that were not cleared.
> > > 
> > > (CCing Igor too so Igor would be aware of these changes that might
> > >  conflict with the recent memslot split work)
> > >   
> > 
> > Thanks Peter,
> > I'll rebase on top of this series and do some more testing  
> 
> Igor,
> 
> It turns out that this series is probably not required for the current
> tree, because memory_region_clear_dirty_bitmap() should already handle
> the aliasing issue correctly.  However, this patchset will be a
> prerequisite of your split series, because once we split memory slots
> it becomes possible for log_clear() to be applied to multiple KVM
> memslots.
> 
> Would you like to pick these two patches directly into your series?
> The 1st paragraph of the 2nd patch is probably inaccurate and needs
> amending (as mentioned).

Yep, the commit message doesn't fit the patch; how about the following description:
"
Currently a MemoryRegionSection has a 1:1 mapping to a KVMSlot.
However, the next patch will allow splitting a MemoryRegionSection into
several KVMSlots, so make sure that kvm_physical_log_slot_clear()
is able to handle such a 1:N mapping.
"

> 
> Thanks,
> 



