qemu-arm

Re: [PATCH v3] migration: Count new_dirty instead of real_dirty


From: Dr. David Alan Gilbert
Subject: Re: [PATCH v3] migration: Count new_dirty instead of real_dirty
Date: Fri, 3 Jul 2020 15:20:13 +0100
User-agent: Mutt/1.14.5 (2020-06-23)

* Keqian Zhu (zhukeqian1@huawei.com) wrote:
> real_dirty_pages becomes equal to the total RAM size after the dirty
> log sync in ram_init_bitmaps. The reason is that the ramblock's bitmap
> is initialized to all set, so the old path counts every page as
> "real dirty" at the beginning.
> 
> This causes wrong dirty rate and false positive throttling.
> 
> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>

OK, 

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

and queued.

You might still want to look at migration_trigger_throttle and see if
you can stop the throttling while still in the RAM bulk stage.
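
(For illustration only, a minimal standalone sketch of that idea: the
names ram_state and should_throttle are hypothetical stand-ins for
QEMU's RAMState and migration_trigger_throttle, and the 50% check below
is only a rough analogue of the real auto-converge threshold.)

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct ram_state {
        bool bulk_stage;               /* still in the first full pass over RAM */
        uint64_t dirty_pages_period;   /* pages dirtied in this period */
        uint64_t pages_sent_period;    /* pages transferred in this period */
    };

    /* Hypothetical stand-in for migration_trigger_throttle(): never
     * throttle while the bulk stage is still sending every page, since
     * the dirty counts collected there are not meaningful yet. */
    static bool should_throttle(const struct ram_state *rs)
    {
        if (rs->bulk_stage) {
            return false;
        }
        /* rough analogue of the "dirtied faster than sent" check */
        return rs->dirty_pages_period > rs->pages_sent_period / 2;
    }

    int main(void)
    {
        struct ram_state rs = {
            .bulk_stage = true,
            .dirty_pages_period = 1000,
            .pages_sent_period = 100,
        };
        printf("bulk stage: throttle=%d\n", should_throttle(&rs));
        rs.bulk_stage = false;
        printf("after bulk: throttle=%d\n", should_throttle(&rs));
        return 0;
    }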

> ---
> Changelog:
> 
> v3:
>  - Address Dave's comments.
> 
> v2:
>  - Use new_dirty_pages instead of accu_dirty_pages.
>  - Adjust commit messages.
> ---
>  include/exec/ram_addr.h | 5 +----
>  migration/ram.c         | 8 +++++---
>  2 files changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index 7b5c24e928..3ef729a23c 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -442,8 +442,7 @@ static inline void cpu_physical_memory_clear_dirty_range(ram_addr_t start,
>  static inline
>  uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
>                                                 ram_addr_t start,
> -                                               ram_addr_t length,
> -                                               uint64_t *real_dirty_pages)
> +                                               ram_addr_t length)
>  {
>      ram_addr_t addr;
>      unsigned long word = BIT_WORD((start + rb->offset) >> TARGET_PAGE_BITS);
> @@ -469,7 +468,6 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
>              if (src[idx][offset]) {
>                  unsigned long bits = atomic_xchg(&src[idx][offset], 0);
>                  unsigned long new_dirty;
> -                *real_dirty_pages += ctpopl(bits);
>                  new_dirty = ~dest[k];
>                  dest[k] |= bits;
>                  new_dirty &= bits;
> @@ -502,7 +500,6 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
>                          start + addr + offset,
>                          TARGET_PAGE_SIZE,
>                          DIRTY_MEMORY_MIGRATION)) {
> -                *real_dirty_pages += 1;
>                  long k = (start + addr) >> TARGET_PAGE_BITS;
>                  if (!test_and_set_bit(k, dest)) {
>                      num_dirty++;
> diff --git a/migration/ram.c b/migration/ram.c
> index 069b6e30bc..5554a7d2d8 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -859,9 +859,11 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
>  /* Called with RCU critical section */
>  static void ramblock_sync_dirty_bitmap(RAMState *rs, RAMBlock *rb)
>  {
> -    rs->migration_dirty_pages +=
> -        cpu_physical_memory_sync_dirty_bitmap(rb, 0, rb->used_length,
> -                                              &rs->num_dirty_pages_period);
> +    uint64_t new_dirty_pages =
> +        cpu_physical_memory_sync_dirty_bitmap(rb, 0, rb->used_length);
> +
> +    rs->migration_dirty_pages += new_dirty_pages;
> +    rs->num_dirty_pages_period += new_dirty_pages;
>  }
>  
>  /**
> -- 
> 2.19.1
> 
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
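
For readers skimming the diff above: the core of the fix is that only
bits newly set in the destination bitmap are counted. A standalone
illustration (not QEMU code; __builtin_popcountl stands in for QEMU's
ctpopl) of why the old accounting inflated the first sync:

    #include <stdio.h>

    int main(void)
    {
        /* One word of the migration bitmap, already all set, as it is
         * right after ram_init_bitmaps. */
        unsigned long dest = ~0UL;
        /* Bits reported dirty by the dirty log sync. */
        unsigned long bits = 0xf0UL;

        /* New path: count only bits that were clear in dest. */
        unsigned long new_dirty = bits & ~dest;
        dest |= bits;

        /* The old path added 4 (the popcount of bits) to
         * real_dirty_pages even though no page is newly dirty here;
         * the new path correctly reports 0. */
        printf("newly dirty pages in this word: %d\n",
               __builtin_popcountl(new_dirty));
        return 0;
    }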