[PATCH] migration: Count new_dirty instead of real_dirty


From: Keqian Zhu
Subject: [PATCH] migration: Count new_dirty instead of real_dirty
Date: Mon, 1 Jun 2020 12:02:50 +0800

The DIRTY_LOG_INITIALLY_ALL_SET feature is on the queue. This patch fixes
the dirty rate calculation for that feature. After the feature is
introduced, real_dirty_pages equals the total memory size at the
beginning, which leads to a wrong dirty rate and false-positive throttling.

Besides, the "real dirty rate" is neither suitable nor very accurate.

1. Not suitable: we mainly care about the relationship between the
   dirty rate and the network bandwidth, so the net increase of dirty
   pages makes more sense (see the sketch after this list).
2. Not very accurate: with manual dirty log clear, some dirty pages
   are cleared during each period, so our "real dirty rate" is less
   than the true "real dirty rate".
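
Below is a minimal, illustrative sketch (not part of the patch) of how a
per-period dirty rate derived from an accumulated new-dirty counter could
be compared against the transfer bandwidth to decide on throttling. All
names (dirty_pages_period, bytes_xfer_period, page_size) and the 50%
threshold are made up for illustration only.

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Return true if the guest dirtied memory faster than roughly half of
     * what was transferred in the same period, i.e. migration is unlikely
     * to converge and CPU throttling may be needed.
     */
    static bool dirty_rate_exceeds_bandwidth(uint64_t dirty_pages_period,
                                             uint64_t bytes_xfer_period,
                                             uint64_t page_size)
    {
        /* Bytes newly dirtied during this sync period. */
        uint64_t bytes_dirty_period = dirty_pages_period * page_size;

        /*
         * Counting only newly dirtied pages keeps this ratio meaningful.
         * Counting every set bit returned by the kernel would report all
         * of RAM as dirty on the first pass when the dirty log starts
         * with all bits set, and would trigger throttling spuriously.
         */
        return bytes_dirty_period > bytes_xfer_period / 2;
    }

The actual throttling policy uses its own thresholds; the point here is
only that the numerator should be the net increase of dirty pages.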

Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
---
 include/exec/ram_addr.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 5e59a3d8d7..af9677e291 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -443,7 +443,7 @@ static inline
 uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
                                                ram_addr_t start,
                                                ram_addr_t length,
-                                               uint64_t *real_dirty_pages)
+                                               uint64_t *accu_dirty_pages)
 {
     ram_addr_t addr;
     unsigned long word = BIT_WORD((start + rb->offset) >> TARGET_PAGE_BITS);
@@ -469,7 +469,6 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
             if (src[idx][offset]) {
                 unsigned long bits = atomic_xchg(&src[idx][offset], 0);
                 unsigned long new_dirty;
-                *real_dirty_pages += ctpopl(bits);
                 new_dirty = ~dest[k];
                 dest[k] |= bits;
                 new_dirty &= bits;
@@ -502,7 +501,6 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
                         start + addr + offset,
                         TARGET_PAGE_SIZE,
                         DIRTY_MEMORY_MIGRATION)) {
-                *real_dirty_pages += 1;
                 long k = (start + addr) >> TARGET_PAGE_BITS;
                 if (!test_and_set_bit(k, dest)) {
                     num_dirty++;
@@ -511,6 +509,7 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
         }
     }
 
+    *accu_dirty_pages += num_dirty;
     return num_dirty;
 }
 #endif
-- 
2.19.1



