
From: Peter Xu
Subject: Re: [PATCH v3 2/3] migration/ram: Reduce unnecessary rate limiting
Date: Fri, 5 Mar 2021 09:22:50 -0500

Kunkun,

On Fri, Mar 05, 2021 at 03:50:34PM +0800, Kunkun Jiang wrote:
> When the host page is a huge page and something was sent in the
> current iteration, migration_rate_limit() should be executed.
> Otherwise, the call can be skipped to save time.
> 
> Rename tmppages to pages_this_iteration to express its meaning
> more clearly.
> 
> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
> Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
> ---
>  migration/ram.c | 21 ++++++++++++++-------
>  1 file changed, 14 insertions(+), 7 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index a168da5cdd..9fc5b2997c 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1988,7 +1988,7 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
>  static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
>                                bool last_stage)
>  {
> -    int tmppages, pages = 0;
> +    int pages = 0;
>      size_t pagesize_bits =
>          qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
>      unsigned long start_page = pss->page;
> @@ -2000,21 +2000,28 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
>      }
>  
>      do {
> +        int pages_this_iteration = 0;
> +
>          /* Check if the page is dirty and send it if it is */
>          if (!migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
>              pss->page++;
>              continue;
>          }
>  
> -        tmppages = ram_save_target_page(rs, pss, last_stage);
> -        if (tmppages < 0) {
> -            return tmppages;
> +        pages_this_iteration = ram_save_target_page(rs, pss, last_stage);
> +        if (pages_this_iteration < 0) {
> +            return pages_this_iteration;
>          }
>  
> -        pages += tmppages;
> +        pages += pages_this_iteration;

To me, both names are okay; it's just that the new name doesn't really provide
much new information, while being even longer...

Since you seem to prefer cleaning up tmppages, I'm actually wondering whether
it should be called "pages" at all, since ram_save_target_page() mostly returns
either 1 on success or <0 on error.  There's only one corner case in xbzrle
where it can return 0, in save_xbzrle_page():

    if (encoded_len == 0) {
        trace_save_xbzrle_page_skipping();
        return 0;
    }

I think that means the page didn't change at all.  I'm also wondering whether
it could return 1 instead, indicating one page migrated (though actually
skipped!), which should still be fine for the callers, e.g.,
ram_find_and_save_block(), which ultimately checks this "pages" value.

So I think _maybe_ the nicer cleanup is to change that "return 0" to "return
1", then add another patch making the return value (1) 0 if the page was
saved, or (2) <0 on error.  Then here in ram_save_host_page() tmppages can be
renamed to "ret" or "succeed".
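
Something like this, roughly (untested, just to sketch the idea; the
0-on-success convention is the hypothetical part):

    do {
        /* Check if the page is dirty and send it if it is */
        if (!migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
            pss->page++;
            continue;
        }

        /* Hypothetical convention: 0 if the page was saved, <0 on error */
        int ret = ram_save_target_page(rs, pss, last_stage);
        if (ret < 0) {
            return ret;
        }

        pages++;
        pss->page++;
        /*
         * With 0-on-success, every successful call accounted for one page,
         * so the rate limiting only needs the huge page check.
         */
        if (pagesize_bits > 1) {
            migration_rate_limit();
        }
    } while ((pss->page & (pagesize_bits - 1)) &&
             offset_in_ramblock(pss->block,
                                ((ram_addr_t)pss->page) << TARGET_PAGE_BITS));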

>          pss->page++;
> -        /* Allow rate limiting to happen in the middle of huge pages */
> -        migration_rate_limit();
> +        /*
> +         * Allow rate limiting to happen in the middle of huge pages if
> +         * something is sent in the current iteration.
> +         */
> +        if (pagesize_bits > 1 && pages_this_iteration > 0) {
> +            migration_rate_limit();
> +        }

I don't know whether this matters either...  There are two calls in there:

    migration_update_counters(s, now);
    qemu_file_rate_limit(s->to_dst_file);

migration_update_counters() is mostly a no-op in 99.9% of cases.  Still looks
okay...
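
For reference, migration_rate_limit() is roughly shaped like this (a heavily
simplified sketch with the waiting logic omitted, not the exact code):

    bool migration_rate_limit(void)
    {
        MigrationState *s = migrate_get_current();
        int64_t now = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
        bool urgent = false;

        /* Mostly a no-op: only refreshes counters once per interval */
        migration_update_counters(s, now);

        if (qemu_file_rate_limit(s->to_dst_file)) {
            /*
             * Over the limit: wait until the next interval starts, or
             * until an urgent request arrives (details omitted here).
             */
        }
        return urgent;
    }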

Thanks,

>      } while ((pss->page & (pagesize_bits - 1)) &&
>               offset_in_ramblock(pss->block,
>                                  ((ram_addr_t)pss->page) << TARGET_PAGE_BITS));
> -- 
> 2.23.0
> 

-- 
Peter Xu
