Re: [PATCH 05/14] migration: Yield bitmap_mutex properly when sending/sleeping
From: Dr. David Alan Gilbert
Subject: Re: [PATCH 05/14] migration: Yield bitmap_mutex properly when sending/sleeping
Date: Tue, 4 Oct 2022 14:55:10 +0100
User-agent: Mutt/2.2.7 (2022-08-07)
* Peter Xu (peterx@redhat.com) wrote:
> Don't take the bitmap mutex when sending pages, or when being throttled by
> migration_rate_limit() (which is a bit tricky to call it here in ram code,
> but seems still helpful).
>
> It prepares for the possibility of concurrently sending pages in >1 threads
> using the function ram_save_host_page() because all threads may need the
> bitmap_mutex to operate on bitmaps, so that either sendmsg() or any kind of
> qemu_sem_wait() blocking for one thread will not block the other from
> progressing.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>
I generally don't like taking locks conditionally, but this kind of looks
OK; I think it needs a big comment at the start of the function saying
that it's entered and left with the lock held, but that it might drop the
lock temporarily.
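For illustration, the header comment being asked for might read something like this (my wording, a sketch only, not part of the patch):

```c
/*
 * Note: this function is entered and left with rs->bitmap_mutex held.
 * However, in postcopy preempt mode it may temporarily drop the mutex
 * around the page send and the rate-limit sleep, so callers must not
 * assume the dirty bitmaps stay unchanged across the call.
 */
static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
```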
> ---
> migration/ram.c | 42 +++++++++++++++++++++++++++++++-----------
> 1 file changed, 31 insertions(+), 11 deletions(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index 8303252b6d..6e7de6087a 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -2463,6 +2463,7 @@ static void postcopy_preempt_reset_channel(RAMState *rs)
>   */
>  static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
>  {
> +    bool page_dirty, release_lock = postcopy_preempt_active();
Could you rename that to something like 'drop_lock'? You are taking the
lock at the end even when you have 'release_lock' set, which makes the
naming a bit strange.
>      int tmppages, pages = 0;
>      size_t pagesize_bits =
>          qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
> @@ -2486,22 +2487,41 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
>              break;
>          }
>
> +        page_dirty = migration_bitmap_clear_dirty(rs, pss->block, pss->page);
> +        /*
> +         * Properly yield the lock only in postcopy preempt mode because
> +         * both migration thread and rp-return thread can operate on the
> +         * bitmaps.
> +         */
> +        if (release_lock) {
> +            qemu_mutex_unlock(&rs->bitmap_mutex);
> +        }
Shouldn't the unlock/lock move inside the 'if (page_dirty) {' ?
>          /* Check the pages is dirty and if it is send it */
> -        if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
> +        if (page_dirty) {
>              tmppages = ram_save_target_page(rs, pss);
> -            if (tmppages < 0) {
> -                return tmppages;
> +            if (tmppages >= 0) {
> +                pages += tmppages;
> +                /*
> +                 * Allow rate limiting to happen in the middle of huge pages if
> +                 * something is sent in the current iteration.
> +                 */
> +                if (pagesize_bits > 1 && tmppages > 0) {
> +                    migration_rate_limit();
This feels interesting; I know it's no change from before, and it's
difficult to do here, but it seems odd to hold the lock around the
sleeping in the rate limit.
Dave
> +                }
>              }
> +        } else {
> +            tmppages = 0;
> +        }
> 
> -            pages += tmppages;
> -            /*
> -             * Allow rate limiting to happen in the middle of huge pages if
> -             * something is sent in the current iteration.
> -             */
> -            if (pagesize_bits > 1 && tmppages > 0) {
> -                migration_rate_limit();
> -            }
> +        if (release_lock) {
> +            qemu_mutex_lock(&rs->bitmap_mutex);
>          }
> +
> +        if (tmppages < 0) {
> +            return tmppages;
> +        }
> +
>          pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
>      } while ((pss->page < hostpage_boundary) &&
>               offset_in_ramblock(pss->block,
> --
> 2.32.0
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK