
Re: [PATCH v2 2/2] vhost-user-blk: perform immediate cleanup if disconnect on initialization


From: Raphael Norwitz
Subject: Re: [PATCH v2 2/2] vhost-user-blk: perform immediate cleanup if disconnect on initialization
Date: Wed, 24 Mar 2021 18:44:18 +0000
User-agent: Mutt/1.10.1 (2018-07-13)

Looks good, just clean up the commit message to reflect the way you've
now split the patches.

Reviewed-by: Raphael Norwitz <raphael.norwitz@nutanix.com>

On Wed, Mar 24, 2021 at 12:38:29PM +0300, Denis Plotnikov wrote:
> Commit 4bcad76f4c39 ("vhost-user-blk: delay vhost_user_blk_disconnect")
> introduced postponing vhost_dev cleanup aiming to eliminate qemu aborts
> because of connection problems with vhost-blk daemon.
> 
> However, it introduces a new problem. Now, any communication errors
> during execution of vhost_dev_init() called by vhost_user_blk_device_realize()
> lead to qemu abort on assert in vhost_dev_get_config().
> 
> This happens because vhost_user_blk_disconnect() is postponed, but
> it should have dropped the s->connected flag by the time
> vhost_user_blk_device_realize() attempts to open a new connection.
> When the connection is opened, vhost_dev initialization in
> vhost_user_blk_connect() relies on the s->connected flag, and
> if the flag hasn't been dropped, it skips vhost_dev initialization
> and returns success. vhost_user_blk_device_realize()'s execution flow
> then reaches vhost_dev_get_config(), where it aborts on the assert.
> 
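
For anyone following along, the stale-flag problem boils down to the
toy model below (self-contained C, not QEMU code; the flag and the
assert stand in for s->connected and the assert in
vhost_dev_get_config()):

#include <assert.h>
#include <stdbool.h>

struct dev {
    bool connected;    /* stands in for s->connected */
    bool initialized;  /* stands in for vhost_dev being set up */
};

static void connect_dev(struct dev *d)
{
    if (d->connected) {
        return;            /* init silently skipped */
    }
    d->connected = true;
    d->initialized = true; /* stands in for vhost_dev_init() */
}

int main(void)
{
    struct dev d = { false, false };

    connect_dev(&d);        /* realize: first attempt succeeds */
    d.initialized = false;  /* disconnect, but cleanup is postponed,
                             * so d.connected is never cleared */
    connect_dev(&d);        /* reconnect: skipped, flag still set */
    assert(d.initialized);  /* aborts, like vhost_dev_get_config() */
    return 0;
}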

The next sentence is a little misleading as splitting initialization and
operational logic has already been done in a prior patch. Please reword
to clarify that we've already made the split.

> The connection/disconnection processing should happen
> differently during the initialization and operation of vhost-user-blk.
> During initialization (in vhost_user_blk_device_realize()) we fully
> control the initialization process. At that point, nobody can use the
> device since it isn't initialized, and we don't need to postpone any
> cleanups, so we can do cleanup right away when there are communication
> problems with the vhost-blk daemon.
> During operation the disconnect may happen while the device is in use,
> so the device users may want to use vhost_dev's data to do a rollback
> before vhost_dev is re-initialized (e.g. in vhost_dev_set_log()), so we
> postpone the cleanup.
>
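
The policy described here is easy to state in code. Below is a minimal
self-contained sketch of the decision (hypothetical helper names, not
the actual QEMU functions; the real code is in the hunk further down):

#include <stdbool.h>
#include <stdio.h>

/* hypothetical stand-ins; the real code calls vhost_user_blk_disconnect()
 * directly, or defers it via aio_bh_schedule_oneshot() */
static void cleanup_now(void)      { printf("immediate cleanup\n"); }
static void cleanup_deferred(void) { printf("cleanup deferred to BH\n"); }

static void on_close(bool realized)
{
    if (realized) {
        /* device in use: let users of vhost_dev finish first */
        cleanup_deferred();
    } else {
        /* still initializing: nobody uses the device, clean up now */
        cleanup_now();
    }
}

int main(void)
{
    on_close(false); /* disconnect during realize */
    on_close(true);  /* disconnect during operation */
    return 0;
}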

ditto - this patch no longer splits the two cases.

> The patch splits those two cases: it performs the cleanup immediately on
> initialization, and postpones the cleanup when the device is initialized
> and in use.
> 
> Signed-off-by: Denis Plotnikov <den-plotnikov@yandex-team.ru>
> ---
>  hw/block/vhost-user-blk.c | 48 +++++++++++++++++++--------------------
>  1 file changed, 24 insertions(+), 24 deletions(-)
> 
> diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c
> index 1af95ec6aae7..4e215f71f152 100644
> --- a/hw/block/vhost-user-blk.c
> +++ b/hw/block/vhost-user-blk.c
> @@ -402,38 +402,38 @@ static void vhost_user_blk_event(void *opaque, QEMUChrEvent event,
>          break;
>      case CHR_EVENT_CLOSED:
>          /*
> -         * A close event may happen during a read/write, but vhost
> -         * code assumes the vhost_dev remains setup, so delay the
> -         * stop & clear. There are two possible paths to hit this
> -         * disconnect event:
> -         * 1. When VM is in the RUN_STATE_PRELAUNCH state. The
> -         * vhost_user_blk_device_realize() is a caller.
> -         * 2. In tha main loop phase after VM start.
> -         *
> -         * For p2 the disconnect event will be delayed. We can't
> -         * do the same for p1, because we are not running the loop
> -         * at this moment. So just skip this step and perform
> -         * disconnect in the caller function.
> -         *
> -         * TODO: maybe it is a good idea to make the same fix
> -         * for other vhost-user devices.
> +         * Closing the connection should happen differently on device
> +         * initialization and operation stages.
> +         * On initialization, we want to re-start vhost_dev initialization
> +         * from the very beginning right away when the connection is closed,
> +         * so we clean up vhost_dev on each connection closing.
> +         * On operation, we want to postpone vhost_dev cleanup to let the
> +         * other code perform its own cleanup sequence using vhost_dev data
> +         * (e.g. vhost_dev_set_log).
>           */
>          if (realized) {
> +            /*
> +             * A close event may happen during a read/write, but vhost
> +             * code assumes the vhost_dev remains setup, so delay the
> +             * stop & clear.
> +             */
>              AioContext *ctx = qemu_get_current_aio_context();
>  
>              qemu_chr_fe_set_handlers(&s->chardev, NULL, NULL, NULL, NULL,
>                      NULL, NULL, false);
>              aio_bh_schedule_oneshot(ctx, vhost_user_blk_chr_closed_bh, opaque);
> -        }
>  
> -        /*
> -         * Move vhost device to the stopped state. The vhost-user device
> -         * will be clean up and disconnected in BH. This can be useful in
> -         * the vhost migration code. If disconnect was caught there is an
> -         * option for the general vhost code to get the dev state without
> -         * knowing its type (in this case vhost-user).
> -         */
> -        s->dev.started = false;
> +            /*
> +             * Move vhost device to the stopped state. The vhost-user device
> +             * will be cleaned up and disconnected in BH. This can be useful in
> +             * the vhost migration code. If disconnect was caught there is an
> +             * option for the general vhost code to get the dev state without
> +             * knowing its type (in this case vhost-user).
> +             */
> +            s->dev.started = false;
> +        } else {
> +            vhost_user_blk_disconnect(dev);
> +        }
>          break;
>      case CHR_EVENT_BREAK:
>      case CHR_EVENT_MUX_IN:
> -- 
> 2.25.1
> 
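
For context, the bottom half scheduled in the CHR_EVENT_CLOSED branch
performs the postponed cleanup and re-arms the event handler; abridged
from hw/block/vhost-user-blk.c (check the tree for the exact body):

static void vhost_user_blk_chr_closed_bh(void *opaque)
{
    DeviceState *dev = opaque;
    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
    VHostUserBlk *s = VHOST_USER_BLK(vdev);

    /* postponed cleanup: now it is safe to tear vhost_dev down */
    vhost_user_blk_disconnect(dev);
    /* re-install the event handler so future reconnects are seen */
    qemu_chr_fe_set_handlers(&s->chardev, NULL, NULL, vhost_user_blk_event,
                             NULL, opaque, NULL, true);
}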

