From: Kevin Wolf
Subject: Re: [Qemu-block] [Qemu-devel] [PULL 16/71] nbd: Increase bs->in_flight during AioContext switch
Date: Thu, 11 Apr 2019 18:48:03 +0200
User-agent: Mutt/1.11.3 (2019-02-01)

Am 11.04.2019 um 16:48 hat Vladimir Sementsov-Ogievskiy geschrieben:
> 11.04.2019 17:15, Kevin Wolf wrote:
> > Am 11.04.2019 um 15:40 hat Vladimir Sementsov-Ogievskiy geschrieben:
> >> 25.02.2019 18:19, Kevin Wolf wrote:
> >>> bdrv_drain() must not leave connection_co scheduled, so bs->in_flight
> >>> needs to be increased while the coroutine is waiting to be scheduled
> >>> in the new AioContext after nbd_client_attach_aio_context().
> >>
> >> Hi!
> >>
> >> I have some questions, could you explain, please?
> >>
> >> "bdrv_drain() must not leave connection_co scheduled" - it's because we
> >> want to be sure that connection_co has yielded from nbd_read_eof, yes?
> >>
> >> But isn't that guaranteed by aio_wait_bh_oneshot? Why do we need to
> >> additionally inc/dec bs->in_flight?
> > 
> > Without incrementing bs->in_flight, nothing would guarantee that
> > aio_poll() is called and the BH is actually executed before bdrv_drain()
> > returns.
> 
> I don't follow. Don't we want exactly this: the BH to be executed while
> the node is still drained, as you write in the comment?

Yes, exactly. But if bs->in_flight == 0, the AIO_WAIT_WHILE() condition
in the drain code could become false, so aio_poll() would not be called
again and drain would return even if the BH is still pending.

Kevin

> > 
> >>>
> >>> Signed-off-by: Kevin Wolf <address@hidden>
> >>> ---
> >>>    block/nbd-client.c | 20 ++++++++++++++++++--
> >>>    1 file changed, 18 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/block/nbd-client.c b/block/nbd-client.c
> >>> index 60f38f0320..bfbaf7ebe9 100644
> >>> --- a/block/nbd-client.c
> >>> +++ b/block/nbd-client.c
> >>> @@ -977,14 +977,30 @@ void nbd_client_detach_aio_context(BlockDriverState *bs)
> >>>        qio_channel_detach_aio_context(QIO_CHANNEL(client->ioc));
> >>>    }
> >>>    
> >>> +static void nbd_client_attach_aio_context_bh(void *opaque)
> >>> +{
> >>> +    BlockDriverState *bs = opaque;
> >>> +    NBDClientSession *client = nbd_get_client_session(bs);
> >>> +
> >>> +    /* The node is still drained, so we know the coroutine has yielded in
> >>> +     * nbd_read_eof(), the only place where bs->in_flight can reach 0, or it is
> >>> +     * entered for the first time. Both places are safe for entering the
> >>> +     * coroutine. */
> >>> +    qemu_aio_coroutine_enter(bs->aio_context, client->connection_co);
> >>> +    bdrv_dec_in_flight(bs);
> >>> +}
> >>> +
> >>>    void nbd_client_attach_aio_context(BlockDriverState *bs,
> >>>                                       AioContext *new_context)
> >>>    {
> >>>        NBDClientSession *client = nbd_get_client_session(bs);
> >>>        qio_channel_attach_aio_context(QIO_CHANNEL(client->ioc), new_context);
> >>>    
> >>> -    /* FIXME Really need a bdrv_inc_in_flight() here */
> >>> -    aio_co_schedule(new_context, client->connection_co);
> >>> +    bdrv_inc_in_flight(bs);
> >>> +
> >>> +    /* Need to wait here for the BH to run because the BH must run while the
> >>> +     * node is still drained. */
> >>> +    aio_wait_bh_oneshot(new_context, nbd_client_attach_aio_context_bh, bs);
> >>>    }
> >>>    
> >>>    void nbd_client_close(BlockDriverState *bs)
> >>>
> >>
> >>
> >> -- 
> >> Best regards,
> >> Vladimir
> 
> 
> -- 
> Best regards,
> Vladimir


