From: Fam Zheng
Subject: Re: [Qemu-devel] [PATCH 05/12] aio: introduce aio_{disable, enable}_clients
Date: Mon, 12 Oct 2015 19:20:07 +0800
User-agent: Mutt/1.5.24 (2015-08-30)
On Mon, 10/12 10:31, Kevin Wolf wrote:
> Am 09.10.2015 um 18:27 hat Fam Zheng geschrieben:
> > On Fri, 10/09 16:31, Kevin Wolf wrote:
> > > Am 09.10.2015 um 07:45 hat Fam Zheng geschrieben:
> > > > Signed-off-by: Fam Zheng <address@hidden>
> > > > ---
> > > > aio-posix.c | 3 ++-
> > > > aio-win32.c | 3 ++-
> > > > async.c | 42 ++++++++++++++++++++++++++++++++++++++++++
> > > > include/block/aio.h | 30 ++++++++++++++++++++++++++++++
> > > > 4 files changed, 76 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/aio-posix.c b/aio-posix.c
> > > > index d25fcfc..a261892 100644
> > > > --- a/aio-posix.c
> > > > +++ b/aio-posix.c
> > > > @@ -261,7 +261,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
> > > >
> > > > /* fill pollfds */
> > > > QLIST_FOREACH(node, &ctx->aio_handlers, node) {
> > > > - if (!node->deleted && node->pfd.events) {
> > > > + if (!node->deleted && node->pfd.events
> > > > + && !aio_type_disabled(ctx, node->type)) {
> > > > add_pollfd(node);
> > > > }
> > > > }
> > > > diff --git a/aio-win32.c b/aio-win32.c
> > > > index f5ecf57..66cff60 100644
> > > > --- a/aio-win32.c
> > > > +++ b/aio-win32.c
> > > > @@ -309,7 +309,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
> > > > /* fill fd sets */
> > > > count = 0;
> > > > QLIST_FOREACH(node, &ctx->aio_handlers, node) {
> > > > - if (!node->deleted && node->io_notify) {
> > > > + if (!node->deleted && node->io_notify
> > > > + && !aio_type_disabled(ctx, node->type)) {
> > > > events[count++] = event_notifier_get_handle(node->e);
> > > > }
> > > > }
> > > > diff --git a/async.c b/async.c
> > > > index 244bf79..855b9d5 100644
> > > > --- a/async.c
> > > > +++ b/async.c
> > > > @@ -361,3 +361,45 @@ void aio_context_release(AioContext *ctx)
> > > > {
> > > > rfifolock_unlock(&ctx->lock);
> > > > }
> > > > +
> > > > +bool aio_type_disabled(AioContext *ctx, int type)
> > > > +{
> > > > + int i = 1;
> > > > + int n = 0;
> > > > +
> > > > + while (type) {
> > > > + bool b = type & 0x1;
> > > > + type >>= 1;
> > > > + n++;
> > >
> > > Any specific reason for leaving client_disable_counters[0] unused?
> >
> > No, I should have started from 0.
> >
> > >
> > > > + i <<= 1;
> > >
> > > i is never read.
> > >
> > > > + if (!b) {
> > > > + continue;
> > > > + }
> > > > + if (ctx->client_disable_counters[n]) {
> > > > + return true;
> > > > + }
> > > > + }
> > > > + return false;
> > > > +}
> > >
> > > In general I wonder whether this function really needs to take a mask
> > > with possibly multiple set bits instead of just a single type.
> >
> > Previous versions used to have more types than "internal" and "external",
> > so it was a mask. So yes, I think a single type will be better now.
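For illustration, with a single type per call and 0-based indexing as agreed
above, the check could collapse to a plain counter lookup. This is only a
sketch, not code from the series; ctz32() and is_power_of_2() are the
helpers from qemu/host-utils.h:

    /* Sketch: "type" is a single client type (exactly one bit set),
     * and bit 0 maps to client_disable_counters[0]. */
    bool aio_type_disabled(AioContext *ctx, int type)
    {
        assert(is_power_of_2(type));
        return ctx->client_disable_counters[ctz32(type)] > 0;
    }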
> >
> > >
> > > > +void aio_disable_enable_clients(AioContext *ctx, int clients_mask,
> > > > + bool is_disable)
> > > > +{
> > > > + int i = 1;
> > > > + int n = 0;
> > > > + aio_context_acquire(ctx);
> > > > +
> > > > + while (clients_mask) {
> > > > + bool b = clients_mask & 0x1;
> > > > + clients_mask >>= 1;
> > > > + n++;
> > > > + i <<= 1;
> > >
> > > This i isn't used either.
> > >
> > > > + if (!b) {
> > > > + continue;
> > > > + }
> > > > + if (ctx->client_disable_counters[n]) {
> > > > + return true;
> > > > + }
> > >
> > > Wait, why are you checking the state instead of setting it?
> >
> > Oops, apparently I mixed up my workspaces, as I do remember coding this
> > assignment. And I must have used the wrong command when building the
> > tree, so I didn't even catch the compile error. :(
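What the loop presumably meant to do is adjust the counter rather than read
it. A minimal sketch of the intended body, assuming the 0-based indexing
discussed above and a matching release of the context on the way out:

    void aio_disable_enable_clients(AioContext *ctx, int clients_mask,
                                    bool is_disable)
    {
        int n = 0;

        aio_context_acquire(ctx);
        while (clients_mask) {
            if (clients_mask & 0x1) {
                if (is_disable) {
                    ctx->client_disable_counters[n]++;
                } else {
                    assert(ctx->client_disable_counters[n] > 0);
                    ctx->client_disable_counters[n]--;
                }
            }
            clients_mask >>= 1;
            n++;
        }
        aio_context_release(ctx);
    }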
> >
> > >
> > > How did you test this series?
> >
> > So far only smoke testing and qemu-iotests, because I don't have a good
> > idea for verifying the transaction's atomicity. Any suggestions?
>
> Perhaps you could use blkdebug to delay something in the middle of the
> transaction while your guest keeps writing stuff? That should result in
> 100% reproducibility.
>
> I guess you actually need to make sure that your guest doesn't do any
> I/O, then set the blkdebug breakpoint, send the transaction, and once a
> request is stopped, you start some I/O in the guest. Resume as soon as
> you know that something bad happened.
>
> Possibly you need to add a new blkdebug event to find a good place to
> suspend a transaction request.
>
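For what it's worth, qemu-io can already drive blkdebug breakpoints, which
may help to prototype the suspend/resume part; the event name, offsets and
image name below are picked for illustration only:

    # Park the first request that hits the "write_aio" event under tag A,
    # do something while it is suspended, then let it finish:
    qemu-io -c 'break write_aio A' \
            -c 'aio_write 0 64k' \
            -c 'wait_break A' \
            -c 'resume A' \
            blkdebug::test.qcow2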
It's difficult to "start some I/O" in the guest in the middle of a
transaction, even with the help of blkdebug, because the BQL is held during
the whole transaction. I think it would be a bit easier to program a VCPU to
constantly submit I/O requests to the vq, but that's far from enough.
Anyway, I'll start by writing some unit test code instead, in
tests/test-aio.c.
Fam
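A rough shape such a test might take, for the sake of discussion. The names
AIO_CLIENT_EXTERNAL, aio_disable_clients()/aio_enable_clients() and the
argument order of aio_set_event_notifier() are assumptions extrapolated from
this series, not a settled API:

    #include <glib.h>
    #include "block/aio.h"
    #include "qapi/error.h"

    static void event_ready_cb(EventNotifier *e)
    {
        event_notifier_test_and_clear(e);
    }

    static void test_disable_external_client(void)
    {
        EventNotifier e;
        AioContext *ctx = aio_context_new(&error_abort);

        event_notifier_init(&e, false);
        aio_set_event_notifier(ctx, &e, AIO_CLIENT_EXTERNAL, event_ready_cb);
        event_notifier_set(&e);

        /* With external clients disabled, aio_poll() must not dispatch
         * the handler, so no progress should be reported. */
        aio_disable_clients(ctx, AIO_CLIENT_EXTERNAL);
        g_assert(!aio_poll(ctx, false));

        /* Once re-enabled, the pending notification should fire. */
        aio_enable_clients(ctx, AIO_CLIENT_EXTERNAL);
        g_assert(aio_poll(ctx, false));

        aio_set_event_notifier(ctx, &e, AIO_CLIENT_EXTERNAL, NULL);
        event_notifier_cleanup(&e);
        aio_context_unref(ctx);
    }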