Re: [PATCH] hw/nvme: fix handling of over-committed queues
From: Klaus Jensen
Subject: Re: [PATCH] hw/nvme: fix handling of over-committed queues
Date: Tue, 29 Oct 2024 13:33:38 +0100
On Oct 28 09:15, Keith Busch wrote:
> On Mon, Oct 28, 2024 at 10:01:50AM +0100, Klaus Jensen wrote:
> > On Oct 25 10:45, Keith Busch wrote:
> > > On Fri, Oct 25, 2024 at 12:50:45PM +0200, Klaus Jensen wrote:
> > > > @@ -1520,9 +1520,16 @@ static void nvme_post_cqes(void *opaque)
> > > >          nvme_inc_cq_tail(cq);
> > > >          nvme_sg_unmap(&req->sg);
> > > > +
> > > > +        if (QTAILQ_EMPTY(&sq->req_list) && !nvme_sq_empty(sq)) {
> > > > +            qemu_bh_schedule(sq->bh);
> > > > +        }
> > > > +
> > > >          QTAILQ_INSERT_TAIL(&sq->req_list, req, entry);
> > > >      }
> > >
> > > Shouldn't we schedule the bottom half after the req has been added to
> > > the list? I think everything the callback needs should be written
> > > prior to calling qemu_bh_schedule().
> > >
> >
> > Not as far as I know. It is only queued up; it won't be executed
> > immediately. It might run next (ASAP) if we are already in a bottom
> > half, but not before whatever context we are in returns.
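To illustrate the above, here is a minimal, self-contained GLib program
(not QEMU code, just an analogy; GLib idle sources are dispatched from
the main loop much like our bottom halves). The callback only runs once
the scheduling context has returned to the loop, so a write made after
the "schedule" call is still visible to it. Build with:
gcc demo.c $(pkg-config --cflags --libs glib-2.0)

#include <glib.h>
#include <stdio.h>

static int value;        /* stands in for state such as sq->req_list */
static GMainLoop *loop;

static gboolean callback(gpointer data)
{
    /* Dispatched from the main loop, after main() handed control back:
     * it sees value == 42 even though the write happened after the
     * g_idle_add() call. */
    printf("callback sees value = %d\n", value);
    g_main_loop_quit(loop);
    return G_SOURCE_REMOVE;
}

int main(void)
{
    loop = g_main_loop_new(NULL, FALSE);
    g_idle_add(callback, NULL);  /* "schedule": only queues the source */
    value = 42;                  /* written after scheduling, like the
                                  * QTAILQ_INSERT_TAIL in the hunk above */
    g_main_loop_run(loop);       /* callback actually runs here */
    g_main_loop_unref(loop);
    return 0;
}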
>
> Okay. I was trying to come up with an explanation for why Waldek was
> still able to reproduce the problem, and that's all I have so far.
>
I was too eager when removing the start_sqs logic: I also removed the
kick of the cq when it transitions from full to non-full. The core fix
is the right one, but I introduced a bug...
The v2 just posted should be good; verified it with OSv master.
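Roughly, the kick that v2 restores has this shape (a sketch only, not
the literal diff; nvme_cq_full() and nvme_update_cq_head() are names
from hw/nvme/ctrl.c):

/* Hypothetical sketch -- see the v2 patch for the actual fix. When the
 * head pointer moves and the cq goes from full to non-full, pending
 * completions can be posted again, so the cq bottom half must be
 * rescheduled: */
static void nvme_update_cq_head(NvmeCQueue *cq)
{
    bool was_full = nvme_cq_full(cq);

    /* ... update cq->head from the (shadow) doorbell ... */

    if (was_full && !nvme_cq_full(cq)) {
        qemu_bh_schedule(cq->bh);
    }
}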