
Re: [PATCH 0/2] Speed up QMP stream reading


From: Peter Xu
Subject: Re: [PATCH 0/2] Speed up QMP stream reading
Date: Fri, 3 Jan 2020 14:06:27 -0500

On Fri, Jan 03, 2020 at 11:07:31AM +0000, Dr. David Alan Gilbert wrote:
> * Yury Kotov (address@hidden) wrote:
> > Hi!
> > 
> > 20.12.2019, 19:09, "Markus Armbruster" <address@hidden>:
> > > Yury Kotov <address@hidden> writes:
> > >
> > >>  Hi,
> > >>
> > >>  This series is continuation of another one:
> > >>  [PATCH] monitor: Fix slow reading
> > >>  https://lists.gnu.org/archive/html/qemu-devel/2019-11/msg03722.html
> > >>
> > >>  Which also tried to read more than one byte from a stream at a time,
> > >>  but had some problems with OOB and HMP:
> > >>  https://lists.gnu.org/archive/html/qemu-devel/2019-11/msg05018.html
> > >>
> > >>  This series is an attempt to fix problems described.
> > >
> > > Two problems: (1) breaks HMP migrate -d, and (2) need to think through
> > > how this affects reading of QMP input, in particular OOB.
> > >
> > > This series refrains from changing HMP, thus avoids (1). Good.
> > >
> > > What about (2)? I'm feeling denser than usual today... Can you explain
> > > real slow how QMP input works? PATCH 2 appears to splice in a ring
> > > buffer. Why is that needed?
> > 
> > Yes, the second patch introduced the input ring buffer to store the
> > remaining bytes while the monitor is suspended.
> > 
> > QMP input scheme:
> > 1. monitor_qmp_can_read returns the number of bytes it is ready to receive.
> >    Currently it returns 0 (if suspended) or 1 otherwise.
> >    In my patch: monitor_qmp_can_read returns the free space of the
> >    introduced ring buffer.
> > 
> > 2. monitor_qmp_read receives and handles input bytes.
> >    Currently it just puts the received bytes into the JSON lexer.
> >    If the monitor is suspended, this function isn't called, so no new
> >    command is processed until the monitor resumes.
> >    In my patch: monitor_qmp_read stores the input bytes into the buffer
> >    and then handles the buffered bytes one by one while the monitor is
> >    not suspended. This preserves the original logic: we won't handle new
> >    commands while the monitor is suspended.
> > 
> > 3. monitor_resume schedules monitor_accept_input, which calls
> >    monitor_qmp_handle_inbuf to handle the remaining bytes in the buffer.
> >    monitor_accept_input is a BH scheduled by monitor_resume on the
> >    monitor's AIO context; this ensures that we access the input buffer
> >    only in the monitor's context (the whole scheme is sketched below).
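Roughly, and with hypothetical, simplified names rather than the actual patch
code, that scheme could look like:

/* Hypothetical model of the described scheme (not the patch code): a
 * fixed-size ring buffer holds bytes that arrived while the monitor
 * was suspended. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define INBUF_SIZE 256

typedef struct {
    uint8_t data[INBUF_SIZE];
    size_t head;                /* index of the oldest buffered byte */
    size_t len;                 /* number of buffered bytes          */
    bool suspended;
} QmpInputModel;

/* Step 1: report how many bytes we can accept -- the free space left. */
static size_t can_read(const QmpInputModel *in)
{
    return INBUF_SIZE - in->len;
}

/* Stand-in for feeding one byte to the JSON lexer. */
static void feed_lexer(uint8_t byte)
{
    putchar(byte);
}

/* Drain buffered bytes one by one while the monitor is not suspended. */
static void handle_inbuf(QmpInputModel *in)
{
    while (in->len > 0 && !in->suspended) {
        feed_lexer(in->data[in->head]);
        in->head = (in->head + 1) % INBUF_SIZE;
        in->len--;
    }
}

/* Step 2: store the incoming bytes, then process as much as allowed. */
static void qmp_read(QmpInputModel *in, const uint8_t *buf, size_t size)
{
    for (size_t i = 0; i < size && in->len < INBUF_SIZE; i++) {
        in->data[(in->head + in->len) % INBUF_SIZE] = buf[i];
        in->len++;
    }
    handle_inbuf(in);
}

/* Step 3: on resume, drain whatever is left.  In the real series this
 * runs as a BH on the monitor's AIO context. */
static void resume(QmpInputModel *in)
{
    in->suspended = false;
    handle_inbuf(in);
}

The real code feeds a JSON lexer rather than printing, and the BH indirection
is what keeps buffer access in the monitor's context; the sketch only shows
the buffering logic.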
> > 
> > Example:
> > 1. QMP reads 100 bytes
> > 2. Some command in the first 60 bytes is handled
> > 3. For some reason, the monitor becomes suspended after the first command
> > 4. 40 bytes remain in the buffer
> > 5. After a while, something calls monitor_resume, which handles the
> >    remaining bytes in the buffer (implicitly: resume -> sched BH -> buf),
> >    as in the sketch below
> > 
> > Actually, QMP continues to receive data until the buffer is full, even
> > while the monitor is suspended; it just doesn't process the received data.
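Playing that example through the hypothetical model above (for simplicity the
suspension is triggered between two reads here, whereas in the example it
happens while draining a single 100-byte read):

static void example(void)
{
    QmpInputModel in = { .suspended = false };
    uint8_t bytes[100] = { 0 };     /* 100 bytes arriving from the chardev */

    qmp_read(&in, bytes, 60);       /* first command: drained immediately  */
    in.suspended = true;            /* something suspends the monitor      */
    qmp_read(&in, bytes + 60, 40);  /* 40 bytes buffered, not processed    */

    /* ... later ... */
    resume(&in);                    /* resume -> sched BH -> drain buffer  */
}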
> 
> I *think* that's OK for OOB; my reading is that prior to this set of
> patches, if you filled the queue (even with OOB enabled) you could
> suspend the monitor and block - but you're just not supposed to be
> throwing commands quickly at an OOB monitor.  I'm cc'ing in Peter.

I read this first:

https://lists.nongnu.org/archive/html/qemu-devel/2017-05/msg00472.html

Which makes sense to me.  From the OOB POV, IMHO it's fine, because, as
Markus pointed out, we only call emit() after the json parser/streamer,
so IIUC it should not be affected by how much we read from the chardev
frontend each time.
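As a toy illustration (not the real json-streamer code), anything that only
emits once a complete top-level object has been seen behaves identically no
matter how the input is chunked:

/* Toy stand-in for the JSON streamer: emit() fires only when the brace
 * depth returns to zero, i.e. once per complete top-level object, no
 * matter how the bytes were chunked on the way in.  (It ignores braces
 * inside strings; good enough for the illustration.) */
#include <stdio.h>
#include <string.h>

static int depth;

static void emit(void)
{
    printf("emit: one complete command\n");
}

static void streamer_feed(const char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (buf[i] == '{') {
            depth++;
        } else if (buf[i] == '}' && --depth == 0) {
            emit();
        }
    }
}

int main(void)
{
    const char *input = "{\"execute\":\"query-status\"}";

    streamer_feed(input, strlen(input));         /* one big read ...       */

    for (size_t i = 0; i < strlen(input); i++) { /* ... or byte by byte    */
        streamer_feed(input + i, 1);
    }
    return 0;                                    /* emit() fired once per run */
}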

But from my understanding, what Markus suggested has nothing to do with
the currently introduced ring buffer.  Also, from what I read above I
still didn't find anything that explains why we need a ring buffer
(or I must have missed it).

Thanks,

-- 
Peter Xu
