qemu-devel

Re: [PATCH v6 3/5] 9pfs: add new function v9fs_co_readdir_many()


From: Greg Kurz
Subject: Re: [PATCH v6 3/5] 9pfs: add new function v9fs_co_readdir_many()
Date: Thu, 7 May 2020 17:59:52 +0200

On Thu, 07 May 2020 14:16:43 +0200
Christian Schoenebeck <address@hidden> wrote:

> On Montag, 4. Mai 2020 11:18:34 CEST Greg Kurz wrote:
> > On Fri, 01 May 2020 16:04:41 +0200
> > 
> > Christian Schoenebeck <address@hidden> wrote:
> > > On Donnerstag, 30. April 2020 15:30:49 CEST Greg Kurz wrote:
> > > > > > I agree that a client that issues concurrent readdir requests on the
> > > > > > same fid is probably asking for trouble, but this is permitted by the
> > > > > > spec. Whether we should detect such conditions and warn or even fail
> > > > > > is a discussion for another thread.
> > > > > > 
> > > > > > The locking is only needed to avoid concurrent accesses to the
> > > > > > dirent
> > > > > > structure returned by readdir(), otherwise we could return partially
> > > > > > overwritten file names to the client. It must be done for each
> > > > > > individual
> > > > > > call to readdir(), but certainly not for multiple calls.
> > > > > 
> > > > > Yeah, that would resolve this issue more appropriately for 9p2000.L,
> > > > > since Treaddir specifies an offset, but for 9p2000.u the result of a
> > > > > concurrent read on a directory would still be undefined.
> > > > 
> > > > The bad client behavior you want to tackle has nothing to do with
> > > > the locking itself. Since all the code in 9p.c runs serialized in
> > > > the context of the QEMU main loop, concurrent readdir requests could
> > > > easily be detected up-front with a simple flag in the fid structure.
> > > 
> > > Well, it's fine with me. I don't really see an issue here right now. But
> > > the claim that all the code is serialized is not fully true. Most of the
> > > 9p.c code still dispatches heavily between the main thread and worker
> > > threads, back and forth, so the order of request processing might change
> > > quite arbitrarily in between. Just keep that in mind.
> > 
> > Just to make things clear. The code in 9p.c is ALWAYS exclusively run by
> > the main thread. Only the code called under v9fs_co_run_in_worker() is
> > dispatched on worker threads. So, yes, the order of individual backend
> > operations may change, but the start of a new client request is necessarily
> > serialized with the completion of pending ones, which is the only thing
> > we actually care about.
> 
> I just looked at this. The 9p.c code is called by the main I/O thread only,
> that's clear. Requests also start in order, yes, but you seem to assume that
> the main I/O thread would not grab the next client request from the queue
> before completing the current/previous one (i.e. before transmitting the
> result to the client). If so, I am not so sure about this claim:
> 

Hrm... I don't think I've ever said anything like that :)

What I'm saying is that, thanks to the serialization of
request start and completion, v9fs_readdir() and v9fs_read()
can:
- flag the fid as being involved in a readdir request
- clear the flag when the request is complete or failed
- directly fail or print a warning if the fid already
  has the flag set (i.e. a previous request hasn't
  completed yet)

But again, I'm not a big fan of adding code to handle
such an unlikely situation which is even not explicitly
forbidden by the 9p spec.

> For instance, v9fs_path_write_lock() uses a co-mutex, right? So an
> unsuccessful lock would cause the main I/O thread to grab the next request
> before completing the current/previous one.
> 

An unsuccessful lock would cause the main I/O thread to switch
to some other task.

> And what happens on any run_in_worker({}) call? If there is another client
> request in the queue, the main I/O thread would pull that request from the
> queue before waiting for the worker thread to complete its task, wouldn't it?
> 

The main I/O thread won't wait for anything, so, yes, it will probably
start handling the new request until it yields.

> Just looked at the code so far, haven't tested it yet ...
> 
> Best regards,
> Christian Schoenebeck
> 
> 



