
From: Maxim Levitsky
Subject: Re: [PATCH 0/4] RFC/WIP: Fix scsi devices plug/unplug races w.r.t virtio-scsi iothread
Date: Mon, 04 May 2020 14:43:31 +0300

On Mon, 2020-05-04 at 13:38 +0200, Paolo Bonzini wrote:
> On 04/05/20 12:59, Stefan Hajnoczi wrote:
> > Regarding drive_del, I guess the issue here is that this HMP command's
> > semantics need to include not just synchronize_rcu() but also some kind of
> > drain_call_rcu() operation that ensures deletion has completed?
> 
> Good idea, this would be Linux's rcu_barrier().
> 
> It would be a pity, though, that we have to do this instead of just having
> the test rely on the DEVICE_DELETED event.
> 
> > drain_call_rcu() can be implemented by invoking call_rcu(temp,
> > drain_call_rcu_cb, rcu) where drain_call_rcu_cb() sets a QemuEvent that
> > the caller is waiting on. This way the caller can be sure that all
> > previously queued call_rcu() callbacks have completed. call_rcu_thread()
> > needs to be tweaked to avoid g_usleep() and instead use a timed wait so
> > that drain_call_rcu() can immediately wake up the thread.
> 
> This was actually intentional in order to let some RCU callbacks pile up
> (based on the observation, or the hope, that RCU data structures are
> written rarely).  But the overall delay would be 50 ms so I don't think
> it's a big deal to keep the unconditional sleep. The synchronize_rcu()
> call could be on the order of 50 ms if --enable-membarrier is in use.
> 
> Another thing to watch out for is that call_rcu needs the iothread lock, so
> you need to release it around the qemu_event_wait() call.
> 
> Paolo
> 
Thanks a lot for the suggestions!
I'll try to implement this in the next version of these patches.
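
For reference, a rough sketch of what the drain_call_rcu() helper suggested
above could look like (illustrative only: it relies on the existing
call_rcu1()/QemuEvent/iothread-lock APIs, the struct and function names are
placeholders, and the actual patch may end up different):

/* Hypothetical sketch of the drain_call_rcu() idea discussed in this
 * thread; not the actual patch.
 */
#include "qemu/osdep.h"
#include "qemu/rcu.h"
#include "qemu/thread.h"
#include "qemu/main-loop.h"

struct rcu_drain {
    struct rcu_head rcu;
    QemuEvent drain_complete_event;
};

static void drain_rcu_callback(struct rcu_head *node)
{
    struct rcu_drain *event = container_of(node, struct rcu_drain, rcu);

    /* Runs in the rcu thread only after every previously queued
     * call_rcu() callback has been invoked. */
    qemu_event_set(&event->drain_complete_event);
}

void drain_call_rcu(void)
{
    struct rcu_drain rcu_drain;

    qemu_event_init(&rcu_drain.drain_complete_event, false);

    /* Queue a marker callback behind everything already pending. */
    call_rcu1(&rcu_drain.rcu, drain_rcu_callback);

    /* The rcu thread takes the iothread lock to run callbacks, so the
     * caller has to drop it while waiting or the drain would deadlock. */
    qemu_mutex_unlock_iothread();
    qemu_event_wait(&rcu_drain.drain_complete_event);
    qemu_mutex_lock_iothread();

    qemu_event_destroy(&rcu_drain.drain_complete_event);
}

drive_del (or a test that needs to observe the deletion) would presumably call
drain_call_rcu() after the unplug, so that all previously queued call_rcu()
callbacks have run before the command returns.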

Best regards,
        Maxim Levitsky



