RE: [RFC 0/1] Use dmabufs for display updates instead of pixman


From: Zhang, Tina
Subject: RE: [RFC 0/1] Use dmabufs for display updates instead of pixman
Date: Thu, 18 Mar 2021 06:24:48 +0000


> -----Original Message-----
> From: Qemu-devel <qemu-devel-bounces+tina.zhang=intel.com@nongnu.org>
> On Behalf Of Gerd Hoffmann
> Sent: Tuesday, March 2, 2021 8:04 PM
> To: Kasireddy, Vivek <vivek.kasireddy@intel.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>; Kim, Dongwon
> <dongwon.kim@intel.com>; qemu-devel@nongnu.org; Marc-André Lureau
> <marcandre.lureau@redhat.com>
> Subject: Re: [RFC 0/1] Use dmabufs for display updates instead of pixman
> 
> On Tue, Mar 02, 2021 at 12:03:57AM -0800, Vivek Kasireddy wrote:
> > This is still a WIP/RFC patch that attempts to use dmabufs for display
> > updates, with the help of the udmabuf driver, instead of pixman. This patch
> > is posted to the ML to elicit feedback and start a discussion on whether
> > something like this would be useful, mainly for non-Virgl-rendered BOs but
> > also potentially in other cases.
> 
> Yes, it surely makes sense to go in that direction.
> The patch as-is doesn't: it breaks the guest/host interface.
> That's ok-ish for a quick proof-of-concept, but clearly not merge-able.

Hi,

Following the proposal in
https://lore.kernel.org/dri-devel/20210212110140.gdpu7kapnr7ovdcn@sirius.home.kraxel.org/,
we have made some progress on a 'virtio-gpu (display) + pass-through GPU'
prototype. We leverage the kmsro framework provided by Mesa to let the
virtio-gpu display work with a passed-through GPU in headless mode. The MR is
here: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/9592

Although our work differs from this ongoing discussion, which is about a
general way to share buffers between guest and host, we would like to build on
this patch. Is there any plan to refine it? For example, the uuid blob support
could be moved into a separate patch, since the implementation of our proposal
does not require guest user space to share buffers with the host side, and
dma-buf support could perhaps also be added for the cursor plane. Thanks.
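
(For reference, the host-side dma-bufs in the patch under discussion come from
the kernel's udmabuf driver. Below is a minimal, self-contained sketch of that
mechanism -- plain userspace C against /dev/udmabuf and the UDMABUF_CREATE
ioctl from <linux/udmabuf.h>. The single page-sized memfd is only an
illustration, not what QEMU would do with guest RAM.)

/* Sketch only: create a dma-buf from memfd-backed pages via udmabuf. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/udmabuf.h>

int main(void)
{
    const off_t size = 4096;                  /* one page, illustration only */

    /* udmabuf wants a sealable memfd with F_SEAL_SHRINK set */
    int memfd = memfd_create("guest-fb", MFD_ALLOW_SEALING);
    if (memfd < 0 || ftruncate(memfd, size) < 0) {
        perror("memfd");
        return 1;
    }
    fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

    int devfd = open("/dev/udmabuf", O_RDWR);
    if (devfd < 0) {
        perror("/dev/udmabuf");
        return 1;
    }

    struct udmabuf_create create = {
        .memfd  = memfd,
        .flags  = UDMABUF_FLAGS_CLOEXEC,
        .offset = 0,
        .size   = size,
    };
    int dmabuf_fd = ioctl(devfd, UDMABUF_CREATE, &create);
    if (dmabuf_fd < 0) {
        perror("UDMABUF_CREATE");
        return 1;
    }

    /* dmabuf_fd can now be imported into EGL/DRM on the consumer side */
    printf("dma-buf fd: %d\n", dmabuf_fd);
    return 0;
}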

BR,
Tina

> 
> > TODO:
> > - Use Blob resources for getting meta-data such as modifier, format, etc.
> 
> That is pretty much mandatory.  Without blob resources there is no concept of
> resources shared between host and guest in virtio-gpu; all data is explicitly
> copied with transfer commands.
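
(For readers who have not looked at the blob-resource extension yet: the guest
creates such a resource with VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB. The layout
below is a reference sketch following the virtio-gpu spec and the kernel's
<linux/virtio_gpu.h>, with plain integer types for readability; it is not the
QEMU implementation.)

/* nr_entries scatter-gather entries describing the guest pages follow the
 * structure in the request; the spec uses little-endian fields. */
#define VIRTIO_GPU_BLOB_MEM_GUEST          0x0001  /* backed by guest memory */
#define VIRTIO_GPU_BLOB_MEM_HOST3D         0x0002  /* backed by host memory  */
#define VIRTIO_GPU_BLOB_MEM_HOST3D_GUEST   0x0003  /* host object + guest shadow */

#define VIRTIO_GPU_BLOB_FLAG_USE_MAPPABLE  0x0001
#define VIRTIO_GPU_BLOB_FLAG_USE_SHAREABLE 0x0002

struct virtio_gpu_resource_create_blob {
    struct virtio_gpu_ctrl_hdr hdr;   /* VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB */
    uint32_t resource_id;
    uint32_t blob_mem;                /* one of VIRTIO_GPU_BLOB_MEM_*  */
    uint32_t blob_flags;              /* VIRTIO_GPU_BLOB_FLAG_USE_*    */
    uint32_t nr_entries;
    uint64_t blob_id;                 /* names the host-side object    */
    uint64_t size;
};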
> 
> Which implies quite a bit of work because we don't have blob resource support
> in qemu yet.
> 
> > - Test with Virgl-rendered BOs to see if this can be used in that case.
> 
> That also opens up the question of how to go forward with virtio-gpu in general.
> The object hierarchy we have right now (skipping pci + vga variants for
> simplicity):
> 
>   TYPE_VIRTIO_GPU_BASE (abstract base)
>    -> TYPE_VIRTIO_GPU (in-qemu implementation)
>    -> TYPE_VHOST_USER_GPU (vhost-user implementation)
> 
> When compiled with opengl + virgl, TYPE_VIRTIO_GPU has a virgl=on/off
> property.  Having a single device is not ideal for modular builds, because
> the hw-display-virtio-gpu.so module has a dependency on ui-opengl.so, so that
> is needed (due to symbol references) even for the virgl=off case.  Also, the
> code is a bit of a #ifdef mess.
> 
> I think we should split TYPE_VIRTIO_GPU into two devices.  Remove
> virgl+opengl support from TYPE_VIRTIO_GPU.  Add a new
> TYPE_VIRTIO_GPU_VIRGL, with either TYPE_VIRTIO_GPU or
> TYPE_VIRTIO_GPU_BASE as parent (not sure which is easier), have all
> opengl/virgl support code there.
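
(Purely to make the proposed split concrete, here is a rough QOM sketch of
what registering such a device could look like. TYPE_VIRTIO_GPU_VIRGL and
VirtIOGPUVirgl are placeholders from this discussion, not existing QEMU code,
and whether the parent is TYPE_VIRTIO_GPU or TYPE_VIRTIO_GPU_BASE is exactly
the open question above.)

/* Sketch only: a virgl-only device type that keeps all opengl/virgl code out
 * of the plain TYPE_VIRTIO_GPU device. */
#include "qemu/osdep.h"
#include "hw/virtio/virtio-gpu.h"

#define TYPE_VIRTIO_GPU_VIRGL "virtio-gpu-virgl-device"   /* placeholder name */

typedef struct VirtIOGPUVirgl {
    VirtIOGPU parent_obj;              /* reuse the non-virgl device state */
    /* virgl/opengl-only state would live here */
} VirtIOGPUVirgl;

static void virtio_gpu_virgl_class_init(ObjectClass *klass, void *data)
{
    /* override the command-processing / rendering hooks with
     * virglrenderer-backed implementations here */
}

static const TypeInfo virtio_gpu_virgl_info = {
    .name          = TYPE_VIRTIO_GPU_VIRGL,
    .parent        = TYPE_VIRTIO_GPU,          /* or TYPE_VIRTIO_GPU_BASE */
    .instance_size = sizeof(VirtIOGPUVirgl),
    .class_init    = virtio_gpu_virgl_class_init,
};

static void virtio_gpu_virgl_register_types(void)
{
    type_register_static(&virtio_gpu_virgl_info);
}

type_init(virtio_gpu_virgl_register_types)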
> 
> I think when using opengl it makes sense to also require virgl, so we can use
> the virglrenderer library to manage blob resources (even when the actual
> rendering isn't done with virgl).  That also reduces the complexity and the
> test matrix.
> 
> Maybe it even makes sense to deprecate in-qemu virgl support and focus
> exclusively on the vhost-user implementation, so we don't have to duplicate
> all the work for both implementations.
> 
> > Considerations/Challenges:
> > - One of the main concerns with using dmabufs is how to synchronize
> > access to them and this use-case is no different. If the Guest is
> > running Weston, then it could use a maximum of 4 color buffers but
> > uses only 2 by default and flips between them if it is not sharing the
> > FBs with other plugins while running with the drm backend. In this
> > case, how do we make sure that Weston and Qemu UI are not using the
> > same buffer at any given time?
> 
> There is graphic_hw_gl_block + graphic_hw_gl_flushed for synchronization.
> Right now this is only wired up in spice, and it is rather simple (it just
> stalls virgl rendering instead of providing per-buffer synchronization).
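
(A rough sketch of how a display backend could use those two hooks around a
guest buffer, mirroring the simple stall described above. QemuConsole and the
two functions come from include/ui/console.h; the actual dmabuf import/draw
step is omitted and this is not a tested patch.)

#include "qemu/osdep.h"
#include "ui/console.h"

static void ui_present_guest_buffer(QemuConsole *con)
{
    graphic_hw_gl_block(con, true);    /* stall further guest/virgl rendering */

    /* ... import the guest dmabuf as a texture and draw it to the window ... */

    graphic_hw_gl_block(con, false);   /* let the guest render again */
    graphic_hw_gl_flushed(con);        /* notify the device that the flush completed */
}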
> 
> > - If we have Xorg running in the Guest, then it gets even more
> > interesting as Xorg in some cases does frontbuffer rendering (uses
> > DRM_IOCTL_MODE_DIRTYFB).
> 
> Well, if the guest does frontbuffer rendering we can't do much about it and
> have to live with rendering glitches, I guess.
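
(For context, frontbuffer damage is exactly what DRM_IOCTL_MODE_DIRTYFB
carries. A minimal guest-side sketch using libdrm's drmModeDirtyFB wrapper;
fd and fb_id are placeholders and error handling is omitted.)

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Sketch only: report one damaged rectangle of the frontbuffer, so the
 * kernel driver (and, for virtio-gpu, ultimately the host) knows what
 * changed without a page flip. */
static void report_frontbuffer_damage(int fd, uint32_t fb_id,
                                      uint16_t x, uint16_t y,
                                      uint16_t w, uint16_t h)
{
    drmModeClip clip = {
        .x1 = x,
        .y1 = y,
        .x2 = (uint16_t)(x + w),
        .y2 = (uint16_t)(y + h),
    };

    drmModeDirtyFB(fd, fb_id, &clip, 1);   /* wraps DRM_IOCTL_MODE_DIRTYFB */
}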
> 
> take care,
>   Gerd
> 



