
Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver


From: Saket Sinha
Subject: Re: [virtio-dev] Re: Fwd: Qemu Support for Virtio Video V4L2 driver
Date: Mon, 11 May 2020 14:32:36 +0200

Hi Keiichi,

> > > I do not support the approach of the QEMU implementation forwarding
> > > requests to the host's vicodec module, since this can limit the scope
> > > of the virtio-video device to testing only,
> >
> > That was my understanding as well.
>
> Not really, because the API which vicodec provides is the V4L2 stateful
> decoder interface [1], which is also used by other video drivers on
> Linux.
> The difference between vicodec and actual device drivers is that
> vicodec performs decoding in kernel space without using special
> video devices. In other words, vicodec is a software decoder in kernel
> space which provides the same interface as actual video drivers.
> Thus, if the QEMU implementation can forward virtio-video requests to
> vicodec, it can forward them to actual V4L2 video decoder devices
> as well, and the VM gets access to a paravirtualized video device.
>
> The reason why we discussed vicodec in the previous thread was that it
> allows us to test the virtio-video driver without requiring hardware.
>
> [1] https://www.kernel.org/doc/html/latest/media/uapi/v4l/dev-decoder.html
>

Thanks for the clarification.
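
Just to make sure I follow: such a QEMU backend would have to drive the
usual V4L2 stateful decoder sequence against vicodec (or a real decoder
node), roughly along the lines below. This is only a sketch; the device
path and the H.264 fourcc are placeholders (vicodec itself only handles
its own FWHT format), and error handling is omitted:

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Rough sketch of the stateful decoder setup a backend would drive.
 * /dev/video0 and the formats are placeholders only. */
int init_decoder(void)
{
    int fd = open("/dev/video0", O_RDWR);
    struct v4l2_format fmt;

    /* OUTPUT queue carries the coded bitstream coming from the guest */
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
    fmt.fmt.pix_mp.pixelformat = V4L2_PIX_FMT_H264;
    ioctl(fd, VIDIOC_S_FMT, &fmt);

    /* CAPTURE queue returns decoded frames; the driver fills in the
     * format once the stream resolution is known */
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
    ioctl(fd, VIDIOC_G_FMT, &fmt);

    /* then: VIDIOC_REQBUFS + VIDIOC_QBUF on both queues, VIDIOC_STREAMON,
     * and handling of source-change / EOS events per the stateful
     * decoder spec [1] */
    return fd;
}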

Could you share your views on whether it would also be possible to
support paravirtualized v4l-subdev devices, which the media controller
framework uses to expose ISP processing blocks to Linux userspace?
Of course, we might need to change the implementation and the spec to
support that. Please refer to [1] for details.
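
To make that a bit more concrete: the guest userspace for such a device
would expect to walk the media graph and discover the ISP blocks as
entities, much like media-ctl does on the host today. A minimal sketch
(assuming a /dev/media0 node; error handling omitted):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/media.h>

/* Sketch: enumerate media entities (sensor, ISP blocks, video nodes)
 * the way media-ctl does.  /dev/media0 is a placeholder. */
int main(void)
{
    int fd = open("/dev/media0", O_RDWR);
    struct media_entity_desc ent = { .id = MEDIA_ENT_ID_FLAG_NEXT };

    while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &ent) == 0) {
        printf("entity %u: %s\n", ent.id, ent.name);
        ent.id |= MEDIA_ENT_ID_FLAG_NEXT;   /* ask for the next entity */
    }
    return 0;
}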

> >
> > > which instead can be used for multiple use cases, such as:
> > >
> > > 1. A VM gets access to paravirtualized camera devices which share the
> > > video frames captured by an actual HW camera attached to the host.
> >
> > This use case is out of the scope of virtio-video. Initially I had a plan to
> > support capture-only streams like cameras as well, but later the decision was
> > made upstream that the camera should be implemented as a separate device
> > type. We still plan to implement a simple frame capture capability as a
> > downstream patch, though.
> >
> > >
> > > 2. If the host has multiple video devices (especially on ARM SoCs, over
> > > MIPI interfaces or USB), different VMs can be started or hotplugged
> > > with selected video streams from the actual HW video devices.
> >
> > We do support this in our device implementation, but the spec in general has
> > no requirements or instructions regarding this. And it is in fact flexible
> > enough to provide an abstraction on top of several HW devices.
> >
> > >
> > > Also, instead of using libraries like GStreamer in the host userspace, they
> > > can be used inside the VM userspace after getting access to
> > > paravirtualized HW camera devices.
>
> Regarding GStreamer, I meant this video decoding API [2]. If QEMU
> can translate virtio-video requests to this API, we can easily support
> multiple platforms.
> I'm not sure how feasible it is, though, as I have no experience of
> using this API myself...
>
> [2] 
> https://gstreamer.freedesktop.org/documentation/tutorials/playback/hardware-accelerated-video-decoding.html
>
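
If we went down that route, my understanding is that the QEMU side would
end up building a pipeline along the lines below and letting decodebin
pick a hardware decoder. This is an untested sketch only; a real backend
would presumably use appsrc/appsink instead of filesrc/fakesink, and the
H.264 elements are just an example:

#include <gst/gst.h>

/* Rough sketch: decodebin selects a hardware decoder (VA-API, V4L2 M2M,
 * ...) when one is available.  filesrc/fakesink are placeholders; a QEMU
 * backend would move buffers to and from the guest via appsrc/appsink. */
int main(int argc, char **argv)
{
    GstElement *pipeline;
    GstBus *bus;
    GstMessage *msg;
    GError *err = NULL;

    gst_init(&argc, &argv);

    pipeline = gst_parse_launch(
        "filesrc location=test.h264 ! h264parse ! decodebin ! "
        "videoconvert ! fakesink", &err);
    if (!pipeline) {
        g_printerr("failed to build pipeline: %s\n", err->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* block until the stream ends or an error is posted on the bus */
    bus = gst_element_get_bus(pipeline);
    msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                                     GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}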

As pointed out above, GStreamer is not the only framework out there.
There is also the newer libcamera framework [2], as well as OpenMAX
(used in the Android HAL); refer to [3] for a comparison.

My intention is to make the implementation generic enough that it can
be used by different frameworks on different platforms.

[1]: https://static.sched.com/hosted_files/osseu19/21/libcamera.pdf
[2]: http://libcamera.org
[3]: https://processors.wiki.ti.com/images/7/7e/OMX_Android_GST_Comparison.pdf

Regards,
Saket Sinha


