qemu-devel
Re: [PATCH v5 0/8] Support virtio-gpu DRM native context


From: Dmitry Osipenko
Subject: Re: [PATCH v5 0/8] Support virtio-gpu DRM native context
Date: Thu, 23 Jan 2025 14:23:48 +0300
User-agent: Mozilla Thunderbird

On 1/22/25 20:00, Alex Bennée wrote:
> Dmitry Osipenko <dmitry.osipenko@collabora.com> writes:
> 
>> This patchset adds DRM native context support to VirtIO-GPU on QEMU.
>>
>> Contrary to the Virgl and Venus contexts, which mediate high-level GFX
>> APIs, DRM native context [1] mediates the lower-level kernel driver
>> UAPI, which results in lower CPU overhead and less/simpler code needed
>> to support it. A DRM context consists of host and guest parts that have
>> to be implemented for each GPU driver. On the guest side, the DRM
>> context presents the virtual GPU as a real/native host GPU device for
>> GL/VK applications.
>>
>> [1] https://www.youtube.com/watch?v=9sFP_yddLLQ
>>
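For context, the QEMU side of this series exposes the new context type as
a virtio-gpu-gl device property. A minimal host launch sketch
(drm_native_context is the property added by this series; the machine
options and disk image below are illustrative only):

  qemu-system-aarch64 \
    -M virt -cpu host -accel kvm -m 4G \
    -device virtio-gpu-gl,drm_native_context=on \
    -display sdl,gl=on \
    -drive file=guest.qcow2,if=virtio
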
>> Today there are four known DRM native context drivers in the wild:
>>
>>   - Freedreno (Qualcomm SoC GPUs), completely upstreamed
>>   - AMDGPU, mostly merged upstream
> 
> I tried my AMD system today with:
> 
> Host:
>   Aarch64 AVA system
>   Trixie
>   virglrenderer @ v1.1.0/99557f5aa130930d11f04ffeb07f3a9aa5963182
>   -display sdl,gl=on (gtk,gl=on also came up but handled window resizing
>   poorly)
>   
> KVM Guest
> 
>   Aarch64
>   Trixie
>   mesa @ main/d27748a76f7dd9236bfcf9ef172dc13b8c0e170f
>   -Dvulkan-drivers=virtio,amd -Dgallium-drivers=virgl,radeonsi -Damdgpu-virtio=true
> 
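For anyone reproducing the guest build, the mesa options quoted above
correspond to roughly this configure step (the build directory name and
the ninja invocation are illustrative):

  meson setup build \
    -Dvulkan-drivers=virtio,amd \
    -Dgallium-drivers=virgl,radeonsi \
    -Damdgpu-virtio=true
  ninja -C build
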
> However, when I ran vulkaninfo --summary, KVM faulted with:
> 
>   debian-trixie login: error: kvm run failed Bad address
>    PC=0000ffffb9aa1eb0 X00=0000ffffba0450a4 X01=0000aaaaf7f32400
>   X02=000000000000013c X03=0000ffffba045098 X04=0000aaaaf7f3253c
>   X05=0000ffffba0451d4 X06=00000000c0016900 X07=000000000000000e
>   X08=0000000000000014 X09=00000000000000ff X10=0000aaaaf7f32500
>   X11=0000aaaaf7e4d028 X12=0000aaaaf7edbcb0 X13=0000000000000001
>   X14=000000000000000c X15=0000000000007718 X16=0000ffffb93601f0
>   X17=0000ffffb9aa1dc0 X18=00000000000076f0 X19=0000aaaaf7f31330
>   X20=0000aaaaf7f323f0 X21=0000aaaaf7f235e0 X22=000000000000004c
>   X23=0000aaaaf7f2b5e0 X24=0000aaaaf7ee0cb0 X25=00000000000000ff
>   X26=0000000000000076 X27=0000ffffcd2b18a8 X28=0000aaaaf7ee0cb0
>   X29=0000ffffcd2b0bd0 X30=0000ffffb86c8b98  SP=0000ffffcd2b0bd0
>   PSTATE=20001000 --C- EL0t
>   QEMU 9.2.50 monitor - type 'help' for more information
>   (qemu) quit
> 
> Which looks very much like the PFN locking failure. However, booting up
> with venus=on instead works. Could there be any differences in the way
> device memory is mapped in the two cases?

Memory mapping works exactly the same for nctx and venus. Are you on a
6.13 host kernel?

-- 
Best regards,
Dmitry


