From: Eugenio Perez Martin
Subject: Re: [PATCH v2 6/7] vdpa: move iova_tree allocation to net_vhost_vdpa_init
Date: Wed, 14 Feb 2024 20:11:14 +0100
On Wed, Feb 14, 2024 at 7:29 PM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
>
> Hi Michael,
>
> On 2/13/2024 2:22 AM, Michael S. Tsirkin wrote:
> > On Mon, Feb 05, 2024 at 05:10:36PM -0800, Si-Wei Liu wrote:
> >> Hi Eugenio,
> >>
> >> This new code looks good to me, and the original issue I saw with
> >> x-svq=on should be gone. However, after rebasing my tree on top of
> >> this, I found a new failure around setting up guest mappings at early
> >> boot; please see attached the specific QEMU config and the
> >> corresponding event traces. I haven't looked into the details yet,
> >> but thought you would want to be aware of it ahead of time.
> >>
> >> Regards,
> >> -Siwei
> > Eugenio, were you able to reproduce? Siwei, did you have time to
> > look into this?
> Didn't get a chance to look into the details in the past week, but I
> think it may have something to do with the internals of the iova tree
> range allocation and lookup routines. Things start to fall apart at the
> first vhost_vdpa_dma_unmap call showing up in the trace events: it
> should have gotten IOVA=0x2000001000, but the iova tree lookup routine
> ended up returning the incorrect IOVA address 0x1000.
>
> HVA                                GPA                            IOVA
> ----------------------------------------------------------------------------------------------
> Map
> [0x7f7903e00000, 0x7f7983e00000)   [0x0, 0x80000000)              [0x1000, 0x80000000)
> [0x7f7983e00000, 0x7f9903e00000)   [0x100000000, 0x2080000000)    [0x80001000, 0x2000001000)
> [0x7f7903ea0000, 0x7f7903ec0000)   [0xfeda0000, 0xfedc0000)       [0x2000001000, 0x2000021000)
>
> Unmap
> [0x7f7903ea0000, 0x7f7903ec0000)   [0xfeda0000, 0xfedc0000)       [0x1000, 0x20000) ???
>                                    shouldn't it be                [0x2000001000, 0x2000021000) ???
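>
> For reference, here is a tiny standalone C sketch (illustrative only,
> a linear scan over the table above hardcoded as data, not QEMU's
> actual iova_tree code) of what the GPA -> IOVA lookup for this unmap
> should return:
>
> #include <stdio.h>
> #include <stdint.h>
> #include <inttypes.h>
>
> typedef struct {
>     uint64_t gpa_first, gpa_last;   /* inclusive GPA bounds */
>     uint64_t iova_first;            /* IOVA mapped to gpa_first */
> } MapEntry;
>
> static const MapEntry maps[] = {
>     { 0x0,         0x7fffffff,   0x1000 },
>     { 0x100000000, 0x207fffffff, 0x80001000 },
>     { 0xfeda0000,  0xfedbffff,   0x2000001000 },
> };
>
> /* Interval lookup; returns 0 for unmapped GPAs. */
> static uint64_t gpa_to_iova(uint64_t gpa)
> {
>     for (size_t i = 0; i < sizeof(maps) / sizeof(maps[0]); i++) {
>         if (gpa >= maps[i].gpa_first && gpa <= maps[i].gpa_last) {
>             return maps[i].iova_first + (gpa - maps[i].gpa_first);
>         }
>     }
>     return 0;
> }
>
> int main(void)
> {
>     /* Unmap of GPA [0xfeda0000, 0xfedc0000): expect 0x2000001000,
>      * not the 0x1000 seen in the trace. */
>     printf("0x%" PRIx64 "\n", gpa_to_iova(0xfeda0000));
>     return 0;
> }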
>
Yes, I'm still not able to reproduce it. In particular, I don't know
how the memory listener can add a region and then release a region
with a different size. I'm talking about these log entries:
1706854838.154394:vhost_vdpa_listener_region_add vdpa: 0x556d45c75140 iova 0x0 llend 0x80000000 vaddr: 0x7f7903e00000 read-only: 0
452:vhost_vdpa_listener_region_del vdpa: 0x556d45c75140 iova 0x0 llend 0x7fffffff
Is it possible for you to also trace the skipped regions? We should
add a debug trace there too...
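Something along these lines, maybe (a sketch only; I'm assuming the
skip decision still happens in vhost_vdpa_listener_skipped_section()
and that the surrounding names in hw/virtio/vhost-vdpa.c haven't
drifted):

# hw/virtio/trace-events: declare a new trace event
vhost_vdpa_listener_region_skipped(void *v, uint64_t start, uint64_t last) "vdpa: %p start 0x%"PRIx64" last 0x%"PRIx64

/* hw/virtio/vhost-vdpa.c: in vhost_vdpa_listener_region_add()
 * (and likewise in region_del), trace before the early return: */
if (vhost_vdpa_listener_skipped_section(section, v->iova_range.first,
                                        v->iova_range.last, page_mask)) {
    trace_vhost_vdpa_listener_region_skipped(
        v, section->offset_within_address_space,
        section->offset_within_address_space +
        int128_get64(section->size) - 1);
    return;
}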
Thanks!
> PS, I will be off starting today and for the next two weeks. I will
> try to help look into this more closely after I get back.
>
> -Siwei
> > Can't merge patches which are known to break things ...
>