
Re: [PATCH] hw/arm/smmuv3: Another range invalidation fix


From: Auger Eric
Subject: Re: [PATCH] hw/arm/smmuv3: Another range invalidation fix
Date: Mon, 10 May 2021 13:44:13 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.8.0

Hi Peter,

On 5/10/21 1:31 PM, Peter Maydell wrote:
> On Wed, 21 Apr 2021 at 18:29, Eric Auger <eric.auger@redhat.com> wrote:
>>
> 6d9cd115b9 ("hw/arm/smmuv3: Enforce invalidation on a power of two range")
>> failed to completely fix the misalignment issues with range
>> invalidation. For instance, invalidation patterns like "invalidate 32
>> 4kB pages starting from 0xff395000" are not correctly handled, because
>> the previous fix only made sure the number of invalidated pages was a
>> power of 2 but did not properly handle a start address that is not
>> aligned with the range. This can be noticed when booting a Fedora 33
>> guest with a protected virtio-blk-pci device.
>>
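
To make the misalignment concrete, here is a small stand-alone sketch of the
splitting the new loop performs on the example above. aligned_pow2_mask() only
mimics the semantics of QEMU's dma_aligned_pow2_mask() for the purpose of this
illustration; it is not the QEMU code.

/*
 * Stand-alone illustration (not QEMU code): split the non-aligned range
 * from the commit message -- 32 x 4kB pages at 0xff395000 -- into
 * power-of-2 sized, alignment-respecting chunks, the way the new loop
 * in smmuv3_s1_range_inval() does.
 */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Largest mask m = 2^k - 1 such that addr is (m + 1)-aligned and addr + m <= end */
static uint64_t aligned_pow2_mask(uint64_t addr, uint64_t end)
{
    uint64_t align = addr ? (addr & -addr) : (1ULL << 63);
    uint64_t len = end - addr + 1;

    while (align > len) {
        align >>= 1;        /* shrink until the chunk fits within [addr, end] */
    }
    return align - 1;
}

int main(void)
{
    const unsigned granule = 12;                    /* tg = 1 -> 4kB pages */
    uint64_t addr = 0xff395000;
    uint64_t end = addr + (32ULL << granule) - 1;   /* 0xff3b4fff */

    while (addr != end + 1) {
        uint64_t mask = aligned_pow2_mask(addr, end);

        printf("invalidate 0x%" PRIx64 "..0x%" PRIx64 " (%" PRIu64 " pages)\n",
               addr, addr + mask, (mask + 1) >> granule);
        addr += mask + 1;
    }
    return 0;
}

This yields six invalidations of 1 + 2 + 8 + 16 + 4 + 1 pages, starting at
0xff395000, 0xff396000, 0xff398000, 0xff3a0000, 0xff3b0000 and 0xff3b4000,
each power-of-2 sized and aligned on its own size, whereas the previous code
emitted a single 32-page invalidation from the unaligned 0xff395000.
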
>> Signed-off-by: Eric Auger <eric.auger@redhat.com>
>> Fixes: 6d9cd115b9 ("hw/arm/smmuv3: Enforce invalidation on a power of two range")
>>
>> ---
>>
>> This bug was found with SMMU RIL avocado-qemu acceptance tests
>> ---
>>  hw/arm/smmuv3.c | 49 +++++++++++++++++++++++++------------------------
>>  1 file changed, 25 insertions(+), 24 deletions(-)
>>
>> diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
>> index 8705612535..16f285a566 100644
>> --- a/hw/arm/smmuv3.c
>> +++ b/hw/arm/smmuv3.c
>> @@ -856,43 +856,44 @@ static void smmuv3_inv_notifiers_iova(SMMUState *s, int asid, dma_addr_t iova,
>>
>>  static void smmuv3_s1_range_inval(SMMUState *s, Cmd *cmd)
>>  {
>> -    uint8_t scale = 0, num = 0, ttl = 0;
>> -    dma_addr_t addr = CMD_ADDR(cmd);
>> +    dma_addr_t end, addr = CMD_ADDR(cmd);
>>      uint8_t type = CMD_TYPE(cmd);
>>      uint16_t vmid = CMD_VMID(cmd);
>> +    uint8_t scale = CMD_SCALE(cmd);
>> +    uint8_t num = CMD_NUM(cmd);
>> +    uint8_t ttl = CMD_TTL(cmd);
>>      bool leaf = CMD_LEAF(cmd);
>>      uint8_t tg = CMD_TG(cmd);
>> -    uint64_t first_page = 0, last_page;
>> -    uint64_t num_pages = 1;
>> +    uint64_t num_pages;
>> +    uint8_t granule;
>>      int asid = -1;
>>
>> -    if (tg) {
>> -        scale = CMD_SCALE(cmd);
>> -        num = CMD_NUM(cmd);
>> -        ttl = CMD_TTL(cmd);
>> -        num_pages = (num + 1) * BIT_ULL(scale);
>> -    }
>> -
>>      if (type == SMMU_CMD_TLBI_NH_VA) {
>>          asid = CMD_ASID(cmd);
>>      }
>>
>> +    if (!tg) {
>> +        trace_smmuv3_s1_range_inval(vmid, asid, addr, tg, 1, ttl, leaf);
>> +        smmuv3_inv_notifiers_iova(s, asid, addr, tg, 1);
>> +        smmu_iotlb_inv_iova(s, asid, addr, tg, 1, ttl);
>> +    }
> 
> Is this intended to fall through ?
Hmm, no it isn't. I will fix that.
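Something along these lines (untested), simply returning once the non-range
invalidation has been handled:

    if (!tg) {
        trace_smmuv3_s1_range_inval(vmid, asid, addr, tg, 1, ttl, leaf);
        smmuv3_inv_notifiers_iova(s, asid, addr, tg, 1);
        smmu_iotlb_inv_iova(s, asid, addr, tg, 1, ttl);
        return; /* non-RIL invalidation handled, do not fall through to the RIL path */
    }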

Thanks

Eric
> 
>> +
>> +    /* RIL in use */
>> +
>> +    num_pages = (num + 1) * BIT_ULL(scale);
>> +    granule = tg * 2 + 10;
>> +
>>      /* Split invalidations into ^2 range invalidations */
>> -    last_page = num_pages - 1;
>> -    while (num_pages) {
>> -        uint8_t granule = tg * 2 + 10;
>> -        uint64_t mask, count;
>> +    end = addr + (num_pages << granule) - 1;
>>
>> -        mask = dma_aligned_pow2_mask(first_page, last_page, 64 - granule);
>> -        count = mask + 1;
>> +    while (addr != end + 1) {
>> +        uint64_t mask = dma_aligned_pow2_mask(addr, end, 64);
>>
>> -        trace_smmuv3_s1_range_inval(vmid, asid, addr, tg, count, ttl, leaf);
>> -        smmuv3_inv_notifiers_iova(s, asid, addr, tg, count);
>> -        smmu_iotlb_inv_iova(s, asid, addr, tg, count, ttl);
>> -
>> -        num_pages -= count;
>> -        first_page += count;
>> -        addr += count * BIT_ULL(granule);
>> +        num_pages = (mask + 1) >> granule;
>> +        trace_smmuv3_s1_range_inval(vmid, asid, addr, tg, num_pages, ttl, leaf);
>> +        smmuv3_inv_notifiers_iova(s, asid, addr, tg, num_pages);
>> +        smmu_iotlb_inv_iova(s, asid, addr, tg, num_pages, ttl);
>> +        addr += mask + 1;
>>      }
>>  }
> 
> thanks
> -- PMM
> 



