Re: [PATCH v3 05/11] hw/arm/smmu-common: Manage IOTLB block entries
From: Peter Maydell
Subject: Re: [PATCH v3 05/11] hw/arm/smmu-common: Manage IOTLB block entries
Date: Fri, 10 Jul 2020 10:00:34 +0100
On Wed, 8 Jul 2020 at 15:19, Eric Auger <eric.auger@redhat.com> wrote:
>
> At the moment each entry in the IOTLB corresponds to a page-sized
> mapping (4K, 16K or 64K), even if the page belongs to a mapped
> block. In the case of block mappings this inefficiently consumes
> IOTLB entries.
>
> Change the value of the entry so that it reflects the actual
> mapping it belongs to (block or page start address and size).
>
> The level/tg of the entry is also encoded in the key. Subsequent
> patches will enable range invalidation, which provides the
> level/tg of the entry to invalidate.
>
> Encoding the level/tg directly in the key allows invalidation
> via g_hash_table_remove() when num_pages equals 1.
>
> Signed-off-by: Eric Auger <eric.auger@redhat.com>
>
> ---
> v2 -> v3:
> - simplify the logic in smmu_hash_remove_by_asid_iova as
> suggested by Peter
> - the key is now a struct. The level is taken into account in the
> Jenkins hash function and the equal function is updated accordingly.
>
> v1 -> v2:
> - recompute starting_level
> ---
> hw/arm/smmu-internal.h | 7 ++++
> include/hw/arm/smmu-common.h | 10 ++++--
> hw/arm/smmu-common.c | 66 +++++++++++++++++++++++++-----------
> hw/arm/smmuv3.c | 6 ++--
> hw/arm/trace-events | 2 +-
> 5 files changed, 65 insertions(+), 26 deletions(-)
>
> diff --git a/hw/arm/smmu-internal.h b/hw/arm/smmu-internal.h
> index 3104f768cd..55147f29be 100644
> --- a/hw/arm/smmu-internal.h
> +++ b/hw/arm/smmu-internal.h
> @@ -97,4 +97,11 @@ uint64_t iova_level_offset(uint64_t iova, int inputsize,
> }
>
> #define SMMU_IOTLB_ASID(key) ((key).asid)
> +
> +typedef struct SMMUIOTLBPageInvInfo {
> + int asid;
> + uint64_t iova;
> + uint64_t mask;
> +} SMMUIOTLBPageInvInfo;
> +
> #endif
> diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
> index 79c2c6486a..8b13ab0951 100644
> --- a/include/hw/arm/smmu-common.h
> +++ b/include/hw/arm/smmu-common.h
> @@ -97,6 +97,8 @@ typedef struct SMMUPciBus {
> typedef struct SMMUIOTLBKey {
> uint64_t iova;
> uint16_t asid;
> + uint8_t tg;
> + uint8_t level;
> } SMMUIOTLBKey;
>
> typedef struct SMMUState {
> @@ -159,12 +161,14 @@ IOMMUMemoryRegion *smmu_iommu_mr(SMMUState *s, uint32_t sid);
>
> #define SMMU_IOTLB_MAX_SIZE 256
>
> -SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg, hwaddr iova);
> +SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
> +                                SMMUTransTableInfo *tt, hwaddr iova);
> void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *entry);
> -SMMUIOTLBKey smmu_get_iotlb_key(uint16_t asid, uint64_t iova);
> +SMMUIOTLBKey smmu_get_iotlb_key(uint16_t asid, uint64_t iova,
> + uint8_t tg, uint8_t level);
> void smmu_iotlb_inv_all(SMMUState *s);
> void smmu_iotlb_inv_asid(SMMUState *s, uint16_t asid);
> -void smmu_iotlb_inv_iova(SMMUState *s, uint16_t asid, dma_addr_t iova);
> +void smmu_iotlb_inv_iova(SMMUState *s, int asid, dma_addr_t iova);
>
> /* Unmap the range of all the notifiers registered to any IOMMU mr */
> void smmu_inv_notifiers_all(SMMUState *s);
> diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
> index 398e958bb4..d373e30aa5 100644
> --- a/hw/arm/smmu-common.c
> +++ b/hw/arm/smmu-common.c
> @@ -39,7 +39,7 @@ static guint smmu_iotlb_key_hash(gconstpointer v)
>
> /* Jenkins hash */
> a = b = c = JHASH_INITVAL + sizeof(*key);
> - a += key->asid;
> + a += key->asid + key->level;
What's the rationale for putting the level into the hash
but not the tg?
> b += extract64(key->iova, 0, 32);
> c += extract64(key->iova, 32, 32);
>
> @@ -51,24 +51,38 @@ static guint smmu_iotlb_key_hash(gconstpointer v)
>
> static gboolean smmu_iotlb_key_equal(gconstpointer v1, gconstpointer v2)
> {
> - const SMMUIOTLBKey *k1 = v1;
> - const SMMUIOTLBKey *k2 = v2;
> -
> - return (k1->asid == k2->asid) && (k1->iova == k2->iova);
> + return !memcmp(v1, v2, sizeof(SMMUIOTLBKey));
Won't this also compare the padding at the end of the struct
(which isn't guaranteed to be the same)? I think just comparing
all the fields would be safer...
> }
Otherwise
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
thanks
-- PMM