Re: [PATCH 4/5] target/arm: Support more GM blocksizes
From: Peter Maydell
Subject: Re: [PATCH 4/5] target/arm: Support more GM blocksizes
Date: Thu, 10 Aug 2023 15:23:58 +0100
On Thu, 10 Aug 2023 at 03:36, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Support all of the easy GM block sizes.
> Use direct memory operations, since the pointers are aligned.
>
> While BS=2 (16 bytes, 1 tag) is a legal setting, that requires
> an atomic store of one nibble. This is not difficult, but there
> is also no point in supporting it until required.
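(Not part of this series, just to illustrate what BS=2 would entail: a
minimal sketch of the byte-wide compare-and-swap loop an atomic nibble
store boils down to. The names and the standalone C11-atomics form are
mine, not Richard's.)

    #include <stdatomic.h>
    #include <stdint.h>

    /* Store one 4-bit tag into a shared byte without disturbing the
     * other nibble; 'shift' is 0 or 4 within the byte.
     */
    static void store_tag_nibble(_Atomic uint8_t *byte, unsigned shift,
                                 uint8_t tag)
    {
        uint8_t old = atomic_load(byte);
        uint8_t new_val;

        do {
            /* Replace only the 4-bit field at 'shift', keep the rest. */
            new_val = (old & ~(0xf << shift)) | ((tag & 0xf) << shift);
        } while (!atomic_compare_exchange_weak(byte, &old, new_val));
    }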
>
> Note that cortex-a710 sets GM blocksize to match its cacheline
> size of 64 bytes. I expect many implementations will also
> match the cacheline, which makes 16 bytes very unlikely.
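For reference, the full mapping the helpers below rely on (derived from
gm_bs_bytes = 4 << gm_bs in the patch, one 4-bit tag per 16-byte
granule):

    BS  block size  tags  result bits
     2        16 B     1            4
     3        32 B     2            8
     4        64 B     4           16
     5       128 B     8           32
     6       256 B    16           64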
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  target/arm/tcg/mte_helper.c | 61 ++++++++++++++++++++++++++++++++-----
>  1 file changed, 53 insertions(+), 8 deletions(-)
>
> diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
> index 3640c6e57f..6faf4e42d5 100644
> --- a/target/arm/tcg/mte_helper.c
> +++ b/target/arm/tcg/mte_helper.c
> @@ -428,6 +428,8 @@ uint64_t HELPER(ldgm)(CPUARMState *env, uint64_t ptr)
>      int gm_bs = env_archcpu(env)->gm_blocksize;
>      int gm_bs_bytes = 4 << gm_bs;
>      void *tag_mem;
> +    uint64_t ret;
> +    int shift;
>
>      ptr = QEMU_ALIGN_DOWN(ptr, gm_bs_bytes);
>
> @@ -443,16 +445,39 @@ uint64_t HELPER(ldgm)(CPUARMState *env, uint64_t ptr)
>
>      /*
>       * The ordering of elements within the word corresponds to
> -     * a little-endian operation.
> +     * a little-endian operation. Computation of shift comes from
> +     *
> +     *   index = address<LOG2_TAG_GRANULE+3:LOG2_TAG_GRANULE>
> +     *   data<index*4+3:index*4> = tag
> +     *
> +     * Because of the alignment of ptr above, BS=6 has shift=0.
> +     * All memory operations are aligned.
>       */
>      switch (gm_bs) {
> -    case 6:
> -        /* 256 bytes -> 16 tags -> 64 result bits */
> -        return ldq_le_p(tag_mem);
> -    default:
> +    case 2:
>          /* cpu configured with unsupported gm blocksize. */
>          g_assert_not_reached();
> +    case 3:
> +        /* 32 bytes -> 2 tags -> 8 result bits */
> +        ret = *(uint8_t *)tag_mem;
> +        break;
> +    case 4:
> +        /* 64 bytes -> 4 tags -> 16 result bits */
> +        ret = cpu_to_le16(*(uint16_t *)tag_mem);
Does this really make a difference compared to lduw_le_p()?
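(For comparison, a simplified sketch of that bswap.h helper; the real
definition goes via lduw_he_p() and le_bswap(), so for these aligned
pointers both forms should compile to the same load:)

    static inline int lduw_le_p(const void *ptr)
    {
        uint16_t r;
        memcpy(&r, ptr, sizeof(r));   /* alignment-safe fixed-size copy */
        return le16_to_cpu(r);        /* no-op on little-endian hosts */
    }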
> +        break;
> +    case 5:
> +        /* 128 bytes -> 8 tags -> 32 result bits */
> +        ret = cpu_to_le32(*(uint32_t *)tag_mem);
> +        break;
> +    case 6:
> +        /* 256 bytes -> 16 tags -> 64 result bits */
> +        return cpu_to_le64(*(uint64_t *)tag_mem);
> +    default:
> +        /* cpu configured with invalid gm blocksize. */
> +        g_assert_not_reached();
Is it worth having an assert in CPU realize for an invalid
blocksize, so that we can catch duff ID register values
without having to rely on there being a test run that
uses ldgm/stgm?
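Something along these lines in the realize path would catch it early
(a sketch only; the placement in arm_cpu_realizefn() and the exact
bounds are my assumption, not part of this series):

    if (cpu_isar_feature(aa64_mte, cpu)) {
        /* The ldgm/stgm helpers implement BS = 3..6 (32 to 256 bytes);
         * BS = 2 is architecturally legal but not supported here.
         */
        g_assert(cpu->gm_blocksize >= 3 && cpu->gm_blocksize <= 6);
    }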
>      }
> +    shift = extract64(ptr, LOG2_TAG_GRANULE, 4) * 4;
> +    return ret << shift;
>  }
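To make the shift concrete: LOG2_TAG_GRANULE is 4, so the extracted
field is ptr[7:4]. With BS=4 the pointer was aligned down to 64 bytes,
leaving ptr[7:4] in {0, 4, 8, 12} and shift in {0, 16, 32, 48}, which
places the 16 loaded bits in the matching quarter of the 64-bit result;
with BS=6 the 256-byte alignment forces ptr[7:4] = 0, hence shift = 0
as the comment says.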
>
>  void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)
> @@ -462,6 +487,7 @@ void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)
>      int gm_bs = env_archcpu(env)->gm_blocksize;
>      int gm_bs_bytes = 4 << gm_bs;
>      void *tag_mem;
> +    int shift;
>
>      ptr = QEMU_ALIGN_DOWN(ptr, gm_bs_bytes);
>
> @@ -480,14 +506,33 @@ void HELPER(stgm)(CPUARMState *env, uint64_t ptr, uint64_t val)
>
>      /*
>       * The ordering of elements within the word corresponds to
> -     * a little-endian operation.
> +     * a little-endian operation. See LDGM for comments on shift.
> +     * All memory operations are aligned.
>       */
> +    shift = extract64(ptr, LOG2_TAG_GRANULE, 4) * 4;
> +    val >>= shift;
>      switch (gm_bs) {
> +    case 2:
> +        /* cpu configured with unsupported gm blocksize. */
> +        g_assert_not_reached();
> +    case 3:
> +        /* 32 bytes -> 2 tags -> 8 result bits */
> +        *(uint8_t *)tag_mem = val;
> +        break;
> +    case 4:
> +        /* 64 bytes -> 4 tags -> 16 result bits */
> +        *(uint16_t *)tag_mem = cpu_to_le16(val);
> +        break;
> +    case 5:
> +        /* 128 bytes -> 8 tags -> 32 result bits */
> +        *(uint32_t *)tag_mem = cpu_to_le32(val);
> +        break;
>      case 6:
> -        stq_le_p(tag_mem, val);
> +        /* 256 bytes -> 16 tags -> 64 result bits */
> +        *(uint64_t *)tag_mem = cpu_to_le64(val);
>          break;
>      default:
> -        /* cpu configured with unsupported gm blocksize. */
> +        /* cpu configured with invalid gm blocksize. */
>          g_assert_not_reached();
>      }
>  }
> --
> 2.34.1
thanks
-- PMM