From: Daniel Henrique Barboza
Subject: Re: [PATCH] target/riscv/vector_helper.c: Avoid shifting negative in fractional LMUL checking
Date: Wed, 6 Mar 2024 14:17:35 -0300
User-agent: Mozilla Thunderbird
On 3/6/24 13:10, Max Chou wrote:
When vlmul is larger than 5, the original fractional LMUL checking may get an
unexpected result.

Signed-off-by: Max Chou <max.chou@sifive.com>
---
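(Illustration, not part of the patch: a minimal standalone C sketch of the check,
using hypothetical vlenb/sew values, showing that the old shift count
8 - 3 - vlmul goes negative for vlmul 6 and 7 while the reworked expression keeps
the shift count non-negative.)

#include <stdio.h>

int main(void)
{
    unsigned vlenb = 16;   /* hypothetical: VLEN = 128 bits */
    unsigned sew   = 32;   /* hypothetical element width in bits */

    for (int vlmul = 5; vlmul <= 7; vlmul++) {
        /* old formula: shift count is 0, -1, -2 -> negative for vlmul > 5 */
        int old_shift = 8 - 3 - vlmul;
        /* reworked formula: shift count stays in 1..3, result is VLEN * LMUL */
        unsigned vlen_lmul = (vlenb << 3) >> (8 - vlmul);

        printf("vlmul=%d: old shift=%d%s, (vlenb<<3)>>(8-vlmul)=%u, sew=%u -> %s\n",
               vlmul, old_shift,
               old_shift < 0 ? " (negative, undefined behaviour)" : "",
               vlen_lmul, sew,
               vlen_lmul >= sew ? "ok" : "vill");
    }
    return 0;
}

With these values it prints a negative shift count for vlmul 6 and 7, while the
reworked expression yields 16, 32 and 64, i.e. VLEN * LMUL for LMUL = 1/8, 1/4, 1/2.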
There's already a fix for it in the ML:

"[PATCH v3] target/riscv: Fix shift count overflow"
https://lore.kernel.org/qemu-riscv/20240225174114.5298-1-demin.han@starfivetech.com/

Hopefully it'll be queued for the next PR.

Thanks,

Daniel
 target/riscv/vector_helper.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 84cec73eb20..adceec378fd 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -53,10 +53,9 @@ target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
          * VLEN * LMUL >= SEW
          * VLEN >> (8 - lmul) >= sew
          * (vlenb << 3) >> (8 - lmul) >= sew
-         * vlenb >> (8 - 3 - lmul) >= sew
          */
         if (vlmul == 4 ||
-            cpu->cfg.vlenb >> (8 - 3 - vlmul) < sew) {
+            ((cpu->cfg.vlenb << 3) >> (8 - vlmul)) < sew) {
             vill = true;
         }
     }
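(Worked arithmetic with illustrative values, spelling out the derivation in the
comment: for the fractional encodings vlmul = 5, 6, 7, LMUL = 2^(vlmul - 8), so
VLEN * LMUL >= SEW becomes VLEN >> (8 - vlmul) >= sew, and with VLEN = vlenb << 3
this is (vlenb << 3) >> (8 - vlmul) >= sew. Assuming vlenb = 16, i.e. VLEN = 128,
and vlmul = 6 (LMUL = 1/4), the left-hand side is 128 >> 2 = 32, exactly
VLEN * LMUL, whereas the removed form vlenb >> (8 - 3 - vlmul) would shift by -1.)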