qemu-arm

Re: [PATCH 15/18] target/arm: Implement MVE long shifts by immediate


From: Richard Henderson
Subject: Re: [PATCH 15/18] target/arm: Implement MVE long shifts by immediate
Date: Tue, 29 Jun 2021 09:13:50 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.11.0

On 6/29/21 8:56 AM, Peter Maydell wrote:
> I added the groupings, and the final result is:
>
> {
>    # The v8.1M MVE shift insns overlap in encoding with MOVS/ORRS
>    # and are distinguished by having Rm==13 or 15. Those are UNPREDICTABLE
>    # cases for MOVS/ORRS. We decode the MVE cases first, ensuring that
>    # they explicitly call unallocated_encoding() for cases that must UNDEF
>    # (eg "using a new shift insn on a v8.1M CPU without MVE"), and letting
>    # the rest fall through (where ORR_rrri and MOV_rxri will end up
>    # handling them as r13 and r15 accesses with the same semantics as A32).
>    [
>      {
>        UQSHL_ri   1110101 0010 1 ....  0 ...  1111 .. 00 1111  @mve_sh_ri
>        LSLL_ri    1110101 0010 1 ... 0 0 ... ... 1 .. 00 1111  @mve_shl_ri
>        UQSHLL_ri  1110101 0010 1 ... 1 0 ... ... 1 .. 00 1111  @mve_shl_ri
>      }
>
>      {
>        URSHR_ri   1110101 0010 1 ....  0 ...  1111 .. 01 1111  @mve_sh_ri
>        LSRL_ri    1110101 0010 1 ... 0 0 ... ... 1 .. 01 1111  @mve_shl_ri
>        URSHRL_ri  1110101 0010 1 ... 1 0 ... ... 1 .. 01 1111  @mve_shl_ri
>      }
>
>      {
>        SRSHR_ri   1110101 0010 1 ....  0 ...  1111 .. 10 1111  @mve_sh_ri
>        ASRL_ri    1110101 0010 1 ... 0 0 ... ... 1 .. 10 1111  @mve_shl_ri
>        SRSHRL_ri  1110101 0010 1 ... 1 0 ... ... 1 .. 10 1111  @mve_shl_ri
>      }
>
>      {
>        SQSHL_ri   1110101 0010 1 ....  0 ...  1111 .. 11 1111  @mve_sh_ri
>        SQSHLL_ri  1110101 0010 1 ... 1 0 ... ... 1 .. 11 1111  @mve_shl_ri
>      }
>
>      {
>        UQRSHL_rr    1110101 0010 1 ....  ....  1111 0000 1101  @mve_sh_rr
>        LSLL_rr      1110101 0010 1 ... 0 .... ... 1 0000 1101  @mve_shl_rr
>        UQRSHLL64_rr 1110101 0010 1 ... 1 .... ... 1 0000 1101  @mve_shl_rr
>      }
>
>      {
>        SQRSHR_rr    1110101 0010 1 ....  ....  1111 0010 1101  @mve_sh_rr
>        ASRL_rr      1110101 0010 1 ... 0 .... ... 1 0010 1101  @mve_shl_rr
>        SQRSHRL64_rr 1110101 0010 1 ... 1 .... ... 1 0010 1101  @mve_shl_rr
>      }
>
>      UQRSHLL48_rr 1110101 0010 1 ... 1 ....  ... 1  1000 1101  @mve_shl_rr
>      SQRSHRL48_rr 1110101 0010 1 ... 1 ....  ... 1  1010 1101  @mve_shl_rr
>    ]
>
>    MOV_rxri       1110101 0010 . 1111 0 ... .... .... ....     @s_rxr_shi
>    ORR_rrri       1110101 0010 . .... 0 ... .... .... ....     @s_rrr_shi
>
>    # v8.1M CSEL and friends
>    CSEL           1110101 0010 1 rn:4 10 op:2 rd:4 fcond:4 rm:4
> }
>
>
> Unless you would prefer otherwise, I plan to put the adjusted patches
> into a pullreq later this week, without resending a v2.

This looks pretty clean, thanks.


r~


