aarch64: Correct the maximum shift amount for shifted operands
The AArch64 ISA specification allows a left-shift amount in the range
0 to 4 (encoded in the imm3 field) to be applied after extension.
This is true for at least the following instructions:
* ADD (extended register)
* ADDS (extended register)
* SUB (extended register)
The effect of this patch can be seen when compiling the following code:
uint64_t myadd(uint64_t a, uint64_t b)
{
return a+(((uint8_t)b)<<4);
}
Without the patch the following sequence will be generated:
0000000000000000 <myadd>:
   0:	d37c1c21 	ubfiz	x1, x1, #4, #8
   4:	8b000020 	add	x0, x1, x0
   8:	d65f03c0 	ret
With the patch the ubfiz will be merged into the add instruction:
0000000000000000 <myadd>:
   0:	8b211000 	add	x0, x0, w1, uxtb #4
   4:	d65f03c0 	ret
gcc/ChangeLog:
	* config/aarch64/aarch64.cc (aarch64_uxt_size): Fix an
	off-by-one in checking the permissible shift amount.