AArch64: Add combine patterns for narrowing shift of half top bits (shuffle)
When doing a (narrowing) right shift by half the width of the original type,
we are essentially shuffling the top half of each element down.  If we have a
hi/lo pair we can use a single shuffle instead of two narrowing shifts.
i.e.
typedef short int16_t;
typedef unsigned short uint16_t;

void foo (uint16_t * restrict a, int16_t * restrict d, int n)
{
  for (int i = 0; i < n; i++)
    d[i] = (a[i] * a[i]) >> 16;
}
now generates:
.L4:
ldr q0, [x0, x3]
umull v1.4s, v0.4h, v0.4h
umull2 v0.4s, v0.8h, v0.8h
uzp2 v0.8h, v1.8h, v0.8h
str q0, [x1, x3]
add x3, x3, 16
cmp x4, x3
bne .L4
instead of
.L4:
ldr q0, [x0, x3]
umull v1.4s, v0.4h, v0.4h
umull2 v0.4s, v0.8h, v0.8h
sshr v1.4s, v1.4s, 16
sshr v0.4s, v0.4s, 16
xtn v1.4h, v1.4s
xtn2 v1.8h, v0.4s
str q1, [x1, x3]
add x3, x3, 16
cmp x4, x3
bne .L4
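
For illustration only (not part of the patch; the function names and the use
of ACLE intrinsics are my own), a minimal sketch of the equivalence the new
patterns exploit: on little-endian, a UZP2 over the 16-bit lanes of the lo/hi
product pair gives the same result as the shift-and-narrow sequence:

#include <arm_neon.h>

/* Top halves of each 32-bit product via two narrowing shifts,
   as in the old sequence.  */
uint16x8_t
top_bits_via_shifts (uint16x8_t a)
{
  uint32x4_t lo = vmull_u16 (vget_low_u16 (a), vget_low_u16 (a));
  uint32x4_t hi = vmull_high_u16 (a, a);
  return vcombine_u16 (vshrn_n_u32 (lo, 16), vshrn_n_u32 (hi, 16));
}

/* Same result with a single UZP2 over the 16-bit lanes: the odd-numbered
   16-bit lanes of the lo/hi pair are exactly the top halves of every
   32-bit element (little-endian lane order assumed).  */
uint16x8_t
top_bits_via_uzp2 (uint16x8_t a)
{
  uint32x4_t lo = vmull_u16 (vget_low_u16 (a), vget_low_u16 (a));
  uint32x4_t hi = vmull_high_u16 (a, a);
  return vuzp2q_u16 (vreinterpretq_u16_u32 (lo),
                     vreinterpretq_u16_u32 (hi));
}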
Thanks,
Tamar
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md
(*aarch64_<srn_op>topbits_shuffle<mode>_le): New.
(*aarch64_topbits_shuffle<mode>_le): New.
(*aarch64_<srn_op>topbits_shuffle<mode>_be): New.
(*aarch64_topbits_shuffle<mode>_be): New.
* config/aarch64/predicates.md
(aarch64_simd_shift_imm_vec_exact_top): New.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/shrn-combine-10.c: New test.
* gcc.target/aarch64/shrn-combine-5.c: New test.
* gcc.target/aarch64/shrn-combine-6.c: New test.
* gcc.target/aarch64/shrn-combine-7.c: New test.
* gcc.target/aarch64/shrn-combine-8.c: New test.
* gcc.target/aarch64/shrn-combine-9.c: New test.