sve: combine nested if predicates
The following example
void f5(float * restrict z0, float * restrict z1, float *restrict x,
        float * restrict y, float c, int n)
{
    for (int i = 0; i < n; i++) {
        float a = x[i];
        float b = y[i];
        if (a > b) {
            z0[i] = a + b;
            if (a > c) {
                z1[i] = a - b;
            }
        }
    }
}
currently generates:
        ptrue   p3.b, all
        ld1w    z1.s, p1/z, [x2, x5, lsl 2]
        ld1w    z2.s, p1/z, [x3, x5, lsl 2]
        fcmgt   p0.s, p3/z, z1.s, z0.s
        fcmgt   p2.s, p1/z, z1.s, z2.s
        fcmgt   p0.s, p0/z, z1.s, z2.s
        and     p0.b, p0/z, p1.b, p1.b
The conditions for a > b and a > c become separate comparisons.
After this patch we generate:
        ld1w    z1.s, p0/z, [x2, x5, lsl 2]
        ld1w    z2.s, p0/z, [x3, x5, lsl 2]
        fcmgt   p1.s, p0/z, z1.s, z2.s
        fcmgt   p1.s, p1/z, z1.s, z0.s
Here the condition a > b && a > c is folded by using the predicate result of
the previous compare, which allows one of the compares to be removed.
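As a rough illustration of the folding (this is a hand-written sketch with
ACLE SVE intrinsics, not the vectorizer's actual output; the function name
f5_sve and the loop structure are assumptions):

#include <arm_sve.h>

/* Sketch of the shape the patch aims for: the inner condition is
   evaluated under the predicate produced by the outer compare, so the
   explicit AND of the two masks disappears.  */
void f5_sve (float *restrict z0, float *restrict z1, float *restrict x,
             float *restrict y, float c, int n)
{
  for (int i = 0; i < n; i += (int) svcntw ())
    {
      svbool_t pg = svwhilelt_b32 (i, n);       /* loop mask */
      svfloat32_t a = svld1 (pg, x + i);
      svfloat32_t b = svld1 (pg, y + i);
      svbool_t p_ab = svcmpgt (pg, a, b);       /* pg && a > b */
      svst1 (p_ab, z0 + i, svadd_x (p_ab, a, b));
      /* Evaluating a > c under p_ab already yields
         pg && a > b && a > c; no separate AND is needed.  */
      svbool_t p_abc = svcmpgt (p_ab, a, svdup_f32 (c));
      svst1 (p_abc, z1 + i, svsub_x (p_abc, a, b));
    }
}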
Whenever a mask is being generated from a BIT_AND we mask the operands of
the AND instead and then just AND the results.
This allows us to CSE the masks and generate the right combination.
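An intrinsics-level sketch of that rewrite (again only an illustration;
mask_before and mask_after are made-up names and the real transformation
happens on gimple inside the vectorizer):

#include <arm_sve.h>

/* Before: the two compares are done under the wrong (or no) predicate and
   their AND is only masked afterwards, so neither operand matches the mask
   already needed for the z0 store.  */
static svbool_t
mask_before (svbool_t loop_mask, svfloat32_t a, svfloat32_t b, svfloat32_t c)
{
  svbool_t ab = svcmpgt (svptrue_b32 (), a, b);
  svbool_t ac = svcmpgt (svptrue_b32 (), a, c);
  return svand_z (loop_mask, ab, ac);
}

/* After: each operand of the AND is masked by the loop mask first.
   loop_mask && (a > b) is exactly the mask guarding the z0 store, so it
   can be CSEd, and the remaining AND folds into the governing predicate
   of the second compare.  */
static svbool_t
mask_after (svbool_t loop_mask, svfloat32_t a, svfloat32_t b, svfloat32_t c)
{
  svbool_t ab = svcmpgt (loop_mask, a, b);      /* shared with the z0 store */
  return svcmpgt (ab, a, c);                    /* AND is implicit */
}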
However, because re-assoc will try to re-order the masks in the AND, we now
have to perform a small local CSE on the vectorized loop if vectorization is
successful.
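The local CSE can be pictured as below. This is only a conceptual stand-in:
struct expr, expr_equal and local_cse are hypothetical helpers, not GCC
internals, and the real pass works on gimple statements rather than this toy
representation.

/* Toy model of a local CSE over the statements of the vectorized block,
   restricted to commutative operations such as BIT_AND: operands are
   compared unordered, so a & b still matches b & a after re-assoc has
   swapped them.  */
struct expr { int code; int op0; int op1; int result; };

static int
expr_equal (const struct expr *a, const struct expr *b)
{
  if (a->code != b->code)
    return 0;
  return (a->op0 == b->op0 && a->op1 == b->op1)
         || (a->op0 == b->op1 && a->op1 == b->op0);
}

/* For every statement, record the result of an earlier identical one in
   replacement[] (or -1 if it is the first occurrence) and return how many
   statements could be dropped.  */
static int
local_cse (struct expr *stmts, int n, int *replacement)
{
  int eliminated = 0;
  for (int i = 0; i < n; i++)
    {
      replacement[i] = -1;
      for (int j = 0; j < i; j++)
        if (replacement[j] < 0 && expr_equal (&stmts[i], &stmts[j]))
          {
            replacement[i] = stmts[j].result;
            eliminated++;
            break;
          }
    }
  return eliminated;
}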
Note: This patch series works incrementally, in small steps, towards
generating the most efficient code for this and other loops.
gcc/ChangeLog:
* tree-vect-stmts.c (prepare_load_store_mask): Rename to...
(prepare_vec_mask): ...This and record operations that have already been
masked.
(vectorizable_call): Use it.
(vectorizable_operation): Likewise.
(vectorizable_store): Likewise.
(vectorizable_load): Likewise.
* tree-vectorizer.h (class _loop_vec_info): Add vec_cond_masked_set.
(vec_cond_masked_set_type, tree_cond_mask_hash): New.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/sve/pred-combine-and.c: New test.