All VNx2 V_INT_CONTAINER entries should map to VNx2DI; the VNx2SF entry
wrongly mapped to VNx2SI.  The lower-case version (v_int_container) was
already correct.
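
For illustration only (this example is not part of the patch, and the
function name is made up): the mapping matters for loops that mix 32-bit
floats with 64-bit quantities, where the vectorizer can keep each float
in the low half of a 64-bit container lane.  The integer mode that
contains such an unpacked VNx2SF vector is VNx2DI, not VNx2SI:

  /* Hypothetical sketch, not taken from the patch: gathering 32-bit
     floats through 64-bit indices can use unpacked VNx2SF vectors,
     whose integer container mode is VNx2DI.  */
  float
  gather_sum (float *restrict x, long *restrict indices, int n)
  {
    float res = 0.0f;
    for (int i = 0; i < n; ++i)
      res += x[indices[i]];
    return res;
  }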

2019-12-27  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* config/aarch64/iterators.md (V_INT_CONTAINER): Fix VNx2SF entry.

gcc/testsuite/
	* gcc.target/aarch64/sve/mixed_size_11.c: New test.

From-SVN: r279743

--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
 2019-12-27  Richard Sandiford  <richard.sandiford@arm.com>
 
+	* config/aarch64/iterators.md (V_INT_CONTAINER): Fix VNx2SF entry.
+
+2019-12-27  Richard Sandiford  <richard.sandiford@arm.com>
+
 	* tree-vect-loop.c (vectorizable_reduction): Check whether the
 	target supports the required VEC_COND_EXPR operation before
 	allowing the fallback handling of masked fold-left reductions.

--- a/gcc/config/aarch64/iterators.md
+++ b/gcc/config/aarch64/iterators.md
 	(VNx2DI "VNx2DI")
 	(VNx8HF "VNx8HI") (VNx4HF "VNx4SI")
 	(VNx2HF "VNx2DI")
-	(VNx4SF "VNx4SI") (VNx2SF "VNx2SI")
+	(VNx4SF "VNx4SI") (VNx2SF "VNx2DI")
 	(VNx2DF "VNx2DI")])
 
 ;; Lower-case version of V_INT_CONTAINER.

--- a/gcc/testsuite/ChangeLog
+++ b/gcc/testsuite/ChangeLog
 2019-12-27  Richard Sandiford  <richard.sandiford@arm.com>
 
+	* gcc.target/aarch64/sve/mixed_size_11.c: New test.
+
+2019-12-27  Richard Sandiford  <richard.sandiford@arm.com>
+
 	* gcc.target/aarch64/sve/mixed_size_10.c: New test.
 
 2019-12-26  Jakub Jelinek  <jakub@redhat.com>

--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/mixed_size_11.c
+/* { dg-options "-O3 -msve-vector-bits=256 -fno-tree-loop-distribution" } */
+
+float
+f (float *restrict x, float *restrict y, long *indices)
+{
+  float res = 0.0;
+  for (int i = 0; i < 100; ++i)
+    {
+      res += x[i - 4];
+      x[i] = y[indices[i]];
+    }
+  return res;
+}