+2014-07-31  James Greenhalgh  <james.greenhalgh@arm.com>
+
+	* config/aarch64/aarch64-builtins.c
+	(aarch64_gimple_fold_builtin): Don't fold reduction operations for
+	BYTES_BIG_ENDIAN.
+
 2014-07-31  James Greenhalgh  <james.greenhalgh@arm.com>
 
 	* config/aarch64/aarch64.c (aarch64_simd_vect_par_cnst_half): Vary
 	the generated mask based on BYTES_BIG_ENDIAN.
 	(aarch64_simd_check_vect_par_cnst_half): New.
   tree call = gimple_call_fn (stmt);
   tree fndecl;
   gimple new_stmt = NULL;
+
+  /* The operations folded below are reduction operations.  These are
+     defined to leave their result in the 0'th element (from the
+     perspective of GCC).  The architectural instruction we are folding
+     will leave the result in the 0'th element (from the perspective of
+     the architecture).  For big-endian targets, these perspectives are
+     not aligned.
+
+     It is therefore wrong to perform this fold on big-endian.  There
+     are some tricks we could play with shuffling, but the mid-end is
+     inconsistent in the way it treats reduction operations, so we
+     would end up in difficulty.  Until we fix the ambiguity, just
+     bail out.  */
+  if (BYTES_BIG_ENDIAN)
+    return false;
+
   if (call)
     {
       fndecl = gimple_call_fndecl (stmt);