Richard Sandiford [Sat, 16 Nov 2019 11:26:11 +0000 (11:26 +0000)]
[AArch64] Pattern-match SVE extending gather loads

This patch pattern-matches a partial gather load followed by a sign or
zero extension into an extending gather load.  (The partial gather load
is already an extending load; we just don't rely on the upper bits of
the elements.)
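
As an illustration (a hedged sketch in the spirit of the new gather_load_extend tests, not copied from them), a loop that gathers 8-bit data through an index array and zero-extends it to 32 bits can now use the extending form of the gather load directly:

    void
    f (unsigned int *restrict dst, unsigned char *restrict src,
       int *restrict idx, int n)
    {
      for (int i = 0; i < n; ++i)
        dst[i] = src[idx[i]];   /* byte gather, zero-extended to 32 bits */
    }

With SVE enabled, the zero extension can be folded into the gather load itself rather than remaining a separate instruction.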

2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* config/aarch64/iterators.md (SVE_2BHSI, SVE_2HSDI, SVE_4BHI)
(SVE_4HSI): New mode iterators.
(ANY_EXTEND2): New code iterator.
* config/aarch64/aarch64-sve.md
(@aarch64_gather_load_<ANY_EXTEND:optab><VNx4_WIDE:mode><VNx4_NARROW:mode>):
Extend to...
(@aarch64_gather_load_<ANY_EXTEND:optab><SVE_4HSI:mode><SVE_4BHI:mode>):
...this, handling extension to partial modes as well as full modes.
Describe the extension as a predicated rather than unpredicated
extension.
(@aarch64_gather_load_<ANY_EXTEND:optab><VNx2_WIDE:mode><VNx2_NARROW:mode>):
Likewise extend to...
(@aarch64_gather_load_<ANY_EXTEND:optab><SVE_2HSDI:mode><SVE_2BHSI:mode>):
...this, making the same adjustments.
(*aarch64_gather_load_<ANY_EXTEND:optab><VNx2_WIDE:mode><VNx2_NARROW:mode>_sxtw):
Likewise extend to...
(*aarch64_gather_load_<ANY_EXTEND:optab><SVE_2HSDI:mode><SVE_2BHSI:mode>_sxtw)
...this, making the same adjustments.
(*aarch64_gather_load_<ANY_EXTEND:optab><VNx2_WIDE:mode><VNx2_NARROW:mode>_uxtw):
Likewise extend to...
(*aarch64_gather_load_<ANY_EXTEND:optab><SVE_2HSDI:mode><SVE_2BHSI:mode>_uxtw)
...this, making the same adjustments.
(*aarch64_gather_load_<ANY_EXTEND:optab><SVE_2HSDI:mode><SVE_2BHSI:mode>_<ANY_EXTEND2:su>xtw_unpacked):
New pattern.
(*aarch64_ldff1_gather<mode>_sxtw): Canonicalize to a constant
extension predicate.
(@aarch64_ldff1_gather_<ANY_EXTEND:optab><VNx4_WIDE:mode><VNx4_NARROW:mode>)
(@aarch64_ldff1_gather_<ANY_EXTEND:optab><VNx2_WIDE:mode><VNx2_NARROW:mode>)
(*aarch64_ldff1_gather_<ANY_EXTEND:optab><VNx2_WIDE:mode><VNx2_NARROW:mode>_uxtw):
Describe the extension as a predicated rather than unpredicated
extension.
(*aarch64_ldff1_gather_<ANY_EXTEND:optab><VNx2_WIDE:mode><VNx2_NARROW:mode>_sxtw):
Likewise.  Canonicalize to a constant extension predicate.
* config/aarch64/aarch64-sve-builtins-base.cc
(svld1_gather_extend_impl::expand): Add an extra predicate for
the extension.
(svldff1_gather_extend_impl::expand): Likewise.

gcc/testsuite/
* gcc.target/aarch64/sve/gather_load_extend_1.c: New test.
* gcc.target/aarch64/sve/gather_load_extend_2.c: Likewise.
* gcc.target/aarch64/sve/gather_load_extend_3.c: Likewise.
* gcc.target/aarch64/sve/gather_load_extend_4.c: Likewise.
* gcc.target/aarch64/sve/gather_load_extend_5.c: Likewise.
* gcc.target/aarch64/sve/gather_load_extend_6.c: Likewise.
* gcc.target/aarch64/sve/gather_load_extend_7.c: Likewise.
* gcc.target/aarch64/sve/gather_load_extend_8.c: Likewise.
* gcc.target/aarch64/sve/gather_load_extend_9.c: Likewise.
* gcc.target/aarch64/sve/gather_load_extend_10.c: Likewise.
* gcc.target/aarch64/sve/gather_load_extend_11.c: Likewise.
* gcc.target/aarch64/sve/gather_load_extend_12.c: Likewise.

From-SVN: r278346

Richard Sandiford [Sat, 16 Nov 2019 11:20:30 +0000 (11:20 +0000)]
[AArch64] Add gather loads for partial SVE modes

This patch adds support for gather loads of partial vectors,
where the vector base or offset elements can be wider than the
elements being loaded.
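
As a rough example (assumed, not taken from the new tests), gathering 16-bit elements through 64-bit offsets leaves the loaded halfwords in 64-bit containers, i.e. a partial vector:

    void
    f (short *restrict dst, short *restrict src, long *restrict idx, int n)
    {
      for (int i = 0; i < n; ++i)
        dst[i] = src[idx[i]];   /* 16-bit data gathered with 64-bit offsets */
    }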

2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* config/aarch64/iterators.md (SVE_24, SVE_2, SVE_4): New mode
iterators.
* config/aarch64/aarch64-sve.md
(gather_load<SVE_FULL_SD:mode><v_int_equiv>): Extend to...
(gather_load<SVE_24:mode><v_int_container>): ...this.
(mask_gather_load<SVE_FULL_S:mode><v_int_equiv>): Extend to...
(mask_gather_load<SVE_4:mode><v_int_container>): ...this.
(mask_gather_load<SVE_FULL_D:mode><v_int_equiv>): Extend to...
(mask_gather_load<SVE_2:mode><v_int_container>): ...this.
(*mask_gather_load<SVE_2:mode><v_int_container>_<su>xtw_unpacked):
New pattern.
(*mask_gather_load<SVE_FULL_D:mode><v_int_equiv>_sxtw): Extend to...
(*mask_gather_load<SVE_2:mode><v_int_equiv>_sxtw): ...this.
Allow the nominal extension predicate to be different from the
load predicate.
(*mask_gather_load<SVE_FULL_D:mode><v_int_equiv>_uxtw): Extend to...
(*mask_gather_load<SVE_2:mode><v_int_equiv>_uxtw): ...this.

gcc/testsuite/
* gcc.target/aarch64/sve/gather_load_1.c (TEST_LOOP): Start at 0.
(TEST_ALL): Add tests for 8-bit and 16-bit elements.
* gcc.target/aarch64/sve/gather_load_2.c: Update accordingly.
* gcc.target/aarch64/sve/gather_load_3.c (TEST_LOOP): Start at 0.
(TEST_ALL): Add tests for 8-bit and 16-bit elements.
* gcc.target/aarch64/sve/gather_load_4.c: Update accordingly.
* gcc.target/aarch64/sve/gather_load_5.c (TEST_LOOP): Start at 0.
(TEST_ALL): Add tests for 8-bit, 16-bit and 32-bit elements.
* gcc.target/aarch64/sve/gather_load_6.c: Add
--param aarch64-sve-compare-costs=0.
(TEST_LOOP): Start at 0.
* gcc.target/aarch64/sve/gather_load_7.c: Add
--param aarch64-sve-compare-costs=0.
* gcc.target/aarch64/sve/gather_load_8.c: New test.
* gcc.target/aarch64/sve/gather_load_9.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_6.c: Add
--param aarch64-sve-compare-costs=0.

From-SVN: r278345

Richard Sandiford [Sat, 16 Nov 2019 11:14:51 +0000 (11:14 +0000)]
[AArch64] Add truncation for partial SVE modes

This patch adds support for "truncating" to a partial SVE vector from
either a full SVE vector or a wider partial vector.  This truncation is
actually a no-op and so should have zero cost in the vector cost model.
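
A minimal sketch of where this arises (assumed example): 32-bit values are stored into a 16-bit array, so the vectoriser needs a VNx4SI-to-VNx4HI "truncation", which on SVE only reinterprets the same registers:

    void
    f (unsigned short *restrict dst, unsigned int *restrict src, int n)
    {
      for (int i = 0; i < n; ++i)
        dst[i] = src[i] + 1;   /* 32-bit result truncated to 16 bits on store */
    }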

2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* config/aarch64/aarch64-sve.md
(trunc<SVE_HSDI:mode><SVE_PARTIAL_I:mode>2): New pattern.
* config/aarch64/aarch64.c (aarch64_integer_truncation_p): New
function.
(aarch64_sve_adjust_stmt_cost): Call it.

gcc/testsuite/
* gcc.target/aarch64/sve/mask_struct_load_1.c: Add
--param aarch64-sve-compare-costs=0.
* gcc.target/aarch64/sve/mask_struct_load_2.c: Likewise.
* gcc.target/aarch64/sve/mask_struct_load_3.c: Likewise.
* gcc.target/aarch64/sve/mask_struct_load_4.c: Likewise.
* gcc.target/aarch64/sve/mask_struct_load_5.c: Likewise.
* gcc.target/aarch64/sve/pack_1.c: Likewise.
* gcc.target/aarch64/sve/truncate_1.c: New test.

From-SVN: r278344

Richard Sandiford [Sat, 16 Nov 2019 11:11:47 +0000 (11:11 +0000)]
[AArch64] Pattern-match SVE extending loads

This patch pattern-matches a partial SVE load followed by a sign or zero
extension into an extending load.  (The partial load is already an
extending load; we just don't rely on the upper bits of the elements.)

Nothing yet uses the extra LDFF1 and LDNF1 combinations, but it seemed
more consistent to provide them, since I needed to update the pattern
to use a predicated extension anyway.
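
For illustration (a sketch, not one of the new load_extend tests), a loop that loads bytes and accumulates them into ints can now use a single sign-extending load instead of a load plus a separate extension:

    void
    f (int *restrict dst, signed char *restrict src, int n)
    {
      for (int i = 0; i < n; ++i)
        dst[i] += src[i];   /* the load can produce 32-bit elements directly */
    }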

2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* config/aarch64/aarch64-sve.md
(@aarch64_load_<ANY_EXTEND:optab><VNx8_WIDE:mode><VNx8_NARROW:mode>):
(@aarch64_load_<ANY_EXTEND:optab><VNx4_WIDE:mode><VNx4_NARROW:mode>)
(@aarch64_load_<ANY_EXTEND:optab><VNx2_WIDE:mode><VNx2_NARROW:mode>):
Combine into...
(@aarch64_load_<ANY_EXTEND:optab><SVE_HSDI:mode><SVE_PARTIAL_I:mode>):
...this new pattern, handling extension to partial modes as well
as full modes.  Describe the extension as a predicated rather than
unpredicated extension.
(@aarch64_ld<fn>f1_<ANY_EXTEND:optab><VNx8_WIDE:mode><VNx8_NARROW:mode>)
(@aarch64_ld<fn>f1_<ANY_EXTEND:optab><VNx4_WIDE:mode><VNx4_NARROW:mode>)
(@aarch64_ld<fn>f1_<ANY_EXTEND:optab><VNx2_WIDE:mode><VNx2_NARROW:mode>):
Combine into...
(@aarch64_ld<fn>f1_<ANY_EXTEND:optab><SVE_HSDI:mode><SVE_PARTIAL_I:mode>):
...this new pattern, handling extension to partial modes as well
as full modes.  Describe the extension as a predicated rather than
unpredicated extension.
* config/aarch64/aarch64-sve-builtins.cc
(function_expander::use_contiguous_load_insn): Add an extra
predicate for extending loads.
* config/aarch64/aarch64.c (aarch64_extending_load_p): New function.
(aarch64_sve_adjust_stmt_cost): Likewise.
(aarch64_add_stmt_cost): Use aarch64_sve_adjust_stmt_cost to adjust
the cost of SVE vector stmts.

gcc/testsuite/
* gcc.target/aarch64/sve/load_extend_1.c: New test.
* gcc.target/aarch64/sve/load_extend_2.c: Likewise.
* gcc.target/aarch64/sve/load_extend_3.c: Likewise.
* gcc.target/aarch64/sve/load_extend_4.c: Likewise.
* gcc.target/aarch64/sve/load_extend_5.c: Likewise.
* gcc.target/aarch64/sve/load_extend_6.c: Likewise.
* gcc.target/aarch64/sve/load_extend_7.c: Likewise.
* gcc.target/aarch64/sve/load_extend_8.c: Likewise.
* gcc.target/aarch64/sve/load_extend_9.c: Likewise.
* gcc.target/aarch64/sve/load_extend_10.c: Likewise.
* gcc.target/aarch64/sve/reduc_4.c: Add
--param aarch64-sve-compare-costs=0.

From-SVN: r278343

Richard Sandiford [Sat, 16 Nov 2019 11:07:23 +0000 (11:07 +0000)]
[AArch64] Add sign and zero extension for partial SVE modes

This patch adds support for extending from partial SVE modes
to both full vector modes and wider partial modes.

Some tests now need --param aarch64-sve-compare-costs=0 to force
the original full-vector code.
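
As a rough, assumed example of code that involves such extensions, halfwords carried in 32-bit containers may need widening to full 32-bit elements before use (whether a separate extend or an extending load ends up being chosen depends on the rest of the loop and on costs):

    void
    f (int *restrict dst, unsigned short *restrict a,
       unsigned short *restrict b, int n)
    {
      for (int i = 0; i < n; ++i)
        dst[i] = a[i] + b[i];   /* 16-bit inputs widened to 32 bits */
    }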

2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* config/aarch64/iterators.md (SVE_HSDI): New mode iterator.
(narrower_mask): Handle VNx4HI, VNx2HI and VNx2SI.
* config/aarch64/aarch64-sve.md
(<ANY_EXTEND:optab><SVE_PARTIAL_I:mode><SVE_HSDI:mode>2): New pattern.
(*<ANY_EXTEND:optab><SVE_PARTIAL_I:mode><SVE_HSDI:mode>2): Likewise.
(@aarch64_pred_sxt<SVE_FULL_HSDI:mode><SVE_PARTIAL_I:mode>): Update
comment.  Avoid new narrower_mask ambiguity.
(@aarch64_cond_sxt<SVE_FULL_HSDI:mode><SVE_PARTIAL_I:mode>): Likewise.
(*cond_uxt<mode>_2): Update comment.
(*cond_uxt<mode>_any): Likewise.

gcc/testsuite/
* gcc.target/aarch64/sve/cost_model_1.c: Expect the loop to be
vectorized with bytes stored in 32-bit containers.
* gcc.target/aarch64/sve/extend_1.c: New test.
* gcc.target/aarch64/sve/extend_2.c: New test.
* gcc.target/aarch64/sve/extend_3.c: New test.
* gcc.target/aarch64/sve/extend_4.c: New test.
* gcc.target/aarch64/sve/load_const_offset_3.c: Add
--param aarch64-sve-compare-costs=0.
* gcc.target/aarch64/sve/mask_struct_store_1.c: Likewise.
* gcc.target/aarch64/sve/mask_struct_store_1_run.c: Likewise.
* gcc.target/aarch64/sve/mask_struct_store_2.c: Likewise.
* gcc.target/aarch64/sve/mask_struct_store_2_run.c: Likewise.
* gcc.target/aarch64/sve/unpack_unsigned_1.c: Likewise.
* gcc.target/aarch64/sve/unpack_unsigned_1_run.c: Likewise.

From-SVN: r278342

Richard Sandiford [Sat, 16 Nov 2019 11:02:09 +0000 (11:02 +0000)]
[AArch64] Add autovec support for partial SVE vectors

This patch adds the bare minimum needed to support autovectorisation of
partial SVE vectors, namely moves and integer addition.  Later patches
add more interesting cases.
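
A minimal sketch (assumed, in the spirit of the new mixed_size tests): when 32-bit and 16-bit elements appear in the same loop, the shorts can be carried one per 32-bit container as a partial vector, and the moves and additions on that partial mode are what this patch provides:

    void
    f (int *restrict x, short *restrict y, int n)
    {
      for (int i = 0; i < n; ++i)
        {
          x[i] += 1;   /* full VNx4SI vector */
          y[i] += 2;   /* partial VNx4HI vector, one halfword per container */
        }
    }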

2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* config/aarch64/aarch64-modes.def: Define partial SVE vector
float modes.
* config/aarch64/aarch64-protos.h (aarch64_sve_pred_mode): New
function.
* config/aarch64/aarch64.c (aarch64_classify_vector_mode): Handle the
new vector float modes.
(aarch64_sve_container_bits): New function.
(aarch64_sve_pred_mode): Likewise.
(aarch64_get_mask_mode): Use it.
(aarch64_sve_element_int_mode): Handle structure modes and partial
modes.
(aarch64_sve_container_int_mode): New function.
(aarch64_vectorize_related_mode): Return SVE modes when given
SVE modes.  Handle partial modes, taking the preferred number
of units from the size of the given mode.
(aarch64_hard_regno_mode_ok): Allow partial modes to be stored
in registers.
(aarch64_expand_sve_ld1rq): Use the mode form of aarch64_sve_pred_mode.
(aarch64_expand_sve_const_vector): Handle partial SVE vectors.
(aarch64_split_sve_subreg_move): Use the mode form of
aarch64_sve_pred_mode.
(aarch64_secondary_reload): Handle partial modes in the same way
as full big-endian vectors.
(aarch64_vector_mode_supported_p): Allow partial SVE vectors.
(aarch64_autovectorize_vector_modes): Try unpacked SVE vectors,
merging with the Advanced SIMD modes.  If two modes have the
same size, try the Advanced SIMD mode first.
(aarch64_simd_valid_immediate): Use the container rather than
the element mode for INDEX constants.
(aarch64_simd_vector_alignment): Make the alignment of partial
SVE vector modes the same as their minimum size.
(aarch64_evpc_sel): Use the mode form of aarch64_sve_pred_mode.
* config/aarch64/aarch64-sve.md (mov<SVE_FULL:mode>): Extend to...
(mov<SVE_ALL:mode>): ...this.
(movmisalign<SVE_FULL:mode>): Extend to...
(movmisalign<SVE_ALL:mode>): ...this.
(*aarch64_sve_mov<mode>_le): Rename to...
(*aarch64_sve_mov<mode>_ldr_str): ...this.
(*aarch64_sve_mov<SVE_FULL:mode>_be): Rename and extend to...
(*aarch64_sve_mov<SVE_ALL:mode>_no_ldr_str): ...this.  Handle
partial modes regardless of endianness.
(aarch64_sve_reload_be): Rename to...
(aarch64_sve_reload_mem): ...this and enable for little-endian.
Use aarch64_sve_pred_mode to get the appropriate predicate mode.
(@aarch64_pred_mov<SVE_FULL:mode>): Extend to...
(@aarch64_pred_mov<SVE_ALL:mode>): ...this.
(*aarch64_sve_mov<SVE_FULL:mode>_subreg_be): Extend to...
(*aarch64_sve_mov<SVE_ALL:mode>_subreg_be): ...this.
(@aarch64_sve_reinterpret<SVE_FULL:mode>): Extend to...
(@aarch64_sve_reinterpret<SVE_ALL:mode>): ...this.
(*aarch64_sve_reinterpret<SVE_FULL:mode>): Extend to...
(*aarch64_sve_reinterpret<SVE_ALL:mode>): ...this.
(maskload<SVE_FULL:mode><vpred>): Extend to...
(maskload<SVE_ALL:mode><vpred>): ...this.
(maskstore<SVE_FULL:mode><vpred>): Extend to...
(maskstore<SVE_ALL:mode><vpred>): ...this.
(vec_duplicate<SVE_FULL:mode>): Extend to...
(vec_duplicate<SVE_ALL:mode>): ...this.
(*vec_duplicate<SVE_FULL:mode>_reg): Extend to...
(*vec_duplicate<SVE_ALL:mode>_reg): ...this.
(sve_ld1r<SVE_FULL:mode>): Extend to...
(sve_ld1r<SVE_ALL:mode>): ...this.
(vec_series<SVE_FULL_I:mode>): Extend to...
(vec_series<SVE_I:mode>): ...this.
(*vec_series<SVE_FULL_I:mode>_plus): Extend to...
(*vec_series<SVE_I:mode>_plus): ...this.
(@aarch64_pred_sxt<SVE_FULL_HSDI:mode><SVE_PARTIAL_I:mode>): Avoid
new VPRED ambiguity.
(@aarch64_cond_sxt<SVE_FULL_HSDI:mode><SVE_PARTIAL_I:mode>): Likewise.
(add<SVE_FULL_I:mode>3): Extend to...
(add<SVE_I:mode>3): ...this.
* config/aarch64/iterators.md (SVE_ALL, SVE_I): New mode iterators.
(Vetype, Vesize, VEL, Vel, vwcore): Handle partial SVE vector modes.
(VPRED, vpred): Likewise.
(Vctype): New iterator.
(vw): Remove SVE modes.

gcc/testsuite/
* gcc.target/aarch64/sve/mixed_size_1.c: New test.
* gcc.target/aarch64/sve/mixed_size_2.c: Likewise.
* gcc.target/aarch64/sve/mixed_size_3.c: Likewise.
* gcc.target/aarch64/sve/mixed_size_4.c: Likewise.
* gcc.target/aarch64/sve/mixed_size_5.c: Likewise.

From-SVN: r278341

Richard Sandiford [Sat, 16 Nov 2019 10:57:55 +0000 (10:57 +0000)]
[AArch64] Tweak gcc.target/aarch64/sve/clastb_8.c

clastb_8.c was using scan-tree-dump-times to check for fully-masked
loops, which made it sensitive to the number of times we try to
vectorize.

2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>

gcc/testsuite/
* gcc.target/aarch64/sve/clastb_8.c: Use assembly tests to
check for fully-masked loops.

From-SVN: r278340

Richard Sandiford [Sat, 16 Nov 2019 10:55:40 +0000 (10:55 +0000)]
[AArch64] Replace SVE_PARTIAL with SVE_PARTIAL_I

Another renaming, this time to make way for partial/unpacked
float modes.

2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* config/aarch64/iterators.md (SVE_PARTIAL): Rename to...
(SVE_PARTIAL_I): ...this.
* config/aarch64/aarch64-sve.md: Apply the above renaming throughout.

From-SVN: r278339

4 years ago[AArch64] Add "FULL" to SVE mode iterator names
Richard Sandiford [Sat, 16 Nov 2019 10:50:42 +0000 (10:50 +0000)]
[AArch64] Add "FULL" to SVE mode iterator names

An upcoming patch will make more use of partial/unpacked SVE vectors.
We then need a distinction between mode iterators that include partial
modes and those that only include "full" modes.  This patch prepares
for that by adding "FULL" to the names of iterators that only select
full modes.  There should be no change in behaviour.

2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* config/aarch64/iterators.md (SVE_ALL): Rename to...
(SVE_FULL): ...this.
(SVE_I): Rename to...
(SVE_FULL_I): ...this.
(SVE_F): Rename to...
(SVE_FULL_F): ...this.
(SVE_BHSI): Rename to...
(SVE_FULL_BHSI): ...this.
(SVE_HSD): Rename to...
(SVE_FULL_HSD): ...this.
(SVE_HSDI): Rename to...
(SVE_FULL_HSDI): ...this.
(SVE_HSF): Rename to...
(SVE_FULL_HSF): ...this.
(SVE_SD): Rename to...
(SVE_FULL_SD): ...this.
(SVE_SDI): Rename to...
(SVE_FULL_SDI): ...this.
(SVE_SDF): Rename to...
(SVE_FULL_SDF): ...this.
(SVE_S): Rename to...
(SVE_FULL_S): ...this.
(SVE_D): Rename to...
(SVE_FULL_D): ...this.
* config/aarch64/aarch64-sve.md: Apply the above renaming throughout.
* config/aarch64/aarch64-sve2.md: Likewise.

From-SVN: r278338

Richard Sandiford [Sat, 16 Nov 2019 10:43:52 +0000 (10:43 +0000)]
[AArch64] Enable VECT_COMPARE_COSTS by default for SVE

This patch enables VECT_COMPARE_COSTS by default for SVE, both so
that we can compare SVE against Advanced SIMD and so that (with future
patches) we can compare multiple SVE vectorisation approaches against
each other.  It also adds a target-specific --param to control this.
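
Several tests later in this series keep their original full-vector output by turning the comparison off again; the directive looks roughly like this (the exact set of options here is an assumption):

    /* { dg-options "-O3 -march=armv8.2-a+sve --param aarch64-sve-compare-costs=0" } */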

2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* config/aarch64/aarch64.opt (--param=aarch64-sve-compare-costs):
New option.
* doc/invoke.texi: Document it.
* config/aarch64/aarch64.c (aarch64_autovectorize_vector_modes):
By default, return VECT_COMPARE_COSTS for SVE.

gcc/testsuite/
* gcc.target/aarch64/sve/reduc_3.c: Split multi-vector cases out
into...
* gcc.target/aarch64/sve/reduc_3_costly.c: ...this new test,
passing -fno-vect-cost-model for them.
* gcc.target/aarch64/sve/slp_6.c: Add -fno-vect-cost-model.
* gcc.target/aarch64/sve/slp_7.c,
* gcc.target/aarch64/sve/slp_7_run.c: Split multi-vector cases out
into...
* gcc.target/aarch64/sve/slp_7_costly.c,
* gcc.target/aarch64/sve/slp_7_costly_run.c: ...these new tests,
passing -fno-vect-cost-model for them.
* gcc.target/aarch64/sve/while_7.c: Add -fno-vect-cost-model.
* gcc.target/aarch64/sve/while_9.c: Likewise.

From-SVN: r278337

Richard Sandiford [Sat, 16 Nov 2019 10:40:23 +0000 (10:40 +0000)]
Optionally pick the cheapest loop_vec_info

This patch adds a mode in which the vectoriser tries each available
base vector mode and picks the one with the lowest cost.  The new
behaviour is selected by autovectorize_vector_modes.

The patch keeps the current behaviour of preferring a VF of
loop->simdlen over any larger or smaller VF, regardless of costs
or target preferences.

2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* target.h (VECT_COMPARE_COSTS): New constant.
* target.def (autovectorize_vector_modes): Return a bitmask of flags.
* doc/tm.texi: Regenerate.
* targhooks.h (default_autovectorize_vector_modes): Update accordingly.
* targhooks.c (default_autovectorize_vector_modes): Likewise.
* config/aarch64/aarch64.c (aarch64_autovectorize_vector_modes):
Likewise.
* config/arc/arc.c (arc_autovectorize_vector_modes): Likewise.
* config/arm/arm.c (arm_autovectorize_vector_modes): Likewise.
* config/i386/i386.c (ix86_autovectorize_vector_modes): Likewise.
* config/mips/mips.c (mips_autovectorize_vector_modes): Likewise.
* tree-vectorizer.h (_loop_vec_info::vec_outside_cost)
(_loop_vec_info::vec_inside_cost): New member variables.
* tree-vect-loop.c (_loop_vec_info::_loop_vec_info): Initialize them.
(vect_better_loop_vinfo_p, vect_joust_loop_vinfos): New functions.
(vect_analyze_loop): When autovectorize_vector_modes returns
VECT_COMPARE_COSTS, try vectorizing the loop with each available
vector mode and picking the one with the lowest cost.
(vect_estimate_min_profitable_iters): Record the computed costs
in the loop_vec_info.

From-SVN: r278336

Richard Sandiford [Sat, 16 Nov 2019 10:36:20 +0000 (10:36 +0000)]
Extend can_duplicate_and_interleave_p to mixed-size vectors

This patch makes can_duplicate_and_interleave_p cope with mixtures of
vector sizes, by using queries based on get_vectype_for_scalar_type
instead of directly querying GET_MODE_SIZE (vinfo->vector_mode).

int_mode_for_size is now the first check we do for a candidate mode,
so it seemed better to restrict it to MAX_FIXED_MODE_SIZE.  This avoids
unnecessary work and avoids trying to create scalar types that the
target might not support.

2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* tree-vectorizer.h (can_duplicate_and_interleave_p): Take an
element type rather than an element mode.
* tree-vect-slp.c (can_duplicate_and_interleave_p): Likewise.
Use get_vectype_for_scalar_type to query the natural types
for a given element type rather than basing everything on
GET_MODE_SIZE (vinfo->vector_mode).  Limit int_mode_for_size
query to MAX_FIXED_MODE_SIZE.
(duplicate_and_interleave): Update call accordingly.
* tree-vect-loop.c (vectorizable_reduction): Likewise.

From-SVN: r278335

Richard Sandiford [Sat, 16 Nov 2019 10:29:31 +0000 (10:29 +0000)]
Apply maximum nunits for BB SLP

The BB vectoriser picked vector types in the same way as the loop
vectoriser: it picked a vector mode/size for the region and then
based all the vector types off that choice.  This meant we could
end up trying to use vector types that had too many elements for
the group size.

The main part of this patch is therefore about passing the SLP
group size down to routines like get_vectype_for_scalar_type and
ensuring that each vector type in the SLP tree is chosen wrt the
group size.  That part in itself is pretty easy and mechanical.

The main warts are:

(1) We normally pick a STMT_VINFO_VECTYPE for data references at an
    early stage (vect_analyze_data_refs).  However, nothing in the
    BB vectoriser relied on this, or on the min_vf calculated from it.
    I couldn't see anything other than vect_recog_bool_pattern that
    tried to access the vector type before the SLP tree is built.

(2) It's possible for the same statement to be used in groups of
    different sizes.  Taking the group size into account meant that
    we could try to pick different vector types for the same statement.

    This problem should go away with the move to doing everything on
    SLP trees, where presumably we would attach the vector type to the
    SLP node rather than the stmt_vec_info.  Until then, the patch just
    uses a first-come, first-served approach.

(3) A similar problem exists for grouped data references, where
    different statements in the same dataref group could be used
    in SLP nodes that have different group sizes.  The patch copes
    with that by making sure that all vector types in a dataref
    group remain consistent.

The patch means that:

    void
    f (int *x, short *y)
    {
      x[0] += y[0];
      x[1] += y[1];
      x[2] += y[2];
      x[3] += y[3];
    }

now produces:

        ldr     q0, [x0]
        ldr     d1, [x1]
        saddw   v0.4s, v0.4s, v1.4h
        str     q0, [x0]
        ret

instead of:

        ldrsh   w2, [x1]
        ldrsh   w3, [x1, 2]
        fmov    s0, w2
        ldrsh   w2, [x1, 4]
        ldrsh   w1, [x1, 6]
        ins     v0.s[1], w3
        ldr     q1, [x0]
        ins     v0.s[2], w2
        ins     v0.s[3], w1
        add     v0.4s, v0.4s, v1.4s
        str     q0, [x0]
        ret

Unfortunately it also means we start to vectorise
gcc.target/i386/pr84101.c for -m32.  That seems like a target
cost issue though; see PR92265 for details.

2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* tree-vectorizer.h (vect_get_vector_types_for_stmt): Take an
optional maximum nunits.
(get_vectype_for_scalar_type): Likewise.  Also declare a form that
takes an slp_tree.
(get_mask_type_for_scalar_type): Take an optional slp_tree.
(vect_get_mask_type_for_stmt): Likewise.
* tree-vect-data-refs.c (vect_analyze_data_refs): Don't store
the vector type in STMT_VINFO_VECTYPE for BB vectorization.
* tree-vect-patterns.c (vect_recog_bool_pattern): Use
vect_get_vector_types_for_stmt instead of STMT_VINFO_VECTYPE
to get an assumed vector type for data references.
* tree-vect-slp.c (vect_update_shared_vectype): New function.
(vect_update_all_shared_vectypes): Likewise.
(vect_build_slp_tree_1): Pass the group size to
vect_get_vector_types_for_stmt.  Use vect_update_shared_vectype
for BB vectorization.
(vect_build_slp_tree_2): Call vect_update_all_shared_vectypes
before building the vector from scalars.
(vect_analyze_slp_instance): Pass the group size to
get_vectype_for_scalar_type.
(vect_slp_analyze_node_operations_1): Don't recompute the vector
types for BB vectorization here; just handle the case in which
we deferred the choice for booleans.
(vect_get_constant_vectors): Pass the slp_tree to
get_vectype_for_scalar_type.
* tree-vect-stmts.c (vect_prologue_cost_for_slp_op): Likewise.
(vectorizable_call): Likewise.
(vectorizable_simd_clone_call): Likewise.
(vectorizable_conversion): Likewise.
(vectorizable_shift): Likewise.
(vectorizable_operation): Likewise.
(vectorizable_comparison): Likewise.
(vect_is_simple_cond): Take the slp_tree as argument and
pass it to get_vectype_for_scalar_type.
(vectorizable_condition): Update call accordingly.
(get_vectype_for_scalar_type): Take a group_size argument.
For BB vectorization, limit the vector to that number
of elements.  Also define an overload that takes an slp_tree.
(get_mask_type_for_scalar_type): Add an slp_tree argument and
pass it to get_vectype_for_scalar_type.
(vect_get_vector_types_for_stmt): Add a group_size argument
and pass it to get_vectype_for_scalar_type.  Don't use the
cached vector type for BB vectorization if a group size is given.
Handle data references in that case.
(vect_get_mask_type_for_stmt): Take an slp_tree argument and
pass it to get_mask_type_for_scalar_type.

gcc/testsuite/
* gcc.dg/vect/bb-slp-4.c: Expect the block to be vectorized
with -fno-vect-cost-model.
* gcc.dg/vect/bb-slp-bool-1.c: New test.
* gcc.target/aarch64/vect_mixed_sizes_14.c: Likewise.
* gcc.target/i386/pr84101.c: XFAIL for -m32.

From-SVN: r278334

Jan Hubicka [Sat, 16 Nov 2019 09:51:09 +0000 (10:51 +0100)]
Fix nonspec_time when there is no cached value.

* ipa-inline.h (do_estimate_edge_time): Add nonspec_time
parameter.
(estimate_edge_time): Use it.
* ipa-inline-analysis.c (do_estimate_edge_time): Add
ret_nonspec_time parameter.

From-SVN: r278333

Edward Smith-Rowland [Sat, 16 Nov 2019 03:16:35 +0000 (03:16 +0000)]
Implement the <tuple> part of C++20 p1032 Misc constexpr bits.

2019-11-15  Edward Smith-Rowland  <3dw4rd@verizon.net>

Implement the <tuple> part of C++20 p1032 Misc constexpr bits.
* include/std/tuple (_Head_base, _Tuple_impl(allocator_arg_t,...)
(_M_assign, tuple(allocator_arg_t,...), _Inherited, operator=, _M_swap)
(swap, pair(piecewise_construct_t,): Constexpr.
* (__uses_alloc0::_Sink::operator=, __uses_alloc_t): Constexpr.
* testsuite/20_util/tuple/cons/constexpr_allocator_arg_t.cc: New test.
* testsuite/20_util/tuple/constexpr_swap.cc : New test.
* testsuite/20_util/uses_allocator/69293_neg.cc: Extra error for C++20.
* testsuite/20_util/uses_allocator/cons_neg.cc: Extra error for C++20.

From-SVN: r278331

GCC Administrator [Sat, 16 Nov 2019 00:16:21 +0000 (00:16 +0000)]
Daily bump.

From-SVN: r278328

Jonathan Wakely [Fri, 15 Nov 2019 23:44:47 +0000 (23:44 +0000)]
libstdc++: Fix <stop_token> and improve tests

* include/std/stop_token: Reduce header dependencies by including
internal headers.
(stop_token::swap(stop_token&), swap(stop_token&, stop_token&)):
Define.
(operator!=(const stop_token&, const stop_token&)): Fix return value.
(stop_token::_Stop_cb::_Stop_cb(Cb&&)): Use std::forward instead of
std::move.
(stop_token::_Stop_state_t) [_GLIBCXX_HAS_GTHREADS]: Use lock_guard
instead of unique_lock.
[!_GLIBCXX_HAS_GTHREADS]: Do not use mutex.
(stop_token::stop_token(_Stop_state)): Change parameter to lvalue
reference.
(stop_source): Remove unnecessary using-declarations for names only
used once.
(swap(stop_source&, stop_source&)): Define.
(stop_callback(const stop_token&, _Cb&&))
(stop_callback(stop_token&&, _Cb&&)): Replace lambdas with a named
function. Use std::forward instead of std::move. Run callbacks if a
stop request has already been made.
(stop_source::_M_execute()): Remove.
(stop_source::_S_execute(_Stop_cb*)): Define.
* include/std/version (__cpp_lib_jthread): Define conditionally.
* testsuite/30_threads/stop_token/stop_callback.cc: New test.
* testsuite/30_threads/stop_token/stop_source.cc: New test.
* testsuite/30_threads/stop_token/stop_token.cc: Enable test for
immediate execution of callback.

From-SVN: r278325

Joseph Myers [Fri, 15 Nov 2019 23:22:41 +0000 (23:22 +0000)]
Diagnose duplicate C2x standard attributes.

For each of the attributes currently included in C2x, it has a
constraint that the attribute shall appear at most once in each
attribute list (attribute-list being what appear between a single [[
and ]]).

This patch implements that check.  As the corresponding check in the
C++ front end (cp_parser_check_std_attribute) makes violations into
errors, I made them into errors, with the same wording, for C as well.

There is an existing check in the case of the fallthrough attribute,
with a warning rather than an error, in attribute_fallthrough_p.  That
is more general, as it also covers __attribute__ ((fallthrough)) and
the case of [[fallthrough]] [[fallthrough]] (multiple attribute-lists
in a single attribute-specifier-sequence), which is not a constraint
violation.  To avoid some [[fallthrough, fallthrough]] being diagnosed
twice, the check I added avoids adding duplicate attributes to the
list.
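
For example (illustrative, compiled with -std=c2x; not one of the new tests):

    [[deprecated, deprecated]] void f (void);   /* error: duplicated within one attribute-list */
    [[deprecated]] void g (void);               /* OK: appears at most once */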

Bootstrapped with no regressions on x86_64-pc-linux-gnu.

gcc/c:
* c-parser.c (c_parser_std_attribute_specifier): Diagnose
duplicate standard attributes.

gcc/testsuite:
* gcc.dg/c2x-attr-deprecated-4.c, gcc.dg/c2x-attr-fallthrough-4.c,
gcc.dg/c2x-attr-maybe_unused-4.c: New tests.

From-SVN: r278324

Paolo Carlini [Fri, 15 Nov 2019 22:56:33 +0000 (22:56 +0000)]
typeck.c (cp_truthvalue_conversion): Add tsubst_flags_t parameter and use it in calls...

/cp
2019-11-15  Paolo Carlini  <paolo.carlini@oracle.com>

* typeck.c (cp_truthvalue_conversion): Add tsubst_flags_t parameter
and use it in calls; also pass the location_t of the expression to
cp_build_binary_op and c_common_truthvalue_conversion.
* rtti.c (build_dynamic_cast_1): Adjust call.
* cvt.c (ocp_convert): Likewise.
* cp-gimplify.c (cp_fold): Likewise.
* cp-tree.h (cp_truthvalue_conversion): Update declaration.

/testsuite
2019-11-15  Paolo Carlini  <paolo.carlini@oracle.com>

* g++.dg/warn/Walways-true-1.C: Check locations too.
* g++.dg/warn/Walways-true-2.C: Likewise.
* g++.dg/warn/Walways-true-3.C: Likewise.
* g++.dg/warn/Waddress-1.C: Check additional location.

From-SVN: r278320

Edward Smith-Rowland [Fri, 15 Nov 2019 21:27:49 +0000 (21:27 +0000)]
Forgot to change the date range.

From-SVN: r278318

Edward Smith-Rowland [Fri, 15 Nov 2019 21:26:25 +0000 (21:26 +0000)]
Implement the default_searcher part of C++20 p1032 Misc constexpr bits.

2019-11-15  Edward Smith-Rowland  <3dw4rd@verizon.net>

Implement the default_searcher part of C++20 p1032 Misc constexpr bits.
* include/std/functional
(default_searcher, default_searcher::operator()): Constexpr.
* testsuite/20_util/function_objects/constexpr_searcher.cc: New.

From-SVN: r278317

Ian Lance Taylor [Fri, 15 Nov 2019 21:14:29 +0000 (21:14 +0000)]
testmain.exp: link against GOLIBS

    Patch by Maciej W. Rozycki.

    Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/207458

From-SVN: r278316

Jonathan Wakely [Fri, 15 Nov 2019 19:58:27 +0000 (19:58 +0000)]
libstdc++: Implement LWG 3149 for std::default_constructible

The change approved in Belfast did not actually rename the concept from
std::default_constructible to std::default_initializable, even though
that was intended. That is expected to be done soon as a separate issue,
so I'm implementing that now too.

* include/bits/iterator_concepts.h (weakly_incrementable): Adjust.
* include/std/concepts (default_constructible): Rename to
default_initializable and require default-list-initialization and
default-initialization to be valid (LWG 3149).
(semiregular): Adjust to new name.
* testsuite/std/concepts/concepts.lang/concept.defaultconstructible/
1.cc: Rename directory to concept.defaultinitializable and adjust to
new name.
* testsuite/std/concepts/concepts.lang/concept.defaultinitializable/
lwg3149.cc: New test.
* testsuite/util/testsuite_iterators.h (test_range): Adjust.

From-SVN: r278314

Jonathan Wakely [Fri, 15 Nov 2019 19:58:15 +0000 (19:58 +0000)]
libstdc++: Implement LWG 3070 in path::lexically_relative

* src/c++17/fs_path.cc [_GLIBCXX_FILESYSTEM_IS_WINDOWS]
(is_disk_designator): New helper function.
(path::_Parser::root_path()): Use is_disk_designator.
(path::lexically_relative(const path&)): Implement resolution of
LWG 3070.
* testsuite/27_io/filesystem/path/generation/relative.cc: Check with
path components that look like a root-name.

From-SVN: r278313

Szabolcs Nagy [Fri, 15 Nov 2019 19:47:12 +0000 (19:47 +0000)]
m68k: add musl support

Add the dynamic linker name and fix a type name to use the public name
instead of the glibc internal name.
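
The new macro presumably follows the pattern other targets use; a sketch (the exact dynamic linker path here is an assumption, not copied from the patch):

    /* config/m68k/linux.h (sketch) */
    #define MUSL_DYNAMIC_LINKER "/lib/ld-musl-m68k.so.1"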

gcc/ChangeLog:

2019-11-15  Szabolcs Nagy  <szabolcs.nagy@arm.com>

* config/m68k/linux.h (MUSL_DYNAMIC_LINKER): Define.

libgcc/ChangeLog:

2019-11-15  Szabolcs Nagy  <szabolcs.nagy@arm.com>

* config/m68k/linux-unwind.h (struct uw_ucontext): Use sigset_t instead
of __sigset_t.

From-SVN: r278312

Joseph Myers [Fri, 15 Nov 2019 18:39:35 +0000 (18:39 +0000)]
Support C2x [[maybe_unused]] attribute.

This patch adds support for the C2x [[maybe_unused]] attribute, using
the same handler as for GNU __attribute__ ((unused)).

As with other such attribute support, I think turning certain warnings
into pedwarns for usage in cases where that is a constraint violation
can be addressed later as a bug fix, as can the C2x constraint for
various standard attributes that they do not appear more than once
inside a single [[]].

However, the warnings that appear in c2x-attr-maybe_unused-1.c (that
the attribute is ignored on member declarations) need to remain as
warnings not pedwarns, since C2x does permit the attribute there.  (Or
they could be silenced, on the basis that GCC doesn't have warnings
for unused struct and union members so it's completely harmless that
it's ignoring an attribute that might do something useful with another
compiler that does have such warnings.)
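
For example (illustrative, -std=c2x):

    void
    log_event ([[maybe_unused]] int verbosity)
    {
      /* verbosity is only read in debug builds; [[maybe_unused]] suppresses
         the -Wunused-parameter warning, just as __attribute__ ((unused)) does.  */
    }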

Bootstrapped with no regressions on x86_64-pc-linux-gnu.

gcc/c:
* c-decl.c (std_attribute_table): Add maybe_unused.

gcc/testsuite:
* gcc.dg/c2x-attr-maybe_unused-1.c,
gcc.dg/c2x-attr-maybe_unused-2.c,
gcc.dg/c2x-attr-maybe_unused-3.c: New tests.

From-SVN: r278310

Kelvin Nilsen [Fri, 15 Nov 2019 17:52:25 +0000 (17:52 +0000)]
MAINTAINERS: Change my email address as maintainer.

ChangeLog:

2019-11-15  Kelvin Nilsen  <kelvin@gcc.gnu.org>

* MAINTAINERS: Change my email address as maintainer.

From-SVN: r278309

Nick Clifton [Fri, 15 Nov 2019 17:39:14 +0000 (17:39 +0000)]
microblaze: fix PR65649

microblaze-linux-musl build fails without this.

(This is a rebase of an earlier patch posted on bugzilla.)

gcc/ChangeLog:

2019-11-15  Nick Clifton  <nickc@redhat.com>
    Szabolcs Nagy  <szabolcs.nagy@arm.com>

PR target/65649
* config/microblaze/microblaze.c (print_operand): Print value as long.

Co-Authored-By: Szabolcs Nagy <szabolcs.nagy@arm.com>
From-SVN: r278308

Jan Hubicka [Fri, 15 Nov 2019 17:13:21 +0000 (18:13 +0100)]
ipa-inline.c (edge_badness, [...]): Revert accidental commit.

* ipa-inline.c (edge_badness, inline_small_functions): Revert
accidental commit.

From-SVN: r278307

Kwok Cheung Yeung [Fri, 15 Nov 2019 16:49:08 +0000 (16:49 +0000)]
[amdgcn] Unfix registers for frame pointer

Allow the registers used for the frame pointer to be used for other purposes
if the frame pointer is not being used.

2019-11-15  Kwok Cheung Yeung  <kcy@codesourcery.com>

gcc/
* config/gcn/gcn.h (FIXED_REGISTERS): Unfix frame pointer.
(CALL_USED_REGISTERS): Make frame pointer callee-saved.

From-SVN: r278306

Kwok Cheung Yeung [Fri, 15 Nov 2019 16:32:29 +0000 (16:32 +0000)]
[amdgcn] Update lower bounds for the number of registers in non-leaf kernels

Reduce the lower limits on the number of registers requested by non-leaf
kernels to help improve CU occupancy.

2019-11-15  Kwok Cheung Yeung  <kcy@codesourcery.com>

gcc/
* config/gcn/gcn.c (MAX_NORMAL_SGPR_COUNT, MAX_NORMAL_VGPR_COUNT): New.
(gcn_conditional_register_usage): Use constants in place of hard-coded
values.
(gcn_hsa_declare_function_name): Set lower bound for number of
SGPRs/VGPRs in non-leaf kernels to MAX_NORMAL_SGPR_COUNT and
MAX_NORMAL_VGPR_COUNT.

From-SVN: r278305

Martin Jambor [Fri, 15 Nov 2019 16:19:09 +0000 (17:19 +0100)]
ipa: Remove stray declaration

2019-11-15  Martin Jambor  <mjambor@suse.cz>

* ipa-utils.h (ipa_remove_useless_jump_functions): Remove stray
declaration.

From-SVN: r278303

Kwok Cheung Yeung [Fri, 15 Nov 2019 15:36:34 +0000 (15:36 +0000)]
[amdgcn] Restrict registers available to non-kernel functions

Restrict the number of SGPRs and VGPRs available to non-kernel functions
to improve compute-unit occupancy with multiple threads.

2019-11-15  Kwok Cheung Yeung  <kcy@codesourcery.com>

gcc/
* config/gcn/gcn.c (default_requested_args): New.
(gcn_parse_amdgpu_hsa_kernel_attribute): Initialize requested args
set with default_requested_args.
(gcn_conditional_register_usage): Limit register usage of non-kernel
functions.  Reassign fixed registers if a non-standard set of args is
requested.
* config/gcn/gcn.h (FIXED_REGISTERS): Fix registers according to ABI.

From-SVN: r278301

Feng Xue [Fri, 15 Nov 2019 15:03:24 +0000 (15:03 +0000)]
re PR ipa/92528 (ICE in ipa_get_parm_lattices since r278219)

2019-11-15  Feng Xue  <fxue@os.amperecomputing.com>

        PR ipa/92528
        * ipa-prop.c (update_jump_functions_after_inlining): Invalidate
        aggregate jump function when inlined-to caller has no edge summary.

From-SVN: r278300

Kwok Cheung Yeung [Fri, 15 Nov 2019 14:56:41 +0000 (14:56 +0000)]
[amdgcn] Reinitialize registers for every function

gcn_conditional_register_usage needs to be called for every function
to set the fixed registers depending on the kernel args currently
requested.

2019-11-15  Kwok Cheung Yeung  <kcy@codesourcery.com>

gcc/
* config/gcn/gcn.c (gcn_init_cumulative_args): Call reinit_regs.

From-SVN: r278299

Jason Merrill [Fri, 15 Nov 2019 14:51:05 +0000 (09:51 -0500)]
Implement P1816R0, class template argument deduction for aggregates.

Rather than reimplement brace elision here, we call reshape_init and then
discard the result.  We needed to set CLASSTYPE_NON_AGGREGATE a bit more in
this patch, since outside a template it's set in check_bases_and_members.

* pt.c (maybe_aggr_guide, collect_ctor_idx_types): New.
(is_spec_or_derived): Split out from do_class_deduction.
(build_deduction_guide): Handle aggregate guide.
* class.c (finish_struct): Set CLASSTYPE_NON_AGGREGATE in a
template.
* cp-tree.h (CP_AGGREGATE_TYPE_P): An incomplete class is not an
aggregate.

From-SVN: r278298

Kwok Cheung Yeung [Fri, 15 Nov 2019 14:48:15 +0000 (14:48 +0000)]
[amdgcn] Use first lane of v1 for zero offset

Use v1 instead of v0 when a zero-valued VGPR is needed.  This frees up
v0 for other purposes.

2019-11-15  Kwok Cheung Yeung  <kcy@codesourcery.com>

gcc/
* config/gcn/gcn.c (gcn_expand_prologue): Remove initialization and
prologue use of v0.
(print_operand_address): Use v1 for zero vector offset.

From-SVN: r278297

Jonathan Wakely [Fri, 15 Nov 2019 14:38:59 +0000 (14:38 +0000)]
libstdc++: Fix definition of std::nostopstate object

Also add <stop_token> header to PCH and Doxygen config.

* doc/doxygen/user.cfg.in: Add <stop_token>.
* include/precompiled/stdc++.h: Likewise.
* include/std/stop_token: Fix definition of std::nostopstate.
* testsuite/30_threads/headers/stop_token/synopsis.cc: New test.
* testsuite/30_threads/headers/thread/types_std_c++20.cc: New test.
* testsuite/30_threads/stop_token/stop_source.cc: New test.
* testsuite/30_threads/stop_token/stop_token.cc: Remove unnecessary
dg-require directives. Remove I/O and inclusion of <iostream>.

From-SVN: r278296

Richard Sandiford [Fri, 15 Nov 2019 14:37:57 +0000 (14:37 +0000)]
Fix vector/scalar to vector/vector conversion (PR92515)

r278235 broke conversions of vector/scalar shifts into vector/vector
shifts on targets that only provide the latter.  We need to record
whether a conversion is required in that case too.

Also, the old useless_type_conversion_p condition seemed unnecessarily
strong, since the shift amount can have a different signedness from
the shifted value and its vector type is never assumed to be identical
to vectype.  The patch therefore uses tree_nop_conversion_p instead.
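
A sketch of the kind of loop affected (assumed, not the PR testcase): the shift amount is a loop-invariant scalar, so on targets that only provide vector/vector shifts it has to be broadcast and possibly converted:

    void
    f (int *restrict x, int n, int amount)
    {
      for (int i = 0; i < n; ++i)
        x[i] <<= amount;   /* scalar shift amount, vectorised shift */
    }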

2019-11-15  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
PR tree-optimization/92515
* tree-vect-stmts.c (vectorizable_shift): Record incompatible op1
types when converting a vector/scalar shift into a vector/vector one,
using tree_nop_conversion_p instead of useless_type_conversion_p.
Move the conversion code to the transform block.

From-SVN: r278295

Matthew Malcomson [Fri, 15 Nov 2019 14:18:14 +0000 (14:18 +0000)]
[mid-end][__RTL] Account for column numbers in __RTL functions

The documentation for __RTL tests (see "(gccint) RTL Tests" info node) has the
following snippet.

```
 The parser expects the RTL body to be in the format emitted by this
dumping function:

     DEBUG_FUNCTION void
     print_rtx_function (FILE *outfile, function *fn, bool compact);

 when "compact" is true.  So you can capture RTL in the correct format
from the debugger using:

     (gdb) print_rtx_function (stderr, cfun, true);

 and copy and paste the output into the body of the C function.
```

Since r264944 print_rtx_function prints column number information, which the
__RTL function parsing does not handle.

This patch handles column number information optionally, so pre-existing __RTL
functions still work, and the above documentation quote still holds.

Note: If people would prefer to require column information I could make a
slightly neater code and update existing tests.
I guess this would be OK since the intended use for __RTL functions is in these
testcases so there is no worry about other existing code.

bootstrapped and regtested on aarch64
bootstrapped and regtested on x86_64

Ok for trunk?

Cheers,
Matthew

gcc/ChangeLog:

2019-11-15  Matthew Malcomson  <matthew.malcomson@arm.com>

* read-rtl-function.c
(function_reader::add_fixup_source_location): Take additional
parameter of a column.
(function_reader::maybe_read_location): Optionally parse column
information and pass to add_fixup_source_location.

gcc/testsuite/ChangeLog:

2019-11-15  Matthew Malcomson  <matthew.malcomson@arm.com>

* gcc.dg/rtl/aarch64/rtl-handle-column-numbers.c: New test.

From-SVN: r278294

Richard Biener [Fri, 15 Nov 2019 13:52:09 +0000 (13:52 +0000)]
re PR tree-optimization/92512 (ICE in gimple_op, at gimple.h:2436)

2019-11-15  Richard Biener  <rguenther@suse.de>

PR tree-optimization/92512
* tree-vect-loop.c (check_reduction_path): Fix operand index
computability check.  Add check for second use in COND_EXPRs.

* gcc.dg/torture/pr92512.c: New testcase.

From-SVN: r278293

Richard Sandiford [Fri, 15 Nov 2019 12:57:47 +0000 (12:57 +0000)]
[rs6000] Use VIEW_CONVERT_EXPR to reinterpret vectors (PR 92515)

The new tree-cfg.c checking in r278245 tripped on folds of
ALTIVEC_BUILTIN_VPERM_*, which were using gimple_convert
rather than VIEW_CONVERT_EXPR to reinterpret the contents
of a vector as a different type.

2019-11-15  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
PR target/92515
* config/rs6000/rs6000-call.c (rs6000_gimple_fold_builtin): Use
VIEW_CONVERT_EXPR to reinterpret vectors as different types.

From-SVN: r278292

Kwok Cheung Yeung [Fri, 15 Nov 2019 12:54:40 +0000 (12:54 +0000)]
[amdgcn] Fix handling of VCC_CONDITIONAL_REG

Classify vcc_lo and vcc_hi into the VCC_CONDITIONAL_REG class,
and spill them into SGPRs if necessary.

2019-11-15  Kwok Cheung Yeung  <kcy@codesourcery.com>

gcc/
* config/gcn/gcn.c (gcn_regno_reg_class): Return VCC_CONDITIONAL_REG
register class for VCC_LO and VCC_HI.
(gcn_spill_class): Use SGPR_REGS to spill registers in
VCC_CONDITIONAL_REG.

From-SVN: r278290

Richard Biener [Fri, 15 Nov 2019 12:48:34 +0000 (12:48 +0000)]
re PR tree-optimization/92324 (ICE in expand_direct_optab_fn, at internal-fn.c:2890)

2019-11-15  Richard Biener  <rguenther@suse.de>

PR tree-optimization/92324
* tree-vect-loop.c (vect_create_epilog_for_reduction): Fix
signedness of SLP reduction epilogue operations.  Also reduce
the vector width for SLP reductions before doing elementwise
operations if possible.

* gcc.dg/vect/pr92324-4.c: New testcase.

From-SVN: r278289

Paul Thomas [Fri, 15 Nov 2019 12:42:29 +0000 (12:42 +0000)]
re PR fortran/69654 (ICE in gfc_trans_structure_assign)

2019-11-15  Paul Thomas  <pault@gcc.gnu.org>

PR fortran/69654
* trans-expr.c (gfc_trans_structure_assign): Move assignment to
'cm' after treatment of C pointer types and test that the type
has been completely built before it. Add an assert that the
backend_decl for each component exists.

2019-11-15  Paul Thomas  <pault@gcc.gnu.org>

PR fortran/69654
* gfortran.dg/derived_init_6.f90: New test.

From-SVN: r278287

Jonathan Wakely [Fri, 15 Nov 2019 12:16:21 +0000 (12:16 +0000)]
libstdc++: Fix changelog whitespace

From-SVN: r278286

Matthew Malcomson [Fri, 15 Nov 2019 12:10:56 +0000 (12:10 +0000)]
[mid-end][__RTL] Set global epilogue_completed in skip_pass

Set global epilogue_completed when skipping pro_and_epilogue pass

When compiling RTL functions marked to start at a pass after the reload
pass, `skip_pass` is used to mark the reload pass as having completed
since many patterns use the `reload_completed` variable to determine
whether to run or not.

Here we do the same for the `epilogue_completed` variable and the
pro_and_epilogue pass.

Also include a testcase that relies on the availability of a
define_split in the aarch64 backend that is conditioned on this
`epilogue_completed` variable.

regtest done on native aarch64
regtest done on native x64_86

gcc/ChangeLog:

2019-11-15  Matthew Malcomson  <matthew.malcomson@arm.com>

* passes.c (skip_pass): Set epilogue_completed if skipping the
pro_and_epilogue pass.

gcc/testsuite/ChangeLog:

2019-11-15  Matthew Malcomson  <matthew.malcomson@arm.com>

* gcc.dg/rtl/aarch64/test-epilogue-set.c: New test.

From-SVN: r278285

Andrew Stubbs [Fri, 15 Nov 2019 10:49:10 +0000 (10:49 +0000)]
Add tests for print from offload target.

2019-11-15  Andrew Stubbs  <ams@codesourcery.com>

libgomp/
* testsuite/libgomp.c/target-print-1.c: New file.
* testsuite/libgomp.fortran/target-print-1.f90: New file.
* testsuite/libgomp.oacc-c/print-1.c: New file.
* testsuite/libgomp.oacc-fortran/print-1.f90: New file.

From-SVN: r278284

Matthew Malcomson [Fri, 15 Nov 2019 10:01:38 +0000 (10:01 +0000)]
[mid-end][__RTL] Clean state despite invalid __RTL startwith passes

Hi there,

When compiling an __RTL function that has an invalid "startwith" pass we
currently don't run the dfinish cleanup pass. This means we ICE on the next
function.

This change ensures that all state is cleaned up for the next function
to run correctly.

As an example, before this change the following code would ICE when compiling
the function `foo2` because the "peephole2" pass is not run at optimisation
level -O0.

When compiled with
./aarch64-none-linux-gnu-gcc -O0 -S missed-pass-error.c -o test.s

```
int __RTL (startwith ("peephole2")) badfoo ()
{
(function "badfoo"
  (insn-chain
    (block 2
      (edge-from entry (flags "FALLTHRU"))
      (cnote 3 [bb 2] NOTE_INSN_BASIC_BLOCK)
      (cinsn 101 (set (reg:DI x19) (reg:DI x0)))
      (cinsn 10 (use (reg/i:SI x19)))
      (edge-to exit (flags "FALLTHRU"))
    ) ;; block 2
  ) ;; insn-chain
) ;; function "foo2"
}

int __RTL (startwith ("final")) foo2 ()
{
(function "foo2"
  (insn-chain
    (block 2
      (edge-from entry (flags "FALLTHRU"))
      (cnote 3 [bb 2] NOTE_INSN_BASIC_BLOCK)
      (cinsn 101 (set (reg:DI x19) (reg:DI x0)))
      (cinsn 10 (use (reg/i:SI x19)))
      (edge-to exit (flags "FALLTHRU"))
    ) ;; block 2
  ) ;; insn-chain
) ;; function "foo2"
}
```

Now it silently ignores the __RTL function and successfully compiles foo2.

regtest done on aarch64
regtest done on x86_64

OK for trunk?

gcc/ChangeLog:

2019-11-15  Matthew Malcomson  <matthew.malcomson@arm.com>

* passes.c (should_skip_pass_p): Always run "dfinish".

gcc/testsuite/ChangeLog:

2019-11-15  Matthew Malcomson  <matthew.malcomson@arm.com>

* gcc.dg/rtl/aarch64/missed-pass-error.c: New test.

From-SVN: r278283

Richard Biener [Fri, 15 Nov 2019 09:38:03 +0000 (09:38 +0000)]
ipa-inline.c (inline_small_functions): Move assignment to next before call destroying edge.

2019-11-15  Richard Biener  <rguenther@suse.de>

* ipa-inline.c (inline_small_functions): Move assignment
to next before call destroying edge.

From-SVN: r278282

Richard Biener [Fri, 15 Nov 2019 09:09:16 +0000 (09:09 +0000)]
re PR tree-optimization/92039 (Spurious -Warray-bounds warnings building 32-bit glibc)

2019-11-15  Richard Biener  <rguenther@suse.de>

PR tree-optimization/92039
PR tree-optimization/91975
* tree-ssa-loop-ivcanon.c (constant_after_peeling): Revert
previous change, treat invariants consistently as non-constant.
(tree_estimate_loop_size): Ternary ops with just the first op
constant are not optimized away.

* gcc.dg/tree-ssa/cunroll-2.c: Revert to state previous to
unroller adjustment.
* g++.dg/tree-ssa/ivopts-3.C: Likewise.

From-SVN: r278281

Jakub Jelinek [Fri, 15 Nov 2019 08:32:36 +0000 (09:32 +0100)]
gimplify.c (gimplify_call_expr): Don't call omp_resolve_declare_variant after gimplification.

* gimplify.c (gimplify_call_expr): Don't call
omp_resolve_declare_variant after gimplification.
* omp-general.c (omp_context_selector_matches): For isa that might
match in some other function, defer if in declare simd function.
(omp_context_compute_score): Don't look for " score" in construct
trait set.  Set *score to -1 if it can't ever match.
(omp_resolve_declare_variant): If any variants need to be deferred,
don't punt immediately, but compute scores of all variants and if
there is a score winner that doesn't need to be deferred, return that.

* c-c++-common/gomp/declare-variant-13.c: New test.

From-SVN: r278280

Jan Hubicka [Fri, 15 Nov 2019 08:19:16 +0000 (09:19 +0100)]
re PR testsuite/92520 (new test case gcc/testsuite/gcc.dg/ipa/inline-9.c in r278220 is unresolved)

PR testsuite/92520
* gcc.dg/ipa/inline-9.c: Fix template.

From-SVN: r278279

Luo Xiong Hu [Fri, 15 Nov 2019 08:17:31 +0000 (08:17 +0000)]
Fix comments typo

gcc/ChangeLog:

2019-11-15  Luo Xiong Hu  <luoxhu@linux.ibm.com>

* ipa-comdats.c: Fix comments typo.
* ipa-profile.c: Fix comments typo.
* tree-profile.c (gimple_gen_ic_profiler): Use the new variable
__gcov_indirect_call.counters and __gcov_indirect_call.callee.
(gimple_gen_ic_func_profiler): Likewise.
(pass_ipa_tree_profile::gate): Fix comments typo.

From-SVN: r278278

Xiong Hu Luo [Fri, 15 Nov 2019 08:15:37 +0000 (08:15 +0000)]
Update iterator of next

next is initialized only in the loop before; it is never updated
in its own loop.

gcc/ChangeLog:

2019-11-15  Xiong Hu Luo  <luoxhu@linux.ibm.com>

* ipa-inline.c (inline_small_functions): Update iterator of next.

From-SVN: r278277

Ian Lance Taylor [Fri, 15 Nov 2019 03:28:49 +0000 (03:28 +0000)]
compiler: fix buglet in function inlining related to sink names

    When the compiler writes an inlinable function to the export data,
    parameter names are written out (in Export::write_name) using the
    Gogo::message_name as opposed to a raw/encoded name. This means that
    sink parameters (those named "_") get created with the name "_"
    instead of "._" (the name created by the lexer/parser). This confuses
    Gogo::is_sink_name, which looks for the latter sequence and not just
    "_". This can cause issues later on if an inlinable function is
    imported and fed through the rest of the compiler (things that are
    sinks are not recognized as such). To fix these issues, change
    Gogo::is_sink_name to return true for either variant ("_" or "._").

    Fixes golang/go#35586.

    Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/207259

From-SVN: r278275

Thomas Rodgers [Fri, 15 Nov 2019 03:09:19 +0000 (03:09 +0000)]
Support for jthread and stop_token

        * include/Makefile.am: Add <stop_token> header.
        * include/Makefile.in: Regenerate.
        * include/std/condition_variable: Add overloads for stop_token support
        to condition_variable_any.
        * include/std/stop_token: New file.
        * include/std/thread: Add jthread type.
        * include/std/version (__cpp_lib_jthread): New value.
        * testsuite/30_threads/condition_variable_any/stop_token/1.cc: New test.
        * testsuite/30_threads/condition_variable_any/stop_token/2.cc: New test.
        * testsuite/30_threads/condition_variable_any/stop_token/wait_on.cc: New test.
        * testsuite/30_threads/jthread/1.cc: New test.
        * testsuite/30_threads/jthread/2.cc: New test.
        * testsuite/30_threads/jthread/jthread.cc: New test.
        * testsuite/30_threads/stop_token/1.cc: New test.
        * testsuite/30_threads/stop_token/2.cc: New test.
        * testsuite/30_threads/stop_token/stop_token.cc: New test.
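
As a usage sketch (not one of the new tests; assumes -std=c++2a and
-pthread), the two facilities cooperate like this:

  #include <chrono>
  #include <stop_token>
  #include <thread>

  int main ()
  {
    // The callable may take a std::stop_token first parameter;
    // std::jthread passes one tied to its internal stop_source.
    std::jthread worker ([](std::stop_token st)
      {
        while (!st.stop_requested ())
          std::this_thread::sleep_for (std::chrono::milliseconds (10));
      });

    worker.request_stop ();   // cooperative cancellation
  }                           // ~jthread() requests stop and joins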

From-SVN: r278274

4 years agoImprove checks on C2x fallthrough attribute.
Joseph Myers [Fri, 15 Nov 2019 01:33:37 +0000 (01:33 +0000)]
Improve checks on C2x fallthrough attribute.

When adding C2x attribute support, some [[fallthrough]] support
appeared as a side-effect because of code for that attribute going
through separate paths from the normal attribute handling.

However, going through those paths without the normal attribute
handlers meant that certain checks, such as for the invalid usage
[[fallthrough()]], did not operate.  This patch improves checks by
adding this attribute to the standard attribute table, so that the
parser knows it expects no arguments, along with adding an explicit
check for "[[fallthrough]];" attribute-declarations at top level.  As
with other attributes, there are still cases where warnings should be
pedwarns because C2x constraints are violated, but this patch improves
the attribute handling.
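
For illustration only (this is not one of the new tests), the forms
involved look like this:

  int
  f (int c)
  {
    int n = 0;
    switch (c)
      {
      case 0:
        n += 1;
        [[fallthrough]];   // OK: attribute on the null statement before the next case
      case 1:
        n += 2;
        break;
      default:
        break;
      }
    return n;
  }

  // [[fallthrough()]];    // now rejected: the attribute takes no arguments
  // [[fallthrough]];      // as a top-level attribute-declaration: now explicitly diagnosed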

Bootstrapped with no regressions on x86_64-pc-linux-gnu.

gcc/c:
* c-decl.c (std_attribute_table): Add fallthrough.
* c-parser.c (c_parser_declaration_or_fndef): Diagnose fallthrough
attribute at top level.

gcc/c-family:
* c-attribs.c (handle_fallthrough_attribute): Remove static.
* c-common.h (handle_fallthrough_attribute): Declare.

gcc/testsuite:
* gcc.dg/c2x-attr-fallthrough-2.c,
gcc.dg/c2x-attr-fallthrough-3.c: New tests.

From-SVN: r278273

4 years agoDaily bump.
GCC Administrator [Fri, 15 Nov 2019 00:16:19 +0000 (00:16 +0000)]
Daily bump.

From-SVN: r278272

4 years agoImplement the <array> part of C++20 p1032 Misc constexpr bits.
Edward Smith-Rowland [Fri, 15 Nov 2019 00:09:49 +0000 (00:09 +0000)]
Implement the <array> part of C++20 p1032 Misc constexpr bits.

2019-11-14  Edward Smith-Rowland  <3dw4rd@verizon.net>

Implement the <array> part of C++20 p1032 Misc constexpr bits.
* include/std/array (fill, swap): Make constexpr.
* testsuite/23_containers/array/requirements/constexpr_fill.cc: New.
* testsuite/23_containers/array/requirements/constexpr_swap.cc: New.
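
A minimal sketch of what this enables (assumes -std=c++2a; not the
contents of the new tests):

  #include <array>

  constexpr int
  fill_and_swap ()
  {
    std::array<int, 4> a{};
    a.fill (7);              // now usable in a constant expression
    std::array<int, 4> b{};
    a.swap (b);              // likewise constexpr per P1032
    return b.front () + b.back ();
  }

  static_assert (fill_and_swap () == 14);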

From-SVN: r278269

4 years agoSupport C2x [[deprecated]] attribute.
Joseph Myers [Fri, 15 Nov 2019 00:06:30 +0000 (00:06 +0000)]
Support C2x [[deprecated]] attribute.

This patch adds support for the C2x [[deprecated]] attribute.  All the
actual logic for generating warnings can be identical to the GNU
__attribute__ ((deprecated)), as can the attribute handler, so this is
just a matter of wiring things up appropriately and adding the checks
specified in the standard.  Unlike for C++, this patch gives
"deprecated" an entry in a table of standard attributes rather than
remapping it internally to the GNU attribute, as that seems a cleaner
approach to me.

Specifically, the only form of arguments to the attribute permitted in
the standard is (string-literal); empty parentheses are not permitted
in the case of no arguments, and a string literal (which includes
concatenated adjacent string literals, because concatenation is an
earlier phase of translation) cannot have further redundant
parentheses around it.  For the case of empty parentheses, this patch
makes the C parser disallow them for all known attributes using the
[[]] syntax, as done for C++.  For string literals (where the C++
front end is missing the check to avoid redundant parentheses, 92521
filed for that issue), a special case is inserted in the C parser.
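
Concretely, and purely for illustration (these are not the new tests),
the accepted and rejected forms are:

  [[deprecated]] void old_fn (void);                          // OK: no arguments
  [[deprecated("use new_fn instead")]] void older_fn (void);  // OK: one string literal
  [[deprecated("use " "new_fn")]] void oldest_fn (void);      // OK: literals concatenate earlier

  // [[deprecated()]] void bad1 (void);        // rejected: empty argument list
  // [[deprecated(("msg"))]] void bad2 (void); // rejected: redundant parentheses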

A known issue that I think can be addressed later as a bug fix is that
the warnings for the attribute being ignored in certain cases
(attribute declarations, statements, most uses on types) ought to be
pedwarns, as those usages are constraint violations.

Bad handling of wide string literals with this attribute is also a
pre-existing bug (91182 - although that's filed as a C++ bug, the code
in question is language-independent, in tree.c).

Bootstrapped with no regressions on x86_64-pc-linux-gnu.

gcc/c:
* c-decl.c (std_attribute_table): New.
(c_init_decl_processing): Register attributes from
std_attribute_table.
* c-parser.c (c_parser_attribute_arguments): Add arguments
require_string and allow_empty_args.  All callers changed.
(c_parser_std_attribute): Set require_string argument for
"deprecated" attribute.

gcc/c-family:
* c-attribs.c (handle_deprecated_attribute): Remove static.
* c-common.h (handle_deprecated_attribute): Declare.

gcc/testsuite:
* gcc.dg/c2x-attr-deprecated-1.c, gcc.dg/c2x-attr-deprecated-2.c,
gcc.dg/c2x-attr-deprecated-3.c: New tests.

From-SVN: r278268

4 years agoCheck suitability of spill register for mode
Kwok Cheung Yeung [Thu, 14 Nov 2019 23:37:13 +0000 (23:37 +0000)]
Check suitability of spill register for mode

2019-11-14  Kwok Cheung Yeung  <kcy@codesourcery.com>

gcc/
* lra-spills.c (assign_spill_hard_regs): Check that the spill
register is suitable for the mode.

From-SVN: r278267

4 years agoChange fold_range to return a boolean result.
Andrew MacLeod [Thu, 14 Nov 2019 22:29:56 +0000 (22:29 +0000)]
Change fold_range to return a boolean result.

2019-11-14  Andrew MacLeod  <amacleod@redhat.com>

* range-op.h (range_operator::fold_range): Return a bool.
* range-op.cc (range_operator::wi_fold): Assert supported type.
(range_operator::fold_range): Assert supported type and return true.
(operator_equal::fold_range): Return true.
(operator_not_equal::fold_range): Same.
(operator_lt::fold_range): Same.
(operator_le::fold_range): Same.
(operator_gt::fold_range): Same.
(operator_ge::fold_range): Same.
(operator_plus::op1_range): Adjust call to fold_range.
(operator_plus::op2_range): Same.
(operator_minus::op1_range): Same.
(operator_minus::op2_range): Same.
(operator_exact_divide::op1_range): Same.
(operator_lshift::fold_range): Return true and adjust fold_range call.
(operator_rshift::fold_range): Same.
(operator_cast::fold_range): Return true.
(operator_logical_and::fold_range): Same.
(operator_logical_or::fold_range): Same.
(operator_logical_not::fold_range): Same.
(operator_bitwise_not::fold_range): Adjust call to fold_range.
(operator_bitwise_not::op1_range): Same.
(operator_cst::fold_range): Return true.
(operator_identity::fold_range): Return true.
(operator_negate::fold_range): Return true and adjust fold_range call.
(operator_addr_expr::fold_range): Return true.
(operator_addr_expr::op1_range): Adjust call to fold_range.
(range_cast): Same.
* tree-vrp.c (range_fold_binary_symbolics_p): Adjust call to fold_range.
(range_fold_unary_symbolics_p): Same.

From-SVN: r278266

4 years agoSupport UTF-8 character constants for C2x.
Joseph Myers [Thu, 14 Nov 2019 20:18:33 +0000 (20:18 +0000)]
Support UTF-8 character constants for C2x.

C2x adds u8'' character constants to C.  This patch adds the
corresponding GCC support.

Most of the support was already present for C++ and just needed
enabling for C2x.  However, in C2x these constants have type unsigned
char, which required corresponding adjustments in the compiler and the
preprocessor to give them that type for C.

For C, it seems clear to me that having type unsigned char means the
constants are unsigned in the preprocessor (and thus treated as having
type uintmax_t in #if conditionals), so this patch implements that.  I
included a conditional in the libcpp change to avoid affecting
signedness for C++, but I'm not sure if in fact these constants should
also be unsigned in the preprocessor for C++, in which case that
!CPP_OPTION (pfile, cplusplus) conditional would not be needed.

Bootstrapped with no regressions on x86_64-pc-linux-gnu.

gcc/c:
* c-parser.c (c_parser_postfix_expression)
(c_parser_check_literal_zero): Handle CPP_UTF8CHAR.
* gimple-parser.c (c_parser_gimple_postfix_expression): Likewise.

gcc/c-family:
* c-lex.c (lex_charconst): Make CPP_UTF8CHAR constants unsigned
char for C.

gcc/testsuite:
* gcc.dg/c11-utf8char-1.c, gcc.dg/c2x-utf8char-1.c,
gcc.dg/c2x-utf8char-2.c, gcc.dg/c2x-utf8char-3.c,
gcc.dg/gnu2x-utf8char-1.c: New tests.

libcpp:
* charset.c (narrow_str_to_charconst): Make CPP_UTF8CHAR constants
unsigned for C.
* init.c (lang_defaults): Set utf8_char_literals for GNUC2X and
STDC2X.

From-SVN: r278265

4 years agoTweak gcc.dg/vect/bb-slp-4[01].c (PR92366)
Richard Sandiford [Thu, 14 Nov 2019 19:24:21 +0000 (19:24 +0000)]
Tweak gcc.dg/vect/bb-slp-4[01].c (PR92366)

gcc.dg/vect/bb-slp-40.c was failing on some targets because the
explicit dg-options overrode things like -maltivec.  This patch
uses dg-additional-options instead.

Also, it seems safer not to require exactly 1 instance of each message,
since that depends on the target vector length.

gcc.dg/vect/bb-slp-41.c contained invariant constructors that are
vectorised on AArch64 (foo) and constructors that aren't (bar).
This meant that the number of times we print "Found vectorizable
constructor" depended on how many vector sizes we try, since we'd
print it for each failed attempt.

In foo, we create invariant { b[0], ... } and { b[1], ... },
and the test is making sure that the two separate invariant vectors
can be fed from the same vector load at b.  This is a different case
from bb-slp-40.c, where the constructors are naturally separate.
(The expected count is 4 rather than 2 because we can vectorise the
epilogue too.)

However, due to limitations in the loop vectoriser, we still do the
addition of { b[0], ... } and { b[1], ... } in the loop.  Hopefully
that'll be fixed at some point, so this patch adds an alternative test
that directly needs 4 separate invariant constructors.  E.g. with Joel's
SLP optimisation, the new test generates:

        ldr     q4, [x1]
        dup     v7.4s, v4.s[0]
        dup     v6.4s, v4.s[1]
        dup     v5.4s, v4.s[2]
        dup     v4.4s, v4.s[3]

instead of the somewhat bizarre:

        ldp     s6, s5, [x1, 4]
        ldr     s4, [x1, 12]
        ld1r    {v7.4s}, [x1]
        dup     v6.4s, v6.s[0]
        dup     v5.4s, v5.s[0]
        dup     v4.4s, v4.s[0]

The patch then disables vectorisation of the original foo in
bb-slp-41.c, so that we get the same correctness testing
for bar but don't need to test for specific counts.

2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/testsuite/
PR testsuite/92366
* gcc.dg/vect/bb-slp-40.c: Use dg-additional-options instead
of dg-options.  Remove expected counts.
* gcc.dg/vect/bb-slp-41.c: Remove dg-options and explicit
dg-do run.  Suppress vectorization of foo.
* gcc.dg/vect/bb-slp-42.c: New test.

From-SVN: r278262

4 years agore PR tree-optimization/92506 (Wrong code with -fwrapv since r277979)
Andrew MacLeod [Thu, 14 Nov 2019 19:02:48 +0000 (19:02 +0000)]
re PR tree-optimization/92506 (Wrong code with -fwrapv since r277979)

2019-11-14  Andrew MacLeod  <amacleod@redhat.com>

PR tree-optimization/92506
* range-op.cc (range_operator::fold_range): Start with range undefined.
(operator_abs::wi_fold): Fix wrong line copy.  With wrapv, abs with
overflow is varying.

From-SVN: r278259

4 years agoRemove range_intersect, range_invert, and range_union.
Aldy Hernandez [Thu, 14 Nov 2019 17:51:31 +0000 (17:51 +0000)]
Remove range_intersect, range_invert, and range_union.

From-SVN: r278258

4 years agolibstdc++: Implement new predicate concepts from P1716R3
Jonathan Wakely [Thu, 14 Nov 2019 16:53:18 +0000 (16:53 +0000)]
libstdc++: Implement new predicate concepts from P1716R3

* include/bits/iterator_concepts.h (__iter_concept_impl): Add
comments.
(indirect_relation): Rename to indirect_binary_predicate and adjust
definition as per P1716R3.
(indirect_equivalence_relation): Define.
(indirectly_comparable): Adjust definition.
* include/std/concepts (equivalence_relation): Define.
* testsuite/std/concepts/concepts.callable/relation.cc: Add tests for
equivalence_relation.

From-SVN: r278256

4 years agolibstdc++: Rename disable_sized_sentinel [P1871R1]
Jonathan Wakely [Thu, 14 Nov 2019 16:53:03 +0000 (16:53 +0000)]
libstdc++: Rename disable_sized_sentinel [P1871R1]

* include/bits/iterator_concepts.h (disable_sized_sentinel): Rename to
disable_sized_sentinel_for.
* testsuite/24_iterators/headers/iterator/synopsis_c++20.cc: Adjust.

From-SVN: r278255

4 years agoMake flag_thread_jumps a gate of pass_jump_after_combine
Ilya Leoshkevich [Thu, 14 Nov 2019 16:40:33 +0000 (16:40 +0000)]
Make flag_thread_jumps a gate of pass_jump_after_combine

This is a follow-up to
https://gcc.gnu.org/ml/gcc-patches/2019-11/msg00919.html (r278095).
Dominance info is deleted even if we don't perform jump threading.
Since the whole point of this pass is to perform jump threading (other
cleanups are not valuable at this point), skip it completely when
flag_thread_jumps is not set.
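
A minimal sketch of the new gate (the name is from the ChangeLog below;
the exact body is an assumption based on the description above):

  /* cfgcleanup.c: run the pass only when jump threading is wanted.  */
  bool
  pass_jump_after_combine::gate (function *)
  {
    return flag_thread_jumps;
  }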

gcc/ChangeLog:

2019-11-14  Ilya Leoshkevich  <iii@linux.ibm.com>

PR rtl-optimization/92430
* cfgcleanup.c (pass_jump_after_combine::gate): New function.
(pass_jump_after_combine::execute): Perform jump threading
unconditionally.

From-SVN: r278254

4 years agoUpdate the arm-*-vxworks* support
Jerome Lambourg [Thu, 14 Nov 2019 16:11:30 +0000 (16:11 +0000)]
Update the arm-*-vxworks* support

2019-11-13  Jerome Lambourg  <lambourg@adacore.com>
            Doug Rupp <rupp@adacore.com>
            Olivier Hainque  <hainque@adacore.com>

gcc/
* config.gcc: Collapse the arm-vxworks entries into
a single arm-wrs-vxworks7* one, bpabi based.  Update
the default cpu from arm8 to armv7-a
* config/arm/vxworks.h (CC1_SPEC): Simplify, knowing that
we always use ARM_UNWIND_INFO.
(DWARF2_UNWIND_INFO): Remove redefinition.
(ARM_TARGET2_DWARF_FORMAT): Likewise.
(VXWORKS_PERSONALITY): Define, to "llvm".
(VXWORKS_EXTRA_LIBS_RTP): Define, to "-lllvm".

libgcc/
* config.host: Collapse the arm-vxworks entries into
a single arm-wrs-vxworks7* one.
* config/arm/unwind-arm-vxworks.c: Update comments.  Provide
__gnu_Unwind_Find_exidx and a weak dummy __cxa_type_match for
kernel modules, to be overridden by libstdc++ when we link with
it.  Rely on externally provided __exidx_start/end.

Co-Authored-By: Doug Rupp <rupp@adacore.com>
Co-Authored-By: Olivier Hainque <hainque@adacore.com>
From-SVN: r278253

4 years agoHousekeeping on TARGET_OS_CPP_BUILTINS for arm-vxworks
Jerome Lambourg [Thu, 14 Nov 2019 16:08:19 +0000 (16:08 +0000)]
Housekeeping on TARGET_OS_CPP_BUILTINS for arm-vxworks

2019-11-14  Jerome Lambourg  <lambourg@adacore.com>

        * config/arm/vxworks.h (TARGET_OS_CPP_BUILTINS): Use
        _VX_CPU instead of CPU and handle arm_arch8.

From-SVN: r278252

4 years agoBase support for vxworks 7 on aarch64
Doug Rupp [Thu, 14 Nov 2019 16:05:08 +0000 (16:05 +0000)]
Base support for vxworks 7 on aarch64

2019-11-14  Doug Rupp  <rupp@adacore.com>
           Olivier Hainque  <hainque@adacore.com>
           Jerome Lambourg  <lambourg@adacore.com>

       gcc/
       * config.gcc: Handle aarch64*-wrs-vxworks7*.
       * config/aarch64/aarch64-vxworks.h: New file.
       * config/aarch64/t-aarch64-vxworks: New file.

       libgcc/
       * config.host: Handle aarch64*-wrs-vxworks7*.

Co-Authored-By: Jerome Lambourg <lambourg@adacore.com>
Co-Authored-By: Olivier Hainque <hainque@adacore.com>
From-SVN: r278251

4 years agoUpdate the libgcc support for VxWorks AE/653
Olivier Hainque [Thu, 14 Nov 2019 16:00:55 +0000 (16:00 +0000)]
Update the libgcc support for VxWorks AE/653

2019-11-12  Olivier Hainque  <hainque@adacore.com>

libgcc/

        * config/t-gthr-vxworksae: New file, add all the gthr-vxworks
        sources except the cxx0x support to LIB2ADDEH.  We don't support
        cxx0x on AE/653.
        * config/t-vxworksae: New file.

        * config.host: Handle *-*-vxworksae: Add the two aforementioned
Makefile fragment files at their expected position in the tmake_file
list, in accordance with what is done for other VxWorks variants.

From-SVN: r278250

4 years agoImprove the thread support for VxWorks
Corentin Gay [Thu, 14 Nov 2019 15:58:31 +0000 (15:58 +0000)]
Improve the thread support for VxWorks

2019-11-12  Corentin Gay  <gay@adacore.com>
    Jerome Lambourg  <lambourg@adacore.com>
    Olivier Hainque  <hainque@adacore.com>

libgcc/

* config/t-gthr-vxworks: New file, add all the gthr-vxworks
sources to LIB2ADDEH.
* config/t-vxworks: Remove adjustments to LIB2ADDEH.
* config/t-vxworks7: Likewise.

* config.host: Append a block at the end of the file to add the
t-gthr files to the tmake_file list for VxWorks after everything
else.

* config/vxlib.c: Rename as gthr-vxworks.c.
* config/vxlib-tls.c: Rename as gthr-vxworks-tls.c.

* config/gthr-vxworks.h: Simplify a few comments.  Expose a TAS
API and a basic error checking API, both internal.  Simplify the
__gthread_once_t type definition and initializers.  Add sections
for condition variables support and for the C++0x thread support,
conditioned against Vx653 for the latter.

* config/gthr-vxworks.c (__gthread_once): Simplify comments and
implementation, leveraging the TAS internal API.
* config/gthr-vxworks-tls.c: Introduce an internal TLS data access
API, leveraging the general availability of TLS services in VxWorks7
post SR6xxx.
(__gthread_getspecific, __gthread_setspecific): Use it.
(tls_delete_hook): Likewise, and simplify the enter/leave dtor logic.
* config/gthr-vxworks-cond.c: New file.  GTHREAD_COND variable
support based on VxWorks primitives.
* config/gthr-vxworks-thread.c: New file.  GTHREAD_CXX0X support
based on VxWorks primitives.

Co-Authored-By: Jerome Lambourg <lambourg@adacore.com>
Co-Authored-By: Olivier Hainque <hainque@adacore.com>
From-SVN: r278249

4 years agoIntroduce vxworks specific crtstuff support
Jerome Lambourg [Thu, 14 Nov 2019 15:53:23 +0000 (15:53 +0000)]
Introduce vxworks specific crtstuff support

2019-11-06  Jerome Lambourg  <lambourg@adacore.com>
            Olivier Hainque  <hainque@adacore.com>

libgcc/
* config/vxcrtstuff.c: New file.
* config/t-vxcrtstuff: New Makefile fragment.
* config.host: Append t-vxcrtstuff to the tmake_file list
on all VxWorks ports using dwarf for table based EH.

gcc/
* config/vx-common.h (USE_TM_CLONE_REGISTRY): Remove
definition, pointless with a VxWorks specific version
of crtstuff.
(DWARF2_UNWIND_INFO): Conditionalize on !ARM_UNWIND_INFO.
* config/vxworks.h (VX_CRTBEGIN_SPEC, VX_CRTEND_SPEC):
New local macros, controlling the addition of vxworks specific
crtstuff objects depending on the EH mechanism and kind of
module being linked.
(VXWORKS_STARTFILE_SPEC, VXWORKS_ENDFILE_SPEC): Use them.

Co-Authored-By: Olivier Hainque <hainque@adacore.com>
From-SVN: r278248

4 years agoCommon ground work for vxworks7 ports updates
Pat Bernardi [Thu, 14 Nov 2019 15:45:50 +0000 (15:45 +0000)]
Common ground work for vxworks7 ports updates

2019-11-06  Pat Bernardi  <bernardi@adacore.com>
            Jerome Lambourg  <lambourg@adacore.com>
            Olivier Hainque  <hainque@adacore.com>

gcc/
* config.gcc: Add comment to introduce the TARGET_VXWORKS
common macro definitions, conveying VXWORKS7 or 64-bit general
variations.  Add a block to set gcc_cv_initfini_array
unconditionally to "yes" for VxWorks7.
config/vx-common.h (VXWORKS_CC1_SPEC): New macro, empty string
by default.  Update some comments.
config/vxworks.h (VXWORKS_EXTRA_LIBS_RTP): New macro, empty by
default, to be added the end of VXWORKS_LIBS_RTP.
(VXWORKS_LIBS_RTP): Replace hardcoded part by VXWORKS_BASE_LIBS_RTP
and append VXWORKS_EXTRA_LIBS_RTP, both of which specific ports may
redefine.
(VXWORKS_NET_LIBS_RTP): Account for VxWorks7 specificities.
(VXWORKS_CC1_SPEC): Common base definition, with VxWorks7 variation
to account for the now available TLS abilities.
(TARGET_LIBC_HAS_FUNCTION): Account for VxWorks7 abilities.
(VXWORKS_HAVE_TLS): Likewise.

Co-Authored-By: Jerome Lambourg <lambourg@adacore.com>
Co-Authored-By: Olivier Hainque <hainque@adacore.com>
From-SVN: r278247

4 years agoConsider building nodes from scalars in vect_slp_analyze_node_operations
Richard Sandiford [Thu, 14 Nov 2019 15:33:49 +0000 (15:33 +0000)]
Consider building nodes from scalars in vect_slp_analyze_node_operations

If the statements in an SLP node aren't similar enough to be vectorised,
or aren't something the vectoriser has code to handle, the BB vectoriser
tries building the vector from scalars instead.  This patch does the
same thing if we're able to build a viable-looking tree but fail later
during the analysis phase, e.g. because the target doesn't support a
particular vector operation.
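
As a hypothetical example of the kind of block this helps, consider
element-wise division, which many targets cannot vectorise directly
(illustrative only; not the contents of bb-slp-div-2.c):

  void
  f (int *__restrict r, const int *__restrict a, const int *__restrict b)
  {
    // If the target has no vector division, SLP analysis now keeps the
    // divisions as scalars and builds the vector of results from those
    // scalars, instead of abandoning the block entirely.
    r[0] = a[0] / b[0];
    r[1] = a[1] / b[1];
    r[2] = a[2] / b[2];
    r[3] = a[3] / b[3];
  }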

This is needed to avoid regressions with a later patch.

2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* tree-vect-slp.c (vect_contains_pattern_stmt_p): New function.
(vect_slp_convert_to_external): Likewise.
(vect_slp_analyze_node_operations): If analysis fails, try building
the node from scalars instead.

gcc/testsuite/
* gcc.dg/vect/bb-slp-div-2.c: New test.

From-SVN: r278246

4 years agoVectorise conversions between differently-sized integer vectors
Richard Sandiford [Thu, 14 Nov 2019 15:31:25 +0000 (15:31 +0000)]
Vectorise conversions between differently-sized integer vectors

This patch adds AArch64 patterns for converting between 64-bit and
128-bit integer vectors, and makes the vectoriser and expand pass
use them.
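
For example, a narrowing loop of this shape can now use the new truncate
pattern directly (illustrative only; the added tests are the
gcc.target/aarch64/vect_mixed_sizes_*.c files listed below):

  void
  narrow (short *__restrict dst, const int *__restrict src, int n)
  {
    for (int i = 0; i < n; ++i)
      dst[i] = (short) src[i];   // int vector -> short vector truncation
  }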

2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* tree-cfg.c (verify_gimple_assign_unary): Handle conversions
between vector types.
* tree-vect-stmts.c (vectorizable_conversion): Extend the
non-widening and non-narrowing path to handle standard
conversion codes, if the target supports them.
* expr.c (convert_move): Try using the extend and truncate optabs
for vectors.
* optabs-tree.c (supportable_convert_operation): Likewise.
* config/aarch64/iterators.md (Vnarrowq): New iterator.
* config/aarch64/aarch64-simd.md (<optab><Vnarrowq><mode>2)
(trunc<mode><Vnarrowq>2): New patterns.

gcc/testsuite/
* gcc.dg/vect/bb-slp-pr69907.c: Do not expect BB vectorization
to fail for aarch64 targets.
* gcc.dg/vect/no-scevccp-outer-12.c: Expect the test to pass
on aarch64 targets.
* gcc.dg/vect/vect-double-reduc-5.c: Likewise.
* gcc.dg/vect/vect-outer-4e.c: Likewise.
* gcc.target/aarch64/vect_mixed_sizes_5.c: New test.
* gcc.target/aarch64/vect_mixed_sizes_6.c: Likewise.
* gcc.target/aarch64/vect_mixed_sizes_7.c: Likewise.
* gcc.target/aarch64/vect_mixed_sizes_8.c: Likewise.
* gcc.target/aarch64/vect_mixed_sizes_9.c: Likewise.
* gcc.target/aarch64/vect_mixed_sizes_10.c: Likewise.
* gcc.target/aarch64/vect_mixed_sizes_11.c: Likewise.
* gcc.target/aarch64/vect_mixed_sizes_12.c: Likewise.
* gcc.target/aarch64/vect_mixed_sizes_13.c: Likewise.

From-SVN: r278245

4 years agoAllow mixed vector sizes within a single vectorised stmt
Richard Sandiford [Thu, 14 Nov 2019 15:16:40 +0000 (15:16 +0000)]
Allow mixed vector sizes within a single vectorised stmt

Although a previous patch allowed mixed vector sizes within a vector
region, we generally still required equal vector sizes within a vector
stmt.  Specifically, vect_get_vector_types_for_stmt computes two vector
types: the vector type corresponding to STMT_VINFO_VECTYPE and the
vector type that determines the minimum vectorisation factor for the
stmt ("nunits_vectype").  It then required these two types to be
the same size.

There doesn't seem to be any need for that restriction though.  AFAICT,
all vectorizable_* functions either do their own compatibility checks
or don't need to do them (because gimple guarantees that the scalar
types are compatible).

It should always be the case that nunits_vectype has at least as many
elements as the other vectype, but that's something we can assert for.

I couldn't resist a couple of other tweaks while there:

- there's no need to compute nunits_vectype if its element type is
  the same as STMT_VINFO_VECTYPE's.

- it's useful to distinguish the nunits_vectype from the main vectype
  in dump messages

- when reusing the existing STMT_VINFO_VECTYPE, it's useful to say so
  in the dump, and say what the type is

2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* tree-vect-stmts.c (vect_get_vector_types_for_stmt): Don't
require vectype and nunits_vectype to have the same size;
instead assert that nunits_vectype has at least as many
elements as vectype.  Don't compute a separate nunits_vectype
if the scalar type is obviously the same as vectype's.
Tweak dump messages.

From-SVN: r278244

4 years ago[AArch64] Support vectorising with multiple vector sizes
Richard Sandiford [Thu, 14 Nov 2019 15:15:34 +0000 (15:15 +0000)]
[AArch64] Support vectorising with multiple vector sizes

This patch makes the vectoriser try mixtures of 64-bit and 128-bit
vector modes on AArch64.  It fixes some existing XFAILs and allows
kernel 24 from the Livermore Loops test to be vectorised (by using
a mixture of V2DF and V2SI).
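
A loop that mixes element widths shows the idea (illustrative only; not
one of the new tests):

  // With a vectorisation factor of 2, the double statements can use
  // 128-bit V2DF while the int statements use 64-bit V2SI, rather than
  // forcing a single vector size on the whole loop.
  void
  mixed (double *__restrict x, const double *__restrict y,
         int *__restrict k, int n)
  {
    for (int i = 0; i < n; ++i)
      {
        x[i] = y[i] + 1.0;
        k[i] = k[i] + 1;
      }
  }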

2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* config/aarch64/aarch64.c (aarch64_vectorize_related_mode): New
function.
(aarch64_autovectorize_vector_modes): Also add V4HImode and V2SImode.
(TARGET_VECTORIZE_RELATED_MODE): Define.

gcc/testsuite/
* gcc.dg/vect/vect-outer-4f.c: Expect the test to pass on aarch64
targets.
* gcc.dg/vect/vect-outer-4g.c: Likewise.
* gcc.dg/vect/vect-outer-4k.c: Likewise.
* gcc.dg/vect/vect-outer-4l.c: Likewise.
* gfortran.dg/vect/vect-8.f90: Expect kernel 24 to be vectorized
for aarch64.
* gcc.target/aarch64/vect_mixed_sizes_1.c: New test.
* gcc.target/aarch64/vect_mixed_sizes_2.c: Likewise.
* gcc.target/aarch64/vect_mixed_sizes_3.c: Likewise.
* gcc.target/aarch64/vect_mixed_sizes_4.c: Likewise.

From-SVN: r278243

4 years agoAvoid retrying with the same vector modes
Richard Sandiford [Thu, 14 Nov 2019 15:14:33 +0000 (15:14 +0000)]
Avoid retrying with the same vector modes

A later patch makes the AArch64 port add four entries to
autovectorize_vector_modes.  Each entry describes a different
vector mode assignment for vector code that mixes 8-bit, 16-bit,
32-bit and 64-bit elements.  But if (as usual) the vector code has
fewer element sizes than that, we could end up trying the same
combination of vector modes multiple times.  This patch adds a
check to prevent that.

2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* tree-vectorizer.h (vec_info::mode_set): New typedef.
(vec_info::used_vector_mode): New member variable.
(vect_chooses_same_modes_p): Declare.
* tree-vect-stmts.c (get_vectype_for_scalar_type): Record each
chosen vector mode in vec_info::used_vector_mode.
(vect_chooses_same_modes_p): New function.
* tree-vect-loop.c (vect_analyze_loop): Use it to avoid trying
the same vector statements multiple times.
* tree-vect-slp.c (vect_slp_bb_region): Likewise.

From-SVN: r278242

4 years agoSupport vectorisation with mixed vector sizes
Richard Sandiford [Thu, 14 Nov 2019 15:12:58 +0000 (15:12 +0000)]
Support vectorisation with mixed vector sizes

After previous patches, it's now possible to make the vectoriser
support multiple vector sizes in the same vector region, using
related_vector_mode to pick the right vector mode for a given
element mode.  No port yet takes advantage of this, but I have
a follow-on patch for AArch64.

This patch also seemed like a good opportunity to add some more dump
messages: one to make it clear which vector size/mode was being used
when analysis passed or failed, and another to say when we've decided
to skip a redundant vector size/mode.

2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* machmode.h (opt_machine_mode::operator==): New function.
(opt_machine_mode::operator!=): Likewise.
* tree-vectorizer.h (vec_info::vector_mode): Update comment.
(get_related_vectype_for_scalar_type): Delete.
(get_vectype_for_scalar_type_and_size): Declare.
* tree-vect-slp.c (vect_slp_bb_region): Print dump messages to say
whether analysis passed or failed, and with what vector modes.
Use related_vector_mode to check whether trying a particular
vector mode would be redundant with the autodetected mode,
and print a dump message if we decide to skip it.
* tree-vect-loop.c (vect_analyze_loop): Likewise.
(vect_create_epilog_for_reduction): Use
get_related_vectype_for_scalar_type instead of
get_vectype_for_scalar_type_and_size.
* tree-vect-stmts.c (get_vectype_for_scalar_type_and_size): Replace
with...
(get_related_vectype_for_scalar_type): ...this new function.
Take a starting/"prevailing" vector mode rather than a vector size.
Take an optional nunits argument, with the same meaning as for
related_vector_mode.  Use related_vector_mode when not
auto-detecting a mode, falling back to mode_for_vector if no
target mode exists.
(get_vectype_for_scalar_type): Update accordingly.
(get_same_sized_vectype): Likewise.
* tree-vectorizer.c (get_vec_alignment_for_array_type): Likewise.

From-SVN: r278240

4 years agoRequire equal type sizes for vectorised calls
Richard Sandiford [Thu, 14 Nov 2019 15:09:24 +0000 (15:09 +0000)]
Require equal type sizes for vectorised calls

As explained in the comment, vectorizable_call needs more work to
support mixtures of sizes.  This avoids testsuite fallout for
later SVE patches.

2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* tree-vect-stmts.c (vectorizable_call): Require the types
to have the same size.

From-SVN: r278239

4 years agoMake less use of get_same_sized_vectype
Richard Sandiford [Thu, 14 Nov 2019 15:06:34 +0000 (15:06 +0000)]
Make less use of get_same_sized_vectype

Some callers of get_same_sized_vectype were dealing with operands that
are constant or defined externally, and so have no STMT_VINFO_VECTYPE
available.  Under the current model, using get_same_sized_vectype for
that case is equivalent to using get_vectype_for_scalar_type, since
get_vectype_for_scalar_type always returns vectors of the same size,
once a size is fixed.

Using get_vectype_for_scalar_type is arguably more obvious though:
if we're using the same scalar type as we would for internal
definitions, we should use the same vector type too.  (Constant and
external definitions sometimes let us change the original scalar type
to a "nicer" scalar type, but that isn't what's happening here.)

This is a prerequisite to supporting multiple vector sizes in the same
vec_info.

2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* tree-vect-stmts.c (vectorizable_call): If an operand is
constant or external, use get_vectype_for_scalar_type
rather than get_same_sized_vectype to get its vector type.
(vectorizable_conversion, vectorizable_shift): Likewise.
(vectorizable_operation): Likewise.

From-SVN: r278238

4 years agoReplace vec_info::vector_size with vec_info::vector_mode
Richard Sandiford [Thu, 14 Nov 2019 15:05:37 +0000 (15:05 +0000)]
Replace vec_info::vector_size with vec_info::vector_mode

This patch replaces vec_info::vector_size with vec_info::vector_mode,
but for now continues to use it as a way of specifying a single
vector size.  This makes it easier for later patches to use
related_vector_mode instead.

2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* tree-vectorizer.h (vec_info::vector_size): Replace with...
(vec_info::vector_mode): ...this new field.
* tree-vect-loop.c (vect_update_vf_for_slp): Update accordingly.
(vect_analyze_loop, vect_transform_loop): Likewise.
* tree-vect-loop-manip.c (vect_do_peeling): Likewise.
* tree-vect-slp.c (can_duplicate_and_interleave_p): Likewise.
(vect_make_slp_decision, vect_slp_bb_region): Likewise.
* tree-vect-stmts.c (get_vectype_for_scalar_type): Likewise.
* tree-vectorizer.c (try_vectorize_loop_1): Likewise.

gcc/testsuite/
* gcc.dg/vect/vect-tail-nomask-1.c: Update expected epilogue
vectorization message.

From-SVN: r278237

4 years agoReplace autovectorize_vector_sizes with autovectorize_vector_modes
Richard Sandiford [Thu, 14 Nov 2019 15:03:17 +0000 (15:03 +0000)]
Replace autovectorize_vector_sizes with autovectorize_vector_modes

This is another patch in the series to remove the assumption that
all modes involved in vectorisation have to be the same size.
Rather than have the target provide a list of vector sizes,
it makes the target provide a list of vector "approaches",
with each approach represented by a mode.

A later patch will pass this mode to targetm.vectorize.related_mode
to get the vector mode for a given element mode.  Until then, the modes
simply act as an alternative way of specifying the vector size.

2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* target.h (vector_sizes, auto_vector_sizes): Delete.
(vector_modes, auto_vector_modes): New typedefs.
* target.def (autovectorize_vector_sizes): Replace with...
(autovectorize_vector_modes): ...this new hook.
* doc/tm.texi.in (TARGET_VECTORIZE_AUTOVECTORIZE_VECTOR_SIZES):
Replace with...
(TARGET_VECTORIZE_AUTOVECTORIZE_VECTOR_MODES): ...this new hook.
* doc/tm.texi: Regenerate.
* targhooks.h (default_autovectorize_vector_sizes): Delete.
(default_autovectorize_vector_modes): New function.
* targhooks.c (default_autovectorize_vector_sizes): Delete.
(default_autovectorize_vector_modes): New function.
* omp-general.c (omp_max_vf): Use autovectorize_vector_modes instead
of autovectorize_vector_sizes.  Use the number of units in the mode
to calculate the maximum VF.
* omp-low.c (omp_clause_aligned_alignment): Use
autovectorize_vector_modes instead of autovectorize_vector_sizes.
Use a loop based on related_mode to iterate through all supported
vector modes for a given scalar mode.
* optabs-query.c (can_vec_mask_load_store_p): Use
autovectorize_vector_modes instead of autovectorize_vector_sizes.
* tree-vect-loop.c (vect_analyze_loop, vect_transform_loop): Likewise.
* tree-vect-slp.c (vect_slp_bb_region): Likewise.
* config/aarch64/aarch64.c (aarch64_autovectorize_vector_sizes):
Replace with...
(aarch64_autovectorize_vector_modes): ...this new function.
(TARGET_VECTORIZE_AUTOVECTORIZE_VECTOR_SIZES): Delete.
(TARGET_VECTORIZE_AUTOVECTORIZE_VECTOR_MODES): Define.
* config/arc/arc.c (arc_autovectorize_vector_sizes): Replace with...
(arc_autovectorize_vector_modes): ...this new function.
(TARGET_VECTORIZE_AUTOVECTORIZE_VECTOR_SIZES): Delete.
(TARGET_VECTORIZE_AUTOVECTORIZE_VECTOR_MODES): Define.
* config/arm/arm.c (arm_autovectorize_vector_sizes): Replace with...
(arm_autovectorize_vector_modes): ...this new function.
(TARGET_VECTORIZE_AUTOVECTORIZE_VECTOR_SIZES): Delete.
(TARGET_VECTORIZE_AUTOVECTORIZE_VECTOR_MODES): Define.
* config/i386/i386.c (ix86_autovectorize_vector_sizes): Replace with...
(ix86_autovectorize_vector_modes): ...this new function.
(TARGET_VECTORIZE_AUTOVECTORIZE_VECTOR_SIZES): Delete.
(TARGET_VECTORIZE_AUTOVECTORIZE_VECTOR_MODES): Define.
* config/mips/mips.c (mips_autovectorize_vector_sizes): Replace with...
(mips_autovectorize_vector_modes): ...this new function.
(TARGET_VECTORIZE_AUTOVECTORIZE_VECTOR_SIZES): Delete.
(TARGET_VECTORIZE_AUTOVECTORIZE_VECTOR_MODES): Define.

From-SVN: r278236

4 years agoUse consistent compatibility checks in vectorizable_shift
Richard Sandiford [Thu, 14 Nov 2019 14:58:21 +0000 (14:58 +0000)]
Use consistent compatibility checks in vectorizable_shift

The validation phase of vectorizable_shift used TYPE_MODE to check
whether the shift amount vector was compatible with the shifted vector:

      if ((op1_vectype == NULL_TREE
           || TYPE_MODE (op1_vectype) != TYPE_MODE (vectype))
          && (!slp_node
              || SLP_TREE_DEF_TYPE
                   (SLP_TREE_CHILDREN (slp_node)[1]) != vect_constant_def))

But the generation phase was stricter and required the element types to
be equivalent:

      && !useless_type_conversion_p (TREE_TYPE (vectype),
                                     TREE_TYPE (op1)))

This difference led to an ICE with a later patch.

The first condition seems a bit too lax given that the function
supports vect_worthwhile_without_simd_p, where two different vector
types could have the same integer mode.  But it seems too strict
to reject signed shifts by unsigned amounts or unsigned shifts by
signed amounts; verify_gimple_assign_binary is happy with those.

This patch therefore goes for a middle ground of checking both TYPE_MODE
and TYPE_VECTOR_SUBPARTS, using the same condition in both places.

2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* tree-vect-stmts.c (vectorizable_shift): Check the number
of vector elements as well as the type mode when deciding
whether an op1_vectype is compatible.  Reuse the result of
this check when generating vector statements.

From-SVN: r278235

4 years agoUse build_vector_type_for_mode in get_vectype_for_scalar_type_and_size
Richard Sandiford [Thu, 14 Nov 2019 14:57:26 +0000 (14:57 +0000)]
Use build_vector_type_for_mode in get_vectype_for_scalar_type_and_size

Except for one case, get_vectype_for_scalar_type_and_size calculates
what the vector mode should be and then calls build_vector_type,
which recomputes the mode from scratch.  This patch makes it use
build_vector_type_for_mode instead.

The exception mentioned above is when preferred_simd_mode returns
an integer mode, which it does if no appropriate vector mode exists.
The integer mode in question is usually word_mode, although epiphany
can return a doubleword mode in some cases.

There's no guarantee that this integer mode is appropriate, since for
example the scalar type could be a float.  The traditional behaviour is
therefore to use the integer mode to determine a size only, and leave
mode_for_vector to pick the TYPE_MODE.  (Note that it can actually end
up picking a vector mode if the target defines a disabled vector mode.
We therefore still need to check TYPE_MODE after building the type.)

2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* tree-vect-stmts.c (get_vectype_for_scalar_type_and_size): If
targetm.vectorize.preferred_simd_mode returns an integer mode,
use mode_for_vector to decide what the vector type's mode
should actually be.  Use build_vector_type_for_mode instead
of build_vector_type.

From-SVN: r278234

4 years agoPass the data vector mode to get_mask_mode
Richard Sandiford [Thu, 14 Nov 2019 14:55:12 +0000 (14:55 +0000)]
Pass the data vector mode to get_mask_mode

This patch passes the data vector mode to get_mask_mode, rather than its
size and nunits.  This is a bit simpler and allows targets to distinguish
between modes that happen to have the same size and number of elements.

2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* target.def (get_mask_mode): Take a vector mode itself as argument,
instead of properties about the vector mode.
* doc/tm.texi: Regenerate.
* targhooks.h (default_get_mask_mode): Update to reflect new
get_mask_mode interface.
* targhooks.c (default_get_mask_mode): Likewise.  Use
related_int_vector_mode.
* optabs-query.c (can_vec_mask_load_store_p): Update call
to get_mask_mode.
* tree-vect-stmts.c (check_load_store_masking): Likewise, checking
first that the original mode really is a vector.
* tree.c (build_truth_vector_type_for): Likewise.
* config/aarch64/aarch64.c (aarch64_get_mask_mode): Update for new
get_mask_mode interface.
(aarch64_expand_sve_vcond): Update call accordingly.
* config/gcn/gcn.c (gcn_vectorize_get_mask_mode): Update for new
get_mask_mode interface.
* config/i386/i386.c (ix86_get_mask_mode): Likewise.

From-SVN: r278233

4 years agoRemove build_{same_sized_,}truth_vector_type
Richard Sandiford [Thu, 14 Nov 2019 14:49:36 +0000 (14:49 +0000)]
Remove build_{same_sized_,}truth_vector_type

build_same_sized_truth_vector_type was confusingly named, since for
SVE and AVX512 the returned vector isn't the same byte size (although
it does have the same number of elements).  What it really returns
is the "truth" vector type for a given data vector type.

The more general truth_type_for provides the same thing when passed
a vector and IMO has a more descriptive name, so this patch replaces
all uses of build_same_sized_truth_vector_type with that.  It does
the same for a call to build_truth_vector_type, leaving truth_type_for
itself as the only remaining caller.

It's then more natural to pass build_truth_vector_type the original
vector type rather than its size and nunits, especially since the
given size isn't the size of the returned vector.  This in turn allows
a future patch to simplify the interface of get_mask_mode.  Doing this
also fixes a bug in which truth_type_for would pass a size of zero for
BLKmode vector types.

2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* tree.h (build_truth_vector_type): Delete.
(build_same_sized_truth_vector_type): Likewise.
* tree.c (build_truth_vector_type): Rename to...
(build_truth_vector_type_for): ...this.  Make static and take
a vector type as argument.
(truth_type_for): Update accordingly.
(build_same_sized_truth_vector_type): Delete.
* tree-vect-generic.c (expand_vector_divmod): Use truth_type_for
instead of build_same_sized_truth_vector_type.
* tree-vect-loop.c (vect_create_epilog_for_reduction): Likewise.
(vect_record_loop_mask, vect_get_loop_mask): Likewise.
* tree-vect-patterns.c (build_mask_conversion): Likewise.
* tree-vect-slp.c (vect_get_constant_vectors): Likewise.
* tree-vect-stmts.c (vect_get_vec_def_for_operand): Likewise.
(vect_build_gather_load_calls, vectorizable_call): Likewise.
(scan_store_can_perm_p, vectorizable_scan_store): Likewise.
(vectorizable_store, vectorizable_condition): Likewise.
(get_mask_type_for_scalar_type, get_same_sized_vectype): Likewise.
(vect_get_mask_type_for_stmt): Use truth_type_for instead of
build_truth_vector_type.
* config/aarch64/aarch64-sve-builtins.cc (gimple_folder::convert_pred):
Use truth_type_for instead of build_same_sized_truth_vector_type.
* config/rs6000/rs6000-call.c (fold_build_vec_cmp): Likewise.

gcc/c/
* c-typeck.c (build_conditional_expr): Use truth_type_for instead
of build_same_sized_truth_vector_type.
(build_vec_cmp): Likewise.

gcc/cp/
* call.c (build_conditional_expr_1): Use truth_type_for instead
of build_same_sized_truth_vector_type.
* typeck.c (build_vec_cmp): Likewise.

gcc/d/
* d-codegen.cc (build_boolop): Use truth_type_for instead of
build_same_sized_truth_vector_type.

From-SVN: r278232

4 years agoAdd build_truth_vector_type_for_mode
Richard Sandiford [Thu, 14 Nov 2019 14:45:49 +0000 (14:45 +0000)]
Add build_truth_vector_type_for_mode

Callers of vect_halve_mask_nunits and vect_double_mask_nunits
already know what mode the resulting vector type should have,
so we might as well create the vector type directly with that mode,
just like build_vector_type_for_mode lets us build normal vectors
with a known mode.  This avoids the current awkwardness of having
to recompute the mode starting from vec_info::vector_size, which
hard-codes the assumption that all vectors have to be the same size.

A later patch gets rid of build_truth_vector_type and
build_same_sized_truth_vector_type, so the net effect of the
series is to reduce the number of type functions by one.

2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* tree.h (build_truth_vector_type_for_mode): Declare.
* tree.c (build_truth_vector_type_for_mode): New function,
split out from...
(build_truth_vector_type): ...here.
(build_opaque_vector_type): Fix head comment.
* tree-vectorizer.h (supportable_narrowing_operation): Remove
vec_info parameter.
(vect_halve_mask_nunits): Replace vec_info parameter with the
mode of the new vector.
(vect_double_mask_nunits): Likewise.
* tree-vect-loop.c (vect_halve_mask_nunits): Likewise.
(vect_double_mask_nunits): Likewise.
* tree-vect-loop-manip.c: Include insn-config.h, rtl.h and recog.h.
(vect_maybe_permute_loop_masks): Remove vinfo parameter.  Update call
to vect_halve_mask_nunits, getting the required mode from the unpack
patterns.
(vect_set_loop_condition_masked): Update call accordingly.
* tree-vect-stmts.c (supportable_narrowing_operation): Remove vec_info
parameter and update call to vect_double_mask_nunits.
(vectorizable_conversion): Update call accordingly.
(simple_integer_narrowing): Likewise.  Remove vec_info parameter.
(vectorizable_call): Update call accordingly.
(supportable_widening_operation): Update call to
vect_halve_mask_nunits.
* config/aarch64/aarch64-sve-builtins.cc (register_builtin_types):
Use build_truth_vector_type_for_mode instead of build_truth_vector_type.

From-SVN: r278231

4 years agoReplace mode_for_int_vector with related_int_vector_mode
Richard Sandiford [Thu, 14 Nov 2019 14:39:57 +0000 (14:39 +0000)]
Replace mode_for_int_vector with related_int_vector_mode

mode_for_int_vector, like mode_for_vector, can sometimes return
an integer mode or an unsupported vector mode.  But no callers
are interested in that case, and only want supported vector modes.
This patch therefore replaces mode_for_int_vector with
related_int_vector_mode, which gives the target a chance to pick
its preferred vector mode for the given element mode and size.

2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* machmode.h (mode_for_int_vector): Delete.
(related_int_vector_mode): Declare.
* stor-layout.c (mode_for_int_vector): Delete.
(related_int_vector_mode): New function.
* optabs.c (expand_vec_perm_1): Use related_int_vector_mode
instead of mode_for_int_vector.
(expand_vec_perm_const): Likewise.
* config/aarch64/aarch64.c (aarch64_emit_approx_sqrt): Likewise.
(aarch64_evpc_sve_tbl): Likewise.
* config/s390/s390.c (s390_expand_vec_compare_cc): Likewise.
(s390_expand_vcond): Likewise.

From-SVN: r278230

4 years agoAdd a targetm.vectorize.related_mode hook
Richard Sandiford [Thu, 14 Nov 2019 14:36:26 +0000 (14:36 +0000)]
Add a targetm.vectorize.related_mode hook

This patch is the first of a series that tries to remove two
assumptions:

(1) that all vectors involved in vectorisation must be the same size

(2) that there is only one vector mode for a given element mode and
    number of elements

Relaxing (1) helps with targets that support multiple vector sizes or
that require the number of elements to stay the same.  E.g. if we're
vectorising code that operates on narrow and wide elements, and the
narrow elements use 64-bit vectors, then on AArch64 it would normally
be better to use 128-bit vectors rather than pairs of 64-bit vectors
for the wide elements.

Relaxing (2) makes it possible for -msve-vector-bits=128 to produce
fixed-length code for SVE.  It also allows unpacked/half-size SVE
vectors to work with -msve-vector-bits=256.

The patch adds a new hook that targets can use to control how we
move from one vector mode to another.  The hook takes a starting vector
mode, a new element mode, and (optionally) a new number of elements.
The flexibility needed for (1) comes in when the number of elements
isn't specified.

All callers in this patch specify the number of elements, but a later
vectoriser patch doesn't.

2019-11-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
* target.def (related_mode): New hook.
* doc/tm.texi.in (TARGET_VECTORIZE_RELATED_MODE): New hook.
* doc/tm.texi: Regenerate.
* targhooks.h (default_vectorize_related_mode): Declare.
* targhooks.c (default_vectorize_related_mode): New function.
* machmode.h (related_vector_mode): Declare.
* stor-layout.c (related_vector_mode): New function.
* expmed.c (extract_bit_field_1): Use it instead of mode_for_vector.
* optabs-query.c (qimode_for_vec_perm): Likewise.
* tree-vect-stmts.c (get_group_load_store_type): Likewise.
(vectorizable_store, vectorizable_load): Likewise

From-SVN: r278229

4 years agoaarch64: Add testsuite checks for asm-flag
Richard Henderson [Thu, 14 Nov 2019 13:45:01 +0000 (13:45 +0000)]
aarch64: Add testsuite checks for asm-flag

Inspired by the tests in gcc.target/i386.  Testing code generation,
diagnostics, and execution.

* gcc.target/aarch64/asm-flag-1.c: New test.
* gcc.target/aarch64/asm-flag-3.c: New test.
* gcc.target/aarch64/asm-flag-5.c: New test.
* gcc.target/aarch64/asm-flag-6.c: New test.

From-SVN: r278228

4 years agoarm: Add testsuite checks for asm-flag
Richard Henderson [Thu, 14 Nov 2019 13:44:48 +0000 (13:44 +0000)]
arm: Add testsuite checks for asm-flag

Inspired by the tests in gcc.target/i386.  Testing code generation,
diagnostics, and execution.

* gcc.target/arm/asm-flag-1.c: New test.
* gcc.target/arm/asm-flag-3.c: New test.
* gcc.target/arm/asm-flag-5.c: New test.
* gcc.target/arm/asm-flag-6.c: New test.

From-SVN: r278227

4 years agoarm, aarch64: Add support for __GCC_ASM_FLAG_OUTPUTS__
Richard Henderson [Thu, 14 Nov 2019 13:44:34 +0000 (13:44 +0000)]
arm, aarch64: Add support for __GCC_ASM_FLAG_OUTPUTS__

Since all but a couple of lines are shared between the two targets,
enable them both at once.
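
A sketch of what this enables on AArch64 (the "=@ccne" spelling follows
the FlagOutputOperands documentation; treat the exact condition-code
names as an assumption to check against extend.texi):

  // The "=@ccne" output reads the NE condition from the flags that the
  // "cmp" leaves behind, so no extra cset/mov is needed inside the asm.
  int
  not_equal (unsigned int a, unsigned int b)
  {
    int ne;
    __asm__ ("cmp %w1, %w2" : "=@ccne" (ne) : "r" (a), "r" (b));
    return ne;
  }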

* config/arm/aarch-common-protos.h (arm_md_asm_adjust): Declare.
* config/arm/aarch-common.c (arm_md_asm_adjust): New.
* config/arm/arm-c.c (arm_cpu_builtins): Define
__GCC_ASM_FLAG_OUTPUTS__.
* config/arm/arm.c (TARGET_MD_ASM_ADJUST): New.
* config/aarch64/aarch64-c.c (aarch64_define_unconditional_macros):
Define __GCC_ASM_FLAG_OUTPUTS__.
* config/aarch64/aarch64.c (TARGET_MD_ASM_ADJUST): New.
* doc/extend.texi (FlagOutputOperands): Add documentation
for ARM and AArch64.

From-SVN: r278226

4 years agoarm: Rename CC_NOOVmode to CC_NZmode
Richard Henderson [Thu, 14 Nov 2019 13:44:18 +0000 (13:44 +0000)]
arm: Rename CC_NOOVmode to CC_NZmode

CC_NZmode is a more accurate description of what we require
from the mode, and matches up with the definition in aarch64.

Rename noov_comparison_operator to nz_comparison_operator
in order to match.

* config/arm/arm-modes.def (CC_NZ): Rename from CC_NOOV.
* config/arm/predicates.md (nz_comparison_operator): Rename
from noov_comparison_operator.
* config/arm/arm.c (arm_select_cc_mode): Use CC_NZmode name.
(arm_gen_dicompare_reg): Likewise.
(maybe_get_arm_condition_code): Likewise.
(thumb1_final_prescan_insn): Likewise.
(arm_emit_coreregs_64bit_shift): Likewise.
* config/arm/arm.md (addsi3_compare0): Likewise.
(*addsi3_compare0_scratch, subsi3_compare0): Likewise.
(*mulsi3_compare0, *mulsi3_compare0_v6): Likewise.
(*mulsi3_compare0_scratch, *mulsi3_compare0_scratch_v6): Likewise.
(*mulsi3addsi_compare0, *mulsi3addsi_compare0_v6): Likewise.
(*mulsi3addsi_compare0_scratch): Likewise.
(*mulsi3addsi_compare0_scratch_v6): Likewise.
(*andsi3_compare0, *andsi3_compare0_scratch): Likewise.
(*zeroextractsi_compare0_scratch): Likewise.
(*ne_zeroextractsi, *ne_zeroextractsi_shifted): Likewise.
(*ite_ne_zeroextractsi, *ite_ne_zeroextractsi_shifted): Likewise.
(andsi_not_shiftsi_si_scc_no_reuse): Likewise.
(andsi_not_shiftsi_si_scc): Likewise.
(*andsi_notsi_si_compare0, *andsi_notsi_si_compare0_scratch): Likewise.
(*iorsi3_compare0, *iorsi3_compare0_scratch): Likewise.
(*xorsi3_compare0, *xorsi3_compare0_scratch): Likewise.
(*shiftsi3_compare0, *shiftsi3_compare0_scratch): Likewise.
(*not_shiftsi_compare0, *not_shiftsi_compare0_scratch): Likewise.
(*notsi_compare0, *notsi_compare0_scratch): Likewise.
(return_addr_mask, *check_arch2): Likewise.
(*arith_shiftsi_compare0, *arith_shiftsi_compare0_scratch): Likewise.
(*sub_shiftsi_compare0, *sub_shiftsi_compare0_scratch): Likewise.
(compare_scc splitters): Likewise.
(movcond_addsi): Likewise.
* config/arm/thumb2.md (thumb2_addsi3_compare0): Likewise.
(*thumb2_addsi3_compare0_scratch): Likewise.
(*thumb2_mulsi_short_compare0): Likewise.
(*thumb2_mulsi_short_compare0_scratch): Likewise.
(compare peephole2s): Likewise.
* config/arm/thumb1.md (thumb1_cbz): Use CC_NZmode and
nz_comparison_operator names.
(cbranchsi4_insn): Likewise.

From-SVN: r278225

4 years agoarm: Fix the "c" constraint
Richard Henderson [Thu, 14 Nov 2019 13:44:05 +0000 (13:44 +0000)]
arm: Fix the "c" constraint

The existing definition using register class CC_REG does not
work because CC_REGNUM does not support normal modes, and so
fails to match register_operand.  Use a non-register constraint
and the cc_register predicate instead.

        * config/arm/constraints.md (c): Use cc_register predicate.

From-SVN: r278224

4 years agoaarch64: Add "c" constraint
Richard Henderson [Thu, 14 Nov 2019 13:43:50 +0000 (13:43 +0000)]
aarch64: Add "c" constraint

Mirror arm in letting "c" match the condition code register.

* config/aarch64/constraints.md (c): New constraint.

From-SVN: r278223