Martin Liska [Wed, 23 Oct 2019 08:55:05 +0000 (10:55 +0200)]
Do not ICE in IPA inliner.
2019-10-23 Martin Liska <mliska@suse.cz>
PR ipa/91969
* ipa-inline.c (recursive_inlining): Do not print
when curr->count is not initialized.
2019-10-23 Martin Liska <mliska@suse.cz>
PR ipa/91969
* g++.dg/ipa/pr91969.C: New test.
From-SVN: r277309
Richard Biener [Wed, 23 Oct 2019 06:45:03 +0000 (06:45 +0000)]
tree-vect-slp.c (vect_build_slp_tree_2): Do not build op from scalars in case there's a constant operand in its definition.
2019-10-23 Richard Biener <rguenther@suse.de>
* tree-vect-slp.c (vect_build_slp_tree_2): Do not build
op from scalars in case there's a constant operand in its
definition.
From-SVN: r277308
Iain Sandoe [Wed, 23 Oct 2019 05:39:32 +0000 (05:39 +0000)]
[Darwin, PPC] Check for out of range asm values.
There are some cases in which the value for the max skip to a p2align
directive can be negative. The older assembler just ignores these cases,
whereas newer tools produce an error. To preserve behaviour, we avoid
emitting out-of-range values.
gcc/ChangeLog:
2019-10-23 Iain Sandoe <iain@sandoe.co.uk>
* config/rs6000/darwin.h (ASM_OUTPUT_MAX_SKIP_ALIGN): Guard
against out of range max skip or log values.
From-SVN: r277307
GCC Administrator [Wed, 23 Oct 2019 00:16:24 +0000 (00:16 +0000)]
Daily bump.
From-SVN: r277306
Jonathan Wakely [Tue, 22 Oct 2019 21:48:57 +0000 (22:48 +0100)]
Restore use of tr1::unordered_map in testsuite
My recent change to this file broke running the testsuite with
-std=c++98 because std::unordered_map isn't available. This fixes it.
* testsuite/util/testsuite_abi.h: Restore use of tr1/unordered_map
when compiled as C++98.
From-SVN: r277302
Jonathan Wakely [Tue, 22 Oct 2019 21:48:53 +0000 (22:48 +0100)]
Do not declare std::uses_allocator before C++11
* include/bits/memoryfwd.h (uses_allocator): Do not declare for C++98.
* testsuite/17_intro/names.cc: Check uses_allocator in C++98.
From-SVN: r277301
Jonathan Wakely [Tue, 22 Oct 2019 21:48:39 +0000 (22:48 +0100)]
Remove redundant std::allocator members for C++20
C++20 removes a number of std::allocator members that have correct
defaults provided by std::allocator_traits, so they aren't needed.
Several extensions including __gnu_cxx::hash_map and tr1 containers are
no longer usable with std::allocator in C++20 mode. They need to be
updated to use __gnu_cxx::__alloc_traits in a follow-up patch.
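As a rough illustration of the direction (a sketch written for this summary, not code taken from the patch; the function name and the choice of std::string are made up), generic code that used to call construct/destroy/max_size on the allocator directly can route those operations through std::allocator_traits, which keeps working when std::allocator drops those members in C++20:
#include <memory>
#include <string>
// Portable pattern: let allocator_traits supply the defaults instead of
// relying on members that std::allocator no longer provides in C++20.
template <typename Alloc>
void construct_one_and_destroy (Alloc a)
{
  using Traits = std::allocator_traits<Alloc>;
  using T = typename Traits::value_type;
  T *p = Traits::allocate (a, 1);   // was: a.allocate (1, hint)
  Traits::construct (a, p);         // was: a.construct (p, T ())
  Traits::destroy (a, p);           // was: a.destroy (p)
  Traits::deallocate (a, p, 1);
}
int main ()
{
  construct_one_and_destroy (std::allocator<std::string> ());
}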
* include/bits/alloc_traits.h
(allocator_traits<allocator<T>>::allocate): Ignore hint for C++20.
(allocator_traits<allocator<T>>::construct): Perform placement new
directly for C++20, instead of calling allocator<T>::construct.
(allocator_traits<allocator<T>>::destroy): Call destructor directly
for C++20, instead of calling allocator<T>::destroy.
(allocator_traits<allocator<T>>::max_size): Return value directly
for C++20, instead of calling std::allocator<T>::max_size().
(__do_alloc_on_copy, __do_alloc_on_move, __do_alloc_on_swap): Do not
define for C++17 and up.
(__alloc_on_copy, __alloc_on_move, __alloc_on_swap): Use if-constexpr
for C++17 and up, instead of tag dispatching.
* include/bits/allocator.h (allocator<void>): Remove for C++20.
(allocator::pointer, allocator::const_pointer, allocator::reference)
(allocator::const_reference, allocator::rebind): Remove for C++20.
* include/bits/basic_string.h (basic_string): Use __alloc_traits to
rebind allocator.
* include/bits/memoryfwd.h (allocator<void>): Remove for C++20.
* include/ext/debug_allocator.h: Use __alloc_traits for rebinding.
* include/ext/malloc_allocator.h (malloc_allocator::~malloc_allocator)
(malloc_allocator::pointer, malloc_allocator::const_pointer)
(malloc_allocator::reference, malloc_allocator::const_reference)
(malloc_allocator::rebind, malloc_allocator::max_size)
(malloc_allocator::construct, malloc_allocator::destroy): Do not
define for C++20.
(malloc_allocator::_M_max_size): Define new function.
* include/ext/new_allocator.h (new_allocator::~new_allocator)
(new_allocator::pointer, new_allocator::const_pointer)
(new_allocator::reference, new_allocator::const_reference)
(new_allocator::rebind, new_allocator::max_size)
(new_allocator::construct, new_allocator::destroy): Do not
define for C++20.
(new_allocator::_M_max_size): Define new function.
* include/ext/rc_string_base.h (__rc_string_base::_Rep): Use
__alloc_traits to rebind allocator.
* include/ext/rope (_Rope_rep_base, _Rope_base): Likewise.
(rope::rope(CharT, const allocator_type&)): Use __alloc_traits
to construct character.
* include/ext/slist (_Slist_base): Use __alloc_traits to rebind
allocator.
* include/ext/sso_string_base.h (__sso_string_base::_M_max_size):
Use __alloc_traits.
* include/ext/throw_allocator.h (throw_allocator): Do not use optional
members of std::allocator, use __alloc_traits members instead.
* include/ext/vstring.h (__versa_string): Use __alloc_traits.
* include/ext/vstring_util.h (__vstring_utility): Likewise.
* include/std/memory: Include <bits/alloc_traits.h>.
* testsuite/20_util/allocator/8230.cc: Use __gnu_test::max_size.
* testsuite/20_util/allocator/rebind_c++20.cc: New test.
* testsuite/20_util/allocator/requirements/typedefs.cc: Do not check
for pointer, const_pointer, reference, const_reference or rebind in
C++20.
* testsuite/20_util/allocator/requirements/typedefs_c++20.cc: New test.
* testsuite/23_containers/deque/capacity/29134.cc: Use
__gnu_test::max_size.
* testsuite/23_containers/forward_list/capacity/1.cc: Likewise.
* testsuite/23_containers/list/capacity/29134.cc: Likewise.
* testsuite/23_containers/map/capacity/29134.cc: Likewise.
* testsuite/23_containers/multimap/capacity/29134.cc: Likewise.
* testsuite/23_containers/multiset/capacity/29134.cc: Likewise.
* testsuite/23_containers/set/capacity/29134.cc: Likewise.
* testsuite/23_containers/vector/capacity/29134.cc: Likewise.
* testsuite/ext/malloc_allocator/variadic_construct.cc: Do not run
test for C++20.
* testsuite/ext/new_allocator/variadic_construct.cc: Likewise.
* testsuite/ext/vstring/capacity/29134.cc: Use __gnu_test::max_size.
* testsuite/util/replacement_memory_operators.h: Do not assume
Alloc::pointer exists.
* testsuite/util/testsuite_allocator.h (__gnu_test::max_size): Define
helper to call max_size for any allocator.
From-SVN: r277300
Giuliano Belinassi [Tue, 22 Oct 2019 19:05:49 +0000 (19:05 +0000)]
Fix incorrect merge of conflicting names in `dump_graphviz`
When using lto-dump -callgraph with two or more .o files containing distinct
functions with the same name, dump_graphviz incorrectly merged those functions
into a single node. This patch fixes the issue by calling `dump_name` instead
of `name`, thus concatenating the function name with the node's id.
To understand the issue, let's say you have two files:
a.c: static void foo (void) { do_something (); }
b.c: static void foo (void) { do_something_else (); }
These are distinct functions and should be represented as distinct nodes in the
callgraph dump.
2019-10-22 Giuliano Belinassi <giuliano.belinassi@usp.br>
* cgraph.c (dump_graphviz): Change name to dump_name.
From-SVN: r277299
Steven G. Kargl [Tue, 22 Oct 2019 18:18:59 +0000 (18:18 +0000)]
re PR fortran/92174 (runtime error: index 15 out of bounds for type 'gfc_expr *[15])
2019-10-22 Steven G. Kargl <kargl@gcc.gnu.org>
PR fortran/92174
* decl.c (attr_decl1): Move check for F2018:C822 from here ...
* array.c (gfc_set_array_spec): ... to here.
From-SVN: r277297
Jakub Jelinek [Tue, 22 Oct 2019 14:52:52 +0000 (16:52 +0200)]
re PR tree-optimization/85887 (Missing DW_TAG_lexical_block PC range)
PR tree-optimization/85887
* decl.c (expand_static_init): Drop ECF_LEAF from __cxa_guard_acquire
and __cxa_guard_release.
From-SVN: r277293
Marc Glisse [Tue, 22 Oct 2019 14:42:38 +0000 (16:42 +0200)]
PR c++/85746: Don't fold __builtin_constant_p prematurely
2019-10-22 Marc Glisse <marc.glisse@inria.fr>
gcc/cp/
* constexpr.c (cxx_eval_builtin_function_call): Only set
force_folding_builtin_constant_p if manifestly_const_eval.
gcc/testsuite/
* g++.dg/pr85746.C: New file.
From-SVN: r277292
Tamar Christina [Tue, 22 Oct 2019 14:25:38 +0000 (14:25 +0000)]
Arm: Fix arm libsanitizer bootstrap failure
Glibc has recently introduced a change to the mode field in ipc_perm
in commit 2f959dfe849e0646e27403f2e4091536496ac0f0. For Arm this
means that the mode field no longer has the same size.
This causes an assert failure against libsanitizer's internal copy
of ipc_perm. Since this change can't be easily detected, I am adding
arm to the list of targets that are excluded from this check. libsanitizer
doesn't use this field (in fact it uses only one field of the structure),
so the check can safely be ignored.
Padding bits were used by glibc when the field was changed, so sizeof and
the offsets of the remaining fields should be the same.
libsanitizer/ChangeLog:
PR sanitizer/92154
* sanitizer_common/sanitizer_platform_limits_posix.cpp (defined):
Cherry-pick compiler-rt revision r375220.
From-SVN: r277291
Richard Earnshaw [Tue, 22 Oct 2019 13:19:15 +0000 (13:19 +0000)]
[arm] Match subtraction from carry_operation
On Arm we have both carry and borrow operations, but borrow is
essentially '~carry'. Of course, with boolean logic ~carry is also
1-carry.
GCC transforms
(1 - X - LTU (cc, 0))
into
(GEU (cc, 0) - X)
Now the former matches a real insn in Arm state, using the RSC
instruction with #1 as the immediate, but we currently do not
recognize the canonicalized form. Nevertheless, given the above
logic, this turns out to be quite straightforward as the original
expression matches arm_borrow_operation and the revised form can be
used with arm_carry_operation. Since we match this new pattern we
also update rtx_costs to handle it.
* config/arm/arm.md (rsbsi_carryin_reg): New pattern.
* config/arm/arm.c (arm_rtx_costs_internal, case MINUS): Handle
subtraction from a carry operation.
From-SVN: r277290
Richard Earnshaw [Tue, 22 Oct 2019 13:16:42 +0000 (13:16 +0000)]
[arm] make arm_carry_operation and arm_borrow_operation duals
arm_carry_operation and arm_borrow_operation are duals: given a
comparison whose result relies solely on the carry flag, one is the
inverse of the other. So there's no reason for one to handle a CC mode
that the other does not. This patch restores that equivalence.
* config/arm/predicates.md (arm_borrow_operation): Handle CC_ADCmode.
From-SVN: r277289
Richard Biener [Tue, 22 Oct 2019 13:08:53 +0000 (13:08 +0000)]
re PR tree-optimization/92173 (ICE in optab_for_tree_code, at optabs-tree.c:81)
2019-10-22 Richard Biener <rguenther@suse.de>
PR tree-optimization/92173
* tree-vect-loop.c (vectorizable_reduction): If
vect_transform_reduction cannot handle code-generation try without
the single-def-use-cycle optimization. Pass optab_vector to
optab_for_tree_code to get vector shifts as that's what we'd
generate.
* gcc.dg/torture/pr92173.c: New testcase.
From-SVN: r277288
Michael Matz [Tue, 22 Oct 2019 12:25:03 +0000 (12:25 +0000)]
re PR middle-end/90796 (GCC: O2 vs O3 output differs on simple test)
Fix PR middle-end/90796
PR middle-end/90796
* gimple-loop-jam.c (any_access_function_variant_p): New function.
(adjust_unroll_factor): Use it to constrain safety, new parameter.
(tree_loop_unroll_and_jam): Adjust call and profitable unroll factor.
testsuite/
* gcc.dg/unroll-and-jam.c: Add three invalid and one valid case.
From-SVN: r277287
Richard Biener [Tue, 22 Oct 2019 11:51:52 +0000 (11:51 +0000)]
re PR tree-optimization/92173 (ICE in optab_for_tree_code, at optabs-tree.c:81)
2019-10-22 Richard Biener <rguenther@suse.de>
PR tree-optimization/92173
* tree-vect-loop.c (vectorizable_reduction): If
vect_transform_reduction cannot handle code-generation try without
the single-def-use-cycle optimization. Pass optab_vector to
optab_for_tree_code to get vector shifts as that's what we'd
generate.
* gcc.dg/torture/pr92173.c: New testcase.
From-SVN: r277286
Andreas Schwab [Tue, 22 Oct 2019 10:32:52 +0000 (10:32 +0000)]
* config/abi/post/aarch64-linux-gnu/baseline_symbols.txt: Update.
From-SVN: r277285
Martin Liska [Tue, 22 Oct 2019 09:58:27 +0000 (11:58 +0200)]
Come up with json::integer_number and use it in GCOV.
2019-10-22 Martin Liska <mliska@suse.cz>
* diagnostic-format-json.cc (json_from_expanded_location):
Use json::integer_number.
* gcov.c (output_intermediate_json_line): Use new
json::integer_number.
(output_json_intermediate_file): Likewise.
* json.cc (number::print): Move to ...
(float_number::print): ... this.
(integer_number::print): New.
(test_writing_numbers): Move to ...
(test_writing_float_numbers): ... this.
(test_writing_integer_numbers): New.
(json_cc_tests): Register test_writing_integer_numbers.
* json.h (class value): Add forward declaration
for float_number and integer_number.
(enum kind): Add JSON_INTEGER and JSON_FLOAT.
(class number): Move to ...
(class float_number): ... this.
(class integer_number): New.
* optinfo-emit-json.cc (optrecord_json_writer::impl_location_to_json):
Use json::integer_number.
(optrecord_json_writer::location_to_json): Likewise.
(optrecord_json_writer::profile_count_to_json): Likewise.
(optrecord_json_writer::pass_to_json): Likewise.
From-SVN: r277284
Martin Liska [Tue, 22 Oct 2019 09:29:52 +0000 (09:29 +0000)]
Fix PR reference in ChangeLog.
From-SVN: r277283
Richard Sandiford [Tue, 22 Oct 2019 08:43:01 +0000 (08:43 +0000)]
Fix use-after-free in vector_size change
r277235 was a bit too mechanical and ended up introducing use-after-free
bugs in both loop and SLP vectorisation.
2019-10-22 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-slp.c (vect_slp_bb_region): Check whether
autodetected_vector_size rather than vector_size is zero.
* tree-vect-loop.c (vect_analyze_loop): Likewise.
Set autodetected_vector_size immediately after calling
vect_analyze_loop_2. Check for a fatal error before advancing
next_size.
From-SVN: r277282
Richard Sandiford [Tue, 22 Oct 2019 07:47:07 +0000 (07:47 +0000)]
[C++] Avoid exposing internal details in aka types
This patch extends r276951 to work for C++ too.
2019-10-22 Richard Sandiford <richard.sandiford@arm.com>
gcc/cp/
* cp-tree.h (STF_USER_VISIBLE): New constant.
(strip_typedefs, strip_typedefs_expr): Take a flags argument.
* tree.c (strip_typedefs, strip_typedefs_expr): Likewise,
updating mutual calls accordingly. When STF_USER_VISIBLE is true,
only look through typedefs if user_facing_original_type_p.
* error.c (dump_template_bindings, type_to_string): Pass
STF_USER_VISIBLE to strip_typedefs.
(dump_type): Likewise, unless pp_c_flag_gnu_v3 is set.
gcc/testsuite/
* g++.dg/diagnostic/aka5.h: New test.
* g++.dg/diagnostic/aka5a.C: Likewise.
* g++.dg/diagnostic/aka5b.C: Likewise.
* g++.target/aarch64/diag_aka_1.C: Likewise.
From-SVN: r277281
Iain Sandoe [Tue, 22 Oct 2019 03:40:26 +0000 (03:40 +0000)]
[testsuite] Make the Wnonnull test independent of system headers.
To avoid the result of this test depending on the implementation of
the system 'string.h', provide prototypes for the two functions used
in the test.
gcc/testsuite/ChangeLog:
2019-10-22 Iain Sandoe <iain@sandoe.co.uk>
* gcc.dg/Wnonnull.c: Provide prototypes for strlen and memcpy.
Use __SIZE_TYPE__ instead of size_t.
From-SVN: r277280
Jason Merrill [Tue, 22 Oct 2019 03:30:48 +0000 (23:30 -0400)]
* lock-and-run.sh: Tweak command order.
From-SVN: r277279
Jason Merrill [Tue, 22 Oct 2019 03:12:04 +0000 (23:12 -0400)]
* .gitattributes: Also check ChangeLog whitespace.
From-SVN: r277278
Jason Merrill [Tue, 22 Oct 2019 03:09:41 +0000 (23:09 -0400)]
lock-and-run.sh: Check for process existence rather than timeout.
* lock-and-run.sh: Check for process existence rather than timeout.
Matthias Klose noted that on less powerful targets, a link might take more
than 5 minutes; he mentions a figure of 3 hours for an LTO link. So this
patch changes the timeout to a check for whether the locking process still
exists. If the lock exists in an erroneous state (no pid file or can't
signal the pid) for 30 sec, steal it.
From-SVN: r277277
GCC Administrator [Tue, 22 Oct 2019 00:16:18 +0000 (00:16 +0000)]
Daily bump.
From-SVN: r277276
Jozef Lawrynowicz [Mon, 21 Oct 2019 20:44:16 +0000 (20:44 +0000)]
expr.c (expand_expr_real_2): Don't widen constant op1 when expanding widening multiplication.
2019-10-21 Jozef Lawrynowicz <jozef.l@mittosystems.com>
* expr.c (expand_expr_real_2): Don't widen constant op1 when expanding
widening multiplication.
From-SVN: r277271
Kamlesh Kumar [Mon, 21 Oct 2019 20:19:28 +0000 (20:19 +0000)]
PR c++/83434 - typeinfo for noexcept function lacks noexcept information
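A minimal illustration of the behaviour being fixed (a sketch written for this summary, not the committed test): in C++17 and later noexcept is part of the function type, so once the noexcept information is preserved in the type_info the two typeids below should compare unequal.
#include <typeinfo>
int main ()
{
  // Before the fix the noexcept variant's typeinfo lost the noexcept
  // qualification and the two compared equal.
  return typeid (void ()) != typeid (void () noexcept) ? 0 : 1;
}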
2019-10-21 Kamlesh Kumar <kamleshbhalui@gmail.com>
* rtti.c (get_tinfo_decl_dynamic): Do not call
TYPE_MAIN_VARIANT for function.
(get_typeid): Likewise.
* g++.dg/rtti/pr83534.C: New test.
Reviewed-by: Jason Merrill <jason@redhat.com>
Co-Authored-By: Jason Merrill <jason@redhat.com>
From-SVN: r277270
Paolo Carlini [Mon, 21 Oct 2019 19:29:41 +0000 (19:29 +0000)]
parser.c (cp_parser_class_head): Improve error recovery upon extra qualification error.
/cp
2019-10-21 Paolo Carlini <paolo.carlini@oracle.com>
* parser.c (cp_parser_class_head): Improve error recovery upon
extra qualification error.
/testsuite
2019-10-21 Paolo Carlini <paolo.carlini@oracle.com>
* g++.dg/parse/qualified2.C: Tighten dg-error directive.
* g++.old-deja/g++.other/decl5.C: Don't expect redundant error.
From-SVN: r277268
Jakub Jelinek [Mon, 21 Oct 2019 18:51:43 +0000 (20:51 +0200)]
re PR c++/92015 (internal compiler error: in cxx_eval_array_reference, at cp/constexpr.c:2568)
PR c++/92015
* constexpr.c (cxx_eval_component_reference, cxx_eval_bit_field_ref):
Use STRIP_ANY_LOCATION_WRAPPER on CONSTRUCTOR elts.
* g++.dg/cpp0x/constexpr-92015.C: New test.
From-SVN: r277267
Marek Polacek [Mon, 21 Oct 2019 18:45:45 +0000 (18:45 +0000)]
PR c++/92062 - ODR-use ignored for static member of class template.
has_value_dependent_address wasn't stripping location wrappers, so it
gave the wrong answer for "&x" in the static_assert. That led us to
think that the expression isn't instantiation-dependent, and we
skipped static initialization of A<0>::x.
This patch adds stripping so that has_value_dependent_address gives the
same answer as it did before the addition of location wrappers.
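An illustrative reduction (hedged: reconstructed from the description above, not necessarily the committed constexpr-odr1.C): the address-of in the static_assert odr-uses the static member, so instantiating A<0> must also instantiate, and dynamically initialize, A<0>::x.
int g;
template <int N> struct A {
  static const bool x;
  static_assert (&x != nullptr, "");  // the "&x" the PR is about
};
template <int N> const bool A<N>::x = (g = 42, false);  // dynamic initializer
A<0> a;   // instantiating A<0> must leave g == 42 before main runs
int main () { return g == 42 ? 0 : 1; }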
* pt.c (has_value_dependent_address): Strip location wrappers.
* g++.dg/cpp0x/constexpr-odr1.C: New test.
* g++.dg/cpp0x/constexpr-odr2.C: New test.
From-SVN: r277266
Marek Polacek [Mon, 21 Oct 2019 18:22:41 +0000 (18:22 +0000)]
PR c++/92106 - ICE with structured bindings and -Wreturn-local-addr.
* typeck.c (maybe_warn_about_returning_address_of_local): Avoid
recursing on null initializer and return false instead.
* g++.dg/cpp1z/decomp50.C: New test.
From-SVN: r277264
Richard Earnshaw [Mon, 21 Oct 2019 15:52:58 +0000 (15:52 +0000)]
[arm] clean up alu+shift patterns
My DImode arithmetic patches introduced a bug on thumb2 where we could
generate a register-controlled shift into an ALU operation. In
fairness the bug was always present, but latent.
As part of cleaning this up (and auditing to ensure I've caught them
all this time) I've gone through all the shift-generating patterns in
the MD files and cleaned them up, reducing some duplicate patterns
between the arm and thumb2 descriptions where we can now share the
same pattern. In some cases we were missing the shift attribute; in
most cases I've eliminated an ugly attribute setting by using the fact
that we normally need separate alternatives for an immediate shift and
a register shift, which simplifies the logic.
* config/arm/iterators.md (t2_binop0): Fix typo in comment.
* config/arm/arm.md (addsi3_carryin_shift): Simplify selection of the
type attribute.
(subsi3_carryin_shift): Separate into register and constant controlled
alternatives. Use shift_amount_operand for operand 4. Set shift
attribute and simplify type attribute.
(subsi3_carryin_shift_alt): Likewise.
(rsbsi3_carryin_shift): Likewise.
(rsbsi3_carryin_shift_alt): Likewise.
(andsi_not_shiftsi_si): Enable for TARGET_32BIT. Separate constant
and register controlled shifts into distinct alternatives.
(andsi_not_shiftsi_si_scc_no_reuse): Likewise.
(andsi_not_shiftsi_si_scc): Likewise.
(arm_cmpsi_negshiftsi_si): Likewise.
(not_shiftsi): Remove redundant M constraint from alternative 1.
(not_shiftsi_compare0): Likewise.
(arm_cmpsi_insn): Remove redundant alternative 2.
(cmpsi_shift_swp): Likewise.
(sub_shiftsi): Likewise.
(sub_shiftsi_compare0_scratch): Likewise.
* config/arm/thumb2.md (thumb_andsi_not_shiftsi_si): Delete pattern.
(thumb2_cmpsi_neg_shiftsi): Likewise.
From-SVN: r277262
Richard Biener [Mon, 21 Oct 2019 13:43:19 +0000 (13:43 +0000)]
re PR tree-optimization/92162 (ICE in vect_create_epilog_for_reduction, at tree-vect-loop.c:4252)
2019-10-21 Richard Biener <rguenther@suse.de>
PR tree-optimization/92162
* tree-vect-loop.c (vect_create_epilog_for_reduction): Lookup
STMT_VINFO_REDUC_IDX in reduc_info.
* tree-vect-stmts.c (vectorizable_condition): Likewise.
* gcc.dg/pr92162.c: New testcase.
From-SVN: r277261
Andrew Burgess [Mon, 21 Oct 2019 12:41:29 +0000 (13:41 +0100)]
contrib: Add KPASS support to dg-extract-results.{sh,py}
Extend dg-extract-results.sh and dg-extract-results.py to support the
KPASS test result status. This is required by GDB which uses a copy
of the dg-extract-results.{sh,py} scripts that it tries to keep in
sync with GCC.
ChangeLog:
* contrib/dg-extract-results.sh: Add support for KPASS.
* contrib/dg-extract-results.py: Likewise.
From-SVN: r277260
Richard Biener [Mon, 21 Oct 2019 11:34:00 +0000 (11:34 +0000)]
tree-vectorizer.h (_slp_tree::ops): New member.
2019-10-21 Richard Biener <rguenther@suse.de>
* tree-vectorizer.h (_slp_tree::ops): New member.
(SLP_TREE_SCALAR_OPS): New.
(vect_get_slp_defs): Adjust prototype.
* tree-vect-slp.c (vect_free_slp_tree): Release
SLP_TREE_SCALAR_OPS.
(vect_create_new_slp_node): Initialize it. New overload for
initializing by an operands array.
(_slp_oprnd_info::ops): New member.
(vect_create_oprnd_info): Initialize it.
(vect_free_oprnd_info): Release it.
(vect_get_and_check_slp_defs): Populate the operands array.
Do not swap operands in the IL when not necessary.
(vect_build_slp_tree_2): Build SLP nodes for invariant operands.
Record SLP_TREE_SCALAR_OPS for all invariant nodes. Also
swap operands in the operands array. Do not swap operands in
the IL.
(vect_slp_rearrange_stmts): Re-arrange SLP_TREE_SCALAR_OPS as well.
(vect_gather_slp_loads): Fix.
(vect_detect_hybrid_slp_stmts): Likewise.
(vect_slp_analyze_node_operations_1): Search for an internal
def child for computing reduction SLP_TREE_NUMBER_OF_VEC_STMTS.
(vect_slp_analyze_node_operations): Skip ops-only stmts for
the def-type push/pop dance.
(vect_get_constant_vectors): Compute number_of_vectors here.
Use SLP_TREE_SCALAR_OPS and simplify greatly.
(vect_get_slp_vect_defs): Use gimple_get_lhs also for PHIs.
(vect_get_slp_defs): Simplify greatly.
* tree-vect-loop.c (vectorize_fold_left_reduction): Simplify.
(vect_transform_reduction): Likewise.
* tree-vect-stmts.c (vect_get_vec_defs): Simplify.
(vectorizable_call): Likewise.
(vectorizable_operation): Likewise.
(vectorizable_load): Likewise.
(vectorizable_condition): Likewise.
(vectorizable_comparison): Likewise.
From-SVN: r277241
Richard Biener [Mon, 21 Oct 2019 11:32:25 +0000 (11:32 +0000)]
re PR tree-optimization/92161 (ICE in vect_get_vec_def_for_stmt_copy, at tree-vect-stmts.c:1687)
2019-10-21 Richard Biener <rguenther@suse.de>
PR tree-optimization/92161
* tree-vect-loop.c (vect_analyze_loop_2): Reset stmts def-type
for reductions.
* gfortran.dg/pr92161.f: New testcase.
From-SVN: r277240
Kyrylo Tkachov [Mon, 21 Oct 2019 10:52:05 +0000 (10:52 +0000)]
[AArch64] Implement __rndr, __rndrrs intrinsics
This patch implements the recently published[1] __rndr and __rndrrs
intrinsics used to access the RNG in Armv8.5-A.
The __rndrrs intrinsic can also be used to reseed the generator.
Both intrinsics are guarded by the __ARM_FEATURE_RNG feature macro.
A quirk with these intrinsics is that they store the random number in
their pointer argument and return a status code indicating whether the
generation succeeded.
The instructions themselves set the CC flags to indicate the success of
the operation, and we can then read that result with a CSET.
Therefore this implementation makes use of the IGNORE indicator to the
builtin expand machinery to avoid generating the CSET when its result is
unused (the CC reg clobbering effect is still reflected in the pattern).
I've checked that using unspec_volatile prevents undesirable CSEing of
the instructions.
[1] https://developer.arm.com/docs/101028/latest/data-processing-intrinsics
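A hedged usage sketch (the helper below is invented for illustration; it assumes the ACLE behaviour described above, i.e. the value is written through the pointer and a zero return value means success):
#include <arm_acle.h>
#include <stdint.h>
#ifdef __ARM_FEATURE_RNG
uint64_t get_random_u64 (void)
{
  uint64_t value;
  while (__rndr (&value) != 0)   // non-zero status: retry the generator
    ;
  return value;
}
#endif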
* config/aarch64/aarch64.md (UNSPEC_RNDR, UNSPEC_RNDRRS): Define.
(aarch64_rndr): New define_insn.
(aarch64_rndrrs): Likewise.
* config/aarch64/aarch64.h (AARCH64_ISA_RNG): Define.
(TARGET_RNG): Likewise.
* config/aarch64/aarch64.c (aarch64_expand_builtin): Use IGNORE
argument.
* config/aarch64/aarch64-protos.h (aarch64_general_expand_builtin):
Add fourth argument in prototype.
* config/aarch64/aarch64-builtins.c (enum aarch64_builtins):
Add AARCH64_BUILTIN_RNG_RNDR, AARCH64_BUILTIN_RNG_RNDRRS.
(aarch64_init_rng_builtins): Define.
(aarch64_general_init_builtins): Call aarch64_init_rng_builtins.
(aarch64_expand_rng_builtin): Define.
(aarch64_general_expand_builtin): Use IGNORE argument, handle
RNG builtins.
* config/aarch64/aarch64-c.c (aarch64_update_cpp_builtins): Define
__ARM_FEATURE_RNG when TARGET_RNG.
* config/aarch64/arm_acle.h (__rndr, __rndrrs): Define.
* gcc.target/aarch64/acle/rng_1.c: New test.
From-SVN: r277239
Andre Vieira [Mon, 21 Oct 2019 10:12:18 +0000 (10:12 +0000)]
[vect] Only change base alignment if more restrictive
This patch makes sure ensure_base_align only changes alignment if the new
alignment is more restrictive. It already did this if we were dealing with
symbols, but it now does it for all types of declarations.
gcc/ChangeLog:
2019-10-21 Andre Vieira <andre.simoesdiasvieira@arm.com>
* tree-vect-stmts (ensure_base_align): Only change alignment if new
alignment is more restrictive.
From-SVN: r277238
Prathamesh Kulkarni [Mon, 21 Oct 2019 07:31:45 +0000 (07:31 +0000)]
re PR tree-optimization/91532 ([SVE] Redundant predicated store in gcc.target/aarch64/fmla_2.c)
2019-10-21 Prathamesh Kulkarni <prathamesh.kulkarni@linaro.org>
PR tree-optimization/91532
* gcc.target/aarch64/sve/fmla_2.c: Add dg-scan check for two st1d
insns.
From-SVN: r277237
Georg-Johann Lay [Mon, 21 Oct 2019 06:54:42 +0000 (06:54 +0000)]
Fix some fallout for small targets.
PR testsuite/52641
* gcc.dg/torture/pr86034.c: Use 32-bit base type for a bitfield of
width > 16 bits.
* gcc.dg/torture/pr90972.c [avr]: Add option "-w".
* gcc.dg/torture/pr87693.c: Same.
* gcc.dg/torture/pr91178.c: Add dg-require-effective-target size32plus.
* gcc.dg/torture/pr91178-2.c: Same.
* gcc.dg/torture/20181024-1.c
* gcc.dg/torture/pr86554-1.c: Use 32-bit integers.
* gcc.dg/tree-ssa/pr91091-1.c: Same.
From-SVN: r277236
Richard Sandiford [Mon, 21 Oct 2019 06:41:36 +0000 (06:41 +0000)]
Replace current_vector_size with vec_info::vector_size
2019-10-21 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (vec_info::vector_size): New member variable.
(vect_update_max_nunits): Update comment.
(current_vector_size): Delete.
* tree-vect-stmts.c (current_vector_size): Likewise.
(get_vectype_for_scalar_type): Use vec_info::vector_size instead
of current_vector_size.
(get_mask_type_for_scalar_type): Likewise.
* tree-vectorizer.c (try_vectorize_loop_1): Likewise.
* tree-vect-loop.c (vect_update_vf_for_slp): Likewise.
(vect_analyze_loop, vect_halve_mask_nunits): Likewise.
(vect_double_mask_nunits, vect_transform_loop): Likewise.
* tree-vect-slp.c (can_duplicate_and_interleave_p): Likewise.
(vect_make_slp_decision, vect_slp_bb_region): Likewise.
From-SVN: r277235
Richard Sandiford [Mon, 21 Oct 2019 06:41:31 +0000 (06:41 +0000)]
Pass a vec_info to vect_double_mask_nunits
2019-10-21 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (vect_double_mask_nunits): Take a vec_info.
* tree-vect-loop.c (vect_double_mask_nunits): Likewise.
* tree-vect-stmts.c (supportable_narrowing_operation): Update call
accordingly.
From-SVN: r277234
Richard Sandiford [Mon, 21 Oct 2019 06:41:25 +0000 (06:41 +0000)]
Pass a vec_info to vect_halve_mask_nunits
2019-10-21 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (vect_halve_mask_nunits): Take a vec_info.
* tree-vect-loop.c (vect_halve_mask_nunits): Likewise.
* tree-vect-loop-manip.c (vect_maybe_permute_loop_masks): Update
call accordingly.
* tree-vect-stmts.c (supportable_widening_operation): Likewise.
From-SVN: r277233
Richard Sandiford [Mon, 21 Oct 2019 06:41:21 +0000 (06:41 +0000)]
Pass a loop_vec_info to vect_maybe_permute_loop_masks
2019-10-21 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-loop-manip.c (vect_maybe_permute_loop_masks): Take
a loop_vec_info.
(vect_set_loop_condition_masked): Update call accordingly.
From-SVN: r277232
Richard Sandiford [Mon, 21 Oct 2019 06:41:15 +0000 (06:41 +0000)]
Pass a vec_info to supportable_narrowing_operation
2019-10-21 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (supportable_narrowing_operation): Take a vec_info.
* tree-vect-stmts.c (supportable_narrowing_operation): Likewise.
(simple_integer_narrowing): Update call accordingly.
(vectorizable_conversion): Likewise.
From-SVN: r277231
Richard Sandiford [Mon, 21 Oct 2019 06:41:10 +0000 (06:41 +0000)]
Pass a vec_info to simple_integer_narrowing
2019-10-21 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-stmts.c (simple_integer_narrowing): Take a vec_info.
(vectorizable_call): Update call accordingly.
From-SVN: r277230
Richard Sandiford [Mon, 21 Oct 2019 06:41:05 +0000 (06:41 +0000)]
Pass a vec_info to can_duplicate_and_interleave_p
2019-10-21 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (can_duplicate_and_interleave_p): Take a vec_info.
* tree-vect-slp.c (can_duplicate_and_interleave_p): Likewise.
(duplicate_and_interleave): Update call accordingly.
* tree-vect-loop.c (vectorizable_reduction): Likewise.
From-SVN: r277229
Richard Sandiford [Mon, 21 Oct 2019 06:41:01 +0000 (06:41 +0000)]
Pass a vec_info to duplicate_and_interleave
2019-10-21 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (duplicate_and_interleave): Take a vec_info.
* tree-vect-slp.c (duplicate_and_interleave): Likewise.
(vect_get_constant_vectors): Update call accordingly.
* tree-vect-loop.c (get_initial_defs_for_reduction): Likewise.
From-SVN: r277228
Richard Sandiford [Mon, 21 Oct 2019 06:40:53 +0000 (06:40 +0000)]
Pass a vec_info to get_vectype_for_scalar_type
2019-10-21 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (get_vectype_for_scalar_type): Take a vec_info.
* tree-vect-stmts.c (get_vectype_for_scalar_type): Likewise.
(vect_prologue_cost_for_slp_op): Update call accordingly.
(vect_get_vec_def_for_operand, vect_get_gather_scatter_ops)
(vect_get_strided_load_store_ops, vectorizable_simd_clone_call)
(vect_supportable_shift, vect_is_simple_cond, vectorizable_comparison)
(get_mask_type_for_scalar_type): Likewise.
(vect_get_vector_types_for_stmt): Likewise.
* tree-vect-data-refs.c (vect_analyze_data_refs): Likewise.
* tree-vect-loop.c (vect_determine_vectorization_factor): Likewise.
(get_initial_def_for_reduction, build_vect_cond_expr): Likewise.
* tree-vect-patterns.c (vect_supportable_direct_optab_p): Likewise.
(vect_split_statement, vect_convert_input): Likewise.
(vect_recog_widen_op_pattern, vect_recog_pow_pattern): Likewise.
(vect_recog_over_widening_pattern, vect_recog_mulhs_pattern): Likewise.
(vect_recog_average_pattern, vect_recog_cast_forwprop_pattern)
(vect_recog_rotate_pattern, vect_recog_vector_vector_shift_pattern)
(vect_synth_mult_by_constant, vect_recog_mult_pattern): Likewise.
(vect_recog_divmod_pattern, vect_recog_mixed_size_cond_pattern)
(check_bool_pattern, adjust_bool_pattern_cast, adjust_bool_pattern)
(search_type_for_mask_1, vect_recog_bool_pattern): Likewise.
(vect_recog_mask_conversion_pattern): Likewise.
(vect_add_conversion_to_pattern): Likewise.
(vect_recog_gather_scatter_pattern): Likewise.
* tree-vect-slp.c (vect_build_slp_tree_2): Likewise.
(vect_analyze_slp_instance, vect_get_constant_vectors): Likewise.
From-SVN: r277227
Richard Sandiford [Mon, 21 Oct 2019 06:40:49 +0000 (06:40 +0000)]
Pass a vec_info to get_mask_type_for_scalar_type
2019-10-21 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (get_mask_type_for_scalar_type): Take a vec_info.
* tree-vect-stmts.c (get_mask_type_for_scalar_type): Likewise.
(vect_check_load_store_mask): Update call accordingly.
(vect_get_mask_type_for_stmt): Likewise.
* tree-vect-patterns.c (check_bool_pattern): Likewise.
(search_type_for_mask_1, vect_recog_mask_conversion_pattern): Likewise.
(vect_convert_mask_for_vectype): Likewise.
From-SVN: r277226
Richard Sandiford [Mon, 21 Oct 2019 06:40:44 +0000 (06:40 +0000)]
Pass a vec_info to vect_supportable_direct_optab_p
2019-10-21 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-patterns.c (vect_supportable_direct_optab_p): Take
a vec_info.
(vect_recog_dot_prod_pattern): Update call accordingly.
(vect_recog_sad_pattern, vect_recog_pow_pattern): Likewise.
(vect_recog_widen_sum_pattern): Likewise.
From-SVN: r277225
Richard Sandiford [Mon, 21 Oct 2019 06:40:41 +0000 (06:40 +0000)]
Pass a vec_info to vect_supportable_shift
2019-10-21 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.h (vect_supportable_shift): Take a vec_info.
* tree-vect-stmts.c (vect_supportable_shift): Likewise.
* tree-vect-patterns.c (vect_synth_mult_by_constant): Update call
accordingly.
From-SVN: r277224
Richard Sandiford [Mon, 21 Oct 2019 06:40:36 +0000 (06:40 +0000)]
Avoid setting current_vector_size in get_vec_alignment_for_array_type
The increase_alignment pass was using get_vectype_for_scalar_type
to get the preferred vector type for each array element type.
This has the effect of carrying over the vector size chosen by
the first successful call to all subsequent calls, whereas it seems
more natural to treat each array type independently and pick the
"best" vector type for each element type.
2019-10-21 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vectorizer.c (get_vec_alignment_for_array_type): Use
get_vectype_for_scalar_type_and_size instead of
get_vectype_for_scalar_type.
From-SVN: r277223
GCC Administrator [Mon, 21 Oct 2019 00:16:16 +0000 (00:16 +0000)]
Daily bump.
From-SVN: r277221
Bernd Edlinger [Sun, 20 Oct 2019 21:29:27 +0000 (21:29 +0000)]
common.opt (-fcommon): Fix description.
2019-10-20 Bernd Edlinger <bernd.edlinger@hotmail.de>
* common.opt (-fcommon): Fix description.
From-SVN: r277217
Jakub Jelinek [Sun, 20 Oct 2019 20:44:26 +0000 (22:44 +0200)]
i386-protos.h (ix86_pre_reload_split): Declare.
* config/i386/i386-protos.h (ix86_pre_reload_split): Declare.
* config/i386/i386.c (ix86_pre_reload_split): New function.
* config/i386/i386.md (*fix_trunc<mode>_i387_1, *add<mode>3_eq,
*add<mode>3_ne, *add<mode>3_eq_0, *add<mode>3_ne_0, *add<mode>3_eq,
*add<mode>3_ne, *add<mode>3_eq_1, *add<mode>3_eq_0, *add<mode>3_ne_0,
*anddi3_doubleword, *andndi3_doubleword, *<code>di3_doubleword,
*one_cmpldi2_doubleword, *ashl<dwi>3_doubleword_mask,
*ashl<dwi>3_doubleword_mask_1, *ashl<mode>3_mask, *ashl<mode>3_mask_1,
*<shift_insn><mode>3_mask, *<shift_insn><mode>3_mask_1,
*<shift_insn><dwi>3_doubleword_mask,
*<shift_insn><dwi>3_doubleword_mask_1, *<rotate_insn><mode>3_mask,
*<rotate_insn><mode>3_mask_1, *<btsc><mode>_mask, *<btsc><mode>_mask_1,
*btr<mode>_mask, *btr<mode>_mask_1, *jcc_bt<mode>, *jcc_bt<mode>_1,
*jcc_bt<mode>_mask, *popcounthi2_1, frndintxf2_<rounding>,
*fist<mode>2_<rounding>_1, *<code><mode>3_1, *<code>di3_doubleword):
Use ix86_pre_reload_split instead of can_create_pseudo_p in condition.
* config/i386/sse.md (*sse4_1_<code>v8qiv8hi2<mask_name>_2,
*avx2_<code>v8qiv8si2<mask_name>_2,
*sse4_1_<code>v4qiv4si2<mask_name>_2,
*sse4_1_<code>v4hiv4si2<mask_name>_2,
*avx512f_<code>v8qiv8di2<mask_name>_2,
*avx2_<code>v4qiv4di2<mask_name>_2, *avx2_<code>v4hiv4di2<mask_name>_2,
*sse4_1_<code>v2hiv2di2<mask_name>_2,
*sse4_1_<code>v2siv2di2<mask_name>_2, sse4_2_pcmpestr,
sse4_2_pcmpistr): Likewise.
From-SVN: r277216
Gerald Pfeifer [Sun, 20 Oct 2019 20:15:28 +0000 (20:15 +0000)]
install.texi (Configuration, [...]): hboehm.info now defaults to https.
* doc/install.texi (Configuration, --enable-objc-gc): hboehm.info
now defaults to https.
From-SVN: r277215
Jan Hubicka [Sun, 20 Oct 2019 18:53:37 +0000 (20:53 +0200)]
tree-ssa-alias.c (nonoverlapping_refs_since_match_p): Do not skip non-zero array accesses.
* tree-ssa-alias.c (nonoverlapping_refs_since_match_p): Do not
skip non-zero array accesses.
* gcc.c-torture/execute/alias-access-path-2.c: New testcase.
* gcc.dg/tree-ssa/alias-access-path-11.c: xfail.
From-SVN: r277214
Richard Sandiford [Sun, 20 Oct 2019 12:59:45 +0000 (12:59 +0000)]
Move code out of vect_slp_analyze_bb_1
After the previous patch, it seems more natural to apply the
PARAM_SLP_MAX_INSNS_IN_BB threshold as soon as we know what
the region is, rather than delaying it to vect_slp_analyze_bb_1.
(But rather than carve out the biggest region possible and then
reject it, wouldn't it be better to stop when the region gets
too big, to at least give us a chance of vectorising something?)
It also seems more natural for vect_slp_bb_region to create the
bb_vec_info itself rather than (a) having to pass bits of data down
for the initialisation and (b) forcing vect_slp_analyze_bb_1 to free it
on every failure return.
2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-slp.c (vect_slp_analyze_bb_1): Take a bb_vec_info
and return a boolean success value. Move the allocation and
initialization of the bb_vec_info to...
(vect_slp_bb_region): ...here. Update call accordingly.
(vect_slp_bb): Apply PARAM_SLP_MAX_INSNS_IN_BB here rather
than in vect_slp_analyze_bb_1.
From-SVN: r277211
Richard Sandiford [Sun, 20 Oct 2019 12:58:22 +0000 (12:58 +0000)]
Avoid recomputing data references in BB SLP
If the first attempt at applying BB SLP to a region fails, the main loop
in vect_slp_bb recomputes the region's bounds and datarefs for the next
vector size. AFAICT this isn't needed any more; we should be able
to reuse the datarefs from the first attempt instead.
2019-10-20 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* tree-vect-slp.c (vect_slp_analyze_bb_1): Call save_datarefs
when processing the given datarefs for the first time and
check_datarefs subsequently.
(vect_slp_bb_region): New function, split out of...
(vect_slp_bb): ...here. Don't recompute the region bounds and
dataref sets when retrying with a different vector size.
From-SVN: r277210
GCC Administrator [Sun, 20 Oct 2019 00:16:28 +0000 (00:16 +0000)]
Daily bump.
From-SVN: r277209
Jakub Jelinek [Sat, 19 Oct 2019 22:27:10 +0000 (00:27 +0200)]
nodiscard-reason-only-one.C: In dg-error or dg-warning remove (?n) uses and replace .* with \[^\n\r]*.
* g++.dg/cpp2a/nodiscard-reason-only-one.C: In dg-error or dg-warning
remove (?n) uses and replace .* with \[^\n\r]*.
* g++.dg/cpp2a/nodiscard-reason.C: Likewise.
* g++.dg/cpp2a/nodiscard-once.C: Likewise.
* g++.dg/cpp2a/nodiscard-reason-nonstring.C: Likewise.
From-SVN: r277205
Paul Thomas [Sat, 19 Oct 2019 16:44:06 +0000 (16:44 +0000)]
re PR fortran/91926 (assumed rank optional)
2019-10-19 Paul Thomas <pault@gcc.gnu.org>
PR fortran/91926
* runtime/ISO_Fortran_binding.c (cfi_desc_to_gfc_desc): Revert
the change made on 2019-10-05.
From-SVN: r277204
Jakub Jelinek [Sat, 19 Oct 2019 12:46:57 +0000 (14:46 +0200)]
re PR target/92140 (clang vs gcc optimizing with adc/sbb)
PR target/92140
* config/i386/predicates.md (int_nonimmediate_operand): New special
predicate.
* config/i386/i386.md (*add<mode>3_eq, *add<mode>3_ne,
*add<mode>3_eq_0, *add<mode>3_ne_0, *sub<mode>3_eq, *sub<mode>3_ne,
*sub<mode>3_eq_1, *sub<mode>3_eq_0, *sub<mode>3_ne_0): New
define_insn_and_split patterns.
* gcc.target/i386/pr92140.c: New test.
* gcc.c-torture/execute/pr92140.c: New test.
Co-Authored-By: Uros Bizjak <ubizjak@gmail.com>
From-SVN: r277203
Iain Sandoe [Sat, 19 Oct 2019 07:44:49 +0000 (07:44 +0000)]
[Darwin, testsuite] Fix Wnonnull on Darwin.
Darwin does not mark entries in string.h with nonnull attributes
so the test fails. Since the purpose of the test is to check that
the warnings are issued for an inlined function, not that the target
headers are marked up, we can provide marked up headers for Darwin.
gcc/testsuite/ChangeLog:
2019-10-19 Iain Sandoe <iain@sandoe.co.uk>
* gcc.dg/Wnonnull.c: Add attributed function declarations for
memcpy and strlen for Darwin.
From-SVN: r277202
Iain Sandoe [Sat, 19 Oct 2019 07:34:23 +0000 (07:34 +0000)]
[PPC] Delete out of date comment.
Removes a comment that's no longer relevant.
gcc/ChangeLog:
2019-10-19 Iain Sandoe <iain@sandoe.co.uk>
* config/rs6000/rs6000.md: Delete out-of-date comment about
special-casing integer loads.
From-SVN: r277201
JeanHeyd Meneide [Sat, 19 Oct 2019 04:51:59 +0000 (04:51 +0000)]
Implement C++20 P1301 [[nodiscard("should have a reason")]].
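A small usage sketch (the function name and the exact diagnostic wording are illustrative only):
[[nodiscard("check the return status")]] int do_work () { return 42; }
int main ()
{
  do_work ();          // warning mentions the supplied reason string
  (void) do_work ();   // explicitly discarding the value suppresses the warning
  return 0;
}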
2019-10-17 JeanHeyd Meneide <phdofthehouse@gmail.com>
gcc/
* escaped_string.h (escaped_string): New header.
* tree.c (escaped_string): Remove escaped_string class.
gcc/c-family
* c-lex.c (c_common_has_attribute): Update nodiscard value.
gcc/cp/
* tree.c (handle_nodiscard_attribute): Add C++2a nodiscard
string message.
(std_attribute_table): Increase nodiscard argument handling
max_length from 0 to 1.
* parser.c (cp_parser_check_std_attribute): Add requirement
that nodiscard only be seen once in attribute-list.
(cp_parser_std_attribute): Check that empty parenthesis lists are
not specified for attributes that have max_length > 0 (e.g.
[[attr()]]).
* cvt.c (maybe_warn_nodiscard): Add nodiscard message to
output, if applicable.
(convert_to_void): Allow constructors to be nodiscard-able (P1771).
gcc/testsuite/g++.dg/cpp0x
* gen-attrs-67.C: Test new error message for empty-parenthesis-list.
gcc/testsuite/g++.dg/cpp2a
* nodiscard-construct.C: New test.
* nodiscard-once.C: New test.
* nodiscard-reason-nonstring.C: New test.
* nodiscard-reason-only-one.C: New test.
* nodiscard-reason.C: New test.
Reviewed-by: Jason Merrill <jason@redhat.com>
From-SVN: r277200
GCC Administrator [Sat, 19 Oct 2019 00:18:25 +0000 (00:18 +0000)]
Daily bump.
From-SVN: r277199
Martin Sebor [Fri, 18 Oct 2019 22:26:39 +0000 (22:26 +0000)]
PR tree-optimization/92157 - incorrect strcmp() == 0 result for unknown strings
gcc/testsuite/ChangeLog:
PR tree-optimization/92157
* gcc.dg/strlenopt-69.c: Disable test failing due to PR 92155.
* gcc.dg/strlenopt-87.c: New test.
gcc/ChangeLog:
PR tree-optimization/92157
* tree-ssa-strlen.c (handle_builtin_string_cmp): Be prepared for
compute_string_length to return a negative result.
From-SVN: r277194
Richard Earnshaw [Fri, 18 Oct 2019 19:05:25 +0000 (19:05 +0000)]
[arm] Fix testsuite nit when compiling for thumb2
In thumb2 we now generate a NEGS instruction rather than RSBS, so this
test needs updating.
* gcc.target/arm/negdi-3.c: Update expected output to allow NEGS.
From-SVN: r277192
Richard Earnshaw [Fri, 18 Oct 2019 19:05:16 +0000 (19:05 +0000)]
[arm] Improvements to negvsi4 and negvdi4.
The generic expansion code for negv does not try the subv patterns,
but instead emits a sub and a compare separately. Fortunately, the
patterns can make use of the new subv operations, so just call those.
We can also rewrite this using an iterator to simplify things further.
Finally, we can now make negvdi4 work on Thumb2 as well as Arm.
* config/arm/arm.md (negv<SIDI:mode>3): New expansion rule.
(negvsi3, negvdi3): Delete.
(negdi2_compare): Delete.
From-SVN: r277191
Richard Earnshaw [Fri, 18 Oct 2019 19:05:09 +0000 (19:05 +0000)]
[arm] Early expansion of subvdi4
This patch adds early expansion of subvdi4. The expansion sequence
is broadly based on the expansion of usubvdi4.
* config/arm/arm.md (subvdi4): Decompose calculation into 32-bit
operations.
(subdi3_compare1): Delete pattern.
(subvsi3_borrow): New insn pattern.
(subvsi3_borrow_imm): Likewise.
From-SVN: r277190
Richard Earnshaw [Fri, 18 Oct 2019 19:05:01 +0000 (19:05 +0000)]
[arm] Improve constant handling for subvsi4.
This patch addresses constant handling in subvsi4. Either operand may
be a constant. If the second input (operand[2]) is a constant, then
we can canonicalize this into an addition form, provided we take care
of the INT_MIN case. In that case the negation has to handle the fact
that -INT_MIN is still INT_MIN, and we need to ensure that a subtract
operation is performed rather than an addition. The remaining cases
are largely duals of the usubvsi4 expansion.
This patch also fixes a technical correctness bug in the old
expansion, where we did not really describe the test for overflow in
the RTL. We seem to have got away with that, however...
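A short worked example of why the INT_MIN case is special (a sketch, not the committed test): a - INT_MIN equals a + 0x80000000, so the subtraction form overflows exactly when a >= 0, while the naively negated addition form a + INT_MIN overflows in exactly the opposite cases.
#include <climits>
bool sub_intmin_overflows (int a, int *res)
{
  // Must stay a subtraction: this overflows exactly when a >= 0.
  return __builtin_sub_overflow (a, INT_MIN, res);
}
int main ()
{
  int r;
  return sub_intmin_overflows (0, &r) ? 0 : 1;   // 0 - INT_MIN overflows
}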
* config/arm/arm.md (subv<mode>4): Delete.
(subvdi4): New expander pattern.
(subvsi4): Likewise. Handle some immediate values.
(subvsi3_intmin): New insn pattern.
(subvsi3): Likewise.
(subvsi3_imm1): Likewise.
* config/arm/arm.c (select_cc_mode): Also allow minus for CC_V
idioms.
From-SVN: r277189
Richard Earnshaw [Fri, 18 Oct 2019 19:04:54 +0000 (19:04 +0000)]
[arm] Early expansion of usubvdi4.
This patch adds early expansion of usubvdi4, allowing us to handle some
constants in place, which previously we were unable to do.
* config/arm/arm.md (usubvdi4): Allow registers or integers for
incoming operands. Early split the calculation into SImode
operations.
(usubvsi3_borrow): New insn pattern.
(usubvsi3_borrow_imm): Likewise.
From-SVN: r277188
Richard Earnshaw [Fri, 18 Oct 2019 19:04:46 +0000 (19:04 +0000)]
[arm] Improve constant handling for usubvsi4.
This patch improves the expansion of usubvsi4 by allowing suitable
constants to be passed directly. Unlike normal subtraction, either
operand may be a constant (and indeed I have seen cases where both can
be with LTO enabled). One interesting testcase that improves as a
result of this is:
unsigned f6 (unsigned a)
{
unsigned x;
return __builtin_sub_overflow (5U, a, &x) ? 0 : x;
}
Which previously compiled to:
rsbs r3, r0, #5
cmp r0, #5
movls r0, r3
movhi r0, #0
but now generates the optimal sequence:
rsbs r0, r0, #5
movcc r0, #0
* config/arm/arm.md (usubv<mode>4): Delete expansion.
(usubvsi4): New pattern. Allow some immediate values for inputs.
(usubvdi4): New pattern.
From-SVN: r277187
Richard Earnshaw [Fri, 18 Oct 2019 19:04:38 +0000 (19:04 +0000)]
[arm] Early split addvdi4
This patch adds early splitting for addvdi4; it's very similar to the
uaddvdi4 splitter, but the details are just different enough in
places, especially for the patterns that match the splitting, where we
have to compare against the non-widened version to detect if overflow
occurred.
I've also added a testcase to the testsuite for a couple of constants
that caught me out during the development of this patch. They're
probably arm-specific values, but the test is generic enough that I've
included it for all targets.
[gcc]
* config/arm/arm.c (arm_select_cc_mode): Allow either the first
or second operand of the PLUS inside a DImode equality test to be
sign-extend when selecting CC_Vmode.
* config/arm/arm.md (addvdi4): Early-split the operation into SImode
instructions.
(addsi3_cin_vout_reg, addsi3_cin_vout_imm, addsi3_cin_vout_0): New
expand patterns.
(addsi3_cin_vout_reg_insn, addsi3_cin_vout_imm_insn): New patterns.
(addsi3_cin_vout_0): Likewise.
(adddi3_compareV): Delete.
[gcc/testsuite]
* gcc.dg/builtin-arith-overflow-3.c: New test.
From-SVN: r277186
Richard Earnshaw [Fri, 18 Oct 2019 19:04:30 +0000 (19:04 +0000)]
[arm] Allow the summation result of signed add-with-overflow to be discarded.
This patch matches the signed add-with-overflow patterns when the
summation itself is dropped. In this case we can use CMN (or CMP with
some immediates). There are a small number of constants in thumb2
where this can result in less dense code (as we lack 16-bit CMN with
immediate patterns). To handle this we use peepholes to try these
alternatives when either a scratch is available (0 <= i <= 7) or the
original register is dead (0 <= i <= 255). We don't use a scratch in
the pattern as if those conditions are not satisfied then the 32-bit
form is preferable to forcing a reload.
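The kind of source-level shape being matched (an illustrative sketch): only the overflow flag is consumed, so the addition result can be dropped and a flag-setting compare-negative (CMN) used instead.
bool add_would_overflow (int a, int b)
{
  int unused;
  // Only the boolean overflow result is used; the sum itself is dead.
  return __builtin_add_overflow (a, b, &unused);
}
int main () { return add_would_overflow (1, 2) ? 1 : 0; }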
* config/arm/arm.md (addsi3_compareV_reg_nosum): New insn.
(addsi3_compareV_imm_nosum): New insn. Also add peephole2 patterns
to transform this back into the summation version when that leads
to smaller code.
From-SVN: r277185
Richard Earnshaw [Fri, 18 Oct 2019 19:04:22 +0000 (19:04 +0000)]
[arm] Improve code generation for addvsi4.
Similar to the improvements for uaddvsi4, this patch improves the code
generation for addvsi4 to handle immediates and to add alternatives
that better target thumb2. To do this we separate out the expansion
of addvsi4 from that of addvdi4 and then add an additional pattern
to handle constants. Also, while doing this I've fixed the incorrect
usage of NE instead of COMPARE in the generated RTL.
* config/arm/arm.md (addv<mode>4): Delete.
(addvsi4): New pattern. Handle immediate values that the architecture
supports.
(addvdi4): New pattern.
(addsi3_compareV): Rename to ...
(addsi3_compareV_reg): ... this. Add constraints for thumb2 variants
and use COMPARE rather than NE.
(addsi3_compareV_imm): New pattern.
* config/arm/arm.c (arm_select_cc_mode): Return CC_Vmode for
a signed-overflow check.
From-SVN: r277184
Richard Earnshaw [Fri, 18 Oct 2019 19:04:15 +0000 (19:04 +0000)]
[arm] Early expansion of uaddvdi4.
This code borrows heavily from the uaddvti4 expansion for aarch64 since
the principles are similar. Firstly, if one of the low words of
the expansion is 0, we can simply copy the other low word to the
destination and use uaddvsi4 for the upper word. If that doesn't work,
we have to handle three possible cases for the upper word (the lower
word is simply an add-with-carry operation as for adddi3): zero in the
upper word, some other constant, and a register (each has a different
canonicalization). We use CC_ADCmode (a new CC mode variant) to
describe the cases, as the introduction of the carry means we can
no longer use the normal overflow trick of comparing the sum against
one of the operands.
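A conceptual model of the decomposition (a sketch in C terms, not GCC's internal code): the low words are added first, the carry is fed into the high addition, and, as noted above, with a carry-in the usual "sum < operand" overflow test has to become "sum <= operand".
bool uadd64_overflows (unsigned a_lo, unsigned a_hi,
                       unsigned b_lo, unsigned b_hi,
                       unsigned *r_lo, unsigned *r_hi)
{
  unsigned lo = a_lo + b_lo;
  unsigned cin = lo < a_lo;              // carry out of the low-word addition
  unsigned hi = a_hi + b_hi + cin;
  *r_lo = lo;
  *r_hi = hi;
  return cin ? hi <= a_hi : hi < a_hi;   // carry out of the high-word addition
}
int main ()
{
  unsigned lo, hi;
  return uadd64_overflows (~0u, ~0u, 1u, 0u, &lo, &hi) ? 0 : 1;   // UINT64_MAX + 1 overflows
}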
* config/arm/arm-modes.def (CC_ADC): New CC mode.
* config/arm/arm.c (arm_select_cc_mode): Detect selection of
CC_ADCmode.
(maybe_get_arm_condition_code): Handle CC_ADCmode.
* config/arm/arm.md (uaddvdi4): Early expansion of unsigned addition
with overflow.
(addsi3_cin_cout_reg, addsi3_cin_cout_imm, addsi3_cin_cout_0): New
expand patterns.
(addsi3_cin_cout_reg_insn, addsi3_cin_cout_0_insn): New insn patterns
(addsi3_cin_cout_imm_insn): Likewise.
(adddi3_compareC): Delete insn.
* config/arm/predicates.md (arm_carry_operation): Handle CC_ADCmode.
From-SVN: r277183
Richard Earnshaw [Fri, 18 Oct 2019 19:04:06 +0000 (19:04 +0000)]
[arm] Handle immediate values in uaddvsi4
The uaddv patterns in the arm back-end do not currently handle immediates
during expansion. This patch adds this support for uaddvsi4. It's really
a stepping-stone towards early expansion of uaddvdi4, but it is complete and
a useful change in its own right.
Whilst making this change I also observed that we really had two patterns
that did exactly the same thing, but with slightly different properties;
consequently I've cleaned up all of the add-and-compare patterns to bring
some consistency.
* config/arm/arm.md (adddi3): Call gen_addsi3_compare_op1.
(uaddv<mode>4): Delete expansion pattern.
(uaddvsi4): New pattern.
(uaddvdi4): Likewise.
(addsi3_compareC): Delete pattern, change callers to use
addsi3_compare_op1.
(addsi3_compare_op1): No-longer anonymous. Clean up constraints to
reduce the number of alternatives and re-work type attribute handling.
(addsi3_compare_op2): Clean up constraints to reduce the number of
alternatives and re-work type attribute handling.
(compare_addsi2_op0): Likewise.
(compare_addsi2_op1): Likewise.
From-SVN: r277182
Richard Earnshaw [Fri, 18 Oct 2019 19:03:58 +0000 (19:03 +0000)]
[arm] Cleanup dead code - old support for DImode comparisons
Now that all the major patterns for DImode have been converted to
early expansion, we can safely clean up some dead code for the old way
of handling DImode.
* config/arm/arm-modes.def (CC_NCV, CC_CZ): Delete CC modes.
* config/arm/arm.c (arm_select_cc_mode): Remove old selection code
for DImode operands.
(arm_gen_dicompare_reg): Remove unreachable expansion code.
(maybe_get_arm_condition_code): Remove support for CC_CZmode and
CC_NCVmode.
* config/arm/arm.md (arm_cmpdi_insn): Delete.
(arm_cmpdi_unsigned): Delete.
From-SVN: r277181
Richard Earnshaw [Fri, 18 Oct 2019 19:03:50 +0000 (19:03 +0000)]
[arm] Handle some constant comparisons using rsbs+rscs
In a small number of cases it is preferable to handle comparisons with
constants using the sequence
RSBS tmp, Xlo, constlo
RSCS tmp, Xhi, consthi
which allows us to handle a small number of LE/GT/LEU/GEU cases when
changing the code to use LT/GE/LTU/GEU would make the constant more
expensive. Sadly, we cannot do this on Thumb, since it requires RSC, so in
that case we now always use the incremented constant, as that normally
still works out cheaper than forcing the entire constant into a register.
Further investigation has also shown that the canonicalization of a
reverse subtract and compare is valid for signed as well as unsigned values,
so we relax the restriction on selecting CC_RSBmode to allow all types
of compare.
* config/arm/arm.c (arm_const_double_prefer_rsbs_rsc): New function.
(arm_canonicalize_comparison): For GT/LE/GTU/GEU, use the constant
unchanged only if that will be cheaper.
(arm_select_cc_mode): Recognize a swapped comparison that will
be regenerated using RSBS or RSCS. Relax restriction on selecting
CC_RSBmode.
(arm_gen_dicompare_reg): Handle LE/GT/LEU/GEU comparisons against
a constant.
(arm_gen_compare_reg): Handle compare (CONST, X) when the mode
is CC_RSBmode.
(maybe_get_arm_condition_code): CC_RSBmode now returns the same codes
as CCmode.
* config/arm/arm.md (rsb_imm_compare_scratch): New pattern.
(rscsi3_<CC_EXTEND>out_scratch): New pattern.
From-SVN: r277180
Richard Earnshaw [Fri, 18 Oct 2019 19:03:43 +0000 (19:03 +0000)]
[arm] early split most DImode comparison operations.
This patch does most of the work for early splitting the DImode
comparisons. We now handle EQ, NE, LT, GE, LTU and GEU during early
expansion, in addition to EQ and NE, for which the expansion has now
been reworked to use a standard conditional-compare pattern already in
the back-end.
To handle this we introduce two new condition flag modes that are used
when comparing the upper words of decomposed DImode values: one for
signed, and one for unsigned comparisons. CC_Bmode (B for Borrow) is
essentially the inverse of CC_Cmode and is used when the carry flag is
set by a subtraction of unsigned values.
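A conceptual model of the signed case (a sketch, not the compiler code): subtract the low words to get a borrow, then let the high words, with the borrow folded in, decide the result; the new CC modes describe exactly this "upper word plus borrow" comparison.
bool lt64 (unsigned a_lo, int a_hi, unsigned b_lo, int b_hi)
{
  unsigned borrow = a_lo < b_lo;                     // borrow from the low-word subtract
  long long hi = (long long) a_hi - b_hi - borrow;   // widened so the sign is exact
  return hi < 0;                                     // signed a < b
}
int main () { return lt64 (0, -1, 0, 0) ? 0 : 1; }   // -0x100000000 < 0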
* config/arm/arm-modes.def (CC_NV, CC_B): New CC modes.
* config/arm/arm.c (arm_select_cc_mode): Recognize constructs that
need these modes.
(arm_gen_dicompare_reg): New code to early expand the sub-operations
of EQ, NE, LT, GE, LTU and GEU.
* config/arm/iterators.md (CC_EXTEND): New code attribute.
* config/arm/predicates.md (arm_adcimm_operand): New predicate..
* config/arm/arm.md (cmpsi3_carryin_<CC_EXTEND>out): New pattern.
(cmpsi3_imm_carryin_<CC_EXTEND>out): Likewise.
(cmpsi3_0_carryin_<CC_EXTEND>out): Likewise.
From-SVN: r277179
Richard Earnshaw [Fri, 18 Oct 2019 19:03:35 +0000 (19:03 +0000)]
[arm] Improve handling of DImode comparisions against constants.
In almost all cases it is better to handle inequality comparisons against constants
by transforming comparisons of the form (reg <GE/LT/GEU/LTU> const) into
(reg <GT/LE/GTU/LEU> (const+1)). However, there are many cases that we could
handle but currently fail to do so because we force the constant into a
register too early in the pattern expansion. To permit this we need
to defer forcing the constant into a register until after we've had the chance
to do the transform - in some cases that may even mean that we no longer need
to force the constant into a register at all. For example, on Arm, the case:
_Bool f8 (unsigned long long a) { return a > 0xffffffff; }
previously compiled to
mov r3, #0
cmp r1, r3
mvn r2, #0
cmpeq r0, r2
movhi r0, #1
movls r0, #0
bx lr
But now compiles to
cmp r1, #1
cmpeq r0, #0
movcs r0, #1
movcc r0, #0
bx lr
Although not yet completely optimal, this is certainly better than
before.
* config/arm/arm.md (cbranchdi4): Accept reg_or_int_operand for
operand 2.
(cstoredi4): Similarly, but for operand 3.
* config/arm/arm.c (arm_canoncialize_comparison): Allow canonicalization
of unsigned compares with a constant on Arm. Prefer using const+1 and
adjusting the comparison over swapping the operands whenever the
original constant was not valid.
(arm_gen_dicompare_reg): If Y is not a valid operand, force it to a
register here.
(arm_validize_comparison): Do not force invalid DImode operands to
registers here.
From-SVN: r277178
Richard Earnshaw [Fri, 18 Oct 2019 19:03:27 +0000 (19:03 +0000)]
[arm] Early split simple DImode equality comparisons
This is the first step of early splitting all the DImode comparison
operations. We start by factoring the DImode handling out of
arm_gen_compare_reg into its own function.
Simple DImode equality comparisons (such as equality with zero, or
equality with a constant that is zero in one of the two word values
that it comprises) can be done using a single subtract followed by an
ORRS instruction. This avoids the need for conditional execution.
For example, (r0 != 5) can be written as
SUB Rt, R0, #5
ORRS Rt, Rt, R1
The ORRS is now expanded using an SImode pattern that already exists
in the MD file and this gives the register allocator more freedom to
select registers (consecutive pairs are no longer required).
Furthermore, we can then delete the arm_cmpdi_zero pattern as it is
no longer required. We use SUB for the value adjustment as this has a
generally more flexible range of immediates than XOR and, what's more,
can be relaxed in Thumb2 to a 16-bit SUBS
instruction.
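A concrete (illustrative) C-level form of the same example:
/* Illustrative only: a DImode equality test against a small constant.
   It can be compiled as SUB Rt, R0, #5 followed by ORRS Rt, Rt, R1,
   with the result taken from the Z flag and no conditional execution.  */
_Bool
ne5 (unsigned long long a)
{
  return a != 5;
}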
* config/arm/arm.c (arm_select_cc_mode): For DImode equality tests
return CC_Zmode if comparing against a constant where one word is
zero.
(arm_gen_compare_reg): Split DImode handling to ...
(arm_gen_dicompare_reg): ... here. Handle equality comparisons
against simple constants.
* config/arm/arm.md (arm_cmpdi_zero): Delete pattern.
From-SVN: r277177
Richard Earnshaw [Fri, 18 Oct 2019 19:03:19 +0000 (19:03 +0000)]
[arm] Add alternative canonicalizations for subtract-with-carry + shift
This patch adds a couple of alternative canonicalizations to allow
combine to match a subtract-with-carry operation when one of the operands
is shifted first. The most common case of this is when combining a
sign-extend of one operand with a long-long value during subtraction.
The RSC variant is only enabled for Arm, the SBC variant for any 32-bit
compilation.
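For illustration (example of mine, not a testcase from the patch), the shifted operand typically comes from a sign extension:
/* Illustrative only: subtracting a sign-extended int from a long long.
   The high word becomes a subtract-with-carry whose second operand is
   the sign word, i.e. the int shifted arithmetically right by 31 --
   the shifted SBC/RSC form these alternative patterns let combine match.  */
long long
subext (long long x, int y)
{
  return x - y;
}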
* config/arm/arm.md (subsi3_carryin_shift_alt): New pattern.
(rsbsi3_carryin_shift_alt): Likewise.
From-SVN: r277176
Richard Earnshaw [Fri, 18 Oct 2019 19:03:11 +0000 (19:03 +0000)]
[arm] Implement negscc using SBC when appropriate.
When the carry flag is appropriately set by a comparison, negscc
patterns can expand into a simple SBC of a register with itself. This
means we can convert two conditional instructions into a single
non-conditional instruction. Furthermore, in Thumb2 we can avoid the
need for an IT instruction as well. This patch also fixes the remaining
testcase that we initially XFAILed in the first patch of this series.
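For illustration (not from the patch), the classic source form this matches is:
/* Illustrative only: after CMP a, b sets the carry flag, SBC r, r, r
   yields -1 when the compare borrowed (a < b unsigned) and 0 otherwise,
   so the whole function needs no conditional instructions.  */
int
negcc (unsigned a, unsigned b)
{
  return a < b ? -1 : 0;
}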
gcc:
* config/arm/arm.md (negscc_borrow): New pattern.
(mov_negscc): Don't split if the insn would match negscc_borrow.
* config/arm/thumb2.md (thumb2_mov_negscc): Likewise.
(thumb2_mov_negscc_strict_it): Likewise.
testsuite:
* gcc.target/arm/negdi-3.c: Remove XFAIL markers.
From-SVN: r277175
Richard Earnshaw [Fri, 18 Oct 2019 19:03:03 +0000 (19:03 +0000)]
[arm] Reduce cost of insns that are simple reg-reg moves.
Consider this sequence during combine:
Trying 18, 7 -> 22:
18: r118:SI=r122:SI
REG_DEAD r122:SI
7: r114:SI=0x1-r118:SI-ltu(cc:CC_RSB,0)
REG_DEAD r118:SI
REG_DEAD cc:CC_RSB
22: r1:SI=r114:SI
REG_DEAD r114:SI
Failed to match this instruction:
(set (reg:SI 1 r1 [+4 ])
(minus:SI (geu:SI (reg:CC_RSB 100 cc)
(const_int 0 [0]))
(reg:SI 122)))
Successfully matched this instruction:
(set (reg:SI 114)
(geu:SI (reg:CC_RSB 100 cc)
(const_int 0 [0])))
Successfully matched this instruction:
(set (reg:SI 1 r1 [+4 ])
(minus:SI (reg:SI 114)
(reg:SI 122)))
allowing combination of insns 18, 7 and 22
original costs 4 + 4 + 4 = 12
replacement costs 8 + 4 = 12
The costs are all correct, but we really don't want this combination
to take place. The original costs contain an insn that is a simple
move of one pseudo register to another and it is extremely likely that
register allocation will eliminate this insn entirely. On the other
hand, the resulting sequence really does expand into a sequence that
costs 12 (i.e. 3 insns).
We don't want to prevent combine from eliminating such moves, as this
can expose more combine opportunities, but we shouldn't rate them as
profitable in themselves. We can do this by adjusting the costs
slightly so that the benefit of eliminating such a simple insn is
reduced.
We only do this before register allocation; after allocation we give
such insns their full cost.
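A minimal sketch of the idea, assuming the usual insn_cost target hook interface (this is not the committed implementation):
/* Sketch only: before register allocation, charge a plain
   register-to-register move less than a full instruction, since the
   allocator will usually eliminate it; otherwise use the pattern cost.  */
static int
arm_insn_cost (rtx_insn *insn, bool speed)
{
  rtx set = single_set (insn);
  if (!reload_completed && set
      && REG_P (SET_DEST (set)) && REG_P (SET_SRC (set)))
    return COSTS_N_INSNS (1) / 2;
  return pattern_cost (PATTERN (insn), speed);
}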
* config/arm/arm.c (arm_insn_cost): New function.
(TARGET_INSN_COST): Override default definition.
From-SVN: r277174
Richard Earnshaw [Fri, 18 Oct 2019 19:02:50 +0000 (19:02 +0000)]
[arm] Correct cost calculations involving borrow for subtracts.
The rtx_cost calculations when a borrow operation was being performed were
not being calculated correctly. The borrow is free as part of the
subtract-with-carry instructions. This patch recognizes the various
idioms that can describe this and returns the correct costs.
* config/arm/arm.c (arm_rtx_costs_internal, case MINUS): Handle
borrow operations.
From-SVN: r277173
Richard Earnshaw [Fri, 18 Oct 2019 19:02:43 +0000 (19:02 +0000)]
[arm] Correctly cost addition with a carry-in
The cost routine for Arm and Thumb2 was not recognising the idioms that
describe addition with carry; this made the instructions appear more
expensive than they really are, which can occasionally lead to poor
choices by combine. Recognising all the possible variants is a little
trickier than normal because the expressions can become complex enough
that there is no single canonical form.
* config/arm/arm.c (strip_carry_operation): New function.
(arm_rtx_costs_internal, case PLUS): Handle addition with carry-in
for SImode.
From-SVN: r277172
Richard Earnshaw [Fri, 18 Oct 2019 19:02:35 +0000 (19:02 +0000)]
[arm] Introduce arm_carry_operation
An earlier patch introduced arm_borrow_operation; this one introduces
the carry variant, which is the same except that the logic of the
carry-setting is inverted. Having done this we can now match more
cases where the carry flag is propagated from comparisons with
different modes without having to define even more patterns. A few
small changes to the expand patterns are required to directly create
the carry representation.
The iterator LTUGEU is no longer needed and has been removed, as has
the code attribute 'cnb'.
Finally, we fix a long-standing bug which was probably inert before:
in Thumb2 a shift with ADC can only be by an immediate amount;
register-specified shifts are not permitted.
* config/arm/predicates.md (arm_carry_operation): New special
predicate.
* config/arm/iterators.md (LTUGEU): Delete iterator.
(cnb): Delete code attribute.
(optab): Delete ltu and geu elements.
* config/arm/arm.md (addsi3_carryin): Renamed from
addsi3_carryin_<optab>. Remove iterator and use arm_carry_operation.
(add0si3_carryin): Similarly, but from add0si3_carryin_<optab>.
(addsi3_carryin_alt2): Similarly, but from addsi3_carryin_alt2_<optab>.
(addsi3_carryin_clobercc): Similarly.
(addsi3_carryin_shift): Similarly. Do not allow register shifts in
Thumb2 state.
From-SVN: r277171
Richard Earnshaw [Fri, 18 Oct 2019 19:02:28 +0000 (19:02 +0000)]
[arm] Remove redundant DImode subtract patterns
Now that we early split DImode subtracts, the pattern that emitted the
original full subtract and the patterns matching zero-extend with
subtraction or negation are no longer useful.
* config/arm/arm.md (arm_subdi3): Delete insn.
(zextendsidi_negsi, negdi_extendsidi): Delete insn_and_split.
From-SVN: r277170
Richard Earnshaw [Fri, 18 Oct 2019 19:02:20 +0000 (19:02 +0000)]
[arm] Early split subdi3
This patch adds early splitting of subdi3 so that the individual
operations can be seen by the optimizers, particularly combine. This
should allow us to do at least as good a job as previously, but with
far fewer patterns in the machine description.
This is just the initial patch to add the early splitting. The
cleanups will follow later.
A special trick is used to handle the 'reverse subtract and compare'
where a register is subtracted from a constant. The natural
comparison
(COMPARE (const) (reg))
is not canonical in this case and combine will never generate it
correctly (it will try to swap the order of the operands). To handle this
we write the comparison as
(COMPARE (NOT (reg)) (~const)),
which gives the same result for EQ, NE, LTU, LEU, GTU and GEU (in two's
complement, ~reg - ~const is equal to const - reg), and those are all
the cases we are really interested in here.
Finally, we delete the negdi2 pattern. The generic expanders will use
our new subdi3 expander if this pattern is missing and that can handle
the negate case just fine.
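For illustration (example of mine), the reverse-subtract case arises from code such as:
/* Illustrative only: subtracting a 64-bit value from a constant.  On
   Arm the early expansion can use RSBS on the low word (reverse
   subtract, setting the borrow) and RSC on the high word to consume it.  */
unsigned long long
revsub (unsigned long long x)
{
  return 100 - x;
}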
* config/arm/arm-modes.def (CC_RSB): New CC mode.
* config/arm/predicates.md (arm_borrow_operation): Handle CC_RSBmode.
* config/arm/arm.c (arm_select_cc_mode): Detect when we should
return CC_RSBmode.
(maybe_get_arm_condition_code): Handle CC_RSBmode.
* config/arm/arm.md (subsi3_carryin): Make this pattern available to
expand.
(subdi3): Rewrite to early-expand the sub-operations.
(rsb_im_compare): New pattern.
(negdi2): Delete.
(negdi2_insn): Delete.
(arm_negsi2): Correct type attribute to alu_imm.
(negsi2_0compare): New insn pattern.
(negsi2_carryin): New insn pattern.
From-SVN: r277169
Richard Earnshaw [Fri, 18 Oct 2019 19:02:12 +0000 (19:02 +0000)]
[arm] fix constraints on addsi3_carryin_alt2
addsi3_carryin_alt2 has a more strict constraint than the predicate
when adding a constant. This leads to sub-optimal code in some
circumstances.
* config/arm/arm.md (addsi3_carryin_alt2): Use arm_not_operand for
operand 2.
From-SVN: r277168
Richard Earnshaw [Fri, 18 Oct 2019 19:02:05 +0000 (19:02 +0000)]
[arm] Rewrite addsi3_carryin_shift_<optab> in canonical form
The add-with-carry pattern that involves a shift does not currently match
because it is not in the canonical form generated by combine. Fixing
this is simply a matter of re-ordering the operands.
* config/arm/arm.md (addsi3_carryin_shift_<optab>): Reorder operands
to match canonical form.
From-SVN: r277167
Richard Earnshaw [Fri, 18 Oct 2019 19:01:57 +0000 (19:01 +0000)]
[arm] Early split zero- and sign-extension
This patch changes the insn patterns for zero- and sign-extend into
define_expands that generate the appropriate word operations
immediately.
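Illustrative examples (names are mine) of the operations affected:
/* Illustrative only: zero extension becomes a move of the low word plus
   clearing the high word; sign extension becomes a move plus an
   arithmetic shift right by 31 to form the high word.  */
unsigned long long zext (unsigned int x) { return x; }
long long sext (int x) { return x; }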
* config/arm/arm.md (zero_extend<mode>di2): Convert to define_expand.
(extend<mode>di2): Likewise.
From-SVN: r277166
Richard Earnshaw [Fri, 18 Oct 2019 19:01:49 +0000 (19:01 +0000)]
[arm] Perform early splitting of adddi3.
This patch causes the expansion of adddi3 to split the operation
immediately for Arm and Thumb-2. This is desirable as it frees up the
register allocator to pick whatever combination of registers suits
best and reduces the number of auxiliary patterns that we need in the
back-end. Three of the testcases that we disabled earlier are already
fixed by this patch. Finally, we add a new pattern to match the
canonicalization of add-with-carry when using an immediate of zero.
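As an illustrative example (not from the patch) of what the early split exposes:
/* Illustrative only: a 64-bit addition now expands immediately to an
   ADDS of the low words followed by an ADC of the high words, so combine
   and the register allocator see the individual 32-bit operations.  */
unsigned long long
add64 (unsigned long long a, unsigned long long b)
{
  return a + b;
}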
gcc:
* config/arm/arm-protos.h (arm_decompose_di_binop): New prototype.
* config/arm/arm.c (arm_decompose_di_binop): New function.
* config/arm/arm.md (adddi3): Also accept any const_int for op2.
If not generating Thumb-1 code, decompose the operation into 32-bit
pieces.
(add0si_carryin_<optab>): New pattern.
testsuite:
* gcc.target/arm/pr53447-1.c: Remove XFAIL.
* gcc.target/arm/pr53447-3.c: Remove XFAIL.
* gcc.target/arm/pr53447-4.c: Remove XFAIL.
From-SVN: r277165
Richard Earnshaw [Fri, 18 Oct 2019 19:01:40 +0000 (19:01 +0000)]
[arm] Rip out DImode addition and subtraction splits.
The first step towards early splitting of addition and subtraction at
DImode is to rip out the old patterns that are designed to propagate
DImode through the RTL optimization passes and then do late splitting.
This patch does cause some code size regressions, but it should still
execute correctly. We will progressively add back the optimizations
we had here in later patches.
A small number of tests in the Arm-specific testsuite do fail as a
result of this patch, but that's to be expected, since the
optimizations they are looking for have just been removed. I've kept
the tests, but XFAILed them for now.
One small technical change is also done in this patch as part of the
cleanup: the uaddv<mode>4 expander is changed to use LTU as the branch
comparison. This eliminates the need for CC_Cmode to recognize
somewhat bogus equality constraints.
gcc:
* config/arm/arm.md (adddi3): Only accept register operands.
(arm_adddi3): Convert to simple insn with no split. Do not accept
constants.
(adddi_sesidi_di): Delete pattern.
(adddi_zesidi_di): Likewise.
(uaddv<mode>4): Use LTU as condition for branch.
(adddi3_compareV): Convert to simple insn with no split.
(addsi3_compareV_upper): Delete pattern.
(adddi3_compareC): Convert to simple insn with no split. Correct
flags setting expression.
(addsi3_compareC_upper): Delete pattern.
(addsi3_compareC): Correct flags setting expression.
(subdi3_compare1): Convert to simple insn with no split.
(subsi3_carryin_compare): Delete pattern.
(arm_subdi3): Convert to simple insn with no split.
(subdi_zesidi): Delete pattern.
(subdi_di_sesidi): Delete pattern.
(subdi_zesidi_di): Delete pattern.
(subdi_sesidi_di): Delete pattern.
(subdi_zesidi_zesidi): Delete pattern.
(negvdi3): Use s_register_operand.
(negdi2_compare): Convert to simple insn with no split.
(negdi2_insn): Likewise.
(negsi2_carryin_compare): Delete pattern.
(negdi_zero_extendsidi): Delete pattern.
(arm_cmpdi_insn): Convert to simple insn with no split.
(negdi2): Don't call gen_negdi2_neon.
* config/arm/neon.md (adddi3_neon): Delete pattern.
(subdi3_neon): Delete pattern.
(negdi2_neon): Delete pattern.
(splits for negdi2_neon): Delete splits.
testsuite:
* gcc.target/arm/negdi-3.c: Add XFAILS.
* gcc.target/arm/pr53447-1.c: Likewise.
* gcc.target/arm/pr53447-3.c: Likewise.
* gcc.target/arm/pr53447-4.c: Likewise.
From-SVN: r277164