+2015-01-09 Jakub Jelinek <jakub@redhat.com>
+
+ PR rtl-optimization/64536
+ * cfgrtl.c (rtl_tidy_fallthru_edge): Handle removal of degenerate
+ tablejumps.
+
+2015-01-09 Michael Meissner <meissner@linux.vnet.ibm.com>
+
+ Backport from mainline:
+ 2015-01-06 Michael Meissner <meissner@linux.vnet.ibm.com>
+
+ PR target/64505
+ * config/rs6000/rs6000.c (rs6000_secondary_reload): Return the
+ correct reload handler if -m32 -mpowerpc64 is used.
+
+2015-01-09 Sebastian Huber <sebastian.huber@embedded-brains.de>
+
+ Backport from mainline:
+ 2015-01-09 Sebastian Huber <sebastian.huber@embedded-brains.de>
+
+ * config/rs6000/rtems.h (CPP_OS_RTEMS_SPEC): Define __PPC_CPU_E6500__
+ for -mcpu=e6500.
+ * config/rs6000/t-rtems: Add e6500 multilibs.
+
+2015-01-09 Sebastian Huber <sebastian.huber@embedded-brains.de>
+
+ Backport from mainline:
+ 2015-01-09 Sebastian Huber <sebastian.huber@embedded-brains.de>
+
+ * config/rs6000/t-rtems: Add -mno-spe to soft-float multilib for
+ MPC8540.
+
+2015-01-09 Sebastian Huber <sebastian.huber@embedded-brains.de>
+
+ Backport from mainline:
+ 2015-01-09 Sebastian Huber <sebastian.huber@embedded-brains.de>
+
+ * config/rs6000/t-rtems: Use MULTILIB_REQUIRED instead of
+ MULTILIB_EXCEPTIONS.
+
+2015-01-09 Renlin Li <renlin.li@arm.com>
+
+ Backport from mainline:
+ 2014-08-12 Ramana Radhakrishnan <ramana.radhakrishnan@arm.com>
+
+ PR target/61413
+ * config/arm/arm.h (TARGET_CPU_CPP_BUILTINS): Fix definition
+ of __ARM_SIZEOF_WCHAR_T.
+
+2015-01-08 Christian Bruel <christian.bruel@st.com>
+
+ PR target/64507
+ * config/sh/sh-mem.cc (sh_expand_cmpnstr): Check 0 length.
+
+2015-01-03 John David Anglin <danglin@gcc.gnu.org>
+
+ * config/pa/pa.md (decrement_and_branch_until_zero): Use `Q' constraint
+ instead of `m' constraint. Likewise for unnamed movb comparison
+ patterns using reg_before_reload_operand predicate.
+ * config/pa/predicates.md (reg_before_reload_operand): Tighten
+ predicate to reject register index and LO_SUM DLT memory forms
+ after reload.
+
+2014-12-27 H.J. Lu <hongjiu.lu@intel.com>
+
+ Backport from mainline:
+ 2014-12-27 H.J. Lu <hongjiu.lu@intel.com>
+
+ PR target/64409
+ * config/i386/i386.c (ix86_function_type_abi): Issue an error
+ when ms_abi attribute is used with x32.
+
+2014-12-27 Uros Bizjak <ubizjak@gmail.com>
+
+ * config/i386/mmx.md (*vec_extractv2sf_1): Do not emit unpckhps.
+ Emit movshdup for SSE3 and shufps otherwise.
+ (*vec_extractv2si_1): Do not emit punpckhdq and unpckhps.
+ Emit pshufd for SSE2 and shufps otherwise.
+
+2014-12-24 Nick Clifton <nickc@redhat.com>
+
+ Backport from mainline:
+ 2014-06-13 Nick Clifton <nickc@redhat.com>
+
+ * config/rx/rx.h (JUMP_ALIGN): Return the log value if user
+ requested alignment is active.
+ (LABEL_ALIGN): Likewise.
+ (LOOP_ALIGN): Likewise.
+
+ 2014-03-25 Nick Clifton <nickc@redhat.com>
+
+ * config/rx/rx.c (rx_print_operand): Allow R operator to accept
+ SImode values.
+
+2014-12-17 Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
+
+ Backport from mainline
+ 2014-12-03 Ulrich Weigand <Ulrich.Weigand@de.ibm.com>
+
+ PR rtl-optimization/64010
+ * reload.c (push_reload): Before reusing a register contained
+ in an operand as input reload register, ensure that it is not
+ used in CALL_INSN_FUNCTION_USAGE.
+
+2014-12-15 Jakub Jelinek <jakub@redhat.com>
+
+ PR sanitizer/64265
+ * tsan.c (instrument_func_entry): Insert __tsan_func_entry
+ call on edge from entry block to single succ instead
+ of after labels of single succ of entry block.
+
+2014-12-14 H.J. Lu <hongjiu.lu@intel.com>
+
+ Backported from mainline
+ 2014-12-14 H.J. Lu <hongjiu.lu@intel.com>
+
+ PR rtl-optimization/64037
+ * combine.c (setup_incoming_promotions): Pass the argument
+ before any promotions happen to promote_function_mode.
+
+2014-12-14 H.J. Lu <hongjiu.lu@intel.com>
+
+ Backported from mainline
+ 2014-12-06 H.J. Lu <hongjiu.lu@intel.com>
+
+ PR target/64200
+ * config/i386/i386.c (decide_alg): Don't assert "alg != libcall"
+ for TARGET_INLINE_STRINGOPS_DYNAMICALLY.
+
+2014-12-13 Jakub Jelinek <jakub@redhat.com>
+
+ Backported from mainline
+ 2014-12-12 Jakub Jelinek <jakub@redhat.com>
+
+ PR tree-optimization/64269
+ * tree-ssa-forwprop.c (simplify_builtin_call): Bail out if
+ len2 or diff are too large.
+
+2014-12-11 Eric Botcazou <ebotcazou@adacore.com>
+
+ * doc/md.texi (Insn Lengths): Fix description of (pc).
+
+2014-12-11 Renlin Li <renlin.li@arm.com>
+
+ Backport from mainline
+ 2014-12-11 Renlin Li <renlin.li@arm.com>
+
+ * config/aarch64/aarch64.c (aarch64_parse_cpu): Don't define
+ selected_tune.
+ (aarch64_override_options): Use selected_cpu's tuning.
+
+2014-12-10 Bill Schmidt <wschmidt@linux.vnet.ibm.com>
+
+ Backport from mainline
+ 2014-09-02 Bill Schmidt <wschmidt@linux.vnet.ibm.com>
+
+ * config/rs6000/rs6000-builtin.def (XVCVSXDDP_SCALE): New
+ built-in definition.
+ (XVCVUXDDP_SCALE): Likewise.
+ (XVCVDPSXDS_SCALE): Likewise.
+ (XVCVDPUXDS_SCALE): Likewise.
+ * config/rs6000/rs6000-c.c (altivec_overloaded_builtins): Add
+ entries for VSX_BUILTIN_XVCVSXDDP_SCALE,
+ VSX_BUILTIN_XVCVUXDDP_SCALE, VSX_BUILTIN_XVCVDPSXDS_SCALE, and
+ VSX_BUILTIN_XVCVDPUXDS_SCALE.
+ * config/rs6000/rs6000-protos.h (rs6000_scale_v2df): New
+ prototype.
+ * config/rs6000/rs6000.c (real.h): New include.
+ (rs6000_scale_v2df): New function.
+ * config/rs6000/vsx.md (UNSPEC_VSX_XVCVSXDDP): New unspec.
+ (UNSPEC_VSX_XVCVUXDDP): Likewise.
+ (UNSPEC_VSX_XVCVDPSXDS): Likewise.
+ (UNSPEC_VSX_XVCVDPUXDS): Likewise.
+ (vsx_xvcvsxddp_scale): New define_expand.
+ (vsx_xvcvsxddp): New define_insn.
+ (vsx_xvcvuxddp_scale): New define_expand.
+ (vsx_xvcvuxddp): New define_insn.
+ (vsx_xvcvdpsxds_scale): New define_expand.
+ (vsx_xvcvdpsxds): New define_insn.
+ (vsx_xvcvdpuxds_scale): New define_expand.
+ (vsx_xvcvdpuxds): New define_insn.
+ * doc/extend.texi (vec_ctf): Add new prototypes.
+ (vec_cts): Likewise.
+ (vec_ctu): Likewise.
+ (vec_splat): Likewise.
+ (vec_div): Likewise.
+ (vec_mul): Likewise.
+
+ Backport from mainline
+ 2014-08-28 Bill Schmidt <wschmidt@linux.vnet.ibm.com>
+
+ * config/rs6000/altivec.h (vec_xl): New #define.
+ (vec_xst): Likewise.
+ * config/rs6000/rs6000-builtin.def (XXSPLTD_V2DF): New built-in.
+ (XXSPLTD_V2DI): Likewise.
+ (DIV_V2DI): Likewise.
+ (UDIV_V2DI): Likewise.
+ (MUL_V2DI): Likewise.
+ * config/rs6000/rs6000-c.c (altivec_overloaded_builtins): Add
+ entries for VSX_BUILTIN_XVRDPI, VSX_BUILTIN_DIV_V2DI,
+ VSX_BUILTIN_UDIV_V2DI, VSX_BUILTIN_MUL_V2DI,
+ VSX_BUILTIN_XXSPLTD_V2DF, and VSX_BUILTIN_XXSPLTD_V2DI).
+ * config/rs6000/vsx.md (UNSPEC_VSX_XXSPLTD): New unspec.
+ (UNSPEC_VSX_DIVSD): Likewise.
+ (UNSPEC_VSX_DIVUD): Likewise.
+ (UNSPEC_VSX_MULSD): Likewise.
+ (vsx_mul_v2di): New insn-and-split.
+ (vsx_div_v2di): Likewise.
+ (vsx_udiv_v2di): Likewise.
+ (vsx_xxspltd_<mode>): New insn.
+
+ Backport from mainline
+ 2014-08-20 Bill Schmidt <wschmidt@linux.vnet.ibm.com>
+
+ * config/rs6000/altivec.h (vec_cpsgn): New #define.
+ (vec_mergee): Likewise.
+ (vec_mergeo): Likewise.
+ (vec_cntlz): Likewise.
+	* config/rs6000/rs6000-c.c (altivec_overloaded_builtins): Add new
+ entries for VEC_AND, VEC_ANDC, VEC_MERGEH, VEC_MERGEL, VEC_NOR,
+ VEC_OR, VEC_PACKSU, VEC_XOR, VEC_PERM, VEC_SEL, VEC_VCMPGT_P,
+ VMRGEW, and VMRGOW.
+ * doc/extend.texi: Document various forms of vec_cpsgn,
+ vec_splats, vec_and, vec_andc, vec_mergeh, vec_mergel, vec_nor,
+ vec_or, vec_perm, vec_sel, vec_sub, vec_xor, vec_all_eq,
+ vec_all_ge, vec_all_gt, vec_all_le, vec_all_lt, vec_all_ne,
+ vec_any_eq, vec_any_ge, vec_any_gt, vec_any_le, vec_any_lt,
+ vec_any_ne, vec_mergee, vec_mergeo, vec_packsu, and vec_cntlz.
+
+ Backport from mainline
+ 2014-07-20 Bill Schmidt <wschmidt@linux.vnet.ibm.com>
+
+ * config/rs6000/altivec.md (unspec enum): Fix typo in UNSPEC_VSLDOI.
+ (altivec_vsldoi_<mode>): Likewise.
+
+2014-12-10 Jakub Jelinek <jakub@redhat.com>
+
+ PR tree-optimization/62021
+	* omp-low.c (simd_clone_adjust_return_type): Use
+	vector of pointer_sized_int_node types instead of vector of
+	pointer types.
+ (simd_clone_adjust_argument_types): Likewise.
+
+2014-12-10 Bill Schmidt <wschmidt@linux.vnet.ibm.com>
+
+ Backport from mainline:
+ 2014-12-09 Bill Schmidt <wschmidt@linux.vnet.ibm.com>
+
+ PR middle-end/64225
+ * tree-ssa-reassoc.c (acceptable_pow_call): Disable transformation
+ for BUILT_IN_POW when flag_errno_math is present.
+
+2014-12-10 Marek Polacek <polacek@redhat.com>
+
+ Backport from mainline
+ 2014-12-10 Marek Polacek <polacek@redhat.com>
+
+ PR tree-optimization/61686
+ * tree-ssa-reassoc.c (range_entry_cmp): Use q->high instead of
+ p->high.
+
+2014-12-09 David Edelsohn <dje.gcc@gmail.com>
+
+ Backport from mainline
+ 2014-12-05 David Edelsohn <dje.gcc@gmail.com>
+
+ * config/rs6000/xcoff.h (ASM_OUTPUT_ALIGNED_LOCAL): Append
+ alignment to section name. Increase default alignment to
+ word.
+
+2014-12-09 Uros Bizjak <ubizjak@gmail.com>
+
+ PR bootstrap/64213
+ Revert:
+ 2014-11-28 H.J. Lu <hongjiu.lu@intel.com>
+
+ PR rtl-optimization/64037
+ * combine.c (setup_incoming_promotions): Pass the argument
+ before any promotions happen to promote_function_mode.
+
+2014-12-09 Richard Biener <rguenther@suse.de>
+
+ PR tree-optimization/64191
+ * tree-vect-stmts.c (vect_stmt_relevant_p): Clobbers are
+ not relevant (nor are their uses).
+
+2014-12-07 Oleg Endo <olegendo@gcc.gnu.org>
+
+ Backport from mainline
+ 2014-12-07 Oleg Endo <olegendo@gcc.gnu.org>
+
+ PR target/50751
+ * config/sh/sh.md (extendqihi2): Allow only for TARGET_SH1.
+
+2014-12-05 H.J. Lu <hongjiu.lu@intel.com>
+
+ Backport from mainline
+ 2014-12-02 H.J. Lu <hongjiu.lu@intel.com>
+
+ PR target/64108
+ * config/i386/i386.c (decide_alg): Stop only if there aren't
+ any usable algorithms.
+
+2014-12-05 H.J. Lu <hongjiu.lu@intel.com>
+
+ Backport from mainline
+ 2014-11-28 H.J. Lu <hongjiu.lu@intel.com>
+
+ PR rtl-optimization/64037
+ * combine.c (setup_incoming_promotions): Pass the argument
+ before any promotions happen to promote_function_mode.
+
2014-12-04 Tobias Burnus <burnus@net-b.de>
* configure.ac
+2015-01-05 Eric Botcazou <ebotcazou@adacore.com>
+
+ PR ada/64492
+ * gcc-interface/Makefile.in (../stamp-tools): Reinstate dropped code.
+
2014-11-24 Eric Botcazou <ebotcazou@adacore.com>
* gcc-interface/trans.c (push_range_check_info): Replace early test
# Build directory for the tools. Let's copy the target-dependent
# sources using the same mechanism as for gnatlib. The other sources are
# accessed using the vpath directive below
-# Note: dummy target, stamp-tools is mainly handled by gnattools.
../stamp-tools:
+ -$(RM) tools/*
+ -$(RMDIR) tools
+ -$(MKDIR) tools
+ -(cd tools; $(LN_S) ../sdefault.adb ../snames.ads ../snames.adb .)
+ -$(foreach PAIR,$(TOOLS_TARGET_PAIRS), \
+ $(RM) tools/$(word 1,$(subst <, ,$(PAIR)));\
+ $(LN_S) $(fsrcpfx)ada/$(word 2,$(subst <, ,$(PAIR))) \
+ tools/$(word 1,$(subst <, ,$(PAIR)));)
touch ../stamp-tools
# when compiling the tools, the runtime has to be first on the path so that
&& (any_uncondjump_p (q)
|| single_succ_p (b)))
{
+ rtx label, table;
+
+ if (tablejump_p (q, &label, &table))
+ {
+ /* The label is likely mentioned in some instruction before
+ the tablejump and might not be DCEd, so turn it into
+ a note instead and move before the tablejump that is going to
+ be deleted. */
+ const char *name = LABEL_NAME (label);
+ PUT_CODE (label, NOTE);
+ NOTE_KIND (label) = NOTE_INSN_DELETED_LABEL;
+ NOTE_DELETED_LABEL_NAME (label) = name;
+ reorder_insns (label, label, PREV_INSN (q));
+ delete_insn (table);
+ }
+
#ifdef HAVE_cc0
/* If this was a conditional jump, we need to also delete
the insn that set cc0. */
uns3 = TYPE_UNSIGNED (DECL_ARG_TYPE (arg));
/* The mode and signedness of the argument as it is actually passed,
- after any TARGET_PROMOTE_FUNCTION_ARGS-driven ABI promotions. */
- mode3 = promote_function_mode (DECL_ARG_TYPE (arg), mode2, &uns3,
+ see assign_parm_setup_reg in function.c. */
+ mode3 = promote_function_mode (TREE_TYPE (arg), mode1, &uns3,
TREE_TYPE (cfun->decl), 0);
/* The mode of the register in which the argument is being passed. */
if (strlen (cpu->name) == len && strncmp (cpu->name, str, len) == 0)
{
selected_cpu = cpu;
- selected_tune = cpu;
aarch64_isa_flags = selected_cpu->flags;
if (ext != NULL)
gcc_assert (selected_cpu);
- /* The selected cpu may be an architecture, so lookup tuning by core ID. */
if (!selected_tune)
- selected_tune = &all_cores[selected_cpu->core];
+ selected_tune = selected_cpu;
aarch64_tune_flags = selected_tune->flags;
aarch64_tune = selected_tune->core;
builtin_define_with_int_value ( \
"__ARM_SIZEOF_MINIMAL_ENUM", \
flag_short_enums ? 1 : 4); \
- builtin_define_with_int_value ( \
- "__ARM_SIZEOF_WCHAR_T", WCHAR_TYPE_SIZE); \
+ builtin_define_type_sizeof ("__ARM_SIZEOF_WCHAR_T", \
+ wchar_type_node); \
if (TARGET_ARM_ARCH_PROFILE) \
builtin_define_with_int_value ( \
"__ARM_ARCH_PROFILE", TARGET_ARM_ARCH_PROFILE); \
if (abi == SYSV_ABI)
{
if (lookup_attribute ("ms_abi", TYPE_ATTRIBUTES (fntype)))
- abi = MS_ABI;
+ {
+ if (TARGET_X32)
+ {
+ static bool warned = false;
+ if (!warned)
+ {
+ error ("X32 does not support ms_abi attribute");
+ warned = true;
+ }
+ }
+ abi = MS_ABI;
+ }
}
else if (lookup_attribute ("sysv_abi", TYPE_ATTRIBUTES (fntype)))
abi = SYSV_ABI;
*noalign = alg_noalign;
return alg;
}
- break;
+ else if (!any_alg_usable_p)
+ break;
}
else if (alg_usable_p (candidate, memset))
{
alg = decide_alg (count, max / 2, min_size, max_size, memset,
zero_memset, dynamic_check, noalign);
gcc_assert (*dynamic_check == -1);
- gcc_assert (alg != libcall);
if (TARGET_INLINE_STRINGOPS_DYNAMICALLY)
*dynamic_check = max;
+ else
+ gcc_assert (alg != libcall);
return alg;
}
return (alg_usable_p (algs->unknown_size, memset)
;; Avoid combining registers from different units in a single alternative,
;; see comment above inline_secondary_memory_needed function in i386.c
(define_insn "*vec_extractv2sf_1"
- [(set (match_operand:SF 0 "nonimmediate_operand" "=y,x,y,x,f,r")
+ [(set (match_operand:SF 0 "nonimmediate_operand" "=y,x,x,y,x,f,r")
(vec_select:SF
- (match_operand:V2SF 1 "nonimmediate_operand" " 0,0,o,o,o,o")
+ (match_operand:V2SF 1 "nonimmediate_operand" " 0,x,x,o,o,o,o")
(parallel [(const_int 1)])))]
"TARGET_MMX && !(MEM_P (operands[0]) && MEM_P (operands[1]))"
"@
punpckhdq\t%0, %0
- unpckhps\t%0, %0
+ %vmovshdup\t{%1, %0|%0, %1}
+ shufps\t{$0xe5, %1, %0|%0, %1, 0xe5}
#
#
#
#"
- [(set_attr "type" "mmxcvt,sselog1,mmxmov,ssemov,fmov,imov")
- (set_attr "mode" "DI,V4SF,SF,SF,SF,SF")])
+ [(set_attr "isa" "*,sse3,noavx,*,*,*,*")
+ (set_attr "type" "mmxcvt,sse,sseshuf1,mmxmov,ssemov,fmov,imov")
+ (set_attr "length_immediate" "*,*,1,*,*,*,*")
+ (set_attr "prefix_rep" "*,1,*,*,*,*,*")
+ (set_attr "prefix" "orig,maybe_vex,orig,orig,orig,orig,orig")
+ (set_attr "mode" "DI,V4SF,V4SF,SF,SF,SF,SF")])
(define_split
[(set (match_operand:SF 0 "register_operand")
;; Avoid combining registers from different units in a single alternative,
;; see comment above inline_secondary_memory_needed function in i386.c
(define_insn "*vec_extractv2si_1"
- [(set (match_operand:SI 0 "nonimmediate_operand" "=y,x,x,x,y,x,r")
+ [(set (match_operand:SI 0 "nonimmediate_operand" "=y,x,x,y,x,r")
(vec_select:SI
- (match_operand:V2SI 1 "nonimmediate_operand" " 0,0,x,0,o,o,o")
+ (match_operand:V2SI 1 "nonimmediate_operand" " 0,x,x,o,o,o")
(parallel [(const_int 1)])))]
"TARGET_MMX && !(MEM_P (operands[0]) && MEM_P (operands[1]))"
"@
punpckhdq\t%0, %0
- punpckhdq\t%0, %0
- pshufd\t{$85, %1, %0|%0, %1, 85}
- unpckhps\t%0, %0
+ %vpshufd\t{$0xe5, %1, %0|%0, %1, 0xe5}
+ shufps\t{$0xe5, %1, %0|%0, %1, 0xe5}
#
#
#"
- [(set (attr "isa")
- (if_then_else (eq_attr "alternative" "1,2")
- (const_string "sse2")
- (const_string "*")))
- (set_attr "type" "mmxcvt,sselog1,sselog1,sselog1,mmxmov,ssemov,imov")
- (set_attr "length_immediate" "*,*,1,*,*,*,*")
- (set_attr "mode" "DI,TI,TI,V4SF,SI,SI,SI")])
+ [(set_attr "isa" "*,sse2,noavx,*,*,*")
+ (set_attr "type" "mmxcvt,sseshuf1,sseshuf1,mmxmov,ssemov,imov")
+ (set_attr "length_immediate" "*,1,1,*,*,*")
+ (set_attr "prefix" "orig,maybe_vex,orig,orig,orig,orig")
+ (set_attr "mode" "DI,TI,V4SF,SI,SI,SI")])
(define_split
[(set (match_operand:SI 0 "register_operand")
;; strength reduction is used. It is actually created when the instruction
;; combination phase combines the special loop test. Since this insn
;; is both a jump insn and has an output, it must deal with its own
-;; reloads, hence the `m' constraints. The `!' constraints direct reload
+;; reloads, hence the `Q' constraints. The `!' constraints direct reload
;; to not choose the register alternatives in the event a reload is needed.
(define_insn "decrement_and_branch_until_zero"
[(set (pc)
(if_then_else
(match_operator 2 "comparison_operator"
[(plus:SI
- (match_operand:SI 0 "reg_before_reload_operand" "+!r,!*f,*m")
+ (match_operand:SI 0 "reg_before_reload_operand" "+!r,!*f,*Q")
(match_operand:SI 1 "int5_operand" "L,L,L"))
(const_int 0)])
(label_ref (match_operand 3 "" ""))
[(match_operand:SI 1 "register_operand" "r,r,r,r") (const_int 0)])
(label_ref (match_operand 3 "" ""))
(pc)))
- (set (match_operand:SI 0 "reg_before_reload_operand" "=!r,!*f,*m,!*q")
+ (set (match_operand:SI 0 "reg_before_reload_operand" "=!r,!*f,*Q,!*q")
(match_dup 1))]
""
"* return pa_output_movb (operands, insn, which_alternative, 0); "
[(match_operand:SI 1 "register_operand" "r,r,r,r") (const_int 0)])
(pc)
(label_ref (match_operand 3 "" ""))))
- (set (match_operand:SI 0 "reg_before_reload_operand" "=!r,!*f,*m,!*q")
+ (set (match_operand:SI 0 "reg_before_reload_operand" "=!r,!*f,*Q,!*q")
(match_dup 1))]
""
"* return pa_output_movb (operands, insn, which_alternative, 1); "
;; This predicate is used for branch patterns that internally handle
;; register reloading. We need to accept non-symbolic memory operands
;; after reload to ensure that the pattern is still valid if reload
-;; didn't find a hard register for the operand.
+;; didn't find a hard register for the operand. We also reject index
+;; and lo_sum DLT addresses as these are invalid for move destinations.
(define_predicate "reg_before_reload_operand"
(match_code "reg,mem")
{
+ rtx op0;
+
if (register_operand (op, mode))
return true;
- if (reload_completed
- && memory_operand (op, mode)
- && !symbolic_memory_operand (op, mode))
- return true;
+ if (!reload_in_progress && !reload_completed)
+ return false;
- return false;
+ if (! MEM_P (op))
+ return false;
+
+ op0 = XEXP (op, 0);
+
+ return (memory_address_p (mode, op0)
+ && !IS_INDEX_ADDR_P (op0)
+ && !IS_LO_SUM_DLT_ADDR_P (op0)
+ && !symbolic_memory_operand (op, mode));
})
;; True iff OP is a register or const_0 operand for MODE.
#define vec_vcfux __builtin_vec_vcfux
#define vec_cts __builtin_vec_cts
#define vec_ctu __builtin_vec_ctu
+#define vec_cpsgn __builtin_vec_copysign
#define vec_expte __builtin_vec_expte
#define vec_floor __builtin_vec_floor
#define vec_loge __builtin_vec_loge
#define vec_lvsl __builtin_vec_lvsl
#define vec_lvsr __builtin_vec_lvsr
#define vec_max __builtin_vec_max
+#define vec_mergee __builtin_vec_vmrgew
#define vec_mergeh __builtin_vec_mergeh
#define vec_mergel __builtin_vec_mergel
+#define vec_mergeo __builtin_vec_vmrgow
#define vec_min __builtin_vec_min
#define vec_mladd __builtin_vec_mladd
#define vec_msum __builtin_vec_msum
#define vec_sqrt __builtin_vec_sqrt
#define vec_vsx_ld __builtin_vec_vsx_ld
#define vec_vsx_st __builtin_vec_vsx_st
+#define vec_xl __builtin_vec_vsx_ld
+#define vec_xst __builtin_vec_vsx_st
/* Note, xxsldi and xxpermdi were added as __builtin_vsx_<xxx> functions
instead of __builtin_vec_<xxx> */
#define vec_vadduqm __builtin_vec_vadduqm
#define vec_vbpermq __builtin_vec_vbpermq
#define vec_vclz __builtin_vec_vclz
+#define vec_cntlz __builtin_vec_vclz
#define vec_vclzb __builtin_vec_vclzb
#define vec_vclzd __builtin_vec_vclzd
#define vec_vclzh __builtin_vec_vclzh
UNSPEC_VCTSXS
UNSPEC_VLOGEFP
UNSPEC_VEXPTEFP
- UNSPEC_VLSDOI
+ UNSPEC_VSLDOI
UNSPEC_VUNPACK_HI_SIGN
UNSPEC_VUNPACK_LO_SIGN
UNSPEC_VUNPACK_HI_SIGN_DIRECT
(unspec:VM [(match_operand:VM 1 "register_operand" "v")
(match_operand:VM 2 "register_operand" "v")
(match_operand:QI 3 "immediate_operand" "i")]
- UNSPEC_VLSDOI))]
+ UNSPEC_VSLDOI))]
"TARGET_ALTIVEC"
"vsldoi %0,%1,%2,%3"
[(set_attr "type" "vecperm")])
BU_VSX_2 (VEC_MERGEL_V2DI, "mergel_2di", CONST, vsx_mergel_v2di)
BU_VSX_2 (VEC_MERGEH_V2DF, "mergeh_2df", CONST, vsx_mergeh_v2df)
BU_VSX_2 (VEC_MERGEH_V2DI, "mergeh_2di", CONST, vsx_mergeh_v2di)
+BU_VSX_2 (XXSPLTD_V2DF, "xxspltd_2df", CONST, vsx_xxspltd_v2df)
+BU_VSX_2 (XXSPLTD_V2DI, "xxspltd_2di", CONST, vsx_xxspltd_v2di)
+BU_VSX_2 (DIV_V2DI, "div_2di", CONST, vsx_div_v2di)
+BU_VSX_2 (UDIV_V2DI, "udiv_2di", CONST, vsx_udiv_v2di)
+BU_VSX_2 (MUL_V2DI, "mul_2di", CONST, vsx_mul_v2di)
+
+BU_VSX_2 (XVCVSXDDP_SCALE, "xvcvsxddp_scale", CONST, vsx_xvcvsxddp_scale)
+BU_VSX_2 (XVCVUXDDP_SCALE, "xvcvuxddp_scale", CONST, vsx_xvcvuxddp_scale)
+BU_VSX_2 (XVCVDPSXDS_SCALE, "xvcvdpsxds_scale", CONST, vsx_xvcvdpsxds_scale)
+BU_VSX_2 (XVCVDPUXDS_SCALE, "xvcvdpuxds_scale", CONST, vsx_xvcvdpuxds_scale)
/* VSX abs builtin functions. */
BU_VSX_A (XVABSDP, "xvabsdp", CONST, absv2df2)
RS6000_BTI_V2DF, RS6000_BTI_V2DF, 0, 0 },
{ ALTIVEC_BUILTIN_VEC_ROUND, ALTIVEC_BUILTIN_VRFIN,
RS6000_BTI_V4SF, RS6000_BTI_V4SF, 0, 0 },
+ { ALTIVEC_BUILTIN_VEC_ROUND, VSX_BUILTIN_XVRDPI,
+ RS6000_BTI_V2DF, RS6000_BTI_V2DF, 0, 0 },
{ ALTIVEC_BUILTIN_VEC_RECIP, ALTIVEC_BUILTIN_VRECIPFP,
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, 0 },
{ ALTIVEC_BUILTIN_VEC_RECIP, VSX_BUILTIN_RECIP_V2DF,
{ ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND,
RS6000_BTI_V2DF, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DF, 0 },
{ ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND,
+ RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND,
+ RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND,
+ RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_unsigned_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND,
RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND,
RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC,
RS6000_BTI_V2DF, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DF, 0 },
{ ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC,
+ RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC,
+ RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC,
+ RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_unsigned_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC,
RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC,
RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_V4SI, 0 },
RS6000_BTI_V4SF, RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, 0 },
{ ALTIVEC_BUILTIN_VEC_CTF, ALTIVEC_BUILTIN_VCFSX,
RS6000_BTI_V4SF, RS6000_BTI_V4SI, RS6000_BTI_INTSI, 0 },
+ { ALTIVEC_BUILTIN_VEC_CTF, VSX_BUILTIN_XVCVSXDDP_SCALE,
+ RS6000_BTI_V2DF, RS6000_BTI_V2DI, RS6000_BTI_INTSI, 0},
+ { ALTIVEC_BUILTIN_VEC_CTF, VSX_BUILTIN_XVCVUXDDP_SCALE,
+ RS6000_BTI_V2DF, RS6000_BTI_unsigned_V2DI, RS6000_BTI_INTSI, 0},
{ ALTIVEC_BUILTIN_VEC_VCFSX, ALTIVEC_BUILTIN_VCFSX,
RS6000_BTI_V4SF, RS6000_BTI_V4SI, RS6000_BTI_INTSI, 0 },
{ ALTIVEC_BUILTIN_VEC_VCFUX, ALTIVEC_BUILTIN_VCFUX,
RS6000_BTI_V4SF, RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, 0 },
{ ALTIVEC_BUILTIN_VEC_CTS, ALTIVEC_BUILTIN_VCTSXS,
RS6000_BTI_V4SI, RS6000_BTI_V4SF, RS6000_BTI_INTSI, 0 },
+ { ALTIVEC_BUILTIN_VEC_CTS, VSX_BUILTIN_XVCVDPSXDS_SCALE,
+ RS6000_BTI_V2DI, RS6000_BTI_V2DF, RS6000_BTI_INTSI, 0 },
{ ALTIVEC_BUILTIN_VEC_CTU, ALTIVEC_BUILTIN_VCTUXS,
RS6000_BTI_unsigned_V4SI, RS6000_BTI_V4SF, RS6000_BTI_INTSI, 0 },
+ { ALTIVEC_BUILTIN_VEC_CTU, VSX_BUILTIN_XVCVDPUXDS_SCALE,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_V2DF, RS6000_BTI_INTSI, 0 },
{ VSX_BUILTIN_VEC_DIV, VSX_BUILTIN_XVDIVSP,
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, 0 },
{ VSX_BUILTIN_VEC_DIV, VSX_BUILTIN_XVDIVDP,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, 0 },
+ { VSX_BUILTIN_VEC_DIV, VSX_BUILTIN_DIV_V2DI,
+ RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, 0 },
+ { VSX_BUILTIN_VEC_DIV, VSX_BUILTIN_UDIV_V2DI,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, 0 },
{ ALTIVEC_BUILTIN_VEC_LD, ALTIVEC_BUILTIN_LVX_V2DF,
RS6000_BTI_V2DF, RS6000_BTI_INTSI, ~RS6000_BTI_V2DF, 0 },
{ ALTIVEC_BUILTIN_VEC_LD, ALTIVEC_BUILTIN_LVX_V2DI,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, 0 },
{ ALTIVEC_BUILTIN_VEC_MERGEH, VSX_BUILTIN_VEC_MERGEH_V2DI,
RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_MERGEH, VSX_BUILTIN_VEC_MERGEH_V2DI,
+ RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_MERGEH, VSX_BUILTIN_VEC_MERGEH_V2DI,
+ RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_MERGEH, VSX_BUILTIN_VEC_MERGEH_V2DI,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_MERGEH, VSX_BUILTIN_VEC_MERGEH_V2DI,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_MERGEH, VSX_BUILTIN_VEC_MERGEH_V2DI,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_unsigned_V2DI, 0 },
{ ALTIVEC_BUILTIN_VEC_VMRGHW, ALTIVEC_BUILTIN_VMRGHW,
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, 0 },
{ ALTIVEC_BUILTIN_VEC_VMRGHW, ALTIVEC_BUILTIN_VMRGHW,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, 0 },
{ ALTIVEC_BUILTIN_VEC_MERGEL, VSX_BUILTIN_VEC_MERGEL_V2DI,
RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_MERGEL, VSX_BUILTIN_VEC_MERGEL_V2DI,
+ RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_MERGEL, VSX_BUILTIN_VEC_MERGEL_V2DI,
+ RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_MERGEL, VSX_BUILTIN_VEC_MERGEL_V2DI,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_MERGEL, VSX_BUILTIN_VEC_MERGEL_V2DI,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_MERGEL, VSX_BUILTIN_VEC_MERGEL_V2DI,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_unsigned_V2DI, 0 },
{ ALTIVEC_BUILTIN_VEC_VMRGLW, ALTIVEC_BUILTIN_VMRGLW,
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, 0 },
{ ALTIVEC_BUILTIN_VEC_VMRGLW, ALTIVEC_BUILTIN_VMRGLW,
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, 0 },
{ VSX_BUILTIN_VEC_MUL, VSX_BUILTIN_XVMULDP,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, 0 },
+ { VSX_BUILTIN_VEC_MUL, VSX_BUILTIN_MUL_V2DI,
+ RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, 0 },
+ { VSX_BUILTIN_VEC_MUL, VSX_BUILTIN_MUL_V2DI,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, 0 },
{ ALTIVEC_BUILTIN_VEC_MULE, ALTIVEC_BUILTIN_VMULEUB,
RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, 0 },
{ ALTIVEC_BUILTIN_VEC_MULE, ALTIVEC_BUILTIN_VMULESB,
{ ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR,
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, 0 },
{ ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR,
+ RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR,
+ RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR,
+ RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_unsigned_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR,
RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR,
RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR,
RS6000_BTI_V2DF, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DF, 0 },
{ ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR,
+ RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR,
+ RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR,
+ RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_unsigned_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR,
RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR,
RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_V4SI, 0 },
RS6000_BTI_unsigned_V8HI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_PACKSU, P8V_BUILTIN_VPKSDUS,
RS6000_BTI_unsigned_V4SI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_PACKSU, P8V_BUILTIN_VPKSDUS,
+ RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, 0 },
{ ALTIVEC_BUILTIN_VEC_VPKSWUS, ALTIVEC_BUILTIN_VPKSWUS,
RS6000_BTI_unsigned_V8HI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_VPKSHUS, ALTIVEC_BUILTIN_VPKSHUS,
RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_INTSI, 0 },
{ ALTIVEC_BUILTIN_VEC_SPLAT, ALTIVEC_BUILTIN_VSPLTW,
RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_INTSI, 0 },
+ { ALTIVEC_BUILTIN_VEC_SPLAT, VSX_BUILTIN_XXSPLTD_V2DF,
+ RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_INTSI, 0 },
+ { ALTIVEC_BUILTIN_VEC_SPLAT, VSX_BUILTIN_XXSPLTD_V2DI,
+ RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_INTSI, 0 },
+ { ALTIVEC_BUILTIN_VEC_SPLAT, VSX_BUILTIN_XXSPLTD_V2DI,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_INTSI, 0 },
+ { ALTIVEC_BUILTIN_VEC_SPLAT, VSX_BUILTIN_XXSPLTD_V2DI,
+ RS6000_BTI_bool_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_INTSI, 0 },
{ ALTIVEC_BUILTIN_VEC_VSPLTW, ALTIVEC_BUILTIN_VSPLTW,
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_INTSI, 0 },
{ ALTIVEC_BUILTIN_VEC_VSPLTW, ALTIVEC_BUILTIN_VSPLTW,
{ ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR,
RS6000_BTI_V2DF, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DF, 0 },
{ ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR,
+ RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR,
+ RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR,
+ RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_unsigned_V2DI, 0 },
+ { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR,
RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, 0 },
{ ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR,
RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_V4SI, 0 },
RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_unsigned_V16QI },
{ ALTIVEC_BUILTIN_VEC_PERM, ALTIVEC_BUILTIN_VPERM_2DI,
RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_unsigned_V16QI },
+ { ALTIVEC_BUILTIN_VEC_PERM, ALTIVEC_BUILTIN_VPERM_2DI,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V16QI },
{ ALTIVEC_BUILTIN_VEC_PERM, ALTIVEC_BUILTIN_VPERM_4SF,
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_unsigned_V16QI },
{ ALTIVEC_BUILTIN_VEC_PERM, ALTIVEC_BUILTIN_VPERM_4SI,
RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_unsigned_V2DI },
{ ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_2DI,
RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI },
+ { ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_2DI,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI },
+ { ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_2DI,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI },
+ { ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_2DI,
+ RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_V2DI },
{ ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_4SF,
RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_bool_V4SI },
{ ALTIVEC_BUILTIN_VEC_SEL, ALTIVEC_BUILTIN_VSEL_4SF,
RS6000_BTI_INTSI, RS6000_BTI_INTSI, RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI },
{ ALTIVEC_BUILTIN_VEC_VCMPGT_P, ALTIVEC_BUILTIN_VCMPGTSW_P,
RS6000_BTI_INTSI, RS6000_BTI_INTSI, RS6000_BTI_V4SI, RS6000_BTI_V4SI },
+ { ALTIVEC_BUILTIN_VEC_VCMPGT_P, P8V_BUILTIN_VCMPGTUD_P,
+ RS6000_BTI_INTSI, RS6000_BTI_INTSI, RS6000_BTI_bool_V2DI, RS6000_BTI_unsigned_V2DI },
+ { ALTIVEC_BUILTIN_VEC_VCMPGT_P, P8V_BUILTIN_VCMPGTUD_P,
+ RS6000_BTI_INTSI, RS6000_BTI_INTSI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI },
+ { ALTIVEC_BUILTIN_VEC_VCMPGT_P, P8V_BUILTIN_VCMPGTUD_P,
+ RS6000_BTI_INTSI, RS6000_BTI_INTSI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI },
+ { ALTIVEC_BUILTIN_VEC_VCMPGT_P, P8V_BUILTIN_VCMPGTSD_P,
+ RS6000_BTI_INTSI, RS6000_BTI_INTSI, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DI },
+ { ALTIVEC_BUILTIN_VEC_VCMPGT_P, P8V_BUILTIN_VCMPGTSD_P,
+ RS6000_BTI_INTSI, RS6000_BTI_INTSI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI },
+ { ALTIVEC_BUILTIN_VEC_VCMPGT_P, P8V_BUILTIN_VCMPGTSD_P,
+ RS6000_BTI_INTSI, RS6000_BTI_INTSI, RS6000_BTI_V2DI, RS6000_BTI_V2DI },
{ ALTIVEC_BUILTIN_VEC_VCMPGT_P, ALTIVEC_BUILTIN_VCMPGTFP_P,
RS6000_BTI_INTSI, RS6000_BTI_INTSI, RS6000_BTI_V4SF, RS6000_BTI_V4SF },
{ ALTIVEC_BUILTIN_VEC_VCMPGT_P, VSX_BUILTIN_XVCMPGTDP_P,
{ P8V_BUILTIN_VEC_VMRGEW, P8V_BUILTIN_VMRGEW,
RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI,
RS6000_BTI_unsigned_V4SI, 0 },
+ { P8V_BUILTIN_VEC_VMRGEW, P8V_BUILTIN_VMRGEW,
+ RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, 0 },
{ P8V_BUILTIN_VEC_VMRGOW, P8V_BUILTIN_VMRGOW,
RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, 0 },
{ P8V_BUILTIN_VEC_VMRGOW, P8V_BUILTIN_VMRGOW,
RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI,
RS6000_BTI_unsigned_V4SI, 0 },
+ { P8V_BUILTIN_VEC_VMRGOW, P8V_BUILTIN_VMRGOW,
+ RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, 0 },
{ P8V_BUILTIN_VEC_VPOPCNT, P8V_BUILTIN_VPOPCNTB,
RS6000_BTI_V16QI, RS6000_BTI_V16QI, 0, 0 },
extern void altivec_expand_stvex_be (rtx, rtx, enum machine_mode, unsigned);
extern void rs6000_expand_extract_even (rtx, rtx, rtx);
extern void rs6000_expand_interleave (rtx, rtx, rtx, bool);
+extern void rs6000_scale_v2df (rtx, rtx, int);
extern void build_mask64_2_operands (rtx, rtx *);
extern int expand_block_clear (rtx[]);
extern int expand_block_move (rtx[]);
#include "dumpfile.h"
#include "cgraph.h"
#include "target-globals.h"
+#include "real.h"
#if TARGET_XCOFF
#include "xcoffout.h" /* get declarations of xcoff_*_section_name */
#endif
: (offset + 0x8000 < 0x10000 - extra /* legitimate_address_p */
&& (offset & 3) != 0))
{
+ /* -m32 -mpowerpc64 needs to use a 32-bit scratch register. */
if (in_p)
- sri->icode = CODE_FOR_reload_di_load;
+ sri->icode = ((TARGET_32BIT) ? CODE_FOR_reload_si_load
+ : CODE_FOR_reload_di_load);
else
- sri->icode = CODE_FOR_reload_di_store;
+ sri->icode = ((TARGET_32BIT) ? CODE_FOR_reload_si_store
+ : CODE_FOR_reload_di_store);
sri->extra_cost = 2;
ret = NO_REGS;
}
rs6000_do_expand_vec_perm (target, op0, op1, vmode, nelt, perm);
}
+/* Scale a V2DF vector SRC by 2**SCALE and place the result in TGT.  */
+void
+rs6000_scale_v2df (rtx tgt, rtx src, int scale)
+{
+ HOST_WIDE_INT hwi_scale (scale);
+ REAL_VALUE_TYPE r_pow;
+ rtvec v = rtvec_alloc (2);
+ rtx elt;
+ rtx scale_vec = gen_reg_rtx (V2DFmode);
+  (void) real_powi (&r_pow, DFmode, &dconst2, hwi_scale);
+ elt = CONST_DOUBLE_FROM_REAL_VALUE (r_pow, DFmode);
+ RTVEC_ELT (v, 0) = elt;
+ RTVEC_ELT (v, 1) = elt;
+ rs6000_expand_vector_init (scale_vec, gen_rtx_PARALLEL (V2DFmode, v));
+ emit_insn (gen_mulv2df3 (tgt, src, scale_vec));
+}
+
/* Return an RTX representing where to find the function value of a
function returning MODE. */
static rtx
%{mcpu=750: %{!Dppc*: %{!Dmpc*: -Dmpc750} } } \
%{mcpu=821: %{!Dppc*: %{!Dmpc*: -Dmpc821} } } \
%{mcpu=860: %{!Dppc*: %{!Dmpc*: -Dmpc860} } } \
-%{mcpu=8540: %{!Dppc*: %{!Dmpc*: -Dppc8540} } }"
+%{mcpu=8540: %{!Dppc*: %{!Dmpc*: -Dppc8540} } } \
+%{mcpu=e6500: -D__PPC_CPU_E6500__}"
#undef SUBSUBTARGET_EXTRA_SPECS
#define SUBSUBTARGET_EXTRA_SPECS \
# along with GCC; see the file COPYING3. If not see
# <http://www.gnu.org/licenses/>.
-MULTILIB_OPTIONS = \
-mcpu=403/mcpu=505/mcpu=603e/mcpu=604/mcpu=860/mcpu=7400/mcpu=8540 \
-msoft-float/mfloat-gprs=double
+MULTILIB_OPTIONS =
+MULTILIB_DIRNAMES =
+MULTILIB_MATCHES =
+MULTILIB_EXCEPTIONS =
+MULTILIB_REQUIRED =
+
+MULTILIB_OPTIONS += mcpu=403/mcpu=505/mcpu=603e/mcpu=604/mcpu=860/mcpu=7400/mcpu=8540/mcpu=e6500
+MULTILIB_DIRNAMES += m403 m505 m603e m604 m860 m7400 m8540 me6500
+
+MULTILIB_OPTIONS += m32
+MULTILIB_DIRNAMES += m32
-MULTILIB_DIRNAMES = \
-m403 m505 m603e m604 m860 m7400 m8540 \
-nof gprsdouble
+MULTILIB_OPTIONS += msoft-float/mfloat-gprs=double
+MULTILIB_DIRNAMES += nof gprsdouble
+
+MULTILIB_OPTIONS += mno-spe/mno-altivec
+MULTILIB_DIRNAMES += nospe noaltivec
-# MULTILIB_MATCHES = ${MULTILIB_MATCHES_FLOAT}
-MULTILIB_MATCHES =
MULTILIB_MATCHES += ${MULTILIB_MATCHES_ENDIAN}
MULTILIB_MATCHES += ${MULTILIB_MATCHES_SYSV}
# Map 405 to 403
# (mfloat-gprs=single is implicit default)
MULTILIB_MATCHES += mcpu?8540=mcpu?8540/mfloat-gprs?single
-# Soft-float only, default implies msoft-float
-# NOTE: Must match with MULTILIB_MATCHES_FLOAT and MULTILIB_MATCHES
-MULTILIB_SOFTFLOAT_ONLY = \
-*mcpu=401/*msoft-float* \
-*mcpu=403/*msoft-float* \
-*mcpu=405/*msoft-float* \
-*mcpu=801/*msoft-float* \
-*mcpu=821/*msoft-float* \
-*mcpu=823/*msoft-float* \
-*mcpu=860/*msoft-float*
-
-# Hard-float only, take out msoft-float
-MULTILIB_HARDFLOAT_ONLY = \
-*mcpu=505/*msoft-float*
-
-# Targets which do not support gprs
-MULTILIB_NOGPRS = \
-mfloat-gprs=* \
-*mcpu=403/*mfloat-gprs=* \
-*mcpu=505/*mfloat-gprs=* \
-*mcpu=603e/*mfloat-gprs=* \
-*mcpu=604/*mfloat-gprs=* \
-*mcpu=860/*mfloat-gprs=* \
-*mcpu=7400/*mfloat-gprs=*
-
-MULTILIB_EXCEPTIONS =
-
-# Disallow -Dppc and -Dmpc without other options
-MULTILIB_EXCEPTIONS += Dppc* Dmpc*
+# Enumeration of multilibs
-MULTILIB_EXCEPTIONS += \
-${MULTILIB_SOFTFLOAT_ONLY} \
-${MULTILIB_HARDFLOAT_ONLY} \
-${MULTILIB_NOGPRS}
+MULTILIB_REQUIRED += msoft-float
+MULTILIB_REQUIRED += mcpu=403
+MULTILIB_REQUIRED += mcpu=505
+MULTILIB_REQUIRED += mcpu=603e
+MULTILIB_REQUIRED += mcpu=603e/msoft-float
+MULTILIB_REQUIRED += mcpu=604
+MULTILIB_REQUIRED += mcpu=604/msoft-float
+MULTILIB_REQUIRED += mcpu=7400
+MULTILIB_REQUIRED += mcpu=7400/msoft-float
+MULTILIB_REQUIRED += mcpu=8540
+MULTILIB_REQUIRED += mcpu=8540/msoft-float/mno-spe
+MULTILIB_REQUIRED += mcpu=8540/mfloat-gprs=double
+MULTILIB_REQUIRED += mcpu=860
+MULTILIB_REQUIRED += mcpu=e6500/m32
+MULTILIB_REQUIRED += mcpu=e6500/m32/msoft-float/mno-altivec
UNSPEC_VSX_ROUND_IC
UNSPEC_VSX_SLDWI
UNSPEC_VSX_XXSPLTW
+ UNSPEC_VSX_XXSPLTD
+ UNSPEC_VSX_DIVSD
+ UNSPEC_VSX_DIVUD
+ UNSPEC_VSX_MULSD
+ UNSPEC_VSX_XVCVSXDDP
+ UNSPEC_VSX_XVCVUXDDP
+ UNSPEC_VSX_XVCVDPSXDS
+ UNSPEC_VSX_XVCVDPUXDS
])
;; VSX moves
[(set_attr "type" "<VStype_simple>")
(set_attr "fp_type" "<VSfptype_mul>")])
+;; Emulate vector with scalar for vec_mul in V2DImode
+(define_insn_and_split "vsx_mul_v2di"
+ [(set (match_operand:V2DI 0 "vsx_register_operand" "=wa")
+ (unspec:V2DI [(match_operand:V2DI 1 "vsx_register_operand" "wa")
+ (match_operand:V2DI 2 "vsx_register_operand" "wa")]
+ UNSPEC_VSX_MULSD))]
+ "VECTOR_MEM_VSX_P (V2DImode)"
+ "#"
+ "VECTOR_MEM_VSX_P (V2DImode) && !reload_completed && !reload_in_progress"
+ [(const_int 0)]
+ "
+{
+ rtx op0 = operands[0];
+ rtx op1 = operands[1];
+ rtx op2 = operands[2];
+ rtx op3 = gen_reg_rtx (DImode);
+ rtx op4 = gen_reg_rtx (DImode);
+ rtx op5 = gen_reg_rtx (DImode);
+ emit_insn (gen_vsx_extract_v2di (op3, op1, GEN_INT (0)));
+ emit_insn (gen_vsx_extract_v2di (op4, op2, GEN_INT (0)));
+ emit_insn (gen_muldi3 (op5, op3, op4));
+ emit_insn (gen_vsx_extract_v2di (op3, op1, GEN_INT (1)));
+ emit_insn (gen_vsx_extract_v2di (op4, op2, GEN_INT (1)));
+ emit_insn (gen_muldi3 (op3, op3, op4));
+ emit_insn (gen_vsx_concat_v2di (op0, op5, op3));
+}"
+ [(set_attr "type" "vecdouble")])
+
(define_insn "*vsx_div<mode>3"
[(set (match_operand:VSX_F 0 "vsx_register_operand" "=<VSr>,?<VSa>")
(div:VSX_F (match_operand:VSX_F 1 "vsx_register_operand" "<VSr>,<VSa>")
[(set_attr "type" "<VStype_div>")
(set_attr "fp_type" "<VSfptype_div>")])
+;; Emulate vector with scalar for vec_div in V2DImode
+(define_insn_and_split "vsx_div_v2di"
+ [(set (match_operand:V2DI 0 "vsx_register_operand" "=wa")
+ (unspec:V2DI [(match_operand:V2DI 1 "vsx_register_operand" "wa")
+ (match_operand:V2DI 2 "vsx_register_operand" "wa")]
+ UNSPEC_VSX_DIVSD))]
+ "VECTOR_MEM_VSX_P (V2DImode)"
+ "#"
+ "VECTOR_MEM_VSX_P (V2DImode) && !reload_completed && !reload_in_progress"
+ [(const_int 0)]
+ "
+{
+ rtx op0 = operands[0];
+ rtx op1 = operands[1];
+ rtx op2 = operands[2];
+ rtx op3 = gen_reg_rtx (DImode);
+ rtx op4 = gen_reg_rtx (DImode);
+ rtx op5 = gen_reg_rtx (DImode);
+ emit_insn (gen_vsx_extract_v2di (op3, op1, GEN_INT (0)));
+ emit_insn (gen_vsx_extract_v2di (op4, op2, GEN_INT (0)));
+ emit_insn (gen_divdi3 (op5, op3, op4));
+ emit_insn (gen_vsx_extract_v2di (op3, op1, GEN_INT (1)));
+ emit_insn (gen_vsx_extract_v2di (op4, op2, GEN_INT (1)));
+ emit_insn (gen_divdi3 (op3, op3, op4));
+ emit_insn (gen_vsx_concat_v2di (op0, op5, op3));
+}"
+ [(set_attr "type" "vecdiv")])
+
+(define_insn_and_split "vsx_udiv_v2di"
+ [(set (match_operand:V2DI 0 "vsx_register_operand" "=wa")
+ (unspec:V2DI [(match_operand:V2DI 1 "vsx_register_operand" "wa")
+ (match_operand:V2DI 2 "vsx_register_operand" "wa")]
+ UNSPEC_VSX_DIVUD))]
+ "VECTOR_MEM_VSX_P (V2DImode)"
+ "#"
+ "VECTOR_MEM_VSX_P (V2DImode) && !reload_completed && !reload_in_progress"
+ [(const_int 0)]
+ "
+{
+ rtx op0 = operands[0];
+ rtx op1 = operands[1];
+ rtx op2 = operands[2];
+ rtx op3 = gen_reg_rtx (DImode);
+ rtx op4 = gen_reg_rtx (DImode);
+ rtx op5 = gen_reg_rtx (DImode);
+ emit_insn (gen_vsx_extract_v2di (op3, op1, GEN_INT (0)));
+ emit_insn (gen_vsx_extract_v2di (op4, op2, GEN_INT (0)));
+ emit_insn (gen_udivdi3 (op5, op3, op4));
+ emit_insn (gen_vsx_extract_v2di (op3, op1, GEN_INT (1)));
+ emit_insn (gen_vsx_extract_v2di (op4, op2, GEN_INT (1)));
+ emit_insn (gen_udivdi3 (op3, op3, op4));
+ emit_insn (gen_vsx_concat_v2di (op0, op5, op3));
+}"
+ [(set_attr "type" "vecdiv")])
+
;; *tdiv* instruction returning the FG flag
(define_expand "vsx_tdiv<mode>3_fg"
[(set (match_dup 3)
"xscvspdpn %x0,%x1"
[(set_attr "type" "fp")])
+;; Convert and scale (used by vec_ctf, vec_cts, vec_ctu for double/long long)
+
+(define_expand "vsx_xvcvsxddp_scale"
+ [(match_operand:V2DF 0 "vsx_register_operand" "")
+ (match_operand:V2DI 1 "vsx_register_operand" "")
+ (match_operand:QI 2 "immediate_operand" "")]
+ "VECTOR_UNIT_VSX_P (V2DFmode)"
+{
+ rtx op0 = operands[0];
+ rtx op1 = operands[1];
+  int scale = INTVAL (operands[2]);
+ emit_insn (gen_vsx_xvcvsxddp (op0, op1));
+ if (scale != 0)
+ rs6000_scale_v2df (op0, op0, -scale);
+ DONE;
+})
+
+(define_insn "vsx_xvcvsxddp"
+ [(set (match_operand:V2DF 0 "vsx_register_operand" "=wa")
+ (unspec:V2DF [(match_operand:V2DI 1 "vsx_register_operand" "wa")]
+ UNSPEC_VSX_XVCVSXDDP))]
+ "VECTOR_UNIT_VSX_P (V2DFmode)"
+ "xvcvsxddp %x0,%x1"
+ [(set_attr "type" "vecdouble")])
+
+(define_expand "vsx_xvcvuxddp_scale"
+ [(match_operand:V2DF 0 "vsx_register_operand" "")
+ (match_operand:V2DI 1 "vsx_register_operand" "")
+ (match_operand:QI 2 "immediate_operand" "")]
+ "VECTOR_UNIT_VSX_P (V2DFmode)"
+{
+ rtx op0 = operands[0];
+ rtx op1 = operands[1];
+  int scale = INTVAL (operands[2]);
+ emit_insn (gen_vsx_xvcvuxddp (op0, op1));
+ if (scale != 0)
+ rs6000_scale_v2df (op0, op0, -scale);
+ DONE;
+})
+
+(define_insn "vsx_xvcvuxddp"
+ [(set (match_operand:V2DF 0 "vsx_register_operand" "=wa")
+ (unspec:V2DF [(match_operand:V2DI 1 "vsx_register_operand" "wa")]
+ UNSPEC_VSX_XVCVUXDDP))]
+ "VECTOR_UNIT_VSX_P (V2DFmode)"
+ "xvcvuxddp %x0,%x1"
+ [(set_attr "type" "vecdouble")])
+
+(define_expand "vsx_xvcvdpsxds_scale"
+ [(match_operand:V2DI 0 "vsx_register_operand" "")
+ (match_operand:V2DF 1 "vsx_register_operand" "")
+ (match_operand:QI 2 "immediate_operand" "")]
+ "VECTOR_UNIT_VSX_P (V2DFmode)"
+{
+ rtx op0 = operands[0];
+ rtx op1 = operands[1];
+ rtx tmp = gen_reg_rtx (V2DFmode);
+  int scale = INTVAL (operands[2]);
+ if (scale != 0)
+ rs6000_scale_v2df (tmp, op1, scale);
+ emit_insn (gen_vsx_xvcvdpsxds (op0, tmp));
+ DONE;
+})
+
+(define_insn "vsx_xvcvdpsxds"
+ [(set (match_operand:V2DI 0 "vsx_register_operand" "=wa")
+ (unspec:V2DI [(match_operand:V2DF 1 "vsx_register_operand" "wa")]
+ UNSPEC_VSX_XVCVDPSXDS))]
+ "VECTOR_UNIT_VSX_P (V2DFmode)"
+ "xvcvdpsxds %x0,%x1"
+ [(set_attr "type" "vecdouble")])
+
+(define_expand "vsx_xvcvdpuxds_scale"
+ [(match_operand:V2DI 0 "vsx_register_operand" "")
+ (match_operand:V2DF 1 "vsx_register_operand" "")
+ (match_operand:QI 2 "immediate_operand" "")]
+ "VECTOR_UNIT_VSX_P (V2DFmode)"
+{
+ rtx op0 = operands[0];
+ rtx op1 = operands[1];
+ rtx tmp = gen_reg_rtx (V2DFmode);
+  int scale = INTVAL (operands[2]);
+ if (scale != 0)
+ rs6000_scale_v2df (tmp, op1, scale);
+ emit_insn (gen_vsx_xvcvdpuxds (op0, tmp));
+ DONE;
+})
+
+(define_insn "vsx_xvcvdpuxds"
+ [(set (match_operand:V2DI 0 "vsx_register_operand" "=wa")
+ (unspec:V2DI [(match_operand:V2DF 1 "vsx_register_operand" "wa")]
+ UNSPEC_VSX_XVCVDPUXDS))]
+ "VECTOR_UNIT_VSX_P (V2DFmode)"
+ "xvcvdpuxds %x0,%x1"
+ [(set_attr "type" "vecdouble")])
+
;; Convert from 64-bit to 32-bit types
;; Note, favor the Altivec registers since the usual use of these instructions
;; is in vector converts and we need to use the Altivec vperm instruction.
"xxspltw %x0,%x1,%2"
[(set_attr "type" "vecperm")])
+;; V2DF/V2DI splat for use by vec_splat builtin
+(define_insn "vsx_xxspltd_<mode>"
+ [(set (match_operand:VSX_D 0 "vsx_register_operand" "=wa")
+ (unspec:VSX_D [(match_operand:VSX_D 1 "vsx_register_operand" "wa")
+ (match_operand:QI 2 "u5bit_cint_operand" "i")]
+ UNSPEC_VSX_XXSPLTD))]
+ "VECTOR_MEM_VSX_P (<MODE>mode)"
+{
+ if ((VECTOR_ELT_ORDER_BIG && INTVAL (operands[2]) == 0)
+ || (!VECTOR_ELT_ORDER_BIG && INTVAL (operands[2]) == 1))
+ return "xxpermdi %x0,%x1,%x1,0";
+ else
+ return "xxpermdi %x0,%x1,%x1,3";
+}
+ [(set_attr "type" "vecperm")])
+
;; V4SF/V4SI interleave
(define_insn "vsx_xxmrghw_<mode>"
[(set (match_operand:VSX_W 0 "vsx_register_operand" "=wf,?<VSa>")
do { fputs (LOCAL_COMMON_ASM_OP, (FILE)); \
RS6000_OUTPUT_BASENAME ((FILE), (NAME)); \
if ((ALIGN) > 32) \
- fprintf ((FILE), ","HOST_WIDE_INT_PRINT_UNSIGNED",%s,%u\n", \
+ fprintf ((FILE), ","HOST_WIDE_INT_PRINT_UNSIGNED",%s%u_,%u\n", \
(SIZE), xcoff_bss_section_name, \
+ floor_log2 ((ALIGN) / BITS_PER_UNIT), \
floor_log2 ((ALIGN) / BITS_PER_UNIT)); \
else if ((SIZE) > 4) \
- fprintf ((FILE), ","HOST_WIDE_INT_PRINT_UNSIGNED",%s,3\n", \
+ fprintf ((FILE), ","HOST_WIDE_INT_PRINT_UNSIGNED",%s3_,3\n", \
(SIZE), xcoff_bss_section_name); \
else \
- fprintf ((FILE), ","HOST_WIDE_INT_PRINT_UNSIGNED",%s\n", \
+ fprintf ((FILE), ","HOST_WIDE_INT_PRINT_UNSIGNED",%s,2\n", \
(SIZE), xcoff_bss_section_name); \
} while (0)
#endif
break;
case 'R':
- gcc_assert (GET_MODE_SIZE (GET_MODE (op)) < 4);
+ gcc_assert (GET_MODE_SIZE (GET_MODE (op)) <= 4);
unsigned_load = true;
/* Fall through. */
case 'Q':
/* Compute the alignment needed for label X in various situations.
If the user has specified an alignment then honour that, otherwise
use rx_align_for_label. */
-#define JUMP_ALIGN(x) (align_jumps ? align_jumps : rx_align_for_label (x, 0))
-#define LABEL_ALIGN(x) (align_labels ? align_labels : rx_align_for_label (x, 3))
-#define LOOP_ALIGN(x) (align_loops ? align_loops : rx_align_for_label (x, 2))
+#define JUMP_ALIGN(x) (align_jumps > 1 ? align_jumps_log : rx_align_for_label (x, 0))
+#define LABEL_ALIGN(x) (align_labels > 1 ? align_labels_log : rx_align_for_label (x, 3))
+#define LOOP_ALIGN(x) (align_loops > 1 ? align_loops_log : rx_align_for_label (x, 2))
#define LABEL_ALIGN_AFTER_BARRIER(x) rx_align_for_label (x, 0)
#define ASM_OUTPUT_MAX_SKIP_ALIGN(STREAM, LOG, MAX_SKIP) \
/* Helper routines for memory move and comparison insns.
- Copyright (C) 2013-2014 Free Software Foundation, Inc.
+ Copyright (C) 2013-2015 Free Software Foundation, Inc.
This file is part of GCC.
emit_move_insn (tmp3, addr2);
emit_move_insn (s2_addr, plus_constant (Pmode, s2_addr, 4));
- /*start long loop. */
+ /* start long loop. */
emit_label (L_loop_long);
emit_move_insn (tmp2, tmp3);
rtx len = force_reg (SImode, operands[3]);
int constp = CONST_INT_P (operands[3]);
- /* Loop on a register count. */
+ /* Loop on a register count. */
if (constp)
{
rtx tmp0 = gen_reg_rtx (SImode);
add_int_reg_note (jump, REG_BR_PROB, prob_likely);
}
- /* word count. Do we have iterations ? */
+ /* word count. Do we have iterations ? */
emit_insn (gen_lshrsi3 (lenw, len, GEN_INT (2)));
/*start long loop. */
/* end loop. Reached max iterations. */
if (! sbytes)
{
+ emit_insn (gen_subsi3 (operands[0], tmp1, tmp2));
jump = emit_jump_insn (gen_jump_compact (L_return));
emit_barrier_after (jump);
}
jump = emit_jump_insn (gen_jump_compact( L_end_loop_byte));
emit_barrier_after (jump);
}
+ else
+ {
+ emit_insn (gen_cmpeqsi_t (len, const0_rtx));
+ emit_move_insn (operands[0], const0_rtx);
+ jump = emit_jump_insn (gen_branch_true (L_return));
+ add_int_reg_note (jump, REG_BR_PROB, prob_unlikely);
+ }
addr1 = adjust_automodify_address (addr1, QImode, s1_addr, 0);
addr2 = adjust_automodify_address (addr2, QImode, s2_addr, 0);
emit_insn (gen_zero_extendqisi2 (tmp2, gen_lowpart (QImode, tmp2)));
emit_insn (gen_zero_extendqisi2 (tmp1, gen_lowpart (QImode, tmp1)));
- emit_label (L_return);
-
emit_insn (gen_subsi3 (operands[0], tmp1, tmp2));
+ emit_label (L_return);
+
return true;
}
-/* Emit code to perform a strlen
+/* Emit code to perform a strlen.
OPERANDS[0] is the destination.
OPERANDS[1] is the string.
addr1 = adjust_automodify_address (addr1, SImode, current_addr, 0);
- /*start long loop. */
+ /* start long loop. */
emit_label (L_loop_long);
/* tmp1 is aligned, OK to load. */
})
(define_expand "extendqihi2"
- [(set (match_operand:HI 0 "arith_reg_dest" "")
- (sign_extend:HI (match_operand:QI 1 "arith_reg_operand" "")))]
- ""
- "")
+ [(set (match_operand:HI 0 "arith_reg_dest")
+ (sign_extend:HI (match_operand:QI 1 "arith_reg_operand")))]
+ "TARGET_SH1")
(define_insn "*extendqihi2_compact_reg"
[(set (match_operand:HI 0 "arith_reg_dest" "=r")
+2015-01-07 Jason Merrill <jason@redhat.com>
+
+ PR c++/64487
+ * semantics.c (finish_offsetof): Handle templates here.
+ * parser.c (cp_parser_builtin_offsetof): Not here.
+
+ PR c++/64352
+ * pt.c (tsubst_copy_and_build): Pass complain to mark_used.
+
+ PR c++/64251
+ * decl2.c (mark_used): Don't mark if in_template_function.
+
+ PR c++/64297
+ * typeck.c (apply_memfn_quals): Correct wrong TYPE_CANONICAL.
+
+ PR c++/64029
+ * decl.c (grok_reference_init): Complete array type.
+
+ PR c++/63657
+ PR c++/38958
+ * call.c (set_up_extended_ref_temp): Set TREE_USED on the reference
+ if the temporary has a non-trivial destructor.
+ * decl.c (poplevel): Don't look through references.
+
+ PR c++/63658
+ * pt.c (convert_nontype_argument): Call convert_from_reference.
+ (check_instantiated_arg): Don't be confused by reference refs.
+ (unify): Look through reference refs on the arg, too.
+ * mangle.c (write_template_arg): Look through reference refs.
+
+2014-12-19 Paolo Carlini <paolo.carlini@oracle.com>
+
+ PR c++/60955
+ * pt.c (struct warning_sentinel): Move it...
+ * cp-tree.h: ... here.
+ * semantics.c (force_paren_expr): Use it.
+
2014-11-21 Jason Merrill <jason@redhat.com>
PR c++/63849
/* Check whether the dtor is callable. */
cxx_maybe_build_cleanup (var, tf_warning_or_error);
}
+ /* Avoid -Wunused-variable warning (c++/38958). */
+ if (TYPE_HAS_NONTRIVIAL_DESTRUCTOR (type)
+ && TREE_CODE (decl) == VAR_DECL)
+ TREE_USED (decl) = DECL_READ_P (decl) = true;
*initp = init;
return var;
#define processing_specialization scope_chain->x_processing_specialization
#define processing_explicit_instantiation scope_chain->x_processing_explicit_instantiation
+/* RAII sentinel to disable certain warnings during template substitution
+ and elsewhere. */
+
+struct warning_sentinel
+{
+ int &flag;
+ int val;
+ warning_sentinel(int& flag, bool suppress=true)
+ : flag(flag), val(flag) { if (suppress) flag = 0; }
+ ~warning_sentinel() { flag = val; }
+};
+
/* The cached class binding level, from the most recently exited
class, or NULL if none. */
push_local_binding where the list of decls returned by
getdecls is built. */
decl = TREE_CODE (d) == TREE_LIST ? TREE_VALUE (d) : d;
- // See through references for improved -Wunused-variable (PR 38958).
- tree type = non_reference (TREE_TYPE (decl));
+ tree type = TREE_TYPE (decl);
if (VAR_P (decl)
&& (! TREE_USED (decl) || !DECL_READ_P (decl))
&& ! DECL_IN_SYSTEM_HEADER (decl)
init = build_x_compound_expr_from_list (init, ELK_INIT,
tf_warning_or_error);
- if (TREE_CODE (TREE_TYPE (type)) != ARRAY_TYPE
+ tree ttype = TREE_TYPE (type);
+ if (TREE_CODE (ttype) != ARRAY_TYPE
&& TREE_CODE (TREE_TYPE (init)) == ARRAY_TYPE)
/* Note: default conversion is only called in very special cases. */
init = decay_conversion (init, tf_warning_or_error);
+ /* check_initializer handles this for non-reference variables, but for
+ references we need to do it here or the initializer will get the
+ incomplete array type and confuse later calls to
+ cp_complete_array_type. */
+ if (TREE_CODE (ttype) == ARRAY_TYPE
+ && TYPE_DOMAIN (ttype) == NULL_TREE
+ && (BRACE_ENCLOSED_INITIALIZER_P (init)
+ || TREE_CODE (init) == STRING_CST))
+ {
+ cp_complete_array_type (&ttype, init, false);
+ if (ttype != TREE_TYPE (type))
+ type = cp_build_reference_type (ttype, TYPE_REF_IS_RVALUE (type));
+ }
+
/* Convert INIT to the reference type TYPE. This may involve the
creation of a temporary, whose lifetime must be the same as that
of the reference. If so, a DECL_EXPR for the temporary will be
--function_depth;
}
- if (processing_template_decl)
+ if (processing_template_decl || in_template_function ())
return true;
/* Check this too in case we're within fold_non_dependent_expr. */
}
}
+ if (REFERENCE_REF_P (node))
+ node = TREE_OPERAND (node, 0);
if (TREE_CODE (node) == NOP_EXPR
&& TREE_CODE (TREE_TYPE (node)) == REFERENCE_TYPE)
{
}
success:
- /* If we're processing a template, we can't finish the semantics yet.
- Otherwise we can fold the entire expression now. */
- if (processing_template_decl)
- expr = build1 (OFFSETOF_EXPR, size_type_node, expr);
- else
- expr = finish_offsetof (expr);
+ expr = finish_offsetof (expr);
failure:
parser->integral_constant_expression_p = save_ice_p;
right type? */
gcc_assert (same_type_ignoring_top_level_qualifiers_p
(type, TREE_TYPE (expr)));
- return expr;
+ return convert_from_reference (expr);
}
/* Subroutine of coerce_template_template_parms, which returns 1 if
return t;
}
-/* Sentinel to disable certain warnings during template substitution. */
-
-struct warning_sentinel {
- int &flag;
- int val;
- warning_sentinel(int& flag, bool suppress=true)
- : flag(flag), val(flag) { if (suppress) flag = 0; }
- ~warning_sentinel() { flag = val; }
-};
-
/* Like tsubst but deals with expressions and performs semantic
analysis. FUNCTION_P is true if T is the "F" in "F (ARGS)". */
/* Remember that there was a reference to this entity. */
if (DECL_P (function))
- mark_used (function);
+ mark_used (function, complain);
/* Put back tf_decltype for the actual call. */
complain |= decltype_flag;
constant. */
else if (TREE_TYPE (t)
&& INTEGRAL_OR_ENUMERATION_TYPE_P (TREE_TYPE (t))
+ && !REFERENCE_REF_P (t)
&& !TREE_CONSTANT (t))
{
if (complain & tf_error)
case INDIRECT_REF:
if (REFERENCE_REF_P (parm))
- return unify (tparms, targs, TREE_OPERAND (parm, 0), arg,
- strict, explain_p);
+ {
+ if (REFERENCE_REF_P (arg))
+ arg = TREE_OPERAND (arg, 0);
+ return unify (tparms, targs, TREE_OPERAND (parm, 0), arg,
+ strict, explain_p);
+ }
/* FALLTHRU */
default:
tree type = unlowered_expr_type (expr);
bool rval = !!(kind & clk_rvalueref);
type = cp_build_reference_type (type, rval);
+  /* This inhibits warnings in, e.g., cxx_mark_addressable
+     (c++/60955).  */
+ warning_sentinel s (extra_warnings);
expr = build_static_cast (type, expr, tf_error);
if (expr != error_mark_node)
REF_PARENTHESIZED_P (expr) = true;
tree
finish_offsetof (tree expr)
{
+ /* If we're processing a template, we can't finish the semantics yet.
+ Otherwise we can fold the entire expression now. */
+ if (processing_template_decl)
+ {
+ expr = build1 (OFFSETOF_EXPR, size_type_node, expr);
+ return expr;
+ }
+
if (TREE_CODE (expr) == PSEUDO_DTOR_EXPR)
{
error ("cannot apply %<offsetof%> to destructor %<~%T%>",
/* This should really have a different TYPE_MAIN_VARIANT, but that gets
complex. */
tree result = build_qualified_type (type, memfn_quals);
+ if (tree canon = TYPE_CANONICAL (result))
+ if (canon != result)
+ /* check_qualified_type doesn't check the ref-qualifier, so make sure
+ TYPE_CANONICAL is correct. */
+ TYPE_CANONICAL (result)
+ = build_ref_qualified_type (canon, type_memfn_rqual (result));
result = build_exception_variant (result, TYPE_RAISES_EXCEPTIONS (type));
return build_ref_qualified_type (result, rqual);
}
vector bool int vec_cmplt (vector signed int, vector signed int);
vector bool int vec_cmplt (vector float, vector float);
+vector float vec_cpsgn (vector float, vector float);
+
vector float vec_ctf (vector unsigned int, const int);
vector float vec_ctf (vector signed int, const int);
+vector double vec_ctf (vector unsigned long, const int);
+vector double vec_ctf (vector signed long, const int);
vector float vec_vcfsx (vector signed int, const int);
vector float vec_vcfux (vector unsigned int, const int);
vector signed int vec_cts (vector float, const int);
+vector signed long vec_cts (vector double, const int);
vector unsigned int vec_ctu (vector float, const int);
+vector unsigned long vec_ctu (vector double, const int);
void vec_dss (const int);
vector signed int vec_splat (vector signed int, const int);
vector unsigned int vec_splat (vector unsigned int, const int);
vector bool int vec_splat (vector bool int, const int);
+vector signed long vec_splat (vector signed long, const int);
+vector unsigned long vec_splat (vector unsigned long, const int);
+
+vector signed char vec_splats (signed char);
+vector unsigned char vec_splats (unsigned char);
+vector signed short vec_splats (signed short);
+vector unsigned short vec_splats (unsigned short);
+vector signed int vec_splats (signed int);
+vector unsigned int vec_splats (unsigned int);
+vector float vec_splats (float);
vector float vec_vspltw (vector float, const int);
vector signed int vec_vspltw (vector signed int, const int);
vector double vec_and (vector double, vector double);
vector double vec_and (vector double, vector bool long);
vector double vec_and (vector bool long, vector double);
+vector long vec_and (vector long, vector long);
+vector long vec_and (vector long, vector bool long);
+vector long vec_and (vector bool long, vector long);
+vector unsigned long vec_and (vector unsigned long, vector unsigned long);
+vector unsigned long vec_and (vector unsigned long, vector bool long);
+vector unsigned long vec_and (vector bool long, vector unsigned long);
vector double vec_andc (vector double, vector double);
vector double vec_andc (vector double, vector bool long);
vector double vec_andc (vector bool long, vector double);
+vector long vec_andc (vector long, vector long);
+vector long vec_andc (vector long, vector bool long);
+vector long vec_andc (vector bool long, vector long);
+vector unsigned long vec_andc (vector unsigned long, vector unsigned long);
+vector unsigned long vec_andc (vector unsigned long, vector bool long);
+vector unsigned long vec_andc (vector bool long, vector unsigned long);
vector double vec_ceil (vector double);
vector bool long vec_cmpeq (vector double, vector double);
vector bool long vec_cmpge (vector double, vector double);
vector bool long vec_cmpgt (vector double, vector double);
vector bool long vec_cmple (vector double, vector double);
vector bool long vec_cmplt (vector double, vector double);
+vector double vec_cpsgn (vector double, vector double);
vector float vec_div (vector float, vector float);
vector double vec_div (vector double, vector double);
+vector long vec_div (vector long, vector long);
+vector unsigned long vec_div (vector unsigned long, vector unsigned long);
vector double vec_floor (vector double);
vector double vec_ld (int, const vector double *);
vector double vec_ld (int, const double *);
vector unsigned char vec_lvsr (int, const volatile double *);
vector double vec_madd (vector double, vector double, vector double);
vector double vec_max (vector double, vector double);
+vector signed long vec_mergeh (vector signed long, vector signed long);
+vector signed long vec_mergeh (vector signed long, vector bool long);
+vector signed long vec_mergeh (vector bool long, vector signed long);
+vector unsigned long vec_mergeh (vector unsigned long, vector unsigned long);
+vector unsigned long vec_mergeh (vector unsigned long, vector bool long);
+vector unsigned long vec_mergeh (vector bool long, vector unsigned long);
+vector signed long vec_mergel (vector signed long, vector signed long);
+vector signed long vec_mergel (vector signed long, vector bool long);
+vector signed long vec_mergel (vector bool long, vector signed long);
+vector unsigned long vec_mergel (vector unsigned long, vector unsigned long);
+vector unsigned long vec_mergel (vector unsigned long, vector bool long);
+vector unsigned long vec_mergel (vector bool long, vector unsigned long);
vector double vec_min (vector double, vector double);
vector float vec_msub (vector float, vector float, vector float);
vector double vec_msub (vector double, vector double, vector double);
vector float vec_mul (vector float, vector float);
vector double vec_mul (vector double, vector double);
+vector long vec_mul (vector long, vector long);
+vector unsigned long vec_mul (vector unsigned long, vector unsigned long);
vector float vec_nearbyint (vector float);
vector double vec_nearbyint (vector double);
vector float vec_nmadd (vector float, vector float, vector float);
vector double vec_nmadd (vector double, vector double, vector double);
vector double vec_nmsub (vector double, vector double, vector double);
vector double vec_nor (vector double, vector double);
+vector long vec_nor (vector long, vector long);
+vector long vec_nor (vector long, vector bool long);
+vector long vec_nor (vector bool long, vector long);
+vector unsigned long vec_nor (vector unsigned long, vector unsigned long);
+vector unsigned long vec_nor (vector unsigned long, vector bool long);
+vector unsigned long vec_nor (vector bool long, vector unsigned long);
vector double vec_or (vector double, vector double);
vector double vec_or (vector double, vector bool long);
vector double vec_or (vector bool long, vector double);
-vector double vec_perm (vector double,
- vector double,
- vector unsigned char);
+vector long vec_or (vector long, vector long);
+vector long vec_or (vector long, vector bool long);
+vector long vec_or (vector bool long, vector long);
+vector unsigned long vec_or (vector unsigned long, vector unsigned long);
+vector unsigned long vec_or (vector unsigned long, vector bool long);
+vector unsigned long vec_or (vector bool long, vector unsigned long);
+vector double vec_perm (vector double, vector double, vector unsigned char);
+vector long vec_perm (vector long, vector long, vector unsigned char);
+vector unsigned long vec_perm (vector unsigned long, vector unsigned long,
+ vector unsigned char);
vector double vec_rint (vector double);
vector double vec_recip (vector double, vector double);
vector double vec_rsqrt (vector double);
vector double vec_rsqrte (vector double);
vector double vec_sel (vector double, vector double, vector bool long);
vector double vec_sel (vector double, vector double, vector unsigned long);
-vector double vec_sub (vector double, vector double);
+vector long vec_sel (vector long, vector long, vector long);
+vector long vec_sel (vector long, vector long, vector unsigned long);
+vector long vec_sel (vector long, vector long, vector bool long);
+vector unsigned long vec_sel (vector unsigned long, vector unsigned long,
+ vector long);
+vector unsigned long vec_sel (vector unsigned long, vector unsigned long,
+ vector unsigned long);
+vector unsigned long vec_sel (vector unsigned long, vector unsigned long,
+ vector bool long);
+vector double vec_splats (double);
+vector signed long vec_splats (signed long);
+vector unsigned long vec_splats (unsigned long);
vector float vec_sqrt (vector float);
vector double vec_sqrt (vector double);
void vec_st (vector double, int, vector double *);
void vec_st (vector double, int, double *);
+vector double vec_sub (vector double, vector double);
vector double vec_trunc (vector double);
vector double vec_xor (vector double, vector double);
vector double vec_xor (vector double, vector bool long);
vector double vec_xor (vector bool long, vector double);
+vector long vec_xor (vector long, vector long);
+vector long vec_xor (vector long, vector bool long);
+vector long vec_xor (vector bool long, vector long);
+vector unsigned long vec_xor (vector unsigned long, vector unsigned long);
+vector unsigned long vec_xor (vector unsigned long, vector bool long);
+vector unsigned long vec_xor (vector bool long, vector unsigned long);
int vec_all_eq (vector double, vector double);
int vec_all_ge (vector double, vector double);
int vec_all_gt (vector double, vector double);
int vec_all_eq (vector long long, vector long long);
+int vec_all_eq (vector unsigned long long, vector unsigned long long);
int vec_all_ge (vector long long, vector long long);
+int vec_all_ge (vector unsigned long long, vector unsigned long long);
int vec_all_gt (vector long long, vector long long);
+int vec_all_gt (vector unsigned long long, vector unsigned long long);
int vec_all_le (vector long long, vector long long);
+int vec_all_le (vector unsigned long long, vector unsigned long long);
int vec_all_lt (vector long long, vector long long);
+int vec_all_lt (vector unsigned long long, vector unsigned long long);
int vec_all_ne (vector long long, vector long long);
+int vec_all_ne (vector unsigned long long, vector unsigned long long);
+
int vec_any_eq (vector long long, vector long long);
+int vec_any_eq (vector unsigned long long, vector unsigned long long);
int vec_any_ge (vector long long, vector long long);
+int vec_any_ge (vector unsigned long long, vector unsigned long long);
int vec_any_gt (vector long long, vector long long);
+int vec_any_gt (vector unsigned long long, vector unsigned long long);
int vec_any_le (vector long long, vector long long);
+int vec_any_le (vector unsigned long long, vector unsigned long long);
int vec_any_lt (vector long long, vector long long);
+int vec_any_lt (vector unsigned long long, vector unsigned long long);
int vec_any_ne (vector long long, vector long long);
+int vec_any_ne (vector unsigned long long, vector unsigned long long);
vector long long vec_eqv (vector long long, vector long long);
vector long long vec_eqv (vector bool long long, vector long long);
vector unsigned long long vec_max (vector unsigned long long,
vector unsigned long long);
+vector signed int vec_mergee (vector signed int, vector signed int);
+vector unsigned int vec_mergee (vector unsigned int, vector unsigned int);
+vector bool int vec_mergee (vector bool int, vector bool int);
+
+vector signed int vec_mergeo (vector signed int, vector signed int);
+vector unsigned int vec_mergeo (vector unsigned int, vector unsigned int);
+vector bool int vec_mergeo (vector bool int, vector bool int);
+
vector long long vec_min (vector long long, vector long long);
vector unsigned long long vec_min (vector unsigned long long,
vector unsigned long long);
vector unsigned int vec_packsu (vector long long, vector long long);
+vector unsigned int vec_packsu (vector unsigned long long,
+ vector unsigned long long);
vector long long vec_rl (vector long long,
vector unsigned long long);
vector long long vec_vbpermq (vector signed char, vector signed char);
vector long long vec_vbpermq (vector unsigned char, vector unsigned char);
+vector long long vec_cntlz (vector long long);
+vector unsigned long long vec_cntlz (vector unsigned long long);
+vector int vec_cntlz (vector int);
+vector unsigned int vec_cntlz (vector unsigned int);
+vector short vec_cntlz (vector short);
+vector unsigned short vec_cntlz (vector unsigned short);
+vector signed char vec_cntlz (vector signed char);
+vector unsigned char vec_cntlz (vector unsigned char);
+
vector long long vec_vclz (vector long long);
vector unsigned long long vec_vclz (vector unsigned long long);
vector int vec_vclz (vector int);
@cindex @code{pc} and attributes
@item (pc)
-This refers to the address of the @emph{current} insn. It might have
-been more consistent with other usage to make this the address of the
-@emph{next} insn but this would be confusing because the length of the
+For non-branch instructions and backward branch instructions, this refers
+to the address of the current insn. But for forward branch instructions,
+this refers to the address of the next insn, because the length of the
current insn is to be computed.
@end table
+2015-01-12 Janus Weil <janus@gcc.gnu.org>
+
+ Backport from mainline
+ PR fortran/63733
+ * interface.c (gfc_extend_expr): Look for type-bound operators before
+ non-typebound ones.
+
+2015-01-08 Thomas Koenig <tkoenig@gcc.gnu.org>
+
+ Backport from trunk
+ PR fortran/56867
+ * trans-array.c (gfc_conv_resolve_dependencies): Also check
+ dependencies when there may be substrings of character arrays.
+
+2014-12-23 Janus Weil <janus@gcc.gnu.org>
+
+ Backport from mainline
+ PR fortran/64244
+ * resolve.c (resolve_typebound_call): New argument to pass out the
+ non-overridable attribute of the specific procedure.
+ (resolve_typebound_subroutine): Get overridable flag from
+ resolve_typebound_call.
+
2014-11-28 Jakub Jelinek <jakub@redhat.com>
Backported from mainline
gfc_user_op *uop;
gfc_intrinsic_op i;
const char *gname;
+ gfc_typebound_proc* tbo;
+ gfc_expr* tb_base;
sym = NULL;
i = fold_unary_intrinsic (e->value.op.op);
+ /* See if we find a matching type-bound operator. */
+ if (i == INTRINSIC_USER)
+ tbo = matching_typebound_op (&tb_base, actual,
+ i, e->value.op.uop->name, &gname);
+ else
+ switch (i)
+ {
+#define CHECK_OS_COMPARISON(comp) \
+ case INTRINSIC_##comp: \
+ case INTRINSIC_##comp##_OS: \
+ tbo = matching_typebound_op (&tb_base, actual, \
+ INTRINSIC_##comp, NULL, &gname); \
+ if (!tbo) \
+ tbo = matching_typebound_op (&tb_base, actual, \
+ INTRINSIC_##comp##_OS, NULL, &gname); \
+ break;
+ CHECK_OS_COMPARISON(EQ)
+ CHECK_OS_COMPARISON(NE)
+ CHECK_OS_COMPARISON(GT)
+ CHECK_OS_COMPARISON(GE)
+ CHECK_OS_COMPARISON(LT)
+ CHECK_OS_COMPARISON(LE)
+#undef CHECK_OS_COMPARISON
+
+ default:
+ tbo = matching_typebound_op (&tb_base, actual, i, NULL, &gname);
+ break;
+ }
+
+ /* If there is a matching type-bound operator, replace the expression with
+ a call to it and succeed. */
+ if (tbo)
+ {
+ gcc_assert (tb_base);
+ build_compcall_for_operator (e, actual, tb_base, tbo, gname);
+
+ if (!gfc_resolve_expr (e))
+ return MATCH_ERROR;
+ else
+ return MATCH_YES;
+ }
+
if (i == INTRINSIC_USER)
{
for (ns = gfc_current_ns; ns; ns = ns->parent)
if (sym == NULL)
{
- gfc_typebound_proc* tbo;
- gfc_expr* tb_base;
-
- /* See if we find a matching type-bound operator. */
- if (i == INTRINSIC_USER)
- tbo = matching_typebound_op (&tb_base, actual,
- i, e->value.op.uop->name, &gname);
- else
- switch (i)
- {
-#define CHECK_OS_COMPARISON(comp) \
- case INTRINSIC_##comp: \
- case INTRINSIC_##comp##_OS: \
- tbo = matching_typebound_op (&tb_base, actual, \
- INTRINSIC_##comp, NULL, &gname); \
- if (!tbo) \
- tbo = matching_typebound_op (&tb_base, actual, \
- INTRINSIC_##comp##_OS, NULL, &gname); \
- break;
- CHECK_OS_COMPARISON(EQ)
- CHECK_OS_COMPARISON(NE)
- CHECK_OS_COMPARISON(GT)
- CHECK_OS_COMPARISON(GE)
- CHECK_OS_COMPARISON(LT)
- CHECK_OS_COMPARISON(LE)
-#undef CHECK_OS_COMPARISON
-
- default:
- tbo = matching_typebound_op (&tb_base, actual, i, NULL, &gname);
- break;
- }
-
- /* If there is a matching typebound-operator, replace the expression with
- a call to it and succeed. */
- if (tbo)
- {
- bool result;
-
- gcc_assert (tb_base);
- build_compcall_for_operator (e, actual, tb_base, tbo, gname);
-
- result = gfc_resolve_expr (e);
- if (!result)
- return MATCH_ERROR;
-
- return MATCH_YES;
- }
-
/* Don't use gfc_free_actual_arglist(). */
free (actual->next);
free (actual);
-
return MATCH_NO;
}
/* Resolve a call to a type-bound subroutine. */
static bool
-resolve_typebound_call (gfc_code* c, const char **name)
+resolve_typebound_call (gfc_code* c, const char **name, bool *overridable)
{
gfc_actual_arglist* newactual;
gfc_symtree* target;
if (!resolve_typebound_generic_call (c->expr1, name))
return false;
+ /* Pass along the NON_OVERRIDABLE attribute of the specific TBP. */
+ if (overridable)
+ *overridable = !c->expr1->value.compcall.tbp->non_overridable;
+
/* Transform into an ordinary EXEC_CALL for now. */
if (!resolve_typebound_static (c->expr1, &target, &newactual))
if (c->ts.u.derived == NULL)
c->ts.u.derived = gfc_find_derived_vtab (declared);
- if (!resolve_typebound_call (code, &name))
+ if (!resolve_typebound_call (code, &name, NULL))
return false;
/* Use the generic name if it is there. */
}
if (st == NULL)
- return resolve_typebound_call (code, NULL);
+ return resolve_typebound_call (code, NULL, NULL);
if (!resolve_ref (code->expr1))
return false;
|| (!class_ref && st->n.sym->ts.type != BT_CLASS))
{
gfc_free_ref_list (new_ref);
- return resolve_typebound_call (code, NULL);
+ return resolve_typebound_call (code, NULL, NULL);
}
- if (!resolve_typebound_call (code, &name))
+ if (!resolve_typebound_call (code, &name, &overridable))
{
gfc_free_ref_list (new_ref);
return false;
&& ss_expr->rank)
nDepend = gfc_check_dependency (dest_expr, ss_expr, true);
+ /* Check for cases like c(:)(1:2) = c(2)(2:3). */
+ if (!nDepend && dest_expr->rank > 0
+ && dest_expr->ts.type == BT_CHARACTER
+ && ss_expr->expr_type == EXPR_VARIABLE)
+ nDepend = gfc_check_dependency (dest_expr, ss_expr, false);
+
continue;
}
// This is the symbol table.
pname->clear();
}
+ else if (hdr->ar_name[1] == 'S' && hdr->ar_name[2] == 'Y'
+ && hdr->ar_name[3] == 'M' && hdr->ar_name[4] == '6'
+ && hdr->ar_name[5] == '4' && hdr->ar_name[6] == '/'
+ && hdr->ar_name[7] == ' '
+ )
+ {
+ // 64-bit symbol table.
+ pname->clear();
+ }
else if (hdr->ar_name[1] == '/')
{
// This is the extended name table.
if (orig_rettype == void_type_node)
return NULL_TREE;
TREE_TYPE (fndecl) = build_distinct_type_copy (TREE_TYPE (fndecl));
- if (INTEGRAL_TYPE_P (TREE_TYPE (TREE_TYPE (fndecl)))
- || POINTER_TYPE_P (TREE_TYPE (TREE_TYPE (fndecl))))
+ t = TREE_TYPE (TREE_TYPE (fndecl));
+ if (INTEGRAL_TYPE_P (t) || POINTER_TYPE_P (t))
veclen = node->simdclone->vecsize_int;
else
veclen = node->simdclone->vecsize_float;
- veclen /= GET_MODE_BITSIZE (TYPE_MODE (TREE_TYPE (TREE_TYPE (fndecl))));
+ veclen /= GET_MODE_BITSIZE (TYPE_MODE (t));
if (veclen > node->simdclone->simdlen)
veclen = node->simdclone->simdlen;
+ if (POINTER_TYPE_P (t))
+ t = pointer_sized_int_node;
if (veclen == node->simdclone->simdlen)
- TREE_TYPE (TREE_TYPE (fndecl))
- = build_vector_type (TREE_TYPE (TREE_TYPE (fndecl)),
- node->simdclone->simdlen);
+ t = build_vector_type (t, node->simdclone->simdlen);
else
{
- t = build_vector_type (TREE_TYPE (TREE_TYPE (fndecl)), veclen);
+ t = build_vector_type (t, veclen);
t = build_array_type_nelts (t, node->simdclone->simdlen / veclen);
- TREE_TYPE (TREE_TYPE (fndecl)) = t;
}
+ TREE_TYPE (TREE_TYPE (fndecl)) = t;
if (!node->definition)
return NULL_TREE;
if (veclen > node->simdclone->simdlen)
veclen = node->simdclone->simdlen;
adj.arg_prefix = "simd";
- adj.type = build_vector_type (parm_type, veclen);
+ if (POINTER_TYPE_P (parm_type))
+ adj.type = build_vector_type (pointer_sized_int_node, veclen);
+ else
+ adj.type = build_vector_type (parm_type, veclen);
node->simdclone->args[i].vector_type = adj.type;
for (j = veclen; j < node->simdclone->simdlen; j += veclen)
{
veclen /= GET_MODE_BITSIZE (TYPE_MODE (base_type));
if (veclen > node->simdclone->simdlen)
veclen = node->simdclone->simdlen;
- adj.type = build_vector_type (base_type, veclen);
+ if (POINTER_TYPE_P (base_type))
+ adj.type = build_vector_type (pointer_sized_int_node, veclen);
+ else
+ adj.type = build_vector_type (base_type, veclen);
adjustments.safe_push (adj);
for (j = veclen; j < node->simdclone->simdlen; j += veclen)
end_hard_regno (rel_mode,
regno),
PATTERN (this_insn), inloc)
+ && ! find_reg_fusage (this_insn, USE, XEXP (note, 0))
/* If this is also an output reload, IN cannot be used as
the reload register if it is set in this insn unless IN
is also OUT. */
+2015-01-12 Janus Weil <janus@gcc.gnu.org>
+
+ Backport from mainline
+ PR fortran/63733
+ * gfortran.dg/typebound_operator_20.f90: New.
+
+2015-01-09 Jakub Jelinek <jakub@redhat.com>
+
+ PR rtl-optimization/64536
+ * gcc.dg/pr64536.c: New test.
+
+2015-01-09 Michael Meissner <meissner@linux.vnet.ibm.com>
+
+ Backport from mainline:
+ 2015-01-06 Michael Meissner <meissner@linux.vnet.ibm.com>
+
+ PR target/64505
+ * gcc.target/powerpc/pr64505.c: New file to test -m32 -mpowerpc64
+ fix is correct.
+
+2015-01-08 Thomas Koenig <tkoenig@gcc.gnu.org>
+
+ PR fortran/56867
+ * gfortran.dg/dependency_45.f90: New test.
+
+2015-01-08 Christian Bruel <christian.bruel@st.com>
+
+ PR target/64507
+ * gcc.target/sh/pr64507.c: New test.
+
+2015-01-05 Ian Lance Taylor <iant@google.com>
+
+ Backport from mainline:
+ 2014-11-21 Lynn Boger <laboger@linux.vnet.ibm.com>
+
+ * go.test/go-test.exp (go-set-goarch): Add case for ppc64le goarch
+ value for go testing.
+
+2014-12-28 H.J. Lu <hongjiu.lu@intel.com>
+
+ Backport from mainline:
+ 2014-12-28 H.J. Lu <hongjiu.lu@intel.com>
+
+ * gcc.target/i386/pr57003.c: Skip on x32.
+ * gcc.target/i386/pr59927.c: Likewise.
+ * gcc.target/i386/pr60516.c: Likewise.
+
+2014-12-27 H.J. Lu <hongjiu.lu@intel.com>
+
+ Backport from mainline:
+ 2014-12-26 H.J. Lu <hongjiu.lu@intel.com>
+
+ PR target/64409
+ * gcc.target/i386/pr64409.c: New test.
+
+2014-12-23 Janus Weil <janus@gcc.gnu.org>
+
+ Backport from mainline
+ PR fortran/64244
+ * gfortran.dg/typebound_call_26.f90: New.
+
+2014-12-19 Paolo Carlini <paolo.carlini@oracle.com>
+
+ PR c++/60955
+ * g++.dg/warn/register-parm-1.C: New.
+
+2014-12-15 Jakub Jelinek <jakub@redhat.com>
+
+ PR tree-optimization/63551
+ * gcc.dg/ipa/pr63551.c (fn2): Use 4294967286U instead of
+ 4294967286 to avoid warnings.
+
+2014-12-14 H.J. Lu <hongjiu.lu@intel.com>
+
+ Backported from mainline
+ 2014-12-14 H.J. Lu <hongjiu.lu@intel.com>
+
+ PR rtl-optimization/64037
+ * g++.dg/pr64037.C: New test.
+
+2014-12-14 H.J. Lu <hongjiu.lu@intel.com>
+
+ Backported from mainline
+ 2014-12-06 H.J. Lu <hongjiu.lu@intel.com>
+
+ PR target/64200
+ * gcc.target/i386/memcpy-strategy-4.c: New test.
+
+2014-12-13 Jakub Jelinek <jakub@redhat.com>
+
+ Backported from mainline
+ 2014-12-12 Jakub Jelinek <jakub@redhat.com>
+
+ PR tree-optimization/64269
+ * gcc.c-torture/compile/pr64269.c: New test.
+
+2014-12-10 Bill Schmidt <wschmidt@linux.vnet.ibm.com>
+
+ Backport from mainline
+ 2014-09-02 Bill Schmidt <wschmidt@linux.vnet.ibm.com>
+
+ * gcc.target/powerpc/builtins-1.c: Add tests for vec_ctf,
+ vec_cts, and vec_ctu.
+ * gcc.target/powerpc/builtins-2.c: Likewise.
+
+ Backport from mainline
+ 2014-08-28 Bill Schmidt <wschmidt@linux.vnet.ibm.com>
+
+ * gcc.target/powerpc/builtins-1.c: Add tests for vec_xl, vec_xst,
+ vec_round, vec_splat, vec_div, and vec_mul.
+ * gcc.target/powerpc/builtins-2.c: New test.
+
+ Backport from mainline
+ 2014-08-20 Bill Schmidt <wschmidt@linux.vnet.ibm.com>
+
+ * testsuite/gcc.target/powerpc/builtins-1.c: New test.
+
+2014-12-10 Jakub Jelinek <jakub@redhat.com>
+
+ PR tree-optimization/62021
+ * gcc.dg/vect/pr62021.c: New test.
+
+2014-12-09 Uros Bizjak <ubizjak@gmail.com>
+
+ PR bootstrap/64213
+ Revert:
+ 2014-11-28 H.J. Lu <hongjiu.lu@intel.com>
+
+ PR rtl-optimization/64037
+ * g++.dg/pr64037.C: New test.
+
+2014-12-07 Oleg Endo <olegendo@gcc.gnu.org>
+
+ Backport from mainline
+ 2014-12-07 Oleg Endo <olegendo@gcc.gnu.org>
+
+ * gcc.target/h8300/h8300.exp: Fix duplicated text.
+ * gcc.target/h8300/pragma-isr.c: Likewise.
+ * gcc.target/h8300/pragma-isr2.c: Likewise.
+
+2014-12-05 H.J. Lu <hongjiu.lu@intel.com>
+
+ Backport from mainline
+ 2014-12-02 H.J. Lu <hongjiu.lu@intel.com>
+
+ PR target/64108
+ * gcc.target/i386/memset-strategy-2.c: New test.
+
+2014-12-05 H.J. Lu <hongjiu.lu@intel.com>
+
+ Backport from mainline
+ 2014-11-28 H.J. Lu <hongjiu.lu@intel.com>
+
+ PR rtl-optimization/64037
+ * g++.dg/pr64037.C: New test.
+
2014-12-04 Jakub Jelinek <jakub@redhat.com>
PR c++/56493
--- /dev/null
+// PR c++/64352
+// { dg-do compile { target c++11 } }
+
+template<bool B> struct bool_type
+{ static constexpr bool value = B; };
+
+using true_type = bool_type<true>;
+using false_type = bool_type<false>;
+
+template<typename T> T&& declval();
+
+template<typename...> struct void_ { using type = void; };
+template<typename... I> using void_t = typename void_<I...>::type;
+
+template<typename _Tp, typename = void>
+struct _Has_addressof_free: false_type { };
+
+template<typename _Tp>
+struct _Has_addressof_free
+<_Tp, void_t<decltype( operator&(declval<const _Tp&>()) )>>
+: true_type { };
+
+struct foo {};
+void operator&(foo) = delete;
+
+int main()
+{
+ static_assert( !_Has_addressof_free<int>::value, "" );
+ // error: use of deleted function 'void operator&(foo)'
+ static_assert( !_Has_addressof_free<foo>::value, "" );
+}
--- /dev/null
+// PR c++/64029
+// { dg-do compile { target c++11 } }
+
+const int (&in)[]{1,2,3,4,5};
--- /dev/null
+// PR c++/64297
+// { dg-do compile { target c++11 } }
+
+struct A {
+ typedef int X;
+ template <int> X m_fn1() const;
+};
+template <typename> struct is_function {};
+is_function<int() const &> i;
+struct D {
+ template <typename Y, typename = is_function<Y>> D(Y);
+} b(&A::m_fn1<0>);
--- /dev/null
+// { dg-do run { target i?86-*-* x86_64-*-* } }
+// { dg-options "-std=c++11 -Os" }
+
+enum class X : unsigned char {
+ V = 2,
+};
+
+static void
+__attribute__((noinline,noclone))
+foo(unsigned &out, unsigned a, X b)
+{
+ out = static_cast<unsigned>(b);
+}
+
+int main()
+{
+ unsigned deadbeef = 0xDEADBEEF;
+ asm volatile ("" : "+d" (deadbeef), "+c" (deadbeef));
+
+ unsigned out;
+ foo(out, 2, X::V);
+
+ if (out != 2)
+ __builtin_abort ();
+
+ return 0;
+}
--- /dev/null
+// PR c++/64251
+
+class DictionaryValue {};
+template <typename T> void CreateValue(T) {
+ DictionaryValue(0);
+ CreateValue(0);
+}
--- /dev/null
+// PR c++/64487
+
+struct foo {
+ int member;
+};
+
+template < int N>
+struct bar {};
+
+template <int N>
+struct qux {
+ static bar<N+__builtin_offsetof(foo,member)> static_member;
+};
+
+template <int N>
+bar<N+__builtin_offsetof(foo,member)> qux<N>::static_member;
+
+int main() { }
--- /dev/null
+// PR c++/63658
+
+struct Descriptor {};
+
+template <Descriptor & D>
+struct foo
+{
+ void size ();
+};
+
+Descriptor g_descriptor = {};
+
+template<> void foo<g_descriptor>::size()
+{
+}
--- /dev/null
+// PR c++/63657
+// { dg-options "-Wunused-variable" }
+
+class Bar
+{
+ virtual ~Bar() {}
+};
+Bar& getbar();
+void bar()
+{
+ Bar& b = getbar(); // { dg-warning "unused" }
+}
--- /dev/null
+// PR c++/60955
+// { dg-options "-Wextra" }
+
+unsigned int erroneous_warning(register int a) {
+ if ((a) & 0xff) return 1; else return 0;
+}
+unsigned int no_erroneous_warning(register int a) {
+ if (a & 0xff) return 1; else return 0;
+}
--- /dev/null
+/* PR tree-optimization/64269 */
+
+void
+foo (char *p)
+{
+ __SIZE_TYPE__ s = ~(__SIZE_TYPE__)0;
+ *p = 0;
+ __builtin_memset (p + 1, 0, s);
+}
fn2 ()
{
d = 0;
- union U b = { 4294967286 };
+ union U b = { 4294967286U };
fn1 (b);
}
--- /dev/null
+/* PR rtl-optimization/64536 */
+/* { dg-do link } */
+/* { dg-options "-O2" } */
+/* { dg-additional-options "-fPIC" { target fpic } } */
+
+struct S { long q; } *h;
+long a, b, g, j, k, *c, *d, *e, *f, *i;
+long *baz (void)
+{
+ asm volatile ("" : : : "memory");
+ return e;
+}
+
+void
+bar (int x)
+{
+ int y;
+ for (y = 0; y < x; y++)
+ {
+ switch (b)
+ {
+ case 0:
+ case 2:
+ a++;
+ break;
+ case 3:
+ a++;
+ break;
+ case 1:
+ a++;
+ }
+ if (d)
+ {
+ f = baz ();
+ g = k++;
+ if (&h->q)
+ {
+ j = *f;
+ h->q = *f;
+ }
+ else
+ i = (long *) (h->q = *f);
+ *c++ = (long) f;
+ e += 6;
+ }
+ else
+ {
+ f = baz ();
+ g = k++;
+ if (&h->q)
+ {
+ j = *f;
+ h->q = *f;
+ }
+ else
+ i = (long *) (h->q = *f);
+ *c++ = (long) f;
+ e += 6;
+ }
+ }
+}
+
+int
+main ()
+{
+ return 0;
+}
--- /dev/null
+/* { dg-require-effective-target vect_simd_clones } */
+/* { dg-additional-options "-fopenmp-simd" } */
+/* { dg-additional-options "-mavx" { target avx_runtime } } */
+
+#pragma omp declare simd linear(y)
+__attribute__((noinline)) int *
+foo (int *x, int y)
+{
+ return x + y;
+}
+
+int a[1024];
+int *b[1024] = { &a[0] };
+
+int
+main ()
+{
+ int i;
+ for (i = 0; i < 1024; i++)
+ b[i] = &a[1023 - i];
+ #pragma omp simd
+ for (i = 0; i < 1024; i++)
+ b[i] = foo (b[i], i);
+ for (i = 0; i < 1024; i++)
+ if (b[i] != &a[1023])
+ __builtin_abort ();
+ return 0;
+}
+
+/* { dg-final { cleanup-tree-dump "vect" } } */
# All done.
dg-finish
-# Copyright (C) 2013-2014 Free Software Foundation, Inc.
-
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with GCC; see the file COPYING3. If not see
-# <http://www.gnu.org/licenses/>.
-
-# GCC testsuite that uses the `dg.exp' driver.
-
-# Exit immediately if this isn't a h8300 target.
-if ![istarget h8300*-*-*] then {
- return
-}
-
-# Load support procs.
-load_lib gcc-dg.exp
-
-# If a testcase doesn't have special options, use these.
-global DEFAULT_CFLAGS
-if ![info exists DEFAULT_CFLAGS] then {
- set DEFAULT_CFLAGS " -ansi -pedantic-errors"
-}
-
-# Initialize `dg'.
-dg-init
-
-# Main loop.
-dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cS\]]] \
- "" $DEFAULT_CFLAGS
-
-# All done.
-dg-finish
{
foo ();
}
-/* Check whether rte is generated for two ISRs. */
-/* { dg-do compile { target h8300-*-* } } */
-/* { dg-options "-O3" } */
-/* { dg-final { scan-assembler-times "rte" 2} } */
-
-extern void foo (void);
-
-#pragma interrupt
-void
-isr1 (void)
-{
- foo ();
-}
-
-#pragma interrupt
-void
-isr2 (void)
-{
- foo ();
-}
{
return 0;
}
-/* Check whether rte is generated only for an ISR. */
-/* { dg-do compile { target h8300-*-* } } */
-/* { dg-options "-O" } */
-/* { dg-final { scan-assembler-times "rte" 1 } } */
-
-#pragma interrupt
-void
-isr (void)
-{
-}
-
-void
-delay (int a)
-{
-}
-
-int
-main (void)
-{
- return 0;
-}
--- /dev/null
+/* PR target/64200 */
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=atom -mmemcpy-strategy=libcall:-1:align -minline-stringops-dynamically" } */
+
+#include <stdarg.h>
+
+extern void bar(char *x);
+
+void foo (int size, ...)
+{
+ struct
+ {
+ char x[size];
+ } d;
+
+ va_list ap;
+ va_start(ap, size);
+ d = va_arg(ap, typeof (d));
+ va_end(ap);
+ bar(d.x);
+}
--- /dev/null
+/* PR target/64108 */
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=atom -mmemset-strategy=libcall:-1:align -minline-all-stringops" } */
+
+char a[2048];
+void t (void)
+{
+ __builtin_memset (a, 1, 2048);
+}
+
/* PR rtl-optimization/57003 */
-/* { dg-do run } */
+/* { dg-do run { target { ! x32 } } } */
/* { dg-options "-O2" } */
#define N 2001
/* PR target/59927 */
-/* { dg-do compile } */
+/* { dg-do compile { target { ! x32 } } } */
/* { dg-options "-O2 -g" } */
extern void baz (int) __attribute__ ((__ms_abi__));
/* PR target/60516 */
-/* { dg-do compile } */
+/* { dg-do compile { target { ! x32 } } } */
/* { dg-options "-O2" } */
struct S { char c[65536]; };
--- /dev/null
+/* { dg-do compile { target { ! { ia32 } } } } */
+/* { dg-require-effective-target maybe_x32 } */
+/* { dg-options "-O0 -mx32" } */
+
+int a;
+int* __attribute__ ((ms_abi)) fn1 () { return &a; } /* { dg-error "X32 does not support ms_abi attribute" } */
--- /dev/null
+/* { dg-do compile { target { powerpc64le-*-* } } } */
+/* { dg-options "-mcpu=power8 -O0" } */
+
+/* Test that a number of newly added builtin overloads are accepted
+ by the compiler. */
+
+#include <altivec.h>
+
+vector double y = { 2.0, 4.0 };
+vector double z;
+
+int main ()
+{
+ vector float fa = {1.0, 2.0, 3.0, -4.0};
+ vector float fb = {-2.0, -3.0, -4.0, -5.0};
+ vector float fc = vec_cpsgn (fa, fb);
+
+ vector long long la = {5L, 14L};
+ vector long long lb = {3L, 86L};
+ vector long long lc = vec_and (la, lb);
+ vector bool long long ld = {0, -1};
+ vector long long le = vec_and (la, ld);
+ vector long long lf = vec_and (ld, lb);
+
+ vector unsigned long long ua = {5L, 14L};
+ vector unsigned long long ub = {3L, 86L};
+ vector unsigned long long uc = vec_and (ua, ub);
+ vector bool long long ud = {0, -1};
+ vector unsigned long long ue = vec_and (ua, ud);
+ vector unsigned long long uf = vec_and (ud, ub);
+
+ vector long long lg = vec_andc (la, lb);
+ vector long long lh = vec_andc (la, ld);
+ vector long long li = vec_andc (ld, lb);
+
+ vector unsigned long long ug = vec_andc (ua, ub);
+ vector unsigned long long uh = vec_andc (ua, ud);
+ vector unsigned long long ui = vec_andc (ud, ub);
+
+ vector double da = {1.0, -4.0};
+ vector double db = {-2.0, 5.0};
+ vector double dc = vec_cpsgn (da, db);
+
+ vector long long lj = vec_mergeh (la, lb);
+ vector long long lk = vec_mergeh (la, ld);
+ vector long long ll = vec_mergeh (ld, la);
+
+ vector unsigned long long uj = vec_mergeh (ua, ub);
+ vector unsigned long long uk = vec_mergeh (ua, ud);
+ vector unsigned long long ul = vec_mergeh (ud, ua);
+
+ vector long long lm = vec_mergel (la, lb);
+ vector long long ln = vec_mergel (la, ld);
+ vector long long lo = vec_mergel (ld, la);
+
+ vector unsigned long long um = vec_mergel (ua, ub);
+ vector unsigned long long un = vec_mergel (ua, ud);
+ vector unsigned long long uo = vec_mergel (ud, ua);
+
+ vector long long lp = vec_nor (la, lb);
+ vector long long lq = vec_nor (la, ld);
+ vector long long lr = vec_nor (ld, la);
+
+ vector unsigned long long up = vec_nor (ua, ub);
+ vector unsigned long long uq = vec_nor (ua, ud);
+ vector unsigned long long ur = vec_nor (ud, ua);
+
+ vector long long ls = vec_or (la, lb);
+ vector long long lt = vec_or (la, ld);
+ vector long long lu = vec_or (ld, la);
+
+ vector unsigned long long us = vec_or (ua, ub);
+ vector unsigned long long ut = vec_or (ua, ud);
+ vector unsigned long long uu = vec_or (ud, ua);
+
+ vector unsigned char ca = {0,4,8,1,5,9,2,6,10,3,7,11,15,12,14,13};
+ vector long long lv = vec_perm (la, lb, ca);
+ vector unsigned long long uv = vec_perm (ua, ub, ca);
+
+ vector long long lw = vec_sel (la, lb, lc);
+ vector long long lx = vec_sel (la, lb, uc);
+ vector long long ly = vec_sel (la, lb, ld);
+
+ vector unsigned long long uw = vec_sel (ua, ub, lc);
+ vector unsigned long long ux = vec_sel (ua, ub, uc);
+ vector unsigned long long uy = vec_sel (ua, ub, ld);
+
+ vector long long lz = vec_xor (la, lb);
+ vector long long l0 = vec_xor (la, ld);
+ vector long long l1 = vec_xor (ld, la);
+
+ vector unsigned long long uz = vec_xor (ua, ub);
+ vector unsigned long long u0 = vec_xor (ua, ud);
+ vector unsigned long long u1 = vec_xor (ud, ua);
+
+ int ia = vec_all_eq (ua, ub);
+ int ib = vec_all_ge (ua, ub);
+ int ic = vec_all_gt (ua, ub);
+ int id = vec_all_le (ua, ub);
+ int ie = vec_all_lt (ua, ub);
+ int ig = vec_all_ne (ua, ub);
+
+ int ih = vec_any_eq (ua, ub);
+ int ii = vec_any_ge (ua, ub);
+ int ij = vec_any_gt (ua, ub);
+ int ik = vec_any_le (ua, ub);
+ int il = vec_any_lt (ua, ub);
+ int im = vec_any_ne (ua, ub);
+
+ vector int sia = {9, 16, 25, 36};
+ vector int sib = {-8, -27, -64, -125};
+ vector int sic = vec_mergee (sia, sib);
+ vector int sid = vec_mergeo (sia, sib);
+
+ vector unsigned int uia = {9, 16, 25, 36};
+ vector unsigned int uib = {8, 27, 64, 125};
+ vector unsigned int uic = vec_mergee (uia, uib);
+ vector unsigned int uid = vec_mergeo (uia, uib);
+
+ vector bool int bia = {0, -1, -1, 0};
+ vector bool int bib = {-1, -1, 0, -1};
+ vector bool int bic = vec_mergee (bia, bib);
+ vector bool int bid = vec_mergeo (bia, bib);
+
+ vector unsigned int uie = vec_packsu (ua, ub);
+
+ vector long long l2 = vec_cntlz (la);
+ vector unsigned long long u2 = vec_cntlz (ua);
+ vector int sie = vec_cntlz (sia);
+ vector unsigned int uif = vec_cntlz (uia);
+ vector short ssa = {20, -40, -60, 80, 100, -120, -140, 160};
+ vector short ssb = vec_cntlz (ssa);
+ vector unsigned short usa = {81, 72, 63, 54, 45, 36, 27, 18};
+ vector unsigned short usb = vec_cntlz (usa);
+ vector signed char sca = {-4, 3, -9, 15, -31, 31, 0, 0,
+ 1, 117, -36, 99, 98, 97, 96, 95};
+ vector signed char scb = vec_cntlz (sca);
+ vector unsigned char cb = vec_cntlz (ca);
+
+ vector double dd = vec_xl (0, &y);
+ vec_xst (dd, 0, &z);
+
+ vector double de = vec_round (dd);
+
+ vector double df = vec_splat (de, 0);
+ vector double dg = vec_splat (de, 1);
+ vector long long l3 = vec_splat (l2, 0);
+ vector long long l4 = vec_splat (l2, 1);
+ vector unsigned long long u3 = vec_splat (u2, 0);
+ vector unsigned long long u4 = vec_splat (u2, 1);
+ vector bool long long l5 = vec_splat (ld, 0);
+ vector bool long long l6 = vec_splat (ld, 1);
+
+ vector long long l7 = vec_div (l3, l4);
+ vector unsigned long long u5 = vec_div (u3, u4);
+
+ vector long long l8 = vec_mul (l3, l4);
+ vector unsigned long long u6 = vec_mul (u3, u4);
+
+ vector double dh = vec_ctf (la, -2);
+ vector double di = vec_ctf (ua, 2);
+ vector long long l9 = vec_cts (dh, -2);
+ vector unsigned long long u7 = vec_ctu (di, 2);
+
+ return 0;
+}
--- /dev/null
+/* { dg-do run { target { powerpc64le-*-* } } } */
+/* { dg-options "-mcpu=power8" } */
+
+#include <altivec.h>
+
+void abort (void);
+
+int main ()
+{
+ vector long long sa = {27L, -14L};
+ vector long long sb = {-9L, -2L};
+
+ vector unsigned long long ua = {27L, 14L};
+ vector unsigned long long ub = {9L, 2L};
+
+ vector long long sc = vec_div (sa, sb);
+ vector unsigned long long uc = vec_div (ua, ub);
+
+ if (sc[0] != -3L || sc[1] != 7L || uc[0] != 3L || uc[1] != 7L)
+ abort ();
+
+ vector long long sd = vec_mul (sa, sb);
+ vector unsigned long long ud = vec_mul (ua, ub);
+
+ if (sd[0] != -243L || sd[1] != 28L || ud[0] != 243L || ud[1] != 28L)
+ abort ();
+
+ vector long long se = vec_splat (sa, 0);
+ vector long long sf = vec_splat (sa, 1);
+ vector unsigned long long ue = vec_splat (ua, 0);
+ vector unsigned long long uf = vec_splat (ua, 1);
+
+ if (se[0] != 27L || se[1] != 27L || sf[0] != -14L || sf[1] != -14L
+ || ue[0] != 27L || ue[1] != 27L || uf[0] != 14L || uf[1] != 14L)
+ abort ();
+
+ vector double da = vec_ctf (sa, -2);
+ vector double db = vec_ctf (ua, 2);
+ vector long long sg = vec_cts (da, -2);
+ vector unsigned long long ug = vec_ctu (db, 2);
+
+ if (da[0] != 108.0 || da[1] != -56.0 || db[0] != 6.75 || db[1] != 3.5
+ || sg[0] != 27L || sg[1] != -14L || ug[0] != 27L || ug[1] != 14L)
+ abort ();
+
+ return 0;
+}
--- /dev/null
+/* { dg-do compile { target { powerpc*-*-* && ilp32 } } } */
+/* { dg-options "-O2 -mpowerpc64" } */
+
+/*
+ * (below is inlined and simplified from previously included headers)
+ */
+
+struct fltcom_st {
+ short fltbuf[950];
+} fltcom_ __attribute__((common)) ;
+#define CM_PLIBOR (*(((double *)&fltcom_ + 1)))
+#define CM_QMRG (*(((double *)&fltcom_ + 2)))
+
+struct fltcom2_st {
+ short fltbuf2[56];
+} fltcom2_ __attribute__((common)) ;
+#define CM_FLPRV ((short *)&fltcom2_ + 17)
+#define CM_FLNXT ((short *)&fltcom2_ + 20)
+#define CM_FLCPN (*(((double *)&fltcom2_)))
+#define CM_FLCNT (*(((short *)&fltcom2_ + 12)))
+
+struct aidatcm_st {
+ double cm_aid, cm_ext, cm_basis;
+ short cm_aiday, cm_exday, cm_dperd, cm_aiexf, cm_aidex, cm_aiok,
+ cm_aigdo, cm_aildo, cm_prev[3], cm_next[3], cm_aid_pad[2];
+ double cm_rvgfact, cm_ai1st, cm_ai2nd;
+ int cm_aieurok;
+} aidatcm_ __attribute__((common)) ;
+#define CM_EXDAY aidatcm_.cm_exday
+#define CM_BASIS aidatcm_.cm_basis
+#define CM_PREV aidatcm_.cm_prev
+
+struct cshfcm_st {
+ short bufff[10862];
+} cshfcm_ __attribute__((common)) ;
+#define CM_FNUM (*(((short *)&cshfcm_ + 9038)))
+#define CM_FIFLX ((double *)&cshfcm_ + 1)
+#define CM_FEXTX ((double *)&cshfcm_ + 1201)
+#define CM_FSHDT ((short *)&cshfcm_ + 7230)
+
+struct calctsdb_st {
+ short calctsdbbuff[115];
+} calctsdb_ __attribute__((common)) ;
+#define CM_CTUP_GOOD_TO_GO (*(((short *)&calctsdb_ + 16)))
+#define CM_PAYMENT_FREQUENCY (*(((short *)&calctsdb_ + 61)))
+#define CM_DISCOUNTING_DAYTYP (*(((short *)&calctsdb_ + 59)))
+
+struct cf600cm_st {
+ short bufcf[14404];
+} cf600cm_ __attribute__((common)) ;
+#define CM_FLT_RFIXRATES ((double *)&cf600cm_ + 600)
+
+typedef struct { int id; int type; const char *name; } bregdb_bitinfo_t;
+
+int
+bregdb_eval_bbitcxt_bool_rv(const bregdb_bitinfo_t * const bbit,
+ const int bbit_default,
+ const void * const bregucxt);
+
+static const bregdb_bitinfo_t bbit_calc_dr_d33 =
+ { 160667, 5, "bbit_calc_dr_d33" };
+#define bbit_calc_dr_d33__value() \
+ bregdb_eval_bbitcxt_bool_rv(&bbit_calc_dr_d33, 0, 0)
+static const bregdb_bitinfo_t bbit_calc_sx_b24 =
+ { 158854, 5, "bbit_calc_sx_b24" };
+#define bbit_calc_sx_b24__value() \
+ bregdb_eval_bbitcxt_bool_rv(&bbit_calc_sx_b24, 0, 0)
+static const bregdb_bitinfo_t bbit_calc_dr_d36 =
+ { 161244, 5, "bbit_calc_dr_d36" };
+#define bbit_calc_dr_d36__value() \
+ bregdb_eval_bbitcxt_bool_rv(&bbit_calc_dr_d36, 0, 0)
+static const bregdb_bitinfo_t bbit_calc_dr_d37 =
+ { 161315, 5, "bbit_calc_dr_d37" };
+#define bbit_calc_dr_d37__value() \
+ bregdb_eval_bbitcxt_bool_rv(&bbit_calc_dr_d37, 0, 0)
+static const bregdb_bitinfo_t bbit_calc_dr_d47 =
+ { 163259, 5, "bbit_calc_dr_d47" };
+#define bbit_calc_dr_d47__value() \
+ bregdb_eval_bbitcxt_bool_rv(&bbit_calc_dr_d47, 0, 0)
+static const bregdb_bitinfo_t bbit_calc_dr_d46 =
+ { 163239, 5, "bbit_calc_dr_d46" };
+#define bbit_calc_dr_d46__value() \
+ bregdb_eval_bbitcxt_bool_rv(&bbit_calc_dr_d46, 0, 0)
+static const bregdb_bitinfo_t bbit_calc_dr_d62 =
+ { 166603, 5, "bbit_calc_dr_d62" };
+#define bbit_calc_dr_d62__value() \
+ bregdb_eval_bbitcxt_bool_rv(&bbit_calc_dr_d62, 0, 0)
+
+
+
+int dtyp_is_actact_(short *daytyp);
+double rnd_trunc_numb(double in, short num_digits, short rnd_or_trunc);
+void datetrn_(const short* dt, short* dt2);
+short difday_(short* daytyp_in, short* srtdti, short* enddti, short* ercode);
+
+
+double pow(double x, double y);
+
+
+/*
+ * (above is inlined and simplified from previously included headers)
+ */
+
+
+void calc_1566(
+ short sCalcType,
+ short sDayType,
+ short sFreq,
+ short asSettleDt[3],
+ short asMtyDt[3],
+ short asIssueDt[3],
+ short asFCpnDt[3],
+ double dCpn,
+ short *psNoPer,
+ double *pdExt,
+ double *pdAI,
+ double *pdAI2,
+ double *pdFCpn,
+ short *psRcode)
+{
+
+ short ercode = 0;
+ int isactact;
+ short days_to_next_cpn = 0;
+ const short discDaytype = CM_DISCOUNTING_DAYTYP;
+ int j;
+
+ if(bbit_calc_sx_b24__value())
+ isactact = (dtyp_is_actact_(&sDayType) != 0);
+ else
+ isactact = (sDayType == 1 || sDayType == 10);
+
+ short days_in_current_period = difday_(&sDayType,CM_FLPRV,CM_FLNXT,&ercode);
+ const short sfreq1 = (CM_CTUP_GOOD_TO_GO == 1 && CM_PAYMENT_FREQUENCY == 1);
+
+ for (j = 0; j < CM_FNUM; j++) {
+
+ if(j == 0) {
+ days_to_next_cpn = difday_(&sDayType,asSettleDt,CM_FLNXT,&ercode);
+
+ if(isactact) {
+ CM_FIFLX[j] = CM_FLCPN / sFreq;
+ CM_FEXTX[j] = (double)days_to_next_cpn / (double)days_in_current_period;
+ }
+ else {
+ CM_FIFLX[j] = CM_FLCPN * days_in_current_period;
+ CM_FEXTX[j] = (double)days_to_next_cpn / (double)(1/sfreq1);
+ }
+
+ if(CM_FNUM == 1) {
+ CM_FEXTX[j] = (double)days_to_next_cpn / ((double)1/sfreq1);
+ }
+ }
+ else {
+
+ short days_from_settle, days_in_period;
+
+ if(bbit_calc_dr_d46__value()){
+ days_from_settle = difday_(&sDayType,asSettleDt,
+ &CM_FSHDT[j*3],&ercode);
+ days_in_period = difday_(&sDayType,&CM_FSHDT[(j-1)*3],
+ &CM_FSHDT[j*3],&ercode);
+ }
+
+ double cpn_rate = CM_PLIBOR;
+
+ if(bbit_calc_dr_d62__value()) {
+ if(j < CM_FLCNT && CM_FLT_RFIXRATES[j] != 0) cpn_rate = CM_FLT_RFIXRATES[j];
+ }
+ else {
+ if(j < CM_FLCNT ) cpn_rate = CM_FLT_RFIXRATES[j];
+ }
+
+ if(bbit_calc_dr_d37__value()&& j >= CM_FLCNT && sCalcType == 1570) {
+ cpn_rate = CM_PLIBOR + CM_QMRG;
+
+ if(bbit_calc_dr_d36__value()){
+ double projected_rate = pow((1 + CM_PLIBOR/100.0),
+ (days_in_period)) - 1;
+
+ projected_rate = projected_rate + CM_QMRG/100.0 * days_in_period;
+ cpn_rate = 100 * projected_rate * (1/days_in_period);
+ }
+ }
+
+
+ if(isactact) {
+ CM_FIFLX[j] = cpn_rate / sFreq;
+ CM_FEXTX[j] = CM_FEXTX[j-1] + 1;
+
+ if(bbit_calc_dr_d46__value() && discDaytype != 0) {
+ CM_FEXTX[j] = (double)days_from_settle / (double)(1/sfreq1);
+ }
+ }
+ else {
+ if(!bbit_calc_dr_d46__value()){
+ days_from_settle = difday_(&sDayType,asSettleDt,
+ &CM_FSHDT[j*3],&ercode);
+ days_in_period = difday_(&sDayType,&CM_FSHDT[(j-1)*3],
+ &CM_FSHDT[j*3],&ercode);
+
+ }
+
+ CM_FIFLX[j] = cpn_rate * days_in_period;
+ CM_FEXTX[j] = (double)days_from_settle / (double)(1/sfreq1);
+ }
+
+ }
+
+ if(bbit_calc_dr_d33__value() && CM_CTUP_GOOD_TO_GO != 0) {
+ CM_FIFLX[j] = rnd_trunc_numb (CM_FIFLX[j], 0, 0);
+ }
+
+ }
+
+
+ short accrued_days = difday_(&sDayType,CM_FLPRV,asSettleDt,&ercode);
+
+ if(!bbit_calc_dr_d47__value()) {
+ if(isactact) {
+ *pdAI = (CM_FLCPN / sFreq)* accrued_days / ((double)days_in_current_period);
+ }
+ else{
+ *pdAI = (CM_FLCPN / sFreq)* accrued_days / ((double)1/sFreq);
+ }
+ }
+
+ CM_EXDAY = days_to_next_cpn;
+ CM_BASIS = days_in_current_period;
+ datetrn_(CM_FLPRV,CM_PREV);
+}
--- /dev/null
+/* Check that __builtin_strncmp returns 0 with a
+   non-constant zero length.  */
+/* { dg-do run } */
+/* { dg-options "-O2" } */
+
+extern int snprintf(char *, int, const char *, ...);
+extern void abort (void);
+
+int main()
+ {
+ int i;
+ int cmp = 0;
+ char buffer[1024];
+ const char* s = "the string";
+
+ snprintf(buffer, 4, "%s", s);
+
+ for (i = 1; i < 4; i++)
+ cmp += __builtin_strncmp(buffer, s, i - 1);
+
+ if (cmp)
+ abort();
+
+ return 0;
+}
--- /dev/null
+! { dg-do run }
+! { dg-options "-Warray-temporaries" }
+! PR 56867 - substrings were not checked for dependency.
+program main
+ character(len=4) :: a
+ character(len=4) :: c(3)
+ c(1) = 'abcd'
+ c(2) = '1234'
+ c(3) = 'wxyz'
+ c(:)(1:2) = c(2)(2:3) ! { dg-warning "array temporary" }
+ if (c(3) .ne. '23yz') call abort
+end program main
--- /dev/null
+! { dg-do compile }
+!
+! PR 64244: [4.8/4.9/5 Regression] ICE at class.c:236 when using non_overridable
+!
+! Contributed by Ondřej Čertík <ondrej.certik@gmail.com>
+
+module m
+ implicit none
+
+ type :: A
+ contains
+ generic :: f => g
+ procedure, non_overridable :: g
+ end type
+
+contains
+
+ subroutine g(this)
+ class(A), intent(in) :: this
+ end subroutine
+
+end module
+
+
+program test_non_overridable
+ use m, only: A
+ implicit none
+ class(A), allocatable :: h
+ call h%f()
+end
--- /dev/null
+! { dg-do run }
+!
+! PR 63733: [4.8/4.9/5 Regression] [OOP] wrong resolution for OPERATOR generics
+!
+! Original test case from Alberto F. Martín Huertas <amartin@cimne.upc.edu>
+! Slightly modified by Salvatore Filippone <sfilippone@uniroma2.it>
+! Further modified by Janus Weil <janus@gcc.gnu.org>
+
+module overwrite
+ type parent
+ contains
+ procedure :: sum => sum_parent
+ generic :: operator(+) => sum
+ end type
+
+ type, extends(parent) :: child
+ contains
+ procedure :: sum => sum_child
+ end type
+
+contains
+
+ integer function sum_parent(op1,op2)
+ implicit none
+ class(parent), intent(in) :: op1, op2
+ sum_parent = 0
+ end function
+
+ integer function sum_child(op1,op2)
+ implicit none
+ class(child) , intent(in) :: op1
+ class(parent), intent(in) :: op2
+ sum_child = 1
+ end function
+
+end module
+
+program drive
+ use overwrite
+ implicit none
+
+ type(parent) :: m1, m2
+ class(parent), pointer :: mres
+ type(child) :: h1, h2
+ class(parent), pointer :: hres
+
+ if (m1 + m2 /= 0) call abort()
+ if (h1 + m2 /= 1) call abort()
+ if (h1%sum(h2) /= 1) call abort()
+
+end
+
+! { dg-final { cleanup-modules "overwrite" } }
if [check_effective_target_ilp32] {
set goarch "ppc"
} else {
- set goarch "ppc64"
+ if [istarget "powerpc64le-*-*"] {
+ set goarch "ppc64le"
+ } else {
+ set goarch "ppc64"
+ }
}
}
"sparc*-*-*" {
use_operand_p use_p;
if (!tree_fits_shwi_p (val2)
- || !tree_fits_uhwi_p (len2))
+ || !tree_fits_uhwi_p (len2)
+ || compare_tree_int (len2, 1024) == 1)
break;
if (is_gimple_call (stmt1))
{
is not constant, or is bigger than memcpy length, bail out. */
if (diff == NULL
|| !tree_fits_uhwi_p (diff)
- || tree_int_cst_lt (len1, diff))
+ || tree_int_cst_lt (len1, diff)
+ || compare_tree_int (diff, 1024) == 1)
break;
/* Use maximum of difference plus memset length and memcpy length
else
return -1;
}
- else if (p->high != NULL_TREE)
+ else if (q->high != NULL_TREE)
return 1;
/* If both ranges are the same, sort below by ascending idx. */
}
switch (DECL_FUNCTION_CODE (fndecl))
{
CASE_FLT_FN (BUILT_IN_POW):
+ if (flag_errno_math)
+ return false;
+
*base = gimple_call_arg (stmt, 0);
arg1 = gimple_call_arg (stmt, 1);
/* changing memory. */
if (gimple_code (stmt) != GIMPLE_PHI)
- if (gimple_vdef (stmt))
+ if (gimple_vdef (stmt)
+ && !gimple_clobber_p (stmt))
{
if (dump_enabled_p ())
dump_printf_loc (MSG_NOTE, vect_location,
static void
instrument_func_entry (void)
{
- basic_block succ_bb;
- gimple_stmt_iterator gsi;
tree ret_addr, builtin_decl;
gimple g;
-
- succ_bb = single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun));
- gsi = gsi_after_labels (succ_bb);
+ gimple_seq seq = NULL;
builtin_decl = builtin_decl_implicit (BUILT_IN_RETURN_ADDRESS);
g = gimple_build_call (builtin_decl, 1, integer_zero_node);
ret_addr = make_ssa_name (ptr_type_node, NULL);
gimple_call_set_lhs (g, ret_addr);
gimple_set_location (g, cfun->function_start_locus);
- gsi_insert_before (&gsi, g, GSI_SAME_STMT);
+ gimple_seq_add_stmt_without_update (&seq, g);
- builtin_decl = builtin_decl_implicit (BUILT_IN_TSAN_FUNC_ENTRY);
+ builtin_decl = builtin_decl_implicit (BUILT_IN_TSAN_FUNC_ENTRY);
g = gimple_build_call (builtin_decl, 1, ret_addr);
gimple_set_location (g, cfun->function_start_locus);
- gsi_insert_before (&gsi, g, GSI_SAME_STMT);
+ gimple_seq_add_stmt_without_update (&seq, g);
+
+ edge e = single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun));
+ gsi_insert_seq_on_edge_immediate (e, seq);
}
/* Instruments function exits. */
+2014-12-09 John David Anglin <danglin@gcc.gnu.org>
+
+ Backport from mainline
+ 2014-11-24 John David Anglin <danglin@gcc.gnu.org>
+
+ * config/pa/linux-atomic.c (ABORT_INSTRUCTION): Remove; use
+ __builtin_trap() instead.
+
+ 2014-11-21 Guy Martin <gmsoft@tuxicoman.be>
+ John David Anglin <danglin@gcc.gnu.org>
+
+ * config/pa/linux-atomic.c (__kernel_cmpxchg2): New.
+ (FETCH_AND_OP_2): New. Use for subword and double word operations.
+ (OP_AND_FETCH_2): Likewise.
+ (COMPARE_AND_SWAP_2): Likewise.
+ (SYNC_LOCK_TEST_AND_SET_2): Likewise.
+ (SYNC_LOCK_RELEASE_2): Likewise.
+ (SUBWORD_SYNC_OP): Remove.
+ (SUBWORD_VAL_CAS): Likewise.
+ (SUBWORD_BOOL_CAS): Likewise.
+ (FETCH_AND_OP_WORD): Update.
+ Consistently use signed types.
+
+2014-12-09 Oleg Endo <olegendo@gcc.gnu.org>
+
+ Backport from mainline
+ 2014-11-30 Oleg Endo <olegendo@gcc.gnu.org>
+
+ PR target/55351
+ * config/sh/lib1funcs.S: Check value of __SHMEDIA__ instead of checking
+ whether it's defined.
+
2014-10-30 Release Manager
* GCC 4.9.2 released.
using the kernel helper defined below. There is no support for
64-bit operations yet. */
-/* A privileged instruction to crash a userspace program with SIGILL. */
-#define ABORT_INSTRUCTION asm ("iitlbp %r0,(%sr0, %r0)")
-
/* Determine kernel LWS function call (0=32-bit, 1=64-bit userspace). */
-#define LWS_CAS (sizeof(unsigned long) == 4 ? 0 : 1)
+#define LWS_CAS (sizeof(long) == 4 ? 0 : 1)
/* Kernel helper for compare-and-exchange a 32-bit value. */
static inline long
: "r1", "r20", "r22", "r23", "r29", "r31", "memory"
);
if (__builtin_expect (lws_errno == -EFAULT || lws_errno == -ENOSYS, 0))
- ABORT_INSTRUCTION;
+ __builtin_trap ();
/* If the kernel LWS call succeeded (lws_errno == 0), lws_ret contains
the old value from memory. If this value is equal to OLDVAL, the
return lws_errno;
}
+static inline long
+__kernel_cmpxchg2 (void * oldval, void * newval, void *mem, int val_size)
+{
+ register unsigned long lws_mem asm("r26") = (unsigned long) (mem);
+ register long lws_ret asm("r28");
+ register long lws_errno asm("r21");
+ register unsigned long lws_old asm("r25") = (unsigned long) oldval;
+ register unsigned long lws_new asm("r24") = (unsigned long) newval;
+ register int lws_size asm("r23") = val_size;
+ asm volatile ( "ble 0xb0(%%sr2, %%r0) \n\t"
+ "ldi %2, %%r20 \n\t"
+ : "=r" (lws_ret), "=r" (lws_errno)
+ : "i" (2), "r" (lws_mem), "r" (lws_old), "r" (lws_new), "r" (lws_size)
+ : "r1", "r20", "r22", "r29", "r31", "fr4", "memory"
+ );
+ if (__builtin_expect (lws_errno == -EFAULT || lws_errno == -ENOSYS, 0))
+ __builtin_trap ();
+
+  /* If the kernel LWS call succeeded but the old value did not match,
+     return -EBUSY.  */
+ if (!lws_errno && lws_ret)
+ lws_errno = -EBUSY;
+
+ return lws_errno;
+}
#define HIDDEN __attribute__ ((visibility ("hidden")))
/* Big endian masks */
#define MASK_1 0xffu
#define MASK_2 0xffffu
+#define FETCH_AND_OP_2(OP, PFX_OP, INF_OP, TYPE, WIDTH, INDEX) \
+ TYPE HIDDEN \
+ __sync_fetch_and_##OP##_##WIDTH (TYPE *ptr, TYPE val) \
+ { \
+ TYPE tmp, newval; \
+ int failure; \
+ \
+ do { \
+ tmp = *ptr; \
+ newval = PFX_OP (tmp INF_OP val); \
+ failure = __kernel_cmpxchg2 (&tmp, &newval, ptr, INDEX); \
+ } while (failure != 0); \
+ \
+ return tmp; \
+ }
+
+FETCH_AND_OP_2 (add, , +, long long, 8, 3)
+FETCH_AND_OP_2 (sub, , -, long long, 8, 3)
+FETCH_AND_OP_2 (or, , |, long long, 8, 3)
+FETCH_AND_OP_2 (and, , &, long long, 8, 3)
+FETCH_AND_OP_2 (xor, , ^, long long, 8, 3)
+FETCH_AND_OP_2 (nand, ~, &, long long, 8, 3)
+
+FETCH_AND_OP_2 (add, , +, short, 2, 1)
+FETCH_AND_OP_2 (sub, , -, short, 2, 1)
+FETCH_AND_OP_2 (or, , |, short, 2, 1)
+FETCH_AND_OP_2 (and, , &, short, 2, 1)
+FETCH_AND_OP_2 (xor, , ^, short, 2, 1)
+FETCH_AND_OP_2 (nand, ~, &, short, 2, 1)
+
+FETCH_AND_OP_2 (add, , +, signed char, 1, 0)
+FETCH_AND_OP_2 (sub, , -, signed char, 1, 0)
+FETCH_AND_OP_2 (or, , |, signed char, 1, 0)
+FETCH_AND_OP_2 (and, , &, signed char, 1, 0)
+FETCH_AND_OP_2 (xor, , ^, signed char, 1, 0)
+FETCH_AND_OP_2 (nand, ~, &, signed char, 1, 0)
+
+#define OP_AND_FETCH_2(OP, PFX_OP, INF_OP, TYPE, WIDTH, INDEX) \
+ TYPE HIDDEN \
+ __sync_##OP##_and_fetch_##WIDTH (TYPE *ptr, TYPE val) \
+ { \
+ TYPE tmp, newval; \
+ int failure; \
+ \
+ do { \
+ tmp = *ptr; \
+ newval = PFX_OP (tmp INF_OP val); \
+ failure = __kernel_cmpxchg2 (&tmp, &newval, ptr, INDEX); \
+ } while (failure != 0); \
+ \
+ return PFX_OP (tmp INF_OP val); \
+ }
+
+OP_AND_FETCH_2 (add, , +, long long, 8, 3)
+OP_AND_FETCH_2 (sub, , -, long long, 8, 3)
+OP_AND_FETCH_2 (or, , |, long long, 8, 3)
+OP_AND_FETCH_2 (and, , &, long long, 8, 3)
+OP_AND_FETCH_2 (xor, , ^, long long, 8, 3)
+OP_AND_FETCH_2 (nand, ~, &, long long, 8, 3)
+
+OP_AND_FETCH_2 (add, , +, short, 2, 1)
+OP_AND_FETCH_2 (sub, , -, short, 2, 1)
+OP_AND_FETCH_2 (or, , |, short, 2, 1)
+OP_AND_FETCH_2 (and, , &, short, 2, 1)
+OP_AND_FETCH_2 (xor, , ^, short, 2, 1)
+OP_AND_FETCH_2 (nand, ~, &, short, 2, 1)
+
+OP_AND_FETCH_2 (add, , +, signed char, 1, 0)
+OP_AND_FETCH_2 (sub, , -, signed char, 1, 0)
+OP_AND_FETCH_2 (or, , |, signed char, 1, 0)
+OP_AND_FETCH_2 (and, , &, signed char, 1, 0)
+OP_AND_FETCH_2 (xor, , ^, signed char, 1, 0)
+OP_AND_FETCH_2 (nand, ~, &, signed char, 1, 0)
+
#define FETCH_AND_OP_WORD(OP, PFX_OP, INF_OP) \
int HIDDEN \
__sync_fetch_and_##OP##_4 (int *ptr, int val) \
FETCH_AND_OP_WORD (xor, , ^)
FETCH_AND_OP_WORD (nand, ~, &)
-#define NAME_oldval(OP, WIDTH) __sync_fetch_and_##OP##_##WIDTH
-#define NAME_newval(OP, WIDTH) __sync_##OP##_and_fetch_##WIDTH
-
-/* Implement both __sync_<op>_and_fetch and __sync_fetch_and_<op> for
- subword-sized quantities. */
-
-#define SUBWORD_SYNC_OP(OP, PFX_OP, INF_OP, TYPE, WIDTH, RETURN) \
- TYPE HIDDEN \
- NAME##_##RETURN (OP, WIDTH) (TYPE *ptr, TYPE val) \
- { \
- int *wordptr = (int *) ((unsigned long) ptr & ~3); \
- unsigned int mask, shift, oldval, newval; \
- int failure; \
- \
- shift = (((unsigned long) ptr & 3) << 3) ^ INVERT_MASK_##WIDTH; \
- mask = MASK_##WIDTH << shift; \
- \
- do { \
- oldval = *wordptr; \
- newval = ((PFX_OP (((oldval & mask) >> shift) \
- INF_OP (unsigned int) val)) << shift) & mask; \
- newval |= oldval & ~mask; \
- failure = __kernel_cmpxchg (oldval, newval, wordptr); \
- } while (failure != 0); \
- \
- return (RETURN & mask) >> shift; \
- }
-
-SUBWORD_SYNC_OP (add, , +, unsigned short, 2, oldval)
-SUBWORD_SYNC_OP (sub, , -, unsigned short, 2, oldval)
-SUBWORD_SYNC_OP (or, , |, unsigned short, 2, oldval)
-SUBWORD_SYNC_OP (and, , &, unsigned short, 2, oldval)
-SUBWORD_SYNC_OP (xor, , ^, unsigned short, 2, oldval)
-SUBWORD_SYNC_OP (nand, ~, &, unsigned short, 2, oldval)
-
-SUBWORD_SYNC_OP (add, , +, unsigned char, 1, oldval)
-SUBWORD_SYNC_OP (sub, , -, unsigned char, 1, oldval)
-SUBWORD_SYNC_OP (or, , |, unsigned char, 1, oldval)
-SUBWORD_SYNC_OP (and, , &, unsigned char, 1, oldval)
-SUBWORD_SYNC_OP (xor, , ^, unsigned char, 1, oldval)
-SUBWORD_SYNC_OP (nand, ~, &, unsigned char, 1, oldval)
-
#define OP_AND_FETCH_WORD(OP, PFX_OP, INF_OP) \
int HIDDEN \
__sync_##OP##_and_fetch_4 (int *ptr, int val) \
OP_AND_FETCH_WORD (xor, , ^)
OP_AND_FETCH_WORD (nand, ~, &)
-SUBWORD_SYNC_OP (add, , +, unsigned short, 2, newval)
-SUBWORD_SYNC_OP (sub, , -, unsigned short, 2, newval)
-SUBWORD_SYNC_OP (or, , |, unsigned short, 2, newval)
-SUBWORD_SYNC_OP (and, , &, unsigned short, 2, newval)
-SUBWORD_SYNC_OP (xor, , ^, unsigned short, 2, newval)
-SUBWORD_SYNC_OP (nand, ~, &, unsigned short, 2, newval)
+typedef unsigned char bool;
+
+#define COMPARE_AND_SWAP_2(TYPE, WIDTH, INDEX) \
+ TYPE HIDDEN \
+ __sync_val_compare_and_swap_##WIDTH (TYPE *ptr, TYPE oldval, \
+ TYPE newval) \
+ { \
+ TYPE actual_oldval; \
+ int fail; \
+ \
+ while (1) \
+ { \
+ actual_oldval = *ptr; \
+ \
+ if (__builtin_expect (oldval != actual_oldval, 0)) \
+ return actual_oldval; \
+ \
+ fail = __kernel_cmpxchg2 (&actual_oldval, &newval, ptr, INDEX); \
+ \
+ if (__builtin_expect (!fail, 1)) \
+ return actual_oldval; \
+ } \
+ } \
+ \
+ bool HIDDEN \
+ __sync_bool_compare_and_swap_##WIDTH (TYPE *ptr, TYPE oldval, \
+ TYPE newval) \
+ { \
+ int failure = __kernel_cmpxchg2 (&oldval, &newval, ptr, INDEX); \
+ return (failure != 0); \
+ }
-SUBWORD_SYNC_OP (add, , +, unsigned char, 1, newval)
-SUBWORD_SYNC_OP (sub, , -, unsigned char, 1, newval)
-SUBWORD_SYNC_OP (or, , |, unsigned char, 1, newval)
-SUBWORD_SYNC_OP (and, , &, unsigned char, 1, newval)
-SUBWORD_SYNC_OP (xor, , ^, unsigned char, 1, newval)
-SUBWORD_SYNC_OP (nand, ~, &, unsigned char, 1, newval)
+COMPARE_AND_SWAP_2 (long long, 8, 3)
+COMPARE_AND_SWAP_2 (short, 2, 1)
+COMPARE_AND_SWAP_2 (char, 1, 0)
int HIDDEN
__sync_val_compare_and_swap_4 (int *ptr, int oldval, int newval)
}
}
-#define SUBWORD_VAL_CAS(TYPE, WIDTH) \
- TYPE HIDDEN \
- __sync_val_compare_and_swap_##WIDTH (TYPE *ptr, TYPE oldval, \
- TYPE newval) \
- { \
- int *wordptr = (int *)((unsigned long) ptr & ~3), fail; \
- unsigned int mask, shift, actual_oldval, actual_newval; \
- \
- shift = (((unsigned long) ptr & 3) << 3) ^ INVERT_MASK_##WIDTH; \
- mask = MASK_##WIDTH << shift; \
- \
- while (1) \
- { \
- actual_oldval = *wordptr; \
- \
- if (__builtin_expect (((actual_oldval & mask) >> shift) \
- != (unsigned int) oldval, 0)) \
- return (actual_oldval & mask) >> shift; \
- \
- actual_newval = (actual_oldval & ~mask) \
- | (((unsigned int) newval << shift) & mask); \
- \
- fail = __kernel_cmpxchg (actual_oldval, actual_newval, \
- wordptr); \
- \
- if (__builtin_expect (!fail, 1)) \
- return (actual_oldval & mask) >> shift; \
- } \
- }
-
-SUBWORD_VAL_CAS (unsigned short, 2)
-SUBWORD_VAL_CAS (unsigned char, 1)
-
-typedef unsigned char bool;
-
bool HIDDEN
__sync_bool_compare_and_swap_4 (int *ptr, int oldval, int newval)
{
return (failure == 0);
}
-#define SUBWORD_BOOL_CAS(TYPE, WIDTH) \
- bool HIDDEN \
- __sync_bool_compare_and_swap_##WIDTH (TYPE *ptr, TYPE oldval, \
- TYPE newval) \
+#define SYNC_LOCK_TEST_AND_SET_2(TYPE, WIDTH, INDEX) \
+TYPE HIDDEN \
+ __sync_lock_test_and_set_##WIDTH (TYPE *ptr, TYPE val) \
{ \
- TYPE actual_oldval \
- = __sync_val_compare_and_swap_##WIDTH (ptr, oldval, newval); \
- return (oldval == actual_oldval); \
+ TYPE oldval; \
+ int failure; \
+ \
+ do { \
+ oldval = *ptr; \
+ failure = __kernel_cmpxchg2 (&oldval, &val, ptr, INDEX); \
+ } while (failure != 0); \
+ \
+ return oldval; \
}
-SUBWORD_BOOL_CAS (unsigned short, 2)
-SUBWORD_BOOL_CAS (unsigned char, 1)
+SYNC_LOCK_TEST_AND_SET_2 (long long, 8, 3)
+SYNC_LOCK_TEST_AND_SET_2 (short, 2, 1)
+SYNC_LOCK_TEST_AND_SET_2 (signed char, 1, 0)
int HIDDEN
__sync_lock_test_and_set_4 (int *ptr, int val)
return oldval;
}
-#define SUBWORD_TEST_AND_SET(TYPE, WIDTH) \
- TYPE HIDDEN \
- __sync_lock_test_and_set_##WIDTH (TYPE *ptr, TYPE val) \
- { \
- int failure; \
- unsigned int oldval, newval, shift, mask; \
- int *wordptr = (int *) ((unsigned long) ptr & ~3); \
- \
- shift = (((unsigned long) ptr & 3) << 3) ^ INVERT_MASK_##WIDTH; \
- mask = MASK_##WIDTH << shift; \
- \
- do { \
- oldval = *wordptr; \
- newval = (oldval & ~mask) \
- | (((unsigned int) val << shift) & mask); \
- failure = __kernel_cmpxchg (oldval, newval, wordptr); \
- } while (failure != 0); \
- \
- return (oldval & mask) >> shift; \
+#define SYNC_LOCK_RELEASE_2(TYPE, WIDTH, INDEX) \
+ void HIDDEN \
+ __sync_lock_release_##WIDTH (TYPE *ptr) \
+ { \
+ TYPE failure, oldval, zero = 0; \
+ \
+ do { \
+ oldval = *ptr; \
+ failure = __kernel_cmpxchg2 (&oldval, &zero, ptr, INDEX); \
+ } while (failure != 0); \
}
-SUBWORD_TEST_AND_SET (unsigned short, 2)
-SUBWORD_TEST_AND_SET (unsigned char, 1)
+SYNC_LOCK_RELEASE_2 (long long, 8, 3)
+SYNC_LOCK_RELEASE_2 (short, 2, 1)
+SYNC_LOCK_RELEASE_2 (signed char, 1, 0)
-#define SYNC_LOCK_RELEASE(TYPE, WIDTH) \
- void HIDDEN \
- __sync_lock_release_##WIDTH (TYPE *ptr) \
- { \
- *ptr = 0; \
- }
+void HIDDEN
+__sync_lock_release_4 (int *ptr)
+{
+ int failure, oldval;
-SYNC_LOCK_RELEASE (int, 4)
-SYNC_LOCK_RELEASE (short, 2)
-SYNC_LOCK_RELEASE (char, 1)
+ do {
+ oldval = *ptr;
+ failure = __kernel_cmpxchg (oldval, 0, ptr);
+ } while (failure != 0);
+}
#endif
ENDFUNC(GLOBAL(sdivsi3_2))
#endif
-#elif defined __SHMEDIA__
+#elif __SHMEDIA__
/* m5compact-nofpu */
// clobbered: r18,r19,r20,r21,r25,tr0,tr1,tr2
.mode SHmedia
add.l r18,r25,r0
blink tr0,r63
#endif
-#elif defined (__SHMEDIA__)
+#elif __SHMEDIA__
/* m5compact-nofpu - more emphasis on code size than on speed, but don't
ignore speed altogether - div1 needs 9 cycles, subc 7 and rotcl 4.
So use a short shmedia loop. */
bnei r25,-32,tr1
add.l r20,r63,r0
blink tr2,r63
-#else /* ! defined (__SHMEDIA__) */
+#else /* ! __SHMEDIA__ */
LOCAL(div8):
div1 r5,r4
LOCAL(div7):
#endif /* L_udivsi3 */
#ifdef L_udivdi3
-#ifdef __SHMEDIA__
+#if __SHMEDIA__
.mode SHmedia
.section .text..SHmedia32,"ax"
.align 2
#endif /* L_udivdi3 */
#ifdef L_divdi3
-#ifdef __SHMEDIA__
+#if __SHMEDIA__
.mode SHmedia
.section .text..SHmedia32,"ax"
.align 2
#endif /* L_divdi3 */
#ifdef L_umoddi3
-#ifdef __SHMEDIA__
+#if __SHMEDIA__
.mode SHmedia
.section .text..SHmedia32,"ax"
.align 2
#endif /* L_umoddi3 */
#ifdef L_moddi3
-#ifdef __SHMEDIA__
+#if __SHMEDIA__
.mode SHmedia
.section .text..SHmedia32,"ax"
.align 2
#ifdef L_div_table
#if __SH5__
-#if defined(__pic__) && defined(__SHMEDIA__)
+#if defined(__pic__) && __SHMEDIA__
.global GLOBAL(sdivsi3)
FUNC(GLOBAL(sdivsi3))
#if __SH5__ == 32
#else /* ! __pic__ || ! __SHMEDIA__ */
.section .rodata
#endif /* __pic__ */
-#if defined(TEXT_DATA_BUG) && defined(__pic__) && defined(__SHMEDIA__)
+#if defined(TEXT_DATA_BUG) && defined(__pic__) && __SHMEDIA__
.balign 2
.type Local_div_table,@object
.size Local_div_table,128
LIBGO_IS_SPARC64_TRUE
LIBGO_IS_SPARC_FALSE
LIBGO_IS_SPARC_TRUE
+LIBGO_IS_PPC64LE_FALSE
+LIBGO_IS_PPC64LE_TRUE
LIBGO_IS_PPC64_FALSE
LIBGO_IS_PPC64_TRUE
LIBGO_IS_PPC_FALSE
lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
lt_status=$lt_dlunknown
cat > conftest.$ac_ext <<_LT_EOF
-#line 11118 "configure"
+#line 11120 "configure"
#include "confdefs.h"
#if HAVE_DLFCN_H
lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
lt_status=$lt_dlunknown
cat > conftest.$ac_ext <<_LT_EOF
-#line 11224 "configure"
+#line 11226 "configure"
#include "confdefs.h"
#if HAVE_DLFCN_H
mips_abi=unknown
is_ppc=no
is_ppc64=no
+is_ppc64le=no
is_sparc=no
is_sparc64=no
is_x86_64=no
if ac_fn_c_try_compile "$LINENO"; then :
is_ppc=yes
else
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+/* end confdefs.h. */
+
+#if defined(_BIG_ENDIAN) || defined(__BIG_ENDIAN__)
+#error 64be
+#endif
+_ACEOF
+if ac_fn_c_try_compile "$LINENO"; then :
+ is_ppc64le=yes
+else
is_ppc64=yes
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+fi
+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
if test "$is_ppc" = "yes"; then
GOARCH=ppc
- else
+ elif test "$is_ppc64" = "yes"; then
GOARCH=ppc64
+ else
+ GOARCH=ppc64le
fi
;;
sparc*-*-*)
LIBGO_IS_PPC64_FALSE=
fi
+ if test $is_ppc64le = yes; then
+ LIBGO_IS_PPC64LE_TRUE=
+ LIBGO_IS_PPC64LE_FALSE='#'
+else
+ LIBGO_IS_PPC64LE_TRUE='#'
+ LIBGO_IS_PPC64LE_FALSE=
+fi
+
if test $is_sparc = yes; then
LIBGO_IS_SPARC_TRUE=
LIBGO_IS_SPARC_FALSE='#'
as_fn_error "conditional \"LIBGO_IS_PPC64\" was never defined.
Usually this means the macro was only invoked conditionally." "$LINENO" 5
fi
+if test -z "${LIBGO_IS_PPC64LE_TRUE}" && test -z "${LIBGO_IS_PPC64LE_FALSE}"; then
+ as_fn_error "conditional \"LIBGO_IS_PPC64LE\" was never defined.
+Usually this means the macro was only invoked conditionally." "$LINENO" 5
+fi
if test -z "${LIBGO_IS_SPARC_TRUE}" && test -z "${LIBGO_IS_SPARC_FALSE}"; then
as_fn_error "conditional \"LIBGO_IS_SPARC\" was never defined.
Usually this means the macro was only invoked conditionally." "$LINENO" 5
mips_abi=unknown
is_ppc=no
is_ppc64=no
+is_ppc64le=no
is_sparc=no
is_sparc64=no
is_x86_64=no
#ifdef _ARCH_PPC64
#error 64-bit
#endif],
-[is_ppc=yes], [is_ppc64=yes])
+[is_ppc=yes],
+ [AC_COMPILE_IFELSE([
+#if defined(_BIG_ENDIAN) || defined(__BIG_ENDIAN__)
+#error 64be
+#endif],
+[is_ppc64le=yes],[is_ppc64=yes])])
if test "$is_ppc" = "yes"; then
GOARCH=ppc
- else
+ elif test "$is_ppc64" = "yes"; then
GOARCH=ppc64
+ else
+ GOARCH=ppc64le
fi
;;
sparc*-*-*)
AM_CONDITIONAL(LIBGO_IS_MIPSO64, test $mips_abi = o64)
AM_CONDITIONAL(LIBGO_IS_PPC, test $is_ppc = yes)
AM_CONDITIONAL(LIBGO_IS_PPC64, test $is_ppc64 = yes)
+AM_CONDITIONAL(LIBGO_IS_PPC64LE, test $is_ppc64le = yes)
AM_CONDITIONAL(LIBGO_IS_SPARC, test $is_sparc = yes)
AM_CONDITIONAL(LIBGO_IS_SPARC64, test $is_sparc64 = yes)
AM_CONDITIONAL(LIBGO_IS_X86_64, test $is_x86_64 = yes)
func (i R_386) String() string { return stringName(uint32(i), r386Strings, false) }
func (i R_386) GoString() string { return stringName(uint32(i), r386Strings, true) }
+// Relocation types for ppc64.
+type R_PPC64 int
+
+const (
+ R_PPC64_NONE R_PPC64 = 0 /* No relocation. */
+ R_PPC64_ADDR32 R_PPC64 = 1
+ R_PPC64_ADDR24 R_PPC64 = 2
+ R_PPC64_ADDR16 R_PPC64 = 3
+ R_PPC64_ADDR16_LO R_PPC64 = 4
+ R_PPC64_ADDR16_HI R_PPC64 = 5
+ R_PPC64_ADDR16_HA R_PPC64 = 6
+ R_PPC64_ADDR14 R_PPC64 = 7
+ R_PPC64_ADDR14_BRTAKEN R_PPC64 = 8
+ R_PPC64_ADDR14_BRNTAKEN R_PPC64 = 9
+ R_PPC64_REL24 R_PPC64 = 10
+ R_PPC64_REL14 R_PPC64 = 11
+ R_PPC64_REL14_BRTAKEN R_PPC64 = 12
+ R_PPC64_REL14_BRNTAKEN R_PPC64 = 13
+ R_PPC64_GOT16 R_PPC64 = 14
+ R_PPC64_GOT16_LO R_PPC64 = 15
+ R_PPC64_GOT16_HI R_PPC64 = 16
+ R_PPC64_GOT16_HA R_PPC64 = 17
+
+ R_PPC64_COPY R_PPC64 = 19
+ R_PPC64_GLOB_DAT R_PPC64 = 20
+ R_PPC64_JMP_SLOT R_PPC64 = 21
+ R_PPC64_RELATIVE R_PPC64 = 22
+
+ R_PPC64_UADDR32 R_PPC64 = 24
+ R_PPC64_UADDR16 R_PPC64 = 25
+ R_PPC64_REL32 R_PPC64 = 26
+ R_PPC64_PLT32 R_PPC64 = 27
+ R_PPC64_PLTREL32 R_PPC64 = 28
+ R_PPC64_PLT16_LO R_PPC64 = 29
+ R_PPC64_PLT16_HI R_PPC64 = 30
+ R_PPC64_PLT16_HA R_PPC64 = 31
+
+ R_PPC64_SECTOFF R_PPC64 = 33
+ R_PPC64_SECTOFF_LO R_PPC64 = 34
+ R_PPC64_SECTOFF_HI R_PPC64 = 35
+ R_PPC64_SECTOFF_HA R_PPC64 = 36
+ R_PPC64_REL30 R_PPC64 = 37
+ R_PPC64_ADDR64 R_PPC64 = 38
+ R_PPC64_ADDR16_HIGHER R_PPC64 = 39
+ R_PPC64_ADDR16_HIGHERA R_PPC64 = 40
+ R_PPC64_ADDR16_HIGHEST R_PPC64 = 41
+ R_PPC64_ADDR16_HIGHESTA R_PPC64 = 42
+ R_PPC64_UADDR64 R_PPC64 = 43
+ R_PPC64_REL64 R_PPC64 = 44
+ R_PPC64_PLT64 R_PPC64 = 45
+ R_PPC64_PLTREL64 R_PPC64 = 46
+ R_PPC64_TOC16 R_PPC64 = 47
+ R_PPC64_TOC16_LO R_PPC64 = 48
+ R_PPC64_TOC16_HI R_PPC64 = 49
+ R_PPC64_TOC16_HA R_PPC64 = 50
+ R_PPC64_TOC R_PPC64 = 51
+ R_PPC64_PLTGOT16 R_PPC64 = 52
+ R_PPC64_PLTGOT16_LO R_PPC64 = 53
+ R_PPC64_PLTGOT16_HI R_PPC64 = 54
+ R_PPC64_PLTGOT16_HA R_PPC64 = 55
+
+ R_PPC64_ADDR16_DS R_PPC64 = 56
+ R_PPC64_ADDR16_LO_DS R_PPC64 = 57
+ R_PPC64_GOT16_DS R_PPC64 = 58
+ R_PPC64_GOT16_LO_DS R_PPC64 = 59
+ R_PPC64_PLT16_LO_DS R_PPC64 = 60
+ R_PPC64_SECTOFF_DS R_PPC64 = 61
+ R_PPC64_SECTOFF_LO_DS R_PPC64 = 62
+ R_PPC64_TOC16_DS R_PPC64 = 63
+ R_PPC64_TOC16_LO_DS R_PPC64 = 64
+ R_PPC64_PLTGOT16_DS R_PPC64 = 65
+ R_PPC64_PLTGOT16_LO_DS R_PPC64 = 66
+
+ R_PPC64_TLS R_PPC64 = 67
+ R_PPC64_DTPMOD64 R_PPC64 = 68
+ R_PPC64_TPREL16 R_PPC64 = 69
+ R_PPC64_TPREL16_LO R_PPC64 = 70
+ R_PPC64_TPREL16_HI R_PPC64 = 71
+ R_PPC64_TPREL16_HA R_PPC64 = 72
+ R_PPC64_TPREL64 R_PPC64 = 73
+ R_PPC64_DTPREL16 R_PPC64 = 74
+ R_PPC64_DTPREL16_LO R_PPC64 = 75
+ R_PPC64_DTPREL16_HI R_PPC64 = 76
+ R_PPC64_DTPREL16_HA R_PPC64 = 77
+ R_PPC64_DTPREL64 R_PPC64 = 78
+ R_PPC64_GOT_TLSGD16 R_PPC64 = 79
+ R_PPC64_GOT_TLSGD16_LO R_PPC64 = 80
+ R_PPC64_GOT_TLSGD16_HI R_PPC64 = 81
+ R_PPC64_GOT_TLSGD16_HA R_PPC64 = 82
+ R_PPC64_GOT_TLSLD16 R_PPC64 = 83
+ R_PPC64_GOT_TLSLD16_LO R_PPC64 = 84
+ R_PPC64_GOT_TLSLD16_HI R_PPC64 = 85
+ R_PPC64_GOT_TLSLD16_HA R_PPC64 = 86
+ R_PPC64_GOT_TPREL16_DS R_PPC64 = 87
+ R_PPC64_GOT_TPREL16_LO_DS R_PPC64 = 88
+ R_PPC64_GOT_TPREL16_HI R_PPC64 = 89
+ R_PPC64_GOT_TPREL16_HA R_PPC64 = 90
+ R_PPC64_GOT_DTPREL16_DS R_PPC64 = 91
+ R_PPC64_GOT_DTPREL16_LO_DS R_PPC64 = 92
+ R_PPC64_GOT_DTPREL16_HI R_PPC64 = 93
+ R_PPC64_GOT_DTPREL16_HA R_PPC64 = 94
+ R_PPC64_TPREL16_DS R_PPC64 = 95
+ R_PPC64_TPREL16_LO_DS R_PPC64 = 96
+ R_PPC64_TPREL16_HIGHER R_PPC64 = 97
+ R_PPC64_TPREL16_HIGHERA R_PPC64 = 98
+ R_PPC64_TPREL16_HIGHEST R_PPC64 = 99
+ R_PPC64_TPREL16_HIGHESTA R_PPC64 = 100
+ R_PPC64_DTPREL16_DS R_PPC64 = 101
+ R_PPC64_DTPREL16_LO_DS R_PPC64 = 102
+ R_PPC64_DTPREL16_HIGHER R_PPC64 = 103
+ R_PPC64_DTPREL16_HIGHERA R_PPC64 = 104
+ R_PPC64_DTPREL16_HIGHEST R_PPC64 = 105
+ R_PPC64_DTPREL16_HIGHESTA R_PPC64 = 106
+
+ R_PPC64_GNU_VTINHERIT R_PPC64 = 253
+ R_PPC64_GNU_VTENTRY R_PPC64 = 254
+)
+
+var rppc64Strings = []intName{
+ {0, "R_PPC64_NONE"},
+ {1, "R_PPC64_ADDR32"},
+ {2, "R_PPC64_ADDR24"},
+ {3, "R_PPC64_ADDR16"},
+ {4, "R_PPC64_ADDR16_LO"},
+ {5, "R_PPC64_ADDR16_HI"},
+ {6, "R_PPC64_ADDR16_HA"},
+ {7, "R_PPC64_ADDR14"},
+ {8, "R_PPC64_ADDR14_BRTAKEN"},
+ {9, "R_PPC64_ADDR14_BRNTAKEN"},
+ {10, "R_PPC64_REL24"},
+ {11, "R_PPC64_REL14"},
+ {12, "R_PPC64_REL14_BRTAKEN"},
+ {13, "R_PPC64_REL14_BRNTAKEN"},
+ {14, "R_PPC64_GOT16"},
+ {15, "R_PPC64_GOT16_LO"},
+ {16, "R_PPC64_GOT16_HI"},
+ {17, "R_PPC64_GOT16_HA"},
+
+ {19, "R_PPC64_COPY"},
+ {20, "R_PPC64_GLOB_DAT"},
+ {21, "R_PPC64_JMP_SLOT"},
+ {22, "R_PPC64_RELATIVE"},
+
+ {24, "R_PPC64_UADDR32"},
+ {25, "R_PPC64_UADDR16"},
+ {26, "R_PPC64_REL32"},
+ {27, "R_PPC64_PLT32"},
+ {28, "R_PPC64_PLTREL32"},
+ {29, "R_PPC64_PLT16_LO"},
+ {30, "R_PPC64_PLT16_HI"},
+ {31, "R_PPC64_PLT16_HA"},
+
+ {33, "R_PPC64_SECTOFF"},
+ {34, "R_PPC64_SECTOFF_LO"},
+ {35, "R_PPC64_SECTOFF_HI"},
+ {36, "R_PPC64_SECTOFF_HA"},
+ {37, "R_PPC64_REL30"},
+ {38, "R_PPC64_ADDR64"},
+ {39, "R_PPC64_ADDR16_HIGHER"},
+ {40, "R_PPC64_ADDR16_HIGHERA"},
+ {41, "R_PPC64_ADDR16_HIGHEST"},
+ {42, "R_PPC64_ADDR16_HIGHESTA"},
+ {43, "R_PPC64_UADDR64"},
+ {44, "R_PPC64_REL64"},
+ {45, "R_PPC64_PLT64"},
+ {46, "R_PPC64_PLTREL64"},
+ {47, "R_PPC64_TOC16"},
+ {48, "R_PPC64_TOC16_LO"},
+ {49, "R_PPC64_TOC16_HI"},
+ {50, "R_PPC64_TOC16_HA"},
+ {51, "R_PPC64_TOC"},
+ {52, "R_PPC64_PLTGOT16"},
+ {53, "R_PPC64_PLTGOT16_LO"},
+ {54, "R_PPC64_PLTGOT16_HI"},
+ {55, "R_PPC64_PLTGOT16_HA"},
+
+ {56, "R_PPC64_ADDR16_DS"},
+ {57, "R_PPC64_ADDR16_LO_DS"},
+ {58, "R_PPC64_GOT16_DS"},
+ {59, "R_PPC64_GOT16_LO_DS"},
+ {60, "R_PPC64_PLT16_LO_DS"},
+ {61, "R_PPC64_SECTOFF_DS"},
+ {62, "R_PPC64_SECTOFF_LO_DS"},
+ {63, "R_PPC64_TOC16_DS"},
+ {64, "R_PPC64_TOC16_LO_DS"},
+ {65, "R_PPC64_PLTGOT16_DS"},
+ {66, "R_PPC64_PLTGOT16_LO_DS"},
+
+ {67, "R_PPC64_TLS"},
+ {68, "R_PPC64_DTPMOD64"},
+ {69, "R_PPC64_TPREL16"},
+ {70, "R_PPC64_TPREL16_LO"},
+ {71, "R_PPC64_TPREL16_HI"},
+ {72, "R_PPC64_TPREL16_HA"},
+ {73, "R_PPC64_TPREL64"},
+ {74, "R_PPC64_DTPREL16"},
+ {75, "R_PPC64_DTPREL16_LO"},
+ {76, "R_PPC64_DTPREL16_HI"},
+ {77, "R_PPC64_DTPREL16_HA"},
+ {78, "R_PPC64_DTPREL64"},
+ {79, "R_PPC64_GOT_TLSGD16"},
+ {80, "R_PPC64_GOT_TLSGD16_LO"},
+ {81, "R_PPC64_GOT_TLSGD16_HI"},
+ {82, "R_PPC64_GOT_TLSGD16_HA"},
+ {83, "R_PPC64_GOT_TLSLD16"},
+ {84, "R_PPC64_GOT_TLSLD16_LO"},
+ {85, "R_PPC64_GOT_TLSLD16_HI"},
+ {86, "R_PPC64_GOT_TLSLD16_HA"},
+ {87, "R_PPC64_GOT_TPREL16_DS"},
+ {88, "R_PPC64_GOT_TPREL16_LO_DS"},
+ {89, "R_PPC64_GOT_TPREL16_HI"},
+ {90, "R_PPC64_GOT_TPREL16_HA"},
+ {91, "R_PPC64_GOT_DTPREL16_DS"},
+ {92, "R_PPC64_GOT_DTPREL16_LO_DS"},
+ {93, "R_PPC64_GOT_DTPREL16_HI"},
+ {94, "R_PPC64_GOT_DTPREL16_HA"},
+ {95, "R_PPC64_TPREL16_DS"},
+ {96, "R_PPC64_TPREL16_LO_DS"},
+ {97, "R_PPC64_TPREL16_HIGHER"},
+ {98, "R_PPC64_TPREL16_HIGHERA"},
+ {99, "R_PPC64_TPREL16_HIGHEST"},
+ {100, "R_PPC64_TPREL16_HIGHESTA"},
+ {101, "R_PPC64_DTPREL16_DS"},
+ {102, "R_PPC64_DTPREL16_LO_DS"},
+ {103, "R_PPC64_DTPREL16_HIGHER"},
+ {104, "R_PPC64_DTPREL16_HIGHERA"},
+ {105, "R_PPC64_DTPREL16_HIGHEST"},
+ {106, "R_PPC64_DTPREL16_HIGHESTA"},
+
+ {253, "R_PPC64_GNU_VTINHERIT"},
+ {254, "R_PPC64_GNU_VTENTRY"},
+}
+
+func (i R_PPC64) String() string { return stringName(uint32(i), rppc64Strings, false) }
+func (i R_PPC64) GoString() string { return stringName(uint32(i), rppc64Strings, true) }
+
// Relocation types for PowerPC.
type R_PPC int
if f.Class == ELFCLASS64 && f.Machine == EM_X86_64 {
return f.applyRelocationsAMD64(dst, rels)
}
+ if f.Class == ELFCLASS64 && f.Machine == EM_PPC64 {
+ return f.applyRelocationsPPC64(dst, rels)
+ }
if f.Class == ELFCLASS64 && f.Machine == EM_AARCH64 {
return f.applyRelocationsARM64(dst, rels)
}
return nil
}
+func (f *File) applyRelocationsPPC64(dst []byte, rels []byte) error {
+ // 24 is the size of Rela64.
+ if len(rels)%24 != 0 {
+ return errors.New("length of relocation section is not a multiple of 24")
+ }
+
+ symbols, _, err := f.getSymbols(SHT_SYMTAB)
+ if err != nil {
+ return err
+ }
+
+ b := bytes.NewBuffer(rels)
+ var rela Rela64
+
+ for b.Len() > 0 {
+ binary.Read(b, f.ByteOrder, &rela)
+ symNo := rela.Info >> 32
+ t := R_PPC64(rela.Info & 0xffff)
+
+ if symNo == 0 || symNo > uint64(len(symbols)) {
+ continue
+ }
+ sym := &symbols[symNo-1]
+
+ switch t {
+ case R_PPC64_ADDR64:
+ if rela.Off+8 > uint64(len(dst)) || rela.Addend < 0 {
+ continue
+ }
+ f.ByteOrder.PutUint64(dst[rela.Off:rela.Off+8], uint64(rela.Addend) + uint64(sym.Value))
+ case R_PPC64_ADDR32:
+ if rela.Off+4 > uint64(len(dst)) || rela.Addend < 0 {
+ continue
+ }
+ f.ByteOrder.PutUint32(dst[rela.Off:rela.Off+4], uint32(rela.Addend) + uint32(sym.Value))
+ }
+ }
+
+ return nil
+}
+
func (f *File) DWARF() (*dwarf.Data, error) {
// There are many other DWARF sections, but these
// are the required ones, and the debug/dwarf package
// If there's a relocation table for .debug_info, we have to process it
// now otherwise the data in .debug_info is invalid for x86-64 objects.
rela := f.Section(".rela.debug_info")
- if rela != nil && rela.Type == SHT_RELA && (f.Machine == EM_X86_64 || f.Machine == EM_AARCH64) {
+ if rela != nil && rela.Type == SHT_RELA && (f.Machine == EM_X86_64 || f.Machine == EM_AARCH64 || f.Machine == EM_PPC64) {
data, err := rela.Data()
if err != nil {
return nil, err
},
},
{
+ "testdata/go-relocation-test-gcc447-ppc64.obj",
+ []relocationTestEntry{
+ {0, &dwarf.Entry{Offset: 0xb, Tag: dwarf.TagCompileUnit, Children: true, Field: []dwarf.Field{dwarf.Field{Attr: dwarf.AttrProducer, Val: "GNU C 4.4.7 20120313 (Red Hat 4.4.7-4)"}, dwarf.Field{Attr: dwarf.AttrLanguage, Val: int64(1)}, dwarf.Field{Attr: dwarf.AttrName, Val: "t.c"}, dwarf.Field{Attr: dwarf.AttrCompDir, Val: "/tmp"}, dwarf.Field{Attr: dwarf.AttrLowpc, Val: uint64(0x0)}, dwarf.Field{Attr: dwarf.AttrHighpc, Val: uint64(0x24)}, dwarf.Field{Attr: dwarf.AttrStmtList, Val: int64(0)}}}},
+ },
+ },
+ {
"testdata/gcc-amd64-openbsd-debug-with-rela.obj",
[]relocationTestEntry{
{203, &dwarf.Entry{Offset: 0xc62, Tag: dwarf.TagMember, Children: false, Field: []dwarf.Field{{Attr: dwarf.AttrName, Val: "it_interval"}, {Attr: dwarf.AttrDeclFile, Val: int64(7)}, {Attr: dwarf.AttrDeclLine, Val: int64(236)}, {Attr: dwarf.AttrType, Val: dwarf.Offset(0xb7f)}, {Attr: dwarf.AttrDataMemberLoc, Val: []byte{0x23, 0x0}}}}},
package build
const goosList = "darwin dragonfly freebsd linux netbsd openbsd plan9 windows solaris "
-const goarchList = "386 amd64 arm arm64 alpha m68k mipso32 mipsn32 mipsn64 mipso64 ppc ppc64 sparc sparc64 "
+const goarchList = "386 amd64 arm arm64 alpha m68k mipso32 mipsn32 mipsn64 mipso64 ppc ppc64 ppc64le sparc sparc64 "
#ifdef TIOCGWINSZ
TIOCGWINSZ_val = TIOCGWINSZ,
#endif
+#ifdef TIOCSWINSZ
+ TIOCSWINSZ_val = TIOCSWINSZ,
+#endif
#ifdef TIOCNOTTY
TIOCNOTTY_val = TIOCNOTTY,
#endif
#ifdef TIOCSIG
TIOCSIG_val = TIOCSIG,
#endif
+#ifdef TCGETS
+ TCGETS_val = TCGETS,
+#endif
+#ifdef TCSETS
+ TCSETS_val = TCSETS,
+#endif
};
EOF
echo 'const TIOCGWINSZ = _TIOCGWINSZ_val' >> ${OUT}
fi
fi
+if ! grep '^const TIOCSWINSZ' ${OUT} >/dev/null 2>&1; then
+ if grep '^const _TIOCSWINSZ_val' ${OUT} >/dev/null 2>&1; then
+ echo 'const TIOCSWINSZ = _TIOCSWINSZ_val' >> ${OUT}
+ fi
+fi
if ! grep '^const TIOCNOTTY' ${OUT} >/dev/null 2>&1; then
if grep '^const _TIOCNOTTY_val' ${OUT} >/dev/null 2>&1; then
echo 'const TIOCNOTTY = _TIOCNOTTY_val' >> ${OUT}
fi
# The ioctl flags for terminal control
-grep '^const _TC[GS]ET' gen-sysinfo.go | \
+grep '^const _TC[GS]ET' gen-sysinfo.go | grep -v _val | \
sed -e 's/^\(const \)_\(TC[GS]ET[^= ]*\)\(.*\)$/\1\2 = _\2/' >> ${OUT}
+if ! grep '^const TCGETS' ${OUT} >/dev/null 2>&1; then
+ if grep '^const _TCGETS_val' ${OUT} >/dev/null 2>&1; then
+ echo 'const TCGETS = _TCGETS_val' >> ${OUT}
+ fi
+fi
+if ! grep '^const TCSETS' ${OUT} >/dev/null 2>&1; then
+ if grep '^const _TCSETS_val' ${OUT} >/dev/null 2>&1; then
+ echo 'const TCSETS = _TCSETS_val' >> ${OUT}
+ fi
+fi
# ioctl constants. Might fall back to 0 if TIOCNXCL is missing, too, but
# needs handling in syscalls.exec.go.
+2015-01-09 Jonathan Wakely <jwakely@redhat.com>
+
+ PR libstdc++/64476
+ * include/bits/stl_uninitialized.h (uninitialized_copy): Fix
+ is_assignable arguments.
+ * testsuite/20_util/specialized_algorithms/uninitialized_copy/64476.cc:
+ New.
+
+2015-01-09 Jonathan Wakely <jwakely@redhat.com>
+
+ PR libstdc++/60966
+ * include/std/future (packaged_task::operator()): Increment the
+ reference count on the shared state until the function returns.
+
+2015-01-09 Tim Shen <timshen@google.com>
+
+ PR libstdc++/64239
+ Backported from mainline
+ 2015-01-09 Tim Shen <timshen@google.com>
+
+ * include/bits/regex.h (match_results<>::swap): Use std::swap
+ instead of swap.
+ * include/bits/regex_compiler.tcc (_Compiler<>::_M_quantifier):
+ Likewise.
+ * testsuite/28_regex/match_results/swap.cc: New testcase.
+
+2014-12-17 Tim Shen <timshen@google.com>
+
+ PR libstdc++/64302
+ PR libstdc++/64303
+ Backported from mainline
+ 2014-12-17 Tim Shen <timshen@google.com>
+
+ * include/bits/regex.h (match_results::cbegin, match_results::cend,
+ regex_token_iterator::regex_token_iterator,
+ regex_token_iterator::_M_normalize_result): Fix match_results cbegin
+ and cend and regex_token_iterator::_M_result invariant.
+ * include/bits/regex.tcc: Fix regex_token_iterator::_M_result invariant.
+ * testsuite/28_regex/iterators/regex_token_iterator/64303.cc: Testcase.
+
+2014-12-13 Tim Shen <timshen@google.com>
+
+ PR libstdc++/64239
+ * include/bits/regex.h (match_results<>::match_results,
+ match_results<>::operator=, match_results<>::position,
+ match_results<>::swap): Fix ctor/assign/swap.
+ * include/bits/regex.tcc: (__regex_algo_impl<>,
+ regex_iterator<>::operator++): Set match_results::_M_begin as
+ "start position".
+ * testsuite/28_regex/iterators/regex_iterator/char/
+ string_position_01.cc: Test cases.
+
+2014-12-09 Jonathan Wakely <jwakely@redhat.com>
+
+ PR libstdc++/64203
+ * include/std/shared_mutex: Fix preprocessor conditions.
+ * testsuite/experimental/feat-cxx14.cc: Check conditions.
+
+2014-12-06 Jonathan Wakely <jwakely@redhat.com>
+
+ PR libstdc++/63840
+ * include/std/functional (function::function(const function&)): Set
+ _M_manager after operations that might throw.
+ * include/tr1/functional (function::function(const function&),
+ function::function(_Functor, _Useless)): Likewise.
+ * testsuite/20_util/function/63840.cc: New.
+ * testsuite/tr1/3_function_objects/function/63840.cc: New.
+
+ PR libstdc++/61947
+ * include/std/tuple (_Head_base): Use allocator_arg_t parameters to
+ disambiguate unary constructors.
+ (_Tuple_impl): Pass allocator_arg_t arguments.
+ * testsuite/20_util/tuple/61947.cc: New.
+ * testsuite/20_util/uses_allocator/cons_neg.cc: Adjust dg-error line.
+
+2014-12-06 Tim Shen <timshen@google.com>
+
+ PR libstdc++/64140
+ Backport from mainline
+ 2014-12-04 Tim Shen <timshen@google.com>
+
+ * include/bits/regex.tcc (regex_iterator<>::operator++): Update
+ prefix.matched after modifying prefix.first.
+ * testsuite/28_regex/iterators/regex_iterator/char/64140.cc: New
+ testcase.
+
2014-12-02 Matthias Klose <doko@ubuntu.com>
PR libstdc++/64103
2014-11-28 Tim Shen <timshen@google.com>
PR libstdc++/63497
- include/bits/regex_executor.tcc (_Executor::_M_dfs,
+ * include/bits/regex_executor.tcc (_Executor::_M_dfs,
 _Executor::_M_word_boundary): Avoid dereferencing _M_current at _M_end
or other invalid position.
*/
explicit
match_results(const _Alloc& __a = _Alloc())
- : _Base_type(__a), _M_in_iterator(false)
+ : _Base_type(__a)
{ }
/**
* @brief Copy constructs a %match_results.
*/
- match_results(const match_results& __rhs)
- : _Base_type(__rhs), _M_in_iterator(false)
- { }
+ match_results(const match_results& __rhs) = default;
/**
* @brief Move constructs a %match_results.
*/
- match_results(match_results&& __rhs) noexcept
- : _Base_type(std::move(__rhs)), _M_in_iterator(false)
- { }
+ match_results(match_results&& __rhs) noexcept = default;
/**
* @brief Assigns rhs to *this.
*/
match_results&
- operator=(const match_results& __rhs)
- {
- match_results(__rhs).swap(*this);
- return *this;
- }
+ operator=(const match_results& __rhs) = default;
/**
* @brief Move-assigns rhs to *this.
*/
match_results&
- operator=(match_results&& __rhs)
- {
- match_results(std::move(__rhs)).swap(*this);
- return *this;
- }
+ operator=(match_results&& __rhs) = default;
/**
* @brief Destroys a %match_results object.
difference_type
position(size_type __sub = 0) const
{
- // [28.12.1.4.5]
- if (_M_in_iterator)
- return __sub < size() ? std::distance(_M_begin,
- (*this)[__sub].first) : -1;
- else
- return __sub < size() ? std::distance(this->prefix().first,
- (*this)[__sub].first) : -1;
+ return __sub < size() ? std::distance(_M_begin,
+ (*this)[__sub].first) : -1;
}
/**
*/
const_iterator
cbegin() const
- { return _Base_type::cbegin() + 2; }
+ { return this->begin(); }
/**
* @brief Gets an iterator to one-past-the-end of the collection.
*/
const_iterator
cend() const
- { return _Base_type::cend(); }
+ { return this->end(); }
//@}
*/
void
swap(match_results& __that)
- { _Base_type::swap(__that); }
+ {
+ using std::swap;
+ _Base_type::swap(__that);
+ swap(_M_begin, __that._M_begin);
+ }
//@}
private:
regex_constants::match_flag_type __m
= regex_constants::match_default)
: _M_position(__a, __b, __re, __m),
- _M_subs(__submatches, *(&__submatches+1)), _M_n(0)
+ _M_subs(__submatches, __submatches + _Nm), _M_n(0)
{ _M_init(__a, __b); }
/**
*/
regex_token_iterator(const regex_token_iterator& __rhs)
: _M_position(__rhs._M_position), _M_subs(__rhs._M_subs),
- _M_suffix(__rhs._M_suffix), _M_n(__rhs._M_n), _M_result(__rhs._M_result),
- _M_has_m1(__rhs._M_has_m1)
- {
- if (__rhs._M_result == &__rhs._M_suffix)
- _M_result = &_M_suffix;
- }
+ _M_suffix(__rhs._M_suffix), _M_n(__rhs._M_n), _M_has_m1(__rhs._M_has_m1)
+ { _M_normalize_result(); }
/**
* @brief Assigns a %regex_token_iterator to another.
_M_end_of_seq() const
{ return _M_result == nullptr; }
+ // [28.12.2.2.4]
+ void
+ _M_normalize_result()
+ {
+ if (_M_position != _Position())
+ _M_result = &_M_current_match();
+ else if (_M_has_m1)
+ _M_result = &_M_suffix;
+ else
+ _M_result = nullptr;
+ }
+
_Position _M_position;
std::vector<int> _M_subs;
value_type _M_suffix;
return false;
typename match_results<_BiIter, _Alloc>::_Base_type& __res = __m;
+ __m._M_begin = __s;
__res.resize(__re._M_automaton->_M_sub_count() + 2);
for (auto& __it : __res)
__it.matched = false;
| regex_constants::match_continuous))
{
_GLIBCXX_DEBUG_ASSERT(_M_match[0].matched);
- _M_match.at(_M_match.size()).first = __prefix_first;
- _M_match._M_in_iterator = true;
+ auto& __prefix = _M_match.at(_M_match.size());
+ __prefix.first = __prefix_first;
+ __prefix.matched = __prefix.first != __prefix.second;
+ // [28.12.1.4.5]
_M_match._M_begin = _M_begin;
return *this;
}
if (regex_search(__start, _M_end, _M_match, *_M_pregex, _M_flags))
{
_GLIBCXX_DEBUG_ASSERT(_M_match[0].matched);
- _M_match.at(_M_match.size()).first = __prefix_first;
- _M_match._M_in_iterator = true;
+ auto& __prefix = _M_match.at(_M_match.size());
+ __prefix.first = __prefix_first;
+ __prefix.matched = __prefix.first != __prefix.second;
+ // [28.12.1.4.5]
_M_match._M_begin = _M_begin;
}
else
_M_position = __rhs._M_position;
_M_subs = __rhs._M_subs;
_M_n = __rhs._M_n;
- _M_result = __rhs._M_result;
_M_suffix = __rhs._M_suffix;
_M_has_m1 = __rhs._M_has_m1;
- if (__rhs._M_result == &__rhs._M_suffix)
- _M_result = &_M_suffix;
+ _M_normalize_result();
return *this;
}
{
auto& __tmp = _M_nfa[__stack.top()];
__stack.pop();
- swap(__tmp._M_next, __tmp._M_alt);
+ std::swap(__tmp._M_next, __tmp._M_alt);
}
}
_M_stack.push(__e);
const bool __assignable = true;
#else
// trivial types can have deleted assignment
- typedef typename iterator_traits<_InputIterator>::reference _RefType;
- const bool __assignable = is_assignable<_ValueType1, _RefType>::value;
+ typedef typename iterator_traits<_InputIterator>::reference _RefType1;
+ typedef typename iterator_traits<_ForwardIterator>::reference _RefType2;
+ const bool __assignable = is_assignable<_RefType2, _RefType1>::value;
#endif
return std::__uninitialized_copy<__is_trivial(_ValueType1)
{
if (static_cast<bool>(__x))
{
+ __x._M_manager(_M_functor, __x._M_functor, __clone_functor);
_M_invoker = __x._M_invoker;
_M_manager = __x._M_manager;
- __x._M_manager(_M_functor, __x._M_functor, __clone_functor);
}
}
operator()(_ArgTypes... __args)
{
__future_base::_State_base::_S_check(_M_state);
- _M_state->_M_run(std::forward<_ArgTypes>(__args)...);
+ auto __state = _M_state;
+ __state->_M_run(std::forward<_ArgTypes>(__args)...);
}
void
#else
#include <bits/c++config.h>
-#if defined(_GLIBCXX_HAS_GTHREADS) && defined(_GLIBCXX_USE_C99_STDINT_TR1)
-# include <mutex>
-# include <condition_variable>
-#endif
+#include <mutex>
+#include <condition_variable>
#include <bits/functexcept.h>
namespace std _GLIBCXX_VISIBILITY(default)
* @{
*/
-#if defined(_GLIBCXX_HAS_GTHREADS) && defined(_GLIBCXX_USE_C99_STDINT_TR1)
+#ifdef _GLIBCXX_USE_C99_STDINT_TR1
+#ifdef _GLIBCXX_HAS_GTHREADS
#define __cpp_lib_shared_timed_mutex 201402
}
}
};
-#endif // _GLIBCXX_HAS_GTHREADS && _GLIBCXX_USE_C99_STDINT_TR1
+#endif // _GLIBCXX_HAS_GTHREADS
/// shared_lock
template<typename _Mutex>
swap(shared_lock<_Mutex>& __x, shared_lock<_Mutex>& __y) noexcept
{ __x.swap(__y); }
+#endif // _GLIBCXX_USE_C99_STDINT_TR1
+
// @} group mutexes
_GLIBCXX_END_NAMESPACE_VERSION
} // namespace
constexpr _Head_base(const _Head& __h)
: _Head(__h) { }
- template<typename _UHead, typename = typename
- enable_if<!is_convertible<_UHead,
- __uses_alloc_base>::value>::type>
+ constexpr _Head_base(const _Head_base&) = default;
+ constexpr _Head_base(_Head_base&&) = default;
+
+ template<typename _UHead>
constexpr _Head_base(_UHead&& __h)
: _Head(std::forward<_UHead>(__h)) { }
- _Head_base(__uses_alloc0)
+ _Head_base(allocator_arg_t, __uses_alloc0)
: _Head() { }
template<typename _Alloc>
- _Head_base(__uses_alloc1<_Alloc> __a)
+ _Head_base(allocator_arg_t, __uses_alloc1<_Alloc> __a)
: _Head(allocator_arg, *__a._M_a) { }
template<typename _Alloc>
- _Head_base(__uses_alloc2<_Alloc> __a)
+ _Head_base(allocator_arg_t, __uses_alloc2<_Alloc> __a)
: _Head(*__a._M_a) { }
template<typename _UHead>
constexpr _Head_base(const _Head& __h)
: _M_head_impl(__h) { }
- template<typename _UHead, typename = typename
- enable_if<!is_convertible<_UHead,
- __uses_alloc_base>::value>::type>
+ constexpr _Head_base(const _Head_base&) = default;
+ constexpr _Head_base(_Head_base&&) = default;
+
+ template<typename _UHead>
constexpr _Head_base(_UHead&& __h)
: _M_head_impl(std::forward<_UHead>(__h)) { }
- _Head_base(__uses_alloc0)
+ _Head_base(allocator_arg_t, __uses_alloc0)
: _M_head_impl() { }
template<typename _Alloc>
- _Head_base(__uses_alloc1<_Alloc> __a)
+ _Head_base(allocator_arg_t, __uses_alloc1<_Alloc> __a)
: _M_head_impl(allocator_arg, *__a._M_a) { }
template<typename _Alloc>
- _Head_base(__uses_alloc2<_Alloc> __a)
+ _Head_base(allocator_arg_t, __uses_alloc2<_Alloc> __a)
: _M_head_impl(*__a._M_a) { }
template<typename _UHead>
template<typename _Alloc>
_Tuple_impl(allocator_arg_t __tag, const _Alloc& __a)
: _Inherited(__tag, __a),
- _Base(__use_alloc<_Head>(__a)) { }
+ _Base(__tag, __use_alloc<_Head>(__a)) { }
template<typename _Alloc>
_Tuple_impl(allocator_arg_t __tag, const _Alloc& __a,
{
if (static_cast<bool>(__x))
{
+ __x._M_manager(_M_functor, __x._M_functor, __clone_functor);
_M_invoker = __x._M_invoker;
_M_manager = __x._M_manager;
- __x._M_manager(_M_functor, __x._M_functor, __clone_functor);
}
}
if (_My_handler::_M_not_empty_function(__f))
{
+ _My_handler::_M_init_functor(_M_functor, __f);
_M_invoker = &_My_handler::_M_invoke;
_M_manager = &_My_handler::_M_manager;
- _My_handler::_M_init_functor(_M_functor, __f);
}
}
--- /dev/null
+// Copyright (C) 2014 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library. This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3. If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-options "-std=gnu++11" }
+
+#include <functional>
+#include <stdexcept>
+#include <testsuite_hooks.h>
+
+struct functor
+{
+ functor() = default;
+
+ functor(const functor&)
+ {
+ throw std::runtime_error("test");
+ }
+
+ functor(functor&& f) = default;
+
+ void operator()() const { }
+};
+
+
+void
+test01()
+{
+ std::function<void()> f = functor{};
+ try {
+ auto g = f;
+ } catch (const std::runtime_error& e) {
+ return;
+ }
+ VERIFY(false);
+}
+
+int
+main()
+{
+ test01();
+}
--- /dev/null
+// Copyright (C) 2015 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library. This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3. If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-options "-std=gnu++11" }
+
+#include <memory>
+#include <testsuite_hooks.h>
+
+struct X
+{
+ X() = default;
+ X(X const &) = default;
+ X& operator=(X const&) = delete;
+};
+
+static_assert(__is_trivial(X), "X is trivial");
+
+int constructed = 0;
+int assigned = 0;
+
+struct Y
+{
+ Y() = default;
+ Y(Y const &) = default;
+ Y& operator=(Y const&) = default;
+
+ Y(const X&) { ++constructed; }
+ Y& operator=(const X&)& { ++assigned; return *this; }
+ Y& operator=(const X&)&& = delete;
+ Y& operator=(X&&) = delete;
+};
+
+static_assert(__is_trivial(Y), "Y is trivial");
+
+void
+test01()
+{
+ X a[100];
+ Y b[100];
+
+ std::uninitialized_copy(a, a+10, b);
+
+ VERIFY(constructed == 0);
+ VERIFY(assigned == 10);
+}
+
+int
+main()
+{
+ test01();
+}
--- /dev/null
+// { dg-options "-std=gnu++11" }
+// { dg-do compile }
+
+// Copyright (C) 2014 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library. This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3. If not see
+// <http://www.gnu.org/licenses/>.
+
+#include <tuple>
+
+struct ConvertibleToAny {
+ template <class T> operator T() const { return T(); }
+};
+
+int main() {
+ std::tuple<ConvertibleToAny&&> t(ConvertibleToAny{});
+}
tuple<Type> t(allocator_arg, a, 1);
}
-// { dg-error "no matching function" "" { target *-*-* } 118 }
+// { dg-error "no matching function" "" { target *-*-* } 119 }
--- /dev/null
+// { dg-options "-std=gnu++11" }
+
+//
+// Copyright (C) 2014 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library. This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+//
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3. If not see
+// <http://www.gnu.org/licenses/>.
+
+// libstdc++/64140
+
+#include <regex>
+#include <testsuite_hooks.h>
+
+void
+test01()
+{
+ bool test __attribute__((unused)) = true;
+
+ const std::regex e("z*");
+ const std::string s("ab");
+
+ auto it = std::sregex_iterator(s.begin(), s.end(), e);
+ auto end = std::sregex_iterator();
+ VERIFY(it != end);
+ VERIFY(!it->prefix().matched);
+ ++it;
+ VERIFY(it != end);
+ VERIFY(it->prefix().matched);
+ ++it;
+ VERIFY(it != end);
+ VERIFY(it->prefix().matched);
+ ++it;
+ VERIFY(it == end);
+}
+
+int
+main()
+{
+ test01();
+ return 0;
+}
// Tests iter->position() behavior
#include <regex>
+#include <tuple>
#include <testsuite_hooks.h>
void
}
}
+// PR libstdc++/64239
+void
+test02()
+{
+ bool test __attribute__((unused)) = true;
+
+ std::regex re("\\w+");
+ std::string s("-a-b-c-");
+
+ std::tuple<int, int, const char*> expected[] =
+ {
+ std::make_tuple(1, 1, "a"),
+ std::make_tuple(3, 1, "b"),
+ std::make_tuple(5, 1, "c"),
+ };
+
+ int i = 0;
+ for (auto it1 = std::sregex_iterator(s.begin(), s.end(), re),
+ end = std::sregex_iterator(); it1 != end; ++it1, i++)
+ {
+ auto it2 = it1;
+ VERIFY(it1->position() == std::get<0>(expected[i]));
+ VERIFY(it1->length() == std::get<1>(expected[i]));
+ VERIFY(it1->str() == std::get<2>(expected[i]));
+ VERIFY(it2->position() == std::get<0>(expected[i]));
+ VERIFY(it2->length() == std::get<1>(expected[i]));
+ VERIFY(it2->str() == std::get<2>(expected[i]));
+ }
+}
+
+void
+test03()
+{
+ bool test __attribute__((unused)) = true;
+
+ std::smatch m;
+ std::string s = "abcde";
+ std::regex_search(s, m, std::regex("bcd"));
+ VERIFY(m.position() == 1);
+ VERIFY(m.position() == m.prefix().length());
+}
+
int
main()
{
test01();
+ test02();
+ test03();
return 0;
}
--- /dev/null
+// { dg-do run }
+// { dg-options "-std=gnu++11" }
+
+//
+// Copyright (C) 2014 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library. This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+//
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3. If not see
+// <http://www.gnu.org/licenses/>.
+
+// 28.12.2 Class template regex_token_iterator
+
+#include <regex>
+#include <testsuite_hooks.h>
+
+void
+test01()
+{
+ bool test __attribute__((unused)) = true;
+
+ const std::string s(" 111 222 ");
+ const std::regex re("\\w+");
+
+ std::sregex_token_iterator it1(s.begin(), s.end(), re), it2(it1), end;
+
+ for (; it1 != end; ++it1, ++it2) {
+ VERIFY(it1 == it2);
+ VERIFY(*it1 == *it2);
+ }
+ VERIFY(it2 == end);
+}
+
+int
+main()
+{
+ test01();
+ return 0;
+}
--- /dev/null
+// { dg-options "-std=gnu++11" }
+
+//
+// Copyright (C) 2015 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library. This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+//
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3. If not see
+// <http://www.gnu.org/licenses/>.
+
+#include <regex>
+#include <testsuite_hooks.h>
+
+void
+test01()
+{
+ bool test __attribute__((unused)) = true;
+
+ std::cmatch m;
+ std::regex_match("a", m, std::regex("a"));
+ std::cmatch mm1 = m, mm2;
+ mm1.swap(mm2);
+ VERIFY(m == mm2);
+ std::swap(mm1, mm2);
+ VERIFY(m == mm1);
+}
+
+int
+main()
+{
+ test01();
+ return 0;
+}
# error "<shared_mutex>"
#endif
-#ifndef __cpp_lib_shared_timed_mutex
-# error "__cpp_lib_shared_timed_mutex"
-#elif __cpp_lib_shared_timed_mutex != 201402
-# error "__cpp_lib_shared_timed_mutex != 201402"
+#if defined(_GLIBCXX_HAS_GTHREADS) && defined(_GLIBCXX_USE_C99_STDINT_TR1)
+# ifndef __cpp_lib_shared_timed_mutex
+# error "__cpp_lib_shared_timed_mutex"
+# elif __cpp_lib_shared_timed_mutex != 201402
+# error "__cpp_lib_shared_timed_mutex != 201402"
+# endif
#endif
#ifndef __cpp_lib_is_final
--- /dev/null
+// Copyright (C) 2014 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library. This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3. If not see
+// <http://www.gnu.org/licenses/>.
+
+#include <tr1/functional>
+#include <stdexcept>
+#include <testsuite_hooks.h>
+
+struct functor
+{
+ functor() : copies(0) { }
+
+ functor(const functor& f)
+ : copies(f.copies + 1)
+ {
+ if (copies > 1)
+ throw std::runtime_error("functor");
+ }
+
+ void operator()() const { }
+
+ int copies;
+};
+
+
+void
+test01()
+{
+ std::tr1::function<void()> f = functor();
+ try {
+ std::tr1::function<void()> g = f;
+ } catch (const std::runtime_error& e) {
+ return;
+ }
+ VERIFY(false);
+}
+
+int
+main()
+{
+ test01();
+}