Jonathan Wright [Thu, 18 Mar 2021 16:23:50 +0000 (16:23 +0000)]
aarch64: Update attributes of arm_acle.h intrinsics
Update the attributes of all intrinsics defined in arm_acle.h to be
consistent with the attributes of the intrinsics defined in
arm_neon.h. Specifically, this means updating the attributes from:
__extension__ static __inline <type>
__attribute__ ((__always_inline__))
to:
__extension__ extern __inline <type>
__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
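For illustration, a complete intrinsic in the new style would look
roughly like this (a sketch using the CRC32 intrinsic as a
representative example, not text copied verbatim from the header):

__extension__ extern __inline uint32_t
__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
__crc32b (uint32_t __a, uint8_t __b)
{
  return __builtin_aarch64_crc32b (__a, __b);
}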
gcc/ChangeLog:
2021-03-18 Jonathan Wright <jonathan.wright@arm.com>
* config/aarch64/arm_acle.h (__attribute__): Make intrinsic
attributes consistent with those defined in arm_neon.h.
Jonathan Wright [Thu, 18 Mar 2021 12:14:48 +0000 (12:14 +0000)]
aarch64: Update attributes of arm_fp16.h intrinsics
Update the attributes of all intrinsics defined in arm_fp16.h to be
consistent with the attributes of the intrinsics defined in
arm_neon.h. Specifically, this means updating the attributes from:
__extension__ static __inline <type>
__attribute__ ((__always_inline__))
to:
__extension__ extern __inline <type>
__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
gcc/ChangeLog:
2021-03-18 Jonathan Wright <jonathan.wright@arm.com>
* config/aarch64/arm_fp16.h (__attribute__): Make intrinsic
attributes consistent with those defined in arm_neon.h.
Jonathan Wright [Thu, 18 Feb 2021 23:27:00 +0000 (23:27 +0000)]
aarch64: Use RTL builtins for vcvtx intrinsics
Rewrite vcvtx Neon intrinsics to use RTL builtins rather than inline
assembly code, allowing for better scheduling and optimization.
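To illustrate the shape of the change (a sketch, not the exact header
text; the builtin name is inferred from the float_trunc_rodd generator
macros listed below):

__extension__ extern __inline float32x2_t
__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
vcvtx_f32_f64 (float64x2_t __a)
{
  /* Before: opaque inline asm that the optimizers cannot schedule.
     float32x2_t __result;
     __asm__ ("fcvtxn %0.2s, %1.2d" : "=w" (__result) : "w" (__a));
     return __result;  */
  /* After: an RTL builtin the compiler can schedule and optimize.  */
  return __builtin_aarch64_float_trunc_rodd_lo_v2sf (__a);
}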
gcc/ChangeLog:
2021-02-18 Jonathan Wright <jonathan.wright@arm.com>
* config/aarch64/aarch64-simd-builtins.def: Add
float_trunc_rodd builtin generator macros.
* config/aarch64/aarch64-simd.md (aarch64_float_trunc_rodd_df):
Define.
(aarch64_float_trunc_rodd_lo_v2sf): Define.
(aarch64_float_trunc_rodd_hi_v4sf_le): Define.
(aarch64_float_trunc_rodd_hi_v4sf_be): Define.
(aarch64_float_trunc_rodd_hi_v4sf): Define.
* config/aarch64/arm_neon.h (vcvtx_f32_f64): Use RTL builtin
instead of inline asm.
(vcvtx_high_f32_f64): Likewise.
(vcvtxd_f32_f64): Likewise.
* config/aarch64/iterators.md: Add FCVTXN unspec.
Jonathan Wright [Fri, 12 Feb 2021 15:37:05 +0000 (15:37 +0000)]
aarch64: Use RTL builtins for v[q]tbx intrinsics
Rewrite v[q]tbx Neon intrinsics to use RTL builtins rather than
inline assembly code, allowing for better scheduling and
optimization.
gcc/ChangeLog:
2021-02-12 Jonathan Wright <jonathan.wright@arm.com>
* config/aarch64/aarch64-simd-builtins.def: Add tbx1 builtin
generator macros.
* config/aarch64/aarch64-simd.md (aarch64_tbx1<mode>):
Define.
* config/aarch64/arm_neon.h (vqtbx1_s8): Use RTL builtin
instead of inline asm.
(vqtbx1_u8): Likewise.
(vqtbx1_p8): Likewise.
(vqtbx1q_s8): Likewise.
(vqtbx1q_u8): Likewise.
(vqtbx1q_p8): Likewise.
(vtbx2_s8): Likewise.
(vtbx2_u8): Likewise.
(vtbx2_p8): Likewise.
Jonathan Wright [Fri, 12 Feb 2021 12:13:27 +0000 (12:13 +0000)]
aarch64: Use RTL builtins for v[q]tbl intrinsics
Rewrite v[q]tbl Neon intrinsics to use RTL builtins rather than
inline assembly code, allowing for better scheduling and
optimization.
gcc/ChangeLog:
2021-02-12 Jonathan Wright <jonathan.wright@arm.com>
* config/aarch64/aarch64-simd-builtins.def: Add tbl1 builtin
generator macros.
* config/aarch64/arm_neon.h (vqtbl1_p8): Use RTL builtin
instead of inline asm.
(vqtbl1_s8): Likewise.
(vqtbl1_u8): Likewise.
(vqtbl1q_p8): Likewise.
(vqtbl1q_s8): Likewise.
(vqtbl1q_u8): Likewise.
(vtbl1_s8): Likewise.
(vtbl1_u8): Likewise.
(vtbl1_p8): Likewise.
(vtbl2_s8): Likewise.
(vtbl2_u8): Likewise.
(vtbl2_p8): Likewise.
Jonathan Wright [Wed, 10 Feb 2021 13:02:24 +0000 (13:02 +0000)]
aarch64: Use RTL builtins for polynomial vsri[q]_n intrinsics
Rewrite vsri[q]_n_p* Neon intrinsics to use RTL builtins rather than
inline assembly code, allowing for better scheduling and
optimization.
gcc/ChangeLog:
2021-02-10 Jonathan Wright <jonathan.wright@arm.com>
* config/aarch64/aarch64-simd-builtins.def: Add polynomial
ssri_n builtin generator macro.
* config/aarch64/arm_neon.h (vsri_n_p8): Use RTL builtin
instead of inline asm.
(vsri_n_p16): Likewise.
(vsri_n_p64): Likewise.
(vsriq_n_p8): Likewise.
(vsriq_n_p16): Likewise.
(vsriq_n_p64): Likewise.
Jonathan Wright [Wed, 10 Feb 2021 11:39:39 +0000 (11:39 +0000)]
aarch64: Use RTL builtins for polynomial vsli[q]_n intrinsics
Rewrite vsli[q]_n_p* Neon intrinsics to use RTL builtins rather than
inline assembly code, allowing for better scheduling and
optimization.
gcc/ChangeLog:
2021-02-10 Jonathan Wright <jonathan.wright@arm.com>
* config/aarch64/aarch64-simd-builtins.def: Use VALLP mode
iterator for polynomial ssli_n builtin generator macro.
* config/aarch64/arm_neon.h (vsli_n_p8): Use RTL builtin
instead of inline asm.
(vsli_n_p16): Likewise.
(vsliq_n_p8): Likewise.
(vsliq_n_p16): Likewise.
* config/aarch64/iterators.md: Define VALLP mode iterator.
Jonathan Wright [Tue, 9 Feb 2021 01:14:00 +0000 (01:14 +0000)]
aarch64: Use RTL builtins for vpadal_[su]32 intrinsics
Rewrite vpadal_[su]32 Neon intrinsics to use RTL builtins rather than
inline assembly code, allowing for better scheduling and
optimization.
gcc/ChangeLog:
2021-02-09 Jonathan Wright <jonathan.wright@arm.com>
* config/aarch64/aarch64-simd-builtins.def: Use VDQV_L
iterator to generate [su]adalp RTL builtins.
* config/aarch64/aarch64-simd.md: Use VDQV_L iterator in
[su]adalp RTL pattern.
* config/aarch64/arm_neon.h (vpadal_s32): Use RTL builtin
instead of inline asm.
(vpadal_u32): Likewise.
Jonathan Wright [Mon, 8 Feb 2021 21:23:48 +0000 (21:23 +0000)]
aarch64: Use RTL builtins for [su]paddl[q] intrinsics
Rewrite [su]paddl[q] Neon intrinsics to use RTL builtins rather than
inline assembly code, allowing for better scheduling and
optimization.
gcc/ChangeLog:
2021-02-08 Jonathan Wright <jonathan.wright@arm.com>
* config/aarch64/aarch64-simd-builtins.def: Add [su]addlp
builtin generator macros.
* config/aarch64/aarch64-simd.md (aarch64_<su>addlp<mode>):
Define.
* config/aarch64/arm_neon.h (vpaddl_s8): Use RTL builtin
instead of inline asm.
(vpaddl_s16): Likewise.
(vpaddl_s32): Likewise.
(vpaddl_u8): Likewise.
(vpaddl_u16): Likewise.
(vpaddl_u32): Likewise.
(vpaddlq_s8): Likewise.
(vpaddlq_s16): Likewise.
(vpaddlq_s32): Likewise.
(vpaddlq_u8): Likewise.
(vpaddlq_u16): Likewise.
(vpaddlq_u32): Likewise.
* config/aarch64/iterators.md: Define [SU]ADDLP unspecs with
appropriate attributes.
Jonathan Wright [Mon, 8 Feb 2021 16:50:30 +0000 (16:50 +0000)]
aarch64: Use RTL builtins for vpaddq intrinsics
Rewrite vpaddq Neon intrinsics to use RTL builtins rather than inline
assembly code, allowing for better scheduling and optimization.
gcc/ChangeLog:
2021-02-08 Jonathan Wright <jonathan.wright@arm.com>
* config/aarch64/aarch64-simd-builtins.def: Use VDQ_I iterator
for aarch64_addp<mode> builtin macro generator.
* config/aarch64/aarch64-simd.md: Use VDQ_I iterator in
aarch64_addp<mode> RTL pattern.
* config/aarch64/arm_neon.h (vpaddq_s8): Use RTL builtin
instead of inline asm.
(vpaddq_s16): Likewise.
(vpaddq_s32): Likewise.
(vpaddq_s64): Likewise.
(vpaddq_u8): Likewise.
(vpaddq_u16): Likewise.
(vpaddq_u32): Likewise.
(vpaddq_u64): Likewise.
Jonathan Wright [Mon, 8 Feb 2021 11:37:29 +0000 (11:37 +0000)]
aarch64: Use RTL builtins for vq[r]dmulh[q]_n intrinsics
Rewrite vq[r]dmulh[q]_n Neon intrinsics to use RTL builtins rather
than inline assembly code, allowing for better scheduling and
optimization.
gcc/ChangeLog:
2021-02-08 Jonathan Wright <jonathan.wright@arm.com>
* config/aarch64/aarch64-simd-builtins.def: Add sq[r]dmulh_n
builtin generator macros.
* config/aarch64/aarch64-simd.md (aarch64_sq<r>dmulh_n<mode>):
Define.
* config/aarch64/arm_neon.h (vqdmulh_n_s16): Use RTL builtin
instead of inline asm.
(vqdmulh_n_s32): Likewise.
(vqdmulhq_n_s16): Likewise.
(vqdmulhq_n_s32): Likewise.
(vqrdmulh_n_s16): Likewise.
(vqrdmulh_n_s32): Likewise.
(vqrdmulhq_n_s16): Likewise.
(vqrdmulhq_n_s32): Likewise.
Joseph Myers [Wed, 28 Apr 2021 19:56:03 +0000 (19:56 +0000)]
Update gcc .po files.
* be.po, da.po, de.po, el.po, es.po, fi.po, fr.po, hr.po, id.po,
ja.po, nl.po, ru.po, sr.po, sv.po, tr.po, uk.po, vi.po, zh_CN.po,
zh_TW.po: Update.
Patrick McGehearty [Wed, 28 Apr 2021 19:14:48 +0000 (19:14 +0000)]
Practical improvement to libgcc complex divide
Correctness and performance test programs used during development of
this project may be found in the attachment to:
https://www.mail-archive.com/gcc-patches@gcc.gnu.org/msg254210.html
Summary of Purpose
This patch to libgcc/libgcc2.c __divdc3 provides an
opportunity to gain important improvements to the quality of answers
for the default complex divide routine (half, float, double, extended,
long double precisions) when dealing with very large or very small exponents.
The current code correctly implements Smith's method (1962) [2]
further modified by c99's requirements for dealing with NaN (not a
number) results. When working with input values where the exponents
are greater than *_MAX_EXP/2 or less than -(*_MAX_EXP)/2, results are
substantially different from the answers provided by quad precision
more than 1% of the time. This error rate may be unacceptable for many
applications that cannot a priori restrict their computations to the
safe range. The proposed method reduces the frequency of
"substantially different" answers by more than 99% for double
precision at a modest cost of performance.
Differences between current gcc methods and the new method will be
described. Then accuracy and performance differences will be discussed.
Background
This project started with an investigation related to
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=59714. Study of Beebe[1]
provided an overview of past and recent practice for computing complex
divide. The current glibc implementation is based on Robert Smith's
algorithm [2] from 1962. A Google search found the paper by Baudin
and Smith [3] (same Robert Smith) published in 2012. Elen Kalda's
proposed patch [4] is based on that paper.
I developed two sets of test data by randomly distributing values over
a restricted range and the full range of input values. The current
complex divide handled the restricted range well enough, but failed on
the full range more than 1% of the time. Baudin and Smith's primary
test, checking whether "ratio" equals zero, reduced the cases with 16
or more error bits by a factor of 5, but still left too many flawed
answers. Adding
debug print out to cases with substantial errors allowed me to see the
intermediate calculations for test values that failed. I noted that
for many of the failures, "ratio" was a subnormal. Changing the
"ratio" test from a check for zero to a check for subnormal reduced
the 16 bit error rate by another factor of 12. This single modified test
provides the greatest benefit for the least cost, but the percentage
of cases with greater than 16 bit errors (double precision data) is
still greater than 0.027% (2.7 in 10,000).
Continued examination of remaining errors and their intermediate
computations led to the various input value tests and scaling used
to avoid under/overflow. The current patch does not handle some of the
rare and most extreme combinations of input values, but the random
test data is only showing 1 case in 10 million that has an error of
greater than 12 bits. That case has 18 bits of error and is due to
subtraction cancellation. These results are significantly better
than the results reported by Baudin and Smith.
Support for half, float, double, extended, and long double precision
is included as all are handled with suitable preprocessor symbols in a
single source routine. Since half precision is computed with float
precision as per current libgcc practice, the enhanced algorithm
provides no benefit for half precision and would cost performance.
Further investigation showed that changing the half precision
algorithm to use the simple formula (real = (a*c + b*d)/(c*c + d*d),
imag = (b*c - a*d)/(c*c + d*d)) caused no loss of precision and a
modest improvement in performance.
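A minimal sketch of that simple method (illustrative names only,
computed in float as per the half-precision practice described above):

static inline void
simple_cdiv (float a, float b, float c, float d, float *e, float *f)
{
  float denom = c * c + d * d;
  *e = (a * c + b * d) / denom;
  *f = (b * c - a * d) / denom;
}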
The existing constants for each precision:
float: FLT_MAX, FLT_MIN;
double: DBL_MAX, DBL_MIN;
extended and/or long double: LDBL_MAX, LDBL_MIN
are used for avoiding the more common overflow/underflow cases. This
use is made generic by defining appropriate __LIBGCC2_* macros in
c-cppbuiltin.c.
Tests are added for when both parts of the denominator have exponents
small enough that, after shifting any subnormal values to normal
values, all input values could be scaled up without risking overflow. That
gained a clear improvement in accuracy. Similarly, when either
numerator was subnormal and the other numerator and both denominator
values were not too large, scaling could be used to reduce risk of
computing with subnormals. The test and scaling values used all fit
within the allowed exponent range for each precision required by the C
standard.
Float precision has more difficulty with getting correct answers than
double precision. When hardware for double precision floating point
operations is available, float precision is now handled with double
precision intermediate calculations and the simple algorithm, just as
the half-precision method uses float precision for intermediate
calculations. Using the higher precision yields exact results for all
tested input values (64-bit double, 32-bit float) with the only
performance cost being the requirement to convert the four input
values from float to double. If double precision hardware is not
available, then float complex divide will use the same improved
algorithm as the other precisions with similar change in performance.
Further Improvement
The most common remaining substantial errors are due to accuracy loss
when subtracting nearly equal values. This patch makes no attempt to
improve that situation.
NOTATION
For all of the following, the notation is:
Input complex values:
a+bi (a = real part, b = imaginary part)
c+di
Output complex value:
e+fi = (a+bi)/(c+di)
For the result tables:
current = current method (SMITH)
b1div = method proposed by Elen Kalda
b2div = alternate method considered by Elen Kalda
new = new method proposed by this patch
DESCRIPTIONS of different complex divide methods:
NAIVE COMPUTATION (-fcx-limited-range):
e = (a*c + b*d)/(c*c + d*d)
f = (b*c - a*d)/(c*c + d*d)
Note that c*c and d*d will overflow or underflow if either
c or d is outside the range 2^-538 to 2^512.
This method is available in gcc when the switch -fcx-limited-range is
used. That switch is also enabled by -ffast-math. Only one who has a
clear understanding of the maximum range of all intermediate values
generated by an application should consider using this switch.
SMITH's METHOD (current libgcc):
if (fabs(c) < fabs(d)) {
    r = c/d;
    denom = (c*r) + d;
    e = (a*r + b) / denom;
    f = (b*r - a) / denom;
} else {
    r = d/c;
    denom = c + (d*r);
    e = (a + b*r) / denom;
    f = (b - a*r) / denom;
}
Smith's method is the current default method available with __divdc3.
Elen Kalda's METHOD
Elen Kalda proposed a patch about a year ago, also based on Baudin and
Smith, but not including tests for subnormals:
https://gcc.gnu.org/legacy-ml/gcc-patches/2019-08/msg01629.html [4]
It is compared here for accuracy with this patch.
This method applies the most significant part of the algorithm
proposed by Baudin&Smith (2012) in the paper "A Robust Complex
Division in Scilab" [3]. Elen's method also replaces two divides by
one divide and two multiplies due to the high cost of divide on
aarch64. In the comparison sections, this method will be labeled
b1div. A variation discussed in that patch which does not replace the
two divides will be labeled b2div.
inline void improved_internal (MTYPE a, MTYPE b, MTYPE c, MTYPE d)
{
    r = d/c;
    t = 1.0 / (c + (d * r));
    if (r != 0) {
        x = (a + (b * r)) * t;
        y = (b - (a * r)) * t;
    } else {
        /* Changing the order of operations avoids the underflow of r
           impacting the result. */
        x = (a + (d * (b / c))) * t;
        y = (b - (d * (a / c))) * t;
    }
}
if (FABS (d) < FABS (c)) {
    improved_internal (a, b, c, d);
} else {
    improved_internal (b, a, d, c);
    y = -y;
}
NEW METHOD (proposed by patch) to replace the current default method:
The proposed method starts with an algorithm proposed by Baudin&Smith
(2012) in the paper "A Robust Complex Division in Scilab" [3]. The
patch makes additional modifications to that method for further
reductions in the error rate. The following code shows the #define
values for double precision. See the patch for #define values used
for other precisions.
#define RBIG ((DBL_MAX)/2.0)
#define RMIN (DBL_MIN)
#define RMIN2 (0x1.0p-53)
#define RMINSCAL (0x1.0p+51)
#define RMAX2 ((RBIG)*(RMIN2))
if (FABS(c) < FABS(d)) {
    /* prevent overflow when arguments are near max representable */
    if ((FABS (d) > RBIG) || (FABS (a) > RBIG) || (FABS (b) > RBIG)) {
        a = a * 0.5;
        b = b * 0.5;
        c = c * 0.5;
        d = d * 0.5;
    }
    /* minimize overflow/underflow issues when c and d are small */
    else if (FABS (d) < RMIN2) {
        a = a * RMINSCAL;
        b = b * RMINSCAL;
        c = c * RMINSCAL;
        d = d * RMINSCAL;
    }
    else {
        if (((FABS (a) < RMIN) && (FABS (b) < RMAX2) && (FABS (d) < RMAX2)) ||
            ((FABS (b) < RMIN) && (FABS (a) < RMAX2) && (FABS (d) < RMAX2))) {
            a = a * RMINSCAL;
            b = b * RMINSCAL;
            c = c * RMINSCAL;
            d = d * RMINSCAL;
        }
    }
    r = c/d; denom = (c*r) + d;
    if (r > RMIN) {
        e = (a*r + b) / denom;
        f = (b*r - a) / denom;
    } else {
        e = (c * (a/d) + b) / denom;
        f = (c * (b/d) - a) / denom;
    }
}
[ only presenting the fabs(c) < fabs(d) case here, full code in patch. ]
Before any computation of the answer, the code checks for any input
values near maximum to allow down scaling to avoid overflow. These
scalings almost never harm the accuracy since they are by a factor of
2, which is exact in binary floating point. Values that are over RBIG
are relatively rare, but it is easy to test for them and allow
avoidance of overflows.
Testing for RMIN2 reveals when both c and d are less than [FLT|DBL]_EPSILON.
By scaling all values by 1/EPSILON, the code converts subnormals to normals,
avoids loss of accuracy and underflows in intermediate computations
that otherwise might occur. If scaling a and b by 1/EPSILON causes either
to overflow, then the computation will overflow whatever method is used.
Finally, we test for either a or b being subnormal (RMIN) and if so,
for the other three values being small enough to allow scaling. We
only need to test a single denominator value since we have already
determined which of c and d is larger.
Next, r (the ratio of c to d) is checked for being near zero. Baudin
and Smith checked r for zero. This code improves on that approach:
checking for values less than DBL_MIN (subnormal) covers roughly 12
times as many cases and substantially improves overall accuracy. If r
is too small, then when it is used in a multiplication, there is a
high chance that the result will underflow to zero, losing significant
accuracy. That underflow is avoided by reordering the computation.
When r is subnormal, the code replaces a*r (= a*(c/d)) with ((a/d)*c)
which is mathematically the same but avoids the unnecessary underflow.
TEST Data
Two sets of data are presented to test these methods. Both sets
contain 10 million pairs of complex values. The exponents and
mantissas are generated using multiple calls to random() and then
combining the results. Only values which give results to complex
divide that are representable in the appropriate precision after
being computed in quad precision are used.
The first data set is labeled "moderate exponents".
The exponent range is limited to -DBL_MAX_EXP/2 to DBL_MAX_EXP/2
for Double Precision (use FLT_MAX_EXP or LDBL_MAX_EXP for the
appropriate precisions).
The second data set is labeled "full exponents".
The exponent range for these cases is the full exponent range
including subnormals for a given precision.
ACCURACY Test results:
Note: The following accuracy tests are based on IEEE-754 arithmetic.
Note: All results reported are based on use of fused multiply-add. If
fused multiply-add is not used, the error rate increases, giving more
1 and 2 bit errors for both current and new complex divide.
Differences between using fused multiply-add and not using it that are
greater than 2 bits are less than 1 in a million.
The complex divide methods are evaluated by determining the percentage
of values that exceed differences in low order bits. If a "2 bit"
test result shows 1%, that would mean that 1% of 10,000,000 values
(100,000) have either a real or imaginary part that differs from the
quad precision result by more than the last 2 bits.
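One way such a measurement could be made (an assumed methodology shown
for illustration only; long double stands in here for the quad
precision reference used by the actual test programs):

#include <cmath>

/* Approximate number of low-order mantissa bits by which a double
   result differs from a higher-precision reference.  */
static int
bits_of_error (double result, long double reference)
{
  if ((long double) result == reference)
    return 0;
  long double rel = fabsl ((long double) result - reference)
                    / fabsl (reference);
  /* Scale the relative error to the 53-bit double significand.  */
  int bits = 53 + (int) log2l (rel);
  return bits > 0 ? bits : 0;
}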
Results are reported for differences greater than or equal to 1 bit, 2
bits, 8 bits, 16 bits, 24 bits, and 52 bits for double precision. Even
when the patch avoids overflows and underflows, some input values are
expected to have errors due to the potential for catastrophic roundoff
from floating point subtraction. For example, when b*c and a*d are
nearly equal, the result of subtraction may lose several places of
accuracy. This patch does not attempt to detect or minimize this type
of error, but neither does it increase them.
I only show the results for Elen Kalda's method (with both 1 and
2 divides) in the double precision tables; the float tables compare
just the current and new methods.
In the following charts, lower values are better.
current - current complex divide in libgcc
b1div - Elen Kalda's method from Baudin & Smith with one divide
b2div - Elen Kalda's method from Baudin & Smith with two divides
new - This patch which uses 2 divides
===================================================
Errors            Moderate Dataset
gtr eq    current     b1div      b2div      new
======   ========   ========   ========   ========
 1 bit   0.24707%   0.92986%   0.24707%   0.24707%
 2 bits  0.01762%   0.01770%   0.01762%   0.01762%
 8 bits  0.00026%   0.00026%   0.00026%   0.00026%
16 bits  0.00000%   0.00000%   0.00000%   0.00000%
24 bits  0%         0%         0%         0%
52 bits  0%         0%         0%         0%
===================================================
Table 1: Errors with Moderate Dataset (Double Precision)
Note in Table 1 that both the old and new methods give identical error
rates for data with moderate exponents. Errors exceeding 16 bits are
exceedingly rare. There are substantial increases in the 1 bit error
rates for b1div (the 1 divide/2 multiplies method) as compared to b2div
(the 2 divides method). These differences are minimal for 2 bits and
larger error measurements.
===================================================
Errors              Full Dataset
gtr eq    current     b1div      b2div      new
======   ========   ========   ========   ========
 1 bit   2.05%      1.23842%   0.67130%   0.16664%
 2 bits  1.88%      0.51615%   0.50354%   0.00900%
 8 bits  1.77%      0.42856%   0.42168%   0.00011%
16 bits  1.63%      0.33840%   0.32879%   0.00001%
24 bits  1.51%      0.25583%   0.24405%   0.00000%
52 bits  1.13%      0.01886%   0.00350%   0.00000%
===================================================
Table 2: Errors with Full Dataset (Double Precision)
Table 2 shows significant differences in error rates. First, the
difference between b1div and b2div show a significantly higher error
rate for the b1div method both for single bit errors and well
beyond. Even for 52 bits, we see the b1div method gets completely
wrong answers more than 5 times as often as b2div. To retain
comparable accuracy with current complex divide results for small
exponents and due to the increase in errors for large exponents, I
chose to use the more accurate method of two divides.
The current method has more than 1.6% of cases where the low 24 bits
of the mantissa differ from the correct answer, and more than 1.1% of
cases where the answer is completely wrong.
The new method shows less than one case in 10,000 with greater than
two bits of error and only one case in 10 million with greater than
16 bits of error. The new patch reduces 8 bit errors by
a factor of 16,000 and virtually eliminates completely wrong
answers.
As noted above, for architectures with double precision
hardware, the new method uses that hardware for the
intermediate calculations before returning the
result in float precision. Testing of the new patch
has found zero errors, as seen in Tables 3 and 4.
Correctness for float
=============================
Errors    Moderate Dataset
gtr eq     current      new
======   ==========   ======
 1 bit   28.68070%     0%
 2 bits   0.64386%     0%
 8 bits   0.00401%     0%
16 bits   0.00001%     0%
24 bits   0%           0%
=============================
Table 3: Errors with Moderate Dataset (float)
=============================
Errors      Full Dataset
gtr eq    current     new
======   ========   ======
 1 bit    19.98%      0%
 2 bits    3.20%      0%
 8 bits    1.97%      0%
16 bits    1.08%      0%
24 bits    0.55%      0%
=============================
Table 4: Errors with Full Dataset (float)
As before, the current method shows a troubling rate of extreme
errors.
There are very minor changes in accuracy for half-precision since the
code changes from Smith's method to the simple method: 5 out of 1
million test cases show correct answers instead of 1 or 2 bit errors.
libgcc computes half-precision functions in float precision
allowing the existing methods to avoid overflow/underflow issues
for the allowed range of exponents for half-precision.
Extended precision (using x87 80-bit format on x86) and Long double
(using IEEE-754 128-bit on x86 and aarch64) both have 15-bit exponents
as compared to 11-bit exponents in double precision. We note that the
C standard also allows Long Double to be implemented in the equivalent
range of Double. The RMIN2 and RMINSCAL constants are selected to work
within the Double range as well as with extended and 128-bit ranges.
We will limit our performance and accuracy discussions to the 80-bit
and 128-bit formats as seen on x86 here.
The extended and long double precision investigations were more
limited. Aarch64 does not support extended precision but does support
the software implementation of 128-bit long double precision. For x86,
long double defaults to the 80-bit precision but using the
-mlong-double-128 flag switches to using the software implementation
of 128-bit precision. Both 80-bit and 128-bit precisions have the same
exponent range, while the 128-bit precision has an extended mantissa.
Since this change is only aimed at avoiding underflow/overflow for
extreme exponents, I studied the extended precision results on x86 for
100,000 values. The limited exponent dataset showed no differences.
For the dataset with full exponent range, the current and new values
showed major differences (greater than 32 bits) in 567 cases out of
100,000 (0.56%). In every one of these cases, the ratio of c/d or d/c
(as appropriate) was zero or subnormal, indicating the advantage of
the new method and its continued correctness where needed.
PERFORMANCE Test results
In order for a library change to be practical, it is necessary to show
the slowdown is tolerable. The slowdowns observed are much less than
would be seen by (for example) switching from hardware double precision
to a software quad precision, which on the tested machines causes a
slowdown of around 100x.
The actual slowdown depends on the machine architecture. It also
depends on the nature of the input data. If underflow/overflow is
rare, then implementations that have strong branch prediction will
only slowdown by a few cycles. If underflow/overflow is common, then
the branch predictors will be less accurate and the cost will be
higher.
Results from two machines are presented as examples of the overhead
for the new method. The one labeled x86 is a 5-year-old Intel x86
processor and the one labeled aarch64 is a 3-year-old arm64 processor.
In the following chart, the times are averaged over a one million
value data set. All values are scaled to set the time of the current
method to be 1.0. Lower values are better. A value of less than 1.0
would be faster than the current method and a value greater than 1.0
would be slower than the current method.
================================================
                Moderate set        full set
               x86    aarch64    x86    aarch64
             =======  =======  =======  =======
float          0.59    0.79      0.45    0.81
double         1.04    1.24      1.38    1.56
long double    1.13    1.24      1.29    1.25
================================================
Table 5: Performance Comparisons (ratio new/current)
The above tables omit the timing for the 1 divide and 2 multiply
comparison with the 2 divide approach.
The float results show clear performance improvement due to using the
simple method with double precision for intermediate calculations.
The double results with the newer method show less overhead for the
moderate dataset than for the full dataset. That's because the moderate
dataset does not ever take the new branches which protect from
under/overflow. The better the branch predictor, the lower the cost
for these untaken branches. Both platforms are somewhat dated, with
the x86 having a better branch predictor which reduces the cost of the
additional branches in the new code. Of course, the relative slowdown
may be greater for some architectures, especially those with limited
branch prediction combined with a high cost of misprediction.
The long double results are fairly consistent in showing the moderate
additional cost of the extra branches and calculations for all cases.
The observed cost for all precisions is claimed to be tolerable on the
grounds that:
(a) the cost is worthwhile considering the accuracy improvement shown;
(b) most applications will only spend a small fraction of their time
calculating complex divide;
(c) it is much less than the cost of extended precision; and
(d) users are not forced to use it (as described below).
Those users who find this degree of slowdown unsatisfactory may use
the gcc switch -fcx-fortran-rules which does not use the library
routine, instead inlining Smith's method without the C99 requirement
for dealing with NaN results. The proposed patch for libgcc complex
divide does not affect the code generated by -fcx-fortran-rules.
SUMMARY
When input data to complex divide has exponents whose absolute value
is less than half of *_MAX_EXP, this patch makes no changes in
accuracy and has only a modest effect on performance. When input data
contains values outside those ranges, the patch eliminates more than
99.9% of major errors with a tolerable cost in performance.
In comparison to Elen Kalda's method, this patch introduces more
performance overhead but reduces major errors by a factor of
greater than 4000.
REFERENCES
[1] Nelson H.F. Beebe, "The Mathematical-Function Computation
Handbook", Springer International Publishing AG, 2017.
[2] Robert L. Smith. Algorithm 116: Complex division. Commun. ACM,
5(8):435, 1962.
[3] Michael Baudin and Robert L. Smith. "A robust complex division in
Scilab," October 2012, available at http://arxiv.org/abs/1210.4539.
[4] Elen Kalda: Complex division improvements in libgcc
https://gcc.gnu.org/legacy-ml/gcc-patches/2019-08/msg01629.html
2020-12-08 Patrick McGehearty <patrick.mcgehearty@oracle.com>
gcc/c-family/
* c-cppbuiltin.c (c_cpp_builtins): Add supporting macros for new
complex divide.
libgcc/
* libgcc2.c (XMTYPE, XCTYPE, RBIG, RMIN, RMIN2, RMINSCAL, RMAX2):
Define.
(__divsc3, __divdc3, __divxc3, __divtc3): Improve complex divide.
* config/rs6000/_divkc3.c (RBIG, RMIN, RMIN2, RMINSCAL, RMAX2):
Define.
(__divkc3): Improve complex divide.
gcc/testsuite/
* gcc.c-torture/execute/ieee/cdivchkd.c: New test.
* gcc.c-torture/execute/ieee/cdivchkf.c: Likewise.
* gcc.c-torture/execute/ieee/cdivchkld.c: Likewise.
Tobias Burnus [Wed, 28 Apr 2021 19:15:16 +0000 (21:15 +0200)]
doc/install.texi: Document --enable-offload-defaulted config option
Document configure --enable-offload-defaulted option added in
commit r12-218-gfe5bfa6704179f8db7d1ae0b485439e9896df8eb
gcc/ChangeLog:
* doc/install.texi (--enable-offload-defaulted): Document.
Senthil Kumar Selvaraj [Wed, 28 Apr 2021 17:29:12 +0000 (17:29 +0000)]
AVR cc0 conversion
See https://gcc.gnu.org/pipermail/gcc-patches/2021-January/563638.html
for background.
This patch converts the avr backend to MODE_CC. It addresses some of
the comments made in the previous submission over here
(https://gcc.gnu.org/pipermail/gcc-patches/2020-December/561757.html).
Specifically, this patch has
1. Automatic clobber of REG_CC in inline asm statements, via
TARGET_MD_ASM_ADJUST hook (see the sketch after these lists).
2. Direct clobber of REG_CC in insns emitted after reload (pro and
epilogue).
3. Regression testing done on atmega8, atmega128, attiny40 and
atxmega128a3 devices (more details below).
4. Verification and fixes for casesi and avr_compare_pattern related
code that inspects insns, by looking at avr-casesi and mach RTL dumps.
5. Use length of parallel instead of passing in operand counts when
generating code for shift patterns.
6. Fixes for indentation glitches.
7. Removal of CC_xxx stuff in avr-protos.h. In the places where the
macros were still used (cond_string), I've replaced them with a bool
hardcoded to false. I expect this will go away/get fixed when I
eventually add specific CC modes.
Things still to do:
1. Adjustment of peepholes/define_splits to match against patterns
with REG_CC clobber.
2. Model effect of non-compare insns on REG_CC using additional CC
modes. I'm hoping to use a modified version of the cc attribute
and define_subst (again inspired by the cris port) to do this.
3. RTX cost adjustment.
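As an illustration of item 1 in the first list above (a sketch of
the hook's likely shape, using the cc_reg_rtx variable added by the
patch; not the literal patch text):

static rtx_insn *
avr_md_asm_adjust (vec<rtx> &/*outputs*/, vec<rtx> &/*inputs*/,
                   vec<const char *> &/*constraints*/,
                   vec<rtx> &clobbers, HARD_REG_SET &clobbered_regs)
{
  /* Treat every inline asm as clobbering the condition code register,
     so nothing that relies on REG_CC is moved across it.  */
  clobbers.safe_push (cc_reg_rtx);
  SET_HARD_REG_BIT (clobbered_regs, REG_CC);
  return NULL;
}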
gcc/
* config/avr/avr-dimode.md: Turn existing patterns into
define_insn_and_split style patterns where the splitter
adds a clobber of the condition code register. Drop "cc"
attribute. Add new patterns to match output of
the splitters.
* config/avr/avr-fixed.md: Likewise.
* config/avr/avr.c (cc_reg_rtx): New.
(avr_parallel_insn_from_insns): Adjust insn count
for removal of set of cc0.
(avr_is_casesi_sequence): Likewise.
(avr_casei_sequence_check_operands): Likewise.
(avr_optimize_casesi): Likewise. Also insert
new insns after jump_insn.
(avr_pass_casesi::avr_rest_of_handle_casesi): Adjust
for removal of set of cc0.
(avr_init_expanders): Initialize cc_reg_rtx.
(avr_regno_reg_class): Handle REG_CC.
(cond_string): Remove usage of CC_OVERFLOW_UNUSABLE.
(avr_notice_update_cc): Remove function.
(ret_cond_branch): Remove usage of CC_OVERFLOW_UNUSABLE.
(compare_condition): Adjust for PARALLEL with
REG_CC clobber.
(out_shift_with_cnt): Likewise.
(ashlhi3_out): Likewise.
(ashrhi3_out): Likewise.
(lshrhi3_out): Likewise.
(avr_class_max_nregs): Return single reg for REG_CC.
(avr_compare_pattern): Check for REG_CC instead
of cc0_rtx.
(avr_reorg_remove_redundant_compare): Likewise.
(avr_reorg): Adjust for PARALLEL with REG_CC clobber.
(avr_hard_regno_nregs): Return single reg for REG_CC.
(avr_hard_regno_mode_ok): Allow only CCmode for REG_CC.
(avr_md_asm_adjust): Clobber REG_CC.
(TARGET_HARD_REGNO_NREGS): Define.
(TARGET_CLASS_MAX_NREGS): Define.
(TARGET_MD_ASM_ADJUST): Define.
* config/avr/avr.h (FIRST_PSEUDO_REGISTER): Adjust
for REG_CC.
(enum reg_class): Add CC_REG class.
(NOTICE_UPDATE_CC): Remove.
(CC_OVERFLOW_UNUSABLE): Remove.
(CC_NO_CARRY): Remove.
* config/avr/avr.md: Turn existing patterns into
define_insn_and_split style patterns where the splitter
adds a clobber of the condition code register. Drop "cc"
attribute. Add new patterns to match output of
the splitters.
(sez): Remove unused pattern.
Jonathan Wakely [Wed, 28 Apr 2021 17:14:05 +0000 (18:14 +0100)]
libstdc++: Add testcase for std::pair as a structural type [PR 97930]
This PR was fixed by r12-221-ge1543e694dadf1ea70eb72325219bc0cdc914a35
(for compilers that support C++20 Concepts) so this adds the testcase.
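A sketch of what such a testcase exercises (not the actual test
file): using std::pair as a non-type template parameter, which is
only possible if it is a structural type.

#include <utility>

template<std::pair<int, int> P>
struct S
{
  static constexpr int sum = P.first + P.second;
};

static_assert(S<std::pair<int, int>{1, 2}>::sum == 3);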
libstdc++-v3/ChangeLog:
PR libstdc++/97930
* testsuite/20_util/pair/requirements/structural.cc: New test.
Alexandre Oliva [Wed, 28 Apr 2021 17:07:43 +0000 (14:07 -0300)]
omit frame pointer in pr89676
This i386 test expects only two movl instructions.
In configurations that --enable-frame-pointer, -O2 won't implicitly
enable -fomit-frame-pointer, so we end up with a third movl to set up
the frame pointer.
This patch enables -fomit-frame-pointer explicitly, so that the result
no longer depends on that configuration option.
for gcc/testsuite/ChangeLog
* gcc.target/i386/pr89676.c: Add -fomit-frame-pointer.
Alexandre Oliva [Wed, 28 Apr 2021 17:07:41 +0000 (14:07 -0300)]
fix asm-not pattern in dwarf2/inline5.c
The test is supposed to check that the abstract lexical block of a
function that was inlined doesn't have attributes, and that the
concrete inlined lexical block does.
There are two patterns to verify the absence of attributes in the
abstract lexical block, one for the case in which the concrete block
appears after the abstract one, and another for the case in which it's
before.
The former has a problem that is not visible when asm comments start
with a single character, but that becomes apparent when they start
with "/ ".
The pattern starts by matching the abstract DW_TAG_lexical_block DIE
header, and checking that the next line has, after any of the
comment-starter characters (e.g. '/'), one or more blanks ' +', and
then a character other than the '(' that would start another DIE.
The problem is that '[.../...]+ +[^(].*' matches '/ (DIE...', because
'[^(]' may match the second blank, and after that anything goes. So
we end up recognizing the pattern, as if it was an abstract lexical
block with an attribute.
This could be minimally fixed by changing '[^(]' to '[^ (]', but the
pattern that matches concrete before abstract checks for an explicit
DW_AT after the abstract DIE, so I'm using that in the other pattern
as well.
For reference, the lines that start the unwanted match are:
.uleb128 0xc / (DIE (0xa4) DW_TAG_lexical_block)
.uleb128 0xd / (DIE (0xa5) DW_TAG_variable)
for gcc/testsuite/ChangeLog
* gcc.dg/debug/dwarf2/inline5.c: Adjust pattern to avoid
mismatch when asm comments start with "/ ".
Richard Earnshaw [Wed, 28 Apr 2021 16:56:38 +0000 (17:56 +0100)]
arm: fix UB due to missing mode check [PR100311]
Some places in the compiler iterate over all the fixed registers to
check if that register can be used in a particular mode. The idiom is
to iterate over the register and then for that register, if it
supports the current mode to check all that register and any
additional registers needed (HARD_REGNO_NREGS). If these two checks
are not fully aligned then it is possible to generate a buffer overrun
when testing data objects that are sized by the number of hard regs in
the machine.
The VPR register is a case where these checks were not consistent and
because this is the last HARD register the result was that we ended up
overflowing the fixed_regs array.
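The problematic idiom looks roughly like this (an illustrative sketch
of the pattern described above, not a specific call site in the
compiler):

static bool
any_fixed_reg_in_mode_p (machine_mode mode)
{
  for (unsigned regno = 0; regno < FIRST_PSEUDO_REGISTER; regno++)
    if (targetm.hard_regno_mode_ok (regno, mode))
      for (unsigned i = 0; i < targetm.hard_regno_nregs (regno, mode); i++)
        /* Overruns fixed_regs[] when hard_regno_mode_ok accepts a
           (regno, mode) pair whose register count extends past the
           last hard register -- the VPR case fixed here.  */
        if (fixed_regs[regno + i])
          return true;
  return false;
}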
gcc:
PR target/100311
* config/arm/arm.c (arm_hard_regno_mode_ok): Only allow VPR to be
used in HImode.
Jonathan Wakely [Wed, 28 Apr 2021 16:46:01 +0000 (17:46 +0100)]
libstdc++: Simplify std::pair constraints using concepts
This re-implements the constraints on the std::pair constructors and
assignment operators in C++20 mode, to use concepts.
The non-standard constructors deprecated for PR 99957 are no longer
supported in C++20 mode, which requires some minor testsuite changes.
Otherwise all tests pass in C++20 mode.
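A minimal sketch of the technique (a hypothetical my_pair rather than
the libstdc++ code; the real constraints are more involved):

#include <type_traits>
#include <utility>

template<typename T1, typename T2>
struct my_pair
{
  T1 first;
  T2 second;

  // A requires-clause and explicit(bool) replace the old SFINAE helpers.
  template<typename U1 = T1, typename U2 = T2>
    requires std::is_constructible_v<T1, U1>
      && std::is_constructible_v<T2, U2>
  constexpr explicit(!std::is_convertible_v<U1, T1>
                     || !std::is_convertible_v<U2, T2>)
  my_pair(U1&& x, U2&& y)
  : first(std::forward<U1>(x)), second(std::forward<U2>(y)) { }
};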
libstdc++-v3/ChangeLog:
* include/bits/stl_pair.h (pair) [__cplusplus > 202002]: Add
new definitions for constructors and assignment operators using
concepts for constraints.
* testsuite/20_util/pair/cons/99957.cc: Disable for C++20 and
later.
* testsuite/20_util/pair/cons/explicit_construct.cc: Adjust
expected error messages to also match C++20 errors.
Jonathan Wakely [Wed, 28 Apr 2021 16:46:01 +0000 (17:46 +0100)]
libstdc++: Deprecate non-standard std::pair constructors [PR 99957]
This deprecates the non-standard std::pair constructors that support
construction from an rvalue and a literal zero used as a null pointer
constant. We can't just add the deprecated attribute to those
constructors, because they're currently used by correct code when they
are a better match than the constructors required by the standard, e.g.
int i = 0;
const int j = 0;
std::pair<int, int> p(i, j); // uses pair(U1&&, const int&)
This patch adjusts the parameter types and constraints of those
constructors so that they only get used for literal zeros, and the
pair(U1&&, U2&&) constructor gets used otherwise. Once they're only used
for initializations that should be ill-formed we can add the deprecated
attribute.
The deprecated attribute is used to suggest that the user code uses
nullptr, which avoids the problem of 0 deducing as int instead of a null
pointer constant.
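A rough sketch of the parameter-type trick (hypothetical names; the
actual helper is the __null_ptr_constant mentioned in the ChangeLog
below):

#include <cstddef>

// Only a null pointer constant (literal 0 or nullptr) can initialize
// this; an int variable cannot.
struct null_ptr_constant
{
  constexpr null_ptr_constant(std::nullptr_t) { }
};

A [[deprecated]] pair(U1&&, null_ptr_constant) overload is then only
viable for initializations like pair<int, int*>(1, 0), while the
standard pair(U1&&, U2&&) constructor handles everything else.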
libstdc++-v3/ChangeLog:
PR libstdc++/99957
* include/bits/stl_pair.h (_PCC::_MoveCopyPair, _PCC::_CopyMovePair):
Combine and replace with ...
(_PCC::_DeprConsPair): New SFINAE helper function.
(pair): Merge preprocessor blocks so that all C++03 members
are defined together at the end.
(pair::pair(const _T1&, _U2&&), pair::pair(_U1&&, const _T2&)):
Replace _T1 and _T2 parameters with __null_ptr_constant and
adjust constraints.
* testsuite/20_util/pair/40925.cc: Use nullptr instead of 0.
* testsuite/20_util/pair/cons/explicit_construct.cc: Likewise.
* testsuite/20_util/pair/cons/99957.cc: New test.
Richard Sandiford [Wed, 28 Apr 2021 16:54:52 +0000 (17:54 +0100)]
aarch64: Fix address mode for vec_concat pattern [PR100305]
The load_pair_lanes<mode> patterns match a vec_concat of two
adjacent 64-bit memory locations as a single 128-bit load.
The Utq constraint made sure that the address was suitable
for a 128-bit vector, but this meant that it allowed some
addresses that aren't valid for the 64-bit element mode.
Two obvious fixes were:
(1) Continue to accept addresses that aren't valid for the element
modes. This would mean changing the mode of operands[1] before
printing it. It would also mean using a custom predicate instead
of the current memory_operand.
(2) Restrict addresses to the intersection of those that are valid
element and vector addresses.
The problem with (1) is that, as well as being more complicated,
it doesn't deal with the fact that we still have a memory_operand
for the second element. If we encourage the first operand to be
outside the range of a normal element memory_operand, we'll have
to reload the second operand to make it valid. This reload will
often be dead code, but will be kept around because the RTL
pattern makes it look as though the second element address
is still needed.
This patch therefore does (2) instead.
As mentioned in the PR notes, I think we have a general problem
with the way that the aarch64 port deals with paired addresses.
There's nothing to guarantee that the two addresses will be
reloaded in a way that keeps them “obviously” adjacent, so the
rtx_equal_p conditions could fail if something rechecked them
later.
For this particular pattern, I think it would be better to teach
simplify-rtx.c to fold the vec_concat to a normal vector memory
reference, to remove any suggestion that targets should try to
match the unsimplified form. That obviously wouldn't be suitable
for backports though.
gcc/
PR target/100305
* config/aarch64/constraints.md (Utq): Require the address to
be valid for both the element mode and for V2DImode.
gcc/testsuite/
PR target/100305
* gcc.c-torture/compile/pr100305.c: New test.
Tobias Burnus [Wed, 28 Apr 2021 16:46:47 +0000 (18:46 +0200)]
offload-defaulted: Config option to silently ignore uninstalled offload compilers
If configured with --enable-offload-defaulted, configured but not installed
offload compilers and libgomp plugins are silently ignored. Useful for
distribution compilers where those are in separate optional packages.
2021-04-28 Jakub Jelinek <jakub@redhat.com>
Tobias Burnus <tobias@codesourcery.com>
ChangeLog:
* configure.ac (--enable-offload-defaulted): New.
* configure: Regenerate.
gcc/ChangeLog:
* configure.ac (OFFLOAD_DEFAULTED): AC_DEFINE if offload-defaulted.
* gcc.c (process_command): New variable.
(driver::maybe_putenv_OFFLOAD_TARGETS): If OFFLOAD_DEFAULTED,
set it if -foffload is defaulted.
* lto-wrapper.c (OFFLOAD_TARGET_DEFAULT_ENV): Define.
(compile_offload_image): If OFFLOAD_DEFAULTED and
OFFLOAD_TARGET_DEFAULT is in the environment, don't fail
if corresponding mkoffload can't be found.
(compile_images_for_offload_targets): Likewise. Free and clear
offload_names if no valid offload is found.
* config.in: Regenerate.
* configure: Regenerate.
libgomp/ChangeLog:
* configure.ac (OFFLOAD_DEFAULTED): AC_DEFINE if offload-defaulted.
* target.c (gomp_load_plugin_for_device): If set and if a plugin
can't be dlopened, silently assume it has no devices.
* Makefile.in: Regenerate.
* config.h.in: Regenerate.
* configure: Regenerate.
Jonathan Wakely [Wed, 28 Apr 2021 14:56:04 +0000 (15:56 +0100)]
libstdc++: Define __cpp_lib_constexpr_string macro
As noted in r11-1339-gb6ab9ecd550227684643b41e9e33a4d3466724d8 we define
a non-standard __cpp_lib_constexpr_char_traits feature test macro to
indicate support for P0426R1 and P1032R1. At some point last year the
__cpp_lib_constexpr_string macro was retconned to indicate support for
those papers. This adds the new macro (which we didn't previously
define, because it referred to P0980R1 "Making std::string constexpr"
which we don't support).
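Typical usage in client code (the macro value shown is an assumption
based on the retconned papers rather than quoted from the header):

#include <version>

#if defined(__cpp_lib_constexpr_string) \
    && __cpp_lib_constexpr_string >= 201811L
// std::char_traits members such as length() and compare() can be
// used in constant expressions (P0426R1 / P1032R1).
#endif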
libstdc++-v3/ChangeLog:
* include/bits/basic_string.h (__cpp_lib_constexpr_string): Define.
* include/std/version (__cpp_lib_constexpr_string): Define.
* testsuite/21_strings/char_traits/requirements/constexpr_functions_c++17.cc:
Check for __cpp_lib_constexpr_string.
* testsuite/21_strings/char_traits/requirements/constexpr_functions_c++20.cc:
Likewise.
* testsuite/21_strings/char_traits/requirements/version.cc: New test.
Jonathan Wakely [Wed, 28 Apr 2021 13:49:28 +0000 (14:49 +0100)]
libstdc++: Reduce output of 'make doc-pdf-doxygen'
Use '@' to prevent Make from echoing the recipe, so that users don't see
this every time:
if [ -f ${doxygen_pdf} ]; then
mv ${doxygen_pdf} ${api_pdf} ;
echo ":: PDF file is ${api_pdf}";
else
echo "... error";
grep -F 'LaTeX Error' ${doxygen_outdir}/latex/refman.log;
grep -F 'TeX capacity exceeded, sorry' ${doxygen_outdir}/latex/refman.log;
exit 12;
fi
The presence of the "error" strings in the output makes it look like an
error happened. By suppressing the echoing, users will only see "error"
if the 'else' branch is taken.
libstdc++-v3/ChangeLog:
* doc/Makefile.am (stamp-pdf-doxygen): Improve comment about
dealing with errors. Use '@' to prevent shell command being
echoed.
* doc/Makefile.in: Regenerate.
Jonathan Wakely [Wed, 28 Apr 2021 11:45:49 +0000 (12:45 +0100)]
libstdc++: Add missing noexcept on std::thread member function [PR 100298]
The new inline definition of std::thread::hardware_concurrency() for
non-gthreads targets is missing the noexcept-specifier that is on the
declaration.
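The shape of the fix (a sketch, not the verbatim header text;
non-gthreads targets cannot report a concurrency level):

inline unsigned int
thread::hardware_concurrency() noexcept
{ return 0; }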
libstdc++-v3/ChangeLog:
PR libstdc++/100298
* include/bits/std_thread.h (thread::hardware_concurrency): Add
missing noexcept to inline definition for non-gthreads targets.
José Rui Faustino de Sousa [Wed, 28 Apr 2021 11:20:25 +0000 (11:20 +0000)]
Fortran: Fix double function call with -fcheck=pointer [PR]
gcc/fortran/ChangeLog:
PR fortran/82376
* trans-expr.c (gfc_conv_procedure_call): Evaluate function result
and then pass a pointer.
gcc/testsuite/ChangeLog:
PR fortran/82376
* gfortran.dg/PR82376.f90: New test.
Jakub Jelinek [Tue, 27 Apr 2021 09:37:30 +0000 (11:37 +0200)]
c++: Remove #error for release builds
* module.cc: Remove #error that triggers if DEV-PHASE is empty.
Richard Biener [Wed, 28 Apr 2021 08:41:41 +0000 (10:41 +0200)]
tree-optimization/100292 - avoid invalid GIMPLE from vector lowering
We have to avoid folding the condition when building a COND_EXPR
since we no longer gimplify the whole thing. The folding done
at COND_EXPR build time will deal with possible simplifications.
2021-04-28 Richard Biener <rguenther@suse.de>
PR tree-optimization/100292
* tree-vect-generic.c (expand_vector_condition): Do not fold
the comparisons.
* gcc.dg/pr100292.c: New testcase.
Piotr Trojanek [Fri, 11 Dec 2020 09:41:20 +0000 (10:41 +0100)]
[Ada] Style fixes related to calls to List_Length
gcc/ada/
* sem_ch13.adb, sem_util.adb: Fix style.
Piotr Trojanek [Thu, 10 Dec 2020 23:42:22 +0000 (00:42 +0100)]
[Ada] Adjust List_Length description
gcc/ada/
* nlists.ads (List_Length): Adapt comment to match the
behaviour.
Arnaud Charlet [Thu, 10 Dec 2020 16:22:23 +0000 (11:22 -0500)]
[Ada] Fix recent optimization in evaluation of selected component for GNATprove
gcc/ada/
* sem_eval.adb (Eval_Selected_Component): Only consider compile
time known aggregates.
Eric Botcazou [Thu, 10 Dec 2020 20:02:07 +0000 (21:02 +0100)]
[Ada] Fix computation of Prec/Succ of zero without denormals
gcc/ada/
* libgnat/s-fatgen.adb: Add use clause for Interfaces.Unsigned_16
and Interfaces.Unsigned_32.
(Small16): New constant.
(Small32): Likewise.
(Small64): Likewise.
(Small80): Likewise.
(Pred): Declare a local overlay for Small and return it negated
for zero if the type does not support denormalized numbers.
(Succ): Likewise, but return it directly.
Piotr Trojanek [Thu, 10 Dec 2020 12:12:22 +0000 (13:12 +0100)]
[Ada] Remove redundant assignment in Formal_Is_Used_Once
gcc/ada/
* inline.adb (Formal_Is_Used_Once): Refine type of the counter
variable; remove redundant assignment.
Patrick Bernardi [Wed, 9 Dec 2020 21:48:20 +0000 (16:48 -0500)]
[Ada] Install_Restricted_Handlers: define Prio parameter as Interrupt_Priority
gcc/ada/
* libgnarl/s-interr.adb (Install_Restricted_Handlers): Change
Prio parameter to type Interrupt_Priority.
* libgnarl/s-interr.ads (Install_Restricted_Handlers): Likewise.
* libgnarl/s-interr__dummy.adb (Install_Restricted_Handlers):
Likewise.
* libgnarl/s-interr__hwint.adb (Install_Restricted_Handlers):
Likewise.
* libgnarl/s-interr__sigaction.adb (Install_Restricted_Handlers):
Likewise.
* libgnarl/s-interr__vxworks.adb (Install_Restricted_Handlers):
Likewise.
Piotr Trojanek [Tue, 8 Dec 2020 21:34:29 +0000 (22:34 +0100)]
[Ada] Simplify data structures for overloaded interpretations
gcc/ada/
* sem_type.ads (Write_Interp_Ref): Removed; no longer needed.
* sem_type.adb (Headers): Removed; now the hash table is
directly in the Interp_Map alone.
(Interp_Map): Now an instance of the GNAT.HTable.Simple_HTable.
(Last_Overloaded): New variable to emulate Interp_Map.Last.
(Add_One_Interp): Adapt to new data structure.
(Get_First_Interp): Likewise.
(Hash): Likewise.
(Init_Interp_Tables): Likewise.
(New_Interps): Likewise.
(Save_Interps): Likewise; handle O_N variable like in
Get_First_Interp.
(Write_Interp_Ref): Removed; no longer needed.
Piotr Trojanek [Tue, 8 Dec 2020 20:28:24 +0000 (21:28 +0100)]
[Ada] Replace dubious use of Traverse_Func with Traverse_Proc
gcc/ada/
* inline.adb (Do_Reset_Calls): Now an instance of Traverse_Proc.
Piotr Trojanek [Wed, 9 Dec 2020 13:44:00 +0000 (14:44 +0100)]
[Ada] Consistent diagnostic on missing -gnat2020 switch for aspects
gcc/ada/
* sem_ch13.adb (Analyze_Aspect_Static): Reuse
Error_Msg_Ada_2020_Feature for aspect Static.
(Analyze_One_Aspect): Likewise for aspect Full_Access.
Piotr Trojanek [Wed, 9 Dec 2020 14:22:29 +0000 (15:22 +0100)]
[Ada] Refactor repeated checks for the expression of aspect Static
gcc/ada/
* sem_ch13.adb (Analyze_Aspect_Static): Refactor to have a
single check for the expression being present; adapt comments.
Piotr Trojanek [Wed, 9 Dec 2020 13:34:45 +0000 (14:34 +0100)]
[Ada] More precise error about aspects conflicting with Static
gcc/ada/
* sem_ch13.adb (Analyze_Aspect_Static): Use aspect name in the
error message.
Piotr Trojanek [Wed, 9 Dec 2020 16:02:26 +0000 (17:02 +0100)]
[Ada] Simplify folding of selected components with qualified prefixes
gcc/ada/
* sem_eval.adb (Eval_Selected_Component): Simplify with
Unqualify.
Eric Botcazou [Tue, 8 Dec 2020 22:53:43 +0000 (23:53 +0100)]
[Ada] Eliminate early roundoff error for Long_Long_Float on x86
gcc/ada/
* libgnat/s-valrea.adb (Fast2Sum): New function.
(Integer_to_Real): Use it in an iterated addition with exact
error handling for the case where an extra digit is needed.
Move local variable now only used in the exponentiation case.
Yannick Moy [Mon, 7 Dec 2020 15:45:23 +0000 (16:45 +0100)]
[Ada] Use spans instead of locations for compiler diagnostics
gcc/ada/
* errout.adb: (Error_Msg_Internal): Use span instead of
location.
(Error_Msg, Error_Msg_NEL): Add versions with span parameter.
(Error_Msg_F, Error_Msg_FE, Error_Msg_N, Error_Msg_NE,
Error_Msg_NW): Retrieve span from node.
(First_Node): Use the new First_And_Last_Nodes.
(First_And_Last_Nodes): Expand on previous First_Node. Apply to
other nodes than expressions.
(First_Sloc): Protect against inconsistent locations.
(Last_Node): New function based on First_And_Last_Nodes.
(Last_Sloc): New function similar to First_Sloc.
(Output_Messages): Update output when -gnatdF is used. Use
character ~ for making the span visible, similar to what is done
in GCC and Clang.
* errout.ads (Error_Msg, Error_Msg_NEL): Add versions with span
parameter.
(First_And_Last_Nodes, Last_Node, Last_Sloc): New subprograms.
* erroutc.adb: Adapt to Sptr field being a span.
* erroutc.ads (Error_Msg_Object): Change field Sptr from
location to span.
* errutil.adb: Adapt to Sptr field being a span.
* freeze.adb: Use Errout reporting procedures for nodes to get
spans.
* par-ch3.adb: Likewise.
* par-prag.adb: Likewise.
* par-util.adb: Likewise.
* sem_case.adb: Likewise.
* sem_ch13.adb: Likewise.
* sem_ch3.adb: Likewise.
* sem_prag.adb: Likewise.
* types.ads: (Source_Span): New type for spans.
(To_Span): Basic constructors for spans.
Arnaud Charlet [Tue, 8 Dec 2020 17:14:08 +0000 (12:14 -0500)]
[Ada] Assert failure on complex code with private type and discriminant
gcc/ada/
* einfo.adb (Discriminant_Constraint): Refine assertion.
Gary Dismukes [Mon, 7 Dec 2020 06:58:10 +0000 (01:58 -0500)]
[Ada] AI12-0397: Default_Initial_Condition expressions for derived types
gcc/ada/
* exp_util.adb (Add_Own_DIC): Suppress expansion of a DIC pragma
when the pragma occurs for an abstract type, since that could
lead to a call to an abstract function, and such DIC checks can
never be performed for abstract types in any case.
* sem_disp.adb (Check_Dispatching_Context): Suppress the check
for illegal calls to abstract subprograms when the call occurs
within a Default_Initial_Condition aspect and the call is passed
the current instance as an actual.
(Has_Controlling_Current_Instance_Actual): New function to test
a call to see if it has any actuals given by direct references
to a current instance of a type.
* sem_res.adb (Resolve_Actuals): Issue an error for a call
within a DIC aspect to a nonprimitive subprogram with an actual
given by the name of the DIC type's current instance (which will
show up as a reference to the formal parameter of a DIC
procedure).
Ed Schonberg [Tue, 8 Dec 2020 00:34:01 +0000 (19:34 -0500)]
[Ada] Crash on inherited component in type extension in generic unit.
gcc/ada/
* exp_ch3.adb (Expand_Record_Extension): Set Parent_Subtype on
the type extension when within a generic unit, even though
expansion is disabled, to allow for proper resolution of
selected components inherited from an ancestor.
Arnaud Charlet [Thu, 3 Dec 2020 15:06:47 +0000 (10:06 -0500)]
[Ada] Crash with declare expression used in a postcondition
gcc/ada/
* sem_aux.adb (Is_Limited_Type): Fix logic to check Is_Type
before assuming Ent is a type.
* sem_ch4.adb (Analyze_Expression_With_Actions): Update
comments, minor reformatting.
* sem_res.adb (Resolve_Declare_Expression): Add protection
against no type.
Arnaud Charlet [Tue, 8 Dec 2020 13:16:45 +0000 (08:16 -0500)]
[Ada] Incorrect discriminant check on call to access to subprogram
gcc/ada/
* exp_ch6.adb: Fix typo in comment.
* sem_ch3.adb (Access_Subprogram_Declaration): Add missing call
to Create_Extra_Formals. Remove obsolete bootstrap check.
* sem_eval.adb (Eval_Selected_Component): Simplify a
selected_component on an aggregate.
Piotr Trojanek [Mon, 7 Dec 2020 23:22:20 +0000 (00:22 +0100)]
[Ada] Remove double initialization of interpretation tables
gcc/ada/
* fmap.ads (Reset_Tables): Remove outdated references to
GNSA/ASIS.
* sem_eval.ads (Initialize): Likewise.
* sem_type.adb (Headers): Remove initialization at elaboration.
* sem_type.ads (Init_Interp_Tables): Remove outdated reference
to gnatf.
* stringt.ads (Initialize): Fix style in comment.
Piotr Trojanek [Mon, 7 Dec 2020 22:47:12 +0000 (23:47 +0100)]
[Ada] Update reference with description of type resolution
gcc/ada/
* sem_res.ads: Update reference in comment.
* sem_type.ads: Fix casing in a name of a unit.
Yannick Moy [Tue, 8 Dec 2020 08:23:09 +0000 (09:23 +0100)]
[Ada] Improve error message for ghost in predicate
gcc/ada/
* ghost.adb (Check_Ghost_Context): Add continuation message when
in predicate.
Eric Botcazou [Mon, 7 Dec 2020 21:04:43 +0000 (22:04 +0100)]
[Ada] Couple of adjustments for the sake of static analyzers
gcc/ada/
* libgnat/s-valrea.adb (Integer_to_Real): Use a subtype of Num
for the component type of the table of powers of ten.
* libgnat/s-valuer.adb (Round_Extra): Add assertion on Base.
Piotr Trojanek [Mon, 7 Dec 2020 15:54:06 +0000 (16:54 +0100)]
[Ada] Extend compile-time evaluation in case statements to all objects
gcc/ada/
* sem_ch5.adb (Analyze_Case_Statement): Extend optimization to
all objects; fix typo in comment.
Piotr Trojanek [Mon, 7 Dec 2020 14:32:40 +0000 (15:32 +0100)]
[Ada] Cleanups related to entry barrier conditions
gcc/ada/
* exp_ch9.adb (Build_Barrier_Function): Refine type of a
protected type entity.
(Is_Pure_Barrier): Fix style.
Bob Duff [Mon, 7 Dec 2020 13:16:34 +0000 (08:16 -0500)]
[Ada] Incorrect error with Default_Value on private/modular type
gcc/ada/
* exp_ch3.adb (Simple_Init_Defaulted_Type): Simplify the code,
and always use OK_Convert_To, rather than Unchecked_Convert_To
and Convert_To.
Arnaud Charlet [Mon, 7 Dec 2020 12:47:37 +0000 (07:47 -0500)]
[Ada] Remove unused subprograms
gcc/ada/
* sem_ch3.adb (Analyze_Object_Declaration): Remove dead code.
* ali.ads, ali.adb (Scan_ALI): Remove unused parameters.
Remove unused code related to Xref lines.
(Get_Typeref): Removed, no longer used.
Arnaud Charlet [Wed, 2 Dec 2020 09:15:36 +0000 (04:15 -0500)]
[Ada] Bad handling of 'Valid_Scalars and arrays
gcc/ada/
* exp_attr.adb (Build_Array_VS_Func, Build_Record_VS_Func,
Expand_N_Attribute_Reference): Use Get_Fullest_View instead of
Validated_View.
(Build_Record_VS_Func): Adjust to keep using Validated_View.
(Expand_N_Attribute_Reference) [Valid]: Use
Small_Integer_Type_For to allow for more compile time
evaluations.
* sem_util.adb (Cannot_Raise_Constraint_Error): Add more precise
support for N_Indexed_Component and fix support for
N_Selected_Component which wasn't completely safe.
(List_Cannot_Raise_CE): New.
* libgnat/i-cobol.adb (Valid_Packed): Simplify test to address
new GNAT warning.
Arnaud Charlet [Wed, 7 Apr 2021 09:11:57 +0000 (05:11 -0400)]
[Ada] Fix the Sphinx configuration and port it to Python3
gcc/ada/
* .gitignore: New.
* doc/share/conf.py: Add Python 3 compatibility.
* doc/share/gnat.sty: Add missing file.
Richard Wai [Mon, 15 Mar 2021 10:24:00 +0000 (06:24 -0400)]
[Ada] Hashed container Cursor type predefined equality non-conformance
gcc/ada/
* libgnat/a-cohase.ads (Cursor): Synchronize comments for the Cursor
type definition to be consistent with identical definitions in other
container packages. Add additional comments regarding the importance of
maintaining the "Position" component for predefined equality.
* libgnat/a-cohama.ads (Cursor): Likewise.
* libgnat/a-cihama.ads (Cursor): Likewise.
* libgnat/a-cohase.adb (Find, Insert): Ensure that Cursor objects
always have their "Position" component set to ensure predefined
equality works as required.
* libgnat/a-cohama.adb (Find, Insert): Likewise.
* libgnat/a-cihama.adb (Find, Insert): Likewise.
gcc/testsuite/
* gnat.dg/containers2.adb: New test.
Eric Botcazou [Wed, 28 Apr 2021 08:21:59 +0000 (10:21 +0200)]
Avoid creating useless local bounds around calls
This prevents the compiler from creating useless local bounds around calls
that take a parameter of an unconstrained array type when the bounds already
exist somewhere else for the actual parameter.
gcc/ada/
* gcc-interface/decl.c (gnat_to_gnu_subprog_type): Do not demote a
const or pure function because of a parameter whose type is pointer
to function.
* gcc-interface/trans.c (Call_to_gnu): Do not put back a conversion
between an actual and a formal that are unconstrained array types.
(gnat_gimplify_expr) <CALL_EXPR>: New case.
* gcc-interface/utils2.c (build_binary_op): Do not use |= operator.
(gnat_stabilize_reference_1): Likewise.
(gnat_rewrite_reference): Likewise.
(build_unary_op): Do not clear existing TREE_CONSTANT on the result.
(gnat_build_constructor): Also accept the address of a constant
CONSTRUCTOR as constant element.
Eric Botcazou [Wed, 28 Apr 2021 07:58:21 +0000 (09:58 +0200)]
Get rid of useless temporary for call to pure function
This avoids creating a useless temporary for a call to a pure function with
good properties by using the return slot optimization (RSO).
gcc/ada/
* gcc-interface/trans.c (is_array_of_scalar_type): New predicate.
(find_decls_r): New function.
(return_slot_opt_for_pure_call_p): New predicate.
(Call_to_gnu): Do not create a temporary for the return value if the
parent node is an aggregate. If there is a target, try to apply the
return slot optimization to regular calls to pure functions returning
an array of scalar type.
Eric Botcazou [Wed, 28 Apr 2021 07:43:02 +0000 (09:43 +0200)]
Fix loss of optimization of array iteration due to inlining
This helps loop-invariant motion to hoist complicated offset computations.
gcc/ada/
* gcc-interface/trans.c (language_function): Add comment.
(loop_info_d): Add fndecl and invariants fields.
(find_loop_for): Test fndecl instead of the context of var.
(find_loop): New function.
(Regular_Loop_to_gnu): Fold back into...
(Loop_Statement_to_gnu): ...this. Emit invariants on entry, if any.
(gnat_to_gnu) <N_Selected_Component>: Record nonconstant invariant
offset computations in loops when optimization is enabled.
* gcc-interface/utils2.c (gnat_invariant_expr): Handle BIT_AND_EXPR.
gcc/testsuite/
* gnat.dg/opt93.ads, gnat.dg/opt93.adb: New test.
Patrick Palka [Wed, 28 Apr 2021 03:21:19 +0000 (23:21 -0400)]
libstdc++: Fix various bugs in ranges_algo.h [PR100187, ...]
This fixes some bugs with our ranges algorithms in uncommon situations,
such as when the return type of a predicate is a non-copyable class type
that's implicitly convertible to bool (PR100187), when a comparison
predicate isn't invocable as an rvalue (PR100237), and when the return
type of a projection function is non-copyable (PR100249).
This also fixes PR100287, which reports that we're moving __first twice
when constructing with it an empty subrange in ranges::partition.
libstdc++-v3/ChangeLog:
PR libstdc++/100187
PR libstdc++/100237
PR libstdc++/100249
PR libstdc++/100287
* include/bits/ranges_algo.h (__search_n_fn::operator()): Give
the __value_comp lambda an explicit bool return type.
(__is_permutation_fn::operator()): Give the __proj_scan local
variable auto&& type. Give the __comp_scan lambda an
explicit bool return type.
(__remove_fn::operator()): Give the __pred lambda an explicit
bool return type.
(__partition_fn::operator()): Don't std::move __first twice
when returning an empty subrange.
(__min_fn::operator()): Don't std::move __comp.
(__max_fn::operator()): Likewise.
(__minmax_fn::operator()): Likewise.
GCC Administrator [Wed, 28 Apr 2021 00:16:36 +0000 (00:16 +0000)]
Daily bump.
David Edelsohn [Tue, 27 Apr 2021 20:09:07 +0000 (16:09 -0400)]
aix: Alias -m64 to -maix64 and -m32 to -maix32.
GCC on AIX historically has used -maix64 and -maix32 to switch to 64 bit mode
or 32 bit mode, unlike other ports that use -m64 and -m32. The Alias()
directive for options cannot be used because aix64 is expected in multiple
parts of the compiler infrastructure and one cannot switch to -m64 due to
backward compatibility.
This patch defines DRIVER_SELF_SPECS to translate -m64 to -maix64 and
-m32 to -maix32 so that the command-line options compatible with other
targets can be used while continuing to allow the historical options.
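As a hedged sketch of the mechanism (the actual definition lives in
config/rs6000/aix.h and may differ; the exact spec string here is an
assumption), a driver self-spec of roughly this shape rewrites the
generic options into the historical ones before option processing:
  #define SUBTARGET_DRIVER_SELF_SPECS \
    "%{m64:-maix64} %<m64 %{m32:-maix32} %<m32"
Here %{m64:...} emits -maix64 whenever -m64 is given and %<m64 then
deletes the original -m64 from the command line.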
gcc/ChangeLog:
* config/rs6000/aix.h (SUBTARGET_DRIVER_SELF_SPECS): New.
* config/rs6000/aix64.opt (m64): New.
(m32): New.
Jason Merrill [Fri, 23 Apr 2021 20:41:35 +0000 (16:41 -0400)]
c++: -Wdeprecated-copy and using operator= [PR92145]
For the purpose of [depr.impldec] "if the class has a user-declared copy
assignment operator", an operator= brought in from a base class with 'using'
may be a copy-assignment operator, but it isn't a copy-assignment operator
for the derived class.
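A minimal sketch of the pattern at issue (an assumed simplification of
the PR 92145 testcase, not the testcase itself):
  struct B
  {
    B () {}
    B (const B&) {}
    B& operator= (const B&) { return *this; }  // user-declared for B
  };
  struct D : B
  {
    using B::operator=;  // not a user-declared copy assignment for D
  };
  int main ()
  {
    D d1, d2;
    d2 = d1;    // uses D's implicitly-declared copy assignment
    D d3 = d1;  // implicit copy ctor: no -Wdeprecated-copy warning intended
  }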
gcc/cp/ChangeLog:
PR c++/92145
* class.c (classtype_has_depr_implicit_copy): Check DECL_CONTEXT
of operator=.
gcc/testsuite/ChangeLog:
PR c++/92145
* g++.dg/cpp0x/depr-copy3.C: New test.
Patrick Palka [Tue, 27 Apr 2021 18:18:25 +0000 (14:18 -0400)]
c++: Fix Bases(args...)... base initialization [PR88580]
When substituting into the arguments of a base initializer pack
expansion, tsubst_initializer_list uses a dummy EXPR_PACK_EXPANSION
in order to expand an initializer such as Bases(args)... into
Bases#{0}(args#{0}) and so on. But when an argument inside the base
initializer is itself a pack expansion, as in Bases(args...)..., the
argument is already an EXPR_PACK_EXPANSION so we don't need to wrap it.
It's also independent from the outer expansion of Bases, so we need to
"multiplicatively" append the expansion of args... onto the argument
list of each expanded base.
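A minimal sketch of the construct being fixed (an assumed shape, not
necessarily identical to the new testcase):
  struct A { A (int, int) {} };
  struct B { B (int, int) {} };
  template<typename... Bases>
  struct D : Bases...
  {
    template<typename... Args>
    D (Args... args) : Bases (args...)... {}  // expansion within expansion
  };
  D<A, B> d (1, 2);  // initializes bases as A (1, 2) and B (1, 2)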
gcc/cp/ChangeLog:
PR c++/88580
* pt.c (tsubst_initializer_list): Correctly handle the case
where an argument inside a base initializer pack expansion is
itself a pack expansion.
gcc/testsuite/ChangeLog:
PR c++/88580
* g++.dg/cpp0x/variadic182.C: New test.
Patrick Palka [Tue, 27 Apr 2021 18:07:46 +0000 (14:07 -0400)]
libstdc++: Fix up lambda in join_view::_Iterator::operator++ [PR100290]
Currently, the return type of this lambda is decltype(auto), so the
lambda ends up returning a copy of _M_parent->_M_inner rather than a
reference to it when _S_ref_glvalue is false. This means _M_inner and
ranges::end(__inner_range) are respectively an iterator and sentinel for
different ranges, so comparing them is undefined.
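The pitfall can be illustrated standalone (this is not the libstdc++
code itself):
  struct Parent { int inner = 1; };
  int main ()
  {
    Parent p;
    // decltype(p.inner) is int, so this lambda returns a copy:
    auto by_copy = [&p] () -> decltype(auto) { return p.inner; };
    // decltype((p.inner)) is int&, so this one returns a reference:
    auto by_ref = [&p] () -> decltype(auto) { return (p.inner); };
    by_ref () = 2;        // modifies p.inner
    return p.inner - 2;   // 0
  }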
libstdc++-v3/ChangeLog:
PR libstdc++/100290
* include/std/ranges (join_view::_Iterator::operator++): Correct
the return type of the lambda to avoid returning a copy of
_M_parent->_M_inner.
* testsuite/std/ranges/adaptors/join.cc (test10): New test.
Maciej W. Rozycki [Wed, 21 Apr 2021 21:33:25 +0000 (23:33 +0200)]
VAX: Accept ASHIFT in address expressions
Fix regressions:
FAIL: gcc.c-torture/execute/20090113-2.c -O1 (internal compiler error)
FAIL: gcc.c-torture/execute/20090113-2.c -O1 (test for excess errors)
FAIL: gcc.c-torture/execute/20090113-3.c -O1 (internal compiler error)
FAIL: gcc.c-torture/execute/20090113-3.c -O1 (test for excess errors)
triggering if LRA is used rather than old reload and caused by:
(plus:SI (plus:SI (mult:SI (reg:SI 30 [ _10 ])
(const_int 4 [0x4]))
(reg/f:SI 26 [ _6 ]))
(const_int 12 [0xc]))
coming from:
(insn 58 57 59 10 (set (reg:SI 33 [ _13 ])
(zero_extract:SI (mem:SI (plus:SI (plus:SI (mult:SI (reg:SI 30 [ _10 ])
(const_int 4 [0x4]))
(reg/f:SI 26 [ _6 ]))
(const_int 12 [0xc])) [4 _6->bits[_10]+0 S4 A32])
(reg:QI 56)
(reg:SI 53)))
".../gcc/testsuite/gcc.c-torture/execute/20090113-2.c":64:12 490 {*extzv_non_const}
(expr_list:REG_DEAD (reg:QI 56)
(expr_list:REG_DEAD (reg:SI 53)
(expr_list:REG_DEAD (reg:SI 30 [ _10 ])
(expr_list:REG_DEAD (reg/f:SI 26 [ _6 ])
(nil))))))
being converted into:
(plus:SI (plus:SI (ashift:SI (reg:SI 30 [ _10 ])
(const_int 2 [0x2]))
(reg/f:SI 26 [ _6 ]))
(const_int 12 [0xc]))
which is an rtx the VAX backend currently does not recognize as a valid
machine address, although apparently it is only inside MEM rtx's that
indexed addressing is supposed to be canonicalized to a MULT rather than
an ASHIFT form. So handle the ASHIFT form as well throughout the backend.
The change appears to also improve code generation with old reload and
code size stats are as follows, collected from 18153 executables built
in `check-c' GCC testing:
                 samples   average    median
  ------------------------------------------
  regressions         47    0.702%    0.521%
  unchanged        17503    0.000%    0.000%
  progressions       603   -0.920%   -0.403%
  ------------------------------------------
  total            18153   -0.029%    0.000%
with a small number of outliers (over 5% size change):
    old    new  change   %change  filename
  ----------------------------------------------------
   1885   1645    -240  -12.7320  pr53505.exe
   1331   1221    -110   -8.2644  pr89634.exe
   1553   1473     -80   -5.1513  stdatomic-vm.exe
   1413   1341     -72   -5.0955  pr45830.exe
   1415   1343     -72   -5.0883  stdatomic-vm.exe
  25765  24463   -1302   -5.0533  strlen-5.exe
  25765  24463   -1302   -5.0533  strlen-5.exe
  25765  24463   -1302   -5.0533  strlen-5.exe
   1191   1131     -60   -5.0377  20050527-1.exe
(all changes on the expansion side are below 5%).
gcc/
* config/vax/vax.c (print_operand_address, vax_address_cost_1)
(index_term_p): Handle ASHIFT too.
Maciej W. Rozycki [Wed, 21 Apr 2021 21:33:11 +0000 (23:33 +0200)]
VAX: Fix ill-formed `jbb<ccss>i<mode>' insn operands
The insn has an extraneous operand #3 that is aliased in RTL to operand #0
with a constraint. The operands specify a single-bit field in memory
that the machine instruction produced both reads, for the purpose of
determining whether to branch or not, and either clears or sets according
to the machine operation selected with the `ccss' iterator. The caller
of the insn is supposed to supply the same rtx for both operands.
This odd arrangement happens to work with old reload, but breaks with
libatomic if LRA is used instead:
.../libatomic/flag.c: In function 'atomic_flag_test_and_set':
.../libatomic/flag.c:36:1: error: unable to generate reloads for:
36 | }
| ^
(jump_insn 7 6 19 2 (unspec_volatile [
(set (pc)
(if_then_else (eq (zero_extract:SI (mem/v:QI (reg:SI 27) [-1 S1 A8])
(const_int 1 [0x1])
(const_int 0 [0]))
(const_int 1 [0x1]))
(label_ref:SI 25)
(pc)))
(set (zero_extract:SI (mem/v:QI (reg:SI 28) [-1 S1 A8])
(const_int 1 [0x1])
(const_int 0 [0]))
(const_int 1 [0x1]))
] 100) ".../libatomic/flag.c":35:10 669 {jbbssiqi}
(nil)
-> 25)
during RTL pass: reload
.../libatomic/flag.c:36:1: internal compiler error: in curr_insn_transform, at lra-constraints.c:4098
0x1112c587 _fatal_insn(char const*, rtx_def const*, char const*, int, char const*)
.../gcc/rtl-error.c:108
0x10ee6563 curr_insn_transform
.../gcc/lra-constraints.c:4098
0x10eeaf87 lra_constraints(bool)
.../gcc/lra-constraints.c:5133
0x10ec97e3 lra(_IO_FILE*)
.../gcc/lra.c:2336
0x10e4633f do_reload
.../gcc/ira.c:5827
0x10e46b27 execute
.../gcc/ira.c:6013
Please submit a full bug report,
with preprocessed source if appropriate.
Please include the complete backtrace with any bug report.
See <https://gcc.gnu.org/bugs/> for instructions.
Switch to using `match_dup' as expected, then, for a machine instruction
that in its encoding only has one actual operand for the single-bit
field.
gcc/
* config/vax/builtins.md (jbb<ccss>i<mode>): Remove operand #3.
(sync_lock_test_and_set<mode>): Adjust accordingly.
(sync_lock_release<mode>): Likewise.
Maciej W. Rozycki [Wed, 21 Apr 2021 21:33:02 +0000 (23:33 +0200)]
VAX: Remove dead `adjacent_operands_p' function
This function has never been used and it is unclear what its intended
purpose was.
gcc/
* config/vax/vax-protos.h (adjacent_operands_p): Remove
prototype.
* config/vax/vax.c (adjacent_operands_p): Remove.
Maciej W. Rozycki [Thu, 3 Dec 2020 11:35:06 +0000 (11:35 +0000)]
ifcvt: Fall through to NCE if getting the CE condition failed
If getting the condition for conditional execution has failed, then fall
through and try the non-conditional execution approach instead rather
than giving up on dead code elimination altogether, for a better code
structure if nothing else.
It may well be that whenever `cond_exec_get_condition' fails,
`noce_get_condition' fails as well; in that case no change in semantics
will result. If they ever diverge, then someone will have to revisit
this place.
gcc/
* ifcvt.c (dead_or_predicable) [!IFCVT_MODIFY_TESTS]: Fall
through to the non-conditional execution case if getting the
condition for conditional execution has failed.
Richard Sandiford [Tue, 27 Apr 2021 17:30:36 +0000 (18:30 +0100)]
Fix handling of VEC_COND_EXPR trap tests [PR100284]
Now that VEC_COND_EXPR has normal unnested operands,
operation_could_trap_p can treat it like any other expression.
This fixes many testsuite ICEs for SVE, but it turns out that none
of the tests in gcc.target/aarch64/sve were affected. Anyone testing
on non-SVE aarch64 therefore wouldn't have seen it.
gcc/
PR middle-end/100284
* gimple.c (gimple_could_trap_p_1): Remove VEC_COND_EXPR test.
* tree-eh.c (operation_could_trap_p): Handle VEC_COND_EXPR rather
than asserting on it.
gcc/testsuite/
PR middle-end/100284
* gcc.target/aarch64/sve/pr81003.c: New test.
Martin Sebor [Tue, 27 Apr 2021 17:00:53 +0000 (11:00 -0600)]
Remove malformed dg-warning directives.
gcc/testsuite/ChangeLog:
PR testsuite/100272
* g++.dg/ext/flexary13.C: Remove malformed directives.
David Edelsohn [Tue, 27 Apr 2021 16:59:59 +0000 (16:59 +0000)]
powerpc: fix bootstrap.
gcc/ChangeLog:
* config/rs6000/rs6000.c (rs6000_aix_precompute_tls_p): Protect
with TARGET_AIX_OS.
David Edelsohn [Sun, 11 Apr 2021 23:41:26 +0000 (19:41 -0400)]
aix: TLS precompute register parameters (PR 94177)
AIX uses a compiler-managed TOC for global data, including TLS symbols.
The GCC TOC implementation manages the TOC entries through the constant pool.
TLS symbols sometimes require a function call to obtain the TLS base
pointer. The arguments to the TLS call can conflict with arguments to
a normal function call if the TLS symbol is an argument in the normal call.
GCC specifically checks for this situation and precomputes the TLS
arguments, but the mechanism to check for this requirement utilizes
legitimate_constant_p(). The result of legitimate_constant_p() required
for correct TOC behavior conflicts with the one required for correct
TLS argument behavior.
This patch adds a new target hook precompute_tls_p() to decide if an
argument should be precomputed regardless of the result from
legitimate_constant_p().
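A standalone illustration of the conflict (not GCC source; the names
are made up):
  extern thread_local int tls_counter;
  extern void sink (int *tls_arg, int other);
  void
  caller (int other)
  {
    // On AIX, materializing &tls_counter may itself require a call to
    // obtain the TLS base pointer; if that call happens after the
    // argument registers for sink have been loaded, they can be
    // clobbered.  Precomputing the TLS address first avoids this.
    sink (&tls_counter, other);
  }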
gcc/ChangeLog:
PR target/94177
* calls.c (precompute_register_parameters): Additionally test
targetm.precompute_tls_p to pre-compute argument.
* config/rs6000/aix.h (TARGET_PRECOMPUTE_TLS_P): Define.
* config/rs6000/rs6000.c (rs6000_aix_precompute_tls_p): New.
* target.def (precompute_tls_p): New.
* doc/tm.texi.in (TARGET_PRECOMPUTE_TLS_P): Add hook documentation.
* doc/tm.texi: Regenerated.
Jakub Jelinek [Tue, 27 Apr 2021 15:50:53 +0000 (17:50 +0200)]
aarch64: Fix up last commit [PR100200]
Pedantically, signed vs. unsigned mismatches in va_arg are only well defined
if the value can be represented in both signed and unsigned integer types.
2021-04-27 Jakub Jelinek <jakub@redhat.com>
PR target/100200
* config/aarch64/aarch64.c (aarch64_print_operand): Cast -UINTVAL
back to HOST_WIDE_INT.
Bernd Edlinger [Wed, 21 Apr 2021 12:13:04 +0000 (14:13 +0200)]
Fix target/100106 ICE in gen_movdi
As the test case shows, the outer mode may have a higher alignment
requirement than the inner mode here.
2021-04-27 Bernd Edlinger <bernd.edlinger@hotmail.de>
PR target/100106
* simplify-rtx.c (simplify_context::simplify_subreg): Check the
memory alignment for the outer mode.
* gcc.c-torture/compile/pr100106.c: New testcase.
H.J. Lu [Mon, 26 Apr 2021 22:36:18 +0000 (15:36 -0700)]
op_by_pieces_d::run: Change a while loop to a do-while loop
Change a while loop in op_by_pieces_d::run to a do-while loop to prepare
for an offset-adjusted operation for the remaining bytes on the last piece
operation of a memory region.
PR middle-end/90773
* expr.c (op_by_pieces_d::get_usable_mode): New member function.
(op_by_pieces_d::run): Change a while loop to a do-while loop.
Alex Coplan [Tue, 27 Apr 2021 13:56:15 +0000 (14:56 +0100)]
arm: Fix ICEs with compare-and-swap and -march=armv8-m.base [PR99977]
The PR shows two ICEs with __sync_bool_compare_and_swap and
-mcpu=cortex-m23 (equivalently, -march=armv8-m.base): one in LRA and one
later on, after the CAS insn is split.
The LRA ICE occurs because the
@atomic_compare_and_swap<CCSI:arch><SIDI:mode>_1 pattern attempts to tie
two output operands together (operands 0 and 1 in the third
alternative). LRA can't handle this, since it doesn't make sense for an
insn to assign to the same operand twice.
The later (post-splitting) ICE occurs because the expansion of the
cbranchsi4_scratch insn doesn't quite go according to plan. As it
stands, arm_split_compare_and_swap calls gen_cbranchsi4_scratch,
attempting to pass a register (neg_bval) to use as a scratch register.
However, since the RTL template has a match_scratch here,
gen_cbranchsi4_scratch ignores this argument and produces a scratch rtx.
Since this is all happening after RA, this is doomed to fail (and we get
an ICE about the insn not matching its constraints).
It seems that the motivation for the choice of constraints in the
atomic_compare_and_swap pattern comes from an attempt to satisfy the
constraints of the cbranchsi4_scratch insn. This insn requires the
scratch register to be the same as the input register in the case that
we use a larger negative immediate (one that satisfies J, but not L).
Of course, as noted above, LRA refuses to assign two output operands to
the same register, so this was never going to work.
The solution I'm proposing here is to collapse the alternatives to the
CAS insn (allowing the two output register operands to be matched to
different registers) and to ensure that the constraints for
cbranchsi4_scratch are met in arm_split_compare_and_swap. We do this by
inserting a move to ensure the source and destination registers match if
necessary (i.e. in the case of large negative immediates).
Another notable change here is that we only do:
emit_move_insn (neg_bval, const1_rtx);
for non-negative immediates. This is because the ADDS instruction used in
the negative case suffices to leave a suitable value in neg_bval: if the
operands compare equal, we don't take the branch (so neg_bval will be
set by the load exclusive). Otherwise, the ADDS will leave a nonzero
value in neg_bval, which will correctly signal that the CAS has failed
when it is later negated.
gcc/ChangeLog:
PR target/99977
* config/arm/arm.c (arm_split_compare_and_swap): Fix up codegen
with negative immediates: ensure we expand cbranchsi4_scratch
correctly and ensure we satisfy its constraints.
* config/arm/sync.md
(@atomic_compare_and_swap<CCSI:arch><NARROW:mode>_1): Don't
attempt to tie two output operands together with constraints;
collapse two alternatives.
(@atomic_compare_and_swap<CCSI:arch><SIDI:mode>_1): Likewise.
* config/arm/thumb1.md (cbranchsi4_neg_late): New.
gcc/testsuite/ChangeLog:
PR target/99977
* gcc.target/arm/pr99977.c: New test.
Jakub Jelinek [Tue, 27 Apr 2021 13:46:16 +0000 (15:46 +0200)]
aarch64: Fix UB in the compiler [PR100200]
The following patch fixes UBs in the compiler when negating
a CONST_INT containing HOST_WIDE_INT_MIN. I've changed the spots where
there wasn't an obvious earlier condition check or predicate that
would fail for such CONST_INTs.
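A standalone illustration of the difference (not GCC source):
  // UB when v == LLONG_MIN: signed overflow on negation.
  long long neg_signed (long long v) { return -v; }
  // Well defined for all v: unsigned negation wraps modulo 2**64,
  // mirroring the -INTVAL -> -UINTVAL change.
  unsigned long long neg_unsigned (long long v)
  { return -(unsigned long long) v; }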
2021-04-27 Jakub Jelinek <jakub@redhat.com>
PR target/100200
* config/aarch64/predicates.md (aarch64_sub_immediate,
aarch64_plus_immediate): Use -UINTVAL instead of -INTVAL.
* config/aarch64/aarch64.md (casesi, rotl<mode>3): Likewise.
* config/aarch64/aarch64.c (aarch64_print_operand,
aarch64_split_atomic_op, aarch64_expand_subvti): Likewise.
Jakub Jelinek [Tue, 27 Apr 2021 13:42:47 +0000 (15:42 +0200)]
veclower: Fix up vec_shl matching of VEC_PERM_EXPR [PR100239]
The following testcase ICEs at -O0, because lower_vec_perm sees the
_1 = { 0, 0, 0, 0, 0, 0, 0, 0 };
_2 = VEC_COND_EXPR <_1, { -1, -1, -1, -1, -1, -1, -1, -1 }, { 0, 0, 0, 0, 0, 0, 0, 0 }>;
_3 = { 6, 0, 0, 0, 0, 0, 0, 0 };
_4 = VEC_PERM_EXPR <{ 0, 0, 0, 0, 0, 0, 0, 0 }, _2, _3>;
and as the ISA is SSE2, there is no support for the particular permutation
nor for variable mask permutation. But the code to match vec_shl matches
it, because the first operand of the permutation is a zero vector and the
mask picks all elements (in arbitrary order) from that vector.
So, in the end that isn't a vec_shl, but the permutation could in theory
be optimized into the first argument. As we keep it as is, it will fail
during expansion though, because the expansion of vec_shl correctly
requires that it actually is a shift:
  unsigned firstidx = 0;
  for (unsigned int i = 0; i < nelt; i++)
    {
      if (known_eq (sel[i], nelt))
        {
          if (i == 0 || firstidx)
            return NULL_RTX;
          firstidx = i;
        }
      else if (firstidx
               ? maybe_ne (sel[i], nelt + i - firstidx)
               : maybe_ge (sel[i], nelt))
        return NULL_RTX;
    }
  if (firstidx == 0)
    return NULL_RTX;
  first = firstidx;
The if (firstidx == 0) return NULL_RTX; check is what is missing a
counterpart on the lower_vec_perm side.
Since with optimize != 0 we fold it in other spots, I think it is not
necessary to optimize this corner case in lower_vec_perm (which would
mean we would need to recurse on the newly created
_4 = { 0, 0, 0, 0, 0, 0, 0, 0 }; to see whether it is supported or not).
2021-04-27 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/100239
* tree-vect-generic.c (lower_vec_perm): Don't accept constant
permutations with all indices from the first zero element as vec_shl.
* gcc.dg/pr100239.c: New test.
Jakub Jelinek [Tue, 27 Apr 2021 13:29:26 +0000 (15:29 +0200)]
Revert "libstdc++: Add workaround for ia32 floating atomics miscompilations [PR100184]"
This reverts commit
0f4588141fcbe4e0f1fa12776b47200870f6c621.
Jakub Jelinek [Tue, 27 Apr 2021 13:26:24 +0000 (15:26 +0200)]
cfgcleanup: Fix -fcompare-debug issue in outgoing_edges_match [PR100254]
The following testcase fails with -fcompare-debug. The problem is that
outgoing_edges_match behaves differently between -g0 and -g: if
some load/store with REG_EH_REGION is followed by DEBUG_INSNs, the
REG_EH_REGION check is not done, while when there are no DEBUG_INSNs,
it is done.
We already compute last1 and last2 as BB_END (bb{1,2}) with skipped debug
insns and notes, so this patch just uses those.
2021-04-27 Jakub Jelinek <jakub@redhat.com>
PR rtl-optimization/100254
* cfgcleanup.c (outgoing_edges_match): Check REG_EH_REGION on
last1 and last2 insns rather than BB_END (bb1) and BB_END (bb2) insns.
* g++.dg/opt/pr100254.C: New test.
Richard Biener [Tue, 27 Apr 2021 12:27:40 +0000 (14:27 +0200)]
tree-optimization/99912 - schedule another TODO_remove_unused_locals
This makes sure to remove unused locals and prune CLOBBERs after
the first scalar cleanup phase after IPA optimizations. On the
testcase in the PR this results in 8000 CLOBBERs being removed, which
in turn unleashes more DSE that would otherwise hit its walking limit
of 256 too early on this testcase.
2021-04-27 Richard Biener <rguenther@suse.de>
PR tree-optimization/99912
* passes.def: Add comment about new TODO_remove_unused_locals.
* tree-stdarg.c (pass_data_stdarg): Run TODO_remove_unused_locals
at start.
Richard Biener [Wed, 7 Apr 2021 10:09:44 +0000 (12:09 +0200)]
tree-optimization/99912 - schedule DSE before SRA
For the testcase in the PR the main SRA pass is unable to do some
important scalarizations because dead stores of addresses disqualify
the candidate variables. The following patch adds
another DSE pass before SRA forming a DCE/DSE pair and moves the
DSE pass that is currently closely after SRA up to after the
next DCE pass, forming another DCE/DSE pair now residing after PRE.
2021-04-07 Richard Biener <rguenther@suse.de>
PR tree-optimization/99912
* passes.def (pass_all_optimizations): Add pass_dse before
the first pass_dce, move the first pass_dse before the
pass_dce following pass_pre.
* gcc.dg/tree-ssa/ldist-33.c: Disable PRE and LIM.
* gcc.dg/tree-ssa/pr96789.c: Adjust dump file scanned.
* gcc.dg/tree-ssa/ssa-dse-28.c: Likewise.
* gcc.dg/tree-ssa/ssa-dse-29.c: Likewise.
Jonathan Wakely [Tue, 27 Apr 2021 12:43:23 +0000 (13:43 +0100)]
libstdc++: Minor refactoring in <experimental/internet>
libstdc++-v3/ChangeLog:
* include/experimental/internet (address_v6::bytes_type): Adjust
formatting.
(basic_endpoint): Define _M_is_v6() to put all checks for
AF_INET6 in one place.
(basic_endpoint::resize): Simplify.
(operator==(const tcp&, const tcp&)): Add constexpr and noexcept.
(operator!=(const tcp&, const tcp&)): Likewise.
(operator==(const udp&, const udp&)): Likewise.
(operator!=(const udp&, const udp&)): Likewise.
* testsuite/experimental/net/internet/tcp.cc: New test.
* testsuite/experimental/net/internet/udp.cc: New test.
Jonathan Wakely [Tue, 27 Apr 2021 12:06:43 +0000 (13:06 +0100)]
libstdc++: Better preprocessor conditions in net::ip [PR 100286]
This improves the use of preprocessor conditionals to enable/disable
members of namespace net::ip according to what is supported by the
target. This fixes PR 100286 by ensuring that the to_string member
functions are always defined for the address_v4 and address_v6 classes.
On the other hand, the IP protocol classes and internet socket option
classes aren't useful at all if the corresponding constants (such as
IPPROTO_TCP or IPV6_MULTICAST_HOPS) aren't defined. So those types are
not defined at all if they can't be used.
The net/internet/socket/opt.cc test uses __has_include to check whether
or not to expect the types to be available.
libstdc++-v3/ChangeLog:
PR libstdc++/100286
* include/experimental/internet (resolver_errc, resolver_category())
(make_error_code, make_error_condition): Define unconditionally,
only make enumerators and use of gai_strerror depend on the
availability of <netdb.h>.
(address_v4::to_string): Use correct constant for string length.
(address_v4::to_string, address_v6::to_string): Define
unconditionally, throw if unsupported.
(make_address_v4, make_address_v6): Define unconditionally.
Return an error if unsupported.
(tcp, udp, v6_only, unicast::hops, multicast::*): Define
conditionally.
* testsuite/experimental/net/internet/socket/opt.cc: Check for
<netinet/in.h> and <netinet/tcp.h> before using types from
namespace net::ip.
Jonathan Wakely [Tue, 27 Apr 2021 10:07:47 +0000 (11:07 +0100)]
libstdc++: Define net::socket_base::message_flags operators as friends [PR 100285]
The overloaded operators for socket_base::message_flags should only be
defined when the message_flags type itself is defined. Rather than
duplicate the preprocessor conditional, this moves the operators into
the same scope as the type, defining them as hidden friends.
As well as fixing the bug, this has all the usual advantages of hidden
friends (they are not visible to normal name lookup for unrelated
types).
For consistency, do the same for the resolver_base::flags bitmask
operators too.
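A minimal sketch of the hidden-friend pattern (simplified names, not
the actual libstdc++ declarations):
  class socket_base
  {
  public:
    enum message_flags : int { message_peek = 1, message_out_of_band = 2 };
    // Hidden friend: found by ADL because message_flags is a member of
    // socket_base, but invisible to ordinary lookup for unrelated types.
    friend constexpr message_flags
    operator| (message_flags a, message_flags b) noexcept
    { return message_flags (int (a) | int (b)); }
  };
  // ADL finds the operator through the enum's enclosing class:
  constexpr auto f = socket_base::message_peek | socket_base::message_out_of_band;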
libstdc++-v3/ChangeLog:
PR libstdc++/100285
* include/experimental/internet (resolver_base::flags):
Define overloaded operators as hidden friends.
* include/experimental/socket (socket_base::message_flags):
Likewise.
Jakub Jelinek [Tue, 27 Apr 2021 12:47:54 +0000 (14:47 +0200)]
match.pd: Add some __builtin_ctz (x) cmp cst simplifications [PR95527]
This patch adds some ctz simplifications (e.g. ctz (x) >= 3 can be done by
testing if the low 3 bits are zero, etc.).
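For illustration (standalone, not the match.pd code), the new
simplification amounts to:
  // For nonzero x (ctz is undefined at zero anyway), these are equivalent:
  bool before (unsigned x) { return __builtin_ctz (x) >= 3; }
  bool after (unsigned x) { return (x & 7u) == 0; }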
In addition, I've noticed that in the CLZ case, the
#ifdef CLZ_DEFINED_VALUE_AT_ZERO checks don't really work as intended:
they are evaluated during genmatch, when the macro is not defined
(but, because of the missing tm.h includes, it isn't defined in
gimple-match.c or generic-match.c either). And when tm.h is included,
defaults.h is included, which defines a fallback version of that macro.
For GCC 12, I wonder if it wouldn't be better to say that, in addition
to __builtin_c[lt]z* always being UB at zero, the .C[LT]Z ifn is also
undefined at zero when it has just one operand, and to use a second
operand for the constant we expect at zero.
2021-04-27 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/95527
* generic-match-head.c: Include tm.h.
* gimple-match-head.c: Include tm.h.
* match.pd (CLZ == INTEGER_CST): Don't use
#ifdef CLZ_DEFINED_VALUE_AT_ZERO, only test CLZ_DEFINED_VALUE_AT_ZERO
if clz == CFN_CLZ. Add missing val declaration.
(CTZ cmp CST): New simplifications.
* gcc.dg/tree-ssa/pr95527-2.c: New test.
Jakub Jelinek [Tue, 27 Apr 2021 12:45:45 +0000 (14:45 +0200)]
expand: Expand x / y * y as x - x % y if the latter is cheaper [PR96696]
The following patch tests both x / y * y and x - x % y expansion for the
former GIMPLE code and chooses the cheaper of those sequences.
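For illustration (standalone, not the expr.c code), the two equivalent
shapes are:
  // These compute the same value whenever x / y is defined; expand now
  // picks whichever instruction sequence is cheaper on the target.
  int via_div (int x, int y) { return x / y * y; }
  int via_mod (int x, int y) { return x - x % y; }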
2021-04-27 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/96696
* expr.c (expand_expr_divmod): New function.
(expand_expr_real_2) <case TRUNC_DIV_EXPR>: Use it for truncations and
divisions. Formatting fixes.
<case MULT_EXPR>: Optimize x / y * y as x - x % y if the latter is
cheaper.
* gcc.target/i386/pr96696.c: New test.
Martin Jambor [Tue, 27 Apr 2021 11:46:10 +0000 (13:46 +0200)]
ipa-sra: Release dead LHS SSA_NAME when removing it (PR 99951)
When IPA-SRA removes an SSA_NAME from the LHS of a call statement
because it is not necessary, it does not release it. This patch fixes
that.
gcc/ChangeLog:
2021-04-08 Martin Jambor <mjambor@suse.cz>
PR ipa/99951
* ipa-param-manipulation.c (ipa_param_adjustments::modify_call):
If removing a call statement LHS SSA name, release it.
Richard Earnshaw [Tue, 27 Apr 2021 11:25:30 +0000 (12:25 +0100)]
arm: fix UB when compiling thumb2 with PIC [PR100236]
arm_compute_save_core_reg_mask contains UB in that the saved PIC
register number is used to create a bit mask. However, for some target
options this register is undefined and we end up with a shift of ~0.
On native compilations this is benign since the shift will still be
large enough to move the bit outside of the range of the mask, but if
cross compiling from a system that truncates out-of-range shifts to
zero (or worse, raises a trap for such values) we'll get potentially
wrong code (or a fault).
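A standalone illustration of the hazard (not the arm.c code):
  // If regno is ~0U, the shift count is out of range and the behavior
  // is undefined: different hosts may wrap the count, produce zero, or
  // trap, so the computed mask can differ between native and cross
  // builds.
  unsigned long reg_bit (unsigned regno) { return 1UL << regno; }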
gcc:
PR target/100236
* config/arm/arm.c (THUMB2_WORK_REGS): Check PIC_OFFSET_TABLE_REGNUM
is valid before including it in the mask.
Richard Sandiford [Tue, 27 Apr 2021 11:18:03 +0000 (12:18 +0100)]
aarch64: Handle SVE attributes in comp_type_attributes [PR100270]
Even though "SVE type" and "SVE sizeless type" are marked as
affecting type identity, the middle end doesn't truly believe
it unless we also handle them in comp_type_attributes.
gcc/
PR target/100270
* config/aarch64/aarch64.c (aarch64_comp_type_attributes): Handle
SVE attributes.
gcc/testsuite/
PR target/100270
* gcc.target/aarch64/sve/acle/general-c/pr100270_1.c: New test.
* gcc.target/aarch64/sve/acle/general-c/sizeless-2.c: Change
expected error message when subtracting pointers to different
vector types. Expect warnings when mixing them elsewhere.
* gcc.target/aarch64/sve/acle/general/attributes_7.c: Remove
XFAILs. Tweak error messages for some cases.
Richard Sandiford [Tue, 27 Apr 2021 11:18:02 +0000 (12:18 +0100)]
aarch64: Add +nosve to two tests
Adding +nosve is more robust than checking for command-line arguments,
since SVE can be enabled by default or indirectly via other options.
gcc/testsuite/
* gcc.target/aarch64/simd/ssra.c: Use +nosve.
* gcc.target/aarch64/simd/usra.c: Likewise.
Richard Biener [Tue, 13 Apr 2021 08:12:03 +0000 (10:12 +0200)]
tree-optimization/100051 - disambiguate access size vs decl
This adds disambiguation of the access size vs. the decl size
in the pointer based vs. decl based disambiguator. We have
a TBAA based check like this already but that's fended off when
seeing alias-sets of zero or when -fno-strict-aliasing is in
effect. Also the perceived dynamic type could be smaller than
the actual access.
2021-04-13 Richard Biener <rguenther@suse.de>
PR tree-optimization/100051
* tree-ssa-alias.c (indirect_ref_may_alias_decl_p): Add
disambiguator based on access size vs. decl size.
* gcc.dg/tree-ssa/ssa-fre-92.c: New testcase.
Richard Biener [Tue, 27 Apr 2021 09:02:03 +0000 (11:02 +0200)]
testsuite/100272 - undo PRE disabling for gcc.dg/tree-ssa/predcom-1.c
This re-enables PRE and fixes the malformed dg directive pointed
out in the PR. It all works as desired and I forgot why I
disabled this in the past.
2021-04-27 Richard Biener <rguenther@suse.de>
PR testsuite/100272
* gcc.dg/tree-ssa/predcom-1.c: Re-enable PRE and fix
malformed dg directive.
Jakub Jelinek [Tue, 27 Apr 2021 09:01:25 +0000 (11:01 +0200)]
Update gennews for GCC 10 and GCC 11.
2021-04-27 Jakub Jelinek <jakub@redhat.com>
* gennews (files): Add files for GCC 10 and GCC 11.
Richard Biener [Tue, 27 Apr 2021 08:45:32 +0000 (10:45 +0200)]
testsuite/100272 - fix some malformed dg directives
The bug points out several malformed dg directives, the following
fixes the obvious ones where the testcases keep working after the
change.
2021-04-27 Richard Biener <rguenther@suse.de>
PR testsuite/100272
* g++.dg/diagnostic/ptrtomem1.C: Fix dg directives.
* g++.dg/ipa/pr45572-2.C: Likewise.
* g++.dg/template/spec26.C: Likewise.
* gcc.dg/pr20126.c: Likewise.
* gcc.dg/tree-ssa/pr20739.c: Likewise.
Richard Biener [Tue, 27 Apr 2021 07:41:38 +0000 (09:41 +0200)]
tree-optimization/100278 - handle mismatched code in TBAA adjust of PRE
PRE has code to adjust TBAA behavior for refs that expects the base
operation code to match. The testcase shows a case where we have
a VAR_DECL vs. a MEM_REF so add code to give up in such cases.
2021-04-27 Richard Biener <rguenther@suse.de>
PR tree-optimization/100278
* tree-ssa-pre.c (compute_avail): Give up when we cannot
adjust TBAA because of mismatching bases.
* gcc.dg/tree-ssa/pr100278.c: New testcase.