[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-03-05 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #42 from Vincent Lefèvre  ---
(In reply to Alexander Cherepanov from comment #40)
> Sure, one possibility is to make undefined any program that uses f(x) where x
> could be a zero and f(x) differs for the two zeros. But this approach makes
> printf and memory accesses undefined too. Sorry, I don't see how you could
> undefine division by zero while not undefining printing of zero.

printf and memory accesses can already yield different results on the same value
(for printf on NaN, a sign can be output or not, and the sign bit of NaN is
generally unspecified). Moreover, it would not be correct to make printf and
memory accesses undefined on zero, because the behavior is defined by the C
standard, and more than that, very useful, while the floating-point division by
0 is undefined behavior (and the definition in Annex F makes sense only if one
has signed zeros, where we care about the sign -- see more about that below).

> Another approach is to say that we don't care which of the two possible values
> f(x) returns when x is zero. That is, we don't care whether 1/0. is +inf or
> -inf and we don't care whether printf("%g", 0.) outputs 0 or -0.

But that would disable all the related optimizations. I don't think this would
make a noticeable difference for printf in practice (in most cases), but this
can be more problematic for division.

Otherwise it should be said that -fno-signed-zeros also implies that infinity
gets an arbitrary sign that can change at any time. But I think that in such a
case +inf and -inf should compare as equal (+ some other rules), and this would
also be bad for optimization.
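As an illustration (a minimal sketch, not from the original discussion), both
behaviors discussed above can be observed from the same zero:

#include <stdio.h>

int main (void)
{
  volatile double z = -0.0;
  printf ("%g\n", z);      /* under -fno-signed-zeros, "0" or "-0" */
  printf ("%g\n", 1 / z);  /* normally "-inf"; with -fno-signed-zeros,
                              the sign of the infinity is unreliable */
  return 0;
}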

> > > This means that you cannot implement your own printf: if you analyze the
> > > sign bit of your value to decide whether you need to print '-', the sign
> > > of zero is significant in your code.
> > 
> > If you want to implement a printf that takes care of the sign of 0, you must
> > not use -fno-signed-zeros.
> 
> So if I use ordinary printf from a libc with -fno-signed-zeros it's fine but
> if I copy its implementation into my own program it's not fine?

If you use -fno-signed-zeros, you cannot assume that you will get consistent
output. But perhaps the call to printf should be changed, in such a mode, so
that 0 is always regarded as having a positive sign (GCC knows the types of the
arguments, thus could wrap printf, and I doubt that this would introduce much
overhead).
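A sketch of such a wrapping (hypothetical, just to illustrate the idea; GCC
does nothing like this today): the sign of zero is canonicalized before the
value reaches printf.

#include <stdio.h>

/* force a positive zero; for nonzero x this is the identity
   (this helper itself must not be compiled with -fno-signed-zeros,
   otherwise the test below may be folded away) */
static double canon_zero (double x)
{
  return x == 0 ? 0.0 : x;
}

int main (void)
{
  double x = -0.0;
  printf ("%g\n", canon_zero (x));  /* always outputs "0" */
  return 0;
}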

> > > IOW why do you think that printf is fine while "1 / x == 1 / 0." is not?
> > 
> > printf is not supposed to trigger undefined behavior. Part of its output is
> > unspecified, but that's all.
> 
> Why couldn't the same be said about division? Division by zero is not
> supposed to trigger undefined behavior. Part of its result (the sign of
> infinity) is unspecified, but that's all.

See above.

> Right. But it's well known that x == y doesn't imply that x and y have the
> same value. And the only such case is zeros of different signs (right?).

On numeric types, I think so.

> So compilers deal with this case in a special way.

Only for optimization (the compiler does not have to deal with what the
processor does).

> (E.g., the optimization `if (x == C) use(x)` -> `if (x == C) use(C)` is
> normally done only for non-zero FP constant `C`. -fno-signed-zeros changes
> this.)

Yes.

> The idea that one value could have different representations is not widely
> distributed.

s/is/was/ (see below with decimal).

And what about the padding bytes in structures for alignment? Could there be
issues?
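For instance (a small sketch, not from the original discussion), two struct
objects can hold equal values while their object representations differ in the
padding bytes:

#include <stdio.h>
#include <string.h>

struct s { char c; int i; };  /* typically 3 padding bytes after c */

int main (void)
{
  struct s a, b;
  memset (&a, 0xff, sizeof a);  /* fill the future padding of a */
  memset (&b, 0x00, sizeof b);
  a.c = 0; a.i = 0;
  b.c = 0; b.i = 0;
  /* a and b now hold equal values, but memcmp also sees the padding */
  printf ("%d\n", memcmp (&a, &b, sizeof a) == 0);  /* typically 0 */
  return 0;
}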

> I didn't manage to construct a testcase for this yesterday but
> I succeeded today -- see pr94035 (affects clang too).

I'm not sure that pseudo-denormal values of x86 long double are regarded as
valid values by GCC (note that they are specified neither by IEEE 754 nor by
Annex F). They could be regarded as trap representations, as defined in 3.19.4:
"an object representation that need not represent a value of the object type".
Reading such a representation yields undefined behavior (6.2.6.1p5), in which
case PR94035 would not be a bug.

> > Note: There's also the case of IEEE 754 decimal floating-point formats (such
> > as _Decimal64), for instance, due to the "cohorts", where two identical
> > values can have different memory representations. Is GCC always correct 
> > here?
> 
> I have used pseudo-denormals in long double (x86_fp80) for this so far. Are
> decimal floating-point formats more interesting?

Yes, because contrary to pseudo-denormals in long double, the different
representations of decimal values are fully specified, have their own use (and
can easily be generated with usual operations, e.g. if you have a cancellation
in a subtraction), and cannot be trap representations in C. FYI, in IEEE
754-2019:

  cohort: The set of all floating-point representations that represent a
  given floating-point number in a given floating-point format. In this
  context −0 and +0 are considered distinct and are in different cohorts.
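For example (a sketch that needs a target with decimal floating-point support),
a cancellation produces a cohort member with a different quantum exponent,
hence a different encoding, for the same value:

#include <stdio.h>
#include <string.h>

int main (void)
{
  _Decimal64 a = 1.0DD;            /* coefficient 10, quantum exponent -1 */
  _Decimal64 b = 3.00DD - 2.00DD;  /* value 1.00: coefficient 100, exponent -2 */
  printf ("%d\n", a == b);                          /* 1: equal values */
  printf ("%d\n", memcmp (&a, &b, sizeof a) == 0);  /* 0: different encodings */
  return 0;
}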

[Bug other/94073] New: ibm-ldouble-format: the given maximum value of the IBM long double format is incorrect

2020-03-06 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94073

Bug ID: 94073
   Summary: ibm-ldouble-format: the given maximum value of the IBM
long double format is incorrect
   Product: gcc
   Version: 10.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: other
  Assignee: unassigned at gcc dot gnu.org
  Reporter: vincent-gcc at vinc17 dot net
  Target Milestone: ---

The IBM long double format (double-double) is specified in
libgcc/config/rs6000/ibm-ldouble-format, which says:

|   Each long double is made up of two IEEE doubles.  The value of the
| long double is the sum of the values of the two parts (except for
| -0.0).  The most significant part is required to be the value of the
| long double rounded to the nearest double, as specified by IEEE.  For
| Inf values, the least significant part is required to be one of +0.0
| or -0.0.  No other requirements are made; so, for example, 1.0 may be
| represented as (1.0, +0.0) or (1.0, -0.0), and the low part of a NaN
| is don't-care.

Thus the maximum value that can be represented is x_h + x_l with
  x_h = 2^1024 - 2^971  (= DBL_MAX)
  x_l = 2^970 - 2^917   (the maximum value less than 1/2 ulp(x_h))

The binary representation of the significand is 111...1110111...111, with 53
bits 1 in both parts.

If x_l >= 1/2 ulp(x_h) = 2^970, then the value of the long double x_h + x_l
would round as a double to infinity, instead of x_h as required above.

This contradicts Section "Limits", which says:

  The maximum representable long double is 2^1024-2^918.

Indeed, this value is too large to satisfy the above constraints.
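This can be checked with plain double arithmetic (a small sketch, assuming
round-to-nearest): with the maximal low part 2^970 - 2^917, the high part is
still the correct rounding of the sum, while a low part of exactly 2^970 makes
the sum round to infinity.

#include <stdio.h>

int main (void)
{
  double xh = 0x1.fffffffffffffp+1023;     /* DBL_MAX = 2^1024 - 2^971 */
  double lo_max = 0x1.fffffffffffffp+969;  /* 2^970 - 2^917 < 1/2 ulp(xh) */
  double lo_bad = 0x1p+970;                /* exactly 1/2 ulp(xh) */
  printf ("%d\n", xh + lo_max == xh);  /* 1: xh is the rounding of the sum */
  printf ("%d\n", xh + lo_bad == xh);  /* 0: the sum rounds to +inf */
  return 0;
}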

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-03-10 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #44 from Vincent Lefèvre  ---
(In reply to Alexander Cherepanov from comment #43)
> GCC on x86-64 uses the binary encoding for the significand.

In general, yes. This includes the 32-bit ABI under Linux. But it seems to be
different under MS-Windows, at least with MinGW using the 32-bit ABI: according
to my tests of MPFR,

MPFR config.status 4.1.0-dev
configured by ./configure, generated by GNU Autoconf 2.69,
  with options "'--host=i686-w64-mingw32' '--disable-shared'
'--with-gmp=/usr/local/gmp-6.1.2-mingw32' '--enable-assert=full'
'--enable-thread-safe' 'host_alias=i686-w64-mingw32'"
[...]
CC='i686-w64-mingw32-gcc'
[...]
[tversion] Compiler: GCC 8.3-win32 20191201
[...]
[tversion] TLS = yes, float128 = yes, decimal = yes (DPD), GMP internals = no

i.e. GCC uses DPD instead of the usual BID.

> So the first question: does any platform (that gcc supports) use the decimal
> encoding for the significand (aka densely packed decimal encoding)?

DPD is also used on PowerPC (at least the 64-bit ABI), as these processors now
have hardware decimal support.

> Then, the rules about (non)propagation of some encodings blur the boundary
> between values and representations in C. In particular this means that
> different encodings are _not_ equivalent. Take for example the optimization
> `x == C ? C + 0 : x` -> `x` for a constant C that is the unique member of
> its cohort and that has non-canonical encodings (C is an infinity according
> to the above analysis). Not sure about encoding of literals but the result
> of addition `C + 0` is required to have canonical encoding. If `x` has
> non-canonical encoding then the optimization is invalid.

In C, it is valid to choose any possible encoding. Concerning the IEEE 754
conformance, this depends on the bindings. But IEEE 754 does not define the
ternary operator. It depends on whether C considers encodings before or
possibly after optimizations (in the C specification, this does not matter, but
when IEEE 754 is taken into account, there may be more restrictions).

> While at it, convertFormat is required to return canonical encodings, so
> after `_Decimal32 x = ..., y = (_Decimal32)(_Decimal64)x;` `y` has to have
> canonical encoding? But these casts are nop in gcc now.

A question is whether casts are regarded as explicit convertFormat operations
or whether simplification is allowed since it does not affect the value (in
which case the canonicalize() function would be needed here). And in any case,
when FP contraction is enabled, I suppose that (_Decimal32)(_Decimal64)x can be
regarded as x.
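The case in question, as a sketch: whether the casts below must act as
convertFormat operations (and thus canonicalize the encoding of the result) or
may be removed as value-preserving is exactly the open question.

_Decimal32 roundtrip (_Decimal32 x)
{
  /* IEEE 754 convertFormat returns a canonical encoding; GCC currently
     compiles this to a plain copy, so a non-canonical encoding of x
     would be propagated to the result */
  return (_Decimal32)(_Decimal64)x;
}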

[Bug sanitizer/85777] [8/9/10 Regression] -fsanitize=undefined makes a -Wmaybe-uninitialized warning disappear

2020-03-11 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=85777

--- Comment #14 from Vincent Lefèvre  ---
(In reply to Vincent Lefèvre from comment #1)
> I've cleaned up the testcase:
> 
> int d;
> int h(void);
> void e(void)
> {
>   int f[2];
>   int g = 0;
>   if (d)
> g++;
>   if (d == 1)
> f[g++] = 2;
>   (void) (f[0] || (g && h()));
> }
[...]
> but
> 
> cventin% gcc-snapshot -Werror=uninitialized -Werror=maybe-uninitialized -O2
> -c file.c -fsanitize=undefined
> cventin%

I now get a warning/error as expected:

file.c: In function ‘e’:
file.c:11:12: error: ‘f[0]’ may be used uninitialized in this function
[-Werror=maybe-uninitialized]
   11 |   (void) (f[0] || (g && h()));
  |   ~^~~
cc1: some warnings being treated as errors

with gcc-10 (Debian 10-20200304-1) 10.0.1 20200304 (experimental) [master
revision 0b0908c1f27:cb0a7e0ca53:94f7d7ec6ebef49a50da777fd71db3d03ee03aa0].

But here's a new testcase:

int foo1 (void);
int foo2 (int);

int bar (void)
{
  int i;
  auto void cf (int *t) { foo2 (i); }
  int t __attribute__ ((cleanup (cf)));

  t = 0;

  if (foo1 ())
i = foo1 ();

  i = ! foo1 () || i;
  foo2 (i);

  return 0;
}

What's strange is that if I change the line

  i = ! foo1 () || i;

to

  i = foo1 () || i;

(i.e. if I just remove the "!", though this shouldn't change anything since GCC
does not have any knowledge of what foo1 returns), I get an error as expected:

uninit-test.c: In function ‘bar’:
uninit-test.c:15:15: error: ‘FRAME.1.i’ may be used uninitialized in this
function [-Werror=maybe-uninitialized]
   15 |   i = foo1 () || i;
  |   ^~~~
cc1: some warnings being treated as errors

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-03-11 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #46 from Vincent Lefèvre  ---
(In reply to Alexander Cherepanov from comment #45)
> (In reply to Vincent Lefèvre from comment #44)
> > (In reply to Alexander Cherepanov from comment #43)
> > > GCC on x86-64 uses the binary encoding for the significand.
> > 
> > In general, yes. This includes the 32-bit ABI under Linux. But it seems to
> > be different under MS-Windows, at least with MinGW using the 32-bit ABI:
> > according to my tests of MPFR,
[...]
> > i.e. GCC uses DPD instead of the usual BID.
> 
> Strange, I tried mingw from stable Debian on x86-64 and see it behaving the
> same way as the native gcc:

Sorry, I've looked at the code (the detection is done by the configure script),
and in case of cross-compiling, MPFR just assumes that this is DPD. I confirm
that this is actually BID. So MPFR's guess is wrong. It appears that this does
not currently have any consequence on the MPFR build and the tests, so that the
issue remained unnoticed.

> > In C, it is valid to choose any possible encoding. Concerning the IEEE 754
> > conformance, this depends on the bindings. But IEEE 754 does not define the
> > ternary operator. It depends whether C considers encodings before or
> > possibly after optimizations (in the C specification, this does not matter,
> > but when IEEE 754 is taken into account, there may be more restrictions).
> 
> The ternary operator is not important, let's replace it with `if`:
> 
> --
> #include <math.h>
> 
> _Decimal32 f(_Decimal32 x)
> {
> _Decimal32 inf = (_Decimal32)INFINITY + 0;
> 
> if (x == inf)
> return inf;
> else
> return x;
> }
> --
> 
> This is optimized into just `return x;`.

I'm still wondering whether the optimization is valid, since this affects only
the encoding, not the value, and in general, C allows the encoding to change
when accessing an object. I don't know whether the C2x draft says something
special about the decimal formats.

> N2478, a recent draft of C2x, lists bindings in F.3 and "convertFormat -
> different formats" corresponds to "cast and implicit conversions". Is this
> enough?

But ditto: is an optimization that just modifies the encoding allowed?

[Bug c/94337] New: Incorrect "dereferencing type-punned pointer will break strict-aliasing rules" warning

2020-03-25 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94337

Bug ID: 94337
   Summary: Incorrect "dereferencing type-punned pointer will
break strict-aliasing rules" warning
   Product: gcc
   Version: 10.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
  Assignee: unassigned at gcc dot gnu.org
  Reporter: vincent-gcc at vinc17 dot net
  Target Milestone: ---

Consider the following example.

#include <stdio.h>

struct s1
{
  int a;
};

struct s2
{
  int a, b;
};

int main (void)
{
  union {
struct s1 m1[1];
struct s2 m2[1];
  } u;

  (u.m2)->b = 17;
  printf ("%d\n", ((struct s2 *) (struct s1 *) u.m2)->b);
  printf ("%d\n", ((struct s2 *) u.m1)->b);
  return 0;
}

zira:~> gcc-10 tst.c -o tst -O2 -Wstrict-aliasing
tst.c: In function ‘main’:
tst.c:22:20: warning: dereferencing type-punned pointer will break
strict-aliasing rules [-Wstrict-aliasing]
   22 |   printf ("%d\n", ((struct s2 *) u.m1)->b);
  |   ~^~~

But there is no type-punning here. All accesses are done via struct s2.
Everything else is pointer conversions, which are not related to the aliasing
rules.

[Bug c/94337] Incorrect "dereferencing type-punned pointer will break strict-aliasing rules" warning

2020-03-26 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94337

--- Comment #2 from Vincent Lefèvre  ---
Why not have a level with no false positives? This would avoid having to
disable the warning globally.

IMHO, using it when a union is involved is likely to generate false positives.

[Bug middle-end/91858] [9/10 Regression] Compile time hog w/ complex float trigonometric functions

2020-04-01 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91858

--- Comment #5 from Vincent Lefèvre  ---
I can reproduce the issue on my Debian machine. Do you know which values are
passed to MPC?

I've tried with

  mpc_init2 (x, 24);
  mpc_init2 (y, 24);
  mpc_set_ui_ui (x, 1, 1, MPC_RNDNN);
  mpc_tan (y, x, MPC_RNDNN);

including with a reduced exponent range, but this is very fast. And according
to ldd, the same versions of the GMP/MPFR/MPC libraries are used here with both
GCC and my MPC test.

[Bug middle-end/91858] [9/10 Regression] Compile time hog w/ complex float trigonometric functions

2020-04-01 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91858

--- Comment #7 from Vincent Lefèvre  ---
(In reply to Richard Biener from comment #6)
> I guess there's only one limb, the rest looks garbage.

Yes, and 1125899906842624 with _mpfr_exp = 14 corresponds to 1 as
expected (1125899906842624 = 1*2^50).

And even when reusing the input

  mpc_init2 (x, 24);
  mpc_set_ui_ui (x, 1, 1, MPC_RNDNN);
  mpc_tan (x, x, 0);

the program terminates immediately.

I'm going to look at this more closely with gdb. But I confirm I can see
mpc_tan in the backtrace.

[Bug middle-end/91858] [9/10 Regression] Compile time hog w/ complex float trigonometric functions

2020-04-01 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91858

--- Comment #8 from Vincent Lefèvre  ---
The exponent range is important. With ltrace, I can see:

3472505 mpfr_set_emin(0x7f22, 0xbf92, 0xbf92, 46) = 0
3472505 mpfr_set_emax(0x8002, 0xbf92, 0x7fb7c2c9c420, 46) = 0

(BTW, this ltrace output is buggy: mpfr_set_emin and mpfr_set_emax take only
one argument.)

And with these corresponding particular values

  mpfr_set_emin (-32990);
  mpfr_set_emax (32770);

my MPC test loops.
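Putting the fragments above together, a self-contained version of the looping
test might look as follows (a sketch; link with -lmpc -lmpfr -lgmp):

#include <mpfr.h>
#include <mpc.h>

int main (void)
{
  mpc_t x;

  mpfr_set_emin (-32990);
  mpfr_set_emax (32770);
  mpc_init2 (x, 24);
  mpc_set_ui_ui (x, 1, 1, MPC_RNDNN);
  mpc_tan (x, x, MPC_RNDNN);  /* loops with this exponent range */
  mpc_clear (x);
  return 0;
}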

[Bug middle-end/91858] [9/10 Regression] Compile time hog w/ complex float trigonometric functions

2020-04-01 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91858

--- Comment #10 from Vincent Lefèvre  ---
(In reply to rguent...@suse.de from comment #9)
> It's likely by us doing
> 
> mpfr_set_emin (-32990);
> mpfr_set_emax (32766);
> 
> during startup to work around a similar bug in MPC (IIRC it also
> was tan ...).

I suspect an internal overflow in the MPC computation, in which case doubling
the precision at each iteration won't solve the issue. Setting emax to 14425 or
below avoids the loop, but the real part of the result is slightly incorrect (0
instead of non-zero), though not affecting the binary32 exponent range, as this
value is tiny:

  1.00110010e-28854

Changing the exponent range will solve the issue for some values, but I fear
that whatever the exponent range is chosen, there may remain values that will
trigger the bug.

Paul Zimmermann says that this bug is fixed in the MPC development version.

[Bug middle-end/91858] [9/10 Regression] Compile time hog w/ complex float trigonometric functions

2020-04-01 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91858

--- Comment #11 from Vincent Lefèvre  ---
(In reply to Vincent Lefèvre from comment #10)
> Paul Zimmermann says that this bug is fixed in the MPC development version.

I could check that the bug is actually fixed, but this does not solve the GCC
issue, as the time complexity is too large. With 1000 instead of 1,
you'll still notice that it takes too much time.

[Bug middle-end/91858] [9/10 Regression] Compile time hog w/ complex float trigonometric functions

2020-04-01 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91858

--- Comment #12 from Vincent Lefèvre  ---
(In reply to Vincent Lefèvre from comment #11)
> (In reply to Vincent Lefèvre from comment #10)
> > Paul Zimmermann says that this bug is fixed in the MPC development version.
> 
> I could check that the bug is actually fixed, but this does not solve the GCC
> issue, as the time complexity is too large. With 1000 instead of 1,
> you'll still notice that it takes too much time.

But as Paul says, keeping the exponent range reduced as above allows an
immediate answer. So the solution will be to upgrade to the next MPC version
(not released yet) and not increase the exponent range (this allows early
underflow/overflow detection, and thus may avoid costly computations in some
domains).

[Bug middle-end/34678] Optimization generates incorrect code with -frounding-math option (#pragma STDC FENV_ACCESS not implemented)

2020-04-16 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=34678

--- Comment #43 from Vincent Lefèvre  ---
Note that the effect of changing the rounding mode after a computation, whether
-frounding-math is used or not, is not just that the change of rounding mode
may not be honored. It can also yield inconsistencies in a block where the
rounding mode is not changed.

#include <stdio.h>
#include <fenv.h>
#include <stdlib.h>

#pragma STDC FENV_ACCESS ON

#define CST 0x1p-200

int main (void)
{
  volatile double a = CST;
  double b = a, c = a, d;
  printf ("%a\n", 1.0 - b);
  fesetround (FE_DOWNWARD);
  printf ("%a\n", 1.0 - c);

  if (b == c && b == CST && c == CST)
{
  printf ("%d\n", 1.0 - b == 1.0 - c);
  printf ("1: %a\n", 1.0 - b);
  printf ("2: %a\n", 1.0 - c);
  d = b == CST ? b : (abort (), 1.0);
  printf ("3: %a\n", 1.0 - d);
  d = b == CST ? b : 1.0;
  printf ("4: %a\n", 1.0 - d);
}

  return 0;
}

With -std=c17 -frounding-math -O3 -lm, I get:

0x1p+0
0x1.fffffffffffffp-1
0
1: 0x1p+0
2: 0x1.fffffffffffffp-1
3: 0x1p+0
4: 0x1.fffffffffffffp-1

[Bug analyzer/94713] New: Analyzer is buggy on uninitialized pointer

2020-04-22 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94713

Bug ID: 94713
   Summary: Analyzer is buggy on uninitialized pointer
   Product: gcc
   Version: 10.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: analyzer
  Assignee: dmalcolm at gcc dot gnu.org
  Reporter: vincent-gcc at vinc17 dot net
  Target Milestone: ---

Test with: gcc-10 (Debian 10-20200418-1) 10.0.1 20200418 (experimental) [master
revision 27c171775ab:4c277008be0:c5bac7d127f288fd2f8a1f15c3f30da5903141c6]

Consider:

void f1 (int *);
void f2 (int);

int foo (void)
{
  int *p;

  f1 (p);
  f2 (p[0]);
  return 0;
}

zira% gcc-10 -Wall tst2.c -O3 -c -fanalyzer
tst2.c: In function ‘foo’:
tst2.c:8:3: warning: ‘p’ is used uninitialized in this function
[-Wuninitialized]
8 |   f1 (p);
  |   ^~
tst2.c:9:3: warning: use of uninitialized value ‘p’ [CWE-457]
[-Wanalyzer-use-of-uninitialized-value]
9 |   f2 (p[0]);
  |   ^
  ‘foo’: event 1
|
|

-Wuninitialized works as expected, but -Wanalyzer-use-of-uninitialized-value
outputs the warning message on p[0], though the ‘p’ in the warning message is
correct.

If I comment out the "f2 (p[0]);" line, I no longer get the warning from the
analyzer, which means that it is p[0] that triggers the warning instead of
p.

[Bug analyzer/94714] New: Analyzer: no warning on access of an uninitialized variable of automatic storage duration

2020-04-22 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94714

Bug ID: 94714
   Summary: Analyzer: no warning on access of an uninitialized
variable of automatic storage duration
   Product: gcc
   Version: 10.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: analyzer
  Assignee: dmalcolm at gcc dot gnu.org
  Reporter: vincent-gcc at vinc17 dot net
  Target Milestone: ---

Test with: gcc-10 (Debian 10-20200418-1) 10.0.1 20200418 (experimental) [master
revision 27c171775ab:4c277008be0:c5bac7d127f288fd2f8a1f15c3f30da5903141c6]

Consider:

#include <stdio.h>

int main (void)
{
  int *p;
  int i;

  p = &i;
  printf ("%d\n", p[0]);

  return 0;
}

With "gcc-10 tst.c -o tst -O3 -fanalyzer", I do not get any warning from the
analyzer, though p[0] is uninitialized. Ditto without optimizations (in which
case, running the program really shows that i is uninitialized).

[Bug analyzer/94732] New: Analyzer: false positive in MPFR's atan.c

2020-04-23 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94732

Bug ID: 94732
   Summary: Analyzer: false positive in MPFR's atan.c
   Product: gcc
   Version: 10.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: analyzer
  Assignee: dmalcolm at gcc dot gnu.org
  Reporter: vincent-gcc at vinc17 dot net
  Target Milestone: ---

Created attachment 48360
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=48360&action=edit
testcase

Test with: gcc-10 (Debian 10-20200418-1) 10.0.1 20200418 (experimental) [master
revision 27c171775ab:4c277008be0:c5bac7d127f288fd2f8a1f15c3f30da5903141c6]

When I want to compile GNU MPFR with -fanalyzer, the compilation of atan.c
fails on what appears to be a false positive. I've managed to reduce the
6000-line preprocessed code to code with fewer than 300 lines (attached bug.i
file). More specifically, I've removed
  * blank lines and comments;
  * unused declarations/definitions;
  * code that could have an influence only after the "error";
  * code testing and handling special cases.

In order to see where the issue could come from, I've added 2 lines
  * "((y)->_mpfr_d)[0] = 0;" at the beginning of mpfr_atan_aux;
  * "((tmp2)->_mpfr_d)[0] = 0;" just before the call to mpfr_atan_aux.

Without these 2 lines, "gcc-10 -c -fanalyzer bug.i" gives:

bug.i: In function ‘set_table’:
bug.i:145:9: warning: use of uninitialized value ‘yp’ [CWE-457]
[-Wanalyzer-use-of-uninitialized-value]
  145 |   yp[0] &= ~(((void) 0), sh ==
  | ^~

where yp is set with

  mp_limb_t *yp = ((y)->_mpfr_d);

So I suppose that the analyzer complains that (y)->_mpfr_d is uninitialized.
This comes from mpfr_atan_aux, and "((y)->_mpfr_d)[0] = 0;" at the beginning of
this function should trigger the same error. If I add this line, I get in a
consistent way:

bug.i: In function ‘mpfr_atan_aux’:
bug.i:154:19: warning: use of uninitialized value ‘’ [CWE-457]
[-Wanalyzer-use-of-uninitialized-value]
  154 | ((y)->_mpfr_d)[0] = 0;
  | ~~^~~

This mpfr_atan_aux function is called at only one place:

  mpfr_atan_aux (tmp2, ukz, twopoweri, n0 - i, tabz);

So I added "((tmp2)->_mpfr_d)[0] = 0;" just before this call. I thought that I
would get an error on this, but I still get an error only on "((y)->_mpfr_d)[0]
= 0;" in mpfr_atan_aux. If I remove this line (just keeping the one before the
call to mpfr_atan_aux), I get the error in set_table only, just like in the
first test.

Now, this appears to be a false positive, since (tmp2)->_mpfr_d was initialized
earlier.

I could probably simplify the code even further, focusing on (tmp2)->_mpfr_d
only.

[Bug analyzer/94732] Analyzer: false positive in MPFR's atan.c

2020-04-23 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94732

--- Comment #1 from Vincent Lefèvre  ---
Here's the corresponding simple testcase:

typedef struct { int *a; } S;
int *f (void);
static void g (S *x)
{
  int *p = x->a;
  p[0] = 0;
}
void h (void)
{
  S x[1];
  x->a = f ();
  g (x);
}

$ gcc-10 -c -fanalyzer bug.i
bug.i: In function ‘g’:
bug.i:6:8: warning: use of uninitialized value ‘p’ [CWE-457]
[-Wanalyzer-use-of-uninitialized-value]
6 |   p[0] = 0;
  |   ~^~~
  ‘h’: events 1-2
|
|8 | void h (void)
|  |  ^
|  |  |
|  |  (1) entry to ‘h’
|..
|   12 |   g (x);
|  |   ~
|  |   |
|  |   (2) calling ‘g’ from ‘h’
|
+--> ‘g’: events 3-4
   |
   |3 | static void g (S *x)
   |  | ^
   |  | |
   |  | (3) entry to ‘g’
   |..
   |6 |   p[0] = 0;
   |  |      
   |  ||
   |  |(4) use of uninitialized value ‘p’ here
   |

[Bug c/94773] Unhelpful warning "right shift count >= width of type [-Wshift-count-overflow]" on unreachable code.

2020-04-26 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94773

Vincent Lefèvre  changed:

   What|Removed |Added

 CC||vincent-gcc at vinc17 dot net

--- Comment #1 from Vincent Lefèvre  ---
Dup of PR4210?

[Bug middle-end/4210] should not warning with dead code

2020-05-04 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=4210

--- Comment #33 from Vincent Lefèvre  ---
(In reply to Niels Möller from comment #32)
> 4. I also wonder what happens if, for some reason, a constant invalid shift
> count is passed through all the way to code generation? Most architectures
> would represent a constant shift count for a 32-bit value as 5-bit field in
> the opcode, and then the invalid shift counts aren't representable at all.
> Will gcc silently ignore higher bits,

That's undefined behavior, so GCC can do whatever it wants for the generated
code.

> or is it an internal compiler error,

This would not be acceptable.

[Bug analyzer/95026] New: "leak of FILE" false positive [CWE-775] [-Wanalyzer-file-leak]

2020-05-09 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95026

Bug ID: 95026
   Summary: "leak of FILE" false positive [CWE-775]
[-Wanalyzer-file-leak]
   Product: gcc
   Version: 10.1.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: analyzer
  Assignee: dmalcolm at gcc dot gnu.org
  Reporter: vincent-gcc at vinc17 dot net
  Target Milestone: ---

On the following program (obtained after simplifying Mutt's imap/message.c)

struct _IO_FILE;
typedef struct _IO_FILE FILE;
typedef struct _message
{
  FILE *fp;
} MESSAGE;
extern FILE *fopen (const char *__restrict __filename,
const char *__restrict __modes);
FILE *f (void);
int imap_fetch_message (int i, MESSAGE *msg, char *p)
{
  if ((msg->fp = i ? 0 : f ()))
return 0;
  if (p)
msg->fp = fopen (p, "r");
  return -1;
}

I get:

zira:~> gcc-10 -c -O2 -fanalyzer tst.i
In function ‘imap_fetch_message’:
tst.i:15:13: warning: leak of FILE ‘’ [CWE-775] [-Wanalyzer-file-leak]
   15 | msg->fp = fopen (p, "r");
  | ^~~~
  ‘imap_fetch_message’: events 1-6
|
|   12 |   if ((msg->fp = i ? 0 : f ()))
|  |  ^
|  |  |
|  |  (1) following ‘false’ branch...
|   13 | return 0;
|   14 |   if (p)
|  |  ~
|  |  |
|  |  (2) ...to here
|  |  (3) following ‘true’ branch (when ‘p’ is non-NULL)...
|   15 | msg->fp = fopen (p, "r");
|  | 
|  | | |
|  | | (4) ...to here
|  | | (5) opened here
|  | (6) ‘’ leaks here; was opened at (5)
|

Tested with: gcc-10 (Debian 10.1.0-1) 10.1.0

Note: if I replace the return value -1 by 0, then the warning disappears!

[Bug c/95057] New: missing -Wunused-but-set-variable warning on multiple assignments, not all of them used

2020-05-11 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95057

Bug ID: 95057
   Summary: missing -Wunused-but-set-variable warning on multiple
assignments, not all of them used
   Product: gcc
   Version: 10.1.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
  Assignee: unassigned at gcc dot gnu.org
  Reporter: vincent-gcc at vinc17 dot net
  Target Milestone: ---

With -Wunused-but-set-variable, GCC does not warn in the following cases:

int f (int i)
{
  int j;
  j = i + 1;  /* unused */
  j = i + 2;
  return j;
}

int g (int i)
{
  int j, k;
  j = i + 1;
  k = j + 1;
  j = i + 2;  /* unused */
  return k;
}

Note: it is possible that PR44677 may be regarded as a particular case of this
bug.

[Bug c/95057] missing -Wunused-but-set-variable warning on multiple assignments, not all of them used

2020-05-11 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95057

--- Comment #2 from Vincent Lefèvre  ---
(In reply to Jakub Jelinek from comment #1)
> That is something this warning can't warn about.
> The warning is a simple FE warning that uses two bits, one for whether
> certain decl was used and one whether it was read.  The warning then
> diagnoses those that have the former bit set and not the latter (of course,
> there are some exceptions etc.).

But when the variable is assigned, the "read" bit could be reset.

[Bug c/95057] missing -Wunused-but-set-variable warning on multiple assignments, not all of them used

2020-05-11 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95057

--- Comment #4 from Vincent Lefèvre  ---
OK, so the work (for this warning or a new one) should be done later, but early
enough not to be affected by optimizations.

One of the goals would be to detect a missing check of a return value of a
function call (typically an error status). For such a typical usage, the same
variable can be reused for each error status.

  int r;

  r = call1 (...);
  check (r);
  /* ... */
  r = call2 (...);
  check (r);
  /* ... */
  r = call3 (...);
  check (r);

If a check is missing, there should be a warning.

[Bug c/95699] New: __builtin_constant_p inconsistencies

2020-06-16 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95699

Bug ID: 95699
   Summary: __builtin_constant_p inconsistencies
   Product: gcc
   Version: 10.1.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
  Assignee: unassigned at gcc dot gnu.org
  Reporter: vincent-gcc at vinc17 dot net
  Target Milestone: ---

Consider the following code on x86_64 (written for a 64-bit long), compiled
with -O2.

With gcc-9 (Debian 9.3.0-13) 9.3.0, I get:
0
1
1

With gcc-10 (Debian 10.1.0-3) 10.1.0, I get:
0
1
0

I'm not sure about the exact __builtin_constant_p specification, i.e. whether
the code may be duplicated into two branches so that the argument can become a
constant for GCC (I think it should, as this may allow the selection of code
optimized for constants), and how this can be influenced by "volatile".

But the first two cases are very similar, so that I would expect the same
value, but I get 0 / 1 with both GCC 9 and GCC 10. Concerning the third case,
the constantness has been lost in GCC 10, which is unexpected if a result 1 is
allowed.

Moreover, if I remove the second condition ("&& ...") in the printf, I always
get 0 (i.e. false), which is counter-intuitive: adding a "&& ..." condition
should never change a false condition to true (as with the results 1 obtained
above).

int printf (const char *__restrict __format, ...);

#undef C
#define C 0x7fffUL

static void tst1a (void)
{
  volatile unsigned long r0 = 0;
  unsigned long r;

  r = r0;
  if (r < C)
r = C;

  printf ("%d\n", __builtin_constant_p (r) && r == C);
}

#undef C
#define C 0x8000UL

static void tst1b (void)
{
  volatile unsigned long r0 = 0;
  unsigned long r;

  r = r0;
  if (r < C)
r = C;

  printf ("%d\n", __builtin_constant_p (r) && r == C);
}

static void tst2 (void)
{
  volatile unsigned long r0 = 0;
  unsigned long r;

  r = r0;
  if (r < 0x8000)
r = 0x8000;
  r *= r;

  printf ("%d\n", __builtin_constant_p (r) && r < 1);
}

int main (void)
{
  tst1a ();
  tst1b ();
  tst2 ();
  return 0;
}

[Bug c/95699] __builtin_constant_p inconsistencies

2020-06-17 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95699

--- Comment #5 from Vincent Lefèvre  ---
(In reply to Richard Biener from comment #4)
> I'm inclined to close as WONTFIX or INVALID.  There are several other PRs
> which
> show "surprising" behavior with respect to __builtin_constant_p and jump
> threading.

But concerning tst1a and tst1b is there any reason that 0x7fffUL
and 0x8000UL are handled in a different way, while the only type
involved in these tests is unsigned long? I don't see why the value of the most
significant bit would matter here.

And isn't a missed optimization regarded as a valid bug?

[Bug c/95699] __builtin_constant_p inconsistencies

2020-06-17 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95699

--- Comment #8 from Vincent Lefèvre  ---
(In reply to Jakub Jelinek from comment #6)
> I don't see why that should be considered a bug.
> All the tests are using __builtin_constant_p in a way that it wasn't
> designed for, where it changes the behavior of the program whether it
> evaluates to 0 or 1.

Well, this was just meant to be a simplified testcase and to easily see whether
__builtin_constant_p gave true or false. But in GMP's longlong.h file (used by
GNU MPFR), there is similar code for aarch64, where the assembly code is
different whether the variable is regarded as a constant or not:

#define sub_ddmmss(sh, sl, ah, al, bh, bl) \
  do {  \
if (__builtin_constant_p (bl) && -(UDItype)(bl) < 0x1000)   \
  __asm__ ("adds\t%1, %x4, %5\n\tsbc\t%0, %x2, %x3" \
   : "=r,r" (sh), "=&r,&r" (sl) \
   : "rZ,rZ" ((UDItype)(ah)), "rZ,rZ" ((UDItype)(bh)),  \
 "r,Z"   ((UDItype)(al)), "rI,r" (-(UDItype)(bl))
__CLOBBER_CC);\
else\
  __asm__ ("subs\t%1, %x4, %5\n\tsbc\t%0, %x2, %x3" \
   : "=r,r" (sh), "=&r,&r" (sl) \
   : "rZ,rZ" ((UDItype)(ah)), "rZ,rZ" ((UDItype)(bh)),  \
 "r,Z"   ((UDItype)(al)), "rI,r"  ((UDItype)(bl))
__CLOBBER_CC);\
  } while(0);

The assembly code is actually buggy in the "if" case (this was how one could
find out that there was a difference between GCC 9, where the "if" case was
selected, and GCC 10, where the "else" case was selected), but I doubt that GCC
is aware about this bug:

  https://sympa.inria.fr/sympa/arc/mpfr/2020-06/msg00052.html
  https://sympa.inria.fr/sympa/arc/mpfr/2020-06/msg00059.html

I suppose that the general goal of this test using __builtin_constant_p is to
have faster assembly code when some value is known at compile time. So the
intent (with the fixed assembly code[*]) is to have the same behavior, but have
faster code if possible. This is what we got with GCC 9, but this is no longer
the case with GCC 10.

[*] https://gmplib.org/list-archives/gmp-bugs/2020-June/004807.html

[Bug middle-end/96044] GCC hangs in tight loop resolving __builtin_jn using MPFR

2020-07-03 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96044

--- Comment #6 from Vincent Lefèvre  ---
(In reply to Richard Biener from comment #5)
> Vincent, any guidance on that?  I guess the actual runtime implementation in
> glibc may be "fast" because it's not accurate (evaluating takes 0.00s with
> glibc 2.26 ...)

I confirm the issue. I've just added this testcase in MPFR.

[Bug middle-end/96044] GCC hangs in tight loop resolving __builtin_jn using MPFR

2020-07-06 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96044

--- Comment #10 from Vincent Lefèvre  ---
(In reply to Richard Biener from comment #8)
> The issue with timeouts (as in wall-clock) is that it makes builds
> dependent on CPU speed which is something we generally avoid.  For ISL
> computations where we employ this kind of thing there's tracking of
> "number of ops" that are performed by ISL itself plus the ability for
> the client to set an upper bound after which operations return with
> an appropriate error.  gmp/mpfr/mpc could for example track the number
> of 64bit multiplies and support limiting there - but I expect this would
> be a lot of work.  Eventually we could make GCC use mini-gmp (would
> a host libmpfr then use the built-in mini-gmp?) and patch in this kind
> of accounting there...

The result would depend on the algorithm, i.e. on the MPFR version. And for
this reason, I agree with:

> But IMHO simply classifying a value-range where we do and where we do not
> constant fold certain functions would be reasonable as well - unless C++
> at some point magically requires to constexpr evaluate jn().

[Bug target/96173] New: double to _Decimal64 or _Decimal128 conversion with BID generates 3 MB of code

2020-07-11 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96173

Bug ID: 96173
   Summary: double to _Decimal64 or _Decimal128 conversion with
BID generates 3 MB of code
   Product: gcc
   Version: 11.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: target
  Assignee: unassigned at gcc dot gnu.org
  Reporter: vincent-gcc at vinc17 dot net
  Target Milestone: ---

Consider the following code:

int main (void)
{
  volatile double x = 0.0;
  volatile _Decimal128 i = x;
  return i != 0;
}

On x86_64:

zira:~> gcc-snapshot -O2 tst.c -o tst
zira:~> ll --human-readable tst
-rwxr-xr-x 1 vinc17 vinc17 3.3M 2020-07-12 01:44:05 tst*

With _Decimal64 instead of _Decimal128, tst is a bit smaller: 2.9M

Tested with gcc (Debian 20200616-1) 11.0.0 20200616 (experimental) [master
revision beaf12b49ae:aed76232726:b70eeb248efe2b3e9bdb5e26b490e3d8aa07022d]

[Bug target/96173] double to _Decimal64 or _Decimal128 conversion with BID generates 3 MB of code

2020-07-11 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96173

--- Comment #2 from Vincent Lefèvre  ---
IMHO, the implementation is highly inefficient. Even with all these functions
(which are similar, thus should share most code), 3 MB seems a lot to me.

In particular, some user complained that the size of the GNU MPFR library
(which now uses such conversions) has been multiplied by 5. This is even worse
with the GCC 11 snapshot, using ./configure CC=gcc-snapshot CFLAGS="-O2":

 663880 with --disable-decimal-float
4836016 with --enable-decimal-float
1914376 with --enable-decimal-float and hardcoded values instead of conversions
 668240 with --enable-decimal-float and even more hardcoded values

Note that it is MPFR that does the binary-to-decimal conversion itself (MPFR
uses _Decimal128 operations just for the format conversion, to generate either
NaN/±Inf/±0 from a double or some regular value from a decimal character
sequence). If MPFR can do this conversion within its few hundreds of KB[*], I
don't see why this can't be done by GCC.

[*] This does not include the small part of GMP on which MPFR is based, but
this includes much unrelated code, for all the functions MPFR implements.

[Bug rtl-optimization/80491] [7 Regression] Compiler regression for long-add case.

2019-11-14 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80491

--- Comment #16 from Vincent Lefèvre  ---
(In reply to Richard Biener from comment #14)
> Fixed for GCC 8.

With gcc-8 (Debian 8.3.0-24) 8.3.0, it is not fixed (OK with -O1, not with
-O3). And there does not seem to be a reference to this bug in the svn log.

[Bug rtl-optimization/80491] [7 Regression] Compiler regression for long-add case.

2019-11-14 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80491

--- Comment #17 from Vincent Lefèvre  ---
(In reply to Jakub Jelinek from comment #12)
> Fixed on the trunk.

I tested with

gcc (Debian 20191008-1) 10.0.0 20191008 (experimental) [trunk revision 276697]

and the bug still occurs. Maybe another regression?

[Bug middle-end/82123] [7 regression] spurious -Wformat-overflow warning for converted vars

2019-11-14 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82123

--- Comment #16 from Vincent Lefèvre  ---
(In reply to Richard Biener from comment #15)
> Fixed in GCC 8.

I can still reproduce the bug with:
gcc-8 (Debian 8.3.0-24) 8.3.0

However, I can no longer reproduce it with:
gcc-9 (Debian 9.2.1-19) 9.2.1 20191109

(tested with the original test case from PR79257, which had been marked as a
duplicate of this bug)

[Bug c/66773] sign-compare warning for == and != are pretty useless

2019-11-15 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66773

Vincent Lefèvre  changed:

   What|Removed |Added

 CC||vincent-gcc at vinc17 dot net

--- Comment #7 from Vincent Lefèvre  ---
(In reply to Segher Boessenkool from comment #1)
> There certainly are cases where these warnings are inconvenient, but
> there also are cases where they are quite useful -- e.g.
> 
> int f(void) { return 0x == -1; }

I'd say that the warning is inconvenient here, since if the developer really
wants to do a test equivalent to the above one, there seems to be no good way
to avoid the warning with portable code.

[Bug c/66773] sign-compare warning for == and != are pretty useless

2019-11-22 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66773

--- Comment #14 from Vincent Lefèvre  ---
(In reply to Segher Boessenkool from comment #11)
> Do you have examples of perfectly fine code where you get a warning?

In addition to Daniel's example, I would say that one can have types that are
signed but the values are always nonnegative in practice (the goal of having
signed types is to be able to do signed arithmetic on them when using other
signed types). For instance, this is the case of mpfr_prec_t in GNU MPFR (this
contains a precision). Thus it is fine to compare it with a value of type
unsigned.

And adding casts to avoid the warning (in addition to being a hack) is not even
always possible, because one may not know the associated unsigned type.
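A sketch of the pattern being described (the typedef name is illustrative; in
MPFR, mpfr_prec_t is indeed a signed integer type):

typedef long my_prec_t;  /* signed, but holds nonnegative values in practice */

int same_prec (my_prec_t p, unsigned long u)
{
  return p == u;  /* -Wsign-compare warns here, although p >= 0 in practice */
}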

[Bug c/66773] sign-compare warning for == and != are pretty useless

2019-11-22 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66773

--- Comment #17 from Vincent Lefèvre  ---
(In reply to Segher Boessenkool from comment #15)
> A much better fix is
> 
> void f1(long s, unsigned long u) { unsigned long su = s; if (su == u) g(); }

But what if s is some arbitrary integer type, e.g. that comes from a library?

Even using uintmax_t may be insufficient. For instance, GCC has __uint128.

> Still much better is to not mixed signedness in types at all.

One does not necessarily have the choice. The signedness of some types is not
specified.

[Bug c/66773] sign-compare warning for == and != are pretty useless

2019-11-29 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66773

--- Comment #22 from Vincent Lefèvre  ---
(In reply to Segher Boessenkool from comment #21)
> If, as I said, the user uses explicit casts, that's not good.  Much better
> already is to use implicit casts, as I said;

There's no such thing as implicit casts. Casts are always explicit.

> and much better than that is
> to fix the problems at the root (use proper types everywhere), as I said.

This will not necessarily solve the problem, because the size and/or the
signedness of a type may be unknown (the signedness can be detected with a
macro, but there's no way to change the sign of a type).

[Bug c/92875] GCC ignores the floating-point 'f' suffix in C11 mode

2019-12-09 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92875

--- Comment #7 from Vincent Lefèvre  ---
(In reply to jos...@codesourcery.com from comment #5)
> Lack of direct float and double arithmetic requires FLT_EVAL_METHOD == 2

No, FLT_EVAL_METHOD could also be negative, in which case GCC would be allowed
to evaluate floating-point constants of type float in whatever range and
precision it wishes.

[Bug c/92875] GCC ignores the floating-point 'f' suffix in C11 mode

2019-12-09 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92875

Vincent Lefèvre  changed:

   What|Removed |Added

 CC||vincent-gcc at vinc17 dot net

--- Comment #8 from Vincent Lefèvre  ---
(In reply to and...@wahrzeichnen.de from comment #6)
>   6.6 (5) "An expression that evaluates to a constant is required in several
> contexts. If a floating expression is evaluated in the translation
> environment, the arithmetic range and precision shall be at least as great
> as if the expression were being evaluated in the execution environment.
> --footnote: The use of evaluation formats as characterized by
> FLT_EVAL_METHOD also applies to evaluation in the translation environment."

This is about constant expressions and is not applicable to constants. See
5.2.4.2.2p9 for constants.

[Bug c/93410] New: can't use _Decimal64 in C99/C11/C17 mode

2020-01-23 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93410

Bug ID: 93410
   Summary: can't use _Decimal64 in C99/C11/C17 mode
   Product: gcc
   Version: 9.2.1
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
  Assignee: unassigned at gcc dot gnu.org
  Reporter: vincent-gcc at vinc17 dot net
  Target Milestone: ---

With GCC 9.2.1 under Debian/unstable (x86_64):

cventin% cat tst.c
int main (void)
{
  _Decimal64 x = 1;
  return x != 1;
}
cventin% gcc-9 tst.c -o tst 
cventin%

but

cventin% gcc-9 -std=c99 tst.c -o tst
tst.c: In function ‘main’:
tst.c:3:3: error: unknown type name ‘_Decimal64’
3 |   _Decimal64 x = 1;
  |   ^~
cventin% gcc-9 -std=c11 tst.c -o tst
tst.c: In function ‘main’:
tst.c:3:3: error: unknown type name ‘_Decimal64’
3 |   _Decimal64 x = 1;
  |   ^~
cventin% gcc-9 -std=c17 tst.c -o tst
tst.c: In function ‘main’:
tst.c:3:3: error: unknown type name ‘_Decimal64’
3 |   _Decimal64 x = 1;
  |   ^~
cventin%

There is no such issue with the trunk (where one just gets a warning if
-pedantic is used, which is expected).

[Bug c/93410] [9 only] can't use _Decimal64 in C99/C11/C17 mode

2020-01-24 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93410

--- Comment #3 from Vincent Lefèvre  ---
(In reply to jos...@codesourcery.com from comment #2)
> But that's not the sort of change we make on past release branches.

OK, but note that the GCC manual does not mention any limitation of this kind.
This is rather confusing.

[Bug middle-end/323] optimized code gives strange floating point results

2020-02-07 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=323

--- Comment #215 from Vincent Lefèvre  ---
According to https://gcc.gnu.org/bugzilla/page.cgi?id=fields.html#bug_status
the possible status values are UNCONFIRMED, CONFIRMED and IN_PROGRESS. I think
that the correct one is CONFIRMED.

[Bug c/85957] i686: Integers appear to be different, but compare as equal

2020-02-08 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=85957

--- Comment #13 from Vincent Lefèvre  ---
(In reply to Rich Felker from comment #12)
> [...] and making the floating point results even more semantically incorrect
> (double-rounding all over the place, mismatching FLT_EVAL_METHOD==2)

No problem: FLT_EVAL_METHOD==2 means "evaluate all operations and constants to
the range and precision of the long double type", which is what really occurs.
The consequence is indeed double rounding when storing in memory, but this can
happen at *any* time even without -ffloat-store (due to spilling), because you
are never sure that registers are still available; see some reports in bug 323.

Double rounding can be a problem with some code, but this just means that the
code is not compatible with FLT_EVAL_METHOD==2. For some floating-point
algorithms, double rounding is not a problem at all, while keeping a result in
extended precision will make them fail.
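A classic double-rounding example (a sketch, not from this report): with x87
arithmetic and FLT_EVAL_METHOD == 2, the sum below is first rounded to long
double, and storing it into a double rounds a second time.

#include <stdio.h>

int main (void)
{
  volatile double a = 1.0;
  volatile double b = 0x1p-53 + 0x1p-105;  /* exactly representable */
  double r = a + b;
  /* single rounding to double:        1 + 0x1p-52
     extended precision, then double:  1 (the tie 1 + 2^-53 rounds to even) */
  printf ("%a\n", r);
  return 0;
}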

[Bug c/82318] -fexcess-precision=standard has no effect on a libm function call

2020-02-08 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82318

--- Comment #8 from Vincent Lefèvre  ---
(In reply to Rich Felker from comment #7)
> Note that such an option would be nice to have anyway, for arbitrary
> functions, since it's necessary for being able to call code that was
> compiled with -fexcess-precision=fast from code that can't accept the
> non-conforming/optimizer-unsafe behavior and safely use the return value. It
> should probably be an attribute, with a flag to set the global default. For
> example, __attribute__((__returns_excess_precision__)).

If you're talking about arbitrary functions, they may have been implemented in
languages other than C, anyway. So that's an ABI issue. If the ABI allows
excess precision, then GCC must assume that excess precision is possible.

[Bug c/85957] i686: Integers appear to be different, but compare as equal

2020-02-09 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=85957

--- Comment #15 from Vincent Lefèvre  ---
(In reply to Rich Felker from comment #14)
> It sounds like you misunderstand the standard's requirements on, and GCC's
> implementation of, FLT_EVAL_METHOD==2/excess-precision. The availability of
> registers does not in any way affect the result, because when expressions
> are evaluated with excess precision, any spills must take place in the
> format of float_t or double_t (long double) and are thereby transparent to
> the application.

The types float_t and double_t correspond to the evaluation format. Thus they
are equivalent to long double if FLT_EVAL_METHOD is 2 (see 7.12p2). And GCC
does not do spills in this format, as seen in bug 323.

> With standards-conforming behavior, the rounding of an operation and of
> storage to an object of float/double type are discrete roundings and you can
> observe and handle the intermediate value between them. With -ffloat-store,
> every operation inherently has a double-rounding attached to it. This
> behavior is non-conforming

This is conforming as there is no requirement to keep intermediate results in
excess precision and range.

[Bug preprocessor/93677] New: Create a warning for duplicate macro definition

2020-02-11 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93677

Bug ID: 93677
   Summary: Create a warning for duplicate macro definition
   Product: gcc
   Version: 10.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: preprocessor
  Assignee: unassigned at gcc dot gnu.org
  Reporter: vincent-gcc at vinc17 dot net
  Target Milestone: ---

There should be a warning for duplicate macro definition (i.e. with the same
value), such as

#define FOO 1
#define FOO 1

Such code is valid, but a duplicate macro definition may occur inadvertently in
the code, and it harms code maintenance and readability. For instance, one
definition may occur in some header file, and another one in a .c file for some
completely different meaning, while the value in both definitions is the same
only by chance. Such code introduces confusion and makes code evolution more
error prone: if for some reason one of the values needs to be changed, the
macro name needs to be changed too (this is what should have been done in the
first place, and it would have been better to detect the issue as soon as the
duplicate macro definition was added).

For function-like macros, issues may also occur when one wants to change the
implementation, keeping the same behavior: the change in one of the definitions
becomes invalid. Thus it is better to define such macros in only one place, and
the warning would detect a failure to do that.

The switch could be combined with the request in PR83773 (which is about macro
redefinition with a different value).

[Bug c/85957] i686: Integers appear to be different, but compare as equal

2020-02-12 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=85957

--- Comment #27 from Vincent Lefèvre  ---
(In reply to Rich Felker from comment #25)
> I think standards-conforming excess precision should be forced on, and added
> to C++; there are just too many dangerous ways things can break as it is
> now.

+1

People who currently build x87 software could choose between different
solutions, such as switching to SSE if possible, switching back to "fast"
excess precision if speed is really important and they know that it will work
(this requires testing). They could even try -ffast-math (some of its
optimizations are actually less dangerous than "fast" excess precision)...

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-02-19 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #4 from Vincent Lefèvre  ---
(In reply to Richard Biener from comment #3)
> with -funsafe-math-optimizations you get the 1 + x != 1 -> x != 0
> optimization which is unsafe because a rounding step is removed.
> But you asked for that.  So no "wrong-code" here, just another case
> of "instabilities" or how you call that via conditional equivalence
> propagation.

I disagree. -funsafe-math-optimizations just means that floating-point
expressions can be rearranged at the source level according to math rules
(valid in the real numbers), thus changing how rounding is done. Given that, an
integer variable will always be stable (and IMHO, this should also be the case
for a floating-point variable, to avoid consistency issues, perhaps unless you
design a specific model with multivalued FP variables that would change them to
a stable value once the variable is used to obtain a non-FP value).

This option does not mean that the optimizer may assume that math rules are
valid on floating point. Otherwise one can prove that 1 == 0 (as seen above),
and then it is valid to transform the program to

  int main () { return 0; }

Indeed, you can assume an "if (1)" around the main code, and since 1 == 0, you
can transform it to "if (0)", and since this is always false, you can remove
the code entirely. This makes no sense!
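For reference, a minimal sketch (not this PR's testcase) of the rewrite under
discussion:

#include <stdio.h>

int main (void)
{
  volatile double x0 = 0x1p-60;  /* tiny: 1 + x rounds to 1 */
  double x = x0;
  int c = 1 + x != 1;  /* 0 with correct rounding;
                          1 if rewritten as x != 0 */
  printf ("%d\n", c);
  return 0;
}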

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-02-19 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #7 from Vincent Lefèvre  ---
(In reply to rguent...@suse.de from comment #5)
> From below I implicitely assume you say that "1. + x != 1." -> "x != 0."
> isn't "rearranging at the source level".

No, it depends on how you do that. If in the source you have

  int c = 1. + x != 1.;

then you might choose to transform this to

  int c = x != 0.;

under -funsafe-math-optimizations (though this transformation is currently not
documented, see below). What the optimizer MUST NOT do is replace an
occurrence of c in the code by the expression used to compute it, as doing so
is likely to yield major inconsistencies (this might be acceptable if the
value of c is read only once, but IMHO, even this case should not be
optimized, as it would be too dangerous).

> Note our documentation
> on -funsafe-math-optimizations is quite vague and I'd argue that
> "rearranging at the source level" is covered by -fassociative-math
> instead.

BTW, strictly speaking, transforming "1. + x != 1." to "x != 0." does not just
use associativity, but also the "additive property" or "cancellation
property". In math, if "a = b", then "a + x = b + x". However, for an
arbitrary operation, the converse is not necessarily true, even when the
operation is associative: "a != b" does not necessarily imply
"a + x != b + x". Having the cancellation property under
-funsafe-math-optimizations might be OK (here, this is similar to assuming
associativity, but possibly stronger, so that a separate option for it could
be preferable).
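
A concrete counterexample in double precision (my example) showing that the
cancellation property does not hold for rounded addition:

#include <stdio.h>

int main (void)
{
  double a = 1.0, b = 1.0 + 0x1p-52, x = 0x1p60;
  /* a != b, yet a + x and b + x both round to 0x1p60 */
  printf ("%d %d\n", a != b, a + x != b + x);  /* prints "1 0" */
  return 0;
}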

But I think that this is not directly related to this bug.

The gcc(1) man page says:

-fassociative-math
Allow re-association of operands in series of floating-point
operations.  [...]

As I read it, this is possible only inside an expression; otherwise it should
not be worded like that. What the optimizer does here is apply re-association
of operands beyond expressions, changing the set of allowed behaviors to an
unexpected one; thus, IMHO, the optimizer breaks the "as-if rule" here.

> It's not clear how to classify the above specific transform though.
> There's -ffp-contract which also enables removing of rounding steps.
> So the classification as "unsafe" is probably correct (and vague).

Note that the conditions under which FP contraction may occur are rather
strict. A consequence of what -funsafe-math-optimizations can do is to
increase the error of the considered expression, which is not allowed by FP
contraction.

[Bug c/93848] New: missing -Warray-bounds warning for array subscript 1 is outside array bounds

2020-02-20 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93848

Bug ID: 93848
   Summary: missing -Warray-bounds warning for array subscript 1
is outside array bounds
   Product: gcc
   Version: 10.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
  Assignee: unassigned at gcc dot gnu.org
  Reporter: vincent-gcc at vinc17 dot net
  Target Milestone: ---

Consider the following C code:

void foo_aux (int);

void foo (void)
{
  int i;
  int *p = &i;
  foo_aux (p[1]);
}

void bar_aux (int *);

void bar (void)
{
  int i[4];
  int (*p)[4] = &i;
  bar_aux (p[1]);
}

As expected, I get a warning concerning foo:

tst.c: In function ‘foo’:
tst.c:7:3: warning: array subscript 1 is outside array bounds of ‘int[1]’
[-Warray-bounds]
7 |   foo_aux (p[1]);
  |   ^~
tst.c:5:7: note: while referencing ‘i’
5 |   int i;
  |   ^

but I don't get a warning concerning bar (probably because there's no memory
access in this particular case), even though this use is forbidden by the ISO C
standard. Indeed, the end of 6.5.6p8 (about the "pointer + integer" case) says:

  If the result points one past the last element of the array object,
  it shall not be used as the operand of a unary * operator that is
  evaluated.

Tested with gcc-10 (Debian 10-20200211-1) 10.0.1 20200211 (experimental), using

  gcc-10 -O3 -std=c11 -pedantic -Warray-bounds=2 -c tst.c

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-02-20 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #9 from Vincent Lefèvre  ---
(In reply to Alexander Cherepanov from comment #8)
> A similar problem happens with -fno-signed-zeros -fno-trapping-math. Not
> sure if a separate PR should be filed...

Concerning -fno-signed-zeros, your code has undefined behavior since the sign
of zero is significant in "1 / x == 1 / 0.", i.e. changing the sign of 0
changes the result. If you use this option, you are telling GCC that this
cannot be the case. Thus IMHO, this is not a bug.

I would say that -fno-trapping-math should have no effect because there are no
traps by default (for floating point). But since your code has undefined
behavior, this is not necessarily surprising.

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-02-20 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #11 from Vincent Lefèvre  ---
(In reply to Rich Felker from comment #10)
> I don't think it's at all clear that -fno-signed-zeros is supposed to mean
> the programmer is promising that their code has behavior independent of the
> sign of zeros, and that any construct which would be influenced by the sign
> of a zero has undefined behavior. I've always read it as a license to
> optimize in ways that disregard the sign of a zero or change the sign of a
> zero, but with internal consistency of the program preserved.

But what does "internal consistency" mean? IMHO, if you choose a strict, safe
meaning, then the optimization option is likely to have no effect in practice.

An example:

int foo (double a)
{
  double b, c;
  b = 1 / a;
  c = 1 / a;
  return b == -c;
}

If a is a zero, would you regard a result of 1 as correct? Some users may
regard this as inconsistent: even though they do not care about the sign of
zero when computing a value, they may assume that the sign of 1 / a will not
change magically between two evaluations.

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-02-20 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #13 from Vincent Lefèvre  ---
(In reply to Rich Felker from comment #12)
> To me the meaning of internal consistency is very clear: that the semantics
> of the C language specification are honored and that the only valid
> transformations are those that follow the "as-if rule".

which is not clear...

> Since C without Annex F allows arbitrarily awful floating point results,

In C without Annex F, division by 0 is undefined behavior (really undefined
behavior, not an unspecified result, which would be very different).

With the examples using divisions by 0, you need to assume that Annex F
applies, but at the same time, with your interpretation, -fno-signed-zeros
breaks Annex F in some cases, e.g. if you have floating-point divisions by 0.
So I don't follow you...

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-02-20 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #15 from Vincent Lefèvre  ---
Note that there are very few ways to distinguish the sign of zero. The main
one is division by zero. Other ones are:

* Conversion to a character string, e.g. via printf(). But in this case, if
-fno-signed-zeros is used, whether "0" or "-0" is output (even in a way that
seems to be inconsistent) doesn't matter since the user does not care about the
sign of 0, i.e. "0" and "-0" are regarded as equivalent (IIRC, this would be a
bit like NaN, which has a sign bit in IEEE 754, but the output does not need to
match its sign bit).

* Memory analysis. Again, the sign does not matter, but for instance, reading
an object twice as a byte sequence while the object has not been changed by the
code must give the same result. I doubt that this is affected by optimization.

* copysign(). The C standard is clear: "On implementations that represent a
signed zero but do not treat negative zero consistently in arithmetic
operations, the copysign functions regard the sign of zero as positive." Thus
with -fno-signed-zeros, the sign of zero must be regarded as positive with this
function. If GCC chooses to deviate from the standard here, this needs to be
documented.

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-02-21 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #20 from Vincent Lefèvre  ---
(In reply to rguent...@suse.de from comment #18)
> GCC indeed happily evaluates a floating-point expression multiple times,
> for example for
> 
> void foo(float a, float b, float *x, float *y)
> {
>   float tem = a + b;
>   *x = tem - a;
>   *y = tem - b;
> }
> 
> I would expect GCC to turn this into
> 
>   *x = (a + b) - a;
>   *y = (a + b) - b;
> 
> and then simplify further to b and a.  Does this violate the "as-if rule"?

I think that if optimization goes beyond expressions, this is unexpected and
thus violates the "as-if rule". Assignments should be regarded as barriers.
Perhaps casts too.

Now, if you disregard Annex F entirely and assume that the operations may be
inaccurate (e.g. due to the optimizer), then in foo(), *x can be any value and
*y can be any value, so that the simplification of *x to b and *y to a would be
valid (as long as GCC assumes that their values are stable, i.e. that it will
not "recompute" the same unmodified variable multiple times). But IMHO, this kind
of aggressive optimization is not helpful to the user.

BTW, I fear that due to FP contraction, GCC might be broken even without
"unsafe" optimizations. For instance, consider:

  double a, b, c, r, s, t;
  /* ... */
  r = a * b + c;
  s = a * b;
  t = s + c;

possibly slightly modified, without changing the semantics and still allowing
FP contraction for r (I mean that things like "opaque" and volatile could be
introduced in the code to change how optimization is done).

Here, if FP contraction is allowed, the compiler may replace a * b + c by
fma(a,b,c), i.e. compute r with a single rounding instead of two, so that r and
t may have different values. My question is the following: given that r and t
are computed from the same Level-1 expression a * b + c (i.e. at the level of
real numbers, without rounding), is it possible that GCC's optimizer regards r
and t as equal, even though they may actually be different? If so, this would
be a bug.

[Bug c/20785] Pragma STDC * (C99 FP) unimplemented

2020-02-21 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=20785

--- Comment #8 from Vincent Lefèvre  ---
Concerning the STDC FP_CONTRACT pragma, implementing it would not be
sufficient. GCC would also need to restrict how it does contraction: it
currently contracts not only expressions, but also sequences of expressions,
which is invalid. Example:

double foo (double a, double b, double c)
{
  double p = a * b;
  return p + c;
}

Since a * b and p + c are separate expressions, contraction to FMA must not be
done according to the C standard. But when contraction is allowed, GCC
generates a FMA on my x86_64 processor:

.cfi_startproc
vfmadd132sd %xmm1, %xmm2, %xmm0
ret
.cfi_endproc

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-02-21 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #22 from Vincent Lefèvre  ---
(In reply to rguent...@suse.de from comment #21)
> Note that GCC does FP contraction across stmt boundaries so
> even s = a * b; t = s + c; is contracted.  If that is already
> a bug in your eyes then of couse value-numbering replacing
> t by r is already a bug.

Yes, with testing, I've eventually noticed that and mentioned this fact in
PR20785 (i.e. this is very important when the STDC FP_CONTRACT pragma will be
implemented, as this is not conforming).

I haven't found an example with a major inconsistency, but when contraction is
enabled, e.g. with "gcc -march=native -O3" on an x86 processor with a FMA, the
following program

#include <stdio.h>

__attribute__((noipa)) // imagine it in a separate TU
static double opaque(double d) { return d; }

void foo (double a, double b, double c)
{
  double s, t, u, v;
  s = a * b;
  t = opaque(s) + c;
  u = a * opaque(b);
  v = u + c;
  printf ("%d %a %a\n", t != v, t, v);
}

int main (void)
{
  volatile double a = 0x1.0001p0, c = -1;
  foo (a, a, c);
  return 0;
}

outputs

1 0x1p-47 0x1.8p-47

Though t and v are computed from equivalent expressions, their values are
different. However, in t != v, GCC correctly refrains from assuming that they
are equal: it generates the comparison instruction instead of folding; and
this is still the case if -ffinite-math-only is used (without this option,
possible NaNs would prevent the folding of t != v anyway).

Note that if one adds "if (s == u)" (which is true, and noticed by GCC) before
the printf, one correctly gets:

0 0x1p-47 0x1p-47

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-02-21 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #23 from Vincent Lefèvre  ---
(In reply to Vincent Lefèvre from comment #22)
> Note that if one adds "if (s == u)" (which is true, and noticed by GCC)

Sorry, this is not noticed by GCC (I used an incorrect command line).

Anyway, the calls to opaque() prevent any optimization.

[Bug c/93848] missing -Warray-bounds warning for array subscript 1 is outside array bounds

2020-02-21 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93848

--- Comment #2 from Vincent Lefèvre  ---
(In reply to Richard Biener from comment #1)
> Hmm, but as you say there isn't an actual access and taking the address of
> one-after the array is allowed.  With p[2] it appropriately warns.

No, what I'm saying is:
* There isn't an actual access, which seems to be the reason why GCC does not
warn.
* But even though there isn't an actual access, the operation is explicitly
forbidden by the C standard (the end of 6.5.6p8), so that GCC should warn
anyway.

Note also that the end of 6.5.6p8 is not the only part of the C standard that
is concerned. According to 6.5.3.2p4, the unary * operator can be used only
when the operand points to a function or to an object (and it doesn't matter
whether there is an access or not). But past the end of an array, there isn't
necessarily an object.

[Bug c/93848] missing -Warray-bounds warning for array subscript 1 is outside array bounds

2020-02-21 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93848

--- Comment #4 from Vincent Lefèvre  ---
Perhaps this was not the intent of the standard (and this is far from clear,
because this might affect optimizations -- there are already many things that
are forbidden with pointers even though they could have a valid interpretation
in a basic memory model). But this is currently forbidden. Note that in my
example, the * operator is evaluated, so that both the end of 6.5.6p8 and
6.5.3.2p4 (which is about evaluation, since it describes what the result is)
apply.

FYI, this is not supported by some tcc versions, though I suspect that this
may be due to a bug that could affect valid code too. But who knows...

[Bug tree-optimization/93301] Wrong optimization: instability of uninitialized variables leads to nonsense

2020-02-21 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93301

Vincent Lefèvre  changed:

   What|Removed |Added

 CC||vincent-gcc at vinc17 dot net

--- Comment #14 from Vincent Lefèvre  ---
(In reply to rguent...@suse.de from comment #10)
> I'd say "using" an uninitialized value is UB.

In general, but not for unsigned char. C17 6.2.4p6 says for objects with
automatic storage duration (which is the case here): "The initial value of the
object is indeterminate." 3.19.2 says "indeterminate value: either an
unspecified value or a trap representation". Since we have an unsigned char
here, this is not a trap representation. Thus this is an unspecified value.
3.19.3 says "unspecified value: valid value of the relevant type where this
International Standard imposes no requirements on which value is chosen in any
instance" (and the note says that this "cannot be a trap representation").

In short, by reading such an uninitialized unsigned char variable, you get a
valid value, but you don't know which one. And by reading the variable again,
you still get a valid value, which may be different. No UB here.
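
A sketch of the consequence (my example): the function below has no undefined
behavior, but its result is not determined:

int baz (void)
{
  unsigned char c;   /* automatic storage: indeterminate value */
  int a = c;         /* unspecified, but valid value */
  int b = c;         /* also valid, and possibly different from a */
  return a == b;     /* no UB, but the result may be 0 or 1 */
}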

[Bug middle-end/93848] missing -Warray-bounds warning for array subscript 1 is outside array bounds

2020-02-22 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93848

--- Comment #6 from Vincent Lefèvre  ---
bar_aux can be any function; it doesn't need to do anything. As soon as p[1]
is evaluated, one has undefined behavior, just like in function foo.

[Bug middle-end/93902] New: conversion from 64-bit long or unsigned long to double prevents simple optimization

2020-02-24 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93902

Bug ID: 93902
   Summary: conversion from 64-bit long or unsigned long to double
prevents simple optimization
   Product: gcc
   Version: 10.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: middle-end
  Assignee: unassigned at gcc dot gnu.org
  Reporter: vincent-gcc at vinc17 dot net
  Target Milestone: ---

Optimizations that are done with conversions from 32-bit unsigned int to
double are no longer done with conversions from 64-bit unsigned long to
double. For conversions from 64-bit long to double, it depends (see below).

Example:

void bar (void);

void foo1 (unsigned int a, unsigned int b)
{
  if (a == b)
{
  if ((double) a != (double) b)
bar ();
}
}

void foo2 (long a, long b)
{
  if (a == b)
{
  if ((double) a != (double) b)
bar ();
}
}

void foo3 (unsigned long a, unsigned long b)
{
  if (a == b)
{
  if ((double) a != (double) b)
bar ();
}
}

Tests done on x86_64 with: gcc-10 (Debian 10-20200222-1) 10.0.1 20200222
(experimental) [master revision
01af7e0a0c2:487fe13f218:e99b18cf7101f205bfdd9f0f29ed51caaec52779]

First, using only -O3 gives:

* For foo1, just a "ret", i.e. everything has been optimized.

* For foo2:

.cfi_startproc
cmpq    %rsi, %rdi
je  .L7
.L3:
ret
.p2align 4,,10
.p2align 3
.L7:
pxor    %xmm0, %xmm0
cvtsi2sdq   %rdi, %xmm0
ucomisd %xmm0, %xmm0
jnp .L3
jmp bar@PLT
.cfi_endproc

I assume that this might be different from foo1 because the conversion can
yield a rounding error (since 64 bits is more than the 53-bit precision of
double). However, both roundings are done in the same way. The only thing that
could prevent optimization is the side effect introduced by the inexact
operation, which raises the inexact flag. But GCC ignores it by default (it
assumes that the STDC FENV_ACCESS pragma is off). And GCC knows how to
optimize this case (see the other test below).

* For foo3, the generated code is much more complicated, even though the C
code seems simpler (because the integers can take only non-negative values):

.cfi_startproc
cmpq    %rsi, %rdi
je  .L16
.L8:
ret
.p2align 4,,10
.p2align 3
.L16:
testq   %rsi, %rsi
js  .L10
pxor    %xmm1, %xmm1
cvtsi2sdq   %rsi, %xmm1
.L11:
testq   %rsi, %rsi
js  .L12
pxor    %xmm0, %xmm0
cvtsi2sdq   %rsi, %xmm0
.L13:
ucomisd %xmm0, %xmm1
jp  .L15
comisd  %xmm0, %xmm1
je  .L8
.L15:
jmp bar@PLT
.p2align 4,,10
.p2align 3
.L12:
movq    %rsi, %rax
andl    $1, %esi
pxor    %xmm0, %xmm0
shrq    %rax
orq %rsi, %rax
cvtsi2sdq   %rax, %xmm0
addsd   %xmm0, %xmm0
jmp .L13
.p2align 4,,10
.p2align 3
.L10:
movq    %rsi, %rax
movq    %rsi, %rdx
pxor    %xmm1, %xmm1
shrq    %rax
andl    $1, %edx
orq %rdx, %rax
cvtsi2sdq   %rax, %xmm1
addsd   %xmm1, %xmm1
jmp .L11
.cfi_endproc

Now, let's add the -ffinite-math-only option, i.e.: "-O3 -ffinite-math-only".
Since integers cannot be NaN or Inf, and overflow in the conversion to double
is not possible, this should not change anything.

* foo1 is still optimized. Good.

* foo2 is now optimized like foo1, i.e. to just a "ret". This is good, but
surprising compared to the foo2 case without -ffinite-math-only, and to foo3
below.

* foo3 is still not optimized and is almost as complicated: just 2
instructions are removed, probably thanks to -ffinite-math-only (in case GCC
thought that special values were possible).

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-02-25 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #27 from Vincent Lefèvre  ---
(In reply to jos...@codesourcery.com from comment #26)
> I wouldn't be surprised if such a test could be constructed in the absence 
> of -funsafe-math-optimizations, that does a single conversion of an 
> out-of-range integer to a floating-point type in the abstract machine but 

I suppose that you meant the opposite: floating-point to integer.

> where that conversion gets duplicated so that one copy is done at compile 
> time (valid with -fno-trapping-math, covered by other bugs in the 
> -ftrapping-math case where it loses exceptions) and the other copy is done 
> at run time and the particular instruction used doesn't follow the logic 
> in fold_convert_const_int_from_real of converting NaN to zero and 
> saturating other values.

Yes, here's an example:

#include <stdio.h>

__attribute__((noipa)) // imagine it in a separate TU
static double opaque(double d) { return d; }

int main (void)
{
  double x = opaque(50.0);
  int i;

  i = x;
  printf ("%d\n", i);
  if (x == 50.0)
printf ("%d\n", i);
  return 0;
}

With -O3, I get:

-2147483648
2147483647

Tested with:

gcc-10 (Debian 10-20200222-1) 10.0.1 20200222 (experimental) [master revision
01af7e0a0c2:487fe13f218:e99b18cf7101f205bfdd9f0f29ed51caaec52779]

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-02-25 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #28 from Vincent Lefèvre  ---
A slightly modified version of the example, showing the issue with GCC 5 to 7
(as the noipa attribute directive has been added in GCC 8):

#include <stdio.h>

int main (void)
{
  volatile double d = 50.0;
  double x = d;
  int i = x;

  printf ("%d\n", i);
  if (x == 50.0)
printf ("%d\n", i);
  return 0;
}

The -O1 optimization level is sufficient to make the bug appear.

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-02-25 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #29 from Vincent Lefèvre  ---
And with unsigned too (this should be a bit more readable):

#include <stdio.h>

int main (void)
{
  volatile double d = -1.0;
  double x = d;
  unsigned int i = x;

  printf ("%u\n", i);
  if (x == -1.0)
printf ("%u\n", i);
  return 0;
}

gives

4294967295
0

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-02-25 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #30 from Vincent Lefèvre  ---
(In reply to Vincent Lefèvre from comment #28)
> A slightly modified version of the example, showing the issue with GCC 5 to
> 7 (as the noipa attribute directive has been added in GCC 8):

Correction: This allows one to test with old GCC versions, and it shows that
the bug was introduced in GCC 6. GCC 5 outputs consistent values.

[Bug c/53182] GNU C: attributes without underscores should be discouraged / no longer be documented e.g. as examples

2020-02-25 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53182

--- Comment #8 from Vincent Lefèvre  ---
(In reply to Jonathan Wakely from comment #7)
> (In reply to Vincent Lefèvre from comment #6)
> > Also, note that identifiers that are not reserved should not be used,
> > because they could be defined as macros by the developer, who has not tested
> > his code with GCC (I'm saying GCC here, but this applies to any compiler).
> 
> Should not be used by who?

by non-user code, but this applies in particular to what's inside
__attribute__.

Here's an example. In MPFR's mpfr.h file, we use:

#define __MPFR_SENTINEL_ATTR
#if defined (__GNUC__)
# if __GNUC__ >= 4
#  undef __MPFR_SENTINEL_ATTR
#  define __MPFR_SENTINEL_ATTR __attribute__ ((sentinel))
# endif
#endif

But if in my personal C code, I have

#define sentinel 1

before the #include's or, similarly, compile my program with -Dsentinel, then
I get a compilation failure. Here the end user is not supposed to know that
the identifier "sentinel" (which is not reserved) is used internally.
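
A more robust variant of the above (my suggestion) uses the reserved spelling
of the attribute name, which GCC also accepts and which user code cannot
legitimately define as a macro:

#define __MPFR_SENTINEL_ATTR
#if defined (__GNUC__)
# if __GNUC__ >= 4
#  undef __MPFR_SENTINEL_ATTR
#  define __MPFR_SENTINEL_ATTR __attribute__ ((__sentinel__))
# endif
#endif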

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-02-25 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #33 from Vincent Lefèvre  ---
I couldn't find a failing test with FP contraction, but this seems to be
because GCC doesn't optimize as much as it could. Still, the following example
could be interesting in the future or as a non-regression test:

#include <stdio.h>

#define A 0x0.p25
#define C -0x1.p50

int main (void)
{
  volatile double a = A;
  double b = a, c = C;
  int i;

#if 0
  double x = b * b;
  printf ("%a\n", x);
#endif
  // if (b == A && c == C)
  {
i = b * b + c;
printf ("%d\n", i);
if (b == A && c == C)
  {
// i = b * b + c;
printf ("%d\n", i == -7);
  }
  }
  return 0;
}

By "GCC doesn't optimize as much as it could", I mean that the comparison with
-7 is generated, even though the result of this comparison is known at compile
time. Note that this is shown by the fact that uncommenting the second "i = b *
b + c;" does not change anything: a comparison with -7 is still generated.

With contraction enabled, one currently gets:

-7
1

(which is correct). If one uncomments the first "if (b == A && c == C)", one
gets:

-8
0

(still correct) showing that contraction is disabled at compile time.

Changing the "#if 0" to "#if 1" allows one to avoid contraction at run time.
Such a change can be useful if the rule at compile time changes in the future,
say: contraction is not used at run time, but at compile time, the optimization
bug might have the effect to re-evaluate i with contraction, yielding
inconsistent output.

[Bug middle-end/93939] New: missing optimization for floating-point expression converted to integer whose result is known at compile time

2020-02-25 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93939

Bug ID: 93939
   Summary: missing optimization for floating-point expression
converted to integer whose result is known at compile
time
   Product: gcc
   Version: 10.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: middle-end
  Assignee: unassigned at gcc dot gnu.org
  Reporter: vincent-gcc at vinc17 dot net
  Target Milestone: ---

Consider the following example.

#include <stdio.h>

typedef double T;

int main (void)
{
  volatile T a = 8;
  T b = a;
  int i;

  i = 3 * b;
  printf ("%d\n", (int) i);

  if (b == 8)
{
  i = 3 * b;
  printf ("%d\n", i == 24);
}

  return 0;
}

Even if one compiles it with -O3, a comparison instruction for i == 24 is
generated, while its result 1 is known at compile time.

If I change T to int, or change the type of i to double, or comment out the
first printf (so that the first i is not used), then the i == 24 is optimized
as expected.

Tested under Debian/unstable / x86_64 with:

gcc-10 (Debian 10-20200222-1) 10.0.1 20200222 (experimental) [master revision
01af7e0a0c2:487fe13f218:e99b18cf7101f205bfdd9f0f29ed51caaec52779]

[Bug tree-optimization/93681] Wrong optimization: instability of x87 floating-point results leads to nonsense

2020-02-26 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93681

--- Comment #4 from Vincent Lefèvre  ---
Instead of "-m32 -march=i686", one can also compile with "-mfpmath=387". This
is useful if one does not have the 32-bit libs.

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-03-03 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #35 from Vincent Lefèvre  ---
(In reply to Alexander Cherepanov from comment #34)
> (In reply to Vincent Lefèvre from comment #13)
> > In C without Annex F, division by 0 is undefined behavior (really undefined
> > behavior, not an unspecified result, which would be very different).
> > 
> > With the examples using divisions by 0, you need to assume that Annex F
> > applies, but at the same time, with your interpretation, -fno-signed-zeros
> > breaks Annex F in some cases, e.g. if you have floating-point divisions by
> > 0. So I don't follow you...
> 
> You seem to say that either Annex F is fully there or not at all but why?
> -fno-signed-zeros breaks Annex F but only parts of it. Isn't it possible to
> retain the other parts of it? Maybe it's impossible or maybe it's impossible
> to retain division by zero, I don't know. What is your logic here?

The issue is that the nice property "x == y implies f(x) == f(y)", in
particular "x == y implies 1 / x == 1 / y", is no longer valid with signed
zeros. Thus one intent of -fno-signed-zeros could be to enable optimizations
based on this property. But this means that division by zero becomes undefined
behavior (like in C without Annex F). Major parts of Annex F would still
remain valid.
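
For illustration (my example, under standard Annex F semantics, i.e. without
-fno-signed-zeros): the two zeros compare equal, yet division distinguishes
them, which is exactly where the property fails:

#include <stdio.h>

int main (void)
{
  double pz = +0.0, nz = -0.0;
  printf ("%d\n", pz == nz);           /* 1: equal values */
  printf ("%g %g\n", 1 / pz, 1 / nz);  /* "inf -inf": f(x) != f(y) */
  return 0;
}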

> (In reply to Vincent Lefèvre from comment #15)
> > Note that there are very few ways to be able to distinguish the sign of
> > zero. The main one is division by zero. Other ones are:
> > 
> > * Conversion to a character string, e.g. via printf(). But in this case, if
> > -fno-signed-zeros is used, whether "0" or "-0" is output (even in a way that
> > seems to be inconsistent) doesn't matter since the user does not care about
> > the sign of 0, i.e. "0" and "-0" are regarded as equivalent (IIRC, this
> > would be a bit like NaN, which has a sign bit in IEEE 754, but the output
> > does not need to match its sign bit).
> 
> This means that you cannot implement you own printf: if you analyze sign bit
> of your value to decide whether you need to print '-', the sign of zero is
> significant in your code.

If you want to implement a printf that takes care of the sign of 0, you must
not use -fno-signed-zeros.

> IOW why do you think that printf is fine while "1 / x == 1 / 0." is not?

printf is not supposed to trigger undefined behavior. Part of its output is
unspecified, but that's all.

> > * Memory analysis. Again, the sign does not matter, but for instance,
> > reading an object twice as a byte sequence while the object has not been
> > changed by the code must give the same result. I doubt that this is affected
> > by optimization.
> 
> Working with objects on byte level is often optimized too:

Indeed, there could be invalid optimizations... But I would have thought that
in such a case, the same kind of issue could also occur without
-fno-signed-zeros: if x == y, this does not mean that x and y have the same
memory representation. Where does -fno-signed-zeros introduce a difference?

Note: There's also the case of IEEE 754 decimal floating-point formats (such as
_Decimal64), for instance, due to the "cohorts", where two identical values can
have different memory representations. Is GCC always correct here?
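
For instance, the following sketch (my example, assuming a target with
_Decimal64 support) has two variables that compare equal but typically have
different memory representations, since 1.0 and 1.00 are members of the same
cohort with different quantum exponents:

#include <stdio.h>
#include <string.h>

int main (void)
{
  _Decimal64 x = 1.0DD, y = 1.00DD;  /* same value, different quanta */
  printf ("%d %d\n", x == y,
          memcmp (&x, &y, sizeof x) != 0);  /* expected: "1 1" */
  return 0;
}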

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-03-04 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #37 from Vincent Lefèvre  ---
(In reply to rguent...@suse.de from comment #36)
> We're actually careful about the sign of zero here when recording
> requivalences for propagation.

But shouldn't the use of -fno-signed-zeros imply that the sign of zero never
matches (i.e. the actual sign is unknown, because unsafe optimizations could
have modified it in an inconsistent way)?

[Bug middle-end/94031] New: missing floating-point optimization of x + 0 when x is not zero

2020-03-04 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94031

Bug ID: 94031
   Summary: missing floating-point optimization of x + 0 when x is
not zero
   Product: gcc
   Version: 10.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: middle-end
  Assignee: unassigned at gcc dot gnu.org
  Reporter: vincent-gcc at vinc17 dot net
  Target Milestone: ---

In floating point, when x is not 0 (and not sNaN when supported), x + 0 can be
optimized to x. GCC misses this optimization:

void bar (double);

void foo1 (double x)
{
  x *= 2.0;  /* cannot be sNaN */
  if (x != 0)
bar (x - 0);
}

void foo2 (double x)
{
  x *= 2.0;  /* cannot be sNaN */
  if (x != 0)
bar (x + 0);
}

In foo1, x - 0 can be optimized to x in rounding to nearest (even when x is
±0), and GCC optimizes as expected:

.L4:
jmp bar@PLT

In foo2, in order to be able to optimize, one needs x != 0, guaranteed by the
"if" condition, but GCC misses the optimization:

.L10:
addsd   %xmm1, %xmm0
jmp bar@PLT
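
For reference, the reason the x != 0 guard is needed for "+ 0" but not for
"- 0" in rounding to nearest: (-0.) + 0. is +0., whereas (-0.) - 0. is -0.
A quick check (my example):

#include <stdio.h>

int main (void)
{
  double nz = -0.0;
  printf ("%g %g\n", nz + 0.0, nz - 0.0);  /* prints "0 -0" */
  return 0;
}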

Tested under Debian x86_64 with gcc-10 (Debian 10-20200222-1) 10.0.1 20200222
(experimental) [master revision
01af7e0a0c2:487fe13f218:e99b18cf7101f205bfdd9f0f29ed51caaec52779]

[Bug middle-end/94031] missing floating-point optimization of x + 0 when x is not zero

2020-03-04 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94031

--- Comment #3 from Vincent Lefèvre  ---
(In reply to Andrew Pinski from comment #1)
> This needs something like VRP for floating point types.

Thus this is related in some way to PR24021 and PR68097.

[Bug tree-optimization/24021] VRP does not work with floating points

2020-03-04 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=24021

--- Comment #3 from Vincent Lefèvre  ---
But note that the optimization should be modified or disabled in contexts where
floating-point exceptions need to be honored, as the i+=0.1f will sometimes
trigger the inexact exception.

[Bug middle-end/93806] Wrong optimization: instability of floating-point results with -funsafe-math-optimizations leads to nonsense

2020-03-04 Thread vincent-gcc at vinc17 dot net
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93806

--- Comment #39 from Vincent Lefèvre  ---
So I wonder whether -fno-signed-zeros should be removed. It seems too
dangerous. The -ffinite-math-only option is fine because the programmer may be
able to prove that Inf and NaN never occur in his code (at least with
non-erroneous inputs). But zero is too common a value, and not having a very
rigorous specification for -fno-signed-zeros is bad.

Now, if the only issue with -fno-signed-zeros (assuming division by 0 does not
occur) concerns values with multiple representations, solving this more general
issue could be sufficient, though.

[Bug c/56281] New: missed VRP optimization from undefined left shift in ISO C99

2013-02-10 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=56281

 Bug #: 56281
   Summary: missed VRP optimization from undefined left shift in
ISO C99
Classification: Unclassified
   Product: gcc
   Version: 4.8.0
Status: UNCONFIRMED
  Severity: enhancement
  Priority: P3
 Component: c
AssignedTo: unassig...@gcc.gnu.org
ReportedBy: vincent-...@vinc17.net

It seems that GCC is unaware that a left shift is undefined in ISO C99 when it
yields an integer overflow. See the following testcase with -O3 -std=c99:

int foo (int i)
{
  if (i < 0)
    i = -i;
  i *= 2;
  if (i < 0)
    i++;
  return i;
}

int bar (int i)
{
  if (i < 0)
    i = -i;
  i <<= 1;
  if (i < 0)
    i++;
  return i;
}

GCC optimizes foo() at the .065t.vrp1 step, but not bar().

[Bug c/56281] missed VRP optimization from undefined left shift in ISO C99

2013-02-11 Thread vincent-gcc at vinc17 dot net

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=56281

Vincent Lefèvre  changed:

   What|Removed |Added

 Status|RESOLVED|UNCONFIRMED
 Resolution|DUPLICATE   |

--- Comment #3 from Vincent Lefèvre  2013-02-11 
10:38:27 UTC ---
Not the same request. Bug 31178 (VRP can infer a range for b in a >> b and
a << b) is about a range for the second operand b (independent of the value of
the first operand, BTW). Here it is about a range for the first operand and
the result. This is quite different due to the asymmetry of the shift
operators.

Concerning existing code, there was also much code assuming wrapping on
integer overflow for +. Code needs to be fixed, or should not be compiled with
options like -std=c99.


[Bug middle-end/36296] bogus uninitialized warning (loop representation, VRP missed-optimization)

2013-04-17 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36296

--- Comment #16 from Vincent Lefèvre  2013-04-17 
08:40:09 UTC ---
(In reply to comment #3)
> A way to tell gcc a variable is not uninitialized is to perform
> self-initialization like
> 
>  int i = i;
> 
> this will cause no code generation but inhibits the warning.  Other compilers
> may warn about this construct of course.

What makes things worse about this workaround is that even protecting it with a

#if defined(__GNUC__)

may not be sufficient as other compilers may claim GNUC compatibility and
behave differently. This is the case of clang (at least under Debian):
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=705583

The only good solution would be to fix the bug. I've checked that it is still
there in the trunk revision 197260 (current Debian's gcc-snapshot).

[Bug middle-end/36296] bogus uninitialized warning (loop representation, VRP missed-optimization)

2013-04-17 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36296

--- Comment #20 from Vincent Lefèvre  2013-04-17 
11:17:14 UTC ---
(In reply to comment #18)
> In fact, we should have removed the i=i idiom a long time ago. The correct
> thing to do (as Linus says) is to initialize the variable to a sensible value
> to silence the warning: http://lwn.net/Articles/529954/

There is no real sensible value except some trap value. Leaving the variable
uninitialized at that point (the declaration) allows some tools, like the
Formalin compiler described in WG14/N1637, to detect potential problems if the
variable is really used uninitialized.

(In reply to comment #19)
> Note that -Wmaybe-uninitialized is available since at least GCC 4.8.0

OK, so a solution would be to add a configure test for projects that don't want
such warnings (while still using -Wall) to see whether -Wno-maybe-uninitialized
is supported.

[Bug middle-end/36296] bogus uninitialized warning (loop representation, VRP missed-optimization)

2013-04-17 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36296

--- Comment #23 from Vincent Lefèvre  2013-04-17 
12:24:56 UTC ---
(In reply to comment #21)
> When an unrecognized warning option is requested (e.g., -Wunknown-warning), 
> GCC
> will emit a diagnostic stating that the option is not recognized.  However, if
> the -Wno- form is used, the behavior is slightly different: No diagnostic will
> be
> produced for -Wno-unknown-warning unless other diagnostics are being 
> produced. 

That was mainly for pre-4.7 GCC versions, where, without the i=i idiom, one
would get the usual "may be used uninitialized in this function" warning
(since -Wno-maybe-uninitialized is not supported there), but also the

  unrecognized command line option "-Wno-maybe-uninitialized"

warning, because another warning was already being emitted. However, this may
not really be important.

[Bug middle-end/36296] bogus uninitialized warning (loop representation, VRP missed-optimization)

2013-04-17 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36296

--- Comment #24 from Vincent Lefèvre  2013-04-17 
12:34:40 UTC ---
BTW, since with the latest GCC versions (such as Debian's GCC 4.7.2), the
warning is no longer issued with -Wno-maybe-uninitialized, perhaps the bug
severity could be lowered to "enhancement".

[Bug c/57029] New: GCC doesn't set the inexact flag on inexact compile-time int-to-float conversion

2013-04-22 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=57029

 Bug #: 57029
   Summary: GCC doesn't set the inexact flag on inexact
compile-time int-to-float conversion
Classification: Unclassified
   Product: gcc
   Version: 4.9.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
AssignedTo: unassig...@gcc.gnu.org
ReportedBy: vincent-...@vinc17.net

GCC doesn't set the inexact flag on inexact compile-time int-to-float
conversion:

#include <stdio.h>
#include <fenv.h>

#pragma STDC FENV_ACCESS ON

void test1 (void)
{
  volatile float c;

  c = 0x7fbf;
  printf ("c = %a, inexact = %d\n", c, fetestexcept (FE_INEXACT));
}

void test2 (void)
{
  volatile float c;
  volatile int i = 0x7fbf;

  c = i;
  printf ("c = %a, inexact = %d\n", c, fetestexcept (FE_INEXACT));
}

int main (void)
{
  test1 ();
  test2 ();
  return 0;
}

Under Linux/x86_64:

$ gcc -std=c99 -O3 conv-int-flt-inex.c -o conv-int-flt-inex -lm
$ ./conv-int-flt-inex
c = 0x1.fep+30, inexact = 0
c = 0x1.fep+30, inexact = 32

Ditto without optimizations.

Note: the STDC FENV_ACCESS pragma is currently not supported (PR 34678), but I
don't think it is directly related (this is not an instruction ordering
problem...).

This bug has been found from:

  http://gcc.gnu.org/ml/gcc-help/2013-04/msg00164.html

[Bug c/57029] GCC doesn't set the inexact flag on inexact compile-time int-to-float conversion

2013-04-22 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=57029

--- Comment #3 from Vincent Lefèvre  2013-04-22 
13:32:10 UTC ---
(In reply to comment #2)
> There is -ftrapping-math, which I think is supposed to have an effect here. 
> And
> if this was implemented properly, I hope it would be turned off by default.

-ftrapping-math is on by default. And whether -ftrapping-math or
-fno-trapping-math is used, the behavior is the same.

[Bug middle-end/54615] New: unclear documentation on -fomit-frame-pointer for -O

2012-09-18 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=54615

 Bug #: 54615
   Summary: unclear documentation on -fomit-frame-pointer for -O
Classification: Unclassified
   Product: gcc
   Version: 4.8.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: middle-end
AssignedTo: unassig...@gcc.gnu.org
ReportedBy: vincent-...@vinc17.net


http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html says about -O:

  -O also turns on -fomit-frame-pointer on machines where doing so does not
interfere with debugging.

and about -fomit-frame-pointer:

  Enabled at levels -O, -O2, -O3, -Os.

So, it is not clear whether -fomit-frame-pointer is always enabled at level -O
or not.

This bug is related to bug 51019 (unclear documentation on -fomit-frame-pointer
default for -Os and different platforms).


[Bug middle-end/55145] Different bits for long double constant depending on long int size

2012-11-04 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=55145

--- Comment #8 from Vincent Lefèvre  2012-11-04 
23:43:44 UTC ---
(In reply to comment #7)
> Here are different internal values from the same input:
> 
> 32-bit long: 1.57079632679489661925640447970309310221637133509
> Input:   
> 1.5707963267948966192021943710788178805159986950457096099853515625
> 64-bit long: 1.57079632679489661914798426245454265881562605500221252441
> 
> Input value is extremely close to a half-way value between 32-bit
> and 64-bit longs.

1.5707963267948966192021943710788178805159986950457096099853515625 is *exactly*
the 65-bit binary number
1.1001001000011111101101010100010001000010110100011000010001101001, thus
exactly a halfway value between two consecutive long double numbers (for 64-bit
precision):
  1.100100100001111110110101010001000100001011010001100001000110100
and
  1.100100100001111110110101010001000100001011010001100001000110101

I suppose that the difference is due to the fact that the algorithm used in GCC
has not been written to round correctly, and if this algorithm uses variables
of type "long" internally, it is not surprising to get different results on
different architectures (32-bit long and 64-bit long).


[Bug middle-end/21718] real.c rounding not perfect

2012-11-04 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21718

--- Comment #12 from Vincent Lefèvre  2012-11-05 
00:16:32 UTC ---
(In reply to comment #11)
> Really I'd consider this just a variant on bug 21718 (real.c rounding not 
> perfect).  That would ideally be fixed by using MPFR for this in GCC ... 
> except that for any MPFR versions before 3.1.1p2, the bug I found with the 
> ternary value from mpfr_strtofr could cause problems for subnormal 
> results.

Alternatively, you can write code based on MPFR without using the ternary
value. The algorithm would be:
1. Round to the target precision.
2. If the result is in the subnormal range (this can be detected by looking at
the exponent of the result), then deduce the "real" precision from the
exponent, and recompute the result in this precision directly.
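
A rough sketch of this algorithm with the MPFR API (my code, not GCC's; the
exponent arithmetic is illustrative, and the exact offset depends on the
target format's conventions):

#include <mpfr.h>

/* Parse s into r for a binary format with prec-bit significands, where
   numbers with (MPFR) exponent below emin_norm are subnormal. */
static void parse_with_subnormals (mpfr_t r, const char *s,
                                   long prec, long emin_norm)
{
  mpfr_set_prec (r, prec);
  mpfr_strtofr (r, s, NULL, 10, MPFR_RNDN);        /* step 1 */
  if (mpfr_regular_p (r) && mpfr_get_exp (r) < emin_norm)
    {
      /* Step 2: in the subnormal range, the "real" precision
         decreases with the exponent. */
      long p = prec - (emin_norm - mpfr_get_exp (r));
      mpfr_set_prec (r, p < 2 ? 2 : p);
      mpfr_strtofr (r, s, NULL, 10, MPFR_RNDN);
    }
}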


[Bug middle-end/21718] real.c rounding not perfect

2012-11-05 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21718

--- Comment #14 from Vincent Lefèvre  2012-11-05 
08:12:08 UTC ---
Otherwise, how about taking code from the glibc implementation of
strtof/strtod/strtold? Code in strtod was recently fixed. I don't know about
strtold...


[Bug tree-optimization/57994] Constant folding of infinity

2013-07-28 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=57994

--- Comment #12 from Vincent Lefèvre  ---
(In reply to Marc Glisse from comment #9)
> I believe there are far fewer special cases (and thus
> risks) with MPFR, but that would indeed require a suitable testsuite for all
> functions for which we enable it (at least if MPFR doesn't already have such
> a testsuite, and maybe even then, to make sure we call it properly).

MPFR's testsuite is just against the MPFR implementation. These are actually
non-regression tests. For comparisons with the functions from the C library,
there's mpcheck:

  https://gforge.inria.fr/projects/mpcheck/

but I don't know whether it includes special values (it wasn't its goal).

Note that we tried to follow C99's Annex F when this made sense. MPFR also
supports some special functions that are not in ISO C (yet), but may be
provided by the C library on some platforms (e.g. Bessel functions, which are
also specified in POSIX).

Don't forget that the specific rules for signed zeros are also concerned;
again, we tried to follow C99's Annex F, IIRC, even when the specification is
rather inconsistent (e.g. under the rounding mode toward negative infinity,
the subtraction 1 - 1 returns -0, but log(1) is required to return +0).
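
This particular case can be observed as follows (my example; with GCC, compile
with -frounding-math and link with -lm, since the FENV_ACCESS pragma is not
supported):

#include <stdio.h>
#include <math.h>
#include <fenv.h>

int main (void)
{
  volatile double one = 1.0;
  fesetround (FE_DOWNWARD);
  printf ("%g %g\n", one - one, log (one));  /* "-0 0" per Annex F */
  return 0;
}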

[Bug tree-optimization/57994] Constant folding of infinity

2013-08-01 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=57994

--- Comment #13 from Vincent Lefèvre  ---
A difference that may occur in the future if the C library adds a rsqrt
function (based on the IEEE 754-2008 rSqrt function) or constant folding is
used on builtins: in MPFR, mpfr_rec_sqrt on -0 gives +Inf, not -Inf.

[Bug c/48341] LDBL_EPSILON is wrong on IRIX 6.5

2013-08-02 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48341

--- Comment #4 from Vincent Lefèvre  ---
I can see the same problem under Linux (gcc110.fsffrance.org).

According to the C standard (C99 and C11), the *_EPSILON values are "the
difference between 1 and the least value greater than 1 that is representable
in the given floating point type, b^(1-p)".

Here b = 2 and p = LDBL_MANT_DIG = 106.

I think that the C standard is badly worded. It should have said "the
difference between 1 and the least floating-point value greater than 1 that is
representable in the given type, b^(1-p)". What is regarded as a floating-point
value is specified by the standard: see 5.2.4.2.2p2 "A floating-point number
(x) is defined by the following model: [...]".

[Bug c/48341] LDBL_EPSILON is wrong on IRIX 6.5

2013-08-02 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48341

--- Comment #5 from Vincent Lefèvre  ---
(In reply to Vincent Lefèvre from comment #4)
> I can see the same problem under Linux (gcc110.fsffrance.org).

In case this wasn't clear, the architecture is also a PowerPC. The
double-double format comes from the PowerPC ABI, and isn't directly related to
the OS itself (FYI it was also used under Mac OS X / PowerPC).

Thus the summary of this bug should be changed to:

  LDBL_EPSILON is wrong on PowerPC

[Bug c/48341] LDBL_EPSILON is wrong on IRIX 6.5

2013-08-02 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48341

--- Comment #7 from Vincent Lefèvre  ---
(In reply to r...@cebitec.uni-bielefeld.de from comment #6)
> Certainly not: IRIX isn't PowerPC but MIPS!

OK, that wasn't clear because the original bug report mentioned:
"gcc.target/powerpc/rs6000-ldouble-2.c".
^^^

> If need be, just refer to the double/double format.

Yes.

[Bug tree-optimization/57994] Constant folding of infinity

2013-08-27 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=57994

--- Comment #15 from Vincent Lefèvre  ---
If GCC intends to handle Bessel functions j0, j1, jn, y0, y1, yn (POSIX), there
may be differences with GNU MPFR. See my messages and bug report:
  http://permalink.gmane.org/gmane.comp.standards.posix.austin.general/7982
  http://permalink.gmane.org/gmane.comp.standards.posix.austin.general/7990
  http://sourceware.org/bugzilla/show_bug.cgi?id=15901

[Bug target/58429] New: _Decimal64 support is broken on powerpc64 with the mode32 ABI (-m32 -mpowerpc64)

2013-09-15 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58429

Bug ID: 58429
   Summary: _Decimal64 support is broken on powerpc64 with the
mode32 ABI (-m32 -mpowerpc64)
   Product: gcc
   Version: 4.7.2
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: target
  Assignee: unassigned at gcc dot gnu.org
  Reporter: vincent-gcc at vinc17 dot net

On gcc110.fsffrance.org:

$ cat ./decimal64.c
#include <stdio.h>
int main(void)
{
  _Decimal64 x = 1;
  if (x != x)
{
  printf ("Error!\n");
  return 1;
}
  else
{
  printf ("OK\n");
  return 0;
}
}

$ gcc -Wall -std=gnu99 -m32 -mpowerpc64 decimal64.c -o decimal64 -O
$ ./decimal64
OK
$ gcc -Wall -std=gnu99 -m32 -mpowerpc64 decimal64.c -o decimal64
$ ./decimal64
Error!

(The lack of error with -O is due to obvious optimization; a "volatile" on the
variable makes the error appear in every case.)

The problem doesn't appear with -m32 alone or -mpowerpc64 alone. Both are
needed to make it appear.

$ gcc --version
gcc (GCC) 4.7.2 20121109 (Red Hat 4.7.2-8)
Copyright (C) 2012 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


[Bug c/58485] New: [4.9 Regression] GMP test on subnormal fails with LTO and -O3

2013-09-20 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58485

Bug ID: 58485
   Summary: [4.9 Regression] GMP test on subnormal fails with LTO
and -O3
   Product: gcc
   Version: 4.9.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
  Assignee: unassigned at gcc dot gnu.org
  Reporter: vincent-gcc at vinc17 dot net

When I build GMP with

GNU MP config.status 5.1.2
configured by ./configure, generated by GNU Autoconf 2.69,
  with options "'--disable-shared' 'CC=gcc-snapshot' 'CFLAGS=-march=native -O3
-flto=jobserve -fuse-linker-plugin'"

one of the tests fails:

mpn_get_d wrong on denorm
  n=1
  exp   -1020
  sign  0
  got   =[00 00 00 00 00 00 F0 7F] inf
  want  =[00 00 00 00 00 00 30 00] 8.9002954340288055324e-308

with:

gcc (Debian 20130917-1) 4.9.0 20130917 (experimental) [trunk revision 202647]

This doesn't occur if I replace -O3 by -O2 or if I do not enable LTO.
This doesn't occur either with the same options as above and:

gcc-4.8.real (Debian 4.8.1-10) 4.8.1
gcc-4.7.real (Debian 4.7.3-7) 4.7.3


[Bug c/58485] [4.9 Regression] GMP test on subnormal fails with LTO and -O3

2013-09-20 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58485

--- Comment #1 from Vincent Lefèvre  ---
I forgot:

  Version:   GNU MP 5.1.2
  Host type: coreinhm-unknown-linux-gnu
  ABI:   64
  Install prefix:/usr/local
  Compiler:  gcc-snapshot -std=gnu99
  Static libraries:  yes
  Shared libraries:  no

(x86_64).

[Bug c/58485] [4.9 Regression] GMP test on subnormal fails with LTO and -O3

2013-09-20 Thread vincent-gcc at vinc17 dot net
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58485

Vincent Lefèvre  changed:

   What|Removed |Added

 Status|UNCONFIRMED |RESOLVED
 Resolution|--- |INVALID

--- Comment #2 from Vincent Lefèvre  ---
After looking at the mpn_get_d code, there's an integer overflow. So, that's a
bug in GMP.
