> 
> Andrew Pinski <[EMAIL PROTECTED]> writes:
> 
> | > 
> | > Paul Eggert <[EMAIL PROTECTED]> writes:
> | > 
> | > >         * NEWS: AC_PROG_CC, AC_PROG_CXX, and AC_PROG_OBJC now take an
> | > >         optional second argument specifying the default optimization
> | > >         options for GCC.  These optimizations now default to
> | > >         "-O2 -fwrapv" instead of to "-O2".  This partly attacks the
> | > >         problem reported by Ralf Wildenhues in
> | > >         <http://lists.gnu.org/archive/html/bug-gnulib/2006-12/msg00084.html>
> | > >         and in <http://gcc.gnu.org/ml/gcc/2006-12/msg00459.html>.
> | > 
> | > Does anybody think that Paul's proposed patch to autoconf would be
> | > better than changing VRP?
> | 
> | I think both are the wrong way forward.
> | What about coding the loops like:
> | 
> | if (sizeof(time_t) == sizeof(unsigned int))
> | {
> |   // do loop using unsigned int
> |   // convert to time_t and then see if an overflow happened
> | }
> | //etc. for the other type
> 
> Yuck.

It might be Yuck, but that is the only really portable way to write this
loop.  And if other compilers also treat signed overflow as undefined, you
will still run into this even if GCC is changed.

> If the above is the only way forward without an Autoconf change, I would
> highly recommend the Autoconf change if GCC's optimizers value benchmarks
> over running real-world code.

Which one, mine or Paul's?  Paul's change just makes everything compile
with -fwrapv anyway, which is not going to work for other compilers that
may also treat signed overflow as undefined in some form.

Since autoconf is used not only with GCC but with any compiler, you would
run into the same problem with a different compiler anyway.


-- Pinski
