https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71885

--- Comment #17 from Kern Sibbald <kern at sibbald dot com> ---
It is pretty difficult to argue with the developers because they know the
"rules" better than most programmers.  However, in my opinion they used very
poor judgment here by implementing a change that they were fully aware would
break many programs (they documented it as such).

It is one thing to "force" C++ programmers to do what the developers think is
correct, but it is a very bad thing to modify a compiler knowing that it will
break a lot of code.  There are probably thousands of users out there who are
now experiencing seg faults due to changes such as this.
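To illustrate the kind of code that broke (a minimal sketch of the idiom
behind this bug report, with hypothetical names, not our actual source): a
class-specific operator new zero-fills the allocation, and the constructor
relies on that.  With gcc 6 the compiler may treat the memset as a dead
store, because it happens before the object's lifetime formally begins:

#include <cstdlib>
#include <cstring>
#include <new>

struct Widget {
    int count;     // deliberately not set by the constructor;
    Widget() {}    // the code assumes operator new zeroed the memory

    // Class-specific allocator that zero-fills the block.
    static void* operator new(std::size_t size) {
        void* p = std::malloc(size);
        if (p == nullptr) throw std::bad_alloc();
        std::memset(p, 0, size);  // gcc 6 may remove this store as "dead"
                                  // because the object's lifetime has not
                                  // yet begun at this point
        return p;
    }
    static void operator delete(void* p) { std::free(p); }
};

int main() {
    Widget* w = new Widget;
    int c = w->count;  // gcc 5: reliably 0; gcc 6 with optimization:
                       // possibly garbage, since the memset was elided
    delete w;
    return c;
}

As I understand it, compiling with -flifetime-dse=1 (or initializing the
members in the constructor) avoids the problem, but that is exactly the kind
of fix thousands of projects now have to discover on their own.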

Many programmers such as myself rely on C++, but we prefer not to run on
bleeding edge systems, and we do not have the means to test our software on
all new systems and new compilers.  In this case, various bleeding edge
distros switched to gcc 6.0 and compiled and released programs that seg fault
out of the box.  These distros do not have the time or personnel to test
every single program.  The result is that bad code is released.  This is and
was unnecessary.  Yes, we the programmers have options and can fix it.  What
was unnecessary was to release a new compiler that the developers knew would
produce code that fails.

The g++ developers could have realized that, especially in "undefined"
territory where they knew they would break code, the conservative way to
proceed without creating chaos would have been to add a new strict warning
message for a period of time (one to two years) before making the change the
default.
