RFC: Update top level libtool files

2017-10-10 Thread Nick Clifton
Hi Guys,

  I would like to update the top level libtool files (libtool.m4,
  ltoptions.m4, ltsugar.m4, ltversion.m4 and lt~obsolete.m4) used by
  gcc, gdb and binutils.  Currently we have version 2.2.7a installed in
  the source trees and I would like to switch to the latest official
  version: 2.4.6.

  The motivation for doing this is an attempt to reduce the number of
  patches being carried round by the Fedora binutils releases.
  Currently one of the patches there is to fix a bug in the 2.2.7a
  libtool which causes it to select /lib and /usr/lib as the system
  library search paths even for 64-bit hosts.  Rather than just bring
  this patch into the sources however, I thought that it would be better
  to upgrade to the latest official libtool release and use that
  instead.

  I have successfully run an x86_64 gcc bootstrap, built and tested lots
  of different binutils configurations, and built and run an x86_64 gdb.
  One thing that worries me though, is why hasn't this been done before?
  Ie is there a special reason for staying with the old 2.2.7a libtool ?
  If not, then does anyone object to my upgrading the gcc, gdb and
  binutils mainline sources ?
  
Cheers
  Nick




Re: RFC: Update top level libtool files

2017-10-10 Thread Markus Trippelsdorf
On 2017.10.10 at 12:45 +0100, Nick Clifton wrote:
> Hi Guys,
> 
>   I would like to update the top level libtool files (libtool.m4,
>   ltoptions.m4, ltsugar.m4, ltversion.m4 and lt~obsolete.m4) used by
>   gcc, gdb and binutils.  Currently we have version 2.2.7a installed in
>   the source trees and I would like to switch to the latest official
>   version: 2.4.6.
> 
>   The motivation for doing this is an attempt to reduce the number of
>   patches being carried round by the Fedora binutils releases.
>   Currently one of the patches there is to fix a bug in the 2.2.7a
>   libtool which causes it to select /lib and /usr/lib as the system
>   library search paths even for 64-bit hosts.  Rather than just bring
>   this patch into the sources however, I thought that it would be better
>   to upgrade to the latest official libtool release and use that
>   instead.
> 
>   I have successfully run an x86_64 gcc bootstrap, built and tested lots
>   of different binutils configurations, and built and run an x86_64 gdb.
>   One thing that worries me though, is why hasn't this been done before?
>   Ie is there a special reason for staying with the old 2.2.7a libtool ?
>   If not, then does anyone object to my upgrading the gcc, gdb and
>   binutils mainline sources ?

The last time I looked, in 2011, libtool's "with_sysroot" was not
compatible with gcc's, so a naive copy doesn't work. But reverting
commit 3334f7ed5851ef1 in libtool before copying should work.

-- 
Markus


Re: RFC: Update top level libtool files

2017-10-10 Thread Joseph Myers
On Tue, 10 Oct 2017, Nick Clifton wrote:

> Hi Guys,
> 
>   I would like to update the top level libtool files (libtool.m4,
>   ltoptions.m4, ltsugar.m4, ltversion.m4 and lt~obsolete.m4) used by
>   gcc, gdb and binutils.  Currently we have version 2.2.7a installed in
>   the source trees and I would like to switch to the latest official
>   version: 2.4.6.

As per previous discussions on the issue: it's necessary to revert libtool
commit 3334f7ed5851ef1e96b052f2984c4acdbf39e20c.  I do not know
if there are other local libtool changes that are not in version 2.4.6; it
would be necessary to check all differences from 2.2.7a to determine
whether any need to be re-applied to 2.4.6.

>   I have successfully run an x86_64 gcc bootstrap, built and tested lots
>   of different binutils configurations, and built and run an x86_64 gdb.

Testing cross compilers, including Canadian crosses, would be important as 
well.

>   One thing that worries me though, is why hasn't this been done before?
>   Ie is there a special reason for staying with the old 2.2.7a libtool ?
>   If not, then does anyone object to my upgrading the gcc, gdb and
>   binutils mainline sources ?

Given appropriate testing and checks of local libtool changes, I think 
such updates are a good idea.  (As, for that matter, would be resyncing 
the toplevel configure/build code in the newlib-cygwin repository with 
that in GCC and binutils-gdb - but I don't know if newlib-cygwin has any 
unique local patches not in the other repositories, and given the 
out-of-sync nature at present I don't think such a resync can be required 
as part of an update in GCC and binutils-gdb.)

For that matter, these trees are also using very old autoconf and automake 
versions and using the current versions of those (2.69 and 1.15.1) would 
be a good idea as well.  Hopefully version dependencies are loose enough 
that it's possible to update one tool at a time (so update libtool without 
needing to update autoconf or automake at the same time).

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: RFC: Update top level libtool files

2017-10-10 Thread David Edelsohn
On Tue, Oct 10, 2017 at 7:45 AM, Nick Clifton wrote:
> Hi Guys,
>
>   I would like to update the top level libtool files (libtool.m4,
>   ltoptions.m4, ltsugar.m4, ltversion.m4 and lt~obsolete.m4) used by
>   gcc, gdb and binutils.  Currently we have version 2.2.7a installed in
>   the source trees and I would like to switch to the latest official
>   version: 2.4.6.
>
>   The motivation for doing this is an attempt to reduce the number of
>   patches being carried round by the Fedora binutils releases.
>   Currently one of the patches there is to fix a bug in the 2.2.7a
>   libtool which causes it to select /lib and /usr/lib as the system
>   library search paths even for 64-bit hosts.  Rather than just bring
>   this patch into the sources however, I thought that it would be better
>   to upgrade to the latest official libtool release and use that
>   instead.
>
>   I have successfully run an x86_64 gcc bootstrap, built and tested lots
>   of different binutils configurations, and built and run an x86_64 gdb.
>   One thing that worries me though, is why hasn't this been done before?
>   Ie is there a special reason for staying with the old 2.2.7a libtool ?
>   If not, then does anyone object to my upgrading the gcc, gdb and
>   binutils mainline sources ?

AIX is another target that will need to be tested carefully for such a change.

Thanks, David


GCC 5 branch is now closed

2017-10-10 Thread Jakub Jelinek
After the GCC 5.5 release the GCC 5 branch is now closed.  Please
refrain from committing to it from now on.

Thanks
Jakub


GCC 5.5 Released

2017-10-10 Thread Jakub Jelinek
The GNU Compiler Collection version 5.5 has been released.

GCC 5.5 is a bug-fix release from the GCC 5 branch
containing important fixes for regressions and serious bugs in
GCC 5.4 with more than 250 bugs fixed since the previous release.

This is also the last release from the GCC 5 branch; GCC continues
to be maintained on the GCC 6 and GCC 7 branches and the development
trunk.

This release is available from the FTP servers listed at:

  http://www.gnu.org/order/ftp.html

Please do not contact me directly regarding questions or comments
about this release.  Instead, use the resources available from
http://gcc.gnu.org.

As always, a vast number of people contributed to this GCC release
-- far too many to thank them individually!


Re: RFC: Update top level libtool files

2017-10-10 Thread Nick Clifton
Hi Joseph,

> As per previous discussions on the issue: it's necessary to revert libtool
> commit 3334f7ed5851ef1e96b052f2984c4acdbf39e20c.

OK - thanks for that pointer.

> I do not know 
> if there are other local libtool changes that are not in version 2.4.6; it 
> would be necessary to check all differences from 2.2.7a to determine 
> whether any need to be re-applied to 2.4.6.

*sigh*  It seems that 2.2.7a was not an official release.  At least I could
not find a tarball for that specific version on the ftp.gnu.org/libtool archive.
(There is a 2.2.6b release and a 2.2.8 release but no 2.2.7).  So it
looks like we have been using a modified set of sources for a long time now.

Maybe I would be better off not rocking the boat, and just submitting the
Fedora sys_path patch for consideration instead...

 
> For that matter, these trees are also using very old autoconf and automake 
> versions and using the current versions of those (2.69 and 1.15.1) would 
> be a good idea as well.  Hopefully version dependencies are loose enough 
> that it's possible to update one tool at a time (so update libtool without 
> needing to update autoconf or automake at the same time).

Oh gosh - I would love to see that done.  But the last time I tried I ended 
up going down a rabbit hole of autoconf/automake problems that just never 
ended.  So I gave up. :-(  Maybe someone with more autoconf-fu than me will
have a go one day though.

Cheers
  Nick



Re: RFC: Update top level libtool files

2017-10-10 Thread Joseph Myers
On Tue, 10 Oct 2017, Nick Clifton wrote:

> > I do not know 
> > if there are other local libtool changes that are not in version 2.4.6; it 
> > would be necessary to check all differences from 2.2.7a to determine 
> > whether any need to be re-applied to 2.4.6.
> 
> *sigh*  It seems that 2.2.7a was not an official release.  At least I could
> not find a tarball for that specific version on the ftp.gnu.org/libtool 
> archive.
> (There is a 2.2.6b release and a 2.2.8 release but no 2.2.7).  So it
> looks like we have been using a modified set of sources for a long time now.

Well, it looks like the update from upstream was

  r155012 | rwild | 2009-12-05 17:18:53 +0000 (Sat, 05 Dec 2009) | 113 lines

  Sync from git Libtool and regenerate.

so looking for relevant libtool commits around that time might help 
identify the one that was used.  My guess is that

commit ef32f487d746dbcdc00c2c357ebe3cf2a68d8a28
Author: Ralf Wildenhues 
Date:   Tue Nov 24 12:22:13 2009 +0100

Enable symbol versioning with the GNU gold linker.

is the libtool commit you should be comparing libtool files with to 
identify local changes.

-- 
Joseph S. Myers
jos...@codesourcery.com


GCC Buildbot Update - Definition of regression

2017-10-10 Thread Paulo Matos
Hi all,

It's almost 3 weeks since I last posted on GCC Buildbot. Here's an update:

* 3 x86_64 workers from CF are now installed;
* There's one scheduler for trunk doing fresh builds for every Daily bump;
* One scheduler doing incremental builds for each active branch;
* An IRC bot which is currently silent;

The next steps are:
* Enable LNT (I have installed this but have yet to connect it to the
buildbot) for tracking performance benchmarks over time -- it should come
up as http://gcc-lnt.linki.tools in the near future.
* Enable regression analysis --- This is fundamental. I understand that
without this the buildbot is pretty useless, so it has the highest priority.
However, I would like some agreement as to what in GCC should be
considered a regression. Each test in DejaGnu can have one of several
statuses: FAIL, PASS, UNSUPPORTED, UNTESTED, XPASS, KPASS, XFAIL, KFAIL,
UNRESOLVED

Since GCC doesn't have a 'clean bill' of test results, we need to analyse
the .sum files for the current run and compare them with the last run of
the same branch. My working rule is that if, for any test, there is a
transition like one of the following, then a regression exists and the
test run should be marked as a failure (a rough sketch in code follows
the list):

ANY -> no test  ; Test disappears
ANY / XPASS -> XPASS; Test goes from any status other than XPASS to XPASS
ANY / KPASS -> KPASS; Test goes from any status other than KPASS to KPASS
new test -> FAIL; New test starts as FAIL
PASS -> ANY ; Test moves away from PASS
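
To make that concrete, here is a minimal sketch (in Python, since the
buildbot itself is Python-based) of how the transitions above could be
checked against the previous run's results.  The .sum parsing helper and
the file handling are illustrative assumptions, not the actual buildbot
code:

import re

STATUSES = ("PASS", "FAIL", "XPASS", "XFAIL", "KPASS", "KFAIL",
            "UNSUPPORTED", "UNTESTED", "UNRESOLVED")

def parse_sum(path):
    """Map test name -> status from a DejaGnu .sum file (assumed layout)."""
    results = {}
    with open(path, errors="replace") as f:
        for line in f:
            status, _, name = line.rstrip("\n").partition(": ")
            if status in STATUSES and name:
                results[name] = status
    return results

def regressions(prev, curr):
    """Apply the transition rules listed above to two runs of one branch."""
    bad = []
    for name, old in prev.items():
        new = curr.get(name)
        if new is None:
            bad.append((name, old, "no test"))      # ANY -> no test
        elif new in ("XPASS", "KPASS") and old != new:
            bad.append((name, old, new))            # ANY / X|KPASS -> X|KPASS
        elif old == "PASS" and new != "PASS":
            bad.append((name, old, new))            # PASS -> ANY
    for name, new in curr.items():
        if name not in prev and new == "FAIL":
            bad.append((name, "new test", "FAIL"))  # new test -> FAIL
    return bad

With that, regressions(parse_sum("prev/gcc.sum"), parse_sum("curr/gcc.sum"))
would list the offending transitions for one .sum file, and the build could
be marked red whenever the list is non-empty.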

This is a suggestion. I am keen to have corrections from people who use
this on a daily basis and/or have a better understanding of each status.

As soon as we reach a consensus, I will deploy this analysis and enable
IRC bot to notify on the #gcc channel the results of the tests.

-- 
Paulo Matos


Re: GCC Buildbot Update - Definition of regression

2017-10-10 Thread Joseph Myers
On Tue, 10 Oct 2017, Paulo Matos wrote:

> ANY -> no test  ; Test disappears

No, that's not a regression.  Simply adding a line to a testcase will 
change the line number that appears in the PASS / FAIL line for an 
individual assertion therein.  Or the names will change when e.g. 
-std=c++2a becomes -std=c++20 and all the tests with a C++ standard 
version in them change their names.  Or if a bogus test is removed.

> ANY / XPASS -> XPASS; Test goes from any status other than XPASS
> to XPASS
> ANY / KPASS -> KPASS; Test goes from any status other than KPASS
> to KPASS

No, that's not a regression.  It's inevitable that XFAILing conditions may 
sometimes be broader than ideal, if it's not possible to describe the 
exact failure conditions to the testsuite, and so sometimes a test may 
reasonably XPASS.  Such tests *may* sometimes be candidates for a more 
precise XFAIL condition, but they aren't regressions.

> new test -> FAIL; New test starts as fail

No, that's not a regression, but you might want to treat it as one (in the 
sense that it's a regression at the higher level of "testsuite run should 
have no unexpected failures", even if the test in question would have 
failed all along if added earlier and so the underlying compiler bug, if 
any, is not a regression).  It should have human attention to classify it 
and either fix the test or XFAIL it (with issue filed in Bugzilla if a 
bug), but it's not a regression.  (Exception: where a test failing results 
in its name changing, e.g. through adding "(internal compiler error)".)

> PASS -> ANY ; Test moves away from PASS

No, only a regression if the destination result is FAIL (if it's 
UNRESOLVED then there might be a separate regression - execution test 
becoming UNRESOLVED should be accompanied by compilation becoming FAIL).  
If it's XFAIL, it might formally be a regression, but one already being 
tracked in another way (presumably Bugzilla) which should not turn the bot 
red.  If it's XPASS, that simply means XFAILing conditions slightly wider 
than necessary in order to mark failure in another configuration as 
expected.

My suggestion is:

PASS -> FAIL is an unambiguous regression.

Anything else -> FAIL and new FAILing tests aren't regressions at the 
individual test level, but may be treated as such at the whole testsuite 
level.

Any transition where the destination result is not FAIL is not a 
regression.

ERRORs in the .sum or .log files should be watched out for as well, 
however, as sometimes they may indicate broken Tcl syntax in the 
testsuite, which may cause many tests not to be run.

Note that the test names that come after PASS:, FAIL: etc. aren't unique 
between different .sum files, so you need to associate tests with a tuple 
(.sum file, test name) (and even then, sometimes multiple tests in a .sum 
file have the same name, but that's a testsuite bug).  If you're using 
--target_board options that run tests for more than one multilib in the 
same testsuite run, add the multilib to that tuple as well.
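
A rough sketch of that keying, under the assumption that multilibs show up
as "Running target ..." lines inside the .sum file (Python, purely
illustrative; not actual buildbot code):

import re

RESULT = re.compile(r"^(PASS|FAIL|XPASS|XFAIL|KPASS|KFAIL|"
                    r"UNSUPPORTED|UNTESTED|UNRESOLVED): (.*)$")

def results_by_key(sum_path):
    """Key each result by (.sum file, multilib/board, test name)."""
    results = {}
    board = "default"
    with open(sum_path, errors="replace") as f:
        for line in f:
            if line.startswith("Running target "):
                board = line[len("Running target "):].strip()
            m = RESULT.match(line)
            if m:
                results[(sum_path, board, m.group(2))] = m.group(1)
    return results

def unambiguous_regressions(prev, curr):
    # Only PASS -> FAIL counts; all other transitions are handled separately.
    return [key for key, status in prev.items()
            if status == "PASS" and curr.get(key) == "FAIL"]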

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: GCC Buildbot Update - Definition of regression

2017-10-10 Thread Markus Trippelsdorf
On 2017.10.10 at 21:45 +0200, Paulo Matos wrote:
> Hi all,
> 
> It's almost 3 weeks since I last posted on GCC Buildbot. Here's an update:
> 
> * 3 x86_64 workers from CF are now installed;
> * There's one scheduler for trunk doing fresh builds for every Daily bump;
> * One scheduler doing incremental builds for each active branch;
> * An IRC bot which is currently silent;

Using -j8 for the bot on an 8/16 (core/thread) machine like gcc67 is not
acceptable, because it will render it unusable for everybody else.
Also gcc67 has a buggy Ryzen CPU that causes random gcc crashes. Not the
best setup for a regression tester...

-- 
Markus


Re: GCC Buildbot Update - Definition of regression

2017-10-10 Thread Paulo Matos


On 11/10/17 06:17, Markus Trippelsdorf wrote:
> On 2017.10.10 at 21:45 +0200, Paulo Matos wrote:
>> Hi all,
>>
>> It's almost 3 weeks since I last posted on GCC Buildbot. Here's an update:
>>
>> * 3 x86_64 workers from CF are now installed;
>> * There's one scheduler for trunk doing fresh builds for every Daily bump;
>> * One scheduler doing incremental builds for each active branch;
>> * An IRC bot which is currently silent;
> 
> Using -j8 for the bot on an 8/16 (core/thread) machine like gcc67 is not
> acceptable, because it will render it unusable for everybody else.

I was going to correct you on that given what I read in
https://gcc.gnu.org/wiki/CompileFarm#Usage

but it was my mistake. I assumed that for an N-thread machine, I could
use N/2 processes but the guide explicitly says N-core, not N-thread.
Therefore I should be using 4 processes for gcc67 (or 0 given what follows).

I will also fix the number of processes used by the other workers.

> Also gcc67 has a buggy Ryzen CPU that causes random gcc crashes. Not the
> best setup for a regression tester...
> 

Is that documented anywhere? I will remove this worker.

Thanks,

-- 
Paulo Matos


Re: GCC Buildbot Update - Definition of regression

2017-10-10 Thread Paulo Matos


On 10/10/17 23:25, Joseph Myers wrote:
> On Tue, 10 Oct 2017, Paulo Matos wrote:
> 
>> new test -> FAIL; New test starts as fail
> 
> No, that's not a regression, but you might want to treat it as one (in the 
> sense that it's a regression at the higher level of "testsuite run should 
> have no unexpected failures", even if the test in question would have 
> failed all along if added earlier and so the underlying compiler bug, if 
> any, is not a regression).  It should have human attention to classify it 
> and either fix the test or XFAIL it (with issue filed in Bugzilla if a 
> bug), but it's not a regression.  (Exception: where a test failing results 
> in its name changing, e.g. through adding "(internal compiler error)".)
> 

When someone adds a new test to the testsuite, isn't it supposed to not
FAIL? If it does FAIL, shouldn't this be considered a regression?

Now, the danger is that, since regressions are comparisons with the
previous run, something like this would happen:

run1:
...
FAIL: foo.c ; new test
...

run1 fails because new test entered as a FAIL

run2:
...
FAIL: foo.c
...

run2 succeeds because there are no changes.

For this reason, all of these issues need to be taken care of straight away
or they become part of the 'normal' status and no more failures are
reported... unless of course a more complex regression analysis is
implemented (see the sketch below).
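
One way such an analysis could avoid that drift is to compare each run
against a pinned baseline instead of the previous run, and only update the
baseline once the failures have been triaged.  A sketch, with the file name
and update policy as pure assumptions:

import json
import os

BASELINE = "baseline-results.json"   # hypothetical persistent baseline

def load_baseline():
    if os.path.exists(BASELINE):
        with open(BASELINE) as f:
            return json.load(f)
    return {}

def check_run(current):
    """current: dict of test name -> status for this run."""
    baseline = load_baseline()
    # Stays non-empty (build stays red) until the baseline is updated.
    return [t for t, s in current.items()
            if s == "FAIL" and baseline.get(t) != "FAIL"]

def accept_baseline(current):
    """Called only after a human has triaged the new failures."""
    with open(BASELINE, "w") as f:
        json.dump(current, f, indent=1, sort_keys=True)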

Also, when I say run1 fails or succeeds, that is just the term I use for
displaying red/green in the buildbot interface for a given build, not
necessarily what I expect the process to do.

> 
> My suggestion is:
> 
> PASS -> FAIL is an unambiguous regression.
> 
> Anything else -> FAIL and new FAILing tests aren't regressions at the 
> individual test level, but may be treated as such at the whole testsuite 
> level.
> 
> Any transition where the destination result is not FAIL is not a 
> regression.
> 
> ERRORs in the .sum or .log files should be watched out for as well, 
> however, as sometimes they may indicate broken Tcl syntax in the 
> testsuite, which may cause many tests not to be run.
> 
> Note that the test names that come after PASS:, FAIL: etc. aren't unique 
> between different .sum files, so you need to associate tests with a tuple 
> (.sum file, test name) (and even then, sometimes multiple tests in a .sum 
> file have the same name, but that's a testsuite bug).  If you're using 
> --target_board options that run tests for more than one multilib in the 
> same testsuite run, add the multilib to that tuple as well.
> 

Thanks for all the comments. Sounds sensible.
By not being unique, you mean between languages?
I assume that two gcc.sum files from different builds will always refer to
the same test/configuration when they refer to (for example):
PASS: gcc.c-torture/compile/2105-1.c   -O1  (test for excess errors)

In this case, I assume that "gcc.c-torture/compile/2105-1.c   -O1
(test for excess errors)" will always be referring to the same thing.

-- 
Paulo Matos


Re: GCC Buildbot Update - Definition of regression

2017-10-10 Thread Markus Trippelsdorf
On 2017.10.11 at 08:22 +0200, Paulo Matos wrote:
> 
> 
> On 11/10/17 06:17, Markus Trippelsdorf wrote:
> > On 2017.10.10 at 21:45 +0200, Paulo Matos wrote:
> >> Hi all,
> >>
> >> It's almost 3 weeks since I last posted on GCC Buildbot. Here's an update:
> >>
> >> * 3 x86_64 workers from CF are now installed;
> >> * There's one scheduler for trunk doing fresh builds for every Daily bump;
> >> * One scheduler doing incremental builds for each active branch;
> >> * An IRC bot which is currently silent;
> > 
> > Using -j8 for the bot on an 8/16 (core/thread) machine like gcc67 is not
> > acceptable, because it will render it unusable for everybody else.
> 
> I was going to correct you on that given what I read in
> https://gcc.gnu.org/wiki/CompileFarm#Usage
> 
> but it was my mistake. I assumed that for an N-thread machine, I could
> use N/2 processes but the guide explicitly says N-core, not N-thread.
> Therefore I should be using 4 processes for gcc67 (or 0 given what follows).
> 
> I will fix also the number of processes used by the other workers.

Thanks. And while you are at it please set the niceness to 19.

> > Also gcc67 has a buggy Ryzen CPU that causes random gcc crashes. Not the
> > best setup for a regression tester...
> > 
> 
> Is that documented anywhere? I will remove this worker.

https://community.amd.com/thread/215773

-- 
Markus