[arm] GCC validation: preferred way of running the testsuite?

2020-05-11 Thread Christophe Lyon via Gcc
Hi,


As you may know, I've been running validations of GCC trunk in many
configurations for Arm and Aarch64.


I was recently trying to do some cleanup in the new Bfloat16, MVE, CDE, and
ACLE tests, because in several configurations I see 300-400 FAILs
mainly in these areas, caused by “testisms”. The goal is to avoid
wasting time over the same failure reports when checking what needs
fixing. I thought this would be quick & easy, but this is tedious
because of the numerous combinations of options and configurations
available on Arm.


Sorry for the very long email; it’s hard to describe and summarize,
but I'd like to try nonetheless, hoping that we can make testing
easier/more efficient :-). Most of the time, the problems I find are
with the tests rather than real compiler bugs, so quite some time is
wasted on them.


Here is a list of problems, starting with the tricky dependencies
around -mfloat-abi=XXX:

* Some targets do not support multilibs (eg arm-linux-gnueabi[hf] with
glibc), or one can decide not to build with both hard and soft FP
multilibs. This generally becomes a problem when including stdint.h
(used by arm_neon.h, arm_acle.h, …), leading to a compiler error for
lack of gnu/stub*.h for the missing float-abi. If you add -mthumb to
the picture, it becomes quite complex (eg -mfloat-abi=hard is not
supported on thumb-1).


Consider mytest.c that does not depend on any include file and has:
/* { dg-options "-mfloat-abi=hard" } */

If GCC is configured for arm-linux-gnueabi --with-cpu=cortex-a9 --with-fpu=neon,
with ‘make check’, the test PASSes.
With ‘make check’ with --target-board=-march=armv5t/-mthumb, then the
test FAILs:
sorry, unimplemented: Thumb-1 hard-float VFP ABI


If I add
/* { dg-require-effective-target arm_hard_ok } */
‘make check’ with --target-board=-march=armv5t/-mthumb is now
UNSUPPORTED (which is OK), but
plain ‘make check’ is now also UNSUPPORTED because arm_hard_ok detects
that we lack the -mfloat-abi=hard multilib. So we lose a PASS.

If I configure GCC for arm-linux-gnueabihf, then:
‘make check’ PASSes
‘make check’ with --target-board=-march=armv5t/-mthumb, FAILs
and with
/* { dg-require-effective-target arm_hard_ok } */
‘make check’ with --target-board=-march=armv5t/-mthumb is now UNSUPPORTED and
plain ‘make check’ PASSes

So it seems the best option is to add
/* { dg-require-effective-target arm_hard_ok } */
although it makes the test UNSUPPORTED by arm-linux-gnueabi even in
cases where it could PASS.
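Putting the two directives together, the recommended shape of such a test would be the following (mytest.c is the hypothetical test from the discussion above; the dg-do line is added here for completeness):

```shell
# Write out the minimal test file combining the effective-target guard
# with the option it guards.
cat > mytest.c <<'EOF'
/* { dg-do compile } */
/* { dg-require-effective-target arm_hard_ok } */
/* { dg-options "-mfloat-abi=hard" } */
int main (void) { return 0; }
EOF
cat mytest.c
```

With this shape, configurations lacking the hard-float multilib report UNSUPPORTED instead of FAIL, at the cost of the PASS described above.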

Is there consensus that this is the right way?



* In GCC DejaGnu helpers, the queries for -mfloat-abi=hard and
-march=XXX are independent in general, meaning if you query for
-mfloat-abi=hard support, it will do that in the absence of any
-march=XXX that the testcase may also be using. So, if GCC is
configured with its default cpu/fpu, -mfloat-abi=hard will be rejected
for lack of an fpu on the default cpu, but if GCC is configured with a
suitable cpu/fpu pair, -mfloat-abi=hard will be accepted.

I faced this problem when I tried to “fix” the order in which we try options in
Arm_v8_2a_bf16_neon_ok. (see
https://gcc.gnu.org/pipermail/gcc-patches/2020-April/544654.html)

I faced similar problems while working on a patch of mine about a bug
with IRQ handlers which has different behaviour depending on the FP
ABI used: I have the feeling that I spend too much time writing the
tests to the detriment of the patch itself...

I also noticed that Richard Sandiford probably faced similar issues
with his recent fix for "no_unique_address", where he finally added
arm_arch_v8a_hard_ok to check armv8-a CPU + neon-fp-armv8 FPU +
float-abi=hard at the same time.
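The idea behind such a combined helper can be sketched without a toolchain: try the flags as one set instead of querying float-abi and arch independently. Here check_compile is a stand-in stub, not the real DejaGnu compile test, and the flag sets are illustrative:

```shell
# Stub: pretend only the full CPU/arch + float-abi combination is accepted,
# mimicking a GCC configured with a default CPU that has no FPU.
check_compile() {
  case "$*" in
    *"-march=armv8-a"*"-mfloat-abi=hard"*) return 0 ;;
  esac
  return 1
}

# Try increasingly specific combinations, as a combined effective-target
# helper would, and report the first one that "compiles".
for flags in "-mfloat-abi=hard" \
             "-march=armv8-a+simd -mfloat-abi=hard"; do
  if check_compile $flags; then
    echo "supported with: $flags"
    break
  fi
done
```

The point is that `-mfloat-abi=hard` alone is rejected, while the same option paired with a suitable -march is accepted, which is exactly the behaviour the independent per-option queries miss.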

Maybe we could decide on a consistent and simpler way of checking such things?


* A metric for this complexity could be the number of arm
effective-targets; a quick and not-fully-accurate grep | sed | sort |
uniq -c | sort -n on target-supports.exp ends with:
 9 mips
 16 aarch64
 21 powerpc
 97 vect
106 arm
(does not count all the effective-targets generated by tcl code, eg
arm_arch_FUNC_ok)
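A self-contained approximation of that pipeline, run on a tiny made-up sample instead of the real target-supports.exp (the proc names below are invented for illustration):

```shell
# Build a small stand-in for target-supports.exp.
cat > sample-target-supports.exp <<'EOF'
proc check_effective_target_arm_neon_ok { } { return 1 }
proc check_effective_target_arm_vfp_ok { } { return 1 }
proc check_effective_target_aarch64_sve_ok { } { return 1 }
proc check_effective_target_mips_msa_ok { } { return 1 }
EOF

# Extract the architecture prefix of each effective-target name and
# count occurrences per prefix, least common first.
grep -o 'check_effective_target_[a-z0-9]*' sample-target-supports.exp \
  | sed 's/check_effective_target_//' \
  | sort | uniq -c | sort -n
```

On the real file the same kind of pipeline produces the per-architecture counts quoted above, with arm far ahead of the others.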

This probably explains why it’s hard to get test directives right :-)

I’ve not thought about how we could reduce that number….



* Finally, I’m wondering about the most appropriate way of configuring
GCC and running the tests.

So far, for most of the configurations I'm testing, I use different
--with-cpu/--with-fpu/--with-mode configure flags for each toolchain
configuration I’m testing and rarely override the flags at testing
time. I also disable multilibs to save build time and (scratch) disk
space. (See 
https://people.linaro.org/~christophe.lyon/cross-validation/gcc/trunk/0latest/report-build-info.html
for the current list, each line corresponds to a clean build + make
check job -- so there are 15 different toolchain configs for
arm-linux-gnueabihf for instance)

However, I think this may not be appropriate at least for the
arm-eabi toolchains, because I suspect the vendors who support several
SoCs ge

Re: dejagnu version update?

2020-05-13 Thread Christophe Lyon via Gcc
On Wed, 13 May 2020 at 19:44, Jonathan Wakely via Gcc  wrote:
>
> On Wed, 13 May 2020 at 18:19, Mike Stump via Gcc  wrote:
> >
> > I've changed the subject to match the 2015, 2017 and 2018 email threads.
> >
> > On May 13, 2020, at 3:26 AM, Thomas Schwinge  
> > wrote:
> > >
> > > Comparing DejaGnu/GCC testsuite '*.sum' files between two systems ("old"
> > > vs. "new") that ought to return identical results, I found that they
> > > didn't:
> >
> > > I have not found any evidence in DejaGnu master branch that this not
> > > working would've been a "recent" DejaGnu regression (and then fixed for
> > > DejaGnu 1.6) -- so do we have to assume that this never worked as
> > > intended back then?
> >
> > Likely not.
> >
> > > Per our "Prerequisites for GCC" installation documentation, we currently
> > > require DejaGnu 1.4.4.  Advancing that to 1.6 is probably out of
> > > question, given that it has "just" been released (four years ago).
> >
> > :-)  A user that wants full coverage should use 1.6, apparently.
>
> As documented at
> https://gcc.gnu.org/onlinedocs/libstdc++/manual/test.html#test.run.permutations
> anything older than 1.5.3 causes problems for libstdc++ (and probably
> the rest of GCC) because the options in --target_board get placed
> after the options in dg-options. If the test depends on the options in
> dg-options to work properly it might fail. For example, a test that
> has { dg-options "-O2" } and fails without optimisation would FAIL if
> you use --target_board=unix/-O0 with dejagnu 1.5.
>
I think that was commit:
http://git.savannah.gnu.org/gitweb/?p=dejagnu.git;a=commitdiff;h=5256bd82343000c76bc0e48139003f90b6184347
which for sure was a major change (though I don't see it documented in
dejagnu/NEWS file)
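The failure mode Jonathan describes can be sketched without a compiler: for "last flag wins" options like -O, appending the --target_board options after dg-options silently overrides the test's own flags. Here effective_opt is an invented stand-in, not a DejaGnu proc:

```shell
# Return the -O flag that would take effect: like the GCC driver, the
# last -O* on the command line wins.
effective_opt() {
  last=
  for f in "$@"; do
    case $f in
      -O*) last=$f ;;
    esac
  done
  echo "$last"
}

# dg-options come first (-O2); with dejagnu < 1.5.3 the board options
# (-O0) are appended after them, so the test effectively runs at -O0.
effective_opt -O2 -w -O0
```

With dejagnu 1.5.3 or later the board options come first, so the dg-options value is the one that sticks.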

>
> > > As the failure mode with old DejaGnu is "benign" (only causes missing
> > > execution testing), we could simply move on, and accept non-reproducible
> > > results between different DejaGnu versions?  Kind of lame...  ;-|
> >
> > An ugly wart to be sure.
> >
> > So, now that ubuntu 20.04 is out and RHEL 8 is out, and they both contain
> > 1.6, and SLES has 1.6, and since we've been sitting at 1.4.4 for so long, anyone
> > want to not update dejagnu to require 1.6?
>
> There are still lots of older systems in use for GCC dev, like all the
> POWER servers in the compile farm (but I've put a recent dejagnu in
> /opt/cfarm on some of them).
>
> > I had previously approved the update to 1.5.3, but no one really wanted it 
> > as no one updated the requirement.  Let's have the 1.6 discussion.  I'm not 
> > only inclined to up to 1.6, but to actually edit it in this time.
>
> Would the tests actually refuse to run with an older version?
>
> > Anyone strongly against?  Why?
>
> I'm in favour of requiring 1.5.3 or later, so 1.6 would be OK for me.


Re: [arm] GCC validation: preferred way of running the testsuite?

2020-05-26 Thread Christophe Lyon via Gcc
On Tue, 19 May 2020 at 13:28, Richard Earnshaw
 wrote:
>
> On 11/05/2020 17:43, Christophe Lyon via Gcc wrote:
> > Hi,
> >
> >
> > As you may know, I've been running validations of GCC trunk in many
> > configurations for Arm and Aarch64.
> >
> >
> > I was recently trying to make some cleanup in the new Bfloat16, MVE, CDE, 
> > and
> > ACLE tests because in several configurations I see 300-400 FAILs
> > mainly in these areas, because of “testisms”. The goal is to avoid
> > wasting time over the same failure reports when checking what needs
> > fixing. I thought this would be quick & easy, but this is tedious
> > because of the numerous combinations of options and configurations
> > available on Arm.
> >
> >
> > Sorry for the very long email, it’s hard to describe and summarize,
> > but I'd like to try nonetheless, hoping that we can make testing
> > easier/more efficient :-), because most of the time the problems I
> > found are with the tests rather than real compiler bugs, so I think
> > it's a bit of wasted time.
> >
> >
> > Here is a list of problems, starting with the tricky dependencies
> > around -mfloat-abi=XXX:
> >
> > * Some targets do not support multilibs (eg arm-linux-gnueabi[hf] with
> > glibc), or one can decide not to build with both hard and soft FP
> > multilibs. This generally becomes a problem when including stdint.h
> > (used by arm_neon.h, arm_acle.h, …), leading to a compiler error for
> > lack of gnu/stub*.h for the missing float-abi. If you add -mthumb to
> > the picture, it becomes quite complex (eg -mfloat-abi=hard is not
> > supported on thumb-1).
> >
> >
> > Consider mytest.c that does not depend on any include file and has:
> > /* { dg-options "-mfloat-abi=hard" } */
> >
> > If GCC is configured for arm-linux-gnueabi --with-cpu=cortex-a9 
> > --with-fpu=neon,
> > with ‘make check’, the test PASSes.
> > With ‘make check’ with --target-board=-march=armv5t/-mthumb, then the
> > test FAILs:
> > sorry, unimplemented: Thumb-1 hard-float VFP ABI
> >
> >
> > If I add
> > /* { dg-require-effective-target arm_hard_ok } */
> > ‘make check’ with --target-board=-march=armv5t/-mthumb is now
> > UNSUPPORTED (which is OK), but
> > plain ‘make check’ is now also UNSUPPORTED because arm_hard_ok detects
> > that we lack the -mfloat-abi=hard multilib. So we lose a PASS.
> >
> > If I configure GCC for arm-linux-gnueabihf, then:
> > ‘make check’ PASSes
> > ‘make check’ with --target-board=-march=armv5t/-mthumb, FAILs
> > and with
> > /* { dg-require-effective-target arm_hard_ok } */
> > ‘make check’ with --target-board=-march=armv5t/-mthumb is now UNSUPPORTED 
> > and
> > plain ‘make check’ PASSes
> >
> > So it seems the best option is to add
> > /* { dg-require-effective-target arm_hard_ok } */
> > although it makes the test UNSUPPORTED by arm-linux-gnueabi even in
> > cases where it could PASS.
> >
> > Is there consensus that this is the right way?
> >
> >
> >
> > * In GCC DejaGnu helpers, the queries for -mfloat-abi=hard and
> > -march=XXX are independent in general, meaning if you query for
> > -mfloat-abi=hard support, it will do that in the absence of any
> > -march=XXX that the testcase may also be using. So, if GCC is
> > configured with its default cpu/fpu, -mfloat-abi=hard will be rejected
> > for lack of an fpu on the default cpu, but if GCC is configured with a
> > suitable cpu/fpu pair, -mfloat-abi=hard will be accepted.
> >
> > I faced this problem when I tried to “fix” the order in which we try 
> > options in
> > Arm_v8_2a_bf16_neon_ok. (see
> > https://gcc.gnu.org/pipermail/gcc-patches/2020-April/544654.html)
> >
> > I faced similar problems while working on a patch of mine about a bug
> > with IRQ handlers which has different behaviour depending on the FP
> > ABI used: I have the feeling that I spend too much time writing the
> > tests to the detriment of the patch itself...
> >
> > I also noticed that Richard Sandiford probably faced similar issues
> > with his recent fix for "no_unique_address", where he finally added
> > arm_arch_v8a_hard_ok to check arm8v-a CPU + neon-fp-armv8 FPU +
> > float-abi=hard at the same time.
> >
> > Maybe we could decide on a consistent and simpler way of checking such 
> > things?
> >
> >
> > * A metric for this complexity could be the number of arm
> > effective-targets, a quick and not-fully accurate gr

Re: GCC Testsuite patches break AIX

2020-05-27 Thread Christophe Lyon via Gcc
On Wed, 27 May 2020 at 16:26, Jeff Law via Gcc  wrote:
>
> On Wed, 2020-05-27 at 11:16 -0300, Alexandre Oliva wrote:
> > Hello, David,
> >
> > On May 26, 2020, David Edelsohn  wrote:
> >
> > > Complaints about -dA, -dD, -dumpbase, etc.
> >
> > This was the main symptom of the problem fixed in the follow-up commit
> > r11-635-g6232d02b4fce4c67d39815aa8fb956e4b10a4e1b
> >
> > Could you please confirm that you did NOT have this commit in your
> > failing build, and that the patch above fixes the problem for you as it
> > did for others?
> >
> >
> > > This patch was not properly tested on all targets.
> >
> > This problem had nothing to do with targets.  Having Ada enabled, which
> > I've nearly always and very long done to increase test coverage, was
> > what kept the preexisting bug latent in my testing.
> >
> >
> > Sorry that I failed to catch it before the initial check in.
> Any thoughts on the massive breakage on the embedded ports in the testsuite?
> Essentially every test that links is failing like this:
>
> > Executing on host: /home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/xgcc
> > -B/home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/
> > /home/jenkins/gcc/gcc/testsuite/gcc.c-torture/execute/2112-1.c gcc_tg.o
> > -fno-diagnostics-show-caret -fno-diagnostics-show-line-numbers
> > -fdiagnostics-color=never -fdiagnostics-urls=never -O0 -w -msim {} {}
> > -Wl,-wrap,exit -Wl,-wrap,_exit -Wl,-wrap,main -Wl,-wrap,abort -lm -o
> > ./2112-1.exe    (timeout = 300)
> > spawn -ignore SIGHUP /home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/xgcc
> > -B/home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/
> > /home/jenkins/gcc/gcc/testsuite/gcc.c-torture/execute/2112-1.c gcc_tg.o
> > -fno-diagnostics-show-caret -fno-diagnostics-show-line-numbers
> > -fdiagnostics-color=never -fdiagnostics-urls=never -O0 -w -msim   -Wl,-wrap,exit
> > -Wl,-wrap,_exit -Wl,-wrap,main -Wl,-wrap,abort -lm -o ./2112-1.exe^M
> > xgcc: error: : No such file or directory^M
> > xgcc: error: : No such file or directory^M
> > compiler exited with status 1
> > FAIL: gcc.c-torture/execute/2112-1.c   -O0  (test for excess errors)
> > Excess errors:
> > xgcc: error: : No such file or directory
> > xgcc: error: : No such file or directory
> >
>
>
> Sadly there's no additional output that would help us figure out what went 
> wrong.

If that helps, I traced this down to the new gcc_adjust_linker_flags function.

Christophe


>
> jeff
>


Re: GCC Testsuite patches break AIX

2020-05-28 Thread Christophe Lyon via Gcc
On Wed, 27 May 2020 at 22:40, Alexandre Oliva  wrote:
>
> On May 27, 2020, Christophe Lyon via Gcc  wrote:
>
> > On Wed, 27 May 2020 at 16:26, Jeff Law via Gcc  wrote:
>
> >> Any thoughts on the massive breakage on the embedded ports in the 
> >> testsuite?
>
> I wasn't aware of any.  Indeed, one of my last steps before submitting
> the patchset was to fix problems that had come up in embedded ports,
> with gcc_adjust_linker_flags and corresponding changes to outputs.exp
> itself.
>
> >> Essentially every test that links is failing like this:
>
>
> >>
> >> > Executing on host: 
> >> > /home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/xgcc
> >> > -B/home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/
> >> > /home/jenkins/gcc/gcc/testsuite/gcc.c-torture/execute/2112-1.c
> >> > gcc_tg.o-fno-diagnostics-show-caret 
> >> > -fno-diagnostics-show-line-numbers
> >> > -fdiagnostics-color=never  -fdiagnostics-urls=never-O0  -w   -msim 
> >> > {} {}  -
> >> > Wl,-wrap,exit -Wl,-wrap,_exit -Wl,-wrap,main -Wl,-wrap,abort -lm  -o
> >> > ./2112-1.exe(timeout = 300)
> >> > spawn -ignore SIGHUP 
> >> > /home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/xgcc
> >> > -B/home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/
> >> > /home/jenkins/gcc/gcc/testsuite/gcc.c-torture/execute/2112-1.c 
> >> > gcc_tg.o
> >> > -fno-diagnostics-show-caret -fno-diagnostics-show-line-numbers 
> >> > -fdiagnostics-
> >> > color=never -fdiagnostics-urls=never -O0 -w -msim   -Wl,-wrap,exit -Wl,-
> >> > wrap,_exit -Wl,-wrap,main -Wl,-wrap,abort -lm -o ./2112-1.exe^M
> >> > xgcc: error: : No such file or directory^M
>
> >> Sadly there's no additional output that would help us figure out what went 
> >> wrong.
>
> > If that helps, I traced this down to the new gcc_adjust_linker_flags 
> > function.
>
> Thanks.  Yeah, H-P observed and submitted a similar report that made me
> wonder about empty arguments being passed to GCC.  Jeff's report
> confirms the suspicion.  See how there are a couple of {}s after -msim
> in the "Executing on host" line, that in the "spawn" line are completely
> invisible, only suggested by the extra whitespace.  That was not quite
> visible in H-P's report, but Jeff's makes it clear.
>
> I suppose this means there are consecutive blanks in e.g. board's
> ldflags, and the split function is turning each consecutive pair of

Yes, I'm seeing this because arm-sim.exp has:
set_board_info ldflags  "[libgloss_link_flags] [newlib_link_flags] $additional_options"

> blanks into an empty argument.  I'm testing a fix (kludge?) in
> refs/users/aoliva/heads/testme 169b13d14d3c1638e94ea7e8f718cdeaf88aed65
>
> --
> Alexandre Oliva, freedom fighter    he/him    https://FSFLA.org/blogs/lxo/
> Free Software Evangelist            Stallman was right, but he's left :(
> GNU Toolchain Engineer         Live long and free, and prosper ethically
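The empty-argument effect of that doubled blank is easy to reproduce: Tcl's [split $str " "] treats every single space as a field boundary, so two consecutive spaces produce an empty element, which then reaches xgcc as an empty "" argument. awk with a literal-space field separator shows the same splitting (the flag values here are illustrative):

```shell
# A board ldflags value with an accidental doubled space in the middle.
ldflags="-lgloss  -lnosys"

# Split on every single space, like Tcl's [split $str " "]: the doubled
# space yields an empty field between the two real flags.
printf '%s\n' "$ldflags" \
  | awk -F'[ ]' '{ for (i = 1; i <= NF; i++) printf "arg%d=<%s>\n", i, $i }'
```

The empty arg2 is exactly the invisible argument that made xgcc print ": No such file or directory" in the logs above.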


Re: dejagnu version update?

2020-06-12 Thread Christophe Lyon via Gcc
Hi,

On Wed, 27 May 2020 at 03:58, Rob Savoye  wrote:
>
> On 5/26/20 7:20 PM, Maciej W. Rozycki wrote:
>
> >  I'll run some RISC-V remote GCC/GDB testing and compare results for
> > DejaGnu 1.6/1.6.1 vs trunk.  It will take several days though, as it takes
> > many hours to go through these testsuite runs.
>
>   That'd be great. I'd rather push out a stable release, than have to
> fix bugs right after it gets released.
>
> - rob -


I ran our GCC validation harness using dejagnu master branch and
compared to the results we get using our linaro-local/stable branch
(https://git.linaro.org/toolchain/dejagnu.git/)

I noticed that we'd need to add patches (1) and (2) at least.

Patch (1) enables us to run tests on aarch64-elf using Arm's Foundation Model.

Patch (2) was posted in 2016:
https://lists.gnu.org/archive/html/dejagnu/2016-03/msg5.html.
It fixes problems with test output patterns (in fortran, ubsan and asan tests).

Patch (3) was posted in 2016 too:
https://lists.gnu.org/archive/html/dejagnu/2016-03/msg00034.html
I'm not 100% sure it made a difference in these test runs because we
still see some random failures anyway.

Thanks,

Christophe
From 382440f145811eeb3e85d0e57d9b8aa5418d1e80 Mon Sep 17 00:00:00 2001
From: Yvan Roux 
Date: Mon, 25 Apr 2016 11:09:52 +0200
Subject: [PATCH 2/3] Keep trailing newline in remote execution output.

	* lib/rsh.exp (rsh_exec): Don't remove trailing newline.
	* lib/ssh.exp (rsh_exec): Likewise.

Change-Id: I2368c18729c4bff9ee87d9163b1c8f4b0ad7f4d8
---
 lib/rsh.exp | 3 ---
 lib/ssh.exp | 3 ---
 2 files changed, 6 deletions(-)

diff --git a/lib/rsh.exp b/lib/rsh.exp
index 5b583c6..43f5430 100644
--- a/lib/rsh.exp
+++ b/lib/rsh.exp
@@ -283,8 +283,5 @@ proc rsh_exec { boardname program pargs inp outp } {
 	return [list -1 "Couldn't parse $RSH output, $output."]
 }
 regsub "XYZ(\[0-9\]*)ZYX\n?" $output "" output
-# Delete one trailing \n because that is what `exec' will do and we want
-# to behave identical to it.
-regsub "\n$" $output "" output
 return [list [expr {$status != 0}] $output]
 }
diff --git a/lib/ssh.exp b/lib/ssh.exp
index 0cf0f8d..a72f794 100644
--- a/lib/ssh.exp
+++ b/lib/ssh.exp
@@ -194,9 +194,6 @@ proc ssh_exec { boardname program pargs inp outp } {
 	return [list -1 "Couldn't parse $SSH output, $output."]
 }
 regsub "XYZ(\[0-9\]*)ZYX\n?" $output "" output
-# Delete one trailing \n because that is what `exec' will do and we want
-# to behave identical to it.
-regsub "\n$" $output "" output
 return [list [expr {$status != 0}] $output]
 }
 
-- 
2.7.4

From 1e5110d99ac8bac61e97bbdb4bb78ca72adb7e4e Mon Sep 17 00:00:00 2001
From: Maxim Kuvyrkov 
Date: Tue, 28 Jun 2016 09:40:01 +0100
Subject: [PATCH 1/3] Support using QEMU in local/remote testing using default
 "unix" board

If the board file defines "exec_shell", prepend it before the local or
remote command.

Change-Id: Ib3ff96126c4c96e4e7f8898609d0fce6faf803ef
---
 config/unix.exp | 13 +
 1 file changed, 13 insertions(+)

diff --git a/config/unix.exp b/config/unix.exp
index 2e38454..dc3f781 100644
--- a/config/unix.exp
+++ b/config/unix.exp
@@ -78,6 +78,11 @@ proc unix_load { dest prog args } {
 	verbose -log "Setting LD_LIBRARY_PATH to $ld_library_path:$orig_ld_library_path" 2
 	verbose -log "Execution timeout is: $test_timeout" 2
 
+	# Prepend shell name (e.g., qemu emulator) to the command.
+	if {[board_info $dest exists exec_shell]} {
+	set command "[board_info $dest exec_shell] $command"
+	}
+
 	set id [remote_spawn $dest $command "readonly"]
 	if { $id < 0 } {
 	set output "remote_spawn failed"
@@ -119,6 +124,14 @@ proc unix_load { dest prog args } {
 		return [list "unresolved" ""]
 	}
 	}
+
+	# Prepend shell name (e.g., qemu emulator) to the command.
+	if {[board_info $dest exists exec_shell]} {
+	set remotecmd "[board_info $dest exec_shell] $remotefile"
+	} else {
+	set remotecmd "$remotefile"
+	}
+
 	set status [remote_exec $dest $remotefile $parg $inp]
 	remote_file $dest delete $remotefile.o $remotefile
 	if { [lindex $status 0] < 0 } {
-- 
2.7.4
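A plain-shell sketch of the exec_shell idea from the patch above: when the board defines a wrapper (a QEMU emulator here; the path and flags are hypothetical), prepend it to the command being run, otherwise run the command as-is.

```shell
# Value that would come from [board_info $dest exec_shell]; hypothetical.
exec_shell="qemu-arm -L /usr/arm-linux-gnueabihf"

# The test binary to run on the "target".
command="./a.out"

# Prepend the wrapper only when the board defines one.
if [ -n "$exec_shell" ]; then
  command="$exec_shell $command"
fi
echo "$command"
```

This keeps the default "unix" board usable unchanged while letting a board file opt in to emulated execution.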

From b6a3e52aec69146e930d85b84a81b1e059f2ffe5 Mon Sep 17 00:00:00 2001
From: Christophe Lyon 
Date: Fri, 28 Sep 2018 08:26:02 +
Subject: [PATCH 3/3] 2018-09-28  Christophe Lyon  

	* lib/ssh.exp (ssh_exec): Redirect stderr to stdout on the remote
	machine, to avoid race conditions.

Change-Id: Ie0613a85fa990484fda41b13738025edf7477a62
---
 lib/ssh.exp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/ssh.exp b/lib/ssh.exp
index a72f794..3c7b840 100644
--- a/lib/ssh.exp
+++ b/lib/ssh.exp
@@ -171,7 +171,7 @@ proc ssh_exec { boardname program pargs inp outp } {
 
 # We use && here, as otherwise the echo always works, which makes it look
 # like execution succeeded when in reality it failed.
-set ret [local_exec "$SSH $ssh_useropts $ssh_user$hostname sh -c '$program $pargs && echo XYZ\\\${?}ZYX \\; rm -f $program'" $inp $outp $tim

Re: duplicate arm test results?

2020-09-22 Thread Christophe Lyon via Gcc
On Tue, 22 Sep 2020 at 17:02, Martin Sebor  wrote:
>
> Hi Christophe,
>
> While checking recent test results I noticed many posts with results
> for various flavors of arm that at high level seem like duplicates
> of one another.
>
> For example, the batch below all have the same title, but not all
> of the contents are the same.  The details (such as test failures)
> on some of the pages are different.
>
> Can you help explain the differences?  Is there a way to avoid
> the duplication?
>

Sure, I am aware that many results look the same...


If you look at the top of the report (~line 5), you'll see:
Running target myarm-sim
Running target myarm-sim/-mthumb/-mcpu=cortex-m3/-mfloat-abi=soft/-march=armv7-m
Running target myarm-sim/-mthumb/-mcpu=cortex-m0/-mfloat-abi=soft/-march=armv6s-m
Running target myarm-sim/-mcpu=cortex-a7/-mfloat-abi=hard/-march=armv7ve+simd
Running target myarm-sim/-mthumb/-mcpu=cortex-m7/-mfloat-abi=hard/-march=armv7e-m+fp.dp
Running target myarm-sim/-mthumb/-mcpu=cortex-m4/-mfloat-abi=hard/-march=armv7e-m+fp
Running target myarm-sim/-mthumb/-mcpu=cortex-m33/-mfloat-abi=hard/-march=armv8-m.main+fp+dsp
Running target myarm-sim/-mcpu=cortex-a7/-mfloat-abi=soft/-march=armv7ve+simd
Running target myarm-sim/-mthumb/-mcpu=cortex-a7/-mfloat-abi=hard/-march=armv7ve+simd

For all of these, the first line of the report is:
LAST_UPDATED: Tue Sep 22 09:39:18 UTC 2020 (revision
r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c)
TARGET=arm-none-eabi CPU=default FPU=default MODE=default

I have other combinations where I override the configure flags, eg:
LAST_UPDATED: Tue Sep 22 11:25:12 UTC 2020 (revision
r9-8928-gb3043e490896ea37cd0273e6e149c3eeb3298720)
TARGET=arm-none-linux-gnueabihf CPU=cortex-a9 FPU=neon-fp16 MODE=thumb

I tried to see if I could fit something in the subject line, but that
didn't seem convenient (would be too long, and I fear modifying the
awk script)

I think HJ generates several "running targets" in the same log, I run
them separately to benefit from the compute farm I have access to.

Christophe

> Thanks
> Martin
>
> Results for 11.0.0 20200922 (experimental) [master revision
> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> arm-none-eabi   Christophe LYON
>
>  Results for 11.0.0 20200922 (experimental) [master revision
> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> arm-none-eabi   Christophe LYON
>  Results for 11.0.0 20200922 (experimental) [master revision
> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> arm-none-eabi   Christophe LYON
>  Results for 11.0.0 20200922 (experimental) [master revision
> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> arm-none-eabi   Christophe LYON
>  Results for 11.0.0 20200922 (experimental) [master revision
> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> arm-none-eabi   Christophe LYON
>  Results for 11.0.0 20200922 (experimental) [master revision
> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> arm-none-eabi   Christophe LYON
>  Results for 11.0.0 20200922 (experimental) [master revision
> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> arm-none-eabi   Christophe LYON
>  Results for 11.0.0 20200922 (experimental) [master revision
> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> arm-none-eabi   Christophe LYON
>  Results for 11.0.0 20200922 (experimental) [master revision
> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> arm-none-eabi   Christophe LYON


Re: duplicate arm test results?

2020-09-23 Thread Christophe Lyon via Gcc
On Wed, 23 Sep 2020 at 01:47, Martin Sebor  wrote:
>
> On 9/22/20 9:15 AM, Christophe Lyon wrote:
> > On Tue, 22 Sep 2020 at 17:02, Martin Sebor  wrote:
> >>
> >> Hi Christophe,
> >>
> >> While checking recent test results I noticed many posts with results
> >> for various flavors of arm that at high level seem like duplicates
> >> of one another.
> >>
> >> For example, the batch below all have the same title, but not all
> >> of the contents are the same.  The details (such as test failures)
> >> on some of the pages are different.
> >>
> >> Can you help explain the differences?  Is there a way to avoid
> >> the duplication?
> >>
> >
> > Sure, I am aware that many results look the same...
> >
> >
> > If you look at the top of the report (~line 5), you'll see:
> > Running target myarm-sim
> > Running target 
> > myarm-sim/-mthumb/-mcpu=cortex-m3/-mfloat-abi=soft/-march=armv7-m
> > Running target 
> > myarm-sim/-mthumb/-mcpu=cortex-m0/-mfloat-abi=soft/-march=armv6s-m
> > Running target 
> > myarm-sim/-mcpu=cortex-a7/-mfloat-abi=hard/-march=armv7ve+simd
> > Running target 
> > myarm-sim/-mthumb/-mcpu=cortex-m7/-mfloat-abi=hard/-march=armv7e-m+fp.dp
> > Running target 
> > myarm-sim/-mthumb/-mcpu=cortex-m4/-mfloat-abi=hard/-march=armv7e-m+fp
> > Running target 
> > myarm-sim/-mthumb/-mcpu=cortex-m33/-mfloat-abi=hard/-march=armv8-m.main+fp+dsp
> > Running target 
> > myarm-sim/-mcpu=cortex-a7/-mfloat-abi=soft/-march=armv7ve+simd
> > Running target 
> > myarm-sim/-mthumb/-mcpu=cortex-a7/-mfloat-abi=hard/-march=armv7ve+simd
> >
> > For all of these, the first line of the report is:
> > LAST_UPDATED: Tue Sep 22 09:39:18 UTC 2020 (revision
> > r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c)
> > TARGET=arm-none-eabi CPU=default FPU=default MODE=default
> >
> > I have other combinations where I override the configure flags, eg:
> > LAST_UPDATED: Tue Sep 22 11:25:12 UTC 2020 (revision
> > r9-8928-gb3043e490896ea37cd0273e6e149c3eeb3298720)
> > TARGET=arm-none-linux-gnueabihf CPU=cortex-a9 FPU=neon-fp16 MODE=thumb
> >
> > I tried to see if I could fit something in the subject line, but that
> > didn't seem convenient (would be too long, and I fear modifying the
> > awk script)
>
> Without some indication of a difference in the title there's no way
> to know what result to look at, and checking all of them isn't really
> practical.  The duplication (and the sheer number of results) also
> make it more difficult to find results for targets other than arm-*.
> There are about 13,000 results for September and over 10,000 of those
> for arm-* alone.  It's good to have data but when there's this much
> of it, and when the only form of presentation is as a running list,
> it's too cumbersome to work with.
>

To help me track & report regressions, I build higher level reports like:
https://people.linaro.org/~christophe.lyon/cross-validation/gcc/trunk/0latest/report-build-info.html
where it's more obvious what configurations are tested.

Each line of such reports can send a message to gcc-testresults.

I can control when such emails are sent, independently for each line:
- never
- for daily bump
- for each validation

So, I can easily reduce the amount of emails (by disabling them for
some configurations),
but that won't make the subject more informative.
I included the short revision (rXX-) in the title to make it clearer.

The number of configurations has grown over time because we regularly
found regressions
in configurations not tested previously.

I can probably easily add the values of --with-cpu, --with-fpu,
--with-mode and RUNTESTFLAGS
as part of the [ revision rXX--Z] string in the title,
would that help?
I fear that's going to make very long subject lines.
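A sketch of what such an extended title could look like (the field names and the way they are combined are invented here, not what test_summary actually does):

```shell
# Hypothetical subject-line builder: append the configure knobs and
# RUNTESTFLAGS to the target triplet.
revision=r11-3343
host=arm-none-linux-gnueabihf
cpu=cortex-a9
fpu=neon-fp16
mode=thumb
runtestflags="-march=armv5t/-mthumb"

subject="Results for $revision on $host ($cpu/$fpu/$mode, $runtestflags)"
echo "$subject"
```

Even with this compact form the line already gets long, which illustrates the concern about subject length.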

It would probably be cleaner to update test_summary such that it adds
more info as part of $host
(as in "... testsuite on $host"), so that it grabs useful configure
parameters and runtestflags, however
this would be more controversial.

Christophe

> Martin
>
> >
> > I think HJ generates several "running targets" in the same log, I run
> > them separately to benefit from the compute farm I have access to.
> >
> > Christophe
> >
> >> Thanks
> >> Martin
> >>
> >> Results for 11.0.0 20200922 (experimental) [master revision
> >> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> >> arm-none-eabi   Christophe LYON
> >>
> >>   Results for 11.0.0 20200922 (experimental) [master revision
> >> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> >> arm-none-eabi   Christophe LYON
> >>   Results for 11.0.0 20200922 (experimental) [master revision
> >> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> >> arm-none-eabi   Christophe LYON
> >>   Results for 11.0.0 20200922 (experimental) [master revision
> >> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> >> arm-none-eabi   Christophe LYON
> >>   Results for 11.0.0 20200922 (experimental) [master revisio

Re: duplicate arm test results?

2020-09-23 Thread Christophe Lyon via Gcc
On Wed, 23 Sep 2020 at 12:26, Richard Earnshaw
 wrote:
>
> On 23/09/2020 11:20, Jakub Jelinek via Gcc wrote:
> > On Wed, Sep 23, 2020 at 10:22:52AM +0100, Richard Sandiford wrote:
> >> So that would give:
> >>
> >>   Results for 8.4.1 20200918 [r8-10517] on arm-none-linux-gnueabihf
> >>
> >> and hopefully free up some space at the end for the kind of thing
> >> you mention.
> >
> > Even that 8.4.1 20200918 is redundant, r8-10517 uniquely and shortly
> > identifies both the branch and commit.
> > So just
> > Results for r8-10517 on ...
> > and in ... also include something that uniquely identifies the
> > configuration.
> >
> >   Jakub
> >
>
> I was thinking similarly, but then realised anyone using snapshots
> rather than git might not have that information.
>
> If that's not the case, however, then simplifying this would be a great
> idea.
>
> On the other hand, I use subject filters in my mail to steer results to
> a separate folder, so I do need some invariant key in the subject line
> that is sufficient to match without (too many) false positives.
>

I always assumed there was a required format for the title/email
contents; is that documented somewhere?
There must be a smart filter to avoid spam; doesn't it require some
"keywords" in the title, for instance?

Same question for the gcc-regression list: is there a mandatory format?

Thanks,

Christophe

> R.


Re: duplicate arm test results?

2020-09-23 Thread Christophe Lyon via Gcc
On Wed, 23 Sep 2020 at 17:33, Martin Sebor  wrote:
>
> On 9/23/20 2:54 AM, Christophe Lyon wrote:
> > On Wed, 23 Sep 2020 at 01:47, Martin Sebor  wrote:
> >>
> >> On 9/22/20 9:15 AM, Christophe Lyon wrote:
> >>> On Tue, 22 Sep 2020 at 17:02, Martin Sebor  wrote:
> 
>  Hi Christophe,
> 
>  While checking recent test results I noticed many posts with results
>  for various flavors of arm that at high level seem like duplicates
>  of one another.
> 
>  For example, the batch below all have the same title, but not all
>  of the contents are the same.  The details (such as test failures)
>  on some of the pages are different.
> 
>  Can you help explain the differences?  Is there a way to avoid
>  the duplication?
> 
> >>>
> >>> Sure, I am aware that many results look the same...
> >>>
> >>>
> >>> If you look at the top of the report (~line 5), you'll see:
> >>> Running target myarm-sim
> >>> Running target 
> >>> myarm-sim/-mthumb/-mcpu=cortex-m3/-mfloat-abi=soft/-march=armv7-m
> >>> Running target 
> >>> myarm-sim/-mthumb/-mcpu=cortex-m0/-mfloat-abi=soft/-march=armv6s-m
> >>> Running target 
> >>> myarm-sim/-mcpu=cortex-a7/-mfloat-abi=hard/-march=armv7ve+simd
> >>> Running target 
> >>> myarm-sim/-mthumb/-mcpu=cortex-m7/-mfloat-abi=hard/-march=armv7e-m+fp.dp
> >>> Running target 
> >>> myarm-sim/-mthumb/-mcpu=cortex-m4/-mfloat-abi=hard/-march=armv7e-m+fp
> >>> Running target 
> >>> myarm-sim/-mthumb/-mcpu=cortex-m33/-mfloat-abi=hard/-march=armv8-m.main+fp+dsp
> >>> Running target 
> >>> myarm-sim/-mcpu=cortex-a7/-mfloat-abi=soft/-march=armv7ve+simd
> >>> Running target 
> >>> myarm-sim/-mthumb/-mcpu=cortex-a7/-mfloat-abi=hard/-march=armv7ve+simd
> >>>
> >>> For all of these, the first line of the report is:
> >>> LAST_UPDATED: Tue Sep 22 09:39:18 UTC 2020 (revision
> >>> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c)
> >>> TARGET=arm-none-eabi CPU=default FPU=default MODE=default
> >>>
> >>> I have other combinations where I override the configure flags, eg:
> >>> LAST_UPDATED: Tue Sep 22 11:25:12 UTC 2020 (revision
> >>> r9-8928-gb3043e490896ea37cd0273e6e149c3eeb3298720)
> >>> TARGET=arm-none-linux-gnueabihf CPU=cortex-a9 FPU=neon-fp16 MODE=thumb
> >>>
> >>> I tried to see if I could fit something in the subject line, but that
> >>> didn't seem convenient (would be too long, and I fear modifying the
> >>> awk script)
> >>
> >> Without some indication of a difference in the title there's no way
> >> to know what result to look at, and checking all of them isn't really
> >> practical.  The duplication (and the sheer number of results) also
> >> make it more difficult to find results for targets other than arm-*.
> >> There are about 13,000 results for September and over 10,000 of those
> >> for arm-* alone.  It's good to have data but when there's this much
> >> of it, and when the only form of presentation is as a running list,
> >> it's too cumbersome to work with.
> >>
> >
> > To help me track & report regressions, I build higher level reports like:
> > https://people.linaro.org/~christophe.lyon/cross-validation/gcc/trunk/0latest/report-build-info.html
> > where it's more obvious what configurations are tested.
>
> That looks awesome!  The regression indicator looks especially
> helpful.  I really wish we had an overview like this for all
> results.  I've been thinking about writing a script to scrape
> gcc-testresults and format an HTML table kind of like this for
> years.  With that, the number of posts sent to the list wouldn't
> be a problem (at least not for those using the page).  But it
> would require settling on a standard format for the basic
> parameters of each run.
>

It's probably easier to detect regressions and format reports from the
.sum files than by extracting them from the mailing list.
But your approach has the advantage that you can detect regressions
from reports sent by other people, not just your own.
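
As an illustration of the .sum-based approach, the core comparison can be quite small. This is a hypothetical sketch, not Linaro's actual tooling; real .sum files also contain headers, "Running target" lines and per-tool sections that a robust script would handle:

```python
# Sketch: find regressions between two DejaGnu .sum runs.
# A regression here means a test that was PASSing before and is
# FAIL/UNRESOLVED now; lines that are not test results are ignored.

DEJAGNU_STATUSES = ('PASS', 'FAIL', 'XPASS', 'XFAIL',
                    'UNRESOLVED', 'UNSUPPORTED', 'UNTESTED')

def parse_sum(lines):
    """Map test name -> status from .sum-style lines."""
    results = {}
    for line in lines:
        status, sep, name = line.partition(': ')
        if sep and status in DEJAGNU_STATUSES:
            results[name.strip()] = status
    return results

def regressions(old_lines, new_lines):
    """Tests that went from PASS to FAIL/UNRESOLVED."""
    old = parse_sum(old_lines)
    new = parse_sum(new_lines)
    return sorted(name for name, status in new.items()
                  if status in ('FAIL', 'UNRESOLVED')
                  and old.get(name) == 'PASS')
```

In practice this would be run over pairs of .sum files (e.g. from two revisions of the same configuration), one pair per "Running target".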


> >
> > Each line of such reports can send a message to gcc-testresults.
> >
> > I can control when such emails are sent, independently for each line:
> > - never
> > - for daily bump
> > - for each validation
> >
> > So, I can easily reduce the amount of emails (by disabling them for
> > some configurations),
> > but that won't make the subject more informative.
> > I included the short revision (rXX-) in the title to make it clearer.
> >
> > The number of configurations has grown over time because we regularly
> > found regressions
> > in configurations not tested previously.
> >
> > I can probably easily add the values of --with-cpu, --with-fpu,
> > --with-mode and RUNTESTFLAGS
> > as part of the [ revision rXX--Z] string in the title,
> > would that help?
> > I fear that's going to make very long subject lines.
> >
> > It would probably be cleaner to update test_summary such that it adds
> > more info as part of $host
> > (as in "... testsuite on $host"), so that it grabs useful configure
> > parameter

Re: duplicate arm test results?

2020-09-23 Thread Christophe Lyon via Gcc
On Wed, 23 Sep 2020 at 14:33, David Edelsohn  wrote:
>
> On Wed, Sep 23, 2020 at 8:26 AM Christophe Lyon via Gcc  
> wrote:
> >
> > On Wed, 23 Sep 2020 at 12:26, Richard Earnshaw
> >  wrote:
> > >
> > > On 23/09/2020 11:20, Jakub Jelinek via Gcc wrote:
> > > > On Wed, Sep 23, 2020 at 10:22:52AM +0100, Richard Sandiford wrote:
> > > >> So that would give:
> > > >>
> > > >>   Results for 8.4.1 20200918 [r8-10517] on arm-none-linux-gnueabihf
> > > >>
> > > >> and hopefully free up some space at the end for the kind of thing
> > > >> you mention.
> > > >
> > > > Even that 8.4.1 20200918 is redundant, r8-10517 uniquely and shortly
> > > > identifies both the branch and commit.
> > > > So just
> > > > Results for r8-10517 on ...
> > > > and in ... also include something that uniquely identifies the
> > > > configuration.
> > > >
> > > >   Jakub
> > > >
> > >
> > > I was thinking similarly, but then realised anyone using snapshots
> > > rather than git might not have that information.
> > >
> > > If that's not the case, however, then simplifying this would be a great
> > > idea.
> > >
> > > On the other hand, I use subject filters in my mail to steer results to
> > > a separate folder, so I do need some invariant key in the subject line
> > > that is sufficient to match without (too many) false positives.
> > >
> >
> > I always assumed there was a required format for the title/email
> > contents, is that documented somewhere?
> > There must be a smart filter to avoid spam, doesn't it require some
> > "keywords" in the title for instance?
> >
> > Same question for the gcc-regression list: is there a mandatory format?
>
> The format is generated by contrib/test_summary.

That's true for gcc-testresults, and I was wondering what would happen
if I modified test_summary. Would some mail filter need fixing too?

Regarding gcc-regression, I think only the Intel folks send messages there
(https://gcc.gnu.org/pipermail/gcc-regression/),
and they use different formats; hence I'm curious about the constraints.

>
> - David


Re: duplicate arm test results?

2020-09-24 Thread Christophe Lyon via Gcc

Re: duplicate arm test results?

2020-10-05 Thread Christophe Lyon via Gcc

Re: GCC 10.3 Release Candidate available from gcc.gnu.org

2021-04-04 Thread Christophe Lyon via Gcc
On Thu, 1 Apr 2021 at 14:35, Richard Biener  wrote:
>
>
> The first release candidate for GCC 10.3 is available from
>
>  https://gcc.gnu.org/pub/gcc/snapshots/10.3.0-RC-20210401/
>  ftp://gcc.gnu.org/pub/gcc/snapshots/10.3.0-RC-20210401/
>
> and shortly its mirrors.  It has been generated from git commit
> 892024d4af83b258801ff7484bf28f0cf1a1a999.
>
> I have so far bootstrapped and tested the release candidate on
> x86_64-linux.  Please test it and report any issues to bugzilla.
>
> If all goes well, I'd like to release 10.3 on Thursday, April 8th.


Hi,

Last week I committed Richard Earnshaw's fix for PR target/99773,
which affects gcc-10 (sorry, I didn't check that when I filed the PR;
I only realized later that 10.3 was so close to release).

I think it would be desirable to backport the patch to gcc-10:
https://gcc.gnu.org/git/gitweb.cgi?p=gcc.git;h=6f93a7c7fc62b2d6ab47e5d5eb60d41366e1ee9e

Is that too late?

Thanks

Christophe


config/dfp.m4 license?

2022-04-29 Thread Christophe Lyon via Gcc

Hi!

The config/dfp.m4 file does not have a license header. Several other .m4
files in the same directory have a GPL header; many others do not.


Can someone confirm the license of dfp.m4 and add the missing header if 
applicable?


Thanks!

Christophe


Re: Checks that autotools generated files were re-generated correctly

2023-11-06 Thread Christophe Lyon via Gcc
Hi!

On Mon, 6 Nov 2023 at 18:05, Martin Jambor  wrote:
>
> Hello,
>
> I have inherited Martin Liška's buildbot script that checks that all
> sorts of autotools generated files, mainly configure scripts, were
> re-generated correctly when appropriate.  While the checks are hopefully
> useful, they report issues surprisingly often and reporting them feels
> especially unproductive.
>
> Could such checks be added to our server side push hooks so that commits
> introducing these breakages would get refused automatically.  While the
> check might be a bit expensive, it only needs to be run on files
> touching the generated files and/or the files these are generated from.
>
> Alternatively, Maxim, you seem to have an infrastructure that is capable
> of sending email.  Would you consider adding the check to your buildbot
> instance and report issues automatically?  The level of totally

After the discussions we had during Cauldron, I actually thought we
should add such a bot.

Initially I was thinking about adding this as a "precommit" check, to
make sure the autogenerated files were submitted correctly, but I
realized that the policy is actually not to send autogenerated files
as part of the patch (thus making a pre-commit check impracticable in
such cases, unless we autogenerate those files after applying the
patch).

I understand you mean to run this as a post-commit bot, meaning we
would continue to "accept" broken commits, but now automatically send
a notification, asking for a prompt fix?

We can probably implement that, indeed. Is that the general agreement?
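
The core of such a post-commit bot could be quite small: run the regeneration step, then flag any checked-in file that no longer matches. A hypothetical sketch (the regeneration itself is elided, and function names are illustrative):

```python
# Sketch of the "did regeneration change anything?" check a post-commit
# bot could run; assumes the autotools regeneration step has already
# been executed in the work tree.
import subprocess

def changed_files(diff_output):
    """Parse 'git diff --name-only' output into a list of stale files."""
    return [line for line in diff_output.splitlines() if line.strip()]

def check_regeneration(repo_dir="."):
    """Return the checked-in files that no longer match their
    regenerated contents; an empty list means the commit is clean."""
    out = subprocess.run(["git", "diff", "--name-only"],
                         cwd=repo_dir, capture_output=True,
                         text=True, check=True).stdout
    return changed_files(out)
```

A bot would run this after each push and mail the commit author whenever the returned list is non-empty.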

Thanks,

Christophe

> false-positives should be low (I thought zero but see
> https://gcc.gnu.org/pipermail/gcc-patches/2023-November/635358.html).
>
> Thanks for any ideas which can lead to a mostly automated process.
>
> Martin


Re: Checks that autotools generated files were re-generated correctly

2023-11-07 Thread Christophe Lyon via Gcc
On Tue, 7 Nov 2023 at 15:36, Martin Jambor  wrote:
>
> Hello,
>
> On Tue, Nov 07 2023, Maxim Kuvyrkov wrote:
> >> On Nov 6, 2023, at 21:19, Christophe Lyon  wrote:
> >>
> >> Hi!
> >>
> >> On Mon, 6 Nov 2023 at 18:05, Martin Jambor  wrote:
> >>>
> >>> Hello,
> >>>
> >>> I have inherited Martin Liška's buildbot script that checks that all
> >>> sorts of autotools generated files, mainly configure scripts, were
> >>> re-generated correctly when appropriate.  While the checks are hopefully
> >>> useful, they report issues surprisingly often and reporting them feels
> >>> especially unproductive.
> >>>
> >>> Could such checks be added to our server side push hooks so that commits
> >>> introducing these breakages would get refused automatically.  While the
> >>> check might be a bit expensive, it only needs to be run on files
> >>> touching the generated files and/or the files these are generated from.
> >>>
> >>> Alternatively, Maxim, you seem to have an infrastructure that is capable
> >>> of sending email.  Would you consider adding the check to your buildbot
> >>> instance and report issues automatically?  The level of totally
> >>
> >> After the discussions we had during Cauldron, I actually thought we
> >> should add such a bot.
> >>
> >> Initially I was thinking about adding this as a "precommit" check, to
> >> make sure the autogenerated files were submitted correctly, but I
> >> realized that the policy is actually not to send autogenerated files
> >> as part of the patch (thus making pre-commit check impracticable in
> >> such cases, unless we autogenerate those files after applying the
> >> patch)
> >>
> >> I understand you mean to run this as a post-commit bot, meaning we
> >> would continue to "accept" broken commits, but now automatically send
> >> a notification, asking for a prompt fix?
>
> My thinking was that ideally bad commits would get refused early, like
> when you get your ChangeLog completely wrong, but if there are drawbacks
> to that approach, a completely automated notification system would be
> great too.
>
Well, making such checks in pre-commit CI means that authors should
include regenerated files in their patch submissions, so it seems this
would imply a policy change (not impossible, but it will likely take
some time to reach consensus).

> >>
> >> We can probably implement that, indeed. Is that the general agreement?
> >
> > [CC: Siddhesh, Carlos]
> >
> > Hi Martin,
> >
> > I agree with Christophe, and we can add various source-level checks
> > and wrap them up as a post-commit job.  The job will then send out
> > email reports to developers whose patches failed it.
>
> Thanks, automating this would be a huge improvement.
>
> >
> > Where the current script is located?  These checks would be useful for
> > all GNU Toolchain projects -- binutils/GDB, GCC, Glibc and, maybe,
> > Newlib -- so it would be useful to put it in a separate "gnutools"
> > repo.
>
> The test consists of running a python script that I'm pasting below in a
> directory with a current master branch and subsequently checking that
> "git diff" does not actually produce any diff (which currently does).

Great, I was thinking about writing something like that :-)

> You need to have locally built autotools utilities of exactly the right
> version.  The script (written by Martin Liška) is:
>
> -- 8< --
> #!/usr/bin/env python3
>
> import os
> import subprocess
> from pathlib import Path
>
> AUTOCONF_BIN = 'autoconf-2.69'
> AUTOMAKE_BIN = 'automake-1.15.1'
> ACLOCAL_BIN = 'aclocal-1.15.1'
> AUTOHEADER_BIN = 'autoheader-2.69'
>
> ENV = f'AUTOCONF={AUTOCONF_BIN} ACLOCAL={ACLOCAL_BIN} AUTOMAKE={AUTOMAKE_BIN}'
>
> config_folders = []
>
> for root, _, files in os.walk('.'):
>     for file in files:
>         if file == 'configure':
>             config_folders.append(Path(root).resolve())
>
> for folder in sorted(config_folders):
>     print(folder, flush=True)
>     os.chdir(folder)
>     configure_lines = open('configure.ac').read().splitlines()
>     if any(True for line in configure_lines if line.startswith('AC_CONFIG_HEADERS')):
>         subprocess.check_output(f'{ENV} {AUTOHEADER_BIN} -f', shell=True, encoding='utf8')
>     # apparently automake is somehow unstable -> skip it for gotools
>     if (any(True for line in configure_lines if line.startswith('AM_INIT_AUTOMAKE'))
>             and not str(folder).endswith('gotools')):
>         subprocess.check_output(f'{ENV} {AUTOMAKE_BIN} -f', shell=True, encoding='utf8')
>     subprocess.check_output(f'{ENV} {AUTOCONF_BIN} -f', shell=True, encoding='utf8')
>
> -- 8< --

Nice, thanks for sharing.

>
> > I think Siddhesh and Carlos are looking into creating such a repo on
> > gitlab?
>
> I guess this particular script may be even put into gcc's contrib
> directory.  But it can be put anywhere where it makes most sen

Help needed with maintainer-mode

2024-02-29 Thread Christophe Lyon via Gcc
Hi!

Sorry for cross-posting, but I'm not sure the rules/guidelines are the
same in gcc vs binutils/gdb.

TL;DR: are there some guidelines about how to use/enable maintainer-mode?

In the context of the Linaro CI, I've been looking at enabling
maintainer-mode at configure time in our configurations where we test
patches before they are committed (aka "precommit CI", which relies on
patchwork).

Indeed, auto-generated files are not part of patch submissions, and
when a patch implies regenerating some files before building, we
currently report wrong failures because we don't perform such updates.

I hoped improving this would be as simple as adding
--enable-maintainer-mode when configuring, after making sure
autoconf-2.69 and automake-1.15.1 were in the PATH (using our host's
libtool and gettext seems OK).

However, doing so triggered several problems, which look like race
conditions in the build system (we build at -j160):
- random build errors in binutils / gdb with messages like "No rule to
make target 'po/BLD-POTFILES.in'". I managed to reproduce something
similar manually once and noticed an empty Makefile; the build logs are
of course difficult to read, so I haven't yet figured out what could
cause this.

- random build failures in gcc in fixincludes. I think this is a race
condition because fixincludes is updated concurrently both from
/fixincludes and $builddir/fixincludes. Probably fixable in gcc
Makefiles.

- I've seen other errors when building gcc like
configure.ac:25: error: possibly undefined macro: AM_ENABLE_MULTILIB
from libquadmath. I haven't investigated this yet.

I've read binutils' README-maintainer-mode, which contains a warning
about distclean, but we don't use this: we start our builds from a
scratch directory.

So... I'm wondering if there are some "official" guidelines about how
to regenerate files, and/or use maintainer-mode?  Maybe I missed a
"magic" make target (eg 'make autoreconf-all') that should be executed
after configure and before 'make all'?

I've noticed that sourceware's buildbot has a small script
"autoregen.py" which does not use the project's build system, but
rather calls aclocal/autoheader/automake/autoconf in an ad-hoc way.
Should we replicate that?

Thanks,

Christophe


Re: Help needed with maintainer-mode

2024-02-29 Thread Christophe Lyon via Gcc
On Thu, 29 Feb 2024 at 11:41, Richard Earnshaw (lists)
 wrote:
>
> On 29/02/2024 10:22, Christophe Lyon via Gcc wrote:
> > Hi!
> >
> > Sorry for cross-posting, but I'm not sure the rules/guidelines are the
> > same in gcc vs binutils/gdb.
> >
> > TL;DR: are there some guidelines about how to use/enable maintainer-mode?
> >
> > In the context of the Linaro CI, I've been looking at enabling
> > maintainer-mode at configure time in our configurations where we test
> > patches before they are committed (aka "precommit CI", which relies on
> > patchwork).
> >
> > Indeed, auto-generated files are not part of patch submissions, and
> > when a patch implies regenerating some files before building, we
> > currently report wrong failures because we don't perform such updates.
> >
> > I hoped improving this would be as simple as adding
> > --enable-maintainer-mode when configuring, after making sure
> > autoconf-2.69 and automake-1.15.1 were in the PATH (using our host's
> > libtool and gettext seems OK).
> >
> > However, doing so triggered several problems, which look like race
> > conditions in the build system (we build at -j160):
> > - random build errors in binutils / gdb with messages like "No rule to
> > make target 'po/BLD-POTFILES.in". I managed to reproduce something
> > similar manually once, I noticed an empty Makefile; the build logs are
> > of course difficult to read, so I couldn't figure out yet what could
> > cause this.
> >
> > - random build failures in gcc in fixincludes. I think this is a race
> > condition because fixincludes is updated concurrently both from
> > /fixincludes and $buillddir/fixincludes. Probably fixable in gcc
> > Makefiles.
> >
> > - I've seen other errors when building gcc like
> > configure.ac:25: error: possibly undefined macro: AM_ENABLE_MULTILIB
> > from libquadmath. I haven't investigated this yet.
> >
> > I've read binutils' README-maintainer-mode, which contains a warning
> > about distclean, but we don't use this: we start our builds from a
> > scratch directory.
> >
> > So... I'm wondering if there are some "official" guidelines about how
> > to regenerate files, and/or use maintainer-mode?  Maybe I missed a
> > "magic" make target (eg 'make autoreconf-all') that should be executed
> > after configure and before 'make all'?
> >
> > I've noticed that sourceware's buildbot has a small script
> > "autoregen.py" which does not use the project's build system, but
> > rather calls aclocal/autoheader/automake/autoconf in an ad-hoc way.
> > Should we replicate that?
> >
> > Thanks,
> >
> > Christophe
>
> There are other potential gotchas as well, such as the manual copying of the 
> generated tm.texi back into the source repo due to relicensing.  Perhaps we 
> haven't encountered that one because patches generally contain that 
> duplicated output.
>
It did happen a few weeks ago, with a patch that was updating the
target hooks IIRC.

> If we want a CI to work reliably, then perhaps we should reconsider our 
> policy of stripping out regenerated code.  We have a number of developer 
> practices, such as replying to an existing patch with an updated version that 
> the CI can't handle easily (especially if the patch is part of a series), so 
> there may be space for a discussion on how to work smarter.
>
Sure, there are many things we can improve in the current workflow to
make it more CI friendly ;-)
But I was only asking how maintainer-mode is supposed to be used, so
that I can replicate the process in CI.
I couldn't find any documentation :-)

Thanks,

Christophe

> My calendar says we have a toolchain office hours meeting today, perhaps this 
> would be worth bringing up.
>
> R.
>


Re: Help needed with maintainer-mode

2024-02-29 Thread Christophe Lyon via Gcc
On Thu, 29 Feb 2024 at 12:00, Mark Wielaard  wrote:
>
> Hi Christophe,
>
> On Thu, Feb 29, 2024 at 11:22:33AM +0100, Christophe Lyon via Gcc wrote:
> > I've noticed that sourceware's buildbot has a small script
> > "autoregen.py" which does not use the project's build system, but
> > rather calls aclocal/autoheader/automake/autoconf in an ad-hoc way.
> > Should we replicate that?
>
> That python script works across gcc/binutils/gdb:
> https://sourceware.org/cgit/builder/tree/builder/containers/autoregen.py
>
> It is installed into a container file that has the exact autoconf and
> automake version needed to regenerate the autotool files:
> https://sourceware.org/cgit/builder/tree/builder/containers/Containerfile-autotools
>
> And it was indeed done this way because that way the files are
> regenerated in a reproducible way. Which wasn't the case when using 
> --enable-maintainer-mode (and autoreconf also doesn't work).

I see. So it is possibly incomplete, in the sense that it may lack
some of the steps that maintainer-mode would perform?
For instance, gas for aarch64 has some *opcodes*.c files that need
regenerating before committing. The regeneration step is enabled in
maintainer-mode, so I guess the autoregen bots on Sourceware would
miss a problem with these files?

Thanks,

Christophe

>
> It is run on all commits and warns if it detects a change in the
> (checked in) generated files.
> https://builder.sourceware.org/buildbot/#/builders/gcc-autoregen
> https://builder.sourceware.org/buildbot/#/builders/binutils-gdb-autoregen
>
> Cheers,
>
> Mark


Re: Help needed with maintainer-mode

2024-03-01 Thread Christophe Lyon via Gcc
On Thu, 29 Feb 2024 at 20:49, Thiago Jung Bauermann
 wrote:
>
>
> Hello,
>
> Christophe Lyon  writes:
>
> > I hoped improving this would be as simple as adding
> > --enable-maintainer-mode when configuring, after making sure
> > autoconf-2.69 and automake-1.15.1 were in the PATH (using our host's
> > libtool and gettext seems OK).
> >
> > However, doing so triggered several problems, which look like race
> > conditions in the build system (we build at -j160):
> > - random build errors in binutils / gdb with messages like "No rule to
> > make target 'po/BLD-POTFILES.in". I managed to reproduce something
> > similar manually once, I noticed an empty Makefile; the build logs are
> > of course difficult to read, so I couldn't figure out yet what could
> > cause this.
> >
> > - random build failures in gcc in fixincludes. I think this is a race
> > condition because fixincludes is updated concurrently both from
> > /fixincludes and $builddir/fixincludes. Probably fixable in gcc
> > Makefiles.
> >
> > - I've seen other errors when building gcc like
> > configure.ac:25: error: possibly undefined macro: AM_ENABLE_MULTILIB
> > from libquadmath. I haven't investigated this yet.
>
> I don't know about the last one, but regarding the race conditions, one
> workaround might be to define a make target that regenerates all files
> (if one doesn't exist already, I don't know) and make the CI call it
> with -j1 to avoid concurrency, and then do the regular build step with
> -j160.
>

Yes, that's what I meant below with "magic" make target ;-)

Thanks,

Christophe

> --
> Thiago


Re: Help needed with maintainer-mode

2024-03-01 Thread Christophe Lyon via Gcc
On Fri, 1 Mar 2024 at 14:08, Mark Wielaard  wrote:
>
> Hi Christophe,
>
> On Thu, 2024-02-29 at 18:39 +0100, Christophe Lyon wrote:
> > On Thu, 29 Feb 2024 at 12:00, Mark Wielaard  wrote:
> > > That python script works across gcc/binutils/gdb:
> > > https://sourceware.org/cgit/builder/tree/builder/containers/autoregen.py
> > >
> > > It is installed into a container file that has the exact autoconf and
> > > automake version needed to regenerate the autotool files:
> > > https://sourceware.org/cgit/builder/tree/builder/containers/Containerfile-autotools
> > >
> > > And it was indeed done this way because that way the files are
> > > regenerated in a reproducible way. Which wasn't the case when using 
> > > --enable-maintainer-mode (and autoreconf also doesn't work).
> >
> > I see. So it is possibly incomplete, in the sense that it may lack
> > some of the steps that maintainer-mode would perform?
> > For instance, gas for aarch64 has some *opcodes*.c files that need
> > regenerating before committing. The regeneration step is enabled in
> > maintainer-mode, so I guess the autoregen bots on Sourceware would
> > miss a problem with these files?
>
> Yes, it is certainly incomplete. But it is done this way because it is
> my understanding that even the gcc release maintainers do the
> automake/autoconf invocations by hand instead of running with configure
> --enable-maintainer-mode.

Indeed, I've just discovered that earlier today :-)

>
> Note that another part that isn't caught at the moment are the
> regeneration of the opt.urls files. There is a patch for that pending:
Indeed. I hadn't thought of it either. And just noticed it requires
the D frontend, which we don't build in CI.

> https://inbox.sourceware.org/buildbot/20231215005908.gc12...@gnu.wildebeest.org/
>
> But that is waiting for the actual opt.urls to be regenerated correctly
> first:
> https://inbox.sourceware.org/gcc-patches/20240224174258.gd1...@gnu.wildebeest.org/
> Ping David?
>
> It would be nice to have all these "regeneration targets" in one script
> that could be used by both the pre-commit and post-commit checkers.
>
Agreed.

> Cheers,
>
> Mark


Re: Help needed with maintainer-mode

2024-03-01 Thread Christophe Lyon via Gcc
On Fri, 1 Mar 2024 at 14:08, Mark Wielaard  wrote:
>
> Hi Christophe,
>
> On Thu, 2024-02-29 at 18:39 +0100, Christophe Lyon wrote:
> > On Thu, 29 Feb 2024 at 12:00, Mark Wielaard  wrote:
> > > That python script works across gcc/binutils/gdb:
> > > https://sourceware.org/cgit/builder/tree/builder/containers/autoregen.py
> > >
> > > It is installed into a container file that has the exact autoconf and
> > > automake version needed to regenerate the autotool files:
> > > https://sourceware.org/cgit/builder/tree/builder/containers/Containerfile-autotools
> > >
> > > And it was indeed done this way because that way the files are
> > > regenerated in a reproducible way. Which wasn't the case when using 
> > > --enable-maintainer-mode (and autoreconf also doesn't work).
> >
> > I see. So it is possibly incomplete, in the sense that it may lack
> > some of the steps that maintainer-mode would perform?
> > For instance, gas for aarch64 has some *opcodes*.c files that need
> > regenerating before committing. The regeneration step is enabled in
> > maintainer-mode, so I guess the autoregen bots on Sourceware would
> > miss a problem with these files?
>
> Yes, it is certainly incomplete. But it is done this way because it is
> my understanding that even the gcc release maintainers do the
> automake/autoconf invocations by hand instead of running with configure
> --enable-maintainer-mode.

After a discussion on IRC, I read
https://gcc.gnu.org/wiki/Regenerating_GCC_Configuration
which basically says "run autoreconf in every dir where there is a
configure script"
but this is not exactly what autoregen.py is doing. IIRC it is based
on a script from Martin Liska, do you know/remember why it follows a
different process?

Thanks,

Christophe

>
> Note that another part that isn't caught at the moment are the
> regeneration of the opt.urls files. There is a patch for that pending:
> https://inbox.sourceware.org/buildbot/20231215005908.gc12...@gnu.wildebeest.org/
>
> But that is waiting for the actual opt.urls to be regenerated correctly
> first:
> https://inbox.sourceware.org/gcc-patches/20240224174258.gd1...@gnu.wildebeest.org/
> Ping David?
>
> It would be nice to have all these "regeneration targets" in one script
> that could be used by both the pre-commit and post-commit checkers.
>
> Cheers,
>
> Mark


Re: Help needed with maintainer-mode

2024-03-04 Thread Christophe Lyon via Gcc
Hi!

On Mon, 4 Mar 2024 at 10:36, Thomas Schwinge  wrote:
>
> Hi!
>
> On 2024-03-04T00:30:05+, Sam James  wrote:
> > Mark Wielaard  writes:
> >> On Fri, Mar 01, 2024 at 05:32:15PM +0100, Christophe Lyon wrote:
> >>> [...], I read
> >>> https://gcc.gnu.org/wiki/Regenerating_GCC_Configuration
> >>> which basically says "run autoreconf in every dir where there is a
> >>> configure script"
> >>> but this is not exactly what autoregen.py is doing. IIRC it is based
> >>> on a script from Martin Liska, do you know/remember why it follows a
> >>> different process?
> >>
> >> CCing Sam and Arsen who helped refine the autoregen.py script, who
> >> might remember more details. We wanted a script that worked for both
> >> gcc and binutils-gdb. And as far as I know autoreconf simply didn't
> >> work in all directories. We also needed to skip some directories that
> >> did contain a configure script, but that were imported (gotools,
> >> readline, minizip).
> >
> > What we really need to do is, for a start, land tschwinge/aoliva's patches 
> > [0]
> > for AC_CONFIG_SUBDIRS.
>
> Let me allocate some time this week to get that one completed.
>
> > Right now, the current situation is janky and it's nowhere near
> > idiomatic autotools usage. It is not a comfortable experience
> > interacting with it even as someone who is familiar and happy with using
> > autotools otherwise.
> >
> > I didn't yet play with maintainer-mode myself but I also didn't see much
> > point given I knew of more fundamental problems like this.
> >
> > [0] 
> > https://inbox.sourceware.org/gcc-patches/oril72c4yh@lxoliva.fsfla.org/
>

Thanks for the background. I didn't follow that discussion at that time :-)

So... I was confused because I noticed many warnings when doing a simple
find . -name configure |while read f; do echo $f;d=$(dirname $f) &&
autoreconf -f $d && echo $d; done
as suggested by https://gcc.gnu.org/wiki/Regenerating_GCC_Configuration
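For reference, that one-liner can be written a bit more defensively as a small function. This is only a sketch: the skip list follows Mark's remark about imported directories (gotools, readline, minizip), and assuming it matches what autoregen.py actually skips; DRYRUN is a made-up knob for previewing which directories would be touched.

```shell
# regen_tree DIR: run "autoreconf -f" in every directory under DIR that
# contains a configure script, skipping imported directories that should
# not be regenerated (per the skip list mentioned for autoregen.py).
# With DRYRUN set, only print the directories instead of regenerating.
regen_tree () {
  for f in $(find "$1" -name configure -type f | sort); do
    d=$(dirname "$f")
    case "$d" in
      */gotools|*/readline|*/minizip) continue ;;   # imported code, skip
    esac
    if [ -n "$DRYRUN" ]; then
      echo "$d"
    else
      (cd "$d" && autoreconf -f)                    # force regeneration
    fi
  done
}
```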

Then I tried with autoregen.py, and saw the same and now just
checked Sourceware's bot logs and saw the same numerous warnings at
least in GCC (didn't check binutils yet). Looks like this is
"expected" 

I started looking at auto-regenerating these files in our CI a couple
of weeks ago, after we received several "complaints" from contributors
saying that our precommit CI was useless / bothering since it didn't
regenerate files, leading to false alarms.
But now I'm wondering how such contributors regenerate the files
impacted by their patches before committing, they probably just
regenerate things in their subdir of interest, not noticing the whole
picture :-(

As a first step, we can probably use autoregen.py too, and declare
maintainer-mode broken. However, I do notice that besides the rules
about regenerating configure/Makefile.in/..., maintainer-mode is also
used to update some files.
In gcc:
fixincludes: fixincl.x
libffi: doc/version.texi
libgfortran: some stuff :-)
libiberty: functions.texi

in binutils/bfd:
gdb/sim
bfd/po/SRC-POTFILES.in
bfd/po/BLD-POTFILES.in
bfd/bfd-in2.h
bfd/libbfd.h
bfd/libcoff.h
binutils/po/POTFILES.in
gas/po/POTFILES.in
opcodes/i386*.h
gdb/copying.c
gdb/data-directory/*.xml
gold/po/POTFILES.in
gprof/po/POTFILES.in
gprofng/doc/version.texi
ld/po/SRC-POTFILES.in
ld/po/BLD-POTFILES.in
ld: ldgram/ldlex... and all emulation sources
libiberty/functions.texi
opcodes/po/POTFILES.in
opcodes/aarch64-{asm,dis,opc}-2.c
opcodes/ia64 msp430 rl78 rx z8k decoders

How are these files "normally" expected to be updated? Do people just
manually uncomment the corresponding maintainer lines in the Makefiles
and update manually?   In particular we hit issues several times with
files under opcodes, that we don't regenerate currently. Maybe we
could build binutils in maintainer-mode at -j1 but, well

README-maintainer-mode in binutils/gdb only mentions a problem with
'make distclean' and maintainer mode
binutils/README-how-to-make-a-release indicates to use
--enable-maintainer-mode, and the sample 'make' invocations do not
include any -j flag; is that an indication that only -j1 is supposed
to work?
Similarly, the src-release.sh script does not use -j.

Thanks,

Christophe

>
> Grüße
>  Thomas


Re: Help needed with maintainer-mode

2024-03-04 Thread Christophe Lyon via Gcc
On Mon, 4 Mar 2024 at 12:25, Jonathan Wakely  wrote:
>
> On Mon, 4 Mar 2024 at 10:44, Christophe Lyon via Gcc  wrote:
> >
> > Hi!
> >
> > On Mon, 4 Mar 2024 at 10:36, Thomas Schwinge  wrote:
> > >
> > > Hi!
> > >
> > > On 2024-03-04T00:30:05+, Sam James  wrote:
> > > > Mark Wielaard  writes:
> > > >> On Fri, Mar 01, 2024 at 05:32:15PM +0100, Christophe Lyon wrote:
> > > >>> [...], I read
> > > >>> https://gcc.gnu.org/wiki/Regenerating_GCC_Configuration
> > > >>> which basically says "run autoreconf in every dir where there is a
> > > >>> configure script"
> > > >>> but this is not exactly what autoregen.py is doing. IIRC it is based
> > > >>> on a script from Martin Liska, do you know/remember why it follows a
> > > >>> different process?
> > > >>
> > > >> CCing Sam and Arsen who helped refine the autoregen.py script, who
> > > >> might remember more details. We wanted a script that worked for both
> > > >> gcc and binutils-gdb. And as far as I know autoreconf simply didn't
> > > >> work in all directories. We also needed to skip some directories that
> > > >> did contain a configure script, but that were imported (gotools,
> > > >> readline, minizip).
> > > >
> > > > What we really need to do is, for a start, land tschwinge/aoliva's 
> > > > patches [0]
> > > > for AC_CONFIG_SUBDIRS.
> > >
> > > Let me allocate some time this week to get that one completed.
> > >
> > > > Right now, the current situation is janky and it's nowhere near
> > > > idiomatic autotools usage. It is not a comfortable experience
> > > > interacting with it even as someone who is familiar and happy with using
> > > > autotools otherwise.
> > > >
> > > > I didn't yet play with maintainer-mode myself but I also didn't see much
> > > > point given I knew of more fundamental problems like this.
> > > >
> > > > [0] 
> > > > https://inbox.sourceware.org/gcc-patches/oril72c4yh@lxoliva.fsfla.org/
> > >
> >
> > Thanks for the background. I didn't follow that discussion at that time :-)
> >
> > So... I was confused because I noticed many warnings when doing a simple
> > find . -name configure |while read f; do echo $f;d=$(dirname $f) &&
> > autoreconf -f $d && echo $d; done
> > as suggested by https://gcc.gnu.org/wiki/Regenerating_GCC_Configuration
> >
> > Then I tried with autoregen.py, and saw the same and now just
> > checked Sourceware's bot logs and saw the same numerous warnings at
> > least in GCC (didn't check binutils yet). Looks like this is
> > "expected" 
> >
> > I started looking at auto-regenerating these files in our CI a couple
> > of weeks ago, after we received several "complaints" from contributors
> > saying that our precommit CI was useless / bothering since it didn't
> > regenerate files, leading to false alarms.
> > But now I'm wondering how such contributors regenerate the files
> > impacted by their patches before committing, they probably just
> > regenerate things in their subdir of interest, not noticing the whole
> > picture :-(
> >
> > As a first step, we can probably use autoregen.py too, and declare
> > maintainer-mode broken. However, I do notice that besides the rules
> > about regenerating configure/Makefile.in/..., maintainer-mode is also
> > used to update some files.
> > In gcc:
> > fixincludes: fixincl.x
> > libffi: doc/version.texi
> > libgfortran: some stuff :-)
> > libiberty: functions.texi
>
> My recently proposed patch adds the first of those to gcc_update, the
> other should be done too.
> https://gcc.gnu.org/pipermail/gcc-patches/2024-March/647027.html
>

This script touches files such that they appear more recent than their
dependencies,
so IIUC even if one uses --enable-maintainer-mode, it will have no effect.
For auto* files, this is "fine" as we can run autoreconf or
autoregen.py before starting configure+build, but what about other
files?
For instance, if we have to test a patch which implies changes to
fixincludes/fixincl.x, how should we proceed?
1- git checkout (with possibly "wrong" timestamps)
2- apply patch-to-test
3- contrib/gcc_update -t
4- configure --enable-maintainer-mode
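Those four steps might look like the following as a script. This is just a sketch of one reading of the workflow, not a documented gcc procedure; the source/build/patch paths are placeholder parameters, and whether configure plus make actually regenerates fixincl.x here is precisely the open question.

```shell
# test_with_maintainer_mode SRCDIR PATCH BUILDDIR:
# sketch of the 1-4 recipe above for testing a patch that implies
# changes to a maintainer-mode-generated file such as fixincludes/fixincl.x.
test_with_maintainer_mode () {
  srcdir=$1 patch=$2 builddir=$3
  # 1- clean checkout (with possibly "wrong" timestamps)
  git -C "$srcdir" checkout -- .
  # 2- apply patch-to-test
  patch -d "$srcdir" -p1 < "$patch"
  # 3- gcc_update -t: touch generated files so they look up to date
  (cd "$srcdir" && contrib/gcc_update --touch)
  # 4- configure with maintainer mode enabled, then build
  mkdir -p "$builddir"
  (cd "$builddir" && "$srcdir"/configure --enable-maintainer-mode && make)
}
```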

I guess --enable-maintainer-mode would be largely (if not comple

Re: Help needed with maintainer-mode

2024-03-04 Thread Christophe Lyon via Gcc
On Mon, 4 Mar 2024 at 16:41, Richard Earnshaw  wrote:
>
>
>
> On 04/03/2024 15:36, Richard Earnshaw (lists) wrote:
> > On 04/03/2024 14:46, Christophe Lyon via Gcc wrote:
> >> On Mon, 4 Mar 2024 at 12:25, Jonathan Wakely  wrote:
> >>>
> >>> On Mon, 4 Mar 2024 at 10:44, Christophe Lyon via Gcc  
> >>> wrote:
> >>>>
> >>>> Hi!
> >>>>
> >>>> On Mon, 4 Mar 2024 at 10:36, Thomas Schwinge  
> >>>> wrote:
> >>>>>
> >>>>> Hi!
> >>>>>
> >>>>> On 2024-03-04T00:30:05+, Sam James  wrote:
> >>>>>> Mark Wielaard  writes:
> >>>>>>> On Fri, Mar 01, 2024 at 05:32:15PM +0100, Christophe Lyon wrote:
> >>>>>>>> [...], I read
> >>>>>>>> https://gcc.gnu.org/wiki/Regenerating_GCC_Configuration 
> >>>>>>>> <https://gcc.gnu.org/wiki/Regenerating_GCC_Configuration>
> >>>>>>>> which basically says "run autoreconf in every dir where there is a
> >>>>>>>> configure script"
> >>>>>>>> but this is not exactly what autoregen.py is doing. IIRC it is based
> >>>>>>>> on a script from Martin Liska, do you know/remember why it follows a
> >>>>>>>> different process?
> >>>>>>>
> >>>>>>> CCing Sam and Arsen who helped refine the autoregen.py script, who
> >>>>>>> might remember more details. We wanted a script that worked for both
> >>>>>>> gcc and binutils-gdb. And as far as I know autoreconf simply didn't
> >>>>>>> work in all directories. We also needed to skip some directories that
> >>>>>>> did contain a configure script, but that were imported (gotools,
> >>>>>>> readline, minizip).
> >>>>>>
> >>>>>> What we really need to do is, for a start, land tschwinge/aoliva's 
> >>>>>> patches [0]
> >>>>>> for AC_CONFIG_SUBDIRS.
> >>>>>
> >>>>> Let me allocate some time this week to get that one completed.
> >>>>>
> >>>>>> Right now, the current situation is janky and it's nowhere near
> >>>>>> idiomatic autotools usage. It is not a comfortable experience
> >>>>>> interacting with it even as someone who is familiar and happy with 
> >>>>>> using
> >>>>>> autotools otherwise.
> >>>>>>
> >>>>>> I didn't yet play with maintainer-mode myself but I also didn't see 
> >>>>>> much
> >>>>>> point given I knew of more fundamental problems like this.
> >>>>>>
> >>>>>> [0] 
> >>>>>> https://inbox.sourceware.org/gcc-patches/oril72c4yh@lxoliva.fsfla.org/
> >>>>>>  
> >>>>>> <https://inbox.sourceware.org/gcc-patches/oril72c4yh@lxoliva.fsfla.org/>
> >>>>>
> >>>>
> >>>> Thanks for the background. I didn't follow that discussion at that time 
> >>>> :-)
> >>>>
> >>>> So... I was confused because I noticed many warnings when doing a simple
> >>>> find . -name configure |while read f; do echo $f;d=$(dirname $f) &&
> >>>> autoreconf -f $d && echo $d; done
> >>>> as suggested by https://gcc.gnu.org/wiki/Regenerating_GCC_Configuration 
> >>>> <https://gcc.gnu.org/wiki/Regenerating_GCC_Configuration>
> >>>>
> >>>> Then I tried with autoregen.py, and saw the same and now just
> >>>> checked Sourceware's bot logs and saw the same numerous warnings at
> >>>> least in GCC (didn't check binutils yet). Looks like this is
> >>>> "expected" 
> >>>>
> >>>> I started looking at auto-regenerating these files in our CI a couple
> >>>> of weeks ago, after we received several "complaints" from contributors
> >>>> saying that our precommit CI was useless / bothering since it didn't
> >>>> regenerate files, leading to false alarms.
> >>>> But now I'm wondering how such contributors regenerate the files
> >>>> impacted by their patches before committing, they probably jus

[RFC] add regenerate Makefile target

2024-03-13 Thread Christophe Lyon via Gcc
Hi!

After recent discussions on IRC and on the lists about maintainer-mode
and various problems with auto-generated source files, I've written
this small prototype.

Based on those discussions, I assumed that people generally want to
update autotools files using a script similar to autoregen.py, which
takes care of running aclocal, autoheader, automake and autoconf as
appropriate.

What is currently missing is a "simple" way of regenerating other
files, which happens normally with --enable-maintainer-mode (which is
reportedly broken).  This patch adds a "regenerate" Makefile target
which can be called to update those files, provided
--enable-maintainer-mode is used.

I tried this approach with the following workflow for binutils/gdb:
- run autoregen.py in srcdir
- cd builddir
- configure --enable-maintainer-mode 
- make all-bfd all-libiberty regenerate -j1
- for gdb: make all -C gdb/data-directory -j1
- make all -jXXX

Making 'all' in bfd and libiberty is needed by some XXX-gen host
programs in opcodes.

The advantage (for instance for CI) is that we can regenerate files at
-j1, thus avoiding the existing race conditions, and build the rest
with -j XXX.
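
For CI, the workflow above might be wrapped as follows. A sketch only: autoregen.py, the make targets, and the -j160 default all come from this thread, but the function name and directory parameters are placeholders, not an established recipe.

```shell
# ci_build SRCDIR BUILDDIR [JOBS]: regenerate at -j1 (avoiding the race
# conditions), then build the rest in parallel.
ci_build () {
  srcdir=$1 builddir=$2 jobs=${3:-160}
  # refresh configure/Makefile.in & co in the source tree
  (cd "$srcdir" && ./autoregen.py)
  mkdir -p "$builddir" && cd "$builddir" || return 1
  "$srcdir"/configure --enable-maintainer-mode
  # bfd/libiberty first: needed by the XXX-gen host programs in opcodes
  make all-bfd all-libiberty regenerate -j1
  # gdb's generated data files
  make all -C gdb/data-directory -j1
  # parallel build proper
  make all -j"$jobs"
}
```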

Among drawbacks:
- most sub-components use Makefile.am, but gdb does not: this may make
  maintenance more complex (different rules for different projects)
- maintaining such ad-hoc "regenerate" rules would require special
  attention from maintainers/reviewers
- dependency on all-bfd and all-libiberty is probably not fully
   intuitive, but should not be a problem if the "regenerate" rules
   are used after a full build for instance

Of course Makefile.def/Makefile.tpl would need further cleanup as I
didn't try to take gcc into account in this patch.

Thoughts?

Thanks,

Christophe


---
 Makefile.def |   37 +-
 Makefile.in  | 1902 ++
 Makefile.tpl |7 +
 bfd/Makefile.am  |1 +
 bfd/Makefile.in  |1 +
 binutils/Makefile.am |1 +
 binutils/Makefile.in |1 +
 gas/Makefile.am  |1 +
 gas/Makefile.in  |1 +
 gdb/Makefile.in  |1 +
 gold/Makefile.am |2 +-
 gold/Makefile.in |2 +-
 gprof/Makefile.am|1 +
 gprof/Makefile.in|1 +
 ld/Makefile.am   |1 +
 ld/Makefile.in   |1 +
 opcodes/Makefile.am  |2 +
 opcodes/Makefile.in  |2 +
 18 files changed, 1952 insertions(+), 13 deletions(-)

diff --git a/Makefile.def b/Makefile.def
index 3e00a729a0c..42e71a9ffa2 100644
--- a/Makefile.def
+++ b/Makefile.def
@@ -39,7 +39,8 @@ host_modules= { module= binutils; bootstrap=true; };
 host_modules= { module= bison; no_check_cross= true; };
 host_modules= { module= cgen; };
 host_modules= { module= dejagnu; };
-host_modules= { module= etc; };
+host_modules= { module= etc;
+missing= regenerate; };
 host_modules= { module= fastjar; no_check_cross= true; };
 host_modules= { module= fixincludes; bootstrap=true;
missing= TAGS;
@@ -73,7 +74,8 @@ host_modules= { module= isl; lib_path=.libs; bootstrap=true;
no_install= true; };
 host_modules= { module= gold; bootstrap=true; };
 host_modules= { module= gprof; };
-host_modules= { module= gprofng; };
+host_modules= { module= gprofng;
+missing= regenerate; };
 host_modules= { module= gettext; bootstrap=true; no_install=true;
 module_srcdir= "gettext/gettext-runtime";
// We always build gettext with pic, because some packages 
(e.g. gdbserver)
@@ -95,7 +97,8 @@ host_modules= { module= tcl;
 missing=mostlyclean; };
 host_modules= { module= itcl; };
 host_modules= { module= ld; bootstrap=true; };
-host_modules= { module= libbacktrace; bootstrap=true; };
+host_modules= { module= libbacktrace; bootstrap=true;
+missing= regenerate; };
 host_modules= { module= libcpp; bootstrap=true; };
 // As with libiconv, don't install any of libcody
 host_modules= { module= libcody; bootstrap=true;
@@ -110,9 +113,11 @@ host_modules= { module= libcody; bootstrap=true;
missing= install-dvi;
missing=TAGS; };
 host_modules= { module= libdecnumber; bootstrap=true;
-   missing=TAGS; };
+   missing=TAGS;
+missing= regenerate; };
 host_modules= { module= libgui; };
 host_modules= { module= libiberty; bootstrap=true;
+missing= regenerate;

extra_configure_flags='@extra_host_libiberty_configure_flags@';};
 // Linker plugins may need their own build of libiberty; see
 // gcc/doc/install.texi.  We take care that this build of libiberty doesn't get
@@ -134,16 +139,22 @@ host_modules= { module= libiconv;
missing= install-html;
missing= install-info; };
 host_modules= { module= m4; };
-host_modules= { module= readline; };
+host_modules= { module= readline;
+missing= regenerate; };
 host_modules= { module= sid; };
-host_modules= { module= sim; }

Re: [RFC] add regenerate Makefile target

2024-03-15 Thread Christophe Lyon via Gcc
On Thu, 14 Mar 2024 at 19:10, Simon Marchi  wrote:
>
>
>
> On 2024-03-13 04:02, Christophe Lyon via Gdb wrote:
> > Hi!
> >
> > After recent discussions on IRC and on the lists about maintainer-mode
> > and various problems with auto-generated source files, I've written
> > this small prototype.
> >
> > Based on those discussions, I assumed that people generally want to
> > update autotools files using a script similar to autoregen.py, which
> > takes care of running aclocal, autoheader, automake and autoconf as
> > appropriate.
> >
> > What is currently missing is a "simple" way of regenerating other
> > files, which happens normally with --enable-maintainer-mode (which is
> > reportedly broken).  This patch adds a "regenerate" Makefile target
> > which can be called to update those files, provided
> > --enable-maintainer-mode is used.
> >
> > I tried this approach with the following workflow for binutils/gdb:
> > - run autoregen.py in srcdir
> > - cd builddir
> > - configure --enable-maintainer-mode
> > - make all-bfd all-libiberty regenerate -j1
> > - for gdb: make all -C gdb/data-directory -j1
> > - make all -jXXX
> >
> > Making 'all' in bfd and libiberty is needed by some XXX-gen host
> > programs in opcodes.
> >
> > The advantage (for instance for CI) is that we can regenerate files at
> > -j1, thus avoiding the existing race conditions, and build the rest
> > with -j XXX.
> >
> > Among drawbacks:
> > - most sub-components use Makefile.am, but gdb does not: this may make
> >   maintenance more complex (different rules for different projects)
> > - maintaining such ad-hoc "regenerate" rules would require special
> >   attention from maintainers/reviewers
> > - dependency on -all-bfd and all-libiberty is probably not fully
> >intuitive, but should not be a problem if the "regenerate" rules
> >are used after a full build for instance
> >
> > Of course Makefile.def/Makefile.tpl would need further cleanup as I
> > didn't try to take gcc into account in this patch.
> >
> > Thoughts?
>
> My first thought is: why is it a Makefile target, instead of some script
> on the side (like autoregen.sh).  It would be nice / useful to be
> able to do it without configuring / building anything.  For instance, the
> autoregen buildbot job could run it without configuring anything.
> Ideally, the buildbot wouldn't maintain its own autoregen.py file on the
> side, it would just use whatever is in the repo.

Firstly because of what you mention later: some regeneration steps
require building host tools first, like the XXX-gen in opcodes.

Since the existing Makefiles already contain the rules to autoregen
all these files, it seemed natural to me to reuse them, to avoid
reinventing the wheel with the risk of introducing new bugs.

This involves changes in places where I've never looked at before, so
I'd rather reuse as much existing support as possible.

For instance, there are the generators in opcodes/, but also things in
sim/, bfd/, updates to the docs and potfiles. In gcc, there's also
something "unusual" in fixincludes/ and libgfortran/

In fact, I considered also including 'configure', 'Makefile.in',
etc... in the 'regenerate' target, it does not seem natural to me to
invoke a script on the side, where you have to replicate the behaviour
of existing Makefiles, possibly getting out-of-sync when someone
forgets to update either Makefile or autoregen.py. What is currently
missing is a way to easily regenerate files without having to run a
full 'make all' (which currently takes care of calling autoconf &
friends to update configure/Makefile.in).

But yeah, having to configure before being able to regenerate files is
a bit awkward too :-)


>
> Looking at the rule to re-generate copying.c in gdb for instance:
>
> # Make copying.c from COPYING
> $(srcdir)/copying.c: @MAINTAINER_MODE_TRUE@ $(srcdir)/../COPYING3 
> $(srcdir)/copying.awk
>awk -f $(srcdir)/copying.awk \
> < $(srcdir)/../COPYING3 > $(srcdir)/copying.tmp
>mv $(srcdir)/copying.tmp $(srcdir)/copying.c
>
> There is nothing in this code that requires having configured the source
> tree.  This code could for instance be moved to some
> generate-copying-c.sh script.  generate-copying-c.sh could be called by
> an hypothetical autoregen.sh script, as well as the copying.c Makefile
> target, if we want to continue supporting the maintainer mode.
Wouldn't it be more obscure than now? Currently such build rules are
all in the relevant Makefile. You'd have to open several scripts to
discover what's involved with updating copying.c

>
> Much like your regenerate targets, an autoregen.sh script in a given
> directory would be responsible to re-generate all the files in this
> directory that are generated and checked in git.  It would also be
> responsible to call any autoregen.sh file in subdirectories.
Makefiles already have all that in place :-)
Except if you consider that you'd want to ignore timestamps and always
regenerate th

Re: [RFC] add regenerate Makefile target

2024-03-18 Thread Christophe Lyon via Gcc
On Fri, 15 Mar 2024 at 15:13, Eric Gallager  wrote:
>
> On Fri, Mar 15, 2024 at 4:53 AM Christophe Lyon via Gcc  
> wrote:
> >
> > On Thu, 14 Mar 2024 at 19:10, Simon Marchi  wrote:
> > >
> > >
> > >
> > > On 2024-03-13 04:02, Christophe Lyon via Gdb wrote:
> > > > Hi!
> > > >
> > > > After recent discussions on IRC and on the lists about maintainer-mode
> > > > and various problems with auto-generated source files, I've written
> > > > this small prototype.
> > > >
> > > > Based on those discussions, I assumed that people generally want to
> > > > update autotools files using a script similar to autoregen.py, which
> > > > takes care of running aclocal, autoheader, automake and autoconf as
> > > > appropriate.
> > > >
> > > > What is currently missing is a "simple" way of regenerating other
> > > > files, which happens normally with --enable-maintainer-mode (which is
> > > > reportedly broken).  This patch adds a "regenerate" Makefile target
> > > > which can be called to update those files, provided
> > > > --enable-maintainer-mode is used.
> > > >
> > > > I tried this approach with the following workflow for binutils/gdb:
> > > > - run autoregen.py in srcdir
> > > > - cd builddir
> > > > - configure --enable-maintainer-mode
> > > > - make all-bfd all-libiberty regenerate -j1
> > > > - for gdb: make all -C gdb/data-directory -j1
> > > > - make all -jXXX
> > > >
> > > > Making 'all' in bfd and libiberty is needed by some XXX-gen host
> > > > programs in opcodes.
> > > >
> > > > The advantage (for instance for CI) is that we can regenerate files at
> > > > -j1, thus avoiding the existing race conditions, and build the rest
> > > > with -j XXX.
> > > >
> > > > Among drawbacks:
> > > > - most sub-components use Makefile.am, but gdb does not: this may make
> > > >   maintenance more complex (different rules for different projects)
> > > > - maintaining such ad-hoc "regenerate" rules would require special
> > > >   attention from maintainers/reviewers
> > > > - dependency on all-bfd and all-libiberty is probably not fully
> > > >intuitive, but should not be a problem if the "regenerate" rules
> > > >are used after a full build for instance
> > > >
> > > > Of course Makefile.def/Makefile.tpl would need further cleanup as I
> > > > didn't try to take gcc into account in this patch.
> > > >
> > > > Thoughts?
> > >
> > > My first thought is: why is it a Makefile target, instead of some script
> > > on the side (like autoregen.sh).  It would be nice / useful to be
> > > able to run it without configuring / building anything.  For instance, the
> > > autoregen buildbot job could run it without configuring anything.
> > > Ideally, the buildbot wouldn't maintain its own autoregen.py file on the
> > > side, it would just use whatever is in the repo.
> >
> > Firstly because of what you mention later: some regeneration steps
> > require building host tools first, like the XXX-gen in opcodes.
> >
> > Since the existing Makefiles already contain the rules to autoregen
> > all these files, it seemed natural to me to reuse them, to avoid
> > reinventing the wheel with the risk of introducing new bugs.
> >
> > This involves changes in places where I've never looked at before, so
> > I'd rather reuse as much existing support as possible.
> >
> > For instance, there are the generators in opcodes/, but also things in
> > sim/, bfd/, updates to the docs and potfiles. In gcc, there's also
> > something "unusual" in fixincludes/ and libgfortran/
> >
> > In fact, I considered also including 'configure', 'Makefile.in',
> > etc... in the 'regenerate' target, it does not seem natural to me to
> > invoke a script on the side, where you have to replicate the behaviour
> > of existing Makefiles, possibly getting out-of-sync when someone
> > forgets to update either Makefile or autoregen.py. What is currently
> > missing is a way to easily regenerate files without having to run a
> > full 'make all' (which currently takes care of calling autoconf &
> > friends to update configure/Makefile.in).
> >
> > But y

Re: [RFC] add regenerate Makefile target

2024-03-18 Thread Christophe Lyon via Gcc
On Sat, 16 Mar 2024 at 18:16, Simon Marchi  wrote:
>
>
>
> On 2024-03-15 04:50, Christophe Lyon via Gdb wrote:
> > On Thu, 14 Mar 2024 at 19:10, Simon Marchi  wrote:
> >> My first thought is: why is it a Makefile target, instead of some script
> >> on the side (like autoregen.sh).  It would be nice / useful to be
> >> able to run it without configuring / building anything.  For instance, the
> >> autoregen buildbot job could run it without configuring anything.
> >> Ideally, the buildbot wouldn't maintain its own autoregen.py file on the
> >> side, it would just use whatever is in the repo.
> >
> > Firstly because of what you mention later: some regeneration steps
> > require building host tools first, like the XXX-gen in opcodes.
>
> "build" and not "host", I think?

yes, sorry

> > Since the existing Makefiles already contain the rules to autoregen
> > all these files, it seemed natural to me to reuse them, to avoid
> > reinventing the wheel with the risk of introducing new bugs.
>
> I understand.  Although one advantage of moving the actual code out of
> the Makefile (even if there's still a Makefile rule calling the external
> script), is that it's much easier to maintain.  Editors are much more
> useful when editing a standalone shell script than editing shell code in
> a Makefile target.  It doesn't have to be this big one liner if you want
> to use variables, you don't need to escape $, you can run it through
> linters, you can call it by hand, etc.  This is what I did here, for
> instance:
>
> https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=f39632d9579d3c97f1e50a728efed3c5409747d2
>
> So I think there's value in any case of moving the regeneration logic
> out of the Makefiles per se.
>
In this case, the generation rules look simple enough indeed.
But as mentioned elsewhere in the thread, there are more complex
cases, which involve building helper tools, which have dependencies on
bfd and libiberty for instance. I'm not sure that's easily/naturally
scriptable?
There's also 'chew' in bfd/

> > This involves changes in places where I've never looked at before, so
> > I'd rather reuse as much existing support as possible.
> >
> > For instance, there are the generators in opcodes/, but also things in
> > sim/, bfd/, updates to the docs and potfiles. In gcc, there's also
> > something "unusual" in fixincludes/ and libgfortran/
> >
> > In fact, I considered also including 'configure', 'Makefile.in',
> > etc... in the 'regenerate' target, it does not seem natural to me to
> > invoke a script on the side, where you have to replicate the behaviour
> > of existing Makefiles, possibly getting out-of-sync when someone
> > forgets to update either Makefile or autoregen.py.
>
> I'm not sure I follow.  Are you referring to the rules that automake
> automatically puts to re-generate Makefile.in and others when
> Makefile.am has changed?  Your regenerate target would depend on those
> builtin rules?
Yes, "regenerate" would include "configure, Makeifile.in, configh"
(as/if needed) in its list of dependencies.

>
> Let's say my generate-autostuff.sh script does:
>
>   aclocal --some-flags
>   automake --some-other-flags
>   autoconf --some-other-other-flags
>
> And the checked-in Makefile.in is regenerated based on that.  Wouldn't
> the built-in rules just call aclocal/automake/autoconf with those same
> flags?  I don't see why they would get out of sync.
Well, the rule to regenerate Makefile.in (eg in opcodes/) is a bit
more complex than just calling automake. IIUC it calls 'automake
--foreign' if any of the *.m4 files from $(am__configure_deps) is
newer than Makefile.in (with an early exit in the loop), does nothing
if Makefile.am or doc/local.mk is newer than Makefile.in, and then
calls 'automake --foreign Makefile'

I've never looked closely at that rule (I suppose it does what it's
intended to do ;-) ), but why not call automake once in $srcdir then
once in $top_srcdir?
TBH I'd rather not spend ages figuring out all this magic :-)

But yeah, maybe some careful looking at these rules might lead to a
couple of simple shell lines.
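The generated rule essentially reduces to a timestamp loop; here is a toy
sketch of just that logic (file names assumed, and the real rule re-runs
automake instead of setting a flag):

```shell
#!/bin/sh
# Toy reproduction of the timestamp check automake emits for
# Makefile.in: regenerate when Makefile.am or any .m4 dependency
# is newer than Makefile.in, with an early exit from the loop.
set -e
dir=$(mktemp -d)
cd "$dir"
touch Makefile.in
sleep 1                 # make sure the next file is strictly newer
touch Makefile.am

needs_regen=no
for dep in Makefile.am *.m4; do
  [ -e "$dep" ] || continue      # skip unmatched globs
  if [ "$dep" -nt Makefile.in ]; then
    needs_regen=yes
    break                        # early exit, as in the generated rule
  fi
done
echo "needs_regen=$needs_regen"
```

In the real rule the `needs_regen=yes` branch runs `automake --foreign`
instead of printing.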


>
> > What is currently
> > missing is a way to easily regenerate files without having to run a
> > full 'make all' (which currently takes care of calling autoconf &
> > friends to update configure/Makefile.in).
> >
> > But yeah, having to configure before being able to regenerate files is
> > a bit awkward too :-)
>
> I understand the constraints you are working with, and I guess that
> doing:
>
>   ./configure && make regenerate
>
> is not too bad.  The buildbot could probably do that... except that
> it would need a way to force regenerate everything, ignoring the
> timestamps.  Perhaps this option of GNU make would work?
>
>-B, --always-make
> Unconditionally make all targets.
I noticed that option when writing my previous message, maybe that would work.
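A minimal sketch of what -B changes, assuming GNU make is installed (the
Makefile and file names are made up):

```shell
#!/bin/sh
# Show that `make -B` re-runs a rule even when the target is up to date.
set -e
dir=$(mktemp -d)
cd "$dir"
cat > Makefile <<'EOF'
generated.c: generator.in ; cp generator.in generated.c
EOF
echo 'int x;' > generator.in
make generated.c >/dev/null          # first build, rule runs
out1=$(make generated.c)             # up to date, rule skipped
out2=$(make -B generated.c)          # timestamps ignored, rule runs again
echo "plain:  $out1"
echo "forced: $out2"
```

So a CI job that must force-regenerate everything could run its
regeneration targets under `make -B` without touching any timestamps
first.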

> >> Looking at the rule to re-generate copying.c in gdb for instance:
> >>
> >> # Make copying.c 

Re: [RFC] add regenerate Makefile target

2024-03-18 Thread Christophe Lyon via Gcc
On Fri, 15 Mar 2024 at 15:25, Tom Tromey  wrote:
>
> > "Eric" == Eric Gallager  writes:
>
> Eric> Also there are the files generated by cgen, too, which no one seems to
> Eric> know how to regenerate, either.
>
> I thought I sent out some info on this a while ago.
>
> Anyway what I do is make a symlink to the cgen source tree in the
> binutils-gdb source tree, then configure with --enable-cgen-maint.
> Then I make sure to build with 'make GUILE=guile3.0'.
>
> It could be better but that would require someone to actually work on
> cgen.
>
> Eric> And then in bfd there's that chew
> Eric> program in the doc subdir. And then in the binutils subdirectory
> Eric> proper there's that sysinfo tool for generating sysroff.[ch].
>
> gdb used to use a mish-mash of different approaches, some quite strange,
> but over the last few years we standardized on Python scripts that
> generate files.  They're written to be seamless -- just invoke in the
> source dir; the output is then just part of your patch.  No special
> configure options are needed.  On the whole this has been a big
> improvement.
>
Good to know that this is perceived as a big improvement, that's a
strong argument for moving to a script.

I'm not up-to-date with gdb's policy about patches: are they supposed
to be posted with or without the regenerated parts included?
IIUC they are not included in patch submissions for binutils and gcc,
which makes the pre-commit CI miss some patches.

Thanks,

Christophe

> Tom


Re: [RFC] add regenerate Makefile target

2024-03-19 Thread Christophe Lyon via Gcc
Hi,

On Mon, 18 Mar 2024 at 18:25, Christophe Lyon
 wrote:
>
> On Sat, 16 Mar 2024 at 18:16, Simon Marchi  wrote:
> >
> >
> >
> > On 2024-03-15 04:50, Christophe Lyon via Gdb wrote:
> > > On Thu, 14 Mar 2024 at 19:10, Simon Marchi  wrote:
> > >> My first thought is: why is it a Makefile target, instead of some script
> > >> on the side (like autoregen.sh).  It would be nice / useful to be
> > >> able to run it without configuring / building anything.  For instance, the
> > >> autoregen buildbot job could run it without configuring anything.
> > >> Ideally, the buildbot wouldn't maintain its own autoregen.py file on the
> > >> side, it would just use whatever is in the repo.
> > >
> > > Firstly because of what you mention later: some regeneration steps
> > > require building host tools first, like the XXX-gen in opcodes.
> >
> > "build" and not "host", I think?
>
> yes, sorry
>
> > > Since the existing Makefiles already contain the rules to autoregen
> > > all these files, it seemed natural to me to reuse them, to avoid
> > > reinventing the wheel with the risk of introducing new bugs.
> >
> > I understand.  Although one advantage of moving the actual code out of
> > the Makefile (even if there's still a Makefile rule calling the external
> > script), is that it's much easier to maintain.  Editors are much more
> > useful when editing a standalone shell script than editing shell code in
> > a Makefile target.  It doesn't have to be this big one liner if you want
> > to use variables, you don't need to escape $, you can run it through
> > linters, you can call it by hand, etc.  This is what I did here, for
> > instance:
> >
> > https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=f39632d9579d3c97f1e50a728efed3c5409747d2
> >
> > So I think there's value in any case of moving the regeneration logic
> > out of the Makefiles per se.
> >
> In this case, the generation rules look simple enough indeed.
> But as mentioned elsewhere in the thread, there are more complex
> cases, which involve building helper tools, which have dependencies on
> bfd and libiberty for instance. I'm not sure that's easily/naturally
> scriptable?
> There's also 'chew' in bfd/
>
> > > This involves changes in places where I've never looked at before, so
> > > I'd rather reuse as much existing support as possible.
> > >
> > > For instance, there are the generators in opcodes/, but also things in
> > > sim/, bfd/, updates to the docs and potfiles. In gcc, there's also
> > > something "unusual" in fixincludes/ and libgfortran/
> > >
> > > In fact, I considered also including 'configure', 'Makefile.in',
> > > etc... in the 'regenerate' target, it does not seem natural to me to
> > > invoke a script on the side, where you have to replicate the behaviour
> > > of existing Makefiles, possibly getting out-of-sync when someone
> > > forgets to update either Makefile or autoregen.py.
> >
> > I'm not sure I follow.  Are you referring to the rules that automake
> > automatically puts to re-generate Makefile.in and others when
> > Makefile.am has changed?  Your regenerate target would depend on those
> > builtin rules?
> Yes, "regenerate" would include "configure, Makeifile.in, configh"
> (as/if needed) in its list of dependencies.
>
> >
> > Let's say my generate-autostuff.sh script does:
> >
> >   aclocal --some-flags
> >   automake --some-other-flags
> >   autoconf --some-other-other-flags
> >
> > And the checked-in Makefile.in is regenerated based on that.  Wouldn't
> > the built-in rules just call aclocal/automake/autoconf with those same
> > flags?  I don't see why they would get out of sync.
> Well, the rule to regenerate Makefile.in (eg in opcodes/) is a bit
> more complex than just calling automake. IIUC it calls 'automake
> --foreign' if any of the *.m4 files from $(am__configure_deps) is
> newer than Makefile.in (with an early exit in the loop), does nothing
> if Makefile.am or doc/local.mk is newer than Makefile.in, and then
> calls 'automake --foreign Makefile'
>
> I've never looked closely at that rule (I suppose it does what it's
> intended to do ;-) ), but why not call automake once in $srcdir then
> once in $top_srcdir?
> TBH I'd rather not spend ages figuring out all this magic :-)
>
> But yeah, maybe some careful looking at these rules might lead to a
> couple of simple shell lines.
>

I looked a bit more closely at gcc, and noticed that ACLOCAL_AMFLAGS
is given different values at various parts of the source tree:
-I $(top_srcdir) -I $(top_srcdir)/config
-I ../config
-I ../config -I ..
-I ./config -I ../config
-I .. -I ../../config
-I .. -I ../config
-I ../.. -I ../../config
-I . -I .. -I ../config
-I m4

not sure if the current autoregen.py is in sync with that?
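One quick way to survey those values is to grep every Makefile.am in the
tree; a sketch with fabricated sample directories (a real check would
scan the actual source tree the same way):

```shell
#!/bin/sh
# Collect every ACLOCAL_AMFLAGS setting in a tree, to compare against
# what a regeneration script (e.g. autoregen.py) assumes.
set -e
dir=$(mktemp -d)
mkdir -p "$dir/opcodes" "$dir/bfd"
echo 'ACLOCAL_AMFLAGS = -I .. -I ../config' > "$dir/opcodes/Makefile.am"
echo 'ACLOCAL_AMFLAGS = -I . -I .. -I ../config' > "$dir/bfd/Makefile.am"

# -H prefixes each hit with its file name, so differing values stand out.
flags=$(find "$dir" -name Makefile.am -exec grep -H '^ACLOCAL_AMFLAGS' {} +)
echo "$flags" | sort
```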

Also... I discovered the existence of an automake rule,
am--refresh, which IIUC is intended to automate the update of Makefile
and its dependencies.

I'm by no means an autotool expert :-)

Christophe

>
> >
> > > What is currently
> > > missing is a way to easily regene

Re: [RFC] add regenerate Makefile target

2024-03-21 Thread Christophe Lyon via Gcc
On Wed, 20 Mar 2024 at 16:34, Simon Marchi  wrote:
>
> On 3/18/24 13:25, Christophe Lyon wrote:
> > Well, the rule to regenerate Makefile.in (eg in opcodes/) is a bit
> > more complex than just calling automake. IIUC it calls 'automake
> > --foreign' if any of the *.m4 files from $(am__configure_deps) is
> > newer than Makefile.in (with an early exit in the loop), does nothing
> > if Makefile.am or doc/local.mk is newer than Makefile.in, and then
> > calls 'automake --foreign Makefile'
>
> The rules look complex because they've been generated by automake, this
> Makefile.in is not written by hand.  And I guess automake has put
> `--foreign` there because foreign is used in Makefile.am:
Yes, I know :-)

>
>   AUTOMAKE_OPTIONS = foreign no-dist
>
> But a simple call to `automake -f` (or `autoreconf -f`) just works, as
> automake picks up the foreign option from AUTOMAKE_OPTIONS, so a human
> or an external script who wants to regenerate things would probably just
> use that.

Indeed. I guess my concern is: if some change happens to
Makefile.am/Makefile.in which would imply that 'autoreconf -f' would
not work, how do we make sure autoregen.py (or whatever script) is
updated accordingly? Or maybe whatever change is made to
Makefile.am/Makefile.in, 'autoreconf -f' is supposed to handle it
without additional flag?

>
> > The bot I want to put in place would regenerate things as they are
> > supposed to be, then build and run the testsuite to make sure that
> > what is supposed to be committed would work (if the committer
> > regenerates everything correctly)
>
> For your job, would it be fine to just force-regenerate everything and
> ignore timestamps (just like the buildbot's autoregen job wants to do)?
> It would waste a few cycles, but it would be much simpler.
>
Yes, that would achieve the purpose: be able to handle as many patches
as possible in precommit-CI.
And as described earlier, for binutils this currently means:
autoregen
configure --enable-maintainer-mode
make all (with a low -j value otherwise we have random build failures)
and my proposal to work around the problem with -j is to do
make all-bfd all-libiberty regenerate -j1
make all -j XXX

Another possibility would be a policy change in how patches are
submitted, to require that they contain all the autogenerated files.
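The two-phase scheme above can be sketched with a toy Makefile (targets
and files are made up; the real 'regenerate' target would run autotools
and the opcodes generators):

```shell
#!/bin/sh
# Sketch: serialize the race-prone regeneration step at -j1, then let
# the ordinary build run at full parallelism.
set -e
dir=$(mktemp -d)
cd "$dir"
cat > Makefile <<'EOF'
regenerate: gen.stamp
gen.stamp: ; echo regen > gen.stamp
all: gen.stamp ; echo build > build.stamp
EOF
make regenerate -j1 >/dev/null   # racy generators, forced serial
make all -j8 >/dev/null          # everything else, parallel
ls gen.stamp build.stamp
```

With the generated files already up to date after the first invocation,
the parallel `make all` never races on them.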


> Simon


Re: [RFC] add regenerate Makefile target

2024-03-25 Thread Christophe Lyon via Gcc
On Thu, 21 Mar 2024 at 15:32, Christophe Lyon
 wrote:
>
> On Wed, 20 Mar 2024 at 16:34, Simon Marchi  wrote:
> >
> > On 3/18/24 13:25, Christophe Lyon wrote:
> > > Well, the rule to regenerate Makefile.in (eg in opcodes/) is a bit
> > > more complex than just calling automake. IIUC it calls 'automake
> > > --foreign' if any of the *.m4 files from $(am__configure_deps) is
> > > newer than Makefile.in (with an early exit in the loop), does nothing
> > > if Makefile.am or doc/local.mk is newer than Makefile.in, and then
> > > calls 'automake --foreign Makefile'
> >
> > The rules look complex because they've been generated by automake, this
> > Makefile.in is not written by hand.  And I guess automake has put
> > `--foreign` there because foreign is used in Makefile.am:
> Yes, I know :-)
>
> >
> >   AUTOMAKE_OPTIONS = foreign no-dist
> >
> > But a simple call to `automake -f` (or `autoreconf -f`) just works, as
> > automake picks up the foreign option from AUTOMAKE_OPTIONS, so a human
> > or an external script who wants to regenerate things would probably just
> > use that.
>
> Indeed. I guess my concern is: if some change happens to
> Makefile.am/Makefile.in which would imply that 'autoreconf -f' would
> not work, how do we make sure autoregen.py (or whatever script) is
> updated accordingly? Or maybe whatever change is made to
> Makefile.am/Makefile.in, 'autoreconf -f' is supposed to handle it
> without additional flag?
>
I think I've just noticed a variant of this: if you look at
opcodes/Makefile.in, you can see that aclocal.m4 depends on
configure.ac (among others). So if configure.ac is updated, a
maintainer-mode rule in Makefile.in will call aclocal and regenerate
aclocal.m4.

However, autoregen.py calls aclocal only if configure.ac contains
AC_CONFIG_MACRO_DIRS, which is not the case here.

That's probably a bug in opcodes/configure.ac, but still the current
Makefile.in machinery would update aclocal.m4 as needed when
autoregen.py will not.

I haven't audited all configure.ac but there are probably other
occurrences of this.
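An audit for this mismatch could look like the following sketch: it
flags directories where aclocal.m4 is out of date with respect to
configure.ac while configure.ac lacks AC_CONFIG_MACRO_DIRS (directory
layout fabricated for illustration; the heuristic mirrors the mismatch
described above, not autoregen.py's actual code):

```shell
#!/bin/sh
# Flag dirs where maintainer-mode would re-run aclocal but a
# AC_CONFIG_MACRO_DIRS-based script would not.
set -e
dir=$(mktemp -d)
mkdir -p "$dir/opcodes"
touch "$dir/opcodes/aclocal.m4"
sleep 1                                      # ensure configure.ac is newer
echo 'AC_INIT([opcodes], [1.0])' > "$dir/opcodes/configure.ac"

suspect=""
for ac in "$dir"/*/configure.ac; do
  d=$(dirname "$ac")
  if [ "$ac" -nt "$d/aclocal.m4" ] && \
     ! grep -q AC_CONFIG_MACRO_DIRS "$ac"; then
    suspect="$suspect $d"
  fi
done
echo "suspect dirs:$suspect"
```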

Christophe

> >
> > > The bot I want to put in place would regenerate things as they are
> > > supposed to be, then build and run the testsuite to make sure that
> > > what is supposed to be committed would work (if the committer
> > > regenerates everything correctly)
> >
> > For your job, would it be fine to just force-regenerate everything and
> > ignore timestamps (just like the buildbot's autoregen job wants to do)?
> > It would waste a few cycles, but it would be much simpler.
> >
> Yes, that would achieve the purpose: be able to handle as many patches
> as possible in precommit-CI.
> And as described earlier, for binutils this currently means:
> autoregen
> configure --enable-maintainer-mode
> make all (with a low -j value otherwise we have random build failures)
> and my proposal to work around the problem with -j is to do
> make all-bfd all-libiberty regenerate -j1
> make all -j XXX
>
> Another possibility would be a policy change in how patches are
> submitted, to require that they contain all the autogenerated files.
>
>
> > Simon


Re: Building Single Tree for a Specific Set of CFLAGS

2024-03-27 Thread Christophe Lyon via Gcc

Hi!

On 3/26/24 22:52, Joel Sherrill via Gcc wrote:

Hi

For RTEMS, we normally build a multilib'ed gcc+newlib, but I have a case
where the CPU model is something not covered by our multilibs and not one
we are keen to add. I've looked around but not found anything that makes me
feel confident.

What's the magic for building a gcc+newlib with a single set of libraries
that are built for a specific CPU CFLAGS?

I am trying --disable-multilib on the gcc configure and adding
CFLAGS_FOR_TARGET to make.

Advice appreciated.



I would configure GCC with --disable-multilib --with-cpu=XXX 
--with-mode=XXX --with-float=XXX [maybe --with-fpu=XXX]

This way GCC defaults to what you want.

Thanks,

Christophe



Thanks.

--joel


Re: [RFC] add regenerate Makefile target

2024-03-27 Thread Christophe Lyon via Gcc
On Tue, 26 Mar 2024 at 16:42, Jens Remus  wrote:
>
> Am 15.03.2024 um 09:50 schrieb Christophe Lyon:
> > On Thu, 14 Mar 2024 at 19:10, Simon Marchi  wrote:
> >> On 2024-03-13 04:02, Christophe Lyon via Gdb wrote:
> ...
> >> There's just the issue of files that are generated using tools that are
> >> compiled.  When experimenting with maintainer mode the other day, I
> >> stumbled on the opcodes/i386-gen, for instance.  I don't have a good
> >> solution to that, except to rewrite these tools in a scripting language
> >> like Python.
> >
> > So for opcodes, it currently means rewriting such programs for i386,
> > aarch64, ia64 and luckily msp430/rl78/rx share the same opc2c
> > generator.
> > Not sure how to find volunteers?
>
> Why are those generated source files checked into the repository and not
> generated at build-time? Would there be a reason for s390 do so as well
> (opcodes/s390-opc.tab is generated at build-time from
> opcodes/s390-opc.txt using s390-mkopc built from opcodes/s390-mkopc.c)?
>
I remember someone mentioned a requirement of being able to rebuild
with the sources on a read-only filesystem.
I don't know if there's a requirement that such generated files should
be part of the source tree though. Is opcodes/s390-opc.tab in builddir
or in srcdir?

I think there are other motivations but I can't remember them at the moment :-)

Thanks,

Christophe

> Thanks and regards,
> Jens
> --
> Jens Remus
> Linux on Z Development (D3303) and z/VSE Support
> +49-7031-16-1128 Office
> jre...@de.ibm.com
>
> IBM
>
> IBM Deutschland Research & Development GmbH; Vorsitzender des
> Aufsichtsrats: Wolfgang Wendt; Geschäftsführung: David Faller; Sitz der
> Gesellschaft: Böblingen; Registergericht: Amtsgericht Stuttgart, HRB 243294
> IBM Data Privacy Statement: https://www.ibm.com/privacy/


Re: [RFC] add regenerate Makefile target

2024-03-27 Thread Christophe Lyon via Gcc
Hi!


On Mon, 25 Mar 2024 at 15:19, Christophe Lyon
 wrote:
>
> On Thu, 21 Mar 2024 at 15:32, Christophe Lyon
>  wrote:
> >
> > On Wed, 20 Mar 2024 at 16:34, Simon Marchi  wrote:
> > >
> > > On 3/18/24 13:25, Christophe Lyon wrote:
> > > > Well, the rule to regenerate Makefile.in (eg in opcodes/) is a bit
> > > > more complex than just calling automake. IIUC it calls 'automake
> > > > --foreign' if any of the *.m4 files from $(am__configure_deps) is
> > > > newer than Makefile.in (with an early exit in the loop), does nothing
> > > > if Makefile.am or doc/local.mk is newer than Makefile.in, and then
> > > > calls 'automake --foreign Makefile'
> > >
> > > The rules look complex because they've been generated by automake, this
> > > Makefile.in is not written by hand.  And I guess automake has put
> > > `--foreign` there because foreign is used in Makefile.am:
> > Yes, I know :-)
> >
> > >
> > >   AUTOMAKE_OPTIONS = foreign no-dist
> > >
> > > But a simple call to `automake -f` (or `autoreconf -f`) just works, as
> > > automake picks up the foreign option from AUTOMAKE_OPTIONS, so a human
> > > or an external script who wants to regenerate things would probably just
> > > use that.
> >
> > Indeed. I guess my concern is: if some change happens to
> > Makefile.am/Makefile.in which would imply that 'autoreconf -f' would
> > not work, how do we make sure autoregen.py (or whatever script) is
> > updated accordingly? Or maybe whatever change is made to
> > Makefile.am/Makefile.in, 'autoreconf -f' is supposed to handle it
> > without additional flag?
> >
> I think I've just noticed a variant of this: if you look at
> opcodes/Makefile.in, you can see that aclocal.m4 depends on
> configure.ac (among others). So if configure.ac is updated, a
> maintainer-mode rule in Makefile.in will call aclocal and regenerate
> aclocal.m4.
>
> However, autoregen.py calls aclocal only if configure.ac contains
> AC_CONFIG_MACRO_DIRS, which is not the case here.
>
> That's probably a bug in opcodes/configure.ac, but still the current
> Makefile.in machinery would update aclocal.m4 as needed when
> autoregen.py will not.
>
> I haven't audited all configure.ac but there are probably other
> occurrences of this.
>

As another follow-up on this topic, while working on a tentative GCC
patch to implement this, I realized an obvious issue: all target
libraries configure steps depend on 'all-gcc' (of course, we need a
compiler to build the libs...)

So the idea of doing roughly:
- configure --enable-maintainer-mode
- make regenerate -j1  (to avoid current race conditions in maintainer-mode)
- make all -jXXX

means that the regenerate step will trigger the configure step for all
host and target subdirs as needed, and configuring target-libs
requires building 'all-gcc', which would happen at -j1!

sigh :-)

Looks like we should handle binutils, gdb, and gcc differently for the
sake of precommit CI.

Thanks,

Christophe



> Christophe
>
> > >
> > > > The bot I want to put in place would regenerate things as they are
> > > > supposed to be, then build and run the testsuite to make sure that
> > > > what is supposed to be committed would work (if the committer
> > > > regenerates everything correctly)
> > >
> > > For your job, would it be fine to just force-regenerate everything and
> > > ignore timestamps (just like the buildbot's autoregen job wants to do)?
> > > It would waste a few cycles, but it would be much simpler.
> > >
> > Yes, that would achieve the purpose: be able to handle as many patches
> > as possible in precommit-CI.
> > And as described earlier, for binutils this currently means:
> > autoregen
> > configure --enable-maintainer-mode
> > make all (with a low -j value otherwise we have random build failures)
> > and my proposal to work around the problem with -j is to do
> > make all-bfd all-libiberty regenerate -j1
> > make all -j XXX
> >
> > Another possibility would be a policy change in how patches are
> > submitted, to require that they contain all the autogenerated files.
> >
> >
> > > Simon


Re: Building Single Tree for a Specific Set of CFLAGS

2024-03-28 Thread Christophe Lyon via Gcc




On 3/27/24 20:07, Joel Sherrill wrote:



On Wed, Mar 27, 2024 at 3:53 AM Christophe Lyon via Gcc  wrote:


Hi!

On 3/26/24 22:52, Joel Sherrill via Gcc wrote:
 > Hi
 >
 > For RTEMS, we normally build a multilib'ed gcc+newlib, but I have
a case
 > where the CPU model is something not covered by our multilibs and
not one
 > we are keen to add. I've looked around but not found anything
that makes me
 > feel confident.
 >
 > What's the magic for building a gcc+newlib with a single set of
libraries
 > that are built for a specific CPU CFLAGS?
 >
 > I am trying --disable-multilib on the gcc configure and adding
 > CFLAGS_FOR_TARGET to make.
 >
 > Advice appreciated.
 >

I would configure GCC with --disable-multilib --with-cpu=XXX
--with-mode=XXX --with-float=XXX [maybe --with-fpu=XXX]
This way GCC defaults to what you want.


Thanks. Is there any documentation or even a good example? I found
--with-mode=[arm|thumb] but am having trouble mapping the others back
to GCC options.


I don't know of any good doc/example.
I look in gcc/config.gcc to check what is supported.



I have this for CFLAGS_FOR_TARGET

"-mcpu=cortex-m7 -mthumb -mlittle-endian -mfloat-abi=hard 
-mfpu=fpv5-sp-d16 -march=armv7e-m+fpv5"


I think that means...

--with-mode=thumb    for -mthumb
--with-cpu=cortex-m7 for -mcpu=cortex-m7
--with-float=hard    for -mfloat-abi=hard

That leaves a few options I don't know how to map.


You can see that for arm:
supported_defaults="arch cpu float tune fpu abi mode tls"
so there's a --with-XXX for any of the above, meaning that there's no 
--with-endian (default endianness on arm is derived from the target 
triplet eg. armeb-* vs arm-*)


Also note that config.gcc checks that you don't provide both
--with-cpu and --with-arch
or --with-cpu and --with-tune
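To keep the mapping in one place, a tiny helper can assemble (and here
just print) the configure line from the desired settings. The option
names follow the mapping discussed above and should be double-checked
against gcc/config.gcc; note the GCC option is spelled
--disable-multilib, and --with-cpu must not be combined with
--with-arch or --with-tune:

```shell
#!/bin/sh
# Build the configure invocation from the target CFLAGS discussed in
# this thread (cortex-m7, thumb, hard float). Printed, not executed.
set -e
cpu=cortex-m7
mode=thumb
float=hard
fpu=fpv5-sp-d16

cmd="configure --disable-multilib --with-cpu=$cpu --with-mode=$mode --with-float=$float --with-fpu=$fpu"
echo "$cmd"
```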

HTH,

Christophe


--joel


Thanks,

Christophe


 > Thanks.
 >
 > --joel



Patches submission policy change

2024-04-03 Thread Christophe Lyon via Gcc
Dear release managers and developers,

TL;DR: For the sake of improving precommit CI coverage and simplifying
workflows, I’d like to request a patch submission policy change, so
that we now include regenerated files. This was discussed during the
last GNU toolchain office hours meeting [1] (2024-03-28).

Benefits of this change include:
- Increased compatibility with precommit CI
- No need to manually edit patches before submitting, thus the “git
send-email” workflow is simplified
- Patch reviewers can be confident that the committed patch will be
exactly what they approved
- Precommit CI can test exactly what has been submitted

Any concerns/objections?

As discussed on the lists and during the meeting, we have been facing
issues with testing a class of patches: those which imply regenerating
some files. Indeed, for binutils and gcc, the current patch submission
policy is to *not* include the regenerated files (IIUC the policy is
different for gdb [2]).

This means that precommit CI receives an incomplete patch, leading to
wrong and misleading regression reports, and complaints/frustration.
(our notifications now include a warning, making it clear that we do
not regenerate files ;-) )

I thought the solution was as easy as adding --enable-maintainer-mode
to the configure arguments but this has proven to be broken (random
failures with highly parallel builds).  I tried to work around that by
adding new “regenerate” rules in the makefiles, that we could build at
-j1 before running the real build with a higher parallelism level, but
this is not ideal, not to mention that in the case of gcc, configuring
target libraries requires having built all-gcc before, which is quite
slow at -j1.

Another approach used in binutils and gdb buildbots is a dedicated
script (aka autoregen.py) which calls autotools as appropriate. It
could be extended to update other types of files, but this can be a
bit tricky (eg. some opcodes files require to build a generator first,
some doc fragments also use non-trivial build sequences), and it
creates a maintenance issue: the build recipe is duplicated in the
script and in the makefiles.  Such a script has proven to be useful
though in postcommit CI, to catch regeneration errors.

Having said that, it seems that for the sake of improving usefulness
of precommit CI, the simplest way forward is to change the policy to
include regenerated files.  This does not seem to be a burden for
developers, since they have to regenerate the files before pushing
their patches anyway, and it also enables reviewers to make sure the
generated files are correct.

Said differently, if you want to increase the chances of having your
patches tested by precommit CI, make sure to include all the
regenerated files, otherwise you might receive failure notifications.

Any concerns/objections?

Thanks,

Christophe

[1] https://gcc.gnu.org/wiki/OfficeHours#Meeting:_2024-03-28_.40_1100h_EST5EDT
[2] 
https://inbox.sourceware.org/gdb/cc0a5c86-a041-429a-9890-efd393760...@simark.ca/


Re: Patches submission policy change

2024-04-03 Thread Christophe Lyon via Gcc
On Wed, 3 Apr 2024 at 10:30, Jan Beulich  wrote:
>
> On 03.04.2024 10:22, Christophe Lyon wrote:
> > Dear release managers and developers,
> >
> > TL;DR: For the sake of improving precommit CI coverage and simplifying
> > workflows, I’d like to request a patch submission policy change, so
> > that we now include regenerated files. This was discussed during the
> > last GNU toolchain office hours meeting [1] (2024-03-28).
> >
> > Benefits of this change include:
> > - Increased compatibility with precommit CI
> > - No need to manually edit patches before submitting, thus the “git
> > send-email” workflow is simplified
> > - Patch reviewers can be confident that the committed patch will be
> > exactly what they approved
> > - Precommit CI can test exactly what has been submitted
> >
> > Any concerns/objections?
>
> Yes: Patch size. And no, not sending patches inline is bad practice.
Not sure what you mean? Do you mean sending patches as attachments is
bad practice?

> Even assuming sending patches bi-modal (inline and as attachment) works
> (please indicate whether that's the case), it would mean extra work on
> the sending side.
>
From the CI perspective, we use what patchwork is able to detect as patches.
Looking at recent patches submissions, it seems patchwork is able to
cope with the output of git format-patch/git send-email, as well as
attachments.
There are cases where patchwork is not able to detect the patch, but I
don't know patchwork's exact specifications.

Thanks,

Christophe

> Jan


Re: Patches submission policy change

2024-04-03 Thread Christophe Lyon via Gcc
On Wed, 3 Apr 2024 at 12:21, Jan Beulich  wrote:
>
> On 03.04.2024 10:57, Richard Biener wrote:
> > On Wed, 3 Apr 2024, Jan Beulich wrote:
> >> On 03.04.2024 10:45, Jakub Jelinek wrote:
> >>> On Wed, Apr 03, 2024 at 10:22:24AM +0200, Christophe Lyon wrote:
>  Any concerns/objections?
> >>>
> >>> I'm all for it, in fact I've been sending it like that myself for years
> >>> even when the policy said not to.  In most cases, the diff for the
> >>> regenerated files is very small and it helps even in patch review to
> >>> actually check if the configure.ac/m4 etc. changes result just in the
> >>> expected changes and not some unrelated ones (e.g. caused by user using
> >>> wrong version of autoconf/automake etc.).
> >>> There can be exceptions, e.g. when in GCC we update from a new version
> >>> of Unicode, the regenerated ucnid.h diff can be large and
> >>> uname2c.h can be huge, such that it can trigger the mailing list size
> >>> limits even when the patch is compressed, see e.g.
> >>> https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636427.html
> >>> https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636426.html
> >>> But I think most configure or Makefile changes should be pretty small,
> >>> usual changes shouldn't rewrite everything in those files.
> >>
> >> Which may then call for a policy saying "include generated script diffs,
> >> but don't include generated data file ones"? At least on the binutils
> >> side, dealing (for CI) with what e.g. opcodes/*-gen produce ought to be
> >> possible by having something along the lines of "maintainer mode light".
> >
> > I'd say we should send generated files when it fits the mailing list
> > limits (and possibly simply lift those limits?).
>
> Well, that would allow patches making it through, but it would still
> severely increase overall size. I'm afraid more people than not also
> fail to cut down reply context, so we'd further see (needlessly) huge
> replies to patches as well.
>
> Additionally - how does one up front determine "fits the mailing list
> limits"? My mail UI (Thunderbird) doesn't show me the size of a message
> until I've actually sent it.
>
> >  As a last resort
> > do a series splitting the re-generation out (but I guess this would
> > confuse the CI as well and of course for the push you want to squash
> > again).
>
> Yeah, unless the CI would only ever test full series, this wouldn't help.
> It's also imo even more cumbersome than simply stripping the generated
> file parts from emails.
>

Our CI starts by testing the full series, then iterates by dropping
the top-most patches one by one, to make sure no patch breaks
something that is fixed in a later patch.
This is meant to be additional information for patch reviewers, if
they use patchwork to assist them.

Other CIs may behave differently though.
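The iteration described above can be sketched as follows (assumed pseudo-logic, not Linaro's actual CI code): test the full series first, then re-test with the top-most patches dropped one by one.

```python
def plan_series_checks(num_patches: int):
    """Yield the patch prefixes to test: the full series first, then
    with the top-most patches dropped one at a time."""
    for top in range(num_patches, 0, -1):
        yield list(range(1, top + 1))

# For a 3-patch series this tests [1, 2, 3], then [1, 2], then [1]:
# a prefix that fails reveals a patch that breaks something which is
# only fixed by a later patch in the series.
```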

Thanks,

Christophe

> Jan


Re: Patches submission policy change

2024-04-03 Thread Christophe Lyon via Gcc
On Wed, 3 Apr 2024 at 14:59, Joel Sherrill  wrote:
>
> Another possible issue which may be better now than in years past
> is that the versions of autoconf/automake required often had to be
> installed by hand. I think newlib has gotten better but before the
> rework on its Makefile/configure, I had a special install of autotools
> which precisely matched what it required.
>
> And that led to very few people being able to successfully regenerate.
>
> Is that avoidable?
>
> OTOH the set of people touching these files may be small enough that
> pain isn't an issue.
>

For binutils/gcc/gdb we still have to use specific versions which are
generally not the distro's ones.
So indeed, the number of people having to update autotools-related
files is relatively small, but there are other files which are
regenerated during the build process when maintainer-mode is enabled
(for instance parts of the gcc documentation, or opcodes tables in
binutils), and these are modified by more people.

Thanks,

Christophe

> --joel
>
> On Wed, Apr 3, 2024 at 5:22 AM Jan Beulich via Gcc  wrote:
>>
>> On 03.04.2024 10:57, Richard Biener wrote:
>> > On Wed, 3 Apr 2024, Jan Beulich wrote:
>> >> On 03.04.2024 10:45, Jakub Jelinek wrote:
>> >>> On Wed, Apr 03, 2024 at 10:22:24AM +0200, Christophe Lyon wrote:
>>  Any concerns/objections?
>> >>>
>> >>> I'm all for it, in fact I've been sending it like that myself for years
>> >>> even when the policy said not to.  In most cases, the diff for the
>> >>> regenerated files is very small and it helps even in patch review to
>> >>> actually check if the configure.ac/m4 etc. changes result just in the
>> >>> expected changes and not some unrelated ones (e.g. caused by user using
>> >>> wrong version of autoconf/automake etc.).
>> >>> There can be exceptions, e.g. when in GCC we update from a new version
>> >>> of Unicode, the regenerated ucnid.h diff can be large and
>> >>> uname2c.h can be huge, such that it can trigger the mailing list size
>> >>> limits even when the patch is compressed, see e.g.
>> >>> https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636427.html
>> >>> https://gcc.gnu.org/pipermail/gcc-patches/2023-November/636426.html
>> >>> But I think most configure or Makefile changes should be pretty small,
>> >>> usual changes shouldn't rewrite everything in those files.
>> >>
>> >> Which may then call for a policy saying "include generated script diffs,
>> >> but don't include generated data file ones"? At least on the binutils
>> >> side, dealing (for CI) with what e.g. opcodes/*-gen produce ought to be
>> >> possible by having something along the lines of "maintainer mode light".
>> >
>> > I'd say we should send generated files when it fits the mailing list
>> > limits (and possibly simply lift those limits?).
>>
>> Well, that would allow patches making it through, but it would still
>> severely increase overall size. I'm afraid more people than not also
>> fail to cut down reply context, so we'd further see (needlessly) huge
>> replies to patches as well.
>>
>> Additionally - how does one up front determine "fits the mailing list
>> limits"? My mail UI (Thunderbird) doesn't show me the size of a message
>> until I've actually sent it.
>>
>> >  As a last resort
>> > do a series splitting the re-generation out (but I guess this would
>> > confuse the CI as well and of course for the push you want to squash
>> > again).
>>
>> Yeah, unless the CI would only ever test full series, this wouldn't help.
>> It's also imo even more cumbersome than simply stripping the generated
>> file parts from emails.
>>
>> Jan


Re: Patches submission policy change

2024-04-05 Thread Christophe Lyon via Gcc

Hi Mark,


On 4/4/24 23:35, Mark Wielaard wrote:

Hi Christophe,

On Wed, Apr 03, 2024 at 10:22:24AM +0200, Christophe Lyon via Gdb wrote:

TL;DR: For the sake of improving precommit CI coverage and simplifying
workflows, I’d like to request a patch submission policy change, so
that we now include regenerated files. This was discussed during the
last GNU toolchain office hours meeting [1] (2024-03-28).

Benefits of this change include:
- Increased compatibility with precommit CI
- No need to manually edit patches before submitting, thus the “git
send-email” workflow is simplified
- Patch reviewers can be confident that the committed patch will be
exactly what they approved
- Precommit CI can test exactly what has been submitted

Any concerns/objections?


I am all for it. It will make testing patches easier for everyone.

I do think we still need a better way to make sure all generated files
can be regenerated. If only to check that the files were generated
correctly with the correct versions. The autoregen buildbots are able
to catch some, but not all such mistakes.

wrt to the mailinglists maybe getting larger patches, I think most
will still be under 400K and I wouldn't raise the limit (because most
such larger emails are really just spam). But we might want to get
more mailinglist moderators.

gcc-patches, binutils and gdb-patches all have only one moderator
(Jeff, Ian and Thiago). It would probably be good if there were
more.

Any volunteers? It shouldn't be more than 1 to 3 emails a week
(sadly most of them spam).


I'm happy to help with moderation of any/all of these 3 lists.

Thanks,

Christophe


Cheers,

Mark


Re: Patches submission policy change

2024-04-05 Thread Christophe Lyon via Gcc
On Thu, 4 Apr 2024 at 10:12, Jan Beulich  wrote:
>
> On 03.04.2024 15:11, Christophe Lyon wrote:
> > On Wed, 3 Apr 2024 at 10:30, Jan Beulich  wrote:
> >>
> >> On 03.04.2024 10:22, Christophe Lyon wrote:
> >>> Dear release managers and developers,
> >>>
> >>> TL;DR: For the sake of improving precommit CI coverage and simplifying
> >>> workflows, I’d like to request a patch submission policy change, so
> >>> that we now include regenerated files. This was discussed during the
> >>> last GNU toolchain office hours meeting [1] (2024-03-28).
> >>>
> >>> Benefits of this change include:
> >>> - Increased compatibility with precommit CI
> >>> - No need to manually edit patches before submitting, thus the “git
> >>> send-email” workflow is simplified
> >>> - Patch reviewers can be confident that the committed patch will be
> >>> exactly what they approved
> >>> - Precommit CI can test exactly what has been submitted
> >>>
> >>> Any concerns/objections?
> >>
> >> Yes: Patch size. And no, not sending patches inline is bad practice.
> > Not sure what you mean? Do you mean sending patches as attachments is
> > bad practice?
>
> Yes. It makes it difficult to reply to them (with proper reply context).

Agreed.

>
> >> Even assuming sending patches bi-modal (inline and as attachment) works
> >> (please indicate whether that's the case), it would mean extra work on
> >> the sending side.
> >>
> > From the CI perspective, we use what patchwork is able to detect as patches.
> > Looking at recent patches submissions, it seems patchwork is able to
> > cope with the output of git format-patch/git send-email, as well as
> > attachments.
> > There are cases where patchwork is not able to detect the patch, but I
> > don't know patchwork's exact specifications.
>
> Question was though: If a patch was sent inline plus attachment, what
> would CI use as the patch to apply? IOW would it be an option to
> attach the un-stripped patch, while inlining the stripped one?
>

Sorry, I don't know how patchwork would handle such a case.

Thanks,

Christophe

> Jan
>


Re: [RFC] add regenerate Makefile target

2024-04-08 Thread Christophe Lyon via Gcc
Hi,

On Mon, 25 Mar 2024 at 15:19, Christophe Lyon
 wrote:
>
> On Thu, 21 Mar 2024 at 15:32, Christophe Lyon
>  wrote:
> >
> > On Wed, 20 Mar 2024 at 16:34, Simon Marchi  wrote:
> > >
> > > On 3/18/24 13:25, Christophe Lyon wrote:
> > > > Well the rule to regenerate Makefile.in (e.g. in opcodes/) is a bit
> > > > more complex
> > > > than just calling automake. IIUC it calls automake --foreign if any of
> > > > the *.m4 files from $(am__configure_deps) is newer than Makefile.in
> > > > (with an early exit in the loop), does nothing if Makefile.am or
> > > > doc/local.mk are newer than Makefile.in, and then calls 'automake
> > > > --foreign Makefile'
> > >
> > > The rules looks complex because they've been generated by automake, this
> > > Makefile.in is not written by hand.  And I guess automake has put
> > > `--foreign` there because foreign is used in Makefile.am:
> > Yes, I know :-)
> >
> > >
> > >   AUTOMAKE_OPTIONS = foreign no-dist
> > >
> > > But a simple call so `automake -f` (or `autoreconf -f`) just works, as
> > > automake picks up the foreign option from AUTOMAKE_OPTIONS, so a human
> > > or an external script who wants to regenerate things would probably just
> > > use that.
> >
> > Indeed. I guess my concern is: if some change happens to
> > Makefile.am/Makefile.in which would imply that 'autoreconf -f' would
> > not work, how do we make sure autoregen.py (or whatever script) is
> > updated accordingly? Or maybe whatever change is made to
> > Makefile.am/Makefile.in, 'autoreconf -f' is supposed to handle it
> > without additional flag?
> >
> I think I've just noticed a variant of this: if you look at
> opcodes/Makefile.in, you can see that aclocal.m4 depends on
> configure.ac (among others). So if configure.ac is updated, a
> maintainer-mode rule in Makefile.in will call aclocal and regenerate
> aclocal.m4.
>
> However, autoregen.py calls aclocal only if configure.ac contains
> AC_CONFIG_MACRO_DIRS, which is not the case here.
>
> That's probably a bug in opcodes/configure.ac, but still the current
> Makefile.in machinery would update aclocal.m4 as needed when
> autoregen.py will not.
>
> I haven't audited all configure.ac but there are probably other
> occurrences of this.
>

Another discrepancy I've just noticed: if you look at libsframe/Makefile.am,
you can see that ACLOCAL_AMFLAGS = -I .. -I ../config -I ../bfd,
so if you run autoreconf -f, it will invoke aclocal with these flags
(the same is performed by the aclocal.m4 regeneration rule in the Makefile),
but autoregen.py won't run aclocal because configure.ac does not define
AC_CONFIG_MACRO_DIRS, and even if it did, it would only use -I../config

I guess the same applies for several other subdirs.

So in general how do we make sure autoregen.py uses the right flags?
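For illustration, one option would be for the regeneration script to parse ACLOCAL_AMFLAGS out of each Makefile.am and pass those flags to aclocal, mirroring what the Makefile rule does. A minimal sketch (a hypothetical helper, not something autoregen.py does today):

```python
import re

def aclocal_flags(makefile_am_text: str) -> list:
    """Extract the flags from an 'ACLOCAL_AMFLAGS = ...' line, so that
    aclocal can be invoked the same way the Makefile rule would."""
    m = re.search(r"^ACLOCAL_AMFLAGS\s*=\s*(.*)$",
                  makefile_am_text, re.MULTILINE)
    return m.group(1).split() if m else []

# Example: libsframe/Makefile.am declares this line, so the script
# should run: aclocal -I .. -I ../config -I ../bfd
sample = "ACLOCAL_AMFLAGS = -I .. -I ../config -I ../bfd\n"
```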

Or what prevents us from just using autoreconf -f? If that does not work
because configure.ac/Makefile.am and others have bugs, maybe
we should fix those bugs instead?

which makes me think about Eric's reply:

> `autoreconf -f` works fine in individual subdirectories, the problem
> is that the top-level configure.ac doesn't use the AC_CONFIG_SUBDIRS
> macro to specify its subdirectories, but rather uses its own
> hand-rolled method of specifying subdirectories that autoreconf
> doesn't know about. This means that autoreconf won't automatically
> recurse into all the necessary subdirectories by itself automatically,
> and instead has to be run manually in each subdirectory separately.

It's not clear to me if that "problem" is a bug, or a design decision
we must take into account when writing tools to help regeneration?

> Also the various subdirectories are inconsistent about whether they
> have a rule for running it (autoreconf) from the Makefile or not,
should that be considered a bug, and fixed?

> which usually comes down to whether the subdirectory uses automake for
> its Makefile or not (the top-level Makefile doesn't; it uses its own
> weird autogen-based regeneration method instead, which means that it
> misses out on all the built-in rules that automake would implicitly
> generate, including ones related to build system regeneration).

Thanks,

Christophe


> Christophe
>
> > >
> > > > The bot I want to put in place would regenerate things as they are
> > > > supposed to be, then build and run the testsuite to make sure that
> > > > what is supposed to be committed would work (if the committer
> > > > regenerates everything correctly)
> > >
> > > For your job, would it be fine to just force-regenerate everything and
> > > ignore timestamps (just like the buildbot's autoregen job wants to do)?
> > > It would waste a few cycles, but it would be much simpler.
> > >
> > Yes, that would achieve the purpose: be able to handle as many patches
> > as possible in precommit-CI.
> > And as described earlier, for binutils this currently means:
> > autoregen
> > confgure --enable-maintainer-mode
> > make all (with a low -j value otherwise we have random build failures)
> > and 

Re: Updated Sourceware infrastructure plans

2024-04-18 Thread Christophe Lyon via Gcc
Hi,

On Thu, 18 Apr 2024 at 10:15, FX Coudert  wrote:
>
> > I regenerate auto* files from time to time for libgfortran. Regenerating
> > them has always been very fragile (using --enable-maintainer-mode),
> > and difficult to get right.
>
> I have never found them difficult to regenerate, but if you have only a non 
> maintainer build, it is a pain to have to make a new maintainer build for a 
> minor change.
>

FWIW, we have noticed lots of warnings from autoreconf in libgfortran.
I didn't try to investigate, since the regenerated files are identical
to what is currently in the repo.

For instance, you can download the "stdio" output from the
autoregen.py step in
https://builder.sourceware.org/buildbot/#/builders/269/builds/4373

Thanks,

Christophe


> Moreover, our m4 code is particularly painful to use and unreadable. I have 
> been wondering for some time: should we switch to simpler Python scripts? It 
> would also mean that we would have fewer files in the generated/ folder: 
> right now, every time we add new combinations of types, we have a 
> combinatorial explosion of files.
>
> $ ls generated/sum_*
> generated/sum_c10.c generated/sum_c17.c generated/sum_c8.c  
> generated/sum_i16.c generated/sum_i4.c  generated/sum_r10.c 
> generated/sum_r17.c generated/sum_r8.c
> generated/sum_c16.c generated/sum_c4.c  generated/sum_i1.c  
> generated/sum_i2.c  generated/sum_i8.c  generated/sum_r16.c generated/sum_r4.c
>
> We could imagine having a single file for all sum intrinsics.
>
> How do Fortran maintainers feel about that?
>
> FX


Linaro CI new feature: skip precommit testing

2024-10-16 Thread Christophe Lyon via Gcc
Hi,

Following "popular request", we are happy to announce that users can
now request to skip Linaro CI precommit testing for some patches.

The current implementation skips testing in two cases:
1- there is [RFC] or [RFC v[0-9]] in the patch subject
2- the commit message contains a line starting with 'CI-tag: skip'

[1] Makes it possible to avoid undesirable regression notifications
when one sends an incomplete patch to start discussing ideas.

[2] Aims at helping workflows where people submit patches for "master
files" and "regenerated files" as two patches to make review easier.
In such cases, the patch with only "master files" changes would
generally generate regression notifications, confusing both reviewers
and patch authors.  For instance:
- patch #1/3: introduce new code -> should pass CI
- patch #2/3: changes to "master files" -> skip CI
- patch #3/3: changes to "regenerated files" -> should pass CI
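A rough sketch of the two skip rules (assumed equivalent logic; the actual implementation may differ, e.g. in how it handles combined markers like [PATCH RFC]):

```python
import re

def should_skip_ci(subject: str, commit_message: str) -> bool:
    """Skip precommit testing if the subject carries an [RFC] or
    [RFC vN] marker, or if the commit message contains a line
    starting with 'CI-tag: skip'."""
    if re.search(r"\[RFC( v[0-9])?\]", subject):
        return True
    return any(line.startswith("CI-tag: skip")
               for line in commit_message.splitlines())
```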


This change does NOT affect postcommit CI, where CI is always expected
to pass (otherwise regression notifications will be generated).


Technically, we use 'git am --keep' when applying the patches, so
that subject lines are untouched: this enables us to handle
standard markers such as [PATCH RFC] or [RFC PATCH] for instance.

This also means that a 'CI-tag: skip' after the usual '---' lines
will be ignored (git-am will consider it as part of the patch,
rather than the commit message).

People used to git am --scissors may find it convenient to put
the tag at the start of the commit message:

CI-tag: skip
-- >8 --

But it's fine to put that tag along with other tags (such as
Signed-Off-By, Co-authored-by, ...)

We hope this will be useful / helpful.

Thanks,

The Linaro Toolchain team.


Re: Christophe Lyon as MVE reviewer for the AArch32 (arm) port.

2024-09-27 Thread Christophe Lyon via Gcc

Hi Ramana,


On 9/26/24 19:22, Ramana Radhakrishnan wrote:

I am pleased to announce that the GCC Steering Committee has appointed
Christophe Lyon as an MVE Reviewer for the AArch32 port.

Please join me in congratulating Christophe on his new role.

Christophe, please update your listings in the MAINTAINERS file.

Regards,
Ramana



Thanks for your trust!

Christophe


Re: scraperbot protection - Patchwork and Bunsen behind Anubis

2025-04-23 Thread Christophe Lyon via Gcc
Hi!

Thanks for all the hard work maintaining all this fundamental infrastructure.

On Mon, 21 Apr 2025 at 18:00, Mark Wielaard  wrote:
>
> Hi hackers,
>
> TLDR; When using https://patchwork.sourceware.org or Bunsen
> https://builder.sourceware.org/testruns/ you might now have to enable
> javascript. This should not impact any scripts, just browsers (or bots
> pretending to be browsers). If it does cause trouble, please let us
> know. If this works out we might also "protect" bugzilla, gitweb,
> cgit, and the wikis this way.
>
> We don't like to have to do this, but as some of you might have noticed
> Sourceware has been fighting the new AI scraperbots since start of the
> year. We are not alone in this.
>
> https://lwn.net/Articles/1008897/
> https://arstechnica.com/ai/2025/03/devs-say-ai-crawlers-dominate-traffic-forcing-blocks-on-entire-countries/
>
> We have tried to isolate services more and block various ip-blocks
> that were abusing the servers. But that has helped only so much.

In terms of isolation, since I have no idea how / which services are
currently isolated, I may ask obvious questions..

We were wondering if it would be possible / suitable to have https
requests served by one container,
and ssh ones by another? Maybe that's already the case though...

Speaking with CI in mind, the Linaro CI is currently severely impacted by these
scraperbots too:
- directly, because our git servers are also overloaded, so our build
process often fails to check out build scripts & infra scripts
- indirectly, because when the above succeeds we may fail to connect to sourceware

so maybe it would help if we switched to ssh access for our CI user
when cloning GCC / binutils / etc sources?

If only the containers serving https requests are impacted, ssh access
could still work well?
(that would mean creating a Linaro-CI user on sourceware, I don't know
what the policy is?)

Thanks,

Christophe

> Unfortunately the scraper bots are using lots of ip addresses
> (probably by installing "free" VPN services that use normal user
> connections as exit point) and pretending to be common
> browsers/agents.  We seem to have to make access to some services
> depend on solving a javascript challenge.
>
> So we have installed Anubis https://anubis.techaro.lol/ in front of
> patchwork and bunsen. This means that if you are using a browser that
> identifies as Mozilla or Opera in their User-Agent you will get a
> brief page showing the happy anime girl that requires javascript to
> solve a challenge and get a cookie to get through. Scripts and search
> engines should get through without. Also removing Mozilla and/or Opera
> from your User-Agent will get you through without javascript.
>
> We want to thanks Xe Iaso who has helped us set this up and worked
> with use over the Easter weekend solving some of our problems/typos.
> Please check out if you want to be one of their patrons as thank you.
> https://xeiaso.net/notes/2025/anubis-works/
> https://xeiaso.net/patrons/
>
> Cheers,
>
> Mark