mudflap vs bounds-checking

2007-02-14 Thread Christophe LYON

Hi all,

I was somewhat used to the bounds-checking patches for GCC 3.x from 
Herman ten Brugge.


Now that GCC-4.x ships with mudflap, I am a bit confused, since the 
bounds-checking patches also exist at least up to GCC 4.0.2.


What is the difference between the two systems?

Thanks,

Christophe.


Question on Dwarf2 unwind info and optimized code

2005-10-24 Thread Christophe LYON


Hi all,

I have been looking at the Dwarf2 frame info generated by GCC, and how it 
works.
From what I can see, only the register saves are recorded, and not the 
restores. Why?


I guess it may confuse GDB if one sets a breakpoint inside a function 
epilogue, right?




I am currently working on the debug_frame info emission in our C/C++ 
compiler (based on Open64) and I have recently come across optimized 
code which I don't know how to handle.


Consider the following function:

foo() {
    PROLOGUE

    if (COND1) {
        SAVE_REG r1 to stack (r1 is callee-save)
        compute...
        bar1()
        RESTORE r1
    }
    else if (COND2) {
        bar2()
    }

    EPILOGUE
}

Suppose that the generated code looks like:

foo:
    PROLOGUE
    COND1
    if true goto LABEL1

    COND2
    if true goto LABEL2

epilogue_label:
    EPILOGUE

LABEL1:
    SAVE register r1 to stack
    compute
    goto CALL_BAR1

LABEL2:
    CALL bar2
    goto epilogue_label

CALL_BAR1:
    CALL bar1
    RESTORE r1
    goto epilogue_label

My question is: what debug_frame info should I generate around the
epilogue, LABEL1, LABEL2, and CALL_BAR1?

Should I repeat all the info of the prologue before LABEL1 and LABEL2?
For CALL_BAR1, should I also add the save of register r1?

It's also unclear to me if I can/should use the 
remember_state/restore_state dwarf operators.

Indeed, what is the rule?

Use remember_state after the prologue, and use restore state at LABEL1 
and LABEL2?
But as states are kept on a stack, it would mean restoring the same
state twice...
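
To make the question concrete, here is the scheme I have in mind (only a
sketch; I write gas-style .cfi_* pseudo-ops as shorthand for the
corresponding DW_CFA_* opcodes, and <off> stands for whatever stack
offset applies). Since restore_state pops the remembered row, I suppose
I would have to re-push it after every pop, and re-emit the block-local
r1 rule by hand at CALL_BAR1:

foo:
    PROLOGUE                  ; CFA + callee-save rules: row "P"
    .cfi_remember_state       ; push P

epilogue_label:
    EPILOGUE

LABEL1:
    .cfi_restore_state        ; pop: row is P again
    .cfi_remember_state       ; push P again for the next block
    SAVE register r1 to stack ; .cfi_offset r1, <off>: row "P + r1"
    compute
    goto CALL_BAR1

LABEL2:
    .cfi_restore_state        ; pop: row P (no r1 rule)
    .cfi_remember_state       ; push P once more
    CALL bar2
    goto epilogue_label

CALL_BAR1:
    .cfi_restore_state        ; pop: row P
    .cfi_offset r1, <off>     ; re-emit the r1 save explicitly
    CALL bar1
    RESTORE r1                ; .cfi_restore r1
    goto epilogue_label

Is that a sane way to use them, or am I abusing the state stack?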


Any help/suggestion highly appreciated!

Thanks,

Christophe.




Re: Question on Dwarf2 unwind info and optimized code

2005-10-25 Thread Christophe LYON

Daniel Jacobowitz wrote:

I am currently working on the debug_frame info emission in our C/C++ 
compiler (based on Open64) and I have recently come across optimized 
code which I don't know how to handle.


Reposting this question to increasingly unrelated lists is not likely
to help you find an answer :-)



Well, anyway, thanks for your response; it is sufficient for me.
FYI, I first asked on the dwarf-dedicated mailing-list and surprisingly 
got no reply at all!


It's also unclear to me if I can/should use the 
remember_state/restore_state dwarf operators.

Indeed, what is the rule?

There's no rule, just documentation of how they work - if you can run
through the CFI stack machine and get the right results, you could use
them.


OK, thanks for clarifying the situation.

Christophe



Re: Question on Dwarf2 unwind info and optimized code

2005-10-25 Thread Christophe LYON

Jim Wilson wrote:

Christophe LYON wrote:

I have been looking at the Dwarf2 frame info generated by GCC, and how it 
works.
From what I can see, only the register saves are recorded, and not 
the restores. Why?



The frame info is primarily used for C++ EH stack unwinding.  Since you 
can't throw a C++ exception in an epilogue, epilogue frame info isn't 
needed for this, and was never implemented for most targets.  Which is a 
shame.


There is a PR for this, PR 18749, for the x86-64 target.  The lack of 
epilogue unwind info shows up if you run the libunwind testsuite. 
Otherwise, it is really hard to find an example where the missing unwind 
info is a problem.



That's what I thought, but wanted to be sure I was not missing something.

On occasion, I wonder whether it wouldn't make sense to generate
different info in debug_frame and eh_frame: IIUC, GCC tries to
'compress' the debug frame info by generating few advance_loc
instructions (e.g. only one for the whole prologue), which makes sense in
the C++ EH stack-unwinding context, but may cause problems in a debugger.
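
For instance, the whole prologue may be covered by a single row
boundary, something like this (a made-up readelf-style excerpt, just to
illustrate):

    DW_CFA_advance_loc: 12 to 0000000c  ; one advance over the whole prologue
    DW_CFA_def_cfa_offset: 16
    DW_CFA_offset: r14 at cfa-4

For EH this is fine, since no exception can be thrown from the middle of
the prologue; but a debugger stopped at an address inside the prologue
would use rules that do not match the actual frame state at that point.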

Thanks

Christophe.





Re: bootstrap possibly broken on trunk ?

2016-10-13 Thread Christophe Lyon
On 13 October 2016 at 20:30, Prathamesh Kulkarni
 wrote:
> On 13 October 2016 at 23:12, Prathamesh Kulkarni
>  wrote:
>> Hi,
>> I am getting the following error when bootstrapping trunk (tried with 
>> r241108)
>> on x86_64-unknown-linux-gnu during stage-1:
>>
>> ../../../../gcc/libstdc++-v3/src/c++11/compatibility-thread-c++0x.cc:121:12:
>> error: ISO C++ forbids declaration of ‘_Bind_simple_helper’
>> with no type [-fpermissive]
>>template _Bind_simple_helper> reference_wrapper>::__type __bind_simple(void (thread::*&&)(),
>> reference_wrapper&&);
>> ^~~
>> ../../../../gcc/libstdc++-v3/src/c++11/compatibility-thread-c++0x.cc:121:12:
>> error: ‘_Bind_simple_helper’ is not a template function
>> ../../../../gcc/libstdc++-v3/src/c++11/compatibility-thread-c++0x.cc:121:31:
>> error: expected ‘;’ before ‘<’ token
>>template _Bind_simple_helper> reference_wrapper>::__type __bind_simple(void (thread::*&&)(),
>> reference_wrapper&&);
>>
>> Could someone please confirm this for me ?
> Goes away after I updated trunk to r241132.
>

Indeed it was broken at r241093, and fixed at r24.

I'm not testing bootstrap, but cross-builds can already catch
most mistakes. Too bad I do not have more
bandwidth in our compute farm :-)

If you have subscribed to tcwg-validat...@linaro.org (Linaro internal only),
you should have received emails when builds failed.

Or you can have a look at:
http://people.linaro.org/~christophe.lyon/cross-validation/gcc-build/trunk/

These show the status of the build-only jobs for a subset of our arm*/aarch64*
configurations.

Christophe


> Thanks,
> Prathamesh
>>
>> Thanks,
>> Prathamesh


Cross-testing libsanitizer

2014-06-02 Thread Christophe Lyon
Hi,

I am updating my (small) patch to enable libsanitizer on AArch64, but
I am wondering about the testing.

Indeed, when testing on my laptop, execution tests fail because
libsanitizer wants to allocate 8GB of memory (I am using qemu as the
execution engine).
When running on servers with more RAM, the tests pass.

I suspect this is going to be a problem, and I am wondering about the
best approach. When I enabled libsanitizer for ARM, I already had to
introduce check_effective_target_hw to avoid libsanitizer tests
involving threads, because of qemu's inability to handle them properly.

I could probably change this function into
check_effective_target_qemu, but that might not be acceptable (and it
would be used for 2 different purposes: threads and too much memory
allocation).
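
To illustrate, I am thinking of something modelled on the existing
check_effective_target_simulator (just a sketch: the proc name and the
'uses_qemu' board flag are made up):

# Return 1 if the tests run under qemu (or a similar emulator that
# cannot cope with some libsanitizer tests), as declared by the
# board description file.
proc check_effective_target_qemu { } {
    if [board_info target exists uses_qemu] {
        return [board_info target uses_qemu]
    }
    return 0
}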

Thoughts?

Thanks,

Christophe.


Re: Cross-testing libsanitizer

2014-06-03 Thread Christophe Lyon
On 3 June 2014 08:39, Yury Gribov  wrote:
> Christophe,
>
>
>> Indeed, when testing on my laptop, execution tests fail because
>> libsanitizer wants to allocate 8GB of memory (I am using qemu as the
>> execution engine).
>
> Is this 8G of RAM? If yes - I'd be curious to know which part of
> libsanitizer needs so much memory.
>
> -Y

Here is what I have in gcc.log:
==12356==ERROR: AddressSanitizer failed to allocate 0x200001000
(8589938688) bytes at address ff000 (errno: 12)
==12356==ReserveShadowMemoryRange failed while trying to map
0x200001000 bytes. Perhaps you're using ulimit -v
qemu: uncaught target signal 6 (Aborted) - core dumped
proot info: pid 12356: terminated with signal 6
FAIL: c-c++-common/asan/asan-interface-1.c  -O0  execution test

and ulimit -a says:
virtual memory  (kbytes, -v) unlimited

Christophe


Re: Cross-testing libsanitizer

2014-06-03 Thread Christophe Lyon
On 3 June 2014 12:16, Yury Gribov  wrote:
>>> Is this 8G of RAM? If yes - I'd be curious to know which part of
>>> libsanitizer needs so much memory.
>>
>>
>> Here is what I have in gcc.log:
>> ==12356==ERROR: AddressSanitizer failed to allocate 0x200001000
>> (8589938688) bytes at address ff000 (errno: 12)
>> ==12356==ReserveShadowMemoryRange failed while trying to map
>> 0x200001000 bytes. Perhaps you're using ulimit -v
>
>
> Interesting. AFAIK Asan maps shadow memory with NORESERVE flag so it should
> not consume any RAM at all...
>

Thanks for the reminder; in fact I posted a qemu patch in February:
http://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00319.html
I thought it was applied, but it's not yet in trunk.

I used to use a patched qemu, but when I upgraded to 2.0 I forgot
about this patch.
I am going to re-check with a patched qemu, and ping them.


Re: Cross-testing libsanitizer

2014-06-05 Thread Christophe Lyon
On 3 June 2014 14:46, Christophe Lyon  wrote:
> On 3 June 2014 12:16, Yury Gribov  wrote:
>>>> Is this 8G of RAM? If yes - I'd be curious to know which part of
>>>> libsanitizer needs so much memory.
>>>
>>>
>>> Here is what I have in gcc.log:
>>> ==12356==ERROR: AddressSanitizer failed to allocate 0x200001000
>>> (8589938688) bytes at address ff000 (errno: 12)
>>> ==12356==ReserveShadowMemoryRange failed while trying to map
>>> 0x200001000 bytes. Perhaps you're using ulimit -v
>>
>>
>> Interesting. AFAIK Asan maps shadow memory with NORESERVE flag so it should
>> not consume any RAM at all...
>>
>
> Thanks for the reminder; in fact I posted a qemu patch in February:
> http://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00319.html
> I thought it was applied, but it's not yet in trunk.
>
> I used to use a patched qemu, but when I upgraded to 2.0 I forgot
> about this patch.
> I am going to re-check with a patched qemu, and ping them.

So after applying my patch to qemu, I no longer see this error.
Now, all execution tests fail in timeout after generating ASAN:SEGV.

Which means I have to investigate what is going on :-(

It worked better in February :-(

Christophe.


Testing Leak Sanitizer?

2014-09-30 Thread Christophe Lyon
Hello,

After I've recently enabled Address Sanitizer for AArch64 in GCC, I'd
like to enable Leak Sanitizer.

I'd like to know what are the requirements wrt testing it? IIUC there
are no lsan tests in the GCC testsuite so far.

Should I just test a few sample programs to check if basic functionality is OK?
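
For instance, I was thinking of something as trivial as this, compiled
with -fsanitize=leak (a made-up sample, not from any testsuite; lsan
should report the 42 lost bytes at exit):

#include <stdlib.h>

int
main (void)
{
  malloc (42);  /* Never freed nor referenced again: a leak.  */
  return 0;
}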

The patch seems to be a 1-line patch, I just want to check the
acceptance criteria.

Thanks,

Christophe.


[testsuite] Need to compile testglue with target-specific flags

2014-10-01 Thread Christophe Lyon
Hi,

When Jason recently committed his fix for PR58678, I noticed that the
newly introduced test was failing on arm-linux target.

This is because testglue.o contains relocations incompatible with
-shared, which is used in the testcase.

I am preparing a testsuite patch to fix that, but I am wondering how
to add target-specific flags when compiling testglue; that does not seem
to be supported yet.

Is there already an example of a similar use case I should use?

Thanks

Christophe.


Re: [PATCH] gcc parallel make check

2014-10-10 Thread Christophe Lyon
Hi Jakub,


On 15 September 2014 18:05, Jakub Jelinek  wrote:
[...]
>  # For parallelized check-% targets, this decides whether parallelization
>  # is desirable (if -jN is used and RUNTESTFLAGS doesn't contain anything
>  # but optional --target_board or --extra_opts arguments).  If desirable,
>  # recursive make is run with check-parallel-$lang{,1,2,3,4,5} etc. goals,
>  # which can be executed in parallel, as they are run in separate directories.
> -# check-parallel-$lang{1,2,3,4,5} etc. goals invoke runtest with the longest
> -# running *.exp files from the testsuite, as determined by 
> check_$lang_parallelize
> -# variable.  The check-parallel-$lang goal in that case invokes runtest with
> -# all the remaining *.exp files not handled by the separate goals.
> +# check-parallel-$lang{,1,2,3,4,5} etc. goals invoke runtest with
> +# GCC_RUNTEST_PARALLELIZE_DIR var in the environment and runtest_file_p
> +# dejaGNU procedure is overridden to additionally synchronize through
> +# a $lang-parallel directory which tests will be run by which runtest 
> instance.
>  # Afterwards contrib/dg-extract-results.sh is used to merge the sum and log
>  # files.  If parallelization isn't desirable, only one recursive make
>  # is run with check-parallel-$lang goal and check_$lang_parallelize variable
> @@ -3662,76 +3645,60 @@ check_p_subdirs=$(wordlist 1,$(words $(c
>  # to lang_checks_parallelized variable and define check_$lang_parallelize
>  # variable (see above check_gcc_parallelize description).
>  $(lang_checks_parallelized): check-% : site.exp
> -   @if [ -z "$(filter-out --target_board=%,$(filter-out 
> --extra_opts%,$(RUNTESTFLAGS)))" ] \

Since you removed this test, the comment above is no longer accurate:
setting RUNTESTFLAGS to any value no longer disables
parallelization.

Which leads me to discuss a bug I faced after you committed this
change: I am testing a patch which brings a series of new tests.
$ RUNTESTFLAGS=my.exp make -jN check (in fact the 'make -j' is
embedded in a larger build script)

my.exp contains the following construct which is often used in the testsuite:
==
foreach src [lsort [glob -nocomplain $srcdir/$subdir/*.c]] {
    # If we're only testing specific files and this isn't one
    # of them, skip it.
    if ![runtest_file_p $runtests $src] then {
        continue
    }
    c-torture-execute $src $additional_flags
    gcc-dg-runtest $src "" $additional_flags
}
==
Note that gcc-dg-runtest calls runtest_file_p too.

What I observed is that if I use -j1, all my .c files get tested,
while with N>2 some of them are silently skipped.

It took me a while to figure out that it's because gcc-dg-runtest
calls runtest_file_p, which means that runtest_file_p is called twice
when the 1st invocation returns 1, and only once when the 1st
invocation returns 0.

For example, if we have pid0 and pid1 as the concurrent runtest processes,
and file0.c, file1.c, ... as the testcases, then:
* pid0 decides to keep file0.c file1.c file2.c file3.c file4.c. Since
the above loop calls runtest_file_p twice for each, its
"minor" counter reaches 10.
* in the meantime, pid1 decides to skip file0.c, file1.c ... file9.c,
since it calls runtest_file_p only once for each
* pid1 increments its parallel counter to 1, and creates the new testing subdir
* pid1 decides to keep file10, file11, file12, file13 and file14
(again, 2 calls to runtest_file_p per testcase)
* pid0 increments its parallel counter to 1, and decides it has to skip
that chunk
* pid0 thus decides to skip file5, file6, file7, ... file14, calling
runtest_file_p once for each
* etc...

In the end, we have ignored file5...file9

I'm not sure why you have made special cases for some of the existing
*.exp when you forced them to disable parallelization.
Was it to handle such cases?

I'm not sure about the next step:
- should I modify my .exp file?
- should you modify gcc_parallel_test_run_p?

Even if I have to modify my .exp file, I think this is error prone,
and others could introduce a similar construct in the future.

Thanks,

Christophe.


Re: [PATCH] gcc parallel make check

2014-10-10 Thread Christophe Lyon
On 10 October 2014 16:19, Jakub Jelinek  wrote:
> On Fri, Oct 10, 2014 at 04:09:39PM +0200, Christophe Lyon wrote:
>> my.exp contains the following construct which is often used in the testsuite:
>> ==
>> foreach src [lsort [glob -nocomplain $srcdir/$subdir/*.c]] {
>> # If we're only testing specific files and this isn't one of them,
>> skip it.
>> if ![runtest_file_p $runtests $src] then {
>> continue
>> }
>> c-torture-execute $src $additional_flags
>> gcc-dg-runtest $src "" $additional_flags
>> }
>> ==
>> Note that gcc-dg-runtest calls runtest_file_p too.
>
> Such my.exp is invalid, you need to guarantee gcc_parallel_test_run_p
> is run the same number of times in all instances unless
> gcc_parallel_test_enable has been disabled.

Thanks for your prompt answer.

Is this documented somewhere, so that such cases do not happen in the future?


> See the patches I've posted when adding the fine-grained parallelization,
> e.g. go testsuite has been fixed that way, etc.
> So, in your above example, you'd need:
> gcc_parallel_test_enable 0
> line before c-torture-execute and
> gcc_parallel_test_enable 1
> line after gcc-dg-runtest.  That way, if runtest_file_p says the test should
> be scheduled by current instance, all the subtests will be run there.
>
> If my.exp is part of gcc/testsuite, I'm sorry for missing it, if it is
> elsewhere, just fix it up.

It's in a patch which has been under review for quite some time
(started before your change), that's why you missed it.
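
So, to make sure I understand the fix, the loop in my.exp should become
(sketch):

foreach src [lsort [glob -nocomplain $srcdir/$subdir/*.c]] {
    # If we're only testing specific files and this isn't one
    # of them, skip it.
    if ![runtest_file_p $runtests $src] then {
        continue
    }
    # Prevent the second runtest_file_p call inside gcc-dg-runtest
    # from desynchronizing the parallel instances.
    gcc_parallel_test_enable 0
    c-torture-execute $src $additional_flags
    gcc-dg-runtest $src "" $additional_flags
    gcc_parallel_test_enable 1
}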

> Note, there are #verbose lines in gcc_parallel_test_run_p, you can uncomment
> them and through sed on the log files verify that each instance performs the
> same parallelization checks (same strings).
Yep, I saw those and also added other traces of my own :-)


What about my remark about:
>  # For parallelized check-% targets, this decides whether parallelization
>  # is desirable (if -jN is used and RUNTESTFLAGS doesn't contain anything
>  # but optional --target_board or --extra_opts arguments).  If desirable,
I think it should be removed from gcc/Makefile.in

Thanks,

Christophe.


Re: Testing Leak Sanitizer?

2014-11-27 Thread Christophe Lyon
On 30 September 2014 at 19:08, Konstantin Serebryany
 wrote:
> Correct, you can run tests from llvm tree with any compiler.
> https://code.google.com/p/address-sanitizer/wiki/AddressSanitizerTestSuite
>

I've read that document, and as a first step I wanted to build LLVM +
run the tests in the "best case" (before any modifications I could
make, and to have a reference to compare with GCC).
I have a few questions.

To have clang as the "toolchain I want to test", I added the clang
sources under llvm_tmp_src/tools,  and compiler-rt sources under
projects.

I managed to run the tests, but I couldn't find the detailed logs.
I added -DLLVM_LIT_ARGS=-v when calling cmake, which gave me a list like:
XFAIL: AddressSanitizer64 :: TestCases/use-after-scope.cc (245 of  249)
PASS: AddressSanitizer64 :: TestCases/use-after-poison.cc (246 of 249)

1- I suppose there are more details, like gcc.log. Where are they?
2- this is running x86_64 native tests, how can I cross-test with
aarch64 (using qemu for instance)?

Thanks


> Note that lsan does not depend on the compiler, it is a library-only feature.
>
> --kcc
>
> On Tue, Sep 30, 2014 at 9:47 AM, Yury Gribov  wrote:
>> On 09/30/2014 07:15 PM, Christophe Lyon wrote:
>>>
>>> Hello,
>>>
>>> After I've recently enabled Address Sanitizer for AArch64 in GCC, I'd
>>> like to enable Leak Sanitizer.
>>>
>>> I'd like to know what are the requirements wrt testing it? IIUC there
>>> are no lsan tests in the GCC testsuite so far.
>>>
>>> Should I just test a few sample programs to check if basic functionality
>>> is OK?
>>>
>>> The patch seems to be a 1-line patch, I just want to check the
>>> acceptance criteria.
>>
>>
>> AFAIK compiler-rt testsuite supports running under non-Clang compiler. Don't
>> ask me how to setup the beast though.
>>


how to build and test uClinux toolchains

2018-10-16 Thread Christophe Lyon
Hi,

While reviewing one of my patches about FDPIC support for ARM, Richard
raised the concern of testing the patch on other uClinux targets [1].

I looked at uclinux.org and at the GCC maintainers file, but it's
still not obvious to me which uClinux targets are currently supported.

I'd like to do a basic:
- build toolchain for target XXX, make check
- apply my GCC patch
- build toolchain for target XXX, make check again
- compare the results to make sure I didn't break anything

I'd appreciate hints on which targets offer to do that easily enough,
using a simulator (qemu?).

Thanks,

Christophe


[1] https://gcc.gnu.org/ml/gcc-patches/2018-10/msg00736.html


Re: how to build and test uClinux toolchains

2018-10-26 Thread Christophe Lyon
On Sun, 21 Oct 2018 at 04:06, Max Filippov  wrote:
>
> Hello,
>
> On Tue, Oct 16, 2018 at 1:54 PM Jim Wilson  wrote:
> >
> > On 10/16/18 7:19 AM, Christophe Lyon wrote:
> > > While reviewing one of my patches about FDPIC support for ARM, Richard
> > > raised the concern of testing the patch on other uClinux targets [1].
> > >
> > > I looked at uclinux.org and at the GCC maintainers file, but it's
> > > still not obvious to me which uClinux targets are currently supported?
>
> I'm trying to keep nommu xtensa alive.
>
> > I see that buildroot has obvious blackfin (bfin), m68k, and xtensa
> > uclinux support.  But blackfin.uclinux.org says the uclinux port was
> > deprecated in 2012.  m68k as mentioned above should be usable.  It
> > appears that xtensa uclinux is still alive and usable.
> >  http://wiki.linux-xtensa.org/index.php/UClinux
>
> Probably the easiest way to get all xtensa toolchain parts correctly is
> by using an existing buildroot configuration. E.g. the following configuration
> may be used to build uclinux xtensa toolchain for the dc233c core:
> https://git.buildroot.net/buildroot/tree/configs/qemu_xtensa_lx60_nommu_defconfig
>
OK, thanks for your suggestion. I think I managed to build it.
Now, how/where can I run 'make check' for gcc?
I do not see the GCC build tree.

> Also bFLT executable format is currently not supported for linux-user
> xtensa QEMU. The following branch adds that support:
> https://github.com/OSLL/qemu-xtensa/commits/xtensa-bflt
>
> qemu-xtensa built from this QEMU then may be registered as a binfmt
> handler for bFLT executable images allowing to run gcc tests that want
> to run target binaries.
Do you have the magic commands for this?

Thanks,

Christophe

> --
> Thanks.
> -- Max


Re: how to build and test uClinux toolchains

2018-10-31 Thread Christophe Lyon
On Fri, 26 Oct 2018 at 19:54, Max Filippov  wrote:
>
> On Fri, Oct 26, 2018 at 6:51 AM Christophe Lyon
>  wrote:
> > On Sun, 21 Oct 2018 at 04:06, Max Filippov  wrote:
> > > Probably the easiest way to get all xtensa toolchain parts correctly is
> > > by using an existing buildroot configuration. E.g. the following 
> > > configuration
> > > may be used to build uclinux xtensa toolchain for the dc233c core:
> > > https://git.buildroot.net/buildroot/tree/configs/qemu_xtensa_lx60_nommu_defconfig
> > >
> > OK, thanks for your suggestion. I think I managed to build it.
> > Now, how/where can I run 'make check' for gcc?
> > I do not see the GCC build tree.
>
> The gcc build tree is usually in the build/host-gcc-final in the buildroot
> build tree. But that's gcc version selected in the buildroot, you probably
> want a different version. Usually after the buildroot toolchain is ready I
> build gcc separately using binutils and sysroot produced by the buildroot.
> I have a few examples here:
>
>   http://wiki.osll.ru/doku.php/etc:users:jcmvbkbc:gcc-xtensa-call0
>
> Please note that you'd need to apply gcc part of the xtensa overlay to
> your gcc source for it to correctly generate code for that configuration.
>
> I've run the tests with the current gcc trunk and a lot of execution
> tests related to TLS (which is expected) and exceptions (which I
> didn't expect) are failing. I'm looking at it.
>

I'm not sure if I followed the instructions correctly:
make qemu_xtensa_lx60_nommu_defconfig
make all
which built:
./output/host/bin/xtensa-buildroot-uclinux-uclibc-gcc (which is 7.3)
then I tried to follow the wiki above:
export TOOLCHAIN=$PWD/output
PATH=$TOOLCHAIN/host/bin:$PATH /gcc/configure [...]

I also built qemu from the branch you mentioned,
and used the "run it on linux-user QEMU" section in the wiki

I see many execution errors, now realizing that I didn't
do what you said above: "apply gcc part of the xtensa overlay".
But what is this? Do you mean the patches in buildroot/packages/gcc/8.2.0 ?
I tried to apply 0004-gcc-xtensa-fix-NAND-code-in-xtensa_expand_atomic.patch
but patch says it's already applied (I'm using GCC trunk)

And while writing this reply, I'm just realizing that buildroot builds
for uclinux-uclibc-gcc, while the wiki uses linux-uclibc :(
Does the wiki need an update wrt target name?

> > > Also bFLT executable format is currently not supported for linux-user
> > > xtensa QEMU. The following branch adds that support:
> > > https://github.com/OSLL/qemu-xtensa/commits/xtensa-bflt
> > >
> > > qemu-xtensa built from this QEMU then may be registered as a binfmt
> > > handler for bFLT executable images allowing to run gcc tests that want
> > > to run target binaries.
> > Do you have the magic commands for this?
>
> If you build QEMU from the link above you can use the following command
> to register binfmt handler for bFLT binaries assuming that you've installed
> it into $QEMU_PREFIX:
>
>   sudo scripts/qemu-binfmt-conf.sh --qemu-path=$QEMU_PREFIX/bin --flat 
> 'xtensa'
>
> The --flat switch is not final, it will likely change before it's accepted to
> the QEMU mainline.
> --
> Thanks.
> -- Max


Re: how to build and test uClinux toolchains

2018-11-02 Thread Christophe Lyon
On Wed, 31 Oct 2018 at 16:43, Christophe Lyon
 wrote:
>
> On Fri, 26 Oct 2018 at 19:54, Max Filippov  wrote:
> >
> > On Fri, Oct 26, 2018 at 6:51 AM Christophe Lyon
> >  wrote:
> > > On Sun, 21 Oct 2018 at 04:06, Max Filippov  wrote:
> > > > Probably the easiest way to get all xtensa toolchain parts correctly is
> > > > by using an existing buildroot configuration. E.g. the following 
> > > > configuration
> > > > may be used to build uclinux xtensa toolchain for the dc233c core:
> > > > https://git.buildroot.net/buildroot/tree/configs/qemu_xtensa_lx60_nommu_defconfig
> > > >
> > > OK, thanks for your suggestion. I think I managed to build it.
> > > Now, how/where can I run 'make check' for gcc?
> > > I do not see the GCC build tree.
> >
> > The gcc build tree is usually in the build/host-gcc-final in the buildroot
> > build tree. But that's gcc version selected in the buildroot, you probably
> > want a different version. Usually after the buildroot toolchain is ready I
> > build gcc separately using binutils and sysroot produced by the buildroot.
> > I have a few examples here:
> >
> >   http://wiki.osll.ru/doku.php/etc:users:jcmvbkbc:gcc-xtensa-call0
> >
> > Please note that you'd need to apply gcc part of the xtensa overlay to
> > your gcc source for it to correctly generate code for that configuration.
> >
> > I've run the tests with the current gcc trunk and a lot of execution
> > tests related to TLS (which is expected) and exceptions (which I
> > didn't expect) are failing. I'm looking at it.
> >
>
> I'm not sure if I followed the instructions correctly:
> make qemu_xtensa_lx60_nommu_defconfig
> make all
> which built:
> ./output/host/bin/xtensa-buildroot-uclinux-uclibc-gcc (which is 7.3)
> then I tried to follow the wiki above:
> export TOOLCHAIN=$PWD/output
> PATH=$TOOLCHAIN/host/bin:$PATH /gcc/configure [...]
>
> I also built qemu from the branch you mentioned,
> and used the "run it on linux-user QEMU" section in the wiki
>
> I see many execution errors, now realizing that I didn't
> do what you said above: "apply gcc part of the xtensa overlay".
> But what is this? Do you mean the patches in buildroot/packages/gcc/8.2.0 ?
> I tried to apply 0004-gcc-xtensa-fix-NAND-code-in-xtensa_expand_atomic.patch
> but patch says it's already applied (I'm using GCC trunk)
>
> And while writing this reply, I'm just realizing that buildroot builds
> for uclinux-uclibc-gcc, while the wiki uses linux-uclibc :(
> Does the wiki need an update wrt target name?
>

I re-ran the wiki instructions with --target=xtensa-buildroot-uclinux-uclibc
and while the gcc/g++ results look mostly OK, the libstdc++ ones only show:
Running ...f/trunk/libstdc++-v3/testsuite/libstdc++-abi/abi.exp ...
ERROR: could not compile testsuite_abi.cc
ERROR: tcl error sourcing.../trunk/libstdc++-v3/testsuite/libstdc++-abi/abi.exp.
Running.../trunk/libstdc++-v3/testsuite/libstdc++-dg/conformance.exp ...
ERROR: could not compile testsuite_abi.cc
etc...

libstdc++.log shows many instances of:
.../trunk/libstdc++-v3/testsuite/util/testsuite_abi.cc: In function
'symbols create_symbols(const char*)':
.../trunk/libstdc++-v3/testsuite/util/testsuite_abi.cc:565: note:
non-delegitimized UNSPEC 3 found in variable location
.../trunk/libstdc++-v3/testsuite/util/testsuite_abi.cc: In function
'void examine_symbol(const char*, const char*)':
.../trunk/libstdc++-v3/testsuite/util/testsuite_abi.cc:355: note:
non-delegitimized UNSPEC 3 found in variable location

ERROR: tcl error sourcing
.../trunk/libstdc++-v3/testsuite/libstdc++-abi/abi.exp.
ERROR: could not compile testsuite_abi.cc
while executing
"error "could not compile $f""
(procedure "v3-build_support" line 62)
invoked from within
"v3-build_support"
(file ".../trunk/libstdc++-v3/testsuite/libstdc++-abi/abi.exp" line 34)
invoked from within
"source .../trunk/libstdc++-v3/testsuite/libstdc++-abi/abi.exp"
("uplevel" body line 1)
invoked from within
"uplevel #0 source .../trunk/libstdc++-v3/testsuite/libstdc++-abi/abi.exp"
invoked from within
"catch "uplevel #0 source $test_file_name""

Do you know what the problem is in my setup? Or with GCC trunk?

> > > > Also bFLT executable format is currently not supported for linux-user
> > > > xtensa QEMU. The following branch adds that support:
> > > > https://github.com/OSLL/qemu-xtensa/commits/xtensa-bflt
> > > >
> > > > qemu-xtensa built from this QEMU then may be registered as a

Re: how to build and test uClinux toolchains

2018-11-05 Thread Christophe Lyon
On Mon, 5 Nov 2018 at 21:49, Max Filippov  wrote:
>
> On Fri, Nov 2, 2018 at 3:29 AM Christophe Lyon
>  wrote:
> > I re-ran the wiki instructions with --target=xtensa-buildroot-uclinux-uclibc
> > and while the gcc/g++ results look mostly OK, the libstdc++ ones only show:
> > Running ...f/trunk/libstdc++-v3/testsuite/libstdc++-abi/abi.exp ...
> > ERROR: could not compile testsuite_abi.cc
> > ERROR: tcl error 
> > sourcing.../trunk/libstdc++-v3/testsuite/libstdc++-abi/abi.exp.
> > Running.../trunk/libstdc++-v3/testsuite/libstdc++-dg/conformance.exp ...
> > ERROR: could not compile testsuite_abi.cc
> > etc...
> >
> > libstdc++.log shows many instances of:
> > .../trunk/libstdc++-v3/testsuite/util/testsuite_abi.cc: In function
> > 'symbols create_symbols(const char*)':
> > .../trunk/libstdc++-v3/testsuite/util/testsuite_abi.cc:565: note:
> > non-delegitimized UNSPEC 3 found in variable location
> > .../trunk/libstdc++-v3/testsuite/util/testsuite_abi.cc: In function
> > 'void examine_symbol(const char*, const char*)':
> > .../trunk/libstdc++-v3/testsuite/util/testsuite_abi.cc:355: note:
> > non-delegitimized UNSPEC 3 found in variable location
>
> "non-delegitimized UNSPEC 3 found" is a note, not an error.
> There should also be an error.
>
That's what I thought, but I couldn't find it.

> > Do you know what the problem is in my setup? Or with GCC trunk?
>
> I ran make check on gcc trunk from 2018-10-20, libstdc++ testsuite
> works for me:
>
> === libstdc++ Summary ===
>
> # of expected passes            9889
> # of unexpected failures        89
> # of unexpected successes       4
> # of expected failures          79
> # of unresolved testcases       17
> # of unsupported tests          758
>

OK, thanks for the confirmation, I'm now re-building with the patches
from your previous email

> --
> Thanks.
> -- Max


Re: GCC turns &~ into | due to undefined bit-shift without warning

2019-03-20 Thread Christophe Lyon

On 20/03/2019 15:08, Moritz Strübe wrote:

Hey.

Am 11.03.2019 um 12:17 schrieb Jakub Jelinek:

On Mon, Mar 11, 2019 at 11:06:37AM +, Moritz Strübe wrote:


On 11.03.2019 at 10:14 Jakub Jelinek wrote:


You could build with -fsanitize=undefined, that would tell you at runtime you
have undefined behavior in your code (if SingleDiff ever has bit 0x20
set).


Yes, that helps. Unfortunately I'm on an embedded system, thus the code
size increase is just too big.


You can use -fsanitize-undefined-trap-on-error, which doesn't increase size too
much; it is less user-friendly, but still should catch the UB.




Wouldn't this fail to link? I thought the sanitizers need some runtime 
libraries which are only available under linux/macos/android. What do you mean 
by embedded? Isn't it arm-eabi?



Ok, I played around a bit. Interestingly, if I set -fsanitize=undefined and 
-fsanitize-undefined-trap-on-error, the compiler detects that it will always 
trap, and optimizes the code accordingly (the code after the trap is removed).* 
Which kind of brings me to David's argument: shouldn't the compiler warn if 
there is undefined behavior it certainly knows of?
I do assume though that fsanitize just injects the test code everywhere and 
relies on the compiler to remove it in unnecessary places. Would be nice, 
though. :)



Could you confirm in which version of the ST libraries you noticed this bug?
I'm told it was fixed on 23 March 2018.

Thanks,

Christophe



Cheers
Morty

*After fixing the code, it got too big to fit.







[arm] Too strict linker assert?

2019-04-09 Thread Christophe Lyon
Hi,

While building a newlib-based arm-eabi toolchain with
--with-multilib-list=rmprofile, I faced a linker assertion failure in
elf32_arm_merge_eabi_attributes (bfd/elf32-arm.c):
BFD_ASSERT (in_attr[Tag_ABI_HardFP_use].i == 0)

I traced this down to newlib's impure.o containing only data, and thus
GCC does not emit a .fpu directive when compiling impure.c.

When the linker merges impure.o's attributes with the other
contributions that already have
Tag_FP_arch, this assertion fails because in my multilib case (-mthumb
-march=armv7e-m+fp -mfloat-abi=softfp) all the object files have
  Tag_ABI_HardFP_use: SP only

Put differently, all objects but impure.o have
  Tag_ABI_HardFP_use: SP only
  Tag_FP_arch: VFPv4-D16
but impure.o has only:
  Tag_ABI_HardFP_use: SP only
(and no Tag_FP_arch)

Removing the linker assertion makes the build succeed, so I guess my
question is: should I submit a linker patch to remove the assert
because it is too strict, or should I find a way to make GCC emit the
needed .fpu directive?

Thanks,

Christophe


Re: [arm] Too strict linker assert?

2019-04-10 Thread Christophe Lyon
On Wed, 10 Apr 2019 at 00:30, Richard Earnshaw
 wrote:
>
> On 09/04/2019 13:26, Christophe Lyon wrote:
> > Hi,
> >
> > While building a newlib-based arm-eabi toolchain with
> > --with-multilib-list=rmprofile, I faced a linker assertion failure in
> > elf32_arm_merge_eabi_attributes (bfd/elf32-arm.c):
> > BFD_ASSERT (in_attr[Tag_ABI_HardFP_use].i == 0)
> >
> > I traced this down to newlib's impure.o containing only data, and thus
> > GCC does not emit a .fpu directive when compiling impure.c.
> >
> > When the linker merges impure.o's attributes with the other
> > contributions that already have
> > Tag_FP_arch, this assertion fails because in my multilib case (-mthumb
> > -march=armv7e-m+fp -mfloat-abi=softfp) all the object files have
> >   Tag_ABI_HardFP_use: SP only
> >
> > Put differently, all objects but impure.o have
> >   Tag_ABI_HardFP_use: SP only
> >   Tag_FP_arch: VFPv4-D16
> > but impure.o has only:
> >   Tag_ABI_HardFP_use: SP only
> > (and no Tag_FP_arch)
> >
> > Removing the linker assertion makes the build succeed, so I guess my
> > question is: should I submit a linker patch to remove the assert
> > because it is too strict, or should I find a way to make GCC emit the
> > needed .fpu directive?
> >
> > Thanks,
> >
> > Christophe
> >
>
> I think removing the assert will remove entirely the check that a user
> is not mixing code with incompatible ABIs.  So probably this is a bug.
>
> Which version of GCC were you using, and which version of binutils?  I
> thought I'd addressed this when doing the rework of the FPU option code;
> but perhaps I've missed something somewhere.  I'll check in more detail
> tomorrow.
>

This was with the binutils-2.28 branch from Apr 11th, 2017, GCC trunk
from Nov 15th, 2018 (r266188), and newlib master from Apr 1st, 2019.

However, upgrading to binutils master avoided the problem.

Thanks,

Christophe


Re: [arm] Too strict linker assert?

2019-04-10 Thread Christophe Lyon
On Wed, 10 Apr 2019 at 11:42, Richard Earnshaw (lists)
 wrote:
>
> On 10/04/2019 10:16, Christophe Lyon wrote:
> > On Wed, 10 Apr 2019 at 00:30, Richard Earnshaw
> >  wrote:
> >>
> >> On 09/04/2019 13:26, Christophe Lyon wrote:
> >>> Hi,
> >>>
> >>> While building a newlib-based arm-eabi toolchain with
> >>> --with-multilib-list=rmprofile, I faced a linker assertion failure in
> >>> elf32_arm_merge_eabi_attributes (bfd/elf32-arm.c):
> >>> BFD_ASSERT (in_attr[Tag_ABI_HardFP_use].i == 0)
> >>>
> >>> I traced this down to newlib's impure.o containing only data, and thus
> >>> GCC does not emit a .fpu directive when compiling impure.c.
> >>>
> >>> When the linker merges impure.o's attributes with the other
> >>> contributions that already have
> >>> Tag_FP_arch, this assertion fails because in my multilib case (-mthumb
> >>> -march=armv7e-m+fp -mfloat-abi=softfp) all the object files have
> >>>   Tag_ABI_HardFP_use: SP only
> >>>
> >>> Put differently, all objects but impure.o have
> >>>   Tag_ABI_HardFP_use: SP only
> >>>   Tag_FP_arch: VFPv4-D16
> >>> but impure.o has only:
> >>>   Tag_ABI_HardFP_use: SP only
> >>> (and no Tag_FP_arch)
> >>>
> >>> Removing the linker assertion makes the build succeed, so I guess my
> >>> question is: should I submit a linker patch to remove the assert
> >>> because it is too strict, or should I find a way to make GCC emit the
> >>> needed .fpu directive?
> >>>
> >>> Thanks,
> >>>
> >>> Christophe
> >>>
> >>
> >> I think removing the assert will remove entirely the check that a user
> >> is not mixing code with incompatible ABIs.  So probably this is a bug.
> >>
> >> Which version of GCC were you using, and which version of binutils?  I
> >> thought I'd addressed this when doing the rework of the FPU option code;
> >> but perhaps I've missed something somewhere.  I'll check in more detail
> >> tomorrow.
> >>
> >
> > This was with binutils-2.28-branch from Apr 11th, 2017  and GCC trunk
> > from Nov 15th, 2018 (r266188), newlib master from Apr 1st 2019.
> >
> > However, upgrading to binutils master avoided the problem.
> >
> > Thanks,
> >
> > Christophe
> >
>
> Digging through the archives it does look as though I reached the same
> conclusion as you did: that the assert is too harsh.  The patch was this
> one:
>
> https://sourceware.org/ml/binutils/2017-06/msg00090.html
>

Ha, thanks for the pointer, and sorry for the noise.

Christophe


Re: dg-extract-results broken since rev 268511, was Re: Status of 9.0.1 20190415 [trunk revision 270358] on x86_64-w64-mingw32

2019-04-16 Thread Christophe Lyon
On Tue, 16 Apr 2019 at 13:04, Rainer Emrich  wrote:
>
> Am 16.04.2019 um 11:59 schrieb Rainer Emrich:
> > Am 15.04.2019 um 20:12 schrieb Rainer Emrich:
> >> Am 15.04.2019 um 17:43 schrieb Rainer Emrich:
> >>> Am 15.04.2019 um 17:38 schrieb Jakub Jelinek:
>  On Mon, Apr 15, 2019 at 05:30:14PM +0200, Rainer Emrich wrote:
> > There seems to be a generic issue with the tests in gcc/testsuite. The
> > log files do not contain the logs.
> 
>  Perhaps contrib/dg-extract-results* misbehaved?
>  Can you look for the testsuite/g++*/g++.log.sep files?  Do they contain
>  everything?
> >> The *.log.sep files seem to be ok.
> >>
>  If yes, can you try to say
>  mv contrib/dg-extract-results.py{,.bad}
>  and retry, to see if there isn't a problem with the python version 
>  thereof?
> >> I will try this over the night.
> > The shell version of dg-extract-results does not work either.
> >
> > AFAIS, there were changes to the dg-extract-results script 5th of March.
> > Looks like these changes are causing the issue, but I'm not sure.
> >
> > What I can say, my setup works at least for the gcc-8 branch and used to
> > work in the past.
> I tested dg-extract-results.sh manually and found that the change from
> 4th of February, revision 268411 broke the log extraction. Easy to test
> with version from 23rd of September last year which works.
>
> I don't have the time to analyze the python version, but my bet, it's
> the same issue.
>
Hi,

Sorry for the breakage, I really wanted to improve those scripts.
Could you give me a reproducer, since we didn't notice problems in our
validations?

Thanks,

Christophe


Re: dg-extract-results broken since rev 268511, was Re: Status of 9.0.1 20190415 [trunk revision 270358] on x86_64-w64-mingw32

2019-04-16 Thread Christophe Lyon
On Tue, 16 Apr 2019 at 14:34, Rainer Emrich  wrote:
>
> Am 16.04.2019 um 14:10 schrieb Christophe Lyon:
> > On Tue, 16 Apr 2019 at 13:04, Rainer Emrich  
> > wrote:
> >>
> >> Am 16.04.2019 um 11:59 schrieb Rainer Emrich:
> >>> Am 15.04.2019 um 20:12 schrieb Rainer Emrich:
> >>>> Am 15.04.2019 um 17:43 schrieb Rainer Emrich:
> >>>>> Am 15.04.2019 um 17:38 schrieb Jakub Jelinek:
> >>>>>> On Mon, Apr 15, 2019 at 05:30:14PM +0200, Rainer Emrich wrote:
> >>>>>>> There seems to be a generic issue with the tests in gcc/testsuite. The
> >>>>>>> log files do not contain the logs.
> >>>>>>
> >>>>>> Perhaps contrib/dg-extract-results* misbehaved?
> >>>>>> Can you look for the testsuite/g++*/g++.log.sep files?  Do they contain
> >>>>>> everything?
> >>>> The *.log.sep files seem to be ok.
> >>>>
> >>>>>> If yes, can you try to say
> >>>>>> mv contrib/dg-extract-results.py{,.bad}
> >>>>>> and retry, to see if there isn't a problem with the python version 
> >>>>>> thereof?
> >>>> I will try this over the night.
> >>> The shell version of dg-extract-results does not work either.
> >>>
> >>> AFAIS, there were changes to the dg-extract-results script 5th of March.
> >>> Looks like these changes are causing the issue, but I'm not sure.
> >>>
> >>> What I can say, my setup works at least for the gcc-8 branch and used to
> >>> work in the past.
> >> I tested dg-extractresults.sh manually and found that the change from
> >> 4th of February, revision 268411 broke the log extraction. Easy to test
> >> with version from 23rd of September last year which works.
> >>
> >> I don't have the time to analyze the python version, but my bet, it's
> >> the same issue.
> >>
> > Hi,
> >
> > Sorry for the breakage, I really wanted to improve those scripts.
> > Could you give me a reproducer, since we didn't notice problems in our
> > validations?
> Hi Christophe,
>
> I executed dg-extract-results.sh manually in the gcc/testsuite
> directory after a complete testsuite run, which didn't give the correct
> results. Rev. 240429 gives the expected results, whereas rev 268511 fails.
> I'm on windows using msys2 with bash 4.4.23.
>
> I'm bootstrapping at the moment, but that's really slow on Windows. When
> the testsuite run is finished I try to assemble a reproducer. This will
> take a while.
>

OK, thanks! Do you mean the problem happens on Windows only?

> Thanks,
>
> Rainer
>


Re: dg-extract-results broken since rev 268511, was Re: Status of 9.0.1 20190415 [trunk revision 270358] on x86_64-w64-mingw32

2019-04-16 Thread Christophe Lyon
On Tue, 16 Apr 2019 at 17:36, Jakub Jelinek  wrote:
>
> On Tue, Apr 16, 2019 at 03:44:44PM +0200, Jakub Jelinek wrote:
> > I can't reproduce this on my Fedora 29 x86_64-linux bootstrap box though,
> > the *.log files are complete there.
> >
> > And I have no idea if it was introduced with your change or earlier.
>
> Actually, I managed to reproduce in a Fedora 31 chroot, in which I don't
> have /usr/bin/python installed (I think in Fedora 30+ there is
> /usr/bin/python2 and /usr/bin/python3 but not /usr/bin/python, at least not
> in the default buildroot).
>
> The changes to contrib/dg-extract-results.sh look wrong to me:
> --- contrib/dg-extract-results.sh   2018-04-25 09:40:40.139659386 +0200
> +++ contrib/dg-extract-results.sh   2019-03-05 21:49:34.471573434 +0100
> @@ -298,6 +298,8 @@ BEGIN {
>cnt=0
>print_using=0
>need_close=0
> +  has_timeout=0
> +  timeout_cnt=0
>  }
>  /^EXPFILE: / {
>expfiles[expfileno] = \$2
> @@ -329,16 +331,37 @@ BEGIN {
># Ugly hack for gfortran.dg/dg.exp
>if ("$TOOL" == "gfortran" && testname ~ /^gfortran.dg\/g77\//)
>  testname="h"testname
> +  if (\$1 == "WARNING:" && \$2 == "program" && \$3 == "timed" && (\$4 == 
> "out" || \$4 == "out.")) {
> +has_timeout=1
> +timeout_cnt=cnt
> +  } else {
> +  # Prepare timeout replacement message in case it's needed
> +timeout_msg=\$0
> +sub(\$1, "WARNING:", timeout_msg)
> +  }
>  }
>  /^$/ { if ("$MODE" == "sum") next }
>  { if (variant == curvar && curfile) {
>  if ("$MODE" == "sum") {
> -  printf "%s %08d|", testname, cnt >> curfile
> -  cnt = cnt + 1
> +  # Do not print anything if the current line is a timeout
> +  if (has_timeout == 0) {
> +# If the previous line was a timeout,
> +# insert the full current message without keyword
> +if (timeout_cnt != 0) {
> +  printf "%s %08d|%s program timed out.\n", testname, timeout_cnt, 
> timeout_msg >> curfile
> +  timeout_cnt = 0
> +  cnt = cnt + 1
> +}
> +printf "%s %08d|", testname, cnt >> curfile
> +cnt = cnt + 1
> +filewritten[curfile]=1
> +need_close=1
> +if (timeout_cnt == 0)
> +  print >> curfile
> +  }
> +
> +  has_timeout=0
>  }
> -filewritten[curfile]=1
> -need_close=1
> -print >> curfile
>} else
>  next
>  }
> First of all, I don't see why the WARNING: program timed out
> stuff should be handled in any way specially in -L mode, there is no sorting
> at all and all the lines go together.  But more importantly, the above

The "WARNING: program timed out" stuff needs to be handled specially
in non-L mode (when handling .sum), because in that case we are using
"sort", which used to put all "WARNING:" lines together before most of
the report.

> changes broke completely the -L mode, previously the filewritten, need_close
> and print lines were done for both sum and log modes, but now they are done
> only in the sum mode (and in that case only if has_timeout is 0, which is
> desirable).
>
I did check my patch against .sum and .log files, but it looks like my
tests were incomplete, sorry for that.

> I believe the following patch should fix it, but I don't actually have any
> WARNING: program timed out
> lines in my *.sep files in any of the last 12 bootstraps I have around.

You can just insert one such line in your .sum/.log manually, and
possibly replace a PASS with a FAIL to check that the WARNING and FAIL
are kept next to each other in the .sum (that was my original
intention).
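
For instance, hand-inserting a made-up pair of lines like:

WARNING: program timed out.
FAIL: gcc.dg/torture/some-test.c  -O2  execution test

into one of the .sum.sep files should be enough to exercise that path.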

Christophe

>
> Additionally, perhaps we should change dg-extract-results.sh, so that it
> doesn't try just python, but also python3?  I think in some distros
> /usr/bin/python even warns users that they should decide if they mean
> python2 or python3.
>
> 2019-04-16  Jakub Jelinek  
>
> * dg-extract-results.sh: Only handle WARNING: program timed out
> lines specially in "$MODE" == "sum".  Restore previous behavior
> for "$MODE" != "sum".  Clear has_timeout and timeout_cnt if in
> a different variant or curfile is empty.
> * dg-extract-results.py: Fix a typo.
>
> --- contrib/dg-extract-results.sh.jj2019-03-05 21:49:34.471573434 +0100
> +++ contrib/dg-extract-results.sh   2019-04-16 17:09:02.710004553 +0200
> @@ -331,13 +331,15 @@ BEGIN {
># Ugly hack for gfortran.dg/dg.exp
>if ("$TOOL" == "gfortran" && testname ~ /^gfortran.dg\/g77\//)
>  testname="h"testname
> -  if (\$1 == "WARNING:" && \$2 == "program" && \$3 == "timed" && (\$4 == 
> "out" || \$4 == "out.")) {
> -has_timeout=1
> -timeout_cnt=cnt
> -  } else {
> -  # Prepare timeout replacement message in case it's needed
> -timeout_msg=\$0
> -sub(\$1, "WARNING:", timeout_msg)
> +  if ("$MODE" == "sum") {
> +if (\$0 ~ /^WARNING: program timed out/) {
> +  has_timeout=1
> +  timeout_cnt=cnt
> +} else {
> +  # Prepare timeout replacement mess

Re: [testsuite] What's the expected behaviour of dg-require-effective-target shared?

2019-06-24 Thread Christophe Lyon
On Fri, 21 Jun 2019 at 16:28, Iain Sandoe  wrote:
>
> Hi Christophe,
>
> we’ve been looking at some cases where Darwin tests fail or pass unexpectedly 
> depending on
> options.  It came as a surprise to see it failing a test for shared support 
> (since it’s always
> supported shared libs).
>
> -
>
> It’s a long time ago, but in r216117 you added this to target-supports.
>
> # Return 1 if -shared is supported, as in no warnings or errors
> # emitted, 0 otherwise.
>
> proc check_effective_target_shared { } {
>     # Note that M68K has a multilib that supports -fpic but not
>     # -fPIC, so we need to check both.  We test with a program that
>     # requires GOT references.
>     return [check_no_compiler_messages shared executable {
>         extern int foo (void); extern int bar;
>         int baz (void) { return foo () + bar; }
>     } "-shared -fpic"]
> }
>
> 
>
> The thing is that this is testing two things:
>
> 1) if the target consumes -shared -fpic without warning
>
> 2) assuming that those cause a shared lib to be made it also tests that
> the target will allow a link of that to complete with undefined symbols.
>
> So Darwin *does* support  “-shared -fpic” and is very happy to make
> shared libraries.  However, it doesn’t (by default) allow  undefined symbols
> in the link.
>
> So my question is really about the intent of the test:
>
>   if the intent is to see if the target supports shared libs, then we should
>   arrange for Darwin to pass - either by hardwiring it (since all Darwin
>   versions do support shared) or by adding suitable options to suppress
>   the error.
>
>   if the intent is to check that the target supports linking a shared lib with
>  undefined external symbols, then perhaps we need a different test for the
>  “just supports shared libs”
>
> ===

The patch was posted in https://gcc.gnu.org/ml/gcc-patches/2014-10/msg00596.html
so it's really a matter of testing whether shared libs are supported.
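
That said, if the undefined-symbols error is the only obstacle on Darwin,
maybe something like this would do? (untested sketch; IIUC
-Wl,-undefined,dynamic_lookup is how one asks the Darwin linker to accept
undefined symbols)

proc check_effective_target_shared { } {
    # Test with a program that requires GOT references.  Darwin
    # rejects undefined symbols at link time by default, so relax
    # that there.
    set flags "-shared -fpic"
    if { [istarget *-*-darwin*] } {
        append flags " -Wl,-undefined,dynamic_lookup"
    }
    return [check_no_compiler_messages shared executable {
        extern int foo (void); extern int bar;
        int baz (void) { return foo () + bar; }
    } $flags]
}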

> (note, also the comment doesn’t match what’s actually done, but that’s prob
>  a cut & pasto).
Indeed looks like I cut & pasted too much from check_effective_target_fpic

Thanks,

Christophe

> thanks
> Iain
>
>
>
>


[testsuite] Effective-target depending on current compiler flags?

2019-09-10 Thread Christophe Lyon
Hi,

While looking at GCC validation results when configured for ARM
cortex-m33 by default, I noticed that
FAIL: gcc.target/arm/cmse/mainline/soft/cmse-5.c  -march=armv8-m.main
-mthumb -mfloat-abi=soft  -O0   scan-assembler msr\tAPSR_nzcvqg, lr

The corresponding lines in the testcase are:
/* { dg-final { scan-assembler "msr\tAPSR_nzcvq, lr" { target { ! arm_dsp } } } } */
/* { dg-final { scan-assembler "msr\tAPSR_nzcvqg, lr" { target arm_dsp } } } */

So the arm_dsp effective target is true and the test tries to match
APSR_nzcvqg, while APSR_nzcvq is generated, as expected.

There is an inconsistency because the testcase is compiled with
-march=armv8-m.main, while arm_dsp is not: like most effective
targets, it does not take the current cflags into account.
In the present case, the -march flag is added by cmse.exp in the
arguments to gcc-dg-runtest.

I tried to add [current_compiler_flags] to all arm effective targets
(for consistency, it wouldn't make sense to add it to arm_dsp only),
but then I noticed further problems...
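
Concretely, I mean changes of this kind (a sketch on arm_dsp; the actual
proc body in target-supports.exp may differ):

proc check_effective_target_arm_dsp { } {
    return [check_no_compiler_messages arm_dsp assembly {
        #ifndef __ARM_FEATURE_DSP
        #error not DSP
        #endif
        int i;
    } [current_compiler_flags]]
}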

For instance, in my configuration, when advsimd-intrinsics.exp is
executed, it tries
check_effective_target_arm_v8_2a_fp16_neon_hw
which fails, and then tries check_effective_target_arm_neon_fp16_ok
which succeeds and thus adds -mfloat-abi=softfp -mfp16-format=ieee to
additional_flags.

Since $additional_flags is passed to gcc-dg-runtest, these flags are
visible to current_compiler_flags and taken into account when
computing effective targets (since I modified them).

So... when computing arm_v8_2a_fp16_scalar_ok, it uses
-mfloat-abi=softfp -mfp16-format=ieee and doesn't need to add any flag
beyond -march=armv8.2-a+fp16.
So et_arm_v8_2a_fp16_scalar_flags contains -march=armv8.2-a+fp16 only.

Later on, while executing arm.exp, -mfloat-abi=softfp
-mfp16-format=ieee are not part of $DEFAULT_CFLAGS as used by
dg-runtest, but et_arm_v8_2a_fp16_scalar_flags is still in the cache
and it's not valid anymore.

So, before burying myself: is there already a way to make
effective targets take the current compiler flags into account, as
needed by gcc.target/arm/cmse/mainline/soft/cmse-5.c?

Not sure my explanation is clear enough :-)

Thanks,

Christophe


Re: [testsuite] Effective-target depending on current compiler flags?

2019-09-11 Thread Christophe Lyon
On Wed, 11 Sep 2019 at 11:56, Richard Sandiford
 wrote:
>
> Christophe Lyon  writes:
> > Hi,
> >
> > While looking at GCC validation results when configured for ARM
> > cortex-m33 by default, I noticed that
> > FAIL: gcc.target/arm/cmse/mainline/soft/cmse-5.c  -march=armv8-m.main
> > -mthumb -mfloat-abi=soft  -O0   scan-assembler msr\tAPSR_nzcvqg, lr
> >
> > The corresponding line in the testcase is (are):
> > /* { dg-final { scan-assembler "msr\tAPSR_nzcvq, lr" { target { !
> > arm_dsp } } } } */
> > /* { dg-final { scan-assembler "msr\tAPSR_nzcvqg, lr" { target arm_dsp } } 
> > } */
> >
> > So the arm_dsp effective target is true and the test tries to match
> > APSR_nzcvqg, while APSR_nzcvq is generated, as expected.
> >
> > There is an inconsistency because the testcase is compiled with
> > -march=armv8-m.main, while arm_dsp is not: like most effective
> > targets, it does not take the current cflags into account.
> > In the present case, the -march flag is added by cmse.exp in the
> > arguments to gcc-dg-runtest.
> >
> > I tried to add [current_compiler_flags] to all arm effective targets
> > (for consistency, it wouldn't make sense to add it to arm_dsp only),
> > but then I noticed further problems...
>
> Yeah, effective targets shouldn't depend on the compiler flags.
> They're supposed to be properties of the testing target (what it
> supports, what it enables by default, etc.) and are cached between
> tests that run with different options.
>
Thanks for the clarification/confirmation that it currently works as intended.

> CMSE isn't my area, so I don't know why the scan-assembler lines
> were written this way.  Is the { target arm_dsp } line there for
> cases in which a user-specified -march flag manages to override
> -march=armv8-m.main?
>
I've added Andre in cc:, as he originally wrote that test.

Thanks,

Christophe

> Thanks,
> Richard
>
> > For instance, in my configuration, when advsimd-intrinsics.exp is
> > executed, it tries
> > check_effective_target_arm_v8_2a_fp16_neon_hw
> > which fails, and then tries check_effective_target_arm_neon_fp16_ok
> > which succeeds and thus adds -mfloat-abi=softfp -mfp16-format=ieee to
> > additional_flags.
> >
> > Since $additional_flags is passed to gcc-dg-runtest, these flags are
> > visible by current_compiler_flags and taken into account when
> > computing effective_targets (since I modified them).
> >
> > So... when computing arm_v8_2a_fp16_scalar_ok, it uses
> > -mfloat-abi=softfp -mfp16-format=ieee and doesn't need to add any flag
> > beyond -march=armv8.2-a+fp16.
> > So et_arm_v8_2a_fp16_scalar_flags contains -march=armv8.2-a+fp16 only.
> >
> > Later on, while executing arm.exp, -mfloat-abi=softfp
> > -mfp16-format=ieee are not part of $DEFAULT_CFLAGS as used by
> > dg-runtest, but et_arm_v8_2a_fp16_scalar_flags is still in the cache
> > and it's not valid anymore.
> >
> > So. before burying myself, is there already a way to make
> > effective-targets take the current compiler flags into account as
> > needed in gcc.target/arm/cmse/mainline/soft/cmse-5.c ?
> >
> > Not sure my explanation is clear enough :-)
> >
> > Thanks,
> >
> > Christophe


Re: [testsuite] Effective-target depending on current compiler flags?

2019-09-20 Thread Christophe Lyon
On Wed, 11 Sep 2019 at 14:21, Christophe Lyon
 wrote:
>
> On Wed, 11 Sep 2019 at 11:56, Richard Sandiford
>  wrote:
> >
> > Christophe Lyon  writes:
> > > Hi,
> > >
> > > While looking at GCC validation results when configured for ARM
> > > cortex-m33 by default, I noticed that
> > > FAIL: gcc.target/arm/cmse/mainline/soft/cmse-5.c  -march=armv8-m.main
> > > -mthumb -mfloat-abi=soft  -O0   scan-assembler msr\tAPSR_nzcvqg, lr
> > >
> > > The corresponding line in the testcase is (are):
> > > /* { dg-final { scan-assembler "msr\tAPSR_nzcvq, lr" { target { !
> > > arm_dsp } } } } */
> > > /* { dg-final { scan-assembler "msr\tAPSR_nzcvqg, lr" { target arm_dsp } 
> > > } } */
> > >
> > > So the arm_dsp effective target is true and the test tries to match
> > > APSR_nzcvqg, while APSR_nzcvq is generated, as expected.
> > >
> > > There is an inconsistency because the testcase is compiled with
> > > -march=armv8-m.main, while arm_dsp is not: like most effective
> > > targets, it does not take the current cflags into account.
> > > In the present case, the -march flag is added by cmse.exp in the
> > > arguments to gcc-dg-runtest.
> > >
> > > I tried to add [current_compiler_flags] to all arm effective targets
> > > (for consistency, it wouldn't make sense to add it to arm_dsp only),
> > > but then I noticed further problems...
> >
> > Yeah, effective targets shouldn't depend on the compiler flags.
> > They're supposed to be properties of the testing target (what it
> > supports, what it enables by default, etc.) and are cached between
> > tests that run with different options.
> >
> Thanks for the clarification/confirmation that it currently works as intended.

Actually I've just realized that effective targets also depend on
compiler flags passed via RUNTESTFLAGS / -target-board, so this is
getting tricky...

>
> > CMSE isn't my area, so I don't know why the scan-assembler lines
> > were written this way.  Is the { target arm_dsp } line there for
> > cases in which a user-specified -march flag manages to override
> > -march=armv8-m.main?
> >
> I've added Andre in cc:, as he originally wrote that test.
>
If I add -march=armv8-m.main+dsp via RUNTESTFLAGS/-target-board,
arm_dsp succeeds, but the testcase (cmse-5.c) is compiled with
-march=armv8-m.main+dsp  [...]  -march=armv8-m.main;
the last flag comes from cmse.exp via add_options_for_arm_arch_v8m_main.

But IIRC the order between RUNTESTFLAGS and dg-options depends on the
dejagnu version (somewhere near 1.6)

Anyway, this makes me wonder what the supposed/recommended way of
running validations for a non-default CPU is:
1- configure --with-cpu=cortex-XX ; make check
2- configure (using the default cpu setting) ;
RUNTESTFLAGS=-mcpu=cortex-XX make check

I use (1), but I think most others use (2)?

Thanks,

Christophe

> Thanks,
>
> Christophe
>
> > Thanks,
> > Richard
> >
> > > For instance, in my configuration, when advsimd-intrinsics.exp is
> > > executed, it tries
> > > check_effective_target_arm_v8_2a_fp16_neon_hw
> > > which fails, and then tries check_effective_target_arm_neon_fp16_ok
> > > which succeeds and thus adds -mfloat-abi=softfp -mfp16-format=ieee to
> > > additional_flags.
> > >
> > > Since $additional_flags is passed to gcc-dg-runtest, these flags are
> > > visible by current_compiler_flags and taken into account when
> > > computing effective_targets (since I modified them).
> > >
> > > So... when computing arm_v8_2a_fp16_scalar_ok, it uses
> > > -mfloat-abi=softfp -mfp16-format=ieee and doesn't need to add any flag
> > > beyond -march=armv8.2-a+fp16.
> > > So et_arm_v8_2a_fp16_scalar_flags contains -march=armv8.2-a+fp16 only.
> > >
> > > Later on, while executing arm.exp, -mfloat-abi=softfp
> > > -mfp16-format=ieee are not part of $DEFAULT_CFLAGS as used by
> > > dg-runtest, but et_arm_v8_2a_fp16_scalar_flags is still in the cache
> > > and it's not valid anymore.
> > >
> > > So, before burying myself, is there already a way to make
> > > effective-targets take the current compiler flags into account as
> > > needed in gcc.target/arm/cmse/mainline/soft/cmse-5.c ?
> > >
> > > Not sure my explanation is clear enough :-)
> > >
> > > Thanks,
> > >
> > > Christophe


Re: How to test aarch64 when building a cross-compiler?

2019-11-25 Thread Christophe Lyon
On Mon, 25 Nov 2019 at 20:17, Andrew Dean via gcc  wrote:
>
> Based on https://www.gnu.org/software/hurd/hurd/glibc.html, I'm using 
> glibc/scripts/build-many-glibcs.py targeting aarch64-linux-gnu as so:
>
> build-many-glibcs.py build_dir checkout --keep all
>
> build-many-glibcs.py build_dir host-libraries --keep all -j 12
>
> build-many-glibcs.py build_dir compilers aarch64-linux-gnu --keep all -j 12 
> --full-gcc
> build-many-glibcs.py build_dir glibcs aarch64-linux-gnu --keep all -j 12
>
> This completes successfully. However, when I then try to run the gcc tests 
> like so:
> runtest --outdir . --tool gcc --srcdir /path/to/gcc/gcc/testsuite aarch64.exp 
> --target aarch64-linux-gnu --target_board aarch64-sim --tool_exec 
> /path_to/build_dir/install/compilers/aarch64-linux-gnu/bin/aarch64-glibc-linux-gnu-gcc
>  --verbose -v
>
> I get errors like this:
>
> aarch64-glibc-linux-gnu-gcc: fatal error: cannot read spec file 
> 'rdimon.specs': No such file or directory
>
> I can see that the rdimon.specs flag is added based on this line in 
> aarch64-sim.exp:

Where does aarch64-sim.exp come from?

>
> set_board_info ldflags  "[libgloss_link_flags] [newlib_link_flags] 
> -specs=rdimon.specs"
>
I think this is for baremetal/newlib targets, i.e. aarch64-elf, not for
aarch64-linux-gnu.

> I've tried searching for how to address this, but so far unsuccessfully. Does 
> anybody know what I'm missing here?
>
> Thanks,
> Andrew


[ARM] LLVM's -arm-assume-misaligned-load-store equivalent in GCC?

2020-01-07 Thread Christophe Lyon
Hi,

I've received a support request where GCC generates strd/ldrd which
require aligned memory addresses, while the user code actually
provides sub-aligned pointers.

The sample code is derived from CMSIS:
#define __SIMD32_TYPE int
#define __SIMD32(addr) (*(__SIMD32_TYPE **) & (addr))

void foo(short *pDst, int in1, int in2) {
   *__SIMD32(pDst)++ = in1;
   *__SIMD32(pDst)++ = in2;
}

compiled with arm-none-eabi-gcc -mcpu=cortex-m7 CMSIS.c -S -O2
generates:
foo:
        strd    r1, r2, [r0]
        bx      lr

Using -mno-unaligned-access of course makes no change, since the code
is lying to the compiler by casting short* to int*.
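
For what it's worth, a standards-friendly rewrite would go through
memcpy instead of the pointer cast; this is only a sketch (foo_safe is
a made-up name, not part of CMSIS), but it removes the false alignment
promise:

#include <string.h>

/* Sketch only: an alignment-safe variant of foo(). memcpy carries no
   alignment promise, so GCC has to use accesses that are safe for the
   actual 2-byte alignment instead of strd.  */
void foo_safe(short *pDst, int in1, int in2)
{
    memcpy(pDst, &in1, sizeof in1);                      /* bytes 0..3 */
    memcpy((char *)pDst + sizeof in1, &in2, sizeof in2); /* bytes 4..7 */
}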

However, LLVM has -arm-assume-misaligned-load-store which disables
generation of ldrd/strd in such cases:
https://reviews.llvm.org/D17015?id=48020

Would some equivalent be acceptable in GCC? I have a small patch that
seems to work.

Thanks,

Christophe


Re: [ARM] LLVM's -arm-assume-misaligned-load-store equivalent in GCC?

2020-01-07 Thread Christophe Lyon
On Tue, 7 Jan 2020 at 17:06, Richard Earnshaw (lists)
 wrote:
>
> On 07/01/2020 15:57, Christophe Lyon wrote:
> > Hi,
> >
> > I've received a support request where GCC generates strd/ldrd which
> > require aligned memory addresses, while the user code actually
> > provides sub-aligned pointers.
> >
> > The sample code is derived from CMSIS:
> > #define __SIMD32_TYPE int
> > #define __SIMD32(addr) (*(__SIMD32_TYPE **) & (addr))
> >
> > void foo(short *pDst, int in1, int in2) {
> > *__SIMD32(pDst)++ = in1;
> > *__SIMD32(pDst)++ = in2;
> > }
> >
> > compiled with arm-none-eabi-gcc -mcpu=cortex-m7 CMSIS.c -S -O2
> > generates:
> > foo:
> >  strd    r1, r2, [r0]
> >  bx      lr
> >
> > Using -mno-unaligned-access of course makes no change, since the code
> > is lying to the compiler by casting short* to int*.
> >
> > However, LLVM has -arm-assume-misaligned-load-store which disables
> > generation of ldrd/strd in such cases:
> > https://reviews.llvm.org/D17015?id=48020
> >
> > Would some equivalent be acceptable in GCC? I have a small patch that
> > seems to work.
> >
> > Thanks,
> >
> > Christophe
> >
>
> It sounds ill-conceived to me.  Why just this case (ldrd/strd)?  What
> about cases where we use ldm/stm which also can't tolerate misaligned
> accesses?

Sorry, I over-simplified. That would avoid generating ldrd/strd/ldm/stm.

>
> Unless the conditions for the option are well-defined, I don't think we
> should be doing things like this.  In the long term it just leads to
> more bugs being reported when the user is bitten by their own wrong code.
>
Indeed.
The thing (as usual) is that users report that "it works with other
compilers"


> R.


Re: [ARM] LLVM's -arm-assume-misaligned-load-store equivalent in GCC?

2020-01-07 Thread Christophe Lyon
On Tue, 7 Jan 2020 at 17:18, Marc Glisse  wrote:
>
> On Tue, 7 Jan 2020, Christophe Lyon wrote:
>
> > I've received a support request where GCC generates strd/ldrd which
> > require aligned memory addresses, while the user code actually
> > provides sub-aligned pointers.
> >
> > The sample code is derived from CMSIS:
> > #define __SIMD32_TYPE int
> > #define __SIMD32(addr) (*(__SIMD32_TYPE **) & (addr))
> >
> > void foo(short *pDst, int in1, int in2) {
> >   *__SIMD32(pDst)++ = in1;
> >   *__SIMD32(pDst)++ = in2;
> > }
> >
> > compiled with arm-none-eabi-gcc -mcpu=cortex-m7 CMSIS.c -S -O2
> > generates:
> > foo:
> >    strd    r1, r2, [r0]
> >    bx      lr
> >
> > Using -mno-unaligned-access of course makes no change, since the code
> > is lying to the compiler by casting short* to int*.
>
> If the issue is as well isolated as this, can't they just edit the code?
>
> typedef int __SIMD32_TYPE __attribute__((aligned(1)));
>

That type is defined by a macro in CMSIS's arm_math.h.
I think users don't want to play tricks with such libraries,
but I could try to check with them if something around these lines
would be acceptable.
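
For completeness, here is what the reduced test case looks like with
Marc's suggestion applied (a sketch, assuming the typedef simply
replaces the #define from arm_math.h):

typedef int __SIMD32_TYPE __attribute__((aligned(1)));
#define __SIMD32(addr) (*(__SIMD32_TYPE **) & (addr))

/* Same body as before; the aligned(1) typedef tells GCC the pointee
   may be under-aligned, so it emits the two "str ... @ unaligned"
   instructions quoted below instead of one strd.  */
void foo(short *pDst, int in1, int in2) {
    *__SIMD32(pDst)++ = in1;
    *__SIMD32(pDst)++ = in2;
}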

> gets
>
> str r1, [r0]    @ unaligned
> str r2, [r0, #4]    @ unaligned
>
> instead of
>
> strd    r1, r2, [r0]
>
> --
> Marc Glisse


Re: [ARM] LLVM's -arm-assume-misaligned-load-store equivalent in GCC?

2020-01-09 Thread Christophe Lyon
On Tue, 7 Jan 2020 at 17:33, Christophe Lyon  wrote:
>
> On Tue, 7 Jan 2020 at 17:18, Marc Glisse  wrote:
> >
> > On Tue, 7 Jan 2020, Christophe Lyon wrote:
> >
> > > I've received a support request where GCC generates strd/ldrd which
> > > require aligned memory addresses, while the user code actually
> > > provides sub-aligned pointers.
> > >
> > > The sample code is derived from CMSIS:
> > > #define __SIMD32_TYPE int
> > > #define __SIMD32(addr) (*(__SIMD32_TYPE **) & (addr))
> > >
> > > void foo(short *pDst, int in1, int in2) {
> > >   *__SIMD32(pDst)++ = in1;
> > >   *__SIMD32(pDst)++ = in2;
> > > }
> > >
> > > compiled with arm-none-eabi-gcc -mcpu=cortex-m7 CMSIS.c -S -O2
> > > generates:
> > > foo:
> > >    strd    r1, r2, [r0]
> > >    bx      lr
> > >
> > > Using -mno-unaligned-access of course makes no change, since the code
> > > is lying to the compiler by casting short* to int*.
> >
> > If the issue is as well isolated as this, can't they just edit the code?
> >
> > typedef int __SIMD32_TYPE __attribute__((aligned(1)));
> >
>
> That type is defined by a macro in CMSIS's arm_math.h.
> I think users don't want to play tricks with such libraries,
> but I could try to check with them if something around these lines
> would be acceptable.
>

Actually I got a confirmation of what I suspected: the offending function foo()
is part of ARM CMSIS libraries, although the users are able to recompile them,
they don't want to modify that source code. Having a compilation option to
avoid generating problematic code sequences would be OK for them.

So from the user's perspective, the wrong code is part of a 3rd party library
which they can recompile but do not want to modify.


> > gets
> >
> > str r1, [r0]    @ unaligned
> > str r2, [r0, #4]    @ unaligned
> >
> > instead of
> >
> > strd    r1, r2, [r0]
> >
> > --
> > Marc Glisse


Re: Getting spurious FAILS in testsuite?

2017-06-08 Thread Christophe Lyon
On 8 June 2017 at 11:57, Georg-Johann Lay  wrote:
> On 05.06.2017 18:25, Jim Wilson wrote:
>>
>> On 06/01/2017 05:59 AM, Georg-Johann Lay wrote:
>>>
>>> Hi, when I am running the gcc testsuite in $builddir/gcc then
>>> $ make check-gcc RUNTESTFLAGS='ubsan.exp'
>>> comes up with spurious fails.
>>
>>
>> This was discussed before, and the suspicion was that it was a linux
>> kernel bug.  There were multiple kernel fixes pointed at, it wasn't
>> clear which one was required to fix it.
>>
>> I have Ubuntu 16.04 LTS on my laptop, and I see the problem.  I can't
>> run the ubsan testsuites with -j factor greater than one and get
>> reproducible results.  There may also be other ways to trigger the
>> problem.
>>
>> See for instance the thread
>> https://gcc.gnu.org/ml/gcc/2016-07/msg00117.html
>> The first message in the thread from Andrew Pinski mentions that the log
>> output is corrupted from apparent buffer overflow.
>>
>> Jim
>
>
>
> I have "Ubuntu 16.04.2 LTS".
>
> Asking this at DejaGNU's, I got the following pointer:
>
> https://lists.gnu.org/archive/html/dejagnu/2016-03/msg00034.html
>
> AFAIU there is a problem separating stdout and stderr?
>

Be careful, I'm not a dejagnu maintainer/developer :-)
I just meant to say I had "similar" problems, but according to this
thread, I'm not the only one :(

> Johann
>
>
>


Re: GCC Buildbot

2017-09-21 Thread Christophe Lyon
On 20 September 2017 at 17:01, Paulo Matos  wrote:
> Hi all,
>
> I am internally running buildbot for a few projects, including one for a
> simple gcc setup for a private port. After some discussions with David
> Edelsohn at the last couple of Cauldrons, who told me this might be
> interesting for the community in general, I have contacted Sergio DJ
> with a few questions on his buildbot configuration for GDB. I then
> stripped out his configuration and transformed it into one from GCC,
> with a few private additions and ported it to the most recent buildbot
> version nine (which is numerically 0.9.x).
>

That's something I'd have liked to discuss at the Cauldron, but I
couldn't attend.

> To make a long story short: https://gcc-buildbot.linki.tools
> With brief documentation in: https://linkitools.github.io/gcc-buildbot
> and configuration in: https://github.com/LinkiTools/gcc-buildbot
>
> Now, this is still pretty raw but it:
> * Configures a fedora x86_64 for C, C++ and ObjectiveC (./configure
> --disable-multilib)
> * Does an incremental build
> * Runs all tests
> * Grabs the test results and stores them as properties
> * Creates a tarball of the sum and log files from the testsuite
> directory and uploads them
>
> This mail's intention is to gauge the interest of having a buildbot for
> GCC. Buildbot is a generic Python framework to build a test framework so
> the possibilities are pretty much endless as all workflows are
> programmed in Python and with buildbot nine the interface is also
> modifiable, if required.

I think there is no question about the interest of such a feature. It's almost
mandatory nowadays.

FYI, I've been involved in some "bots" for GCC for the past 4-5 years.
Our interest is in the ARM and AArch64 targets.

I don't want to start a Buildbot vs Jenkins vs something else war,
but I can share my experience. I did look at Buildbot, including when
the GDB guys started their own, but I must admit that I have trouble
with Python ;-)

A general warning would be: avoid sharing resources, it's always
a cause of trouble.

In ST, I stopped using my team's Jenkins instance because it
was overloaded, needed to be restarted at inconvenient times, ...
I'm now using a nice crontab :-)
Still in ST, I am using our Compute Farm, which is a large number
of x86_64 servers, where you submit batch jobs, wait, then parse
the results, and the workspace is deleted upon job completion.
I have to cope with various rules, to have a decent throughput,
and minimize pending time as much as possible.

Yet, and probably because the machines are shared with other users
running large (much larger?) programs at the same time, I have to face
random failures (processes are killed randomly, interrupted system calls,
etc). Trying to handle these problems gracefully is very time consuming.

I upload the results on a Linaro server, so that I can share them
when I report a regression. For disk space reasons, I currently
keep about 2 months of results. For the trunk:
http://people.linaro.org/~christophe.lyon/cross-validation/gcc/trunk/


In Linaro, we use Jenkins, and a few dedicated x86_64 builders
as well as arm and aarch64 builders and test machines. We have
much less cpu power than what I can currently use in ST, so
we run less builds, and less configurations. But even there we have
to face a few random test results (mostly when threads and libgomp
are involved).
https://ci.linaro.org/view/tcwg-ci/job/tcwg-upstream-monitoring/

These random false failures have been preventing us from sending
results automatically.

>
> If this is something of interest, then we will need to understand what
> is required, among those:
>
> - which machines we can use as workers: we certainly need more worker
> (previously known as slave) machines to test GCC in different
> archs/configurations;

To cover various archs, it may be more practical to build cross-compilers,
using "cheap" x86_64 builders, and relying on qemu or other simulators
to run the tests. I don't think the GCC compute farm can offer powerful
enough machines for all the archs we want to test.

It's not as good as using native hardware, but this is often faster.
And it does not prevent from using native hardware for weekly
bootstraps for instance.

> - what kind of build configurations do we need and what they should do:
> for example, do we want to build gcc standalone against system (the one
> installed in the worker) binutils, glibc, etc or do we want a builder to
> bootstrap everything?

Using the system tools is OK for native builders, maybe not when building
cross-compilers.

Then I think it's way safer to stick to given binutils/glibc/newlib versions
and monitor only gcc changes. There are already frequent regressions,
and it's easier to be sure a regression is related to gcc changes only.

And have a mechanism to upgrade such components after checking
the impact on the gcc testsuite.

In Linaro we have a job tracking all master branches, it is almost
always red :(

> 

Re: GCC Buildbot Update - Definition of regression

2017-10-11 Thread Christophe Lyon
On 11 October 2017 at 08:34, Paulo Matos  wrote:
>
>
> On 10/10/17 23:25, Joseph Myers wrote:
>> On Tue, 10 Oct 2017, Paulo Matos wrote:
>>
>>> new test -> FAIL; New test starts as fail
>>
>> No, that's not a regression, but you might want to treat it as one (in the
>> sense that it's a regression at the higher level of "testsuite run should
>> have no unexpected failures", even if the test in question would have
>> failed all along if added earlier and so the underlying compiler bug, if
>> any, is not a regression).  It should have human attention to classify it
>> and either fix the test or XFAIL it (with issue filed in Bugzilla if a
>> bug), but it's not a regression.  (Exception: where a test failing results
>> in its name changing, e.g. through adding "(internal compiler error)".)
>>
>
> When someone adds a new test to the testsuite, isn't it supposed to not
> FAIL? If it does FAIL, shouldn't this be considered a regression?
>
> Now, the danger is that since regressions are comparisons with previous
> run something like this would happen:
>
> run1:
> ...
> FAIL: foo.c ; new test
> ...
>
> run1 fails because new test entered as a FAIL
>
> run2:
> ...
> FAIL: foo.c
> ...
>
> run2 succeeds because there are no changes.
>
> For this reason all of these issues need to be taken care of straight away
> or they become part of the 'normal' status and no more failures are
> issued... unless of course a more complex regression analysis is
> implemented.
>
Agreed.

> Also, when I say run1 fails or succeeds, this is just the term I use to
> display red/green in the buildbot interface for a given build, not
> necessarily what I expect the process will do.
>
>>
>> My suggestion is:
>>
>> PASS -> FAIL is an unambiguous regression.
>>
>> Anything else -> FAIL and new FAILing tests aren't regressions at the
>> individual test level, but may be treated as such at the whole testsuite
>> level.
>>
>> Any transition where the destination result is not FAIL is not a
>> regression.
>>

FWIW, we consider regressions:
* any->FAIL because we don't want such a regression at the whole testsuite level
* any->UNRESOLVED for the same reason
* {PASS,UNSUPPORTED,UNTESTED,UNRESOLVED}-> XPASS
* new XPASS
* XFAIL disappears (may mean that a testcase was removed, worth a manual check)
* ERRORS



>> ERRORs in the .sum or .log files should be watched out for as well,
>> however, as sometimes they may indicate broken Tcl syntax in the
>> testsuite, which may cause many tests not to be run.
>>
>> Note that the test names that come after PASS:, FAIL: etc. aren't unique
>> between different .sum files, so you need to associate tests with a tuple
>> (.sum file, test name) (and even then, sometimes multiple tests in a .sum
>> file have the same name, but that's a testsuite bug).  If you're using
>> --target_board options that run tests for more than one multilib in the
>> same testsuite run, add the multilib to that tuple as well.
>>
>
> Thanks for all the comments. Sounds sensible.
> By not being unique, you mean between languages?
Yes, but not only, as Joseph mentioned above.

You have the obvious example of c-c++-common/*san tests, which are
common to gcc and g++.

> I assume that two gcc.sum from different builds will always refer to the
> same test/configuration when referring to (for example):
> PASS: gcc.c-torture/compile/2105-1.c   -O1  (test for excess errors)
>
> In this case, I assume that "gcc.c-torture/compile/2105-1.c   -O1
> (test for excess errors)" will always be referring to the same thing.
>
In gcc.sum, I can see 4 occurrences of
PASS: gcc.dg/Werror-13.c  (test for errors, line )

Actually, there are quite a few others like that

Christophe

> --
> Paulo Matos


Re: GCC Buildbot Update - Definition of regression

2017-10-11 Thread Christophe Lyon
On 11 October 2017 at 11:03, Paulo Matos  wrote:
>
>
> On 11/10/17 10:35, Christophe Lyon wrote:
>>
>> FWIW, we consider regressions:
>> * any->FAIL because we don't want such a regression at the whole testsuite 
>> level
>> * any->UNRESOLVED for the same reason
>> * {PASS,UNSUPPORTED,UNTESTED,UNRESOLVED}-> XPASS
>> * new XPASS
>> * XFAIL disappears (may mean that a testcase was removed, worth a manual 
>> check)
>> * ERRORS
>>
>
> That's certainly stricter than what it was proposed by Joseph. I will
> run a few tests on historical data to see what I get using both approaches.
>
>>
>>
>>>> ERRORs in the .sum or .log files should be watched out for as well,
>>>> however, as sometimes they may indicate broken Tcl syntax in the
>>>> testsuite, which may cause many tests not to be run.
>>>>
>>>> Note that the test names that come after PASS:, FAIL: etc. aren't unique
>>>> between different .sum files, so you need to associate tests with a tuple
>>>> (.sum file, test name) (and even then, sometimes multiple tests in a .sum
>>>> file have the same name, but that's a testsuite bug).  If you're using
>>>> --target_board options that run tests for more than one multilib in the
>>>> same testsuite run, add the multilib to that tuple as well.
>>>>
>>>
>>> Thanks for all the comments. Sounds sensible.
>>> By not being unique, you mean between languages?
>> Yes, but not only as Joseph mentioned above.
>>
>> You have the obvious example of c-c++-common/*san tests, which are
>> common to gcc and g++.
>>
>>> I assume that two gcc.sum from different builds will always refer to the
>>> same test/configuration when referring to (for example):
>>> PASS: gcc.c-torture/compile/2105-1.c   -O1  (test for excess errors)
>>>
>>> In this case, I assume that "gcc.c-torture/compile/2105-1.c   -O1
>>> (test for excess errors)" will always be referring to the same thing.
>>>
>> In gcc.sum, I can see 4 occurrences of
>> PASS: gcc.dg/Werror-13.c  (test for errors, line )
>>
>> Actually, there are quite a few others like that
>>
>
> That actually surprised me.
>
> I also see:
> PASS: gcc.dg/Werror-13.c  (test for errors, line )
> PASS: gcc.dg/Werror-13.c  (test for errors, line )
> PASS: gcc.dg/Werror-13.c  (test for errors, line )
> PASS: gcc.dg/Werror-13.c  (test for errors, line )
>
> among others like it. Looks like a line number is missing?
>
> In any case, it feels like the code I have to track this down needs to
> be improved.
>
We had to derive our scripts from the ones in contrib/ because these
failed to handle some cases (e.g. when the same test reports
both PASS and FAIL; yes, it does happen).

You can have a look at
https://git.linaro.org/toolchain/gcc-compare-results.git/
where compare_tests is a patched version of the contrib/ script;
it calls the main perl script (which is not the prettiest thing :-)

Christophe

> --
> Paulo Matos


Re: GCC Buildbot Update

2017-12-14 Thread Christophe Lyon
On 14 December 2017 at 09:56, Paulo Matos  wrote:
> Hello,
>
> Apologies for the delay on the update. It was my plan to do an update on
> a monthly basis but it slipped by a couple of weeks.
>
Hi,

Thanks for the update!


> The current status is:
>
> *Workers:*
>
> - x86_64
>
> 2 workers from CF (gcc16 and gcc20) up and running;
> 1 worker from my farm (jupiter-F26) up and running;
>
> 2 broken CF (gcc75 and gcc76) - the reason for the brokenness is that
> the machines work well but all outgoing ports except the git port
> (9418, if not mistaken) are blocked. This means that not only can we not
> svn co gcc, but we also can't connect a worker to the master through port 9918. I
> have contacted the cf admin but the reply was that nothing can be done
> as they don't really own the machine. They seemed to have relayed the
> request to the machine owners.
>
> - aarch64
>
> I got an email suggesting I add some aarch64 workers so I did:
> 4 workers from CF (gcc113, gcc114, gcc115 and gcc116);
>
Great, I thought the CF machines were reserved for developers.
Good news you could add builders on them.

> *Builds:*
>
> As before we have the full build and the incremental build. Both enabled
> for x86_64 and aarch64, except they are currently failing for aarch64
> (more on that later).
>
> The full build is triggered on Daily bump commit and the incremental
> build is triggered for each commit.
>
> The problem with this setup is that the incremental builder takes too
> long to run the tests. Around 1h30m on CF machines for x86_64.
>
> Segher Boessenkool sent me a patch to disable guality and prettyprinters
> which coupled with --disable-gomp at configure time was supposed to make
> things much faster. I have added this as the Fast builder, except this
> is failing during the test runs:
> unable to alloc 389376 bytes
> /bin/bash: line 21: 32472 Aborted `if [ -f
> ${srcdir}/../dejagnu/runtest ] ; then echo ${srcdir}/../dejagnu/runtest
> ; else echo runtest; fi` --tool gcc
> /bin/bash: fork: Cannot allocate memory
> make[3]: [check-parallel-gcc] Error 254 (ignored)
> make[3]: execvp: /bin/bash: Cannot allocate memory
> make[3]: [check-parallel-gcc_1] Error 127 (ignored)
> make[3]: execvp: /bin/bash: Cannot allocate memory
> make[3]: [check-parallel-gcc_1] Error 127 (ignored)
> make[3]: execvp: /bin/bash: Cannot allocate memory
> make[3]: *** [check-parallel-gcc_1] Error 127
>
>
> However, something interesting is happening here since the munin
> interface for gcc16 doesn't show the machine running out of memory:
> https://cfarm.tetaneutral.net/munin/gccfarm/gcc16/memory.html
> (something confirmed by the cf admins)
>
> The aarch64 build is failing as mentioned earlier. If you check the logs:
> https://gcc-buildbot.linki.tools/#/builders/5/builds/10
> the problem seems to be the assembler issuing:
> Assembler messages:
> Error: unknown architecture `armv8.1-a'
> Error: unrecognized option -march=armv8.1-a
>
>
> If I go to the machines and check the versions I get:
> pmatos@gcc115:~/gcc-8-20171203_BUILD$ as --version
> GNU assembler (GNU Binutils for Ubuntu) 2.24
> Copyright 2013 Free Software Foundation, Inc.
> This program is free software; you may redistribute it under the terms of
> the GNU General Public License version 3 or later.
> This program has absolutely no warranty.
> This assembler was configured for a target of `aarch64-linux-gnu'.
>
> pmatos@gcc115:~/gcc-8-20171203_BUILD$ gcc --version
> gcc (Ubuntu/Linaro 4.8.4-2ubuntu1~14.04.3) 4.8.4
> Copyright (C) 2013 Free Software Foundation, Inc.
> This is free software; see the source for copying conditions.  There is NO
> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
>
> pmatos@gcc115:~/gcc-8-20171203_BUILD$ as -march=armv8.1-a
> Assembler messages:
> Error: unknown architecture `armv8.1-a'
>
> Error: unrecognized option -march=armv8.1-a
>
> However, if I run a compiler build manually with just:
>
> $ configure --disable-multilib
> $ nice -n 19 make -j4 all
>
> This compiles just fine. So I am at the moment attempting to investigate
> what might cause the difference between what buildbot does and what I do
> through ssh.
>
I suspect you are hitting a bug introduced recently, and fixed by:
https://gcc.gnu.org/ml/gcc-patches/2017-12/msg00434.html

> *Reporters:*
>
> There is a single reporter which is a irc bot currently silent.
>
> *Regression analysis:*
>
> This is one of the most important issues to tackle and I have a solution
> in a branch regression-testing :
> https://github.com/LinkiTools/gcc-buildbot/tree/regression-testing
>
> using jamais-vu from David Malcolm to analyze the regressions.
> It needs some more testing and I should be able to get it working still
> this year.
>
Great

> *LNT:*
>
> I had mentioned I wanted to setup an interface which would allow for
> easy visibility of test failures, time taken to build/test, etc.
> Initially I thought a stack of influx+grafana would be a good idea, but
> was pointed ou

Re: GCC Buildbot Update

2017-12-15 Thread Christophe Lyon
On 15 December 2017 at 10:19, Paulo Matos  wrote:
>
>
> On 14/12/17 21:32, Christophe Lyon wrote:
>> Great, I thought the CF machines were reserved for developers.
>> Good news you could add builders on them.
>>
>
> Oh. I have seen similar things happening on CF machines so I thought it
> was not a problem. I have never specifically asked for permission.
>
>>> pmatos@gcc115:~/gcc-8-20171203_BUILD$ as -march=armv8.1-a
>>> Assembler messages:
>>> Error: unknown architecture `armv8.1-a'
>>>
>>> Error: unrecognized option -march=armv8.1-a
>>>
>>> However, if I run a compiler build manually with just:
>>>
>>> $ configure --disable-multilib
>>> $ nice -n 19 make -j4 all
>>>
>>> This compiles just fine. So I am at the moment attempting to investigate
>>> what might cause the difference between what buildbot does and what I do
>>> through ssh.
>>>
>> I suspect you are hitting a bug introduced recently, and fixed by:
>> https://gcc.gnu.org/ml/gcc-patches/2017-12/msg00434.html
>>
>
> Wow, that's really useful. Thanks for letting me know.
>
And the patch was committed last night (r255659), so maybe your builds now work?

> --
> Paulo Matos


Re: GCC Buildbot Update

2017-12-20 Thread Christophe Lyon
On 20 December 2017 at 09:31, Paulo Matos  wrote:
>
>
> On 15/12/17 10:21, Christophe Lyon wrote:
>> On 15 December 2017 at 10:19, Paulo Matos  wrote:
>>>
>>>
>>> On 14/12/17 21:32, Christophe Lyon wrote:
>>>> Great, I thought the CF machines were reserved for developers.
>>>> Good news you could add builders on them.
>>>>
>>>
>>> Oh. I have seen similar things happening on CF machines so I thought it
>>> was not a problem. I have never specifically asked for permission.
>>>
>>>>> pmatos@gcc115:~/gcc-8-20171203_BUILD$ as -march=armv8.1-a
>>>>> Assembler messages:
>>>>> Error: unknown architecture `armv8.1-a'
>>>>>
>>>>> Error: unrecognized option -march=armv8.1-a
>>>>>
>>>>> However, if I run a compiler build manually with just:
>>>>>
>>>>> $ configure --disable-multilib
>>>>> $ nice -n 19 make -j4 all
>>>>>
>>>>> This compiles just fine. So I am at the moment attempting to investigate
>>>>> what might cause the difference between what buildbot does and what I do
>>>>> through ssh.
>>>>>
>>>> I suspect you are hitting a bug introduced recently, and fixed by:
>>>> https://gcc.gnu.org/ml/gcc-patches/2017-12/msg00434.html
>>>>
>>>
>>> Wow, that's really useful. Thanks for letting me know.
>>>
>> And the patch was committed last night (r255659), so maybe your builds now 
>> work?
>>
>
> On some machines, in incremental builds I still seeing this:
> Assembler messages:
> Error: unknown architectural extension `lse'
> Error: unrecognized option -march=armv8-a+lse
> make[4]: *** [load_1_1_.lo] Error 1
> make[4]: *** Waiting for unfinished jobs
>
> Looks related... the only strange thing happening is that this doesn't
> happen in full builds.
>

The recent fix changed the Makefile and configure script in libatomic.
I guess that if your incremental build does not run configure, it's
still using old Makefiles, and old options.


> --
> Paulo Matos


Re: GCC Buildbot Update

2017-12-20 Thread Christophe Lyon
On 20 December 2017 at 11:02, Paulo Matos  wrote:
>
>
> On 20/12/17 10:51, Christophe Lyon wrote:
>>
>> The recent fix changed the Makefile and configure script in libatomic.
>> I guess that if your incremental builds does not run configure, it's
>> still using old Makefiles, and old options.
>>
>>
> You're right. I guess incremental builds should always call configure,
> just in case.
>

Maybe, but this does not always work. Sometimes, I have to rm -rf $builddir


> Thanks,
> --
> Paulo Matos


aarch64-none-elf build broken

2018-06-08 Thread Christophe Lyon
Hi,

As I reported in
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=78870#c16, the build of
GCC for aarch64*-none-elf fails when configuring libstdc++ since
r261034 (a week ago).

The root cause is PR66203, which I reported quite some time ago; it
points to a newlib problem: on aarch64 there is no default rom
monitor, one has to explicitly use a --specs flag for the link to
succeed.

Maybe I missed a change about this in newlib, and I should upgrade the
version I use for GCC automatic validations?

If not, and if there is not much interest in these configurations,
maybe I should just drop them from my list? Alternatively, I could try
to use LDFLAGS_FOR_TARGET=--specs=rdimon.specs in my validation
scripts.

Or, better, are there plans to fix this?

I ask, because I have no immediate plans to look at this.

Thanks,

Christophe


Re: aarch64-none-elf build broken

2018-06-08 Thread Christophe Lyon
On 8 June 2018 at 16:41, Jonathan Wakely  wrote:
> On 8 June 2018 at 14:22, Christophe Lyon wrote:
>> Hi,
>>
>> As I reported in
>> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=78870#c16, the build of
>> GCC for aarch64*-none-elf fails when configuring libstdc++ since
>> r261034 (a week ago).
>
> Sorry for not trying to fix it, I'm travelling and not been able to
> look into it (which is why I've only been doing trivial refactoring
> patches all week).
>
I'm not blaming you in any way :)


>
>> The root cause is PR66203, which I reported quite some time ago; it
>> points to a newlib problem: on aarch64 there is no default rom
>> monitor, one has to explicitly use a --specs flag for the link to
>> succeed.
>
> I have no idea why this causes the libstdc++ configuration problem
> though, I don't understand the issue.

That's because aarch64-elf-gcc conftest.c -o conftest fails to link if
one does not provide --specs=rdimon.specs.

So, every configure test that involves a link phase fails.
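
To be concrete, the conftest.c that configure tries to link is
essentially no more than this (schematic; autoconf generates slight
variations per test):

/* If even an empty program cannot be linked without
   --specs=rdimon.specs, every configure check that links fails,
   and libstdc++'s configure bails out.  */
int main(void)
{
    return 0;
}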


Re: GCC regression question (pr77445-2.c & ssa-dom-thread-7.c)

2018-06-28 Thread Christophe Lyon
On Thu, 28 Jun 2018 at 17:23, Steve Ellcey  wrote:
>
> Does anyone know anything about these failures that I see on my aarch64
> build & test?
>
> FAIL: gcc.dg/tree-ssa/pr77445-2.c scan-tree-dump-not thread3 "not considered"
> FAIL: gcc.dg/tree-ssa/ssa-dom-thread-7.c scan-tree-dump-not vrp2 "Jumps 
> threaded"
>
> They both seem to have started showing up on May 20th and I don't see any
> bugzilla report on them.  Before I try and track down what checkin caused
> them and whether or not they were caused by the same checkin I thought I
> would see if anyone had already done that.
>

Yes it's sometimes tricky to find out if a regression has been reported or not.

Martin is aware of these two:
https://gcc.gnu.org/ml/gcc-patches/2018-05/msg01479.html

Christophe

> Steve Ellcey
> sell...@cavium.com


forcing the linker to be a particular one (i.e. gold vs bfd)

2012-01-25 Thread Christophe Lyon

Hello,

In a one year old thread, there was a discussion about a patch adding 
-fuse-ld=gold and -fuse-ld=bfd options to GCC.

It seems the GCC part of the patch was never actually committed, as there were 
some problems with LTO:
http://sourceware.org/ml/binutils/2011-01/msg00287.html

Unfortunately there was no detail about what these problems were.

Could someone elaborate?

Are there plans to actually bring such a feature?

Thanks

Christophe.



Re: forcing the linker to be a particular one (i.e. gold vs bfd)

2012-01-25 Thread Christophe Lyon

On 25.01.2012 15:49, Richard Guenther wrote:


You can change the linker used by adjusting your path or giving an
appropriate -B option to the gcc driver.

Richard.


Yes, but part of the thread was precisely to discuss alternatives that are more 
end-user friendly (for those who don't know the full path to an alternate linker).

Christophe.



Re: forcing the linker to be a particular one (i.e. gold vs bfd)

2012-01-25 Thread Christophe Lyon

On 25.01.2012 16:14, Richard Guenther wrote:

On Wed, Jan 25, 2012 at 3:55 PM, Christophe Lyon  wrote:

On 25.01.2012 15:49, Richard Guenther wrote:


You can change the linker used by adjusting your path or giving an
appropriate -B option to the gcc driver.

Richard.


Yes, but part of the thread was precisely to discuss alternatives that are more
end-user friendly (for those who don't know the full path to an alternate linker).

Would they know that two different linkers exist, or even what a linker is?

Richard.



Yes they do, but as Matthias Klose said, it would help distribution makers (see 
http://sourceware.org/ml/binutils/2011-01/msg00191.html)

Maybe Matthias remembers which problems he found with his patch and LTO?

Thanks
Christophe.



Canadian cross build fails on 64 bits build machine

2010-08-06 Thread Christophe LYON

Hello,

I have noticed a build failure with GCC-4.5.0, when configuring with:
--build=x86_64-unknown-linux-gnu
--host=arm-none-linux-gnueabi
--target=arm-none-linux-gnueabi

The build fails when compiling gcc/genconstants.c for the build machine:
In file included from ../../gcc/rtl.h:28,
 from ../../gcc/genconstants.c:32:
../../gcc/real.h:84: error: size of array `test_real_width' is negative

I have looked a bit at real.h, and tried to compile with -m32, which works.

Now, I think it should also work without -m32.

From my brief investigation, I think that the problem is due to the 
fact that struct real_value uses the 'long' type for the 'sig' field, 
while the computation of REAL_WIDTH relies on HOST_BITS_PER_WIDE_INT.


Promoting 'sig' to type unsigned HOST_WIDE_INT makes the compilation pass.

Here is a naive patch proposal:
--- gcc-4.5.0/gcc/real.h        2010-01-05 18:14:30.0 +0100
+++ gcc-4.5.0.patched/gcc/real.h        2010-08-06 14:02:03.0 +0200
@@ -40,11 +40,11 @@ enum real_value_class {
   rvc_nan
 };

-#define SIGNIFICAND_BITS   (128 + HOST_BITS_PER_LONG)
+#define SIGNIFICAND_BITS   (128 + HOST_BITS_PER_WIDE_INT)
 #define EXP_BITS           (32 - 6)
 #define MAX_EXP            ((1 << (EXP_BITS - 1)) - 1)
-#define SIGSZ              (SIGNIFICAND_BITS / HOST_BITS_PER_LONG)
-#define SIG_MSB            ((unsigned long)1 << (HOST_BITS_PER_LONG - 1))
+#define SIGSZ              (SIGNIFICAND_BITS / HOST_BITS_PER_WIDE_INT)
+#define SIG_MSB            ((unsigned HOST_WIDE_INT)1 << (HOST_BITS_PER_WIDE_INT - 1))

 struct GTY(()) real_value {
   /* Use the same underlying type for all bit-fields, so as to make
@@ -56,7 +56,7 @@ struct GTY(()) real_value {
   unsigned int signalling : 1;
   unsigned int canonical : 1;
   unsigned int uexp : EXP_BITS;
-  unsigned long sig[SIGSZ];
+  unsigned HOST_WIDE_INT sig[SIGSZ];
 };


Is it OK?

Thanks,

Christophe.


Re: Canadian cross build fails on 64 bits build machine

2010-08-06 Thread Christophe Lyon

On 06.08.2010 15:53, Joseph S. Myers wrote:

On Fri, 6 Aug 2010, Christophe LYON wrote:


 From my brief investigation, I think that the problem is due to the fact that
struct real_value uses the 'long' type for the 'sig' field, while the
computation of REAL_WIDTH relies on HOST_BITS_PER_WIDE_INT.


No, this is not a problem; it's fine to use long in the representation but
HOST_WIDE_INT when stored in an rtx.  The issue appears rather to be with

#define REAL_VALUE_TYPE_SIZE (SIGNIFICAND_BITS + 32)

where with 64-bit long there are going to be 32 bits of padding in this
structure that are not allowed for.  Try changing that 32 to
HOST_BITS_PER_LONG.



It does not fix my problem: HOST_BITS_PER_LONG is still 32. Remember, my 
host is ARM, my target is ARM, but my build machine is x86_64, which 
makes the 'sig' field of 'real_value' an array of 5 * 64 bits, while 
SIGSZ, SIGNIFICAND_BITS and REAL_VALUE_TYPE_SIZE are all defined with 
respect to HOST_BITS_PER_LONG, which is 32.
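
Spelling out the numbers for this configuration (a sketch; the 32-bit
value comes from the ARM host configuration, the 64-bit long from the
x86_64 build compiler):

/* HOST_BITS_PER_LONG = 32 (configured for the ARM host), so:
     SIGNIFICAND_BITS = 128 + 32 = 160
     SIGSZ            = 160 / 32 = 5
   But 'unsigned long sig[5]' compiled by the x86_64 build compiler
   occupies 5 * 64 = 320 bits, not the 160 bits the macros assume,
   which is why the test_real_width size check in real.h fails.  */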


Christophe.



ARM / linux float ABI discrepancy

2010-08-18 Thread Christophe Lyon

Hi all,

While trying to generate a ARM-GCC for Linux with hard and soft FP 
multilibs, I have noticed that:
- arm/linux-elf.h says the default float ABI is "hard" and 
multilib_defaults includes "mhard-float"


- arm/linux-eabi.h says the default float ABI is "soft" but does not 
change the multilibs_defaults.


This means that when building a linux-eabi GCC, no hard-float multilib 
will be generated even though you ask for

MULTILIB_OPTIONS= msoft-float/mhard-float

Indeed, the multilib system will decide that -mhard-float does not need 
a special multilib build because -mhard-float is defined as a default.


So you end up with 2 multilibs:
1- the default one, also used with -mhard-float
2- -msoft-float

But in linux-eabi, the default is "soft float", so the 2 variants are 
actually the same.


There are several ways of fixing this, one is to redefine 
multilib_defaults in arm/linux-eabi.h so that it includes soft-float 
instead of hard-float.
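
For illustration, that fix would look something like this (a sketch
only: the non-float entries are assumed to match the usual ARM/Linux
defaults and may differ in the real header):

/* arm/linux-eabi.h */
#undef  MULTILIB_DEFAULTS
#define MULTILIB_DEFAULTS \
  { "marm", "mlittle-endian", "msoft-float", "mno-thumb-interwork" }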


Any opinions?

Christophe


Invoking atomic functions from a C++ shared lib (or should I force linking with -lgcc?)

2010-11-16 Thread Christophe Lyon
Hi,

I have been investigating a problem I have while building Qt-embedded
with GCC-4.5.0 for ARM/Linux, and managed to produce the reduced test
case as follows.

Consider this shared library (C++):
 atomic.cxx
int atomicIncrement(int volatile* addend)
{ return __sync_fetch_and_add(addend, 1) + 1; }

Compiled with:
$ arm-linux-g++ atomic.cxx -fPIC -shared -o libatomic.so

Now the main program:
 atomain.cxx
extern int atomicIncrement(int volatile* addend);
volatile int myvar;
int main()
{ return atomicIncrement(&myvar); }

Compiled & linked with:
$ arm-linux-g++ atomain.cxx -o atomain -L. -latomic
.../ld: atomain: hidden symbol `__sync_fetch_and_add_4' in
/.../libgcc.a(linux-atomic.o) is referenced by DSO


What I have found is that g++ (unlike gcc) links with -lgcc_s instead of
-lgcc and that the atomic functions are present in libgcc.a and not in
libgcc_s.so.

If I create libatomic.so with -lgcc, it works.

What I don't understand is if this is the intended behaviour and that
adding -lgcc is the right fix, or not?

[This surprises me, because as I said, I faced this problem when
compiling Qt-embedded for ARM/Linux and I don't think I am the only one
doing that, so I expected it to just work ;-)]

Thanks,

Christophe.


incompatible GCC 4.4.5 and 4.5.1 libstdc++ ?

2010-11-17 Thread Christophe Lyon
Hi,

I have just faced a problem where a C++ program (which I get in binary
form - I can't recompile) crashes when it uses libstdc++.so.6 from
GCC-4.5.1, while it works when using libstdc++.so.6 from GCC-4.4.5.

I am using an x86 machine.

Are there any known incompatibilities between these two libraries?
As the major revision hasn't changed I didn't expect any behaviour change.

Thanks,

Christophe.


Pb with libiconv while building gcc-4.1.0

2006-03-02 Thread Christophe LYON

Hi all,

I am trying to build/install gcc-4.1.0 on my Linux box (RHEL-3), in a 
non-standard prefix.


I use --with-libiconv-prefix to tell configure where to find libiconv, 
and the configure step works.


The build step fails in libcpp:
/apa/gnu/Linux-RH-WS-3/gcc/gcc-3.4.4/bin/gcc -O2 
-I/apa/gnu/Linux-RH-WS-3/include  -o makedepend \

  makedepend.o libcpp.a ../libiberty/libiberty.a \
   -liconv
/apa/gnu/Linux-RH-WS-3/.package/gcc-3.4.4/lib/gcc/i686-pc-linux-gnu/3.4.4/../../../../i686-pc-linux-gnu/bin/ld: 
cannot find -liconv

collect2: ld returned 1 exit status

It turns out that the proper -L flags do not get to this point.

Maybe I am not using configure in the right way?

Thanks for any help,

Christophe.


Porting libsanitizer to aarch64

2013-05-21 Thread Christophe Lyon
Hi,

I have been looking at enabling libsanitizer for aarch64 GCC compilers.

To make the build succeed, I had to modify libsanitizer code:
- some syscalls are not available on aarch64 (libsanitizer uses some
legacy ones such as open, readlink, stat, ...)
- unwinding code needs to be added.

What's the way of discussing such patches? On GCC lists or elsewhere?


Then a runtime problem arises: aarch64's frame grows upward, which is
not supported; how long would it take to develop this support, if it
is possible at all?

I have not looked at tsan in detail yet; it currently does not build
for aarch64 either.

Thanks,

Christophe.


Re: RFC: [ARM] Disable peeling

2013-10-01 Thread Christophe Lyon
Hi,

I am resuming investigations about disabling peeling for
alignment (see thread at
http://gcc.gnu.org/ml/gcc/2012-12/msg00036.html).

As a reminder, I have a simple patch which disables peeling
unconditionally and gives some improvement in benchmarks.

However, I've noticed a regression where a reduced test case is:
#define SIZE 8
void func(float *data, float d)
{
int i;
for (i=0; i<SIZE; i++)
data[i] = d;
}

I tried to make the vector stmts have higher cost than currently with the hope
that the loop prologue would become too expensive; but to reach this
level, this cost needs to be increased quite a lot, so this approach
does not seem right.

The vectorizer estimates the cost of the prologue/epilogue/loop body
with and without vectorization and computes the number of iterations
needed for profitability. In the present case, keeping reasonable
costs, this number is very low (2 or 3 typically), while the compiler
knows we have 8 iterations for sure.

I think we need something to describe the dependency between vdup
and vst1.

Otherwise, from the vectorizer point of view, this looks like an
ideal loop.

Do you have suggestions on how to tackle this?

(I've just had a look at the recent vectorizer cost model
modification, which doesn't seem to handle this case.)

Thanks,

Christophe.

On 13 December 2012 10:42, Richard Biener  wrote:
> On Wed, Dec 12, 2012 at 6:50 PM, Andi Kleen  wrote:
>> "H.J. Lu"  writes:
>>>
>>> i386.c has
>>>
>>>{
>>>   /* When not optimize for size, enable vzeroupper optimization for
>>>  TARGET_AVX with -fexpensive-optimizations and split 32-byte
>>>  AVX unaligned load/store.  */
>>
>> This is only for the load, not for deciding whether peeling is
>> worthwhile or not.
>>
>> I believe it's unimplemented for x86 at this point. There isn't even a
>> hook for it. Any hook that is added should ideally work for both ARM64
>> and x86. This would imply it would need to handle different vector
>> sizes.
>
> There is
>
> /* Implement targetm.vectorize.builtin_vectorization_cost.  */
> static int
> ix86_builtin_vectorization_cost (enum vect_cost_for_stmt type_of_cost,
>  tree vectype,
>  int misalign ATTRIBUTE_UNUSED)
> {
> ...
>   case unaligned_load:
>   case unaligned_store:
> return ix86_cost->vec_unalign_load_cost;
>
> which indeed doesn't distinguish between unaligned load/store cost.  Still
> it does distinguish between aligned and unaligned load/store cost.
>
> Now look at the cost tables and see different unaligned vs. aligned costs
> dependent on the target CPU.
>
> generic32 and generic64 have:
>
>   1,/* vec_align_load_cost.  */
>   2,/* vec_unalign_load_cost.  */
>   1,/* vec_store_cost.  */
>
> The missed piece in the vectorizer is that peeling for alignment should have 
> the
> option to turn itself off based on that costs (and analysis).
>
> Richard.


Re: gcc buildbot?

2014-01-10 Thread Christophe Lyon

On 01/10/14 10:11, Richard Sandiford wrote:

Hi,

Philippe Baril Lecavalier  writes:

I have been experimenting with buildbot lately, and I would be glad to
help in providing it. If there is interest, I could have a prototype and
a detailed proposal ready in a few days. It could serve GCC, binutils
and some important libraries as well.

Thanks for the offer.  I think the current state is that Jan-Benedict Glaw
has put together a buildbot for testing that binutils, gcc and gdb
build for a wide range of targets:

http://toolchain.lug-owl.de/buildbot/

which has been very useful for catching target-specific build failures.
AFAIK the bot doesn't (and wasn't supposed to) check for testsuite
regressions, but I could be wrong about that.  There was some discussion
about adding testsuite coverage here:

http://gcc.gnu.org/ml/gcc/2013-08/msg00317.html


One aspect hasn't been discussed in that thread: I don't think it's possible to 
run the testsuite for every single commit, since 'make check' takes a 
really long time on some targets.

I have developed a cross-validation environment in which I cross-build and cross-validate 
several ARM and AArch64 combinations. In my case, each build+check job takes about 3h 
(make -j4, in tmpfs, c, c++ and fortran), and I have to restrict the validation to an 
"interesting" subset of commits. Running on actual HW would be slower.

But it's definitely worth it :-)

Christophe.



ARM/getting rid of superfluous zero extension

2012-10-03 Thread Christophe Lyon
Hi,

I have recently added ARM support for builtin_bswap16, which uses the
rev16 instruction when dealing with an unsigned argument.

Considering:
unsigned short myfunc(unsigned short x) {
  return __builtin_bswap16(x);
}

gcc -O2 generates:
myfunc:
        rev16   r0, r0
        uxth    r0, r0
        bx      lr

I'd like to get rid of the zero extension, which is not needed since
r0's 16 upper bits are zero on input.

Note that rev16 actually operates on a 32-bit value and swaps the
bytes in each halfword of a 32-bit register.
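
In C terms, rev16 computes the following (a sketch matching the RTL
below; rev16_bits is just an illustrative name, and 0xff00ff00 /
0x00ff00ff are the decimal constants 4278255360 and 16711935 used in
the pattern):

/* Swap the two bytes inside each 16-bit halfword of a 32-bit value.  */
static unsigned int rev16_bits(unsigned int x)
{
    return ((x << 8) & 0xff00ff00u)   /* low byte of each halfword moves up */
         | ((x >> 8) & 0x00ff00ffu);  /* high byte of each halfword moves down */
}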

After discussions with Ulrich, I have changed the machine description
of bswaphi2 to:
(define_insn "arm_rev16_new"
  [(set (match_operand:SI 0 "s_register_operand" "=l,l,r")
(ior:SI (and:SI (ashift:SI (match_operand:SI 1 "s_register_operand" 
"l,l,r")
  (const_int 8))
   (const_int 4278255360))
   (and:SI (lshiftrt:SI (match_dup 1) (const_int 8))
   (const_int 16711935]
  "arm_arch6"
  "@
   rev16\t%0, %1
   rev16%?\t%0, %1
   rev16%?\t%0, %1"
  [(set_attr "arch" "t1,t2,32")
   (set_attr "length" "2,2,4")]
)

(define_expand "bswaphi2"
  [(set (match_operand:HI 0 "s_register_operand" "")
   (bswap:HI (match_operand:HI 1 "s_register_operand" "")))]
  "arm_arch6"
  {
rtx in = gen_lowpart (SImode, operands[1]);
rtx out = gen_lowpart (SImode, operands[0]);

emit_insn (gen_arm_rev16_new (out, in));

DONE;
  }
 )

Now, this exposes the fact that rev16 also changes the 16 upper bits,
but the generated code is still the same.

I have been trying to understand why combine does not manage to infer
that the zero extension is superfluous.
Before RTL, the gimple IR contains:
myfunc (short unsigned int x)
{
  short unsigned int _2;
;;   basic block 2, loop depth 0
;;pred:   ENTRY
  _2 = __builtin_bswap16 (x_1(D)); [tail call]
  return _2;
;;succ:   EXIT
}

Before combine, the RTL is:

(note 4 0 2 2 [bb 2] NOTE_INSN_BASIC_BLOCK)
(insn 2 4 3 2 (set (reg/v:SI 112 [ x ])
(reg:SI 0 r0 [ x ])) rev16.c:11 636 {*arm_movsi_vfp}
 (expr_list:REG_DEAD (reg:SI 0 r0 [ x ])
(nil)))
(note 3 2 6 2 NOTE_INSN_FUNCTION_BEG)
(insn 6 3 7 2 (set (subreg:SI (reg:HI 113) 0)
        (ior:SI (and:SI (ashift:SI (reg/v:SI 112 [ x ])
                    (const_int 8 [0x8]))
                (const_int 4278255360 [0xff00ff00]))
            (and:SI (lshiftrt:SI (reg/v:SI 112 [ x ])
                    (const_int 8 [0x8]))
                (const_int 16711935 [0xff00ff])))) rev16.c:17 354
     {arm_rev16_new}
     (expr_list:REG_DEAD (reg/v:SI 112 [ x ])
        (nil)))
(insn 7 6 12 2 (set (reg:SI 110 [ D.4971 ])
(zero_extend:SI (reg:HI 113))) rev16.c:17 166 {*arm_zero_extendhisi2_v6}
 (expr_list:REG_DEAD (reg:HI 113)
(nil)))
(insn 12 7 15 2 (set (reg/i:SI 0 r0)
(reg:SI 110 [ D.4971 ])) rev16.c:19 636 {*arm_movsi_vfp}
 (expr_list:REG_DEAD (reg:SI 110 [ D.4971 ])
(nil)))
(insn 15 12 0 2 (use (reg/i:SI 0 r0)) rev16.c:19 -1
 (nil))

Stepping inside set_nonzero_bits_and_sign_copies() indicates that:
- insn 2 has nonzero_bits = 65535, and sign_bit_copies = 16
- insn 6 has nonzero_bits = 65535 and sign_bit_copies = 1
- insn 7 has nonzero_bits = 65535 and sign_bit_copies = 16

Any suggestion about how I could avoid generating this zero_extension?

Thanks,

Christophe.


if-conversion/HOT-COLD partitioning

2012-10-23 Thread Christophe Lyon
Hi,

While debugging a GCC failure on ARM when compiling with
-fprofile-use, I came across a case where if_convert merges 2 blocks
which belong to different partitions (hot/cold).

In turn merge_blocks() (in cfghooks.c) merges the BB flags by ORing
them, resulting in a block belonging to both cold and hot partitions,
which seems to confuse the compiler: for instance in
bb-reorder.c:connect_traces(), the code expects only one partition
flag to be set.

I think merge_blocks() should be modified to handle such cases; I
experimented a little and forcing the merged BB's partition to be the
hot one in such a case is not sufficient: some edges have been marked
as crossing ones, and this would be no longer true. Is there a simple
way to clean that?

Thanks,

Christophe.


Re: if-conversion/HOT-COLD partitioning

2012-10-23 Thread Christophe Lyon
On 23 October 2012 19:45, Steven Bosscher  wrote:
> Christophe wrote:
>> I think merge_blocks() should be modified to handle such cases;
>
> I think can_merge_blocks should be fixed. Blocks from different
> partitions should not be merged. See cfgrtl.c:rtl_can_merge_blocks and
> cfgrtl.c:cfg_layout_can_merge_blocks_p. Why are they not blocking
> these blocks from getting merged?
>

Well, both of these functions appear to check that the 2 blocks to
merge belong to the same partition, so it should be OK.

But not all calls to merge_blocks are guarded by if
(can_merge_block_p()); this is probably where the problem is?
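
If so, a minimal sketch of the missing guard would be (names as
discussed above; the exact spellings and signatures depend on the GCC
revision):

  /* Only merge when the cfghooks predicate allows it, so that blocks
     from different hot/cold partitions are never combined.  */
  if (can_merge_block_p (a, b))
    merge_blocks (a, b);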

Christophe.


Re: if-conversion/HOT-COLD partitioning

2012-10-24 Thread Christophe Lyon
On 24 October 2012 00:42, Steven Bosscher  wrote:
> On Tue, Oct 23, 2012 at 10:29 PM, Christophe Lyon
>  wrote:
>> Well, both of these functions appear to check that the 2 blocks to
>> merge belong to the same partition, so it should be OK.
>
> In your first email, you said if-convert was merging two blocks from
> different partitions. can_merge_block_p() would rejected merging the
> two blocks, so merge_blocks shouldn't be called on them.
>
> IIRC cfghooks.c:merge_blocks() used to have a
> gcc_assert(can_merge_blocks(a,b)) but it's not there now. But if
> can_merge_blocks() returns false, merge_blocks should fail. Your bug
> is that merge_blocks is being called at all on those blocks from
> different partitions.
>
>
>> But not all calls to merge_blocks are guarded by if
>> (can_merge_block_p()), this is probably where the problem is?
>
> Not sure. Depends on what blocks get merged. It may be that
> if-conversion shouldn't even be attempting whatever transformation
> it's attempting. Not enough information.
>

What happens is that merge_if_block() is called with test_bb, then_bb
and else_bb in the cold section, while join_bb is in the hot one.

merge_if_block calls merge_blocks unconditionally several times (in my
case, the wrong one is merge_blocks (combo_bb, join_bb)).

Christophe.


Re: if-conversion/HOT-COLD partitioning

2012-10-25 Thread Christophe Lyon
On 24 October 2012 22:07, Steven Bosscher  wrote:
> On Wed, Oct 24, 2012 at 6:11 PM, Christophe Lyon wrote:
>> On 24 October 2012 00:42, Steven Bosscher wrote:
>>> On Tue, Oct 23, 2012 at 10:29 PM, Christophe Lyon wrote:
>>>> Well, both of these functions appear to check that the 2 blocks to
>>>> merge belong to the same partition, so it should be OK.
>>>
>>> In your first email, you said if-convert was merging two blocks from
>>> different partitions. can_merge_block_p() would rejected merging the
>>> two blocks, so merge_blocks shouldn't be called on them.
>>>
>>> IIRC cfghooks.c:merge_blocks() used to have a
>>> gcc_assert(can_merge_blocks(a,b)) but it's not there now. But if
>>> can_merge_blocks() returns false, merge_blocks should fail. Your bug
>>> is that merge_blocks is being called at all on those blocks from
>>> different partitions.
>>>
>>>
>>>> But not all calls to merge_blocks are guarded by if
>>>> (can_merge_block_p()), this is probably where the problem is?
>>>
>>> Not sure. Depends on what blocks get merged. It may be that
>>> if-conversion shouldn't even be attempting whatever transformation
>>> it's attempting. Not enough information.
>>>
>>
>> What happens is that merge_if_block() is called with test_bb, then_bb
>> and else_bb in the cold section, while join_bb is in the hot one.
>
> AFAICT that can only happen if the join_bb has more predecessors than
> just then_bb and else_bb. Otherwise, you'd be looking at a complete
> diamond region, and test_bb and either else_bb or then_bb should be in
> the hot partition as well. But if the join_bb has more predecessors,
> then merge_blocks shouldn't be able to merge away the join block.
>
> So either something's wrong with the CFG so that merge_if_blocks sees
> a join_bb with fewer than 2 predecessors (the only path to the
> merge_blocks call in merge_if_blocks), or the profile data is so
> corrupted that the partitioning has somehow gone wrong. So...
>
It looks like something is wrong with the CFG:

         |
        19 (COLD)
        /  \
       /    \
  20 (COLD)  21 (COLD)
       \    /
        \  /
      22 (HOT)

but indeed we have EDGE_COUNT (join_bb->preds) == 1


>> merge_if_block calls merge_blocks unconditionally several times (in my
>> case, the wrong one is merge_blocks (combo_bb, join_bb)).
>
> ... still not quite enough information.
>
> A more detailed explanation of the paths through the code that lead to
> the error would be nice. A test case would be good. A PR would be
> best.

I understand; the problem is that I am not allowed to publish the
input code leading to this situation :-(

Thanks for your help,

Christophe.


Re: if-conversion/HOT-COLD partitioning

2012-10-25 Thread Christophe Lyon
On 25 October 2012 16:10, Christophe Lyon  wrote:
> On 24 October 2012 22:07, Steven Bosscher  wrote:
>> On Wed, Oct 24, 2012 at 6:11 PM, Christophe Lyon wrote:
>>> On 24 October 2012 00:42, Steven Bosscher wrote:
>>>> On Tue, Oct 23, 2012 at 10:29 PM, Christophe Lyon wrote:
>>>>> Well, both of these functions appear to check that the 2 blocks to
>>>>> merge belong to the same partition, so it should be OK.
>>>>
>>>> In your first email, you said if-convert was merging two blocks from
> >>>> different partitions. can_merge_block_p() would have rejected merging the
>>>> two blocks, so merge_blocks shouldn't be called on them.
>>>>
>>>> IIRC cfghooks.c:merge_blocks() used to have a
>>>> gcc_assert(can_merge_blocks(a,b)) but it's not there now. But if
>>>> can_merge_blocks() returns false, merge_blocks should fail. Your bug
>>>> is that merge_blocks is being called at all on those blocks from
>>>> different partitions.
>>>>
>>>>
>>>>> But not all calls to merge_blocks are guarded by if
>>>>> (can_merge_block_p()), this is probably where the problem is?
>>>>
>>>> Not sure. Depends on what blocks get merged. It may be that
>>>> if-conversion shouldn't even be attempting whatever transformation
>>>> it's attempting. Not enough information.
>>>>
>>>
>>> What happens is that merge_if_block() is called with test_bb, then_bb
>>> and else_bb in the cold section, while join_bb is in the hot one.
>>
>> AFAICT that can only happen if the join_bb has more predecessors than
>> just then_bb and else_bb. Otherwise, you'd be looking at a complete
>> diamond region, and test_bb and either else_bb or then_bb should be in
>> the hot partition as well. But if the join_bb has more predecessors,
>> then merge_blocks shouldn't be able to merge away the join block.
>>
>> So either something's wrong with the CFG so that merge_if_blocks sees
>> a join_bb with fewer than 2 predecessors (the only path to the
>> merge_blocks call in merge_if_blocks), or the profile data is so
>> corrupted that the partitioning has somehow gone wrong. So...
>>
> It looks like something is wrong with the CFG:
>
>          |
>         19 (COLD)
>         /  \
>        /    \
>   20 (COLD)  21 (COLD)
>        \    /
>         \  /
>       22 (HOT)
>
> but indeed we have EDGE_COUNT (join_bb->preds) == 1
>

This is because after merging 19 & 20, and then 19 & 21, there is only
1 edge left between 19 and 22, which is actually the expected case, as
the comment says.


Re: if-conversion/HOT-COLD partitioning

2012-10-26 Thread Christophe Lyon
On 26 October 2012 00:47, Steven Bosscher  wrote:
> On Fri, Oct 26, 2012 at 12:26 AM, Andrew Pinski wrote:
>> The official wording from SPEC is that the sources are under the same
>> license as they are provided to them.  It is the data files which are
>> under the SPEC license.
>
> Good. So the only things needed to reproduce the problem can be
> shared: the source file that causes the ICE, the gcda+gcno files for
> the profile, the compiler SVN revision number, and the compiler
> configuration and options.

Wow! I didn't mention here that I am compiling SPEC/GAP :-)

I also use some patches posted by Matthew Gretton-Dann, which are
still under discussion: I will open a PR and attach these patches
too. Is that OK?

Thanks!


RFC: [ARM] Disable peeling

2012-12-07 Thread Christophe Lyon
Hi,

As ARM supports unaligned vector accesses for almost no penalty, I'd
like to disable loop peeling on ARM targets.

I have run benchmarks on Cortex-A9 (hard-float) and noticed these
significant improvements:
* 1.5% improvement on a popular embedded benchmark (with peaks at +20% and +29%)
* 2.1% on spec2k mesa
* 9.2% on spec2k eon
* up to 3.4% on some part of another embedded benchmark

The largest regression I noticed is 1%.

I have attached a preliminary patch to discuss how acceptable it would
be, and to discuss the needed changes in the testsuite. Indeed, quite
a few tests now fail because they count the number of "vectorizing an
unaligned access" and "alignment of access forced using peeling"
occurrences in the vectorizer traces.

I could add a property to target-supports.exp, which would currently
be true only on ARM, to select whether to rely on peeling or not, and
update all the affected tests accordingly (see the sketch below).
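
For illustration, each affected test could then be adjusted along
these lines (vect_peeling_preferred is a hypothetical name for that
new effective-target):

/* { dg-final { scan-tree-dump "alignment of access forced using peeling" "vect" { target vect_peeling_preferred } } } */
/* { dg-final { scan-tree-dump "vectorizing an unaligned access" "vect" { target { ! vect_peeling_preferred } } } } */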

As there are quite a few tests to update, I'd like opinions first.

Thanks,

Christophe.
2012-12-07  Christophe Lyon 

gcc/
* config/arm/arm.c (arm_vector_worth_peeling): New function.
(TARGET_VECTORIZE_VECTOR_WORTH_PEELING): New define.
* doc/tm.texi.in (TARGET_VECTORIZE_VECTOR_WORTH_PEELING): Add
documentation.
* doc/tm.texi (TARGET_VECTORIZE_VECTOR_WORTH_PEELING): Likewise.
* target.def (vector_worth_peeling): New hook.
* targhooks.c (default_vector_worth_peeling): New function.
* targhooks.h (default_vector_worth_peeling): Declare.
* tree-vect-data-refs.c (vector_alignment_reachable_p): Call
vector_worth_peeling hook.

diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
index 1470602..ebbf594 100644
--- a/gcc/config/arm/arm.c
+++ b/gcc/config/arm/arm.c
@@ -261,6 +261,7 @@ static bool arm_builtin_support_vector_misalignment (enum machine_mode mode,
 const_tree type,
 int misalignment,
 bool is_packed);
+static bool arm_vector_worth_peeling (int misalignment);
 static void arm_conditional_register_usage (void);
 static reg_class_t arm_preferred_rename_class (reg_class_t rclass);
 static unsigned int arm_autovectorize_vector_sizes (void);
@@ -618,6 +619,10 @@ static const struct attribute_spec arm_attribute_table[] =
 #define TARGET_VECTORIZE_SUPPORT_VECTOR_MISALIGNMENT \
   arm_builtin_support_vector_misalignment
 
+#undef TARGET_VECTORIZE_VECTOR_WORTH_PEELING
+#define TARGET_VECTORIZE_VECTOR_WORTH_PEELING \
+  arm_vector_worth_peeling
+
 #undef TARGET_PREFERRED_RENAME_CLASS
 #define TARGET_PREFERRED_RENAME_CLASS \
   arm_preferred_rename_class
@@ -25200,6 +25205,14 @@ arm_builtin_support_vector_misalignment (enum machine_mode mode,
  is_packed);
 }
 
+/* ARM supports misaligned accesses with low penalty. It's not worth
+   peeling.  */
+static bool
+arm_vector_worth_peeling (int misalignment)
+{
+  return false;
+}
+
 static void
 arm_conditional_register_usage (void)
 {
diff --git a/gcc/doc/tm.texi b/gcc/doc/tm.texi
index b36c764..05b1f67 100644
--- a/gcc/doc/tm.texi
+++ b/gcc/doc/tm.texi
@@ -5755,6 +5755,12 @@ the elements in the vectors should be of type @var{type}.  @var{is_packed}
 parameter is true if the memory access is defined in a packed struct.
 @end deftypefn
 
+@deftypefn {Target Hook} bool TARGET_VECTORIZE_VECTOR_WORTH_PEELING (int @var{misalignment})
+This hook should return true if the cost of peeling is cheaper than a
+misaligned access of a specific factor denoted in the
+@var{misalignment} parameter.
+@end deftypefn
+
 @deftypefn {Target Hook} {enum machine_mode} TARGET_VECTORIZE_PREFERRED_SIMD_MODE (enum machine_mode @var{mode})
 This hook should return the preferred mode for vectorizing scalar
 mode @var{mode}.  The default is
diff --git a/gcc/doc/tm.texi.in b/gcc/doc/tm.texi.in
index 4858d97..452a929 100644
--- a/gcc/doc/tm.texi.in
+++ b/gcc/doc/tm.texi.in
@@ -5679,6 +5679,12 @@ the elements in the vectors should be of type @var{type}.  @var{is_packed}
 parameter is true if the memory access is defined in a packed struct.
 @end deftypefn
 
+@hook TARGET_VECTORIZE_VECTOR_WORTH_PEELING
+This hook should return true if the cost of peeling is cheaper than a
+misaligned access of a specific factor denoted in the
+@var{misalignment} parameter.
+@end deftypefn
+
 @hook TARGET_VECTORIZE_PREFERRED_SIMD_MODE
 This hook should return the preferred mode for vectorizing scalar
 mode @var{mode}.  The default is
diff --git a/gcc/target.def b/gcc/target.def
index 2d79290..d3a2671 100644
--- a/gcc/target.def
+++ b/gcc/target.def
@@ -1005,6 +1005,15 @@ DEFHOOK
  (enum machine_mode mode, const_tree type, int misalignment, bool is_packed),
  default_builtin_support_vector_misalignment)
 
+/* Return true if peeling is worth its cost compared to mi

Re: RFC: [ARM] Disable peeling

2012-12-10 Thread Christophe Lyon
On 10 December 2012 10:02, Richard Biener  wrote:
> On Fri, Dec 7, 2012 at 6:30 PM, Richard Earnshaw  wrote:
>> On 07/12/12 15:13, Christophe Lyon wrote:
>>>
>>> Hi,
>>>
>>> As ARM supports unaligned vector accesses for almost no penalty, I'd
>>> like to disable loop peeling on ARM targets.
>>>
>>> I have ran benchmarks on cortex-A9 (hard-float) and noticed these
>>> significant improvements:
>>> * 1.5% improvement on a popular embedded benchmark (with peaks at +20% and
>>> +29%)
>>> * 2.1% on spec2k mesa
>>> * 9.2% on spec2k eon
>>> * up to 3.4% on some part of another embedded benchmark
>>>
>>> The largest regression I noticed is 1%.
>>>
>>> I have attached a preliminary patch to discuss how acceptable it would
>>> be, and to discuss the needed changes in the testsuite. Indeed; quite
>>> a few tests now fail because they count the number of "vectorizing an
>>> unaligned access" and "alignment of access forced using peeling"
>>> occurrences in the vectorizer traces.
>>>
>>> I could add a property to target-supports.exp, which would currently
>>> be only true on ARM to select whether to rely on peeling or not, and
>>> updated all the affected tests accordingly.
>>>
>>> As there are quite a few tests to update, I'd like opinions first.
>>>
>>> Thanks,
>>>
>>> Christophe.
>>>
>>
>> This feels a bit like a sledge-hammer for a nut that really needs just a
>> normal hammer.  I guess the crux of the question comes down to do we know
>> how many times the loop will be executed?  If the answer is no, then OK we
>> assume that the execution count will be small and don't peel.  If the answer
>> is yes (or we know the minimum iteration count), then we should be able to
>> work out what the saving will be by peeling to reach alignment.
>>
>> So I think your hook should pass the known (minimum) iteration count as well
>> -- with 0 indicating that we don't know what the minimum is.
>>
>> Note, it may be that today we can't work out what the minimum will be and
>> that for now we always pass zero.  But that doesn't mean we shouldn't code
>> for the time when we can work this out.
>
> I agree that this is a sledgehammer.  If aligned/unaligned loads/stores have
> the same cost then reflect that in the vectorized stmt cost hook.  If that

I am not sure I understand which hook you are referring to.
My understanding of vect_enhance_data_refs_alignment() is that it uses
the costs to check whether the target's misaligned stores are more
expensive than misaligned loads, but at that point it has already
decided to perform peeling. On simple loops, it has no reason to later
decide not to peel.

> alone does not prevent peeling for alignment to happen then the fix is to
> not consider doing peeling for alignment if aligned/unaligned costs are the
> same, not adding a new hook.
>
I thought that a new hook could enable target-specific variations
here: if the costs differ only slightly, peeling may or may not be
worthwhile, depending on the peeling amount or on the number of
iterations, as Richard Earnshaw mentioned (see the sketch below).
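
As a sketch only (the extra parameter, its type and the threshold are
purely illustrative):

/* Hypothetical variant of the hook with the known minimum iteration
   count passed in (0 when unknown), as Richard Earnshaw suggested.  */
static bool
arm_vector_worth_peeling (int misalignment, unsigned HOST_WIDE_INT min_niters)
{
  /* Unaligned vector accesses are nearly free on this target, so
     peeling could only pay off for loops known to iterate many
     times.  */
  return min_niters >= 64;
}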

Thanks for your comments,

Christophe.


Re: RFC: [ARM] Disable peeling

2012-12-12 Thread Christophe Lyon
On 11 December 2012 13:26, Tim Prince  wrote:
> On 12/11/2012 5:14 AM, Richard Earnshaw wrote:
>>
>> On 11/12/12 09:56, Richard Biener wrote:
>>>
>>> On Tue, Dec 11, 2012 at 10:48 AM, Richard Earnshaw 
>>> wrote:

 On 11/12/12 09:45, Richard Biener wrote:
>
>
> On Mon, Dec 10, 2012 at 10:07 PM, Andi Kleen 
> wrote:
>>
>>
>> Jan Hubicka  writes:
>>
>>> Note that I think Core has similar characteristics - at least for
>>> string
>>> operations
>>> it fares well with unalignes accesses.
>>
>>
>>
>> Nehalem and later has very fast unaligned vector loads. There's still
>> some
>> penalty when they cross cache lines however.
>>
>> iirc the rule of thumb is to do unaligned for 128 bit vectors,
>> but avoid it for 256bit vectors because the cache line cross
>> penalty is larger on Sandy Bridge and more likely with the larger
>> vectors.
>
>
>
> Yes, I think the rule was that using the unaligned instruction variants
> carries
> no penalty when the actual access is aligned but that aligned accesses
> are
> still faster than unaligned accesses.  Thus peeling for alignment _is_
> a
> win.
> I also seem to remember that the story for unaligned stores vs.
> unaligned
> loads
> is usually different.



 Yes, it's generally the case that unaligned loads are slightly more
 expensive than unaligned stores, since the stores can often merge in a
 store
 buffer with little or no penalty.
>>>
>>>
>>> It was the other way around on AMD CPUs AFAIK - unaligned stores forced
>>> flushes of the store buffers.  Which is why the vectorizer first and
>>> foremost tries
>>> to align stores.
>>>
>>
>> In which case, which to align should be a question that the ME asks the
>> BE.
>>
>> R.
>>
>>
> I see that this thread is no longer about ARM.
> Yes, when peeling for alignment, aligned stores should take precedence over
> aligned loads.
> "ivy bridge" corei7-3 is supposed to have corrected the situation on "sandy
> bridge" corei7-2 where unaligned 256-bit load is more expensive than
> explicitly split (128-bit) loads.  There aren't yet any production
> multi-socket corei7-3 platforms.
> It seems difficult to make the best decision between 128-bit unaligned
> access without peeling and 256-bit access with peeling for alignment (unless
> the loop count is known to be too small for the latter to come up to speed).
> Facilities afforded by various compilers to allow the programmer to guide
> this choice are rather strange and probably not to be counted on.
> In my experience, "westmere" unaligned 128-bit loads are more expensive than
> explicitly split (64-bit) loads, but the architecture manuals disagree with
> this finding.  gcc already does a good job for corei7[-1] in such
> situations.
>
> --
> Tim Prince
>

Since this thread is also about x86 now, I have tried to look at how
things are implemented on this target.
People have mentioned Nehalem, Sandy Bridge, Ivy Bridge and Westmere;
I have searched for occurrences of these strings in GCC, and I
couldn't find anything that would imply a different behavior wrt
unaligned loads on 128/256-bit vectors. Is it still unimplemented?

Thanks,

Christophe.


libsanitizer and qemu compatibility

2013-02-13 Thread Christophe Lyon
Hi,

I am working on enabling libsanitizer on ARM.
I have a very simple patch to enable it, and a sample program seems to
work on a board.

However, I would like to use qemu as an execution engine, but I get
error messages from libsanitizer at startup:
==30022== Shadow memory range interleaves with an existing memory
mapping. ASan cannot proceed correctly. ABORTING.
** shadow start 0x1000 shadow_end 0x3fff
==30022== Process memory map follows:
0x-0x8000
0x8000-0x9000 /home/lyon/src/tests/sanitizer.armhf
0x9000-0x0001
0x0001-0x00011000 /home/lyon/src/tests/sanitizer.armhf
0x00011000-0xf4f5
0xf4f5-0xf4f52000
0xf4f52000-0xf4f54000
0xf4f54000-0xf4f58000
0xf4f58000-0xf4f5c000

[many others follow, belonging to libgcc_s.so, libm.so, libstdc++.so,
libdl,so, libpthread.so, libc.so and libasan.so, and some with no
filename]

So I have a probably very naive question: can libsanitizer work under
qemu (linux-user mode)?
What should I change?

[I have already modified qemu's output of /proc/self/maps to add a
space character after the last number if there is no filename, to
avoid parsing errors from libsanitizer].


Thanks,

Christophe.


Re: libsanitizer and qemu compatibility

2013-02-15 Thread Christophe Lyon
On 14 February 2013 05:24, Konstantin Serebryany
 wrote:
> Hi Christophe,
>
> Are you talking about ARM Linux?

Yes.

> It will be easier for us (asan developers) to fix this upstream first.
> Could you please file a bug at https://code.google.com/p/address-sanitizer/ ?
OK, just entered as #160


>> ** shadow start 0x1000 shadow_end 0x3fff
>> ==30022== Process memory map follows:
>> 0x-0x8000
>> 0x8000-0x9000/home/lyon/src/tests/sanitizer.armhf
>> 0x9000-0x0001
>> 0x0001-0x00011000/home/lyon/src/tests/sanitizer.armhf
>
> 0x00011000-0xf4f5   << where does this crazy mapping come from?
>
I don't know :-) It's probably a qemu feature

Christophe.


C++: operator new and disabled exceptions

2007-09-28 Thread Christophe LYON

Hello,

I have already asked this question on gcc-help (see
http://gcc.gnu.org/ml/gcc-help/2007-09/msg00328.html), but I would like
advice from GCC developers.


Basically, when I compile with -fno-exceptions, I wonder why the G++ 
compiler still generates calls to the standard new operator (the one 
that throws bad_alloc when it runs out of memory), rather than 
new(nothrow) (_ZnwjRKSt9nothrow_t) ?


In addition, do you think I can patch my GCC such that it calls 
new(nothrow) when compiling with -fno-exceptions, or is it a bad idea? 
(compatibility issues, ...)


Thanks for your recommendation,

Christophe.


ICE in in compare_values_warnv, at tree-vrp.c:701

2007-11-16 Thread Christophe LYON

Hello,

I have recently reported GCC bug #34030 
(http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34030)


As it might have been fixed in 4.2.3, and as my concern is primarily for 
the 4.1.1 branch (we don't want to upgrade now), I am ready to fix it in 
my own sources.


However, I am not familiar with the involved compiler passes, so any 
advice on where to search would help me.


Given the fact that it is generic (I reproduced it on several
targets), and that removing the (void*) cast in the sample code makes
it disappear, I suppose it has to do with early lowering phases that
incorrectly propagate (or not) the type of the comparison operands.


Thanks for your suggestions,

Christophe.


[arm] GCC validation: preferred way of running the testsuite?

2020-05-11 Thread Christophe Lyon via Gcc
Hi,


As you may know, I've been running validations of GCC trunk in many
configurations for Arm and AArch64.


I was recently trying to make some cleanup in the new Bfloat16, MVE, CDE, and
ACLE tests because in several configurations I see 300-400 FAILs
mainly in these areas, because of “testisms”. The goal is to avoid
wasting time over the same failure reports when checking what needs
fixing. I thought this would be quick & easy, but this is tedious
because of the numerous combinations of options and configurations
available on Arm.


Sorry for the very long email; it’s hard to describe and summarize,
but I'd like to try nonetheless, hoping that we can make testing
easier/more efficient :-), because most of the time the problems I
found are with the tests rather than real compiler bugs, so I think
it's a bit of wasted time.


Here is a list of problems, starting with the tricky dependencies
around -mfloat-abi=XXX:

* Some targets do not support multilibs (eg arm-linux-gnueabi[hf] with
glibc), or one can decide not to build with both hard and soft FP
multilibs. This generally becomes a problem when including stdint.h
(used by arm_neon.h, arm_acle.h, …), leading to a compiler error for
lack of gnu/stub*.h for the missing float-abi. If you add -mthumb to
the picture, it becomes quite complex (eg -mfloat-abi=hard is not
supported on thumb-1).


Consider mytest.c that does not depend on any include file and has:
/* { dg-options "-mfloat-abi=hard" } */

If GCC is configured for arm-linux-gnueabi --with-cpu=cortex-a9 --with-fpu=neon,
with ‘make check’, the test PASSes.
With ‘make check’ with --target-board=-march=armv5t/-mthumb, the
test FAILs:
sorry, unimplemented: Thumb-1 hard-float VFP ABI


If I add
/* { dg-require-effective-target arm_hard_ok } */
‘make check’ with --target-board=-march=armv5t/-mthumb is now
UNSUPPORTED (which is OK), but
plain ‘make check’ is now also UNSUPPORTED because arm_hard_ok detects
that we lack the -mfloat-abi=hard multilib. So we lose a PASS.

If I configure GCC for arm-linux-gnueabihf, then:
‘make check’ PASSes
‘make check’ with --target-board=-march=armv5t/-mthumb, FAILs
and with
/* { dg-require-effective-target arm_hard_ok } */
‘make check’ with --target-board=-march=armv5t/-mthumb is now UNSUPPORTED and
plain ‘make check’ PASSes

So it seems the best option is to add
/* { dg-require-effective-target arm_hard_ok } */
although it makes the test UNSUPPORTED by arm-linux-gnueabi even in
cases where it could PASS.

Is there consensus that this is the right way?
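
For reference, a minimal sketch of the resulting test file (the file
name and body are placeholders):

/* mytest.c: combine the effective-target check with the option.  */
/* { dg-do compile } */
/* { dg-require-effective-target arm_hard_ok } */
/* { dg-options "-mfloat-abi=hard" } */
int main (void) { return 0; }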



* In GCC DejaGnu helpers, the queries for -mfloat-abi=hard and
-march=XXX are independent in general, meaning if you query for
-mfloat-abi=hard support, it will do that in the absence of any
-march=XXX that the testcase may also be using. So, if GCC is
configured with its default cpu/fpu, -mfloat-abi=hard will be rejected
for lack of an fpu on the default cpu, but if GCC is configured with a
suitable cpu/fpu pair, -mfloat-abi=hard will be accepted.

I faced this problem when I tried to “fix” the order in which we try options in
arm_v8_2a_bf16_neon_ok. (see
https://gcc.gnu.org/pipermail/gcc-patches/2020-April/544654.html)

I faced similar problems while working on a patch of mine about a bug
with IRQ handlers which has different behaviour depending on the FP
ABI used: I have the feeling that I spend too much time writing the
tests to the detriment of the patch itself...

I also noticed that Richard Sandiford probably faced similar issues
with his recent fix for "no_unique_address", where he finally added
arm_arch_v8a_hard_ok to check armv8-a CPU + neon-fp-armv8 FPU +
float-abi=hard at the same time.

Maybe we could decide on a consistent and simpler way of checking such things?


* A metric for this complexity could be the number of arm
effective-targets, a quick and not-fully accurate grep | sed | sort |
uniq -c | sort -n on target-supports.exp ends with:
 9 mips
 16 aarch64
 21 powerpc
 97 vect
106 arm
(does not count all the effective-targets generated by tcl code, eg
arm_arch_FUNC_ok)

This probably explains why it’s hard to get test directives right :-)

I’ve not thought about how we could reduce that number….



* Finally, I’m wondering about the most appropriate way of configuring
GCC and running the tests.

So far, for most of the configurations I'm testing, I use different
--with-cpu/--with-fpu/--with-mode configure flags for each toolchain
configuration I’m testing and rarely override the flags at testing
time. I also disable multilibs to save build time and (scratch) disk
space. (See 
https://people.linaro.org/~christophe.lyon/cross-validation/gcc/trunk/0latest/report-build-info.html
for the current list, each line corresponds to a clean build + make
check job -- so there are 15 different toolchain configs for
arm-linux-gnueabihf for instance)

However, I think this may not be appropriate at least for the
arm-eabi toolchains, because I suspect the vendors who support several
SoCs ge

Re: dejagnu version update?

2020-05-13 Thread Christophe Lyon via Gcc
On Wed, 13 May 2020 at 19:44, Jonathan Wakely via Gcc  wrote:
>
> On Wed, 13 May 2020 at 18:19, Mike Stump via Gcc  wrote:
> >
> > I've changed the subject to match the 2015, 2017 and 2018 email threads.
> >
> > On May 13, 2020, at 3:26 AM, Thomas Schwinge  
> > wrote:
> > >
> > > Comparing DejaGnu/GCC testsuite '*.sum' files between two systems ("old"
> > > vs. "new") that ought to return identical results, I found that they
> > > didn't:
> >
> > > I have not found any evidence in DejaGnu master branch that this not
> > > working would've been a "recent" DejaGnu regression (and then fixed for
> > > DejaGnu 1.6) -- so do we have to assume that this never worked as
> > > intended back then?
> >
> > Likely not.
> >
> > > Per our "Prerequisites for GCC" installation documentation, we currently
> > > require DejaGnu 1.4.4.  Advancing that to 1.6 is probably out of
> > > question, given that it has "just" been released (four years ago).
> >
> > :-)  A user that wants full coverage should use 1.6, apparently.
>
> As documented at
> https://gcc.gnu.org/onlinedocs/libstdc++/manual/test.html#test.run.permutations
> anything older than 1.5.3 causes problems for libstdc++ (and probably
> the rest of GCC) because the options in --target_board get placed
> after the options in dg-options. If the test depends on the options in
> dg-options to work properly it might fail. For example, a test that
> has { dg-options "-O2" } and fails without optimisation would FAIL if
> you use --target_board=unix/-O0 with dejagnu 1.5.
>
I think that was commit:
http://git.savannah.gnu.org/gitweb/?p=dejagnu.git;a=commitdiff;h=5256bd82343000c76bc0e48139003f90b6184347
which for sure was a major change (though I don't see it documented in
the dejagnu/NEWS file).

>
> > > As the failure mode with old DejaGnu is "benign" (only causes missing
> > > execution testing), we could simply move on, and accept non-reproducible
> > > results between different DejaGnu versions?  Kind of lame...  ;-|
> >
> > An ugly wart to be sure.
> >
> > So, now that ubuntu 20.04 is out and RHEL 8 is out, and they both contain 
> > 1.6, and SLES has 1.6, and since we've been sitting at 1.4.4 for so long, anyone 
> > want to not update dejagnu to require 1.6?
>
> There are still lots of older systems in use for GCC dev, like all the
> POWER servers in the compile farm (but I've put a recent dejagnu in
> /opt/cfarm on some of them).
>
> > I had previously approved the update to 1.5.3, but no one really wanted it 
> > as no one updated the requirement.  Let's have the 1.6 discussion.  I'm not 
> > only inclined to up to 1.6, but to actually edit it in this time.
>
> Would the tests actually refuse to run with an older version?
>
> > Anyone strongly against?  Why?
>
> I'm in favour of requiring 1.5.3 or later, so 1.6 would be OK for me.


Re: [arm] GCC validation: preferred way of running the testsuite?

2020-05-26 Thread Christophe Lyon via Gcc
On Tue, 19 May 2020 at 13:28, Richard Earnshaw
 wrote:
>
> On 11/05/2020 17:43, Christophe Lyon via Gcc wrote:
> > Hi,
> >
> >
> > As you may know, I've been running validations of GCC trunk in many
> > configurations for Arm and Aarch64.
> >
> >
> > I was recently trying to make some cleanup in the new Bfloat16, MVE, CDE, 
> > and
> > ACLE tests because in several configurations I see 300-400 FAILs
> > mainly in these areas, because of “testisms”. The goal is to avoid
> > wasting time over the same failure reports when checking what needs
> > fixing. I thought this would be quick & easy, but this is tedious
> > because of the numerous combinations of options and configurations
> > available on Arm.
> >
> >
> > Sorry for the very long email, it’s hard to describe and summarize,
> > but I'd like to try nonetheless, hoping that we can make testing
> > easier/more efficient :-), because most of the time the problems I
> > found are with the tests rather than real compiler bugs, so I think
> > it's a bit of wasted time.
> >
> >
> > Here is a list of problems, starting with the tricky dependencies
> > around -mfloat-abi=XXX:
> >
> > * Some targets do not support multilibs (eg arm-linux-gnueabi[hf] with
> > glibc), or one can decide not to build with both hard and soft FP
> > multilibs. This generally becomes a problem when including stdint.h
> > (used by arm_neon.h, arm_acle.h, …), leading to a compiler error for
> > lack of gnu/stub*.h for the missing float-abi. If you add -mthumb to
> > the picture, it becomes quite complex (eg -mfloat-abi=hard is not
> > supported on thumb-1).
> >
> >
> > Consider mytest.c that does not depend on any include file and has:
> > /* { dg-options "-mfloat-abi=hard" } */
> >
> > If GCC is configured for arm-linux-gnueabi --with-cpu=cortex-a9 
> > --with-fpu=neon,
> > with ‘make check’, the test PASSes.
> > With ‘make check’ with --target-board=-march=armv5t/-mthumb, then the
> > test FAILs:
> > sorry, unimplemented: Thumb-1 hard-float VFP ABI
> >
> >
> > If I add
> > /* { dg-require-effective-target arm_hard_ok } */
> > ‘make check’ with --target-board=-march=armv5t/-mthumb is now
> > UNSUPPORTED (which is OK), but
> > plain ‘make check’ is now also UNSUPPORTED because arm_hard_ok detects
> > that we lack the -mfloat-abi=hard multilib. So we lose a PASS.
> >
> > If I configure GCC for arm-linux-gnueabihf, then:
> > ‘make check’ PASSes
> > ‘make check’ with --target-board=-march=armv5t/-mthumb, FAILs
> > and with
> > /* { dg-require-effective-target arm_hard_ok } */
> > ‘make check’ with --target-board=-march=armv5t/-mthumb is now UNSUPPORTED 
> > and
> > plain ‘make check’ PASSes
> >
> > So it seems the best option is to add
> > /* { dg-require-effective-target arm_hard_ok } */
> > although it makes the test UNSUPPORTED by arm-linux-gnueabi even in
> > cases where it could PASS.
> >
> > Is there consensus that this is the right way?
> >
> >
> >
> > * In GCC DejaGnu helpers, the queries for -mfloat-abi=hard and
> > -march=XXX are independent in general, meaning if you query for
> > -mfloat-abi=hard support, it will do that in the absence of any
> > -march=XXX that the testcase may also be using. So, if GCC is
> > configured with its default cpu/fpu, -mfloat-abi=hard will be rejected
> > for lack of an fpu on the default cpu, but if GCC is configured with a
> > suitable cpu/fpu pair, -mfloat-abi=hard will be accepted.
> >
> > I faced this problem when I tried to “fix” the order in which we try 
> > options in
> > Arm_v8_2a_bf16_neon_ok. (see
> > https://gcc.gnu.org/pipermail/gcc-patches/2020-April/544654.html)
> >
> > I faced similar problems while working on a patch of mine about a bug
> > with IRQ handlers which has different behaviour depending on the FP
> > ABI used: I have the feeling that I spend too much time writing the
> > tests to the detriment of the patch itself...
> >
> > I also noticed that Richard Sandiford probably faced similar issues
> > with his recent fix for "no_unique_address", where he finally added
> > arm_arch_v8a_hard_ok to check armv8-a CPU + neon-fp-armv8 FPU +
> > float-abi=hard at the same time.
> >
> > Maybe we could decide on a consistent and simpler way of checking such 
> > things?
> >
> >
> > * A metric for this complexity could be the number of arm
> > effective-targets, a quick and not-fully accurate gr

Re: GCC Testsuite patches break AIX

2020-05-27 Thread Christophe Lyon via Gcc
On Wed, 27 May 2020 at 16:26, Jeff Law via Gcc  wrote:
>
> On Wed, 2020-05-27 at 11:16 -0300, Alexandre Oliva wrote:
> > Hello, David,
> >
> > On May 26, 2020, David Edelsohn  wrote:
> >
> > > Complaints about -dA, -dD, -dumpbase, etc.
> >
> > This was the main symptom of the problem fixed in the follow-up commit
> > r11-635-g6232d02b4fce4c67d39815aa8fb956e4b10a4e1b
> >
> > Could you please confirm that you did NOT have this commit in your
> > failing build, and that the patch above fixes the problem for you as it
> > did for others?
> >
> >
> > > This patch was not properly tested on all targets.
> >
> > This problem had nothing to do with targets.  Having Ada enabled, which
> > I've nearly always and very long done to increase test coverage, was
> > what kept the preexisting bug latent in my testing.
> >
> >
> > Sorry that I failed to catch it before the initial check in.
> Any thoughts on the massive breakage on the embedded ports in the testsuite?
> Essentially every test that links is failing like this:
>
> > Executing on host: /home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/xgcc
> > -B/home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/
> > /home/jenkins/gcc/gcc/testsuite/gcc.c-torture/execute/2112-1.c
> > gcc_tg.o-fno-diagnostics-show-caret -fno-diagnostics-show-line-numbers
> > -fdiagnostics-color=never  -fdiagnostics-urls=never-O0  -w   -msim {} 
> > {}  -
> > Wl,-wrap,exit -Wl,-wrap,_exit -Wl,-wrap,main -Wl,-wrap,abort -lm  -o
> > ./2112-1.exe(timeout = 300)
> > spawn -ignore SIGHUP 
> > /home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/xgcc
> > -B/home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/
> > /home/jenkins/gcc/gcc/testsuite/gcc.c-torture/execute/2112-1.c gcc_tg.o
> > -fno-diagnostics-show-caret -fno-diagnostics-show-line-numbers 
> > -fdiagnostics-
> > color=never -fdiagnostics-urls=never -O0 -w -msim   -Wl,-wrap,exit -Wl,-
> > wrap,_exit -Wl,-wrap,main -Wl,-wrap,abort -lm -o ./2112-1.exe^M
> > xgcc: error: : No such file or directory^M
> > xgcc: error: : No such file or directory^M
> > compiler exited with status 1
> > FAIL: gcc.c-torture/execute/2112-1.c   -O0  (test for excess errors)
> > Excess errors:
> > xgcc: error: : No such file or directory
> > xgcc: error: : No such file or directory
> >
>
>
> Sadly there's no additional output that would help us figure out what went 
> wrong.

If that helps, I traced this down to the new gcc_adjust_linker_flags function.

Christophe


>
> jeff
>


Re: GCC Testsuite patches break AIX

2020-05-28 Thread Christophe Lyon via Gcc
On Wed, 27 May 2020 at 22:40, Alexandre Oliva  wrote:
>
> On May 27, 2020, Christophe Lyon via Gcc  wrote:
>
> > On Wed, 27 May 2020 at 16:26, Jeff Law via Gcc  wrote:
>
> >> Any thoughts on the massive breakage on the embedded ports in the 
> >> testsuite?
>
> I wasn't aware of any.  Indeed, one of my last steps before submitting
> the patchset was to fix problems that had come up in embedded ports,
> with gcc_adjust_linker_flags and corresponding changes to outputs.exp
> itself.
>
> >> Essentially every test that links is failing like this:
>
>
> >>
> >> > Executing on host: 
> >> > /home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/xgcc
> >> > -B/home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/
> >> > /home/jenkins/gcc/gcc/testsuite/gcc.c-torture/execute/2112-1.c
> >> > gcc_tg.o-fno-diagnostics-show-caret 
> >> > -fno-diagnostics-show-line-numbers
> >> > -fdiagnostics-color=never  -fdiagnostics-urls=never-O0  -w   -msim 
> >> > {} {}  -
> >> > Wl,-wrap,exit -Wl,-wrap,_exit -Wl,-wrap,main -Wl,-wrap,abort -lm  -o
> >> > ./2112-1.exe(timeout = 300)
> >> > spawn -ignore SIGHUP 
> >> > /home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/xgcc
> >> > -B/home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/
> >> > /home/jenkins/gcc/gcc/testsuite/gcc.c-torture/execute/2112-1.c 
> >> > gcc_tg.o
> >> > -fno-diagnostics-show-caret -fno-diagnostics-show-line-numbers 
> >> > -fdiagnostics-
> >> > color=never -fdiagnostics-urls=never -O0 -w -msim   -Wl,-wrap,exit -Wl,-
> >> > wrap,_exit -Wl,-wrap,main -Wl,-wrap,abort -lm -o ./2112-1.exe^M
> >> > xgcc: error: : No such file or directory^M
>
> >> Sadly there's no additional output that would help us figure out what went 
> >> wrong.
>
> > If that helps, I traced this down to the new gcc_adjust_linker_flags 
> > function.
>
> Thanks.  Yeah, H-P observed and submitted a similar report that made me
> wonder about empty arguments being passed to GCC.  Jeff's report
> confirms the suspicion.  See how there are a couple of {}s after -msim
> in the "Executing on host" line, that in the "spawn" line are completely
> invisible, only suggested by the extra whitespace.  That was not quite
> visible in H-P's report, but Jeff's makes it clear.
>
> I suppose this means there are consecutive blanks in e.g. board's
> ldflags, and the split function is turning each consecutive pair of

Yes, I'm seeing this because of
set_board_info ldflags  "[libgloss_link_flags] [newlib_link_flags]
$additional_options"
in arm-sim.exp

> blanks into an empty argument.  I'm testing a fix (kludge?) in
> refs/users/aoliva/heads/testme 169b13d14d3c1638e94ea7e8f718cdeaf88aed65
>
> --
> Alexandre Oliva, freedom fighterhe/himhttps://FSFLA.org/blogs/lxo/
> Free Software Evangelist  Stallman was right, but he's left :(
> GNU Toolchain Engineer   Live long and free, and prosper ethically


Re: dejagnu version update?

2020-06-12 Thread Christophe Lyon via Gcc
Hi,

On Wed, 27 May 2020 at 03:58, Rob Savoye  wrote:
>
> On 5/26/20 7:20 PM, Maciej W. Rozycki wrote:
>
> >  I'll run some RISC-V remote GCC/GDB testing and compare results for
> > DejaGnu 1.6/1.6.1 vs trunk.  It will take several days though, as it takes
> > many hours to go through these testsuite runs.
>
>   That'd be great. I'd rather push out a stable release, than have to
> fix bugs right after it gets released.
>
> - rob -


I ran our GCC validation harness using the dejagnu master branch and
compared the results to those we get using our linaro-local/stable
branch (https://git.linaro.org/toolchain/dejagnu.git/).

I noticed that we'd need to add patches (1) and (2) at least.

Patch (1) enables us to run tests on aarch64-elf using Arm's Foundation Model.

Patch (2) was posted in 2016:
https://lists.gnu.org/archive/html/dejagnu/2016-03/msg5.html.
It fixes problems with test output patterns (in fortran, ubsan and asan tests).

Patch (3) was posted in 2016 too:
https://lists.gnu.org/archive/html/dejagnu/2016-03/msg00034.html
I'm not 100% sure it made a difference in these test runs because we
still see some random failures anyway.

Thanks,

Christophe
From 382440f145811eeb3e85d0e57d9b8aa5418d1e80 Mon Sep 17 00:00:00 2001
From: Yvan Roux 
Date: Mon, 25 Apr 2016 11:09:52 +0200
Subject: [PATCH 2/3] Keep trailing newline in remote execution output.

	* lib/rsh.exp (rsh_exec): Don't remove trailing newline.
	* lib/ssh.exp (ssh_exec): Likewise.

Change-Id: I2368c18729c4bff9ee87d9163b1c8f4b0ad7f4d8
---
 lib/rsh.exp | 3 ---
 lib/ssh.exp | 3 ---
 2 files changed, 6 deletions(-)

diff --git a/lib/rsh.exp b/lib/rsh.exp
index 5b583c6..43f5430 100644
--- a/lib/rsh.exp
+++ b/lib/rsh.exp
@@ -283,8 +283,5 @@ proc rsh_exec { boardname program pargs inp outp } {
 	return [list -1 "Couldn't parse $RSH output, $output."]
 }
 regsub "XYZ(\[0-9\]*)ZYX\n?" $output "" output
-# Delete one trailing \n because that is what `exec' will do and we want
-# to behave identical to it.
-regsub "\n$" $output "" output
 return [list [expr {$status != 0}] $output]
 }
diff --git a/lib/ssh.exp b/lib/ssh.exp
index 0cf0f8d..a72f794 100644
--- a/lib/ssh.exp
+++ b/lib/ssh.exp
@@ -194,9 +194,6 @@ proc ssh_exec { boardname program pargs inp outp } {
 	return [list -1 "Couldn't parse $SSH output, $output."]
 }
 regsub "XYZ(\[0-9\]*)ZYX\n?" $output "" output
-# Delete one trailing \n because that is what `exec' will do and we want
-# to behave identical to it.
-regsub "\n$" $output "" output
 return [list [expr {$status != 0}] $output]
 }
 
-- 
2.7.4

From 1e5110d99ac8bac61e97bbdb4bb78ca72adb7e4e Mon Sep 17 00:00:00 2001
From: Maxim Kuvyrkov 
Date: Tue, 28 Jun 2016 09:40:01 +0100
Subject: [PATCH 1/3] Support using QEMU in local/remote testing using default
 "unix" board

If the board file defines "exec_shell", prepend it before the local or
remote command.

Change-Id: Ib3ff96126c4c96e4e7f8898609d0fce6faf803ef
---
 config/unix.exp | 13 +
 1 file changed, 13 insertions(+)

diff --git a/config/unix.exp b/config/unix.exp
index 2e38454..dc3f781 100644
--- a/config/unix.exp
+++ b/config/unix.exp
@@ -78,6 +78,11 @@ proc unix_load { dest prog args } {
 	verbose -log "Setting LD_LIBRARY_PATH to $ld_library_path:$orig_ld_library_path" 2
 	verbose -log "Execution timeout is: $test_timeout" 2
 
+	# Prepend shell name (e.g., qemu emulator) to the command.
+	if {[board_info $dest exists exec_shell]} {
+	set command "[board_info $dest exec_shell] $command"
+	}
+
 	set id [remote_spawn $dest $command "readonly"]
 	if { $id < 0 } {
 	set output "remote_spawn failed"
@@ -119,6 +124,14 @@ proc unix_load { dest prog args } {
 		return [list "unresolved" ""]
 	}
 	}
+
+	# Prepend shell name (e.g., qemu emulator) to the command.
+	if {[board_info $dest exists exec_shell]} {
+	set remotecmd "[board_info $dest exec_shell] $remotefile"
+	} else {
+	set remotecmd "$remotefile"
+	}
+
 	set status [remote_exec $dest $remotefile $parg $inp]
 	remote_file $dest delete $remotefile.o $remotefile
 	if { [lindex $status 0] < 0 } {
-- 
2.7.4

From b6a3e52aec69146e930d85b84a81b1e059f2ffe5 Mon Sep 17 00:00:00 2001
From: Christophe Lyon 
Date: Fri, 28 Sep 2018 08:26:02 +
Subject: [PATCH 3/3] 2018-09-28  Christophe Lyon  

	* lib/ssh.exp (ssh_exec): Redirect stderr to stdout on the remote
	machine, to avoid race conditions.

Change-Id: Ie0613a85fa990484fda41b13738025edf7477a62
---
 lib/ssh.exp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/ssh.exp b/lib/ssh.exp
index a72f794..3c7b840 100644
--- a/lib/ssh.exp
+++ b/lib/ssh.exp
@@ -171,7 +171,7 @@ proc ssh_exec { boar

Re: duplicate arm test results?

2020-09-22 Thread Christophe Lyon via Gcc
On Tue, 22 Sep 2020 at 17:02, Martin Sebor  wrote:
>
> Hi Christophe,
>
> While checking recent test results I noticed many posts with results
> for various flavors of arm that at high level seem like duplicates
> of one another.
>
> For example, the batch below all have the same title, but not all
> of the contents are the same.  The details (such as test failures)
> on some of the pages are different.
>
> Can you help explain the differences?  Is there a way to avoid
> the duplication?
>

Sure, I am aware that many results look the same...


If you look at the top of the report (~line 5), you'll see:
Running target myarm-sim
Running target myarm-sim/-mthumb/-mcpu=cortex-m3/-mfloat-abi=soft/-march=armv7-m
Running target myarm-sim/-mthumb/-mcpu=cortex-m0/-mfloat-abi=soft/-march=armv6s-m
Running target myarm-sim/-mcpu=cortex-a7/-mfloat-abi=hard/-march=armv7ve+simd
Running target myarm-sim/-mthumb/-mcpu=cortex-m7/-mfloat-abi=hard/-march=armv7e-m+fp.dp
Running target myarm-sim/-mthumb/-mcpu=cortex-m4/-mfloat-abi=hard/-march=armv7e-m+fp
Running target myarm-sim/-mthumb/-mcpu=cortex-m33/-mfloat-abi=hard/-march=armv8-m.main+fp+dsp
Running target myarm-sim/-mcpu=cortex-a7/-mfloat-abi=soft/-march=armv7ve+simd
Running target myarm-sim/-mthumb/-mcpu=cortex-a7/-mfloat-abi=hard/-march=armv7ve+simd

For all of these, the first line of the report is:
LAST_UPDATED: Tue Sep 22 09:39:18 UTC 2020 (revision
r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c)
TARGET=arm-none-eabi CPU=default FPU=default MODE=default

I have other combinations where I override the configure flags, eg:
LAST_UPDATED: Tue Sep 22 11:25:12 UTC 2020 (revision
r9-8928-gb3043e490896ea37cd0273e6e149c3eeb3298720)
TARGET=arm-none-linux-gnueabihf CPU=cortex-a9 FPU=neon-fp16 MODE=thumb

I tried to see if I could fit something in the subject line, but that
didn't seem convenient (would be too long, and I fear modifying the
awk script)

I think HJ generates several "running targets" in the same log; I run
them separately to benefit from the compute farm I have access to.

Christophe

> Thanks
> Martin
>
> Results for 11.0.0 20200922 (experimental) [master revision
> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> arm-none-eabi   Christophe LYON
>
>  Results for 11.0.0 20200922 (experimental) [master revision
> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> arm-none-eabi   Christophe LYON
>  Results for 11.0.0 20200922 (experimental) [master revision
> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> arm-none-eabi   Christophe LYON
>  Results for 11.0.0 20200922 (experimental) [master revision
> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> arm-none-eabi   Christophe LYON
>  Results for 11.0.0 20200922 (experimental) [master revision
> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> arm-none-eabi   Christophe LYON
>  Results for 11.0.0 20200922 (experimental) [master revision
> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> arm-none-eabi   Christophe LYON
>  Results for 11.0.0 20200922 (experimental) [master revision
> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> arm-none-eabi   Christophe LYON
>  Results for 11.0.0 20200922 (experimental) [master revision
> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> arm-none-eabi   Christophe LYON
>  Results for 11.0.0 20200922 (experimental) [master revision
> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> arm-none-eabi   Christophe LYON


Re: duplicate arm test results?

2020-09-23 Thread Christophe Lyon via Gcc
On Wed, 23 Sep 2020 at 01:47, Martin Sebor  wrote:
>
> On 9/22/20 9:15 AM, Christophe Lyon wrote:
> > On Tue, 22 Sep 2020 at 17:02, Martin Sebor  wrote:
> >>
> >> Hi Christophe,
> >>
> >> While checking recent test results I noticed many posts with results
> >> for various flavors of arm that at high level seem like duplicates
> >> of one another.
> >>
> >> For example, the batch below all have the same title, but not all
> >> of the contents are the same.  The details (such as test failures)
> >> on some of the pages are different.
> >>
> >> Can you help explain the differences?  Is there a way to avoid
> >> the duplication?
> >>
> >
> > Sure, I am aware that many results look the same...
> >
> >
> > If you look at the top of the report (~line 5), you'll see:
> > Running target myarm-sim
> > Running target 
> > myarm-sim/-mthumb/-mcpu=cortex-m3/-mfloat-abi=soft/-march=armv7-m
> > Running target 
> > myarm-sim/-mthumb/-mcpu=cortex-m0/-mfloat-abi=soft/-march=armv6s-m
> > Running target 
> > myarm-sim/-mcpu=cortex-a7/-mfloat-abi=hard/-march=armv7ve+simd
> > Running target 
> > myarm-sim/-mthumb/-mcpu=cortex-m7/-mfloat-abi=hard/-march=armv7e-m+fp.dp
> > Running target 
> > myarm-sim/-mthumb/-mcpu=cortex-m4/-mfloat-abi=hard/-march=armv7e-m+fp
> > Running target 
> > myarm-sim/-mthumb/-mcpu=cortex-m33/-mfloat-abi=hard/-march=armv8-m.main+fp+dsp
> > Running target 
> > myarm-sim/-mcpu=cortex-a7/-mfloat-abi=soft/-march=armv7ve+simd
> > Running target 
> > myarm-sim/-mthumb/-mcpu=cortex-a7/-mfloat-abi=hard/-march=armv7ve+simd
> >
> > For all of these, the first line of the report is:
> > LAST_UPDATED: Tue Sep 22 09:39:18 UTC 2020 (revision
> > r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c)
> > TARGET=arm-none-eabi CPU=default FPU=default MODE=default
> >
> > I have other combinations where I override the configure flags, eg:
> > LAST_UPDATED: Tue Sep 22 11:25:12 UTC 2020 (revision
> > r9-8928-gb3043e490896ea37cd0273e6e149c3eeb3298720)
> > TARGET=arm-none-linux-gnueabihf CPU=cortex-a9 FPU=neon-fp16 MODE=thumb
> >
> > I tried to see if I could fit something in the subject line, but that
> > didn't seem convenient (would be too long, and I fear modifying the
> > awk script)
>
> Without some indication of a difference in the title there's no way
> to know what result to look at, and checking all of them isn't really
> practical.  The duplication (and the sheer number of results) also
> make it more difficult to find results for targets other than arm-*.
> There are about 13,000 results for September and over 10,000 of those
> for arm-* alone.  It's good to have data but when there's this much
> of it, and when the only form of presentation is as a running list,
> it's too cumbersome to work with.
>

To help me track & report regressions, I build higher level reports like:
https://people.linaro.org/~christophe.lyon/cross-validation/gcc/trunk/0latest/report-build-info.html
where it's more obvious what configurations are tested.

Each line of such reports can trigger a message to gcc-testresults.

I can control when such emails are sent, independently for each line:
- never
- for daily bump
- for each validation

So, I can easily reduce the number of emails (by disabling them for
some configurations), but that won't make the subject more informative.
I included the short revision (rXX-) in the title to make it clearer.

The number of configurations has grown over time because we regularly
found regressions
in configurations not tested previously.

I can probably easily add the values of --with-cpu, --with-fpu,
--with-mode and RUNTESTFLAGS
as part of the [ revision rXX--Z] string in the title;
would that help?
I fear that's going to make very long subject lines.

It would probably be cleaner to update test_summary such that it adds
more info as part of $host
(as in "... testsuite on $host"), so that it grabs useful configure
parameters and runtestflags; however,
this would be more controversial.

Christophe

> Martin
>
> >
> > I think HJ generates several "running targets" in the same log, I run
> > them separately to benefit from the compute farm I have access to.
> >
> > Christophe
> >
> >> Thanks
> >> Martin
> >>
> >> Results for 11.0.0 20200922 (experimental) [master revision
> >> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c] (GCC) testsuite on
> >> arm-none-eabi   Christophe LYON
> >>
> >>   Results for 11.0.0 20200922 (e

Re: duplicate arm test results?

2020-09-23 Thread Christophe Lyon via Gcc
On Wed, 23 Sep 2020 at 12:26, Richard Earnshaw
 wrote:
>
> On 23/09/2020 11:20, Jakub Jelinek via Gcc wrote:
> > On Wed, Sep 23, 2020 at 10:22:52AM +0100, Richard Sandiford wrote:
> >> So that would give:
> >>
> >>   Results for 8.4.1 20200918 [r8-10517] on arm-none-linux-gnueabihf
> >>
> >> and hopefully free up some space at the end for the kind of thing
> >> you mention.
> >
> > Even that 8.4.1 20200918 is redundant, r8-10517 uniquely and shortly
> > identifies both the branch and commit.
> > So just
> > Results for r8-10517 on ...
> > and in ... also include something that uniquely identifies the
> > configuration.
> >
> >   Jakub
> >
>
> I was thinking similarly, but then realised anyone using snapshots
> rather than git might not have that information.
>
> If that's not the case, however, then simplifying this would be a great
> idea.
>
> On the other hand, I use subject filters in my mail to steer results to
> a separate folder, so I do need some invariant key in the subject line
> that is sufficient to match without (too many) false positives.
>

I always assumed there was a required format for the title/email
contents; is that documented somewhere?
There must be a smart filter to avoid spam; doesn't it require some
"keywords" in the title, for instance?

Same question for the gcc-regression list: is there a mandatory format?

Thanks,

Christophe

> R.


Re: duplicate arm test results?

2020-09-23 Thread Christophe Lyon via Gcc
On Wed, 23 Sep 2020 at 17:33, Martin Sebor  wrote:
>
> On 9/23/20 2:54 AM, Christophe Lyon wrote:
> > On Wed, 23 Sep 2020 at 01:47, Martin Sebor  wrote:
> >>
> >> On 9/22/20 9:15 AM, Christophe Lyon wrote:
> >>> On Tue, 22 Sep 2020 at 17:02, Martin Sebor  wrote:
> >>>>
> >>>> Hi Christophe,
> >>>>
> >>>> While checking recent test results I noticed many posts with results
> >>>> for various flavors of arm that at high level seem like duplicates
> >>>> of one another.
> >>>>
> >>>> For example, the batch below all have the same title, but not all
> >>>> of the contents are the same.  The details (such as test failures)
> >>>> on some of the pages are different.
> >>>>
> >>>> Can you help explain the differences?  Is there a way to avoid
> >>>> the duplication?
> >>>>
> >>>
> >>> Sure, I am aware that many results look the same...
> >>>
> >>>
> >>> If you look at the top of the report (~line 5), you'll see:
> >>> Running target myarm-sim
> >>> Running target 
> >>> myarm-sim/-mthumb/-mcpu=cortex-m3/-mfloat-abi=soft/-march=armv7-m
> >>> Running target 
> >>> myarm-sim/-mthumb/-mcpu=cortex-m0/-mfloat-abi=soft/-march=armv6s-m
> >>> Running target 
> >>> myarm-sim/-mcpu=cortex-a7/-mfloat-abi=hard/-march=armv7ve+simd
> >>> Running target 
> >>> myarm-sim/-mthumb/-mcpu=cortex-m7/-mfloat-abi=hard/-march=armv7e-m+fp.dp
> >>> Running target 
> >>> myarm-sim/-mthumb/-mcpu=cortex-m4/-mfloat-abi=hard/-march=armv7e-m+fp
> >>> Running target 
> >>> myarm-sim/-mthumb/-mcpu=cortex-m33/-mfloat-abi=hard/-march=armv8-m.main+fp+dsp
> >>> Running target 
> >>> myarm-sim/-mcpu=cortex-a7/-mfloat-abi=soft/-march=armv7ve+simd
> >>> Running target 
> >>> myarm-sim/-mthumb/-mcpu=cortex-a7/-mfloat-abi=hard/-march=armv7ve+simd
> >>>
> >>> For all of these, the first line of the report is:
> >>> LAST_UPDATED: Tue Sep 22 09:39:18 UTC 2020 (revision
> >>> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c)
> >>> TARGET=arm-none-eabi CPU=default FPU=default MODE=default
> >>>
> >>> I have other combinations where I override the configure flags, eg:
> >>> LAST_UPDATED: Tue Sep 22 11:25:12 UTC 2020 (revision
> >>> r9-8928-gb3043e490896ea37cd0273e6e149c3eeb3298720)
> >>> TARGET=arm-none-linux-gnueabihf CPU=cortex-a9 FPU=neon-fp16 MODE=thumb
> >>>
> >>> I tried to see if I could fit something in the subject line, but that
> >>> didn't seem convenient (would be too long, and I fear modifying the
> >>> awk script)
> >>
> >> Without some indication of a difference in the title there's no way
> >> to know what result to look at, and checking all of them isn't really
> >> practical.  The duplication (and the sheer number of results) also
> >> make it more difficult to find results for targets other than arm-*.
> >> There are about 13,000 results for September and over 10,000 of those
> >> for arm-* alone.  It's good to have data but when there's this much
> >> of it, and when the only form of presentation is as a running list,
> >> it's too cumbersome to work with.
> >>
> >
> > To help me track & report regressions, I build higher level reports like:
> > https://people.linaro.org/~christophe.lyon/cross-validation/gcc/trunk/0latest/report-build-info.html
> > where it's more obvious what configurations are tested.
>
> That looks awesome!  The regression indicator looks especially
> helpful.  I really wish we had an overview like this for all
> results.  I've been thinking about writing a script to scrape
> gcc-testresults and format an HTML table kind of like this for
> years.  With that, the number of posts sent to the list wouldn't
> be a problem (at least not for those using the page).  But it
> would require settling on a standard format for the basic
> parameters of each run.
>

It's probably easier to detect regressions and format reports from the
.sum files rather than extracting them from the mailing list.
But your approach has the advantage that you can detect regressions
from reports sent by other people, not only by you.


> >
> > Each line of such reports can send a message to gcc-testresults.
> >
> > I can control when such ema

Re: duplicate arm test results?

2020-09-23 Thread Christophe Lyon via Gcc
On Wed, 23 Sep 2020 at 14:33, David Edelsohn  wrote:
>
> On Wed, Sep 23, 2020 at 8:26 AM Christophe Lyon via Gcc  
> wrote:
> >
> > On Wed, 23 Sep 2020 at 12:26, Richard Earnshaw
> >  wrote:
> > >
> > > On 23/09/2020 11:20, Jakub Jelinek via Gcc wrote:
> > > > On Wed, Sep 23, 2020 at 10:22:52AM +0100, Richard Sandiford wrote:
> > > >> So that would give:
> > > >>
> > > >>   Results for 8.4.1 20200918 [r8-10517] on arm-none-linux-gnueabihf
> > > >>
> > > >> and hopefully free up some space at the end for the kind of thing
> > > >> you mention.
> > > >
> > > > Even that 8.4.1 20200918 is redundant, r8-10517 uniquely and shortly
> > > > identifies both the branch and commit.
> > > > So just
> > > > Results for r8-10517 on ...
> > > > and in ... also include something that uniquely identifies the
> > > > configuration.
> > > >
> > > >   Jakub
> > > >
> > >
> > > I was thinking similarly, but then realised anyone using snapshots
> > > rather than git might not have that information.
> > >
> > > If that's not the case, however, then simplifying this would be a great
> > > idea.
> > >
> > > On the other hand, I use subject filters in my mail to steer results to
> > > a separate folder, so I do need some invariant key in the subject line
> > > that is sufficient to match without (too many) false positives.
> > >
> >
> > I always assumed there was a required format for the title/email
> > contents, is that documented somewhere?
> > There must be a smart filter to avoid spam, doesn't it require some
> > "keywords" in the title for instance?
> >
> > Same question for the gcc-regression list: is there a mandatory format?
>
> The format is generated by contrib/test_summary.

That's true for gcc-testresults, and I was wondering what would happen
if I modified test_summary. Does some mail filter need fixing too?

Regarding gcc-regression, I think only Intel guys send messages there
(https://gcc.gnu.org/pipermail/gcc-regression/)
and they use different formats, hence I'm curious about the constraints.

>
> - David


Re: duplicate arm test results?

2020-09-24 Thread Christophe Lyon via Gcc
On Wed, 23 Sep 2020 at 17:50, Christophe Lyon
 wrote:
>
> On Wed, 23 Sep 2020 at 17:33, Martin Sebor  wrote:
> >
> > On 9/23/20 2:54 AM, Christophe Lyon wrote:
> > > On Wed, 23 Sep 2020 at 01:47, Martin Sebor  wrote:
> > >>
> > >> On 9/22/20 9:15 AM, Christophe Lyon wrote:
> > >>> On Tue, 22 Sep 2020 at 17:02, Martin Sebor  wrote:
> > >>>>
> > >>>> Hi Christophe,
> > >>>>
> > >>>> While checking recent test results I noticed many posts with results
> > >>>> for various flavors of arm that at high level seem like duplicates
> > >>>> of one another.
> > >>>>
> > >>>> For example, the batch below all have the same title, but not all
> > >>>> of the contents are the same.  The details (such as test failures)
> > >>>> on some of the pages are different.
> > >>>>
> > >>>> Can you help explain the differences?  Is there a way to avoid
> > >>>> the duplication?
> > >>>>
> > >>>
> > >>> Sure, I am aware that many results look the same...
> > >>>
> > >>>
> > >>> If you look at the top of the report (~line 5), you'll see:
> > >>> Running target myarm-sim
> > >>> Running target 
> > >>> myarm-sim/-mthumb/-mcpu=cortex-m3/-mfloat-abi=soft/-march=armv7-m
> > >>> Running target 
> > >>> myarm-sim/-mthumb/-mcpu=cortex-m0/-mfloat-abi=soft/-march=armv6s-m
> > >>> Running target 
> > >>> myarm-sim/-mcpu=cortex-a7/-mfloat-abi=hard/-march=armv7ve+simd
> > >>> Running target 
> > >>> myarm-sim/-mthumb/-mcpu=cortex-m7/-mfloat-abi=hard/-march=armv7e-m+fp.dp
> > >>> Running target 
> > >>> myarm-sim/-mthumb/-mcpu=cortex-m4/-mfloat-abi=hard/-march=armv7e-m+fp
> > >>> Running target 
> > >>> myarm-sim/-mthumb/-mcpu=cortex-m33/-mfloat-abi=hard/-march=armv8-m.main+fp+dsp
> > >>> Running target 
> > >>> myarm-sim/-mcpu=cortex-a7/-mfloat-abi=soft/-march=armv7ve+simd
> > >>> Running target 
> > >>> myarm-sim/-mthumb/-mcpu=cortex-a7/-mfloat-abi=hard/-march=armv7ve+simd
> > >>>
> > >>> For all of these, the first line of the report is:
> > >>> LAST_UPDATED: Tue Sep 22 09:39:18 UTC 2020 (revision
> > >>> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c)
> > >>> TARGET=arm-none-eabi CPU=default FPU=default MODE=default
> > >>>
> > >>> I have other combinations where I override the configure flags, eg:
> > >>> LAST_UPDATED: Tue Sep 22 11:25:12 UTC 2020 (revision
> > >>> r9-8928-gb3043e490896ea37cd0273e6e149c3eeb3298720)
> > >>> TARGET=arm-none-linux-gnueabihf CPU=cortex-a9 FPU=neon-fp16 MODE=thumb
> > >>>
> > >>> I tried to see if I could fit something in the subject line, but that
> > >>> didn't seem convenient (would be too long, and I fear modifying the
> > >>> awk script)
> > >>
> > >> Without some indication of a difference in the title there's no way
> > >> to know what result to look at, and checking all of them isn't really
> > >> practical.  The duplication (and the sheer number of results) also
> > >> make it more difficult to find results for targets other than arm-*.
> > >> There are about 13,000 results for September and over 10,000 of those
> > >> for arm-* alone.  It's good to have data but when there's this much
> > >> of it, and when the only form of presentation is as a running list,
> > >> it's too cumbersome to work with.
> > >>
> > >
> > > To help me track & report regressions, I build higher level reports like:
> > > https://people.linaro.org/~christophe.lyon/cross-validation/gcc/trunk/0latest/report-build-info.html
> > > where it's more obvious what configurations are tested.
> >
> > That looks awesome!  The regression indicator looks especially
> > helpful.  I really wish we had an overview like this for all
> > results.  I've been thinking about writing a script to scrape
> > gcc-testresults and format an HTML table kind of like this for
> > years.  With that, the number of posts sent to the list wouldn't
> > be a problem (at least not for those using the page).  But it
> > would require settling on a standard format for the basic
> > parameters of each run.

Re: duplicate arm test results?

2020-10-05 Thread Christophe Lyon via Gcc
On Thu, 24 Sep 2020 at 14:12, Christophe Lyon
 wrote:
>
> On Wed, 23 Sep 2020 at 17:50, Christophe Lyon
>  wrote:
> >
> > On Wed, 23 Sep 2020 at 17:33, Martin Sebor  wrote:
> > >
> > > On 9/23/20 2:54 AM, Christophe Lyon wrote:
> > > > On Wed, 23 Sep 2020 at 01:47, Martin Sebor  wrote:
> > > >>
> > > >> On 9/22/20 9:15 AM, Christophe Lyon wrote:
> > > >>> On Tue, 22 Sep 2020 at 17:02, Martin Sebor  wrote:
> > > >>>>
> > > >>>> Hi Christophe,
> > > >>>>
> > > >>>> While checking recent test results I noticed many posts with results
> > > >>>> for various flavors of arm that at high level seem like duplicates
> > > >>>> of one another.
> > > >>>>
> > > >>>> For example, the batch below all have the same title, but not all
> > > >>>> of the contents are the same.  The details (such as test failures)
> > > >>>> on some of the pages are different.
> > > >>>>
> > > >>>> Can you help explain the differences?  Is there a way to avoid
> > > >>>> the duplication?
> > > >>>>
> > > >>>
> > > >>> Sure, I am aware that many results look the same...
> > > >>>
> > > >>>
> > > >>> If you look at the top of the report (~line 5), you'll see:
> > > >>> Running target myarm-sim
> > > >>> Running target 
> > > >>> myarm-sim/-mthumb/-mcpu=cortex-m3/-mfloat-abi=soft/-march=armv7-m
> > > >>> Running target 
> > > >>> myarm-sim/-mthumb/-mcpu=cortex-m0/-mfloat-abi=soft/-march=armv6s-m
> > > >>> Running target 
> > > >>> myarm-sim/-mcpu=cortex-a7/-mfloat-abi=hard/-march=armv7ve+simd
> > > >>> Running target 
> > > >>> myarm-sim/-mthumb/-mcpu=cortex-m7/-mfloat-abi=hard/-march=armv7e-m+fp.dp
> > > >>> Running target 
> > > >>> myarm-sim/-mthumb/-mcpu=cortex-m4/-mfloat-abi=hard/-march=armv7e-m+fp
> > > >>> Running target 
> > > >>> myarm-sim/-mthumb/-mcpu=cortex-m33/-mfloat-abi=hard/-march=armv8-m.main+fp+dsp
> > > >>> Running target 
> > > >>> myarm-sim/-mcpu=cortex-a7/-mfloat-abi=soft/-march=armv7ve+simd
> > > >>> Running target 
> > > >>> myarm-sim/-mthumb/-mcpu=cortex-a7/-mfloat-abi=hard/-march=armv7ve+simd
> > > >>>
> > > >>> For all of these, the first line of the report is:
> > > >>> LAST_UPDATED: Tue Sep 22 09:39:18 UTC 2020 (revision
> > > >>> r11-3343-g44135373fcdbe4019c5524ec3dff8e93d9ef113c)
> > > >>> TARGET=arm-none-eabi CPU=default FPU=default MODE=default
> > > >>>
> > > >>> I have other combinations where I override the configure flags, eg:
> > > >>> LAST_UPDATED: Tue Sep 22 11:25:12 UTC 2020 (revision
> > > >>> r9-8928-gb3043e490896ea37cd0273e6e149c3eeb3298720)
> > > >>> TARGET=arm-none-linux-gnueabihf CPU=cortex-a9 FPU=neon-fp16 MODE=thumb
> > > >>>
> > > >>> I tried to see if I could fit something in the subject line, but that
> > > >>> didn't seem convenient (would be too long, and I fear modifying the
> > > >>> awk script)
> > > >>
> > > >> Without some indication of a difference in the title there's no way
> > > >> to know what result to look at, and checking all of them isn't really
> > > >> practical.  The duplication (and the sheer number of results) also
> > > >> make it more difficult to find results for targets other than arm-*.
> > > >> There are about 13,000 results for September and over 10,000 of those
> > > >> for arm-* alone.  It's good to have data but when there's this much
> > > >> of it, and when the only form of presentation is as a running list,
> > > >> it's too cumbersome to work with.
> > > >>
> > > >
> > > > To help me track & report regressions, I build higher level reports 
> > > > like:
> > > > https://people.linaro.org/~christophe.lyon/cross-validation/gcc/trunk/0latest/report-build-info.html
> > > > where it's more obvious what configurations are tested.
> > >
> > > That looks awesome!  The regression indicator looks especially helpful.

Re: GCC 10.3 Release Candidate available from gcc.gnu.org

2021-04-04 Thread Christophe Lyon via Gcc
On Thu, 1 Apr 2021 at 14:35, Richard Biener  wrote:
>
>
> The first release candidate for GCC 10.3 is available from
>
>  https://gcc.gnu.org/pub/gcc/snapshots/10.3.0-RC-20210401/
>  ftp://gcc.gnu.org/pub/gcc/snapshots/10.3.0-RC-20210401/
>
> and shortly its mirrors.  It has been generated from git commit
> 892024d4af83b258801ff7484bf28f0cf1a1a999.
>
> I have so far bootstrapped and tested the release candidate on
> x86_64-linux.  Please test it and report any issues to bugzilla.
>
> If all goes well, I'd like to release 10.3 on Thursday, April 8th.


Hi,

Last week I committed Richard Earnshaw's fix for PR target/99773,
which affects gcc-10 (sorry, I didn't check that when I filed the PR;
I only realized later that 10.3 was so close to release).

I think it would be desirable to backport the patch to gcc-10:
https://gcc.gnu.org/git/gitweb.cgi?p=gcc.git;h=6f93a7c7fc62b2d6ab47e5d5eb60d41366e1ee9e

Is that too late?

Thanks

Christophe


config/dfp.m4 license?

2022-04-29 Thread Christophe Lyon via Gcc

Hi!

The config/dfp.m4 file does not have a license header. Several other .m4
files in the same directory have a GPL header, but many others do not.


Can someone confirm the license of dfp.m4 and add the missing header if 
applicable?


Thanks!

Christophe


Re: Checks that autotools generated files were re-generated correctly

2023-11-06 Thread Christophe Lyon via Gcc
Hi!

On Mon, 6 Nov 2023 at 18:05, Martin Jambor  wrote:
>
> Hello,
>
> I have inherited Martin Liška's buildbot script that checks that all
> sorts of autotools generated files, mainly configure scripts, were
> re-generated correctly when appropriate.  While the checks are hopefully
> useful, they report issues surprisingly often and reporting them feels
> especially unproductive.
>
> Could such checks be added to our server side push hooks so that commits
> introducing these breakages would get refused automatically.  While the
> check might be a bit expensive, it only needs to be run on files
> touching the generated files and/or the files these are generated from.
>
> Alternatively, Maxim, you seem to have an infrastructure that is capable
> of sending email.  Would you consider adding the check to your buildbot
> instance and report issues automatically?  The level of totally

After the discussions we had during Cauldron, I actually thought we
should add such a bot.

Initially I was thinking about adding this as a "precommit" check, to
make sure the autogenerated files were submitted correctly, but I
realized that the policy is actually not to send autogenerated files
as part of the patch (thus making pre-commit checks impracticable in
such cases, unless we regenerate those files after applying the
patch).

I understand you mean to run this as a post-commit bot, meaning we
would continue to "accept" broken commits, but now automatically send
a notification, asking for a prompt fix?

We can probably implement that, indeed. Is that the general agreement?

Thanks,

Christophe

> false-positives should be low (I thought zero but see
> https://gcc.gnu.org/pipermail/gcc-patches/2023-November/635358.html).
>
> Thanks for any ideas which can lead to a mostly automated process.
>
> Martin


Re: Checks that autotools generated files were re-generated correctly

2023-11-07 Thread Christophe Lyon via Gcc
On Tue, 7 Nov 2023 at 15:36, Martin Jambor  wrote:
>
> Hello,
>
> On Tue, Nov 07 2023, Maxim Kuvyrkov wrote:
> >> On Nov 6, 2023, at 21:19, Christophe Lyon  
> wrote:
> >>
> >> Hi!
> >>
> >> On Mon, 6 Nov 2023 at 18:05, Martin Jambor  wrote:
> >>>
> >>> Hello,
> >>>
> >>> I have inherited Martin Liška's buildbot script that checks that all
> >>> sorts of autotools generated files, mainly configure scripts, were
> >>> re-generated correctly when appropriate.  While the checks are hopefully
> >>> useful, they report issues surprisingly often and reporting them feels
> >>> especially unproductive.
> >>>
> >>> Could such checks be added to our server side push hooks so that commits
> >>> introducing these breakages would get refused automatically.  While the
> >>> check might be a bit expensive, it only needs to be run on files
> >>> touching the generated files and/or the files these are generated from.
> >>>
> >>> Alternatively, Maxim, you seem to have an infrastructure that is capable
> >>> of sending email.  Would you consider adding the check to your buildbot
> >>> instance and report issues automatically?  The level of totally
> >>
> >> After the discussions we had during Cauldron, I actually thought we
> >> should add such a bot.
> >>
> >> Initially I was thinking about adding this as a "precommit" check, to
> >> make sure the autogenerated files were submitted correctly, but I
> >> realized that the policy is actually not to send autogenerated files
> >> as part of the patch (thus making pre-commit check impracticable in
> >> such cases, unless we autogenerate those files after applying the
> >> patch)
> >>
> >> I understand you mean to run this as a post-commit bot, meaning we
> >> would continue to "accept" broken commits, but now automatically send
> >> a notification, asking for a prompt fix?
>
> My thinking was that ideally bad commits would get refused early, like
> when you get your ChangeLog completely wrong, but if there are drawbacks
> to that approach, a completely automated notification system would be
> great too.
>
Well, making such checks in a pre-commit CI means that authors would have
to include regenerated files in their patch submissions, so it seems this
would imply a policy change (not impossible, but it will likely take some
time to get consensus?).

> >>
> >> We can probably implement that, indeed. Is that the general agreement?
> >
> > [CC: Siddhesh, Carlos]
> >
> > Hi Martin,
> >
> > I agree with Christophe, and we can add various source-level checks
> > and wrap them up as a post-commit job.  The job will then send out
> > email reports to developers whose patches failed it.
>
> Thanks, automating this would be a huge improvement.
>
> >
> > Where the current script is located?  These checks would be useful for
> > all GNU Toolchain projects -- binutils/GDB, GCC, Glibc and, maybe,
> > Newlib -- so it would be useful to put it in a separate "gnutools"
> > repo.
>
The test consists of running a python script (pasted below) in a
directory with a current master branch and subsequently checking that
"git diff" does not produce any output (which it currently does).

Great, I was thinking about writing something like that :-)

> You need to have locally built autotools utilities of exactly the right
> version.  The script (written by Martin Liška) is:
>
> -- 8< --
> #!/usr/bin/env python3
>
> import os
> import subprocess
> from pathlib import Path
>
> AUTOCONF_BIN = 'autoconf-2.69'
> AUTOMAKE_BIN = 'automake-1.15.1'
> ACLOCAL_BIN = 'aclocal-1.15.1'
> AUTOHEADER_BIN = 'autoheader-2.69'
>
> ENV = f'AUTOCONF={AUTOCONF_BIN} ACLOCAL={ACLOCAL_BIN} AUTOMAKE={AUTOMAKE_BIN}'
>
> config_folders = []
>
> # collect every directory that contains a generated 'configure' script
> for root, _, files in os.walk('.'):
>     for file in files:
>         if file == 'configure':
>             config_folders.append(Path(root).resolve())
>
> for folder in sorted(config_folders):
>     print(folder, flush=True)
>     os.chdir(folder)
>     configure_lines = open('configure.ac').read().splitlines()
>     # rerun autoheader only where configure.ac declares a config header
>     if any(True for line in configure_lines
>            if line.startswith('AC_CONFIG_HEADERS')):
>         subprocess.check_output(f'{ENV} {AUTOHEADER_BIN} -f', shell=True)
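
So the whole check boils down to something like this (a sketch, assuming
the script is saved as autoregen.py at the top of the tree):

  python3 autoregen.py
  git diff --exit-code   # non-zero exit if any regenerated file changed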

Help needed with maintainer-mode

2024-02-29 Thread Christophe Lyon via Gcc
Hi!

Sorry for cross-posting, but I'm not sure the rules/guidelines are the
same in gcc vs binutils/gdb.

TL;DR: are there some guidelines about how to use/enable maintainer-mode?

In the context of the Linaro CI, I've been looking at enabling
maintainer-mode at configure time in our configurations where we test
patches before they are committed (aka "precommit CI", which relies on
patchwork).

Indeed, auto-generated files are not part of patch submissions, and
when a patch implies regenerating some files before building, we
currently report spurious failures because we don't perform such updates.

I hoped improving this would be as simple as adding
--enable-maintainer-mode when configuring, after making sure
autoconf-2.69 and automake-1.15.1 were in the PATH (using our host's
libtool and gettext seems OK).
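
Concretely, the setup looks roughly like this (a sketch; /opt/autotools
and $SRCDIR are placeholders for our actual paths):

  # put the pinned tools first in PATH (autoconf-2.69, automake-1.15.1)
  export PATH=/opt/autotools/bin:$PATH
  mkdir build && cd build
  $SRCDIR/configure --enable-maintainer-mode
  make -j160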

However, doing so triggered several problems, which look like race
conditions in the build system (we build at -j160):
- random build errors in binutils / gdb with messages like "No rule to
make target 'po/BLD-POTFILES.in'". I managed to reproduce something
similar manually once and noticed an empty Makefile; the build logs are
of course difficult to read, so I haven't yet figured out what could
cause this.

- random build failures in gcc in fixincludes. I think this is a race
condition because fixincludes is updated concurrently both from
/fixincludes and $builddir/fixincludes. Probably fixable in gcc
Makefiles.

- I've seen other errors when building gcc like
configure.ac:25: error: possibly undefined macro: AM_ENABLE_MULTILIB
from libquadmath. I haven't investigated this yet.

I've read binutils' README-maintainer-mode, which contains a warning
about distclean, but we don't use this: we start our builds from a
scratch directory.

So... I'm wondering if there are some "official" guidelines about how
to regenerate files, and/or use maintainer-mode?  Maybe I missed a
"magic" make target (eg 'make autoreconf-all') that should be executed
after configure and before 'make all'?

I've noticed that sourceware's buildbot has a small script
"autoregen.py" which does not use the project's build system, but
rather calls aclocal/autoheader/automake/autoconf in an ad-hoc way.
Should we replicate that?

Thanks,

Christophe


Re: Help needed with maintainer-mode

2024-02-29 Thread Christophe Lyon via Gcc
On Thu, 29 Feb 2024 at 11:41, Richard Earnshaw (lists)
 wrote:
>
> On 29/02/2024 10:22, Christophe Lyon via Gcc wrote:
> > Hi!
> >
> > Sorry for cross-posting, but I'm not sure the rules/guidelines are the
> > same in gcc vs binutils/gdb.
> >
> > TL;DR: are there some guidelines about how to use/enable maintainer-mode?
> >
> > In the context of the Linaro CI, I've been looking at enabling
> > maintainer-mode at configure time in our configurations where we test
> > patches before they are committed (aka "precommit CI", which relies on
> > patchwork).
> >
> > Indeed, auto-generated files are not part of patch submissions, and
> > when a patch implies regenerating some files before building, we
> > currently report wrong failures because we don't perform such updates.
> >
> > I hoped improving this would be as simple as adding
> > --enable-maintainer-mode when configuring, after making sure
> > autoconf-2.69 and automake-1.15.1 were in the PATH (using our host's
> > libtool and gettext seems OK).
> >
> > However, doing so triggered several problems, which look like race
> > conditions in the build system (we build at -j160):
> > - random build errors in binutils / gdb with messages like "No rule to
> > make target 'po/BLD-POTFILES.in". I managed to reproduce something
> > similar manually once, I noticed an empty Makefile; the build logs are
> > of course difficult to read, so I couldn't figure out yet what could
> > cause this.
> >
> > - random build failures in gcc in fixincludes. I think this is a race
> > condition because fixincludes is updated concurrently both from
> > /fixincludes and $builddir/fixincludes. Probably fixable in gcc
> > Makefiles.
> >
> > - I've seen other errors when building gcc like
> > configure.ac:25: error: possibly undefined macro: AM_ENABLE_MULTILIB
> > from libquadmath. I haven't investigated this yet.
> >
> > I've read binutils' README-maintainer-mode, which contains a warning
> > about distclean, but we don't use this: we start our builds from a
> > scratch directory.
> >
> > So... I'm wondering if there are some "official" guidelines about how
> > to regenerate files, and/or use maintainer-mode?  Maybe I missed a
> > "magic" make target (eg 'make autoreconf-all') that should be executed
> > after configure and before 'make all'?
> >
> > I've noticed that sourceware's buildbot has a small script
> > "autoregen.py" which does not use the project's build system, but
> > rather calls aclocal/autoheader/automake/autoconf in an ad-hoc way.
> > Should we replicate that?
> >
> > Thanks,
> >
> > Christophe
>
> There are other potential gotchas as well, such as the manual copying of the 
> generated tm.texi back into the source repo due to relicensing.  Perhaps we 
> haven't encountered that one because patches generally contain that 
> duplicated output.
>
It did happen a few weeks ago, with a patch that was updating the
target hooks IIRC.

> If we want a CI to work reliably, then perhaps we should reconsider our 
> policy of stripping out regenerated code.  We have a number of developer 
> practices, such as replying to an existing patch with an updated version that 
> the CI can't handle easily (especially if the patch is part of a series), so 
> there may be space for a discussion on how to work smarter.
>
Sure, there are many things we can improve in the current workflow to
make it more CI friendly ;-)
But I was only asking how maintainer-mode is supposed to be used, so
that I can replicate the process in CI.
I couldn't find any documentation :-)

Thanks,

Christophe

> My calendar says we have a toolchain office hours meeting today, perhaps this 
> would be worth bringing up.
>
> R.
>


Re: Help needed with maintainer-mode

2024-02-29 Thread Christophe Lyon via Gcc
On Thu, 29 Feb 2024 at 12:00, Mark Wielaard  wrote:
>
> Hi Christophe,
>
> On Thu, Feb 29, 2024 at 11:22:33AM +0100, Christophe Lyon via Gcc wrote:
> > I've noticed that sourceware's buildbot has a small script
> > "autoregen.py" which does not use the project's build system, but
> > rather calls aclocal/autoheader/automake/autoconf in an ad-hoc way.
> > Should we replicate that?
>
> That python script works across gcc/binutils/gdb:
> https://sourceware.org/cgit/builder/tree/builder/containers/autoregen.py
>
> It is installed into a container file that has the exact autoconf and
> automake version needed to regenerate the autotool files:
> https://sourceware.org/cgit/builder/tree/builder/containers/Containerfile-autotools
>
> And it was indeed done this way because that way the files are
> regenerated in a reproducible way. Which wasn't the case when using 
> --enable-maintainer-mode (and autoreconf also doesn't work).

I see. So it is possibly incomplete, in the sense that it may lack
some of the steps that maintainer-mode would perform?
For instance, gas for aarch64 has some *opcodes*.c files that need
regenerating before committing. The regeneration step is enabled in
maintainer-mode, so I guess the autoregen bots on Sourceware would
miss a problem with these files?
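
For reference, regenerating those seems to require something like the
following (my assumption, with maintainer-mode enabled in the build
dir; the target names match the generated files):

  make -C opcodes aarch64-asm-2.c aarch64-dis-2.c aarch64-opc-2.c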

Thanks,

Christophe

>
> It is run on all commits and warns if it detects a change in the
> (checked in) generated files.
> https://builder.sourceware.org/buildbot/#/builders/gcc-autoregen
> https://builder.sourceware.org/buildbot/#/builders/binutils-gdb-autoregen
>
> Cheers,
>
> Mark


Re: Help needed with maintainer-mode

2024-03-01 Thread Christophe Lyon via Gcc
On Thu, 29 Feb 2024 at 20:49, Thiago Jung Bauermann
 wrote:
>
>
> Hello,
>
> Christophe Lyon  writes:
>
> > I hoped improving this would be as simple as adding
> > --enable-maintainer-mode when configuring, after making sure
> > autoconf-2.69 and automake-1.15.1 were in the PATH (using our host's
> > libtool and gettext seems OK).
> >
> > However, doing so triggered several problems, which look like race
> > conditions in the build system (we build at -j160):
> > - random build errors in binutils / gdb with messages like "No rule to
> > make target 'po/BLD-POTFILES.in". I managed to reproduce something
> > similar manually once, I noticed an empty Makefile; the build logs are
> > of course difficult to read, so I couldn't figure out yet what could
> > cause this.
> >
> > - random build failures in gcc in fixincludes. I think this is a race
> > condition because fixincludes is updated concurrently both from
> > /fixincludes and $builddir/fixincludes. Probably fixable in gcc
> > Makefiles.
> >
> > - I've seen other errors when building gcc like
> > configure.ac:25: error: possibly undefined macro: AM_ENABLE_MULTILIB
> > from libquadmath. I haven't investigated this yet.
>
> I don't know about the last one, but regarding the race conditions, one
> workaround might be to define a make target that regenerates all files
> (if one doesn't exist already, I don't know) and make the CI call it
> with -j1 to avoid concurrency, and then do the regular build step with
> -j160.
>

Yes, that's what I meant below with "magic" make target ;-)

Thanks,

Christophe

> --
> Thiago


Re: Help needed with maintainer-mode

2024-03-01 Thread Christophe Lyon via Gcc
On Fri, 1 Mar 2024 at 14:08, Mark Wielaard  wrote:
>
> Hi Christophe,
>
> On Thu, 2024-02-29 at 18:39 +0100, Christophe Lyon wrote:
> > On Thu, 29 Feb 2024 at 12:00, Mark Wielaard  wrote:
> > > That python script works across gcc/binutils/gdb:
> > > https://sourceware.org/cgit/builder/tree/builder/containers/autoregen.py
> > >
> > > It is installed into a container file that has the exact autoconf and
> > > automake version needed to regenerate the autotool files:
> > > https://sourceware.org/cgit/builder/tree/builder/containers/Containerfile-autotools
> > >
> > > And it was indeed done this way because that way the files are
> > > regenerated in a reproducible way. Which wasn't the case when using 
> > > --enable-maintainer-mode (and autoreconf also doesn't work).
> >
> > I see. So it is possibly incomplete, in the sense that it may lack
> > some of the steps that maintainer-mode would perform?
> > For instance, gas for aarch64 has some *opcodes*.c files that need
> > regenerating before committing. The regeneration step is enabled in
> > maintainer-mode, so I guess the autoregen bots on Sourceware would
> > miss a problem with these files?
>
> Yes, it is certainly incomplete. But it is done this way because it is
> my understanding that even the gcc release maintainers do the
> automake/autoconf invocations by hand instead of running with configure
> --enable-maintainer-mode.

Indeed, I've just discovered that earlier today :-)

>
> Note that another part that isn't caught at the moment are the
> regeneration of the opt.urls files. There is a patch for that pending:
Indeed. I hadn't thought of it either. And just noticed it requires
the D frontend, which we don't build in CI.

> https://inbox.sourceware.org/buildbot/20231215005908.gc12...@gnu.wildebeest.org/
>
> But that is waiting for the actual opt.urls to be regenerated correctly
> first:
> https://inbox.sourceware.org/gcc-patches/20240224174258.gd1...@gnu.wildebeest.org/
> Ping David?
>
> It would be nice to have all these "regeneration targets" in one script
> that could be used by both the pre-commit and post-commit checkers.
>
Agreed.

> Cheers,
>
> Mark


Re: Help needed with maintainer-mode

2024-03-01 Thread Christophe Lyon via Gcc
On Fri, 1 Mar 2024 at 14:08, Mark Wielaard  wrote:
>
> Hi Christophe,
>
> On Thu, 2024-02-29 at 18:39 +0100, Christophe Lyon wrote:
> > On Thu, 29 Feb 2024 at 12:00, Mark Wielaard  wrote:
> > > That python script works across gcc/binutils/gdb:
> > > https://sourceware.org/cgit/builder/tree/builder/containers/autoregen.py
> > >
> > > It is installed into a container file that has the exact autoconf and
> > > automake version needed to regenerate the autotool files:
> > > https://sourceware.org/cgit/builder/tree/builder/containers/Containerfile-autotools
> > >
> > > And it was indeed done this way because that way the files are
> > > regenerated in a reproducible way. Which wasn't the case when using 
> > > --enable-maintainer-mode (and autoreconf also doesn't work).
> >
> > I see. So it is possibly incomplete, in the sense that it may lack
> > some of the steps that maintainer-mode would perform?
> > For instance, gas for aarch64 has some *opcodes*.c files that need
> > regenerating before committing. The regeneration step is enabled in
> > maintainer-mode, so I guess the autoregen bots on Sourceware would
> > miss a problem with these files?
>
> Yes, it is certainly incomplete. But it is done this way because it is
> my understanding that even the gcc release maintainers do the
> automake/autoconf invocations by hand instead of running with configure
> --enable-maintainer-mode.

After a discussion on IRC, I read
https://gcc.gnu.org/wiki/Regenerating_GCC_Configuration
which basically says "run autoreconf in every dir where there is a
configure script"
but this is not exactly what autoregen.py is doing. IIRC it is based
on a script from Martin Liska, do you know/remember why it follows a
different process?

Thanks,

Christophe

>
> Note that another part that isn't caught at the moment are the
> regeneration of the opt.urls files. There is a patch for that pending:
> https://inbox.sourceware.org/buildbot/20231215005908.gc12...@gnu.wildebeest.org/
>
> But that is waiting for the actual opt.urls to be regenerated correctly
> first:
> https://inbox.sourceware.org/gcc-patches/20240224174258.gd1...@gnu.wildebeest.org/
> Ping David?
>
> It would be nice to have all these "regeneration targets" in one script
> that could be used by both the pre-commit and post-commit checkers.
>
> Cheers,
>
> Mark


Re: Help needed with maintainer-mode

2024-03-04 Thread Christophe Lyon via Gcc
Hi!

On Mon, 4 Mar 2024 at 10:36, Thomas Schwinge  wrote:
>
> Hi!
>
> On 2024-03-04T00:30:05+, Sam James  wrote:
> > Mark Wielaard  writes:
> >> On Fri, Mar 01, 2024 at 05:32:15PM +0100, Christophe Lyon wrote:
> >>> [...], I read
> >>> https://gcc.gnu.org/wiki/Regenerating_GCC_Configuration
> >>> which basically says "run autoreconf in every dir where there is a
> >>> configure script"
> >>> but this is not exactly what autoregen.py is doing. IIRC it is based
> >>> on a script from Martin Liska, do you know/remember why it follows a
> >>> different process?
> >>
> >> CCing Sam and Arsen who helped refine the autoregen.py script, who
> >> might remember more details. We wanted a script that worked for both
> >> gcc and binutils-gdb. And as far as I know autoreconf simply didn't
> >> work in all directories. We also needed to skip some directories that
> >> did contain a configure script, but that were imported (gotools,
> >> readline, minizip).
> >
> > What we really need to do is, for a start, land tschwinge/aoliva's patches 
> > [0]
> > for AC_CONFIG_SUBDIRS.
>
> Let me allocate some time this week to get that one completed.
>
> > Right now, the current situation is janky and it's nowhere near
> > idiomatic autotools usage. It is not a comfortable experience
> > interacting with it even as someone who is familiar and happy with using
> > autotools otherwise.
> >
> > I didn't yet play with maintainer-mode myself but I also didn't see much
> > point given I knew of more fundamental problems like this.
> >
> > [0] 
> > https://inbox.sourceware.org/gcc-patches/oril72c4yh@lxoliva.fsfla.org/
>

Thanks for the background. I didn't follow that discussion at that time :-)

So... I was confused because I noticed many warnings when doing a simple

  find . -name configure | while read f; do
      echo $f; d=$(dirname $f) && autoreconf -f $d && echo $d
  done

as suggested by https://gcc.gnu.org/wiki/Regenerating_GCC_Configuration

Then I tried with autoregen.py and saw the same; I also just checked
Sourceware's bot logs, which show the same numerous warnings, at least
in GCC (I didn't check binutils yet). Looks like this is "expected"...

I started looking at auto-regenerating these files in our CI a couple
of weeks ago, after we received several "complaints" from contributors
saying that our precommit CI was useless / annoying since it didn't
regenerate files, leading to false alarms.
But now I'm wondering how such contributors regenerate the files
impacted by their patches before committing; they probably just
regenerate things in their subdir of interest, not noticing the whole
picture :-(

As a first step, we can probably use autoregen.py too, and declare
maintainer-mode broken. However, I do notice that besides the rules
about regenerating configure/Makefile.in/..., maintainer-mode is also
used to update some files.
In gcc:
fixincludes: fixincl.x
libffi: doc/version.texi
libgfortran: some stuff :-)
libiberty: functions.texi

in binutils/bfd:
gdb/sim
bfd/po/SRC-POTFILES.in
bfd/po/BLD-POTFILES.in
bfd/bfd-in2.h
bfd/libbfd.h
bfd/libcoff.h
binutils/po/POTFILES.in
gas/po/POTFILES.in
opcodes/i386*.h
gdb/copying.c
gdb/data-directory/*.xml
gold/po/POTFILES.in
gprof/po/POTFILES.in
gprofng/doc/version.texi
ld/po/SRC-POTFILES.in
ld/po/BLD-POTFILES.in
ld: ldgram/ldlex... and all emulation sources
libiberty/functions.texi
opcodes/po/POTFILES.in
opcodes/aarch64-{asm,dis,opc}-2.c
opcodes/ia64 msp430 rl78 rx z8k decoders

How are these files "normally" expected to be updated? Do people just
uncomment the corresponding maintainer rules in the Makefiles and run
the updates by hand?  In particular, we have hit issues several times
with files under opcodes, which we don't currently regenerate. Maybe we
could build binutils in maintainer-mode at -j1, but, well...

README-maintainer-mode in binutils/gdb only mentions a problem with
'make distclean' and maintainer mode.
binutils/README-how-to-make-a-release says to use
--enable-maintainer-mode, and the sample 'make' invocations do not
include any -j flag; is that an indication that only -j1 is supposed
to work?
Similarly, the src-release.sh script does not use -j.

Thanks,

Christophe

>
> Grüße
>  Thomas


Re: Help needed with maintainer-mode

2024-03-04 Thread Christophe Lyon via Gcc
On Mon, 4 Mar 2024 at 12:25, Jonathan Wakely  wrote:
>
> On Mon, 4 Mar 2024 at 10:44, Christophe Lyon via Gcc  wrote:
> >
> > Hi!
> >
> > On Mon, 4 Mar 2024 at 10:36, Thomas Schwinge  wrote:
> > >
> > > Hi!
> > >
> > > On 2024-03-04T00:30:05+, Sam James  wrote:
> > > > Mark Wielaard  writes:
> > > >> On Fri, Mar 01, 2024 at 05:32:15PM +0100, Christophe Lyon wrote:
> > > >>> [...], I read
> > > >>> https://gcc.gnu.org/wiki/Regenerating_GCC_Configuration
> > > >>> which basically says "run autoreconf in every dir where there is a
> > > >>> configure script"
> > > >>> but this is not exactly what autoregen.py is doing. IIRC it is based
> > > >>> on a script from Martin Liska, do you know/remember why it follows a
> > > >>> different process?
> > > >>
> > > >> CCing Sam and Arsen who helped refine the autoregen.py script, who
> > > >> might remember more details. We wanted a script that worked for both
> > > >> gcc and binutils-gdb. And as far as I know autoreconf simply didn't
> > > >> work in all directories. We also needed to skip some directories that
> > > >> did contain a configure script, but that were imported (gotools,
> > > >> readline, minizip).
> > > >
> > > > What we really need to do is, for a start, land tschwinge/aoliva's 
> > > > patches [0]
> > > > for AC_CONFIG_SUBDIRS.
> > >
> > > Let me allocate some time this week to get that one completed.
> > >
> > > > Right now, the current situation is janky and it's nowhere near
> > > > idiomatic autotools usage. It is not a comfortable experience
> > > > interacting with it even as someone who is familiar and happy with using
> > > > autotools otherwise.
> > > >
> > > > I didn't yet play with maintainer-mode myself but I also didn't see much
> > > > point given I knew of more fundamental problems like this.
> > > >
> > > > [0] 
> > > > https://inbox.sourceware.org/gcc-patches/oril72c4yh@lxoliva.fsfla.org/
> > >
> >
> > Thanks for the background. I didn't follow that discussion at that time :-)
> >
> > So... I was confused because I noticed many warnings when doing a simple
> > find . -name configure |while read f; do echo $f;d=$(dirname $f) &&
> > autoreconf -f $d && echo $d; done
> > as suggested by https://gcc.gnu.org/wiki/Regenerating_GCC_Configuration
> >
> > Then I tried with autoregen.py, and saw the same and now just
> > checked Sourceware's bot logs and saw the same numerous warnings at
> > least in GCC (didn't check binutils yet). Looks like this is
> > "expected" 
> >
> > I started looking at auto-regenerating these files in our CI a couple
> > of weeks ago, after we received several "complaints" from contributors
> > saying that our precommit CI was useless / annoying since it didn't
> > regenerate files, leading to false alarms.
> > But now I'm wondering how such contributors regenerate the files
> > impacted by their patches before committing; they probably just
> > regenerate things in their subdir of interest, not noticing the whole
> > picture :-(
> >
> > As a first step, we can probably use autoregen.py too, and declare
> > maintainer-mode broken. However, I do notice that besides the rules
> > about regenerating configure/Makefile.in/..., maintainer-mode is also
> > used to update some files.
> > In gcc:
> > fixincludes: fixincl.x
> > libffi: doc/version.texi
> > libgfortran: some stuff :-)
> > libiberty: functions.texi
>
> My recently proposed patch adds the first of those to gcc_update, the
> other should be done too.
> https://gcc.gnu.org/pipermail/gcc-patches/2024-March/647027.html
>

This script touches files such that they appear more recent than their
dependencies,
so IIUC even if one uses --enable-maintainer-mode, it will have no effect.
For auto* files, this is "fine" as we can run autoreconf or
autoregen.py before starting configure+build, but what about other
files?
For instance, if we have to test a patch which implies changes to
fixincludes/fixincl.x, how should we proceed?
1- git checkout (with possibly "wrong" timestamps)
2- apply patch-to-test
3- contrib/gcc_update -t
4- configure --enable-maintainer-mode
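
i.e., roughly (a sketch; patch-to-test.patch is a placeholder):

  git checkout master             # timestamps possibly "wrong"
  git apply patch-to-test.patch
  contrib/gcc_update -t           # touch generated files
  mkdir build && cd build
  ../configure --enable-maintainer-mode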

I guess --enable-maintainer-mode would be largely (if not completely)
a no-op here, since step 3 makes the generated files look up to date.

Re: Help needed with maintainer-mode

2024-03-04 Thread Christophe Lyon via Gcc
On Mon, 4 Mar 2024 at 16:41, Richard Earnshaw  wrote:
>
>
>
> On 04/03/2024 15:36, Richard Earnshaw (lists) wrote:
> > On 04/03/2024 14:46, Christophe Lyon via Gcc wrote:
> >> On Mon, 4 Mar 2024 at 12:25, Jonathan Wakely  wrote:
> >>>
> >>> On Mon, 4 Mar 2024 at 10:44, Christophe Lyon via Gcc  
> >>> wrote:
> >>>>
> >>>> Hi!
> >>>>
> >>>> On Mon, 4 Mar 2024 at 10:36, Thomas Schwinge  
> >>>> wrote:
> >>>>>
> >>>>> Hi!
> >>>>>
> >>>>> On 2024-03-04T00:30:05+, Sam James  wrote:
> >>>>>> Mark Wielaard  writes:
> >>>>>>> On Fri, Mar 01, 2024 at 05:32:15PM +0100, Christophe Lyon wrote:
> >>>>>>>> [...], I read
> >>>>>>>> https://gcc.gnu.org/wiki/Regenerating_GCC_Configuration 
> >>>>>>>> <https://gcc.gnu.org/wiki/Regenerating_GCC_Configuration>
> >>>>>>>> which basically says "run autoreconf in every dir where there is a
> >>>>>>>> configure script"
> >>>>>>>> but this is not exactly what autoregen.py is doing. IIRC it is based
> >>>>>>>> on a script from Martin Liska, do you know/remember why it follows a
> >>>>>>>> different process?
> >>>>>>>
> >>>>>>> CCing Sam and Arsen who helped refine the autoregen.py script, who
> >>>>>>> might remember more details. We wanted a script that worked for both
> >>>>>>> gcc and binutils-gdb. And as far as I know autoreconf simply didn't
> >>>>>>> work in all directories. We also needed to skip some directories that
> >>>>>>> did contain a configure script, but that were imported (gotools,
> >>>>>>> readline, minizip).
> >>>>>>
> >>>>>> What we really need to do is, for a start, land tschwinge/aoliva's 
> >>>>>> patches [0]
> >>>>>> for AC_CONFIG_SUBDIRS.
> >>>>>
> >>>>> Let me allocate some time this week to get that one completed.
> >>>>>
> >>>>>> Right now, the current situation is janky and it's nowhere near
> >>>>>> idiomatic autotools usage. It is not a comfortable experience
> >>>>>> interacting with it even as someone who is familiar and happy with 
> >>>>>> using
> >>>>>> autotools otherwise.
> >>>>>>
> >>>>>> I didn't yet play with maintainer-mode myself but I also didn't see 
> >>>>>> much
> >>>>>> point given I knew of more fundamental problems like this.
> >>>>>>
> >>>>>> [0] 
> >>>>>> https://inbox.sourceware.org/gcc-patches/oril72c4yh@lxoliva.fsfla.org/
> >>>>>>  
> >>>>>> <https://inbox.sourceware.org/gcc-patches/oril72c4yh@lxoliva.fsfla.org/>
> >>>>>
> >>>>
> >>>> Thanks for the background. I didn't follow that discussion at that time 
> >>>> :-)
> >>>>
> >>>> So... I was confused because I noticed many warnings when doing a simple
> >>>> find . -name configure |while read f; do echo $f;d=$(dirname $f) &&
> >>>> autoreconf -f $d && echo $d; done
> >>>> as suggested by https://gcc.gnu.org/wiki/Regenerating_GCC_Configuration 
> >>>> <https://gcc.gnu.org/wiki/Regenerating_GCC_Configuration>
> >>>>
> >>>> Then I tried with autoregen.py, and saw the same and now just
> >>>> checked Sourceware's bot logs and saw the same numerous warnings at
> >>>> least in GCC (didn't check binutils yet). Looks like this is
> >>>> "expected" 
> >>>>
> >>>> I started looking at auto-regenerating these files in our CI a couple
> >>>> of weeks ago, after we received several "complaints" from contributors
> >>>> saying that our precommit CI was useless / annoying since it didn't
> >>>> regenerate files, leading to false alarms.
> >>>> But now I'm wondering how such contributors regenerate the files
> >>>> impacted by their patches before committing; they probably just
> >>>> regenerate things in their subdir of interest, not noticing the whole
> >>>> picture :-(

[RFC] add regenerate Makefile target

2024-03-13 Thread Christophe Lyon via Gcc
Hi!

After recent discussions on IRC and on the lists about maintainer-mode
and various problems with auto-generated source files, I've written
this small prototype.

Based on those discussions, I assumed that people generally want to
update autotools files using a script similar to autoregen.py, which
takes care of running aclocal, autoheader, automake and autoconf as
appropriate.

What is currently missing is a "simple" way of regenerating other
files, which happens normally with --enable-maintainer-mode (which is
reportedly broken).  This patch as a "regenerate" Makefile target
which can be called to update those files, provided
--enable-maintainer-mode is used.

I tried this approach with the following workflow for binutils/gdb:
- run autoregen.py in srcdir
- cd builddir
- configure --enable-maintainer-mode 
- make all-bfd all-libiberty regenerate -j1
- for gdb: make all -C gdb/data-directory -j1
- make all -jXXX
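
In other words, roughly (a sketch; $SRCDIR and $BUILDDIR are
placeholders):

  (cd $SRCDIR && python3 autoregen.py)
  cd $BUILDDIR
  $SRCDIR/configure --enable-maintainer-mode
  make all-bfd all-libiberty regenerate -j1
  make all -C gdb/data-directory -j1    # gdb only
  make all -j$(nproc)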

Building 'all' in bfd and libiberty first is needed because some
XXX-gen host programs in opcodes depend on them.

The advantage (for instance for CI) is that we can regenerate files at
-j1, thus avoiding the existing race conditions, and build the rest
with -j XXX.

Among drawbacks:
- most sub-components use Makefile.am, but gdb does not: this may make
  maintenance more complex (different rules for different projects)
- maintaining such ad-hoc "regenerate" rules would require special
  attention from maintainers/reviewers
- the dependency on all-bfd and all-libiberty is probably not fully
  intuitive, but should not be a problem if the "regenerate" rules
  are used after a full build, for instance

Of course Makefile.def/Makefile.tpl would need further cleanup as I
didn't try to take gcc into account in this patch.

Thoughts?

Thanks,

Christophe


---
 Makefile.def |   37 +-
 Makefile.in  | 1902 ++
 Makefile.tpl |7 +
 bfd/Makefile.am  |1 +
 bfd/Makefile.in  |1 +
 binutils/Makefile.am |1 +
 binutils/Makefile.in |1 +
 gas/Makefile.am  |1 +
 gas/Makefile.in  |1 +
 gdb/Makefile.in  |1 +
 gold/Makefile.am |2 +-
 gold/Makefile.in |2 +-
 gprof/Makefile.am|1 +
 gprof/Makefile.in|1 +
 ld/Makefile.am   |1 +
 ld/Makefile.in   |1 +
 opcodes/Makefile.am  |2 +
 opcodes/Makefile.in  |2 +
 18 files changed, 1952 insertions(+), 13 deletions(-)

diff --git a/Makefile.def b/Makefile.def
index 3e00a729a0c..42e71a9ffa2 100644
--- a/Makefile.def
+++ b/Makefile.def
@@ -39,7 +39,8 @@ host_modules= { module= binutils; bootstrap=true; };
 host_modules= { module= bison; no_check_cross= true; };
 host_modules= { module= cgen; };
 host_modules= { module= dejagnu; };
-host_modules= { module= etc; };
+host_modules= { module= etc;
+missing= regenerate; };
 host_modules= { module= fastjar; no_check_cross= true; };
 host_modules= { module= fixincludes; bootstrap=true;
missing= TAGS;
@@ -73,7 +74,8 @@ host_modules= { module= isl; lib_path=.libs; bootstrap=true;
no_install= true; };
 host_modules= { module= gold; bootstrap=true; };
 host_modules= { module= gprof; };
-host_modules= { module= gprofng; };
+host_modules= { module= gprofng;
+missing= regenerate; };
 host_modules= { module= gettext; bootstrap=true; no_install=true;
 module_srcdir= "gettext/gettext-runtime";
// We always build gettext with pic, because some packages 
(e.g. gdbserver)
@@ -95,7 +97,8 @@ host_modules= { module= tcl;
 missing=mostlyclean; };
 host_modules= { module= itcl; };
 host_modules= { module= ld; bootstrap=true; };
-host_modules= { module= libbacktrace; bootstrap=true; };
+host_modules= { module= libbacktrace; bootstrap=true;
+missing= regenerate; };
 host_modules= { module= libcpp; bootstrap=true; };
 // As with libiconv, don't install any of libcody
 host_modules= { module= libcody; bootstrap=true;
@@ -110,9 +113,11 @@ host_modules= { module= libcody; bootstrap=true;
missing= install-dvi;
missing=TAGS; };
 host_modules= { module= libdecnumber; bootstrap=true;
-   missing=TAGS; };
+   missing=TAGS;
+missing= regenerate; };
 host_modules= { module= libgui; };
 host_modules= { module= libiberty; bootstrap=true;
+missing= regenerate;

extra_configure_flags='@extra_host_libiberty_configure_flags@';};
 // Linker plugins may need their own build of libiberty; see
 // gcc/doc/install.texi.  We take care that this build of libiberty doesn't get
@@ -134,16 +139,22 @@ host_modules= { module= libiconv;
missing= install-html;
missing= install-info; };
 host_modules= { module= m4; };
-host_modules= { module= readline; };
+host_modules= { module= readline;
+missing= regenerate; };
 host_modules= { module= sid; };
-host_modules= { module= sim; }

Re: [RFC] add regenerate Makefile target

2024-03-15 Thread Christophe Lyon via Gcc
On Thu, 14 Mar 2024 at 19:10, Simon Marchi  wrote:
>
>
>
> On 2024-03-13 04:02, Christophe Lyon via Gdb wrote:
> > Hi!
> >
> > After recent discussions on IRC and on the lists about maintainer-mode
> > and various problems with auto-generated source files, I've written
> > this small prototype.
> >
> > Based on those discussions, I assumed that people generally want to
> > update autotools files using a script similar to autoregen.py, which
> > takes care of running aclocal, autoheader, automake and autoconf as
> > appropriate.
> >
> > What is currently missing is a "simple" way of regenerating other
> > files, which happens normally with --enable-maintainer-mode (which is
> > reportedly broken).  This patch adds a "regenerate" Makefile target
> > which can be called to update those files, provided
> > --enable-maintainer-mode is used.
> >
> > I tried this approach with the following workflow for binutils/gdb:
> > - run autoregen.py in srcdir
> > - cd builddir
> > - configure --enable-maintainer-mode
> > - make all-bfd all-libiberty regenerate -j1
> > - for gdb: make all -C gdb/data-directory -j1
> > - make all -jXXX
> >
> > Making 'all' in bfd and libiberty is needed by some XXX-gen host
> > programs in opcodes.
> >
> > The advantage (for instance for CI) is that we can regenerate files at
> > -j1, thus avoiding the existing race conditions, and build the rest
> > with -j XXX.
> >
> > Among drawbacks:
> > - most sub-components use Makefile.am, but gdb does not: this may make
> >   maintenance more complex (different rules for different projects)
> > - maintaining such ad-hoc "regenerate" rules would require special
> >   attention from maintainers/reviewers
> > - the dependency on all-bfd and all-libiberty is probably not fully
> >   intuitive, but should not be a problem if the "regenerate" rules
> >   are used after a full build, for instance
> >
> > Of course Makefile.def/Makefile.tpl would need further cleanup as I
> > didn't try to take gcc into account in this patch.
> >
> > Thoughts?
>
> My first thought is: why is it a Makefile target, instead of some script
> on the side (like autoregen.sh)?  It would be nice / useful to be
> able to run it without configuring / building anything.  For instance, the
> autoregen buildbot job could run it without configuring anything.
> Ideally, the buildbot wouldn't maintain its own autoregen.py file on the
> side, it would just use whatever is in the repo.

Firstly because of what you mention later: some regeneration steps
require building host tools first, like the XXX-gen in opcodes.

Since the existing Makefiles already contain the rules to autoregen
all these files, it seemed natural to me to reuse them, to avoid
reinventing the wheel with the risk of introducing new bugs.

This involves changes in places where I've never looked at before, so
I'd rather reuse as much existing support as possible.

For instance, there are the generators in opcodes/, but also things in
sim/, bfd/, updates to the docs and potfiles. In gcc, there's also
something "unusual" in fixincludes/ and libgfortran/

In fact, I considered also including 'configure', 'Makefile.in',
etc. in the 'regenerate' target: it does not seem natural to me to
invoke a script on the side, where you have to replicate the behaviour
of the existing Makefiles, possibly getting out of sync when someone
forgets to update either the Makefile or autoregen.py. What is currently
missing is a way to easily regenerate files without having to run a
full 'make all' (which currently takes care of calling autoconf &
friends to update configure/Makefile.in).

But yeah, having to configure before being able to regenerate files is
a bit awkward too :-)


>
> Looking at the rule to re-generate copying.c in gdb for instance:
>
> # Make copying.c from COPYING
> $(srcdir)/copying.c: @MAINTAINER_MODE_TRUE@ $(srcdir)/../COPYING3 $(srcdir)/copying.awk
>         awk -f $(srcdir)/copying.awk \
>             < $(srcdir)/../COPYING3 > $(srcdir)/copying.tmp
>         mv $(srcdir)/copying.tmp $(srcdir)/copying.c
>
> There is nothing in this code that requires having configured the source
> tree.  This code could for instance be moved to some
> generate-copying-c.sh script.  generate-copying-c.sh could be called by
> an hypothetical autoregen.sh script, as well as the copying.c Makefile
> target, if we want to continue supporting the maintainer mode.
Wouldn't it be more obscure than now? Currently such build rules are
all in the relevant Makefiles.
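
For comparison, Simon's suggestion lifted into a standalone script (a
hypothetical generate-copying-c.sh, mirroring the Makefile recipe)
would amount to:

  #!/bin/sh
  # regenerate gdb/copying.c from COPYING3, exactly as the make rule does
  srcdir=$(dirname "$0")
  awk -f "$srcdir/copying.awk" \
      < "$srcdir/../COPYING3" > "$srcdir/copying.tmp"
  mv "$srcdir/copying.tmp" "$srcdir/copying.c"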
