SIGILL on Sparc
Hello, I have a problem with a (big) C++ program compiled with gcc 4.4.0 on a 64-bit Sparc.

Target: sparc-sun-solaris2.10
Configured with: /opt/sources/gnu/gcc-4.4.0/configure --prefix=/opt/gnu/gcc-4.4.0 --with-local-prefix=/opt/gnu/gcc-4.4.0 --enable-threads=posix --with-cpu=ultrasparc3 --enable-tls=yes --with-tune=ultrasparc3 --enable-languages=c,c++ --with-as=/usr/ccs/bin/as --with-ld=/usr/ccs/bin/ld --with-gmp=/opt/gnu/gmp-4.2.1 --with-mpfr=/opt/gnu/mpfr-2.4.1 --disable-nls
Thread model: posix
gcc version 4.4.0 (GCC)

The compiler generated the following code (debug version):

  0x751407b4 : nop
  0x751407b8 : ldx [ %fp + 0x87f ], %o0
  0x751407bc : clr %o1
  0x751407c0 : call 0x7543bc00 <_zn6parser8internal9functions11setcategoryens0_11eh_catego...@plt>
  0x751407c4 : nop
  0x751407c8 : ta 5          <=== Here the SIGILL happens

The function SetCategory(v) returns void and simply assigns the value of
v to a class member, so there are no trap conditions. TA, on the other
hand, stands for "trap always", so the condition code is unimportant
anyway. Why has the trap instruction been generated?

Best regards
Piotr Wyderski
Re: i370 port - constructing compile script
> > Would you be able to give me the two suggested configure commands
> > so that I can find out the answer to the above, one way or another?
>
> For step 2 (building the cross-compiler), you'd need something along
> the lines of
>
>   .../configure --target=i370-mvs --prefix=... --with-sysroot=... \
>     --enable-languages=c
>
> where prefix points to the directory where the cross-compiler should
> be installed, and sysroot points to the directory where the MVS
> libraries and headers are installed.

I tried to act on this today. But first I tried to get the normal make
process working, i.e. without the --prefix and --with-sysroot above,
just using defaults. I had some surprising success, but also one
failure.

The failure (on 3.4.6, but not on 3.2.3) is that after the successful
build, when I do an xgcc -S, it produces the assembler file, and then
hangs. I traced this to gcc.c, which was in a loop doing this:

  pid = pwait (commands[i].pid, &status, 0);

getting a return of 0 all the time, while the process (cc1) that it is
waiting on is showing up as being . Not sure what that is about.

I have gcc 3.2.3 working without that problem, so I'll spend some time
comparing how the two pexecutes work differently. Of course I don't
have system-related problems like this on MVS, because I have a single
executable and a simple function call. :-)

In the meantime, I have a question. You said above that I have to point
sysroot to the MVS libraries and headers. What libraries? And why, at
the point of building a cross-compiler, do I need any of those things?
The normal way I build a cross-compiler, I just do the above configure
without prefix or with-sysroot, and it builds an xgcc executable as
expected, using the Linux headers, as expected.

I would certainly like an option to force it to use my C90-only Linux
headers and my C90-only libraries, but that should be strictly
optional, and if I did do that, I would expect to see configure saying
things like "no, you don't have fork, or getrusage, or sys/types" etc.
etc.

I think I am still failing to understand some major aspect of the build
process.

BFN. Paul.
Re: COMPONENT_REF problem ?
On Tue, Oct 6, 2009 at 3:00 AM, Pranav Bhandarkar wrote:
> Richard,
>
>> If you are not working on trunk this can happen because the way
>> MEM_EXPRs are "canonicalized".
>
> Thanks. Yes, I am not on trunk and may not be able to move right away.
> I would appreciate some pointers about where I should look, if I want
> to fix this?

Look at

2009-07-14  Richard Guenther
            Andrey Belevantsev

        * tree-ssa-alias.h (refs_may_alias_p_1): Declare.
        (pt_solution_set): Likewise.
        * tree-ssa-alias.c (refs_may_alias_p_1): Export.
        * tree-ssa-structalias.c (pt_solution_set): New function.
        * final.c (rest_of_clean_state): Free SSA data structures.
        ...
        * emit-rtl.c (component_ref_for_mem_expr): Remove.
        (mem_expr_equal_p): Use operand_equal_p.
        (set_mem_attributes_minus_bitpos): Do not use
        component_ref_for_mem_expr.
        ...

this change.

Richard.

> Thanks,
> Pranav
Re: SIGILL on Sparc
> The function SetCategory(v) returns void and simply assigns the value
> of v to a class member, so there are no trap conditions. TA, on the
> other hand, stands for "trap always", so the condition code is
> unimportant anyway. Why has the trap instruction been generated?

Usually this is because you have code with undefined behavior that the
compiler cannot make sense of. One typical case is casting a function
that is inlined:

  static inline __attribute__ ((always_inline)) int x(int x) { }
  ...
  ((float (*) (float)) x) (1.0);

The compiler cannot raise a compile-time error, so the best it can do
is raise a warning (you probably have one, otherwise it is a bug in
GCC) and compile the call to a run-time trap. It can do this because
the code is undefined in the first place.

Paolo
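To make Paolo's example concrete, a self-contained version of the
pattern might look like the sketch below; the function name and values
are invented for illustration, and what matters is the always_inline
function called through an incompatible function-pointer type. On
targets where GCC turns the undefined call into a trap, as Paolo
describes, the generated code ends in a trap instruction much like the
ta 5 Piotr reports:

  /* Hypothetical reproduction: calling an always_inline function
     through an incompatible function-pointer type is undefined
     behavior, so GCC may warn and compile the call site into a
     run-time trap.  */
  static inline __attribute__ ((always_inline)) int square (int v)
  {
    return v * v;
  }

  int main (void)
  {
    float r = ((float (*) (float)) square) (1.0f);  /* undefined call */
    return (int) r;
  }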
Re: Is this code legal?
On 10/05/2009 09:29 PM, Sergey Sadovnikov wrote:
> Can anybody explain why line marked with '{*1}' produce this error
> message:

I think it's because there is no constructor for array that takes an
initializer_list. I get this message if I change your {*2} line to:

  std::array < wchar_t, sizeof...(Chars) > msg({Chars...}); // {*2}

f.cc:12: error: no matching function for call to ‘std::array::array()’
tr1_impl/array:50: note: candidates are: std::array::array(const std::array&)
tr1_impl/array:50: note: std::array::array()

In general, this kind of question is best asked on a C++ forum.

Paolo
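As an aside, std::array is an aggregate, so the initialization form
that does not depend on any constructor taking an initializer_list is
plain brace initialization. A small sketch, with the function name
invented for illustration:

  #include <array>

  template <wchar_t... Chars>
  std::array<wchar_t, sizeof...(Chars)> make_msg ()
  {
    // Aggregate initialization; no initializer_list constructor needed.
    std::array<wchar_t, sizeof...(Chars)> msg = { {Chars...} };
    return msg;
  }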
Re: SIGILL on Sparc
Paolo Bonzini wrote:
> Usually this is because you have code with undefined behavior, that the
> compiler cannot make sense of.

Yes, you were right, that was indeed the case. Thank you, Paolo.

Best regards
Piotr Wyderski
new libstdc++-v3 decimal failures
Janis,
   We are seeing failures of the new decimal testcases on x86_64-apple-darwin10
which you committed into the libstdc++-v3 testsuite...

FAIL: decimal/binary-arith.cc (test for excess errors)
WARNING: decimal/binary-arith.cc compilation failed to produce executable
FAIL: decimal/cast_neg.cc (test for excess errors)
FAIL: decimal/comparison.cc (test for excess errors)
WARNING: decimal/comparison.cc compilation failed to produce executable
FAIL: decimal/compound-assignment-memfunc.cc (test for excess errors)
WARNING: decimal/compound-assignment-memfunc.cc compilation failed to produce executable
FAIL: decimal/compound-assignment.cc (test for excess errors)
WARNING: decimal/compound-assignment.cc compilation failed to produce executable
FAIL: decimal/conversion-from-float.cc (test for excess errors)
WARNING: decimal/conversion-from-float.cc compilation failed to produce executable
FAIL: decimal/conversion-from-integral.cc (test for excess errors)
WARNING: decimal/conversion-from-integral.cc compilation failed to produce executable
FAIL: decimal/conversion-to-generic-float.cc (test for excess errors)
WARNING: decimal/conversion-to-generic-float.cc compilation failed to produce executable
FAIL: decimal/conversion-to-integral.cc (test for excess errors)
WARNING: decimal/conversion-to-integral.cc compilation failed to produce executable
FAIL: decimal/ctor.cc (test for excess errors)
WARNING: decimal/ctor.cc compilation failed to produce executable
FAIL: decimal/incdec-memfunc.cc (test for excess errors)
WARNING: decimal/incdec-memfunc.cc compilation failed to produce executable
FAIL: decimal/incdec.cc (test for excess errors)
WARNING: decimal/incdec.cc compilation failed to produce executable
FAIL: decimal/make-decimal.cc (test for excess errors)
WARNING: decimal/make-decimal.cc compilation failed to produce executable
FAIL: decimal/unary-arith.cc (test for excess errors)
WARNING: decimal/unary-arith.cc compilation failed to produce executable

Are these tests entirely glibc-centric and shouldn't they be disabled for darwin?

Jack
Re: i370 port - constructing compile script
Paul Edwards:
> The failure (on 3.4.6, but not on 3.2.3) is that after the successful
> build, when I do an xgcc -S, it produces the assembler file, and then
> hangs. I traced this to gcc.c which was in a loop doing this:
>
> pid = pwait (commands[i].pid, &status, 0);
>
> getting a return of 0 all the time, while the process (cc1) that it is
> waiting on is showing up as being .
>
> Not sure what that is about. I have gcc 3.2.3 working without that
> problem, so I'll spend some time comparing how the two pexecutes
> work differently.

Huh. I've never seen this before. Is this with your patches to generate
a "single executable" or without? For the cross-compiler, you shouldn't
need any of the MVS host-specific patches ...

> In the meantime, I have a question. You said above that I have to
> point sysroot to the MVS libraries and headers. What libraries?
> And why, at the point of building a cross-compiler, do I need any
> of those things? The normal way I build a cross-compiler I just
> do the above configure without prefix or with-sysroot, and it
> builds an xgcc executable as expected, using the Linux headers,
> as expected.
>
> I would certainly like an option to force it to use my C90-only
> Linux headers and my C90-only libraries, but that should be
> strictly optional, and if I did do that, I would expect to see
> configure saying things like "no you don't have fork, or getrusage,
> or sys/types" etc etc.
>
> I think I am still failing to understand some major aspect of the
> build process.

Maybe the confusion is about what "sysroot" for a cross-compiler means.
The libraries and headers in the sysroot are *not* used to build the
compiler itself. You need to specify the sysroot location at build time
of the compiler only so that this location can be compiled into the
gcc/cc1 binaries. Once you later *use* the resulting cross-compiler,
this cross-compiler will refer to the sysroot location for standard
headers and libraries.

That is to say, if you build a cross-compiler with

  --prefix=/home/gccmvs/cross --with-sysroot=/home/gccmvs/sysroot --target=i370-mvs

the result of the build process ("make" and then "make install") will
be a cross-compiler in /home/gccmvs/cross/bin/i370-mvs-gcc (and
additional files in /home/gccmvs/cross/lib/gcc/...). Note that the
build process of the compiler itself will refer to the host's default
headers in /usr/include and libraries in /usr/lib.

However, once you *run* this i370-mvs-gcc, and it processes a source
file using #include <stdio.h>, the compiler will search the directory
/home/gccmvs/sysroot/include for the stdio.h header file, and it will
invoke the cross-linker passing /home/gccmvs/sysroot/lib as the
location to search for standard libraries like libc. (Note that the
names of such standard libraries, if any, are defined by the MVS target
definitions, in particular the setting of target macros like LIB_SPEC
in your target header files in config/i370/*.h.)

It is important to get this cross-compiler working correctly,
i.e. referring to the proper headers and libraries, because in the next
step, when you configure and build the native compiler, you'll be using
the cross-compiler, and what headers it uses will determine which host
features are detected by configure ...

Bye,
Ulrich

--
Dr. Ulrich Weigand
GNU Toolchain for Linux on System z and Cell BE
ulrich.weig...@de.ibm.com
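To make the LIB_SPEC remark concrete, a target header usually carries a
definition along the following lines. This is only an illustrative
sketch: the header path and the "-lc" library name are placeholders,
not the actual i370 definitions.

  /* Hypothetical fragment of a target header such as config/i370/mvs.h.
     LIB_SPEC is the spec string the gcc driver expands to name the
     standard libraries passed to the linker; "-lc" is a placeholder.  */
  #undef  LIB_SPEC
  #define LIB_SPEC "-lc"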
Re: Turning off unrolling to certain loops
Yes, I'd be happy to look into how you did it or where you were up to.
I don't know what I'll be able to do but it might lead me in the right
direction and allow me to finish what you started.

Thanks,
Jc

On Tue, Oct 6, 2009 at 2:53 AM, Zdenek Dvorak wrote:
> Hi,
>
>> I was wondering if it was possible to turn off the unrolling to
>> certain loops. Basically, I'd like the compiler not to consider
>> certain loops for unrolling but fail to see how exactly I can achieve
>> that.
>>
>> I've traced the unrolling code to multiple places in the code (I'm
>> working with the 4.3.2 version) and, for the moment, I'm trying to
>> figure out if I can add something in the loop such as a note that I
>> can later find in the FOR_EACH_LOOP loops in order to turn the
>> unrolling for that particular loop off.
>>
>> Have you got any ideas of what I could use like "note" or even a
>> better idea all together?
>
> the normal way of doing this is using a #pragma. For instance,
> icc implements #pragma nounroll for this purpose. I have some prototype
> implementation for gcc, I can send it to you if you were interested in
> finishing it,
>
> Zdenek
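To make the suggestion concrete, the icc-style source annotation Zdenek
refers to looks like the sketch below. The pragma spelling is icc's and
the function is made up; a GCC implementation of the feature would
presumably accept something similar:

  void scale (float *a, int n)
  {
  #pragma nounroll     /* icc spelling; a GCC prototype would define its own */
    for (int i = 0; i < n; ++i)
      a[i] *= 2.0f;
  }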
Re: i370 port - constructing compile script
>> The failure (on 3.4.6, but not on 3.2.3) is that after the successful
>> build, when I do an xgcc -S, it produces the assembler file, and then
>> hangs. I traced this to gcc.c which was in a loop doing this:
>>
>> pid = pwait (commands[i].pid, &status, 0);
>>
>> getting a return of 0 all the time, while the process (cc1) that it is
>> waiting on is showing up as being .
>>
>> Not sure what that is about. I have gcc 3.2.3 working without that
>> problem, so I'll spend some time comparing how the two pexecutes
>> work differently.
>
> Huh. I've never seen this before. Is this with your patches to
> generate a "single executable" or without?

My patches are applied, but shouldn't be activated, because I haven't
defined SINGLE_EXECUTABLE.

I could try taking it back to raw 3.4.6 though and see if that has the
same problem.

> For the cross-compiler, you shouldn't need any of the MVS
> host-specific patches ...

My target is new basically. It's closest to mvsdignus, which was used
as a starting point, but it has evolved. :-)

>> In the meantime, I have a question. You said above that I have to
>> point sysroot to the MVS libraries and headers. What libraries?
>> And why, at the point of building a cross-compiler, do I need any
>> of those things? The normal way I build a cross-compiler I just
>> do the above configure without prefix or with-sysroot, and it
>> builds an xgcc executable as expected, using the Linux headers,
>> as expected.
>>
>> I would certainly like an option to force it to use my C90-only
>> Linux headers and my C90-only libraries, but that should be
>> strictly optional, and if I did do that, I would expect to see
>> configure saying things like "no you don't have fork, or getrusage,
>> or sys/types" etc etc.
>>
>> I think I am still failing to understand some major aspect of the
>> build process.
>
> Maybe the confusion is about what "sysroot" for a cross-compiler means.
> The libraries and headers in the sysroot are *not* used to build the
> compiler itself. You need to specify the sysroot location at build time
> of the compiler only so that this location can be compiled into the
> gcc/cc1 binaries. Once you later *use* the resulting cross-compiler,
> this cross-compiler will refer to the sysroot location for standard
> headers and libraries.
>
> That is to say, if you build a cross-compiler with
>
>   --prefix=/home/gccmvs/cross --with-sysroot=/home/gccmvs/sysroot --target=i370-mvs
>
> the result of the build process ("make" and then "make install") will
> be a cross-compiler in /home/gccmvs/cross/bin/i370-mvs-gcc (and
> additional files in /home/gccmvs/cross/lib/gcc/...).

Ok, that's a new concept to me. Thanks.

> Note that the build process of the compiler itself will refer to the
> host's default headers in /usr/include and libraries in /usr/lib.
>
> However, once you *run* this i370-mvs-gcc, and it processes a source
> file using #include <stdio.h>, the compiler will search the directory
> /home/gccmvs/sysroot/include for the stdio.h header file, and it will
> invoke the cross-linker passing /home/gccmvs/sysroot/lib as the
> location to search for standard libraries like libc. (Note that the
> names of such standard libraries, if any, are defined by the MVS
> target definitions, in particular the setting of target macros like
> LIB_SPEC in your target header files in config/i370/*.h.)

I don't seem to have that variable defined. Not surprising since
there's no include or lib directories like that on MVS.

> It is important to get this cross-compiler working correctly,
> i.e. referring to the proper headers and libraries, because in the
> next step, when you configure and build the native compiler, you'll
> be using the cross-compiler, and what headers it uses will determine
> which host features are detected by configure ...

I see. Anyway, I'll start with raw 3.4.6 and move forward from there.
The good thing about doing it that way is that I know the end result is
actually achievable. Gone are the dark days of the 3.2 port where I was
doing a lot of work, and had no idea whether at the end of all that
work it would all be wasted, because the compiler wouldn't run natively
due to some ASCII-specific bug beyond my ability to fix or work around.
:-)

BFN. Paul.
Re: i370 port - constructing compile script
Paul Edwards wrote:
> > Huh. I've never seen this before. Is this with your patches to
> > generate a "single executable" or without?
>
> My patches are applied, but shouldn't be activated, because
> I haven't defined SINGLE_EXECUTABLE.
>
> I could try taking it back to raw 3.4.6 though and see if that has
> the same problem.

Might be interesting ...

> > For the cross-compiler,
> > you shouldn't need any of the MVS host-specific patches ...
>
> My target is new basically. It's closest to mvsdignus, which
> was used as a starting point, but it has evolved. :-)

Host vs. target confusion again? :-) You have some patches needed to
support MVS as *target* of compilation. You have some other patches
needed to support MVS as *host* of the compiler itself. I'm saying that
for the cross-compiler, you need the first set of patches, but you do
not need the second set of patches. For the native compiler, you'll
then need both sets ...

> > However, once you *run* this i370-mvs-gcc, and it processes a source
> > file using #include <stdio.h>, the compiler will search the directory
> > /home/gccmvs/sysroot/include for the stdio.h header file, and it will
> > invoke the cross-linker passing /home/gccmvs/sysroot/lib as the
> > location to search for standard libraries like libc. (Note that the
> > names of such standard libraries, if any, are defined by the MVS
> > target definitions, in particular the setting of target macros like
> > LIB_SPEC in your target header files in config/i370/*.h.)
>
> I don't seem to have that variable defined. Not surprising since
> there's no include or lib directories like that on MVS.

I'm not sure how this works on MVS, but the C standard says that if
your application uses #include <stdio.h>, this must work and find the
appropriate system header ... When running the compiler natively on
MVS, there may not be a notion of "directories" in the Unix sense, but
I guess those headers must still come from *somewhere*, right?

For the *cross-compiler*, because it is hosted on a Unix system, it
must provide those same headers in a directory somewhere. This is the
directory you specify via --with-sysroot.

Bye,
Ulrich

--
Dr. Ulrich Weigand
GNU Toolchain for Linux on System z and Cell BE
ulrich.weig...@de.ibm.com
Re: new libstdc++-v3 decimal failures
On Tue, 2009-10-06 at 09:04 -0400, Jack Howarth wrote:
> Janis,
>    We are seeing failures of the new decimal testcases on x86_64-apple-darwin10
> which you committed into the libstdc++-v3 testsuite...
>
> FAIL: decimal/binary-arith.cc (test for excess errors)
> WARNING: decimal/binary-arith.cc compilation failed to produce executable
>
> Are these tests entirely glibc-centric and shouldn't they be disabled for
> darwin?

Each test contains

  // { dg-require-effective-target-dfp }

which checks that the compiler supports modes SD, DD, and TD, which in
turn are supported if ENABLE_DECIMAL_FLOAT is defined within the
compiler. That should not be defined for darwin; I'll take a look.

Janis
Re: new libstdc++-v3 decimal failures
On Tue, 2009-10-06 at 09:10 -0700, Janis Johnson wrote:
> On Tue, 2009-10-06 at 09:04 -0400, Jack Howarth wrote:
> > Janis,
> >    We are seeing failures of the new decimal testcases on x86_64-apple-darwin10
> > which you committed into the libstdc++-v3 testsuite...
> >
> > FAIL: decimal/binary-arith.cc (test for excess errors)
> > WARNING: decimal/binary-arith.cc compilation failed to produce executable
> >
> > Are these tests entirely glibc-centric and shouldn't they be disabled for
> > darwin?
>
> Each test contains
>
>   // { dg-require-effective-target-dfp }
>
> which checks that the compiler supports modes SD, DD, and TD, which
> in turn are supported if ENABLE_DECIMAL_FLOAT is defined within the
> compiler. That should not be defined for darwin; I'll take a look.

I built a cross cc1plus for x86_64-apple-darwin10 and got the behavior
I expected. From $objdir/gcc:

elm3b149% fgrep -l ENABLE_DECIMAL_FLOAT *.h
auto-host.h:#define ENABLE_DECIMAL_FLOAT 0

elm3b149% echo "float x __attribute__((mode(DD)));" > x.c
elm3b149% ./cc1plus -quiet x.c
x.c:1:33: error: unable to emulate ‘DD’

Please try that little test with your cc1plus to see if the problem is
with your compiler not rejecting DD mode, or with the test framework
not handling it correctly.

Janis
Re: Request for code review - (ZEE patch : Redundant Zero extension elimination)
Hi Richard,

   I was wondering if you got a chance to see if this new patch is
alright?

Thanks,
-Sriraman.

On Thu, Oct 1, 2009 at 2:37 PM, Sriraman Tallam wrote:
> Hi,
>
> I moved implicit-zee.c to config/i386. Can you please take another look?
>
>        * tree-pass.h (pass_implicit_zee): New pass.
>        * testsuite/gcc.target/i386/zee.c: New test.
>        * timevar.def (TV_ZEE): New.
>        * common.opt (fzee): New flag.
>        * config.gcc: Add implicit-zee.o for x86_64 target.
>        * implicit-zee.c: New file, zero extension elimination pass.
>        * config/i386/t-i386: Add rule for implicit-zee.o.
>        * i386.c (optimization_options): Enable zee pass for x86_64 target.
>
> Thanks,
> -Sriraman.
>
>
> On Thu, Sep 24, 2009 at 9:34 AM, Sriraman Tallam wrote:
>> On Thu, Sep 24, 2009 at 1:36 AM, Richard Guenther wrote:
>>> On Thu, Sep 24, 2009 at 8:25 AM, Paolo Bonzini wrote:
>>>> On 09/24/2009 08:24 AM, Ian Lance Taylor wrote:
>>>>>
>>>>> We already have the hooks, they have just been stuck in plugin.c when
>>>>> they should really be in the generic backend. See register_pass.
>>>>>
>>>>> (Sigh, every time I looked at this I said "the pass control has to be
>>>>> generic" but it still wound up in plugin.c.)
>>>>
>>>> Then I'll rephrase and say only that the pass should be in config/i386/.
>>>
>>> It should also be on by default on -O[23s] I think (didn't check if it
>>> already is). Otherwise it shortly will go the see lala-land.
>>
>> It is already on by default in O2 and higher.
>>
>>> Richard.
>>>
>>>> Paolo
Re: COMPONENT_REF problem ?
> Look at
>
> 2009-07-14  Richard Guenther
>             Andrey Belevantsev
>
>         * tree-ssa-alias.h (refs_may_alias_p_1): Declare.
>         (pt_solution_set): Likewise.
>         * tree-ssa-alias.c (refs_may_alias_p_1): Export.
>         * tree-ssa-structalias.c (pt_solution_set): New function.
>         * final.c (rest_of_clean_state): Free SSA data structures.
>         ...
>         * emit-rtl.c (component_ref_for_mem_expr): Remove.
>         (mem_expr_equal_p): Use operand_equal_p.
>         (set_mem_attributes_minus_bitpos): Do not use
>         component_ref_for_mem_expr.
>         ...
>
> this change.
>
> Richard.

Great. Thanks.

Pranav
Re[2]: Is this code legal?
Hello, Paolo.

Tuesday, October 6, 2009 at 2:05:10 PM you wrote:

PB> On 10/05/2009 09:29 PM, Sergey Sadovnikov wrote:
>> Can anybody explain why line marked with '{*1}' produce this error
>> message:

PB> I think it's because there is no constructor for array that takes an
PB> initializer_list. I get this message if I change your {*2} line to:

No. I think it's because my version of the gcc compiler was built
without the 'Standard layout types' C++0x extension. There were no
errors after I updated my trunk, recompiled gcc and then recompiled the
sample.

PB> In general, this kind of question is best asked on a C++ forum.

Probably you're right.

--
Best Regards,
Sergey                          mailto:flex_fer...@artberg.ru
LTO: Speedup.
L.S.,

On our weather forecasting code (compiled with -O3 -flto and linked
with -O3 -flto -fwhole-program) the model integration takes 65 seconds
per time step vs. 75 seconds with -O3 alone. That is a 10/75 ~ 13 %
improvement.

This compares favorably to an experiment I did back in '92: compiling
all forecast Fortran code with f2c to turn it into C code, combining
all the C files into one file in the right order, slapping "static
inline" on all functions, and then compiling the resulting file with
-O3. That gave me 10 % at that time.

--
Toon Moene - e-mail: t...@moene.org - phone: +31 346 214290
Saturnushof 14, 3738 XG Maartensdijk, The Netherlands
At home: http://moene.org/~toon/
Progress of GNU Fortran: http://gcc.gnu.org/gcc-4.5/changes.html
Re: LTO: Speedup.
> L.S.,
>
> On our weather forecasting code (compiled with -O3 -flto and linked with
> -O3 -flto -fwhole-program) I get a speedup of 65 seconds per time step
> in the model integration vs. 75 seconds with -O3 alone.

There is a bug making -fwhole-program disabled with LTO compilations.
I hope to get this fixed in mainline tomorrow.

It will be interesting to see how much difference -fwhole-program makes
for you. Also, ipa-sra was finally enabled at -O2 and I would be
greatly interested if it makes any difference (in general it should
help Fortran codebases by eliminating the need to pass stuff around by
reference).

Honza
Re: LTO: Speedup.
> > L.S.,
> >
> > On our weather forecasting code (compiled with -O3 -flto and linked with
> > -O3 -flto -fwhole-program) I get a speedup of 65 seconds per time step
> > in the model integration vs. 75 seconds with -O3 alone.
>
> There is bug making -fwhole-program disabled with LTO compilations.
> I hope to get this fixed in mainline tomorrow.
>
> It will be interesting to see how much difference -fwhole-program makes
> for you. Also ipa-sra was finally enabled at -O2 and I would be greatly
> interested if it makes any difference (in general it should help to
> fortran codebases by eliminating need to pass stuff around by reference)

And just for a non-scientific comparison, this is with the patches I
sent tonight:

-rwxr-xr-x 1 jh jh 57000 2009-10-06 23:53 gzip-O3
-rwxr-xr-x 1 jh jh 73296 2009-10-06 23:53 gzip-O3-flto
-rwxr-xr-x 1 jh jh 56368 2009-10-06 23:53 gzip-O3-flto-fwhole-program
-rwxr-xr-x 1 jh jh 76496 2009-10-06 23:56 gzip-O3-combine
-rwxr-xr-x 1 jh jh 57136 2009-10-06 23:55 gzip-O3-combine-fwhole-program

So things seem to work now more or less as expected, i.e. LTO builds
seem similar to combined builds, and whole-program improves code size
quite noticeably. Runtime results for gzip are pretty much unchanged,
but that is expected. I am quite curious about a full SPEC run.

Honza

> Honza
Re: new libstdc++-v3 decimal failures
On Tue, Oct 06, 2009 at 09:44:42AM -0700, Janis Johnson wrote:
> On Tue, 2009-10-06 at 09:10 -0700, Janis Johnson wrote:
> > On Tue, 2009-10-06 at 09:04 -0400, Jack Howarth wrote:
> > > Janis,
> > >    We are seeing failures of the new decimal testcases on x86_64-apple-darwin10
> > > which you committed into the libstdc++-v3 testsuite...
> > >
> > > FAIL: decimal/binary-arith.cc (test for excess errors)
> > > WARNING: decimal/binary-arith.cc compilation failed to produce executable
> > >
> > > Are these tests entirely glibc-centric and shouldn't they be disabled for
> > > darwin?
> >
> > Each test contains
> >
> >   // { dg-require-effective-target-dfp }
> >
> > which checks that the compiler supports modes SD, DD, and TD, which
> > in turn are supported if ENABLE_DECIMAL_FLOAT is defined within the
> > compiler. That should not be defined for darwin; I'll take a look.
>
> I built a cross cc1plus for x86_64-apple-darwin10 and got the behavior
> I expected. From $objdir/gcc:
>
> elm3b149% fgrep -l ENABLE_DECIMAL_FLOAT *.h
> auto-host.h:#define ENABLE_DECIMAL_FLOAT 0
>
> elm3b149% echo "float x __attribute__((mode(DD)));" > x.c
> elm3b149% ./cc1plus -quiet x.c
> x.c:1:33: error: unable to emulate ‘DD’
>
> Please try that little test with your cc1plus to see if the problem
> is with your compiler not rejecting DD mode, or with the test
> framework not handling it correctly.
>
> Janis

Janis,
   I find that ENABLE_DECIMAL_FLOAT is set to 0 on x86_64-apple-darwin10...

[MacPro:gcc45-4.4.999-20091005/darwin_objdir/gcc] root# grep ENABLE_DECIMAL_FLOAT auto-host.h
#define ENABLE_DECIMAL_FLOAT 0

and the test code fails to compile...

[MacPro:gcc45-4.4.999-20091005/darwin_objdir/gcc] root# echo "float x __attribute__((mode(DD)));" > x.c
[MacPro:gcc45-4.4.999-20091005/darwin_objdir/gcc] root# ./cc1plus -quiet x.c
x.c:1:33: error: unable to emulate ‘DD’

However, the testsuite failures still occur as follows...

Executing on host: /sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/./gcc/g++ -shared-libgcc -B/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/./gcc -nostdinc++ -L/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/src -L/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/src/.libs -B/sw/lib/gcc4.5/x86_64-apple-darwin10.0.0/bin/ -B/sw/lib/gcc4.5/x86_64-apple-darwin10.0.0/lib/ -isystem /sw/lib/gcc4.5/x86_64-apple-darwin10.0.0/include -isystem /sw/lib/gcc4.5/x86_64-apple-darwin10.0.0/sys-include -g -O2 -D_GLIBCXX_ASSERT -fmessage-length=0 -ffunction-sections -fdata-sections -g -O2 -g -O2 -DLOCALEDIR="." -nostdinc++ -I/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/x86_64-apple-darwin10.0.0 -I/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include -I/sw/src/fink.build/gcc45-4.4.999-20091005/gcc-4.5-20091005/libstdc++-v3/libsupc++ -I/sw/src/fink.build/gcc45-4.4.999-20091005/gcc-4.5-20091005/libstdc++-v3/include/backward -I/sw/src/fink.build/gcc45-4.4.999-20091005/gcc-4.5-20091005/libstdc++-v3/testsuite/util /sw/src/fink.build/gcc45-4.4.999-20091005/gcc-4.5-20091005/libstdc++-v3/testsuite/decimal/binary-arith.cc -include bits/stdc++.h ./libtestc++.a -L/sw/lib -liconv -lm -o ./binary-arith.exe    (timeout = 600)

In file included from /sw/src/fink.build/gcc45-4.4.999-20091005/gcc-4.5-20091005/libstdc++-v3/testsuite/decimal/binary-arith.cc:22:0:
/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/decimal/decimal:39:2: error: #error This file requires compiler and library support for ISO/IEC TR 24733 that is currently not available.
/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/decimal/decimal:228:56: error: unable to emulate 'SD'
/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/decimal/decimal:249:5: error: 'std::decimal::decimal32::decimal32(std::decimal::decimal32::__decfloat32)' cannot be overloaded
/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/decimal/decimal:236:14: error: with 'std::decimal::decimal32::decimal32(float)'

etc... for about a hundred errors. Doesn't this imply that the dejagnu
test harness isn't properly recognizing the absence of the decimal
support?

Jack
Re: new libstdc++-v3 decimal failures
On Tue, 2009-10-06 at 18:19 -0400, Jack Howarth wrote:
> On Tue, Oct 06, 2009 at 09:44:42AM -0700, Janis Johnson wrote:
> > On Tue, 2009-10-06 at 09:10 -0700, Janis Johnson wrote:
> > > On Tue, 2009-10-06 at 09:04 -0400, Jack Howarth wrote:
> > > > Janis,
> > > >    We are seeing failures of the new decimal testcases on x86_64-apple-darwin10
> > > > which you committed into the libstdc++-v3 testsuite...
> > > >
> > > > FAIL: decimal/binary-arith.cc (test for excess errors)
> > > > WARNING: decimal/binary-arith.cc compilation failed to produce executable
> > > >
> > > > Are these tests entirely glibc-centric and shouldn't they be disabled for
> > > > darwin?
> > >
> > > Each test contains
> > >
> > >   // { dg-require-effective-target-dfp }
> > >
> > > which checks that the compiler supports modes SD, DD, and TD, which
> > > in turn are supported if ENABLE_DECIMAL_FLOAT is defined within the
> > > compiler. That should not be defined for darwin; I'll take a look.
> >
> > I built a cross cc1plus for x86_64-apple-darwin10 and got the behavior
> > I expected. From $objdir/gcc:
> >
> > elm3b149% fgrep -l ENABLE_DECIMAL_FLOAT *.h
> > auto-host.h:#define ENABLE_DECIMAL_FLOAT 0
> >
> > elm3b149% echo "float x __attribute__((mode(DD)));" > x.c
> > elm3b149% ./cc1plus -quiet x.c
> > x.c:1:33: error: unable to emulate ‘DD’
> >
> > Please try that little test with your cc1plus to see if the problem
> > is with your compiler not rejecting DD mode, or with the test
> > framework not handling it correctly.
> >
> > Janis
>
> Janis,
>    I find that ENABLE_DECIMAL_FLOAT is set to 0 on x86_64-apple-darwin10...
>
> [MacPro:gcc45-4.4.999-20091005/darwin_objdir/gcc] root# grep ENABLE_DECIMAL_FLOAT auto-host.h
> #define ENABLE_DECIMAL_FLOAT 0
>
> and the test code fails to compile...
>
> [MacPro:gcc45-4.4.999-20091005/darwin_objdir/gcc] root# echo "float x __attribute__((mode(DD)));" > x.c
> [MacPro:gcc45-4.4.999-20091005/darwin_objdir/gcc] root# ./cc1plus -quiet x.c
> x.c:1:33: error: unable to emulate ‘DD’
>
> However, the testsuite failures still occurs as follows...
>
> Executing on host: /sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/./gcc/g++ -shared-libgcc -B/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/./gcc -nostdinc++ -L/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/src -L/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/src/.libs -B/sw/lib/gcc4.5/x86_64-apple-darwin10.0.0/bin/ -B/sw/lib/gcc4.5/x86_64-apple-darwin10.0.0/lib/ -isystem /sw/lib/gcc4.5/x86_64-apple-darwin10.0.0/include -isystem /sw/lib/gcc4.5/x86_64-apple-darwin10.0.0/sys-include -g -O2 -D_GLIBCXX_ASSERT -fmessage-length=0 -ffunction-sections -fdata-sections -g -O2 -g -O2 -DLOCALEDIR="." -nostdinc++ -I/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/x86_64-apple-darwin10.0.0 -I/sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include -I/sw/src/fink.build/gcc45-4.4.999-20091005/gcc-4.5-20091005/libstdc++-v3/libsupc++ -I/sw/src/fink.build/gcc45-4.4.999-20091005/gcc-4.5-20091005/libstdc++-v3/include/backward -I/sw/src/fink.build/gcc45-4.4.999-20091005/gcc-4.5-20091005/libstdc++-v3/testsuite/util /sw/src/fink.build/gcc45-4.4.999-20091005/gcc-4.5-20091005/libstdc++-v3/testsuite/decimal/binary-arith.cc -include bits/stdc++.h ./libtestc++.a -L/sw/lib -liconv -lm -o ./binary-arith.exe    (timeout = 600)
>
> In file included from /sw/src/fink.build/gcc45-4.4.999-20091005/gcc-4.5-20091005/libstdc++-v3/testsuite/decimal/binary-arith.cc:22:0:
> /sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/decimal/decimal:39:2: error: #error This file requires compiler and library support for ISO/IEC TR 24733 that is currently not available.
> /sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/decimal/decimal:228:56: error: unable to emulate 'SD'
> /sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/decimal/decimal:249:5: error: 'std::decimal::decimal32::decimal32(std::decimal::decimal32::__decfloat32)' cannot be overloaded
> /sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/decimal/decimal:236:14: error: with 'std::decimal::decimal32::decimal32(float)'
>
> etc...for about a hundred errors. Doesn't this imply that the dejagnu test
> harness isn't properly recognizing the absence of the decimal support?

Oh, maybe the libstdc++ tests don't support dg-require-effective-target.

Janis
Re: new libstdc++-v3 decimal failures
Why do we have a libstdc++ list? For questions like this...

> > > > FAIL: decimal/binary-arith.cc (test for excess errors)

plus

> However, the testsuite failures still occurs as follows...
>
> Executing on host: /sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/./gcc/g++ -shared-libgcc
> from /sw/src/fink.build/gcc45-4.4.999-20091005/gcc-4.5-20091005/libstdc++-v3/testsuite/decimal/binary-arith.cc:22:0:
> /sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/decimal/decimal:39:2: error: #error This file requires compiler and library support for ISO/IEC TR 24733 that is currently not available.
> /sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/decimal/decimal:228:56: error: unable to emulate 'SD'
> /sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/decimal/decimal:249:5: error: 'std::decimal::decimal32::decimal32(std::decimal::decimal32::__decfloat32)' cannot be overloaded
> /sw/src/fink.build/gcc45-4.4.999-20091005/darwin_objdir/x86_64-apple-darwin10.0.0/libstdc++-v3/include/decimal/decimal:236:14: error: with 'std::decimal::decimal32::decimal32(float)'

...means that it's the excess error that is the fail, not the error. On
the bright side, the error is better now, I think.

> etc...for about a hundred errors. Doesn't this imply that the dejagnu
> test harness isn't properly recognizing the absence of the decimal
> support?

I changed this from what Janis posted, so she is certainly not to blame
here. In any case, all you should have to do is add a new line for this
new error:

> error: #error This file requires compiler and library support for
> ISO/IEC TR 24733 that is currently not available.

-benjamin
Re: new libstdc++-v3 decimal failures
On Tue, Oct 06, 2009 at 03:30:34PM -0700, Janis Johnson wrote:
>
> Oh, maybe the libstdc++ tests don't support dg-require-effective-target.
>
> Janis

Janis,
   Yes, doesn't it need something like...

  # Skip these tests for targets that don't support this extension.
  if { ![check_effective_target_dfp] } {
      return;
  }

which is in testsuite/gcc.dg/dfp/dfp.exp. I find...

grep -R check_effective_target_dfp testsuite | grep ".exp" | grep -v svn | grep -v Change
testsuite/g++.dg/dfp/dfp.exp:if { ![check_effective_target_dfp] } {
testsuite/g++.dg/dfp/dfp.exp:if { ![check_effective_target_dfprt] } {
testsuite/gcc.dg/dfp/dfp.exp:if { ![check_effective_target_dfp] } {
testsuite/gcc.dg/dfp/dfp.exp:if { ![check_effective_target_dfprt] } {
testsuite/gcc.misc-tests/dectest.exp:if { ![check_effective_target_dfp] } {
testsuite/lib/c-compat.exp:set compat_have_dfp [check_effective_target_dfprt_nocache]
testsuite/lib/c-compat.exp:    set compat_have_dfp [check_effective_target_dfprt_nocache]
testsuite/lib/target-supports.exp:proc check_effective_target_dfp_nocache { } {
testsuite/lib/target-supports.exp:    verbose "check_effective_target_dfp_nocache: compiling source" 2
testsuite/lib/target-supports.exp:    verbose "check_effective_target_dfp_nocache: returning $ret" 2
testsuite/lib/target-supports.exp:proc check_effective_target_dfprt_nocache { } {
testsuite/lib/target-supports.exp:proc check_effective_target_dfp { } {
testsuite/lib/target-supports.exp:    check_effective_target_dfp_nocache
testsuite/lib/target-supports.exp:proc check_effective_target_dfprt { } {
testsuite/lib/target-supports.exp:    check_effective_target_dfprt_nocache

but for libstdc++-v3...

grep -R check_effective_target_dfp testsuite | grep ".exp" | grep -v svn | grep -v Change

nothing appears.

Jack
Re: new libstdc++-v3 decimal failures
On Tue, 2009-10-06 at 15:30 -0700, Janis Johnson wrote:
> On Tue, 2009-10-06 at 18:19 -0400, Jack Howarth wrote:
> > On Tue, Oct 06, 2009 at 09:44:42AM -0700, Janis Johnson wrote:
> > > On Tue, 2009-10-06 at 09:10 -0700, Janis Johnson wrote:
> > > > On Tue, 2009-10-06 at 09:04 -0400, Jack Howarth wrote:
> >
> > etc... for about a hundred errors. Doesn't this imply that the dejagnu test
> > harness isn't properly recognizing the absence of the decimal support?
>
> Oh, maybe the libstdc++ tests don't support dg-require-effective-target.

I spoke too soon. I'm now building a compiler with decimal float
disabled and will dig into this.

Janis
Re: new libstdc++-v3 decimal failures
On Tue, Oct 06, 2009 at 03:40:29PM -0700, Janis Johnson wrote:
>
> I spoke too soon. I'm now building a compiler with decimal float
> disabled and will dig into this.
>
> Janis

Janis,
   Don't you have to include something like
gcc/testsuite/lib/target-supports.exp to be able to use
check_effective_target_dfp? I am somewhat surprised that the dejagnu
harness didn't throw an error when trying to process...

  // { dg-require-effective-target-dfp }

without having a "load_lib target-supports.exp" in the calling .exp
script.

Jack
Re: Request for code review - (ZEE patch : Redundant Zero extension elimination)
On 10/01/2009 11:37 PM, Sriraman Tallam wrote:
> Hi,
>
> I moved implicit-zee.c to config/i386. Can you please take another look?

I think this patch is best reviewed by an x86 backend maintainer now.
Thanks for doing the adjustments, BTW.

Paolo
gcc-4.4-20091006 is now available
Snapshot gcc-4.4-20091006 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.4-20091006/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.4 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-4_4-branch revision 152501

You'll find:

gcc-4.4-20091006.tar.bz2            Complete GCC (includes all of below)
gcc-core-4.4-20091006.tar.bz2       C front end and core compiler
gcc-ada-4.4-20091006.tar.bz2        Ada front end and runtime
gcc-fortran-4.4-20091006.tar.bz2    Fortran front end and runtime
gcc-g++-4.4-20091006.tar.bz2        C++ front end and runtime
gcc-java-4.4-20091006.tar.bz2       Java front end and runtime
gcc-objc-4.4-20091006.tar.bz2       Objective-C front end and runtime
gcc-testsuite-4.4-20091006.tar.bz2  The GCC testsuite

Diffs from 4.4-20090929 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.4
link is updated and a message is sent to the gcc list. Please do not use
a snapshot before it has been announced that way.
Re: new libstdc++-v3 decimal failures
On Tue, Oct 06, 2009 at 03:34:30PM -0700, Benjamin Kosnik wrote:
>
> Why do we have a libstdc++ list? For questions like this...
>

Because this is a flaw in the libstdc++-v3 testsuite harness which the
core gcc testsuite obviously handles properly. The other gcc developers
might have an insight on the best way to fix this for libstdc++-v3
(since it appears you need some of the procs from
gcc/testsuite/lib/target-supports.exp).

Jack
Re: new libstdc++-v3 decimal failures
On Tue, 2009-10-06 at 18:56 -0400, Jack Howarth wrote:
> On Tue, Oct 06, 2009 at 03:34:30PM -0700, Benjamin Kosnik wrote:
> >
> > Why do we have a libstdc++ list? For questions like this...
> >
> Because this is a flaw in the libstdc++-v3 testsuite harness
> which obviously the core gcc testsuite handles properly. The
> other gcc developers might have an insight on the best way
> to fix this for libstdc++-v3 (since it appears you need some
> of the procs from gcc/testsuite/lib/target-supports.exp).

Actually it's a flaw in my new tests, a missing space in

  // { dg-require-effective-target dfp }

Patch will be checked in momentarily; sorry about that.

By the way, gcc/testsuite/lib/target-supports-dg.exp is included
indirectly, and the check doesn't look for any specific error message.

Janis
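For reference, a minimal decimal test using the corrected directive
might look like the sketch below; only the directive spelling comes
from the message above, and the test body itself is invented for
illustration:

  // { dg-require-effective-target dfp }

  #include <decimal/decimal>

  int main ()
  {
    std::decimal::decimal32 d (1);   // needs ISO/IEC TR 24733 support
    return 0;
  }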
Re: how to get the .dfa output file in gcc
On Sat, 2009-10-03 at 11:37 -0700, ddmetro wrote:
> 1. In the initiate_automaton_gen() function of 'genautomata.c', initialize
> the v_flag variable to 1 i.e., v_flag = 1;

It should not be necessary to do this. Can you retry with the .md syntax?

Ben
Collapsing control-flow that leads to undefined behavior
Fellow GCC developers,

Does GCC make any effort to collapse control-flow that is guaranteed to
have undefined behavior? Such an optimization would improve performance
of Proc_2 from Dhrystone:

  typedef int One_Fifty;
  typedef enum {Ident_1, Ident_2, Ident_3, Ident_4, Ident_5} Enumeration;

  char        Ch_1_Glob, Ch_2_Glob;
  int         Int_Glob;

  Proc_2 (Int_Par_Ref)
  /******************/
      /* executed once */
      /* *Int_Par_Ref == 1, becomes 4 */
  One_Fifty   *Int_Par_Ref;
  {
    One_Fifty     Int_Loc;
    Enumeration   Enum_Loc;

    Int_Loc = *Int_Par_Ref + 10;
    do /* executed once */
      if (Ch_1_Glob == 'A')
        /* then, executed */
      {
        Int_Loc -= 1;
        *Int_Par_Ref = Int_Loc - Int_Glob;
        Enum_Loc = Ident_1;
      } /* if */
    while (Enum_Loc != Ident_1); /* false */
  } /* Proc_2 */

The variable Enum_Loc is referenced in the condition of the do-while
loop, but the only place it is set is inside the if block. For this
code to have defined behavior, the code in the if block must be
executed. Thus, it is valid to transform the code to the equivalent of

  Proc_2 (Int_Par_Ref)
  One_Fifty   *Int_Par_Ref;
  {
    *Int_Par_Ref += 9 - Int_Glob;
  }

Does any pass in GCC attempt to do anything like this? If not, how
feasible would it be?

GCC already eliminates the do-while loop during the Conditional
Constant Propagation pass. It appears that ccp1 is able to deduce that
the value of Enum_Loc in the do-while condition is either Ident_1 or
undefined. It proceeds to substitute Ident_1 for Enum_Loc and fold the
condition.

Once ccp1 (or some earlier pass) finds a basic block that references a
variable that is undefined on an incoming edge, how feasible would it
be to eliminate that edge and any control-flow post-dominated by that
edge?

Thank you,
Charles J. Tabony
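A smaller, self-contained illustration of the same principle (the
function is invented for illustration, and whether a given GCC pass
actually performs the transformation is exactly the question above):

  int f (int flag)
  {
    int x;                     /* deliberately uninitialized */
    if (flag)
      x = 5;
    /* Reading x is undefined unless the if-branch ran, so an optimizer
       may assume flag is nonzero and fold the function to return 6.  */
    return x + 1;
  }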