Re: building LTO plugin fails for x86_64-unknown-linux-gnu
Richard Guenther schrieb:
> On Sun, Oct 11, 2009 at 6:21 PM, Richard Guenther wrote:
>> On Sun, Oct 11, 2009 at 6:04 PM, Rainer Emrich wrote:
>>>
>>> libtool: link: gcc -shared .libs/lto-plugin.o
>>> -L/SCRATCH/tmp.dYgZv17836/Linux/x86_64-unknown-linux-gnu/openSUSE_10.3/install/lib64
>>
>> is there a libiberty in that path?
>>
>>> -lelf
>>> -L/SCRATCH/tmp.dYgZv17836/Linux/x86_64-unknown-linux-gnu/openSUSE_10.3/gcc-4.5.0/gcc-4.5.0/libiberty/pic
>>
>> it's supposed to be picked up from here instead.
>
> Thus, I guess
>
> Index: lto-plugin/Makefile.am
> ===
> --- lto-plugin/Makefile.am  (revision 152638)
> +++ lto-plugin/Makefile.am  (working copy)
> @@ -16,4 +16,4 @@ AM_CFLAGS = -Wall -Werror
>  libexecsub_LTLIBRARIES = liblto_plugin.la
>
>  liblto_plugin_la_SOURCES = lto-plugin.c
> -liblto_plugin_la_LIBADD = $(LIBELFLIBS) -L../libiberty/pic -liberty
> +liblto_plugin_la_LIBADD = $(LIBELFLIBS) ../libiberty/pic/libiberty.a
>
> would fix it.
>
> Richard.

Yes, I think that should fix the issue. I think it's a good idea in general to make sure that the library is picked up from within the build tree.

So, for testing this change I have to regenerate the dependent files with automake, no?

Rainer
Re: building LTO plugin fails for x86_64-unknown-linux-gnu
On Mon, Oct 12, 2009 at 10:53 AM, Rainer Emrich wrote:
> Richard Guenther schrieb:
>> On Sun, Oct 11, 2009 at 6:21 PM, Richard Guenther wrote:
>>> On Sun, Oct 11, 2009 at 6:04 PM, Rainer Emrich wrote:
>>>>
>>>> libtool: link: gcc -shared .libs/lto-plugin.o
>>>> -L/SCRATCH/tmp.dYgZv17836/Linux/x86_64-unknown-linux-gnu/openSUSE_10.3/install/lib64
>>>
>>> is there a libiberty in that path?
>>>
>>>> -lelf
>>>> -L/SCRATCH/tmp.dYgZv17836/Linux/x86_64-unknown-linux-gnu/openSUSE_10.3/gcc-4.5.0/gcc-4.5.0/libiberty/pic
>>>
>>> it's supposed to be picked up from here instead.
>>
>> Thus, I guess
>>
>> Index: lto-plugin/Makefile.am
>> ===
>> --- lto-plugin/Makefile.am  (revision 152638)
>> +++ lto-plugin/Makefile.am  (working copy)
>> @@ -16,4 +16,4 @@ AM_CFLAGS = -Wall -Werror
>>  libexecsub_LTLIBRARIES = liblto_plugin.la
>>
>>  liblto_plugin_la_SOURCES = lto-plugin.c
>> -liblto_plugin_la_LIBADD = $(LIBELFLIBS) -L../libiberty/pic -liberty
>> +liblto_plugin_la_LIBADD = $(LIBELFLIBS) ../libiberty/pic/libiberty.a
>>
>> would fix it.
>>
>> Richard.
>
> Yes, I think that should fix the issue. I think it's a good idea in general to
> make sure that the library is picked up from within the build tree.
>
> So, for testing this change I have to regenerate the dependent files with
> automake, no?

Yes, see the patch I posted, which includes the generated files.

Richard.
VOIDmode in ZERO_EXTEND crashed
hi, everyone:

I have ported gcc to a new RISC chip. But when I build newlib with it, gcc crashes in simplify-rtx.c. The error message is like this:

../../../../../newlib-1.16.0/newlib/libc/time/tzset_r.c: In function '_tzset_r':
../../../../../newlib-1.16.0/newlib/libc/time/tzset_r.c:195: internal compiler error: in simplify_const_unary_operation, at simplify-rtx.c:1108

And the code there is simplify-rtx.c:1108:

    case ZERO_EXTEND:
      /* When zero-extending a CONST_INT, we need to know its
         original mode.  */
      gcc_assert (op_mode != VOIDmode);

I tracked it in gcc; it is caused by the RTX (const_int 60). As far as I know, a CONST_INT always has VOIDmode. I dumped the RTL in tzset_r.c.161r.dce, and I think it is caused by the following rtx (because there is a const_int 60 in this rtx, and the register used is exactly the one I saw in the rtx which causes the crash):

(insn 229 228 230 21 ../../../../../newlib-1.16.0/newlib/libc/time/tzset_r.c:78
    (set (reg:SI 181)
        (mult:SI (zero_extend:SI (reg:HI 182 [ mm ]))
            (zero_extend:SI (const_int 60 [0x3c])))) 63 {rice_umulhisi3}
    (expr_list:REG_DEAD (reg:HI 182 [ mm ])
        (nil)))

And the problem is that I don't know why this leads to the crash.

PS: Does gcc have a function which can dump a specified rtx? I want to dump the rtx when the crash happens. Can somebody give me some advice? Any suggestion is appreciated.

Thanks.
daniel.
RE: VOIDmode in ZERO_EXTEND crashed
> PS: Does gcc have a function which could dump the specified rtx?
> I wanna dump the rtx when the crash happening.

debug_rtx (x);

You can also call this from within GDB, by typing:

    call debug_rtx (x)

Cheers,
Jon
glibc configure: error: Need linker with .init_array/.fini_array support
Hi,

I'm new to the GNU tool building environment. I'm trying to build a cross GCC for the powerpc-linux platform. I could compile binutils and the first-stage gcc. These are my configuration options:

binutils:
../configure --prefix=/home/tellabs/GNU/PPC --target=powerpc-linux-gnu
make all
make install

first-stage gcc:
../configure --target=powerpc-linux-gnu --prefix=/home/tellabs/GNU/PPC --disable-shared --disable-threads --enable-languages=c
make all-gcc
make install-gcc

When I try to compile glibc, the following configuration error occurs:

../configure CFLAGS=" -march=i686 -O2" --host=i686-pc-linux-gnu --target=powerpc-linux-gnu --prefix=/home/tellabs/GNU/PPC --with-headers=/home/tellabs/GNU/include --with-binutils=/home/tellabs/GNU/PPC/powerpc-linux-gnu/bin

checking build system type... i686-pc-linux-gnu
checking host system type... i686-pc-linux-gnu
configure: running configure fragment for add-on nptl
...
checking for autoconf... autoconf
checking whether autoconf works... yes
checking whether ranlib is necessary... /tmp/ccE5O0vD.s: Assembler messages:
/tmp/ccE5O0vD.s:7: Error: Unrecognized opcode: `pushl'
/tmp/ccE5O0vD.s:8: Error: Unrecognized opcode: `movl'
/tmp/ccE5O0vD.s:9: Error: Unrecognized opcode: `popl'
/tmp/ccE5O0vD.s:10: Error: Unrecognized opcode: `ret'
/home/tellabs/GNU/PPC/powerpc-linux-gnu/bin/ar: conftest.o: No such file or directory
cp: cannot stat `conftest.a': No such file or directory
/home/tellabs/GNU/PPC/powerpc-linux-gnu/bin/ranlib: 'conftest.a': No such file
yes
checking LD_LIBRARY_PATH variable... ok
checking whether GCC supports -static-libgcc... -static-libgcc
...
checking for broken __attribute__((alias()))... no
checking whether to put _rtld_local into .sdata section... no
checking for .preinit_array/.init_array/.fini_array support... no
configure: error: Need linker with .init_array/.fini_array support.

Could anyone please figure out what the problem is? Have any of you faced a similar problem?
It will be a great help if I get the result. I have been stuck at this step for many days. Also, please tell me whether the configuration options I have used are correct.

Thanks in advance,
Jeff.J
Re: VOIDmode in ZERO_EXTEND crashed
2009/10/12 Jon Beniston:
>> PS: Does gcc have a function which could dump the specified rtx?
>> I wanna dump the rtx when the crash happening.
>
> debug_rtx(x);
>
> You can also call this from within GDB, by typing:
>
> call debug_rtx(x)
>
> Cheers,
> Jon

Thanks. I dumped the context. Here it is:

(mult:SI (zero_extend:SI (mem/c/i:HI (plus:SI (reg/f:SI 14 R14)
                (const_int 46 [0x2e])) [8 mm+0 S2 A16]))
    (zero_extend:SI (const_int 60 [0x3c])))

(zero_extend:SI (const_int 60 [0x3c])) makes gcc crash. It is exactly the rtx mentioned in the first email:

(insn 228 227 229 21 ../../../../../newlib-1.16.0/newlib/libc/time/tzset_r.c:78
    (set (reg:HI 182 [ mm ])
        (mem/c/i:HI (plus:SI (reg/f:SI 14 R14)
                (const_int 46 [0x2e])) [8 mm+0 S2 A16])) 10 {load_hi}
    (nil))

(insn 229 228 230 21 ../../../../../newlib-1.16.0/newlib/libc/time/tzset_r.c:78
    (set (reg:SI 181)
        (mult:SI (zero_extend:SI (reg:HI 182 [ mm ]))
            (zero_extend:SI (const_int 60 [0x3c])))) 63 {rice_umulhisi3}
    (expr_list:REG_DEAD (reg:HI 182 [ mm ])
        (nil)))

You can see that gcc has simply merged the two insns. But the problem is HOW I can avoid this error.

Thanks.
daniel
Re: VOIDmode in ZERO_EXTEND crashed
Quoting daniel tian:
> hi, everyone:
> I have ported the gcc to a new RISC chip. But when I build the newlib
> with it, the gcc crashed in simplify-rtx.c. The error message is like this:
>
> ../../../../../newlib-1.16.0/newlib/libc/time/tzset_r.c: In function '_tzset_r':
> ../../../../../newlib-1.16.0/newlib/libc/time/tzset_r.c:195: internal
> compiler error: in simplify_const_unary_operation, at simplify-rtx.c:1108
>
> And the code there is simplify-rtx.c:1108:
>
>     case ZERO_EXTEND:
>       /* When zero-extending a CONST_INT, we need to know its
>          original mode.  */
>       gcc_assert (op_mode != VOIDmode);
>
> I tracked the gcc; it is caused by the RTX (const_int 60). As I
> know, a CONST_INT always has VOIDmode.

That exactly is the problem. You can't have a CONST_INT inside a ZERO_EXTEND. That is not valid.

You'll need a separate pattern to recognize the CONST_INT without a ZERO_EXTEND around it. Unfortunately, this will not give reload the freedom it should have.
Re: VOIDmode in ZERO_EXTEND crashed
2009/10/12 Joern Rennecke:
> That exactly is the problem. You can't have a CONST_INT inside a
> ZERO_EXTEND. That is not valid.
> You'll need a separate pattern to recognize the CONST_INT without a
> ZERO_EXTEND around it. Unfortunately, this will not give reload
> the freedom it should have.

You mean I should remove the const_int (by the predicate) in the umulhisi3 pattern? I mean, if I remove it, how do I deal with "reg = const_int * reg" in the umulhisi3 pattern?

Thank you very much.
daniel.
Re: VOIDmode in ZERO_EXTEND crashed
>> That exactly is the problem. You can't have a CONST_INT inside a
>> ZERO_EXTEND. That is not valid.
>> You'll need a separate pattern to recognize the CONST_INT without a
>> ZERO_EXTEND around it. Unfortunately, this will not give reload
>> the freedom it should have.
>
> You mean I should remove the const_int (by the predicate) in the umulhisi3
> pattern? I mean, if I remove it, how do I deal with "reg = const_int * reg"
> in the umulhisi3 pattern?

You follow these steps:

1) rename the existing pattern to something else than umulhisi3, and make it only accept register_operands

2) create another insn that matches (mult (zero_extend (match_operand "register_operand")) (const_int))

3) create a define_expand that checks for a CONST_INT and does not wrap it (but wraps REGs and SUBREGs and, if applicable, MEMs).

This is in general how you deal with too-forgiving predicates during expansion.

Paolo
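For illustration, Paolo's three steps might look roughly like this in the port's .md file. This is only a hedged sketch, not code from the thread or from any real port: the pattern names, predicates, constraints, and the `umulh` mnemonic are all invented, and target details (e.g. whether the constant must be masked to 16 bits) need checking against the real machine. The `gen_rtx_SET` call uses the three-argument, GCC 4.5-era signature.

```lisp
;; Step 1: the register-only multiply, no longer named umulhisi3.
(define_insn "*umulhisi3_reg"
  [(set (match_operand:SI 0 "register_operand" "=r")
        (mult:SI (zero_extend:SI (match_operand:HI 1 "register_operand" "r"))
                 (zero_extend:SI (match_operand:HI 2 "register_operand" "r"))))]
  ""
  "umulh\t%0,%1,%2")

;; Step 2: the constant variant -- note the bare const_int, with no
;; zero_extend wrapped around it.
(define_insn "*umulhisi3_imm"
  [(set (match_operand:SI 0 "register_operand" "=r")
        (mult:SI (zero_extend:SI (match_operand:HI 1 "register_operand" "r"))
                 (match_operand:SI 2 "const_int_operand" "n")))]
  ""
  "umulh\t%0,%1,%2")

;; Step 3: the named expander.  A REG or SUBREG operand keeps the
;; zero_extend from the template; a CONST_INT is emitted bare, so the
;; invalid (zero_extend (const_int ...)) never reaches simplify-rtx.c.
(define_expand "umulhisi3"
  [(set (match_operand:SI 0 "register_operand" "")
        (mult:SI (zero_extend:SI (match_operand:HI 1 "register_operand" ""))
                 (zero_extend:SI (match_operand:HI 2 "nonmemory_operand" ""))))]
  ""
{
  if (CONST_INT_P (operands[2]))
    {
      emit_insn (gen_rtx_SET (VOIDmode, operands[0],
                              gen_rtx_MULT (SImode,
                                            gen_rtx_ZERO_EXTEND (SImode,
                                                                 operands[1]),
                                            operands[2])));
      DONE;
    }
})
```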
cross build x86_64-unknown-linux-gnu to i686-pc-cygwin fails in libstdc++-v3
in libstdc++-v3: configure:57398: error: No support for this host/target combination. Used to work for the gcc-4.4.x series. Any ideas? Rainer
Re: delete dead feature branches?
On 09/25/2009 09:35 PM, Joseph S. Myers wrote: Viewing deleted files and their history (and for SVN deleted branches are just a special case of deleted files) is something SVN is bad at since you do need to work out the last revision the file was present first. Yep. Anyone deleting dead branches should add a link to the last "live" version in branches.html. It seems easier to me to move them under branches/dead, and possibly create branches/merged. Paolo
Issues of the latest trunk with LTO merges
Hello,

I ran into an issue with the LTO merges when updating to current trunk. The problem is that my target calls a few functions/uses some data structures in the gcc directory: c_language, pragma_lex, c_register_pragma, etc.

gcc -m32 -g -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -W -Wall -Wwrite-strings -Wcast-qual -Wstrict-prototypes -Wmissing-prototypes -Wmissing-format-attribute -pedantic -Wno-long-long -Wno-variadic-macros -Wno-overlength-strings -Wold-style-definition -Wc++-compat -fno-common -DHAVE_CONFIG_H -o lto1 \
    lto/lto-lang.o lto/lto.o lto/lto-elf.o attribs.o main.o tree-browser.o libbackend.a ../libcpp/libcpp.a ../libdecnumber/libdecnumber.a -L/projects/firepath/tools/work/bmei/packages/gmp/4.3.0/lib -L/projects/firepath/tools/work/bmei/packages/mpfr/2.4.1/lib -lmpfr -lgmp -rdynamic -ldl -L../zlib -lz -L/projects/firepath/tools/work/bmei/packages/libelf/lib -lelf ../libcpp/libcpp.a ../libiberty/libiberty.a ../libdecnumber/libdecnumber.a -lelf

When compiling lto1 in the above step, I consequently get many linking errors. I tried to add some extra object files like c-common.o, c-pragma.o, etc. into lto/Make-lang.in; more linking errors are produced. One problem is that the lto code redefines some data that exists in the main code -- flag_no_builtin, flag_isoc99, lang_hooks, etc. -- which prevents it from linking with object files in the main directory.

What is the clean solution for this? Thanks in advance.

Cheers,
Bingfeng Mei
Re: Issues of the latest trunk with LTO merges
On Mon, Oct 12, 2009 at 4:31 PM, Bingfeng Mei wrote: > Hello, > I ran into an issue with the LTO merges when updating to current trunk. > The problem is that my target calls a few functions/uses some data structures > in the gcc directory: c_language, paragma_lex, c_register_pragma, etc. > > gcc -m32 -g -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -W -Wall -Wwrite-strings > -Wcast-qual -Wstrict-prototypes -Wmissing-prototypes > -Wmissing-format-attribute -pedantic -Wno-long-long -Wno-variadic-macros > -Wno-overlength-strings -Wold-style-definition -Wc++-compat -fno-common > -DHAVE_CONFIG_H -o lto1 \ > lto/lto-lang.o lto/lto.o lto/lto-elf.o attribs.o main.o > tree-browser.o libbackend.a ../libcpp/libcpp.a ../libdecnumber/libdecnumber.a > -L/projects/firepath/tools/work/bmei/packages/gmp/4.3.0/lib > -L/projects/firepath/tools/work/bmei/packages/mpfr/2.4.1/lib -lmpfr -lgmp > -rdynamic -ldl -L../zlib -lz > -L/projects/firepath/tools/work/bmei/packages/libelf/lib -lelf > ../libcpp/libcpp.a ../libiberty/libiberty.a ../libdecnumber/libdecnumber.a > -lelf > > When compiling for lto1 in above step, I have many linking errors > consequently. > I tried to add some extra object files like c-common.o, c-pragma.o, etc into > lto/Make-lang.in. More linking errors are produced. One problem is that lto > code redefines some data exist in the main code: flag_no_builtin, flag_isoc99 > lang_hooks, etc, which prevent it from linking with object files in main > directory. > > What is the clean solution for this? Thanks in advance. You should not use C frontend specific stuff when not building the C frontend. Richard. > Cheers, > Bingfeng Mei > > >
RE: Issues of the latest trunk with LTO merges
Richard,

Doesn't REGISTER_TARGET_PRAGMAS need to call c_register_pragma, etc., if we want to specify target-specific pragmas? It becomes part of libbackend.a, which is linked into lto1. One solution I see is to put them into a separate file, so the linker won't produce undefined references when they are not actually used by lto1.

Thanks,
Bingfeng

> -----Original Message-----
> From: Richard Guenther [mailto:richard.guent...@gmail.com]
> Sent: 12 October 2009 15:34
> To: Bingfeng Mei
> Cc: gcc@gcc.gnu.org
> Subject: Re: Issues of the latest trunk with LTO merges
>
> You should not use C frontend specific stuff when not building
> the C frontend.
>
> Richard.
Re: VOIDmode in ZERO_EXTEND crashed
2009/10/12 Paolo Bonzini:
> You follow these steps:
>
> 1) rename the existing pattern to something else than umulhisi3, and make it
> only accept register_operands
>
> 2) create another insn that matches (mult (zero_extend (match_operand
> "register_operand")) (const_int))
>
> 3) create a define_expand that checks for a CONST_INT and does not wrap it
> (but wraps REGs and SUBREGs and if applicable MEMs).
>
> This is in general how you deal with too-forgiving predicates during
> expansion.
>
> Paolo

That sounds like a good approach. I will try it. First I would like to accept only registers and force all const_int operands into registers. Obviously, this may generate less efficient code compared with your solution. I still can't get a clear picture of step 3; I may need to take a while to think about it. I am a newcomer. :)

Thanks for your advice.

Best wishes.
daniel.
Re: Issues of the latest trunk with LTO merges
On 10/12/2009 08:09 AM, Bingfeng Mei wrote:
> Richard,
> Doesn't REGISTER_TARGET_PRAGMAS need to call c_register_pragma, etc., if we
> want to specify target-specific pragmas? It becomes part of libbackend.a,
> which is linked into lto1. One solution I see is to put them into a separate
> file, so the linker won't produce undefined references when they are not
> actually used by lto1.

The separate file solution is used by all of the other targets that have special pragmas, e.g. config/ia64/ia64-c.c.

r~
Re: Issues of the latest trunk with LTO merges
On Mon, Oct 12, 2009 at 08:09:48AM -0700, Bingfeng Mei wrote: > Richard, > Doesn't REGISTER_TARGET_PRAGMAS need to call c_register_pragma, etc, if we > want to specify target-specific pragma? It becomes part of libbackend.a, > which is linked to lto1. One solution I see is to put them into a separate > file so the linker won't produce undefined references when they are not > actually used by lto1. Yes. Take a look at config/arm/arm-c.c, which does not go into libbackend.a. -- Daniel Jacobowitz CodeSourcery
status of http://sshproxy.sourceware.org:443/
Since around Wednesday of last week, I have been unable to access svn+ssh://ljrit...@gcc.gnu.org/svn/gcc through http://sshproxy.sourceware.org:443/

I have not changed any local configuration in some time, and definitely not since it worked the day before that.

I do not control the outbound gateway that I use, but: when I hit http://sshproxy.sourceware.org:443/ with a plain browser, I see "SSH-1.99-OpenSSH_3.9p1" reported back.

Is everything working from sshproxy.sourceware.org back to gcc.gnu.org?

Loren
Re: glibc configure: error: Need linker with .init_array/.fini_array support
jeffiedward writes: > when i try to compile glibc, the following configuration error occurs: > > ../configure CFLAGS=" -march=i686 -O2" --host=i686-pc-linux-gnu > --target=powerpc-linux-gnu --prefix=/home/tellabs/GNU/PPC > --with-headers=/home/tellabs/GNU/include > --with-binutils=/home/tellabs/GNU/PPC/powerpc-linux-gnu/bin > > checking build system type... i686-pc-linux-gnu > checking host system type... i686-pc-linux-gnu > configure: running configure fragment for add-on nptl > ... > ... > checking for autoconf... autoconf > checking whether autoconf works... yes > checking whether ranlib is necessary... /tmp/ccE5O0vD.s: Assembler messages: > /tmp/ccE5O0vD.s:7: Error: Unrecognized opcode: `pushl' > /tmp/ccE5O0vD.s:8: Error: Unrecognized opcode: `movl' > /tmp/ccE5O0vD.s:9: Error: Unrecognized opcode: `popl' > /tmp/ccE5O0vD.s:10: Error: Unrecognized opcode: `ret' This question is not appropriate for the gcc@gcc.gnu.org mailing list. It should be sent to gcc-h...@gcc.gnu.org. Please take any follow ups to gcc-help. Thanks. It looks like this is using the wrong assembler. ../configure CFLAGS=" -march=i686 -O2" --host=i686-pc-linux-gnu --target=powerpc-linux-gnu --prefix=/home/tellabs/GNU/PPC --with-headers=/home/tellabs/GNU/include --with-binutils=/home/tellabs/GNU/PPC/powerpc-linux-gnu/bin I'm not sure, but I think this should be --build=i686-pc-linux-gnu --host=powerpc-linux-gnu. glibc is not a compilation tool and it does not have a target. It only has a host. Ian
Re: status of http://sshproxy.sourceware.org:443/
Loren James Rittle writes: > Since around Wednesday of last week, I have been unable to access > svn+ssh://ljrit...@gcc.gnu.org/svn/gcc through > http://sshproxy.sourceware.org:443/ > > I have not changed any local configuration in some time but definitely > not since it worked the day before that. > > I do not control the outbound gateway that I use but: When I hit > http://sshproxy.sourceware.org:443/ with a plain browser, I see: > "SSH-1.99-OpenSSH_3.9p1" reported back. > > Is everything working from sshproxy.sourceware.org back to gcc.gnu.org? It seems to be. I just did ssh -p 443 sshproxy.sourceware.org and I was connected to gcc.gnu.org as usual. I don't see any problems, but I restarted the port forwarder in case it makes any difference. Ian
inlining problems
Hi list,

I ran into a little issue when trying to force inlining with __attribute__((always_inline)). The reason why I am trying to force the compiler to inline my code is simple: I want to implement handwritten optimizations using SSE intrinsics. However, it seems that gcc is not willing to inline that code anymore. That is why I came up with the idea of trying to force gcc.

There is a problem now. I get tons of error messages that gcc is not able to inline that function. Example error message:

sorry, unimplemented: inlining failed in call to 'const pe::Vector3<typename pe::MathTrait<T1, T2>::AddType> pe::operator+(const pe::Vector3<T1>&, const pe::Vector3<T2>&) [with T1 = double, T2 = double]': function not inlinable

I was trying to put that in a little testcase. However, it seems that I can't reproduce that error with a small code base.

Any ideas? Need more information?

Thanks,
Thomas
Re: inlining problems
Thomas Heller writes:

> I ran into a little issue when trying to force inlining with
> __attribute__((always_inline)). The reason why I am trying to force
> the compiler to inline my code is simple: I want to implement handwritten
> optimizations using SSE intrinsics. However it seems that gcc is not willing
> to inline that code anymore. That is why I came up with the idea of trying to
> force gcc.
> There is a problem now. I get tons of error messages that gcc is not able to
> inline that function.
> Example error message:
> sorry, unimplemented: inlining failed in call to 'const
> pe::Vector3<typename pe::MathTrait<T1, T2>::AddType> pe::operator+(const
> pe::Vector3<T1>&, const pe::Vector3<T2>&) [with T1 = double, T2 = double]':
> function not inlinable
>
> I was trying to put that in a little testcase. However it seems that I can't
> reproduce that error with a small code base.
> Any ideas? Need more information?

This question is not appropriate for the gcc@gcc.gnu.org mailing list. It would be appropriate for gcc-h...@gcc.gnu.org. Please take any followups to gcc-help. Thanks.

Unfortunately, it's basically impossible for us to say anything useful without some sort of test case. I assume you are using the SSE intrinsics from mmintrin.h and friends. Those intrinsics should be reliably inlined. Why is it necessary for you to inline them further?

For whatever it's worth, the development version of gcc gives better messages about why a function can not be inlined.

Ian
Re: loop optimization in gcc
On Mon, Oct 12, 2009 at 12:13 PM, Ian Lance Taylor wrote:
> I'm not really sure what you are asking. gcc supports OpenMP for
> parallelizing loops. That is mostly done in the frontends.

I have been told that OpenMP does parallelizing of loops, but that these types of optimizations are generally done for large clusters of computers (is this correct?), meaning thereby that these optimizations are not used for small-scale systems.

What I aim for in parallelizing is to improve the speed of execution of relatively simple tools used for distribution of packages (e.g. speed improvements in using the dwarf and elf utilities during distribution of software, to test for backward compatibility and dependency checks) on modestly multicore (2 or 4 to start out) processors. A few ideas in this regard are brewing, which I will be trying out along with my team in the near future.

I will also be happy to hear any good ideas for improving the speed of execution by optimizing loops, with a focus on parallelizing them, beyond the ones which are in place in gcc, so that I can try them out with my team.

--
cheers
sandy
Re: loop optimization in gcc
sandeep soni writes: > On Mon, Oct 12, 2009 at 12:13 PM, Ian Lance Taylor wrote: > >> I'm not really sure what you are asking. gcc supports OpenMP for >> parallelizing loops. That is mostly done in the frontends. > > I have been told that openMP does parallelizing of loops, but these > types of optimizations are generally done for a large clusters of > computers (Is this correct?).Meaning thereby that these optimizations > are not used for low scale systems. That is not correct. OpenMP parallelizes loops within a single system, using pthreads. It does not parallelize loops across different systems. Ian
Re: loop optimization in gcc
On Mon, Oct 12, 2009 at 11:28 PM, Ian Lance Taylor wrote:
> sandeep soni writes:
>
>> I have been told that OpenMP does parallelizing of loops, but these
>> types of optimizations are generally done for a large clusters of
>> computers (Is this correct?). Meaning thereby that these optimizations
>> are not used for low scale systems.
>
> That is not correct. OpenMP parallelizes loops within a single
> system, using pthreads. It does not parallelize loops across
> different systems.
>
> Ian

Well, I was under a misconception then. Thanks for clearing this up.

--
cheers
sandy
Re: delete dead feature branches?
On 10/12/2009 10:22 AM, Paolo Bonzini wrote:
> Yep. Anyone deleting dead branches should add a link to the last "live"
> version in branches.html. It seems easier to me to move them under
> branches/dead, and possibly create branches/merged.

Multiple directory levels under branches/ confuse git-svn; it thinks "dead" is a single branch. I'd rather not expand on that usage.

Jason
Re: delete dead feature branches?
On Mon, Oct 12, 2009 at 2:15 PM, Jason Merrill wrote:
> On 10/12/2009 10:22 AM, Paolo Bonzini wrote:
>>
>> Yep. Anyone deleting dead branches should add a link to the last "live"
>> version in branches.html. It seems easier to me to move them under
>> branches/dead, and possibly create branches/merged.
>
> Multiple directory levels under branches/ confuse git-svn; it thinks "dead"
> is a single branch. I'd rather not expand on that usage.

That seems like a huge bug in git-svn, because we already use multiple directory levels under branches. Hint: ibm and redhat and debian.

Thanks,
Andrew Pinski
Re: delete dead feature branches?
On 10/12/2009 05:17 PM, Andrew Pinski wrote:
> That seems like a huge bug in git-svn, because we already use multiple
> directory levels under branches. Hint: ibm and redhat and debian.

Yep, that's why I said "expand". I've thought about fixing that aspect of git-svn, but I'm not sure how it would tell the difference between a branch directory and a directory of branches, given that SVN basically models a filesystem.

Jason
Re: LTO and the inlining of functions only called once.
On 10/10/09 10:40, Richard Guenther wrote:
> Well - that will print one diagnostic per callgraph edge. A bit too
> much, no?

Possibly -- it's not yet clear (to me) how to present this data to users, but it's clearly something they're interested in.

To put things in perspective, the particular person I spoke with spent many days trying to understand why a particular function wasn't being inlined -- presumably they'd see "grep logfile" as a huge improvement over the days and days of twiddling sources, tuning options, etc., even if that presented them with a large amount of data to analyze.

jeff
Re: delete dead feature branches?
Hi,

On Mon, 12 Oct 2009, Jason Merrill wrote:

> On 10/12/2009 10:22 AM, Paolo Bonzini wrote:
>> Yep. Anyone deleting dead branches should add a link to the last "live"
>> version in branches.html. It seems easier to me to move them under
>> branches/dead, and possibly create branches/merged.
>
> Multiple directory levels under branches/ confuse git-svn; it thinks "dead"
> is a single branch. I'd rather not expand on that usage.

I don't think we should necessarily limit ourselves by bugs in foreign tools if it reduces useful information. What about a new top-level directory dead-branches/, not under branches/ but parallel to it? Should be easy to exempt from git-svn handling, shouldn't it?

Ciao,
Michael.
Re: LTO and the inlining of functions only called once.
Hi,

On Mon, 12 Oct 2009, Jeff Law wrote:

> To put things in perspective, the particular person I spoke with spent
> many days trying to understand why a particular function wasn't being
> inlined -- presumably they'd see "grep logfile" as a huge improvement
> over the days and days of twiddling sources, tuning options, etc, even
> if that presented them with a large amount of data to analyze.

If we would listen to such requests by providing the requested information, nothing stops users from asking to have something like that also for other transformations. Like "I've spent days and days analyzing why this loop isn't unrolled; I'd like to have -Winfo-unroll to tell me exactly when a loop is unrolled, and when it isn't, for which reason". Make "loop is unrolled" be $TRANSFORMATION and it becomes silly. I don't think this is reductio ad absurdum.

We have dump files for exactly such information. Maybe the latter could be molded (via a new flag) into something less detailed than now, but still containing the larger decisions.

Ciao,
Michael.
Re: LTO and the inlining of functions only called once.
On 10/12/09 19:18, Michael Matz wrote:
> On Mon, 12 Oct 2009, Jeff Law wrote:
>
>> To put things in perspective, the particular person I spoke with spent
>> many days trying to understand why a particular function wasn't being
>> inlined -- presumably they'd see "grep logfile" as a huge improvement
>> over the days and days of twiddling sources, tuning options, etc, even
>> if that presented them with a large amount of data to analyze.
>
> If we would listen to such requests by providing the requested
> information, nothing stops users from asking to have something like that
> also for other transformations. [...] We have dump files for exactly such
> information. Maybe the latter could be molded (via a new flag) into
> something less detailed than now, but still containing the larger
> decisions.

I'm virtually certain this customer would ask for that precise information about unrolling once they can get it for inline functions :-)

Nothing you've said changes the fact that there is a class of users for whom that information is vital, and we ought to spend some time thinking about how to provide the information in a form they can digest. GCC dumps as they exist today are largely useless for a non-GCC developer trying to understand why a particular transformation did or did not occur in their code. This has come up time and time again, and will continue to do so unless we find a way to provide visibility into the optimizer's decision making.

jeff
Re: loop optimization in gcc
On Sun, 2009-10-11 at 20:20 +0530, sandeep soni wrote:
> Hi All,
>
> I have been studying the gcc code lately as part of my project. I have
> got info from this mailing list about CFG and DFG information. I want
> to know how gcc uses this information to perform loop optimization.
> Does it follow any particular algorithm, or in particular, what are the
> different techniques that it uses to parallelize the code by
> performing loop optimizations? (Correct me please if this sentence
> is not right.)
>
> If possible please provide any pointers to any form of literature
> available regarding it.
>
> Thanks in advance.

Hi, you also might want to take a look at the Graphite project:

http://gcc.gnu.org/wiki/Graphite

where we do loop optimizations and automatic parallelization based on the polytope model. If you need any help, feel free to ask.

Tobias