Re: writing patterns
On Thu, Jul 31, 2014 at 7:41 AM, Prathamesh Kulkarni wrote:
> On Wed, Jul 30, 2014 at 11:49 PM, Prathamesh Kulkarni wrote:
>> On Wed, Jul 30, 2014 at 4:49 PM, Richard Biener wrote:
>>> On Wed, Jul 30, 2014 at 1:11 PM, Richard Biener wrote:
>>>> On Wed, Jul 30, 2014 at 12:49 PM, Prathamesh Kulkarni wrote:
>>>>> Hi,
>>>>>   Sorry to ask a stupid question, but I am having issues writing
>>>>> patterns involving casts.  I am trying to write patterns from
>>>>> simplify_rotate.
>>>>>
>>>>> Could you show me how to write patterns that involve casts?
>>>>> For example:
>>>>>   ((T) ((T2) X << CNT1)) + ((T) ((T2) X >> CNT2))  iff CNT1 + CNT2 == B
>>>>> where T is some unsigned type with bitsize B, and T2 is some type
>>>>> wider than T.  How to express this in the pattern?
>>>>
>>>> [copying gcc@ because of the syntax stuff]
>>>>
>>>> For example (leaving captures as they appear in the pattern above):
>>>>
>>>> (match_and_simplify
>>>>   (plus (convert@2 (lshift (convert@0 X) CNT1))
>>>>         (convert (rshift (convert@1 X) CNT2)))
>>>>   /* Types T2 have to match.  */
>>>>   (if (types_compatible_p (TREE_TYPE (@0), TREE_TYPE (@1))
>>>>        /* Type T should be unsigned.  */
>>>>        && TYPE_UNSIGNED (TREE_TYPE (@2))
>>>>        /* T2 should be wider than T.  */
>>>>        && TYPE_PRECISION (TREE_TYPE (@0)) > TYPE_PRECISION (TREE_TYPE (@2))
>>>>        /* CNT1 + CNT2 == B.  */
>>>>        && wi::eq_p (TYPE_PRECISION (TREE_TYPE (@2)),
>>>>                     wi::add (CNT1, CNT2))))
>>>>   (lrotate CNT1))
>>>
>>> Note that this will _explicitly_ require a conversion.  That is, if
>>> T == T2 it won't match because the conversion to T will not be there,
>>> nor will it match if X is already of type T2.
>>>
>>> Which means that we want to match multiple variants of this (with
>>> conversions in place or not).  Hmm.  Maybe by extending 'for' like
>>>
>>> (for cvt1 in convert *
>>>  (for cvt2 in convert *
>>>   (plus@2 (cvt1 (lshift@0 (cvt2 X) CNT1))
>>>           (cvt1 (rshift@1 (cvt2 X) CNT2)))
>>>   ...
>>>
>>> adding an "empty" operator to the list of operators to substitute for
>>> cvt1 and allowing nested for.  The "empty" operator would necessarily
>>> be unary and be just un-wrapped.
>> Would it be better to have syntax (say using ?) for denoting that an
>> operator is optional?  The operator should be unary, and its operand
>> must be an expression.
>>
>> So the pattern becomes something like:
>> (plus@2 (convert? (lshift@0 (convert? X) CNT1))
>>         (convert? (rshift@1 (convert? X) CNT2)))
>>
> Let me rephrase it.
> An operator can be marked optional if
> a) it is unary;
> b) if it is the outermost expression, the operand must be an expression.
>
> I want to reject cases like:
> (negate? @0)
>
> (op? operand)
> generates code:
>   match (op)
>     match (operand)
>
> and once without op:
>   match (operand)
>
> Implementation-wise I think we could duplicate the AST, like we do
> for the "for" pattern.
> Would that be okay?

I thought of something similar but how exactly would you do the
duplication in the above example?  The point is that we know that the
conversions will exist in pairs, that is, either the two outer and no
inner, or no outer and both inner, or both outer and both inner.  You
can express that with the nested for - with just a '?' you can't do
that.  Of course you could do something like

(plus@2 (convert?1 (lshift@0 (convert?2 X) CNT1))
        (convert?1 (rshift@1 (convert?2 X) CNT2)))

that is, add an index to the ?s and tie them together.  We want to
avoid generating useless patterns - in this case 4 are sufficient, but
if we generate all possible combinations we'd have an additional 12
useless patterns.

But using '?' indeed looks better than (ab)using 'for'.  Note that it
is _only_ 'convert' that I can see this being useful for (or can you
think of other cases?).  So maybe we instead want to make it a special
operator like

(plus@2 (opt_convert 1 (lshift@0 (opt_convert 2 X) CNT1))
        (opt_convert 1 (rshift@1 (opt_convert 2 X) CNT2)))

with an additional operand specifying the group (or simply have
opt_convert1 and opt_convert2 operators - hoping for more "groups" to
never happen ;)).

Actually I like opt_convert[123] most.

Richard.

> Thanks,
> Prathamesh
>
>> Thanks,
>> Prathamesh
>>
>>> Extending for in this way avoids treating conversions specially
>>> (I don't see an easy way to do very well at that automagically).  We
>>> need multiple patterns in the decision tree here anyway.
>>>
>>> Now guess something fancy in place of '*' ...
>>>
>>> Lisp style would be less free-form, like
>>>
>>> (for cvt (convert ())
>>>
>>> where we could use an empty list for the "empty" operator.
>>>
>>> Is nested for support already implemented?
>>>
>>> Thanks,
>>> Richard.
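As a concrete example of the idiom under discussion, here is a minimal C
sketch assuming T = unsigned char (bitsize B = 8) and T2 = unsigned int;
the function name and the fixed width are chosen only for illustration.

  /* ((T) ((T2) X << CNT1)) + ((T) ((T2) X >> CNT2)) with CNT1 + CNT2 == B:
     for 0 <= cnt <= 8 the two shifted parts do not overlap in the wider
     type, so truncating the sum back to unsigned char yields a left
     rotate of x by cnt.  */
  unsigned char
  rotate_left_8 (unsigned char x, unsigned int cnt)
  {
    return (unsigned char) (((unsigned int) x << cnt)
                            + ((unsigned int) x >> (8 - cnt)));
  }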
Re: writing patterns
On Thu, Jul 31, 2014 at 2:15 PM, Richard Biener wrote:
> [...]
> I thought of something similar but how exactly would you do the
> duplication in the above example?  The point is that we know that the
> conversions will exist in pairs, that is, either the two outer and no
> inner, or no outer and both inner, or both outer and both inner.  You
> can express that with the nested for - with just a '?' you can't do
> that.  Of course you could do something like
>
> (plus@2 (convert?1 (lshift@0 (convert?2 X) CNT1))
>         (convert?1 (rshift@1 (convert?2 X) CNT2)))
>
> that is, add an index to the ?s and tie them together.  We want to
> avoid generating useless patterns - in this case 4 are sufficient, but
> if we generate all possible combinations we'd have an additional 12
> useless patterns.
Ah yes, didn't realize that, thanks.
>
> But using '?' indeed looks better than (ab)using 'for'.  Note that it
> is _only_ 'convert' that I can see this being useful for (or can you
> think of other cases?).  So maybe we instead want to make it a special
> operator like
>
> (plus@2 (opt_convert 1 (lshift@0 (opt_convert 2 X) CNT1))
>         (opt_convert 1 (rshift@1 (opt_convert 2 X) CNT2)))
>
> with an additional operand specifying the group (or simply have
> opt_convert1 and opt_convert2 operators - hoping for more "groups" to
> never happen ;)).
>
> Actually I like opt_convert[123] most.
I like opt_convert[123] too.
Implementation-wise I don't think it would be more difficult to
generalize it for other operators ...

I was thinking about this ... Instead of grouping by indexes, how about
defining operator aliases?  Something like:

(define_op cvt1 convert)
(define_op cvt2 convert)

and then the pattern becomes:

(plus@2 (cvt1? (lshift@0 (cvt2? X) CNT1))
        (cvt1? (rshift@1 (cvt2? X) CNT2)))

That would avoid generating useless patterns (since cvt1 and cvt2 are
"different"), however during code-gen both s
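To make the "conversions exist in pairs" point concrete, here are two of
the four valid forms sketched in C (again only an illustration, with
invented function names): the form with no conversions at all (T == T2)
and the form with only the outer conversions (X already of the wider
type T2).

  /* T == T2: no conversions appear, so a pattern that hard-codes the
     converts would not match this.  Valid for 0 < n < 32.  */
  unsigned int
  rot32 (unsigned int x, unsigned int n)
  {
    return (x << n) + (x >> (32 - n));
  }

  /* Only the outer conversions remain when X is already of the wider
     type T2; here x is assumed to carry an 8-bit value (upper bits
     zero) and 0 < n < 8.  */
  unsigned char
  rot8_outer_only (unsigned int x, unsigned int n)
  {
    return (unsigned char) (x << n) + (unsigned char) (x >> (8 - n));
  }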
Re: writing patterns
On Thu, Jul 31, 2014 at 11:09 AM, Prathamesh Kulkarni wrote:
> [...]
>> Actually I like opt_convert[123] most.
> I like opt_convert[123] too.
> Implementation-wise I don't think it would be more difficult to
> generalize it for other operators ...

Of course, but I don't see the need for that.  Conversions are really
special in this regard.
About the most time consuming compiler process
Hi,
I want to find the most time-consuming compiler phase.  The time report is as follows:

                           usr     sys     wall
  TOTAL                  759.59   92.92  1396.19
  phase generate         498.34   31.10   870.94
  template instantiation  90.49   13.94    69.89
  phase parsing          245.56   59.55   465.90
  phase cgraph           418.14   21.59   443.51

Why is so much time spent in phase parsing?  It only does the parsing job, so its proportion should be small.  How can this be explained?

Thank you.
Best Regards.
Geng.
Re: About the most time consuming compiler process
On 07/31/2014 11:25 AM, Gengyulei (Gengyl) wrote:
> How to explain it?

We can't.  You haven't told us what is being parsed or even which
language it's written in.  Note that GCC's parsing pass isn't simply a
parse: it has to build trees, which can be an expensive process.

Andrew.
Re: About the most time consuming compiler process
Thank you for your answer.  I compiled the Android source code.

-----Original Message-----
From: Andrew Haley [mailto:a...@redhat.com]
Sent: 31 July 2014 18:54
To: Gengyulei (Gengyl); gcc@gcc.gnu.org
Subject: Re: About the most time consuming compiler process

On 07/31/2014 11:25 AM, Gengyulei (Gengyl) wrote:
> How to explain it?

We can't.  You haven't told us what is being parsed or even which
language it's written in.  Note that GCC's parsing pass isn't simply a
parse: it has to build trees, which can be an expensive process.

Andrew.
Re: GCC version bikeshedding
On Wed, Jul 30, 2014 at 8:00 PM, Jonathan Wakely wrote:
> On 30 July 2014 23:18, Eric Botcazou wrote:
>>> What are you objecting to, calling the next release from trunk 5.0,
>>> and the next one after that 6.0?  Or the wording chosen to describe
>>> the new versioning scheme?
>>
>> Let's not start another subthread, please, this will be even more
>> confusing.
>
> I'm not.  I'm trying to get your message back on topic.
>
>> You can reply to my reply to Ian's message if you deem it necessary.
>
> Are you objecting to the numbering scheme, or Ian's description of it?
>
> If you have an objection to the concrete plan it would be nice if you
> stated clearly what it is.

One thing you might want to consider is that with the typical X.Y.Z
versioning of most GNU projects, changing X allows breaking
compatibility in a significant way with previous versions.  While Z
fixes regressions and Y adds new features, X is a place to make
infrequent but paradigm-shifting changes that are unconstrained by a
desire to stay backwards compatible with older values of X.

By going to what is essentially a Y.Z.0 release mechanism, you lose
that ability to some degree.  Maybe that's OK in a mature project like
GCC.
[GSoC] checking for the loop parallelism
Tobias, could you please advise me on how best to determine the depth
of a loop (it is necessary for loop_is_parallel_p)?  In
graphite-clast-to-gimple.c it is done using the number of dimensions of
the domain that is attached to the clast_for.  As far as I know, isl
doesn't give such an opportunity.  If I'm not mistaken, 2 * (level + 1)
is also not suitable for all cases.

--
Cheers, Roman Gareev.
Re: [GSoC] checking for the loop parallelism
On 31/07/2014 16:41, Roman Gareev wrote:
> Tobias, could you please advise me on how best to determine the depth
> of a loop (it is necessary for loop_is_parallel_p)?  In
> graphite-clast-to-gimple.c it is done using the number of dimensions
> of the domain that is attached to the clast_for.  As far as I know,
> isl doesn't give such an opportunity.  If I'm not mistaken,
> 2 * (level + 1) is also not suitable for all cases.

Hi Roman,

you can get this information from the isl_ast_build that was used when
generating a certain loop (you can access this isl_ast_build from the
callbacks registered with isl_ast_build_set_before_each_for and
isl_ast_build_set_after_each_for).  With isl_ast_build_get_schedule you
can get an incomplete schedule (fewer dimensions than the schedule you
gave to the isl AST generator).  Specifically, it only contains the
dimensions of the current loop and all surrounding ones.  Consequently,
the last dimension in this incomplete schedule is the dimension you
want to check for parallelism.

If you use this schedule in loop_level_carries_dependences instead of
the result from scop_get_transformed_schedule, you should even be able
to detect parallelism for something like the following:

  for (i)
    A[i] += i

  for (i)
    A[0] += i

Here the first loop is parallel, but the second is not.  The old
approach would have said this dimension is not parallel at all.  If you
use the schedule provided, you only look at the statements involved in
a certain loop, so the first loop would be detected as parallel.

Tobias
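A rough, untested sketch of what this could look like follows; the isl
entry points used are the ones named above, but the exact callback
signature should be checked against the isl version in use, and
last_dimension_is_parallel is only a hypothetical placeholder for the
dependence check (e.g. the loop_level_carries_dependences logic), not
an existing function.

  #include <isl/ctx.h>
  #include <isl/id.h>
  #include <isl/union_map.h>
  #include <isl/ast_build.h>

  /* Hypothetical placeholder: real code would run the dependence check
     on the innermost (last) dimension of SCHEDULE.  */
  static int
  last_dimension_is_parallel (__isl_keep isl_union_map *schedule)
  {
    (void) schedule;
    return 0;
  }

  /* Callback intended for isl_ast_build_set_before_each_for.  */
  static __isl_give isl_id *
  before_each_for (__isl_keep isl_ast_build *build, void *user)
  {
    (void) user;

    /* Partial schedule: only the dimension of the loop being generated
       and the dimensions of the surrounding loops are present, so its
       last dimension is the one to test for parallelism.  */
    isl_union_map *schedule = isl_ast_build_get_schedule (build);
    isl_ctx *ctx = isl_union_map_get_ctx (schedule);
    int parallel = last_dimension_is_parallel (schedule);
    isl_union_map_free (schedule);

    /* Annotate the generated loop so it can be marked parallel later.  */
    return isl_id_alloc (ctx, parallel ? "parallel" : "sequential", NULL);
  }

The callback would then be registered with something like
isl_ast_build_set_before_each_for (build, before_each_for, NULL) before
invoking the AST generator.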
Build failure for sparc-sun-solaris2.10
Hi.  I've had no luck building GCC on a Sparc-based Solaris machine here since early March - right around the time some LTO-related patches landed.  I've started trying again and the build fails when linking libgcc_s.so at the end of stage1.  Here's the end of the build log:

{ ... snip ... }
/bin/bash /export/home/arth/src/gcc.git/libgcc/../mkinstalldirs sparcv9
mkdir -p -- sparcv9
/export/home/arth/src/gcc-0731/./gcc/xgcc -B/export/home/arth/src/gcc-0731/./gcc/ -B/export/home/arth/local/sparc-sun-solaris2.10/bin/ -B/export/home/arth/local/sparc-sun-solaris2.10/lib/ -isystem /export/home/arth/local/sparc-sun-solaris2.10/include -isystem /export/home/arth/local/sparc-sun-solaris2.10/sys-include -O2 -g -O2 -DIN_GCC -W -Wall -Wno-narrowing -Wwrite-strings -Wcast-qual -Wno-format -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -isystem ./include -fPIC -g -DIN_LIBGCC2 -fbuilding-libgcc -fno-stack-protector -shared -nodefaultlibs -Wl,--soname=libgcc_s.so.1 -Wl,--version-script=libgcc.map -o sparcv9/libgcc_s.so.1.tmp -g -O2 -m64 -B./ _muldi3_s.o _negdi2_s.o _lshrdi3_s.o _ashldi3_s.o _ashrdi3_s.o _cmpdi2_s.o _ucmpdi2_s.o _clear_cache_s.o _trampoline_s.o __main_s.o _absvsi2_s.o _absvdi2_s.o _addvsi3_s.o _addvdi3_s.o _subvsi3_s.o _subvdi3_s.o _mulvsi3_s.o _mulvdi3_s.o _negvsi2_s.o _negvdi2_s.o _ctors_s.o _ffssi2_s.o _ffsdi2_s.o _clz_s.o _clzsi2_s.o _clzdi2_s.o _ctzsi2_s.o _ctzdi2_s.o _popcount_tab_s.o _popcountsi2_s.o _popcountdi2_s.o _paritysi2_s.o _paritydi2_s.o _powisf2_s.o _powidf2_s.o _powixf2_s.o _powitf2_s.o _mulsc3_s.o _muldc3_s.o _mulxc3_s.o _multc3_s.o _divsc3_s.o _divdc3_s.o _divxc3_s.o _divtc3_s.o _bswapsi2_s.o _bswapdi2_s.o _clrsbsi2_s.o _clrsbdi2_s.o _fixunssfsi_s.o _fixunsdfsi_s.o _fixunsxfsi_s.o _fixsfdi_s.o _fixdfdi_s.o _fixxfdi_s.o _fixtfdi_s.o _fixunssfdi_s.o _fixunsdfdi_s.o _fixunsxfdi_s.o _fixunstfdi_s.o _floatdisf_s.o _floatdidf_s.o _floatdixf_s.o _floatditf_s.o _floatundisf_s.o _floatundidf_s.o _floatundixf_s.o _floatunditf_s.o _divdi3_s.o _moddi3_s.o _udivdi3_s.o _umoddi3_s.o _udiv_w_sdiv_s.o _udivmoddi4_s.o enable-execute-stack_s.o unwind-dw2_s.o unwind-dw2-fde-dip_s.o unwind-sjlj_s.o unwind-c_s.o emutls_s.o libgcc.a -lc && rm -f sparcv9/libgcc_s.so && if [ -f sparcv9/libgcc_s.so.1 ]; then mv -f sparcv9/libgcc_s.so.1 sparcv9/libgcc_s.so.1.backup; else true; fi && mv sparcv9/libgcc_s.so.1.tmp sparcv9/libgcc_s.so.1 && ln -s libgcc_s.so.1 sparcv9/libgcc_s.so
collect2: fatal error: ld terminated with signal 10 [Bus Error], core dumped
compilation terminated.
gmake[5]: *** [libgcc_s.so] Error 1
gmake[5]: Leaving directory `/export/home/arth/src/gcc-0731/sparc-sun-solaris2.10/sparcv9/libgcc'
gmake[4]: *** [multi-do] Error 1
gmake[4]: Leaving directory `/export/home/arth/src/gcc-0731/sparc-sun-solaris2.10/libgcc'
gmake[3]: *** [all-multi] Error 2
gmake[3]: *** Waiting for unfinished jobs
gmake[3]: Leaving directory `/export/home/arth/src/gcc-0731/sparc-sun-solaris2.10/libgcc'
gmake[2]: *** [all-stage1-target-libgcc] Error 2
gmake[2]: Leaving directory `/export/home/arth/src/gcc-0731'
gmake[1]: *** [stage1-bubble] Error 2
gmake[1]: Leaving directory `/export/home/arth/src/gcc-0731'
gmake: *** [bootstrap] Error 2

The machine has the 2.24 release of GNU binutils installed.
The crash leaves a core file which contains this backtrace:

$ cd ~/src/gcc-0731/sparc-sun-solaris2.10/sparcv9/libgcc
$ pstack core
core 'core' of 20626: /export/home/arth/local/bin/ld -plugin /export/home/arth/src/gcc-0731/
 fbb54bc0 claim_file_handler (ffbfb664, ffbfb5fc, 0, 0, 0, 0) + 38
 0003198c plugin_maybe_claim (ffbfb664, 186840, 2, 181000, 181000, 189220) + 6c
 0002ecf0 ldfile_try_open_bfd (1, 186840, fbb43800, 0, 6, ffbfbf59) + 1b0
 0002f3ec ldfile_open_file (186840, 1, e8070, fba563a0, fbb3e3c0, fbb485b8) + ac
 00022560 load_symbols (186840, ffbfb7a0, , fff8, 0, 186840) + 20
 000233a0 open_input_bfds (186840, 0, 1b174, 1, 1859ac, 1811ac) + 1a0
 000259a4 lang_process (185800, 185800, 181000, 0, 181000, 1859d8) + a4
 000129e4 main (185a64, 181000, 185800, 185ab8, 1859d8, 0) + 604
 00012f9c _start (0, 0, 0, 0, 0, 0) + 5c
$

I hope the backtrace is helpful to some degree even though the crash is in the GNU 'ld' linker.

When looking over the mailing list archives I found the long thread regarding the LTO linker change and the resulting fallout:

Patch lands: https://gcc.gnu.org/ml/gcc-patches/2014-03/msg00157.html
Maintainer notes it breaks the build: https://gcc.gnu.org/ml/gcc-patches/2014-03/msg00391.html

After several days a fix was applied, but my builds still fail.  Is there a configuration change needed to activate the fix?

Here's the info for my last successful GCC build:

$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/export/home/arth/local/libexec/gcc/sparc-sun-solaris2.10/4.9.0/lto-wrapper
Target: sparc-sun-solaris2.10
Configured with: /export/home/arth/src/gcc.git/configure --prefix=/export/home/arth/local
Implementation of Zeta functions in libstdc++
Hi!
I've noticed that gcc's libstdc++ has implementations of the zeta function and the Hurwitz zeta function.  Is there any work going on with them?  Would an implementation of hurwitz_zeta in terms of the polylogarithm be helpful?

Thanks,
Florian.
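For reference, one standard identity connecting the two functions is
Hurwitz's formula, which expresses the Hurwitz zeta function through the
polylogarithm Li_s; whether an implementation along these lines is the
right approach is exactly the question above.  In LaTeX notation, for
0 < a <= 1 and Re s > 1:

  \zeta(1-s,\,a) = \frac{\Gamma(s)}{(2\pi)^{s}}
    \left[ e^{-\pi i s/2}\,\mathrm{Li}_{s}\!\left(e^{2\pi i a}\right)
         + e^{\pi i s/2}\,\mathrm{Li}_{s}\!\left(e^{-2\pi i a}\right) \right]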
Re: writing patterns
Richard Biener writes:
> On Wed, Jul 30, 2014 at 12:49 PM, Prathamesh Kulkarni wrote:
>> Hi,
>>   Sorry to ask a stupid question, but I am having issues writing
>> patterns involving casts.  I am trying to write patterns from
>> simplify_rotate.
>>
>> Could you show me how to write patterns that involve casts?
>> For example:
>>   ((T) ((T2) X << CNT1)) + ((T) ((T2) X >> CNT2))  iff CNT1 + CNT2 == B
>> where T is some unsigned type with bitsize B, and T2 is some type
>> wider than T.  How to express this in the pattern?
>
> [copying gcc@ because of the syntax stuff]
>
> [...]
>
> which suggests that we may want to add some type capturing / matching
> so we can maybe write
>
> (plus (convert@T (lshift (convert@T2 X) CNT1))
>       (convert (rshift (convert@T2 X) CNT2)))
> (if (/* T2s will be matched automagically */
>      && TYPE_UNSIGNED (@T)
>      && TYPE_PRECISION (@T2) > TYPE_PRECISION (@T)
>      && wi::eq_p (TYPE_PRECISION (@T), wi::add (CNT1, CNT2))))
>
> which is less to type and supports requiring matching types.  Maybe
> require @T[0-9]+ here, thus use @T0 and disallow plain @T.  We could
> then also use @T for the implicitly "captured" outermost type we refer
> to as plain 'type' right now.

Would it also be worth trying to push more of the type properties into
the pattern, a bit like md predicates?  (Not the same syntax though,
obviously.)  Was just thinking that postponing things like
"TYPE_UNSIGNED (@T)" until the whole tree has been matched could be
inefficient in some cases.

Might be going over old ground though, sorry.

Richard
gcc-4.8-20140731 is now available
Snapshot gcc-4.8-20140731 is now available on

  ftp://gcc.gnu.org/pub/gcc/snapshots/4.8-20140731/

and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.8 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-4_8-branch revision 213387

You'll find:

 gcc-4.8-20140731.tar.bz2             Complete GCC

  MD5=151f200167f44addb1eb323f59a5b079
  SHA1=a77d2cdbcb95950691e2fd8e3ad2d7979a934c4e

Diffs from 4.8-20140724 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.8
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.
Re: GCC version bikeshedding
On Thu, Jul 31, 2014 at 4:52 AM, NightStrike wrote:
>
> One thing you might want to consider is that with the typical X.Y.Z
> versioning of most GNU projects, changing X allows breaking
> compatibility in a significant way with previous versions.  While Z
> fixes regressions and Y adds new features, X is a place to make
> infrequent but paradigm-shifting changes that are unconstrained by a
> desire to stay backwards compatible with older values of X.
>
> By going to what is essentially a Y.Z.0 release mechanism, you lose
> that ability to some degree.  Maybe that's OK in a mature project like
> GCC.

I believe the GCC project has become too large to be able to usefully
speak about breaking compatibility with previous versions.  There are
too many different moving parts.  We break compatibility in various
minor ways with every release.  We try pretty hard to never break
compatibility in a big way.  Historically, as far as I can recall, the
GCC major release number has never indicated a compatibility break that
was relevant to most users.

Ian
Re: GCC version bikeshedding
On Tue, Jul 29, 2014 at 10:27 AM, Markus Trippelsdorf wrote:
> On 2014.07.29 at 19:14 +0200, Richard Biener wrote:
>> On July 29, 2014 6:45:13 PM CEST, Eric Botcazou wrote:
>>>> I think that if anybody has strong objections, now is the time to
>>>> make them.  Otherwise I think we should go with this plan.
>>>
>>> IMHO the cure is worse than the disease.
>>>
>>>> Given that there is no clear reason to ever change the major version
>>>> number, making that change will not convey any useful information to
>>>> our users.  So let's just drop the major version number.  Once we've
>>>> made that decision, then the next release (in 2015) naturally becomes
>>>> 5.0, the release after that (in 2016) becomes 6.0, etc.
>>>
>>> I don't really understand the "naturally": if you drop the major
>>> version number, the next release should be 10.0, not 5.0.
>>
>> 10.0 would be even better from a marketing perspective.
>
> Since gcc is released annually why not tie the version to the year of
> the release, instead of choosing an arbitrary number?
>
> 15.0

Personally I would prefer that we retain the ability to make more rapid
releases.

Ian
Re: GCC version bikeshedding
On Tue, Jul 29, 2014 at 9:45 AM, Eric Botcazou wrote:
>> I think that if anybody has strong objections, now is the time to make
>> them.  Otherwise I think we should go with this plan.
>
> IMHO the cure is worse than the disease.

What do you propose that we do?

>> Given that there is no clear reason to ever change the major version
>> number, making that change will not convey any useful information to
>> our users.  So let's just drop the major version number.  Once we've
>> made that decision, then the next release (in 2015) naturally becomes
>> 5.0, the release after that (in 2016) becomes 6.0, etc.
>
> I don't really understand the "naturally": if you drop the major
> version number, the next release should be 10.0, not 5.0.  Here we seem
> to be leaning towards a weird scheme where we retain the major version
> number but change its meaning, which will be even more confusing than
> the current scheme.

Step 1: We agree that the current major revision number conveys no
information, and therefore we will change the major revision number
with every release.  (I understand that you do not agree with this.)

Step 2: Assuming we agree about step 1, what should the next version
number be?  Well, the current version is 4.9.  Therefore, the next
version should be 5.0.  That seems entirely natural to me.  Having the
next release be 10.0 would make no sense to anybody who is not an
active GCC developer.

Ian
Re: GCC version bikeshedding
On 07/31/2014 07:03 PM, Ian Lance Taylor wrote:
> On Thu, Jul 31, 2014 at 4:52 AM, NightStrike wrote:
>> One thing you might want to consider is that with the typical X.Y.Z
>> versioning of most GNU projects, changing X allows breaking
>> compatibility in a significant way with previous versions.  [...]
>
> I believe the GCC project has become too large to be able to usefully
> speak about breaking compatibility with previous versions.  There are
> too many different moving parts.  We break compatibility in various
> minor ways with every release.  We try pretty hard to never break
> compatibility in a big way.  Historically, as far as I can recall, the
> GCC major release number has never indicated a compatibility break
> that was relevant to most users.
>
> Ian

What about bumping the default compiler front-end versions to C11 or
C++11 or C++14?  Even for bootstrap?  There may be some breaking
changes that are larger than usual.

FWIW, I do not object to going to 5.0, 6.0.

Ed
Re: GCC version bikeshedding
On Thu, Jul 31, 2014 at 4:22 PM, Ed Smith-Rowland <3dw...@verizon.net> wrote:
> On 07/31/2014 07:03 PM, Ian Lance Taylor wrote:
>> I believe the GCC project has become too large to be able to usefully
>> speak about breaking compatibility with previous versions.  There are
>> too many different moving parts.  We break compatibility in various
>> minor ways with every release.  We try pretty hard to never break
>> compatibility in a big way.  Historically, as far as I can recall, the
>> GCC major release number has never indicated a compatibility break
>> that was relevant to most users.
>>
> What about bumping the default compiler front-end versions to C11 or
> C++11 or C++14?  Even for bootstrap?
> There may be some breaking changes that are larger than usual.

C++ is only one of several languages supported by GCC these days.
Changing the default C++ version will not affect Fortran, Java, or Ada
users.  Or, for that matter, C users.

Ian
Re: Implementation of Zeta functions in libstdc++
On 07/31/2014 01:03 PM, Florian Goth wrote:
> Hi!
> I've noticed that gcc's libstdc++ has implementations of the zeta
> function and the Hurwitz zeta function.  Is there any work going on
> with them?  Would an implementation of hurwitz_zeta in terms of the
> polylogarithm be helpful?
> Thanks,
> Florian.

The TR1 math functions are in deep maintenance mode - generally we
don't enhance them.  On the other hand, there is TR 29124, Special math
functions, which is essentially the same thing.

To contribute you need copyright assignment.  See
https://gcc.gnu.org/contribute.html.

Otherwise I, for one, would be interested in this contribution.

Ed