SPEC 2006 - binary size comparison for Options that control Optimization
Hello,

the following table compares optimization levels: -O0, -Os, -O1 through -O3, and -Ofast. The columns include every ELF section larger than 5% of a binary. Apart from that, I took -O2 as the base option and tried to disable each option enabled at this level. Similarly, I measured the impact of the rest of the optimizations enabled in -O3 or -Ofast. That gives us about 100 configurations that can be compared for speed. I would like to ask Honza to mark the configurations that make sense to be run for a speed comparison.

Thanks,
Martin

|                                     | .rodata | .gcc_except_table | .data   | .bss    | .debug_str | .symtab | .eh_frame | .debug_info | .strtab | .text   | SIZE    |
|-------------------------------------|---------|-------------------|---------|---------|------------|---------|-----------|-------------|---------|---------|---------|
| Os                                  |  93.90% |            77.38% |  98.93% |  97.85% |    109.82% |  99.97% |    94.50% |     125.83% | 100.75% |  79.24% |  86.23% |
| O1                                  | 100.26% |            87.20% | 100.00% | 100.01% |    100.00% |  99.52% |    93.15% |     100.00% |  99.21% |  93.46% |  95.49% |
| O2_fbranch_probabilities            | 100.00% |            97.87% | 100.00% | 100.00% |    100.00% |  99.72% |    99.65% |     100.00% |  99.46% |  98.25% |  98.71% |
| O2_fwrapv                           | 100.00% |           100.02% | 100.00% | 100.00% |    100.00% |  99.99% |    99.71% |     100.00% | 100.00% |  98.61% |  98.90% |
| O2_fno_align_jumps                  | 100.00% |            99.97% | 100.00% | 100.00% |    100.00% | 100.00% |    99.99% |     100.00% | 100.00% |  98.19% |  98.94% |
| O2_fno_align_functions              | 100.00% |           100.02% | 100.00% | 100.00% |    100.00% | 100.00% |    99.99% |     100.00% | 100.00% |  98.65% |  99.27% |
| O2_fno_tree_pre                     |  99.83% |           100.46% | 100.00% | 100.00% |    100.00% |  99.58% |    99.66% |     100.00% |  99.88% |  98.94% |  99.29% |
| O2_fno_align_loops                  | 100.00% |            99.98% | 100.00% | 100.00% |    100.00% | 100.00% |    99.96% |     100.00% | 100.00% |  99.05% |  99.42% |
| O2_fipa_pta                         | 100.00% |            98.64% | 100.00% | 100.00% |    100.00% |  99.96% |   100.39% |     100.00% |  99.99% |  99.32% |  99.47% |
| O2_fno_caller_saves                 | 100.00% |           100.08% | 100.00% | 100.00% |    100.00% | 100.01% |   100.10% |     100.00% | 100.00% |  99.29% |  99.54% |
| O2_fno_inline_small_functions       | 100.01% |            99.41% | 100.00% | 100.01% |    100.00% | 100.13% |   100.31% |     100.00% | 100.13% |  99.23% |  99.66% |
| O2_fno_devirtualize_speculatively   | 100.00% |            98.24% | 100.00% | 100.00% |    100.00% |  99.98% |    99.87% |     100.00% |  99.98% |  99.50% |  99.73% |
| O2_fno_devirtualize                 | 100.00% |            98.26% | 100.00% | 100.00% |    100.00% |  99.98% |    99.87% |     100.00% |  99.98% |  99.50% |  99.73% |
| O2_ftree_slp_vectorize              | 100.82% |           100.00% | 100.00% | 100.00% |    100.00% |  99.87% |    99.07% |     100.00% |  99.88% |  99.89% |  99.92% |
| O2_fira_loop_pressure               | 100.00% |           100.02% | 100.00% | 100.00% |    100.00% |  99.98% |    99.97% |     100.00% | 100.00% |  99.90% |  99.97% |
| O2_fno_optimize_sibling_calls       | 100.00% |            99.99% | 100.00% | 100.00% |    100.00% | 100.02% |    99.14% |     100.00% | 100.01% |  99.99% |  99.98% |
| O2_fsignaling_nans                  |  99.56% |           100.02% | 100.00% | 100.00% |    100.00% |  99.81% |   100.04% |     100.00% | 100.03% | 100.00% |  99.98% |
| O2_fgcse_after_reload               | 100.00% |           100.04% | 100.00% | 100.00% |    100.00% | 100.00% |   100.00% |     100.00% | 100.00% |  99.98% |  99.98% |
| O2_fno_ipa_sra                      | 100.01% |            99.90% | 100.00% | 100.00% |    100.00% |  99.98% |    99.97% |     100.00% |  99.81% |  99.98% |  99.98% |
| O2_fno_reorder_blocks_and_partition | 100.00% |            99.68% | 100.00% | 100.00% |    100.00% | 100.00% |   100.00% |     100.00% |  99.79% |  99.99% |  99.99% |
| O2_fsched2_use_superblocks          | 100.00% |           100.00% | 100.00% | 100.00% |    100.00% | 100.00% |   100.01% |     100.00% | 100.00% |  99.99% |  99.99% |
| O2_fdata_sections                   |  99.88% |           100.00% |  99.73% |  99.63% |    100.00% | 100.00% |   100.00% |     100.00% | 100.00% | 100.00% |  99.99% |
| O2_fno_reorder_functio
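The percentages above are per-section sizes of each configuration relative to the -O2 baseline. As a minimal sketch of how such a comparison can be computed (the function name and the byte counts below are made-up illustrations, not actual SPEC data):

```python
def size_ratios(baseline, variant):
    """Per-section size of `variant` as a percentage of `baseline`."""
    return {sec: 100.0 * variant[sec] / baseline[sec]
            for sec in baseline if sec in variant}

# Hypothetical byte counts for two builds of the same binary.
o2_sections = {".text": 1000000, ".rodata": 200000}
os_sections = {".text": 792400, ".rodata": 187800}

ratios = size_ratios(o2_sections, os_sections)
# .text comes out at 79.24% of the -O2 size, .rodata at 93.90%,
# matching the kind of numbers shown in the Os row above.
```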
Re: SPEC 2006 - binary size comparison for Options that control Optimization
On Tue, Jul 15, 2014 at 12:45 AM, Martin Liška wrote:
> Hello,
> following table compares optimization levels as -O0, -Os, -O1-3 and
> -Ofast. Columns in the table include all ELF sections bigger than 5% for a
> binary. Apart from that I took -O2 as a base option and I tried to disable
> every option in this level. Similarly I measured impact of the rest of
> optimizations enabled in O3 or Ofast. That gives us about 100 configurations
> that can be compared for speed. I would like to ask Honza to mark
> configurations that make sense to be run for speed comparison.

Can you remove debug_str and debug_info from this list as they only
change the file size and not the mapped-in size?

Thanks,
Andrew

> Thanks,
> Martin
Re: SPEC 2006 - binary size comparison for Options that control Optimization
On 07/15/2014 09:50 AM, Andrew Pinski wrote:
> On Tue, Jul 15, 2014 at 12:45 AM, Martin Liška wrote:
>> Hello,
>> following table compares optimization levels as -O0, -Os, -O1-3 and
>> -Ofast. Columns in the table include all ELF sections bigger than 5% for a
>> binary. Apart from that I took -O2 as a base option and I tried to disable
>> every option in this level. Similarly I measured impact of the rest of
>> optimizations enabled in O3 or Ofast. That gives us about 100 configurations
>> that can be compared for speed. I would like to ask Honza to mark
>> configurations that make sense to be run for speed comparison.
>>
>> Thanks,
>> Martin
>
> Can you remove debug_str and debug_info from this list as they only change
> the file size and not the mapped in size?
>
> Thanks,
> Andrew

Sure, I added a new column, SIZE_WITHOUT_DEBUG, where I compare the sum of all sections whose names do not match '.debug.*'.

Martin

|                                     | .bss    | .data   | .debug_info | .debug_str | .eh_frame | .gcc_except_table | .rodata | .strtab | .symtab | .text   | SIZE    | SIZE_WITHOUT_DEBUG |
|-------------------------------------|---------|---------|-------------|------------|-----------|-------------------|---------|---------|---------|---------|---------|--------------------|
| Os                                  |  97.85% |  98.93% |     125.83% |    109.82% |    94.50% |            77.38% |  93.90% | 100.75% |  99.97% |  79.24% |  86.23% |             85.60% |
| O1                                  | 100.01% | 100.00% |     100.00% |    100.00% |    93.15% |            87.20% | 100.26% |  99.21% |  99.52% |  93.46% |  95.49% |             95.36% |
| O2_fbranch_probabilities            | 100.00% | 100.00% |     100.00% |    100.00% |    99.65% |            97.87% | 100.00% |  99.46% |  99.72% |  98.25% |  98.71% |             98.68% |
| O2_fwrapv                           | 100.00% | 100.00% |     100.00% |    100.00% |    99.71% |           100.02% | 100.00% | 100.00% |  99.99% |  98.61% |  98.90% |             98.86% |
| O2_fno_align_jumps                  | 100.00% | 100.00% |     100.00% |    100.00% |    99.99% |            99.97% | 100.00% | 100.00% | 100.00% |  98.19% |  98.94% |             98.92% |
| O2_fno_align_functions              | 100.00% | 100.00% |     100.00% |    100.00% |    99.99% |           100.02% | 100.00% | 100.00% | 100.00% |  98.65% |  99.27% |             99.25% |
| O2_fno_tree_pre                     | 100.00% | 100.00% |     100.00% |    100.00% |    99.66% |           100.46% |  99.83% |  99.88% |  99.58% |  98.94% |  99.29% |             99.25% |
| O2_fno_align_loops                  | 100.00% | 100.00% |     100.00% |    100.00% |    99.96% |            99.98% | 100.00% | 100.00% | 100.00% |  99.05% |  99.42% |             99.40% |
| O2_fipa_pta                         | 100.00% | 100.00% |     100.00% |    100.00% |   100.39% |            98.64% | 100.00% |  99.99% |  99.96% |  99.32% |  99.47% |             99.45% |
| O2_fno_caller_saves                 | 100.00% | 100.00% |     100.00% |    100.00% |   100.10% |           100.08% | 100.00% | 100.00% | 100.01% |  99.29% |  99.54% |             99.53% |
| O2_fno_inline_small_functions       | 100.01% | 100.00% |     100.00% |    100.00% |   100.31% |            99.41% | 100.01% | 100.13% | 100.13% |  99.23% |  99.66% |             99.65% |
| O2_fno_devirtualize_speculatively   | 100.00% | 100.00% |     100.00% |    100.00% |    99.87% |            98.24% | 100.00% |  99.98% |  99.98% |  99.50% |  99.73% |             99.73% |
| O2_fno_devirtualize                 | 100.00% | 100.00% |     100.00% |    100.00% |    99.87% |            98.26% | 100.00% |  99.98% |  99.98% |  99.50% |  99.73% |             99.73% |
| O2_ftree_slp_vectorize              | 100.00% | 100.00% |     100.00% |    100.00% |    99.07% |           100.00% | 100.82% |  99.88% |  99.87% |  99.89% |  99.92% |             99.92% |
| O2_fira_loop_pressure               | 100.00% | 100.00% |     100.00% |    100.00% |    99.97% |           100.02% | 100.00% | 100.00% |  99.98% |  99.90% |  99.97% |             99.97% |
| O2_fsignaling_nans                  | 100.00% | 100.00% |     100.00% |    100.00% |   100.04% |           100.02% |  99.56% | 100.03% |  99.81% | 100.00% |  99.98% |             99.97% |
| O2_fno_optimize_sibling_calls       | 100.00% | 100.00% |     100.00% |    100.00% |    99.14% |            99.99% | 100.00% | 100.01% | 100.02% |  99.99% |  99.98% |             99.98% |
| O2_fgcse_after_reload               | 100.00% | 100.00% |     100.00% |    100.00% |   100.00% |           100.04% | 10
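The new SIZE_WITHOUT_DEBUG column can be reproduced by summing every section whose name does not start with `.debug` — a minimal sketch (the helper name and section sizes are illustrative, not taken from the measurement):

```python
def size_without_debug(sections):
    """Total size of all sections, excluding .debug* sections."""
    return sum(size for name, size in sections.items()
               if not name.startswith(".debug"))

sections = {".text": 500, ".rodata": 100, ".debug_info": 300, ".debug_str": 50}
# Only .text and .rodata count: 500 + 100 = 600.
total = size_without_debug(sections)
```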
Re: predicates on expressions ?
On Mon, Jul 14, 2014 at 10:52 PM, Prathamesh Kulkarni wrote:
> On Mon, Jul 14, 2014 at 6:35 PM, Richard Biener wrote:
>> On Mon, Jul 14, 2014 at 12:07 PM, Prathamesh Kulkarni wrote:
>>> I was wondering if it was a good idea to implement
>>> predicates on expressions ?
>>>
>>> Sth like:
>>> (match_and_simplify
>>>   (op (op2:predicate @0))
>>>   transform)
>>>
>>> instead of:
>>> (match_and_simplify
>>>   (op (op2@1 @0))
>>>   if (predicate (@1))
>>>   transform)
>>>
>>> When the predicate is as simple as just being a macro/function,
>>> we could use this style, and when the predicate is more complex
>>> resort to using an if-expr (or write the predicate as a macro in
>>> gimple-match-head.c and use the macro in the pattern instead ...)
>>>
>>> Example:
>>> we could rewrite the pattern
>>> (match_and_simplify
>>>   (plus:c @0 (negate @1))
>>>   if (!TYPE_SATURATING (type))
>>>   (minus @0 @1))
>>>
>>> to
>>>
>>> (match_and_simplify
>>>   (plus:c:NOT_TYPE_SATURATING_P @0 (negate @1))
>>>   (minus @0 @1))
>>>
>>> with the NOT_TYPE_SATURATING_P predicate defined
>>> appropriately in gimple-match-head.c
>>>
>>> However I am not entirely sure if adding predicates on expressions
>>> would be very useful
>>
>> Well.  I think there are two aspects to this.  First is pattern
>> readability, where I think that the if-expr form is more readable.
>> Second is the ability to do less work in the code generated
>> from the decision tree.
>>
>> For example most of the patterns from associate_plusminus
>> still miss the !TYPE_SATURATING && !FLOAT_TYPE_P &&
>> !FIXED_POINT_TYPE_P if-expr.  That is, we'd have
>>
>> /* (A +- B) - A -> +-B.  */
>> (match_and_simplify
>>   (minus (plus @0 @1) @0)
>>   if (!TYPE_SATURATING (type)
>>       && !FLOAT_TYPE_P (type) && !FIXED_POINT_TYPE_P (type))
>>   @1)
>> (match_and_simplify
>>   (minus (minus @0 @1) @0)
>>   if (!TYPE_SATURATING (type)
>>       && !FLOAT_TYPE_P (type) && !FIXED_POINT_TYPE_P (type))
>>   (negate @1))
>> /* (A +- B) -+ B -> A.  */
>> (match_and_simplify
>>   (minus (plus @0 @1) @1)
>>   if (!TYPE_SATURATING (type)
>>       && !FLOAT_TYPE_P (type) && !FIXED_POINT_TYPE_P (type))
>>   @0)
>> (match_and_simplify
>>   (plus:c (minus @0 @1) @1)
>>   if (!TYPE_SATURATING (type)
>>       && !FLOAT_TYPE_P (type) && !FIXED_POINT_TYPE_P (type))
>>   @0)
>>
>> with code-generation checking the if-expr after matching.  And
>> with using expression predicates we'd be able to check the
>> predicate when matching the outermost 'minus' and "CSE"
>> the predicate check for the first three patterns.  Runtime-wise
>> it depends on whether there is a point to back-track to.
>>
>> I would say it's more interesting to support
>>
>> if (!TYPE_SATURATING (type) && !FLOAT_TYPE_P (type) &&
>>     !FIXED_POINT_TYPE_P (type))
>>   (match_and_simplify ...)
>>   (match_and_simplify ...)
>>
>> and treat this if-expression like a predicate on the outermost
>> expression.  That's getting both benefits
>> (bah, the free-form if-expr makes it ugly, what do we use as
>> grouping syntax?  I guess wrapping the whole thing in ()s,
>> similar to (for ...)).
> Um, I was wondering, instead of defining new syntax,
> if it would be better to make genmatch detect common if-exprs
> and hoist them ?  I suppose we could compare if-exprs lexicographically ?
>
> However I guess having some syntax to group common if-expr patterns
> explicitly would avoid the need for writing the if-expr in each pattern.

Yeah, the main motivation is to make the patterns themselves easier to read
and group them by boiler-plate if-exprs.

> For now should we go with free-form if ?

I'd say

(if (!TYPE_SATURATING (type) ...)
  ...)

thus wrap the if inside ()s.  Otherwise there would be no way to
"end" an if.

> If desired, we could change syntax later to something else (only the
> parsing code need change, the rest would be in place).
> If we change the syntax for the outer if, for consistency should we also
> change the syntax of the inner if ?

Probably yes, let's wrap the inner if inside ()s as well.

Code-generation-wise we should record a vector of if-exprs and thus
evaluate the outer ifs at the same place we evaluate inner if-exprs.

Thanks,
Richard.

> Thanks and Regards,
> Prathamesh
>>
>> Richard.
>>
>>> Thanks and Regards,
>>> Prathamesh
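Combining the two points agreed above — an outer `(if ...)` wrapped in parentheses that groups several patterns, and inner ifs likewise parenthesized — the associate_plusminus patterns could be written roughly as follows. This is only a sketch of the proposed syntax as discussed in this thread, not code from the branch:

```
(if (!TYPE_SATURATING (type)
     && !FLOAT_TYPE_P (type) && !FIXED_POINT_TYPE_P (type))
  /* (A +- B) - A -> +-B.  */
  (match_and_simplify
    (minus (plus @0 @1) @0)
    @1)
  (match_and_simplify
    (minus (minus @0 @1) @0)
    (negate @1))
  /* (A +- B) -+ B -> A.  */
  (match_and_simplify
    (minus (plus @0 @1) @1)
    @0)
  (match_and_simplify
    (plus:c (minus @0 @1) @1)
    @0))
```

The closing parenthesis makes explicit where the grouped condition stops applying, which is exactly the "no way to end an if" problem the parenthesized form solves.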
Re: predicates on expressions ?
On Mon, Jul 14, 2014 at 3:05 PM, Richard Biener wrote:
> On Mon, Jul 14, 2014 at 12:07 PM, Prathamesh Kulkarni wrote:
>> I was wondering if it was a good idea to implement
>> predicates on expressions ?
>> [...]
>
> Well.  I think there are two aspects to this.  First is pattern
> readability where I think that the if-expr form is more readable.
> Second is the ability to do less work in the code generated
> from the decision tree.
> [...]
> with code-generation checking the if-expr after matching.  And
> with using expression predicates we'd be able to check the
> predicate when matching the outermost 'minus' and "CSE"
> the predicate check for the first three patterns.  Runtime-wise
> it depends on whether there is a point to back-track to.

Actually now that I look at the current state of the testsuite on the
branch I notice

FAIL: gcc.c-torture/execute/20081112-1.c execution,  -O1

which points at

(match_and_simplify
  (plus (plus @0 INTEGER_CST_P@1) INTEGER_CST_P@2)
  (plus @0 (plus @1 @2)))

which we may not apply to (a - 1) + INT_MIN, as -1 + INT_MIN
overflows and a + (-1 + INT_MIN) then introduces undefined
signed integer overflow.  tree-ssa-forwprop.c checks TREE_OVERFLOW
on the result of (plus @1 @2) and disables the simplification
properly.  We can do the same by re-writing the pattern to

(match_and_simplify
  (plus (plus @0 INTEGER_CST_P@1) INTEGER_CST_P@2)
  /* If the constant operation overflows we cannot do the transform
     as we would introduce undefined overflow, for example
     with (a - 1) + INT_MIN.  */
  if (!TREE_OVERFLOW (@1 = int_const_binop (PLUS_EXPR, @1, @2)))
  (plus @0 @1))

also using something I'd like to more formally allow (re-using sth
computed in the if-expr in the replacement).  But of course writing
it this way is ugly, and the following would be nicer?

(match_and_simplify
  (plus (plus @0 INTEGER_CST_P@1) INTEGER_CST_P@2)
  (plus @0 (plus:!TREE_OVERFLOW @1 @2)))

?  That would be predicates on replacement expressions ...
(also negated predicates).  Now it doesn't look all-that-pretty :/

Another possibility is to always fail if TREE_OVERFLOW constants
leak into the replacement IL.  (but I'd like to avoid those
behind-the-backs things at the moment)

Richard.
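The reason the constant fold must be rejected can be checked with ordinary arithmetic: folding the two constants of `(a - 1) + INT_MIN` produces a value outside the 32-bit signed range, which is the condition TREE_OVERFLOW flags. A small sketch of that range check (plain Python, not GCC internals; the helper name is made up):

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def fold_overflows(c1, c2):
    """True if c1 + c2 is not representable as a 32-bit signed int."""
    s = c1 + c2
    return not (INT_MIN <= s <= INT_MAX)

# (a - 1) + INT_MIN: the folded constant -1 + INT_MIN is below INT_MIN,
# so rewriting to a + (-1 + INT_MIN) would introduce undefined overflow
# and the (plus (plus @0 @1) @2) -> (plus @0 (plus @1 @2)) transform
# must be suppressed.
```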
Re: predicates on expressions ?
On Tue, Jul 15, 2014 at 2:07 PM, Richard Biener wrote:
> On Mon, Jul 14, 2014 at 10:52 PM, Prathamesh Kulkarni wrote:
>> On Mon, Jul 14, 2014 at 6:35 PM, Richard Biener wrote:
>>> On Mon, Jul 14, 2014 at 12:07 PM, Prathamesh Kulkarni wrote:
>>>> I was wondering if it was a good idea to implement
>>>> predicates on expressions ?
>>>> [...]
>>>
>>> Well.  I think there are two aspects to this.
>>> [...]
>>> I would say it's more interesting to support
>>>
>>> if (!TYPE_SATURATING (type) && !FLOAT_TYPE_P (type) &&
>>>     !FIXED_POINT_TYPE_P (type))
>>>   (match_and_simplify ...)
>>>   (match_and_simplify ...)
>>>
>>> and treat this if-expression like a predicate on the outermost
>>> expression.  That's getting both benefits
>>> (bah, the free-form if-expr makes it ugly, what do we use as
>>> grouping syntax?  I guess wrapping the whole thing in ()s,
>>> similar to (for ...)).
>> Um, I was wondering instead of defining new syntax
>> if it would be better to make genmatch detect common if-exprs
>> and hoist them ?  I suppose we could compare if-exprs lexicographically ?
>>
>> However I guess having some syntax to group common if-expr patterns
>> explicitly would avoid the need for writing the if-expr in each pattern.
>
> Yeah, the main motivation is to make the patterns themselves easier to read
> and group them by boiler-plate if-exprs.
>
>> For now should we go with free-form if ?
>
> I'd say
>
> (if (!TYPE_SATURATING (type) ...)
>   ...)
>
> thus wrap the if inside ()s.  Otherwise there would be no way to
> "end" an if.

maybe use braces ?
if (c_expr)
{
  patterns
}
but (if ...) is better.

>> If desired, we could change syntax later to
>> something else (only parsing code need change, the rest would be in place).
>> If we change the syntax for outer-if, for consistency should we also
>> change syntax of inner if ?
>
> Probably yes, let's wrap the inner if inside ()s as well.

Okay.

> Code-generation-wise we should record a vector of if-exprs and thus
> evaluate the outer ifs at the same place we evaluate inner if-exprs.

Um, I don't get this.
Say we have the following pattern:

(if cond
  (match_and_simplify
    match1
    transform1)

  (match_and_simplify
    match2
    transform2))

The generated code would be the following ?

if (cond)
{
  match1
  transform1

  match2
  transform2
}

Currently we do:

match1
if (cond)
  transform1
match2
if (cond)
  transform2

Thanks and Regards,
Prathamesh

> Thanks,
> Richard.
>
>> Thanks and Regards,
>> Prathamesh
>>>
>>> Richard.
>>>
>>>> Thanks and Regards,
>>>> Prathamesh
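The two code shapes Prathamesh contrasts can be spelled out side by side (a pseudocode sketch of the generated matching code, not actual genmatch output): hoisting the outer if evaluates the condition once before trying any grouped pattern, whereas the current scheme matches each pattern first and then checks the condition per pattern.

```
/* Hoisted outer if (as asked about): cond evaluated once.  */
if (cond)
  {
    if (match1) return transform1;
    if (match2) return transform2;
  }

/* Current scheme: match first, then check cond per pattern.  */
if (match1 && cond) return transform1;
if (match2 && cond) return transform2;
```

Recording a vector of if-exprs, as Richard suggests, lets the generator emit the outer condition at the same point where inner if-exprs are evaluated today.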
Re: predicates on expressions ?
On Tue, Jul 15, 2014 at 2:47 PM, Prathamesh Kulkarni wrote:
> On Tue, Jul 15, 2014 at 2:07 PM, Richard Biener wrote:
>> [...]
>>
>> I'd say
>>
>> (if (!TYPE_SATURATING (type) ...)
>>   ...)
>>
>> thus wrap the if inside ()s.  Otherwise there would be no way to
>> "end" an if.
> maybe use braces ?
> if (c_expr)
> {
>   patterns
> }
> but (if ...) is better.
>>
>>> If desired, we could change syntax later to
>>> something else (only parsing code need change, the rest would be in place).
>>> If we change the syntax for outer-if, for consistency should we also
>>> change syntax of inner if ?
>>
>> Probably yes, let's wrap the inner if inside ()s as well.
> Okay.
>>
>> Code-generation-wise we should record a vector of if-exprs and thus
>> evaluate the outer ifs at the same place we evaluate inner if-exprs.
> Um, I don't get this.
> Say we have the following pattern:
> (if cond
>   (match_and_simplify
>     match1
>     transform1)
>
>   (match_and_simplify
>     match2
>     transform2))
>
> The generated code would be the following ?
> if (cond)
> {
>   match1
>   transform1
>
>   match2
>   transform2
> }
>
> Currently we do:
> match1
Re: predicates on expressions ?
On Tue, Jul 15, 2014 at 6:05 PM, Richard Biener wrote:
> On Mon, Jul 14, 2014 at 3:05 PM, Richard Biener wrote:
>> On Mon, Jul 14, 2014 at 12:07 PM, Prathamesh Kulkarni wrote:
>>> I was wondering if it was a good idea to implement
>>> predicates on expressions ?
>>> [...]
>>
>> [...]
>
> Actually now that I look at the current state of the testsuite on the
> branch I notice
>
> FAIL: gcc.c-torture/execute/20081112-1.c execution,  -O1
>
> which points at
>
> (match_and_simplify
>   (plus (plus @0 INTEGER_CST_P@1) INTEGER_CST_P@2)
>   (plus @0 (plus @1 @2)))
>
> which we may not apply to (a - 1) + INT_MIN as -1 + INT_MIN
> overflows and a + (-1 + INT_MIN) then introduces undefined
> signed integer overflow.  tree-ssa-forwprop.c checks TREE_OVERFLOW
> on the result of (plus @1 @2) and disables the simplification
> properly.  We can do the same with re-writing the pattern to
>
> (match_and_simplify
>   (plus (plus @0 INTEGER_CST_P@1) INTEGER_CST_P@2)
>   /* If the constant operation overflows we cannot do the transform
>      as we would introduce undefined overflow, for example
>      with (a - 1) + INT_MIN.  */
>   if (!TREE_OVERFLOW (@1 = int_const_binop (PLUS_EXPR, @1, @2)))
>   (plus @0 @1))
>
> also using something I'd like to more formally allow (re-using sth
> computed in the if-expr in the replacement).  But of course writing
> it this way is ugly and the following would be nicer?
>
> (match_and_simplify
>   (plus (plus @0 INTEGER_CST_P@1) INTEGER_CST_P@2)
>   (plus @0 (plus:!TREE_OVERFLOW @1 @2)))
>
> ?  That would be predicates on replacement expressions ...
> (also negated predicates).

Or maybe allow intermingling c-expr and expr ?

(match_and_simplify
  (plus (plus @0 INTEGER_CST_P@1) INTEGER_CST_P@2)
  /* If the constant operation overflows we cannot do the transform
     as we would introduce undefined overflow, for example
     with (a - 1) + INT_MIN.  */
  {
    tree sum;
    sum = int_const_binop (PLUS_EXPR, @1, @2);
    if (!TREE_OVERFLOW (sum))
      (plus @0 @1)
    else
      FAIL;
  })

However the predicates version looks better, compared to the others...

Thanks and Regards,
Prathamesh

> Now it doesn't look all-that-pretty :/
>
> Another possibility is to always fail if TREE_OVERFLOW constants
> leak into the replacement IL.  (but I'd like to avoid those behind-the-backs
> things at the moment)
>
> Richard.
Re: predicates on expressions ?
On Tue, Jul 15, 2014 at 6:28 PM, Richard Biener wrote:
> On Tue, Jul 15, 2014 at 2:47 PM, Prathamesh Kulkarni
> wrote:
>> On Tue, Jul 15, 2014 at 2:07 PM, Richard Biener
>> wrote:
>>> On Mon, Jul 14, 2014 at 10:52 PM, Prathamesh Kulkarni
>>> wrote:
On Mon, Jul 14, 2014 at 6:35 PM, Richard Biener wrote:
> On Mon, Jul 14, 2014 at 12:07 PM, Prathamesh Kulkarni
> wrote:
>> I was wondering if it was a good idea to implement
>> predicates on expressions ?
>>
>> Sth like:
>> (match_and_simplify
>>   (op (op2:predicate @0))
>>   transform)
>>
>> instead of:
>> (match_and_simplify
>>   (op (op2@1 @0))
>>   if (predicate (@1))
>>   transform)
>>
>> When the predicate is as simple as just being a macro/function,
>> we could use this style, and when the predicate is more complex
>> resort to using an if-expr (or write the predicate as a macro in
>> gimple-match-head.c
>> and use the macro in the pattern instead ...)
>>
>> Example:
>> we could rewrite the pattern
>> (match_and_simplify
>>   (plus:c @0 (negate @1))
>>   if (!TYPE_SATURATING (type))
>>   (minus @0 @1))
>>
>> to
>>
>> (match_and_simplify
>>   (plus:c:NOT_TYPE_SATURATING_P @0 (negate @1))
>>   (minus @0 @1))
>>
>> with the NOT_TYPE_SATURATING_P predicate defined
>> appropriately in gimple-match-head.c
>>
>> However I am not entirely sure if adding predicates on expressions
>> would be very useful
>
> Well.  I think there are two aspects to this.  First is pattern
> readability, where I think that the if-expr form is more readable.
> Second is the ability to do less work in the code generated
> from the decision tree.
>
> For example most of the patterns from associate_plusminus
> still miss the !TYPE_SATURATING && !FLOAT_TYPE_P &&
> !FIXED_POINT_TYPE_P if-expr.  That is, we'd have
>
> /* (A +- B) - A -> +-B.  */
> (match_and_simplify
>   (minus (plus @0 @1) @0)
>   if (!TYPE_SATURATING (type)
>       && !FLOAT_TYPE_P (type) && !FIXED_POINT_TYPE_P (type))
>   @1)
> (match_and_simplify
>   (minus (minus @0 @1) @0)
>   if (!TYPE_SATURATING (type)
>       && !FLOAT_TYPE_P (type) && !FIXED_POINT_TYPE_P (type))
>   (negate @1))
> /* (A +- B) -+ B -> A.  */
> (match_and_simplify
>   (minus (plus @0 @1) @1)
>   if (!TYPE_SATURATING (type)
>       && !FLOAT_TYPE_P (type) && !FIXED_POINT_TYPE_P (type))
>   @0)
> (match_and_simplify
>   (plus:c (minus @0 @1) @1)
>   if (!TYPE_SATURATING (type)
>       && !FLOAT_TYPE_P (type) && !FIXED_POINT_TYPE_P (type))
>   @0)
>
> with code-generation checking the if-expr after matching.  And
> with using expression predicates we'd be able to check the
> predicate when matching the outermost 'minus' and "CSE"
> the predicate check for the first three patterns.  Runtime-wise
> it depends on whether there is a point to back-track to.
>
> I would say it's more interesting to support
>
> if (!TYPE_SATURATING (type) && !FLOAT_TYPE_P (type) &&
>     !FIXED_POINT_TYPE_P (type))
>   (match_and_simplify )
>   (match_and_simplify )
>
> and treat this if-expression like a predicate on the outermost
> expression.  That's getting both benefits
> (bah, the free-form if-expr makes it ugly, what do we use as
> grouping syntax?  I guess wrapping the whole thing in ()s,
> similar to (for ...)).
Um, I was wondering, instead of defining new syntax, if it would be
better to make genmatch detect common if-exprs and hoist them ?
I suppose we could compare if-exprs lexicographically ?
However I guess having some syntax to group common if-expr patterns
explicitly would avoid the need for writing the if-expr in each
pattern.
>>> Yeah, the main motivation is to make the patterns themselves easier
>>> to read and group them by boiler-plate if-exprs.
For now should we go with a free-form if ?
>>> I'd say
>>>
>>> (if (!TYPE_SATURATING (type) )
>>>
>>> )
>>>
>>> thus wrap the if inside ()s.  Otherwise there would be no way to
>>> "end" an if.
>> maybe use braces ?
>> if (c_expr)
>> {
>>   patterns
>> }
>> but (if ...) is better.
>>> If desired, we could change the syntax later to something else (only
>>> the parsing code need change, the rest would be in place).
If we change the syntax for the outer if, for consistency should we
also change the syntax of the inner if ?
>>> Probably yes, let's wrap the inner if inside ()s as well.
>> Okay.
>>> Code-generation-wise we should record a vector of if-exprs and thus
>>> evaluate the outer ifs at the same place we evaluate inner if-exprs.
>> Um, I don't get this.
>> say we have the following pattern:
>> (if cond
>>   (match_and_simplify
>>     match1
>>     transform1)
>>
>>   (match_an
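For concreteness, the grouping being discussed could look like the
following. This is only a sketch of the syntax proposed in the thread,
not the final genmatch grammar; the shared condition wraps several
patterns, replacing the per-pattern if-expr:

```
(if (!TYPE_SATURATING (type)
     && !FLOAT_TYPE_P (type) && !FIXED_POINT_TYPE_P (type))
  /* (A + B) - A -> B.  */
  (match_and_simplify
    (minus (plus @0 @1) @0)
    @1)
  /* (A - B) - A -> -B.  */
  (match_and_simplify
    (minus (minus @0 @1) @0)
    (negate @1)))
```

The outer `(if ...)` acts as a predicate on the outermost expression of
every enclosed pattern, so the generated decision tree can test it once
before matching any of them.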
Re: predicates on expressions ?
On Tue, Jul 15, 2014 at 6:29 PM, Prathamesh Kulkarni wrote:
> On Tue, Jul 15, 2014 at 6:05 PM, Richard Biener
> wrote:
>> On Mon, Jul 14, 2014 at 3:05 PM, Richard Biener
>> wrote:
>>> On Mon, Jul 14, 2014 at 12:07 PM, Prathamesh Kulkarni
>>> wrote:
I was wondering if it was a good idea to implement predicates on
expressions ?

Sth like:
(match_and_simplify
  (op (op2:predicate @0))
  transform)

instead of:
(match_and_simplify
  (op (op2@1 @0))
  if (predicate (@1))
  transform)

When the predicate is as simple as just being a macro/function, we
could use this style, and when the predicate is more complex resort to
using an if-expr (or write the predicate as a macro in
gimple-match-head.c and use the macro in the pattern instead ...)

Example: we could rewrite the pattern
(match_and_simplify
  (plus:c @0 (negate @1))
  if (!TYPE_SATURATING (type))
  (minus @0 @1))

to

(match_and_simplify
  (plus:c:NOT_TYPE_SATURATING_P @0 (negate @1))
  (minus @0 @1))

with NOT_TYPE_SATURATING_P defined appropriately in
gimple-match-head.c.

However I am not entirely sure if adding predicates on expressions
would be very useful.
>>>
>>> Well.  I think there are two aspects to this.  First is pattern
>>> readability, where I think that the if-expr form is more readable.
>>> Second is the ability to do less work in the code generated
>>> from the decision tree.
>>>
>>> For example most of the patterns from associate_plusminus
>>> still miss the !TYPE_SATURATING && !FLOAT_TYPE_P &&
>>> !FIXED_POINT_TYPE_P if-expr.  That is, we'd have
>>>
>>> /* (A +- B) - A -> +-B.  */
>>> (match_and_simplify
>>>   (minus (plus @0 @1) @0)
>>>   if (!TYPE_SATURATING (type)
>>>       && !FLOAT_TYPE_P (type) && !FIXED_POINT_TYPE_P (type))
>>>   @1)
>>> (match_and_simplify
>>>   (minus (minus @0 @1) @0)
>>>   if (!TYPE_SATURATING (type)
>>>       && !FLOAT_TYPE_P (type) && !FIXED_POINT_TYPE_P (type))
>>>   (negate @1))
>>> /* (A +- B) -+ B -> A.  */
>>> (match_and_simplify
>>>   (minus (plus @0 @1) @1)
>>>   if (!TYPE_SATURATING (type)
>>>       && !FLOAT_TYPE_P (type) && !FIXED_POINT_TYPE_P (type))
>>>   @0)
>>> (match_and_simplify
>>>   (plus:c (minus @0 @1) @1)
>>>   if (!TYPE_SATURATING (type)
>>>       && !FLOAT_TYPE_P (type) && !FIXED_POINT_TYPE_P (type))
>>>   @0)
>>>
>>> with code-generation checking the if-expr after matching.  And
>>> with using expression predicates we'd be able to check the
>>> predicate when matching the outermost 'minus' and "CSE"
>>> the predicate check for the first three patterns.  Runtime-wise
>>> it depends on whether there is a point to back-track to.
>>
>> Actually now that I look at the current state of the testsuite on the
>> branch I notice
>>
>> FAIL: gcc.c-torture/execute/20081112-1.c execution, -O1
>>
>> which points at
>>
>> (match_and_simplify
>>   (plus (plus @0 INTEGER_CST_P@1) INTEGER_CST_P@2)
>>   (plus @0 (plus @1 @2)))
>>
>> which we may not apply to (a - 1) + INT_MIN, as -1 + INT_MIN
>> overflows and a + (-1 + INT_MIN) then introduces undefined
>> signed integer overflow.  tree-ssa-forwprop.c checks TREE_OVERFLOW
>> on the result of (plus @1 @2) and disables the simplification
>> properly.  We can do the same by re-writing the pattern to
>>
>> (match_and_simplify
>>   (plus (plus @0 INTEGER_CST_P@1) INTEGER_CST_P@2)
>>   /* If the constant operation overflows we cannot do the transform
>>      as we would introduce undefined overflow, for example
>>      with (a - 1) + INT_MIN.  */
>>   if (!TREE_OVERFLOW (@1 = int_const_binop (PLUS_EXPR, @1, @2)))
>>   (plus @0 @1))
>>
>> also using something I'd like to more formally allow (re-using sth
>> computed in the if-expr in the replacement).  But of course writing
>> it this way is ugly, and the following would be nicer?
>>
>> (match_and_simplify
>>   (plus (plus @0 INTEGER_CST_P@1) INTEGER_CST_P@2)
>>   (plus @0 (plus:!TREE_OVERFLOW @1 @2)))
>>
>> ?  That would be predicates on replacement expressions ...
>> (also negated predicates).
> Or maybe allow intermingling c-expr and expr ?
> (match_and_simplify
>   (plus (plus @0 INTEGER_CST_P@1) INTEGER_CST_P@2)
>   /* If the constant operation overflows we cannot do the transform
>      as we would introduce undefined overflow, for example
>      with (a - 1) + INT_MIN.  */
>   {
>     tree sum;
>     sum = int_const_binop (PLUS_EXPR, @1, @2);
>     if (!TREE_OVERFLOW (sum))
>       (plus @0 @1)
(plus @0 sum)  This is not good
>     else
>       FAIL;
>   })
>
> However the predicates version looks better, compared to the others...
>
> Thanks and Regards,
> Prathamesh
>> Now it doesn't look all that pretty :/
>>
>> Another possibility is to always fail if TREE_OVERFLOW constants
>> leak into the replacement IL.  (but I'd like to avoid those
>> behind-the-back things at the moment)
Re: Comparison of GCC-4.9 and LLVM-3.4 performance on SPECInt2000 for x86-64 and ARM
> On 25 June 2014 10:26, Bingfeng Mei wrote:
> > Why is GCC code size so much bigger than LLVM? Does -Ofast have more
> > unrolling
> > on GCC? It doesn't seem increasing code size helps performance
> > (164.gzip & 197.parser)
> > Are there comparisons for O2? I guess that is more useful for
> > typical mobile/embedded programmers.
>
> Hi Bingfeng,
>
> My analysis wasn't as thorough as Vladimir's, but I found that GCC
> wasn't eliminating some large blocks of dead code or inlining as much
> as LLVM was. I haven't dug deeper, though. Some of the differences

I would definitely be interested in such testcases; I do not think we
should simply be missing DCE opportunities. As for inlining, GCC often
disables inlining based on the profile (by concluding that a given call
is cold), but that should not result in larger code, of course.

I also noticed that GCC code size is bigger for both firefox and
libreoffice. There was some extra bloat in 4.9 compared to 4.8.
Martin did some tests with -O2 and various flags; perhaps we could
throttle some of the -O2 optimizations.

Honza

> were quite big, I'd be surprised if it all can be explained by
> unrolling loops and vectorization...
>
> cheers,
> --renato
Re: Comparison of GCC-4.9 and LLVM-3.4 performance on SPECInt2000 for x86-64 and ARM
On 15 July 2014 15:43, Jan Hubicka wrote:
> I also noticed that GCC code size is bigger for both firefox and
> libreoffice. There was some extra bloat in 4.9 compared to 4.8.
> Martin did some tests with -O2 and various flags, perhaps we could
> throttle some of the -O2 optimizations.

Now that you mention it, I do believe that was with 4.9 in comparison
with both 4.8 and LLVM 3.4, all on -O3, around Feb.

Unfortunately, I can't share the results with you, but since both
firefox and libreoffice show the same behaviour, I guess you already
have a way through.

cheers,
--renato
Re: Comparison of GCC-4.9 and LLVM-3.4 performance on SPECInt2000 for x86-64 and ARM
> On 15 July 2014 15:43, Jan Hubicka wrote:
> > I also noticed that GCC code size is bigger for both firefox and
> > libreoffice.
> > There was some extra bloat in 4.9 compared to 4.8.
> > Martin did some tests with -O2 and various flags, perhaps we could
> > throttle some of the -O2 optimizations.
>
> Now that you mention it, I do believe that was with 4.9 in comparison
> with both 4.8 and LLVM 3.4, all on -O3, around Feb.
>
> Unfortunately, I can't share the results with you, but since both
> firefox and libreoffice show the same behaviour, I guess you already
> have a way through.

Well, not really - those are really huge codebases and thus hard to
analyze, so I have not had time to nail it down to something useful
yet.

Honza

> cheers,
> --renato
Re: [GSoC] generation of Gimple loops with empty bodies
This is not a patch review; let's move this to gcc@gcc.gnu.org.

On 15/07/2014 17:03, Roman Gareev wrote:
> I've found out that int128_integer_type_node and
> long_long_integer_type_node are NULL at the moment of definition of
> graphite_expression_size_type. Maybe we should use
> long_long_integer_type_node, because, as you said before, using
> signed 64 has also been proved to be very robust. What do you think
> about this?

I do not fully understand this message. You first say that
long_long_integer_type_node is NULL, but then want to use it. That does
not seem to be a solution. Most likely it is the solution, but the
problem description makes it hard to understand. Is the problem caused
by initialization order issues? Or why are such types NULL?

(I am fine with using 64 bits by default, but I would like to keep the
possibility to compile with 128 bits to allow the size to be changed
easily during debugging. So using a specific type directly, without
going through a Graphite-specific variable, is something I would like
to avoid.)

Cheers,
Tobias
Re: Crashes inside libgcc_s_dw2-1.dll
Hi Eli,

Corinna has asked me to take a look at your bug report[1] on this
problem (since she has now encountered it in a Cygwin environment).

Unfortunately I am not an x86 expert, so I am not really able to dig
deeply into it. But what I would recommend is filing an official gcc
bug report and then pinging the x86 gcc maintainers to see if you can
persuade them that it is a problem worth investigating.

If that fails, please could you ping me directly and I will try to
have a go at fixing the problem myself. No promises on solving it
though... :-)

Cheers
Nick

[1]: https://gcc.gnu.org/ml/gcc/2013-05/msg00214.html
Re: PLEASE RE-ADD MIRRORS (small correction)
Hi Gerald. Are you still interested in the mirrors?

Thanks,
Dan & Go-Parts

-----Original Message-----
From: Gerald Pfeifer
Sent: Tuesday, July 08, 2014 11:52 AM
To: Dan D.
Cc: gcc@gcc.gnu.org
Subject: Re: PLEASE RE-ADD MIRRORS (small correction)

Hi Dan,

I see there is a later mail from Steven which I'm going to look into
wrt. adding the mirrors. There seem to be a number of you looking into
mirroring??

Gerald

On Fri, 14 Mar 2014, Dan D. wrote:

> I made a small mistake below on the ftp/rsync mirrors for the USA
> mirror. They should be:
>
> (USA)
> http://mirrors-usa.go-parts.com/gcc
> ftp://mirrors-usa.go-parts.com/gcc
> rsync://mirrors-usa.go-parts.com/gcc
>
> From: dan1...@msn.com
> To: gcc@gcc.gnu.org
> Subject: PLEASE RE-ADD MIRRORS
> Date: Fri, 14 Mar 2014 16:53:22 -0700
>
> Hello,
>
> We previously had these same mirrors up under Go-Part.com but then
> changed our domain to Go-Parts.com. The mirror links then dropped
> off. We apologize deeply for this, and assure you that this is a
> one-time event. Going forward, the mirrors will stay up for a very
> long time to come, and are being served from very reliable and fast
> servers, and being monitored and maintained by a very competent
> server admin team.
>
> PLEASE ADD:
>
> (USA)
> http://mirrors-usa.go-parts.com/gcc
> ftp://mirrors.go-parts.com/gcc
> rsync://mirrors.go-parts.com/gcc
>
> (Australia)
> http://mirrors-au.go-parts.com/gcc
> ftp://mirrors-au.go-parts.com/gcc
> rsync://mirrors-au.go-parts.com/gcc
>
> (Russia)
> http://mirrors-ru.go-parts.com/gcc
> ftp://mirrors-ru.go-parts.com/gcc
> rsync://mirrors-ru.go-parts.com/gcc
>
> Thanks,
> Dan
GNU Tools Cauldron 2014 - Local information and useful links
Some useful information for the conference this weekend:

Friday, 18th July 2014, 6.30pm to 9pm
The Centre for Computing History
Rene Court
Coldhams Road
Cambridge CB1 3EW
http://www.computinghistory.org.uk/

Saturday, 19th July 2014, 7.30pm to 10.30pm
Murray Edwards College
University of Cambridge
Huntingdon Road
Cambridge CB3 0DF
http://www.murrayedwards.cam.ac.uk/

Sunday evening, Post Conference Networking
The Regal
38-39 St Andrews Street
Cambridge CB2 3AR
http://www.jdwetherspoon.co.uk/home/pubs/the-regal

I have updated the wiki with this and other useful local information:
https://gcc.gnu.org/wiki/cauldron2014

Diego.
Overloading raw pointers
Hi,

I am the author of a deterministic memory manager:
https://svn.boost.org/svn/boost/sandbox/block_ptr/

I just have a quick question: is it possible to overload all raw
pointers with a template "smart pointer"? If not, then I would hope
this can be made possible.

Regards,
-Phil