On Mar 22, 2007, at 12:28 PM, Mike Stump wrote:
> for a -g 16-bit code compile:
> real 0m2.629s 0.15% slower
> user 0m2.504s
> sys 0m0.121s
> for a -g -O2 16-bit code compile:
> real 0m12.958s 0.023% slower
> user 0m12.190s
> sys 0m0.754s
Oops, both of those should say faster.
My hope is that the --disable-checking numbers hold up reasonably well.
Anyway, for --disable-checking, expr.c, I get:
-g 8-bit code:
real 0m0.950s
user 0m0.867s
sys 0m0.081s
-g -O2 8-bit code:
real 0m3.107s
user 0m2.956s
sys 0m0.147s
-g 16-bit code:
real 0m0.957s 0.74% slower
user 0m0.872s
sys 0m0.083s
-g -O2 16-bit code:
real 0m3.127s 0.64% slower
user 0m2.974s
sys 0m0.148s
I think I want to argue for the 16-bit patch version. I think the
hit in compile speed is paid for by the flexibility of never having
to worry about the issue again, and never having to subcode. In
addition, this speeds up compilation of any language that would
otherwise be forced to use subcodes.
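To make the trade-off concrete, here is a rough sketch of the two
layouts. All of the demo_* names below are invented for illustration
and are not taken from the patch or from GCC; the point is only that
a subcode scheme forces every reader of the code field to know about
an escape value, while the widened field does not:

/* Sketch only: demo_* names are hypothetical, not from GCC.  */
#include <stdio.h>

/* Approach 1 (the posted patch): widen the code field to 16 bits,
   so every code, including language-specific ones, fits directly.  */
struct demo_tree_base_16
{
  unsigned code : 16;
  unsigned side_effects_flag : 1;
  unsigned constant_flag : 1;
};

/* Approach 2 (subcoding): keep an 8-bit code and escape to a
   subcode once the enumeration overflows 255 values.  */
#define DEMO_SUBCODED 255	/* escape value in the 8-bit field */

struct demo_tree_base_8
{
  unsigned code : 8;		/* 0..254 are real codes, 255 = escape */
  unsigned subcode : 16;	/* only meaningful when code == 255 */
  unsigned side_effects_flag : 1;
  unsigned constant_flag : 1;
};

/* Every consumer of the code field now has to go through an
   accessor like this, or remember to do the check by hand.  */
static unsigned
demo_effective_code (const struct demo_tree_base_8 *t)
{
  return t->code == DEMO_SUBCODED ? 256u + t->subcode : t->code;
}

int
main (void)
{
  struct demo_tree_base_16 a = { 300, 0, 0 };
  struct demo_tree_base_8 b = { DEMO_SUBCODED, 300 - 256, 0, 0 };

  /* With the 16-bit field the stored code is the code; with
     subcodes the answer depends on an extra indirection.  */
  printf ("16-bit field: %u\n", (unsigned) a.code);
  printf ("8-bit + subcode: %u\n", demo_effective_code (&b));
  return 0;
}
--------------
Anything that walks trees in the subcode world has to either call
demo_effective_code or know which codes hide behind the escape; the
16-bit layout has no such special case.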
Also, the correctness of:
Doing diffs in tree.h.~1~:
--- tree.h.~1~ 2007-03-20 19:07:00.000000000 -0700
+++ tree.h 2007-03-22 15:05:03.000000000 -0700
@@ -363,7 +363,7 @@ union tree_ann_d;
struct tree_base GTY(())
{
- ENUM_BITFIELD(tree_code) code : 8;
+ ENUM_BITFIELD(tree_code) code : 16;
unsigned side_effects_flag : 1;
unsigned constant_flag : 1;
--------------
is more obvious than the correctness of the subcoding. Thoughts?