On Nov 7, 2006, at 2:13 PM, Richard Kenner wrote:
> Like when int and long have the same range on a platform?
The answer is that they are different, even when they imply the same
object representation.
The notion of unified type nodes is closer to syntax than semantics.
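For a concrete (if contrived) illustration, assuming a target where int
and long happen to have the same size and range, the two are still
distinct types, so the front end still diagnoses mixing them:

  /* Sketch: int and long may share a representation here, but they
     are different types, so this draws an incompatible-pointer
     warning.  */
  int
  main (void)
  {
    long n = 42;
    int *p = &n;	/* warning: incompatible pointer types */
    (void) p;
    return 0;
  }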
> I'm more than a little confused, then, as to what we are talking
> about canonicalizing. We already have only one pointer to each
> type, for example.
Ok, great, pointers are unique, or, to be precise, they are unique iff
the types they point to are unique. Notice how that doesn't actually
buy you very much.
Anyway, in C++, the entire template mechanism was rife with building
up duplicates. I'd propose that this problem can (and should) be
addressed, and that we can do it incrementally. Start with a
hello-world program; then, in comptypes, whenever it says yes, the
types are the same, but the address-equality check says they might
not be, print a warning. Fix the duplicate builders until there are
no warnings. Then rebuild the world and repeat. :-) Work can be
checked in as dups are eliminated.
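To make the idea concrete, here is a rough sketch of the check (the
wrapper name and the exact spot in the front end are illustrative,
not an actual patch):

  /* Sketch only: wrap the structural comparison.  Two types that
     compare equal but are not the same node point at a duplicate
     that some type builder should not have created.  */
  static int
  checked_comptypes (tree t1, tree t2)
  {
    int equal = comptypes (t1, t2);
    if (equal && t1 != t2)
      warning (0, "types compare equal but are distinct nodes");
    return equal;
  }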
This will tend to reduce memory consumption. Compile time is sped up
once you believe an entire category of duplicates (all types with the
same TREE_CODE, for example) is handled: within that category,
address inequality then implies type inequality, so you can limit
recursion in comptypes and instead return not_equal when you hit the
completely handled category.
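Roughly, the cut-off inside comptypes could look like this
(category_fully_uniquified_p is a made-up stand-in for "we know this
TREE_CODE is completely handled"):

  /* Sketch only: once all RECORD_TYPE duplicates are known to be
     eliminated, distinct addresses mean distinct types, so there is
     no need to recurse into the fields.  */
  if (TREE_CODE (t1) == RECORD_TYPE
      && TREE_CODE (t2) == RECORD_TYPE
      && category_fully_uniquified_p (RECORD_TYPE))
    return t1 == t2;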
I did an extensive investigation in this area years ago; templates
were the worst offender at that time. <int,int,char> !=
<int,int,char> type stuff.