On Mon, Aug 22, 2011 at 10:32:35AM +0300, Dimitrios Apostolou wrote:
> --- gcc/emit-rtl.c 2011-05-29 17:40:05 +0000
> +++ gcc/emit-rtl.c 2011-08-21 04:44:25 +0000
> @@ -256,11 +256,10 @@ mem_attrs_htab_hash (const void *x)
> {
> const mem_attrs *const p = (const mem_attrs *) x;
>
> - return (p->alias ^ (p->align * 1000)
> - ^ (p->addrspace * 4000)
> - ^ ((p->offset ? INTVAL (p->offset) : 0) * 50000)
> - ^ ((p->size ? INTVAL (p->size) : 0) * 2500000)
> - ^ (size_t) iterative_hash_expr (p->expr, 0));
> + /* By massively feeding the mem_attrs struct to iterative_hash() we
> + disregard the p->offset and p->size rtx, but in total the hash is
> + quick and good enough. */
> + return iterative_hash_object (*p, iterative_hash_expr (p->expr, 0));
> }
>
> /* Returns nonzero if the value represented by X (which is really a
This patch isn't against current trunk, where p->offset and p->size aren't rtxes
anymore, but HOST_WIDE_INTs. Furthermore, it is a bad idea to hash the
p->expr pointer value itself: if the address goes into the hash, it doesn't
make any sense to also hash what p->expr points to. And p->offset and p->size
should be ignored if the corresponding *known_p fields are false. So, if you
really think using iterative_hash_object is a win, it should be something like:
  mem_attrs q = *p;
  q.expr = NULL;
  if (!q.offset_known_p)
    q.offset = 0;
  if (!q.size_known_p)
    q.size = 0;
  return iterative_hash_object (q, iterative_hash_expr (p->expr, 0));
(Or better yet, avoid the q.expr = NULL assignment and instead start hashing
from the next field after expr.) Hashing the struct padding might not be a
good idea either.
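A rough, untested sketch of that "hash from the next field after expr"
variant, assuming offset is the field that immediately follows expr in the
trunk mem_attrs (not checked), could look like:

  /* Sketch only: clear the fields that should not affect the hash, then
     hash just the bytes starting at the field after the expr pointer.
     Any padding among or after the remaining fields still gets hashed.  */
  mem_attrs q = *p;
  if (!q.offset_known_p)
    q.offset = 0;
  if (!q.size_known_p)
    q.size = 0;
  return iterative_hash ((const char *) &q + offsetof (mem_attrs, offset),
			 sizeof (mem_attrs) - offsetof (mem_attrs, offset),
			 iterative_hash_expr (p->expr, 0));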
Jakub