> > +/* For operands of load/stores estimate cost of the address computations
> > + involved. */
> > +
> > +static int
> > +estimate_operand_cost (tree op)
> > +{
> > + int cost = 0;
> > + while (handled_component_p (op))
> > + {
> > + cost += estimate_ref_cost (op);
> > + op = TREE_OPERAND (op, 0);
> > + }
> > + /* Account for the (TARGET_)MEM_REF itself. */
> > + return cost + estimate_ref_cost (op);
>
> ICK ...
>
> Why not sth as simple as
>
> return num_ssa_operands (stmt, SSA_OP_USE);
>
> ? a[1][2] and b[2] really have the same cost, variable length
> objects have extra SSA operands in ARRAY_REF/COMPONENT_REF for
> the size. Thus, stmt cost somehow should reflect the number
> of dependent stmts.
>
> So in estimate_num_insns I'd try
>
> int
> estimate_num_insns (gimple stmt, eni_weights *weights)
> {
> unsigned cost, i;
> enum gimple_code code = gimple_code (stmt);
> tree lhs;
> tree rhs;
>
> switch (code)
> {
> case GIMPLE_ASSIGN:
> /* Initialize with the number of SSA uses, one is free. */
> cost = num_ssa_operands (stmt, SSA_OP_USE);
> if (cost > 1)
> --cost;
>
> ...
Hmm, also in ASM/call/return?  It will definitely add quite a bit of
noise to the cost metric by making a=b+c have cost of 3 instead of 1 as
it has now.  I am not 100% sure that a+b should be more expensive than
a+1.  I can give that a try and we will see how well it works after
re-tuning.  A diagonal walk of a multi-dimensional array, i.e.
a[b][b][b], will be mis-accounted too, right?
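For concreteness, a minimal C sketch of that diagonal case (diag is a
hypothetical example function, not anything in the patch): the single
load has SSA uses {a, b, b, b}, so a per-use counting would charge the
address computation for three index uses even though only one distinct
SSA name feeds it.

```c
/* Hypothetical example: one load from a diagonal element of a
   three-dimensional array.  The gimplified address computation
   uses the index b three times, so cost based on the number of
   SSA uses counts b once per dimension.  */
static int
diag (int a[2][2][2], int b)
{
  return a[b][b][b];
}
```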
Honza