NagyDonat wrote:

> I played with the idea and there is one wrinkle. EvalBinOp applies tactics 
> that can reduce the requested operation to known values or ranges after 
> applying some logic, like:
> 
>     * eagerly fold away multiplication by 1
> 
>     * reduce shifting 0 left or right to a cast
> 
>     * fold zero divided by or modulo some value to 0
> 
>     * others
> 
> 
> Checking the sum of the operand complexities before applying these 
> heuristics would mean that we lose out on these benefits, reducing such 
> cases to `Unknown` where in the past we could have deduced a simple result.
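
For context, the eager folds listed above could be pictured roughly like the 
following minimal, self-contained sketch. The `SymVal`, `BinOp` and 
`tryEagerFold` names are made up for illustration only; the real analyzer 
works on `SVal`s and the symbol tree in `SimpleSValBuilder`, not on this toy 
struct:

```cpp
#include <cstdint>
#include <optional>

// Toy stand-in for the analyzer's symbolic values: either a known constant or
// an opaque symbolic expression of some complexity.
struct SymVal {
  std::optional<int64_t> Constant; // engaged if the value is a known constant
  unsigned Complexity = 1;         // size of the underlying symbol tree
};

enum class BinOp { Mul, Div, Rem, Shl, Shr };

// Eager folds that an EvalBinOp-style entry point can apply before it ever
// needs to build a new symbolic expression. Returns the folded result, or
// nullopt if no shortcut applies and a fresh symbol would have to be built.
std::optional<SymVal> tryEagerFold(BinOp Op, const SymVal &LHS,
                                   const SymVal &RHS) {
  // x * 1 ==> x (and symmetrically 1 * x ==> x)
  if (Op == BinOp::Mul && RHS.Constant == 1)
    return LHS;
  if (Op == BinOp::Mul && LHS.Constant == 1)
    return RHS;
  // 0 << n and 0 >> n ==> 0 (the real code reduces this to a cast of 0)
  if ((Op == BinOp::Shl || Op == BinOp::Shr) && LHS.Constant == 0)
    return LHS;
  // 0 / x and 0 % x ==> 0 (assuming x is known to be nonzero on this path)
  if ((Op == BinOp::Div || Op == BinOp::Rem) && LHS.Constant == 0)
    return LHS;
  return std::nullopt;
}
```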

Yes, doing the test at the beginning of `EvalBinOp` (instead of placing it in 
`makeNonLoc`) moves the threshold to an earlier step: the complexity cutoff 
will affect somewhat more symbols -- but this comes with a proportional 
performance improvement (as we skip the simplification steps that could create 
the overly complex symbol), so I don't think that this is a problem. (To 
compensate, we could "sell" the performance advantage by slightly increasing 
the complexity threshold -- but the threshold is an arbitrary round value 
anyway, so I don't think we actually need to do this.)
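
To make the placement concrete, here is a sketch of the early-cutoff variant, 
reusing the toy `SymVal`/`BinOp`/`tryEagerFold` types from the sketch above. 
`ComplexityCutoff` is a hypothetical constant standing in for the configurable 
maximum symbol complexity, not the actual option:

```cpp
constexpr unsigned ComplexityCutoff = 35; // hypothetical limit for this sketch

// Cutoff at the very top of evalBinOp: operands whose summed complexity is
// already over the limit become Unknown (modelled as nullopt here) before any
// simplification work is spent on them. The price is that the eager folds
// never get a chance to rescue such operands.
std::optional<SymVal> evalBinOpEarlyCutoff(BinOp Op, const SymVal &LHS,
                                           const SymVal &RHS) {
  if (LHS.Complexity + RHS.Complexity > ComplexityCutoff)
    return std::nullopt;                       // treat the result as Unknown
  if (auto Folded = tryEagerFold(Op, LHS, RHS))
    return Folded;                             // trivial case folded away
  // Otherwise build the combined symbol (makeNonLoc in the real code).
  return SymVal{std::nullopt, LHS.Complexity + RHS.Complexity + 1};
}
```

The original proposal would instead apply the same check only at the last 
step, inside `makeNonLoc`, after the eager folds have already had their chance.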

> So, I think if we go with the evalbinop approach, then it should work as 
> efficiently as my original proposal, while sacrificing the special cases that 
> fold away the binop. I'm fine with either of the approaches.
> I scheduled a measurement for the evalbinop approach, and I expect the 
> results by tomorrow at the latest.

I'm looking forward to it :) I think this evalbinop approach could be a good 
compromise that eliminates the outliers without messing up the code.

https://github.com/llvm/llvm-project/pull/144327