On 11/29/25 12:48 AM, [email protected] wrote:


Actually, I'm thinking a PGO-like feedback loop that automatically
tunes vector cost adjustments by benchmarking different configurations
would be very valuable.
This could help us catch cases where the current heuristics make poor choices
(e.g., bad LMUL selection or vectorizing when scalar is faster).

Right now I’m finding these issues manually, which is slow and doesn’t scale.
Is there any existing GCC infrastructure that could support automated 
cost-model tuning,
or do you have recommendations on the best way to build such a system?

Nothing for automated cost model tuning. These things take considerable time to discover, then chase down.

What usually happens is there's one or more benchmarks (real-world code you care about or industry-standard benchmarks; the former is much preferable), and you just have to dive in and understand where the hotspots are and how they're behaving. linux-perf is your friend.

jeff
