https://gcc.gnu.org/bugzilla/show_bug.cgi?id=109971

--- Comment #10 from Kewen Lin <linkw at gcc dot gnu.org> ---
(In reply to JuzheZhong from comment #9)
> (In reply to Kewen Lin from comment #8)
> > I did SPEC2017 int/fp evaluation on Power10 at Ofast and an extra explicit
> > --param=vect-partial-vector-usage=2 (the default is 1 on Power), baseline
> > r14-1241 vs. new r14-1242, the results showed that it can offer some
> > speedups for 500.perlbench_r 1.12%, 525.x264_r 1.96%, 544.nab_r 1.91%,
> > 549.fotonik3d_r 1.25%, but it degraded 510.parest_r by 5.01%.
> > 
> > I just tested Juzhe's newly proposed fix, which makes the loop closing iv
> > SCEV-ed; it fixes the degradation of 510.parest_r and the missed
> > optimization on cunroll (in #c5), and the test failures are gone as well.
> > One SPEC2017 re-evaluation with that fix is ongoing; I'd expect it won't
> > degrade anything.
> 
> Thanks so much. You mean you are trying this patch:
> https://gcc.gnu.org/pipermail/gcc-patches/2023-May/620086.html ?

Yes. It also means that Richi's concern (that not just niter analysis but all
analyses relying on SCEV get pessimized) does account for the exposed
degradation and failures. Thanks for looking into it.

> 
> I believe it can improve even more for IBM's target.

Hope so, I'll post the new SPEC2017 results once the run finishes.

btw, the SPEC2017 run with --param=vect-partial-vector-usage=2 here was mainly
to verify the expectation on the decrement IV change; the normal SPEC2017 runs
still use --param=vect-partial-vector-usage=1, which isn't affected by this
change and in general beats the former due to the cost of setting up lengths.
