http://gcc.gnu.org/bugzilla/show_bug.cgi?id=50374

Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
  Attachment #25333|0                           |1
        is obsolete|                            |

--- Comment #19 from Jakub Jelinek <jakub at gcc dot gnu.org> 2011-09-22 
15:10:34 UTC ---
Created attachment 25341
  --> http://gcc.gnu.org/bugzilla/attachment.cgi?id=25341
gcc47-pr50374.patch

Thanks.  Here is an updated patch with a hopefully fixed backend part, which
passes the whole newly added testsuite.
Unfortunately, even with -fno-tree-pre -fno-vect-cost-model, on the *-12.c
testcase it vectorizes just 12 loops (f_*_[fiu]_u), not even with -mavx2, where
e.g. I'd expect f_*_{d,ll,ull}_ull to be vectorized too, or e.g. the [fiu]_i
variants.  It seems the pattern recognizer is just too restrictive in finding
the IV.  On the other hand, as I wrote earlier, the check whether the index is
strictly increasing through the whole loop is missing.  If the loop bounds are
known, I guess we could check its POLYNOMIAL_CHREC, whether it has the expected
form and whether it won't wrap/overflow; if the loop bound is unknown and the
addition is done in a signed type, assume it won't wrap; and for unsigned, if
the increment is 1, its size is greater than or equal to the pointer size, and
the init is 0/1, then it won't wrap either.

BTW, I think on i?86/x86_64 we could in theory support even mixed-size
reductions, e.g. when the index is long long (64-bit) and the comparison is on
int or float: then I think we could use the {,v}pmovsxdq instruction to extend
the mask, in which the extremes are present, from a vector of 4 ints or floats
to a vector of 4 long longs.
