https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83004

--- Comment #3 from rsandifo at gcc dot gnu.org <rsandifo at gcc dot gnu.org> ---
(In reply to Jakub Jelinek from comment #1)
> I think this test has failed with -mavx and later ever since it was
> introduced.  The test uses the VECTOR_BITS macro and assumes it is the
> vector size, but tree-vect.h hardcodes VECTOR_BITS to 128 on all targets
> and all ISAs.  Strangely, various tests check for VECTOR_BITS > 128,
> VECTOR_BITS > 256, etc.

Yeah, this is used by SVE, when testing with -msve-vector-bits=256, 512, etc.
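
For concreteness, one way to express that in tree-vect.h would be to key
VECTOR_BITS off the ACLE __ARM_FEATURE_SVE_BITS macro, which is defined to N
under -msve-vector-bits=N (just a sketch; the exact guard is open):

  /* Sketch only: pick up a fixed SVE vector length if one was selected,
     otherwise keep the current 128-bit default.  */
  #if defined (__ARM_FEATURE_SVE_BITS) && __ARM_FEATURE_SVE_BITS > 0
  #define VECTOR_BITS __ARM_FEATURE_SVE_BITS
  #else
  #define VECTOR_BITS 128
  #endif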

> So, shall we define VECTOR_BITS to higher values based on preprocessor
> macros?  For x86, the question then would be whether __AVX__ without
> __AVX2__ should enable VECTOR_BITS 256 or not: floating-point vectors are
> 256-bit, but integer vectors are only 128-bit.  Also, -mprefer-avx{128,256}
> changes this.  Or shall we have VECTOR_BITS as the usual vector size and
> MAX_VECTOR_BITS as the maximum for the current options?  Or shall the test
> use its own macro, defined to VECTOR_BITS by default but to something
> different for some ISAs?

Defining VECTOR_BITS to the maximum should work (i.e. ignoring -mprefer-*).
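
Something like the following sketch for x86 (untested, and the exact ISA
macros to key off would be up to the x86 maintainers), so that anything
guarded by VECTOR_BITS > 128 also kicks in for AVX:

  /* Sketch: define VECTOR_BITS to the widest vector size the enabled ISA
     supports, deliberately ignoring -mprefer-avx128/-mprefer-avx256.  */
  #if defined (__AVX512F__)
  #define VECTOR_BITS 512
  #elif defined (__AVX__)
  #define VECTOR_BITS 256
  #else
  #define VECTOR_BITS 128
  #endif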

TBH I was surprised I was the first to hit the need for VECTOR_BITS, since I'd
have thought AVX2 and AVX512 would have had the same problems.  Were the
vect.exp results clean for those architectures before r254589?
