On Wed, Feb 20, 2019 at 10:04:04AM +0000, Richard Sandiford wrote:
> Martin Liška <mli...@suse.cz> writes:
> > About the SVE: isn't the ABI dependent on the bit width of vectors?
> 
> It's dependent on the types.  There are ABI types for Advanced SIMD
> vectors and ABI types for SVE vectors.  The two end up being the same
> length at runtime when the SVE vector length is 128 bits, but they're
> still separate types with separate conventions.
> 
> (E.g. __attribute__((vector_size)) never creates an ABI-level SVE vector,
> even with -msve-vector-bits=N, but it can create an ABI-level Advanced
> SIMD vector.)
> 
> I think we should leave the SVE stuff out for now though.  ISTM that:
> 
> !GCC$ builtin (sin) attributes simd (notinbranch) if('aarch64')
> !GCC$ builtin (sin) attributes simd (notinbranch) if('aarch64_sve')

The if clause is optional; you don't need to use it if you don't need to
(i.e. if glibc is going to implement those functions for all aarch64
multilibs, or only for some of them but they aren't expected to be used
together on the same host, except perhaps for multiarch; say, if you
implemented it for little-endian only, the question is whether the same
header could be used for both little-endian and big-endian compilations
or not).

And, as I said in the thread, there is always the option to add some
generic !GCC$ builtin if clause properties (e.g. ilp32/lp64/llp64 or the
above mentioned endianness) in generic code if that is useful, instead of
having each backend invent them on its own.
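As a sketch of what such a generic property could look like (the 'lp64'
spelling here is invented for illustration; it is not an existing clause
value), a single header could then guard declarations per data model:

```fortran
! Hypothetical: only declare the SIMD variant for LP64 multilibs.
!GCC$ builtin (sin) attributes simd (notinbranch) if('lp64')
```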
So, the question is: do you have any of those implemented in glibc already,
or plan to do so soon, and if so, what will the corresponding C header look
like?  The x86 math-vector.h is wrapped with
#if defined __x86_64__ && defined __FAST_MATH__
and thus we really need to limit it to the x32 and x86_64 multilibs, not
ia32.  BTW, I wonder about __FAST_MATH__: for C it means we enable those
only for -ffast-math or -Ofast; does it mean that for Fortran we would
enable them even with -O2 -ftree-vectorize or -O3?

        Jakub
