https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114075

--- Comment #3 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
(In reply to Jakub Jelinek from comment #1)
> In r14-321 this wasn't vectorized, in r14-322 it is with vf 2, but the
> floating point addition is performed in some weird unsigned long operation
> instead:
>   _14 = VIEW_CONVERT_EXPR<unsigned long>(vect__17.11_42);
>   _32 = VIEW_CONVERT_EXPR<unsigned long>(vect__18.12_29);
>   _35 = _14 ^ _32;
>   _34 = _32 & 9223372034707292159;
>   _33 = _14 & 9223372034707292159;
>   _51 = _35 & 9223372039002259456;
>   _52 = _33 + _34;
>   _53 = _52 ^ _51;
>   _54 = VIEW_CONVERT_EXPR<vector(2) float>(_53);
>   _19 = _17 + _18;
>   MEM <vector(2) float> [(float *)&D.2632] = _54;
> The involved constants are 0x7fffffff7fffffff and 0x8000000080000000.

OT, wouldn't it be cheaper to use the 0xffffffff7fffffff and 0x80000000 constants
instead, i.e. for the MSB of the whole word just rely on normal
addition/subtraction behavior, since there we don't need to be afraid of
overflow into another emulated element?
Though, for byte elements maybe 0x7f7f7f7f7f7f7f7f and 0x8080808080808080 are
cheaper to materialize than 0xff7f7f7f7f7f7f7f and 0x0080808080808080.
