https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111166

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2023-08-28
     Ever confirmed|0                           |1

--- Comment #1 from Richard Biener <rguenth at gcc dot gnu.org> ---
Confirmed.  We vectorize the scalar code:

> ./cc1 -quiet t.i -O2 -fopt-info-vec -fdump-tree-slp-details
weird_gcc_behaviour.c:15:41: optimized: basic block part vectorized using 16 byte vectors
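
For reference, the source looks roughly like this (reconstructed from the
dumps below; the typedef, the struct layout and the initializer style are
my assumptions):

#include <stdint.h>

typedef unsigned int u32;

/* Four u32s in a 16-byte struct, passed by value.  */
struct quad_u32 { u32 a, b, c, d; };

extern uint64_t do_smth_with_4_u32 (struct quad_u32);

uint64_t
turn_into_struct (u32 a, u32 b, u32 c, u32 d)
{
  struct quad_u32 q = { a, b, c, d };
  return do_smth_with_4_u32 (q);
}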

generating

uint64_t turn_into_struct (u32 a, u32 b, u32 c, u32 d)
{
  u32 * vectp.4;
  vector(4) unsigned int * vectp.3;
  struct quad_u32 D.2865;
  uint64_t _11;
  vector(4) unsigned int _13;

  <bb 2> [local count: 1073741824]:
  _13 = {a_2(D), b_4(D), c_6(D), d_8(D)};
  MEM <vector(4) unsigned int> [(unsigned int *)&D.2865] = _13;
  _11 = do_smth_with_4_u32 (D.2865);
  D.2865 ={v} {CLOBBER(eol)};
  return _11;

and

weird_gcc_behaviour.c:15:41: note: Cost model analysis:
a_2(D) 1 times scalar_store costs 12 in body
b_4(D) 1 times scalar_store costs 12 in body
c_6(D) 1 times scalar_store costs 12 in body
d_8(D) 1 times scalar_store costs 12 in body
a_2(D) 1 times vector_store costs 12 in body
node 0x5d35f70 1 times vec_construct costs 36 in prologue
weird_gcc_behaviour.c:15:41: note: Cost model analysis for part in loop 0:
  Vector cost: 48
  Scalar cost: 48
weird_gcc_behaviour.c:15:41: note: Basic block will be vectorized using SLP

we are choosing the vector side at the same cost (four scalar stores at
12 each = 48 vs. one vector store at 12 plus a vec_construct at 36 = 48)
because we assume it will win on code size.  Practically a single vector
store instead of four scalar stores is also good for store forwarding:
the two 8-byte loads that set up the argument registers are each fully
covered by the one 16-byte store, while forwarding from several narrower
stores would stall on most microarchitectures.

We get

turn_into_struct:
.LFB0:
        .cfi_startproc
        movd    %edi, %xmm1
        movd    %esi, %xmm4
        movd    %edx, %xmm0
        movd    %ecx, %xmm3
        punpckldq       %xmm4, %xmm1
        punpckldq       %xmm3, %xmm0
        movdqa  %xmm1, %xmm2
        punpcklqdq      %xmm0, %xmm2
        movaps  %xmm2, -24(%rsp)
        movq    -24(%rsp), %rdi
        movq    -16(%rsp), %rsi
        jmp     do_smth_with_4_u32

instead of (-fno-tree-vectorize)

turn_into_struct:
.LFB0:
        .cfi_startproc
        xorl    %eax, %eax
        movl    %ecx, %r8d
        movl    %edi, %ecx
        movl    %edx, %r9d
        movabsq $-4294967296, %r10
        movq    %rax, %rdi
        xorl    %edx, %edx
        salq    $32, %r8
        andq    %r10, %rdi
        orq     %rcx, %rdi
        movq    %rsi, %rcx
        salq    $32, %rcx
        movl    %edi, %esi
        orq     %rcx, %rsi
        movq    %rdx, %rcx
        andq    %r10, %rcx
        movq    %rsi, %rdi
        orq     %r9, %rcx
        movl    %ecx, %ecx
        orq     %r8, %rcx
        movq    %rcx, %rsi
        jmp     do_smth_with_4_u32

and our guess for code size is correct (47 bytes for the vector variant,
67 bytes for the scalar one).  The latency of the scalar code is also
quite a bit higher.  The spilling should be OK, the store should forward
nicely.

Unless you can come up with an actual benchmark showing the vector code
is slower I'd say it's not a bug.  Given it's smaller it should win on
the icache side as well if it isn't executed frequently.
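
(If somebody wants to measure: a minimal harness along the lines below
could be used, built on the testcase sketch above - entirely untested,
noipa and the iteration count are just there to keep the callee opaque
and the loop alive.)

#include <stdint.h>

typedef unsigned int u32;
struct quad_u32 { u32 a, b, c, d; };

/* Opaque callee so the call and the argument passing survive.  */
__attribute__((noipa)) uint64_t
do_smth_with_4_u32 (struct quad_u32 q)
{
  return (uint64_t) q.a + q.b + q.c + q.d;
}

extern uint64_t turn_into_struct (u32, u32, u32, u32);

int
main (void)
{
  uint64_t sum = 0;
  for (u32 i = 0; i < 1000000000; ++i)
    sum += turn_into_struct (i, i + 1, i + 2, i + 3);
  /* Consume the result so the loop is not optimized away.  */
  return sum != 0 ? 0 : 1;
}

Build it together with the testcase at -O2 vs. -O2 -fno-tree-vectorize
and compare run times.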

So - not a bug?

The spilling could be avoided by using movq, movhlps + movq, but this is
call argument handling, so it is possibly difficult to achieve.
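
I.e. the tail could become (untested sketch, with %xmm2 holding the
constructed vector as in the dump above):

        movq    %xmm2, %rdi
        movhlps %xmm2, %xmm2
        movq    %xmm2, %rsi
        jmp     do_smth_with_4_u32

which drops the movaps spill and the two stack reloads.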
