[Bug c++/97642] New: Incorrect replacement of vmovdqu32 with vpblendd can cause fault

2020-10-30 Thread goldstein.w.n at gmail dot com via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97642

Bug ID: 97642
   Summary: Incorrect replacement of vmovdqu32 with vpblendd can
cause fault
   Product: gcc
   Version: 10.2.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c++
  Assignee: unassigned at gcc dot gnu.org
  Reporter: goldstein.w.n at gmail dot com
  Target Milestone: ---

GCC sometimes replaces

_mm256_mask_loadu_epi32(__m256i src, __mmask8 k, void const *mem_addr) // vmovdqu32

with

vpblendd

If mem_addr points to a memory region with fewer than 32 bytes of accessible
memory, and k is a mask that would prevent reading the inaccessible bytes
from mem_addr, the replacement causes a fault: the masked vmovdqu32
suppresses faults on masked-off elements, whereas the vpblendd sequence
performs a full unmasked 32-byte load.

See: https://godbolt.org/z/Y5cTxz
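
A minimal sketch of the failure mode (my reproducer, not the godbolt
testcase; assumes an AVX-512VL target, e.g. '-O2 -march=skylake-avx512'):

```
#include <immintrin.h>
#include <sys/mman.h>

int
main (void)
{
  /* Two adjacent pages; make the second one inaccessible.  */
  char *p = (char *) mmap (0, 8192, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  mprotect (p + 4096, 4096, PROT_NONE);

  /* Mask 0x0f selects only the first four dwords (16 bytes), all of
     which lie before the protected page.  A real masked vmovdqu32
     cannot fault here; a vpblendd substitution performs a full
     unmasked 32-byte load that crosses into the protected page.  */
  __m256i v = _mm256_mask_loadu_epi32 (_mm256_setzero_si256 (), 0x0f,
                                       p + 4096 - 16);
  return _mm256_extract_epi32 (v, 0);
}
```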

[Bug target/106038] New: x86_64 vectorization of ALU ops using xmm registers prematurely

2022-06-20 Thread goldstein.w.n at gmail dot com via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106038

Bug ID: 106038
   Summary: x86_64 vectorization of ALU ops using xmm registers
prematurely
   Product: gcc
   Version: 13.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: target
  Assignee: unassigned at gcc dot gnu.org
  Reporter: goldstein.w.n at gmail dot com
  Target Milestone: ---

See: https://godbolt.org/z/YxWEn6Y65

Basically, in all cases where the total amount of memory touched is <= 8 bytes
(one word), the vectorization pass chooses to vectorize the unrolled loops
with xmm registers, which is inefficient.

Using GPRs instead (as GCC <= 9.5 did) is faster and produces less code.
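
A hypothetical reduction of the pattern (my example, not the testcase from
the report): the loop touches exactly 8 bytes in total.

```
/* All eight byte-ORs fit in one machine word, so a single 64-bit GPR
   `or` (what GCC <= 9.5 reportedly emitted via store-merging) beats
   vectorizing the unrolled loop through an xmm register.  */
void
or_8_bytes (unsigned char *dst, const unsigned char *src)
{
  for (int i = 0; i < 8; ++i)
    dst[i] |= src[i];
}
```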


Related to: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106022

[Bug target/106038] x86_64 vectorization of ALU ops using xmm registers prematurely

2022-06-20 Thread goldstein.w.n at gmail dot com via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106038

--- Comment #2 from Noah Goldstein  ---
(In reply to Andrew Pinski from comment #1)
> Created attachment 53175 [details]
> Testcase -O3 -march=icelake-client
> 
> Next time attach the testcase and not link to godbolt without a testcase.

Sorry.

I tried playing around in i386.cc to see if modifying the `stmt_cost` for the
`BIT_{AND|IOR|XOR}_EXPR` cases would help, but it didn't seem to have any
effect. Do you know where I might look to fix this?

[Bug target/106038] x86_64 vectorization of ALU ops using xmm registers prematurely

2022-06-21 Thread goldstein.w.n at gmail dot com via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106038

--- Comment #5 from Noah Goldstein  ---
(In reply to Richard Biener from comment #4)
> The vectorizer does not anticipate store-merging performing "vectorization"
> in GPRs and thus the scalar cost is off (it also doesn't anticipate the
> different
> ISA constraints wrt xmm vs gpr usage).
> 
> I wonder if we should try to follow what store-merging would do with respect
> to "vector types", thus prefer "general vectors" (but explicitly via integer
> types since we can't have vector types with both integer and vector modes)
> when possible (for bit operations and plain copies).
> 
> Scanning over an SLP instance (group) and substituting integer types for
> SLP_TREE_VECTYPE might be possible.  Doing this nicely somewhere is going to
> be more interesting.  Longer term, vectorizable_* should compute a set of
> { vector-type, cost } pairs from the set of input operand vector-type[, cost]
> pair sets.  Not having "generic" vectors as vector types and relying on
> vector lowering to expand them would be an incremental support step for
> this, I guess.
> 
> "backwards STV" could of course also work on the target side.

backwards STV?
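
For context, a minimal sketch (mine, not from the thread) of store-merging
performing "vectorization" in GPRs, as described above:

```
/* Store-merging combines these four 16-bit stores into one 64-bit
   immediate store through a general-purpose register; no vector
   registers are involved.  This is the scalar cost the vectorizer's
   model doesn't anticipate.  */
void
store_merged (unsigned short *p)
{
  p[0] = 1;
  p[1] = 2;
  p[2] = 3;
  p[3] = 4;
}
```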

[Bug target/106060] New: Inefficient constant broadcast on x86_64

2022-06-22 Thread goldstein.w.n at gmail dot com via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106060

Bug ID: 106060
   Summary: Inefficient constant broadcast on x86_64
   Product: gcc
   Version: 13.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: target
  Assignee: unassigned at gcc dot gnu.org
  Reporter: goldstein.w.n at gmail dot com
  Target Milestone: ---

```
#include <immintrin.h>

__m256i
shouldnt_have_movabs ()
{
  return _mm256_set1_epi8 (123);
}

__m256i
should_be_cmpeq_abs ()
{
  return _mm256_set1_epi8 (1);
}

__m256i
should_be_cmpeq_add ()
{
  return _mm256_set1_epi8 (-2);
}
```

Compiled with: '-O3 -march=x86-64-v3'

Results in:
```
Disassembly of section .text:

0000000000000000 <shouldnt_have_movabs>:
   0:   48 b8 7b 7b 7b 7b 7b    movabs $0x7b7b7b7b7b7b7b7b,%rax
   7:   7b 7b 7b
   a:   c4 e1 f9 6e c8          vmovq  %rax,%xmm1
   f:   c4 e2 7d 59 c1          vpbroadcastq %xmm1,%ymm0
  14:   c3                      retq
  15:   66 66 2e 0f 1f 84 00    data16 nopw %cs:0x0(%rax,%rax,1)
  1c:   00 00 00 00

0000000000000020 <should_be_cmpeq_abs>:
  20:   48 b8 01 01 01 01 01    movabs $0x101010101010101,%rax
  27:   01 01 01
  2a:   c4 e1 f9 6e c8          vmovq  %rax,%xmm1
  2f:   c4 e2 7d 59 c1          vpbroadcastq %xmm1,%ymm0
  34:   c3                      retq
  35:   66 66 2e 0f 1f 84 00    data16 nopw %cs:0x0(%rax,%rax,1)
  3c:   00 00 00 00

0000000000000040 <should_be_cmpeq_add>:
  40:   48 b8 fe fe fe fe fe    movabs $0xfefefefefefefefe,%rax
  47:   fe fe fe
  4a:   c4 e1 f9 6e c8          vmovq  %rax,%xmm1
  4f:   c4 e2 7d 59 c1          vpbroadcastq %xmm1,%ymm0
  54:   c3                      retq
```

Compiled with: '-O3 -march=x86-64-v4'

Results in:
```
0000000000000000 <shouldnt_have_movabs>:
   0:   48 b8 7b 7b 7b 7b 7b    movabs $0x7b7b7b7b7b7b7b7b,%rax
   7:   7b 7b 7b
   a:   62 f2 fd 28 7c c0       vpbroadcastq %rax,%ymm0
  10:   c3                      retq
  11:   66 66 2e 0f 1f 84 00    data16 nopw %cs:0x0(%rax,%rax,1)
  18:   00 00 00 00
  1c:   0f 1f 40 00             nopl   0x0(%rax)

0000000000000020 <should_be_cmpeq_abs>:
  20:   48 b8 01 01 01 01 01    movabs $0x101010101010101,%rax
  27:   01 01 01
  2a:   62 f2 fd 28 7c c0       vpbroadcastq %rax,%ymm0
  30:   c3                      retq
  31:   66 66 2e 0f 1f 84 00    data16 nopw %cs:0x0(%rax,%rax,1)
  38:   00 00 00 00
  3c:   0f 1f 40 00             nopl   0x0(%rax)

0000000000000040 <should_be_cmpeq_add>:
  40:   48 b8 fe fe fe fe fe    movabs $0xfefefefefefefefe,%rax
  47:   fe fe fe
  4a:   62 f2 fd 28 7c c0       vpbroadcastq %rax,%ymm0
  50:   c3                      retq
```


All functions, on both targets, are suboptimal.

The constants 1 and -2 (should_be_cmpeq_abs / should_be_cmpeq_add) can be
generated without any lane-crossing broadcast.

A constant like 123 shouldn't first be splatted into an imm64; that requires
a 10-byte `movabs` and wastes space.
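
The function names already hint at the expected sequences. A sketch in
intrinsics (my illustration, assuming AVX2) of constructions that need no GPR
and no lane-crossing broadcast; 123 itself could instead come from a one-byte
constant-pool entry via `vpbroadcastb`:

```
#include <immintrin.h>

/* set1_epi8(1) as vpcmpeqd + vpabsb: all-ones, then |-1| == 1 per byte.  */
__m256i
cmpeq_abs_1 (void)
{
  __m256i m1 = _mm256_cmpeq_epi32 (_mm256_setzero_si256 (),
                                   _mm256_setzero_si256 ());
  return _mm256_abs_epi8 (m1);
}

/* set1_epi8(-2) as vpcmpeqd + vpaddb: -1 + -1 == -2 per byte.  */
__m256i
cmpeq_add_m2 (void)
{
  __m256i m1 = _mm256_cmpeq_epi32 (_mm256_setzero_si256 (),
                                   _mm256_setzero_si256 ());
  return _mm256_add_epi8 (m1, m1);
}
```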