On 20/03/2025 08:11, Mingzhu Yan wrote:
This patch supports the svrsw60t59b extension [1], enabling GCC to
recognize and process the svrsw60t59b extension correctly at compile
time.
[1] https://github.com/riscv/Svrsw60t59b
gcc/ChangeLog:
* common/config/riscv/riscv-common.cc: New extension.
On 10/02/2025 08:37, Jin Ma wrote:
On Sun, 09 Feb 2025 14:04:00 +0800, Jin Ma wrote:
PR target/118601
Ok for trunk?
Best regards,
Jin Ma
gcc/ChangeLog:
* config/riscv/riscv.cc (riscv_use_by_pieces_infrastructure_p):
Exclude XTheadVector.
Reported-by: Edwin Lu
---
stack_protect_{set,test}_ were showing up in RTL dumps as
UNSPEC_COPYSIGN and UNSPEC_FMV_X_W due to UNSPEC_SSP_SET and
UNSPEC_SSP_TEST being put in the unspecv enum instead of unspec.
gcc/ChangeLog:
* config/riscv/riscv.md: Move UNSPEC_SSP_SET and UNSPEC_SSP_TEST
to unspec enum.
-
`expand_vec_setmem` only generated vectorized memset if it fitted into a
single vector store of at least (TARGET_MIN_VLEN / 8) bytes. Also,
without dynamic LMUL the operation was always TARGET_MAX_LMUL even if it
would have fitted a smaller LMUL.
Allow vectorized memset to be generated for smalle
For fast unaligned access targets, by pieces uses up to UNITS_PER_WORD
size pieces resulting in more store instructions than needed. For
example gcc.target/riscv/rvv/base/setmem-2.c:f1 built with
`-O3 -march=rv64gcv -mtune=thead-c906`:
```
f1:
vsetivli zero,8,e8,mf2,ta,ma
vm
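To illustrate the trade-off described above, here is a minimal sketch (not GCC's actual code) of the store counts involved: by pieces emits roughly ceil(len / UNITS_PER_WORD) scalar stores, while predicated vector stores can cover up to VLEN/8 bytes each. The constants are assumptions for rv64 with VLEN=128.

```c
#include <assert.h>

#define UNITS_PER_WORD 8    /* rv64: 8-byte scalar stores (assumption) */
#define MIN_VLEN_BYTES 16   /* VLEN=128 -> one vector register = 16 bytes (assumption) */

/* Number of scalar stores the by-pieces expansion would emit. */
int by_pieces_stores (int len)
{
  return (len + UNITS_PER_WORD - 1) / UNITS_PER_WORD;
}

/* Number of predicated vector stores needed for the same length. */
int vector_stores (int len)
{
  return (len + MIN_VLEN_BYTES - 1) / MIN_VLEN_BYTES;
}
```

For a 64-byte memset this model gives 8 scalar stores versus 4 vector stores, which matches the motivation for disabling by pieces above UNITS_PER_WORD-sized lengths on such targets.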
Patch 1-5 of v1 have already been pushed. This is v2 of patch 6 and 7
of that series.
Changes since v1:
RISC-V: Make vectorized memset handle more cases
* Removed vector memset loop generation.
RISC-V: Disable by pieces for vector setmem length > UNITS_PER_WORD
* No changes.
gcc/config/riscv/ri
The function body checks for f3 only ran with -mcmodel explicitly set
which meant I missed a regression in my local testing of:
commit b039d06c9a810a3fab4c5eb9d50b0c7aff94b2d8
Author: Craig Blackmore
Date: Fri Oct 18 09:17:21 2024 -0600
[PATCH 3/7] RISC-V: Fix vector memcpy
On 29/10/2024 15:09, Jeff Law wrote:
On 10/29/24 7:59 AM, Craig Blackmore wrote:
On 19/10/2024 14:05, Jeff Law wrote:
On 10/18/24 7:12 AM, Craig Blackmore wrote:
`expand_vec_setmem` only generated vectorized memset if it fitted into a
single vector store. Extend it to generate a loop
The function body checks for f3 only ran with -mcmodel explicitly set
which meant I missed a regression in my local testing of:
commit b039d06c9a810a3fab4c5eb9d50b0c7aff94b2d8
Author: Craig Blackmore
Date: Fri Oct 18 09:17:21 2024 -0600
[PATCH 3/7] RISC-V: Fix vector memcpy
On 19/10/2024 14:05, Jeff Law wrote:
On 10/18/24 7:12 AM, Craig Blackmore wrote:
`expand_vec_setmem` only generated vectorized memset if it fitted into a
single vector store. Extend it to generate a loop for longer and
unknown lengths.
The test cases now use -O1 so that they are not
On 20/10/2024 17:36, Jeff Law wrote:
On 10/19/24 7:09 AM, Jeff Law wrote:
On 10/18/24 7:13 AM, Craig Blackmore wrote:
For fast unaligned access targets, by pieces uses up to UNITS_PER_WORD
size pieces resulting in more store instructions than needed. For
example gcc.target/riscv/rvv/base
gcc/ChangeLog:
* config/riscv/riscv-string.cc (expand_block_move): Fix
indentation.
---
gcc/config/riscv/riscv-string.cc | 32
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv
This moves the code for deciding whether to generate a vectorized
memcpy, what vector mode to use and whether a loop is needed out of
riscv_vector::expand_block_move and into a new function
riscv_vector::use_stringop_p so that it can be reused for other string
operations.
gcc/ChangeLog:
*
riscv_vector::expand_block_move contains a gen_rtx_NE that uses
uninitialized reg rtx `end`. It looks like `length_rtx` was supposed to
be used here.
gcc/ChangeLog:
* config/riscv/riscv-string.cc (expand_block_move): Replace
`end` with `length_rtx` in gen_rtx_NE.
---
gcc/config/
`expand_vec_setmem` only generated vectorized memset if it fitted into a
single vector store. Extend it to generate a loop for longer and
unknown lengths.
The test cases now use -O1 so that they are not sensitive to scheduling.
gcc/ChangeLog:
* config/riscv/riscv-string.cc
(use_
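The loop generation for unknown lengths described above can be sketched in scalar C as a strip-mined loop; this is an illustrative model of the emitted vector code, not the GCC implementation, and the VLEN_BYTES constant is an assumption.

```c
#include <stddef.h>

#define VLEN_BYTES 16  /* assumed vector register size in bytes (VLEN=128) */

/* Scalar model of a strip-mined vector memset loop: each iteration
   processes vl = min(remaining, VLEN_BYTES) bytes, as a vsetvli would
   compute, then stores them as one predicated vector store (vse8.v)
   would. */
void vec_memset_model (unsigned char *dst, unsigned char c, size_t n)
{
  while (n > 0)
    {
      size_t vl = n < VLEN_BYTES ? n : VLEN_BYTES;  /* models vsetvli */
      for (size_t i = 0; i < vl; i++)               /* models vse8.v */
        dst[i] = c;
      dst += vl;
      n -= vl;
    }
}
```

The final iteration naturally handles the tail (vl < VLEN_BYTES), which is why no separate scalar epilogue is needed when the hardware store is predicated.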
For fast unaligned access targets, by pieces uses up to UNITS_PER_WORD
size pieces resulting in more store instructions than needed. For
example gcc.target/riscv/rvv/base/setmem-1.c:f1 built with
`-O3 -march=rv64gcv -mtune=thead-c906`:
```
f1:
vsetivli zero,8,e8,mf2,ta,ma
vm
Unlike the other vector string ops, expand_block_move was using max LMUL
m8 regardless of TARGET_MAX_LMUL.
The check for whether to generate inline vector code for movmem has been
moved from movmem to riscv_vector::expand_block_move to avoid
maintaining multiple versions of similar logic. They al
If riscv_vector::expand_block_move is generating a straight-line memcpy
using a predicated store, it tries to use a smaller LMUL to reduce
register pressure if it still allows an entire transfer.
This happens in the inner loop of riscv_vector::expand_block_move,
however, the vmode chosen by this l
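The LMUL-selection idea described above can be sketched as follows; this is a simplified model under assumed constants, not the actual riscv_vector::expand_block_move logic.

```c
/* Sketch: for a straight-line (non-looping) transfer, pick the smallest
   LMUL (1, 2, 4 or 8) whose register group still covers the whole
   transfer, capped by the -mrvv-max-lmul limit, to reduce register
   pressure. */
int choose_lmul (int nbytes, int vlen_bytes, int max_lmul)
{
  for (int lmul = 1; lmul < max_lmul; lmul *= 2)
    if (lmul * vlen_bytes >= nbytes)
      return lmul;
  return max_lmul;
}
```

For example, with 16-byte vector registers a 48-byte copy needs only LMUL=4 even when the maximum is m8.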
The main aim of this patch series is to make inline vector memcpy
respect -mrvv-max-lmul and to extend inline vector memset to be used
in more cases. It includes some preparatory fixes and refactoring along
the way.
Craig Blackmore (7):
RISC-V: Fix indentation in riscv_vector
These tests check the sched2 dump, so skip them for optimization levels
that do not enable sched2.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/mcpu-6.c: Skip for -O0, -O1, -Og.
* gcc.target/riscv/mcpu-7.c: Likewise.
---
gcc/testsuite/gcc.target/riscv/mcpu-6.c | 1 +
gcc/testsuite
On 12/05/2020 23:33, Jim Wilson wrote:
> On Mon, Apr 27, 2020 at 10:08 AM Craig Blackmore
> wrote:
>> Thanks for the review. I have updated the following patch with those changes.
> This looks good, and the tree is open for development work again, so I
> committed both parts 1 a
On 08/04/2020 17:04, Jim Wilson wrote:
> On Wed, Feb 19, 2020 at 3:40 AM Craig Blackmore
> wrote:
>> On 10/12/2019 18:28, Craig Blackmore wrote:
>> Thank you for your review. I have posted an updated patch below which I think
>> addresses your comments.
>>
On 10/12/2019 18:28, Craig Blackmore wrote:
>
> Hi Jim,
>
> Thank you for your review. I have posted an updated patch below which I think
> addresses your comments.
>
Ping
https://gcc.gnu.org/ml/gcc-patches/2019-12/msg00712.html
https://gcc.gnu.org/ml/gcc-patches/2019-12/msg00713.html
Craig
Hi Jim,
On 31/10/2019 00:03, Jim Wilson wrote:
> On Fri, Oct 25, 2019 at 10:40 AM Craig Blackmore
> wrote:
>> The sched2 pass undoes some of the addresses generated by the RISC-V
>> shorten_memrefs code size optimization (patch 1/2) and consequently increases
>> code s
Hi Jim,
Thank you for your review. I have posted an updated patch below which I think
addresses your comments.
On 30/10/2019 23:57, Jim Wilson wrote:
> On Fri, Oct 25, 2019 at 10:40 AM Craig Blackmore
> wrote:
>> This patch aims to allow more load/store instructions to be c
This patch aims to allow more load/store instructions to be compressed by
replacing a load/store of 'base register + large offset' with a new load/store
of 'new base + small offset'. If the new base gets stored in a compressed
register, then the new load/store can be compressed. Since there is an o
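The offset-splitting transformation described above can be modeled arithmetically: an access at base + large offset becomes new_base = base + anchor followed by an access at new_base + small offset, where the small offset fits a compressed (RVC) load/store. RVC word accesses allow offsets 0..124 in multiples of 4, so anchoring to a 128-byte boundary keeps the residue in range. The anchoring scheme below is illustrative, not the patch's exact code.

```c
#include <stdint.h>

/* Split a large offset into a 128-byte-aligned anchor (folded into a
   new base register) and a small residual offset that fits an RVC
   load/store immediate. */
void split_offset (int32_t off, int32_t *anchor, int32_t *small)
{
  *anchor = off & ~0x7f;   /* new base = old base + anchor */
  *small  = off & 0x7f;    /* 0..127; multiples of 4 stay <= 124 */
}
```

Several nearby accesses sharing the same anchor can then reuse one new base register, which is what makes the transformation profitable for code size despite the extra add.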
The sched2 pass undoes some of the addresses generated by the RISC-V
shorten_memrefs code size optimization (patch 1/2) and consequently increases
code size. This patch prevents sched-deps.c from changing an address if it is
expected to increase address cost.
Tested on bare metal rv32i, rv32iac, r
reg, so that it increases code size?
>
Before reload, we do not know whether the base reg will be a compressed register
or not.
>
> On Fri, Sep 13, 2019 at 12:20 AM Craig Blackmore
> wrote:
>>
>> This patch aims to allow more load/store instructions to be compressed by
>
This patch aims to allow more load/store instructions to be compressed by
replacing a load/store of 'base register + large offset' with a new load/store
of 'new base + small offset'. If the new base gets stored in a compressed
register, then the new load/store can be compressed. Since there is an o