Re: [PATCH] libatomic: Improve ifunc selection on AArch64

2023-08-10 Thread Richard Henderson via Gcc-patches
On 8/10/23 02:50, Wilco Dijkstra wrote: Hi Richard, Why would HWCAP_USCAT not be set by the kernel? Failing that, I would think you would check ID_AA64MMFR2_EL1.AT. Answering my own question, N1 does not officially have FEAT_LSE2. It doesn't indeed. However most cores support atomic 128-bi

Re: [PATCH] libatomic: Improve ifunc selection on AArch64

2023-08-09 Thread Richard Henderson via Gcc-patches
On 8/9/23 19:11, Richard Henderson wrote: On 8/4/23 08:05, Wilco Dijkstra via Gcc-patches wrote: +#ifdef HWCAP_USCAT + +#define MIDR_IMPLEMENTOR(midr)    (((midr) >> 24) & 255) +#define MIDR_PARTNUM(midr)    (((midr) >> 4) & 0xfff) + +static inline bool +ifunc1 (unsigned long hwcap) +{ +  if (hw

Re: [PATCH] libatomic: Improve ifunc selection on AArch64

2023-08-09 Thread Richard Henderson via Gcc-patches
On 8/4/23 08:05, Wilco Dijkstra via Gcc-patches wrote: +#ifdef HWCAP_USCAT + +#define MIDR_IMPLEMENTOR(midr) (((midr) >> 24) & 255) +#define MIDR_PARTNUM(midr) (((midr) >> 4) & 0xfff) + +static inline bool +ifunc1 (unsigned long hwcap) +{ + if (hwcap & HWCAP_USCAT) +return true; + if (!
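The truncated snippet above shows MIDR field extraction used for the libatomic ifunc selector. A minimal self-contained sketch of the idea follows; the HWCAP_USCAT bit value and the Neoverse N1 part number (0xd0c) used here are illustrative assumptions, not taken from the patch text.

```c
#include <stdbool.h>

/* Field extraction from a MIDR_EL1 register value, mirroring the
   macros quoted in the patch snippet above.  */
#define MIDR_IMPLEMENTOR(midr)  (((midr) >> 24) & 255)
#define MIDR_PARTNUM(midr)      (((midr) >> 4) & 0xfff)

/* Illustrative stand-in for the real HWCAP_USCAT bit (assumption;
   the actual value comes from <asm/hwcap.h>).  */
#define HWCAP_USCAT_SKETCH 0x1000000UL

/* Hypothetical selector: use the LSE2 path when the kernel reports
   FEAT_LSE2 via HWCAP, else fall back to an allow-list of cores
   known to handle 128-bit atomics (0x41 = Arm, 0xd0c = assumed
   Neoverse N1 part number).  */
static inline bool
use_lse2 (unsigned long hwcap, unsigned long midr)
{
  if (hwcap & HWCAP_USCAT_SKETCH)
    return true;
  return MIDR_IMPLEMENTOR (midr) == 0x41
         && MIDR_PARTNUM (midr) == 0xd0c;
}
```

The shifts match the architectural MIDR_EL1 layout: implementor in bits [31:24], part number in bits [15:4].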

[PATCH] MAINTAINERS: Update my email address.

2022-04-19 Thread Richard Henderson via Gcc-patches
2022-04-19 Richard Henderson * MAINTAINERS: Update my email address. --- MAINTAINERS | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/MAINTAINERS b/MAINTAINERS index 30f81b3dd52..15973503722 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -53,7 +53,7 @@ aarch64 port

[PATCH v4 11/12] aarch64: Accept 0 as first argument to compares

2020-04-09 Thread Richard Henderson via Gcc-patches
While cmp (extended register) and cmp (immediate) uses , cmp (shifted register) uses . So we can perform cmp xzr, x0. For ccmp, we only have as an input. * config/aarch64/aarch64.md (cmp): For operand 0, use aarch64_reg_or_zero. Shuffle reg/reg to last alternative and a

[PATCH v4 12/12] aarch64: Implement TImode comparisons

2020-04-09 Thread Richard Henderson via Gcc-patches
* config/aarch64/aarch64-modes.def (CC_NV): New. * config/aarch64/aarch64.c (aarch64_gen_compare_reg): Expand all of the comparisons for TImode, not just NE. (aarch64_select_cc_mode): Recognize cmp_carryin. (aarch64_get_condition_code_1): Handle CC_NVmode.

[PATCH v4 07/12] aarch64: Rename CC_ADCmode to CC_NOTCmode

2020-04-09 Thread Richard Henderson via Gcc-patches
We are about to use !C in more contexts than add-with-carry. Choose a more generic name. * config/aarch64/aarch64-modes.def (CC_NOTC): Rename CC_ADC. * config/aarch64/aarch64.c (aarch64_select_cc_mode): Update. (aarch64_get_condition_code_1): Likewise. * config/aarc

[PATCH v4 10/12] aarch64: Adjust result of aarch64_gen_compare_reg

2020-04-09 Thread Richard Henderson via Gcc-patches
Return the entire comparison expression, not just the cc_reg. This will allow the routine to adjust the comparison code as needed for TImode comparisons. Note that some users were passing e.g. EQ to aarch64_gen_compare_reg and then using gen_rtx_NE. Pass the proper code in the first place.

[PATCH v4 01/12] aarch64: Provide expander for sub3_compare1

2020-04-09 Thread Richard Henderson via Gcc-patches
In one place we open-code a special case of this pattern into the more specific sub3_compare1_imm, and miss this special case in other places. Centralize that special case into an expander. * config/aarch64/aarch64.md (*sub3_compare1): Rename from sub3_compare1. (sub3_comp

[PATCH v4 09/12] aarch64: Use CC_NOTCmode for double-word subtract

2020-04-09 Thread Richard Henderson via Gcc-patches
We have been using CCmode, which is not correct for this case. Mirror the same code from the arm target. * config/aarch64/aarch64.c (aarch64_select_cc_mode): Recognize usub*_carryinC patterns. * config/aarch64/aarch64.md (usubvti4): Use CC_NOTC. (usub3_carryinC): Li

[PATCH v4 06/12] aarch64: Introduce aarch64_expand_addsubti

2020-04-09 Thread Richard Henderson via Gcc-patches
Modify aarch64_expand_subvti into a form that handles all addition and subtraction, modulo, signed or unsigned overflow. Use expand_insn to put the operands into the proper form, and do not force values into registers if not required. * config/aarch64/aarch64.c (aarch64_ti_split) New.

[PATCH v4 08/12] arm: Merge CC_ADC and CC_B to CC_NOTC

2020-04-09 Thread Richard Henderson via Gcc-patches
These CC_MODEs are identical, merge them into a more generic name. * config/arm/arm-modes.def (CC_NOTC): New. (CC_ADC, CC_B): Remove. * config/arm/arm.c (arm_select_cc_mode): Update to match. (arm_gen_dicompare_reg): Likewise. (maybe_get_arm_condition_code):

[PATCH v4 05/12] aarch64: Improvements to aarch64_select_cc_mode from arm

2020-04-09 Thread Richard Henderson via Gcc-patches
The arm target has some improvements over aarch64 for double-word arithmetic and comparisons. * config/aarch64/aarch64.c (aarch64_select_cc_mode): Check for swapped operands to CC_Cmode; check for zero_extend to CC_ADCmode; check for swapped operands to CC_Vmode. --- gcc/c

[PATCH v4 03/12] aarch64: Add cset, csetm, cinc patterns for carry/borrow

2020-04-09 Thread Richard Henderson via Gcc-patches
Some implementations have a higher cost for the csel insn (and its specializations) than they do for adc/sbc. * config/aarch64/aarch64.md (*cstore_carry): New. (*cstoresi_carry_uxtw): New. (*cstore_borrow): New. (*cstoresi_borrow_uxtw): New. (*csinc2_carry):

[PATCH v4 04/12] aarch64: Add const_dword_umaxp1

2020-04-09 Thread Richard Henderson via Gcc-patches
Rather than duplicating the rather verbose integral test, pull it out to a predicate. * config/aarch64/predicates.md (const_dword_umaxp1): New. * config/aarch64/aarch64.c (aarch64_select_cc_mode): Use it. * config/aarch64/aarch64.md (add*add3_carryinC): Likewise. (*

[PATCH v4 00/12] aarch64: Implement TImode comparisons

2020-04-09 Thread Richard Henderson via Gcc-patches
This is attacking case 3 of PR 94174. In v4, I attempt to bring over as many patterns from config/arm as are applicable. It's not too far away from what I had from v2. In the process of checking all of the combinations (below), I discovered that we could probably have a better representation for

[PATCH v4 02/12] aarch64: Match add3_carryin expander and insn

2020-04-09 Thread Richard Henderson via Gcc-patches
The expander and insn predicates do not match, which can lead to insn recognition errors. * config/aarch64/aarch64.md (add3_carryin): Use register_operand instead of aarch64_reg_or_zero. --- gcc/config/aarch64/aarch64.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) d

Re: [PATCH v2 00/11] aarch64: Implement TImode comparisons

2020-04-07 Thread Richard Henderson via Gcc-patches
On 4/7/20 4:58 PM, Segher Boessenkool wrote: >> I wonder if it would be helpful to have >> >> (uoverflow_plus x y carry) >> (soverflow_plus x y carry) >> >> etc. > > Those have three operands, which is nasty to express. How so? It's a perfectly natural operation. > On rs6000 we have the car

Re: [PATCH v2 00/11] aarch64: Implement TImode comparisons

2020-04-07 Thread Richard Henderson via Gcc-patches
On 4/7/20 9:32 AM, Richard Sandiford wrote: > It's not really reversibility that I'm after (at least not for its > own sake). > > If we had a three-input compare_cc rtx_code that described a comparison > involving a carry input, we'd certainly be using it here, because that's > what the instructio

[PATCH v2 09/11] aarch64: Adjust result of aarch64_gen_compare_reg

2020-04-02 Thread Richard Henderson via Gcc-patches
Return the entire comparison expression, not just the cc_reg. This will allow the routine to adjust the comparison code as needed for TImode comparisons. Note that some users were passing e.g. EQ to aarch64_gen_compare_reg and then using gen_rtx_NE. Pass the proper code in the first place.

[PATCH v2 04/11] aarch64: Introduce aarch64_expand_addsubti

2020-04-02 Thread Richard Henderson via Gcc-patches
Modify aarch64_expand_subvti into a form that handles all addition and subtraction, modulo, signed or unsigned overflow. Use expand_insn to put the operands into the proper form, and do not force values into registers if not required. * config/aarch64/aarch64.c (aarch64_ti_split) New.

[PATCH v2 05/11] aarch64: Use UNSPEC_SBCS for subtract-with-borrow + output flags

2020-04-02 Thread Richard Henderson via Gcc-patches
The rtl description of signed/unsigned overflow from subtract was fine, as far as it goes -- we have CC_Cmode and CC_Vmode that indicate that only those particular bits are valid. However, it's not clear how to extend that description to handle signed comparison, where N == V (GE) N != V (LT) are

[PATCH v2 07/11] aarch64: Remove CC_ADCmode

2020-04-02 Thread Richard Henderson via Gcc-patches
Now that we're using UNSPEC_ADCS instead of rtl, there's no reason to distinguish CC_ADCmode from CC_Cmode. Both examine only the C bit. Within uaddvti4, using CC_Cmode is clearer, since it's the carry-out that's relevant. * config/aarch64/aarch64-modes.def (CC_ADC): Remove. * con

[PATCH v2 11/11] aarch64: Implement absti2

2020-04-02 Thread Richard Henderson via Gcc-patches
* config/aarch64/aarch64.md (absti2): New. --- gcc/config/aarch64/aarch64.md | 29 + 1 file changed, 29 insertions(+) diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md index cf716f815a1..4a30d4cca93 100644 --- a/gcc/config/aarch64/aarch

[PATCH v2 08/11] aarch64: Accept -1 as second argument to add3_carryin

2020-04-02 Thread Richard Henderson via Gcc-patches
* config/aarch64/predicates.md (aarch64_reg_or_minus1): New. * config/aarch64/aarch64.md (add3_carryin): Use it. (*add3_carryin): Likewise. (*addsi3_carryin_uxtw): Likewise. --- gcc/config/aarch64/aarch64.md| 26 +++--- gcc/config/aarch64/pre

[PATCH v2 06/11] aarch64: Use UNSPEC_ADCS for add-with-carry + output flags

2020-04-02 Thread Richard Henderson via Gcc-patches
Similar to UNSPEC_SBCS, we can unify the signed/unsigned overflow paths by using an unspec. Accept -1 for the second input by using SBCS. * config/aarch64/aarch64.md (UNSPEC_ADCS): New. (addvti4, uaddvti4): Use adddi_carryin_cmp. (add3_carryinC): Remove. (*add3_car

[PATCH v2 10/11] aarch64: Implement TImode comparisons

2020-04-02 Thread Richard Henderson via Gcc-patches
Use ccmp to perform all TImode comparisons branchless. * config/aarch64/aarch64.c (aarch64_gen_compare_reg): Expand all of the comparisons for TImode, not just NE. * config/aarch64/aarch64.md (cbranchti4, cstoreti4): New. --- gcc/config/aarch64/aarch64.c | 122 +++

[PATCH v2 01/11] aarch64: Accept 0 as first argument to compares

2020-04-02 Thread Richard Henderson via Gcc-patches
While cmp (extended register) and cmp (immediate) uses , cmp (shifted register) uses . So we can perform cmp xzr, x0. For ccmp, we only have as an input. * config/aarch64/aarch64.md (cmp): For operand 0, use aarch64_reg_or_zero. Shuffle reg/reg to last alternative and a

[PATCH v2 00/11] aarch64: Implement TImode comparisons

2020-04-02 Thread Richard Henderson via Gcc-patches
This is attacking case 3 of PR 94174. In v2, I unify the various subtract-with-borrow and add-with-carry patterns that also output flags with unspecs. As suggested by Richard Sandiford during review of v1. It does seem cleaner. r~ Richard Henderson (11): aarch64: Accept 0 as first argument

[PATCH v2 02/11] aarch64: Accept zeros in add3_carryin

2020-04-02 Thread Richard Henderson via Gcc-patches
The expander and the insn pattern did not match, leading to recognition failures in expand. * config/aarch64/aarch64.md (*add3_carryin): Accept zeros. --- gcc/config/aarch64/aarch64.md | 9 + 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/gcc/config/aarch64/aarch64.

[PATCH v2 03/11] aarch64: Provide expander for sub3_compare1

2020-04-02 Thread Richard Henderson via Gcc-patches
In one place we open-code a special case of this pattern into the more specific sub3_compare1_imm, and miss this special case in other places. Centralize that special case into an expander. * config/aarch64/aarch64.md (*sub3_compare1): Rename from sub3_compare1. (sub3_comp

Re: [PATCH v2 3/9] aarch64: Add cmp_*_carryinC patterns

2020-04-01 Thread Richard Henderson via Gcc-patches
On 4/1/20 9:28 AM, Richard Sandiford wrote: > How important is it to describe the flags operation as a compare though? > Could we instead use an unspec with three inputs, and keep it as :CC? > That would still allow special-case matching for zero operands. I'm not sure. My guess is that the only

Re: [PATCH v2 3/9] aarch64: Add cmp_*_carryinC patterns

2020-03-31 Thread Richard Henderson via Gcc-patches
On 3/31/20 11:34 AM, Richard Sandiford wrote: >> +(define_insn "*cmp3_carryinC" >> + [(set (reg:CC CC_REGNUM) >> +(compare:CC >> + (ANY_EXTEND: >> +(match_operand:GPI 0 "register_operand" "r")) >> + (plus: >> +(ANY_EXTEND: >> + (match_operand:GPI 1 "register_

Re: [PATCH v2 1/9] aarch64: Accept 0 as first argument to compares

2020-03-31 Thread Richard Henderson via Gcc-patches
On 3/31/20 9:55 AM, Richard Sandiford wrote: >> (define_insn "cmp" >>[(set (reg:CC CC_REGNUM) >> -(compare:CC (match_operand:GPI 0 "register_operand" "rk,rk,rk") >> -(match_operand:GPI 1 "aarch64_plus_operand" "r,I,J")))] >> +(compare:CC (match_operand:GPI 0 "aarch64_re

Re: [PATCH v2 3/9] aarch64: Add cmp_*_carryinC patterns

2020-03-22 Thread Richard Henderson via Gcc-patches
On 3/22/20 12:30 PM, Segher Boessenkool wrote: > Hi! > > On Fri, Mar 20, 2020 at 07:42:25PM -0700, Richard Henderson via Gcc-patches > wrote: >> Duplicate all usub_*_carryinC, but use xzr for the output when we >> only require the flags output. The signed versions use s

[PATCH v2 7/9] aarch64: Adjust result of aarch64_gen_compare_reg

2020-03-20 Thread Richard Henderson via Gcc-patches
Return the entire comparison expression, not just the cc_reg. This will allow the routine to adjust the comparison code as needed for TImode comparisons. Note that some users were passing e.g. EQ to aarch64_gen_compare_reg and then using gen_rtx_NE. Pass the proper code in the first place.

[PATCH v2 8/9] aarch64: Implement TImode comparisons

2020-03-20 Thread Richard Henderson via Gcc-patches
Use ccmp to perform all TImode comparisons branchless. * config/aarch64/aarch64.c (aarch64_gen_compare_reg): Expand all of the comparisons for TImode, not just NE. * config/aarch64/aarch64.md (cbranchti4, cstoreti4): New. --- gcc/config/aarch64/aarch64.c | 130 +++

[PATCH v2 6/9] aarch64: Introduce aarch64_expand_addsubti

2020-03-20 Thread Richard Henderson via Gcc-patches
Modify aarch64_expand_subvti into a form that handles all addition and subtraction, modulo, signed or unsigned overflow. Use expand_insn to put the operands into the proper form, and do not force values into registers if not required. * config/aarch64/aarch64.c (aarch64_ti_split) New.

[PATCH v2 1/9] aarch64: Accept 0 as first argument to compares

2020-03-20 Thread Richard Henderson via Gcc-patches
While cmp (extended register) and cmp (immediate) uses , cmp (shifted register) uses . So we can perform cmp xzr, x0. For ccmp, we only have as an input. * config/aarch64/aarch64.md (cmp): For operand 0, use aarch64_reg_or_zero. Shuffle reg/reg to last alternative and a

[PATCH v2 9/9] aarch64: Implement absti2

2020-03-20 Thread Richard Henderson via Gcc-patches
* config/aarch64/aarch64.md (absti2): New. --- gcc/config/aarch64/aarch64.md | 30 ++ 1 file changed, 30 insertions(+) diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md index 284a8038e28..7a112f89487 100644 --- a/gcc/config/aarch64/aarc

[PATCH v2 5/9] aarch64: Provide expander for sub3_compare1

2020-03-20 Thread Richard Henderson via Gcc-patches
In a couple of places we open-code a special case of this pattern into the more specific sub3_compare1_imm. Centralize that special case into an expander. * config/aarch64/aarch64.md (*sub3_compare1): Rename from sub3_compare1. (sub3_compare1): New expander. --- gcc/config

[PATCH v2 3/9] aarch64: Add cmp_*_carryinC patterns

2020-03-20 Thread Richard Henderson via Gcc-patches
Duplicate all usub_*_carryinC, but use xzr for the output when we only require the flags output. The signed versions use sign_extend instead of zero_extend for combine's benefit. These will be used shortly for TImode comparisons. * config/aarch64/aarch64.md (cmp3_carryinC): New.

[PATCH v2 4/9] aarch64: Add cmp_carryinC_m2

2020-03-20 Thread Richard Henderson via Gcc-patches
Combine will fold immediate -1 differently than the other *cmp*_carryinC* patterns. In this case we can use adcs with an xzr input, and it occurs frequently when comparing 128-bit values to small negative constants. * config/aarch64/aarch64.md (cmp_carryinC_m2): New. --- gcc/config/aarch

[PATCH v2 2/9] aarch64: Accept zeros in add3_carryin

2020-03-20 Thread Richard Henderson via Gcc-patches
The expander and the insn pattern did not match, leading to recognition failures in expand. * config/aarch64/aarch64.md (*add3_carryin): Accept zeros. --- gcc/config/aarch64/aarch64.md | 9 + 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/gcc/config/aarch64/aarch64.

[PATCH v2 0/9] aarch64: Implement TImode comparisons

2020-03-20 Thread Richard Henderson via Gcc-patches
This is attacking case 3 of PR 94174. Although I'm no longer using ccmp for most of the TImode comparisons. Thanks to Wilco Dijkstra for pulling off my blinders and reminding me that we can use subs+sbcs for (almost) all compares. The first 5 patches clean up or add patterns to support the expans

Re: [PATCH 0/6] aarch64: Implement TImode comparisons

2020-03-19 Thread Richard Henderson via Gcc-patches
On 3/19/20 8:47 AM, Wilco Dijkstra wrote: > Hi Richard, > > Thanks for these patches - yes TI mode expansions can certainly be improved! > So looking at your expansions for signed compares, why not copy the optimal > sequence from 32-bit Arm? > > Any compare can be done in at most 2 instructions:
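The "at most 2 instructions" point above refers to computing a signed double-word compare with subs (low halves) followed by sbcs (high halves), reading the result from the flags. A portable C sketch of that flag computation, under the assumption that __int128 is available (it is a GCC extension):

```c
#include <stdbool.h>
#include <stdint.h>

/* Model of the subs+sbcs sequence: signed 128-bit a < b is decided
   by the sign of the double-word subtract a - b with borrow
   propagation, which AArch64 does as SUBS (low) + SBCS (high).  */
static bool
ti_lt (uint64_t a_lo, int64_t a_hi, uint64_t b_lo, int64_t b_hi)
{
  /* SUBS xzr, a_lo, b_lo : only the borrow out of the low word
     matters for the final answer.  */
  uint64_t borrow = a_lo < b_lo;
  /* SBCS xzr, a_hi, b_hi : the N^V condition of this subtract is
     the signed less-than result; model it with a wide subtract.  */
  __int128 diff = (__int128) a_hi - (__int128) b_hi - (__int128) borrow;
  return diff < 0;
}
```

This is only a model for reasoning about the expansion, not the rtl the patches emit.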

[PATCH 4/6] aarch64: Simplify @ccmp operands

2020-03-18 Thread Richard Henderson via Gcc-patches
The first two arguments were "reversed", in that operand 0 was not the output, but the input cc_reg. Remove operand 0 entirely, since we can get the input cc_reg from within the operand 3 comparison expression. This moves the output operand to index 0. * config/aarch64/aarch64.md (@ccmpc

[PATCH 2/6] aarch64: Adjust result of aarch64_gen_compare_reg

2020-03-18 Thread Richard Henderson via Gcc-patches
Return the entire comparison expression, not just the cc_reg. This will allow the routine to adjust the comparison code as needed for TImode comparisons. Note that some users were passing e.g. EQ to aarch64_gen_compare_reg and then using gen_rtx_NE. Pass the proper code in the first place.

[PATCH 6/6] aarch64: Implement TImode comparisons

2020-03-18 Thread Richard Henderson via Gcc-patches
Use ccmp to perform all TImode comparisons branchless. * config/aarch64/aarch64.c (aarch64_gen_compare_reg): Expand all of the comparisons for TImode, not just NE. * config/aarch64/aarch64.md (cbranchti4, cstoreti4): New. --- gcc/config/aarch64/aarch64.c | 182 +++

[PATCH 5/6] aarch64: Improve nzcv argument to ccmp

2020-03-18 Thread Richard Henderson via Gcc-patches
Currently we use %k to interpret an aarch64_cond_code value. This interpretation is done via an array, aarch64_nzcv_codes. The rtl is neither hindered nor harmed by using the proper nzcv value itself, so index the array earlier than later. This makes it easier to compare the rtl to the assembly. I

[PATCH 1/6] aarch64: Add ucmp_*_carryinC patterns for all usub_*_carryinC

2020-03-18 Thread Richard Henderson via Gcc-patches
Use xzr for the output when we only require the flags output. This will be used shortly for TImode comparisons. * config/aarch64/aarch64.md (ucmp3_carryinC): New. (*ucmp3_carryinC_z1): New. (*ucmp3_carryinC_z2): New. (*ucmp3_carryinC): New. --- gcc/config/aarch64/a

[PATCH 3/6] aarch64: Accept 0 as first argument to compares

2020-03-18 Thread Richard Henderson via Gcc-patches
While cmp (extended register) and cmp (immediate) uses , cmp (shifted register) uses . So we can perform cmp xzr, x0. For ccmp, we only have as an input. * config/aarch64/aarch64.md (cmp): For operand 0, use aarch64_reg_or_zero. Shuffle reg/reg to last alternative and a

[PATCH 0/6] aarch64: Implement TImode comparisons

2020-03-18 Thread Richard Henderson via Gcc-patches
This is attacking case 3 of PR 94174. The existing ccmp optimization happens at the gimple level, which means that rtl expansion of TImode stuff cannot take advantage. But we can do even better than the existing ccmp optimization. This expansion is similar size to our current branchful expansio
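For context on what this series improves, the kind of source code affected is an ordinary __int128 comparison, which rtl expansion previously lowered with branches. A small example of such a compare (a three-way compare written so a compiler with branchless TImode comparison support can avoid conditional branches; __int128 is a GCC extension):

```c
/* Three-way compare of two 128-bit values; the (a > b) - (a < b)
   idiom yields -1, 0, or 1 without an explicit branch in the
   source.  How branchlessly it compiles depends on the TImode
   comparison expansion discussed in this series.  */
static int
cmp128 (__int128 a, __int128 b)
{
  return (a > b) - (a < b);
}
```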