On 8/10/23 02:50, Wilco Dijkstra wrote:
Hi Richard,
Why would HWCAP_USCAT not be set by the kernel?
Failing that, I would think you would check ID_AA64MMFR2_EL1.AT.
Answering my own question, N1 does not officially have FEAT_LSE2.
It doesn't indeed. However most cores support atomic 128-bi
On 8/9/23 19:11, Richard Henderson wrote:
On 8/4/23 08:05, Wilco Dijkstra via Gcc-patches wrote:
+#ifdef HWCAP_USCAT
+
+#define MIDR_IMPLEMENTOR(midr) (((midr) >> 24) & 255)
+#define MIDR_PARTNUM(midr) (((midr) >> 4) & 0xfff)
+
+static inline bool
+ifunc1 (unsigned long hwcap)
+{
+ if (hw
On 8/4/23 08:05, Wilco Dijkstra via Gcc-patches wrote:
+#ifdef HWCAP_USCAT
+
+#define MIDR_IMPLEMENTOR(midr) (((midr) >> 24) & 255)
+#define MIDR_PARTNUM(midr) (((midr) >> 4) & 0xfff)
+
+static inline bool
+ifunc1 (unsigned long hwcap)
+{
+ if (hwcap & HWCAP_USCAT)
+return true;
+ if (!
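The hunk is cut off above, so as a rough sketch of the approach under discussion (my own illustration, not the patch): HWCAP_USCAT covers cores that advertise FEAT_LSE2, and a MIDR check can whitelist cores such as Neoverse N1 (implementer 0x41, part 0xd0c) that handle aligned 128-bit atomics without the feature bit. The helper name have_lse2 is hypothetical.

#include <stdbool.h>
#include <sys/auxv.h>     /* getauxval */
#include <asm/hwcap.h>    /* HWCAP_USCAT, HWCAP_CPUID (Linux/aarch64) */

#define MIDR_IMPLEMENTOR(midr) (((midr) >> 24) & 255)
#define MIDR_PARTNUM(midr)     (((midr) >> 4) & 0xfff)

static inline bool
have_lse2 (unsigned long hwcap)
{
  /* FEAT_LSE2 advertised directly.  */
  if (hwcap & HWCAP_USCAT)
    return true;
  /* Otherwise identify the core: MIDR_EL1 reads from EL0 are trapped
     and emulated by Linux when HWCAP_CPUID is set.  */
  if (!(hwcap & HWCAP_CPUID))
    return false;
  unsigned long midr;
  asm volatile ("mrs %0, midr_el1" : "=r" (midr));
  /* Neoverse N1: aligned 128-bit atomics work despite no FEAT_LSE2.  */
  return MIDR_IMPLEMENTOR (midr) == 0x41 && MIDR_PARTNUM (midr) == 0xd0c;
}

Such a helper would be called as have_lse2 (getauxval (AT_HWCAP)) from an ifunc resolver.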
2022-04-19 Richard Henderson
* MAINTAINERS: Update my email address.
---
MAINTAINERS | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 30f81b3dd52..15973503722 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -53,7 +53,7 @@ aarch64 port
While cmp (extended register) and cmp (immediate) uses ,
cmp (shifted register) uses . So we can perform cmp xzr, x0.
For ccmp, we only have as an input.
* config/aarch64/aarch64.md (cmp): For operand 0, use
aarch64_reg_or_zero. Shuffle reg/reg to last alternative
and a
* config/aarch64/aarch64-modes.def (CC_NV): New.
* config/aarch64/aarch64.c (aarch64_gen_compare_reg): Expand
all of the comparisons for TImode, not just NE.
(aarch64_select_cc_mode): Recognize cmp_carryin.
(aarch64_get_condition_code_1): Handle CC_NVmode.
We are about to use !C in more contexts than add-with-carry.
Choose a more generic name.
* config/aarch64/aarch64-modes.def (CC_NOTC): Rename CC_ADC.
* config/aarch64/aarch64.c (aarch64_select_cc_mode): Update.
(aarch64_get_condition_code_1): Likewise.
* config/aarc
Return the entire comparison expression, not just the cc_reg.
This will allow the routine to adjust the comparison code as
needed for TImode comparisons.
Note that some users were passing e.g. EQ to aarch64_gen_compare_reg
and then using gen_rtx_NE. Pass the proper code in the first place.
In one place we open-code a special case of this pattern into the
more specific sub3_compare1_imm, and miss this special case
in other places. Centralize that special case into an expander.
* config/aarch64/aarch64.md (*sub3_compare1): Rename
from sub3_compare1.
(sub3_comp
We have been using CCmode, which is not correct for this case.
Mirror the same code from the arm target.
* config/aarch64/aarch64.c (aarch64_select_cc_mode):
Recognize usub*_carryinC patterns.
* config/aarch64/aarch64.md (usubvti4): Use CC_NOTC.
(usub3_carryinC): Li
Modify aarch64_expand_subvti into a form that handles all
addition and subtraction, modulo, signed or unsigned overflow.
Use expand_insn to put the operands into the proper form,
and do not force values into register if not required.
* config/aarch64/aarch64.c (aarch64_ti_split): New.
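At the source level, a sketch of what these expanders serve (my own example, not part of the series): the 128-bit overflow builtins, whose signed add form checks the V flag after adds/adcs and whose unsigned subtract form checks the C flag after subs/sbcs.

#include <stdbool.h>

/* Signed 128-bit add with overflow check (addvti4-style).  */
bool
addv_ti (__int128 a, __int128 b, __int128 *res)
{
  return __builtin_add_overflow (a, b, res);
}

/* Unsigned 128-bit subtract with borrow check (usubvti4-style).  */
bool
usubv_ti (unsigned __int128 a, unsigned __int128 b, unsigned __int128 *res)
{
  return __builtin_sub_overflow (a, b, res);
}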
These CC_MODEs are identical, merge them into a more generic name.
* config/arm/arm-modes.def (CC_NOTC): New.
(CC_ADC, CC_B): Remove.
* config/arm/arm.c (arm_select_cc_mode): Update to match.
(arm_gen_dicompare_reg): Likewise.
(maybe_get_arm_condition_code):
The arm target has some improvements over aarch64 for
double-word arithmetic and comparisons.
* config/aarch64/aarch64.c (aarch64_select_cc_mode): Check
for swapped operands to CC_Cmode; check for zero_extend to
CC_ADCmode; check for swapped operands to CC_Vmode.
---
gcc/c
Some implementations have a higher cost for the csel insn
(and its specializations) than they do for adc/sbc.
* config/aarch64/aarch64.md (*cstore_carry): New.
(*cstoresi_carry_uxtw): New.
(*cstore_borrow): New.
(*cstoresi_borrow_uxtw): New.
(*csinc2_carry):
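For illustration (my own example, not from the patch), the source shape in question is a compare whose carry is materialized into a register:

/* After "cmp a, b" the C flag is 1 iff a >= b, so the 0/1 result can
   come from "cset w0, cs" (a csinc specialization) or equivalently
   from "adc w0, wzr, wzr"; the new patterns let the adc/sbc form be
   chosen on cores where csel-family insns are costlier.  */
unsigned
carry_out (unsigned long a, unsigned long b)
{
  return a >= b;
}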
Rather than duplicating the rather verbose integral test,
pull it out to a predicate.
* config/aarch64/predicates.md (const_dword_umaxp1): New.
* config/aarch64/aarch64.c (aarch64_select_cc_mode): Use it.
* config/aarch64/aarch64.md (add*add3_carryinC): Likewise.
(*
This is attacking case 3 of PR 94174.
In v4, I attempt to bring over as many patterns from config/arm
as are applicable. It's not too far away from what I had from v2.
In the process of checking all of the combinations (below), I
discovered that we could probably have a better representation
for
The expander and insn predicates do not match,
which can lead to insn recognition errors.
* config/aarch64/aarch64.md (add3_carryin):
Use register_operand instead of aarch64_reg_or_zero.
---
gcc/config/aarch64/aarch64.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
d
On 4/7/20 4:58 PM, Segher Boessenkool wrote:
>> I wonder if it would be helpful to have
>>
>> (uoverflow_plus x y carry)
>> (soverflow_plus x y carry)
>>
>> etc.
>
> Those have three operands, which is nasty to express.
How so? It's a perfectly natural operation.
> On rs6000 we have the car
On 4/7/20 9:32 AM, Richard Sandiford wrote:
> It's not really reversibility that I'm after (at least not for its
> own sake).
>
> If we had a three-input compare_cc rtx_code that described a comparison
> involving a carry input, we'd certainly be using it here, because that's
> what the instructio
Return the entire comparison expression, not just the cc_reg.
This will allow the routine to adjust the comparison code as
needed for TImode comparisons.
Note that some users were passing e.g. EQ to aarch64_gen_compare_reg
and then using gen_rtx_NE. Pass the proper code in the first place.
Modify aarch64_expand_subvti into a form that handles all
addition and subtraction, modulo, signed or unsigned overflow.
Use expand_insn to put the operands into the proper form,
and do not force values into register if not required.
* config/aarch64/aarch64.c (aarch64_ti_split): New.
The rtl description of signed/unsigned overflow from subtract
was fine, as far as it goes -- we have CC_Cmode and CC_Vmode
that indicate that only those particular bits are valid.
However, it's not clear how to extend that description to
handle signed comparison, where N == V (GE) and N != V (LT) are
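A quick way to see the relation referred to here is to brute-force it on a tiny word size; this check (purely illustrative, not from the patch) confirms that for a subtraction, signed "a >= b" is exactly N == V:

#include <assert.h>

int
main (void)
{
  for (int a = -8; a < 8; a++)
    for (int b = -8; b < 8; b++)
      {
        int r = a - b;                 /* true difference */
        int n = ((a - b) & 0x8) != 0;  /* N: sign bit of the 4-bit result */
        int v = (r < -8 || r > 7);     /* V: signed overflow of the 4-bit subtract */
        assert ((a >= b) == (n == v)); /* GE <=> N == V; LT <=> N != V */
      }
  return 0;
}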
Now that we're using UNSPEC_ADCS instead of rtl, there's
no reason to distinguish CC_ADCmode from CC_Cmode. Both
examine only the C bit. Within uaddvti4, using CC_Cmode
is clearer, since it's the carry-out that's relevant.
* config/aarch64/aarch64-modes.def (CC_ADC): Remove.
* con
* config/aarch64/aarch64.md (absti2): New.
---
gcc/config/aarch64/aarch64.md | 29 +
1 file changed, 29 insertions(+)
diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
index cf716f815a1..4a30d4cca93 100644
--- a/gcc/config/aarch64/aarch
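For reference, the source form a new absti2 expander targets is simply 128-bit absolute value (the example below is mine, not part of the patch):

/* Returning the unsigned type sidesteps the most-negative-value
   corner case; with absti2 this need not go through a branch.  */
unsigned __int128
abs_ti (__int128 a)
{
  return a < 0 ? -(unsigned __int128) a : (unsigned __int128) a;
}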
* config/aarch64/predicates.md (aarch64_reg_or_minus1): New.
* config/aarch64/aarch64.md (add3_carryin): Use it.
(*add3_carryin): Likewise.
(*addsi3_carryin_uxtw): Likewise.
---
gcc/config/aarch64/aarch64.md | 26 +++---
gcc/config/aarch64/pre
Similar to UNSPEC_SBCS, we can unify the signed/unsigned overflow
paths by using an unspec.
Accept -1 for the second input by using SBCS.
* config/aarch64/aarch64.md (UNSPEC_ADCS): New.
(addvti4, uaddvti4): Use adddi_carryin_cmp.
(add3_carryinC): Remove.
(*add3_car
Use ccmp to perform all TImode comparisons branchless.
* config/aarch64/aarch64.c (aarch64_gen_compare_reg): Expand all of
the comparisons for TImode, not just NE.
* config/aarch64/aarch64.md (cbranchti4, cstoreti4): New.
---
gcc/config/aarch64/aarch64.c | 122 +++
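As an illustrative source-level target for cbranchti4/cstoreti4 (my example, not from the patch), the simplest case is 128-bit equality, which ccmp turns into a short branchless sequence:

/* Expected to become roughly "cmp lo1, lo2; ccmp hi1, hi2, 0, eq;
   cset w0, eq" (illustrative).  */
int
eq_ti (__int128 a, __int128 b)
{
  return a == b;
}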
While cmp (extended register) and cmp (immediate) uses ,
cmp (shifted register) uses . So we can perform cmp xzr, x0.
For ccmp, we only have as an input.
* config/aarch64/aarch64.md (cmp): For operand 0, use
aarch64_reg_or_zero. Shuffle reg/reg to last alternative
and a
This is attacking case 3 of PR 94174.
In v2, I unify the various subtract-with-borrow and add-with-carry
patterns that also output flags with unspecs, as suggested by
Richard Sandiford during review of v1. It does seem cleaner.
r~
Richard Henderson (11):
aarch64: Accept 0 as first argument
The expander and the insn pattern did not match, leading to
recognition failures in expand.
* config/aarch64/aarch64.md (*add3_carryin): Accept zeros.
---
gcc/config/aarch64/aarch64.md | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/gcc/config/aarch64/aarch64.
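An illustrative case that reaches the expander with a zero operand (my own example, not from the patch) is a double-word add whose second addend has a zero high word, so the carry-in add sees a zero input:

/* The low halves are added with adds; the high half is then
   hi_a + 0 + carry, i.e. an add-with-carry against zero.  */
unsigned __int128
add_u64 (unsigned __int128 a, unsigned long b)
{
  return a + b;
}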
In one place we open-code a special case of this pattern into the
more specific sub3_compare1_imm, and miss this special case
in other places. Centralize that special case into an expander.
* config/aarch64/aarch64.md (*sub3_compare1): Rename
from sub3_compare1.
(sub3_comp
On 4/1/20 9:28 AM, Richard Sandiford wrote:
> How important is it to describe the flags operation as a compare though?
> Could we instead use an unspec with three inputs, and keep it as :CC?
> That would still allow special-case matching for zero operands.
I'm not sure.
My guess is that the only
On 3/31/20 11:34 AM, Richard Sandiford wrote:
>> +(define_insn "*cmp3_carryinC"
>> + [(set (reg:CC CC_REGNUM)
>> +(compare:CC
>> + (ANY_EXTEND:
>> +(match_operand:GPI 0 "register_operand" "r"))
>> + (plus:
>> +(ANY_EXTEND:
>> + (match_operand:GPI 1 "register_
On 3/31/20 9:55 AM, Richard Sandiford wrote:
>> (define_insn "cmp"
>>[(set (reg:CC CC_REGNUM)
>> -(compare:CC (match_operand:GPI 0 "register_operand" "rk,rk,rk")
>> -(match_operand:GPI 1 "aarch64_plus_operand" "r,I,J")))]
>> +(compare:CC (match_operand:GPI 0 "aarch64_re
On 3/22/20 12:30 PM, Segher Boessenkool wrote:
> Hi!
>
> On Fri, Mar 20, 2020 at 07:42:25PM -0700, Richard Henderson via Gcc-patches
> wrote:
>> Duplicate all usub_*_carryinC, but use xzr for the output when we
>> only require the flags output. The signed versions use s
Return the entire comparison expression, not just the cc_reg.
This will allow the routine to adjust the comparison code as
needed for TImode comparisons.
Note that some users were passing e.g. EQ to aarch64_gen_compare_reg
and then using gen_rtx_NE. Pass the proper code in the first place.
Use ccmp to perform all TImode comparisons branchless.
* config/aarch64/aarch64.c (aarch64_gen_compare_reg): Expand all of
the comparisons for TImode, not just NE.
* config/aarch64/aarch64.md (cbranchti4, cstoreti4): New.
---
gcc/config/aarch64/aarch64.c | 130 +++
Modify aarch64_expand_subvti into a form that handles all
addition and subtraction, modulo, signed or unsigned overflow.
Use expand_insn to put the operands into the proper form,
and do not force values into register if not required.
* config/aarch64/aarch64.c (aarch64_ti_split): New.
While cmp (extended register) and cmp (immediate) uses ,
cmp (shifted register) uses . So we can perform cmp xzr, x0.
For ccmp, we only have as an input.
* config/aarch64/aarch64.md (cmp): For operand 0, use
aarch64_reg_or_zero. Shuffle reg/reg to last alternative
and a
* config/aarch64/aarch64.md (absti2): New.
---
gcc/config/aarch64/aarch64.md | 30 ++
1 file changed, 30 insertions(+)
diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
index 284a8038e28..7a112f89487 100644
--- a/gcc/config/aarch64/aarc
In a couple of places we open-code a special case of this
pattern into the more specific sub3_compare1_imm.
Centralize that special case into an expander.
* config/aarch64/aarch64.md (*sub3_compare1): Rename
from sub3_compare1.
(sub3_compare1): New expander.
---
gcc/config
Duplicate all usub_*_carryinC, but use xzr for the output when we
only require the flags output. The signed versions use sign_extend
instead of zero_extend for combine's benefit.
These will be used shortly for TImode comparisons.
* config/aarch64/aarch64.md (cmp3_carryinC): New.
Combine will fold immediate -1 differently than the other
*cmp*_carryinC* patterns. In this case we can use adcs
with an xzr input, and it occurs frequently when comparing
128-bit values to small negative constants.
* config/aarch64/aarch64.md (cmp_carryinC_m2): New.
---
gcc/config/aarch
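An illustrative source case (mine, not from the patch) is a compare against a small negative constant, whose high 64 bits are all ones, so the second flag-setting subtract is against -1:

int
lt_small_neg (__int128 a)
{
  return a < -5;   /* high word of the constant is -1 */
}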
The expander and the insn pattern did not match, leading to
recognition failures in expand.
* config/aarch64/aarch64.md (*add3_carryin): Accept zeros.
---
gcc/config/aarch64/aarch64.md | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/gcc/config/aarch64/aarch64.
This is attacking case 3 of PR 94174.
Unlike the earlier versions, I'm no longer using ccmp for most of the TImode comparisons.
Thanks to Wilco Dijkstra for pulling off my blinders and reminding me
that we can use subs+sbcs for (almost) all compares.
The first 5 patches clean up or add patterns to support the expans
On 3/19/20 8:47 AM, Wilco Dijkstra wrote:
> Hi Richard,
>
> Thanks for these patches - yes TI mode expansions can certainly be improved!
> So looking at your expansions for signed compares, why not copy the optimal
> sequence from 32-bit Arm?
>
> Any compare can be done in at most 2 instructions:
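The quoted sequence is truncated above, but as a hedged illustration of the subs+sbcs idea used throughout this series: a signed 128-bit compare needs only two flag-setting instructions, after which a cset or branch reads the result.

/* Flag-setting part: "cmp lo1, lo2; sbcs xzr, hi1, hi2".  The sbcs
   discards its value into xzr and leaves N/V describing the full
   128-bit difference, so "cset w0, lt" finishes the compare.
   (Illustrative; exact register allocation will differ.)  */
int
lt_ti (__int128 a, __int128 b)
{
  return a < b;
}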
The first two arguments were "reversed", in that operand 0 was not
the output, but the input cc_reg. Remove operand 0 entirely, since
we can get the input cc_reg from within the operand 3 comparison
expression. This moves the output operand to index 0.
* config/aarch64/aarch64.md (@ccmpc
Return the entire comparison expression, not just the cc_reg.
This will allow the routine to adjust the comparison code as
needed for TImode comparisons.
Note that some users were passing e.g. EQ to aarch64_gen_compare_reg
and then using gen_rtx_NE. Pass the proper code in the first place.
Use ccmp to perform all TImode comparisons branchless.
* config/aarch64/aarch64.c (aarch64_gen_compare_reg): Expand all of
the comparisons for TImode, not just NE.
* config/aarch64/aarch64.md (cbranchti4, cstoreti4): New.
---
gcc/config/aarch64/aarch64.c | 182 +++
Currently we use %k to interpret an aarch64_cond_code value.
This interpretation is done via an array, aarch64_nzcv_codes.
The rtl is neither hindered nor harmed by using the proper
nzcv value itself, so index the array earlier than later.
This makes it easier to compare the rtl to the assembly.
I
Use xzr for the output when we only require the flags output.
This will be used shortly for TImode comparisons.
* config/aarch64/aarch64.md (ucmp3_carryinC): New.
(*ucmp3_carryinC_z1): New.
(*ucmp3_carryinC_z2): New.
(*ucmp3_carryinC): New.
---
gcc/config/aarch64/a
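The unsigned counterpart (my own example, not from the patch) uses the same flags-only subtract, reading the carry instead of N/V:

/* "cmp lo1, lo2; sbcs xzr, hi1, hi2; cset w0, cc" -- the borrow out
   of the flags-only carry-in subtract gives unsigned less-than.  */
int
ltu_ti (unsigned __int128 a, unsigned __int128 b)
{
  return a < b;
}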
While cmp (extended register) and cmp (immediate) uses ,
cmp (shifted register) uses . So we can perform cmp xzr, x0.
For ccmp, we only have as an input.
* config/aarch64/aarch64.md (cmp): For operand 0, use
aarch64_reg_or_zero. Shuffle reg/reg to last alternative
and a
This is attacking case 3 of PR 94174.
The existing ccmp optimization happens at the gimple level,
which means that rtl expansion of TImode stuff cannot take
advantage. But we can do even better than the existing
ccmp optimization.
This expansion is similar in size to our current branchful
expansio