On 20/10/2023 13:13, Richard Sandiford wrote:
> +(define_insn_and_split "*cmov<mode>_insn_insv"
> +  [(set (match_operand:GPI 0 "register_operand" "=r")
> +        (xor:GPI
> +        (neg:GPI
> +         (match_operator:GPI 1 "aarch64_comparison_operator"
> +          [(match_operand 2 "cc_register" "") (const_int 0)]))
> +        (match_operand:GPI 3 "general_operand" "r")))]
> +  "can_create_pseudo_p ()"
> +  "#"
> +  "&& true"
>
> IMO this is an ICE trap, since it hard-codes the assumption that there
> will be a split pass after the last pre-LRA call to recog.  I think we
> should just provide the asm directly instead.

So why not add

(clobber (match_operand:GPI 4 "register_operand" "=&r"))

to the insn?  Then you'll always get the scratch you need, and the need
to check can_create_pseudo_p goes away.
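
For instance, something along these lines (an untested sketch: the
csetm/eor output template, the operand modifiers and the attributes are
my guesses, not taken from the original patch):

(define_insn "*cmov<mode>_insn_insv"
  [(set (match_operand:GPI 0 "register_operand" "=r")
        (xor:GPI
          (neg:GPI
            (match_operator:GPI 1 "aarch64_comparison_operator"
             [(match_operand 2 "cc_register" "") (const_int 0)]))
          (match_operand:GPI 3 "general_operand" "r")))
   (clobber (match_operand:GPI 4 "register_operand" "=&r"))]
  ""
  ;; csetm sets the scratch to -1 or 0 from the condition;
  ;; eor then applies that mask to operand 3.
  "csetm\t%<w>4, %m1\;eor\t%<w>0, %<w>4, %<w>3"
  [(set_attr "type" "multiple")
   (set_attr "length" "8")]
)

Writing the output template directly like this would also address the
point about not relying on a later split pass.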

R.
