Hi, 

This patch switches A32 (ARM) state code generation to unified syntax. The 
backend already generates unified syntax in Thumb state and for the floating-point / SIMD instruction set. The backend continues to use divided syntax 
for inline assembler.

This is beneficial for a few reasons.

1. Assembler output from the compiler is more in line with the documentation 
for the ISA.
2. It removes special casing for various instructions where unified asm went 
one way and divided asm the other.
3. It may allow sharing more patterns between arm.md and thumb2.md - I've not 
addressed that in this patch though.
4. It frees up a few punctuation characters should we ever need them.
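As a concrete illustration of point 1 (a hand-written sketch of the mnemonic 
rules, not output from this patch): in divided syntax the condition code 
precedes the flag-setting 's' suffix, while unified syntax puts 's' before the 
condition, matching the ARM ARM's UAL documentation.

```python
# Hypothetical sketch of A32 mnemonic composition; not code from this patch.
def mnemonic(base, cond="", set_flags=False, unified=True):
    # Divided syntax:  base + condition + 's'   e.g. subeqs
    # Unified syntax:  base + 's' + condition   e.g. subseq
    suffix = "s" if set_flags else ""
    return base + (suffix + cond if unified else cond + suffix)

print(mnemonic("sub", "eq", set_flags=True, unified=False))  # subeqs (divided)
print(mnemonic("sub", "eq", set_flags=True, unified=True))   # subseq (unified)
```

This flip is why output templates such as add%. become adds%? once the backend 
only ever generates unified syntax.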

This patch does the following (some minor follow-ups are required):

- Remove the use of TARGET_UNIFIED_ASM, consolidating everything on unified 
asm and removing all of the old divided-syntax support.
- Remove support for the %( and %) punctuation characters. I do not expect 
these characters to be used in inline assembler.
- Remove all uses of the %. punctuation character - its definition remains as 
an oversight, and I will deal with it in a follow-up patch.
- The definition of ARM_LSL_NAME still needs cleaning up; I will remove it in 
a future patch.
- Adjust the testsuite.
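For readers unfamiliar with these punctuation characters: they were 
output-template escapes whose expansion depended on the syntax mode. A 
simplified, hypothetical model (not the actual arm_print_operand code):

```python
# Simplified model (hypothetical, not GCC code) of the removed punctuation
# characters in ARM output templates; cond is the current condition code.
def expand(punct, cond, unified):
    if punct == "(":   # condition printed only in divided syntax
        return "" if unified else cond
    if punct == ")":   # condition printed only in unified syntax
        return cond if unified else ""
    if punct == ".":   # flag-setting marker: placement of 's' flips
        return "s" + cond if unified else cond + "s"
    raise ValueError(punct)

# A template like "ldm%(ia%)" put the condition in the right spot either way:
print("ldm" + expand("(", "eq", False) + "ia" + expand(")", "eq", False))
print("ldm" + expand("(", "eq", True) + "ia" + expand(")", "eq", True))
```

With only unified syntax generated, %( always prints nothing and %) is just 
the current condition, so plain %? suffices - e.g. ldm%(ia%) becomes ldmia%? 
throughout the patch.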



Tested with bootstrap and regression run on armhf, with and without Thumb, on 
Cortex-A15, and regression tested arm-none-eabi with the following multilibs:

                           "arm-eabi{-marm/-march=armv7-a/-mfpu=vfpv3-d16/-mfloat-abi=softfp}" \
                           "arm-eabi{-mthumb/-march=armv8-a/-mfpu=crypto-neon-fp-armv8/-mfloat-abi=hard}" \
                           "arm-eabi{-marm/-mcpu=arm7tdmi/-mfloat-abi=soft}" \
                           "arm-eabi{-mthumb/-mcpu=arm7tdmi/-mfloat-abi=soft}"

with no regressions.


I will try out a few more multilibs in order to stress this a bit, and if 
folks don't have any comments I will apply this to trunk in a couple of days.


regards
Ramana
    
<DATE>  Ramana Radhakrishnan  <ramana.radhakrish...@arm.com>
    
        * config/arm/arm-ldmstm.ml: Rewrite to use unified syntax.
        * config/arm/ldmstm.md: Regenerate.
        * config/arm/arm.c (arm_asm_trampoline_template): Use unified syntax.
        (arm_output_multireg_pop): Use unified syntax.
        (output_move_double): Likewise.
        (output_move_quad): Likewise.
        (output_return_instruction): Likewise.
        (arm_print_operand): Remove support for '(' and ')'.
        (arm_output_shift): Use unified syntax.
        (arm_declare_function_name): Likewise.
        * config/arm/arm.h (TARGET_UNIFIED_SYNTAX): Delete.
        * config/arm/arm.md: Rewrite to generate unified syntax.
        * config/arm/sync.md: Likewise.
        * config/arm/thumb2.md: Likewise.

gcc/testsuite

<DATE>  Ramana Radhakrishnan  <ramana.radhakrish...@arm.com>

        * gcc.target/arm/combine-movs.c: Adjust.
        * gcc.target/arm/interrupt-1.c: Likewise.
        * gcc.target/arm/interrupt-2.c: Likewise.
        * gcc.target/arm/unaligned-memcpy-4.c: Likewise.


commit ff5c72b06b7aea8b01af25ffe9cbc0154322f614
Author: Ramana Radhakrishnan <ramana.radhakrish...@arm.com>
Date:   Thu Jul 9 14:37:55 2015 +0100

    Remove TARGET_UNIFIED_ASM
    
    2015-07-09  Ramana Radhakrishnan  <ramana.radhakrish...@arm.com>
    
        * config/arm/arm-ldmstm.ml: Rewrite to use unified syntax.
        * config/arm/ldmstm.md: Regenerate.
        * config/arm/arm.c (arm_asm_trampoline_template): Use unified syntax.
        (arm_output_multireg_pop): Use unified syntax.
        (output_move_double): Likewise.
        (output_move_quad): Likewise.
        (output_return_instruction): Likewise.
        (arm_print_operand): Remove support for '(' and ')'.
        (arm_output_shift): Use unified syntax.
        (arm_declare_function_name): Likewise.
        * config/arm/arm.h (TARGET_UNIFIED_SYNTAX): Delete.
        * config/arm/arm.md: Rewrite to generate unified syntax.
        * config/arm/sync.md: Likewise.
        * config/arm/thumb2.md: Likewise.
    
    gcc/testsuite/ChangeLog:
    
    2015-07-10  Ramana Radhakrishnan  <ramana.radhakrish...@arm.com>
    
        * gcc.target/arm/combine-movs.c: Adjust.
        * gcc.target/arm/interrupt-1.c: Likewise.
        * gcc.target/arm/interrupt-2.c: Likewise.
        * gcc.target/arm/unaligned-memcpy-4.c: Likewise.

diff --git a/gcc/config/arm/arm-ldmstm.ml b/gcc/config/arm/arm-ldmstm.ml
index bb90192..e88a51c 100644
--- a/gcc/config/arm/arm-ldmstm.ml
+++ b/gcc/config/arm/arm-ldmstm.ml
@@ -33,9 +33,20 @@ type amode = IA | IB | DA | DB
 
 type optype = IN | OUT | INOUT
 
-let rec string_of_addrmode addrmode =
+let rec string_of_addrmode addrmode thumb update =
+  if thumb || update
+then
   match addrmode with
-    IA -> "ia" | IB -> "ib" | DA -> "da" | DB -> "db"
+    IA -> "ia"
+  | IB -> "ib"
+  | DA -> "da"
+  | DB -> "db"
+else
+  match addrmode with
+    IA -> ""
+  | IB -> "ib"
+  | DA -> "da"
+  | DB -> "db"
 
 let rec initial_offset addrmode nregs =
   match addrmode with
@@ -160,7 +171,7 @@ let target addrmode thumb =
  | _, _ -> raise (InvalidAddrMode "ERROR: Invalid Addressing mode for Thumb1.")
 
 let write_pattern_1 name ls addrmode nregs write_set_fn update thumb =
-  let astr = string_of_addrmode addrmode in
+  let astr = string_of_addrmode addrmode thumb update in
   Printf.printf "(define_insn \"*%s%s%d_%s%s\"\n"
     (if thumb then "thumb_" else "") name nregs astr
     (if update then "_update" else "");
@@ -180,8 +191,10 @@ let write_pattern_1 name ls addrmode nregs write_set_fn update thumb =
   Printf.printf ")]\n  \"%s && XVECLEN (operands[0], 0) == %d\"\n"
     (target addrmode thumb)
     (if update then nregs + 1 else nregs);
-  Printf.printf "  \"%s%%(%s%%)\\t%%%d%s, {"
-    name astr (nregs + 1) (if update then "!" else "");
+  if thumb then
+      Printf.printf "  \"%s%s\\t%%%d%s, {"   name astr (nregs + 1) (if update then "!" else "")
+   else
+      Printf.printf "  \"%s%s%%?\\t%%%d%s, {"  name astr (nregs + 1) (if update then "!" else "");
   for n = 1 to nregs; do
     Printf.printf "%%%d%s" n (if n < nregs then ", " else "")
   done;
diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
index 16bda3b..52167b9 100644
--- a/gcc/config/arm/arm.c
+++ b/gcc/config/arm/arm.c
@@ -909,7 +909,7 @@ int arm_regs_in_sequence[] =
   0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15
 };
 
-#define ARM_LSL_NAME (TARGET_UNIFIED_ASM ? "lsl" : "asl")
+#define ARM_LSL_NAME "lsl"
 #define streq(string1, string2) (strcmp (string1, string2) == 0)
 
 #define THUMB2_WORK_REGS (0xff & ~(  (1 << THUMB_HARD_FRAME_POINTER_REGNUM) \
@@ -3525,10 +3525,7 @@ arm_warn_func_return (tree decl)
 static void
 arm_asm_trampoline_template (FILE *f)
 {
-  if (TARGET_UNIFIED_ASM)
-    fprintf (f, "\t.syntax unified\n");
-  else
-    fprintf (f, "\t.syntax divided\n");
+  fprintf (f, "\t.syntax unified\n");
 
   if (TARGET_ARM)
     {
@@ -4996,8 +4993,8 @@ arm_canonicalize_comparison (int *code, rtx *op0, rtx *op1,
 /* Define how to find the value returned by a function.  */
 
 static rtx
-arm_function_value(const_tree type, const_tree func,
-                  bool outgoing ATTRIBUTE_UNUSED)
+arm_function_value (const_tree type, const_tree func,
+                   bool outgoing ATTRIBUTE_UNUSED)
 {
   machine_mode mode;
   int unsignedp ATTRIBUTE_UNUSED;
@@ -17576,10 +17573,8 @@ arm_output_multireg_pop (rtx *operands, bool return_pc, rtx cond, bool reverse,
     }
 
   conditional = reverse ? "%?%D0" : "%?%d0";
-  if ((regno_base == SP_REGNUM) && TARGET_THUMB)
+  if ((regno_base == SP_REGNUM) && update)
     {
-      /* Output pop (not stmfd) because it has a shorter encoding.  */
-      gcc_assert (update);
       sprintf (pattern, "pop%s\t{", conditional);
     }
   else
@@ -17587,11 +17582,14 @@ arm_output_multireg_pop (rtx *operands, bool return_pc, rtx cond, bool reverse,
       /* Output ldmfd when the base register is SP, otherwise output ldmia.
          It's just a convention, their semantics are identical.  */
       if (regno_base == SP_REGNUM)
-        sprintf (pattern, "ldm%sfd\t", conditional);
-      else if (TARGET_UNIFIED_ASM)
-        sprintf (pattern, "ldmia%s\t", conditional);
+         /* update is never true here, hence there is no need to handle
+            pop here.  */
+       sprintf (pattern, "ldmfd%s", conditional);
+
+      if (update)
+       sprintf (pattern, "ldmia%s\t", conditional);
       else
-        sprintf (pattern, "ldm%sia\t", conditional);
+       sprintf (pattern, "ldm%s\t", conditional);
 
       strcat (pattern, reg_names[regno_base]);
       if (update)
@@ -17923,25 +17921,25 @@ output_move_double (rtx *operands, bool emit, int *count)
            {
              if (TARGET_LDRD
                  && !(fix_cm3_ldrd && reg0 == REGNO(XEXP (operands[1], 0))))
-               output_asm_insn ("ldr%(d%)\t%0, [%m1]", operands);
+               output_asm_insn ("ldrd%?\t%0, [%m1]", operands);
              else
-               output_asm_insn ("ldm%(ia%)\t%m1, %M0", operands);
+               output_asm_insn ("ldmia%?\t%m1, %M0", operands);
            }
          break;
 
        case PRE_INC:
          gcc_assert (TARGET_LDRD);
          if (emit)
-           output_asm_insn ("ldr%(d%)\t%0, [%m1, #8]!", operands);
+           output_asm_insn ("ldrd%?\t%0, [%m1, #8]!", operands);
          break;
 
        case PRE_DEC:
          if (emit)
            {
              if (TARGET_LDRD)
-               output_asm_insn ("ldr%(d%)\t%0, [%m1, #-8]!", operands);
+               output_asm_insn ("ldrd%?\t%0, [%m1, #-8]!", operands);
              else
-               output_asm_insn ("ldm%(db%)\t%m1!, %M0", operands);
+               output_asm_insn ("ldmdb%?\t%m1!, %M0", operands);
            }
          break;
 
@@ -17949,16 +17947,16 @@ output_move_double (rtx *operands, bool emit, int *count)
          if (emit)
            {
              if (TARGET_LDRD)
-               output_asm_insn ("ldr%(d%)\t%0, [%m1], #8", operands);
+               output_asm_insn ("ldrd%?\t%0, [%m1], #8", operands);
              else
-               output_asm_insn ("ldm%(ia%)\t%m1!, %M0", operands);
+               output_asm_insn ("ldmia%?\t%m1!, %M0", operands);
            }
          break;
 
        case POST_DEC:
          gcc_assert (TARGET_LDRD);
          if (emit)
-           output_asm_insn ("ldr%(d%)\t%0, [%m1], #-8", operands);
+           output_asm_insn ("ldrd%?\t%0, [%m1], #-8", operands);
          break;
 
        case PRE_MODIFY:
@@ -17979,7 +17977,7 @@ output_move_double (rtx *operands, bool emit, int *count)
                  if (emit)
                    {
                      output_asm_insn ("add%?\t%1, %1, %2", otherops);
-                     output_asm_insn ("ldr%(d%)\t%0, [%1] @split", otherops);
+                     output_asm_insn ("ldrd%?\t%0, [%1] @split", otherops);
                    }
                  if (count)
                    *count = 2;
@@ -17995,7 +17993,7 @@ output_move_double (rtx *operands, bool emit, int *count)
                          && INTVAL (otherops[2]) < 256))
                    {
                      if (emit)
-                       output_asm_insn ("ldr%(d%)\t%0, [%1, %2]!", otherops);
+                       output_asm_insn ("ldrd%?\t%0, [%1, %2]!", otherops);
                    }
                  else
                    {
@@ -18021,7 +18019,7 @@ output_move_double (rtx *operands, bool emit, int *count)
                      && INTVAL (otherops[2]) < 256))
                {
                  if (emit)
-                   output_asm_insn ("ldr%(d%)\t%0, [%1], %2", otherops);
+                   output_asm_insn ("ldrd%?\t%0, [%1], %2", otherops);
                }
              else
                {
@@ -18050,9 +18048,9 @@ output_move_double (rtx *operands, bool emit, int *count)
          if (emit)
            {
              if (TARGET_LDRD)
-               output_asm_insn ("ldr%(d%)\t%0, [%1]", operands);
+               output_asm_insn ("ldrd%?\t%0, [%1]", operands);
              else
-               output_asm_insn ("ldm%(ia%)\t%1, %M0", operands);
+               output_asm_insn ("ldmia%?\t%1, %M0", operands);
            }
 
          if (count)
@@ -18076,19 +18074,19 @@ output_move_double (rtx *operands, bool emit, int *count)
                        {
                        case -8:
                          if (emit)
-                           output_asm_insn ("ldm%(db%)\t%1, %M0", otherops);
+                           output_asm_insn ("ldmdb%?\t%1, %M0", otherops);
                          return "";
                        case -4:
                          if (TARGET_THUMB2)
                            break;
                          if (emit)
-                           output_asm_insn ("ldm%(da%)\t%1, %M0", otherops);
+                           output_asm_insn ("ldmda%?\t%1, %M0", otherops);
                          return "";
                        case 4:
                          if (TARGET_THUMB2)
                            break;
                          if (emit)
-                           output_asm_insn ("ldm%(ib%)\t%1, %M0", otherops);
+                           output_asm_insn ("ldmib%?\t%1, %M0", otherops);
                          return "";
                        }
                    }
@@ -18116,7 +18114,7 @@ output_move_double (rtx *operands, bool emit, int *count)
                          if (emit)
                            {
                              output_asm_insn ("add%?\t%0, %1, %2", otherops);
-                             output_asm_insn ("ldr%(d%)\t%0, [%1]", operands);
+                             output_asm_insn ("ldrd%?\t%0, [%1]", operands);
                            }
                          if (count)
                            *count = 2;
@@ -18125,7 +18123,7 @@ output_move_double (rtx *operands, bool emit, int *count)
                        {
                          otherops[0] = operands[0];
                          if (emit)
-                           output_asm_insn ("ldr%(d%)\t%0, [%1, %2]", otherops);
+                           output_asm_insn ("ldrd%?\t%0, [%1, %2]", otherops);
                        }
                      return "";
                    }
@@ -18156,9 +18154,9 @@ output_move_double (rtx *operands, bool emit, int *count)
                *count = 2;
 
              if (TARGET_LDRD)
-               return "ldr%(d%)\t%0, [%1]";
+               return "ldrd%?\t%0, [%1]";
 
-             return "ldm%(ia%)\t%1, %M0";
+             return "ldmia%?\t%1, %M0";
            }
          else
            {
@@ -18201,25 +18199,25 @@ output_move_double (rtx *operands, bool emit, int *count)
          if (emit)
            {
              if (TARGET_LDRD)
-               output_asm_insn ("str%(d%)\t%1, [%m0]", operands);
+               output_asm_insn ("strd%?\t%1, [%m0]", operands);
              else
-               output_asm_insn ("stm%(ia%)\t%m0, %M1", operands);
+               output_asm_insn ("stm%?\t%m0, %M1", operands);
            }
          break;
 
         case PRE_INC:
          gcc_assert (TARGET_LDRD);
          if (emit)
-           output_asm_insn ("str%(d%)\t%1, [%m0, #8]!", operands);
+           output_asm_insn ("strd%?\t%1, [%m0, #8]!", operands);
          break;
 
         case PRE_DEC:
          if (emit)
            {
              if (TARGET_LDRD)
-               output_asm_insn ("str%(d%)\t%1, [%m0, #-8]!", operands);
+               output_asm_insn ("strd%?\t%1, [%m0, #-8]!", operands);
              else
-               output_asm_insn ("stm%(db%)\t%m0!, %M1", operands);
+               output_asm_insn ("stmdb%?\t%m0!, %M1", operands);
            }
          break;
 
@@ -18227,16 +18225,16 @@ output_move_double (rtx *operands, bool emit, int *count)
          if (emit)
            {
              if (TARGET_LDRD)
-               output_asm_insn ("str%(d%)\t%1, [%m0], #8", operands);
+               output_asm_insn ("strd%?\t%1, [%m0], #8", operands);
              else
-               output_asm_insn ("stm%(ia%)\t%m0!, %M1", operands);
+               output_asm_insn ("stm%?\t%m0!, %M1", operands);
            }
          break;
 
         case POST_DEC:
          gcc_assert (TARGET_LDRD);
          if (emit)
-           output_asm_insn ("str%(d%)\t%1, [%m0], #-8", operands);
+           output_asm_insn ("strd%?\t%1, [%m0], #-8", operands);
          break;
 
        case PRE_MODIFY:
@@ -18276,12 +18274,12 @@ output_move_double (rtx *operands, bool emit, int *count)
          else if (GET_CODE (XEXP (operands[0], 0)) == PRE_MODIFY)
            {
              if (emit)
-               output_asm_insn ("str%(d%)\t%0, [%1, %2]!", otherops);
+               output_asm_insn ("strd%?\t%0, [%1, %2]!", otherops);
            }
          else
            {
              if (emit)
-               output_asm_insn ("str%(d%)\t%0, [%1], %2", otherops);
+               output_asm_insn ("strd%?\t%0, [%1], %2", otherops);
            }
          break;
 
@@ -18293,21 +18291,21 @@ output_move_double (rtx *operands, bool emit, int *count)
                {
                case -8:
                  if (emit)
-                   output_asm_insn ("stm%(db%)\t%m0, %M1", operands);
+                   output_asm_insn ("stmdb%?\t%m0, %M1", operands);
                  return "";
 
                case -4:
                  if (TARGET_THUMB2)
                    break;
                  if (emit)
-                   output_asm_insn ("stm%(da%)\t%m0, %M1", operands);
+                   output_asm_insn ("stmda%?\t%m0, %M1", operands);
                  return "";
 
                case 4:
                  if (TARGET_THUMB2)
                    break;
                  if (emit)
-                   output_asm_insn ("stm%(ib%)\t%m0, %M1", operands);
+                   output_asm_insn ("stmib%?\t%m0, %M1", operands);
                  return "";
                }
            }
@@ -18321,7 +18319,7 @@ output_move_double (rtx *operands, bool emit, int *count)
              otherops[0] = operands[1];
              otherops[1] = XEXP (XEXP (operands[0], 0), 0);
              if (emit)
-               output_asm_insn ("str%(d%)\t%0, [%1, %2]", otherops);
+               output_asm_insn ("strd%?\t%0, [%1, %2]", otherops);
              return "";
            }
          /* Fall through */
@@ -18357,13 +18355,13 @@ output_move_quad (rtx *operands)
           switch (GET_CODE (XEXP (operands[1], 0)))
             {
             case REG:
-              output_asm_insn ("ldm%(ia%)\t%m1, %M0", operands);
+              output_asm_insn ("ldmia%?\t%m1, %M0", operands);
               break;
 
             case LABEL_REF:
             case CONST:
               output_asm_insn ("adr%?\t%0, %1", operands);
-              output_asm_insn ("ldm%(ia%)\t%0, %M0", operands);
+              output_asm_insn ("ldmia%?\t%0, %M0", operands);
               break;
 
             default:
@@ -18407,7 +18405,7 @@ output_move_quad (rtx *operands)
       switch (GET_CODE (XEXP (operands[0], 0)))
         {
         case REG:
-          output_asm_insn ("stm%(ia%)\t%m0, %M1", operands);
+          output_asm_insn ("stm%?\t%m0, %M1", operands);
           break;
 
         default:
@@ -19440,10 +19438,7 @@ output_return_instruction (rtx operand, bool really_return, bool reverse,
              gcc_assert (stack_adjust == 0 || stack_adjust == 4);
 
              if (stack_adjust && arm_arch5 && TARGET_ARM)
-               if (TARGET_UNIFIED_ASM)
                  sprintf (instr, "ldmib%s\t%%|sp, {", conditional);
-               else
-                 sprintf (instr, "ldm%sib\t%%|sp, {", conditional);
              else
                {
                  /* If we can't use ldmib (SA110 bug),
@@ -19451,17 +19446,11 @@ output_return_instruction (rtx operand, bool really_return, bool reverse,
                  if (stack_adjust)
                    live_regs_mask |= 1 << 3;
 
-                 if (TARGET_UNIFIED_ASM)
-                   sprintf (instr, "ldmfd%s\t%%|sp, {", conditional);
-                 else
-                   sprintf (instr, "ldm%sfd\t%%|sp, {", conditional);
+                 sprintf (instr, "ldmfd%s\t%%|sp, {", conditional);
                }
            }
          else
-           if (TARGET_UNIFIED_ASM)
              sprintf (instr, "pop%s\t{", conditional);
-           else
-             sprintf (instr, "ldm%sfd\t%%|sp!, {", conditional);
 
          p = instr + strlen (instr);
 
@@ -21472,37 +21461,17 @@ arm_print_operand (FILE *stream, rtx x, int code)
       arm_print_condition (stream);
       return;
 
-    case '(':
-      /* Nothing in unified syntax, otherwise the current condition code.  */
-      if (!TARGET_UNIFIED_ASM)
-       arm_print_condition (stream);
-      break;
-
-    case ')':
-      /* The current condition code in unified syntax, otherwise nothing.  */
-      if (TARGET_UNIFIED_ASM)
-       arm_print_condition (stream);
-      break;
-
     case '.':
       /* The current condition code for a condition code setting instruction.
         Preceded by 's' in unified syntax, otherwise followed by 's'.  */
-      if (TARGET_UNIFIED_ASM)
-       {
-         fputc('s', stream);
-         arm_print_condition (stream);
-       }
-      else
-       {
-         arm_print_condition (stream);
-         fputc('s', stream);
-       }
+      fputc('s', stream);
+      arm_print_condition (stream);
       return;
 
     case '!':
       /* If the instruction is conditionally executed then print
         the current condition code, otherwise print 's'.  */
-      gcc_assert (TARGET_THUMB2 && TARGET_UNIFIED_ASM);
+      gcc_assert (TARGET_THUMB2);
       if (current_insn_predicate)
        arm_print_condition (stream);
       else
@@ -26914,20 +26883,16 @@ arm_output_shift(rtx * operands, int set_flags)
   char c;
 
   c = flag_chars[set_flags];
-  if (TARGET_UNIFIED_ASM)
+  shift = shift_op(operands[3], &val);
+  if (shift)
     {
-      shift = shift_op(operands[3], &val);
-      if (shift)
-       {
-         if (val != -1)
-           operands[2] = GEN_INT(val);
-         sprintf (pattern, "%s%%%c\t%%0, %%1, %%2", shift, c);
-       }
-      else
-       sprintf (pattern, "mov%%%c\t%%0, %%1", c);
+      if (val != -1)
+       operands[2] = GEN_INT(val);
+      sprintf (pattern, "%s%%%c\t%%0, %%1, %%2", shift, c);
     }
   else
-    sprintf (pattern, "mov%%%c\t%%0, %%1%%S3", c);
+    sprintf (pattern, "mov%%%c\t%%0, %%1", c);
+
   output_asm_insn (pattern, operands);
   return "";
 }
@@ -29569,10 +29534,8 @@ arm_valid_target_attribute_p (tree fndecl, tree ARG_UNUSED (name),
 void
 arm_declare_function_name (FILE *stream, const char *name, tree decl)
 {
-  if (TARGET_UNIFIED_ASM)
-    fprintf (stream, "\t.syntax unified\n");
-  else
-    fprintf (stream, "\t.syntax divided\n");
+
+  fprintf (stream, "\t.syntax unified\n");
 
   if (TARGET_THUMB)
     {
diff --git a/gcc/config/arm/arm.h b/gcc/config/arm/arm.h
index 836e517..252d98d 100644
--- a/gcc/config/arm/arm.h
+++ b/gcc/config/arm/arm.h
@@ -238,10 +238,6 @@ extern void (*arm_lang_output_object_attributes_hook)(void);
    && (arm_disable_literal_pool \
        || (!optimize_size && !current_tune->prefer_constant_pool)))
 
-/* We could use unified syntax for arm mode, but for now we just use it
-   for thumb mode.  */
-#define TARGET_UNIFIED_ASM (TARGET_THUMB)
-
 /* Nonzero if this chip provides the DMB instruction.  */
 #define TARGET_HAVE_DMB                (arm_arch6m || arm_arch7)
 
@@ -2021,8 +2017,7 @@ extern int making_const_table;
                    "\t.syntax divided\n")
 
 #undef  ASM_APP_OFF
-#define ASM_APP_OFF (TARGET_ARM ? "\t.arm\n\t.syntax divided\n" : \
-                    "\t.thumb\n\t.syntax unified\n")
+#define ASM_APP_OFF "\t.syntax unified\n"
 
 /* Output a push or a pop instruction (only used when profiling).
    We can't push STATIC_CHAIN_REGNUM (r12) directly with Thumb-1.  We know
@@ -2033,10 +2028,7 @@ extern int making_const_table;
 #define ASM_OUTPUT_REG_PUSH(STREAM, REGNO)             \
   do                                                   \
     {                                                  \
-      if (TARGET_ARM)                                  \
-       asm_fprintf (STREAM,"\tstmfd\t%r!,{%r}\n",      \
-                    STACK_POINTER_REGNUM, REGNO);      \
-      else if (TARGET_THUMB1                           \
+      if (TARGET_THUMB1                                        \
               && (REGNO) == STATIC_CHAIN_REGNUM)       \
        {                                               \
          asm_fprintf (STREAM, "\tpush\t{r7}\n");       \
@@ -2052,11 +2044,8 @@ extern int making_const_table;
 #define ASM_OUTPUT_REG_POP(STREAM, REGNO)              \
   do                                                   \
     {                                                  \
-      if (TARGET_ARM)                                  \
-       asm_fprintf (STREAM, "\tldmfd\t%r!,{%r}\n",     \
-                    STACK_POINTER_REGNUM, REGNO);      \
-      else if (TARGET_THUMB1                           \
-              && (REGNO) == STATIC_CHAIN_REGNUM)       \
+      if (TARGET_THUMB1                                        \
+         && (REGNO) == STATIC_CHAIN_REGNUM)            \
        {                                               \
          asm_fprintf (STREAM, "\tpop\t{r7}\n");        \
          asm_fprintf (STREAM, "\tmov\t%r, r7\n", REGNO);\
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index be51c77..a06e790 100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -622,9 +622,9 @@
        (plus:SI (match_dup 1) (match_dup 2)))]
   "TARGET_ARM"
   "@
-   add%.\\t%0, %1, %2
-   sub%.\\t%0, %1, #%n2
-   add%.\\t%0, %1, %2"
+   adds%?\\t%0, %1, %2
+   subs%?\\t%0, %1, #%n2
+   adds%?\\t%0, %1, %2"
   [(set_attr "conds" "set")
    (set_attr "type" "alus_imm,alus_imm,alus_sreg")]
 )
@@ -672,8 +672,8 @@
                 (match_operand:SI 3 "arm_addimm_operand" "I,L")))]
   "TARGET_32BIT && INTVAL (operands[2]) == -INTVAL (operands[3])"
   "@
-   add%.\\t%0, %1, %3
-   sub%.\\t%0, %1, #%n3"
+   adds%?\\t%0, %1, %3
+   subs%?\\t%0, %1, #%n3"
   [(set_attr "conds" "set")
    (set_attr "type" "alus_sreg")]
 )
@@ -729,9 +729,9 @@
        (plus:SI (match_dup 1) (match_dup 2)))]
   "TARGET_32BIT"
   "@
-   add%.\\t%0, %1, %2
-   sub%.\\t%0, %1, #%n2
-   add%.\\t%0, %1, %2"
+   adds%?\\t%0, %1, %2
+   subs%?\\t%0, %1, #%n2
+   adds%?\\t%0, %1, %2"
   [(set_attr "conds" "set")
    (set_attr "type"  "alus_imm,alus_imm,alus_sreg")]
 )
@@ -746,9 +746,9 @@
        (plus:SI (match_dup 1) (match_dup 2)))]
   "TARGET_32BIT"
   "@
-   add%.\\t%0, %1, %2
-   add%.\\t%0, %1, %2
-   sub%.\\t%0, %1, #%n2"
+   adds%?\\t%0, %1, %2
+   adds%?\\t%0, %1, %2
+   subs%?\\t%0, %1, #%n2"
   [(set_attr "conds" "set")
    (set_attr "type" "alus_imm,alus_imm,alus_sreg")]
 )
@@ -856,7 +856,7 @@
                 (LTUGEU:SI (reg:<cnb> CC_REGNUM) (const_int 0))))
    (clobber (reg:CC CC_REGNUM))]
    "TARGET_32BIT"
-   "adc%.\\t%0, %1, %2"
+   "adcs%?\\t%0, %1, %2"
    [(set_attr "conds" "set")
     (set_attr "type" "adcs_reg")]
 )
@@ -1239,9 +1239,9 @@
        (minus:SI (match_dup 1) (match_dup 2)))]
   "TARGET_32BIT"
   "@
-   sub%.\\t%0, %1, %2
-   sub%.\\t%0, %1, %2
-   rsb%.\\t%0, %2, %1"
+   subs%?\\t%0, %1, %2
+   subs%?\\t%0, %1, %2
+   rsbs%?\\t%0, %2, %1"
   [(set_attr "conds" "set")
    (set_attr "type"  "alus_imm,alus_sreg,alus_sreg")]
 )
@@ -1254,9 +1254,9 @@
        (minus:SI (match_dup 1) (match_dup 2)))]
   "TARGET_32BIT"
   "@
-   sub%.\\t%0, %1, %2
-   sub%.\\t%0, %1, %2
-   rsb%.\\t%0, %2, %1"
+   subs%?\\t%0, %1, %2
+   subs%?\\t%0, %1, %2
+   rsbs%?\\t%0, %2, %1"
   [(set_attr "conds" "set")
    (set_attr "type" "alus_imm,alus_sreg,alus_sreg")]
 )
@@ -1335,7 +1335,7 @@
    (set (match_operand:SI 0 "s_register_operand" "=&r,&r")
        (mult:SI (match_dup 2) (match_dup 1)))]
   "TARGET_ARM && !arm_arch6"
-  "mul%.\\t%0, %2, %1"
+  "muls%?\\t%0, %2, %1"
   [(set_attr "conds" "set")
    (set_attr "type" "muls")]
 )
@@ -1349,7 +1349,7 @@
    (set (match_operand:SI 0 "s_register_operand" "=r")
        (mult:SI (match_dup 2) (match_dup 1)))]
   "TARGET_ARM && arm_arch6 && optimize_size"
-  "mul%.\\t%0, %2, %1"
+  "muls%?\\t%0, %2, %1"
   [(set_attr "conds" "set")
    (set_attr "type" "muls")]
 )
@@ -1362,7 +1362,7 @@
                         (const_int 0)))
    (clobber (match_scratch:SI 0 "=&r,&r"))]
   "TARGET_ARM && !arm_arch6"
-  "mul%.\\t%0, %2, %1"
+  "muls%?\\t%0, %2, %1"
   [(set_attr "conds" "set")
    (set_attr "type" "muls")]
 )
@@ -1375,7 +1375,7 @@
                         (const_int 0)))
    (clobber (match_scratch:SI 0 "=r"))]
   "TARGET_ARM && arm_arch6 && optimize_size"
-  "mul%.\\t%0, %2, %1"
+  "muls%?\\t%0, %2, %1"
   [(set_attr "conds" "set")
    (set_attr "type" "muls")]
 )
@@ -1419,7 +1419,7 @@
        (plus:SI (mult:SI (match_dup 2) (match_dup 1))
                 (match_dup 3)))]
   "TARGET_ARM && arm_arch6"
-  "mla%.\\t%0, %2, %1, %3"
+  "mlas%?\\t%0, %2, %1, %3"
   [(set_attr "conds" "set")
    (set_attr "type" "mlas")]
 )
@@ -1436,7 +1436,7 @@
        (plus:SI (mult:SI (match_dup 2) (match_dup 1))
                 (match_dup 3)))]
   "TARGET_ARM && arm_arch6 && optimize_size"
-  "mla%.\\t%0, %2, %1, %3"
+  "mlas%?\\t%0, %2, %1, %3"
   [(set_attr "conds" "set")
    (set_attr "type" "mlas")]
 )
@@ -1451,7 +1451,7 @@
         (const_int 0)))
    (clobber (match_scratch:SI 0 "=&r,&r,&r,&r"))]
   "TARGET_ARM && !arm_arch6"
-  "mla%.\\t%0, %2, %1, %3"
+  "mlas%?\\t%0, %2, %1, %3"
   [(set_attr "conds" "set")
    (set_attr "type" "mlas")]
 )
@@ -1466,7 +1466,7 @@
         (const_int 0)))
    (clobber (match_scratch:SI 0 "=r"))]
   "TARGET_ARM && arm_arch6 && optimize_size"
-  "mla%.\\t%0, %2, %1, %3"
+  "mlas%?\\t%0, %2, %1, %3"
   [(set_attr "conds" "set")
    (set_attr "type" "mlas")]
 )
@@ -2195,9 +2195,9 @@
        (and:SI (match_dup 1) (match_dup 2)))]
   "TARGET_32BIT"
   "@
-   and%.\\t%0, %1, %2
-   bic%.\\t%0, %1, #%B2
-   and%.\\t%0, %1, %2"
+   ands%?\\t%0, %1, %2
+   bics%?\\t%0, %1, #%B2
+   ands%?\\t%0, %1, %2"
   [(set_attr "conds" "set")
    (set_attr "type" "logics_imm,logics_imm,logics_reg")]
 )
@@ -2212,7 +2212,7 @@
   "TARGET_32BIT"
   "@
    tst%?\\t%0, %1
-   bic%.\\t%2, %0, #%B1
+   bics%?\\t%2, %0, #%B1
    tst%?\\t%0, %1"
   [(set_attr "conds" "set")
    (set_attr "type"  "logics_imm,logics_imm,logics_reg")]
@@ -2796,7 +2796,7 @@
                (const_int 0)))
    (clobber (match_scratch:SI 4 "=r"))]
   "TARGET_ARM || (TARGET_THUMB2 && CONST_INT_P (operands[2]))"
-  "bic%.%?\\t%4, %3, %1%S0"
+  "bics%?\\t%4, %3, %1%S0"
   [(set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")
    (set_attr "conds" "set")
@@ -2822,7 +2822,7 @@
                      (match_dup 2)]))
                     (match_dup 3)))])]
   "TARGET_ARM || (TARGET_THUMB2 && CONST_INT_P (operands[2]))"
-  "bic%.%?\\t%4, %3, %1%S0"
+  "bics%?\\t%4, %3, %1%S0"
   [(set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")
    (set_attr "conds" "set")
@@ -2841,7 +2841,7 @@
    (set (match_operand:SI 0 "s_register_operand" "=r")
        (and:SI (not:SI (match_dup 2)) (match_dup 1)))]
   "TARGET_32BIT"
-  "bic%.\\t%0, %1, %2"
+  "bics\\t%0, %1, %2"
   [(set_attr "conds" "set")
    (set_attr "type" "logics_shift_reg")]
 )
@@ -2854,7 +2854,7 @@
         (const_int 0)))
    (clobber (match_scratch:SI 0 "=r"))]
   "TARGET_32BIT"
-  "bic%.\\t%0, %1, %2"
+  "bics\\t%0, %1, %2"
   [(set_attr "conds" "set")
    (set_attr "type" "logics_shift_reg")]
 )
@@ -4233,7 +4233,7 @@
          (unspec:HI [(match_operand:HI 1 "memory_operand" "Uw,Uh")]
                     UNSPEC_UNALIGNED_LOAD)))]
   "unaligned_access && TARGET_32BIT"
-  "ldr%(sh%)\t%0, %1\t@ unaligned"
+  "ldrsh%?\t%0, %1\t@ unaligned"
   [(set_attr "arch" "t2,any")
    (set_attr "length" "2,4")
    (set_attr "predicable" "yes")
@@ -4246,7 +4246,7 @@
          (unspec:HI [(match_operand:HI 1 "memory_operand" "Uw,m")]
                     UNSPEC_UNALIGNED_LOAD)))]
   "unaligned_access && TARGET_32BIT"
-  "ldr%(h%)\t%0, %1\t@ unaligned"
+  "ldrh%?\t%0, %1\t@ unaligned"
   [(set_attr "arch" "t2,any")
    (set_attr "length" "2,4")
    (set_attr "predicable" "yes")
@@ -4270,7 +4270,7 @@
        (unspec:HI [(match_operand:HI 1 "s_register_operand" "l,r")]
                   UNSPEC_UNALIGNED_STORE))]
   "unaligned_access && TARGET_32BIT"
-  "str%(h%)\t%1, %0\t@ unaligned"
+  "strh%?\t%1, %0\t@ unaligned"
   [(set_attr "arch" "t2,any")
    (set_attr "length" "2,4")
    (set_attr "predicable" "yes")
@@ -5022,7 +5022,7 @@
   "TARGET_ARM && arm_arch4 && !arm_arch6"
   "@
    #
-   ldr%(h%)\\t%0, %1"
+   ldrh%?\\t%0, %1"
   [(set_attr "type" "alu_shift_reg,load_byte")
    (set_attr "predicable" "yes")]
 )
@@ -5033,7 +5033,7 @@
   "TARGET_ARM && arm_arch6"
   "@
    uxth%?\\t%0, %1
-   ldr%(h%)\\t%0, %1"
+   ldrh%?\\t%0, %1"
   [(set_attr "predicable" "yes")
    (set_attr "type" "extend,load_byte")]
 )
@@ -5092,7 +5092,7 @@
   "TARGET_ARM && !arm_arch6"
   "@
    #
-   ldr%(b%)\\t%0, %1\\t%@ zero_extendqisi2"
+   ldrb%?\\t%0, %1\\t%@ zero_extendqisi2"
   [(set_attr "length" "8,4")
    (set_attr "type" "alu_shift_reg,load_byte")
    (set_attr "predicable" "yes")]
@@ -5103,8 +5103,8 @@
        (zero_extend:SI (match_operand:QI 1 "nonimmediate_operand" "r,Uh")))]
   "TARGET_ARM && arm_arch6"
   "@
-   uxtb%(%)\\t%0, %1
-   ldr%(b%)\\t%0, %1\\t%@ zero_extendqisi2"
+   uxtb%?\\t%0, %1
+   ldrb%?\\t%0, %1\\t%@ zero_extendqisi2"
   [(set_attr "type" "extend,load_byte")
    (set_attr "predicable" "yes")]
 )
@@ -5264,7 +5264,7 @@
   "TARGET_ARM && arm_arch4 && !arm_arch6"
   "@
    #
-   ldr%(sh%)\\t%0, %1"
+   ldrsh%?\\t%0, %1"
   [(set_attr "length" "8,4")
    (set_attr "type" "alu_shift_reg,load_byte")
    (set_attr "predicable" "yes")]
@@ -5277,7 +5277,7 @@
   "TARGET_32BIT && arm_arch6"
   "@
    sxth%?\\t%0, %1
-   ldr%(sh%)\\t%0, %1"
+   ldrsh%?\\t%0, %1"
   [(set_attr "type" "extend,load_byte")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")]
@@ -5320,7 +5320,7 @@
   [(set (match_operand:HI 0 "s_register_operand" "=r")
        (sign_extend:HI (match_operand:QI 1 "arm_extendqisi_mem_op" "Uq")))]
   "TARGET_ARM && arm_arch4"
-  "ldr%(sb%)\\t%0, %1"
+  "ldrsb%?\\t%0, %1"
   [(set_attr "type" "load_byte")
    (set_attr "predicable" "yes")]
 )
@@ -5359,7 +5359,7 @@
   "TARGET_ARM && arm_arch4 && !arm_arch6"
   "@
    #
-   ldr%(sb%)\\t%0, %1"
+   ldrsb%?\\t%0, %1"
   [(set_attr "length" "8,4")
    (set_attr "type" "alu_shift_reg,load_byte")
    (set_attr "predicable" "yes")]
@@ -5372,7 +5372,7 @@
   "TARGET_ARM && arm_arch6"
   "@
    sxtb%?\\t%0, %1
-   ldr%(sb%)\\t%0, %1"
+   ldrsb%?\\t%0, %1"
   [(set_attr "type" "extend,load_byte")
    (set_attr "predicable" "yes")]
 )
@@ -6403,8 +6403,8 @@
    mov%?\\t%0, %1\\t%@ movhi
    mvn%?\\t%0, #%B1\\t%@ movhi
    movw%?\\t%0, %L1\\t%@ movhi
-   str%(h%)\\t%1, %0\\t%@ movhi
-   ldr%(h%)\\t%0, %1\\t%@ movhi"
+   strh%?\\t%1, %0\\t%@ movhi
+   ldrh%?\\t%0, %1\\t%@ movhi"
   [(set_attr "predicable" "yes")
    (set_attr "pool_range" "*,*,*,*,256")
    (set_attr "neg_pool_range" "*,*,*,*,244")
@@ -6546,10 +6546,10 @@
    mov%?\\t%0, %1
    mov%?\\t%0, %1
    mvn%?\\t%0, #%B1
-   ldr%(b%)\\t%0, %1
-   str%(b%)\\t%1, %0
-   ldr%(b%)\\t%0, %1
-   str%(b%)\\t%1, %0"
+   ldrb%?\\t%0, %1
+   strb%?\\t%1, %0
+   ldrb%?\\t%0, %1
+   strb%?\\t%1, %0"
   [(set_attr "type" "mov_reg,mov_reg,mov_imm,mov_imm,mvn_imm,load1,store1,load1,store1")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "yes,yes,yes,no,no,no,no,no,no")
@@ -6589,9 +6589,9 @@
   switch (which_alternative)
     {
     case 0:    /* ARM register from memory */
-      return \"ldr%(h%)\\t%0, %1\\t%@ __fp16\";
+      return \"ldrh%?\\t%0, %1\\t%@ __fp16\";
     case 1:    /* memory from ARM register */
-      return \"str%(h%)\\t%1, %0\\t%@ __fp16\";
+      return \"strh%?\\t%1, %0\\t%@ __fp16\";
     case 2:    /* ARM register from ARM register */
       return \"mov%?\\t%0, %1\\t%@ __fp16\";
     case 3:    /* ARM register from constant */
@@ -8326,13 +8326,7 @@
 (define_insn "nop"
   [(const_int 0)]
   "TARGET_EITHER"
-  "*
-  if (TARGET_UNIFIED_ASM)
-    return \"nop\";
-  if (TARGET_ARM)
-    return \"mov%?\\t%|r0, %|r0\\t%@ nop\";
-  return  \"mov\\tr8, r8\";
-  "
+  "nop"
   [(set (attr "length")
        (if_then_else (eq_attr "is_thumb" "yes")
                      (const_int 2)
@@ -10172,7 +10166,7 @@
        if (val1 == 4 || val2 == 4)
          /* Other val must be 8, since we know they are adjacent and neither
             is zero.  */
-         output_asm_insn (\"ldm%(ib%)\\t%0, {%1, %2}\", ldm);
+         output_asm_insn (\"ldmib%?\\t%0, {%1, %2}\", ldm);
        else if (const_ok_for_arm (val1) || const_ok_for_arm (-val1))
          {
            ldm[0] = ops[0] = operands[4];
@@ -10180,9 +10174,9 @@
            ops[2] = GEN_INT (val1);
            output_add_immediate (ops);
            if (val1 < val2)
-             output_asm_insn (\"ldm%(ia%)\\t%0, {%1, %2}\", ldm);
+             output_asm_insn (\"ldmia%?\\t%0, {%1, %2}\", ldm);
            else
-             output_asm_insn (\"ldm%(da%)\\t%0, {%1, %2}\", ldm);
+             output_asm_insn (\"ldmda%?\\t%0, {%1, %2}\", ldm);
          }
        else
          {
@@ -10199,16 +10193,16 @@
     else if (val1 != 0)
       {
        if (val1 < val2)
-         output_asm_insn (\"ldm%(da%)\\t%0, {%1, %2}\", ldm);
+         output_asm_insn (\"ldmda%?\\t%0, {%1, %2}\", ldm);
        else
-         output_asm_insn (\"ldm%(ia%)\\t%0, {%1, %2}\", ldm);
+         output_asm_insn (\"ldmia%?\\t%0, {%1, %2}\", ldm);
       }
     else
       {
        if (val1 < val2)
-         output_asm_insn (\"ldm%(ia%)\\t%0, {%1, %2}\", ldm);
+         output_asm_insn (\"ldmia%?\\t%0, {%1, %2}\", ldm);
        else
-         output_asm_insn (\"ldm%(da%)\\t%0, {%1, %2}\", ldm);
+         output_asm_insn (\"ldmda%?\\t%0, {%1, %2}\", ldm);
       }
     output_asm_insn (\"%I3%?\\t%0, %1, %2\", arith);
     return \"\";
@@ -10544,9 +10538,7 @@
        int i;
        char pattern[100];
 
-       if (TARGET_ARM)
-           strcpy (pattern, \"stm%(fd%)\\t%m0!, {%1\");
-       else if (TARGET_THUMB2)
+       if (TARGET_32BIT)
            strcpy (pattern, \"push%?\\t{%1\");
        else
            strcpy (pattern, \"push\\t{%1\");
diff --git a/gcc/config/arm/ldmstm.md b/gcc/config/arm/ldmstm.md
index 20d6b4c..ebb09ab 100644
--- a/gcc/config/arm/ldmstm.md
+++ b/gcc/config/arm/ldmstm.md
@@ -21,7 +21,7 @@
    see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
    <http://www.gnu.org/licenses/>.  */
 
-(define_insn "*ldm4_ia"
+(define_insn "*ldm4_"
   [(match_parallel 0 "load_multiple_operation"
     [(set (match_operand:SI 1 "arm_hard_general_register_operand" "")
           (mem:SI (match_operand:SI 5 "s_register_operand" "rk")))
@@ -35,7 +35,7 @@
           (mem:SI (plus:SI (match_dup 5)
                   (const_int 12))))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 4"
-  "ldm%(ia%)\t%5, {%1, %2, %3, %4}"
+  "ldm%?\t%5, {%1, %2, %3, %4}"
   [(set_attr "type" "load4")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -54,7 +54,7 @@
           (mem:SI (plus:SI (match_dup 5)
                   (const_int 12))))])]
   "TARGET_THUMB1 && XVECLEN (operands[0], 0) == 4"
-  "ldm%(ia%)\t%5, {%1, %2, %3, %4}"
+  "ldmia\t%5, {%1, %2, %3, %4}"
   [(set_attr "type" "load4")])
 
 (define_insn "*ldm4_ia_update"
@@ -73,7 +73,7 @@
           (mem:SI (plus:SI (match_dup 5)
                   (const_int 12))))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 5"
-  "ldm%(ia%)\t%5!, {%1, %2, %3, %4}"
+  "ldmia%?\t%5!, {%1, %2, %3, %4}"
   [(set_attr "type" "load4")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -94,10 +94,10 @@
           (mem:SI (plus:SI (match_dup 5)
                   (const_int 12))))])]
   "TARGET_THUMB1 && XVECLEN (operands[0], 0) == 5"
-  "ldm%(ia%)\t%5!, {%1, %2, %3, %4}"
+  "ldmia\t%5!, {%1, %2, %3, %4}"
   [(set_attr "type" "load4")])
 
-(define_insn "*stm4_ia"
+(define_insn "*stm4_"
   [(match_parallel 0 "store_multiple_operation"
     [(set (mem:SI (match_operand:SI 5 "s_register_operand" "rk"))
           (match_operand:SI 1 "arm_hard_general_register_operand" ""))
@@ -108,7 +108,7 @@
      (set (mem:SI (plus:SI (match_dup 5) (const_int 12)))
           (match_operand:SI 4 "arm_hard_general_register_operand" ""))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 4"
-  "stm%(ia%)\t%5, {%1, %2, %3, %4}"
+  "stm%?\t%5, {%1, %2, %3, %4}"
   [(set_attr "type" "store4")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -126,7 +126,7 @@
      (set (mem:SI (plus:SI (match_dup 5) (const_int 12)))
           (match_operand:SI 4 "arm_hard_general_register_operand" ""))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 5"
-  "stm%(ia%)\t%5!, {%1, %2, %3, %4}"
+  "stmia%?\t%5!, {%1, %2, %3, %4}"
   [(set_attr "type" "store4")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -144,7 +144,7 @@
      (set (mem:SI (plus:SI (match_dup 5) (const_int 12)))
           (match_operand:SI 4 "low_register_operand" ""))])]
   "TARGET_THUMB1 && XVECLEN (operands[0], 0) == 5"
-  "stm%(ia%)\t%5!, {%1, %2, %3, %4}"
+  "stmia\t%5!, {%1, %2, %3, %4}"
   [(set_attr "type" "store4")])
 
 (define_insn "*ldm4_ib"
@@ -162,7 +162,7 @@
           (mem:SI (plus:SI (match_dup 5)
                   (const_int 16))))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 4"
-  "ldm%(ib%)\t%5, {%1, %2, %3, %4}"
+  "ldmib%?\t%5, {%1, %2, %3, %4}"
   [(set_attr "type" "load4")
    (set_attr "predicable" "yes")])
 
@@ -183,7 +183,7 @@
           (mem:SI (plus:SI (match_dup 5)
                   (const_int 16))))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 5"
-  "ldm%(ib%)\t%5!, {%1, %2, %3, %4}"
+  "ldmib%?\t%5!, {%1, %2, %3, %4}"
   [(set_attr "type" "load4")
    (set_attr "predicable" "yes")])
 
@@ -198,7 +198,7 @@
      (set (mem:SI (plus:SI (match_dup 5) (const_int 16)))
           (match_operand:SI 4 "arm_hard_general_register_operand" ""))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 4"
-  "stm%(ib%)\t%5, {%1, %2, %3, %4}"
+  "stmib%?\t%5, {%1, %2, %3, %4}"
   [(set_attr "type" "store4")
    (set_attr "predicable" "yes")])
 
@@ -215,7 +215,7 @@
      (set (mem:SI (plus:SI (match_dup 5) (const_int 16)))
           (match_operand:SI 4 "arm_hard_general_register_operand" ""))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 5"
-  "stm%(ib%)\t%5!, {%1, %2, %3, %4}"
+  "stmib%?\t%5!, {%1, %2, %3, %4}"
   [(set_attr "type" "store4")
    (set_attr "predicable" "yes")])
 
@@ -233,7 +233,7 @@
      (set (match_operand:SI 4 "arm_hard_general_register_operand" "")
           (mem:SI (match_dup 5)))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 4"
-  "ldm%(da%)\t%5, {%1, %2, %3, %4}"
+  "ldmda%?\t%5, {%1, %2, %3, %4}"
   [(set_attr "type" "load4")
    (set_attr "predicable" "yes")])
 
@@ -253,7 +253,7 @@
      (set (match_operand:SI 4 "arm_hard_general_register_operand" "")
           (mem:SI (match_dup 5)))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 5"
-  "ldm%(da%)\t%5!, {%1, %2, %3, %4}"
+  "ldmda%?\t%5!, {%1, %2, %3, %4}"
   [(set_attr "type" "load4")
    (set_attr "predicable" "yes")])
 
@@ -268,7 +268,7 @@
      (set (mem:SI (match_dup 5))
           (match_operand:SI 4 "arm_hard_general_register_operand" ""))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 4"
-  "stm%(da%)\t%5, {%1, %2, %3, %4}"
+  "stmda%?\t%5, {%1, %2, %3, %4}"
   [(set_attr "type" "store4")
    (set_attr "predicable" "yes")])
 
@@ -285,7 +285,7 @@
      (set (mem:SI (match_dup 5))
           (match_operand:SI 4 "arm_hard_general_register_operand" ""))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 5"
-  "stm%(da%)\t%5!, {%1, %2, %3, %4}"
+  "stmda%?\t%5!, {%1, %2, %3, %4}"
   [(set_attr "type" "store4")
    (set_attr "predicable" "yes")])
 
@@ -304,7 +304,7 @@
           (mem:SI (plus:SI (match_dup 5)
                   (const_int -4))))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 4"
-  "ldm%(db%)\t%5, {%1, %2, %3, %4}"
+  "ldmdb%?\t%5, {%1, %2, %3, %4}"
   [(set_attr "type" "load4")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -326,7 +326,7 @@
           (mem:SI (plus:SI (match_dup 5)
                   (const_int -4))))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 5"
-  "ldm%(db%)\t%5!, {%1, %2, %3, %4}"
+  "ldmdb%?\t%5!, {%1, %2, %3, %4}"
   [(set_attr "type" "load4")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -342,7 +342,7 @@
      (set (mem:SI (plus:SI (match_dup 5) (const_int -4)))
           (match_operand:SI 4 "arm_hard_general_register_operand" ""))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 4"
-  "stm%(db%)\t%5, {%1, %2, %3, %4}"
+  "stmdb%?\t%5, {%1, %2, %3, %4}"
   [(set_attr "type" "store4")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -360,7 +360,7 @@
      (set (mem:SI (plus:SI (match_dup 5) (const_int -4)))
           (match_operand:SI 4 "arm_hard_general_register_operand" ""))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 5"
-  "stm%(db%)\t%5!, {%1, %2, %3, %4}"
+  "stmdb%?\t%5!, {%1, %2, %3, %4}"
   [(set_attr "type" "store4")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -472,7 +472,7 @@
     FAIL;
 })
 
-(define_insn "*ldm3_ia"
+(define_insn "*ldm3_"
   [(match_parallel 0 "load_multiple_operation"
     [(set (match_operand:SI 1 "arm_hard_general_register_operand" "")
           (mem:SI (match_operand:SI 4 "s_register_operand" "rk")))
@@ -483,7 +483,7 @@
           (mem:SI (plus:SI (match_dup 4)
                   (const_int 8))))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 3"
-  "ldm%(ia%)\t%4, {%1, %2, %3}"
+  "ldm%?\t%4, {%1, %2, %3}"
   [(set_attr "type" "load3")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -499,7 +499,7 @@
           (mem:SI (plus:SI (match_dup 4)
                   (const_int 8))))])]
   "TARGET_THUMB1 && XVECLEN (operands[0], 0) == 3"
-  "ldm%(ia%)\t%4, {%1, %2, %3}"
+  "ldmia\t%4, {%1, %2, %3}"
   [(set_attr "type" "load3")])
 
 (define_insn "*ldm3_ia_update"
@@ -515,7 +515,7 @@
           (mem:SI (plus:SI (match_dup 4)
                   (const_int 8))))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 4"
-  "ldm%(ia%)\t%4!, {%1, %2, %3}"
+  "ldmia%?\t%4!, {%1, %2, %3}"
   [(set_attr "type" "load3")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -533,10 +533,10 @@
           (mem:SI (plus:SI (match_dup 4)
                   (const_int 8))))])]
   "TARGET_THUMB1 && XVECLEN (operands[0], 0) == 4"
-  "ldm%(ia%)\t%4!, {%1, %2, %3}"
+  "ldmia\t%4!, {%1, %2, %3}"
   [(set_attr "type" "load3")])
 
-(define_insn "*stm3_ia"
+(define_insn "*stm3_"
   [(match_parallel 0 "store_multiple_operation"
     [(set (mem:SI (match_operand:SI 4 "s_register_operand" "rk"))
           (match_operand:SI 1 "arm_hard_general_register_operand" ""))
@@ -545,7 +545,7 @@
      (set (mem:SI (plus:SI (match_dup 4) (const_int 8)))
           (match_operand:SI 3 "arm_hard_general_register_operand" ""))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 3"
-  "stm%(ia%)\t%4, {%1, %2, %3}"
+  "stm%?\t%4, {%1, %2, %3}"
   [(set_attr "type" "store3")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -561,7 +561,7 @@
      (set (mem:SI (plus:SI (match_dup 4) (const_int 8)))
           (match_operand:SI 3 "arm_hard_general_register_operand" ""))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 4"
-  "stm%(ia%)\t%4!, {%1, %2, %3}"
+  "stmia%?\t%4!, {%1, %2, %3}"
   [(set_attr "type" "store3")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -577,7 +577,7 @@
      (set (mem:SI (plus:SI (match_dup 4) (const_int 8)))
           (match_operand:SI 3 "low_register_operand" ""))])]
   "TARGET_THUMB1 && XVECLEN (operands[0], 0) == 4"
-  "stm%(ia%)\t%4!, {%1, %2, %3}"
+  "stmia\t%4!, {%1, %2, %3}"
   [(set_attr "type" "store3")])
 
 (define_insn "*ldm3_ib"
@@ -592,7 +592,7 @@
           (mem:SI (plus:SI (match_dup 4)
                   (const_int 12))))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 3"
-  "ldm%(ib%)\t%4, {%1, %2, %3}"
+  "ldmib%?\t%4, {%1, %2, %3}"
   [(set_attr "type" "load3")
    (set_attr "predicable" "yes")])
 
@@ -610,7 +610,7 @@
           (mem:SI (plus:SI (match_dup 4)
                   (const_int 12))))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 4"
-  "ldm%(ib%)\t%4!, {%1, %2, %3}"
+  "ldmib%?\t%4!, {%1, %2, %3}"
   [(set_attr "type" "load3")
    (set_attr "predicable" "yes")])
 
@@ -623,7 +623,7 @@
      (set (mem:SI (plus:SI (match_dup 4) (const_int 12)))
           (match_operand:SI 3 "arm_hard_general_register_operand" ""))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 3"
-  "stm%(ib%)\t%4, {%1, %2, %3}"
+  "stmib%?\t%4, {%1, %2, %3}"
   [(set_attr "type" "store3")
    (set_attr "predicable" "yes")])
 
@@ -638,7 +638,7 @@
      (set (mem:SI (plus:SI (match_dup 4) (const_int 12)))
           (match_operand:SI 3 "arm_hard_general_register_operand" ""))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 4"
-  "stm%(ib%)\t%4!, {%1, %2, %3}"
+  "stmib%?\t%4!, {%1, %2, %3}"
   [(set_attr "type" "store3")
    (set_attr "predicable" "yes")])
 
@@ -653,7 +653,7 @@
      (set (match_operand:SI 3 "arm_hard_general_register_operand" "")
           (mem:SI (match_dup 4)))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 3"
-  "ldm%(da%)\t%4, {%1, %2, %3}"
+  "ldmda%?\t%4, {%1, %2, %3}"
   [(set_attr "type" "load3")
    (set_attr "predicable" "yes")])
 
@@ -670,7 +670,7 @@
      (set (match_operand:SI 3 "arm_hard_general_register_operand" "")
           (mem:SI (match_dup 4)))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 4"
-  "ldm%(da%)\t%4!, {%1, %2, %3}"
+  "ldmda%?\t%4!, {%1, %2, %3}"
   [(set_attr "type" "load3")
    (set_attr "predicable" "yes")])
 
@@ -683,7 +683,7 @@
      (set (mem:SI (match_dup 4))
           (match_operand:SI 3 "arm_hard_general_register_operand" ""))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 3"
-  "stm%(da%)\t%4, {%1, %2, %3}"
+  "stmda%?\t%4, {%1, %2, %3}"
   [(set_attr "type" "store3")
    (set_attr "predicable" "yes")])
 
@@ -698,7 +698,7 @@
      (set (mem:SI (match_dup 4))
           (match_operand:SI 3 "arm_hard_general_register_operand" ""))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 4"
-  "stm%(da%)\t%4!, {%1, %2, %3}"
+  "stmda%?\t%4!, {%1, %2, %3}"
   [(set_attr "type" "store3")
    (set_attr "predicable" "yes")])
 
@@ -714,7 +714,7 @@
           (mem:SI (plus:SI (match_dup 4)
                   (const_int -4))))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 3"
-  "ldm%(db%)\t%4, {%1, %2, %3}"
+  "ldmdb%?\t%4, {%1, %2, %3}"
   [(set_attr "type" "load3")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -733,7 +733,7 @@
           (mem:SI (plus:SI (match_dup 4)
                   (const_int -4))))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 4"
-  "ldm%(db%)\t%4!, {%1, %2, %3}"
+  "ldmdb%?\t%4!, {%1, %2, %3}"
   [(set_attr "type" "load3")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -747,7 +747,7 @@
      (set (mem:SI (plus:SI (match_dup 4) (const_int -4)))
           (match_operand:SI 3 "arm_hard_general_register_operand" ""))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 3"
-  "stm%(db%)\t%4, {%1, %2, %3}"
+  "stmdb%?\t%4, {%1, %2, %3}"
   [(set_attr "type" "store3")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -763,7 +763,7 @@
      (set (mem:SI (plus:SI (match_dup 4) (const_int -4)))
           (match_operand:SI 3 "arm_hard_general_register_operand" ""))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 4"
-  "stm%(db%)\t%4!, {%1, %2, %3}"
+  "stmdb%?\t%4!, {%1, %2, %3}"
   [(set_attr "type" "store3")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -861,7 +861,7 @@
     FAIL;
 })
 
-(define_insn "*ldm2_ia"
+(define_insn "*ldm2_"
   [(match_parallel 0 "load_multiple_operation"
     [(set (match_operand:SI 1 "arm_hard_general_register_operand" "")
           (mem:SI (match_operand:SI 3 "s_register_operand" "rk")))
@@ -869,7 +869,7 @@
           (mem:SI (plus:SI (match_dup 3)
                   (const_int 4))))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 2"
-  "ldm%(ia%)\t%3, {%1, %2}"
+  "ldm%?\t%3, {%1, %2}"
   [(set_attr "type" "load2")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -882,7 +882,7 @@
           (mem:SI (plus:SI (match_dup 3)
                   (const_int 4))))])]
   "TARGET_THUMB1 && XVECLEN (operands[0], 0) == 2"
-  "ldm%(ia%)\t%3, {%1, %2}"
+  "ldmia\t%3, {%1, %2}"
   [(set_attr "type" "load2")])
 
 (define_insn "*ldm2_ia_update"
@@ -895,7 +895,7 @@
           (mem:SI (plus:SI (match_dup 3)
                   (const_int 4))))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 3"
-  "ldm%(ia%)\t%3!, {%1, %2}"
+  "ldmia%?\t%3!, {%1, %2}"
   [(set_attr "type" "load2")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -910,17 +910,17 @@
           (mem:SI (plus:SI (match_dup 3)
                   (const_int 4))))])]
   "TARGET_THUMB1 && XVECLEN (operands[0], 0) == 3"
-  "ldm%(ia%)\t%3!, {%1, %2}"
+  "ldmia\t%3!, {%1, %2}"
   [(set_attr "type" "load2")])
 
-(define_insn "*stm2_ia"
+(define_insn "*stm2_"
   [(match_parallel 0 "store_multiple_operation"
     [(set (mem:SI (match_operand:SI 3 "s_register_operand" "rk"))
           (match_operand:SI 1 "arm_hard_general_register_operand" ""))
      (set (mem:SI (plus:SI (match_dup 3) (const_int 4)))
           (match_operand:SI 2 "arm_hard_general_register_operand" ""))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 2"
-  "stm%(ia%)\t%3, {%1, %2}"
+  "stm%?\t%3, {%1, %2}"
   [(set_attr "type" "store2")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -934,7 +934,7 @@
      (set (mem:SI (plus:SI (match_dup 3) (const_int 4)))
           (match_operand:SI 2 "arm_hard_general_register_operand" ""))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 3"
-  "stm%(ia%)\t%3!, {%1, %2}"
+  "stmia%?\t%3!, {%1, %2}"
   [(set_attr "type" "store2")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -948,7 +948,7 @@
      (set (mem:SI (plus:SI (match_dup 3) (const_int 4)))
           (match_operand:SI 2 "low_register_operand" ""))])]
   "TARGET_THUMB1 && XVECLEN (operands[0], 0) == 3"
-  "stm%(ia%)\t%3!, {%1, %2}"
+  "stmia\t%3!, {%1, %2}"
   [(set_attr "type" "store2")])
 
 (define_insn "*ldm2_ib"
@@ -960,7 +960,7 @@
           (mem:SI (plus:SI (match_dup 3)
                   (const_int 8))))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 2"
-  "ldm%(ib%)\t%3, {%1, %2}"
+  "ldmib%?\t%3, {%1, %2}"
   [(set_attr "type" "load2")
    (set_attr "predicable" "yes")])
 
@@ -975,7 +975,7 @@
           (mem:SI (plus:SI (match_dup 3)
                   (const_int 8))))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 3"
-  "ldm%(ib%)\t%3!, {%1, %2}"
+  "ldmib%?\t%3!, {%1, %2}"
   [(set_attr "type" "load2")
    (set_attr "predicable" "yes")])
 
@@ -986,7 +986,7 @@
      (set (mem:SI (plus:SI (match_dup 3) (const_int 8)))
           (match_operand:SI 2 "arm_hard_general_register_operand" ""))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 2"
-  "stm%(ib%)\t%3, {%1, %2}"
+  "stmib%?\t%3, {%1, %2}"
   [(set_attr "type" "store2")
    (set_attr "predicable" "yes")])
 
@@ -999,7 +999,7 @@
      (set (mem:SI (plus:SI (match_dup 3) (const_int 8)))
           (match_operand:SI 2 "arm_hard_general_register_operand" ""))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 3"
-  "stm%(ib%)\t%3!, {%1, %2}"
+  "stmib%?\t%3!, {%1, %2}"
   [(set_attr "type" "store2")
    (set_attr "predicable" "yes")])
 
@@ -1011,7 +1011,7 @@
      (set (match_operand:SI 2 "arm_hard_general_register_operand" "")
           (mem:SI (match_dup 3)))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 2"
-  "ldm%(da%)\t%3, {%1, %2}"
+  "ldmda%?\t%3, {%1, %2}"
   [(set_attr "type" "load2")
    (set_attr "predicable" "yes")])
 
@@ -1025,7 +1025,7 @@
      (set (match_operand:SI 2 "arm_hard_general_register_operand" "")
           (mem:SI (match_dup 3)))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 3"
-  "ldm%(da%)\t%3!, {%1, %2}"
+  "ldmda%?\t%3!, {%1, %2}"
   [(set_attr "type" "load2")
    (set_attr "predicable" "yes")])
 
@@ -1036,7 +1036,7 @@
      (set (mem:SI (match_dup 3))
           (match_operand:SI 2 "arm_hard_general_register_operand" ""))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 2"
-  "stm%(da%)\t%3, {%1, %2}"
+  "stmda%?\t%3, {%1, %2}"
   [(set_attr "type" "store2")
    (set_attr "predicable" "yes")])
 
@@ -1049,7 +1049,7 @@
      (set (mem:SI (match_dup 3))
           (match_operand:SI 2 "arm_hard_general_register_operand" ""))])]
   "TARGET_ARM && XVECLEN (operands[0], 0) == 3"
-  "stm%(da%)\t%3!, {%1, %2}"
+  "stmda%?\t%3!, {%1, %2}"
   [(set_attr "type" "store2")
    (set_attr "predicable" "yes")])
 
@@ -1062,7 +1062,7 @@
           (mem:SI (plus:SI (match_dup 3)
                   (const_int -4))))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 2"
-  "ldm%(db%)\t%3, {%1, %2}"
+  "ldmdb%?\t%3, {%1, %2}"
   [(set_attr "type" "load2")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -1078,7 +1078,7 @@
           (mem:SI (plus:SI (match_dup 3)
                   (const_int -4))))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 3"
-  "ldm%(db%)\t%3!, {%1, %2}"
+  "ldmdb%?\t%3!, {%1, %2}"
   [(set_attr "type" "load2")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -1090,7 +1090,7 @@
      (set (mem:SI (plus:SI (match_dup 3) (const_int -4)))
           (match_operand:SI 2 "arm_hard_general_register_operand" ""))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 2"
-  "stm%(db%)\t%3, {%1, %2}"
+  "stmdb%?\t%3, {%1, %2}"
   [(set_attr "type" "store2")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
@@ -1104,7 +1104,7 @@
      (set (mem:SI (plus:SI (match_dup 3) (const_int -4)))
           (match_operand:SI 2 "arm_hard_general_register_operand" ""))])]
   "TARGET_32BIT && XVECLEN (operands[0], 0) == 3"
-  "stm%(db%)\t%3!, {%1, %2}"
+  "stmdb%?\t%3!, {%1, %2}"
   [(set_attr "type" "store2")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")])
diff --git a/gcc/config/arm/sync.md b/gcc/config/arm/sync.md
index 9ee715c..fc7836f 100644
--- a/gcc/config/arm/sync.md
+++ b/gcc/config/arm/sync.md
@@ -72,7 +72,7 @@
   {
     enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
    if (is_mm_relaxed (model) || is_mm_consume (model) || is_mm_release (model))
-      return \"ldr%(<sync_sfx>%)\\t%0, %1\";
+      return \"ldr<sync_sfx>%?\\t%0, %1\";
     else
       return \"lda<sync_sfx>%?\\t%0, %1\";
   }
@@ -89,7 +89,7 @@
   {
     enum memmodel model = memmodel_from_int (INTVAL (operands[2]));
    if (is_mm_relaxed (model) || is_mm_consume (model) || is_mm_acquire (model))
-      return \"str%(<sync_sfx>%)\t%1, %0\";
+      return \"str<sync_sfx>%?\t%1, %0\";
     else
       return \"stl<sync_sfx>%?\t%1, %0\";
   }
diff --git a/gcc/config/arm/thumb2.md b/gcc/config/arm/thumb2.md
index 8c754d9..a724752 100644
--- a/gcc/config/arm/thumb2.md
+++ b/gcc/config/arm/thumb2.md
@@ -330,8 +330,8 @@
    mov%?\\t%0, %1\\t%@ movhi
    mov%?\\t%0, %1\\t%@ movhi
    movw%?\\t%0, %L1\\t%@ movhi
-   str%(h%)\\t%1, %0\\t%@ movhi
-   ldr%(h%)\\t%0, %1\\t%@ movhi"
+   strh%?\\t%1, %0\\t%@ movhi
+   ldrh%?\\t%0, %1\\t%@ movhi"
   [(set_attr "type" "mov_reg,mov_imm,mov_imm,mov_imm,store1,load1")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "yes,no,yes,no,no,no")
@@ -1028,7 +1028,7 @@
   "TARGET_THUMB2 && arm_arch6"
   "@
    sxtb%?\\t%0, %1
-   ldr%(sb%)\\t%0, %1"
+   ldrsb%?\\t%0, %1"
   [(set_attr "type" "extend,load_byte")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")
@@ -1042,7 +1042,7 @@
   "TARGET_THUMB2 && arm_arch6"
   "@
    uxth%?\\t%0, %1
-   ldr%(h%)\\t%0, %1"
+   ldrh%?\\t%0, %1"
   [(set_attr "type" "extend,load_byte")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")
@@ -1055,8 +1055,8 @@
        (zero_extend:SI (match_operand:QI 1 "nonimmediate_operand" "r,m")))]
   "TARGET_THUMB2 && arm_arch6"
   "@
-   uxtb%(%)\\t%0, %1
-   ldr%(b%)\\t%0, %1\\t%@ zero_extendqisi2"
+   uxtb%?\\t%0, %1
+   ldrb%?\\t%0, %1\\t%@ zero_extendqisi2"
   [(set_attr "type" "extend,load_byte")
    (set_attr "predicable" "yes")
    (set_attr "predicable_short_it" "no")
diff --git a/gcc/testsuite/gcc.target/arm/combine-movs.c b/gcc/testsuite/gcc.target/arm/combine-movs.c
index e9fd6cb..3487743 100644
--- a/gcc/testsuite/gcc.target/arm/combine-movs.c
+++ b/gcc/testsuite/gcc.target/arm/combine-movs.c
@@ -9,5 +9,4 @@ void foo (unsigned long r[], unsigned int d)
     r[i] = 0;
 }
 
-/* { dg-final { scan-assembler "lsrs\tr\[0-9\]" { target arm_thumb2 } } } */
-/* { dg-final { scan-assembler "movs\tr\[0-9\]" { target { ! arm_thumb2 } } } } */
+/* { dg-final { scan-assembler "lsrs\tr\[0-9\]" } } */
diff --git a/gcc/testsuite/gcc.target/arm/interrupt-1.c b/gcc/testsuite/gcc.target/arm/interrupt-1.c
index a384242..debbaf7 100644
--- a/gcc/testsuite/gcc.target/arm/interrupt-1.c
+++ b/gcc/testsuite/gcc.target/arm/interrupt-1.c
@@ -13,5 +13,5 @@ void foo ()
   bar (0);
 }
 
-/* { dg-final { scan-assembler "stmfd\tsp!, {r0, r1, r2, r3, r4, fp, ip, lr}" } } */
-/* { dg-final { scan-assembler "ldmfd\tsp!, {r0, r1, r2, r3, r4, fp, ip, pc}\\^" } } */
+/* { dg-final { scan-assembler "push\t{r0, r1, r2, r3, r4, fp, ip, lr}" } } */
+/* { dg-final { scan-assembler "pop\t{r0, r1, r2, r3, r4, fp, ip, pc}\\^" } } */
diff --git a/gcc/testsuite/gcc.target/arm/interrupt-2.c b/gcc/testsuite/gcc.target/arm/interrupt-2.c
index 61d3130..92f8630 100644
--- a/gcc/testsuite/gcc.target/arm/interrupt-2.c
+++ b/gcc/testsuite/gcc.target/arm/interrupt-2.c
@@ -15,5 +15,5 @@ void test()
   foo = 0;
 }
 
-/* { dg-final { scan-assembler "stmfd\tsp!, {r0, r1, r2, r3, r4, r5, ip, lr}" } } */
-/* { dg-final { scan-assembler "ldmfd\tsp!, {r0, r1, r2, r3, r4, r5, ip, pc}\\^" } } */
+/* { dg-final { scan-assembler "push\t{r0, r1, r2, r3, r4, r5, ip, lr}" } } */
+/* { dg-final { scan-assembler "pop\t{r0, r1, r2, r3, r4, r5, ip, pc}\\^" } } */
diff --git a/gcc/testsuite/gcc.target/arm/unaligned-memcpy-4.c b/gcc/testsuite/gcc.target/arm/unaligned-memcpy-4.c
index 830e22e..d236513 100644
--- a/gcc/testsuite/gcc.target/arm/unaligned-memcpy-4.c
+++ b/gcc/testsuite/gcc.target/arm/unaligned-memcpy-4.c
@@ -14,7 +14,7 @@ void aligned_both (void)
 
 /* We know both src and dest to be aligned: expect multiword loads/stores.  */
 
-/* { dg-final { scan-assembler-times "ldmia" 1 { target { ! { arm_prefer_ldrd_strd } } } } } */
+/* { dg-final { scan-assembler-times "ldm" 1 { target { ! { arm_prefer_ldrd_strd } } } } } */
 /* { dg-final { scan-assembler-times "stmia" 1 { target { ! { arm_prefer_ldrd_strd } } } } } */
 /* { dg-final { scan-assembler "ldrd" { target { arm_prefer_ldrd_strd } } } } */
 /* { dg-final { scan-assembler-times "ldm" 0 { target { arm_prefer_ldrd_strd } } } } */
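For readers unfamiliar with the two syntaxes, the bulk of the hunks above apply one mechanical rewrite to the output templates. The sketch below (illustration only, not part of the patch; the templates were edited by hand) models the two core substitutions: the `%.` flag-setting suffix becomes a literal `s` in the mnemonic, and the `%( ... %)` addressing-mode/size suffix is folded into the mnemonic. It deliberately does not model the other adjustments visible above, such as adding `%?` on predicable patterns or dropping the redundant `ia` suffix on non-writeback `ldm`/`stm`.

```python
import re

def to_unified(template):
    """Sketch of the divided -> unified output-template rewrite.

    Divided syntax spells the flag-setting suffix as %. and the
    addressing-mode/size suffix inside %( %); unified syntax folds
    both directly into the mnemonic.
    """
    # bic%. -> bics : flag-setting suffix becomes a literal 's'
    template = re.sub(r"%\.", "s", template)
    # ldm%(db%) -> ldmdb, str%(h%) -> strh : suffix joins the mnemonic
    template = re.sub(r"%\((.*?)%\)", r"\1", template)
    return template

print(to_unified("bic%.\\t%0, %1, %2"))         # -> bics\t%0, %1, %2
print(to_unified("ldm%(db%)\\t%5!, {%1, %2}"))  # -> ldmdb\t%5!, {%1, %2}
```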
