Thanks for the review. Richard, what's the status of the unaligned memory access work, and how does it conflict with this patch?
thanks
Carrot

On Tue, Jun 7, 2011 at 6:42 PM, Nick Clifton <ni...@redhat.com> wrote:
> Hi Carrot,
>
>> 2011-05-06  Guozhi Wei  <car...@google.com>
>>
>>         PR target/47855
>>         * config/arm/thumb2.md (thumb2_movsi_insn): Add length attribute.
>>         (thumb2_shiftsi3_short and peephole2): Remove 3-register case.
>>         (thumb2_cbz): Refine length computation.
>>         (thumb2_cbnz): Likewise.
>
> Not approved - yet.
>
> The problem is the change to thumb2_movsi_insn.  You are still adding in the
> support for the STM instruction despite the fact that Richard is still
> researching how this will work with unaligned addresses.  Given the fact
> that this change is not mentioned in the ChangeLog entry, I will assume that
> you intended to remove it and just forgot.
>
> I have no issues with the rest of your patch, so if you could submit an
> updated patch I will be happy to review it again.
>
> One small point - when/if you do resubmit the STM part of the patch, you
> could make the code slightly cleaner by enclosing it in curly braces,
> thus avoiding the need to escape the double quote marks.  Ie:
>
> +  {
> +    switch (which_alternative)
> +      {
> +      case 0:
> +      case 1:
> +        return "mov%?\t%0, %1";
> +
> +      case 2:
> +        return "mvn%?\t%0, #%B1";
> +
> +      case 3:
> +        return "movw%?\t%0, %1";
> +
> +      case 4:
> +        if (GET_CODE (XEXP (operands[1], 0)) == POST_INC)
> +          {
> +            operands[1] = XEXP (XEXP (operands[1], 0), 0);
> +            return "ldm%(ia%)\t%1!, {%0}";
> +          }
> +        /* Fall through.  */
> +      case 5:
> +        return "ldr%?\t%0, %1";
> +
> +      case 6:
> +        if (GET_CODE (XEXP (operands[0], 0)) == POST_INC)
> +          {
> +            operands[0] = XEXP (XEXP (operands[0], 0), 0);
> +            return "stm%(ia%)\t%0!, {%1}";
> +          }
> +        /* Fall through.  */
> +      case 7:
> +        return "str%?\t%1, %0";
> +
> +      default:
> +        gcc_unreachable ();
> +      }
> +  }
>
> Cheers
>   Nick
>
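
For contrast, the escaped-string form that the braces avoid would look roughly like the sketch below. This is only an illustration of the .md syntax, not the actual thumb2_movsi_insn pattern: the pattern name, operands, constraints and length value are placeholders.

    ;; Illustrative sketch only -- a made-up pattern, not the real
    ;; thumb2_movsi_insn from config/arm/thumb2.md.  When the output
    ;; statement is written as a C string ("* ...") instead of a
    ;; { ... } block, every embedded double quote must be written as
    ;; \" and every backslash as \\ :
    (define_insn "hypothetical_movsi"
      [(set (match_operand:SI 0 "s_register_operand" "=r")
            (match_operand:SI 1 "s_register_operand"  "r"))]
      "TARGET_THUMB2"
      "*
       switch (which_alternative)
         {
         case 0:
           return \"mov%?\\t%0, %1\";
         default:
           gcc_unreachable ();
         }"
      [(set_attr "length" "4")])

With the braced form Nick shows above, the same output statement can be written with plain C string literals, which is why it reads more cleanly.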