Hi Ramana,

As you pointed out, in the gcc.dg/vect/vect-double-reduc-6.c test case,
using compiler options as described in PR 51819, we see the following
inefficient code generation:

        vmov.32 r2, d28[0]      @ 57    vec_extractv4si [length = 4]
        vmov.32 r1, d22[0]      @ 84    vec_extractv4si [length = 4]
        str     r2, [r0, #4]    @ 58    *thumb2_movsi_vfp/7     [length = 4]
        vmov.32 r3, d0[0]       @ 111   vec_extractv4si [length = 4]
        str     r1, [r0, #8]    @ 85    *thumb2_movsi_vfp/7     [length = 4]
        vst1.32 {d2[0]}, [r0:64]        @ 31    neon_vst1_lanev4si      [length = 4]
        str     r3, [r0, #12]   @ 112   *thumb2_movsi_vfp/7     [length = 4]
        bx      lr      @ 120   *thumb2_return  [length = 12]

(The :64 alignment in vst1.32 is incorrect; that is the actual problem in
PR 51819, which is now fixed.)

The reason for this particular code sequence turns out to be as follows:
The middle end tries to store the LSB vector lane to memory, and uses the
vec_extract named pattern to do so.  This pattern currently only supports
an "s_register_operand" destination, and is implemented via vmov to a core
register.  The contents of that register are then stored to memory.  Why,
then, does a vst1 instruction show up at all?  Because combine is able to
merge the vec_extract back into the store pattern and ends up with RTL
that matches neon_vst1_lanev4si.  Note that the latter pattern is actually
intended to implement NEON built-ins (vst1_lane_... etc.).
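
To make the mechanism concrete, here is a hand-written, simplified sketch
of the RTL involved (register names are invented for illustration; this is
not an actual dump):

        ;; What the expander emits: the vec_extract named pattern,
        ;; followed by a plain store.
        (set (reg:SI r2)
             (vec_select:SI (reg:V4SI q14)
                            (parallel [(const_int 0)])))   ; vmov.32 r2, d28[0]
        (set (mem:SI (reg:SI r0))
             (reg:SI r2))                                   ; str r2, [r0, ...]

        ;; What combine merges the two insns into -- which happens to
        ;; match the neon_vst1_lanev4si built-in pattern.
        (set (mem:SI (reg:SI r0))
             (vec_select:SI (reg:V4SI q14)
                            (parallel [(const_int 0)])))    ; vst1.32 {d28[0]}, [r0]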

Now there seem to be two problems with this scenario:

First of all, the neon_vst1_lane<mode> patterns seem to be actually
incorrect on big-endian systems due to lane-numbering problems.  As I
understand it, all NEON intrinsics are supposed to take lane numbers
according to the NEON ordering scheme, while the vec_select RTX pattern is
defined to take lane numbers according to the in-memory order.  Those
disagree in the big-endian case.  All other patterns implementing NEON
intrinsics therefore avoid using vec_select, and instead resort to using
special UNSPEC codes -- the sole exception happens to be
neon_vst1_lane<mode>.  This would appear to be a bug, and the pattern
ought to use an UNSPEC_VST1_LANE unspec instead (that UNSPEC code is
already defined, but currently unused).
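
Concretely, I would expect the fix to look roughly like the sketch below
(shown for a single D-register mode, with predicates, constraints, and the
quad-register handling simplified -- this is not a tested patch):

        ;; Sketch of the lane store via an unspec: the lane number is
        ;; hidden inside the unspec, so it keeps its NEON numbering and
        ;; combine can no longer match a generic vec_select + store
        ;; combination against this pattern.
        (define_insn "neon_vst1_lanev2si"
          [(set (match_operand:SI 0 "memory_operand" "=Um")
                (unspec:SI
                  [(match_operand:V2SI 1 "s_register_operand" "w")
                   (match_operand:SI 2 "immediate_operand" "i")]
                  UNSPEC_VST1_LANE))]
          "TARGET_NEON"
          "vst1.32\t{%P1[%c2]}, %A0"
        )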

Now if we make that change, then the above code sequence will contain no
vst1 any more.  But in any case, expanding first into a vec_extract
followed by a store pattern, only to rely on combine to merge them back
together, is a suboptimal approach.  One obvious drawback is that the
auto-inc-dec pass runs before reload, and therefore only sees plain stores
-- with no reason whatsoever to attempt to introduce post-inc operations.
Also, in general it usually works out best to allow the final choice
between register and memory operands to be made in reload ...

Therefore, I think the vec_extract patterns ought to support *both*
register and memory destination operands, and implement those via vmov or
vst1 in final code generation, as appropriate.  This means that we can make
the choice in reload, guided as usual by alternative ordering and/or
penalties -- for example, we can choose to reload the address and still use
vst1 over reloading the contents to a core register and then using an
offsetted store.
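
As a rough illustration of what I have in mind (again a sketch for a
single D-register mode rather than the attached patch itself; constraints
and the big-endian lane remapping are simplified away):

        ;; Two alternatives: reload may either move the element to a core
        ;; register (vmov) or store the lane directly to a NEON-addressable
        ;; memory location (vst1).
        (define_insn "vec_extractv2si"
          [(set (match_operand:SI 0 "nonimmediate_operand" "=r,Um")
                (vec_select:SI
                  (match_operand:V2SI 1 "s_register_operand" "w,w")
                  (parallel [(match_operand:SI 2 "immediate_operand" "i,i")])))]
          "TARGET_NEON"
          "@
           vmov.32\t%0, %P1[%c2]
           vst1.32\t{%P1[%c2]}, %A0"
        )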

Finally, this sequence will also allow the auto-inc-dec pass to do a better
job.  Unfortunately, the current in-tree pass doesn't manage to do so, but with
Richard's proposed replacement, assisted by a tweak to the cost function to
make sure the (future) address reload is "priced in" correctly, I'm now
seeing what appears to be the optimal code sequence:

        vst1.32 {d6[0]}, [r0:64]!       @ 30    vec_extractv4si/1       [length = 4]
        vst1.32 {d22[0]}, [r0]! @ 56    vec_extractv4si/1       [length = 4]
        vst1.32 {d2[0]}, [r0:64]!       @ 82    vec_extractv4si/1       [length = 4]
        vst1.32 {d4[0]}, [r0]   @ 108   vec_extractv4si/1       [length = 4]
        bx      lr      @ 116   *thumb2_return  [length = 12]

(Again the :64 is wrong; it's already fixed on mainline but I haven't
pulled that change in yet.)


The attached patch implements the above-mentioned changes.   Any comments?
I'll try to get some performance numbers as well before moving forward with
the patch ...

(As an aside, it might likewise be helpful to update the vec_set patterns
to allow for memory operands, implemented via vld1.)

(See attached file: diff-gcc-arm-vecextractmem)

B.t.w. I'm wondering how I can properly test:
- that the NEON intrinsics still work
- that everything works on big-endian
Any suggestions would be appreciated!


Mit freundlichen Gruessen / Best Regards

Ulrich Weigand

--
  Dr. Ulrich Weigand | Phone: +49-7031/16-3727
  STSM, GNU compiler and toolchain for Linux on System z and Cell/B.E.
  IBM Deutschland Research & Development GmbH
  Vorsitzende des Aufsichtsrats: Martina Koederitz | Geschäftsführung: Dirk Wittkopp
  Sitz der Gesellschaft: Böblingen | Registergericht: Amtsgericht Stuttgart, HRB 243294
