Hi all,

On 8/22/19 10:16 AM, Kyrill Tkachov wrote:
Hi all,

The optimisation that transforms:
   typedef unsigned long long u64;

   void bar(u64 *x)
   {
     *x = 0xabcdef10abcdef10;
   }

from:
        mov     x1, 61200
        movk    x1, 0xabcd, lsl 16
        movk    x1, 0xef10, lsl 32
        movk    x1, 0xabcd, lsl 48
        str     x1, [x0]

into:
        mov     w1, 61200
        movk    w1, 0xabcd, lsl 16
        stp     w1, w1, [x0]

ends up producing two distinct stores if the destination is volatile:
  void bar(u64 *x)
  {
    *(volatile u64 *)x = 0xabcdef10abcdef10;
  }
        mov     w1, 61200
        movk    w1, 0xabcd, lsl 16
        str     w1, [x0]
        str     w1, [x0, 4]

because we end up not merging the two STRs into an STP. It's questionable whether using STP on a volatile destination is valid in the first place. To avoid unnecessary pain in a context that is unlikely to be performance-critical [1] (use of volatile), this patch avoids the transformation for volatile destinations, so we produce the original single X-register STR.

Bootstrapped and tested on aarch64-none-linux-gnu.

Ok for trunk (and eventual backports)?

This has been approved by James offline.

Committed to trunk with r276098.

Thanks,

Kyrill


[1] https://lore.kernel.org/lkml/20190821103200.kpufwtviqhpbuv2n@willie-the-truck/


gcc/
2019-08-22  Kyrylo Tkachov  <kyrylo.tkac...@arm.com>

    * config/aarch64/aarch64.md (mov<mode>): Don't call
    aarch64_split_dimode_const_store on volatile MEM.

gcc/testsuite/
2019-08-22  Kyrylo Tkachov  <kyrylo.tkac...@arm.com>

    * gcc.target/aarch64/nosplit-di-const-volatile_1.c: New test.
