Hi,
On 18 Apr 21:13, Uros Bizjak wrote:
> On Mon, Apr 18, 2016 at 8:40 PM, H.J. Lu wrote:
> > On Sun, Jan 10, 2016 at 11:45 PM, Uros Bizjak wrote:
> >> On Sun, Jan 10, 2016 at 11:32 PM, H.J. Lu wrote:
> >>> Since *mov<mode>_internal and <avx512>_(load|store)<mode>_mask patterns
> >>> can handle unaligned load and store, we can remove UNSPEC_LOADU and
> >>> UNSPEC_STOREU.
On Mon, Apr 18, 2016 at 9:17 PM, H.J. Lu wrote:
>>> Here is the updated patch for GCC 7. Tested on x86-64. OK for
>>> trunk?
>>
>> IIRC from previous discussion, are we sure we won't propagate
>> unaligned memory into SSE arithmetic insns?
>
> Yes, it is true and it is what
>
> (define_special
On Mon, Apr 18, 2016 at 8:40 PM, H.J. Lu wrote:
> On Sun, Jan 10, 2016 at 11:45 PM, Uros Bizjak wrote:
>> On Sun, Jan 10, 2016 at 11:32 PM, H.J. Lu wrote:
>>> Since *mov<mode>_internal and <avx512>_(load|store)<mode>_mask patterns
>>> can handle unaligned load and store, we can remove UNSPEC_LOADU and
>>> UNSPEC_STOREU.
>> : Likewise.
>> (d): Likewise.
>> Check vmovups.*movv8sf_internal/3 instead of avx_storeups256.
>> Don't check `*' before movv4sf_internal.
>> * gcc.target/i386/avx256-unaligned-store-2.c: Check
>> vmovups.*movv32qi
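For context, the ChangeLog entries above retarget the testsuite's scan-assembler checks: with the unspec insns gone, GCC's -dp flag annotates an unaligned 256-bit store in the assembler output with the generic move pattern name (e.g. movv8sf_internal) rather than avx_storeups256. A hypothetical reduced testcase in that shape; apart from the scan pattern quoted above, the body, options, and layout are illustrative only:

```c
/* { dg-do compile } */
/* { dg-options "-O2 -mavx -dp" } */

#include <immintrin.h>

float dst[8];

void
store_unaligned (const float *src)
{
  /* An unaligned 256-bit store: emitted by the generic V8SF move
     pattern now that UNSPEC_STOREU is gone.  */
  _mm256_storeu_ps (dst, _mm256_loadu_ps (src));
}

/* { dg-final { scan-assembler "vmovups.*movv8sf_internal" } } */
```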
Since *mov<mode>_internal and <avx512>_(load|store)<mode>_mask patterns
can handle unaligned load and store, we can remove UNSPEC_LOADU and
UNSPEC_STOREU. We use function prototypes with pointer to scalar for
unaligned load/store builtin functions so that memory passed to
*mov<mode>_internal is unaligned.

Tested on x86-64.