Jeff Law <l...@redhat.com> writes:
> On 11/20/18 7:57 AM, Renlin Li wrote:
>> Hi Richard,
>> 
>> On 11/14/2018 02:59 PM, Richard Biener wrote:
>>> I don't see the masked load here on x86_64, btw. (I don't see
>>> if-conversion generating a load.)
>>> I guess that's again the case where store data races are allowed: it
>>> uses a RMW cycle and vectorization generates the masked variants for
>>> the loop mask.  Which means that for SVE, if-conversion should prefer
>>> the masked-store variant even when store data races are allowed?
>> 
>> Yes, it looks like, for SVE, the masked-store variant is preferable even
>> when store data races are allowed.

I agree, and I think that would cope with more cases than fixing it up
later.  E.g. the current fixup code requires the load and store to have
the same vuse, so it won't cope with cases in which another store comes
between the mask_load and mask_store.
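
To make the two strategies concrete, here's a rough C-level sketch of
what's being discussed (function and variable names are made up, and this
is only an approximation of the transformations, not the GIMPLE that
if-conversion actually emits):

/* Original loop: the store to a[i] is conditional.  With the
   masked-store strategy, if-conversion keeps the store under its
   predicate as a masked-store call (the condition becomes the mask),
   and the vectorizer turns that into a masked/predicated vector store.
   Elements with a false mask are never written, so no data race is
   introduced; on SVE this maps directly onto predicated stores.  */
void
cond_store (int *a, const int *b, const int *c, int n)
{
  for (int i = 0; i < n; i++)
    if (c[i])
      a[i] = b[i];
}

/* The store-data-race strategy: if-conversion rewrites the conditional
   store as an unconditional read-modify-write.  This vectorizes even
   without masked stores, but it writes a[i] for every element,
   including those where c[i] is false, which is the data race in
   question.  As noted above, the later fixup only applies when the
   load and store share the same vuse, so another store in between
   defeats it.  */
void
cond_store_rmw (int *a, const int *b, const int *c, int n)
{
  for (int i = 0; i < n; i++)
    a[i] = c[i] ? b[i] : a[i];
}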

An easy heuristic would be to test
targetm.vectorize.empty_mask_is_expensive.  If that's false and masked
stores are available, it's probably better to use them than to introduce
the data race.
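
In code terms, the check could be roughly the following (a hand-written
sketch, not actual tree-if-conv.c code; the two flags stand in for the
answer from targetm.vectorize.empty_mask_is_expensive and for "a masked
store is supported for this access"):

/* Return nonzero if if-conversion should emit a masked store rather
   than the racy read-modify-write form, even though store data races
   are allowed.  */
static int
prefer_masked_store_p (int empty_mask_is_expensive_p,
                       int masked_store_available_p)
{
  /* On targets like SVE, where an all-false mask is not expensive,
     the masked store avoids the data race at little or no cost.  */
  return !empty_mask_is_expensive_p && masked_store_available_p;
}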

Thanks,
Richard
