On Monday, February 26, 2018 1:15 PM, Florian Weimer <fwei...@redhat.com> wrote:
> I think x86-64 should be able to do atomic load and store via SSE2 
> registers, but perhaps if the memory is suitably aligned (which is the 
> other problem—the libatomic code will work irrespective of alignment, as 
> far as I understand it).

IIRC, SSE2 loads and stores are not always guaranteed to be atomic, so an RMW 
(cmpxchg16b) is probably the only safe option for x86-64. And for ARM64, too, 
as far as I understand.

Just to summarize what can be done if the proposed change is accepted (from the 
discussion so far):

1. _Atomic objects larger than 8 bytes should not be placed in .rodata even if 
declared const. It could also be specified that atomic_load must not be used on 
read-only memory with double-width operations.

2. libatomic can be modified to redirect to functions that use cmpxchg16b 
(whenever it is available on the target CPU) through regular function pointers 
even if IFUNC is not available. This will provide consistent behavior 
everywhere, and binary compatibility between -mcx16 and -mno-cx16.

3. Never redirect to libatomic for ARM64 (since ldaxp/stlxp are available); 
redirect for x86-64 only if -mcx16 is not specified. For ARM64, there is no 
-mcx16 option at all.

-- Ruslan