https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65697
--- Comment #30 from mwahab at gcc dot gnu.org ---
(In reply to James Greenhalgh from comment #28)
> Which leaves 3). From Andrew's two proposed solutions:
>
> > a) introduce an additional memory model... MEMMODEL_SYNC or something
> > which is even stronger than SEQ_CST. This would involve fixing any places
> > that assume SEQ_CST is the highest. And then we use this from the places
> > that expand sync calls as the memory model.
>
> This seems a sensible approach and leads to a nicer design, but I worry that
> it might be overkill for primitives which we ultimately want to reduce
> support for and eventually deprecate.

I don't understand why it would be overkill. It's already expected that not
all barriers will be meaningful for all targets, and the target code that
handles redundant barriers usually comes down to a few clauses in a
conditional statement.

> > (b) may be easier to implement, but puts more onus on the target..
> > probably involves more complex patterns since you need both atomic and
> > sync patterns. My guess is some duplication of code will occur here. But
> > the impact is only to specific targets.
>
> When I looked at this problem internally a few weeks back, this is exactly
> how I expected things to work (we can add the documentation for the pattern
> names to the list of things which need fixing, as it took me a while to
> figure out why my new patterns were never expanded!).
>
> I don't think this is particularly onerous for a target. The tough part in
> all of this is figuring out the minimal cost instructions at the ISA level
> to use for the various __atomic primitives. Extending support to a stronger
> model should be straightforward given explicit documentation of the
> stronger ordering requirements.

My objection to using sync patterns is that it does take more work, both for
the initial implementation and for continuing maintenance. It also means
adding sync patterns to targets that would not otherwise need them and
preserving a part of the gcc infrastructure that is only needed for a legacy
feature and could otherwise be targeted for removal.

> Certainly, a target would have to do the same legwork for a) regardless, and
> would have to pollute their atomic patterns with checks and code paths for
> MEMMODEL_SYNC.

Actually, the changes needed for (a) are very simple. The checks and
code-paths for handling barriers are already there; reusing them for
MEMMODEL_SYNC is trivial. The resulting code, being all in one place rather
than spread out across patterns and functions, is much easier to understand
and to fix safely.

> This also gives us an easier route to fixing any issues with the
> acquire/release __sync primitives (__sync_lock_test_and_set and
> __sync_lock_release) if we decide that these also need to be stronger than
> their C++11 equivalents.

This seems like a chunky workaround to avoid having to add a barrier
representation to gcc. It is a reworking of the middle-end, and not
necessarily a straightforward one, to use patterns that would need to be
added to architectures that don't currently have them and at least checked
on the architectures that do.

This seems to come down to what enum memmodel is supposed to be for. If it is
intended to represent the barriers provided by C11/C++11 atomics and nothing
else, then a workaround seems unavoidable. If it is meant to represent the
barriers gcc needs in order to compile user code then, it seems to me, it
would be better to just add the barrier to the enum and update code where
necessary.
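For illustration only, here is a minimal sketch of what (a) could look like.
It assumes the enum memmodel layout currently in gcc/coretypes.h; the
MEMMODEL_SYNC value and the needs_strongest_barrier() helper are illustrative
names, not committed code, and the stdbool.h include is only there so the
fragment stands alone:

#include <stdbool.h>

/* Sketch: extend enum memmodel (gcc/coretypes.h) with one value that is
   stronger than MEMMODEL_SEQ_CST, for the legacy __sync builtins.  As the
   proposal notes, any code that assumes MEMMODEL_SEQ_CST is the highest
   value would need to be audited.  */
enum memmodel
{
  MEMMODEL_RELAXED = 0,
  MEMMODEL_CONSUME = 1,
  MEMMODEL_ACQUIRE = 2,
  MEMMODEL_RELEASE = 3,
  MEMMODEL_ACQ_REL = 4,
  MEMMODEL_SEQ_CST = 5,
  MEMMODEL_SYNC = 6,     /* illustrative: full barrier for __sync builtins */
  MEMMODEL_LAST = 7
};

/* Sketch of the target side: the barrier-handling path for MEMMODEL_SEQ_CST
   already exists in the atomic expanders, so honouring the stronger model
   is one extra clause in an existing conditional.  */
static inline bool
needs_strongest_barrier (enum memmodel model)
{
  return model == MEMMODEL_SEQ_CST || model == MEMMODEL_SYNC;
}

A target that already emits a full barrier for MEMMODEL_SEQ_CST could treat
MEMMODEL_SYNC identically; only a target where the __sync barrier has to be
stronger than its C++11 sequentially-consistent mapping (the concern in this
PR) would need a separate code path.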
Since memmodel is relatively recent, was merged from what looks like the C++
memory model branch (cxx-mem-model), and doesn't seem to have changed since
then, it's perhaps not surprising that it doesn't already include everything
gcc needs. I don't see that adding to it should be prohibited, provided the
additions can be shown to be strictly required.