On Fri, Sep 26, 2025 at 11:22 AM Tomasz Kamiński <[email protected]>
wrote:

> From: Giuseppe D'Angelo <[email protected]>
>
> Implements P3233R1 (DR for C++20/C++11, fixes LWG 4069 and 3508).
>
> This commit implements std::atomic_ref<cv T> support (LWG 3508) as a DR
> for C++20, by extracting parts of the __atomic_ref class (that atomic_ref
> inherits from) into a further base class (__atomic_ref_base):
>
> * __atomic_ref_base<const T> implements the non-mutating (const) atomic
>   API. A single base class is used, and the differences in the
>   is_always_lock_free and required_alignment values between types are
>   handled by the _S_is_always_lock_free and _S_required_aligment helper
>   functions.
> * __atomic_ref_base<T> implements the common mutating atomic APIs. The
>   non-mutating operations are handled by inheriting from the
>   __atomic_ref_base<const T> partial specialization. To support that,
>   __atomic_ref_base<const T> stores a mutable pointer to T, and performs
>   a const_cast in its constructor.
> * __atomic_ref<T, ...> inherits from __atomic_ref_base<T>, and implements
>   the type-specific mutating APIs (fetch_add, operator-=, ...) and the
>   difference_type member type.
> * __atomic_ref<const T, ...> inherits from __atomic_ref_base<const T>
>   and adds a difference_type member, whose presence and denoted type
>   depend on T.
>
> The __atomic_ref specialization selection is adjusted to handle
> cv-qualified bool (by adding remove_cv_t) and cv-qualified pointer types.
> To handle the latter, an additional boolean template parameter is
> introduced.
>
> The atomic wait and notify operations are currently not supported for
> volatile types; to signal that, a static_assert is added to the
> corresponding methods of atomic_ref.
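
The mechanism this relies on is that a static_assert whose condition depends on the class template parameter, placed inside a member function body, fires only when that member is instantiated — so the rest of the class stays usable for volatile T. A minimal illustration (hypothetical names, not the patch code):

```cpp
#include <cassert>
#include <type_traits>

// The static_assert in wait() is only checked if wait() is instantiated,
// so ref<volatile int> remains usable as long as wait() is never called.
template<typename T>
struct ref
{
  std::remove_volatile_t<T> value{};

  void store(T t) noexcept { value = t; }

  void wait(T) const noexcept
  {
    // Dependent condition: evaluated only on instantiation of wait().
    static_assert(!std::is_volatile_v<T>,
                  "atomic wait on volatile is not supported");
  }
};
```

A non-dependent condition would instead fire as soon as the class itself is instantiated, which is why the check depends on _Tp in the patch.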
>
> At the same time, we disable support for cv-qualified types in std::atomic
> (for instance, std::atomic<volatile T> isn't meaningful; one should use
> volatile std::atomic<T>), again as per the paper, resolving LWG 4069 as a
> DR for C++11. This only affects atomic<volatile T>, as instantiating
> atomic with const-qualified types was already producing a compile-time
> error.
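
For illustration, the supported spelling after this change looks like the following (a small usage sketch, not code from the patch):

```cpp
#include <atomic>
#include <cassert>

// std::atomic<volatile int> is now rejected at compile time; the
// meaningful spelling is a volatile-qualified atomic object.
volatile std::atomic<int> counter{0};

int bump()
{
  // The volatile-qualified member overloads remain available, and are
  // well-defined here since std::atomic<int> is always lock-free.
  counter.fetch_add(1);
  return counter.load();
}
```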
>
>         PR libstdc++/115402
>
> libstdc++-v3/ChangeLog:
>
>         * include/bits/atomic_base.h (__atomic_ref_base<const _Tp>)
>         (__atomic_ref_base<_Tp>): Define by extracting common methods
>         from atomic_ref specializations.
>         (__atomic_ref<_Tp, In, Fp, Pt>): Inherit from __atomic_ref_base
>         and remove extracted methods.
>         (__atomic_ref<const _Tp, In, Fp, Pt>): Define.
>         * include/std/atomic (std::atomic): Add a static_assert rejecting
>         cv-qualified types.
>         * testsuite/29_atomics/atomic/requirements/types_neg.cc:
>         Add test for volatile qualified types.
>         * testsuite/29_atomics/atomic_ref/bool.cc: Move the content
>         to op_support.cc, add test for bool.
>         * testsuite/29_atomics/atomic_ref/op_support.cc: New test
>         expanded from atomic_ref/bool.cc.
>         * testsuite/29_atomics/atomic_ref/cv_qual.cc: New test.
>         * testsuite/29_atomics/atomic_ref/requirements_neg.cc: New test.
>         * testsuite/29_atomics/atomic_ref/deduction.cc: Add tests for
>         cv-qualified types.
>         * testsuite/29_atomics/atomic_ref/float.cc: Likewise.
>         * testsuite/29_atomics/atomic_ref/generic.cc: Likewise.
>         * testsuite/29_atomics/atomic_ref/integral.cc: Likewise.
>         * testsuite/29_atomics/atomic_ref/pointer.cc: Likewise.
>         * testsuite/29_atomics/atomic_ref/requirements.cc: Likewise.
>         * testsuite/29_atomics/atomic_ref/wait_notify.cc: Add tests for
>         const qualified types.
>
> Co-authored-by: Tomasz Kamiński <[email protected]>
> Signed-off-by: Giuseppe D'Angelo <[email protected]>
> Signed-off-by: Tomasz Kamiński <[email protected]>
> ---
> The split between Giuseppe's original patch and my changes can be
> found here:
> https://forge.sourceware.org/gcc/gcc-TEST/pulls/85
>
> Tested locally on x86_64-linux. Testing on powerpc64le-linux in progress.
> OK for trunk once the tests pass.
>
Tests on powerpc64le-linux passed. OK for trunk?

>
>  libstdc++-v3/include/bits/atomic_base.h       | 566 ++++++------------
>  libstdc++-v3/include/std/atomic               |   5 +
>  .../atomic/requirements/types_neg.cc          |   4 +-
>  .../testsuite/29_atomics/atomic_ref/bool.cc   |  94 ++-
>  .../29_atomics/atomic_ref/cv_qual.cc          |  94 +++
>  .../29_atomics/atomic_ref/deduction.cc        |  33 +-
>  .../testsuite/29_atomics/atomic_ref/float.cc  |  21 +-
>  .../29_atomics/atomic_ref/generic.cc          |   6 +
>  .../29_atomics/atomic_ref/integral.cc         |   6 +
>  .../29_atomics/atomic_ref/op_support.cc       | 113 ++++
>  .../29_atomics/atomic_ref/pointer.cc          |   6 +
>  .../29_atomics/atomic_ref/requirements.cc     |  68 ++-
>  .../29_atomics/atomic_ref/requirements_neg.cc |  34 ++
>  .../29_atomics/atomic_ref/wait_notify.cc      |  10 +
>  14 files changed, 621 insertions(+), 439 deletions(-)
>  create mode 100644 libstdc++-v3/testsuite/29_atomics/atomic_ref/cv_qual.cc
>  create mode 100644
> libstdc++-v3/testsuite/29_atomics/atomic_ref/op_support.cc
>  create mode 100644
> libstdc++-v3/testsuite/29_atomics/atomic_ref/requirements_neg.cc
>
> diff --git a/libstdc++-v3/include/bits/atomic_base.h
> b/libstdc++-v3/include/bits/atomic_base.h
> index 92d1269493f..5b8c2d49e9b 100644
> --- a/libstdc++-v3/include/bits/atomic_base.h
> +++ b/libstdc++-v3/include/bits/atomic_base.h
> @@ -1508,211 +1508,146 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>      };
>  #undef _GLIBCXX20_INIT
>
> -  template<typename _Tp,
> -           bool = is_integral_v<_Tp> && !is_same_v<_Tp, bool>,
> -           bool = is_floating_point_v<_Tp>>
> -    struct __atomic_ref;
> +  // __atomic_ref_base<const _Tp> provides the non-mutating APIs shared
> +  // by const and non-const types.
> +  // __atomic_ref_base<_Tp> inherits from __atomic_ref_base<const _Tp>,
> +  // and provides the common mutating APIs required by [atomic.ref].
> +  // __atomic_ref<_Tp> inherits from __atomic_ref_base<_Tp> (const or
> +  // non-const) and adds the type-specific mutating APIs.
> +  // atomic_ref inherits from __atomic_ref.
> +
> +  template<typename _Tp>
> +    struct __atomic_ref_base;
>
> -  // base class for non-integral, non-floating-point, non-pointer types
>    template<typename _Tp>
> -    struct __atomic_ref<_Tp, false, false>
> +    struct __atomic_ref_base<const _Tp>
>      {
> -      static_assert(is_trivially_copyable_v<_Tp>);
> +    private:
> +      using _Vt = remove_cv_t<_Tp>;
> +
> +      static consteval bool
> +      _S_is_always_lock_free()
> +      {
> +       if constexpr (is_pointer_v<_Vt>)
> +         return ATOMIC_POINTER_LOCK_FREE == 2;
> +       else
> +         return __atomic_always_lock_free(sizeof(_Vt), 0);
> +      }
>
> -      // 1/2/4/8/16-byte types must be aligned to at least their size.
> -      static constexpr int _S_min_alignment
> -       = (sizeof(_Tp) & (sizeof(_Tp) - 1)) || sizeof(_Tp) > 16
> -       ? 0 : sizeof(_Tp);
> +      static consteval int
> +      _S_required_aligment()
> +      {
> +       if constexpr (is_floating_point_v<_Vt> || is_pointer_v<_Vt>)
> +         return alignof(_Vt);
> +       else if constexpr ((sizeof(_Vt) & (sizeof(_Vt) - 1)) ||
> sizeof(_Vt) > 16)
> +         return alignof(_Vt);
> +       else
> +         // 1/2/4/8/16-byte types, including integral types,
> +         // must be aligned to at least their size.
> +         return (sizeof(_Vt) > alignof(_Vt)) ? sizeof(_Vt) : alignof(_Vt);
> +      }
>
>      public:
> -      using value_type = _Tp;
> +      using value_type = _Vt;
> +      static_assert(is_trivially_copyable_v<value_type>);
>
> -      static constexpr bool is_always_lock_free
> -       = __atomic_always_lock_free(sizeof(_Tp), 0);
> +      static constexpr bool is_always_lock_free =
> _S_is_always_lock_free();
> +      static_assert(is_always_lock_free || !is_volatile_v<_Tp>,
> +       "atomic_ref of volatile-qualified type is only supported if
> operations are lock-free");
>
> -      static constexpr size_t required_alignment
> -       = _S_min_alignment > alignof(_Tp) ? _S_min_alignment :
> alignof(_Tp);
> +      static constexpr size_t required_alignment = _S_required_aligment();
>
> -      __atomic_ref& operator=(const __atomic_ref&) = delete;
> +      __atomic_ref_base() = delete;
> +      __atomic_ref_base& operator=(const __atomic_ref_base&) = delete;
>
>        explicit
> -      __atomic_ref(_Tp& __t) : _M_ptr(std::__addressof(__t))
> +      __atomic_ref_base(const _Tp& __t)
> +       : _M_ptr(const_cast<_Tp*>(std::__addressof(__t)))
>        {
>         __glibcxx_assert(((__UINTPTR_TYPE__)_M_ptr % required_alignment)
> == 0);
>        }
>
> -      __atomic_ref(const __atomic_ref&) noexcept = default;
> -
> -      _Tp
> -      operator=(_Tp __t) const noexcept
> -      {
> -       this->store(__t);
> -       return __t;
> -      }
> +      __atomic_ref_base(const __atomic_ref_base&) noexcept = default;
>
> -      operator _Tp() const noexcept { return this->load(); }
> +      operator value_type() const noexcept { return this->load(); }
>
>        bool
>        is_lock_free() const noexcept
>        { return __atomic_impl::is_lock_free<sizeof(_Tp),
> required_alignment>(); }
>
> -      void
> -      store(_Tp __t, memory_order __m = memory_order_seq_cst) const
> noexcept
> -      { __atomic_impl::store(_M_ptr, __t, __m); }
> -
> -      _Tp
> +      value_type
>        load(memory_order __m = memory_order_seq_cst) const noexcept
>        { return __atomic_impl::load(_M_ptr, __m); }
>
> -      _Tp
> -      exchange(_Tp __desired, memory_order __m = memory_order_seq_cst)
> -      const noexcept
> -      { return __atomic_impl::exchange(_M_ptr, __desired, __m); }
> -
> -      bool
> -      compare_exchange_weak(_Tp& __expected, _Tp __desired,
> -                           memory_order __success,
> -                           memory_order __failure) const noexcept
> -      {
> -       return __atomic_impl::compare_exchange_weak<true>(
> -                _M_ptr, __expected, __desired, __success, __failure);
> -      }
> -
> -      bool
> -      compare_exchange_strong(_Tp& __expected, _Tp __desired,
> -                           memory_order __success,
> -                           memory_order __failure) const noexcept
> -      {
> -       return __atomic_impl::compare_exchange_strong<true>(
> -                _M_ptr, __expected, __desired, __success, __failure);
> -      }
> -
> -      bool
> -      compare_exchange_weak(_Tp& __expected, _Tp __desired,
> -                           memory_order __order = memory_order_seq_cst)
> -      const noexcept
> -      {
> -       return compare_exchange_weak(__expected, __desired, __order,
> -                                     __cmpexch_failure_order(__order));
> -      }
> -
> -      bool
> -      compare_exchange_strong(_Tp& __expected, _Tp __desired,
> -                             memory_order __order = memory_order_seq_cst)
> -      const noexcept
> -      {
> -       return compare_exchange_strong(__expected, __desired, __order,
> -                                      __cmpexch_failure_order(__order));
> -      }
> -
>  #if __glibcxx_atomic_wait
>        _GLIBCXX_ALWAYS_INLINE void
> -      wait(_Tp __old, memory_order __m = memory_order_seq_cst) const
> noexcept
> -      { __atomic_impl::wait(_M_ptr, __old, __m); }
> -
> -      // TODO add const volatile overload
> -
> -      _GLIBCXX_ALWAYS_INLINE void
> -      notify_one() const noexcept
> -      { __atomic_impl::notify_one(_M_ptr); }
> -
> -      // TODO add const volatile overload
> -
> -      _GLIBCXX_ALWAYS_INLINE void
> -      notify_all() const noexcept
> -      { __atomic_impl::notify_all(_M_ptr); }
> -
> -      // TODO add const volatile overload
> +      wait(value_type __old, memory_order __m = memory_order_seq_cst)
> const noexcept
> +      {
> +       // TODO remove when volatile is supported
> +       static_assert(!is_volatile_v<_Tp>, "atomic wait on volatile is
> not supported");
> +       __atomic_impl::wait(_M_ptr, __old, __m);
> +      }
>  #endif // __glibcxx_atomic_wait
>
> -    private:
> +    protected:
>        _Tp* _M_ptr;
>      };
>
> -  // base class for atomic_ref<integral-type>
>    template<typename _Tp>
> -    struct __atomic_ref<_Tp, true, false>
> +    struct __atomic_ref_base
> +      : __atomic_ref_base<const _Tp>
>      {
> -      static_assert(is_integral_v<_Tp>);
> -
> -    public:
> -      using value_type = _Tp;
> -      using difference_type = value_type;
> -
> -      static constexpr bool is_always_lock_free
> -       = __atomic_always_lock_free(sizeof(_Tp), 0);
> -
> -      static constexpr size_t required_alignment
> -       = sizeof(_Tp) > alignof(_Tp) ? sizeof(_Tp) : alignof(_Tp);
> -
> -      __atomic_ref() = delete;
> -      __atomic_ref& operator=(const __atomic_ref&) = delete;
> +      using value_type = typename __atomic_ref_base<const
> _Tp>::value_type;
>
>        explicit
> -      __atomic_ref(_Tp& __t) : _M_ptr(&__t)
> -      {
> -       __glibcxx_assert(((__UINTPTR_TYPE__)_M_ptr % required_alignment)
> == 0);
> -      }
> +      __atomic_ref_base(_Tp& __t) : __atomic_ref_base<const _Tp>(__t)
> +      { }
>
> -      __atomic_ref(const __atomic_ref&) noexcept = default;
> -
> -      _Tp
> -      operator=(_Tp __t) const noexcept
> +      value_type
> +      operator=(value_type __t) const noexcept
>        {
>         this->store(__t);
>         return __t;
>        }
>
> -      operator _Tp() const noexcept { return this->load(); }
> -
> -      bool
> -      is_lock_free() const noexcept
> -      {
> -       return __atomic_impl::is_lock_free<sizeof(_Tp),
> required_alignment>();
> -      }
> -
>        void
> -      store(_Tp __t, memory_order __m = memory_order_seq_cst) const
> noexcept
> -      { __atomic_impl::store(_M_ptr, __t, __m); }
> -
> -      _Tp
> -      load(memory_order __m = memory_order_seq_cst) const noexcept
> -      { return __atomic_impl::load(_M_ptr, __m); }
> +      store(value_type __t, memory_order __m = memory_order_seq_cst)
> const noexcept
> +      { __atomic_impl::store(this->_M_ptr, __t, __m); }
>
> -      _Tp
> -      exchange(_Tp __desired,
> -              memory_order __m = memory_order_seq_cst) const noexcept
> -      { return __atomic_impl::exchange(_M_ptr, __desired, __m); }
> +      value_type
> +      exchange(value_type __desired, memory_order __m =
> memory_order_seq_cst)
> +      const noexcept
> +      { return __atomic_impl::exchange(this->_M_ptr, __desired, __m); }
>
>        bool
> -      compare_exchange_weak(_Tp& __expected, _Tp __desired,
> +      compare_exchange_weak(value_type& __expected, value_type __desired,
>                             memory_order __success,
>                             memory_order __failure) const noexcept
>        {
>         return __atomic_impl::compare_exchange_weak<true>(
> -                _M_ptr, __expected, __desired, __success, __failure);
> +                this->_M_ptr, __expected, __desired, __success,
> __failure);
>        }
>
>        bool
> -      compare_exchange_strong(_Tp& __expected, _Tp __desired,
> -                             memory_order __success,
> -                             memory_order __failure) const noexcept
> +      compare_exchange_strong(value_type& __expected, value_type
> __desired,
> +                           memory_order __success,
> +                           memory_order __failure) const noexcept
>        {
>         return __atomic_impl::compare_exchange_strong<true>(
> -                _M_ptr, __expected, __desired, __success, __failure);
> +                this->_M_ptr, __expected, __desired, __success,
> __failure);
>        }
>
>        bool
> -      compare_exchange_weak(_Tp& __expected, _Tp __desired,
> +      compare_exchange_weak(value_type& __expected, value_type __desired,
>                             memory_order __order = memory_order_seq_cst)
>        const noexcept
>        {
>         return compare_exchange_weak(__expected, __desired, __order,
> -                                     __cmpexch_failure_order(__order));
> +                                    __cmpexch_failure_order(__order));
>        }
>
>        bool
> -      compare_exchange_strong(_Tp& __expected, _Tp __desired,
> +      compare_exchange_strong(value_type& __expected, value_type
> __desired,
>                               memory_order __order = memory_order_seq_cst)
>        const noexcept
>        {
> @@ -1721,49 +1656,81 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>        }
>
>  #if __glibcxx_atomic_wait
> -      _GLIBCXX_ALWAYS_INLINE void
> -      wait(_Tp __old, memory_order __m = memory_order_seq_cst) const
> noexcept
> -      { __atomic_impl::wait(_M_ptr, __old, __m); }
> -
> -      // TODO add const volatile overload
> -
>        _GLIBCXX_ALWAYS_INLINE void
>        notify_one() const noexcept
> -      { __atomic_impl::notify_one(_M_ptr); }
> -
> -      // TODO add const volatile overload
> +      {
> +       // TODO remove when volatile is supported
> +       static_assert(!is_volatile_v<_Tp>, "atomic notify on volatile is
> not supported");
> +       __atomic_impl::notify_one(this->_M_ptr);
> +      }
>
>        _GLIBCXX_ALWAYS_INLINE void
>        notify_all() const noexcept
> -      { __atomic_impl::notify_all(_M_ptr); }
> -
> -      // TODO add const volatile overload
> +      {
> +       // TODO remove when volatile is supported
> +       static_assert(!is_volatile_v<_Tp>, "atomic notify on volatile is
> not supported");
> +       __atomic_impl::notify_all(this->_M_ptr);
> +      }
>  #endif // __glibcxx_atomic_wait
> +    };
> +
> +  template<typename _Tp,
> +          bool = is_integral_v<_Tp> && !is_same_v<remove_cv_t<_Tp>, bool>,
> +          bool = is_floating_point_v<_Tp>,
> +          bool = is_pointer_v<_Tp>>
> +    struct __atomic_ref;
> +
> +  // base class for non-integral, non-floating-point, non-pointer types
> +  template<typename _Tp>
> +    struct __atomic_ref<_Tp, false, false, false>
> +      : __atomic_ref_base<_Tp>
> +    {
> +      using __atomic_ref_base<_Tp>::__atomic_ref_base;
> +      using __atomic_ref_base<_Tp>::operator=;
> +    };
> +
> +  template<typename _Tp>
> +    struct __atomic_ref<const _Tp, false, false, false>
> +      : __atomic_ref_base<const _Tp>
> +    {
> +      using __atomic_ref_base<const _Tp>::__atomic_ref_base;
> +    };
> +
> +  // base class for atomic_ref<integral-type>
> +  template<typename _Tp>
> +    struct __atomic_ref<_Tp, true, false, false>
> +      : __atomic_ref_base<_Tp>
> +    {
> +      using value_type = typename __atomic_ref_base<_Tp>::value_type;
> +      using difference_type = value_type;
> +
> +      using __atomic_ref_base<_Tp>::__atomic_ref_base;
> +      using __atomic_ref_base<_Tp>::operator=;
>
>        value_type
>        fetch_add(value_type __i,
>                 memory_order __m = memory_order_seq_cst) const noexcept
> -      { return __atomic_impl::fetch_add(_M_ptr, __i, __m); }
> +      { return __atomic_impl::fetch_add(this->_M_ptr, __i, __m); }
>
>        value_type
>        fetch_sub(value_type __i,
>                 memory_order __m = memory_order_seq_cst) const noexcept
> -      { return __atomic_impl::fetch_sub(_M_ptr, __i, __m); }
> +      { return __atomic_impl::fetch_sub(this->_M_ptr, __i, __m); }
>
>        value_type
>        fetch_and(value_type __i,
>                 memory_order __m = memory_order_seq_cst) const noexcept
> -      { return __atomic_impl::fetch_and(_M_ptr, __i, __m); }
> +      { return __atomic_impl::fetch_and(this->_M_ptr, __i, __m); }
>
>        value_type
>        fetch_or(value_type __i,
>                memory_order __m = memory_order_seq_cst) const noexcept
> -      { return __atomic_impl::fetch_or(_M_ptr, __i, __m); }
> +      { return __atomic_impl::fetch_or(this->_M_ptr, __i, __m); }
>
>        value_type
>        fetch_xor(value_type __i,
>                 memory_order __m = memory_order_seq_cst) const noexcept
> -      { return __atomic_impl::fetch_xor(_M_ptr, __i, __m); }
> +      { return __atomic_impl::fetch_xor(this->_M_ptr, __i, __m); }
>
>        _GLIBCXX_ALWAYS_INLINE value_type
>        operator++(int) const noexcept
> @@ -1775,284 +1742,98 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>
>        value_type
>        operator++() const noexcept
> -      { return __atomic_impl::__add_fetch(_M_ptr, value_type(1)); }
> +      { return __atomic_impl::__add_fetch(this->_M_ptr, value_type(1)); }
>
>        value_type
>        operator--() const noexcept
> -      { return __atomic_impl::__sub_fetch(_M_ptr, value_type(1)); }
> +      { return __atomic_impl::__sub_fetch(this->_M_ptr, value_type(1)); }
>
>        value_type
>        operator+=(value_type __i) const noexcept
> -      { return __atomic_impl::__add_fetch(_M_ptr, __i); }
> +      { return __atomic_impl::__add_fetch(this->_M_ptr, __i); }
>
>        value_type
>        operator-=(value_type __i) const noexcept
> -      { return __atomic_impl::__sub_fetch(_M_ptr, __i); }
> +      { return __atomic_impl::__sub_fetch(this->_M_ptr, __i); }
>
>        value_type
>        operator&=(value_type __i) const noexcept
> -      { return __atomic_impl::__and_fetch(_M_ptr, __i); }
> +      { return __atomic_impl::__and_fetch(this->_M_ptr, __i); }
>
>        value_type
>        operator|=(value_type __i) const noexcept
> -      { return __atomic_impl::__or_fetch(_M_ptr, __i); }
> +      { return __atomic_impl::__or_fetch(this->_M_ptr, __i); }
>
>        value_type
>        operator^=(value_type __i) const noexcept
> -      { return __atomic_impl::__xor_fetch(_M_ptr, __i); }
> +      { return __atomic_impl::__xor_fetch(this->_M_ptr, __i); }
> +    };
>
> -    private:
> -      _Tp* _M_ptr;
> +  template<typename _Tp>
> +    struct __atomic_ref<const _Tp, true, false, false>
> +      : __atomic_ref_base<const _Tp>
> +    {
> +      using difference_type = typename __atomic_ref_base<const
> _Tp>::value_type;
> +      using __atomic_ref_base<const _Tp>::__atomic_ref_base;
>      };
>
>    // base class for atomic_ref<floating-point-type>
>    template<typename _Fp>
> -    struct __atomic_ref<_Fp, false, true>
> +    struct __atomic_ref<_Fp, false, true, false>
> +      : __atomic_ref_base<_Fp>
>      {
> -      static_assert(is_floating_point_v<_Fp>);
> -
> -    public:
> -      using value_type = _Fp;
> +      using value_type = typename __atomic_ref_base<_Fp>::value_type;
>        using difference_type = value_type;
>
> -      static constexpr bool is_always_lock_free
> -       = __atomic_always_lock_free(sizeof(_Fp), 0);
> -
> -      static constexpr size_t required_alignment = __alignof__(_Fp);
> -
> -      __atomic_ref() = delete;
> -      __atomic_ref& operator=(const __atomic_ref&) = delete;
> -
> -      explicit
> -      __atomic_ref(_Fp& __t) : _M_ptr(&__t)
> -      {
> -       __glibcxx_assert(((__UINTPTR_TYPE__)_M_ptr % required_alignment)
> == 0);
> -      }
> -
> -      __atomic_ref(const __atomic_ref&) noexcept = default;
> -
> -      _Fp
> -      operator=(_Fp __t) const noexcept
> -      {
> -       this->store(__t);
> -       return __t;
> -      }
> -
> -      operator _Fp() const noexcept { return this->load(); }
> -
> -      bool
> -      is_lock_free() const noexcept
> -      {
> -       return __atomic_impl::is_lock_free<sizeof(_Fp),
> required_alignment>();
> -      }
> -
> -      void
> -      store(_Fp __t, memory_order __m = memory_order_seq_cst) const
> noexcept
> -      { __atomic_impl::store(_M_ptr, __t, __m); }
> -
> -      _Fp
> -      load(memory_order __m = memory_order_seq_cst) const noexcept
> -      { return __atomic_impl::load(_M_ptr, __m); }
> -
> -      _Fp
> -      exchange(_Fp __desired,
> -              memory_order __m = memory_order_seq_cst) const noexcept
> -      { return __atomic_impl::exchange(_M_ptr, __desired, __m); }
> -
> -      bool
> -      compare_exchange_weak(_Fp& __expected, _Fp __desired,
> -                           memory_order __success,
> -                           memory_order __failure) const noexcept
> -      {
> -       return __atomic_impl::compare_exchange_weak<true>(
> -                _M_ptr, __expected, __desired, __success, __failure);
> -      }
> -
> -      bool
> -      compare_exchange_strong(_Fp& __expected, _Fp __desired,
> -                             memory_order __success,
> -                             memory_order __failure) const noexcept
> -      {
> -       return __atomic_impl::compare_exchange_strong<true>(
> -                _M_ptr, __expected, __desired, __success, __failure);
> -      }
> -
> -      bool
> -      compare_exchange_weak(_Fp& __expected, _Fp __desired,
> -                           memory_order __order = memory_order_seq_cst)
> -      const noexcept
> -      {
> -       return compare_exchange_weak(__expected, __desired, __order,
> -                                     __cmpexch_failure_order(__order));
> -      }
> -
> -      bool
> -      compare_exchange_strong(_Fp& __expected, _Fp __desired,
> -                             memory_order __order = memory_order_seq_cst)
> -      const noexcept
> -      {
> -       return compare_exchange_strong(__expected, __desired, __order,
> -                                      __cmpexch_failure_order(__order));
> -      }
> -
> -#if __glibcxx_atomic_wait
> -      _GLIBCXX_ALWAYS_INLINE void
> -      wait(_Fp __old, memory_order __m = memory_order_seq_cst) const
> noexcept
> -      { __atomic_impl::wait(_M_ptr, __old, __m); }
> -
> -      // TODO add const volatile overload
> -
> -      _GLIBCXX_ALWAYS_INLINE void
> -      notify_one() const noexcept
> -      { __atomic_impl::notify_one(_M_ptr); }
> -
> -      // TODO add const volatile overload
> -
> -      _GLIBCXX_ALWAYS_INLINE void
> -      notify_all() const noexcept
> -      { __atomic_impl::notify_all(_M_ptr); }
> -
> -      // TODO add const volatile overload
> -#endif // __glibcxx_atomic_wait
> +      using __atomic_ref_base<_Fp>::__atomic_ref_base;
> +      using __atomic_ref_base<_Fp>::operator=;
>
>        value_type
>        fetch_add(value_type __i,
>                 memory_order __m = memory_order_seq_cst) const noexcept
> -      { return __atomic_impl::__fetch_add_flt(_M_ptr, __i, __m); }
> +      { return __atomic_impl::__fetch_add_flt(this->_M_ptr, __i, __m); }
>
>        value_type
>        fetch_sub(value_type __i,
>                 memory_order __m = memory_order_seq_cst) const noexcept
> -      { return __atomic_impl::__fetch_sub_flt(_M_ptr, __i, __m); }
> +      { return __atomic_impl::__fetch_sub_flt(this->_M_ptr, __i, __m); }
>
>        value_type
>        operator+=(value_type __i) const noexcept
> -      { return __atomic_impl::__add_fetch_flt(_M_ptr, __i); }
> +      { return __atomic_impl::__add_fetch_flt(this->_M_ptr, __i); }
>
>        value_type
>        operator-=(value_type __i) const noexcept
> -      { return __atomic_impl::__sub_fetch_flt(_M_ptr, __i); }
> +      { return __atomic_impl::__sub_fetch_flt(this->_M_ptr, __i); }
> +    };
>
> -    private:
> -      _Fp* _M_ptr;
> +  template<typename _Fp>
> +    struct __atomic_ref<const _Fp, false, true, false>
> +      : __atomic_ref_base<const _Fp>
> +    {
> +      using difference_type = typename __atomic_ref_base<const
> _Fp>::value_type;
> +      using __atomic_ref_base<const _Fp>::__atomic_ref_base;
>      };
>
>    // base class for atomic_ref<pointer-type>
> -  template<typename _Tp>
> -    struct __atomic_ref<_Tp*, false, false>
> +  template<typename _Pt>
> +    struct __atomic_ref<_Pt, false, false, true>
> +      : __atomic_ref_base<_Pt>
>      {
> -    public:
> -      using value_type = _Tp*;
> +      using value_type = typename __atomic_ref_base<_Pt>::value_type;
>        using difference_type = ptrdiff_t;
>
> -      static constexpr bool is_always_lock_free =
> ATOMIC_POINTER_LOCK_FREE == 2;
> -
> -      static constexpr size_t required_alignment = __alignof__(_Tp*);
> -
> -      __atomic_ref() = delete;
> -      __atomic_ref& operator=(const __atomic_ref&) = delete;
> -
> -      explicit
> -      __atomic_ref(_Tp*& __t) : _M_ptr(std::__addressof(__t))
> -      {
> -       __glibcxx_assert(((__UINTPTR_TYPE__)_M_ptr % required_alignment)
> == 0);
> -      }
> -
> -      __atomic_ref(const __atomic_ref&) noexcept = default;
> -
> -      _Tp*
> -      operator=(_Tp* __t) const noexcept
> -      {
> -       this->store(__t);
> -       return __t;
> -      }
> -
> -      operator _Tp*() const noexcept { return this->load(); }
> -
> -      bool
> -      is_lock_free() const noexcept
> -      {
> -       return __atomic_impl::is_lock_free<sizeof(_Tp*),
> required_alignment>();
> -      }
> -
> -      void
> -      store(_Tp* __t, memory_order __m = memory_order_seq_cst) const
> noexcept
> -      { __atomic_impl::store(_M_ptr, __t, __m); }
> -
> -      _Tp*
> -      load(memory_order __m = memory_order_seq_cst) const noexcept
> -      { return __atomic_impl::load(_M_ptr, __m); }
> -
> -      _Tp*
> -      exchange(_Tp* __desired,
> -              memory_order __m = memory_order_seq_cst) const noexcept
> -      { return __atomic_impl::exchange(_M_ptr, __desired, __m); }
> -
> -      bool
> -      compare_exchange_weak(_Tp*& __expected, _Tp* __desired,
> -                           memory_order __success,
> -                           memory_order __failure) const noexcept
> -      {
> -       return __atomic_impl::compare_exchange_weak<true>(
> -                _M_ptr, __expected, __desired, __success, __failure);
> -      }
> -
> -      bool
> -      compare_exchange_strong(_Tp*& __expected, _Tp* __desired,
> -                           memory_order __success,
> -                           memory_order __failure) const noexcept
> -      {
> -       return __atomic_impl::compare_exchange_strong<true>(
> -                _M_ptr, __expected, __desired, __success, __failure);
> -      }
> -
> -      bool
> -      compare_exchange_weak(_Tp*& __expected, _Tp* __desired,
> -                           memory_order __order = memory_order_seq_cst)
> -      const noexcept
> -      {
> -       return compare_exchange_weak(__expected, __desired, __order,
> -                                     __cmpexch_failure_order(__order));
> -      }
> -
> -      bool
> -      compare_exchange_strong(_Tp*& __expected, _Tp* __desired,
> -                             memory_order __order = memory_order_seq_cst)
> -      const noexcept
> -      {
> -       return compare_exchange_strong(__expected, __desired, __order,
> -                                      __cmpexch_failure_order(__order));
> -      }
> -
> -#if __glibcxx_atomic_wait
> -      _GLIBCXX_ALWAYS_INLINE void
> -      wait(_Tp* __old, memory_order __m = memory_order_seq_cst) const
> noexcept
> -      { __atomic_impl::wait(_M_ptr, __old, __m); }
> -
> -      // TODO add const volatile overload
> -
> -      _GLIBCXX_ALWAYS_INLINE void
> -      notify_one() const noexcept
> -      { __atomic_impl::notify_one(_M_ptr); }
> -
> -      // TODO add const volatile overload
> -
> -      _GLIBCXX_ALWAYS_INLINE void
> -      notify_all() const noexcept
> -      { __atomic_impl::notify_all(_M_ptr); }
> -
> -      // TODO add const volatile overload
> -#endif // __glibcxx_atomic_wait
> -
> +      using __atomic_ref_base<_Pt>::__atomic_ref_base;
> +      using __atomic_ref_base<_Pt>::operator=;
>        _GLIBCXX_ALWAYS_INLINE value_type
>        fetch_add(difference_type __d,
>                 memory_order __m = memory_order_seq_cst) const noexcept
> -      { return __atomic_impl::fetch_add(_M_ptr, _S_type_size(__d), __m); }
> +      { return __atomic_impl::fetch_add(this->_M_ptr, _S_type_size(__d),
> __m); }
>
>        _GLIBCXX_ALWAYS_INLINE value_type
>        fetch_sub(difference_type __d,
>                 memory_order __m = memory_order_seq_cst) const noexcept
> -      { return __atomic_impl::fetch_sub(_M_ptr, _S_type_size(__d), __m); }
> +      { return __atomic_impl::fetch_sub(this->_M_ptr, _S_type_size(__d),
> __m); }
>
>        value_type
>        operator++(int) const noexcept
> @@ -2065,36 +1846,43 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>        value_type
>        operator++() const noexcept
>        {
> -       return __atomic_impl::__add_fetch(_M_ptr, _S_type_size(1));
> +       return __atomic_impl::__add_fetch(this->_M_ptr, _S_type_size(1));
>        }
>
>        value_type
>        operator--() const noexcept
>        {
> -       return __atomic_impl::__sub_fetch(_M_ptr, _S_type_size(1));
> +       return __atomic_impl::__sub_fetch(this->_M_ptr, _S_type_size(1));
>        }
>
>        value_type
>        operator+=(difference_type __d) const noexcept
>        {
> -       return __atomic_impl::__add_fetch(_M_ptr, _S_type_size(__d));
> +       return __atomic_impl::__add_fetch(this->_M_ptr, _S_type_size(__d));
>        }
>
>        value_type
>        operator-=(difference_type __d) const noexcept
>        {
> -       return __atomic_impl::__sub_fetch(_M_ptr, _S_type_size(__d));
> +       return __atomic_impl::__sub_fetch(this->_M_ptr, _S_type_size(__d));
>        }
>
>      private:
>        static constexpr ptrdiff_t
>        _S_type_size(ptrdiff_t __d) noexcept
>        {
> -       static_assert(is_object_v<_Tp>);
> -       return __d * sizeof(_Tp);
> +       using _Et = remove_pointer_t<value_type>;
> +       static_assert(is_object_v<_Et>);
> +       return __d * sizeof(_Et);
>        }
> +    };
>
> -      _Tp** _M_ptr;
> +  template<typename _Pt>
> +    struct __atomic_ref<const _Pt, false, false, true>
> +      : __atomic_ref_base<const _Pt>
> +    {
> +      using difference_type = ptrdiff_t;
> +      using __atomic_ref_base<const _Pt>::__atomic_ref_base;
>      };
>  #endif // C++2a
>
> diff --git a/libstdc++-v3/include/std/atomic b/libstdc++-v3/include/std/atomic
> index 9b1aca0fc09..1ea28b1d742 100644
> --- a/libstdc++-v3/include/std/atomic
> +++ b/libstdc++-v3/include/std/atomic
> @@ -217,6 +217,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>        static_assert(sizeof(_Tp) > 0,
>                     "Incomplete or zero-sized types are not supported");
>
> +      // _GLIBCXX_RESOLVE_LIB_DEFECTS
> +      // 4069. std::atomic<volatile T> should be ill-formed
> +      static_assert(is_same<_Tp, typename remove_cv<_Tp>::type>::value,
> +                   "cv-qualified types are not supported");
> +
>  #if __cplusplus > 201703L
>        static_assert(is_copy_constructible_v<_Tp>);
>        static_assert(is_move_constructible_v<_Tp>);
> diff --git a/libstdc++-v3/testsuite/29_atomics/atomic/requirements/types_neg.cc b/libstdc++-v3/testsuite/29_atomics/atomic/requirements/types_neg.cc
> index cfe44d255ca..b9105481006 100644
> --- a/libstdc++-v3/testsuite/29_atomics/atomic/requirements/types_neg.cc
> +++ b/libstdc++-v3/testsuite/29_atomics/atomic/requirements/types_neg.cc
> @@ -19,7 +19,9 @@
>
>  #include <atomic>
>
> -std::atomic<const int> a; // { dg-error "here" }
> +std::atomic<const int> ca; // { dg-error "here" }
> +std::atomic<volatile int> va; // { dg-error "here" }
> +std::atomic<const volatile int> cva; // { dg-error "here" }
>  // { dg-error "assignment to read-only type" "" { target *-*-* } 0 }
>
>  struct MoveOnly
> diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/bool.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/bool.cc
> index 4702932627e..c73319010ee 100644
> --- a/libstdc++-v3/testsuite/29_atomics/atomic_ref/bool.cc
> +++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/bool.cc
> @@ -1,15 +1,85 @@
> -// { dg-do compile { target c++20 } }
> +// Copyright (C) 2019-2025 Free Software Foundation, Inc.
> +//
> +// This file is part of the GNU ISO C++ Library.  This library is free
> +// software; you can redistribute it and/or modify it under the
> +// terms of the GNU General Public License as published by the
> +// Free Software Foundation; either version 3, or (at your option)
> +// any later version.
> +
> +// This library is distributed in the hope that it will be useful,
> +// but WITHOUT ANY WARRANTY; without even the implied warranty of
> +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +// GNU General Public License for more details.
> +
> +// You should have received a copy of the GNU General Public License along
> +// with this library; see the file COPYING3.  If not see
> +// <http://www.gnu.org/licenses/>.
> +
> +// { dg-do run { target c++20 } }
> +// { dg-require-atomic-cmpxchg-word "" }
> +// { dg-add-options libatomic }
>
>  #include <atomic>
> +#include <testsuite_hooks.h>
> +
> +void
> +test01()
> +{
> +  bool value;
> +
> +  {
> +    const auto mo = std::memory_order_relaxed;
> +    std::atomic_ref<bool> a(value);
> +    bool ok = a.is_lock_free();
> +    if constexpr (std::atomic_ref<bool>::is_always_lock_free)
> +      VERIFY( ok );
> +    a = false;
> +    VERIFY( !a.load() );
> +    VERIFY( !a.load(mo) );
> +    a.store(true);
> +    VERIFY( a.load() );
> +    auto v = a.exchange(false);
> +    VERIFY( !a.load() );
> +    VERIFY( v );
> +    v = a.exchange(true, mo);
> +    VERIFY( a.load() );
> +    VERIFY( !v );
> +
> +    auto expected = a.load();
> +    while (!a.compare_exchange_weak(expected, false, mo, mo))
> +    { /* weak form can fail spuriously */ }
> +    VERIFY( !a.load() );
> +    VERIFY( expected );
> +
> +    ok = a.compare_exchange_strong(expected, true);
> +    VERIFY( !ok && !a.load() && !expected );
> +
> +    ok = a.compare_exchange_strong(expected, true);
> +    VERIFY( ok && a.load() && !expected );
> +  }
> +}
> +
> +void
> +test02()
> +{
> +  bool b = false;
> +  std::atomic_ref<bool> a0(b);
> +  std::atomic_ref<bool> a1(b);
> +  std::atomic_ref<const bool> a1c(b);
> +  std::atomic_ref<volatile bool> a1v(b);
> +  std::atomic_ref<const volatile bool> a1cv(b);
> +  std::atomic_ref<bool> a2(a0);
> +  b = true;
> +  VERIFY( a1.load() );
> +  VERIFY( a1c.load() );
> +  VERIFY( a1v.load() );
> +  VERIFY( a1cv.load() );
> +  VERIFY( a2.load() );
> +}
>
> -template<class T> concept has_and = requires (T& a) { a &= false; };
> -template<class T> concept has_or = requires (T& a) { a |= false; };
> -template<class T> concept has_xor = requires (T& a) { a ^= false; };
> -template<class T> concept has_fetch_add = requires (T& a) { a.fetch_add(true); };
> -template<class T> concept has_fetch_sub = requires (T& a) { a.fetch_sub(true); };
> -
> -static_assert( not has_and<std::atomic_ref<bool>> );
> -static_assert( not has_or<std::atomic_ref<bool>> );
> -static_assert( not has_xor<std::atomic_ref<bool>> );
> -static_assert( not has_fetch_add<std::atomic_ref<bool>> );
> -static_assert( not has_fetch_sub<std::atomic_ref<bool>> );
> +int
> +main()
> +{
> +  test01();
> +  test02();
> +}
> diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/cv_qual.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/cv_qual.cc
> new file mode 100644
> index 00000000000..dfc6a559945
> --- /dev/null
> +++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/cv_qual.cc
> @@ -0,0 +1,94 @@
> +// Copyright (C) 2019-2025 Free Software Foundation, Inc.
> +//
> +// This file is part of the GNU ISO C++ Library.  This library is free
> +// software; you can redistribute it and/or modify it under the
> +// terms of the GNU General Public License as published by the
> +// Free Software Foundation; either version 3, or (at your option)
> +// any later version.
> +
> +// This library is distributed in the hope that it will be useful,
> +// but WITHOUT ANY WARRANTY; without even the implied warranty of
> +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +// GNU General Public License for more details.
> +
> +// You should have received a copy of the GNU General Public License along
> +// with this library; see the file COPYING3.  If not see
> +// <http://www.gnu.org/licenses/>.
> +
> +// { dg-do run { target c++20 } }
> +// { dg-require-atomic-cmpxchg-word "" }
> +// { dg-add-options libatomic }
> +
> +#include <atomic>
> +#include <testsuite_hooks.h>
> +
> +struct X
> +{
> +  X() = default;
> +  X(int i) : i(i) { }
> +  int i;
> +
> +  friend bool
> +  operator==(X, X) = default;
> +};
> +
> +template<typename V>
> +void
> +test01(V v0, V v1)
> +{
> +  V value;
> +
> +  if constexpr (std::atomic_ref<V>::is_always_lock_free)
> +  {
> +    std::atomic_ref<volatile V> a(value);
> +    VERIFY( a.is_lock_free() );
> +
> +    a = v0;
> +    VERIFY( V(a) == v0 );
> +    VERIFY( a.load() == v0 );
> +
> +    a.store(v1);
> +    VERIFY( a.load() == v1 );
> +
> +    V last = a.exchange(v0);
> +    VERIFY( a.load() == v0 );
> +    VERIFY( last == v1 );
> +
> +    V expected = a.load();
> +    while (!a.compare_exchange_weak(expected, v1))
> +    { /* weak form can fail spuriously */ }
> +    VERIFY( a.load() == v1 );
> +    VERIFY( expected == v0 );
> +
> +    bool ok;
> +    ok = a.compare_exchange_strong(expected, v0);
> +    VERIFY( !ok && a.load() == v1 && expected == v1 );
> +
> +    ok = a.compare_exchange_strong(expected, v0);
> +    VERIFY( ok && a.load() == v0 && expected == v1 );
> +
> +    std::atomic_ref<const volatile V> cva(value);
> +    VERIFY( cva.is_lock_free() );
> +    VERIFY( V(cva) == v0 );
> +    VERIFY( cva.load() == v0 );
> +  }
> +
> +  value = v0;
> +  std::atomic_ref<const V> ca(value);
> +  bool lf = ca.is_lock_free();
> +  if constexpr (std::atomic_ref<V>::is_always_lock_free)
> +    VERIFY( lf );
> +  VERIFY( V(ca) == v0 );
> +  VERIFY( ca.load() == v0 );
> +}
> +
> +int
> +main()
> +{
> +  int x;
> +  test01<bool>(false, true);
> +  test01<int>(1, 2);
> +  test01<float>(1.2, 3.4);
> +  test01<int*>(&x, &x+1);
> +  test01<X>(12, 13);
> +}
> diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/deduction.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/deduction.cc
> index f67190e97a3..01dbfce2375 100644
> --- a/libstdc++-v3/testsuite/29_atomics/atomic_ref/deduction.cc
> +++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/deduction.cc
> @@ -19,22 +19,29 @@
>
>  #include <atomic>
>
> +template <typename T>
>  void
> -test01()
> +test_impl(T v)
>  {
> -  int i = 0;
> -  std::atomic_ref a0(i);
> -  static_assert(std::is_same_v<decltype(a0), std::atomic_ref<int>>);
> -
> -  float f = 1.0f;
> -  std::atomic_ref a1(f);
> -  static_assert(std::is_same_v<decltype(a1), std::atomic_ref<float>>);
> +  std::atomic_ref a(v);
> +  static_assert(std::is_same_v<decltype(a), std::atomic_ref<T>>);
> +}
>
> -  int* p = &i;
> -  std::atomic_ref a2(p);
> -  static_assert(std::is_same_v<decltype(a2), std::atomic_ref<int*>>);
> +template <typename T>
> +void
> +test(T v)
> +{
> +  test_impl<T>(v);
> +  test_impl<const T>(v);
> +  test_impl<volatile T>(v);
> +  test_impl<const volatile T>(v);
> +}
>
> +int main()
> +{
> +  test<int>(0);
> +  test<float>(1.0f);
> +  test<int*>(nullptr);
>    struct X { } x;
> -  std::atomic_ref a3(x);
> -  static_assert(std::is_same_v<decltype(a3), std::atomic_ref<X>>);
> +  test<X>(x);
>  }
> diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/float.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/float.cc
> index 5773d144c36..c69f3a711d3 100644
> --- a/libstdc++-v3/testsuite/29_atomics/atomic_ref/float.cc
> +++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/float.cc
> @@ -299,14 +299,19 @@ test04()
>  {
>    if constexpr (std::atomic_ref<float>::is_always_lock_free)
>    {
> -    float i = 0;
> -    float* ptr = 0;
> -    std::atomic_ref<float*> a0(ptr);
> -    std::atomic_ref<float*> a1(ptr);
> -    std::atomic_ref<float*> a2(a0);
> -    a0 = &i;
> -    VERIFY( a1 == &i );
> -    VERIFY( a2 == &i );
> +    float i = 0.0f;
> +    std::atomic_ref<float> a0(i);
> +    std::atomic_ref<float> a1(i);
> +    std::atomic_ref<const float> a1c(i);
> +    std::atomic_ref<volatile float> a1v(i);
> +    std::atomic_ref<const volatile float> a1cv(i);
> +    std::atomic_ref<float> a2(a0);
> +    a0 = 1.0f;
> +    VERIFY( a1 == 1.0f );
> +    VERIFY( a1c == 1.0f );
> +    VERIFY( a1v == 1.0f );
> +    VERIFY( a1cv == 1.0f );
> +    VERIFY( a2 == 1.0f );
>    }
>  }
>
> diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/generic.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/generic.cc
> index 2e6fa0f90e2..079ec1b1a78 100644
> --- a/libstdc++-v3/testsuite/29_atomics/atomic_ref/generic.cc
> +++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/generic.cc
> @@ -108,9 +108,15 @@ test02()
>    X i;
>    std::atomic_ref<X> a0(i);
>    std::atomic_ref<X> a1(i);
> +  std::atomic_ref<const X> a1c(i);
> +  std::atomic_ref<volatile X> a1v(i);
> +  std::atomic_ref<const volatile X> a1cv(i);
>    std::atomic_ref<X> a2(a0);
>    a0 = 42;
>    VERIFY( a1.load() == 42 );
> +  VERIFY( a1c.load() == 42 );
> +  VERIFY( a1v.load() == 42 );
> +  VERIFY( a1cv.load() == 42 );
>    VERIFY( a2.load() == 42 );
>  }
>
> diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/integral.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/integral.cc
> index f6b68ebc598..310434cefb5 100644
> --- a/libstdc++-v3/testsuite/29_atomics/atomic_ref/integral.cc
> +++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/integral.cc
> @@ -302,9 +302,15 @@ test03()
>    int i = 0;
>    std::atomic_ref<int> a0(i);
>    std::atomic_ref<int> a1(i);
> +  std::atomic_ref<const int> a1c(i);
> +  std::atomic_ref<volatile int> a1v(i);
> +  std::atomic_ref<const volatile int> a1cv(i);
>    std::atomic_ref<int> a2(a0);
>    a0 = 42;
>    VERIFY( a1 == 42 );
> +  VERIFY( a1c == 42 );
> +  VERIFY( a1v == 42 );
> +  VERIFY( a1cv == 42 );
>    VERIFY( a2 == 42 );
>  }
>
> diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/op_support.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/op_support.cc
> new file mode 100644
> index 00000000000..93c65dce263
> --- /dev/null
> +++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/op_support.cc
> @@ -0,0 +1,113 @@
> +// { dg-do compile { target c++20 } }
> +
> +#include <atomic>
> +
> +template<class T> concept has_and = requires (T& a) { a &= false; };
> +template<class T> concept has_or = requires (T& a) { a |= false; };
> +template<class T> concept has_xor = requires (T& a) { a ^= false; };
> +template<class T> concept has_fetch_add = requires (T& a) { a.fetch_add(true); };
> +template<class T> concept has_fetch_sub = requires (T& a) { a.fetch_sub(true); };
> +
> +static constexpr std::memory_order mo = std::memory_order_seq_cst;
> +
> +#define HAS(op) (requires (std::atomic_ref<T> a, T t) { op; })
> +
> +template<typename T>
> +void
> +no_stores()
> +{
> +  static_assert( !HAS(a = t) );
> +  static_assert( !HAS(a.store(t)) );
> +  static_assert( !HAS(a.store(t, mo)) );
> +  static_assert( !HAS(a.exchange(t)) );
> +  static_assert( !HAS(a.exchange(t, mo)) );
> +
> +  static_assert( !HAS(a.compare_exchange_weak(t, t)) );
> +  static_assert( !HAS(a.compare_exchange_weak(t, t, mo)) );
> +  static_assert( !HAS(a.compare_exchange_weak(t, t, mo, mo)) );
> +
> +  static_assert( !HAS(a.compare_exchange_strong(t, t)) );
> +  static_assert( !HAS(a.compare_exchange_strong(t, t, mo)) );
> +  static_assert( !HAS(a.compare_exchange_strong(t, t, mo, mo)) );
> +}
> +
> +template<typename T>
> +void
> +no_additions()
> +{
> +  static_assert( !HAS(a++) );
> +  static_assert( !HAS(++a) );
> +  static_assert( !HAS(a += t) );
> +  static_assert( !HAS(a.fetch_add(t)) );
> +  static_assert( !HAS(a.fetch_add(t, mo)) );
> +
> +  static_assert( !HAS(a--) );
> +  static_assert( !HAS(--a) );
> +  static_assert( !HAS(a -= t) );
> +  static_assert( !HAS(a.fetch_sub(t)) );
> +  static_assert( !HAS(a.fetch_sub(t, mo)) );
> +}
> +
> +template<typename T>
> +void
> +no_bitops()
> +{
> +  static_assert( !HAS(a &= t) );
> +  static_assert( !HAS(a.fetch_and(t)) );
> +  static_assert( !HAS(a.fetch_and(t, mo)) );
> +
> +  static_assert( !HAS(a |= t) );
> +  static_assert( !HAS(a.fetch_or(t)) );
> +  static_assert( !HAS(a.fetch_or(t, mo)) );
> +
> +  static_assert( !HAS(a ^= t) );
> +  static_assert( !HAS(a.fetch_xor(t)) );
> +  static_assert( !HAS(a.fetch_xor(t, mo)) );
> +}
> +
> +template<typename T>
> +void
> +no_math()
> +{
> +  no_additions<T>();
> +  no_bitops<T>();
> +}
> +
> +template<typename T>
> +void
> +no_mutations()
> +{
> +  no_stores<T>();
> +  no_math<T>();
> +}
> +
> +struct S
> +{
> +  int x;
> +  int y;
> +};
> +
> +int main()
> +{
> +  no_mutations<const int>();
> +  no_mutations<const volatile int>();
> +
> +  no_bitops<float>();
> +  no_bitops<volatile float>();
> +  no_mutations<const float>();
> +
> +  no_bitops<int*>();
> +  no_bitops<int* volatile>();
> +  no_mutations<int* const>();
> +  no_mutations<int* const volatile>();
> +
> +  no_math<bool>();
> +  no_math<volatile bool>();
> +  no_mutations<const bool>();
> +  no_mutations<const volatile bool>();
> +
> +  no_math<S>();
> +  no_math<volatile S>();
> +  no_mutations<const S>();
> +  no_mutations<const volatile S>();
> +}
> diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/pointer.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/pointer.cc
> index d1789af890e..8db45c797c8 100644
> --- a/libstdc++-v3/testsuite/29_atomics/atomic_ref/pointer.cc
> +++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/pointer.cc
> @@ -210,9 +210,15 @@ test03()
>    int* ptr = 0;
>    std::atomic_ref<int*> a0(ptr);
>    std::atomic_ref<int*> a1(ptr);
> +  std::atomic_ref<int* const> a1c(ptr);
> +  std::atomic_ref<int* volatile> a1v(ptr);
> +  std::atomic_ref<int* const volatile> a1cv(ptr);
>    std::atomic_ref<int*> a2(a0);
>    a0 = &i;
>    VERIFY( a1 == &i );
> +  VERIFY( a1c == &i );
> +  VERIFY( a1v == &i );
> +  VERIFY( a1cv == &i );
>    VERIFY( a2 == &i );
>  }
>
> diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/requirements.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/requirements.cc
> index 3b929563a1e..8617661f8e1 100644
> --- a/libstdc++-v3/testsuite/29_atomics/atomic_ref/requirements.cc
> +++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/requirements.cc
> @@ -18,56 +18,92 @@
>  // { dg-do compile { target c++20 } }
>
>  #include <atomic>
> +#include <type_traits>
>
> +template <class T>
>  void
> -test01()
> +test_generic()
>  {
> -  struct X { int c; };
> -  using A = std::atomic_ref<X>;
> +  using A = std::atomic_ref<T>;
>    static_assert( std::is_standard_layout_v<A> );
>    static_assert( std::is_nothrow_copy_constructible_v<A> );
>    static_assert( std::is_trivially_destructible_v<A> );
> -  static_assert( std::is_same_v<A::value_type, X> );
> +  static_assert( std::is_same_v<typename A::value_type, std::remove_cv_t<T>> );
> +  static_assert( !requires { typename A::difference_type; } );
>    static_assert( !std::is_copy_assignable_v<A> );
>    static_assert( !std::is_move_assignable_v<A> );
>  }
>
> +template <class T>
>  void
> -test02()
> +test_integral()
>  {
> -  using A = std::atomic_ref<int>;
> +  using A = std::atomic_ref<T>;
>    static_assert( std::is_standard_layout_v<A> );
>    static_assert( std::is_nothrow_copy_constructible_v<A> );
>    static_assert( std::is_trivially_destructible_v<A> );
> -  static_assert( std::is_same_v<A::value_type, int> );
> -  static_assert( std::is_same_v<A::difference_type, A::value_type> );
> +  static_assert( std::is_same_v<typename A::value_type, std::remove_cv_t<T>> );
> +  static_assert( std::is_same_v<typename A::difference_type, typename A::value_type> );
>    static_assert( !std::is_copy_assignable_v<A> );
>    static_assert( !std::is_move_assignable_v<A> );
>  }
>
> +template <class T>
>  void
> -test03()
> +test_floating_point()
>  {
> -  using A = std::atomic_ref<double>;
> +  using A = std::atomic_ref<T>;
>    static_assert( std::is_standard_layout_v<A> );
>    static_assert( std::is_nothrow_copy_constructible_v<A> );
>    static_assert( std::is_trivially_destructible_v<A> );
> -  static_assert( std::is_same_v<A::value_type, double> );
> -  static_assert( std::is_same_v<A::difference_type, A::value_type> );
> +  static_assert( std::is_same_v<typename A::value_type, std::remove_cv_t<T>> );
> +  static_assert( std::is_same_v<typename A::difference_type, typename A::value_type> );
>    static_assert( !std::is_copy_assignable_v<A> );
>    static_assert( !std::is_move_assignable_v<A> );
>  }
>
> +template <class T>
>  void
> -test04()
> +test_pointer()
>  {
> -  using A = std::atomic_ref<int*>;
> +  using A = std::atomic_ref<T>;
>    static_assert( std::is_standard_layout_v<A> );
>    static_assert( std::is_nothrow_copy_constructible_v<A> );
>    static_assert( std::is_trivially_destructible_v<A> );
> -  static_assert( std::is_same_v<A::value_type, int*> );
> -  static_assert( std::is_same_v<A::difference_type, std::ptrdiff_t> );
> +  static_assert( std::is_same_v<typename A::value_type, std::remove_cv_t<T>> );
> +  static_assert( std::is_same_v<typename A::difference_type, std::ptrdiff_t> );
>    static_assert( std::is_nothrow_copy_constructible_v<A> );
>    static_assert( !std::is_copy_assignable_v<A> );
>    static_assert( !std::is_move_assignable_v<A> );
>  }
> +
> +int
> +main()
> +{
> +  struct X { int c; };
> +  test_generic<X>();
> +  test_generic<const X>();
> +  test_generic<volatile X>();
> +  test_generic<const volatile X>();
> +
> +  // atomic_ref excludes (cv) `bool` from the set of integral types
> +  test_generic<bool>();
> +  test_generic<const bool>();
> +  test_generic<volatile bool>();
> +  test_generic<const volatile bool>();
> +
> +  test_integral<int>();
> +  test_integral<const int>();
> +  test_integral<volatile int>();
> +  test_integral<const volatile int>();
> +
> +  test_floating_point<double>();
> +  test_floating_point<const double>();
> +  test_floating_point<volatile double>();
> +  test_floating_point<const volatile double>();
> +
> +  test_pointer<int*>();
> +  test_pointer<int* const>();
> +  test_pointer<int* volatile>();
> +  test_pointer<int* const volatile>();
> +}
> diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/requirements_neg.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/requirements_neg.cc
> new file mode 100644
> index 00000000000..8b0abbde023
> --- /dev/null
> +++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/requirements_neg.cc
> @@ -0,0 +1,34 @@
> +// { dg-do compile { target c++20 } }
> +
> +#include <atomic>
> +
> +template<size_t N>
> +struct NonTrivial
> +{
> +  NonTrivial() = default;
> +  NonTrivial(NonTrivial const&) { };
> +};
> +
> +template<size_t N>
> +NonTrivial<N> ntv;
> +
> +std::atomic_ref<NonTrivial<0>> nt(ntv<0>); // { dg-error "here" }
> +std::atomic_ref<const NonTrivial<1>> cnt(ntv<1>); // { dg-error "here" }
> +std::atomic_ref<volatile NonTrivial<2>> vnt(ntv<2>); // { dg-error "here" }
> +std::atomic_ref<const volatile NonTrivial<3>> cvnt(ntv<3>); // { dg-error "here" }
> +
> +template<size_t N>
> +struct NonLockFree
> +{
> +  char c[1024 + N];
> +};
> +
> +template<size_t N>
> +NonLockFree<N> nlfv;
> +
> +std::atomic_ref<NonLockFree<0>> nlf(nlfv<0>);
> +std::atomic_ref<const NonLockFree<1>> cnlf(nlfv<1>);
> +std::atomic_ref<volatile NonLockFree<2>> vnlf(nlfv<2>); // { dg-error "here" }
> +std::atomic_ref<const volatile NonLockFree<3>> cvnlf(nlfv<3>); // { dg-error "here" }
> +
> +// { dg-error "static assertion failed" "" { target *-*-* } 0 }
> diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/wait_notify.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/wait_notify.cc
> index ecabeecd5bb..db20a197ed0 100644
> --- a/libstdc++-v3/testsuite/29_atomics/atomic_ref/wait_notify.cc
> +++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/wait_notify.cc
> @@ -41,6 +41,16 @@ template<typename S>
>          });
>        a.wait(va);
>        t.join();
> +
> +      std::atomic_ref<const S> b{ aa };
> +      b.wait(va);
> +      std::thread t2([&]
> +        {
> +         a.store(va);
> +         a.notify_one();
> +        });
> +      b.wait(vb);
> +      t2.join();
>      }
>    }
>
> --
> 2.51.0
>
>
