On Fri, 28 Nov 2025 at 11:38, Tomasz Kaminski <[email protected]> wrote:
>
>
>
> On Fri, Nov 28, 2025 at 12:33 PM Jonathan Wakely <[email protected]> wrote:
>>
>> On Fri, 28 Nov 2025 at 08:53, Tomasz Kaminski <[email protected]> wrote:
>> >
>> >
>> >
>> > On Thu, Nov 27, 2025 at 5:38 PM Jonathan Wakely <[email protected]> wrote:
>> >>
>> >> On Thu, 27 Nov 2025 at 16:01, Tomasz Kaminski <[email protected]> wrote:
>> >> > A few lines before, we are passing a wait_clock_t::duration holding
>> >> > the time since the epoch; wouldn't it make more sense, and make the
>> >> > implementation cleaner, if we passed wait_clock_t::time_point instead?
>> >> > We have different types for duration and time_point for a reason.
>> >>
>> >> I should add a comment about that. Passing a duration instead of a
>> >> time_point means that the choice of clock isn't baked into the ABI.
>> >> We could use the same entry point into the library to pass a
>> >> system_clock::time_point if we assigned a new bit in the __wait_flags
>> >> structure to mean it uses the system_clock rather than the
>> >> steady_clock. Or we could use a flag to say that it's a relative time
>> >> (a duration) rather than an absolute time (a time_point).
>> >
>> > We still bake the choice of clock period into the API.
>>
>> Yes, but choosing int64_t representing nanoseconds for a timeout
>> doesn't seem like it over-constrains anything, and it's unlikely to be
>> a decision we regret.
>
> I am not referring to the particular duration specialization used, but to
> the fact that the parameter uses wait_clock_t::duration, so if the
> wait_clock_t clock is changed, the parameter type will automatically
> change, and from what you said we do not want that. Having a separate
> wait_epoch_time_t pointing directly to that specialization would make
> the intent clear to me.
Well, if we changed the wait_clock we'd have to update the signature,
otherwise it would be an ABI break and we'd get a 'make check-abi'
failure, so I'm not too worried about that. Either we add a typedef
now and maybe never need it, or we make a change later if we need to
avoid an ABI test failure.
I might as well just make the signature use chrono::nanoseconds, since
that's what it is.
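To illustrate the point (with hypothetical names, not the real libstdc++
internals): because the exported function takes a plain chrono::nanoseconds
value, the clock choice lives entirely in the inline caller, and the
exported symbol's type never names a clock. A minimal sketch:

```cpp
#include <cassert>
#include <chrono>

// Hypothetical sketch: "wait_clock" and the function names below are
// illustrative stand-ins for the library's internal names.
using wait_clock = std::chrono::steady_clock;

// Exported-style entry point: only sees "nanoseconds since some epoch".
// A future flag bit could select a different clock without changing
// this signature, so the clock choice isn't baked into the ABI.
long long wait_until_impl(std::chrono::nanoseconds timeout)
{
  wait_clock::time_point deadline{
      std::chrono::duration_cast<wait_clock::duration>(timeout)};
  return std::chrono::duration_cast<std::chrono::nanoseconds>(
      deadline.time_since_epoch()).count();
}

// Inline caller: converts whatever time_point the user passed into
// nanoseconds since the wait clock's epoch before crossing the ABI
// boundary, mirroring the chrono::ceil conversion in the real header.
template<typename Clock, typename Dur>
long long wait_until(std::chrono::time_point<Clock, Dur> atime)
{
  auto ns = std::chrono::ceil<std::chrono::nanoseconds>(
      atime.time_since_epoch());
  return wait_until_impl(ns);
}
```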
So something like this:
diff --git a/libstdc++-v3/include/bits/atomic_timed_wait.h b/libstdc++-v3/include/bits/atomic_timed_wait.h
index 5b3158050668..439b8fafdcf1 100644
--- a/libstdc++-v3/include/bits/atomic_timed_wait.h
+++ b/libstdc++-v3/include/bits/atomic_timed_wait.h
@@ -75,9 +75,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
return chrono::ceil<__w_dur>(__atime);
}
+  // This uses a nanoseconds duration for the timeout argument.
+  // For __abi_version=0 that is the time since the steady_clock's epoch.
+  // It's possible that in the future we will add new __wait_flags
+  // constants to indicate that the timeout is the time since the
+  // system_clock epoch, or is a relative timeout rather than an
+  // absolute time measured from any epoch.
__wait_result_type
__wait_until_impl(const void* __addr, __wait_args_base& __args,
- const __wait_clock_t::duration& __atime);
+ const chrono::nanoseconds& __atime);
template<typename _Clock, typename _Dur>
__wait_result_type
diff --git a/libstdc++-v3/src/c++20/atomic.cc b/libstdc++-v3/src/c++20/atomic.cc
index c5c900be9fb0..da9d0287ea88 100644
--- a/libstdc++-v3/src/c++20/atomic.cc
+++ b/libstdc++-v3/src/c++20/atomic.cc
@@ -647,7 +647,7 @@ __cond_wait_until(__condvar& __cv, mutex& __mx,
__wait_result_type
__wait_until_impl([[maybe_unused]] const void* __addr, __wait_args_base& __args,
- const __wait_clock_t::duration& __time)
+ const chrono::nanoseconds& __time)
{
const __wait_clock_t::time_point __atime(__time);
auto* __wait_addr = static_cast<const __platform_wait_t*>(__args._M_obj);
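The __wait_flags idea mentioned earlier in the thread could look roughly
like this. All names and bit values here are hypothetical, purely to show
how flag bits could reinterpret the same nanoseconds argument without
changing the exported signature:

```cpp
#include <cassert>
#include <chrono>

// Hypothetical flag bits (illustrative, not libstdc++ internals):
// they say how the nanoseconds argument should be interpreted, so one
// exported entry point can serve several clocks and relative timeouts.
enum wait_flags : unsigned {
  steady_epoch     = 0,       // default: time since steady_clock's epoch
  system_epoch     = 1u << 0, // time since system_clock's epoch
  relative_timeout = 1u << 1, // a relative timeout, tied to no epoch
};

// Returns 0 for steady-epoch, 1 for system-epoch, 2 for relative,
// without the signature ever naming a clock type.
int interpret_timeout(unsigned flags, std::chrono::nanoseconds /*timeout*/)
{
  if (flags & relative_timeout)
    return 2;
  if (flags & system_epoch)
    return 1;
  return 0;
}
```

The key design point is that only the flag word changes meaning; the
symbol's type, and hence the ABI, stays fixed.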