http://gcc.gnu.org/bugzilla/show_bug.cgi?id=51618

--- Comment #3 from Jonathan Wakely <redi at gcc dot gnu.org> 2011-12-19 13:13:32 UTC ---
(In reply to comment #2)
> I'm confused.  IIUC even shared_futures aren't supposed to be accessed
> concurrently from multiple threads.  Why would multiple threads be
> accessing a single plain future?

future::wait() and shared_future::get() are const member functions, so they must
not introduce data races when called concurrently from multiple threads (17.6.5.9
[res.on.data.races]). It doesn't matter _why_ someone might want to do that:
they're not forbidden from doing so, and therefore we have to provide some level
of internal synchronization to prevent data races.
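For illustration only, here is a minimal sketch of the kind of usage the standard
permits: two threads touching the same shared_future object purely through its
const member functions. The lambda and the use of a deferred async are just made
up for the example; the point is that [res.on.data.races] requires the library
not to introduce a data race here.

#include <future>
#include <thread>
#include <iostream>

int main()
{
    // A deferred task: the callable only runs when a waiting function is called.
    std::future<int> f = std::async(std::launch::deferred, [] { return 42; });
    std::shared_future<int> sf = f.share();

    // Both threads call const member functions (get()/wait()) on the SAME
    // shared_future object.  The library must ensure this does not race,
    // so some internal synchronization is needed even for deferred functions.
    std::thread t1([&sf] { std::cout << sf.get() << '\n'; });
    std::thread t2([&sf] { sf.wait(); });

    t1.join();
    t2.join();
}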

I'm not claiming the current implementation is optimal, but I don't think we can
just turn off all synchronization for deferred functions, and identifying whether
we can make specific changes for deferred functions is not high on my priority
list.  I'd rather spend time replacing the mutex-protected
std::unique_ptr<Result_base> with a std::atomic<Result_base*>, which could
benefit all uses of futures rather than just the deferred async case. Although
maybe sub-optimal, the current mutex-based implementation has the advantage of
(relative) simplicity: the unique_ptr manages the result's lifetime, and the
mutex is also used by the condition_variable that signals when the result
becomes ready.
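To make the "one mutex does double duty" point concrete, here is a rough sketch
of a shared state along the lines described above. This is not the actual
libstdc++ code; the names (SharedState, ResultBase, set_result, etc.) are
hypothetical, but it shows why the design is simple: the same mutex that guards
the unique_ptr is the one the condition_variable waits on.

#include <condition_variable>
#include <memory>
#include <mutex>

struct ResultBase { virtual ~ResultBase() = default; };  // hypothetical result type

// Hypothetical sketch of a future's shared state: one mutex protects the
// result pointer and is also used by the condition_variable that signals
// when the result becomes ready.
struct SharedState
{
    std::mutex mtx;
    std::condition_variable ready_cv;
    std::unique_ptr<ResultBase> result;   // lifetime managed by the unique_ptr

    void set_result(std::unique_ptr<ResultBase> r)
    {
        {
            std::lock_guard<std::mutex> lock(mtx);
            result = std::move(r);        // publish the result under the lock
        }
        ready_cv.notify_all();            // wake any threads blocked in wait()
    }

    ResultBase& wait()
    {
        std::unique_lock<std::mutex> lock(mtx);
        ready_cv.wait(lock, [this] { return result != nullptr; });
        return *result;
    }
};

Replacing the pointer with a std::atomic<Result_base*>, as suggested above, would
avoid taking the lock on the fast path where the result is already ready, at the
cost of managing the result's lifetime and the readiness signalling separately.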

If you have specific suggestions for improving anything then I'll be happy to
hear them, but don't expect me to work on this otherwise, sorry.
