https://gcc.gnu.org/bugzilla/show_bug.cgi?id=115444

--- Comment #12 from GCC Commits <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by Jonathan Wakely <r...@gcc.gnu.org>:

https://gcc.gnu.org/g:7ed561f63e7955df4d194669998176df5ef47803

commit r15-4475-g7ed561f63e7955df4d194669998176df5ef47803
Author: Jonathan Wakely <jwak...@redhat.com>
Date:   Thu Jun 27 13:01:18 2024 +0100

    libstdc++: Inline memmove optimizations for std::copy etc. [PR115444]

    This removes all the __copy_move class template specializations that
    decide how to optimize std::copy and std::copy_n. We can inline those
    optimizations into the algorithms, using if-constexpr (and macros for
    C++98 compatibility) and remove the code dispatching to the various
    class template specializations.
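    The shape of the change can be sketched as follows. This is an illustrative
    reduction with invented names, not libstdc++'s actual code: the dispatch that
    used to live in __copy_move specializations becomes a plain if-constexpr
    branch inside the algorithm itself.

```cpp
#include <cstddef>
#include <cstring>
#include <type_traits>

// Hypothetical sketch (not libstdc++'s real implementation): choose between
// memmove and an element-wise loop with if-constexpr instead of dispatching
// to class template specializations.
template<typename T>
T* copy_sketch(const T* first, const T* last, T* result)
{
    if constexpr (std::is_trivially_assignable<T&, const T&>::value) {
        const std::size_t n = last - first;
        if (n != 0) // avoid calling memmove with a length of zero on edge cases
            std::memmove(result, first, n * sizeof(T));
        return result + n;
    } else {
        for (; first != last; ++first, ++result)
            *result = *first;
        return result;
    }
}
```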

    Doing this means we implement the optimization directly for std::copy_n
    instead of deferring to std::copy. That avoids the unwanted consequence
    of advancing the iterator in copy_n only to take the difference later to
    get back to the length that we already had in copy_n originally (as
    described in PR 115444).
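    In other words, for the pointer case copy_n can hand its count straight to
    memmove. A minimal sketch of that idea (invented name, not the library's
    actual helper):

```cpp
#include <cstddef>
#include <cstring>

// Hypothetical sketch: copy_n keeps the length it was given and uses it
// directly, instead of deferring to copy() and recomputing last - first
// from an advanced iterator as described above.
template<typename T>
T* copy_n_sketch(const T* first, std::size_t n, T* result)
{
    if (n != 0)
        std::memmove(result, first, n * sizeof(T));
    return result + n;
}
```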

    With the new flattened implementations, we can also lower contiguous
    iterators to pointers in std::copy/std::copy_n/std::copy_backward, so
    that they benefit from the same memmove optimizations as pointers.
    There's a subtlety though: contiguous iterators can potentially throw
    exceptions to exit the algorithm early.  So we can only transform the
    loop to memmove if dereferencing the iterator is noexcept. We don't
    check that incrementing the iterator is noexcept because we advance the
    contiguous iterators before using memmove, so that if incrementing would
    throw, that happens first. I am writing a proposal (P3349R0) which would
    make this unnecessary, so I hope we can drop the nothrow requirements
    later.

    This change also solves PR 114817 by checking is_trivially_assignable
    before optimizing copy/copy_n etc. to memmove. It's not enough to check
    that the types are trivially copyable (a precondition for using memmove
    at all), we also need to check that the specific assignment that would
    be performed by the algorithm is also trivial. Replacing a non-trivial
    assignment with memmove would be observable, so not allowed.
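    An illustrative case of that distinction (not the PR's exact test case): a
    type can be trivially copyable while a particular assignment the algorithm
    would perform is non-trivial, so is_trivially_copyable alone is not a safe
    gate for the memmove optimization.

```cpp
#include <type_traits>

// Hypothetical example type: the implicit copy operations are trivial, but
// assignment from int runs user code with an observable side effect.
struct Cell {
    int value;
    Cell& operator=(int v) { value = v + 1; return *this; }
};

static_assert(std::is_trivially_copyable<Cell>::value,
              "implicit copy operations are still trivial");
static_assert(std::is_trivially_assignable<Cell&, const Cell&>::value,
              "Cell = Cell uses the trivial copy assignment");
static_assert(!std::is_trivially_assignable<Cell&, int>::value,
              "Cell = int runs the user-defined operator=, not memmove");
```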

    libstdc++-v3/ChangeLog:

            PR libstdc++/115444
            PR libstdc++/114817
            * include/bits/stl_algo.h (__copy_n): Remove generic overload
            and overload for random access iterators.
            (copy_n): Inline generic version of __copy_n here. Do not defer
            to std::copy for random access iterators.
            * include/bits/stl_algobase.h (__copy_move): Remove.
            (__nothrow_contiguous_iterator, __memcpyable_iterators): New
            concepts.
            (__assign_one, _GLIBCXX_TO_ADDR, _GLIBCXX_ADVANCE): New helpers.
            (__copy_move_a2): Inline __copy_move logic and conditional
            memmove optimization into the most generic overload.
            (__copy_n_a): Likewise.
            (__copy_move_backward): Remove.
            (__copy_move_backward_a2): Inline __copy_move_backward logic and
            memmove optimization into the most generic overload.
            * testsuite/20_util/specialized_algorithms/uninitialized_copy/114817.cc:
            New test.
            * testsuite/20_util/specialized_algorithms/uninitialized_copy_n/114817.cc:
            New test.
            * testsuite/25_algorithms/copy/114817.cc: New test.
            * testsuite/25_algorithms/copy/115444.cc: New test.
            * testsuite/25_algorithms/copy_n/114817.cc: New test.

    Reviewed-by: Patrick Palka <ppa...@redhat.com>
