On Wed, 11 Feb 2026 at 02:04, Nathan Myers <[email protected]> wrote:
>
> Thank you.
>
> On 2/10/26 7:14 PM, Jonathan Wakely wrote:
> > On Tue, 10 Feb 2026 at 23:14, Nathan Myers <[email protected]> wrote:
> >>
> >> Implement the debug versions of new overloads from P2077.
> >>
> >> Also, make existing unordered_set<>::erase perform the same
> >> bucket check as erase on unordered_multimap.
> >> ...
> >> +# ifdef __glibcxx_heterogeneous_container_erasure
> >> +      // Note that for some types _Kt this may erase more than
> >> +      // one element, such as if _Kt::operator< checks only part
> >> +      // of the key.
> >> +      template <__heterogeneous_tree_key<map> _Kt>
> >> +       size_type
> >> +       erase(_Kt&& __x)
> >> +       {
> >> +         auto __victims = _Base::equal_range<_Kt>(__x);
> >> +         size_type __count = 0;
> >> +         for (auto __victim = __victims.first; __victim != __victims.second;)
> >> +           {
> >> +             this->_M_invalidate_if(_Equal(__victim));
> >> +             _Base::erase(__victim++);
> >> +             ++__count;
> >> +           }
> >
> > Would it be more efficient to do one call to:
> > _Base::erase(__victims.first, __victims.second)
> > so that the tree doesn't keep rebalancing?
>
> Like
>
>    for (auto __victim = __victims.first; __victim != __victims.second;
>         ++__victim)
>      {
>         this->_M_invalidate_if(_Equal(__victim));
>         ++__count;
>      }
>    _Base::erase(__victims.first, __victims.second);
>
> ? And do it in the existing overloads, too, and in multimap,
> set, and multiset?

Not in this patch; leave it consistent with the existing overloads.

But maybe as a later change. I've just checked the implementation of
_Rb_tree::_M_erase_aux(first, last) and it simply calls erase(first++)
in a loop. The only advantage is when [first, last) is the entire
container, i.e. [c.begin(), c.end()), in which case it just calls
c.clear().

So for c.erase(key) it won't make a lot of difference in practice.


>
> >> +         return __count;
> >> +       }
> >> +# endif
> >> ...
> >>        size_type
> >>         erase(const key_type& __key)
> >>         {
> >> -       size_type __ret(0);
> >> -       auto __pair = _Base::equal_range(__key);
> >> -       for (auto __victim = __pair.first; __victim != __pair.second;)
> >> +       size_type __count(0);
> >> +       size_type __bucket_count = this->bucket_count();
> >> +       auto __victims = _Base::equal_range(__key);
> >> +       for (auto __victim = __victims.first; __victim != __victims.second;)
> >>            {
> >>              _M_invalidate(__victim);
> >>              __victim = _Base::erase(__victim);
> >> -           ++__ret;
> >> +           ++__count;
> >>            }
> >> -
> >> -       return __ret;
> >> +       _M_check_rehashed(__bucket_count);
> >
> > Is this a bug fix? It's not mentioned in the commit message.
>
>      "Also, make existing unordered_set<>::erase perform the same
>      bucket check as erase on unordered_multimap."

Yes, but not in the ChangeLog part. That just says "Define overloads".

> It seemed like an unintentional omission.

Do we ever rehash on erase? I think it's only on insert. We're not
allowed to invalidate all iterators on erase:

"The erase members shall invalidate only iterators and references to
the erased elements, and preserve the relative order of the elements
that are not erased."

So I don't think we want these checks on any erase members. Unless I'm
missing something, please don't add them in this patch; we can remove
the existing ones separately.

> And I took the opportunity to regularize names.
>
> >> +       return __count;
> >>         }
> >>
>
