https://gcc.gnu.org/bugzilla/show_bug.cgi?id=116285
--- Comment #8 from Sam James <sjames at gcc dot gnu.org> ---
(In reply to Andi Kleen from comment #2)
> 0.94% of the cycles are iterative_hash, so you might get another slight
> improvement from https://github.com/andikleen/gcc/commits/rapidhash-1
> which switches the hash function to something more modern
> (still looking for supporting data that it actually helps)

It actually seems worse, but maybe bad luck?

  7.14%  cc1plus  cc1plus    [.] get_class_binding_direct
  4.52%  cc1plus  cc1plus    [.] hash_table<default_hash_traits<tree_node*>, false, xcallocator>::find_slot_with_hash
  2.88%  cc1plus  cc1plus    [.] ggc_internal_alloc_no_dtor
  2.74%  cc1plus  libc.so.6  [.] __memset_avx2_unaligned_erms
  2.67%  cc1plus  cc1plus    [.] hash_table<int_cst_hasher, false, xcallocator>::find_slot_with_hash
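
For reference, here is a minimal, self-contained sketch of the kind of
microbenchmark one might use to compare two hash functions in isolation
before swapping them inside the compiler.  The hash_a/hash_b bodies, key
sizes and repetition counts below are made-up placeholders, not GCC's
iterative_hash nor the rapidhash from Andi's branch; whole-compiler wall
time (or perf profiles as above) is still what ultimately matters, since
a function taking ~1% of cycles bounds the possible gain.

#include <chrono>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <initializer_list>
#include <vector>

/* Stand-in "old" hash: a simple byte-at-a-time FNV-1a-style mix
   (NOT GCC's actual iterative_hash).  */
static uint32_t
hash_a (const unsigned char *p, std::size_t len, uint32_t seed)
{
  uint32_t h = seed ^ 0x811c9dc5u;
  for (std::size_t i = 0; i < len; i++)
    h = (h ^ p[i]) * 0x01000193u;
  return h;
}

/* Stand-in "new" hash: an 8-bytes-at-a-time multiply mix
   (NOT the real rapidhash).  */
static uint32_t
hash_b (const unsigned char *p, std::size_t len, uint32_t seed)
{
  uint64_t h = seed;
  std::size_t i = 0;
  for (; i + 8 <= len; i += 8)
    {
      uint64_t w;
      std::memcpy (&w, p + i, 8);
      h = (h ^ w) * 0x9e3779b97f4a7c15ull;
    }
  for (; i < len; i++)
    h = (h ^ p[i]) * 0x9e3779b97f4a7c15ull;
  return (uint32_t) (h ^ (h >> 32));
}

int
main ()
{
  /* Keys roughly the size of small identifiers; 100k keys hashed 100 times.  */
  std::vector<std::vector<unsigned char>> keys
    (100000, std::vector<unsigned char> (32, 0xab));

  for (auto fn : { hash_a, hash_b })
    {
      auto start = std::chrono::steady_clock::now ();
      uint32_t acc = 0;
      for (int rep = 0; rep < 100; rep++)
        for (const auto &k : keys)
          acc ^= fn (k.data (), k.size (), acc);
      auto stop = std::chrono::steady_clock::now ();
      std::chrono::duration<double, std::milli> ms = stop - start;
      std::printf ("%.2f ms  (acc=%u)\n", ms.count (), acc);
    }
  return 0;
}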