teemperor added a comment.

(Seems like my previous comment was cut off in the middle for some reason?)

We could also just let the allocator take a parameter that controls how often it increases the growth size, so that it grows every Nth slab (N=1 for us), and set a slightly larger starting size.

By the way, if I look at the flame graph for LLDB starting up and setting a breakpoint in an LLDB debug binary <https://teemperor.de/lldb-bench/data/extern-lldb-bt.svg>, it seems that on the benchmark server we spend less than 0.5% of the startup time in that allocator function and much more time in the other ConstString overhead (the double-hashing to find the StringPool, the different locking). I'm curious why the allocator logic is so disproportionately slow on your setup (0.5% vs. 10-20%). The only real work we do in the allocator is calling malloc, so I assume calling malloc is much more expensive on your system?

> Can you easily benchmark with different numbers?

Sadly not, but just running LLDB to attach to an LLDB debug build and running this lldb command list <https://github.com/Teemperor/lldb-bench/blob/master/benchmarks/extern-lldb-bt/commands.lldb> should replicate the most important benchmark.

Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D68549/new/

https://reviews.llvm.org/D68549
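
To make the growth-parameter idea above concrete, here is a minimal sketch of a bump allocator whose slab size doubles every GrowthDelay-th slab and starts from a configurable size. The names (GrowingBumpAllocator, GrowthDelay, StartingSlabSize) are made up for illustration and this is not the real llvm::BumpPtrAllocator interface; the real BumpPtrAllocatorImpl already takes a slab-size template parameter, the sketch just adds a growth-frequency knob on top of that idea.

  #include <cstddef>
  #include <cstdint>
  #include <cstdio>
  #include <cstdlib>
  #include <vector>

  // Hypothetical sketch: slab size doubles every GrowthDelay-th slab
  // (GrowthDelay == 1 means every new slab is twice as large as the
  // previous one), starting from StartingSlabSize.
  template <size_t StartingSlabSize = 4096, size_t GrowthDelay = 1>
  class GrowingBumpAllocator {
  public:
    void *Allocate(size_t Size, size_t Alignment) {
      // Align the current bump pointer.
      size_t Adjust = (Alignment - (CurPtr % Alignment)) % Alignment;
      if (CurSlab == nullptr || CurPtr + Adjust + Size > CurEnd) {
        StartNewSlab(Size);
        Adjust = 0; // fresh malloc'd slabs are assumed suitably aligned here
      }
      void *Result = reinterpret_cast<void *>(CurPtr + Adjust);
      CurPtr += Adjust + Size;
      return Result;
    }

    ~GrowingBumpAllocator() {
      for (void *Slab : Slabs)
        std::free(Slab);
    }

  private:
    void StartNewSlab(size_t MinSize) {
      // Double the slab size every GrowthDelay-th slab.
      size_t SlabSize = StartingSlabSize << (Slabs.size() / GrowthDelay);
      if (SlabSize < MinSize)
        SlabSize = MinSize;
      void *Slab = std::malloc(SlabSize);
      Slabs.push_back(Slab);
      CurSlab = Slab;
      CurPtr = reinterpret_cast<uintptr_t>(Slab);
      CurEnd = CurPtr + SlabSize;
    }

    std::vector<void *> Slabs;
    void *CurSlab = nullptr;
    uintptr_t CurPtr = 0, CurEnd = 0;
  };

  int main() {
    GrowingBumpAllocator<4096, 1> A;
    for (int i = 0; i < 1000; ++i)
      A.Allocate(256, 8);
    std::printf("allocated 1000 entries across a handful of slabs\n");
  }

With GrowthDelay=1 the number of malloc calls grows only logarithmically with the total bytes allocated, which is the point: the malloc cost you are seeing should mostly disappear after the first few slabs.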
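
For the ConstString overhead mentioned above (hashing once to pick the sub-pool, taking a per-pool lock, then hashing again inside the map), a generic sketch of that sharded-pool pattern looks roughly like the following. This is not LLDB's actual ConstString code; the class and member names are invented for illustration, and it just shows where the double hash and the locking come from.

  #include <array>
  #include <shared_mutex>
  #include <string>
  #include <unordered_set>

  // Generic sharded string pool (requires C++17 for std::shared_mutex).
  class ShardedStringPool {
    struct Shard {
      std::shared_mutex Mutex;
      std::unordered_set<std::string> Strings;
    };
    std::array<Shard, 256> Shards;

  public:
    // Returns a stable pointer to the pooled copy of Str.
    const std::string *Intern(const std::string &Str) {
      // First hash: pick the shard.
      Shard &S = Shards[std::hash<std::string>{}(Str) % Shards.size()];
      {
        // Fast path: shared lock, string may already be pooled.
        std::shared_lock<std::shared_mutex> Lock(S.Mutex);
        auto It = S.Strings.find(Str); // second hash, inside the set
        if (It != S.Strings.end())
          return &*It;
      }
      // Slow path: exclusive lock and insert (hashes again).
      std::unique_lock<std::shared_mutex> Lock(S.Mutex);
      return &*S.Strings.insert(Str).first;
    }
  };

  int main() {
    ShardedStringPool Pool;
    const std::string *A = Pool.Intern("hello");
    const std::string *B = Pool.Intern("hello");
    return A == B ? 0 : 1; // same pooled pointer both times
  }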
We could also just let the allocator take a parameter so that is increases the growth size to do it every Nth slab (N=1 for us) and set a slightly larger starting size. By the way, if I look at the flame graph for lldb starting up and setting a break point in an LLDB debug binary <https://teemperor.de/lldb-bench/data/extern-lldb-bt.svg>, it seems that on the benchmark server we spent less than 0.5% of the startup time in that allocator function and much more time in that other ConstString overhead (the double-hashing to find the StringPool, the different locking). I'm curious why the allocator logic is so disproportionally slow on your setup (0.5% vs 10-20%). The only real work we do in the allocator is calling malloc, so I assume calling malloc is much more expensive on your system? > Can you easily benchmark with different numbers? Sadly not, but just running LLDB to attach to a LLDB debug build and running this lldb command list <https://github.com/Teemperor/lldb-bench/blob/master/benchmarks/extern-lldb-bt/commands.lldb> should replicate the most important benchmark. Repository: rG LLVM Github Monorepo CHANGES SINCE LAST ACTION https://reviews.llvm.org/D68549/new/ https://reviews.llvm.org/D68549 _______________________________________________ lldb-commits mailing list lldb-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits