jasonmolenda wrote:

I built the swiftlang `rebranch` sources three ways: clean, with the parallel 
patch, and with the parallel plus preload patch.  I ran each against the same 
Slack binary with 934 binary images, 10 of them outside the shared cache.  I 
built them with something like
```
swift/utils/build-script --lldb --release --build-subdir=build.parallel -- 
--bootstrapping=hosttools 
--extra-cmake-options="-DPython3_EXECUTABLE=/usr/bin/python3" --libcxx 
--force-optimized-typechecker --skip-build-benchmarks 
--no-swift-stdlib-assertions --skip-test-swift --skip-test-cmark
```

(don't read too much into this set of options; it's just an old command I have 
that I copy & paste)

The clean sources:

```
% time build/Ninja-ReleaseAssert+stdlib-Release/lldb-macosx-arm64//bin/lldb -x 
-b -O 'settings set 
plugin.experimental.dynamic-loader.darwin.enable-parallel-image-load true' -o 
'pro att -n Slack' -o 'det'
6.461u 2.179s 0:10.67 80.8%     0+0k 0+0io 170pf+0w
6.496u 2.166s 0:10.60 81.6%     0+0k 0+0io 152pf+0w
6.455u 2.181s 0:10.62 81.2%     0+0k 0+0io 152pf+0w
```

and the two different patchsets (wall-clock time drops from ~10.6 seconds to ~5.7):

```
% time build/build.parallel/lldb-macosx-arm64//bin/lldb -x -b -O 'settings set 
plugin.experimental.dynamic-loader.darwin.enable-parallel-image-load true' -o 
'pro att -n Slack' -o 'det'
9.693u 3.935s 0:05.73 237.6%    0+0k 0+0io 171pf+0w
9.709u 3.941s 0:05.74 237.6%    0+0k 0+0io 152pf+0w
9.645u 3.913s 0:05.69 238.1%    0+0k 0+0io 152pf+0w

% time build/build.parallel-with-preload//lldb-macosx-arm64//bin/lldb -x -b -O 
'settings set 
plugin.experimental.dynamic-loader.darwin.enable-parallel-image-load true' -o 
'pro att -n Slack' -o 'det'
9.611u 3.914s 0:05.72 236.3%    0+0k 0+0io 171pf+0w
9.878u 3.871s 0:05.71 240.6%    0+0k 0+0io 152pf+0w
9.634u 3.923s 0:05.78 234.4%    0+0k 0+0io 152pf+0w
```

We're seeing better parallelism (~170% CPU usage with llvm-project main, ~235% 
CPU usage with swiftlang rebranch), likely because swiftlang rebranch can 
demangle Swift mangled names, and that work is quite expensive and parallelizes 
well.
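
For reference, the shape of what's being parallelized here is roughly the 
following (a minimal sketch, not the actual patch; `ImageInfo` and 
`CreateModule` are hypothetical stand-ins for the per-image module creation 
step, and I'm assuming llvm's `DefaultThreadPool`/`ThreadPoolTaskGroup`):

```
#include "llvm/Support/ThreadPool.h"
#include <string>
#include <vector>

// Hypothetical stand-ins for the per-image work: create the Module,
// parse symbols, demangle names, etc.  Swift demangling makes this
// step expensive, which is why it parallelizes so well.
struct ImageInfo { std::string path; /* load address, etc. */ };
void CreateModule(const ImageInfo &image);

void LoadImages(const std::vector<ImageInfo> &images) {
  llvm::DefaultThreadPool pool; // sizes itself to the host's cores
  llvm::ThreadPoolTaskGroup group(pool);
  for (const ImageInfo &image : images)
    group.async([&image] { CreateModule(image); });
  group.wait(); // block until all images are processed
}
```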

One small thing that I'm not thrilled about is that we currently lose the 
progress updates; I don't know if the progress update system was really 
intended to handle this.  Instead of seeing a notification for each binary, we 
see a notification for the first binary that starts, and the notifications for 
the other binaries processed while it is in flight are lost.  On my system, 
the llvm thread pool runs 9 worker threads.  For a local file load the 
notifications go by so quickly that it isn't much of a loss, but if module 
creation is slower (say, we're reading the binaries out of memory from an iOS 
device without an expanded shared cache, or we're able to find & load DWARF 
for all the binaries), losing the updates is not great.  I don't know what the 
best approach is here, or whether we just accept that difference.
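
One direction that might preserve the updates (just a sketch, assuming the 
`Progress` API's optional total/detail arguments and that 
`Progress::Increment` is safe to call from multiple threads): have the loader 
own a single aggregate `Progress` with a known total and let the workers 
increment it with a per-image detail string, instead of each image creating 
its own short-lived `Progress`:

```
#include "lldb/Core/Progress.h"
#include "llvm/Support/ThreadPool.h"
#include <vector>

// Reusing the hypothetical ImageInfo/CreateModule from the sketch above.
void LoadImagesWithProgress(const std::vector<ImageInfo> &images) {
  // One aggregate report with a known total, instead of one
  // short-lived Progress per image on each worker thread.
  lldb_private::Progress progress("Loading binary images",
                                  /*details=*/"", images.size());
  llvm::DefaultThreadPool pool;
  llvm::ThreadPoolTaskGroup group(pool);
  for (const ImageInfo &image : images)
    group.async([&progress, &image] {
      CreateModule(image);
      progress.Increment(1, image.path); // per-image detail survives
    });
  group.wait();
}
```

That would at least keep a per-image signal flowing to listeners, though it 
doesn't answer whether the progress system was meant to handle overlapping 
reports from multiple threads in the first place.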

I haven't started looking at the code changes themselves yet; so far I've just 
been playing around with them to see how the behavior works.

https://github.com/llvm/llvm-project/pull/110439