aganea added a subscriber: maniccoder.
aganea added a comment.

In D86694#2274682 <https://reviews.llvm.org/D86694#2274682>, @cryptoad wrote:

> In D86694#2274548 <https://reviews.llvm.org/D86694#2274548>, @russell.gallop 
> wrote:
>
>> I guess using scudo as a general purpose allocator that could set a limit on 
>> the number of cores that can be used at once as @aganea found. Would there 
>> be any problem with making this very small (e.g. a couple of GB)?
>
> You can reduce the size of the Primary for Windows with an additional define 
> in the platform file. You probably want to make sure there is at least a few 
> gigs per region (eg: the total size could be 256G).
>
> Once again the memory is reserved but not committed, and this is on a 
> per-process basis. There shouldn't be more than one Primary, and as such we 
> shouldn't run out of VA space. We could possibly run out of memory if we 
> allocate past the amount of RAM (+swap), but this is the committed amount.

I think reserving 4 TB hits a pathological case in the Windows NT kernel, where 
for some reason the application's VAD (Virtual Address Descriptor) tree is not 
cleared right away, but deferred to the system zero thread. Since that thread 
is low-priority, the VAD trees accumulate in the "Active List" until it hits 
the physical memory limit; then it goes to swap, and at some point the 
`VirtualAlloc` calls in the application fail. This looks like an edge case 
that wouldn't happen normally (lots of applications that start, reserve several 
TB of vRAM, then shut down).
+ @maniccoder

F12971303: linking_clang_with_lld_thinlto_scudo_hardware_crc8.PNG 
<https://reviews.llvm.org/F12971303>

As soon as I pause the `ninja check-llvm` tests (by blocking the console 
output), the zero thread now has more time and the free memory goes down again.

F12971361: image.png <https://reviews.llvm.org/F12971361>

@cryptoad What would happen if the Primary were much smaller? Or if pages were 
//reserved// in much smaller ranges?

In D86694#2271825 <https://reviews.llvm.org/D86694#2271825>, @russell.gallop 
wrote:

>> (a hardware CRC or AES implementation will certainly help for Scudo)
>
> Actually this wasn't too hard to try out. I added "-msse4.2" to CMAKE_C_FLAGS 
> and CMAKE_CXX_FLAGS (as suggested in scudo_allocator.cpp). This helps, but 
> scudo is still a bit behind the default allocator for me on 6 cores.

This improves things a bit: wall clock goes from 145 sec before to 136 sec after.
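For reference, passing those flags at configure time would look something like the following (the exact invocation and source path are assumptions; adjust to your build setup):

```
cmake -G Ninja \
  -DCMAKE_C_FLAGS="-msse4.2" \
  -DCMAKE_CXX_FLAGS="-msse4.2" \
  ../llvm
```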

Scudo+options+hardware-crc32 - 5,997 cumulative seconds (all threads) - //before 
it was 6,337 seconds//
F12971472: linking_clang_with_lld_thinlto_scudo_hardware_crc.PNG 
<https://reviews.llvm.org/F12971472>

Time spent in the allocator itself:

Scudo+options - 761 cumulative seconds - //before it was 1,171 seconds//
F12971504: linking_clang_with_lld_thinlto_scudo_hardware_crc2.PNG 
<https://reviews.llvm.org/F12971504>

There's one more thing. I think the difference in performance, compared with 
competing allocators, is that Scudo still does some level of locking. As I 
mentioned previously, this doesn't scale on many-core machines for 
allocation-intensive applications like LLVM. Every thread waiting on a lock 
gives up its time slice to the OS scheduler, and that's bad for performance. 
The more cores there are, the more we wait for the lock to become free.
F12971551: linking_clang_with_lld_thinlto_scudo_hardware_crc5.PNG 
<https://reviews.llvm.org/F12971551>
Here you can see a Scudo-enabled LLD doing 10x more context switches than the 
Rpmalloc-enabled LLD:
F12971754: linking_clang_with_lld_thinlto_scudo_hardware_crc3.PNG 
<https://reviews.llvm.org/F12971754>

One thing in favor of Scudo, though, is that it commits memory in much 
smaller blocks than Rpmalloc (peak LLD commit is 5 GB for Scudo vs. 11 GB for 
Rpmalloc). Mimalloc employs the same kind of strategy, with similar benefits.
F12971632: linking_clang_with_lld_thinlto_scudo_hardware_crc4.PNG 
<https://reviews.llvm.org/F12971632>

@cryptoad Does the Scudo standalone version differ from this one in any of 
these aspects?


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D86694/new/

https://reviews.llvm.org/D86694

_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits