ymandel added a comment.

Aaron, Salman,

We've seen a serious performance regression caused by this patch. The issue is 
that the new code scans every file in its entirety for every single diagnostic 
encountered in that file. In fact, each line is both copied and scanned (two 
passes per line), so the cost is on the order of O(n * m), where n is the total 
size of the files and m is the number of diagnostics, with a high constant factor.
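To make the pattern concrete, here is a simplified, hypothetical sketch of the 
shape of the problem (not the actual code from this patch): producing the 
snippet for each diagnostic re-splits the entire file buffer, so m diagnostics 
against a buffer of n bytes cost O(n * m).

  #include <string>
  #include <vector>

  struct Diagnostic { unsigned Line; std::string Message; };

  // Splits the entire buffer into lines on every call.
  static std::vector<std::string> splitIntoLines(const std::string &Buffer) {
    std::vector<std::string> Lines;
    std::string Current;
    for (char C : Buffer) {
      if (C == '\n') {
        Lines.push_back(Current); // each line is copied...
        Current.clear();
      } else {
        Current.push_back(C);     // ...after being scanned byte by byte
      }
    }
    Lines.push_back(Current);
    return Lines;
  }

  // Called once per diagnostic: the whole file is scanned again each time.
  std::string snippetFor(const std::string &FileBuffer, const Diagnostic &D) {
    std::vector<std::string> Lines = splitIntoLines(FileBuffer);
    return D.Line <= Lines.size() ? Lines[D.Line - 1] : std::string();
  }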

Can we roll back this patch until a fix is developed? At the very least, we'd 
request that this feature be disabled by default, either behind a flag or in the 
preprocessor. Additionally, we'd ask you to reconsider the design so that each 
file is processed at most once (e.g., via a cache); a rough sketch of that shape 
is below. We'd be happy to review the patches.
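As an assumption about what such a design could look like (a minimal sketch, not 
a concrete proposal for the Clang code itself): compute line offsets once per 
file and reuse them for every diagnostic, making the total work O(n + m) rather 
than O(n * m).

  #include <cstddef>
  #include <map>
  #include <string>
  #include <vector>

  class LineOffsetCache {
    // Maps a file name to the byte offset at which each of its lines starts.
    std::map<std::string, std::vector<size_t>> Cache;

  public:
    // Scans the buffer only the first time a given file is seen.
    const std::vector<size_t> &offsetsFor(const std::string &FileName,
                                          const std::string &Buffer) {
      auto It = Cache.find(FileName);
      if (It != Cache.end())
        return It->second;
      std::vector<size_t> Offsets = {0};
      for (size_t I = 0; I < Buffer.size(); ++I)
        if (Buffer[I] == '\n')
          Offsets.push_back(I + 1);
      return Cache.emplace(FileName, std::move(Offsets)).first->second;
    }
  };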

Thanks!


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D108560/new/

https://reviews.llvm.org/D108560
