sammccall added a comment. Sorry about the delay on this, I said I'd prioritize it... had a slightly odd day.
Haven't reviewed the unit test yet (mostly a reminder to myself). Generally looks good...

================
Comment at: clang-tools-extra/clangd/ClangdServer.h:153
+  /// Whether to run PreamblePeer asynchronously.
+  bool AsyncPreambleBuilds = false;
----------------
Group with StorePreamblesInMemory.

This comment doesn't make much sense either - PreamblePeer describes the thread's relation to the main ASTWorker thread, but readers here won't know what it is. What about: "Reuse even stale preambles, and rebuild them in the background. This improves latency at the cost of accuracy."

================
Comment at: clang-tools-extra/clangd/TUScheduler.cpp:244
+  // If NextReq was requested with WantDiagnostics::Yes we cannot just drop
+  // that on the floor. We block the request trying to override the update
+  // instead of the caller forcing diagnostics to don't block in cases where
----------------
The first sentence makes sense, the code makes sense, but I can't parse the second sentence at all. Just "block until it's built"? And maybe something about why we won't deadlock, but the tests are enough I guess.

================
Comment at: clang-tools-extra/clangd/TUScheduler.cpp:292
   dlog("Preamble worker for {0} stopped", FileName);
+  BuiltFirst.notify();
 }
----------------
Add a comment: // will never build one, or so.

================
Comment at: clang-tools-extra/clangd/TUScheduler.cpp:551
                 Semaphore &Barrier, DebouncePolicy UpdateDebounce,
-                bool StorePreamblesInMemory, ParsingCallbacks &Callbacks) {
-  std::shared_ptr<ASTWorker> Worker(
-      new ASTWorker(FileName, CDB, IdleASTs, Barrier, /*RunSync=*/!Tasks,
-                    UpdateDebounce, StorePreamblesInMemory, Callbacks));
+                bool StorePreamblesInMemory, bool AsyncPreambleBuilds,
+                ParsingCallbacks &Callbacks) {
----------------
We're passing around enough loose booleans that we should consider having TUScheduler hold its Options struct, and pass it here by reference...
================
Comment at: clang-tools-extra/clangd/TUScheduler.cpp:660
       std::move(CompilerInvocationDiags), WantDiags);
+  // Block until first preamble is ready. As patching an empty preamble would
+  // imply rebuilding it from scratch.
----------------
"ready, as patching..."

================
Comment at: clang-tools-extra/clangd/TUScheduler.cpp:724
       std::move(Req.WantDiags));
+  // Set it after notifying ASTPeer about the preamble to prevent any races.
+  BuiltFirst.notify();
----------------
Hmm, we're notifying the ASTPeer, and then setting BuiltFirst, which is being waited on... by the ASTPeer. This suggests to me that the AST thread should own the notification, not the preamble thread. And in fact it already has a notification for the first preamble being built! Why can't we use that one?

================
Comment at: clang-tools-extra/clangd/TUScheduler.cpp:1133
 bool ASTWorker::blockUntilIdle(Deadline Timeout) const {
   std::unique_lock<std::mutex> Lock(Mutex);
----------------
I think this would be clearer as a loop... and maybe more correct.

================
Comment at: clang-tools-extra/clangd/TUScheduler.cpp:1146
+  Lock.lock();
+  assert(Requests.empty() && "Received new requests during blockUntilIdle");
+  // Finally make sure ASTWorker has processed all of the preamble updates.
----------------
Um... isn't that allowed? If new traffic means the queue doesn't go idle, I think we're supposed to return false, rather than crash.

================
Comment at: clang-tools-extra/clangd/TUScheduler.h:195
+
+  /// Whether to run PreamblePeer asynchronously. No-op if AsyncThreadsCount
+  /// is 0.
----------------
Nit: break before the second sentence?
================
Comment at: clang-tools-extra/clangd/tool/ClangdMain.cpp:421
+opt<bool> AsyncPreambleBuilds{
+    "async-preamble-builds",
+    cat(Misc),
----------------
I'd drop "builds" at least from the flag name, possibly throughout.

================
Comment at: clang-tools-extra/clangd/tool/ClangdMain.cpp:423
+    cat(Misc),
+    desc("Enable async preamble builds."),
+    init(false),
----------------
This isn't very descriptive. I'd use some variation of the comment suggested for ClangdServer::Options.

Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D80293/new/

https://reviews.llvm.org/D80293

_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
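Putting the two flag comments together, the declaration could read something like this — a sketch only, using the shortened name and the wording suggested for ClangdServer::Options; the final spelling is the author's call:

```cpp
opt<bool> AsyncPreamble{
    "async-preamble",
    cat(Misc),
    desc("Reuse even stale preambles, and rebuild them in the background. "
         "This improves latency at the cost of accuracy."),
    init(false),
};
```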