hokein added inline comments.

================
Comment at: clangd/TUScheduler.cpp:268
+  /// Status of the TU.
+  TUStatus Status; /* GUARDED_BY(DiagMu) */
 };
----------------
ilya-biryukov wrote:
> hokein wrote:
> > ilya-biryukov wrote:
> > > hokein wrote:
> > > > ilya-biryukov wrote:
> > > > > Is `Status` actually ever read from multiple threads?
> > > > > I assume it's only accessed on the worker thread, right? I think we
> > > > > can leave out the locks around it and put it beside the other fields
> > > > > only accessed by the worker thread, e.g. `DiagsWereReported`.
> > > > > 
> > > > > PS Sorry for going back and forth; I didn't think about it in the
> > > > > previous review iteration.
> > > > Unfortunately not; most statuses are emitted from the worker thread,
> > > > but we emit the `Queued` status on the main thread...
> > > Hm, that sounds surprising.
> > > What if the rebuild is in progress on the worker thread and we emit the 
> > > queued status on the main thread at the same time? We might get weird 
> > > sequences of statuses, right? 
> > > E.g. `Queued → BuildingPreamble → [new update on the main thread] Queued 
> > > → [processing request on a worker thread] RunningAction('XRefs')`
> > > 
> > > The `Queued` request status between `BuildingPreamble` and 
> > > `RunningAction` looks **really** weird.
> > > Maybe we could avoid that by emitting the `Queued` status on the
> > > worker thread whenever we're about to sleep on the debounce timeout?
> > This sounds fair enough, and it can simplify the code.
> > 
> > > Maybe we could avoid that by emitting the `Queued` status on the
> > > worker thread whenever we're about to sleep on the debounce timeout?
> > 
> > The `Queued` state was originally designed for the case where the worker
> > thread is blocked by the max-concurrent-threads semaphore.
> > Emitting it when we're about to sleep on the debounce timeout doesn't seem
> > like the right place to me. I think a reasonable place is right before
> > acquiring the `Barrier`.
> > 
> Ah, good point, waiting on a barrier can definitely take some time and is a 
> form of queuing that we might want to notify the clients about.
> It would be nice to notify only when the Barrier couldn't be acquired right
> away, though; this could probably be achieved by adding a `try_lock()` to
> the semaphore and only emitting a `Queued` notification if it fails. That
> way we'll avoid spamming the clients with non-useful statuses.
> 
> Wrt deciding whether we should notify when waiting on a debounce timer or
> when acquiring a semaphore... Why not both?
Sounds good.
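
Just to make sure we're on the same page, here's a rough sketch of the
try_lock() idea (this assumes `Semaphore` grows a `try_lock()`, which it
doesn't have today; the other names follow this patch):

  // Only tell clients we're queued if we actually have to wait.
  if (!Barrier.try_lock()) {          // try_lock() is assumed, not in-tree
    emitTUStatus({TUAction::Queued, Req.Name});
    Barrier.lock();
  }
  // Adopt the lock we already hold so it is released on scope exit.
  std::lock_guard<Semaphore> BarrierLock(Barrier, std::adopt_lock);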


================
Comment at: clangd/TUScheduler.cpp:422
+        Details.ReuseAST = true;
+        emitTUStatus({TUAction::BuildingFile, TaskName}, &Details);
         return;
----------------
ilya-biryukov wrote:
> Should we also emit this status if we reuse the AST, but still report the 
> diagnostics?
Yes, I missed that. And I think we also should emit a status if `buildAST` 
fails.
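
Roughly what I have in mind (sketch only; `NewAST` stands for the result of
`buildAST()`, and `BuildFailed` is a made-up detail flag, not something in
the current patch):

  // Reusing the AST but still reporting diagnostics: emit the status
  // before publishing the cached diagnostics.
  Details.ReuseAST = true;
  emitTUStatus({TUAction::BuildingFile, TaskName}, &Details);

  // buildAST() failed: emit a status too, so clients aren't left thinking
  // a build is still in flight.
  if (!NewAST) {
    Details.BuildFailed = true;   // made-up flag, just for illustration
    emitTUStatus({TUAction::BuildingFile, TaskName}, &Details);
  }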


================
Comment at: clangd/TUScheduler.cpp:626
     {
+      emitTUStatus({TUAction::Queued, Req.Name});
       std::lock_guard<Semaphore> BarrierLock(Barrier);
----------------
ilya-biryukov wrote:
> See the other suggestion about emitting this status only when locking the
> barrier fails.
> This might be a good fit for a different patch, though; maybe add a FIXME
> here?
Agreed; I added a FIXME and will do it in a follow-up patch.
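
The FIXME sits right above the status emission; roughly along these lines
(exact wording and placement may differ):

      {
        // FIXME: only emit `Queued` when we fail to acquire the Barrier
        // right away (e.g. via a try_lock()), so we don't spam clients
        // with statuses when the barrier is free.
        emitTUStatus({TUAction::Queued, Req.Name});
        std::lock_guard<Semaphore> BarrierLock(Barrier);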


Repository:
  rCTE Clang Tools Extra

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D54796/new/

https://reviews.llvm.org/D54796


