sammccall added a comment.

In D116786#3257932 <https://reviews.llvm.org/D116786#3257932>, @kadircet wrote:

>> Any ideas? :-(
>
> I've got a feeling that this should also be annoying in the case of function 
> calls (especially when there's only one overload). Maybe we should just 
> debounce the inlay hint requests (by reporting ContentModified to the 
> clients)?

This seems like the right track, but I'm not sure that rejecting requests, 
specifically, is the right answer. Two reasons:

1. This requires careful implementation on the client side. LSP has been 
amended to strongly discourage ContentModified: we still send it heuristically, 
but there's a capability that lets clients opt out of receiving it, and VSCode 
sets that capability.

(As long as this is a custom protocol extension we can specify whatever 
behavior we want, but it won't stay custom forever.)

2. Choosing which requests to respond to is hard, stateful work.

This isn't the drop-for-efficiency situation where the queue is full; this 
behavior is most annoying precisely when the server *can* keep up with 
keystrokes. And the errors that cause this may not be trivially brief "I'm in 
the middle of typing a word" situations anyway.
Really you want to work out whether this is a "good" request based on the AST. 
In fact, the analysis you'd do looks a lot like actually generating the hints, 
and probably comparing them with the last ones. And then we'd throw them all 
away?!

---

I'd suggest something slightly different: we compute inlay hints, heuristically 
merge them with some previous state, and return the merged results.
The heuristics would be something like (rough sketch in code below):

- generally prefer freshly computed hints
- be willing to add old ones where newer ones are missing
- be willing to use older ones where code has changed near the hints (but not 
when the old hints overlap the changed code!)

(There's not much here that's specific to inlay hints vs. other decorations 
tied to ranges - we could/should certainly do this for semanticTokens too, and 
documentSymbols, ...)
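
To make the merge concrete, here's a rough sketch, not clangd's real types or 
code: `Range` and `InlayHint` are simplified stand-ins using byte offsets (the 
real InlayHint carries an LSP range), and the overlap rules just encode the 
heuristics above.

  #include <algorithm>
  #include <string>
  #include <vector>

  // Simplified stand-ins for illustration only.
  struct Range {
    unsigned Begin = 0, End = 0; // half-open byte offsets
    bool overlaps(Range O) const { return Begin < O.End && O.Begin < End; }
  };
  struct InlayHint {
    Range R;
    std::string Label;
  };

  // Merge freshly computed hints with the previous response.
  // Fresh hints always win; an old hint survives only where no fresh
  // hint overlaps it and it doesn't itself overlap a recently edited
  // region (in which case it likely describes code that's gone).
  std::vector<InlayHint> mergeHints(std::vector<InlayHint> Fresh,
                                    const std::vector<InlayHint> &Old,
                                    const std::vector<Range> &Edited) {
    for (const InlayHint &H : Old) {
      bool ClobberedByFresh =
          std::any_of(Fresh.begin(), Fresh.end(),
                      [&](const InlayHint &F) { return F.R.overlaps(H.R); });
      bool TouchesEdit =
          std::any_of(Edited.begin(), Edited.end(),
                      [&](Range E) { return E.overlaps(H.R); });
      if (!ClobberedByFresh && !TouchesEdit)
        Fresh.push_back(H); // fill a gap with the stale-but-plausible hint
    }
    std::sort(Fresh.begin(), Fresh.end(),
              [](const InlayHint &A, const InlayHint &B) {
                return A.R.Begin < B.R.Begin;
              });
    return Fresh;
  }

(The overlap-with-edit check is what keeps us from resurrecting hints for code 
the user just deleted.)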

---

A variation on this idea is to compute these decorations frequently but 
"whenever we like", and always serve them from this cache, just patching them 
for changes in the source code (rough sketch below).
This gives us more control over when the analyses run; maybe there's a lot of 
extra efficiency to be had here. But I think the cache still needs stateful 
updates to solve this bug, unless we have *great* heuristics for "whenever we 
like".
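
For the "patching" part, a hedged sketch reusing the hypothetical `Range` and 
`InlayHint` stand-ins from above: hints sitting entirely after an edit shift 
by the edit's size delta, and hints overlapping the edit are dropped, since 
the cache can't tell whether they still apply.

  // Reuses the Range/InlayHint stand-ins (and includes) from the
  // earlier sketch.
  struct TextEdit {
    unsigned Offset; // start of the replaced region
    unsigned OldLen; // length of text removed
    unsigned NewLen; // length of text inserted
  };

  std::vector<InlayHint> patchHints(std::vector<InlayHint> Cached,
                                    TextEdit E) {
    std::vector<InlayHint> Result;
    int Delta = static_cast<int>(E.NewLen) - static_cast<int>(E.OldLen);
    // Treat a pure insertion as touching one character, so a hint that
    // spans the insertion point is still invalidated.
    Range Dirty{E.Offset, E.Offset + std::max(E.OldLen, 1u)};
    for (InlayHint &H : Cached) {
      if (H.R.overlaps(Dirty))
        continue; // the edit may have changed what this hint describes
      if (H.R.Begin >= E.Offset + E.OldLen) { // entirely after the edit
        H.R.Begin += Delta;
        H.R.End += Delta;
      }
      Result.push_back(std::move(H));
    }
    return Result;
  }

Of course, this per-edit update is exactly the kind of stateful bookkeeping 
the previous paragraph worries about.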


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D116786/new/

https://reviews.llvm.org/D116786
