Slightly off-topic.

It would be nice to add a way to collect quality metrics during actual
completion sessions, rather than simulated ones.
I'm thinking of having clients send a message to the server mentioning the
completion item the user ended up selecting from a completion list (this
would require an LSP extension).
I believe that should be enough information to collect metrics on how code
completion is used in the wild.
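
To make this concrete, here is a rough sketch (in the TypeScript notation the
LSP spec uses) of what such an extension could look like; the method name and
fields below are hypothetical, not part of LSP or clangd today:

// Hypothetical client -> server notification, sent once the user accepts an
// item from a completion list. All names here are illustrative only.
import { TextDocumentIdentifier, Position } from 'vscode-languageserver-types';

interface CompletionItemSelectedParams {
  // Document in which completion was triggered.
  textDocument: TextDocumentIdentifier;
  // Position at which the completion request was made.
  position: Position;
  // Index of the accepted item in the completion list the server returned.
  selectedIndex: number;
  // Label of the accepted item, as a sanity check against stale lists.
  label: string;
}

// Delivered as: notification 'textDocument/completionItemSelected' with
// CompletionItemSelectedParams as the payload.

The server could correlate this with the original completion request and log,
for example, how often the selected item was ranked first.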


On Tue, Feb 26, 2019 at 11:04 AM Eric Liu <ioe...@google.com> wrote:

> Unfortunately, the evaluation tool I use only works on compilable code, so
> it doesn't capture the unresolved specifier case in this patch. I didn't try
> to collect the metrics because I think this is more of a bug fix than a
> quality improvement.
>
> On Tue, Feb 26, 2019, 10:25 Kadir Cetinkaya via Phabricator <
> revi...@reviews.llvm.org> wrote:
>
>> kadircet added a comment.
>>
>> LG
>>
>> Do we have any metrics regarding change in completion quality?
>>
>>
>> Repository:
>>   rCTE Clang Tools Extra
>>
>> CHANGES SINCE LAST ACTION
>>   https://reviews.llvm.org/D58448/new/
>>
>> https://reviews.llvm.org/D58448
>>

-- 
Regards,
Ilya Biryukov