================
@@ -3053,46 +3053,27 @@ class _CXUnsavedFile(Structure):
 class CompletionChunk:
-    class Kind:
-        def __init__(self, name: str):
-            self.name = name
-
-        def __str__(self) -> str:
-            return self.name
-
-        def __repr__(self) -> str:
-            return "<ChunkKind: %s>" % self
+    __kind_id: int
     def __init__(self, completionString: CObjP, key: int):
         self.cs = completionString
         self.key = key
-        self.__kindNumberCache = -1
+        self.__kind_id = conf.lib.clang_getCompletionChunkKind(self.cs, self.key)
----------------
DeinAlptraum wrote:
I've tested this by getting all CompletionChunks for all CodeCompletionResults
on the first 2000 cursors of `<iostream>`. That's 184595 CCRs and 774559 code
completion chunks.
Running this 5 times, I found the `CachedProperty` variant to actually be 1.5%
faster, though I guess that's within the margin of error anyway.
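
For anyone skimming, the difference between the two variants is just *when* the
libclang call happens. A standalone toy sketch, using `functools.cached_property`
as a stand-in for cindex's `CachedProperty` helper and a dummy function in place
of `clang_getCompletionChunkKind` (none of this is the actual cindex.py code):

```python
import functools

def fake_libclang_kind(key: int) -> int:
    # stand-in for conf.lib.clang_getCompletionChunkKind(cs, key)
    return key % 30

class EagerChunk:
    # variant from the diff: the kind id is fetched up front in __init__
    def __init__(self, key: int):
        self.key = key
        self.__kind_id = fake_libclang_kind(key)

    @property
    def kind(self) -> int:
        return self.__kind_id

class LazyChunk:
    # CachedProperty-style variant: the kind id is fetched on first access
    # to .kind and then cached on the instance
    def __init__(self, key: int):
        self.key = key

    @functools.cached_property
    def kind(self) -> int:
        return fake_libclang_kind(self.key)

assert EagerChunk(7).kind == LazyChunk(7).kind
```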
That said, note that generating the CCRs and CompletionStrings accounts for a
bit more than 90% of the execution time; getting the Chunks themselves is fast
in comparison, so it is a bit difficult to measure the effect here.
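
Roughly, the measurement looks like this (file name, compiler flags and the
counting loop are illustrative, not the exact script):

```python
import time
from clang.cindex import Index

# Parse a TU that pulls in <iostream>; the file name and flags are illustrative.
source = "#include <iostream>\n"
index = Index.create()
tu = index.parse(
    "bench.cpp",
    args=["-std=c++17"],
    unsaved_files=[("bench.cpp", source)],
)

chunks = 0
start = time.perf_counter()
# Walk the first 2000 cursors, request code completion at each location and
# touch every chunk's `kind` so the property under test actually runs.
for i, cursor in enumerate(tu.cursor.walk_preorder()):
    if i >= 2000:
        break
    loc = cursor.location
    if loc.file is None:
        continue
    results = tu.codeComplete(
        loc.file.name, loc.line, loc.column,
        unsaved_files=[("bench.cpp", source)],
    )
    if results is None:
        continue
    for result in results.results:
        for chunk in result.string:
            _ = chunk.kind
            chunks += 1
print(f"{chunks} chunks in {time.perf_counter() - start:.2f}s")
```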
https://github.com/llvm/llvm-project/pull/176631
_______________________________________________
cfe-commits mailing list
[email protected]
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits