jpountz commented on PR #14275: URL: https://github.com/apache/lucene/pull/14275#issuecomment-2748960716
I've been thinking a bit more about this. The two potential use-cases that come to mind are the following:
 - Diverging from the way Lucene does memory management (off-heap / on-heap and related concerns).
 - Using more optimized logic for some routines (prefix sums, vector comparisons, etc.), e.g. architecture-specific native code that the JVM could hardly generate itself.

As discussed above, I want codecs to own the way memory is managed, so I don't like giving options to customize it.

Regarding native optimizations, Lucene core is unlikely to ever include native extensions, so I can see an argument for an extension point there given the number of limitations of the vector API that we have already found. But then I don't think that codecs are the right extension point for this sort of thing: they are too high-level and would force re-implementing (copy-pasting in practice) the whole read-side logic just to optimize the relevant bits. I would rather have it look like `DefaultVectorizationProvider`/`PanamaVectorizationProvider`, where optimized implementations of some performance-critical routines get plugged in at runtime in a way that is orthogonal to codecs.
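To make the shape of that suggestion concrete, here is a rough sketch of such a runtime provider (names like `OptimizedRoutines` and `ScalarRoutines` are made up for illustration and are not Lucene's actual internal `VectorizationProvider` API; the system-property check only stands in for whatever capability detection a real lookup would do):

```java
/**
 * Illustrative sketch of a runtime-selected provider for performance-critical
 * routines, orthogonal to codecs. All names here are hypothetical.
 */
public interface OptimizedRoutines {

  /** In-place prefix sum: values[i] becomes the sum of values[0..i]. */
  void prefixSum(int[] values);

  /** Portable scalar fallback, analogous to DefaultVectorizationProvider. */
  final class ScalarRoutines implements OptimizedRoutines {
    @Override
    public void prefixSum(int[] values) {
      for (int i = 1; i < values.length; i++) {
        values[i] += values[i - 1];
      }
    }
  }

  /**
   * Runtime selection point, analogous to how PanamaVectorizationProvider is
   * picked when the vector API is usable. A real lookup might probe JVM/CPU
   * capabilities or use ServiceLoader; a system property keeps the sketch
   * self-contained.
   */
  static OptimizedRoutines getInstance() {
    if (Boolean.getBoolean("optimizedRoutines.native")) {
      // A SIMD- or native-backed implementation would be returned here.
      return new ScalarRoutines(); // placeholder for the optimized implementation
    }
    return new ScalarRoutines();
  }
}
```

Read-side code would then call `OptimizedRoutines.getInstance().prefixSum(...)` rather than hard-coding the scalar loop, so an optimized implementation benefits every codec without any codec-level extension point.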