jpountz commented on PR #14167:
URL: https://github.com/apache/lucene/pull/14167#issuecomment-2616408185

Somewhat related, thinking out loud: I have been wondering about the best way to parallelize top-k query processing. Lexical search has a similar issue to knn search in that it is not very CPU-efficient to let search threads independently make similar decisions about what it means for a hit to be competitive. This made me wonder whether it would be a better trade-off to let just one slice run on its own first, and then let the other N-1 slices run in parallel with one another, taking advantage of what we "learned" from processing that first slice. If these N-1 slices only looked at what we learned from the first slice and ignored everything about any other slice, I believe there wouldn't be any consistency issues due to races, while query processing would still be mostly parallel and likely more CPU-efficient (in terms of total CPU time per query).
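
A minimal sketch of that two-phase idea, not Lucene's actual API: the `SliceSearcher` and `SliceResult` names and the seeding parameter are hypothetical placeholders for whatever per-slice collection the searcher already does. Phase 1 runs slice 0 alone to establish a minimum competitive score; phase 2 runs the remaining N-1 slices in parallel, each seeded with that score and never looking at the other concurrent slices.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class SeededSliceSearch {

  /** Hypothetical per-slice result: top scores plus the score of the k-th best hit found. */
  record SliceResult(List<Float> topScores, float minCompetitiveScore) {}

  /** Hypothetical per-slice search that skips hits scoring below the seed threshold. */
  interface SliceSearcher {
    SliceResult search(int sliceId, float seedMinCompetitiveScore);
  }

  static List<SliceResult> search(SliceSearcher searcher, int numSlices, ExecutorService executor)
      throws Exception {
    // Phase 1: run the first slice on its own, with no prior knowledge (threshold 0).
    SliceResult first = searcher.search(0, 0f);
    float seed = first.minCompetitiveScore();

    // Phase 2: run the remaining N-1 slices in parallel, each seeded with what the
    // first slice learned, and ignoring everything about the other slices.
    List<Future<SliceResult>> futures = new ArrayList<>();
    for (int i = 1; i < numSlices; i++) {
      final int sliceId = i;
      futures.add(executor.submit(() -> searcher.search(sliceId, seed)));
    }

    List<SliceResult> results = new ArrayList<>();
    results.add(first);
    for (Future<SliceResult> f : futures) {
      results.add(f.get());
    }
    // The caller would then merge the per-slice top-k lists into a global top-k.
    return results;
  }
}
```

Because the N-1 slices only read the threshold computed in phase 1 and never each other's state, the outcome does not depend on thread scheduling, while most of the work still runs in parallel.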
Somewhat related, thinking out loud: I have been wondering about what is the best way to parallelize top-k query processing. Lexical search has a similar issue as knn search in that it is not very CPU-efficient to let search threads independently make similar decisions about what it means for a hit to be competitive. This made me wonder if it would be a better trade-off to let just one slice run on its own first, and then let all other N-1 slices run in parallel with one another, taking advantage of what we "learned" from processing the first slice. If these N-1 slices would only look at what we learned from this first slice and ignored everything about any other slice, I believe that there wouldn't be any consistency due to races while query processing would still be mostly parallel and likely more CPU-efficient (as in total CPU time per query). -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org --------------------------------------------------------------------- To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org