jpountz commented on issue #13179:
URL: https://github.com/apache/lucene/issues/13179#issuecomment-2138323749

   > Am I correct in understanding that prefetching an already-fetched page is 
(at least approximately) a no-op?
   
   We tried to make it cheap (see e.g. the logic that stops calling madvise 
after a while if the data seems to fully fit in the page cache anyway), but 
reading a doc value that fits in the page cache via `RandomAccessInput` is 
also cheap, so I'd expect nightlies to show a performance regression if we 
added calls to prefetch() to 
`(Numeric|SortedNumeric|Sorted|SortedSet|Binary)DocValues#advance`.
   
   To avoid this per-doc overhead, I imagine we would need to add a 
prefetch() API on `(Numeric|SortedNumeric|Sorted|SortedSet|Binary)DocValues`, 
as @sohami suggests, and require it to be called at a higher level where the 
cost can be more easily amortized across many docs, e.g. by making 
`BulkScorer` score ranges of X doc IDs at once and calling prefetch only once 
per range.
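   To make the shape of that concrete, here's a minimal, self-contained sketch of amortizing one prefetch call per window of docs; `RangeDocValues`, its `prefetch(min, max)` method, and the window size are hypothetical illustrations, not Lucene APIs:

```java
// Hypothetical sketch: a prefetch() hook on a doc-values reader, called once
// per range of doc IDs by a bulk-scorer-style loop instead of once per doc.
// RangeDocValues and WINDOW_SIZE are illustrative, not Lucene APIs.
import java.util.ArrayList;
import java.util.List;

public class PrefetchWindowSketch {
  interface RangeDocValues {
    /** Hint that docs in [minDocID, maxDocID) will be read soon. */
    void prefetch(int minDocID, int maxDocID);
    long longValue(int docID);
  }

  static final int WINDOW_SIZE = 4096;

  /** Score docs in windows so the prefetch cost is amortized across many docs. */
  static long scoreAll(RangeDocValues values, int maxDoc) {
    long sum = 0;
    for (int windowMin = 0; windowMin < maxDoc; windowMin += WINDOW_SIZE) {
      int windowMax = Math.min(windowMin + WINDOW_SIZE, maxDoc);
      values.prefetch(windowMin, windowMax); // one call per range, not per doc
      for (int doc = windowMin; doc < windowMax; doc++) {
        sum += values.longValue(doc);
      }
    }
    return sum;
  }

  public static void main(String[] args) {
    List<int[]> prefetchCalls = new ArrayList<>();
    RangeDocValues fake = new RangeDocValues() {
      public void prefetch(int min, int max) { prefetchCalls.add(new int[] {min, max}); }
      public long longValue(int doc) { return 1; }
    };
    long sum = scoreAll(fake, 10_000);
    System.out.println("docs=" + sum + " prefetchCalls=" + prefetchCalls.size());
  }
}
```

   With 10,000 docs and a 4,096-doc window this issues 3 prefetch calls instead of 10,000, which is the amortization the per-doc `advance()` approach lacks.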
   
   I'm still debating with myself whether this would be valuable enough 
compared with simply giving the `IndexInput` a hint that reading ahead would 
be a good idea, because it's being used by postings or doc values that have a 
forward-only access pattern.
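   For comparison, a toy sketch of that hint-based alternative; `AccessHint`, `advise()`, and the read-ahead sizes below are made up for illustration, not Lucene APIs:

```java
// Hypothetical sketch of the alternative: instead of explicit prefetch calls,
// the consumer gives the input a one-time access-pattern hint and each
// implementation picks its own read-ahead size. AccessHint, advise() and the
// sizes below are illustrative, not Lucene APIs.
public class ReadAheadHintSketch {
  enum AccessHint { RANDOM, SEQUENTIAL }

  static class HintedInput {
    private AccessHint hint = AccessHint.RANDOM;

    void advise(AccessHint hint) { this.hint = hint; }

    /** Bytes this impl would read ahead: fast local storage wants only a few
     *  pages, while a slow remote store might fetch MBs and cache them on a
     *  local disk to reduce round trips. */
    long readAheadBytes(boolean slowRemoteStorage) {
      if (hint == AccessHint.RANDOM) return 0;
      return slowRemoteStorage ? 8L << 20 : 4 * 4096L; // 8 MB vs. 4 pages
    }
  }

  public static void main(String[] args) {
    HintedInput postings = new HintedInput();
    postings.advise(AccessHint.SEQUENTIAL); // postings/doc values are forward-only
    System.out.println("local=" + postings.readAheadBytes(false)
        + " remote=" + postings.readAheadBytes(true));
  }
}
```

   The appeal is that the consumer states only its access pattern once, and the storage-specific sizing decision stays inside the `IndexInput` implementation.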
   
   > maybe we buffer the next few doc IDs from the first-phase scorer and 
prefetch those
   
   FWIW this would break a few things, e.g. we have collectors that only 
compute the score when needed (such as when sorting by field then score). If 
we need to buffer docs up front, we don't know at that point whether scores 
are going to be needed, so we would have to score more docs. Maybe it's still 
the right trade-off; I'm mostly pointing out that it would be a bigger 
trade-off than what we've done for prefetching so far.
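   A rough sketch of what that buffering might look like, with the trade-off noted in the comments; `Prefetcher` and all other names here are hypothetical, not Lucene APIs:

```java
// Hypothetical sketch of buffering a few doc IDs ahead of consumption so
// their doc values can be prefetched before they are needed. All names are
// illustrative, not Lucene APIs.
import java.util.ArrayDeque;
import java.util.PrimitiveIterator;
import java.util.stream.IntStream;

public class BufferedPrefetchSketch {
  interface Prefetcher { void prefetch(int docID); }

  /** Drain matches in small batches: prefetch the batch, then consume it.
   *  Trade-off: once docs are buffered, we no longer know at consume time
   *  whether the collector would have skipped scoring them, so some scoring
   *  work may happen eagerly. */
  static int collect(PrimitiveIterator.OfInt matches, Prefetcher prefetcher, int batchSize) {
    ArrayDeque<Integer> buffer = new ArrayDeque<>(batchSize);
    int collected = 0;
    while (matches.hasNext()) {
      while (buffer.size() < batchSize && matches.hasNext()) {
        int doc = matches.nextInt();
        prefetcher.prefetch(doc); // issue prefetch ahead of use
        buffer.add(doc);
      }
      while (!buffer.isEmpty()) {
        buffer.poll(); // here: score/collect the doc whose data is now warm
        collected++;
      }
    }
    return collected;
  }

  public static void main(String[] args) {
    int n = collect(IntStream.range(0, 100).iterator(), doc -> {}, 16);
    System.out.println("collected=" + n);
  }
}
```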
   
   > If we want to collect all the doc IDs during the collect phase (as 
Lucene's FacetsCollector does) and then prefetch them all at once to compute 
facet counts, that works too.
   
   Maybe such an approach would be OK for application code that can make 
assumptions about how much page cache it has, but I'd expect Lucene code to 
avoid ever prefetching many MBs at once, because that increases the chances 
that the first prefetched bytes get paged out before we can use them. This is 
one reason why I like the approach of simply giving the `IndexInput` a hint 
that it should perform read-ahead: `IndexInput` impls that read from fast 
storage can read ahead relatively little, on the order of a few pages, while 
impls that read from slower storage, like the warm index use case @sohami 
describes above, could fetch MBs from the slow remote storage and cache them 
on a local disk or something like that, to reduce interactions with the 
remote storage.
   
   If one of you would like to take a stab at an approach to prefetching doc 
values, I'd be happy to look at a PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

