wjp719 commented on PR #687:
URL: https://github.com/apache/lucene/pull/687#issuecomment-1254778654

   > I'm (maybe naively) assuming that we could work around this case at the 
inner node level by skipping inner nodes whose max value is equal to the min 
value if we have already seen this value before?
   
   Sure, the inner node can be skipped, but for the boundary values — say the range is from 1663837201000 to 1663839001000 — we need to load every leaf block that contains the point value 1663839001000 or 1663837201000. If there are 100 thousand docs with point value 1663839001000 or 1663837201000, we need to load many leaf blocks just to get the min/max docId. Those blocks probably cannot be skipped?
   
   This case comes from real data: there are 60 billion docs per day, and the timestamp has second precision, so on average there are 100 thousand docs per second.
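   To make the cost concrete, here is a hypothetical back-of-the-envelope sketch (not real Lucene API code): assuming a BKD leaf size of 512 points and 100 thousand docs sharing one boundary timestamp, it estimates how many leaf blocks hold a single boundary value and therefore must be visited to find the min/max docId. Both the leaf size and the doc count are illustrative assumptions.

   ```java
   // Hypothetical estimate, not Lucene code: how many BKD leaf blocks
   // contain a single boundary value when many docs share that value.
   public class BoundaryLeafEstimate {
       public static void main(String[] args) {
           long docsPerBoundaryValue = 100_000; // assumed docs sharing one timestamp
           int pointsPerLeaf = 512;             // assumed points per BKD leaf block

           // Ceiling division: leaves needed to hold all docs with this value.
           long leavesPerBoundary =
               (docsPerBoundaryValue + pointsPerLeaf - 1) / pointsPerLeaf;

           // A range query has two boundary values (min and max), so in the
           // worst case roughly twice that many leaves must be read.
           System.out.println("leaves per boundary value: " + leavesPerBoundary);
           System.out.println("worst-case leaves for both boundaries: "
               + (2 * leavesPerBoundary));
       }
   }
   ```

   Under these assumptions, each boundary value alone spans on the order of two hundred leaf blocks, which is why skipping only at the inner-node level does not help at the range edges.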
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

