benwtrent commented on PR #12844:
URL: https://github.com/apache/lucene/pull/12844#issuecomment-1831865170

   I ran knnPerfTest from luceneutil against this PR with 100k Cohere vectors. Flushing was driven by memory usage (do we check the RAM usage of the OnHeapGraph during indexing to decide when to flush?).
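   For context on the flush question, here is a minimal sketch (not part of this PR; the path, field name, dimension, and RAM budget are illustrative) of how RAM-triggered flushing is configured: the per-field vector writers report `ramBytesUsed()`, and `IndexWriter` flushes once the buffered state crosses the configured RAM buffer size.
   ```java
   import org.apache.lucene.analysis.standard.StandardAnalyzer;
   import org.apache.lucene.document.Document;
   import org.apache.lucene.document.KnnFloatVectorField;
   import org.apache.lucene.index.IndexWriter;
   import org.apache.lucene.index.IndexWriterConfig;
   import org.apache.lucene.index.VectorSimilarityFunction;
   import org.apache.lucene.store.FSDirectory;

   import java.nio.file.Paths;

   public class FlushByRamExample {
     public static void main(String[] args) throws Exception {
       IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
       // Flush whenever the indexing buffers (including the on-heap HNSW graph,
       // accounted via ramBytesUsed()) exceed ~64 MB; disable doc-count flushing.
       iwc.setRAMBufferSizeMB(64);
       iwc.setMaxBufferedDocs(IndexWriterConfig.DISABLE_AUTO_FLUSH);
       try (IndexWriter writer =
           new IndexWriter(FSDirectory.open(Paths.get("/tmp/knn-index")), iwc)) {
         float[] vector = new float[768]; // e.g. a Cohere embedding dimension
         Document doc = new Document();
         doc.add(new KnnFloatVectorField("vector", vector, VectorSimilarityFunction.DOT_PRODUCT));
         writer.addDocument(doc);
       }
     }
   }
   ```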
   
   main branch:
   ```
   recall       latency nDoc    fanout  maxConn beamWidth index ms
   0.912         0.77   100000  0       16      100       48602
   ```
   
   This branch:
   ```
   recall       latency nDoc    fanout  maxConn beamWidth index ms
   0.912         0.75   100000  0       16      100       165929
   ```
   
   Indexing is about 3.4x slower with this change (165929 ms vs. 48602 ms). I don't know if it's due to the OnHeapGraph memory estimation being slow or the node resizing. I'm going to run a profiler to see what's up.
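   One way to attribute the extra time (a sketch, not from this PR; the recording settings and placeholder method are hypothetical) is to capture a JFR profile around the indexing run and then compare hot frames in the memory-accounting code path versus graph node-array growth:
   ```java
   import jdk.jfr.Configuration;
   import jdk.jfr.Recording;

   import java.nio.file.Path;

   public class ProfileIndexing {
     public static void main(String[] args) throws Exception {
       // Use the built-in "profile" settings, which enable CPU sampling.
       Configuration config = Configuration.getConfiguration("profile");
       try (Recording recording = new Recording(config)) {
         recording.start();

         runIndexing(); // placeholder for a knnPerfTest-style indexing loop

         recording.stop();
         // Open the dump in JDK Mission Control and compare time spent in
         // ramBytesUsed()/memory accounting vs. graph node resizing.
         recording.dump(Path.of("knn-indexing.jfr"));
       }
     }

     private static void runIndexing() {
       // hypothetical: drive IndexWriter.addDocument(...) over the 100k vectors here
     }
   }
   ```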
   
   The good news is that search latency and recall are unchanged. Forced merge time seems about the same as well :).

