uschindler commented on PR #12311:
URL: https://github.com/apache/lucene/pull/12311#issuecomment-1557144955

   Hi,
   
   > I didn't get anywhere with Luceneutil yet! :-( (I haven't been able to 
run it successfully, getting OOM errors)
   
   Did you get the OOMs only with our vector code? If it also OOMs with the 
current code, then you might need to tune `-Xmx` for the large dataset.
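   For example (illustrative only — the right heap size depends on the 
dataset, and where exactly Luceneutil picks up its JVM command line may 
differ):

   ```shell
   # Hypothetical invocation: raise the max heap for the benchmark JVM.
   # 8g is an arbitrary example value, not a recommendation.
   java -Xmx8g -cp lucene-core.jar ...
   ```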
   
   If it OOMs with the vector code, it is the same thing I have seen with 
Panama Foreign in Java 16/17. The reason was that the default settings of 
Mike's tool passed something like `-Xbatch` and disabled tiered compilation. 
This caused escape analysis to be executed much later than expected. As the 
Panama Foreign code in the past created new MemorySegment instances (to 
produce shapes/slices) just to copy a few bytes, this produced millions of 
new instances. This was solved by Maurizio by adding System.arraycopy-like 
copy methods to MemorySegment.
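   The copy methods in question are (as far as I know) the static 
`MemorySegment.copy` overloads in `java.lang.foreign` (finalized in Java 21). 
A minimal sketch of how a bulk copy avoids allocating intermediate slice 
objects:

   ```java
   import java.lang.foreign.Arena;
   import java.lang.foreign.MemorySegment;
   import java.lang.foreign.ValueLayout;

   public class CopyDemo {
       // Bulk copy between segments: unlike src.asSlice(off, len) followed by
       // a per-element copy, this allocates no intermediate MemorySegment.
       static void copyBytes(MemorySegment src, long srcOff,
                             MemorySegment dst, long dstOff, long len) {
           MemorySegment.copy(src, srcOff, dst, dstOff, len);
       }

       public static void main(String[] args) {
           try (Arena arena = Arena.ofConfined()) {
               MemorySegment src = arena.allocate(16);
               MemorySegment dst = arena.allocate(16);
               src.set(ValueLayout.JAVA_BYTE, 3, (byte) 42);
               copyBytes(src, 0, dst, 0, 16);
               System.out.println(dst.get(ValueLayout.JAVA_BYTE, 3)); // prints 42
           }
       }
   }
   ```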
   
   For vectors it looks like we really create a lot of objects. When you run 
the searches in parallel with many threads in the benchmark, it may also fill 
the heap faster than the GC can clean it up or the optimizer kicks in.
   
   P.S.: This could also be the reason for the slowdown I mentioned above: in 
the test suite we also disable tiered compilation by default for performance 
reasons when just running unit tests. But it is bad for tests doing a lot of 
work.
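   For reference, flags of the kind described (these are standard HotSpot 
options; whether the tooling passes exactly these is my assumption):

   ```shell
   # -Xbatch blocks the running thread while a method compiles, and
   # -XX:-TieredCompilation skips the C1->C2 tiered pipeline, so C2
   # (and with it escape analysis) kicks in much later than usual.
   # "benchmark.jar" is a hypothetical placeholder.
   java -Xbatch -XX:-TieredCompilation -jar benchmark.jar
   ```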


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

