karlney commented on issue #14554:
URL: https://github.com/apache/lucene/issues/14554#issuecomment-2840275822

   We likely have the same problem in our environment (Elasticsearch 8.18.0).
   We have ~1B vectors on each node (256 GB RAM, 64 cores, 3 TB disk). The data
is spread across approximately 100 shards (Lucene indices) and a few hundred segments.
   We hit heap OOM errors during merges with a 70 GB heap (and ongoing indexing).
   Now we are trying a 100 GB heap to see whether that is enough for our data.
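   
   For a rough sense of scale, here is a back-of-envelope sketch of the heap the merged
HNSW graphs alone might need. All constants are assumptions (vectors per merge, merge
concurrency), not measurements, and real usage adds upper graph levels, object overhead,
and scratch buffers on top:

```java
// Back-of-envelope heap estimate for concurrent HNSW merges (a sketch;
// every constant below is an assumption, not a measured value).
public class HnswMergeHeapEstimate {
    public static void main(String[] args) {
        long vectorsPerMerge = 10_000_000L; // assumed: ~1B vectors spread over ~100 shards
        int maxConn = 16;                   // Lucene's default HNSW M
        // Level-0 neighbor lists hold up to 2*M int ordinals per node; upper
        // levels and per-object overhead add more on top of this.
        long bytesPerNode = 2L * maxConn * Integer.BYTES;
        long perMergeBytes = vectorsPerMerge * bytesPerNode;
        int concurrentMerges = 8;           // assumed merge concurrency on one node
        System.out.printf("~%.2f GB per merge graph, ~%.2f GB for %d concurrent merges%n",
                perMergeBytes / 1e9, perMergeBytes * concurrentMerges / 1e9, concurrentMerges);
    }
}
```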
   
   A preferable solution would be a setting that caps the total heap used for
merging HNSW graphs (a rough workaround is sketched below).
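   
   In raw Lucene, the closest existing knob I'm aware of bounds merge concurrency rather
than heap, so fewer HNSW graphs are built on heap at once. A minimal sketch (the limits
are hypothetical, and Elasticsearch manages its own merge scheduler, so this does not
apply there directly):

```java
import org.apache.lucene.index.ConcurrentMergeScheduler;
import org.apache.lucene.index.IndexWriterConfig;

// Sketch: limit simultaneous merges so fewer HNSW graphs live on heap at once.
// This bounds concurrency, not total heap, so it is only a partial workaround.
public class BoundedMergeConfig {
    public static IndexWriterConfig newConfig() {
        ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
        cms.setMaxMergesAndThreads(2, 1); // hypothetical limits: 2 pending merges, 1 merge thread
        return new IndexWriterConfig().setMergeScheduler(cms);
    }
}
```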


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

