Thanks Hoss. Yes, in a separate thread on the list I reported that
doing a multi-stage optimize worked around the out-of-space problem. We
use mergeFactor=10 and run optimize with maxSegments = 16, 8, 4, 2, 1
iteratively, starting at the closest power of two below the number of
segments to merge. Works nicely.
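For the archives, here is roughly what that staged optimize looks like as a
small Python script. The Solr URL and core name are placeholders, not our
real setup; each pass posts <optimize maxSegments="N"/> to the update
handler, stepping the target down until a single segment remains.

import urllib.request

SOLR_UPDATE_URL = "http://localhost:8983/solr/core0/update"  # placeholder endpoint

def optimize_to(max_segments):
    """Ask Solr to merge the index down to at most max_segments segments."""
    body = '<optimize maxSegments="%d" waitSearcher="true"/>' % max_segments
    req = urllib.request.Request(
        SOLR_UPDATE_URL,
        data=body.encode("utf-8"),
        headers={"Content-Type": "text/xml"},
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()  # body is just the <response> status; HTTP errors raise

# Start at the closest power of two below the current segment count and halve.
for target in (16, 8, 4, 2, 1):
    optimize_to(target)

Presumably the space freed between passes keeps the peak disk usage lower
than one big merge straight down to a single segment.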
: That theory did not work, because the error log showed that Solr was trying
: to merge into the _1j37 segment files (shown as deleted in the lsof above)
: when it ran out of space, so those are a symptom, not a cause, of the lost
: space.
Right, you have to keep in mind Solr is always maintaining a live Searcher
against the current index, so segment files a merge has deleted can still be
held open (and still occupy disk) until that Searcher is closed and a new one
is opened.
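You can measure how much space those open-but-deleted files are holding. A
minimal Linux-only sketch; substitute the PID of your Tomcat/Solr JVM for the
placeholder, and run it as the same user or as root:

import os

def deleted_open_bytes(pid):
    """Total size of files the process still holds open after they were unlinked."""
    total = 0
    fd_dir = "/proc/%d/fd" % pid
    for fd in os.listdir(fd_dir):
        path = os.path.join(fd_dir, fd)
        try:
            if os.readlink(path).endswith(" (deleted)"):
                total += os.stat(path).st_size  # stat through the fd link still works
        except OSError:
            continue  # that fd was closed while we were scanning
    return total

print(deleted_open_bytes(12345) / 1024 ** 3, "GB held by deleted files")  # 12345 is a placeholder PID

That figure should roughly match the gap between df and du, and it drops to
zero once the process closes the handles or is restarted.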
All,
We're puzzled why we're still unable to optimize a 192GB index on an LVM
volume that has 406GB available. We are not using Solr's index
distribution/replication scripts, and there is no snapshooter in the
picture. We run out of disk capacity with df showing 100% used but du
showing only 379GB of files.
Restarting tomcat recovers the space.
Not sure but a quick search turned up:
http://www.walkernews.net/2007/07/13/df-and-du-command-show-different-used-disk-space/
Using up to 2x the index size during an optimize can happen. Also check
whether a snapshooter script run from cron is making hard links to index
files while a merge is in progress.
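If you want to rule that out, hard-linked snapshot copies are easy to spot:
any index file with a link count above 1 is pinned by another directory
entry. A minimal sketch, assuming the index lives at the placeholder path
below:

import os

INDEX_DIR = "/var/solr/data/index"  # placeholder; point this at your Lucene index dir

for name in sorted(os.listdir(INDEX_DIR)):
    st = os.stat(os.path.join(INDEX_DIR, name))
    if st.st_nlink > 1:
        # Another directory entry (e.g. a snapshot) shares this inode, so the
        # space is not reclaimed when a merge deletes the file here.
        print("%s: %d links, %d bytes" % (name, st.st_nlink, st.st_size))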