See TestIndexWriterOnDiskFull (on trunk). Look for the test w/ LUCENE-2743 in the comment... but the other tests there also exercise other cases that can hit disk full.
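Roughly, those tests work by capping a mock Directory so that writes start failing once the limit is hit, then asserting the index is still intact. This is just a sketch from memory, not the actual test source (the cap size, field and analyzer below are made up, and on trunk the mock directory class is a wrapper rather than MockRAMDirectory, IIRC):

    import java.io.IOException;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.MockRAMDirectory;
    import org.apache.lucene.util.Version;

    public class DiskFullSketch {
      public static void main(String[] args) throws Exception {
        // MockRAMDirectory (from the test framework) can fake "disk full"
        // by refusing writes past a size cap.
        MockRAMDirectory dir = new MockRAMDirectory();
        dir.setMaxSizeInBytes(4 * 1024);  // tiny, arbitrary cap

        IndexWriter writer = new IndexWriter(dir,
            new StandardAnalyzer(Version.LUCENE_29),
            IndexWriter.MaxFieldLength.UNLIMITED);
        writer.setInfoStream(System.out);  // logs flushes/merges/exceptions

        try {
          for (int i = 0; i < 10000; i++) {
            Document doc = new Document();
            doc.add(new Field("body", "text " + i,
                Field.Store.NO, Field.Index.ANALYZED));
            writer.addDocument(doc);  // throws IOException once the fake disk fills up
          }
        } catch (IOException expected) {
          // expected once the cap is hit; the test then verifies the index still opens cleanly
        } finally {
          try {
            writer.close();
          } catch (IOException e) {
            // close can also hit the full "disk"; the real tests handle that too
          }
        }
        dir.close();
      }
    }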
Can you post the exceptions you hit?  (Are these logged?)

Yes, this could be a hardware issue...

Millions of docs indexed per hour sounds like fun!

Mike

On Fri, Nov 5, 2010 at 5:33 PM, Jason Rutherglen
<jason.rutherg...@gmail.com> wrote:
>> can you enable IndexWriter's infoStream
>
> I'd like to however the problem is only happening in production, and
> the indexing volume is in the millions per hour.  The log would be
> clogged up, as it is I have logging in Tomcat turned off because it is
> filling up the SSD drive (yes I know, we should have an HD drive as
> well, I didn't configure the server, and we're getting new ones,
> thanks for wondering).
>
> Can you point me at the unit test that simulates this issue?  Today I
> saw a different problem in that the doc store got corrupted, given
> we're streaming it to disk, how are we capturing disk full for that
> case?  Meaning how can we be sure where the doc store stopped writing
> at?  I haven't had time to explore what's up with this however I will
> shortly, ie, examine the unit tests and code.  Perhaps though this is
> simply hardware related?
>
> On Fri, Nov 5, 2010 at 1:58 AM, Michael McCandless
> <luc...@mikemccandless.com> wrote:
>> Hmmm... Jason can you enable IndexWriter's infoStream and get the
>> corruption to happen again and post that (along with "ls -l" output)?
>>
>> Mike
>>
>> On Thu, Nov 4, 2010 at 5:11 PM, Jason Rutherglen
>> <jason.rutherg...@gmail.com> wrote:
>>> I'm still seeing this error after downloading the latest 2.9 branch
>>> version, compiling, copying to Solr 1.4 and deploying.  Basically as
>>> mentioned, the .del files are of zero length... Hmm...
>>>
>>> On Wed, Oct 13, 2010 at 1:33 PM, Jason Rutherglen
>>> <jason.rutherg...@gmail.com> wrote:
>>>> Thanks Robert, that Jira issue aptly describes what I'm seeing, I think.
>>>>
>>>> On Wed, Oct 13, 2010 at 10:22 AM, Robert Muir <rcm...@gmail.com> wrote:
>>>>> if you are going to fill up your disk space all the time with solr
>>>>> 1.4.1, I suggest replacing the lucene jars with lucene jars from
>>>>> 2.9-branch
>>>>> (http://svn.apache.org/repos/asf/lucene/java/branches/lucene_2_9/).
>>>>>
>>>>> then you get the fix for
>>>>> https://issues.apache.org/jira/browse/LUCENE-2593 too.
>>>>>
>>>>> On Wed, Oct 13, 2010 at 11:37 AM, Jason Rutherglen
>>>>> <jason.rutherg...@gmail.com> wrote:
>>>>>> We have unit tests for running out of disk space?  However we have
>>>>>> Tomcat logs that fill up quickly and starve Solr 1.4.1 of space.  The
>>>>>> main segments are probably not corrupted, however routinely now, there
>>>>>> are deletes files of length 0.
>>>>>>
>>>>>> 0 2010-10-12 18:35 _cc_8.del
>>>>>>
>>>>>> Which is fundamental index corruption, though less extreme.  Are we
>>>>>> testing for this?
>>>>>>
>>>>>
>>>>
>>>
>>
>
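PS: regarding the zero-length _cc_8.del in the quoted thread above -- the quickest way to see what Lucene itself thinks of an index that hit disk full is to run CheckIndex on it.  Sketch only (the class/arg handling here is made up; the index path is whatever your Solr data dir points at), equivalent to running "java org.apache.lucene.index.CheckIndex <indexDir>" from the command line:

    import java.io.File;
    import org.apache.lucene.index.CheckIndex;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class VerifyIndex {
      public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(new File(args[0]));
        CheckIndex checker = new CheckIndex(dir);
        checker.setInfoStream(System.out);          // print per-segment details
        CheckIndex.Status status = checker.checkIndex();
        System.out.println(status.clean ? "index looks OK" : "index has problems");
        dir.close();
      }
    }

There is also a -fix option on the command-line tool, but that drops the broken segments, so treat it as a last resort after backing up the index.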