Here is an alternative configuration: use separate Solr instances for
indexing and querying, both pointing to the same data directory. A
'commit' to the query Solr reloads the index. This works in read-only
mode; for production, I would run the indexer and the querier under
different permissions so that the query instance cannot write to the
index.
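A minimal sketch of the query side in Lucene 2.9 terms (the path and the
poll interval are made up; Solr does the equivalent internally when a
commit reopens its searcher):

    import java.io.File;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.store.FSDirectory;

    public class ReadOnlyQuerySide {
      public static void main(String[] args) throws Exception {
        // Shared data directory that the separate indexer instance writes to.
        FSDirectory dir = FSDirectory.open(new File("/data/solr/index"));

        // Open read-only so this process never takes the write lock.
        IndexReader reader = IndexReader.open(dir, true);
        while (true) {
          // reopen() returns a new reader only if the indexer has
          // committed new segments; otherwise it returns the same one.
          IndexReader newReader = reader.reopen();
          if (newReader != reader) {
            reader.close();
            reader = newReader;
          }
          IndexSearcher searcher = new IndexSearcher(reader);
          // ... serve queries with searcher ...
          Thread.sleep(5000);
        }
      }
    }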
In addition, I had tried, and have since backed away from, indexing
heavily on Solr while also searching on the same server. This would hold
segments and searchers open longer than the disk space would allow. I
think that part of Solr could be rewritten to better handle this NRT use
case.
See TestIndexWriterOnDiskFull (on trunk). Look for the test w/
LUCENE-2743 in the comment... but the other tests there also cover other
cases that may hit disk full.
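For context, those disk-full tests simulate a full disk with a mock
directory; a rough sketch in 2.9-era terms (MockRAMDirectory and
setMaxSizeInBytes are from Lucene's test framework, the rest is
illustrative):

    import java.io.IOException;
    import org.apache.lucene.analysis.WhitespaceAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.MockRAMDirectory;

    public class DiskFullSketch {
      public static void main(String[] args) throws Exception {
        MockRAMDirectory dir = new MockRAMDirectory();
        dir.setMaxSizeInBytes(16 * 1024); // simulate a 16 KB "disk"

        IndexWriter writer = new IndexWriter(dir, new WhitespaceAnalyzer(),
            IndexWriter.MaxFieldLength.UNLIMITED);
        try {
          for (int i = 0; i < 100000; i++) {
            Document doc = new Document();
            doc.add(new Field("body", "some text " + i,
                Field.Store.NO, Field.Index.ANALYZED));
            writer.addDocument(doc); // eventually throws: disk full
          }
          writer.commit();
        } catch (IOException expected) {
          // Disk full: the writer must roll back to the last commit,
          // leaving no partial (e.g. zero-length) files behind.
        } finally {
          writer.rollback(); // discards uncommitted state and closes
        }
      }
    }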
Can you post the exceptions you hit? (Are these logged?)
Yes, this could be a hardware issue...
Millions of docs indexed per hour
> can you enable IndexWriter's infoStream
I'd like to; however, the problem is only happening in production, and
the indexing volume is in the millions per hour. The log would get
clogged up. As it is, I have logging in Tomcat turned off because it was
filling up the SSD drive (yes, I know, we should fix that).
Hmmm... Jason, can you enable IndexWriter's infoStream, get the
corruption to happen again, and post that (along with "ls -l" output)?
Mike
On Thu, Nov 4, 2010 at 5:11 PM, Jason Rutherglen wrote:
> I'm still seeing this error after downloading the latest 2.9 branch
> version, compiling, copying ...
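Enabling the infoStream in Lucene 2.9 looks roughly like this (a sketch;
in Solr 1.4 it would have to be wired in where the IndexWriter is
created, and the paths here are made up):

    import java.io.File;
    import java.io.PrintStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    public class InfoStreamSketch {
      public static void main(String[] args) throws Exception {
        IndexWriter writer = new IndexWriter(
            FSDirectory.open(new File("/data/solr/index")),
            new StandardAnalyzer(Version.LUCENE_29),
            IndexWriter.MaxFieldLength.UNLIMITED);

        // Route IndexWriter's low-level diagnostics (flushes, merges,
        // commits, .del writes) to a dedicated file instead of stdout.
        writer.setInfoStream(
            new PrintStream(new File("/var/log/iw-infostream.log")));

        // ... index as usual; close flushes and commits ...
        writer.close();
      }
    }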
I'm still seeing this error after downloading the latest 2.9 branch
version, compiling, copying to Solr 1.4, and deploying. Basically, as
mentioned, the .del files are of zero length... Hmm...
On Wed, Oct 13, 2010 at 1:33 PM, Jason Rutherglen wrote:
> Thanks Robert, that Jira issue aptly describes ...
Thanks Robert, that Jira issue aptly describes what I'm seeing, I think.
On Wed, Oct 13, 2010 at 10:22 AM, Robert Muir wrote:
> if you are going to fill up your disk space all the time with solr
> 1.4.1, I suggest replacing the lucene jars with lucene jars from
> 2.9-branch (http://svn.apache.org/repos/asf/lucene/java/branches/lucene_2_9/) ...
There's a corrupt index exception thrown when opening the searcher. The
rest of the segment's files are OK, meaning the problem occurred while
writing the bit vector, well after the segment itself was written. I'm
guessing we're simply not verifying that the BV has been written
fully/properly.
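One way to pin down which files are damaged is CheckIndex (a real Lucene
2.9 tool; the path here is made up):

    import java.io.File;
    import org.apache.lucene.index.CheckIndex;
    import org.apache.lucene.store.FSDirectory;

    public class VerifyIndex {
      public static void main(String[] args) throws Exception {
        CheckIndex checker =
            new CheckIndex(FSDirectory.open(new File("/data/solr/index")));
        checker.setInfoStream(System.out); // per-segment details
        CheckIndex.Status status = checker.checkIndex();
        System.out.println(status.clean
            ? "index is clean"
            : "corrupt: " + status.numBadSegments + " bad segment(s)");
      }
    }

It can also be run from the command line via its main() against the
index directory.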
if you are going to fill up your disk space all the time with solr
1.4.1, I suggest replacing the lucene jars with lucene jars from
2.9-branch (http://svn.apache.org/repos/asf/lucene/java/branches/lucene_2_9/).
then you get the fix for https://issues.apache.org/jira/browse/LUCENE-2593 too.
I'm not certain whether we test this particular case, but we do have
several disk full tests.
But: are you seeing a corrupt index? I.e., an exception on open, on
searching, or in CheckIndex?
Or: do you see a disk-full exception when writing the .del file during
indexing, one that does not in fact corrupt the index?
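Those two cases surface differently; a sketch of telling them apart at
open time (CorruptIndexException is real Lucene 2.9 API and extends
IOException, so it must be caught first; the path is made up):

    import java.io.File;
    import java.io.IOException;
    import org.apache.lucene.index.CorruptIndexException;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.store.FSDirectory;

    public class OpenProbe {
      public static void main(String[] args) throws Exception {
        FSDirectory dir = FSDirectory.open(new File("/data/solr/index"));
        try {
          IndexReader reader = IndexReader.open(dir, true); // read-only
          System.out.println("opened cleanly: " + reader.numDocs() + " docs");
          reader.close();
        } catch (CorruptIndexException cie) {
          // The index itself is damaged (e.g. a truncated .del file).
          System.out.println("corrupt index: " + cie.getMessage());
        } catch (IOException ioe) {
          // An I/O problem (e.g. disk full) that need not imply corruption.
          System.out.println("I/O error: " + ioe.getMessage());
        }
      }
    }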
We have unit tests for running out of disk space? However, we have
Tomcat logs that fill up quickly and starve Solr 1.4.1 of space. The
main segments are probably not corrupted; however, routinely now there
are deletes (.del) files of length 0:
0 2010-10-12 18:35 _cc_8.del
Which is fundamental index corruption.
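A quick way to spot that state from code (a sketch; the path is made up,
and the zero-length check mirrors the "ls -l" output above):

    import java.io.File;

    public class FindEmptyDelFiles {
      public static void main(String[] args) {
        File indexDir = new File("/data/solr/index");
        File[] files = indexDir.listFiles();
        if (files == null) return; // not a directory
        for (File f : files) {
          // A .del file holds a segment's deleted-docs bit vector; it
          // should never be empty once written. Zero length means the
          // write was truncated (e.g. by a full disk).
          if (f.getName().endsWith(".del") && f.length() == 0) {
            System.out.println("suspect truncated file: " + f.getName());
          }
        }
      }
    }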