Hi All,

I am running Solr on a 64-bit HP-UX system. The total index size is about 5GB, and when I try to load any new document, Solr first tries to merge the existing segments and the merge fails. I can see a temp file growing inside the index directory to around 2GB before it fails with the exception below. It looks like the exception occurs once that file reaches Integer.MAX_VALUE bytes in size.
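For reference, here is a quick sanity check on why the ~2GB temp file lines up with Integer.MAX_VALUE (this snippet is only illustrative Java, not anything taken from Solr or Lucene):

    public class TwoGbCheck {
        public static void main(String[] args) {
            // Integer.MAX_VALUE = 2,147,483,647, the largest signed 32-bit value.
            long limitBytes = Integer.MAX_VALUE;
            double limitGb = limitBytes / (1024.0 * 1024.0 * 1024.0);
            // Prints roughly 2.0, i.e. about the size (in GB) the merge temp file
            // reaches in the index directory before the merge fails.
            System.out.println(limitBytes + " bytes is about " + limitGb + " GB");
        }
    }

The full exception from the merge thread is: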
Exception in thread "Lucene Merge Thread #0" org.apache.lucene.index.MergePolicy$MergeException: java.io.IOException: File too large (errno:27) at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:351) at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:315) Caused by: java.io.IOException: File too large (errno:27) at java.io.RandomAccessFile.writeBytes(Native Method) at java.io.RandomAccessFile.write(RandomAccessFile.java:456) at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.flushBuffer(SimpleFSDirectory.java:192) at org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:96) at org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:85) at org.apache.lucene.store.BufferedIndexOutput.close(BufferedIndexOutput.java:109) at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.close(SimpleFSDirectory.java:199) at org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:144) at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:357) at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:153) at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:5029) at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4614) at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:235) at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:291) ----------------------------------------------------------------------- The solrconfig.xml contains default values for <indexDefaults>, <mainIndex> sections as below. <indexDefaults>^M <!-- Values here affect all index writers and act as a default unless overridden. -->^M <useCompoundFile>false</useCompoundFile>^M ^M <mergeFactor>10</mergeFactor>^M <!-- If both ramBufferSizeMB and maxBufferedDocs is set, then Lucene will flush^M based on whichever limit is hit first. -->^M <!--<maxBufferedDocs>1000</maxBufferedDocs>-->^M ^M <!-- Sets the amount of RAM that may be used by Lucene indexing^M for buffering added documents and deletions before they are^M flushed to the Directory. -->^M <ramBufferSizeMB>32</ramBufferSizeMB>^M <!-- <maxMergeDocs>2147483647</maxMergeDocs> -->^M <maxFieldLength>10000</maxFieldLength>^M <writeLockTimeout>1000</writeLockTimeout>^M <commitLockTimeout>10000</commitLockTimeout>^M <!--<mergePolicy class="org.apache.lucene.index.LogByteSizeMergePolicy"/>-->^M <!--<mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler"/>-->^M </indexDefaults>^ <mainIndex>^M <!-- options specific to the main on-disk lucene index -->^M <useCompoundFile>false</useCompoundFile>^M <ramBufferSizeMB>32</ramBufferSizeMB>^M <mergeFactor>10</mergeFactor>^M <!-- Deprecated -->^M <!--<maxBufferedDocs>1000</maxBufferedDocs>-->^M <!--<maxMergeDocs>2147483647</maxMergeDocs>-->^M </mainIndex>^ Could anyone help me to resolve this exception? Regards, Uma -- View this message in context: http://lucene.472066.n3.nabble.com/index-merge-tp472904p829810.html Sent from the Solr - User mailing list archive at Nabble.com.