Thanks Israel, I've done a successful import using optimize=false.
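
In case it helps anyone else, the import was kicked off with a request along these lines (the /dataimport handler path and the default port are assumptions taken from the example config, adjust them to your own setup):

  http://localhost:8983/solr/dataimport?command=full-import&commit=true&optimize=false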

2009/11/11 Israel Ekpo <israele...@gmail.com>

> 2009/11/11 Licinio Fernández Maurelo <licinio.fernan...@gmail.com>
>
> > Hi folks,
> >
> > i'm getting this error while committing after a dataimport of only 12
> docs
> > !!!
> >
> > Exception while solr commit.
> > java.io.IOException: background merge hit exception: _3kta:C2329239 _3ktb:c11->_3ktb into _3ktc [optimize] [mergeDocStores]
> > at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2829)
> > at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2750)
> > at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:401)
> > at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
> > at org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:138)
> > at org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:66)
> > at org.apache.solr.handler.dataimport.SolrWriter.commit(SolrWriter.java:170)
> > at org.apache.solr.handler.dataimport.DocBuilder.finish(DocBuilder.java:208)
> > at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:185)
> > at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:333)
> > at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:393)
> > at org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:372)
> > Caused by: java.io.IOException: No hay espacio libre en el dispositivo (No space left on device)
> > at java.io.RandomAccessFile.writeBytes(Native Method)
> > at java.io.RandomAccessFile.write(RandomAccessFile.java:499)
> > at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexOutput.flushBuffer(SimpleFSDirectory.java:191)
> > at org.apache.lucene.store.BufferedIndexOutput.flushBuffer(BufferedIndexOutput.java:96)
> > at org.apache.lucene.store.BufferedIndexOutput.flush(BufferedIndexOutput.java:85)
> > at org.apache.lucene.store.BufferedIndexOutput.writeBytes(BufferedIndexOutput.java:75)
> > at org.apache.lucene.store.IndexOutput.writeBytes(IndexOutput.java:45)
> > at org.apache.lucene.index.CompoundFileWriter.copyFile(CompoundFileWriter.java:229)
> > at org.apache.lucene.index.CompoundFileWriter.close(CompoundFileWriter.java:184)
> > at org.apache.lucene.index.SegmentMerger.createCompoundFile(SegmentMerger.java:217)
> > at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:5089)
> > at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4589)
> > at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:235)
> > at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:291)
> >
> > Index info: 2,600,000 docs | 11 GB size
> > System info: 15 GB free disk space
> >
> > When attempting to commit, disk usage increases until Solr breaks ... it
> > looks like 15 GB is not enough space to do the merge/optimize.
> >
> > Any advice?
> >
> > --
> > Lici
> >
>
>
> Hi Licinio,
>
> During the optimization process the index can temporarily grow to
> approximately double its original size, and the remaining space on disk may
> not be enough for the task.
>
> What you are describing sounds like exactly that.
> --
> "Good Enough" is not good enough.
> To give anything less than your best is to sacrifice the gift.
> Quality First. Measure Twice. Cut Once.
>
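
On the disk-space point above, a rough back-of-envelope (my own estimate, not something measured): optimize rewrites all existing segments into one new segment, so while the merge runs the old index and the merged copy sit on disk side by side, and the CompoundFileWriter.copyFile frame in the trace suggests a further copy being made into the compound file:

  11 GB (current index) + ~11 GB (merged segment)  = ~22 GB transient usage
  plus the compound-file copy in the worst case    = up to ~33 GB

so the 15 GB that were free here run out before the merge can finish.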



-- 
Lici
