Erik,
There is little benefit from having more indexer threads than cores.
You have multiple indexers calling commit?  I suggest you make only one of them 
call commit.  Or use autoCommit.
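
If you go the autoCommit route, it is configured in solrconfig.xml inside the <updateHandler> section. A minimal sketch — the maxDocs/maxTime thresholds below are example values, not recommendations; tune them to your indexing load:

```xml
<!-- solrconfig.xml: let Solr commit automatically so indexer threads never call commit -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <!-- commit after this many pending documents... -->
    <maxDocs>10000</maxDocs>
    <!-- ...or after this many milliseconds, whichever comes first -->
    <maxTime>60000</maxTime>
  </autoCommit>
</updateHandler>
```

With this in place your mappers only add documents and Solr decides when to commit, so you don't open a new searcher per mapper.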

Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch



----- Original Message ----
> From: Erik Holstad <[EMAIL PROTECTED]>
> To: solr-user@lucene.apache.org
> Sent: Wednesday, September 24, 2008 4:16:06 PM
> Subject: Re: java.io.IOException: cannot read directory 
> org.apache.lucene.store.FSDirectory@/home/solr/src/apache-solr-nightly/example/solr/data/index:
>  list() returned null
> 
> Otis,
> 
> The machine we are running on has 4 cores, and that seems to make sense,
> since running four inserters also failed. So what you are saying is that
> one inserter uses one core? So we can only have as many threads calling
> commit() as we have cores?
> 
> Regards Erik
> 
> On Wed, Sep 24, 2008 at 12:48 PM, Otis Gospodnetic <
> [EMAIL PROTECTED]> wrote:
> 
> > Erik,
> >
> > Not answering your question directly, but how many cores does your Solr
> > machine have?  If it has 2 cores, for example, then running 6 indexers
> > against it likely doesn't make indexing faster.
> >
> > Otis
> > --
> > Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
> >
> >
> >
> > ----- Original Message ----
> > > From: Erik Holstad 
> > > To: solr-user@lucene.apache.org
> > > Sent: Wednesday, September 24, 2008 3:24:51 PM
> > > Subject: java.io.IOException: cannot read directory
> > > org.apache.lucene.store.FSDirectory@/home/solr/src/apache-solr-nightly/example/solr/data/index:
> > > list() returned null
> > >
> > > We are using Solr out of the box, with only a couple of changes in the
> > > solrconfig file.
> > >
> > > We are running a MapReduce job to import into Solr. Every map task
> > > creates one document and used to add and commit it to Solr. We got
> > > org.apache.solr.common.SolrException:
> > > Error_opening_new_searcher_exceeded_limit_of_maxWarmingSearchers4_try_again_later,
> > > which we solved by removing the commit statement from the MR job and
> > > adding auto-commit in solrconfig.
> > >
> > > We reran the job and got another exception: java.io.IOException: cannot
> > > read directory
> > > org.apache.lucene.store.FSDirectory@/home/solr/src/apache-solr-nightly/example/solr/data/index:
> > > list() returned null
> > > followed by: SEVERE: org.apache.lucene.store.LockObtainFailedException:
> > > Lock obtain timed out: SingleInstanceLock: write.lock
> > >
> > > This was happening when the number of mappers writing to Solr was 6; we
> > > lowered the number of mappers to 3 and everything worked fine.
> > >
> > > Does anyone know what is happening, and how we can use more than 3
> > > input sources at the same time?
> > >
> > > Regards Erik
> >
> >
