Ran some more tests. When I'm only using <autoCommit> with
<maxDocs>25000</maxDocs> I get the exceptions below.
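For reference, the setting sits in the updateHandler section of solrconfig.xml,
roughly like this (a from-memory sketch; only the maxDocs value is the real
one):

    <updateHandler class="solr.DirectUpdateHandler2">
      <autoCommit>
        <maxDocs>25000</maxDocs>
      </autoCommit>
    </updateHandler>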

org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: SingleInstanceLock: write.lock
        at org.apache.lucene.store.Lock.obtain(Lock.java:85)
        at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1140)
        at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:938)
        at org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:116)
        at org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHandler.java:122)
        at org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHandler2.java:167)
        at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:221)
        at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:59)
        at org.apache.solr.handler.XmlUpdateRequestHandler.processUpdate(XmlUpdateRequestHandler.java:196)
        at org.apache.solr.handler.XmlUpdateRequestHandler.handleRequestBody(XmlUpdateRequestHandler.java:123)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1204)
        at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:303)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:232)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1089)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:365)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:211)
        at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:13

request: http://ss0:8983/solr/update?wt=javabin&version=2.2
        at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:343)
        at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:183)
        at org.apache.solr.client.solrj.request.UpdateRequest.process(UpdateRequest.java:217)
        at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:63)
        at SolrTasks.insertSetupEnd(SolrTasks.java:176)
        at SolrTasks.insert(SolrTasks.java:158)
        at SolrImportMR.map(SolrImportMR.java:81)
        at org.apache.hadoop.hbase.mapred.TableMap.map(TableMap.java:42)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:47)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:227)
        at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2209)

This happens when using 6 mappers as input to Solr.

So I lowered the number of mappers to 3, and then everything worked. But that
was not an optimal solution, so what we ended up doing was to send the output
of all the mappers to one reducer, which does the commit for all of them.
This seems to work fine even for more than 3 mappers; a rough sketch of that
reducer follows below.
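Here is roughly what the reducer side looks like (the class name, the
hard-coded URL, and the key/value types here are made up for illustration;
the real job passes the Solr URL through the JobConf). The mappers call
server.add() and never commit; this single reducer issues the one commit
when it closes:

    import java.io.IOException;
    import java.net.MalformedURLException;
    import java.util.Iterator;

    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

    public class SolrCommitReducer extends MapReduceBase
        implements Reducer<Text, Text, NullWritable, NullWritable> {

      private SolrServer server;

      public void configure(JobConf job) {
        try {
          // Points at the same Solr instance the mappers add documents to.
          server = new CommonsHttpSolrServer("http://ss0:8983/solr");
        } catch (MalformedURLException e) {
          throw new RuntimeException(e);
        }
      }

      public void reduce(Text key, Iterator<Text> values,
                         OutputCollector<NullWritable, NullWritable> output,
                         Reporter reporter) throws IOException {
        // Nothing to do per record: the mappers have already sent the
        // documents. The map output only exists to funnel every mapper
        // into this single reducer.
      }

      public void close() throws IOException {
        try {
          // The one and only commit for the whole job.
          server.commit();
        } catch (Exception e) {
          throw new IOException("commit failed: " + e.getMessage());
        }
      }
    }

The job also calls conf.setNumReduceTasks(1), so there is exactly one reducer
and therefore exactly one commit per job.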

Regards Erik

On Wed, Sep 24, 2008 at 2:36 PM, Erik Holstad <[EMAIL PROTECTED]> wrote:

> That is exactly what we are doing now: adding all the documents to the server
> in the Map phase of the job and sending them all to one reducer, which
> commits them all.
> Seems to be working.
>
> Thanks Erik
>
>
> On Wed, Sep 24, 2008 at 2:27 PM, Otis Gospodnetic <
> [EMAIL PROTECTED]> wrote:
>
>> Erik,
>> There is little benefit from having more indexer threads than cores.
>> You have multiple indexers calling commit?  I suggest you make only one of
>> them call commit.  Or use autoCommit.
>>
>> Otis
>> --
>> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>>
>>
>>
>> ----- Original Message ----
>> > From: Erik Holstad <[EMAIL PROTECTED]>
>> > To: solr-user@lucene.apache.org
>> > Sent: Wednesday, September 24, 2008 4:16:06 PM
>> > Subject: Re: java.io.IOException: cannot read directory org.apache.lucene.store.FSDirectory@/home/solr/src/apache-solr-nightly/example/solr/data/index: list() returned null
>> >
>> > Otis,
>> >
>> > The machine we are running on has 4 cores, and that seems to make sense,
>> > since running four inserters also failed. So what you are saying is that
>> > one inserter uses 1 core? So we can only have as many methods calling
>> > commit() as we have cores?
>> >
>> > Regards Erik
>> >
>> > On Wed, Sep 24, 2008 at 12:48 PM, Otis Gospodnetic <
>> > [EMAIL PROTECTED]> wrote:
>> >
>> > > Erik,
>> > >
>> > > Not answering your question directly, but how many cores does your
>> Solr
>> > > machine have?  If it has 2 cores, for example, then running 6 indexers
>> > > against it likely doesn't make indexing faster.
>> > >
>> > > Otis
>> > > --
>> > > Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>> > >
>> > >
>> > >
>> > > ----- Original Message ----
>> > > > From: Erik Holstad
>> > > > To: solr-user@lucene.apache.org
>> > > > Sent: Wednesday, September 24, 2008 3:24:51 PM
>> > > > Subject: java.io.IOException: cannot read directory
>> > >
>> > > > org.apache.lucene.store.FSDirectory@/home/solr/src/apache-solr-nightly/example/solr/data/index: list() returned null
>> > > >
>> > > > We are using Solr out of the box, with only a couple of changes in
>> > > > the solrconfig file.
>> > > >
>> > > > We are running a MapReduce job to import into Solr. Every map creates
>> > > > one document and used to add and commit it to Solr. We got
>> > > > org.apache.solr.common.SolrException:
>> > > > Error opening new searcher. exceeded limit of maxWarmingSearchers=4, try again later,
>> > > > which we solved by removing the commit statement from the MR job and
>> > > > adding auto-commit in solrconfig.
>> > > >
>> > > > We reran the job and got another exception:
>> > > > java.io.IOException: cannot read directory
>> > > > org.apache.lucene.store.FSDirectory@/home/solr/src/apache-solr-nightly/example/solr/data/index:
>> > > > list() returned null
>> > > > followed by:
>> > > > SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock
>> > > > obtain timed out: SingleInstanceLock: write.lock
>> > > >
>> > > > This was happening when the number of mappers writing to Solr was 6;
>> > > > we lowered the number of mappers to 3 and everything worked fine.
>> > > >
>> > > > Does anyone know what is happening, and how we can use more than 3
>> > > > input sources at the same time?
>> > > >
>> > > > Regards Erik
>> > >
>> > >
>>
>>
>
