Hi.

The changes made in solrconfig were mostly done after the failure, for
example increasing the Lucene buffer size.
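By buffer size I mean the Lucene indexing buffer setting in solrconfig.xml,
i.e. something along these lines (the value here is only an example; the
stock config ships with 32):

  <indexDefaults>
    <ramBufferSizeMB>64</ramBufferSizeMB>
  </indexDefaults>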

Upgraded to 1.3.0 today, but the old version was a nightly from 9/1, so a
couple of weeks old.

Will send a full stack trace ASAP; just running a job right now, so it
should only be a couple of minutes.

Got the exception both when starting fresh and with documents already in
Solr.

Regards Erik



On Wed, Sep 24, 2008 at 1:16 PM, Grant Ingersoll <[EMAIL PROTECTED]> wrote:

> Can you share more about your setup, specifically what you changed in your
> solrconfig file?  What version of Solr (looks like a nightly, but from
> when)?  What did you set auto-commit to be?  Can you provide the full stack
> trace?  Also, were you starting fresh when you got the second exception?
>
>
>
> On Sep 24, 2008, at 3:24 PM, Erik Holstad wrote:
>
>> We are using Solr out of the box, with only a couple of changes in the
>> solrconfig file.
>>
>> We are running a MapReduce job to import into Solr. Each map creates one
>> document, and it used to add and commit it to Solr. We got
>> org.apache.solr.common.SolrException:
>>
>> Error_opening_new_searcher_exceeded_limit_of_maxWarmingSearchers4_try_again_later,
>>
>> which we solved by removing the commit statement from the MR job and
>> adding auto-commit in solrconfig.
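>>
>> For reference, each map task did roughly the following (a simplified
>> SolrJ sketch, not our exact code; the URL and field names are made up):
>>
>>   import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
>>   import org.apache.solr.common.SolrInputDocument;
>>
>>   class SolrIndexSketch {
>>     static void indexOne(String key, String value) throws Exception {
>>       CommonsHttpSolrServer server =
>>           new CommonsHttpSolrServer("http://localhost:8983/solr");
>>       SolrInputDocument doc = new SolrInputDocument();
>>       doc.addField("id", key);    // one document per map
>>       doc.addField("text", value);
>>       server.add(doc);
>>       server.commit();  // the per-document commit we later removed
>>     }
>>   }
>>
>> (In a real job the server instance would be created once per task, not
>> once per document.) The auto-commit we enabled instead is the
>> <autoCommit> block under <updateHandler> in solrconfig.xml; the values
>> below are illustrative, not necessarily what we used:
>>
>>   <autoCommit>
>>     <maxDocs>10000</maxDocs>
>>     <maxTime>60000</maxTime>
>>   </autoCommit>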
>>
>> We reran the job and got another exception:
>>
>> java.io.IOException: cannot read directory
>> org.apache.lucene.store.FSDirectory@/home/solr/src/apache-solr-nightly/example/solr/data/index:
>> list() returned null
>>
>> followed by:
>>
>> SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain
>> timed out: SingleInstanceLock: write.lock
>>
>> This was happening when the number of mappers writing to Solr was 6; we
>> lowered the number of mappers to 3 and everything worked fine.
>>
>> Does anyone know what is happening, and how we can use more than 3 input
>> sources at the same time?
>>
>> Regards Erik
>>
