Hi Otis,

I'm using 3.2 because I can't get Velocity to run on 3.5.

I've changed my writeLockTimeout from 1000 to 10000 and my
commitLockTimeout from 10000 to 50000.
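
In other words, the relevant section of my solrconfig.xml now looks
roughly like this (the indexDefaults block, assuming the stock 3.2
layout):

  <indexDefaults>
    <!-- raised from the 3.2 defaults of 1000 and 10000 -->
    <writeLockTimeout>10000</writeLockTimeout>
    <commitLockTimeout>50000</commitLockTimeout>
  </indexDefaults>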

Running on a large EC2 box, which has 2 virtual cores.  I don't know how
to find out the number of concurrent indexer threads.  Is that the same
as maxWarmingSearchers?  If that's the case, I've changed it from 2 to 5
(see the snippet below).  I have about 12 processes reading from and
writing to Solr concurrently at the moment, but this is just a test and
I'm planning to raise that number to 50-100.
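
For reference, here's the line I changed, assuming maxWarmingSearchers is
the setting you meant:

  <!-- query-side setting in solrconfig.xml; raised from the default of 2 -->
  <maxWarmingSearchers>5</maxWarmingSearchers>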

Thanks,
Eric



On Fri, Dec 16, 2011 at 10:14 AM, Otis Gospodnetic <
otis_gospodne...@yahoo.com> wrote:

> Hi Eric,
>
> And you are using the latest version of Solr, 3.5.0?
> What is the timeout in solrconfig.xml?
> How many CPU cores does the machine have and how many concurrent indexer
> threads do you have running?
>
> Otis
> ----
> Performance Monitoring SaaS for Solr -
> http://sematext.com/spm/solr-performance-monitoring/index.html
>
>
>
> >________________________________
> > From: Eric Tang <eric.x.t...@gmail.com>
> >To: solr-user@lucene.apache.org
> >Sent: Friday, December 16, 2011 10:08 AM
> >Subject: Lock obtain timed out
> >
> >Hi,
> >
> >I'm doing a lot of reads and writes against a single Solr server
> >(roughly 50 per second), and have around 300,000 documents in the
> >index.
> >
> >Now, every 5 minutes or so, I get this exception:
> >SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain
> >timed out: NativeFSLock@./solr/data/index/write.lock
> >
> >And I have to restart my Solr process.
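> >
> >The NativeFSLock in the message suggests I'm on the default lockType;
> >if I'm reading the example solrconfig.xml right, that's set as:
> >
> >  <mainIndex>
> >    <!-- native OS file locking; the 3.x default -->
> >    <lockType>native</lockType>
> >  </mainIndex>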
> >
> >I've done some googling; some people have suggested raising the Linux
> >open-file limit or changing the mergeFactor (see below), but neither
> >worked for me.  Does anyone have insights into this?
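> >
> >For reference, mergeFactor lives in the indexDefaults section of
> >solrconfig.xml; I believe the stock 3.x setting is:
> >
> >  <mergeFactor>10</mergeFactor>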
> >
> >
> >Thanks,
> >Eric
> >
> >
> >
>
