Hi,

I'm doing a lot of reads and writes against a single Solr server (on the
order of 50 per second), and have around 300,000 documents in the
index.

Now every 5 minutes I get this exception:
SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain
timed out: NativeFSLock@./solr/data/index/write.lock

And I have to restart my Solr process.

I've done some googling; some people have suggested raising the Linux open
file limit or changing the merge factor (a rough sketch of what I tried is
below), but neither of those worked. Does anyone have insights into this?
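
For reference, the changes I experimented with look roughly like this. These
are illustrative values based on a stock solrconfig.xml, not my exact
settings, and depending on the Solr version the elements may live under
<indexDefaults>/<mainIndex> or <indexConfig>:

  <indexDefaults>
    <!-- lower values keep fewer segments on disk, so fewer open files -->
    <mergeFactor>10</mergeFactor>
    <!-- how long (ms) a writer waits on write.lock before the
         LockObtainFailedException above is thrown -->
    <writeLockTimeout>1000</writeLockTimeout>
    <!-- matches the NativeFSLock named in the exception -->
    <lockType>native</lockType>
  </indexDefaults>

Raising the open file limit was just bumping "ulimit -n" (nofile in
/etc/security/limits.conf) for the user running Solr.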


Thanks,
Eric
