Re: LockObtainFailedException after trying to create cores on second SolrCloud instance

2012-06-14 Thread Daniel Brügge
Aha, OK. That was new to me. Will check this. Thanks. On Thu, Jun 14, 2012 at 3:52 PM, Yury Kats wrote: > On 6/14/2012 2:05 AM, Daniel Brügge wrote: > > Will check later to use different data dirs for the core on each instance. But because each Solr sits in its own openvz instance (virtual server respectively) ...

Re: LockObtainFailedException after trying to create cores on second SolrCloud instance

2012-06-14 Thread Yury Kats
On 6/14/2012 2:05 AM, Daniel Brügge wrote: > Will check later to use different data dirs for the core on each instance. But because each Solr sits in its own openvz instance (virtual server respectively) they should be totally separated. At least from my point of understanding virtualization ...

Re: LockObtainFailedException after trying to create cores on second SolrCloud instance

2012-06-14 Thread Daniel Brügge
OK, I think I have found it. When starting the 4 Solr instances via start.jar I always provided the data directory property *-Dsolr.data.dir=/home/myuser/data*. After removing this it worked fine. What is weird is that all 4 instances are totally separated, so that instance-2 should never con...
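For illustration, a minimal sketch of starting several instances with distinct data directories via the example Jetty start.jar; the ports and paths below are hypothetical, not taken from the thread:

    # instance 1
    cd /home/myuser/solr-1/example
    java -Xms512M -Xmx1024M -Djetty.port=8983 -Dsolr.data.dir=/home/myuser/data1 -jar start.jar

    # instance 2 -- different port and, crucially, a different data dir
    cd /home/myuser/solr-2/example
    java -Xms512M -Xmx1024M -Djetty.port=8984 -Dsolr.data.dir=/home/myuser/data2 -jar start.jar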

Re: LockObtainFailedException after trying to create cores on second SolrCloud instance

2012-06-13 Thread Daniel Brügge
Will check later to use different data dirs for the core on each instance. But because each Solr sits in its own openvz instance (virtual server respectively) they should be totally separated. At least from my point of understanding virtualization. Will check and get back here... Thanks. On Wed...

Re: LockObtainFailedException after trying to create cores on second SolrCloud instance

2012-06-13 Thread Casey Callendrello
What command are you using to create the cores? I had this sort of problem, and it was because I'd accidentally created two cores with the same instanceDir within the same Solr process. Make sure you don't have that kind of collision. The easiest way is to specify an explicit instanceDir and dataDir ...
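For reference, a hedged example of such a CREATE call through the CoreAdmin API; the host, core name, collection and paths are made up for illustration:

    curl 'http://localhost:8983/solr/admin/cores?action=CREATE&name=coll1_shard1&collection=coll1&instanceDir=/home/myuser/solr/coll1_shard1&dataDir=/home/myuser/solr/coll1_shard1/data'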

Re: LockObtainFailedException after trying to create cores on second SolrCloud instance

2012-06-13 Thread Mark Miller
That's an interesting data dir location: NativeFSLock@/home/myuser/data/index/write.lock Where are the other data dirs located? Are you sharing one drive or something? It looks like something already has a writer lock - are you sure another Solr instance is not running somehow? On Wed, Jun 13, 2012...
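Two quick checks along these lines (a generic sketch; only the lock path is taken from the error above):

    # is another Solr/Jetty process still holding the index open?
    ps aux | grep start.jar

    # is there a leftover lock file from an earlier run?
    ls -l /home/myuser/data/index/write.lock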

Re: LockObtainFailedException after trying to create cores on second SolrCloud instance

2012-06-13 Thread Daniel Brügge
BTW: I am running the Solr instances using -Xms512M -Xmx1024M, so not so little memory. Daniel On Wed, Jun 13, 2012 at 4:28 PM, Daniel Brügge <daniel.brue...@googlemail.com> wrote: > Hi, > I am struggling with creating multiple collections on a 4-instance SolrCloud setup: > I have ...

Re: LockObtainFailedException

2011-08-12 Thread Naveen Gupta
Hi Peter, I found the issue. Actually we were getting this exception because of JVM heap space. I allocated 512 MB Xms and 1024 MB Xmx, and finally increased the time limit for the write lock to 20 secs .. things are working fine ... but still it did not help ... On closer analysis of the docs we were indexing, ...

Re: LockObtainFailedException

2011-08-11 Thread Peter Sturge
Optimizing indexing time is a very different question. I'm guessing the 3mins+ time you refer to is the commit time. There are a whole host of things to take into account regarding indexing, like: number of segments, schema, how many fields, storing fields, omitting norms, caching, autowarming, s...

Re: LockObtainFailedException

2011-08-11 Thread Naveen Gupta
Yes, this was happening because of the JVM heap size. But the real issue is that as our index size grows (very large), indexing takes a very long time (using streaming). Earlier, for indexing 15,000 docs at a time (commit after 15,000 docs), it was taking 3 mins 20 secs; after deleting t...

Re: LockObtainFailedException

2011-08-11 Thread Peter Sturge
Hi, When you get this exception with no other error or explanation in the logs, it is almost always because the JVM has run out of memory. Have you checked/profiled your memory usage/GC during the stream operation? On Thu, Aug 11, 2011 at 3:18 AM, Naveen Gupta wrote: > Hi, > We are doing st...
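A lightweight way to watch heap and GC while the streaming update runs, assuming a Sun/Oracle JDK with the standard jstat tool (the PID and interval are placeholders):

    # sample GC and heap occupancy of the Solr JVM every 5 seconds
    jstat -gcutil <solr-pid> 5000

    # or enable GC logging when starting Solr
    java -verbose:gc -XX:+PrintGCDetails -Xloggc:gc.log -jar start.jar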

Re: LockObtainFailedException

2007-09-27 Thread Chris Hostetter
In "normal" solr usage, where Solr is the only thing writing to your index, you should never get a lock timeout ... typical reasosn for this to happen are if your servlet container crashed or was shutdown uncleanly and Solr wasn't able to clean up it's lock file (check your logs) There is an

Re: LockObtainFailedException

2007-09-27 Thread Jae Joo
In solrconfig.xml: false 10 25000 1400 500 1000 1. Is the writeLockTimeout too small? Thanks, Jae On 9/27/07, matt davies <[EMAIL PROTECTED]> wrote: > quick fix: look for a Lucene lock file in your tmp directory and delete it, then restart Solr; it should start.
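For orientation, a writeLockTimeout setting in solrconfig.xml of that era typically looked roughly like this; the values shown are the stock example defaults, not necessarily Jae's:

    <indexDefaults>
      <useCompoundFile>false</useCompoundFile>
      <mergeFactor>10</mergeFactor>
      <maxFieldLength>10000</maxFieldLength>
      <!-- milliseconds to wait for the Lucene write lock before failing -->
      <writeLockTimeout>1000</writeLockTimeout>
    </indexDefaults>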

Re: LockObtainFailedException

2007-09-27 Thread matt davies
Quick fix: look for a Lucene lock file in your tmp directory and delete it, then restart Solr; it should start. I am an idiot though, so be careful; in fact, I'm worse than an idiot, I know a little :-) You got a lock file somewhere though, and deleting that will help you out. For me it was in ...
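A cautious sketch of that clean-up; the paths are examples (older Lucene versions kept lock files in the system tmp dir, newer ones inside the index directory), and Solr should be stopped first:

    # stop Solr, then look for stale lock files
    ls /tmp/lucene-*-write.lock 2>/dev/null
    ls /path/to/solr/data/index/write.lock 2>/dev/null

    # delete only if no Solr/Lucene process is still running
    rm /path/to/solr/data/index/write.lock

    # then restart Solr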