Aha, OK. That was new to me. Will check this. Thanks.
On Thu, Jun 14, 2012 at 3:52 PM, Yury Kats wrote:
OK, I think I have found it. When starting the 4 Solr instances via
start.jar, I always provided the data directory property via
-Dsolr.data.dir=/home/myuser/data
After removing this it worked fine. What is weird is that all 4 instances
are totally separated, so instance-2 should never con
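A minimal sketch of the two safe variants, assuming Jetty via start.jar
(the ports and paths below are hypothetical):

    # Variant 1: omit solr.data.dir entirely, so each core falls back to
    # the default data/ directory under its own instanceDir
    java -jar start.jar

    # Variant 2: if the property is passed, give every instance its own path
    java -Djetty.port=8981 -Dsolr.data.dir=/home/myuser/data1 -jar start.jar
    java -Djetty.port=8982 -Dsolr.data.dir=/home/myuser/data2 -jar start.jar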
Will check later to use different data dirs for the core on
each instance.
But because each Solr sits in its own OpenVZ instance (i.e. its own
virtual server), they should be totally separated. At least
that's my understanding of virtualization.
Will check and get back here...
Thanks.
On Wed
What command are you using to create the cores?
I had this sort of problem, and it was because I'd accidentally created
two cores with the same instanceDir within the same SOLR process. Make
sure you don't have that kind of collision. The easiest way is to
specify an explicit instanceDir and dataDir.
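For reference, creating cores with explicit directories via the CoreAdmin
API might look like this (host, core names and paths are placeholders):

    # one distinct instanceDir and dataDir per core avoids the collision
    curl 'http://localhost:8983/solr/admin/cores?action=CREATE&name=core1&instanceDir=/home/myuser/solr/core1&dataDir=/home/myuser/solr/core1/data'
    curl 'http://localhost:8983/solr/admin/cores?action=CREATE&name=core2&instanceDir=/home/myuser/solr/core2&dataDir=/home/myuser/solr/core2/data'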
That's an interesting data dir location:
NativeFSLock@/home/myuser/data/index/write.lock
Where are the other data dirs located? Are you sharing one drive or
something? It looks like something already has a writer lock - are you sure
another Solr instance is not running somehow?
On Wed, Jun 13, 20
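A quick way to check both guesses, assuming lsof is available (the lock
path is taken from the log line above):

    # does any process still hold the Lucene write lock?
    lsof /home/myuser/data/index/write.lock

    # is another Solr/Jetty JVM still running?
    ps aux | grep -i 'start.jar'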
BTW: I am running the Solr instances using -Xms512M -Xmx1024M,
so not so little memory.
Daniel
On Wed, Jun 13, 2012 at 4:28 PM, Daniel Brügge <daniel.brue...@googlemail.com> wrote:
> Hi,
>
> I am struggling with creating multiple collections on a 4-instance
> SolrCloud setup:
>
> I have
Hi Peter,
I found the issue.
Actually we were getting this exception because of JVM heap space. I
allocated Xms 512M and Xmx 1024M, and finally increased the write lock
timeout to 20 secs .. things are working fine ... but still it did not help ...
On closer analysis of the docs which we were indexing,
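As a sketch of the two knobs mentioned above (the heap flags are standard
JVM options; writeLockTimeout is configured in solrconfig.xml, in
milliseconds):

    # heap settings as described
    java -Xms512m -Xmx1024m -jar start.jar

    # write lock timeout of 20 secs, set in solrconfig.xml:
    #   <writeLockTimeout>20000</writeLockTimeout>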
Optimizing indexing time is a very different question.
I'm guessing the 3mins+ time you refer to is the commit time.
There are a whole host of things to take into account regarding
indexing, like: number of segments, schema, how many fields, storing
fields, omitting norms, caching, autowarming, s
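One way to see how much of the total is commit cost is to decouple the
commit from the upload (sketch; the URL and file name are placeholders):

    # stream the documents without committing...
    curl 'http://localhost:8983/solr/update' -H 'Content-Type: text/xml' --data-binary @docs.xml

    # ...then time the commit on its own
    time curl 'http://localhost:8983/solr/update' -H 'Content-Type: text/xml' --data-binary '<commit/>'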
Yes, this was happening because of JVM heap size.
But the real issue is that as our index size grows (very large),
indexing is taking very long (using streaming).
Earlier, indexing 15,000 docs at a time (commit after 15,000 docs)
was taking 3 mins 20 secs;
after deleting t
Hi,
When you get this exception with no other error or explanation in
the logs, it is almost always because the JVM has run out of memory.
Have you checked/profiled your memory usage/GC during the stream operation?
On Thu, Aug 11, 2011 at 3:18 AM, Naveen Gupta wrote:
> Hi,
>
> We are doing st
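For the profiling question, a minimal way to watch GC during the stream
operation (standard HotSpot flags for JVMs of that era; the log path is
arbitrary):

    # log GC activity while indexing; look for long pauses and a heap
    # that stays pinned near Xmx
    java -Xms512m -Xmx1024m \
         -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
         -Xloggc:/tmp/solr-gc.log \
         -jar start.jar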
In "normal" Solr usage, where Solr is the only thing writing to your
index, you should never get a lock timeout ... typical reasons for this to
happen are if your servlet container crashed or was shut down uncleanly and
Solr wasn't able to clean up its lock file (check your logs)
There is an
In solrconfig.xml,
<useCompoundFile>false</useCompoundFile>
<mergeFactor>10</mergeFactor>
<maxBufferedDocs>25000</maxBufferedDocs>
<maxMergeDocs>1400</maxMergeDocs>
<maxFieldLength>500</maxFieldLength>
<writeLockTimeout>1000</writeLockTimeout>
<commitLockTimeout>1</commitLockTimeout>
Is writeLockTimeout too small?
Thanks,
Jae
On 9/27/07, matt davies <[EMAIL PROTECTED]> wrote:
Quick fix:
look for a Lucene lock file in your tmp directory and delete it, then
restart Solr; it should start.
I am an idiot though, so be careful; in fact, I'm worse than an
idiot, I know a little
:-)
You got a lock file somewhere though; deleting that will help you
out. For me it was in
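In that spirit, the cleanup looks roughly like this (lock file name and
location vary by lock factory and Solr version, so the paths below are
guesses; make sure Solr is really stopped first):

    # stop Solr, then remove the stale Lucene lock file
    rm /tmp/lucene-*-write.lock      # SimpleFSLockFactory: lock in the tmp dir
    rm /path/to/index/write.lock     # NativeFSLockFactory: lock in the index dir
    # ...then restart Solr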