>
> Surely to start with 5 zk's (or in fact any odd number - it
> could be 21 even), and from a single failure you drop to an
> even number - then there is the danger of NOT getting quorum.
>
> So ... I can only assume that there is a mechanism in place
> inside zk to guarantee this.
> Thanks svante.
>
> What if, in a cluster of 5 zookeepers, only 1 zookeeper goes down: can a
> zookeeper election still occur with 4 (an even number of) zookeepers alive?
>
> With Regards
> Aman Tandon
>
> On Tue, Mar 3, 2015 at 6:35 PM, svante karlsson wrote:
>
synchronous update of state, plus the requirement that more than half the
zookeepers are alive (and in sync), makes it impossible to have a "split
brain" situation, i.e. when you partition a network and get, let's say, 3 alive
on one side and 2 on the other.
In this case the 2-node side stops serving requests.
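The majority rule described here can be sketched in a few lines of Python, using the thread's own 3/2 partition of a 5-node ensemble:

```python
def has_quorum(alive: int, ensemble: int) -> bool:
    """A partition keeps serving only if it holds a strict majority."""
    return alive > ensemble // 2

# The 5-node ensemble partitioned 3/2, as in the example above:
print(has_quorum(3, 5))  # majority side keeps serving -> True
print(has_quorum(2, 5))  # minority side stops serving -> False
```

Note that an even split of an even ensemble (2 of 4) leaves neither side with a quorum, which is why odd ensemble sizes are preferred.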
You should have enough memory to fit your whole database in the disk cache, and
then some. I prefer to have at least twice that, to accommodate starting new
searchers while still serving from the "old" one.
With less than that, performance drops a lot.
> Solr home: 185G
If that is your database size then you nee
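Following the twice-the-index guideline from earlier in the thread, a rough sizing sketch for the 185G Solr home quoted above (an estimate, not a rule):

```python
solr_home_gb = 185
# Enough RAM to cache the whole index once, doubled so a new searcher
# can warm up while the old one still serves (per the advice above).
recommended_ram_gb = 2 * solr_home_gb
print(recommended_ram_gb)  # 370
```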
ZK needs a quorum to stay functional, so 3 servers handle one node failure and
5 handle 2 node failures. If you run Solr with 1 replica per shard then stick
to 3 ZK. If you use 2 replicas, use 5 ZK.
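The 3-handles-1 / 5-handles-2 rule is just majority arithmetic; a quick check:

```python
def failures_tolerated(ensemble: int) -> int:
    """An ensemble of n nodes needs n//2 + 1 alive; it survives losing the rest."""
    quorum = ensemble // 2 + 1
    return ensemble - quorum

for n in (3, 5, 7):
    print(n, failures_tolerated(n))  # 3 -> 1, 5 -> 2, 7 -> 3
```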
>
lt operator.
>
> -- Jack Krupansky
>
> -----Original Message-----
> From: svante karlsson
> Sent: Thursday, January 23, 2014 6:42 AM
> To: solr-user@lucene.apache.org
> Subject: how to write an efficient query with a subquery to restrict the
> search space?
>
>
> I have
or tokenized
> searches, text_general is a good place to start. Pardon me if this is
> repeating
> what you already know
>
> Lots of string types sometimes lead people with DB backgrounds to
> search with *like*-style wildcards, which will be slow, FWIW.
>
> Best,
> Erick
>
> On S
inefficient to post one at a time, but I've not done any specific testing to
know if 1000 is better than 500.
What we're doing now is trying to figure out how to get the query
performance up, since it's not where we need it to be, so we're not done
either...
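A minimal sketch of the batched posting described above, assuming Solr's JSON update handler; the URL, the `id` field, and the batch size of 1000 are illustrative:

```python
import json
from itertools import islice
from urllib import request

def batches(docs, size=1000):
    """Yield successive lists of at most `size` docs."""
    it = iter(docs)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

def post_batch(url, docs):
    """POST one batch as JSON to a Solr update handler (URL is an assumption)."""
    body = json.dumps(docs).encode("utf-8")
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req)

# No server is contacted here: just show how 2500 docs split into batches.
docs = [{"id": str(i)} for i in range(2500)]
sizes = [len(b) for b in batches(docs)]
print(sizes)  # [1000, 1000, 500]
```

Running the batches from two threads, as described above, then amounts to splitting the batch list between workers that each call `post_batch`.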
2014/1/25 svante karlsson
kbs.
> >
> > -----Original Message-----
> > From: saka.csi...@gmail.com [mailto:saka.csi...@gmail.com] On Behalf Of
> > svante karlsson
> > Sent: Friday, January 24, 2014 5:05 PM
> > To: solr-user@lucene.apache.org
> > Subject: Re: Solr server requirements f
I just indexed 100 million db docs (records) with 22 fields (4 multivalued)
in 9524 sec using libcurl.
11 million took 763 seconds, so the speed drops somewhat with increasing
db size.
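The two timings quoted above imply the slowdown directly (same numbers, just divided out):

```python
# Figures from the message above.
small_rate = 11_000_000 / 763     # docs/s at the 11M mark
large_rate = 100_000_000 / 9524   # docs/s over the full 100M run
print(round(small_rate), round(large_rate))  # roughly 14400 vs 10500 docs/s
```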
We write 1000 docs (just an arbitrary number) in each request from two
threads. If you will be using solrcloud you
I have a solr db containing 1 billion records that I'm trying to use in a
NoSQL fashion.
What I want to do is find the best matches using all search terms but
restrict the search space to the most unique terms.
In this example I know that val2 and val4 are rare terms and val1 and val3
are more common
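One common way to express this in Solr is to rank on all terms with `q` while restricting (and caching) on the rare ones with a filter query. A sketch of the request parameters, where the field name `f` is an assumption and the val* terms come from the example:

```python
from urllib.parse import urlencode

# q scores across all four terms; fq first intersects the result set down
# to docs containing both rare terms, shrinking the space to be ranked.
params = urlencode({
    "q": "f:val1 f:val2 f:val3 f:val4",
    "fq": "f:val2 AND f:val4",
    "rows": 10,
})
print(params)
```

Filter queries are also cached independently of `q`, so repeated restrictions on the same rare terms get cheaper over time.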
to
> check the component chain of your /select handler to make sure tvComponent
> isn't included (or re-index with term vectors enabled).
>
> Cheers,
>
> Timothy Potter
> Sr. Software Engineer, LucidWorks
> www.lucidworks.com
>
I've been playing around with solr 4.6.0 for some weeks and I'm trying to
get a solrcloud configuration running.
I've installed two physical machines and I'm trying to set up 4 shards on
each.
I installed a zookeeper on each host as well.
I uploaded a config to zookeeper with
/opt/solr-4.6.0/exa
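For Solr 4.6 the config upload is typically done with the bundled zkcli script; a sketch, where the hostnames, confdir, and confname below are assumptions:

```shell
# Upload a config set to ZooKeeper (paths/hosts are illustrative).
/opt/solr-4.6.0/example/scripts/cloud-scripts/zkcli.sh \
  -zkhost host1:2181,host2:2181 \
  -cmd upconfig \
  -confdir /opt/solr-4.6.0/example/solr/collection1/conf \
  -confname myconf
```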