Why are you using 15 replicas? More replicas make things slower.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solrcloud-performance-issues-tp4186035p4188738.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi,
Did you say you have 150 servers in this cluster? And 10 shards for just
90M docs? If so, 150 hosts sounds like too much for all the other numbers
I see here. I'd love to see some metrics, e.g. what happens with
disk IO around those commits? How about GC time/size info? Are JVM mem
Hi Vijay,
We're working on SOLR-6816 ... would love for you to be a test site for any
improvements we make ;-)
Curious whether you've experimented with raising the mergeFactor to a higher
value, such as 25, and what happens if you set soft auto-commits to
something lower, like 15 seconds? Also, make s
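For reference, a sketch of what those two experiments might look like in solrconfig.xml. The values 25 and 15 seconds are the ones suggested above, not tested recommendations, and the surrounding config is assumed:

```xml
<!-- solrconfig.xml (sketch): experimental values suggested in this thread -->
<indexConfig>
  <!-- mergeFactor raised from the default to 25 -->
  <mergeFactor>25</mergeFactor>
</indexConfig>

<updateHandler class="solr.DirectUpdateHandler2">
  <!-- soft commit every 15 seconds so new docs become searchable sooner -->
  <autoSoftCommit>
    <maxTime>15000</maxTime>
  </autoSoftCommit>
</updateHandler>
```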
search engn dev [sachinyadav0...@gmail.com] wrote:
> Yes, You are right my facet queries are for text analytic purpose.
Does this mean that facet calls are rare (at most one at a time)?
> Users will send boolean and spatial queries. current performance for spatial
> queries is 100qps with 150 con
Hi,
Increasing the number of replicas per shard will help you serve more
concurrent users/queries, resulting in higher throughput.
Thanks,
Himanshu
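For the record, replicas can be added with the Solr Collections API. A sketch, assuming a collection named `collection1` and a shard named `shard1` (substitute your own names):

```
http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=collection1&shard=shard1
```

Each added replica can serve queries independently, so query throughput scales roughly with replica count, at the cost of extra indexing and disk overhead per replica.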
On Mon, Jul 21, 2014 at 9:25 AM, search engn dev wrote:
> Thanks Erick,
>
> /"So your choices are either to increase memory (a lot) or not do th
Thanks Erick,
/"So your choices are either to increase memory (a lot) or not do this.
It's a valid question whether this is useful information to present to a
user
(or are you doing some kind of analytics here?). "/
Yes, you are right, my facet queries are for text analytics purposes. Users
will
search engn dev [sachinyadav0...@gmail.com] wrote:
> out of 700 million documents 95-97% values are unique approx.
That's quite a lot. If you are not already using DocValues for that, you should
do so.
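Enabling DocValues is a schema.xml change; a sketch for the `user_digest` field mentioned elsewhere in this thread (the field type and the indexed/stored flags are assumptions, and the index must be rebuilt after the change):

```xml
<!-- schema.xml (sketch): enable DocValues so faceting uses on-disk
     column-oriented structures instead of the heap-based field cache -->
<field name="user_digest" type="string" indexed="true" stored="true" docValues="true"/>
```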
So, each shard handles ~175M documents. Even with DocValues, there is an
overhead of just hav
Right, this is the worst kind of use-case for faceting. You have
150M docs/shard and are asking for up to 125M buckets to count
into, plus control structures. Performance of this (even without OOMs)
will be a problem, and having multiple such queries execute simultaneously
will increase memory usage further.
So y
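A back-of-the-envelope estimate of the counting memory involved. This is a sketch: it assumes one plain 4-byte int counter per unique value in the shard and ignores control structures and string storage, so the real footprint is higher:

```python
# Rough per-request facet counting memory: one 4-byte int counter
# per unique value in the shard (control structures excluded).
unique_values_per_shard = 125_000_000  # upper bound mentioned above
bytes_per_counter = 4                  # assumption: plain int counters

mem_bytes = unique_values_per_shard * bytes_per_counter
print(f"~{mem_bytes / 1024**3:.2f} GiB per concurrent facet request")
```

At roughly half a GiB per request, a handful of simultaneous facet queries can exhaust a typical heap, which matches the OOMs reported in this thread.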
Out of 700 million documents, approximately 95-97% of values are unique.
My facet query is :
http://localhost:8983/solr/select?q=*:*&rows=0&facet=true&facet.limit=1&facet.field=user_digest
The above query throws an OOM exception as soon as I fire it at Solr.
From: search engn dev [sachinyadav0...@gmail.com]:
> 1 collection : 4 shards : each shard has one master and one replica
> total documents : 700 million
Are you using DocValues for your facet fields? What is the approximate number
of unique values in your facets and what is their type (string, nu