Why are you using 15 replicas?
More replicas make indexing slower, since every update has to be forwarded to each replica.
Hi,
Did you say you have 150 servers in this cluster? And 10 shards for just
90M docs? If so, 150 hosts sounds like too much for all the other numbers
I see here. I'd love to see some metrics here, e.g. what happens with
disk I/O around those commits? How about GC time/size info? Are JVM mem…
Hi Vijay,
We're working on SOLR-6816 ... would love for you to be a test site for any
improvements we make ;-)
Curious if you've experimented with changing the mergeFactor to a higher
value, such as 25, and what happens if you set the soft auto-commit interval to
something lower, like 15 seconds? Also, make sure…
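Roughly what those two knobs look like in solrconfig.xml (Solr 4.x style). This is only a sketch using the values suggested above; the 60-second hard autoCommit is just an assumed example, not something taken from your setup:

  <!-- solrconfig.xml sketch: illustrative values only -->
  <indexConfig>
    <!-- a higher mergeFactor lets more segments accumulate before merging,
         which makes indexing cheaper at the cost of somewhat slower searches
         until the segments get merged down -->
    <mergeFactor>25</mergeFactor>
  </indexConfig>

  <updateHandler class="solr.DirectUpdateHandler2">
    <!-- soft commit every 15s: opens a new searcher without flushing to disk -->
    <autoSoftCommit>
      <maxTime>15000</maxTime>
    </autoSoftCommit>
    <!-- assumed 60s hard commit that only flushes; visibility comes from the soft commits -->
    <autoCommit>
      <maxTime>60000</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>
  </updateHandler>

With something like that, the soft commits control how quickly documents become searchable, while the hard commits just flush segments and truncate the transaction log.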
Hi Erick,
We have the following configuration for our SolrCloud cluster:
1. 10 shards
2. 15 replicas per shard
3. 9 GB of index per shard
4. a total of around 90 million documents
5. 2 collections, viz. search1 serving live traffic and search2 for
indexing. We swap the collections when indexing finishes.
search engn dev [sachinyadav0...@gmail.com] wrote:
> Yes, you are right, my facet queries are for text analytics purposes.
Does this mean that facet calls are rare (at most one at a time)?
> Users will send boolean and spatial queries. Current performance for spatial
> queries is 100 qps with 150 con…
--
Himanshu Mehrotra
search engn dev [sachinyadav0...@gmail.com] wrote:
> Out of 700 million documents, approximately 95-97% of the values are unique.
That's quite a lot. If you are not already using DocValues for that, you should
do so.
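In schema.xml that just means enabling docValues on the facet field. A minimal sketch, assuming a single-valued string field; the name "user_id" is only a placeholder:

  <!-- schema.xml sketch: "user_id" stands in for your actual facet field -->
  <field name="user_id" type="string" indexed="true" stored="false"
         docValues="true" multiValued="false"/>

Note that you will have to reindex after adding docValues to an existing field.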
So, each shard handles ~175M documents (700M across 4 shards). Even with
DocValues, there is an overhead of just hav…
> The above query throws an OOM exception as soon as I fire it at Solr.
From: search engn dev [sachinyadav0...@gmail.com]:
> 1 collection : 4 shards : each shard has one master and one replica
> total documents : 700 million
Are you using DocValues for your facet fields? What is the approximate number
of unique values in your facets, and what is their type (string, numeric…)?
… with 32 GB RAM each?