Are you using shards, or is everything in the same index?
- If shards means distributed search over several cores: sometimes yes,
but generally not.

What problem did you experience with the StatsComponent?
- If I use stats on my 34-million-document index, the sum takes a VERY long
time, no matter how many docs are found.

How did you use it? 
- Like in the wiki; I think StatsComponent is not very flexible in how it
can be used, is it?
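For reference, the wiki-style StatsComponent request amounts to a few extra request parameters. A minimal sketch of building such a request URL (the host, core path, and the numeric field name `price` are assumptions, not from the original post):

```python
from urllib.parse import urlencode

# Sketch of a StatsComponent request. The base URL and the field
# name "price" are assumptions for illustration only.
base_url = "http://localhost:8983/solr/select"
params = {
    "q": "*:*",             # match all documents
    "rows": 0,              # we only want the stats, not the docs
    "stats": "true",        # enable the StatsComponent
    "stats.field": "price", # field to compute sum/min/max/etc. over
}
request_url = base_url + "?" + urlencode(params)
print(request_url)
```

The response then carries sum, min, max, mean, etc. for the field under the `stats` section, computed over every matching document — which is why it gets slow on very large result sets.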


I think the right approach will be to optimize StatsComponent to do a quick
sum()
- How can I optimize this? Change the StatsComponent code and build a new
Solr?

-----
------------------------------- System ----------------------------------------

One server, 12 GB RAM, 2 Solr instances, 7 cores,
1 core with 31 million documents, the other cores < 100,000

- Solr1 for search requests - commit every minute - 5 GB Xmx
- Solr2 for update requests  - delta import every minute - 4 GB Xmx
--
View this message in context: 
http://lucene.472066.n3.nabble.com/getting-much-double-Values-from-solr-timeout-tp2650981p2654721.html
Sent from the Solr - User mailing list archive at Nabble.com.
