Has there been any progress on this, or are there tools people use to capture
the average or 90th-percentile response time over the last hour?

That would allow us to better match up slowness with other metrics like
CPU/IO/Memory to find bottlenecks in the system.
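Something like the following is what I have in mind: a sliding-window
collector that you feed per-request durations and query for the last hour's
average and 90th percentile. This is only a hypothetical Python sketch to
illustrate the idea, not an existing Solr feature or tool; the class and
method names are made up.

```python
import math
import time
from collections import deque

class RollingStats:
    """Hypothetical sketch: keep request durations for a sliding time
    window (default: one hour) and report the average and the
    90th-percentile duration on demand."""

    def __init__(self, window_seconds=3600):
        self.window = window_seconds
        self.samples = deque()  # (timestamp, duration_ms), oldest first

    def record(self, duration_ms, now=None):
        """Add one request duration, then drop samples outside the window."""
        now = time.time() if now is None else now
        self.samples.append((now, duration_ms))
        self._evict(now)

    def _evict(self, now):
        # Discard samples older than the window.
        while self.samples and self.samples[0][0] < now - self.window:
            self.samples.popleft()

    def snapshot(self, now=None):
        """Return (average, p90) over the current window, or None if empty."""
        now = time.time() if now is None else now
        self._evict(now)
        durations = sorted(d for _, d in self.samples)
        if not durations:
            return None
        avg = sum(durations) / len(durations)
        # Nearest-rank 90th percentile.
        idx = max(0, math.ceil(0.9 * len(durations)) - 1)
        return avg, durations[idx]
```

With numbers like these coming out every minute, the p90 line could be
graphed alongside CPU/IO/memory to spot where the bottleneck sits.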

Thanks,
Ian.

On Wed, Mar 31, 2010 at 9:13 PM, Chris Hostetter
<hossman_luc...@fucit.org> wrote:

>
> : Say I have 3 cores named core0, core1, and core2, where only core1 and
> core2
> : have documents and caches.  If all my searches hit core0, and core0
> shards
> : out to core1 and core2, then the stats from core0 would be accurate for
> : errors, timeouts, totalTime, avgTimePerRequest, avgRequestsPerSecond,
> etc.
>
> Ahhh.... yes. (i see what you mean by "aggregating core" now ... i thought
> you meant a core just for aggregating stats)
>
> *If* you are using distributed search, then you can gather stats from the
> core you use for collating/aggregating from the other shards, and
> reloading that core should be cheap.
>
> but if you aren't already using distributed search, it would be a bad
> idea from a performance standpoint to add it just to take advantage of
> being able to reload the coordinator core (the overhead of searching one
> distributed shard vs doing the same query directly is usually very
> measurable, even if the shard is the same Solr instance as your
> coordinator).
>
>
>
> -Hoss
>
>


-- 
Regards,

Ian Connor
1 Leighton St #723
Cambridge, MA 02141
Call Center Phone: +1 (714) 239 3875 (24 hrs)
Fax: +1(770) 818 5697
Skype: ian.connor
