I think that is silly. We can still offer per-shard stats *and* let a user
easily see stats for a collection without requiring them to jump through
hoops or use a specific monitoring solution where someone else has already
jumped through the hoops for them.

You don't have to guess what ops people really want - *everyone* wants stats
that make sense for the collections and the cluster on top of the per-shard
stats. And *everyone* would be happy to see these without having to set up a
monitoring solution first.

If you want more than that, then you can fiddle with your monitoring solution.

- Mark

http://about.me/markrmiller

On Feb 3, 2014, at 11:10 PM, Otis Gospodnetic <otis.gospodne...@gmail.com> 
wrote:

> Hi,
> 
> Oh, I just saw Greg's email on dev@ about this.
> IMHO aggregating in the search engine is not the way to go.  Leave that to
> external tools, which are likely to be more flexible when it comes to this.
> For example, our SPM for Solr can do all kinds of aggregations and
> filtering by a number of Solr and SolrCloud-specific dimensions already,
> without Solr having to do any sort of aggregation that it thinks Ops people
> will really want.
> 
> Otis
> --
> Performance Monitoring * Log Analytics * Search Analytics
> Solr & Elasticsearch Support * http://sematext.com/
> 
> 
> On Mon, Feb 3, 2014 at 11:08 AM, Mark Miller <markrmil...@gmail.com> wrote:
> 
>> You should contribute that and spread the dev load with others :)
>> 
>> We need something like that at some point; it's just that no one has done
>> it. We currently expect you to aggregate in the monitoring layer, and it's
>> a lot to ask IMO.
>> 
>> - Mark
>> 
>> http://about.me/markrmiller
>> 
>> On Feb 3, 2014, at 10:49 AM, Greg Walters <greg.walt...@answers.com>
>> wrote:
>> 
>>> I've had some issues monitoring Solr with the per-core MBeans and ended
>>> up writing a custom "request handler" that gets loaded and then registers
>>> itself as an MBean. When called, it polls all the per-core MBeans and then
>>> adds or averages them where appropriate before returning the requested
>>> value. I'm not sure whether there's a better way to get JVM-wide stats via
>>> JMX, but it is *a* way to get it done.
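A minimal, hypothetical sketch of the poll-and-sum idea described above (not
Greg's actual handler): query the platform MBeanServer for every per-core
/select handler bean and total one numeric attribute. The object-name pattern
and the attribute name are assumptions based on Solr 4.x-era JMX registration
and may need adjusting for your setup.

import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standalone sketch of aggregating per-core JMX stats inside one Solr JVM.
// Object-name pattern and attribute name are assumptions; adjust as needed.
public class CoreStatsAggregator {

    public static double sumAcrossCores(String attribute) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Per-core handler beans are assumed to live under a "solr/<coreName>"
        // domain, so a wildcard domain pattern picks up every core in the JVM.
        Set<ObjectName> names =
            server.queryNames(new ObjectName("solr*:type=/select,*"), null);
        double total = 0;
        for (ObjectName name : names) {
            Object value = server.getAttribute(name, attribute);
            if (value instanceof Number) {
                total += ((Number) value).doubleValue();
            }
        }
        return total;
    }

    public static void main(String[] args) throws Exception {
        // e.g. total "/select" request count across every core in this JVM
        System.out.println(sumAcrossCores("requests"));
    }
}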
>>> 
>>> Thanks,
>>> Greg
>>> 
>>> On Feb 3, 2014, at 1:33 AM, adfel70 <adfe...@gmail.com> wrote:
>>> 
>>>> I'm sending all Solr stats data to graphite.
>>>> I have some questions:
>>>> 1. query_handler/select requestTime -
>>>> If I'm looking at some metric, let's say 75thPcRequestTime, I see that
>>>> each core in a single collection has different values.
>>>> Is the value for each core the time that specific core spent on a
>>>> request? So to get an idea of the total request time, should I sum the
>>>> values across all the cores?
>>>> 
>>>> 
>>>> 2. update_handler/commits - does this include auto commits? Because I'm
>>>> pretty sure I'm not doing any manual commits, and yet I see a number
>>>> there.
>>>> 
>>>> 3. update_handler/docs pending - what does this mean? Pending for what?
>>>> For a flush to disk?
>>>> 
>>>> thanks.
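For context, the per-core numbers being graphed here can also be pulled over
HTTP from each core's mbeans admin handler, which is one common way to feed
them to graphite. A rough sketch; the host, port, and core name are
placeholders, and the parameters assume a Solr 4.x-style /admin/mbeans
handler:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Sketch: dump the /select handler stats for a single core as JSON.
// The response is per core, so collection-level views still have to be
// summed or averaged by whoever reads this (Solr itself, or a monitoring
// tool, per the discussion above).
public class FetchSelectStats {
    public static void main(String[] args) throws Exception {
        // Placeholder host, port, and core name -- adjust for your cluster.
        URL url = new URL("http://localhost:8983/solr/collection1/admin/mbeans"
                + "?stats=true&cat=QUERYHANDLER&key=/select&wt=json");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream(), "UTF-8"))) {
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line);
            }
        }
    }
}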
>>>> 
>>>> 
>>>> 
>>> 
>> 
>> 
