Zabbix 2.2 has a JMX client built in, as well as a few JVM templates. I
wrote my own templates for my Solr instance, and monitoring and graphing
is wonderful.
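For reference, once the Zabbix Java gateway is set up, a native JMX item key for one of the beans discussed below looks roughly like this (the collection name and attribute are just examples, not pulled from my actual templates):

jmx["solr/productCatalog:id=org.apache.solr.handler.component.SearchHandler,type=/select",requests]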
David
On 02/03/2014 12:55 PM, Joel Cohen wrote:
I had to come up with some Solr stats monitoring for my Zabbix instance. I
found that using JMX was the easiest way for us.
There is a command line jmx client that works quite well for me.
http://crawler.archive.org/cmdline-jmxclient/
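If memory serves, running the client with only the connection arguments lists every mbean the JVM exposes, which is handy for finding exact bean names (host and port below are placeholders; pass '-' instead of user:pass if JMX auth isn't enabled):

java -jar cmdline-jmxclient.jar - solr1.example.com:9010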
I wrote a shell script to wrap around that and shove the data back to
Zabbix for ingestion and monitoring (there's a quick zabbix_sender sketch
after the stats list below). I've listed the stats that I am gathering,
and the mbean that is queried. My shell script is rather simplistic.
#!/bin/bash
# Wrapper around cmdline-jmxclient: query one attribute of one mbean and
# print only the value (last field of the client's output) for Zabbix.
cmdLineJMXJar=/usr/local/lib/cmdline-jmxclient.jar
jmxHost=$1
port=$2
query=$3   # mbean name
value=$4   # attribute to read
java -jar ${cmdLineJMXJar} user:pass ${jmxHost}:${port} ${query} ${value} 2>&1 | awk '{print $NF}'
The script is called like so: jmxstats.sh <solr server name or IP> <jmx port>
<name of mbean> <value to query from mbean>
My collection name is productCatalog, so swap that with yours.
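For example, pulling the select handler's request count for productCatalog would look something like this (the host name and JMX port are made-up placeholders):

jmxstats.sh solr1.example.com 9010 solr/productCatalog:id=org.apache.solr.handler.component.SearchHandler,type=/select requests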
*select requests*:
solr/productCatalog:id=org.apache.solr.handler.component.SearchHandler,type=/select requests
*select errors*:
solr/productCatalog:id=org.apache.solr.handler.component.SearchHandler,type=/select errors
*95th percentile request time*:
solr/productCatalog:id=org.apache.solr.handler.component.SearchHandler,type=/select 95thPcRequestTime
*update requests*:
solr/productCatalog:id=org.apache.solr.handler.UpdateRequestHandler,type=/update requests
*update errors*:
solr/productCatalog:id=org.apache.solr.handler.UpdateRequestHandler,type=/update errors
*95th percentile update time*:
solr/productCatalog:id=org.apache.solr.handler.UpdateRequestHandler,type=/update 95thPcRequestTime
*query result cache lookups*:
solr/productCatalog:id=org.apache.solr.search.LRUCache,type=queryResultCache cumulative_lookups
*query result cache inserts*:
solr/productCatalog:id=org.apache.solr.search.LRUCache,type=queryResultCache cumulative_inserts
*query result cache evictions*:
solr/productCatalog:id=org.apache.solr.search.LRUCache,type=queryResultCache cumulative_evictions
*query result cache hit ratio*:
solr/productCatalog:id=org.apache.solr.search.LRUCache,type=queryResultCache cumulative_hitratio
*document cache lookups*:
solr/productCatalog:id=org.apache.solr.search.LRUCache,type=documentCache cumulative_lookups
*document cache inserts*:
solr/productCatalog:id=org.apache.solr.search.LRUCache,type=documentCache cumulative_inserts
*document cache evictions*:
solr/productCatalog:id=org.apache.solr.search.LRUCache,type=documentCache cumulative_evictions
*document cache hit ratio*:
solr/productCatalog:id=org.apache.solr.search.LRUCache,type=documentCache cumulative_hitratio
*filter cache lookups*:
solr/productCatalog:type=filterCache,id=org.apache.solr.search.FastLRUCache cumulative_lookups
*filter cache inserts*:
solr/productCatalog:type=filterCache,id=org.apache.solr.search.FastLRUCache cumulative_inserts
*filter cache evictions*:
solr/productCatalog:type=filterCache,id=org.apache.solr.search.FastLRUCache cumulative_evictions
*filter cache hit ratio*:
solr/productCatalog:type=filterCache,id=org.apache.solr.search.FastLRUCache cumulative_hitratio
*field value cache lookups*:
solr/productCatalog:type=fieldValueCache,id=org.apache.solr.search.FastLRUCache cumulative_lookups
*field value cache inserts*:
solr/productCatalog:type=fieldValueCache,id=org.apache.solr.search.FastLRUCache cumulative_inserts
*field value cache evictions*:
solr/productCatalog:type=fieldValueCache,id=org.apache.solr.search.FastLRUCache cumulative_evictions
*field value cache hit ratio*:
solr/productCatalog:type=fieldValueCache,id=org.apache.solr.search.FastLRUCache cumulative_hitratio
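To actually shove a value into Zabbix, one option is to feed the script's output to zabbix_sender; this is just a minimal sketch, with the server name, host name, and item key as placeholders:

zabbix_sender -z zabbix.example.com -s solr1 -k solr.select.requests \
  -o "$(jmxstats.sh solr1.example.com 9010 solr/productCatalog:id=org.apache.solr.handler.component.SearchHandler,type=/select requests)"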
This set of stats gives me a pretty good idea of what's going on with my
SolrCloud at any time. Anyone have any thoughts or suggestions?
Joel Cohen
Senior System Engineer
Bluefly, Inc.
On Mon, Feb 3, 2014 at 11:25 AM, Greg Walters <greg.walt...@answers.com> wrote:
The code I wrote is currently a bit of an ugly hack, so I'm a bit reluctant
to share it, and there are some legal concerns with open-sourcing code within
my company. That being said, I wouldn't mind rewriting it on my own time.
Where can I find a starter kit for contributors with coding guidelines and
the like? Spruced up some, I'd be OK with submitting a patch.
Thanks,
Greg
On Feb 3, 2014, at 10:08 AM, Mark Miller <markrmil...@gmail.com> wrote:
You should contribute that and spread the dev load with others :)
We need something like that at some point; it's just that no one has done it
yet. We currently expect you to aggregate in the monitoring layer, and that's
a lot to ask IMO.
- Mark
http://about.me/markrmiller
On Feb 3, 2014, at 10:49 AM, Greg Walters <greg.walt...@answers.com>
wrote:
I've had some issues monitoring Solr with the per-core mbeans and ended
up writing a custom "request handler" that gets loaded, then registers
itself as an mbean. When called, it polls all the per-core mbeans, then adds
or averages them where appropriate before returning the requested value.
I'm not sure if there's a better way to get JVM-wide stats via JMX, but it
is *a* way to get it done.
Thanks,
Greg
On Feb 3, 2014, at 1:33 AM, adfel70 <adfe...@gmail.com> wrote:
I'm sending all Solr stats data to Graphite.
I have some questions:
1. query_handler/select requestTime - if I'm looking at some metric, let's
say 75thPcRequestTime, I see that each core in a single collection has a
different value. Is the value for each core the time that specific core
spent on requests? So to get an idea of total request time, should I sum
the values across all the cores?
2. update_handler/commits - does this include auto-commits? Because I'm
pretty sure I'm not doing any manual commits, and yet I see a number
there.
3. update_handler/docs pending - what does this mean? Pending for what? For
flush to disk?
thanks.