Hi Amit,

I'm not sure I follow what you are after... Yes, seeing how queries that result in cache misses perform is valuable (esp. if you have a low cache hit rate in production). But figuring out whether you chose a bad field type or a bad faceting method or .... doesn't require profiling - you can review configs, logs, and such and quickly find performance issues.
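For example, the field-type tradeoff is right there in schema.xml. A sketch using the stock Solr 4.x example definitions (your field type names and precisionStep values may differ):

  <!-- one indexed term per value: compact index, slower range queries -->
  <fieldType name="int" class="solr.TrieIntField" precisionStep="0"
             positionIncrementGap="0"/>
  <!-- precisionStep="8" indexes extra terms per value to speed up
       range queries, at the cost of a larger index -->
  <fieldType name="tint" class="solr.TrieIntField" precisionStep="8"
             positionIncrementGap="0"/>

If a field is only ever matched by exact value, precisionStep="0" is usually the better choice; tint pays off when you run range queries against it.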
In production (or dev, really, too) you can use tools like SPM for Solr or NewRelic. SPM will show you the performance breakdown across all Solr SearchComponents used in searches. NewRelic has non-free plans that also let you do on-demand profiling, so you could profile Solr in production, which can be handy.

HTH,
Otis
--
Search Analytics - http://sematext.com/search-analytics/index.html
Performance Monitoring - http://sematext.com/spm/index.html

On Fri, Oct 19, 2012 at 12:02 PM, Amit Nithian <anith...@gmail.com> wrote:
> Hi all,
>
> I know there have been many posts about this already and I have done
> my best to read through them, but one lingering question remains. When
> doing performance testing on a Solr instance (under normal,
> production-like circumstances, not ones where commits happen more
> frequently than necessary), is there any value in performance testing
> against a server with caches *disabled*, with a profiler hooked up to
> see where queries spend the most time in the absence of a cache?
>
> The reason I am asking is to tune things like field types: tint vs.
> regular int, different precision steps, etc. Or maybe sorting is
> taking a long time and the profiler shows an inordinate amount of
> time spent there, so we find a different way to solve that particular
> problem. Perhaps we are faceting on a badly chosen field, etc. Then
> we can optimize those to at least not be as slow, and then ensure
> that caching is tuned properly so that cache misses don't yield these
> expensive spikes.
>
> I'm trying to devise proper performance testing for any new
> features/config changes and wanted to get some feedback on whether or
> not this approach makes sense. Of course, performance testing against
> a typical production setup *with* caching will also be done to make
> sure things behave as expected.
>
> Thanks!
> Amit
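P.S. If you do run the caches-disabled test you describe, no special build is needed - the caches are just entries in solrconfig.xml. A sketch (commenting the elements out disables them entirely; size="0" should have the same effect):

  <!-- set sizes to 0, or remove these elements, to disable caching -->
  <filterCache class="solr.FastLRUCache" size="0" initialSize="0" autowarmCount="0"/>
  <queryResultCache class="solr.LRUCache" size="0" initialSize="0" autowarmCount="0"/>
  <documentCache class="solr.LRUCache" size="0" initialSize="0" autowarmCount="0"/>

Reload the core or restart Solr afterwards so the config change takes effect.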