Hi,

Assuming you have some web interface, it is not uncommon to apply caching in the web browser, a middle layer, and Solr. The question is whether you can live with stale data or whether you have some mechanism to invalidate cached data when needed. Solr does that "blindly": on every commit that opens a searcher, it invalidates all caches. Increasing the commit interval can therefore result in better cache utilisation and better average query latency. You need to monitor your caches to see whether cache utilisation justifies having them, and whether your queries are structured so that the caches can actually be utilised.

You mentioned that your shard size is 30GB. Shard size is what dictates query latency. Maybe you have reached a shard size where you can no longer achieve your targeted latency; caches will help a bit, but any cache miss will be slow. I would address this issue rather than hoping that caches will be good enough to hide slow queries.
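In case it is useful, here is a minimal sketch of the relevant solrconfig.xml pieces. The sizes and intervals below are placeholder values, not recommendations - tune them against what your monitoring shows:

  <!-- Caches are per-searcher; all of them are discarded whenever a commit opens a new searcher -->
  <query>
    <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="128"/>
    <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="128"/>
  </query>

  <updateHandler class="solr.DirectUpdateHandler2">
    <!-- Hard commits only flush to disk; with openSearcher=false they do not invalidate caches -->
    <autoCommit>
      <maxTime>60000</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>
    <!-- Soft commits open a new searcher and invalidate caches; a longer interval
         means better cache utilisation at the price of staler results -->
    <autoSoftCommit>
      <maxTime>300000</maxTime>
    </autoSoftCommit>
  </updateHandler>

To see whether the caches earn their keep, check hit ratios and evictions per core, e.g.

  curl "http://localhost:8983/solr/yourCore/admin/mbeans?stats=true&cat=CACHE&wt=json"

where yourCore is a placeholder for your core name; look at hitratio, lookups and evictions for queryResultCache and filterCache.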
HTH,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/

> On 27 Feb 2018, at 06:18, park <afk.s...@gmail.com> wrote:
>
> I'm indexing and searching documents using Solr 6.x.
> It is quite efficient when there are fewer shards and fewer cluster units.
> However, when the number of shards exceeds 30 and the size of each shard is
> 30G, search performance is significantly reduced.
> Currently, a user cache in Solr is actively used, so we plan to use
> queryResultCache across all shards.
> Is using an external cache the right solution (for example, Redis,
> memcached, Apache Ignite, etc.)?
>
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html