Hi,

My team provides Solr clusters to several other teams in my company. We get peak requirements for query rate and update rate from our customers and load-test the cluster against those numbers. This helps us arrive at a cluster sized for a given peak load.
The problem is that peak-load estimates are just estimates. It would be nice to enforce them on the Solr side, so that if any core sees a higher rate than the configured limit, the core automatically begins to reject requests. Such a feature would contribute to cluster stability while making sure the customer gets an exception reminding them to slow down. A configuration like the following in managed-schema or solrconfig.xml would be great:

<coreRateLimiter>
  <select maxPerSec="1000"/>
  <update maxPerSec="500"/>
  <facets maxPerSec="100"/>
  <pivots maxPerSec="30"/>
</coreRateLimiter>

If a rate exceeds one of these limits, an exception like the following should be thrown:

"Cannot process more than 500 updates/second. Please slow down or raise the coreRateLimiter.update limit in solrconfig.xml"

Is https://lucene.apache.org/core/6_5_0/core/org/apache/lucene/store/RateLimiter.SimpleRateLimiter.html a step in that direction?

Thanks
SG
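P.S. To make the idea concrete, here is a rough sketch of the kind of per-core limiter I have in mind. It uses Guava's RateLimiter (assuming Guava is on the classpath) rather than Lucene's SimpleRateLimiter, since the latter appears to throttle by pausing the caller based on MB/sec written rather than rejecting by request count. The class name CoreRateLimiter, the request-type keys, and the hard-coded limits are hypothetical, not an existing Solr API:

import com.google.common.util.concurrent.RateLimiter;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical per-core limiter. In the proposal above the limits would be
// read from a <coreRateLimiter> section of solrconfig.xml; they are
// hard-coded here purely for illustration.
public class CoreRateLimiter {

    // One non-blocking limiter per request type.
    private final Map<String, RateLimiter> limiters = new ConcurrentHashMap<>();

    public CoreRateLimiter() {
        limiters.put("select", RateLimiter.create(1000.0)); // maxPerSec="1000"
        limiters.put("update", RateLimiter.create(500.0));  // maxPerSec="500"
        limiters.put("facets", RateLimiter.create(100.0));  // maxPerSec="100"
        limiters.put("pivots", RateLimiter.create(30.0));   // maxPerSec="30"
    }

    // Called at the start of request handling. tryAcquire() returns
    // immediately instead of blocking, so an over-limit request is
    // rejected with an exception rather than queued.
    public void check(String requestType) {
        RateLimiter limiter = limiters.get(requestType);
        if (limiter != null && !limiter.tryAcquire()) {
            throw new RuntimeException(
                "Cannot process more than " + (int) limiter.getRate() + " "
                + requestType + " requests/second. Please slow down or raise the "
                + "coreRateLimiter." + requestType + " limit in solrconfig.xml");
        }
    }
}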