I'd be in favor of the Overseer dropping synchronous requests whose requestor is no longer waiting (i.e. the ephemeral ZK node is gone). For both sync and async requests, we could let the caller set a timeout after which processing should not start if it hasn't already; for async messages we could also allow a cancellation call (which would cancel only if processing has not started). Once processing has started, I suggest we let it finish (cancelling in-progress processing would be more complicated).
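To make that concrete, here is a rough sketch of the "drop if the requestor is no longer waiting" check at dequeue time. It uses the plain ZooKeeper client with illustrative paths and a hypothetical "startBy" field rather than the actual Overseer code, so treat it as a sketch of the idea, not the implementation:

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class StaleSyncRequestCheck {

  /**
   * True if the synchronous requestor has already gone away, i.e. the
   * ephemeral node it created to wait for a response no longer exists.
   * The path is illustrative, not the exact one Solr uses.
   */
  static boolean requestorGone(ZooKeeper zk, String responseNodePath)
      throws KeeperException, InterruptedException {
    // exists() returns null when the node is absent; for an ephemeral node
    // that means the waiting client's session ended (it gave up or died).
    return zk.exists(responseNodePath, false) == null;
  }

  /**
   * True if processing should not start because a caller-supplied deadline
   * (a hypothetical "startBy" timestamp carried in the message payload) has
   * already passed.
   */
  static boolean deadlinePassed(long startByEpochMs) {
    return System.currentTimeMillis() > startByEpochMs;
  }
}

The Overseer would run such checks just before it starts processing a dequeued message; once processing has started, the message runs to completion as suggested above.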
Ilan

On Thu, Feb 1, 2024 at 6:46 AM 6harat <bharat.gulati.ce...@gmail.com> wrote:

> Thanks David for starting this thread. We have also seen this behavior
> from the overseer resulting in "orphan collections" or "more than 1 replica
> created" due to timeouts, especially when our cluster is scaled up during
> peak traffic days.
>
> While I am still at a nascent stage of my understanding of Solr internals,
> I wanted to highlight the points below (pardon me if these don't make much
> sense):
>
> 1. There may be situations where we want Solr to still honor the late
> message, and hence the functionality needs to be configurable and not a
> default. For instance, during decommissioning of boxes (when we are
> scaling down to our normal cluster size from peak), we send delete-replica
> commands for 20+ boxes in a short time frame. The majority of these API
> calls inevitably time out; however, we rely on the cluster reaching the
> desired state after X minutes.
>
> 2. How do we intend to communicate the timeout-based rejection of an
> overseer message to the end user?
>
> 3. In a fail-over scenario where the overseer leader node goes down and is
> re-elected, the election may have some overhead, which may inevitably
> result in many of the piled-up messages being rejected due to time
> constraints. Do we intend to pause the clock ticks during this phase, or
> should the guidance be to set the timeout higher than the sum of such
> possible overheads?
>
> On Wed, Jan 31, 2024 at 11:18 PM David Smiley <dsmi...@apache.org> wrote:
>
> > I have a proposal and am curious what folks think. When the Overseer
> > dequeues an admin command message to process, imagine it being enhanced
> > to examine the "ctime" (creation time) of the ZK message node to
> > determine how long it has been enqueued, and thus roughly how long the
> > client has been waiting. If it's greater than a configured threshold
> > (1 minute?), respond with an error of a timeout nature: "Sorry, the
> > Overseer is so backed up that we fear you have given up; please try
> > again." This would not apply to an "async" style submission.
> >
> > Motivation: Due to miscellaneous reasons at scale that are very user /
> > situation dependent, the Overseer can get seriously backed up. The
> > client, making a typical synchronous call to, say, create a collection,
> > may reach its timeout (say a minute) and give up. Today, SolrCloud
> > doesn't know this; it goes on its merry way and creates the collection
> > anyway. Depending on how Solr is used, this can be an orphaned
> > collection that the client doesn't want anymore. That is to say, the
> > client wants a collection, but it wanted it at the time it asked for it,
> > with the name it asked for at that time. If it fails, it will come back
> > later and propose a new name. This doesn't have to be specific to
> > collection creation; I'm thinking that in principle it doesn't really
> > matter what the command is. If Solr takes too long for the Overseer to
> > receive the message, just time out, basically.
> >
> > Thoughts?
> >
> > This wouldn't be a concern for the distributed mode of collection
> > processing, as there is no queue bottleneck; the receiving node
> > processes the request immediately.
> >
> > ~ David Smiley
> > Apache Lucene/Solr Search Developer
> > http://www.linkedin.com/in/davidwsmiley
>
> --
> Regards
> 6harat
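Coming back to David's ctime idea, the dequeue-time check could look roughly like this. Again a sketch with a hypothetical helper and a configurable threshold, not the actual Overseer queue code:

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class OverseerMessageAgeCheck {

  /**
   * True if the queued message znode is older than maxQueueAgeMs, judged by
   * its creation time (ctime). In that case the Overseer would answer with a
   * timeout-style error instead of processing the command.
   */
  static boolean tooOldToProcess(ZooKeeper zk, String messageNodePath, long maxQueueAgeMs)
      throws KeeperException, InterruptedException {
    Stat stat = zk.exists(messageNodePath, false);
    if (stat == null) {
      return true; // node already gone, nothing left to process
    }
    // ctime is recorded by the ZK server, so clock skew between ZK and the
    // Overseer node effectively widens or narrows the threshold a bit.
    long queuedForMs = System.currentTimeMillis() - stat.getCtime();
    return queuedForMs > maxQueueAgeMs;
  }
}

One caveat related to 6harat's third point: measuring from ctime alone doesn't pause the clock during an Overseer failover, so either the threshold has to absorb election overhead or the measurement would need a different starting point.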