>> The only way that I can imagine any part of Solr *crashing* when this
>> message happens is if you are also hitting an OutOfMemoryError
>> exception. You've said that your collection crashes ... but not what
>> actually happens -- what "crash" means for your situation. I've never
>> heard of a collection crashing.

>> If you're running version 4.0 or later, you actually *do* want autoCommit
>> configured, with openSearcher set to false. This configuration will not
>> change document visibility at all, because it will not open a new
>> searcher. You need different commits for document visibility.

Thank you for the responses.

The collection crashes in the sense that I'm unable to open the core tab in the Solr admin console, searches do not return, and none of the pages in the admin dashboard open.

I do understand how and why this issue occurs, and I'm going to do all it takes to avoid it. However, if accidental hard commits close together ever throw this WARN again, I'm trying to figure out a way to make the collection return results without having to delete and re-create the collection or delete the data folder. Again, I know how to avoid this issue, but if it still happens, what can be done to avoid a complete reindex?

Thank you,
Aswath NS

-----Original Message-----
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Monday, March 21, 2016 4:19 PM
To: solr-user@lucene.apache.org
Subject: Re: PERFORMANCE WARNING: Overlapping onDeckSearchers=2

On 3/21/2016 12:52 PM, Aswath Srinivasan (TMS) wrote:
> Fellow developers,
>
> PERFORMANCE WARNING: Overlapping onDeckSearchers=2
>
> I'm seeing this warning often, and whenever I see it, the collection
> crashes. The only way to overcome this is by deleting the data folder
> and reindexing.
>
> In my observation, this WARN comes when I hit frequent hard commits or
> reload the config. I'm not planning to issue frequent hard commits, but
> sometimes it happens accidentally. And when it happens, the collection
> crashes without recovery.
>
> Have you faced this issue? Is there a recovery procedure for this WARN?
>
> Also, I don't want to increase maxWarmingSearchers or set autoCommit.

This is a lot of the same info that you've gotten from Hoss. I'm just going to leave it all here and add a little bit related to the rest of the thread.

Increasing maxWarmingSearchers is almost always the WRONG thing to do. The reason that you are running into this message is that your commits (those that open a new searcher) are taking longer to finish than your commit frequency, so you end up warming multiple searchers at the same time. To limit memory usage, Solr keeps the number of warming searchers from exceeding a threshold. You need to either reduce the frequency of the commits that open a new searcher or change your configuration so they complete faster. Here's some info about slow commits:

http://wiki.apache.org/solr/SolrPerformanceProblems#Slow_commits

The only way that I can imagine any part of Solr *crashing* when this message happens is if you are also hitting an OutOfMemoryError exception. You've said that your collection crashes ... but not what actually happens -- what "crash" means for your situation. I've never heard of a collection crashing.

If you're running version 4.0 or later, you actually *do* want autoCommit configured, with openSearcher set to false. This configuration will not change document visibility at all, because it will not open a new searcher. You need different commits for document visibility.
This is the updateHandler config that I use, which includes autoCommit:

  <autoCommit>
    <maxTime>120000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>

With this config, there will be at least two minutes between automatic hard commits. Because these commits will not open a new searcher, they cannot cause the message about onDeckSearchers. Commits that do not open a new searcher will normally complete VERY quickly.

The reason you want this kind of autoCommit configuration is to avoid extremely large transaction logs. See this blog post for more info than you ever wanted about commits:

http://lucidworks.com/blog/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

If you're going to do all your indexing with the dataimport handler, you could just let the commit option on the dataimport take care of document visibility.

Thanks,
Shawn
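
For illustration only (this snippet is not from Shawn's mail): if you want Solr itself to handle document visibility, a minimal autoSoftCommit sketch in the same updateHandler section could look like the following, where the five-minute maxTime is an arbitrary example value you would tune to your own latency needs.

  <autoSoftCommit>
    <maxTime>300000</maxTime>
  </autoSoftCommit>

A soft commit opens a new searcher without the segment flushing of a hard commit, so spacing soft commits widely keeps multiple searchers from warming at once, while the openSearcher=false hard commits above keep the transaction logs small.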
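
Likewise for illustration, assuming the dataimport handler is registered at the usual /dataimport path, the default port, and a hypothetical core named mycore, a full import that relies on the handler's own commit would be requested like this:

  http://localhost:8983/solr/mycore/dataimport?command=full-import&commit=true

commit=true is the handler's default, so the import opens a single new searcher when it finishes rather than on a timed schedule.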