It's worth noting that the fast commit rate is only an indirect part of the issue you're seeing. The error comes from cache warming - a consequence of committing - so it isn't the fault of committing directly. It's well worth taking a good close look at exactly what your caches are doing when they are warmed, and removing as much unneeded facet/field caching etc. as possible. The time it takes to repopulate the caches is what causes the error: if warming is slower than the commit rate, you get into the 'try again later' spiral.
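If you're still on the stock solrconfig.xml, most of the warming cost comes from the autowarm counts on the caches and from any newSearcher warming queries in the <query> section. As a rough sketch only (these are the stock cache classes and sizes, and the values are illustrative, not tuned recommendations), cutting autowarm to zero makes a new searcher open almost immediately:

    <query>
      <!-- autowarmCount="0" means nothing is copied into the new searcher's
           caches, so opening it is cheap; raise it only for caches you really need hot -->
      <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
      <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
      <documentCache class="solr.LRUCache" size="512" initialSize="512"/>

      <!-- keep this list short - every query here runs before the new searcher is registered -->
      <listener event="newSearcher" class="solr.QuerySenderListener">
        <arr name="queries"/>
      </listener>

      <maxWarmingSearchers>2</maxWarmingSearchers>
    </query>

Note that simply raising maxWarmingSearchers only hides the problem and costs memory - shrinking the warm time (or the commit rate) is the real fix.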
There are a number of ways to help mitigate this - NRT is certainly the [hopefully near] future for this. Other strategies include distributed search/cloud/ZK - splitting the index into logical shards, so your commits and their associated caches are smaller and more targeted. You can also use two Solr instances - one optimized for writes/commits, one for reads (write commits are asynchronous from the 'read' instance), as sketched below - plus there are customized solutions like RankingAlgorithm, Zoie etc.
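For the two-instance split, the stock way to do it on this version of Solr is the ReplicationHandler: index and commit against a 'write' instance, and let a 'read' instance poll it for new index versions. A minimal sketch - the hostname and poll interval are placeholders, not recommendations:

    On the indexing ('write') instance, in solrconfig.xml:

      <requestHandler name="/replication" class="solr.ReplicationHandler">
        <lst name="master">
          <str name="replicateAfter">commit</str>
        </lst>
      </requestHandler>

    On the search ('read') instance:

      <requestHandler name="/replication" class="solr.ReplicationHandler">
        <lst name="slave">
          <str name="masterUrl">http://write-host:8983/solr/replication</str>
          <str name="pollInterval">00:01:00</str>
        </lst>
      </requestHandler>

The write side can then run with warming effectively disabled, and the read side only warms a searcher when it actually pulls a new index version.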
On Sun, Aug 14, 2011 at 2:47 AM, Naveen Gupta <nkgiit...@gmail.com> wrote:
> Hi,
>
> Most of the settings are default.
>
> We have a single node (Memory 1 GB, Index Size 4 GB).
>
> We have a requirement where we are doing very fast commits. This is a kind of
> real-time requirement where we are polling many threads from a third party and
> indexing them into our system.
>
> We want these results to be available soon.
>
> We are committing for each user (a user may have 10k threads, and each
> thread may have 10 messages). So overall, documents per user will be
> around .1 million (100000).
>
> Earlier we were using commitWithin of 10 milliseconds inside the document,
> but that was slowing the indexing and we were not getting any error.
>
> Once we removed the commitWithin, indexing became very fast, but after that
> we started seeing errors in the system.
>
> As I read many forums, everybody said that this is happening because of the very
> fast commit rate, but what is the solution for our problem?
>
> We are using CURL to post the data and commit.
>
> Also, till now we are using the default solrconfig.
>
> Aug 14, 2011 12:12:04 AM org.apache.solr.common.SolrException log
> SEVERE: org.apache.solr.common.SolrException: Error opening new searcher.
> exceeded limit of maxWarmingSearchers=2, try again later.
>         at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1052)
>         at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:424)
>         at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)
>         at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:177)
>         at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:77)
>         at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:55)
>         at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
>         at org.apache.solr.core.SolrCore.execute(SolrCore.java:1360)
>         at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
>         at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
>         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>         at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>         at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>         at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>         at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>         at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>         at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
>         at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
>         at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
>         at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
>         at java.lang.Thread.run(Thread.java:662)
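On the commit rate you describe above: rather than an explicit commit per user (or a 10 ms commitWithin, which amounts to much the same thing), letting the server batch commits keeps the number of new searchers - and therefore warms - down. A sketch of server-side autoCommit in solrconfig.xml, with the thresholds purely illustrative:

    <updateHandler class="solr.DirectUpdateHandler2">
      <!-- commit when either threshold is reached, instead of once per user/request -->
      <autoCommit>
        <maxDocs>10000</maxDocs>
        <maxTime>60000</maxTime> <!-- milliseconds -->
      </autoCommit>
    </updateHandler>

A larger commitWithin on the <add> you POST with curl (e.g. commitWithin="60000" rather than 10) has much the same effect, with the latency bound stated explicitly per request.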