Hi, another update. It happened again, and this time I had INFO logging enabled in the Solr log:
INFO: {add=[330274716, 330274717, 330274718, 330274719, 330274720, 330274721, 330274722, 330274723, ...(14992 more)]} 0 6041
Apr 3, 2009 10:38:01 PM org.apache.solr.core.SolrCore execute
INFO: [20090403] webapp=/solr path=/update params={wt=javabin} status=0 QTime=6041
Apr 3, 2009 10:38:11 PM org.apache.solr.update.DirectUpdateHandler2 commit
INFO: start commit(optimize=false,waitFlush=true,waitSearcher=true)

It's still hung at commit even after 30 min, so it looks like the commit itself is what takes so long. I'm committing the records myself, but I also have auto-commit turned on in solrconfig.xml:

<updateHandler class="solr.DirectUpdateHandler2">
  <!-- commit every 10 million docs or every ~16 min -->
  <autoCommit>
    <maxDocs>10000000</maxDocs>
    <maxTime>1000000</maxTime>
  </autoCommit>
</updateHandler>

In a 15 min period I'm getting approximately 6 million documents/records. Earlier I read on the mailing list that we shouldn't commit very often, but now it seems that not committing often enough makes the commit process take forever. Basically, I want the records to become searchable every 30 min - 30-min-old data is fine for searching, but indexing shouldn't slow down.

1) So, what's a good commit strategy?
2) How often (after how many records) should I commit?
3) Should I do it programmatically, or can I have it in solrconfig.xml?

Thanks,
-vivek

On Fri, Apr 3, 2009 at 2:27 PM, vivek sar <vivex...@gmail.com> wrote:
> Just an update on this issue: Solr did come back after 80 min, so I'm
> not sure where it was stuck. I use a RAMBuffer of 64MB and have a heap
> size of 6G.
>
> There is no error in the Solr log, and I had it running at WARNING level,
> so I missed the INFO output, if there was any, during that period. I'm
> also not running any "optimize" command. What could cause Solr to hang
> for 80 min?
>
> Thanks,
> -vivek
>
> On Fri, Apr 3, 2009 at 1:55 PM, vivek sar <vivex...@gmail.com> wrote:
>> Hi,
>>
>> I'm using Solr 1.4 (nightly build - 03/29/09).
>> I'm stress testing my application with Solr. My app uses Solrj to write
>> to a remote Solr instance (on the same box, but in a different JVM). The
>> stress test sends over 2 million records (1 record = 500 bytes, with
>> each record having 10 fields) within 5 minutes. All was working fine
>> (with 2 million records processed - 2G index size), and then all of a
>> sudden Solr stopped responding - I call server.addBeans(...) passing
>> 15K objects and don't get any response for over an hour (it usually
>> returns in 5 sec).
>>
>> I have 3 threads writing to the same index at the same time - not sure
>> if that could cause any problem. I was told by Otis that it should be
>> ok to have multiple threads write to the same index, so I'm assuming it's
>> ok, though from the thread dump I do see a couple of "update" threads
>> waiting on a ReadWriteLock and another thread (pool-6-thread-1) holding
>> a lock on SolrWriter.
>>
>> Attached is the thread dump of the Tomcat process where Solr is
>> running. Any ideas?
>>
>> Thanks,
>> -vivek
>>
>
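(Note for anyone finding this thread later: one way to express the 30-minute-freshness goal entirely in solrconfig.xml is to drop the explicit client-side commits and let autoCommit do all the committing on a time trigger. This is only a sketch, assuming Solr 1.4 autoCommit semantics where maxTime is in milliseconds; the 1800000 value below is just 30 min expressed in ms, not a setting taken from the thread:)

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Sketch: commit on time only, every 30 min (1800000 ms).
       Omitting maxDocs avoids an early commit being triggered by
       document count alone during heavy indexing bursts. -->
  <autoCommit>
    <maxTime>1800000</maxTime>
  </autoCommit>
</updateHandler>
```

With a single time-based trigger, commit frequency stays constant regardless of indexing rate, and the client threads never block on a commit they issued themselves. If per-request control is preferred instead, Solr 1.4 also introduced a commitWithin option on update requests, which serves a similar purpose from the client side.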