Steve, something else I notice: http://search-campaign.unitedeway.org/solr/admin/stats.jsp shows that there are *many* SolrIndexSearchers open (and that's going to take up a lot of memory).
There should normally be 1, and occasionally 2 when autowarming. There is also sometimes an additional non-caching SolrIndexSearcher opened by the UpdateHandler for use in deleting documents. I've never seen this many pile up before. Is this stock Solr, or have any modifications been made?

Do you have logs you can check for error messages or exceptions? Search for "Memory" to see if you hit an out-of-memory error too.

Oh, and there's no security in Solr, so I wouldn't expose it directly to the internet indefinitely w/o something protecting it from vandals :-)
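Something along these lines is what I'd grep for (the log location below is only a guess -- point it at wherever your Jetty instance actually writes its logs):

$ grep -i "OutOfMemoryError" /path/to/jetty/logs/*.log
$ grep -ci "memory" /path/to/jetty/logs/*.log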
-Yonik

On 9/27/06, Yonik Seeley <[EMAIL PROTECTED]> wrote:

Hi Steve,

It looks like the commit is taking a long time and jetty is timing it out. See this thread:
http://www.nabble.com/Synchronizing-commit-and-optimize-tf1498513.html#a4067023

-Yonik

On 9/27/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> We're having something strange with our SOLR instance.
>
> When we post a commit, we get an empty reply from the server:
>
> $ curl http://search-campaign.unitedeway.org/solr/update --data-binary '<commit/>'
> curl: (52) Empty reply from server
>
> When we post the optimize xml, the following stack trace is returned:
>
> $ curl http://search-campaign.unitedeway.org/solr/update --data-binary '<optimize waitFlush="false" />'
> <result status="1">java.io.IOException: Lock obtain timed out:
> Lock@/web/search/campaign/jetty.tmp/lucene-d1bba62e1f2e75d919a17dcaa15a91a7-write.lock
>     at org.apache.lucene.store.Lock.obtain(Lock.java:56)
>     at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:256)
>     at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:206)
>     at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:65)
>     at org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHandler.java:118)
>     at org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHandler2.java:153)
>     at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:458)
>     at org.apache.solr.core.SolrCore.update(SolrCore.java:755)
>     at org.apache.solr.servlet.SolrUpdateServlet.doPost(SolrUpdateServlet.java:52)
>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:767)
>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:860)
>     at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:408)
>     at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:350)
>     at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:195)
>     at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:164)
>     at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:536)
>     at org.mortbay.jetty.Server.handle(Server.java:309)
>     at org.mortbay.jetty.Server.handle(Server.java:285)
>     at org.mortbay.jetty.HttpConnection.doHandler(HttpConnection.java:363)
>     at org.mortbay.jetty.HttpConnection.access$1600(HttpConnection.java:45)
>     at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:625)
>     at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:613)
>     at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:195)
>     at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:297)
>     at org.mortbay.jetty.nio.SelectChannelConnector$HttpEndPoint.run(SelectChannelConnector.java:680)
>     at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:412)
> </result>
>
> Really at a loss as to what to do. Our cache is huge and we'd like to optimize things a bit.
>
> Thoughts?
>
> --Steve
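One more thing worth trying while you sort out the timeout: a commit that doesn't block the HTTP request until the new searcher is ready. Roughly something like this (untested; double-check that your Solr version supports the waitSearcher attribute, and substitute your own host):

$ curl http://search-campaign.unitedeway.org/solr/update --data-binary '<commit waitFlush="false" waitSearcher="false"/>'

With waitSearcher="false" the response should come back as soon as the commit is issued rather than after autowarming finishes, so curl/jetty is less likely to hit the idle timeout.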