20+ hours? I index 3 million records in 3 hours. Is your autocommit causing a snapshot? What do you have listed in the event listeners?
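[Editorial note: the autoCommit settings and postCommit event listeners being asked about live in solrconfig.xml. A sketch of the relevant fragment, using the 7-second maxTime mentioned later in this thread; the maxDocs value and snapshooter listener are illustrative examples, not recommendations:]

```xml
<!-- Illustrative solrconfig.xml fragment (Solr 1.x era); values are
     examples drawn from this thread, not tuning advice. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>7000</maxTime>   <!-- ms: auto-commit at most every 7 seconds -->
    <maxDocs>10000</maxDocs>  <!-- example: or after this many pending docs -->
  </autoCommit>
  <!-- postCommit listeners run on every commit; an expensive listener
       (e.g. snapshooter) makes each commit correspondingly expensive -->
  <listener event="postCommit" class="solr.RunExecutableListener">
    <str name="exe">snapshooter</str>
    <str name="dir">solr/bin</str>
    <bool name="wait">true</bool>
  </listener>
</updateHandler>
```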
Jack

On 5/14/09, Gargate, Siddharth <sgarg...@ptc.com> wrote:
> Hi all,
> I am also facing the same issue, where autocommit blocks all
> other requests. I have around 100,000 documents with an average size of
> 100K each. It took more than 20 hours to index.
> I have currently set autocommit maxTime to 7 seconds and mergeFactor to 25.
> Do I need more configuration changes?
> I also see that memory usage peaks at the heap limit (6 GB
> in my case). It looks like Solr spends most of the time in GC.
> As I understand it, the fix for SOLR-1155 is that the commit
> will run in the background while new documents are queued in memory.
> But I am afraid of the memory consumed by this queue if the commit takes
> much longer to complete.
>
> Thanks,
> Siddharth
>
> -----Original Message-----
> From: jayson.minard [mailto:jayson.min...@gmail.com]
> Sent: Saturday, May 09, 2009 10:45 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Autocommit blocking adds? AutoCommit Speedup?
>
>
> First cut of the updated handler is now in:
> https://issues.apache.org/jira/browse/SOLR-1155
>
> It needs review from those who know Lucene better, and a double check
> for errors in locking and other areas of the code. Thanks.
>
> --j
>
>
> jayson.minard wrote:
>>
>> Can we move this to patch files within the JIRA issue, please? That
>> will make it easier to review and to help out, as a patch against
>> current trunk.
>>
>> --j
>>
>>
>> Jim Murphy wrote:
>>>
>>> Yonik Seeley-2 wrote:
>>>>
>>>> ...your code snippet elided and edited below ...
>>>>
>>>
>>> Don't take this code as correct (or even compiling), but is this the
>>> essence? I moved shared access to the writer inside the read lock and
>>> kept the other non-commit bits under the write lock. I'd need to
>>> rethink the locking in a more fundamental way, but is this close to
>>> the idea?
>>>
>>> public void commit(CommitUpdateCommand cmd) throws IOException {
>>>
>>>   if (cmd.optimize) {
>>>     optimizeCommands.incrementAndGet();
>>>   } else {
>>>     commitCommands.incrementAndGet();
>>>   }
>>>
>>>   Future[] waitSearcher = null;
>>>   if (cmd.waitSearcher) {
>>>     waitSearcher = new Future[1];
>>>   }
>>>
>>>   boolean error = true;
>>>   iwCommit.lock();
>>>   try {
>>>     log.info("start " + cmd);
>>>
>>>     if (cmd.optimize) {
>>>       closeSearcher();
>>>       openWriter();
>>>       writer.optimize(cmd.maxOptimizeSegments);
>>>     }
>>>   } finally {
>>>     iwCommit.unlock();
>>>   }
>>>
>>>   iwAccess.lock();
>>>   try {
>>>     writer.commit();
>>>   } finally {
>>>     iwAccess.unlock();
>>>   }
>>>
>>>   iwCommit.lock();
>>>   try {
>>>     callPostCommitCallbacks();
>>>     if (cmd.optimize) {
>>>       callPostOptimizeCallbacks();
>>>     }
>>>     // open a new searcher in the sync block to avoid opening it
>>>     // after a deleteByQuery changed the index, or in between deletes
>>>     // and adds of another commit being done.
>>>     core.getSearcher(true, false, waitSearcher);
>>>
>>>     // reset commit tracking
>>>     tracker.didCommit();
>>>
>>>     log.info("end_commit_flush");
>>>
>>>     error = false;
>>>   } finally {
>>>     iwCommit.unlock();
>>>     addCommands.set(0);
>>>     deleteByIdCommands.set(0);
>>>     deleteByQueryCommands.set(0);
>>>     numErrors.set(error ? 1 : 0);
>>>   }
>>>
>>>   // if we are supposed to wait for the searcher to be registered,
>>>   // then we should do it outside of the synchronized block so that
>>>   // other update operations can proceed.
>>>   if (waitSearcher != null && waitSearcher[0] != null) {
>>>     try {
>>>       waitSearcher[0].get();
>>>     } catch (InterruptedException e) {
>>>       SolrException.log(log, e);
>>>     } catch (ExecutionException e) {
>>>       SolrException.log(log, e);
>>>     }
>>>   }
>>> }
>>>
>>
>
> --
> View this message in context:
> http://www.nabble.com/Autocommit-blocking-adds---AutoCommit-Speedup--tp23435224p23457422.html
> Sent from the Solr - User mailing list archive at Nabble.com.
> >

--
Sent from my mobile device
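[Editorial note: Jim's snippet above hinges on a read/write lock pair — `iwAccess` (shared) held by threads touching the IndexWriter, `iwCommit` (exclusive) held for bookkeeping that must block adds. A minimal standalone sketch of that pattern, with hypothetical names and a counter standing in for the writer (this is not Solr's actual code):]

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the two-lock pattern: many add() calls may hold the read
// side (iwAccess) concurrently; commit() takes the write side (iwCommit)
// exclusively, which waits for in-flight adds and blocks new ones.
class CommitLockSketch {
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private final ReentrantReadWriteLock.ReadLock iwAccess = rwl.readLock();
    private final ReentrantReadWriteLock.WriteLock iwCommit = rwl.writeLock();

    // Stand-in for the IndexWriter's buffered-document state.
    private final AtomicInteger adds = new AtomicInteger();

    public void addDocument() {
        iwAccess.lock();            // shared: adds do not block each other
        try {
            adds.incrementAndGet(); // stand-in for writer.addDocument(...)
        } finally {
            iwAccess.unlock();
        }
    }

    public int commit() {
        iwCommit.lock();            // exclusive: blocks adds during bookkeeping
        try {
            return adds.get();      // stand-in for pre-commit bookkeeping;
        } finally {                 // the slow writer.commit() itself would
            iwCommit.unlock();      // run under iwAccess, as in the snippet,
        }                           // so concurrent adds can proceed
    }

    public static void main(String[] args) {
        CommitLockSketch s = new CommitLockSketch();
        s.addDocument();
        s.addDocument();
        System.out.println(s.commit()); // prints 2
    }
}
```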