No... I just moved to master/slave; I believe it happened during a 'merge' of uncommitted data... And I tuned mergeFactor and maxBufferedDocs, so I hope it will help... At least, I don't see any performance problems on the master with 600,000 updates since yesterday...
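For reference, the knobs I mean live in solrconfig.xml under <indexDefaults>; a rough sketch with illustrative values only (not the exact numbers I settled on):

  <!-- solrconfig.xml: index-time tuning, example values only -->
  <indexDefaults>
    <!-- how many same-size segments accumulate before they are merged;
         higher = faster indexing, more segments to search until optimize -->
    <mergeFactor>10</mergeFactor>
    <!-- how many docs are buffered in RAM before a new segment is flushed -->
    <maxBufferedDocs>10000</maxBufferedDocs>
  </indexDefaults>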
> do you have a stack trace around the Lucene clone() stuff?
>
> -Grant
>
> On Feb 7, 2008, at 9:56 PM, Fuad Efendi wrote:
>
> > Question:
> >
> > Why do constant updates slow down SOLR performance even if I am not
> > executing Commit? I just noticed this... A thread dump shows something
> > like "Lucene ... Clone()", and significant CPU usage. I did about 5
> > million updates via HTTP XML, a single document at a time, without
> > commit, and performance went down to 100% CPU...
> >
> > After Commit/Optimize it stabilized: 0.5 - 2 seconds per page
> > generation (100 facets + 100 products), 15%-25% CPU:
> >
> > filterCache
> > class: org.apache.solr.search.LRUCache
> > version: 1.0
> > description: LRU Cache(maxSize=2000000, initialSize=1000000)
> > stats: lookups : 109294990
> > hits : 107637040
> > hitratio : 0.98
> > inserts : 1658092
> > evictions : 0
> > size : 879637
> > cumulative_lookups : 341225983
> > cumulative_hits : 337721881
> > cumulative_hitratio : 0.98
> > cumulative_inserts : 3504573
> > cumulative_evictions : 0
> >
> > Performance of SOLR itself is good/acceptable (even with a huge facet
> > distribution), but it goes down when I do a lot of updates (without
> > commit/autocommit).
> >
> > Thanks,
> > Fuad
> > http://www.tokenizer.org
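P.S. In case it helps anyone reproduce this: the updates were single-document XML posts to the /update handler, with no <commit/> in between. Roughly like this (the field names below are made up for the example and must match your schema.xml; I'm assuming the default port):

  <!-- POST http://localhost:8983/solr/update  (Content-Type: text/xml) -->
  <add>
    <doc>
      <field name="id">12345</field>
      <field name="name">example product</field>
      <field name="price">19.99</field>
    </doc>
  </add>
  <!-- no <commit/> is sent; that is exactly the scenario described above -->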