This blog post by Erick will help you understand the different commit
options and the transaction log, and it gives some recommendations for
the ingestion process.

http://searchhub.org/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
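In short, for bulk ingestion the usual advice is: batch your adds in SolrJ
and don't call commit() from the client at all; let the server-side
<autoCommit> (with openSearcher=false) handle durability. A minimal sketch
of that pattern, assuming SolrJ 4.x, a placeholder ZooKeeper connect string,
and your productCatalog collection:

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.solr.client.solrj.impl.CloudSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    // Sketch: batched adds, no explicit commits from the client.
    public class BulkIngest {
        public static void main(String[] args) throws Exception {
            // zkHost below is a placeholder for your ensemble
            CloudSolrServer server =
                new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
            server.setDefaultCollection("productCatalog");

            List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
            for (int i = 0; i < 10000; i++) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", Integer.toString(i));
                batch.add(doc);
                if (batch.size() == 500) {   // solr.update.batchSize
                    server.add(batch);       // sent for indexing; no commit
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {
                server.add(batch);
            }
            server.shutdown();               // autoCommit covers durability
        }
    }

Note there is no commit() inside the loop; if you need immediate visibility
at the end of the run, one final commit is harmless, but per-batch commits
from several threads are exactly what piles up warming searchers.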


On Tue, Feb 25, 2014 at 11:40 AM, Furkan KAMACI <furkankam...@gmail.com> wrote:

> Hi;
>
> You should read here:
>
> http://wiki.apache.org/solr/FAQ#What_does_.22exceeded_limit_of_maxWarmingSearchers.3DX.22_mean.3F
>
> On the other hand, do you have 4 ZooKeeper instances as a quorum?
>
> Thanks;
> Furkan KAMACI
>
>
> 2014-02-25 20:31 GMT+02:00 Joel Cohen <joel.co...@bluefly.com>:
>
> > Hi all,
> >
> > I'm working with Solr 4.6.1 and I'm trying to tune my ingestion process.
> > The ingestion runs a big DB query and then does some ETL on it and
> > inserts via SolrJ.
> >
> > I have a 4-node cluster with 1 shard per node, running in Tomcat with
> > -Xmx4096m. Each node has a separate ZooKeeper instance on it, plus the
> > ingestion server has one as well. The Solr servers have 8 cores and 64 GB
> > of total RAM. The ingestion server is a VM with 8 GB and 2 cores.
> >
> > My ingestion code uses a few settings to control concurrency and batch
> > size.
> >
> > solr.update.batchSize=500
> > solr.threadCount=4
> >
> > With this setup, I'm getting a lot of errors and the ingestion is taking
> > much longer than it should.
> >
> > Every so often during the ingestion I get these errors on the Solr
> > servers:
> >
> > WARN  shard1 - 2014-02-25 11:18:34.341;
> > org.apache.solr.update.UpdateLog$LogReplayer; Starting log replay
> > tlog{file=/usr/local/solr_shard1/productCatalog/data/tlog/tlog.0000000000000014074
> > refcount=2} active=true starting pos=776774
> > WARN  shard1 - 2014-02-25 11:18:37.275;
> > org.apache.solr.update.UpdateLog$LogReplayer; Log replay finished.
> > recoveryInfo=RecoveryInfo{adds=4065 deletes=0 deleteByQuery=0 errors=0
> > positionOfStart=776774}
> > WARN  shard1 - 2014-02-25 11:18:37.960; org.apache.solr.core.SolrCore;
> > [productCatalog] PERFORMANCE WARNING: Overlapping onDeckSearchers=2
> > WARN  shard1 - 2014-02-25 11:18:37.961; org.apache.solr.core.SolrCore;
> > [productCatalog] Error opening new searcher. exceeded limit of
> > maxWarmingSearchers=2, try again later.
> > WARN  shard1 - 2014-02-25 11:18:37.961; org.apache.solr.core.SolrCore;
> > [productCatalog] Error opening new searcher. exceeded limit of
> > maxWarmingSearchers=2, try again later.
> > ERROR shard1 - 2014-02-25 11:18:37.961;
> > org.apache.solr.common.SolrException; org.apache.solr.common.SolrException:
> > Error opening new searcher. exceeded limit of maxWarmingSearchers=2, try
> > again later.
> >         at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1575)
> >         at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1346)
> >         at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:592)
> >
> > I cut threads down to 1 and batchSize down to 100 and the errors go away,
> > but the upload time jumps up by a factor of 25.
> >
> > My solrconfig.xml has:
> >
> >      <autoCommit>
> >        <maxDocs>${solr.autoCommit.maxDocs:10000}</maxDocs>
> >        <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
> >        <openSearcher>false</openSearcher>
> >      </autoCommit>
> >
> >      <autoSoftCommit>
> >        <maxTime>${solr.autoSoftCommit.maxTime:1000}</maxTime>
> >      </autoSoftCommit>
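A note on the config quoted just above: a 1-second autoSoftCommit opens a
new searcher roughly every second, and with 4 ingestion threads the warming
can't keep up, which is what the maxWarmingSearchers=2 errors are telling
you. One option, assuming you can tolerate some staleness while a bulk load
runs, is to raise the soft commit interval for the duration of the load,
e.g. something like:

     <autoSoftCommit>
       <maxTime>${solr.autoSoftCommit.maxTime:60000}</maxTime>
     </autoSoftCommit>

(60 seconds is just an illustrative value, and since it is already wired up
as a system property you can override it at startup without editing
solrconfig.xml.)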
> >
> > I turned autowarmCount down to 0 for all the caches. What else can I tune
> > to allow me to run bigger batch sizes and more threads in my upload
> > script?
> >
> > --
> >
> > joel cohen, senior system engineer
> >
> > e joel.co...@bluefly.com p 212.944.8000 x276
> > bluefly, inc. 42 w. 39th st. new york, ny 10018
> > www.bluefly.com <http://www.bluefly.com/?referer=autosig> | *fly since
> > 2013...*
> >
>
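One more client-side option: ConcurrentUpdateSolrServer keeps an internal
queue and sends updates from its own worker threads, so it can replace
hand-rolled batching and threading. A rough sketch, assuming SolrJ 4.x, a
placeholder URL, and illustrative queue/thread sizes:

    import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    // Sketch: the server object batches and streams updates in the background.
    public class ConcurrentIngest {
        public static void main(String[] args) throws Exception {
            // URL, queue size and thread count are placeholders to tune
            ConcurrentUpdateSolrServer server = new ConcurrentUpdateSolrServer(
                    "http://solr1:8080/solr/productCatalog", 10000, 4);

            for (int i = 0; i < 100000; i++) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", Integer.toString(i));
                server.add(doc);             // queued; sent by worker threads
            }
            server.blockUntilFinished();     // drain the queue before exiting
            server.shutdown();
        }
    }

The trade-off in SolrCloud is that every update funnels through the one node
you point it at, so CloudSolrServer is usually still the safer default.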
