Put a profiler on it and see where the hot spots are?
On Sun, Oct 28, 2018 at 8:27 PM Walter Underwood <wun...@wunderwood.org> wrote:
>
> Upgrade, so that indexing isn’t using as much CPU. That leaves more CPU for 
> search.
>
> Make sure you are on a recent release of Java. Run the G1 collector.
>
> If you need more throughput, add more replicas or use instances with more CPUs.
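> 
> (For the G1 point, that usually just means running the Solr JVM with
> -XX:+UseG1GC.) Adding a replica can be done through the Collections API;
> here is a minimal SolrJ sketch, where the host, collection, and shard
> names are placeholders and a reasonably recent SolrJ (6.x or later) is
> assumed:
> 
>     import org.apache.solr.client.solrj.impl.HttpSolrClient;
>     import org.apache.solr.client.solrj.request.CollectionAdminRequest;
> 
>     public class AddReplicaExample {
>         public static void main(String[] args) throws Exception {
>             // Any node in the cluster can accept the Collections API call.
>             try (HttpSolrClient client =
>                     new HttpSolrClient.Builder("http://solrhost:8980/solr").build()) {
>                 // ADDREPLICA for shard1 of "mycollection" (placeholder names).
>                 CollectionAdminRequest.AddReplica req =
>                         CollectionAdminRequest.addReplicaToShard("mycollection", "shard1");
>                 req.process(client);
>             }
>         }
>     }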
>
> Has the index gotten bigger since the move?
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
> > On Oct 28, 2018, at 8:21 PM, Parag Shah <parags.li...@gmail.com> wrote:
> >
> > The original question, though, is about a performance issue in the searcher.
> > How would you improve that?
> >
> > On Sun, Oct 28, 2018 at 4:37 PM Walter Underwood <wun...@wunderwood.org>
> > wrote:
> >
> >> The original question is for a three-node SolrCloud cluster with
> >> continuous updates. Optimize in this configuration won’t help; it will
> >> just cause expensive merges later.
> >>
> >> I would recommend updating from Solr 4.4; that is a very early release
> >> for SolrCloud. We saw dramatic speedups in indexing with 6.x. In early
> >> releases, the replicas actually did more indexing work than the leader.
> >>
> >> wunder
> >> Walter Underwood
> >> wun...@wunderwood.org
> >> http://observer.wunderwood.org/  (my blog)
> >>
> >>> On Oct 28, 2018, at 2:13 PM, Erick Erickson <erickerick...@gmail.com>
> >> wrote:
> >>>
> >>> Well, if you optimize on the master you'll inevitably copy the entire
> >>> index to each of the slaves. Consuming that much network bandwidth can
> >>> be A Bad Thing.
> >>>
> >>> Here's the background for Walter's comment:
> >>>
> >> https://lucidworks.com/2017/10/13/segment-merging-deleted-documents-optimize-may-bad/
> >>>
> >>> Solr 7.5 is much better about this:
> >>>
> >> https://lucidworks.com/2018/06/20/solr-and-optimizing-your-index-take-ii/
> >>>
> >>> Even with the improvements in Solr 7.5, optimize is still a very
> >>> expensive operation, and unless you've measured and can _prove_ it's
> >>> beneficial enough to be worth the cost, you should avoid it.
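> >>>
> >>> One cheap way to do that measuring is to check how many deleted documents
> >>> the index is actually carrying before paying for a force merge. A minimal
> >>> SolrJ sketch; the host and core name are placeholders, and it assumes
> >>> SolrJ's LukeRequest/LukeResponse (the client-side face of /admin/luke):
> >>>
> >>>     import org.apache.solr.client.solrj.impl.HttpSolrClient;
> >>>     import org.apache.solr.client.solrj.request.LukeRequest;
> >>>     import org.apache.solr.client.solrj.response.LukeResponse;
> >>>
> >>>     public class DeletedDocsCheck {
> >>>         public static void main(String[] args) throws Exception {
> >>>             // Base URL points at a single core; host and core are placeholders.
> >>>             try (HttpSolrClient client =
> >>>                     new HttpSolrClient.Builder("http://solrhost:8980/solr/rn0").build()) {
> >>>                 LukeResponse luke = new LukeRequest().process(client);
> >>>                 int numDocs = luke.getNumDocs(); // live documents
> >>>                 int maxDoc = luke.getMaxDoc();   // live docs plus deletes not yet merged away
> >>>                 double deletedPct = maxDoc == 0 ? 0.0
> >>>                         : 100.0 * (maxDoc - numDocs) / maxDoc;
> >>>                 System.out.printf("numDocs=%d maxDoc=%d deleted=%.1f%%%n",
> >>>                         numDocs, maxDoc, deletedPct);
> >>>             }
> >>>         }
> >>>     }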
> >>>
> >>> Best,
> >>> Erick
> >>> On Sun, Oct 28, 2018 at 1:51 PM Parag Shah <parags.li...@gmail.com>
> >> wrote:
> >>>>
> >>>> What would you do if your performance is degrading?
> >>>>
> >>>> I am not suggesting doing this for a serving index, only on the master,
> >>>> which, once optimized, gets replicated. Am I missing something here?
> >>>>
> >>>> On Sun, Oct 28, 2018 at 11:05 AM Walter Underwood <
> >> wun...@wunderwood.org>
> >>>> wrote:
> >>>>
> >>>>> Do not run optimize (force merge) unless you really understand the
> >>>>> downside.
> >>>>>
> >>>>> If you are continually adding and deleting documents, you really do not
> >>>>> want
> >>>>> to run optimize.
> >>>>>
> >>>>> wunder
> >>>>> Walter Underwood
> >>>>> wun...@wunderwood.org
> >>>>> http://observer.wunderwood.org/  (my blog)
> >>>>>
> >>>>>> On Oct 28, 2018, at 9:24 AM, Parag Shah <parags.li...@gmail.com>
> >> wrote:
> >>>>>>
> >>>>>> Hi Mugeesh,
> >>>>>>
> >>>>>>  Have you tried optimizing the indexes to see if performance improves?
> >>>>>> It is well known that, over time, as indexing goes on Lucene creates
> >>>>>> more segments, all of which have to be searched, so searches take
> >>>>>> longer. Merging happens constantly, but continuous indexing will still
> >>>>>> introduce small segments all the time. Have you tried running "optimize"
> >>>>>> periodically? Is it something that you can afford to run? If you have a
> >>>>>> master-slave setup separating the indexer from the searchers, you can
> >>>>>> replicate on optimize from the master, thereby keeping the optimize load
> >>>>>> off the searchers while still replicating to them periodically. That
> >>>>>> might help reduce latency. Optimize merges segments and hence creates a
> >>>>>> more compact index that is faster to search. There may be some higher
> >>>>>> latency temporarily right after the replication, but it goes away soon
> >>>>>> after the in-memory caches are warm.
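> >>>>>>
> >>>>>> If you do have that kind of non-Cloud master/slave layout, issuing the
> >>>>>> optimize only against the indexing master might look roughly like this
> >>>>>> in SolrJ (the URL and core name are placeholders, and the
> >>>>>> replicateAfter="optimize" setting mentioned in the comment is the
> >>>>>> standard ReplicationHandler option, shown here as an assumption):
> >>>>>>
> >>>>>>     import org.apache.solr.client.solrj.impl.HttpSolrClient;
> >>>>>>
> >>>>>>     public class OptimizeOnMaster {
> >>>>>>         public static void main(String[] args) throws Exception {
> >>>>>>             // Placeholder URL for the indexing master; the searchers
> >>>>>>             // are never asked to optimize.
> >>>>>>             try (HttpSolrClient master = new HttpSolrClient.Builder(
> >>>>>>                     "http://index-master:8983/solr/mycore").build()) {
> >>>>>>                 // waitFlush=true, waitSearcher=true, maxSegments=1
> >>>>>>                 master.optimize(true, true, 1);
> >>>>>>                 // If the master's ReplicationHandler is configured with
> >>>>>>                 // replicateAfter="optimize", the slaves pull the merged
> >>>>>>                 // index on their next poll instead of merging themselves.
> >>>>>>             }
> >>>>>>         }
> >>>>>>     }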
> >>>>>>
> >>>>>>  What query rate (searches/sec) are you seeing?
> >>>>>>
> >>>>>> Regards
> >>>>>> Parag
> >>>>>>
> >>>>>> On Wed, Sep 26, 2018 at 2:02 AM Mugeesh Husain <muge...@gmail.com>
> >>>>> wrote:
> >>>>>>
> >>>>>>> Hi,
> >>>>>>>
> >>>>>>> We are running a 3-node SolrCloud (4.4) cluster in our production
> >>>>>>> infrastructure. We recently moved our Solr servers from SoftLayer to
> >>>>>>> DigitalOcean hosts with the same configuration as production.
> >>>>>>>
> >>>>>>> Now we are facing some slowness in the searcher when we index documents:
> >>>>>>> when we stop indexing, searches are fine, but while adding documents they
> >>>>>>> become slow. We index on one Solr server and use the other two for
> >>>>>>> serving search requests.
> >>>>>>>
> >>>>>>>
> >>>>>>> I am just wondering why searches become slow while indexing, even though
> >>>>>>> we are using the same configuration as we had in prod?
> >>>>>>>
> >>>>>>> At the moment we are pushing 500 documents at a time, and this process
> >>>>>>> runs continuously (adding & deleting).
> >>>>>>>
> >>>>>>> These are the indexing logs:
> >>>>>>>
> >>>>>>> 65497339 [http-apr-8980-exec-45] INFO
> >>>>>>> org.apache.solr.update.processor.LogUpdateProcessor  – [rn0]
> >>>>> webapp=/solr
> >>>>>>> path=/update
> >>>>>>> params={distrib.from=
> >>>>>>>
> >>>>>
> >> http://solrhost:8980/solr/rn0/&update.distrib=FROMLEADER&wt=javabin&version=2&update.chain=dedupe
> >>>>>>> }
> >>>>>>> {add=[E4751FCCE977BAC7 (1612655281518411776), 8E712AD1BE76AB63
> >>>>>>> (1612655281527848960), 789AA5D0FB149A37 (1612655281538334720),
> >>>>>>> B4F3AA526506F6B7 (1612655281553014784), A9F29F556F6CD1C8
> >>>>>>> (1612655281566646272), 8D15813305BF7417 (1612655281584472064),
> >>>>>>> DD13CFA12973E85B (1612655281596006400), 3C93BDBA5DFDE3B3
> >>>>>>> (1612655281613832192), 96981A0785BFC9BF (1612655281625366528),
> >>>>>>> D1E52788A466E484 (1612655281636900864)]} 0 9
> >>>>>>> 65497459 [http-apr-8980-exec-22] INFO
> >>>>>>> org.apache.solr.update.processor.LogUpdateProcessor  – [rn0]
> >>>>> webapp=/solr
> >>>>>>> path=/update
> >>>>>>> params={distrib.from=
> >>>>>>>
> >>>>>
> >> http://solrhost:8980/solr/rn0/&update.distrib=FROMLEADER&wt=javabin&version=2&update.chain=dedupe
> >>>>>>> }
> >>>>>>> {add=[D8AA2E196967D241 (1612655281649483776), E73420772E3235B7
> >>>>>>> (1612655281666260992), DFDCF1F8325A3EF6 (1612655281680941056),
> >>>>>>> 1B10EF90E7C3695F (1612655281689329664), 51CBD7F59644A718
> >>>>>>> (1612655281699815424), 1D31EF403AF13E04 (1612655281714495488),
> >>>>>>> 68E1DC3A614B7269 (1612655281723932672), F9BF6A3CF89D74FB
> >>>>>>> (1612655281737564160), 419E017E1F360EB6 (1612655281749098496),
> >>>>>>> 50EF977E5E873065 (1612655281759584256)]} 0 9
> >>>>>>> 65497572 [http-apr-8980-exec-40] INFO
> >>>>>>> org.apache.solr.update.processor.LogUpdateProcessor  – [rn0]
> >>>>> webapp=/solr
> >>>>>>> path=/update
> >>>>>>> params={distrib.from=
> >>>>>>>
> >>>>>
> >> http://solrhost:8980/solr/rn0/&update.distrib=FROMLEADER&wt=javabin&version=2&update.chain=dedupe
> >>>>>>> }
> >>>>>>> {add=[B63AD0671A5E57B9 (1612655281772167168), 00B8A4CCFABFA1AC
> >>>>>>> (1612655281784750080), 9C89A1516C9166E6 (1612655281798381568),
> >>>>>>> 9322E17ECEAADE66 (1612655281803624448), C6DDB4BF8E94DE6B
> >>>>>>> (1612655281814110208), DAA49178A5E74285 (1612655281830887424),
> >>>>>>> 829C2AE38A3E78E4 (1612655281845567488), 4C7B19756D8E4208
> >>>>>>> (1612655281859198976), BE0F7354DC30164C (1612655281869684736),
> >>>>>>> 59C4A764BB50B13B (1612655281880170496)]} 0 9
> >>>>>>> 65497724 [http-apr-8980-exec-31] INFO
> >>>>>>> org.apache.solr.update.processor.LogUpdateProcessor  – [rn0]
> >>>>> webapp=/solr
> >>>>>>> path=/update
> >>>>>>> params={distrib.from=
> >>>>>>>
> >>>>>
> >> http://solrhost:8980/solr/rn0/&update.distrib=FROMLEADER&wt=javabin&version=2&update.chain=dedupe
> >>>>>>> }
> >>>>>>> {add=[1F694F99367D7CE1 (1612655281895899136), 2AEAAF67A6893ABE
> >>>>>>> (1612655281911627776), 81E72DC36C7A9EBC (1612655281926307840),
> >>>>>>> AA71BD9B23548E6D (1612655281939939328), 359E8C4C6EC72AFA
> >>>>>>> (1612655281954619392), 7FEB6C65A3E23311 (1612655281972445184),
> >>>>>>> 9B5ED0BE7AFDD1D0 (1612655281991319552), 99FE8958F6ED8B91
> >>>>>>> (1612655282009145344), 2BDC61DC4038E19F (1612655282023825408),
> >>>>>>> 5131AEC4B87FBFE9 (1612655282037456896)]} 0 10
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> --
> >>>>>>> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
> >>>>>>>
> >>>>>
> >>>>>
> >>
> >>
>
