If we reduce the number of threads, is it going to help?
  Is there any other way to debug this?
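
For example, would restarting the node with something like

  SOLR_OPTS="$SOLR_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/solr/logs"

in solr.in.sh give us a heap dump to analyze? (Those are standard JVM flags;
the dump path is just an example for our setup.)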

On Mon, 3 Feb, 2020, 2:52 AM Walter Underwood, <wun...@wunderwood.org>
wrote:

> The only time I’ve ever had an OOM is when Solr gets a huge load
> spike and fires up 2000 threads. Then it runs out of space for stacks.
>
> I’ve never run anything other than an 8GB heap, starting with Solr 1.3
> at Netflix.
>
> Agreed about filter cache, though I’d expect heavy use of that to most
> often be part of a faceted search system.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
> > On Feb 2, 2020, at 12:36 PM, Erick Erickson <erickerick...@gmail.com>
> wrote:
> >
> > Mostly I was reacting to the statement that the number
> > of docs increased by over 4x and then there were
> > memory problems.
> >
> > Hmmm, that said, what does “heap space is getting full”
> > mean anyway? If you’re hitting OOMs, that’s one thing. If
> > you’re measuring the amount of heap consumed and
> > noticing that it fills up, that’s totally normal. Java will
> > collect garbage when it needs to. If you attach something
> > like jconsole to Solr you’ll see memory grow and shrink
> > quite regularly. Take a look at your garbage collection logs
> > with something like GCViewer to see how much memory is
> > still required after a GC cycle. If that number is reasonable
> > then there’s no problem.
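> >
> > For what it’s worth, on Java 9+ recent Solr versions already pass
> > GC-log options along these lines (and write solr_gc.log under the
> > logs directory by default), overridable via GC_LOG_OPTS in
> > solr.in.sh; the path below is only an example:
> >
> >   -Xlog:gc*:file=/var/solr/logs/solr_gc.log:time,uptime:filecount=9,filesize=20M
> >
> > So often it’s just a matter of finding that file and opening it in
> > GCViewer.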
> >
> > Walter:
> >
> > Well, the expectation that one can keep adding docs without
> > considering heap size is simply naive. The filterCache
> > for instance grows linearly with the number of documents
> > (OK, if it stores the full bitset). Real Time Get requires
> > on-heap structures to keep track of changed docs between
> > commits. Etc.
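> >
> > Back-of-the-envelope, assuming full bitsets and a typical size=512
> > filterCache: 4.5 million docs is 4,500,000 / 8 = about 550 KB per
> > cached filter, so a full cache is roughly 280 MB of heap for that one
> > cache alone.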
> >
> > The OP hasn’t even told us whether docValues are enabled
> > appropriately, which if not set for fields needing it will also
> > grow heap requirements linearly with the number of docs.
> >
> > I’ll totally agree that the relationship between the size of
> > the index on disk and heap is iffy at best. But if more heap is
> > _not_ needed for bigger indexes then we’d never hit OOMs
> > no matter how many docs we put in 4G.
> >
> > Best,
> > Erick
> >
> >
> >
> >> On Feb 2, 2020, at 11:18 AM, Walter Underwood <wun...@wunderwood.org>
> wrote:
> >>
> >> We CANNOT diagnose anything until you tell us the error message!
> >>
> >> Erick, I strongly disagree that more heap is needed for bigger indexes.
> >> Except for faceting, Lucene was designed to stream index data and
> >> work regardless of the size of the index. Indexing is in RAM buffer
> >> sized chunks, so large updates also don’t need extra RAM.
> >>
> >> wunder
> >> Walter Underwood
> >> wun...@wunderwood.org
> >> http://observer.wunderwood.org/  (my blog)
> >>
> >>> On Feb 2, 2020, at 7:52 AM, Rajdeep Sahoo <rajdeepsahoo2...@gmail.com>
> wrote:
> >>>
> >>> We have allocated 16 GB of heap space out of 24 GB.
> >>> There are 3 Solr cores here; for one core, when the number of
> >>> documents increases to around 4.5 lakhs, this scenario happens.
> >>>
> >>>
> >>> On Sun, 2 Feb, 2020, 9:02 PM Erick Erickson, <erickerick...@gmail.com>
> >>> wrote:
> >>>
> >>>> Allocate more heap and possibly add more RAM.
> >>>>
> >>>> What are your expectations? You can’t continue to
> >>>> add documents to your Solr instance without regard to
> >>>> how much heap you’ve allocated. You’ve put over 4x
> >>>> the number of docs on the node. There’s no magic here.
> >>>> You can’t continue to add docs to a Solr instance without
> >>>> increasing the heap at some point.
> >>>>
> >>>> And as far as I know, you’ve never told us how much heap you
> >>>> _are_ allocating. The default for Java processes is 512M, which
> >>>> is quite small. So perhaps it’s a simple matter of starting Solr
> >>>> with the -Xmx parameter set to something larger.
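> >>>>
> >>>> For example (the value is only illustrative), either start with
> >>>>
> >>>>   bin/solr start -m 8g
> >>>>
> >>>> or set SOLR_HEAP="8g" (or SOLR_JAVA_MEM="-Xms8g -Xmx8g") in
> >>>> solr.in.sh so the setting survives restarts.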
> >>>>
> >>>> Best,
> >>>> Erick
> >>>>
> >>>>> On Feb 2, 2020, at 10:19 AM, Rajdeep Sahoo <
> rajdeepsahoo2...@gmail.com>
> >>>> wrote:
> >>>>>
> >>>>> What can we do in this scenario, as the Solr master node is going
> >>>>> down and the indexing is failing?
> >>>>> Please provide some workaround for this issue.
> >>>>>
> >>>>> On Sat, 1 Feb, 2020, 11:51 PM Walter Underwood, <
> wun...@wunderwood.org>
> >>>>> wrote:
> >>>>>
> >>>>>> What message do you get about the heap space?
> >>>>>>
> >>>>>> It is completely normal for Java to use all of heap before
> >>>>>> running a major GC. That is how the JVM works.
> >>>>>>
> >>>>>> wunder
> >>>>>> Walter Underwood
> >>>>>> wun...@wunderwood.org
> >>>>>> http://observer.wunderwood.org/  (my blog)
> >>>>>>
> >>>>>>> On Feb 1, 2020, at 6:35 AM, Rajdeep Sahoo <
> rajdeepsahoo2...@gmail.com>
> >>>>>> wrote:
> >>>>>>>
> >>>>>>> Please reply anyone
> >>>>>>>
> >>>>>>> On Fri, 31 Jan, 2020, 11:37 PM Rajdeep Sahoo, <
> >>>>>> rajdeepsahoo2...@gmail.com>
> >>>>>>> wrote:
> >>>>>>>
> >>>>>>>> This is happening when the number of indexed documents increases.
> >>>>>>>> With 1 million docs it's working fine, but when it crosses 4.5
> >>>>>>>> million the heap space is getting full.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On Wed, 22 Jan, 2020, 7:05 PM Michael Gibney, <
> >>>>>> mich...@michaelgibney.net>
> >>>>>>>> wrote:
> >>>>>>>>
> >>>>>>>>> Rajdeep, you say that "suddenly" heap space is getting full ...
> >>>>>>>>> does this mean that some variant of this configuration was
> >>>>>>>>> working for you at some point, or just that the failure happens
> >>>>>>>>> quickly?
> >>>>>>>>>
> >>>>>>>>> If heap space and faceting are indeed the bottleneck, you might
> >>>>>>>>> make sure that you have docValues enabled for your facet field
> >>>>>>>>> fieldTypes, and perhaps set uninvertible=false.
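> >>>>>>>>>
> >>>>>>>>> A rough sketch with the Schema API (the field name "category"
> >>>>>>>>> and the URL are placeholders; uninvertible needs Solr 7.6+,
> >>>>>>>>> and existing docs must be reindexed before docValues apply):
> >>>>>>>>>
> >>>>>>>>>   curl -X POST -H 'Content-type:application/json' \
> >>>>>>>>>     'http://localhost:8983/solr/mycore/schema' -d '{
> >>>>>>>>>     "replace-field": {"name": "category", "type": "string",
> >>>>>>>>>       "indexed": true, "stored": true,
> >>>>>>>>>       "docValues": true, "uninvertible": false}}'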
> >>>>>>>>>
> >>>>>>>>> I'm not seeing where large numbers of facets initially came
> >>>>>>>>> from in this thread? But on that topic this is perhaps relevant,
> >>>>>>>>> regarding the potential utility of a facet cache:
> >>>>>>>>> https://issues.apache.org/jira/browse/SOLR-13807
> >>>>>>>>>
> >>>>>>>>> Michael
> >>>>>>>>>
> >>>>>>>>> On Wed, Jan 22, 2020 at 7:16 AM Toke Eskildsen <t...@kb.dk>
> wrote:
> >>>>>>>>>>
> >>>>>>>>>> On Sun, 2020-01-19 at 21:19 -0500, Mehai, Lotfi wrote:
> >>>>>>>>>>> I had a similar issue with a large number of facets. There is
> >>>>>>>>>>> no way (at least that I know of) you can get an acceptable
> >>>>>>>>>>> response time from a search engine with a high number of
> >>>>>>>>>>> facets.
> >>>>>>>>>>
> >>>>>>>>>> Just for the record, it is doable under specific circumstances
> >>>>>>>>>> (static single-shard index, only String fields, Solr 4 with
> >>>>>>>>>> patch, fixed list of facet fields):
> >>>>>>>>>>
> https://sbdevel.wordpress.com/2013/03/20/over-9000-facet-fields/
> >>>>>>>>>>
> >>>>>>>>>> More usable for the current case would be to play with
> >>>>>>>>>> facet.threads and throw hardware with many CPU-cores after
> >>>>>>>>>> the problem.
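> >>>>>>>>>>
> >>>>>>>>>> For example (core name and field are placeholders):
> >>>>>>>>>>
> >>>>>>>>>>   curl 'http://localhost:8983/solr/mycore/select?q=*:*&facet=true&facet.field=category&facet.threads=8'
> >>>>>>>>>>
> >>>>>>>>>> facet.threads mainly helps when several fields are faceted,
> >>>>>>>>>> since they can then be processed in parallel.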
> >>>>>>>>>>
> >>>>>>>>>> - Toke Eskildsen, Royal Danish Library
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>>
> >>>>>>
> >>>>>>
> >>>>
> >>>>
> >>
> >
>
>
