Initially we were getting a warning that the ulimit was low, i.e. 1024,
so we changed it to 65000 using ulimit -u 65000.
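For reference, a minimal sketch of the change (the persistent limits.conf
entries are an assumption about a typical Linux setup, not our actual config):

ulimit -u          # show the current max user processes
ulimit -u 65000    # raises it for the current shell session only
# To persist across logins, entries in /etc/security/limits.conf such as:
#   solr  soft  nproc  65000
#   solr  hard  nproc  65000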

Then the error was "Failed to reserve shared memory (error = 1)". Because of
this we removed
   -XX:+UseLargePages
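
(For context: -XX:+UseLargePages only works when huge pages are reserved at
the OS level, which is likely why the reservation failed. A sketch of the
usual Linux setup; the page count is a guess sized for a ~12 GB heap at the
default 2 MB huge-page size:)

sysctl vm.nr_hugepages               # how many huge pages are reserved now
sudo sysctl -w vm.nr_hugepages=6500  # roughly 12.7 GB worth of 2 MB pages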

Now the console log shows "Could not find or load main class \"
and Solr is not starting up.
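
A hypothetical illustration of one common way this exact error appears (we
have not confirmed this is our case): if a line-continuation backslash is
left behind while editing the JVM options, the shell passes a literal \ to
java, which treats it as the main class name.

# hypothetical solr.in.sh fragment; single quotes keep the "\" literal
GC_TUNE='-XX:NewRatio=2 \
-XX:SurvivorRatio=3'
# java ... $GC_TUNE ... then fails with:
#   Error: Could not find or load main class \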


On Mon, 20 Jan, 2020, 7:50 AM Mehai, Lotfi, <lme...@ptfs.com.invalid> wrote:

> I had a similar issue with a large number of facets. There is no way (at
> least that I know of) you can get an acceptable response time from a search
> engine with a high number of facets.
> The way we solved the issue was to cache shallow facet data structures in
> the web services. The facet structures are refreshed periodically. We don't
> have near-real-time indexing requirements. Page response time is under
> 5s.
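>
> A very rough sketch of the pre-caching idea (the core name, facet field,
> and cache path are made up for illustration):
>
> # refresh the facet-only response on a schedule (e.g. from cron); the web
> # service serves this cached JSON instead of querying Solr per page view
> curl -s 'http://localhost:8983/solr/mycore/select?q=*:*&rows=0&facet=true&facet.field=category' \
>   -o /var/cache/facets/category.json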
>
> Here the URLs for our worst use case:
> https://www.govinfo.gov/app/collection/cfr
> https://www.govinfo.gov/app/cfrparts/month
>
> I hope that helps.
>
> Lotfi Mehai
> https://www.linkedin.com/in/lmehai/
>
>
>
>
>
> On Sun, Jan 19, 2020 at 9:05 PM Rajdeep Sahoo <rajdeepsahoo2...@gmail.com>
> wrote:
>
> > Initially we were getting a warning that the ulimit was low, i.e. 1024,
> > so we changed it to 65000 using ulimit -u 65000.
> >
> > Then the error was "Failed to reserve shared memory (error = 1)". Because
> > of this we removed
> >    -XX:+UseLargePages
> >
> > Now the console log shows "Could not find or load main class \"
> > and Solr is not starting up.
> >
> >
> >
> > On Mon, 20 Jan, 2020, 7:20 AM Walter Underwood, <wun...@wunderwood.org>
> > wrote:
> >
> > > What message do you get that means the heap space is full?
> > >
> > > Java will always use all of the heap, either as live data or
> > > not-yet-collected garbage.
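> > >
> > > (A quick way to see live data vs. garbage, assuming the JDK's jstat is
> > > on the PATH: old-gen occupancy right after a full GC approximates the
> > > live data.)
> > >
> > > jstat -gcutil <solr-pid> 1000   # heap utilization snapshot every second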
> > >
> > > wunder
> > > Walter Underwood
> > > wun...@wunderwood.org
> > > http://observer.wunderwood.org/  (my blog)
> > >
> > > > On Jan 19, 2020, at 5:47 PM, Rajdeep Sahoo <rajdeepsahoo2...@gmail.com> wrote:
> > > >
> > > > Hi,
> > > > Currently no requests or indexing are happening.
> > > > It's just start-up, and during that time the heap is getting full.
> > > > Index size is approx 1 GB.
> > > >
> > > >
> > > > On Mon, 20 Jan, 2020, 7:01 AM Walter Underwood, <wun...@wunderwood.org> wrote:
> > > >
> > > >> A new garbage collector won’t fix it, but it might help a bit.
> > > >>
> > > >> Requesting 200 facet fields and having 50-60 of them with results is a
> > > >> huge amount of work for Solr. A typical faceting implementation might
> > > >> have three to five facets. Your requests will be at least 10X to 20X
> > > >> slower.
> > > >>
> > > >> Check the CPU during one request. It should use nearly 100% of a single
> > > >> CPU. If it is a lot lower than 100%, you have another bottleneck. That
> > > >> might be insufficient heap or accessing disk during query requests (not
> > > >> enough RAM). If it is near 100%, the only thing you can do is get a
> > > >> faster CPU.
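> > > >>
> > > >> For example (this assumes Linux, and that the pgrep pattern matches
> > > >> only the Solr JVM):
> > > >>
> > > >> top -p "$(pgrep -f -n solr)"   # per-process CPU while the query runs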
> > > >>
> > > >> One other question, how frequently is the index updated?
> > > >>
> > > >> wunder
> > > >> Walter Underwood
> > > >> wun...@wunderwood.org
> > > >> http://observer.wunderwood.org/  (my blog)
> > > >>
> > > >>> On Jan 19, 2020, at 4:49 PM, Rajdeep Sahoo <rajdeepsahoo2...@gmail.com> wrote:
> > > >>>
> > > >>> Hi,
> > > >>> Still facing the same issue...
> > > >>> Anything else that we need to check?
> > > >>>
> > > >>>
> > > >>> On Mon, 20 Jan, 2020, 4:07 AM Walter Underwood, <wun...@wunderwood.org> wrote:
> > > >>>
> > > >>>> With Java 1.8, I would use the G1 garbage collector. We’ve been
> > > >>>> running that combination in prod for three years with no problems.
> > > >>>>
> > > >>>> SOLR_HEAP=8g
> > > >>>> # Use G1 GC  -- wunder 2017-01-23
> > > >>>> # Settings from https://wiki.apache.org/solr/ShawnHeisey
> > > >>>> GC_TUNE=" \
> > > >>>> -XX:+UseG1GC \
> > > >>>> -XX:+ParallelRefProcEnabled \
> > > >>>> -XX:G1HeapRegionSize=8m \
> > > >>>> -XX:MaxGCPauseMillis=200 \
> > > >>>> -XX:+UseLargePages \
> > > >>>> -XX:+AggressiveOpts \
> > > >>>> "
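> > > >>>>
> > > >>>> (These variables live in solr.in.sh. A quick check that they took
> > > >>>> effect after a restart, assuming a standard install layout:)
> > > >>>>
> > > >>>> bin/solr restart
> > > >>>> ps -ef | grep -- '-XX:+UseG1GC'   # flags should appear on the java command line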
> > > >>>>
> > > >>>> wunder
> > > >>>> Walter Underwood
> > > >>>> wun...@wunderwood.org
> > > >>>> http://observer.wunderwood.org/  (my blog)
> > > >>>>
> > > >>>>> On Jan 19, 2020, at 9:25 AM, Rajdeep Sahoo <rajdeepsahoo2...@gmail.com> wrote:
> > > >>>>>
> > > >>>>> We are using Solr 7.7. RAM size is 24 GB and the allocated heap is
> > > >>>>> 12 GB. We have completed indexing; after starting the server, the
> > > >>>>> heap space suddenly gets full.
> > > >>>>> We added the GC params below, but it is still not working. The JDK
> > > >>>>> version is 1.8.
> > > >>>>> Please find the GC params below:
> > > >>>>> -XX:NewRatio=2 \
> > > >>>>> -XX:SurvivorRatio=3 \
> > > >>>>> -XX:TargetSurvivorRatio=90 \
> > > >>>>> -XX:MaxTenuringThreshold=8 \
> > > >>>>> -XX:+UseConcMarkSweepGC \
> > > >>>>> -XX:+CMSScavengeBeforeRemark \
> > > >>>>> -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
> > > >>>>> -XX:PretenureSizeThreshold=512m \
> > > >>>>> -XX:CMSFullGCsBeforeCompaction=1 \
> > > >>>>> -XX:+UseCMSInitiatingOccupancyOnly \
> > > >>>>> -XX:CMSInitiatingOccupancyFraction=70 \
> > > >>>>> -XX:CMSMaxAbortablePrecleanTime=6000 \
> > > >>>>> -XX:+CMSParallelRemarkEnabled \
> > > >>>>> -XX:+ParallelRefProcEnabled \
> > > >>>>> -XX:+UseLargePages \
> > > >>>>> -XX:+AggressiveOpts
> > > >>>>
> > > >>>>
> > > >>
> > > >>
> > >
> > >
> >
>
