Thanks for everything, I'll try it later ;)

Greetings!!

On Wed, Oct 24, 2018 at 7:13 AM, Walter Underwood (<wun...@wunderwood.org>)
wrote:

> We handle a few thousand requests/minute with an 8 GB heap. 95th
> percentile response time is 200 ms; the median (cached) is 4 ms.
>
> An oversized heap will hurt your query performance because everything
> stops for the huge GC.
>
> RAM is still a thousand times faster than SSD, so you want a lot of RAM
> available for file system buffers managed by the OS.
>
> I recommend trying an 8 GB heap with the latest version of Java 8 and the
> G1 collector.
>
> We have this in our solr.in.sh:
>
> SOLR_HEAP=8g
> # Use G1 GC  -- wunder 2017-01-23
> # Settings from https://wiki.apache.org/solr/ShawnHeisey
> GC_TUNE=" \
> -XX:+UseG1GC \
> -XX:+ParallelRefProcEnabled \
> -XX:G1HeapRegionSize=8m \
> -XX:MaxGCPauseMillis=200 \
> -XX:+UseLargePages \
> -XX:+AggressiveOpts \
> "
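> As a rough sanity check (a sketch of mine, assuming GC logging is enabled
> on Java 8 with -Xloggc:solr_gc.log and -XX:+PrintGCDetails; the exact log
> line format below is an assumption, so adjust the pattern to what your
> JVM actually writes), you can average the reported pause times to see
> whether the MaxGCPauseMillis=200 target is actually being met:

```shell
# Sketch: average the G1 pause times reported in a Java 8 GC log.
# The sample lines below stand in for a real solr_gc.log; the
# "(young) N.NNNN secs]" format is an assumption about the JVM output.
cat > /tmp/solr_gc_sample.log <<'EOF'
[GC pause (G1 Evacuation Pause) (young) 0.0123 secs]
[GC pause (G1 Evacuation Pause) (young) 0.0201 secs]
EOF
grep -oE '[0-9]+\.[0-9]+ secs' /tmp/solr_gc_sample.log \
  | awk '{sum += $1; n++} END { printf "avg pause: %.4f secs over %d GCs\n", sum/n, n }'
```

> If the average (or worse, the maximum) sits well above 0.2 secs, the
> pause target is being missed and the heap is probably too large.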
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
> > On Oct 23, 2018, at 9:51 PM, Daniel Carrasco <d.carra...@i2tic.com>
> wrote:
> >
> > Hello,
> >
> > I've set that heap size because Solr receives a lot of queries every
> > second and I want to cache as much as possible. Also, I'm not sure about
> > the number of documents in the collection, but the webpage has a lot of
> > products.
> >
> > Storing the index data in RAM was just a figure of speech. The data is
> > stored on SSD disks with XFS (faster than EXT4).
> >
> > I'll take a look at the links tomorrow at work.
> >
> > Thanks!!
> > Greetings!!
> >
> >
> > On Tue, Oct 23, 2018 at 11:48 PM, Shawn Heisey <apa...@elyograg.org>
> > wrote:
> >
> >> On 10/23/2018 7:15 AM, Daniel Carrasco wrote:
> >>> Hello,
> >>>
> >>> Thanks for your response.
> >>>
> >>> We've already thought about that and doubled the instances. Right now
> >>> every Solr instance has 60 GB of RAM (40 GB configured for Solr) and a
> >>> 16-core CPU. The entire data set could be stored in RAM without filling
> >>> it (talking about raw data, of course, not processed data).
> >>
> >> Why are you making the heap so large?  I've set up servers that can
> >> handle hundreds of millions of Solr documents in a much smaller heap.  A
> >> 40GB heap would be something you might do if you're handling billions of
> >> documents on one server.
> >>
> >> When you say the entire data can be stored in RAM ... are you counting
> >> that 40GB you gave to Solr?  Because you can't count that -- that's for
> >> Solr, NOT the index data.
> >>
> >> The heap size should never be dictated by the amount of memory in the
> >> server.  It should be made as large as it needs to be for the job, and
> >> no larger.
> >>
> >> https://wiki.apache.org/solr/SolrPerformanceProblems#RAM
> >>
> >>> About the usage: I've checked the RAM and CPU usage, and they are not
> >>> fully used.
> >>
> >> What exactly are you looking at?  I've had people swear that they can't
> >> see a problem with their systems when Solr is REALLY struggling to keep
> >> up with what it has been asked to do.
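> >> One concrete thing to check (a sketch, not part of the original
> >> message): on Linux, how much RAM the OS is actually using as page
> >> cache, since that cache, not Solr's heap, is what holds the index
> >> files:

```shell
# Linux-only sketch: compare total RAM with the OS page cache.
# /proc/meminfo's "Cached" line is the file-system cache that serves
# Solr's index reads; a large value here is what you want to see.
grep -E '^(MemTotal|Cached):' /proc/meminfo
```

> >> If "Cached" is small because a 40GB heap crowded it out, index reads
> >> go to disk even though the machine looks like it has free resources.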
> >>
> >> Further down on the page I linked above is a section about asking for
> >> help.  If you can provide the screenshot it mentions there, that would
> >> be helpful.  Here's a direct link to that section:
> >>
> >>
> >> https://wiki.apache.org/solr/SolrPerformanceProblems#Asking_for_help_on_a_memory.2Fperformance_issue
> >>
> >> Thanks,
> >> Shawn
> >>
> >>
>
>

-- 
_________________________________________

      Daniel Carrasco Marín
      Ingeniería para la Innovación i2TIC, S.L.
      Tlf:  +34 911 12 32 84 Ext: 223
      www.i2tic.com
_________________________________________
