For the last few days I have been trying to correlate the timeouts with GC.
I noticed in the GC logs that a full GC takes a long time once in a while.
Does this mean that the JVM memory is set too high, or too low?


 [GC 4730643K->3552794K(4890112K), 0.0433146 secs]
1973853.751: [Full GC 3552794K->2926402K(4635136K), 0.3123954 secs]
1973864.170: [GC 4127554K->2972129K(4644864K), 0.0418248 secs]
1973873.341: [GC 4185569K->2990123K(4640256K), 0.0451723 secs]
1973882.452: [GC 4201770K->2999178K(4645888K), 0.0611839 secs]
1973890.684: [GC 4220298K->3010751K(4646400K), 0.0302890 secs]
1973900.539: [GC 4229514K->3015049K(4646912K), 0.0470857 secs]
1973911.179: [GC 4237193K->3040837K(4646912K), 0.0373900 secs]
1973920.822: [GC 4262981K->3072045K(4655104K), 0.0450480 secs]
1973927.136: [GC 4307501K->3129835K(4635648K), 0.0392559 secs]
1973933.057: [GC 4363058K->3178923K(4647936K), 0.0426612 secs]
1973940.981: [GC 4405163K->3210677K(4648960K), 0.0557622 secs]
1973946.680: [GC 4436917K->3239408K(4656128K), 0.0430889 secs]
1973953.560: [GC 4474277K->3300411K(4641280K), 0.0423129 secs]
1973960.674: [GC 4536894K->3371225K(4630016K), 0.0560341 secs]
1973960.731: [Full GC 3371225K->3339436K(5086208K), 15.5285889 secs]
1973990.516: [GC 4548268K->3405111K(5096448K), 0.0657788 secs]
1973998.191: [GC 4613934K->3527257K(5086208K), 0.1304232 secs]
1974006.505: [GC 4723801K->3597899K(5132800K), 0.0899599 secs]
1974014.748: [GC 4793955K->3654280K(5163008K), 0.0989430 secs]
1974025.349: [GC 4880823K->3672457K(5182464K), 0.0683296 secs]
1974037.517: [GC 4899721K->3681560K(5234688K), 0.1028356 secs]
1974050.066: [GC 4938520K->3718901K(5256192K), 0.0796073 secs]
1974061.466: [GC 4974356K->3726357K(5308928K), 0.1324846 secs]
1974071.726: [GC 5003687K->3757516K(5336064K), 0.0734227 secs]
1974081.917: [GC 5036492K->3777662K(5387264K), 0.1475958 secs]
1974091.853: [GC 5074558K->3800799K(5421056K), 0.0799311 secs]
1974101.882: [GC 5097363K->3846378K(5434880K), 0.3011178 secs]
1974109.234: [GC 5121936K->3930457K(5478912K), 0.0956342 secs]
1974116.082: [GC 5206361K->3974011K(5215744K), 0.1967284 secs]
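The 15.5-second pause in the second [Full GC ...] line above is a
stop-the-world collection: every request in flight stalls for that long, and
one such pause combined with ordinary query time (or two pauses close
together) can push requests past a 30-second client timeout. Before deciding
whether the heap is too big or too small, it usually helps to turn on more
detailed GC logging and, if the JVM is still on the stock collector, try
CMS-style settings. This is a rough sketch only, assuming an Oracle JDK 7/8
and a Solr 5.x install started via bin/solr (GC_LOG_OPTS and GC_TUNE are the
variables that script reads from solr.in.sh); the log path and the occupancy
number are placeholders, not tuned recommendations:

  # solr.in.sh: ask the JVM for cause, pauses, and total stopped time per GC.
  # Drop the -Xloggc part if your start script already points the GC log
  # somewhere; the path here is only a placeholder.
  GC_LOG_OPTS="-verbose:gc -Xloggc:/var/solr/logs/solr_gc.log \
    -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCCause \
    -XX:+PrintGCApplicationStoppedTime"

  # solr.in.sh: CMS settings in the spirit of the Solr 5.x defaults; the aim
  # is to start concurrent collections early enough that the JVM never falls
  # back to a single-threaded stop-the-world full GC like the one above.
  GC_TUNE="-XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
    -XX:CMSInitiatingOccupancyFraction=50 -XX:+UseCMSInitiatingOccupancyOnly \
    -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled"

With the extra logging in place, the cause printed next to each Full GC
(promotion failure, allocation failure, explicit System.gc(), and so on)
answers the "too high or too low" question far more directly than the
summary lines above.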

Thanks
Jay

On Mon, Aug 3, 2015 at 1:53 PM, Bill Bell <billnb...@gmail.com> wrote:

> Yeah, a separate collection by month or year is good and can really help in this case.
>
> Bill Bell
> Sent from mobile
>
>
> > On Aug 2, 2015, at 5:29 PM, Jay Potharaju <jspothar...@gmail.com> wrote:
> >
> > Shawn,
> > Thanks for the feedback. I agree that increasing the timeout might
> > alleviate the timeout issue. The main problem with increasing the timeout
> > is the detrimental effect it will have on the user experience, so I
> > can't increase it.
> > I have looked at the queries that threw errors, but the next time I try
> > them everything works fine. Not sure how to reproduce the error.
> > My concern with increasing the memory to 32GB is what happens when the
> > index size grows over the next few months.
> > One of the other solutions I have been thinking about is to rebuild the
> > index (weekly) into a new collection and switch over to it. Are there any
> > good references for doing that?
> > Thanks
> > Jay
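For the weekly-rebuild idea, the usual SolrCloud pattern is to index into a
brand-new collection and then flip a collection alias to it, so searches
never see a half-built index and the old collection stays available until
the new one is verified. A rough sketch with the standard Collections API;
the host, collection names, config name, and shard/replica counts below are
placeholders, and this assumes SolrCloud and that the application queries
the alias rather than a concrete collection name:

  # Create next week's collection alongside the live one.
  curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=items_20150810&numShards=1&replicationFactor=2&collection.configName=items_conf"

  # ...run the full DIH import into items_20150810 and verify counts...

  # Atomically point (or repoint) the alias the application queries.
  curl "http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=items&collections=items_20150810"

  # Once traffic is on the new collection, drop last week's build.
  curl "http://localhost:8983/solr/admin/collections?action=DELETE&name=items_20150803"

The same alias mechanism covers Bill's suggestion above of splitting by month
or year: a query alias can list several collections at once (for example
collections=items_201507,items_201508), so old partitions can be rebuilt or
dropped independently of the current one.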
> >
> >> On Sun, Aug 2, 2015 at 10:19 AM, Shawn Heisey <apa...@elyograg.org> wrote:
> >>
> >>> On 8/2/2015 8:29 AM, Jay Potharaju wrote:
> >>> The document contains around 30 fields, and almost 15 of them have
> >>> stored set to true. These stored fields are queried and updated all the
> >>> time. You will notice that deleted documents are almost 30% of the
> >>> docs, and that percentage has stayed there and has not come down.
> >>> I did try optimize, but that was disruptive as it caused search errors.
> >>> I have been playing with the merge factor to see if that helps with
> >>> deleted documents or not. It is currently set to 5.
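On the deleted-documents point just above: a full optimize is not the only
way to claw that space back. An expungeDeletes commit asks Lucene to rewrite
only the segments whose deleted-document count is above a threshold, which is
normally much less disruptive than optimize while still pulling the 30%
figure down. A minimal sketch, assuming the default update handler and a
placeholder host/core name; like optimize it still costs extra I/O, so run
it off-peak:

  # Merge away segments that are mostly deletes; lighter than a full optimize.
  curl "http://localhost:8983/solr/mycore/update?commit=true&expungeDeletes=true"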
> >>>
> >>> The server has 24 GB of memory, of which consumption is normally around
> >>> 23 GB, and the JVM is set to 6 GB. And I have noticed that the available
> >>> memory on the server drops to 100 MB at times during the day.
> >>> All the updates are run through DIH.
> >>
> >> Using all available memory is completely normal operation for ANY
> >> operating system.  If you hold up Windows as an example of one that
> >> doesn't ... it lies to you about "available" memory.  All modern
> >> operating systems will utilize memory that is not explicitly allocated
> >> for the OS disk cache.
> >>
> >> The disk cache will instantly give up any of the memory it is using for
> >> programs that request it.  Linux doesn't try to hide the disk cache from
> >> you, but older versions of Windows do.  In the newer versions of Windows
> >> that have the Resource Monitor, you can go there to see the actual
> >> memory usage including the cache.
> >>
> >>> Every day, at least once, I see the following error, which results in
> >>> search errors on the front end of the site.
> >>>
> >>> ERROR org.apache.solr.servlet.SolrDispatchFilter -
> >>> null:org.eclipse.jetty.io.EofException
> >>>
> >>> From what I have read these are mainly due to timeouts; my timeout is
> >>> set to 30 seconds and I can't set it to a higher number. I was thinking
> >>> that maybe, due to high memory usage, it sometimes leads to bad
> >>> performance/errors.
> >>
> >> Although this error can be caused by timeouts, it has a specific
> >> meaning.  It means that the client disconnected before Solr responded to
> >> the request, so when Solr tried to respond (through Jetty), it found a
> >> closed TCP connection.
> >>
> >> Client timeouts need to either be completely removed, or set to a value
> >> much longer than any request will take.  Five minutes is a good starting
> >> value.
> >>
> >> If all your client timeouts are set to 30 seconds and you are seeing
> >> EofExceptions, that means that your requests are taking longer than 30
> >> seconds, and you likely have some performance issues.  It's also
> >> possible that some of your client timeouts are set a lot shorter than 30
> >> seconds.
> >>
> >>> My objective is to stop the errors; adding more memory to the server is
> >>> not a good scaling strategy. That is why I was thinking maybe there is
> >>> an issue with the way things are set up and it needs to be revisited.
> >>
> >> You're right that adding more memory to the servers is not a good
> >> scaling strategy for the general case ... but in this situation, I think
> >> it might be prudent.  For your index and heap sizes, I would want the
> >> company to pay for at least 32GB of RAM.
> >>
> >> Having said that ... I've seen Solr installs work well with a LOT less
> >> memory than the ideal.  I don't know that adding more memory is
> >> necessary, unless your system (CPU, storage, and memory speeds) is
> >> particularly slow.  Based on your document count and index size, your
> >> documents are quite small, so I think your memory size is probably good
> >> -- if the CPU, memory bus, and storage are very fast.  If one or more of
> >> those subsystems aren't fast, then make up the difference with lots of
> >> memory.
> >>
> >> Some light reading, where you will learn why I think 32GB is an ideal
> >> memory size for your system:
> >>
> >> https://wiki.apache.org/solr/SolrPerformanceProblems
> >>
> >> It is possible that your 6GB heap is not quite big enough for good
> >> performance, or that your GC is not well-tuned.  These topics are also
> >> discussed on that wiki page.  If you increase your heap size, then the
> >> likelihood of needing more memory in the system becomes greater, because
> >> there will be less memory available for the disk cache.
> >>
> >> Thanks,
> >> Shawn
> >
> >
> > --
> > Thanks
> > Jay Potharaju
>



-- 
Thanks
Jay Potharaju
