Hi, one of the problems is now alleviated.

The number of lines with "can't identify protocol" in the "lsof" output is
now much reduced. Earlier it kept increasing up to "ulimit -n", eventually
causing the "Too many open files" error, but now it stays at a much smaller
number. This happened after I changed maxIdleTime from 10s to 50s (50000 ms)
in jetty.xml:

...
<Set name="maxIdleTime">50000</Set>
...
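
In case it helps anyone hitting the same thing, this is roughly how I am
watching the count of suspect descriptors (assuming Solr was started via
start.jar as above, so the pgrep pattern matches; adjust it for your setup):

lsof -p $(pgrep -f start.jar) | grep -c "can't identify protocol"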



But my original problem of heavy swap usage is still unresolved. If I find a
solution or work-around, I'll post it here. In the meantime, if someone knows
the reason, or is interested in helping me find it, please reply.
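
For the record, this is how I am watching the swap usage (free and vmstat are
standard on Linux; the si/so columns in vmstat show pages swapped in and out
per interval):

free -m
vmstat 5

(Hoss's point below about leaving RAM free for the filesystem cache may be
relevant: with a ~31G index on a 16G box, a 10g heap leaves at most ~6G for
the OS page cache, versus roughly 12G with the old 4g heap.)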

Thanks



On Sun, Dec 4, 2011 at 3:28 PM, Samarendra Pratap <samarz...@gmail.com> wrote:

> Hi Chris,
> Thanks for your reply and sorry for the delay. Please find my replies below
> in the mail.
>
> On Sat, Dec 3, 2011 at 5:56 AM, Chris Hostetter 
> <hossman_luc...@fucit.org> wrote:
>
>>
>> : Until 3 days ago, we were running a Solr 3.4 instance with the following
>> : java command line options:
>> : java -server -Xms2048m -Xmx4096m -Dsolr.solr.home=etc -jar start.jar
>> :
>> : Then we increased the memory with the following options and restarted
>> : the server:
>> : java -server -Xms4096m -Xmx10g -Dsolr.solr.home=etc -jar start.jar
>>        ...
>> : Since we restarted Solr, the memory usage of the application has been
>> : continuously increasing. The swap usage goes from almost zero to as high
>> : as 4GB every 6-8 hours. We kept restarting Solr to push it down to ~zero
>> : but the same memory usage trend kept repeating itself.
>>
>> do you really mean "swap" in that sentence, or do you mean the amount of
>> memory your OS says java is using?  You said you have 16GB total
>> physical ram, how big is the index itself? do you have any other processes
>> running on that machine?  (You should ideally leave at least enough ram
>> free to let the OS/filesystem cache the index in RAM)
>>
> Yes, by "swap" I mean "swap", which we can see with "free -m" on Linux and
> in many other ways. So it is not the memory used by java.
> The index size is around 31G.
> We have this machine dedicated to Solr, so no other significant processes
> run here, except the incremental indexing script. I didn't think about the
> filesystem cache in RAM earlier, but since we have 16G of RAM, in my opinion
> that should be enough.
>
>> Since you've not only changed the Xmx (max heap size) param but also the
>> Xms param (min heap size) to 4GB, it doesn't seem out of the ordinary
>> at all for the memory usage to jump up to 4GB quickly.  If the JVM did
>> exactly what the docs say it should, then on startup it would
>> *immediately* allocate 4GB of RAM, but I think in practice it allocates
>> as needed, and doesn't do any garbage collection while the memory used is
>> still below the "Xms" value.
>>
>> : Then finally I reverted the least expected change, the command line
>> : memory options, back to min 2g, max 4g, and I was surprised to see that
>> : the problem vanished.
>> : java -server -Xms2g -Xmx4g -Dsolr.solr.home=etc -jar start.jar
>> :
>> : Is this a memory leak or my lack of understanding of java/linux memory
>> : allocation?
>>
>> I think you're just misunderstanding the allocation ... if you tell java
>> to use at least 4GB, it's going to use at least 4GB w/o blinking.
>>
> I accept I wrote the confusing word "min" for -Xms, but I promise I really
> know its meaning. :-)
>
>> did you try "-Xms2g -Xmx10g" ?
>>
>> (again: don't set Xmx any higher than you actually have the RAM to
>> support, given the filesystem cache and any other stuff you have running,
>> but you can increase mx w/o increasing ms if you are just worried about
>> how fast the heap grows on startup ... not sure why that would be
>> worrisome though)
>>
> As I wrote above, I really meant "swap"; I am not really concerned about
> the heap size at startup.
>
>
>>
>> -Hoss
>>
>
> My concern is: if a single machine was able to serve n1+n2 queries earlier
> with -Xms2g -Xmx4g, why is the same machine not able to serve n2 queries
> with -Xms4g -Xmx10g?
>
> In fact I tried other combinations as well (2g-6g, 1g-6g, 2g-10g), but none
> of them replicated the issue.
>
> Since yesterday I have been seeing another issue on the same machine: a
> "Too many open files" error in the log, which is breaking the incremental
> indexing.
>
> A lot of lines in the lsof output were like the following -
> java     1232 solr   52u     sock                0,5            1805813279
> can't identify protocol
> java     1232 solr   53u     sock                0,5            1805813282
> can't identify protocol
> java     1232 solr   54u     sock                0,5            1805813283
> can't identify protocol
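>
> (A quick way to watch how close the process gets to its descriptor limit,
> using the pid 1232 from the output above; the /proc paths are standard on
> Linux:)
>
> grep 'Max open files' /proc/1232/limits
> ls /proc/1232/fd | wc -l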
>
> I searched for "can't identify protocol" and my case seemed related to the
> bug http://bugs.sun.com/view_bug.do?bug_id=6745052, but my java version
> ("1.6.0_22") does not match the one in the bug description.
>
> I am not sure whether this problem and the memory problem are related; I
> did not check lsof earlier. Could this be the cause of a memory leak?
>
> --
> Regards,
> Samar
>



-- 
Regards,
Samar
