Rahul's comments were spot on. You can gain more confidence that this
is normal by attaching a memory reporting program (jconsole is one):
you'll see the memory grow for quite a while, then garbage collection
kicks in and you'll see it drop in a sawtooth pattern.
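If you'd rather sample the number programmatically than watch jconsole, the same heap figure jconsole plots is exposed through the standard MemoryMXBean. A minimal sketch (class and method names are mine, not from the thread):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Report current heap usage; sampling this in a loop over time shows
// the sawtooth pattern described above (steady growth, then a drop at GC).
public class HeapSample {
    public static long usedHeapBytes() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        return heap.getUsed();
    }

    public static void main(String[] args) {
        System.out.printf("used heap: %d MB%n", usedHeapBytes() / (1024 * 1024));
    }
}
```

Logging this once a second while indexing is running is usually enough to tell a healthy sawtooth from a heap that never comes back down after GC.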

Best,
Erick

On Tue, Dec 15, 2015 at 8:19 AM, zhenglingyun <konghuaru...@163.com> wrote:
> Thank you very much.
> I will try reducing the heap memory and check whether the memory still
> keeps increasing.
>
>> On 15 Dec 2015, at 19:37, Rahul Ramesh <rr.ii...@gmail.com> wrote:
>>
>> You should actually decrease the Solr heap size. Let me explain a bit.
>>
>> Solr requires relatively little heap memory for its own operation, and more
>> memory for holding data in main memory. This is because Solr uses mmap for
>> the index files.
>> Please check the link
>> http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
>> to understand how Solr operates on files.
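The mmap point can be seen in miniature with Java's FileChannel.map: the mapped bytes are served from the OS page cache, not from the Java heap, which is why a large index wants free OS memory rather than a large -Xmx. A small sketch (file and class names are illustrative):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Reading through a memory-mapped file touches the OS page cache,
// not the Java heap -- the same mechanism Lucene's MMapDirectory uses.
public class MmapDemo {
    public static byte firstByte(Path p) throws IOException {
        try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            return buf.get(0); // a page-cache read, not a heap allocation
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("index", ".bin");
        Files.write(p, new byte[] {42});
        System.out.println(firstByte(p)); // prints 42
        Files.delete(p);
    }
}
```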
>>
>> Solr has the typical garbage collection problem once you set the heap size
>> to a large value: it will have indeterminate pauses due to GC. The amount of
>> heap memory required is difficult to predict. The way we tuned this
>> parameter was to set it to a low value and increase it by 1 GB whenever an
>> OOM was thrown.
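In concrete terms, that tuning loop amounts to adjusting the -Xms/-Xmx pair between restarts (the values below are illustrative, not recommendations):

```shell
# Start with a deliberately small, fixed-size heap (-Xms == -Xmx avoids resizing):
SOLR_HEAP="-Xms2g -Xmx2g"

# If the logs show java.lang.OutOfMemoryError, step up by 1 GB and restart:
# SOLR_HEAP="-Xms3g -Xmx3g"
```

Keeping -Xms equal to -Xmx, as in the command line quoted below, is the common convention so the JVM never spends time growing or shrinking the heap.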
>>
>> Please see this page on the problems of a large Java heap:
>>
>> http://wiki.apache.org/solr/SolrPerformanceProblems#Java_Heap
>>
>>
>> Just for your reference: in our production setup we have around 60 GB of
>> data per node, spread across 25 collections. We have configured 8 GB as the
>> heap and leave the rest of the memory for the OS to manage. We do around
>> 1000 (search + insert) operations per second on the data.
>>
>> I hope this helps.
>>
>> Regards,
>> Rahul
>>
>>
>>
>> On Tue, Dec 15, 2015 at 4:33 PM, zhenglingyun <konghuaru...@163.com> wrote:
>>
>>> Hi, list
>>>
>>> I’m new to Solr. Recently I encountered a “memory leak” problem with
>>> SolrCloud.
>>>
>>> I have two 64 GB servers running a SolrCloud cluster. In the cluster I
>>> have one collection with about 400k docs. The index size of the
>>> collection is about 500 MB. Memory for Solr is 16 GB.
>>>
>>> Following is the output of “ps aux | grep solr”:
>>>
>>> /usr/java/jdk1.7.0_67-cloudera/bin/java
>>> -Djava.util.logging.config.file=/var/lib/solr/tomcat-deployment/conf/logging.properties
>>> -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
>>> -Djava.net.preferIPv4Stack=true -Dsolr.hdfs.blockcache.enabled=true
>>> -Dsolr.hdfs.blockcache.direct.memory.allocation=true
>>> -Dsolr.hdfs.blockcache.blocksperbank=16384
>>> -Dsolr.hdfs.blockcache.slab.count=1 -Xms16608395264 -Xmx16608395264
>>> -XX:MaxDirectMemorySize=21590179840 -XX:+UseParNewGC
>>> -XX:+UseConcMarkSweepGC -XX:-CMSConcurrentMTEnabled
>>> -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled
>>> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintHeapAtGC
>>> -Xloggc:/var/log/solr/gc.log
>>> -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -DzkHost=
>>> bjzw-datacenter-hadoop-160.d.yourmall.cc:2181,
>>> bjzw-datacenter-hadoop-163.d.yourmall.cc:2181,
>>> bjzw-datacenter-hadoop-164.d.yourmall.cc:2181/solr
>>> -Dsolr.solrxml.location=zookeeper -Dsolr.hdfs.home=hdfs://datacenter/solr
>>> -Dsolr.hdfs.confdir=/var/run/cloudera-scm-agent/process/6288-solr-SOLR_SERVER/hadoop-conf
>>> -Dsolr.authentication.simple.anonymous.allowed=true
>>> -Dsolr.security.proxyuser.hue.hosts=*
>>> -Dsolr.security.proxyuser.hue.groups=* -Dhost=
>>> bjzw-datacenter-solr-15.d.yourmall.cc -Djetty.port=8983 -Dsolr.host=
>>> bjzw-datacenter-solr-15.d.yourmall.cc -Dsolr.port=8983
>>> -Dlog4j.configuration=file:///var/run/cloudera-scm-agent/process/6288-solr-SOLR_SERVER/log4j.properties
>>> -Dsolr.log=/var/log/solr -Dsolr.admin.port=8984
>>> -Dsolr.max.connector.thread=10000 -Dsolr.solr.home=/var/lib/solr
>>> -Djava.net.preferIPv4Stack=true -Dsolr.hdfs.blockcache.enabled=true
>>> -Dsolr.hdfs.blockcache.direct.memory.allocation=true
>>> -Dsolr.hdfs.blockcache.blocksperbank=16384
>>> -Dsolr.hdfs.blockcache.slab.count=1 -Xms16608395264 -Xmx16608395264
>>> -XX:MaxDirectMemorySize=21590179840 -XX:+UseParNewGC
>>> -XX:+UseConcMarkSweepGC -XX:-CMSConcurrentMTEnabled
>>> -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled
>>> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintHeapAtGC
>>> -Xloggc:/var/log/solr/gc.log
>>> -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -DzkHost=
>>> bjzw-datacenter-hadoop-160.d.yourmall.cc:2181,
>>> bjzw-datacenter-hadoop-163.d.yourmall.cc:2181,
>>> bjzw-datacenter-hadoop-164.d.yourmall.cc:2181/solr
>>> -Dsolr.solrxml.location=zookeeper -Dsolr.hdfs.home=hdfs://datacenter/solr
>>> -Dsolr.hdfs.confdir=/var/run/cloudera-scm-agent/process/6288-solr-SOLR_SERVER/hadoop-conf
>>> -Dsolr.authentication.simple.anonymous.allowed=true
>>> -Dsolr.security.proxyuser.hue.hosts=*
>>> -Dsolr.security.proxyuser.hue.groups=* -Dhost=
>>> bjzw-datacenter-solr-15.d.yourmall.cc -Djetty.port=8983 -Dsolr.host=
>>> bjzw-datacenter-solr-15.d.yourmall.cc -Dsolr.port=8983
>>> -Dlog4j.configuration=file:///var/run/cloudera-scm-agent/process/6288-solr-SOLR_SERVER/log4j.properties
>>> -Dsolr.log=/var/log/solr -Dsolr.admin.port=8984
>>> -Dsolr.max.connector.thread=10000 -Dsolr.solr.home=/var/lib/solr
>>> -Djava.endorsed.dirs=/usr/lib/bigtop-tomcat/endorsed -classpath
>>> /usr/lib/bigtop-tomcat/bin/bootstrap.jar
>>> -Dcatalina.base=/var/lib/solr/tomcat-deployment
>>> -Dcatalina.home=/usr/lib/bigtop-tomcat -Djava.io.tmpdir=/var/lib/solr/
>>> org.apache.catalina.startup.Bootstrap start
>>>
>>>
>>> Solr version is 4.4.0-cdh5.3.0
>>> JDK version is 1.7.0_67
>>>
>>> Soft commit time is 1.5 s, and we have a real-time indexing/partial-update
>>> rate of about 100 docs per second.
>>>
>>> When freshly started, Solr uses about 500 MB of memory (the figure shown
>>> in the Solr UI panel).
>>> After running for several days, Solr hits long GC pauses and stops
>>> responding to user queries.
>>>
>>> While Solr is running, its memory use keeps increasing until it reaches
>>> some large value, then drops to a low level (because of GC), then climbs
>>> to an even larger value, drops again, and so on, each peak higher than
>>> the last, until Solr stops responding and I have to restart it.
>>>
>>>
>>> I don’t know how to solve this problem. Can you give me some advice?
>>>
>>> Thanks.
>>>
>>>
>>>
>>>
>
>
