Rohit,
Why do you think it should free it during idle time? Let us know what numbers you are actually watching. Check this, it can be interesting: blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
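For instance, a tiny sketch along these lines (the class name is arbitrary, just for illustration) prints the JVM's own heap and non-heap figures; if those stay small while Task Manager shows ~45GB for the Tomcat process, the difference is most likely the OS page cache and memory-mapped index files rather than the Java heap:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Prints the JVM's own view of heap and non-heap usage so it can be
// compared with what Task Manager / top report for the Tomcat process.
// If the heap stays near -Xmx while the process size is far larger,
// the difference is usually mapped index files, not a heap leak.
public class HeapVsProcess {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
        System.out.printf("heap:     used=%dMB committed=%dMB max=%dMB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
        System.out.printf("non-heap: used=%dMB committed=%dMB%n",
                nonHeap.getUsed() >> 20, nonHeap.getCommitted() >> 20);
    }
}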
On 04.09.2012 0:45, "Markus Jelsma" <markus.jel...@openindex.io> wrote:

> You've got more than 45GB of physical RAM in your machine? I assume it's
> actually virtual memory you're seeing, which is not a problem, even on
> Windows. It's not uncommon for resident memory to be higher than the
> allocated heap space and it's normal to have a high virtual memory address
> space if you have a large index.
>
> -----Original message-----
> > From: Rohit <ro...@simplify360.com>
> > Sent: Tue 04-Sep-2012 00:33
> > To: solr-user@lucene.apache.org
> > Subject: RE: Solr Not releasing memory
> >
> > I am talking of physical memory here; we start with -Xms of 2gb but very
> > soon it goes as high as 45Gb. The memory never comes down, even when not
> > a single user is using the system.
> >
> > Regards,
> > Rohit
> >
> > -----Original Message-----
> > From: Markus Jelsma [mailto:markus.jel...@openindex.io]
> > Sent: 03 September 2012 14:58
> > To: solr-user@lucene.apache.org
> > Subject: RE: Solr Not releasing memory
> >
> > It would be helpful to know which memory isn't being released. Is it
> > virtual or physical or shared memory? Is it the heap space?
> >
> > -----Original message-----
> > > From: Mikhail Khludnev <mkhlud...@griddynamics.com>
> > > Sent: Mon 03-Sep-2012 16:52
> > > To: solr-user@lucene.apache.org
> > > Subject: RE: Solr Not releasing memory
> > >
> > > Rohit,
> > > Which collector do you use? Releasing physical RAM is possible with
> > > compacting collectors like serial, parallel and maybe G1, and not
> > > possible with CMS. The more important thing is that releasing memory is
> > > a really suspicious and even odd requirement. Please provide more
> > > details about your JVM and overall challenge.
> > >
> > > On 03.09.2012 15:03, "Rohit" <ro...@simplify360.com> wrote:
> > > >
> > > > I am currently using StandardDirectoryFactory; would switching
> > > > directory factory have any impact on the indexes?
> > > >
> > > > Regards,
> > > > Rohit
> > > >
> > > > -----Original Message-----
> > > > From: Claudio Ranieri [mailto:claudio.rani...@estadao.com]
> > > > Sent: 03 September 2012 10:03
> > > > To: solr-user@lucene.apache.org
> > > > Subject: RES: Solr Not releasing memory
> > > >
> > > > Are you using MMapDirectoryFactory?
> > > > I had a swap problem on Linux with a big index when I used
> > > > MMapDirectoryFactory.
> > > > You can try using solr.NIOFSDirectoryFactory.
> > > >
> > > > -----Original Message-----
> > > > From: Lance Norskog [mailto:goks...@gmail.com]
> > > > Sent: Sunday, 2 September 2012 22:00
> > > > To: solr-user@lucene.apache.org
> > > > Subject: Re: Solr Not releasing memory
> > > >
> > > > 1) I believe Java 1.7 releases memory back to the OS.
> > > > 2) All of the Javas I've used on Windows do this.
> > > >
> > > > Is the physical memory use a problem? Does it push out all other
> > > > programs?
> > > > Or is it just that the Java process appears larger? This explains the
> > > > latter:
> > > > http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
> > > >
> > > > ----- Original Message -----
> > > > | From: "Rohit" <ro...@simplify360.com>
> > > > | To: solr-user@lucene.apache.org
> > > > | Sent: Sunday, September 2, 2012 1:22:14 AM
> > > > | Subject: Solr Not releasing memory
> > > > |
> > > > | Hi,
> > > > |
> > > > | We are running solr3.5 using tomcat 6.26 on a Windows Enterprise
> > > > | RC2 server, and our index size is pretty large.
> > > > |
> > > > | We have noticed that once tomcat starts using/reserving RAM it
> > > > | never releases it, even when there is not a single user on the
> > > > | system. I have tried forced garbage collection, but that doesn't
> > > > | seem to help either.
> > > > |
> > > > | Regards,
> > > > |
> > > > | Rohit
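On the forced garbage collection mentioned in the original question, a rough sketch (allocation size and class name are arbitrary, for illustration only) of why calling System.gc() rarely shrinks the figure the OS reports: it can only reclaim Java heap, and the committed heap plus any memory-mapped index files still count towards the process size:

// Rough sketch: a forced GC only reclaims Java heap. Even after the
// collection, the committed heap (and any memory-mapped index files)
// still count towards the process size the OS shows.
public class ForcedGcSketch {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        byte[][] junk = new byte[64][];
        for (int i = 0; i < junk.length; i++) {
            junk[i] = new byte[1 << 20];          // fill ~64MB of heap
        }
        System.out.printf("used before gc: %dMB, committed: %dMB%n",
                (rt.totalMemory() - rt.freeMemory()) >> 20, rt.totalMemory() >> 20);
        junk = null;                              // drop the references
        System.gc();                              // request a collection
        Thread.sleep(1000);                       // give the collector a moment
        System.out.printf("used after gc:  %dMB, committed: %dMB%n",
                (rt.totalMemory() - rt.freeMemory()) >> 20, rt.totalMemory() >> 20);
        // "used" drops, but "committed" (and the OS-level process size)
        // usually stays where it was, especially with CMS.
    }
}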