I should also add that reducing the cache and autowarm sizes (or not using
the caches at all) drastically reduces memory consumption when a new searcher
is being prepared after a commit; memory usage spikes at these events.
Again, use a monitoring tool to get more information on your specific scenario.
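
For illustration, smaller caches with autowarming disabled might look like the
fragment below in solrconfig.xml. The sizes are arbitrary examples, not
recommendations; tune them against your monitoring data and check your Solr
version's documentation for the exact attributes.

```xml
<!-- Illustrative solrconfig.xml fragment: small caches, no autowarming. -->
<!-- size/initialSize values are examples only, not recommendations.     -->
<filterCache
    class="solr.FastLRUCache"
    size="512"
    initialSize="512"
    autowarmCount="0"/>

<!-- With a low hit ratio, these two may not be worth their RAM/CPU cost: -->
<queryResultCache
    class="solr.LRUCache"
    size="128"
    initialSize="128"
    autowarmCount="0"/>

<documentCache
    class="solr.LRUCache"
    size="128"
    initialSize="128"
    autowarmCount="0"/>
```

Setting autowarmCount="0" means a new searcher starts cold, which trades some
query latency right after a commit for a much smaller memory spike.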

> Bing Li,
> 
> One should be conservative when setting -Xmx. Also, just setting -Xmx might
> not do the trick at all, because the garbage collector might also be the
> issue here. Configure the JVM to output garbage-collector debug logs and
> monitor the heap usage (especially the tenured generation) with a good
> tool like JConsole.
> 
> You might also want to take a look at your cache settings and autowarm
> parameters. In some scenarios with very frequent updates, a large corpus,
> and a high load of heterogeneous queries, you might want to dump the
> documentCache and queryResultCache: the cache hit ratio tends to be very
> low, and the caches will just consume a lot of memory and CPU time.
> 
> In one of my projects I finally decided to use only the filterCache. The
> other caches took too much RAM and CPU while running, had a lot of
> evictions, and still a low hit ratio. I could, of course, make the caches a
> lot bigger and increase autowarming, but autowarming a cache would then take
> a lot of time and a very, very large amount of RAM. I chose to rely on the
> OS cache instead.
> 
> Cheers,
> 
> > Dear Adam,
> > 
> > I also got the OutOfMemory exception. I changed the JAVA_OPTS in
> > catalina.sh as follows.
> > 
> >     ...
> >     if [ -z "$LOGGING_MANAGER" ]; then
> >       JAVA_OPTS="$JAVA_OPTS -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager"
> >     else
> >       JAVA_OPTS="$JAVA_OPTS -server -Xms8096m -Xmx8096m"
> >     fi
> >     ...
> > 
> > Is this change correct? After making it, I still got the same exception.
> > The index is updated and searched frequently; I am trying to change the
> > code to avoid the frequent updates. I guess changing JAVA_OPTS alone does
> > not work.
> > 
> > Could you give me some help?
> > 
> > Thanks,
> > LB
> > 
> > 
> > On Wed, Jan 19, 2011 at 10:05 PM, Adam Estrada
> > <estrada.adam.gro...@gmail.com> wrote:
> > > Is anyone familiar with the environment variable, JAVA_OPTS? I set
> > > mine to a much larger heap size and never had any of these issues
> > > again.
> > > 
> > > JAVA_OPTS = -server -Xms4048m -Xmx4048m
> > > 
> > > Adam
> > > 
> > > On Wed, Jan 19, 2011 at 3:29 AM, Isan Fulia <isan.fu...@germinait.com>
> > > wrote:
> > > > Hi all,
> > > > By adding more servers, do you mean sharding the index? And after
> > > > sharding, how will my query performance be affected? Will the query
> > > > execution time increase?
> > > > 
> > > > Thanks,
> > > > Isan Fulia.
> > > > 
> > > > On 19 January 2011 12:52, Grijesh <pintu.grij...@gmail.com> wrote:
> > > >> Hi Isan,
> > > >> 
> > > >> It seems your index size of 25GB is much larger than your total RAM
> > > >> of 4GB.
> > > >> You have to do two things to avoid the Out Of Memory problem:
> > > >> 1. Buy more RAM; add at least 12 GB more.
> > > >> 2. Increase the memory allocated to Solr by setting the Xmx value;
> > > >> allocate at least 12 GB to Solr.
> > > >> 
> > > >> But if your whole index fits into the cache memory, it will give you
> > > >> better results.
> > > >> 
> > > >> Also add more servers to load balance, as your QPS is high.
> > > >> Your 7 lakh documents making a 25 GB index looks quite high. Try to
> > > >> lower the index size.
> > > >> What are you indexing in your 25GB of index?
> > > >> 
> > > >> -----
> > > >> Thanx:
> > > >> Grijesh
> > > >> --
> > > >> View this message in context:
> > > >> http://lucene.472066.n3.nabble.com/Solr-Out-of-Memory-Error-tp2280037p2285779.html
> > > >> Sent from the Solr - User mailing list archive at Nabble.com.
> > > > 
> > > > --
> > > > Thanks & Regards,
> > > > Isan Fulia.
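
To enable the GC debug logging suggested earlier in the thread, flags along
these lines can be appended to JAVA_OPTS. This is a sketch for a Sun/Oracle
HotSpot JVM of that era; the flag names are version-dependent and the log path
is only an example, so verify both against your JVM's documentation.

```shell
# Illustrative GC-logging flags for a HotSpot JVM (Java 6/7 era).
# The log path /var/log/solr/gc.log is an example, not a convention.
JAVA_OPTS="$JAVA_OPTS -verbose:gc \
  -XX:+PrintGCDetails \
  -XX:+PrintGCTimeStamps \
  -Xloggc:/var/log/solr/gc.log"
echo "$JAVA_OPTS"
```

The resulting gc.log can then be inspected directly, or fed to a GC log
viewer, to see whether the tenured generation fills up before the
OutOfMemory error occurs.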
