Thanks for the info.

I was afraid there might be a memory leak, although so far the problem hasn't
occurred since I enlarged the PermGen size.

Is there any way to check the current PermGen usage and catch the problem
before the system crashes? I've read some articles recommending that I
include the flags '-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/tmp/heapDumps' when starting the server. I've added
them, but they only write out a heap dump after the error has already
occurred.
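For monitoring before the crash, one option is to poll the JVM's own memory
pool beans. Below is a minimal sketch using the standard
java.lang.management API; the pool-name matching is an assumption (on Java 7
the pool is typically called "PS Perm Gen" or "CMS Perm Gen" depending on
the collector, and on Java 8+ PermGen was replaced by "Metaspace"), so check
the names your JVM actually reports:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PermGenMonitor {
    public static void main(String[] args) {
        // Iterate over all memory pools the JVM exposes and report the
        // one backing class metadata (PermGen on Java 7, Metaspace on 8+).
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            if (name.contains("Perm Gen") || name.contains("Metaspace")) {
                MemoryUsage usage = pool.getUsage();
                long used = usage.getUsed();
                long max = usage.getMax(); // -1 means no configured limit
                System.out.printf("%s: used=%d bytes, max=%d bytes%n",
                        name, used, max);
            }
        }
    }
}
```

From the command line, `jstat -gc <pid>` reports the same numbers in its
permanent-generation columns, which may be easier to script against a
running Solr process. A pool obtained this way also supports
setUsageThreshold(), so you could raise an alert well before the space
fills up instead of waiting for the OutOfMemoryError.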

Regards,
Edwin


On 18 May 2015 at 16:54, Tomasz Borek <tomasz.bo...@gmail.com> wrote:

> The error happens either when you have too large a codebase, when your
> application is String-intensive (Solr included), or when your previous
> process did not terminate cleanly.
>
> Can't say for certain which Solr usage scenarios are string-intensive
> without a deep look at its code. Usually enlarging PermSize helps with this
> problem, though if there's a leak, it will only increase the time the app
> runs before crashing.
>
> This is purely from JVM side of things. You may want to read up more on
> PermGen to know various problem scenarios.
>
> pozdrawiam,
> LAFK
>
> 2015-05-18 4:07 GMT+02:00 Zheng Lin Edwin Yeo <edwinye...@gmail.com>:
>
> > Hi,
> >
> > I've recently upgraded my system to 16GB RAM. While there's no more
> > OutOfMemoryError due to the physical memory being full, I now get
> > "java.lang.OutOfMemoryError: PermGen space". This didn't happen
> > previously, as I think the physical memory ran out first.
> >
> > This occurs after about 2 days of running the Solr server continuously,
> > but the amount of physical memory used is only between 50% and 60%. I
> > have read that we can set -XX:MaxPermSize=256M when starting up the
> > server. Will this help prevent such an error from occurring?
> >
> > I'm using Solr 5.1.0 with two shards and a replica for each shard,
> > together with an external ZooKeeper 3.4.6.
> >
> >
> > Regards,
> > Edwin
> >
>