Re: Re: how to avoid OOM while merging the index

2012-01-09 Thread James
Since the Hadoop task monitor checks each task and kills any task it finds consuming too much memory, I currently want to find a way to decrease the memory usage on the Solr side. Any ideas? At 2012-01-09 17:07:09, "Tomas Zerolo" wrote: >On Mon, Jan 09, 2012 at 01:29:39PM +08…
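
One way to trim Solr-side memory during merging, sketched below, is to keep Lucene's in-heap indexing buffer at its modest default and merge fewer segments per pass. This is a minimal sketch assuming the Lucene 3.x-era API that was current when this thread ran; the analyzer choice and the numbers are illustrative placeholders, not values anyone in the thread recommended.

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.index.TieredMergePolicy;
    import org.apache.lucene.util.Version;

    public class LowMemoryMergeConfig {
        static IndexWriterConfig build() {
            Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_35); // placeholder
            IndexWriterConfig cfg = new IndexWriterConfig(Version.LUCENE_35, analyzer);

            // Cap the in-heap buffer used while segments are written
            // (16 MB is the default; raise only if the heap allows it).
            cfg.setRAMBufferSizeMB(16);

            // Merge fewer segments per pass so less transient state
            // is held in memory at any one time.
            TieredMergePolicy mp = new TieredMergePolicy();
            mp.setMaxMergeAtOnce(5);
            mp.setSegmentsPerTier(5.0);
            cfg.setMergePolicy(mp);
            return cfg;
        }
    }

Note that addIndexes(Directory...) copies segments over rather than merging them, so the merge policy mainly matters if the job also calls optimize()/forceMerge on the combined index afterwards.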

Re: how to avoid OOM while merging the index

2012-01-09 Thread Tomas Zerolo
On Mon, Jan 09, 2012 at 01:29:39PM +0800, James wrote: > I am building the Solr index on Hadoop, and at the reduce step I run the task that merges the indexes. Each part of the index is about 1 GB, and I have 10 indexes to merge together. I always get the Java heap exhausted; the heap size is about 2 GB as well. …
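
For reference, the ~2 GB heap of a Hadoop reduce task is normally set per job rather than per node. A hedged sketch using the classic pre-YARN property names that were current in early 2012; the exact values here are illustrative, and the reduce-only override exists only in Hadoop versions that picked up MAPREDUCE-478 (0.21+/1.x):

    import org.apache.hadoop.mapred.JobConf;

    public class MergeJobHeap {
        static JobConf configure(JobConf job) {
            // Heap for all child JVMs spawned for map and reduce tasks.
            job.set("mapred.child.java.opts", "-Xmx2048m");
            // Reduce-side override (where the index merge runs),
            // if the Hadoop version in use supports it.
            job.set("mapred.reduce.child.java.opts", "-Xmx3072m");
            return job;
        }
    }

Raising -Xmx only helps while it stays under whatever per-task limit the task monitor enforces, which is exactly the constraint James raises in his follow-up above.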

Re: how to avoid OOM while merging the index

2012-01-09 Thread Ralf Matulat
A quick guess: if you are using Tomcat, for example, be sure to grant unlimited virtual memory to that process, e.g. by putting "ulimit -v unlimited" in your Tomcat init script (if you're using Linux). On 09.01.2012 06:29, James wrote: I am building the Solr index on Hadoop, and at the reduce step …
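
The ulimit suggestion connects to Lucene as follows: on a 64-bit JVM, FSDirectory.open() typically returns an MMapDirectory, which maps index files into virtual address space, so ten 1 GB parts can blow past a restrictive "ulimit -v" even while the Java heap stays small. A minimal sketch, assuming the Lucene 3.x-era API; the helper and its flag are hypothetical:

    import java.io.File;
    import java.io.IOException;

    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.MMapDirectory;
    import org.apache.lucene.store.NIOFSDirectory;

    public class DirectoryChoice {
        // Hypothetical helper: choose an index Directory based on whether
        // the process runs under a tight virtual-memory ulimit.
        static Directory open(File path, boolean vmemConstrained) throws IOException {
            if (vmemConstrained) {
                // Reads via java.nio file channels; no large address-space mappings.
                return new NIOFSDirectory(path);
            }
            // Memory-maps index files; fast, but each mapped file consumes
            // virtual address space counted against "ulimit -v".
            return new MMapDirectory(path);
        }
    }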

how to avoid OOM while merging the index

2012-01-09 Thread James
I am building the Solr index on Hadoop, and at the reduce step I run the task that merges the indexes. Each part of the index is about 1 GB, and I have 10 indexes to merge together. I always get the Java heap exhausted, even though the heap size is about 2 GB. I wonder which part uses so much memory.
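
For concreteness, here is a minimal sketch of the merge step described above, assuming Lucene 3.x-era APIs; all paths are hypothetical placeholders. The key point is to keep every Directory on local disk rather than in a RAMDirectory, so segment data streams from disk instead of occupying heap:

    import java.io.File;

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.util.Version;

    public class MergeShards {
        public static void main(String[] args) throws Exception {
            // Target index on local disk, NOT a RAMDirectory -- a RAMDirectory
            // would pull the whole merged index (~10 GB here) into the heap.
            Directory target = FSDirectory.open(new File("/local/merged-index"));

            IndexWriterConfig cfg = new IndexWriterConfig(
                    Version.LUCENE_35, new StandardAnalyzer(Version.LUCENE_35));
            cfg.setRAMBufferSizeMB(16); // keep the in-heap buffer modest

            IndexWriter writer = new IndexWriter(target, cfg);

            Directory[] parts = new Directory[10];
            for (int i = 0; i < parts.length; i++) {
                parts[i] = FSDirectory.open(new File("/local/part-" + i));
            }

            // Copies segment files over without decoding them into memory.
            writer.addIndexes(parts);
            writer.close();
        }
    }

With this pattern the heap mostly holds the RAM buffer plus per-segment bookkeeping, so a 2 GB heap should in principle be ample for a 10-way merge; a heap OOM usually points at indexes being held in memory somewhere else in the task.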