I am building a Solr index on Hadoop, and in the reduce step I run a task that merges the indexes. Each index part is about 1 GB, and there are 10 of them to merge together. I keep hitting a Java heap OutOfMemoryError even though the heap size is about 2 GB. I would like to know which part of the merge uses so much memory, and how to avoid the OOM during the merge process.
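For context, a minimal sketch of what such a merge task could look like, assuming the Lucene 2.x/3.0-era API that Solr embedded around that time; the class name, directory layout, and buffer size here are purely illustrative, not taken from the original post:

import java.io.File;

import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// Illustrative merge task: combine several on-disk index parts into one target index.
public class MergeParts {
    public static void main(String[] args) throws Exception {
        // args[0] = target index directory, args[1..n] = part index directories (hypothetical layout)
        Directory target = FSDirectory.open(new File(args[0]));

        IndexWriter writer = new IndexWriter(target, new WhitespaceAnalyzer(),
                true, IndexWriter.MaxFieldLength.UNLIMITED);

        // Keep the indexing RAM buffer small; the merge itself should stream
        // segment data rather than hold whole indexes on the heap.
        writer.setRAMBufferSizeMB(32);

        Directory[] parts = new Directory[args.length - 1];
        for (int i = 1; i < args.length; i++) {
            parts[i - 1] = FSDirectory.open(new File(args[i]));
        }

        // addIndexesNoOptimize copies existing segments without forcing a full
        // merge/optimize, which is much cheaper on memory than
        // addIndexes(IndexReader[]) followed by optimize().
        writer.addIndexesNoOptimize(parts);
        writer.close();
    }
}

If the merge is done via addIndexes(IndexReader[]) or ends with an optimize() of the combined 10 GB index, that final merge can be the step that exhausts the heap; merging from Directory objects and skipping the optimize is one common way to reduce the memory footprint.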