What is your unit test runner's JVM heap size? By default, the MR config "io.sort.mb" is "100" MB, and the MapOutputBuffer constructor tries to allocate that much up front as a single byte array; if the heap can't fit it, the task fails with exactly this OutOfMemoryError. Lower "io.sort.mb" to "20" or so before submitting your job and that should get rid of this.
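For example (a sketch, not your exact setup): in a unit test you could either set it programmatically on the job's Configuration before submission, i.e. conf.setInt("io.sort.mb", 20), or drop the override into a test-only mapred-site.xml on the test classpath, along the lines of:

```
<!-- test-resources/mapred-site.xml (picked up from the test classpath) -->
<property>
  <name>io.sort.mb</name>
  <value>20</value>
  <description>Map-side sort buffer size in MB; lowered so
  LocalJobRunner tests fit in a small JVM heap.</description>
</property>
```

Either way the buffer shrinks from ~100 MB to ~20 MB, which is usually well within a default test JVM heap.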
On Wed, May 2, 2012 at 5:51 AM, Jay Vyas <[email protected]> wrote:
> Hi guys:
>
> I have a map/r job that has always worked fine, but which fails due to a
> heap space error on my local machine during unit tests.
>
> It runs in hadoop's default mode, and just fails during the constructor of
> the MapOutputBuffer... Any thoughts on why?
>
> I don't do any custom memory settings in my unit tests, because they aren't
> really needed --- so I assume this is related to /tmp files
> or something... but can't track down the issue.
>
> Any thoughts would be very much appreciated.
>
> 12/05/01 19:15:53 WARN mapred.LocalJobRunner: job_local_0002
> java.lang.OutOfMemoryError: Java heap space
>     at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:807)
>     at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:557)
>
> --
> Jay Vyas
> MMSB/UCHC

--
Harsh J
