Any suggestions as to how to track down the root cause of these errors?
1178709 [main] INFO org.apache.hadoop.mapred.JobClient - map 6% reduce 0%
11/11/15 00:45:29 INFO mapred.JobClient: Task Id : attempt_201111150008_0002_r_000000_0, Status : FAILED
1208771 [main] INFO org.apache.hadoop.mapred.JobClient - Task Id : attempt_201111150008_0002_r_000000_0, Status : FAILED
Error: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.mapred.IFile$Reader.readNextBlock(IFile.java:342)
at org.apache.hadoop.mapred.IFile$Reader.next(IFile.java:404)
at org.apache.hadoop.mapred.Merger$Segment.next(Merger.java:220)
at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:420)
at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:381)
at org.apache.hadoop.mapred.Merger.merge(Merger.java:60)
at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$LocalFSMerger.run(ReduceTask.java:2651)
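
For reference, here is what I am checking on the worker nodes so far: the -Xmx the child JVMs actually receive, and the failed attempt's own logs (paths assume the stock ${HADOOP_LOG_DIR}/userlogs layout; adjust for your install):

  # show the -Xmx each running child task JVM was actually launched with
  ps -ef | grep '[o]rg.apache.hadoop.mapred.Child' | grep -o -- '-Xmx[^ ]*'

  # read the failed reduce attempt's syslog on the tasktracker that ran it
  less $HADOOP_HOME/logs/userlogs/attempt_201111150008_0002_r_000000_0/syslog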
On 11/13/11 6:34 PM, "Eric Fiala" <[email protected]> wrote:
> Hoot, these are big numbers - some thoughts
> 1) does your machine have 1000 GB to spare for each java child process (each
> mapper + each reducer)? mapred.child.java.opts / -Xmx1048576m
> 2) does each of your daemons need / have 10G? HADOOP_HEAPSIZE=10000
>
> hth
> EF
>>>>> # The maximum amount of heap to use, in MB. Default is 1000.
>>>>> export HADOOP_HEAPSIZE=10000
>>>>> <property>
>>>>>   <name>mapred.child.java.opts</name>
>>>>>   <value>-Xmx1048576m</value>
>>>>> </property>
>>>>>
>
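
Re-reading Eric's numbers: -Xmx1048576m is 1048576 MB, i.e. roughly 1 TB of heap requested for every map and reduce child JVM, and HADOOP_HEAPSIZE=10000 is 10 GB per daemon. So I will try something much closer to the defaults, assuming ~1 GB per task is what I actually need:

  # conf/hadoop-env.sh -- daemon heap; 1000 MB is the shipped default
  export HADOOP_HEAPSIZE=1000

  <!-- conf/mapred-site.xml -- roughly 1 GB per map/reduce child JVM -->
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
  </property>

If the reduce still runs out of heap during the merge with a sane -Xmx, I gather lowering mapred.job.shuffle.input.buffer.percent or io.sort.factor is worth a try before raising the heap again.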