Marco,

Are you using a 64-bit JVM on your nodes or a 32-bit one?

The Sun JRE should say something like this (for 'java -version'):
"Java HotSpot(TM) 64-Bit Server VM (build 20.4-b02-402, mixed mode)"

If you are, could you post what 'free' says on your slave nodes?
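For context: a 32-bit JVM often cannot reserve a contiguous 1 GB heap once its limited address space gets fragmented, so "Could not reserve enough space for object heap" can show up even when 'free' reports plenty of physical RAM. As a rough sanity check, here is a sketch of the worst-case committed task heap on one slave. The slot counts and daemon allowance below are assumptions, not values from your cluster; substitute your actual mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum settings:

```shell
#!/bin/sh
# Sketch: worst-case committed child-task heap on one TaskTracker node.
# Slot counts and the daemon allowance are assumptions -- adjust to your config.
MAP_SLOTS=2            # mapred.tasktracker.map.tasks.maximum (assumed)
REDUCE_SLOTS=2         # mapred.tasktracker.reduce.tasks.maximum (assumed)
HEAP_PER_TASK_MB=1024  # from -Xmx1g in mapred.*.child.java.opts
DAEMON_MB=1000         # rough allowance for the DataNode + TaskTracker JVMs

TOTAL_MB=$(( (MAP_SLOTS + REDUCE_SLOTS) * HEAP_PER_TASK_MB + DAEMON_MB ))
echo "Worst-case committed heap: ${TOTAL_MB} MB"
```

With -Xmx1g per task, even two map slots plus two reduce slots approach 5 GB before per-JVM overhead; a 64-bit JVM on an m1.large can handle that, but a 32-bit JVM may fail to launch any single 1 GB child if it cannot find contiguous address space.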

On Sun, Dec 11, 2011 at 11:29 PM, Marco Didonna <[email protected]> wrote:
> Hello everyone,
> I'm running a small toy cluster (3 nodes) on EC2 configured as follows:
>
> * one node as JT+NN
> * two nodes as DN+TT
>
> I use whirr to build such cluster on demand (config file here
> http://pastebin.com/JXHYvMNb). Since my jobs are memory-intensive I'd
> like to exploit the 8GB of RAM the m1.large instance offers. Thus I
> added mapred.map.child.java.opts=-Xmx1g and
> mapred.reduce.child.java.opts=-Xmx1g (adding just
> hadoop-mapreduce.mapred.child.java.opts=-Xmx1g produced no effect,
> since the map tasks were still allocated the default 200MB). The problem is
> that with these settings I cannot have any job running because I
> always get
>
> 11/12/11 18:14:24 INFO mapred.JobClient: Running job: job_201112111644_0002
> 11/12/11 18:14:25 INFO mapred.JobClient:  map 0% reduce 0%
> 11/12/11 18:14:28 INFO mapred.JobClient: Task Id :
> attempt_201112111644_0002_m_000004_0, Status : FAILED
> java.lang.Throwable: Child Error
>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:242)
> Caused by: java.io.IOException: Task process exit with nonzero status of 1.
>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:229)
>
> 11/12/11 18:14:28 WARN mapred.JobClient: Error reading task
> outputip-10-87-1-170.ec2.internal
> 11/12/11 18:14:28 WARN mapred.JobClient: Error reading task
> outputip-10-87-1-170.ec2.internal
> 11/12/11 18:14:30 INFO mapred.JobClient: Task Id :
> attempt_201112111644_0002_r_000001_0, Status : FAILED
> java.lang.Throwable: Child Error
>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:242)
> Caused by: java.io.IOException: Task process exit with nonzero status of 1.
>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:229)
>
> And the only log I can get from the job_201112111644_0002 is the
> stdout and the stderr whose combined output is
>
> Could not create the Java virtual machine.
> Error occurred during initialization of VM
> Could not reserve enough space for object heap
>
> I really cannot understand why the JVM cannot allocate enough space:
> there's plenty of RAM. I also tried reducing the number of map slots
> to two: nothing changed. I'm out of ideas. I hope you can shed some
> light :)
>
> FYI, I use the Cloudera distribution for Hadoop, latest stable release available.
>
> Thanks for your attention.
>
> MD



-- 
Harsh J