I guess you don't have shuffle configured. Can you look at the ApplicationMaster (AM) logs and paste them here? There will be a link to the AM logs on the application page of the RM web UI.
You can also check whether shuffle is configured. From the INSTALL file (http://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-mapreduce-project/INSTALL):

Step 7) Setup config: to run MapReduce applications, which are now in user land, you need to set up the NodeManager with the following configuration in your yarn-site.xml before you start the NodeManager.

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>

Step 8) Modify mapred-site.xml to use the yarn framework:

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

+Vinod

On Tue, Dec 20, 2011 at 8:12 AM, Arun C Murthy <[email protected]> wrote:
> Can you look at the /nodes web-page to see how many nodes you have?
>
> Also, do you see any exceptions in the ResourceManager logs on dn5?
>
> Arun
>
> On Dec 20, 2011, at 5:14 AM, Jingui Lee wrote:
>
> Hi, all
>
> I am running Hadoop 0.23 on 5 nodes.
>
> I could run any YARN application or MapReduce job on this cluster before.
>
> But after I changed the ResourceManager node from node4 to node5, when I
> run applications (I have modified the properties referenced in the
> configuration files), the map and reduce processes hang at 0% until I
> kill the application.
>
> I don't know why.
>
> terminal output:
>
> bin/hadoop jar hadoop-mapreduce-examples-0.23.0.jar wordcount /share/stdinput/1k /testread/hao
> 11/12/20 20:20:29 INFO mapreduce.Cluster: Cannot pick org.apache.hadoop.mapred.LocalClientProtocolProvider as the ClientProtocolProvider - returned null protocol
> 11/12/20 20:20:29 INFO ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
> 11/12/20 20:20:29 INFO mapred.ResourceMgrDelegate: Connecting to ResourceManager at dn5/192.168.3.204:50010
> 11/12/20 20:20:29 INFO ipc.HadoopYarnRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ClientRMProtocol
> 11/12/20 20:20:29 INFO mapred.ResourceMgrDelegate: Connected to ResourceManager at dn5/192.168.3.204:50010
> 11/12/20 20:20:29 WARN conf.Configuration: fs.default.name is deprecated. Instead, use fs.defaultFS
> 11/12/20 20:20:29 WARN conf.Configuration: mapred.used.genericoptionsparser is deprecated. Instead, use mapreduce.client.genericoptionsparser.used
> 11/12/20 20:20:29 INFO input.FileInputFormat: Total input paths to process : 1
> 11/12/20 20:20:29 INFO util.NativeCodeLoader: Loaded the native-hadoop library
> 11/12/20 20:20:29 WARN snappy.LoadSnappy: Snappy native library not loaded
> 11/12/20 20:20:29 INFO mapreduce.JobSubmitter: number of splits:1
> 11/12/20 20:20:29 INFO mapred.YARNRunner: AppMaster capability = memory: 2048
> 11/12/20 20:20:29 INFO mapred.YARNRunner: Command to launch container for ApplicationMaster is : $JAVA_HOME/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.mapreduce.container.log.dir=<LOG_DIR> -Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx1536m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1><LOG_DIR>/stdout 2><LOG_DIR>/stderr
> 11/12/20 20:20:29 INFO mapred.ResourceMgrDelegate: Submitted application application_1324372145692_0004 to ResourceManager
> 11/12/20 20:20:29 INFO mapred.ClientCache: Connecting to HistoryServer at: dn5:10020
> 11/12/20 20:20:29 INFO ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
> 11/12/20 20:20:29 INFO mapred.ClientCache: Connected to HistoryServer at: dn5:10020
> 11/12/20 20:20:29 INFO ipc.HadoopYarnRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.mapreduce.v2.api.MRClientProtocol
> 11/12/20 20:20:30 INFO mapreduce.Job: Running job: job_1324372145692_0004
> 11/12/20 20:20:31 INFO mapreduce.Job: map 0% reduce 0%
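As a quick sanity check on the settings from steps 7 and 8 above, something like the following could be used to parse your yarn-site.xml and mapred-site.xml and flag anything missing. This is a hypothetical helper sketched in Python, not part of Hadoop; the sample XML strings stand in for your real config files.

```python
# Hypothetical sanity check (not a Hadoop tool): verify that the
# *-site.xml files contain the shuffle/framework properties from
# INSTALL steps 7 and 8.
import xml.etree.ElementTree as ET

def read_props(xml_text):
    """Parse Hadoop *-site.xml text into a {name: value} dict."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name", "").strip(): p.findtext("value", "").strip()
            for p in root.iter("property")}

def check_shuffle(yarn_site, mapred_site):
    """Return a list of missing/incorrect settings; empty means OK."""
    yarn, mapred = read_props(yarn_site), read_props(mapred_site)
    problems = []
    if yarn.get("yarn.nodemanager.aux-services") != "mapreduce.shuffle":
        problems.append("yarn.nodemanager.aux-services != mapreduce.shuffle")
    if yarn.get("yarn.nodemanager.aux-services.mapreduce.shuffle.class") != \
            "org.apache.hadoop.mapred.ShuffleHandler":
        problems.append("shuffle class is not org.apache.hadoop.mapred.ShuffleHandler")
    if mapred.get("mapreduce.framework.name") != "yarn":
        problems.append("mapreduce.framework.name != yarn")
    return problems

# Sample config text standing in for the real files on the NodeManager.
yarn_site = """<configuration>
  <property><name>yarn.nodemanager.aux-services</name>
            <value>mapreduce.shuffle</value></property>
  <property><name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
            <value>org.apache.hadoop.mapred.ShuffleHandler</value></property>
</configuration>"""
mapred_site = """<configuration>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
</configuration>"""

print(check_shuffle(yarn_site, mapred_site))  # [] when correctly configured
```

If the NodeManager was started without these properties (the symptom described above: containers hang at map 0% reduce 0%), the returned list names each missing setting; restart the NodeManager after fixing them.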
