From: sidharth kumar [mailto:[email protected]]
Sent: Monday, August 28, 2017 5:50 AM
To: user <[email protected]>
Subject: spark on yarn error -- Please help

Hi,

I have configured Apache Spark over YARN. I am able to run MapReduce jobs
successfully, but spark-shell gives the error below.


Kindly help me resolve this issue.



spark-defaults.conf
spark.master                     spark://master2:7077
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://ha-cluster/user/spark/ApplicationHistory
spark.shuffle.service.enabled    true
spark.shuffle.sevice.port        7337
spark.yarn.historyServer.address http://master2:18088
spark.yarn.archive               hdfs://jio-cluster/user/spark/share/spark-archive.zip
spark.master                     yarn
spark.dynamicAllocaltion.enabled true
spark.dynamicAllocaltion.executorIdleTimeout    60
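
Two details in this file look suspect: spark.master is set twice (spark://master2:7077 and yarn; the later value wins), and several property names are misspelled, so Spark silently ignores them. A hypothetical cleaned-up sketch, assuming YARN mode is the intent and using the standard spellings from the Spark configuration docs:

```properties
# spark-defaults.conf -- sketch only; keep a single spark.master entry
spark.master                                  yarn
spark.eventLog.enabled                        true
spark.eventLog.dir                            hdfs://ha-cluster/user/spark/ApplicationHistory
spark.shuffle.service.enabled                 true
# "sevice" -> "service"
spark.shuffle.service.port                    7337
spark.yarn.historyServer.address              http://master2:18088
spark.yarn.archive                            hdfs://jio-cluster/user/spark/share/spark-archive.zip
# "dynamicAllocaltion" -> "dynamicAllocation"
spark.dynamicAllocation.enabled               true
spark.dynamicAllocation.executorIdleTimeout   60s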


yarn-site.xml
<configuration>

<!-- Site specific YARN configuration properties -->
         <property>
                 <name>yarn.nodemanager.aux-services</name>
                 <value>mapreduce_shuffle</value>
        </property>


         <property>
                 <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
                 <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>

         <property>
                 <name>yarn.resourcemanager.resource-tracker.address</name>
                 <value>master1:8025</value>
        </property>

         <property>
                 <name>yarn.resourcemanager.scheduler.address</name>
                 <value>master1:8040</value>
        </property>


         <property>
                 <name>yarn.resourcemanager.address</name>
                 <value>master1:8032</value>
        </property>


        <property>
                <name>yarn.log-aggregation-enable </name>
                <value>true</value>
        </property>
</configuration>
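
Since spark-defaults.conf enables the external shuffle service and dynamic allocation, every NodeManager also needs Spark's shuffle aux-service registered, which the yarn-site.xml above does not have; executors then die at startup, matching the FAILED application. A sketch of the entries Spark's YARN shuffle service typically requires (it assumes the spark-<version>-yarn-shuffle jar is on each NodeManager's classpath):

```xml
<!-- Sketch: register Spark's external shuffle service alongside MapReduce's.
     Requires the spark-<version>-yarn-shuffle jar on every NodeManager classpath. -->
<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
        <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
        <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
```

After changing this, restart the NodeManagers; the underlying failure reason is usually visible in the YARN application logs (yarn logs -applicationId &lt;appId&gt;).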

ERROR:
17/08/28 14:47:05 ERROR cluster.YarnClientSchedulerBackend: Yarn application has already exited with state FAILED!
17/08/28 14:47:05 ERROR spark.SparkContext: Error initializing SparkContext.
java.lang.IllegalStateException: Spark context stopped while waiting for backend
        at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:673)
        at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:186)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:567)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
        at org.apache.spark.repl.Main$.createSparkSession(Main.scala:97)
        at $line3.$read$$iw$$iw.<init>(<console>:15)
        at $line3.$read$$iw.<init>(<console>:42)
        at $line3.$read.<init>(<console>:44)
        at $line3.$read$.<init>(<console>:48)
        at $line3.$read$.<clinit>(<console>)
        at $line3.$eval$.$print$lzycompute(<console>:7)
17/08/28 14:47:05 ERROR client.TransportClient: Failed to send RPC 5084109606506612903 to /slave3:55375: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
        at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
17/08/28 14:47:05 ERROR cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Sending RequestExecutors(0,0,Map(),Set()) to AM was unsuccessful
java.io.IOException: Failed to send RPC 5084109606506612903 to /slave3:55375: java.nio.channels.ClosedChannelException
        at org.apache.spark.network.client.TransportClient.lambda$sendRpc$2(TransportClient.java:237)
        at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
        at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
        at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
        at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:431)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:399)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:446)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
        at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
17/08/28 14:47:05 ERROR util.Utils: Uncaught exception in thread Yarn application state monitor
org.apache.spark.SparkException: Exception thrown in awaitResult:
        at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
        at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:108)
Caused by: java.io.IOException: Failed to send RPC 5084109606506612903 to /slave3:55375: java.nio.channels.ClosedChannelException
        at org.apache.spark.network.client.TransportClient.lambda$sendRpc$2(TransportClient.java:237)
        at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
        at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
        at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
        at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:431)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:399)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:446)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
        at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
java.lang.IllegalStateException: Spark context stopped while waiting for backend
  at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:673)
  at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:186)
  at org.apache.spark.SparkContext.<init>(SparkContext.scala:567)
  at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
  at org.apache.spark.repl.Main$.createSparkSession(Main.scala:97)
  ... 47 elided
<console>:14: error: not found: value spark
       import spark.implicits._
              ^
<console>:14: error: not found: value spark
       import spark.sql
              ^


Thanks and regards
Sidharth
