Github user 561152 commented on the issue:

    https://github.com/apache/incubator-predictionio/pull/441
  
    @mars thank you.
    I think this is a bug. Here is what I did:
    Test one:
    Versions tested: 0.12 and master
    Template used: Recommendation
    Local environment: HDP, Spark 2.1, Hadoop 2.7.3
    pio batchpredict: Exception in thread "main"
    org.apache.spark.SparkException: Only one SparkContext may be running in
    this JVM (see SPARK-2243). To ignore this error, set
    spark.driver.allowMultipleContexts = true. The currently running
    SparkContext was created at:
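    (As an aside, this first failure is easy to reproduce outside
    PredictionIO; under Spark 2.1, constructing a second SparkContext in one
    JVM throws the same exception. A minimal sketch, unrelated to the
    template code:)

        import org.apache.spark.{SparkConf, SparkContext}

        object TwoContextsSketch {
          def main(args: Array[String]): Unit = {
            // The first context starts normally.
            val first = new SparkContext(
              new SparkConf().setMaster("local").setAppName("first"))
            // This second constructor call fails with "Only one SparkContext
            // may be running in this JVM (see SPARK-2243)" unless
            // spark.driver.allowMultipleContexts=true is set.
            val second = new SparkContext(
              new SparkConf().setMaster("local").setAppName("second"))
          }
        }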
    Test two:
    Modified file
    core/src/main/scala/org/apache/predictionio/workflow/WorkflowContext.scala:
    val conf = new SparkConf()
    to:
    val conf = new SparkConf().set("spark.driver.allowMultipleContexts", "true")
    pio batchpredict then reports:
    [INFO] [ContextHandler] Started o.s.j.s.ServletContextHandler@2474df51{/metrics/json,null,AVAILABLE}
    [WARN] [SparkContext] Multiple running SparkContexts detected in the same JVM!
    [ERROR] [Utils] Aborting task
    [ERROR] [Executor] Exception in task 0 in stage 0 (TID 0)
    [WARN] [TaskSetManager] Lost task 0 in stage 0 (TID 0, localhost, executor driver): org.apache.spark.SparkException: This RDD lacks a SparkContext. It could happen in the following cases:
    (1) RDD transformations and actions are NOT invoked by the driver, but
    inside of other transformations; for example, rdd1.map(x =>
    rdd2.values.count() * x) is invalid because the values transformation and
    count action cannot be performed inside of the rdd1.map transformation.
    For more information, see SPARK-5063.
    (2) When a Spark Streaming job recovers from checkpoint, this exception 
will be hit if a reference to an RDD not defined by the streaming job is used 
in DStream operations. For more information, See SPARK-13758.
    at org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$sc(RDD.scala:89)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.PairRDDFunctions.lookup(PairRDDFunctions.scala:939)
    at org.apache.spark.mllib.recommendation.MatrixFactorizationModel.recommendProducts(MatrixFactorizationModel.scala:169)
    at org.example.recommendation.ALSAlgorithm$$anonfun$predict$1.apply(ALSAlgorithm.scala:85)
    at org.example.recommendation.ALSAlgorithm$$anonfun$predict$1.apply(ALSAlgorithm.scala:80)
    at scala.Option.map(Option.scala:146)
    at org.example.recommendation.ALSAlgorithm.predict(ALSAlgorithm.scala:80)
    at org.example.recommendation.ALSAlgorithm.predict(ALSAlgorithm.scala:22)
    at org.apache.predictionio.controller.PAlgorithm.predictBase(PAlgorithm.scala:76)
    at org.apache.predictionio.workflow.BatchPredict$$anonfun$15$$anonfun$16.apply(BatchPredict.scala:212)
    at org.apache.predictionio.workflow.BatchPredict$$anonfun$15$$anonfun$16.apply(BatchPredict.scala:211)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.immutable.List.map(List.scala:285)
    at org.apache.predictionio.workflow.BatchPredict$$anonfun$15.apply(BatchPredict.scala:211)
    at org.apache.predictionio.workflow.BatchPredict$$anonfun$15.apply(BatchPredict.scala:197)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply$mcV$sp(PairRDDFunctions.scala:1211)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply(PairRDDFunctions.scala:1210)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply(PairRDDFunctions.scala:1210)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1341)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1218)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1197)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
    [ERROR] [TaskSetManager] Task 0 in stage 0 failed 1 times; aborting job
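    (For context, the stack trace matches case (1) above:
    MatrixFactorizationModel.recommendProducts calls lookup on the model's
    feature RDDs, but BatchPredict.scala:211 invokes predict inside a task of
    another RDD, where no SparkContext is available. A minimal sketch of that
    pattern and the usual broadcast workaround; the names here are
    illustrative, not from the template:)

        import org.apache.spark.{SparkConf, SparkContext}

        object Spark5063Sketch {
          def main(args: Array[String]): Unit = {
            val sc = new SparkContext(
              new SparkConf().setMaster("local[2]").setAppName("spark-5063"))
            val queries  = sc.parallelize(1 to 3)
            val features = sc.parallelize(Seq(1 -> 0.5, 2 -> 1.5))

            // Invalid: lookup is an RDD action executed inside another RDD's
            // map task, where the deserialized RDD has no SparkContext, so it
            // fails exactly as above (SPARK-5063):
            // queries.map(q => features.lookup(q).head).collect()

            // Workaround: pull the small RDD to the driver and broadcast it,
            // so tasks read a plain Map instead of touching an RDD.
            val featureMap = sc.broadcast(features.collectAsMap())
            val scored = queries.map(q => featureMap.value.getOrElse(q, 0.0))
            println(scored.collect().mkString(", "))

            sc.stop()
          }
        }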
    
    Test three:
    pio deploy, then querying the deployed engine through the REST API,
    succeeds.
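    (For reference, the query I used is the standard Recommendation template
    query against the default deploy port, e.g.:)

        curl -H "Content-Type: application/json" \
          -d '{ "user": "1", "num": 4 }' \
          http://localhost:8000/queries.json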

