This is an automated email from the ASF dual-hosted git repository.

zjffdu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/zeppelin.git


The following commit(s) were added to refs/heads/master by this push:
     new 6045079  [ZEPPELIN-4950]. Support for manually specifying the Java version of the Spark Interpreter Scala REPL
6045079 is described below

commit 6045079b5f5cb89def74bef55518351d4966c2dd
Author: xiejiajun <xiejiaju...@163.com>
AuthorDate: Mon Jul 13 18:54:56 2020 +0800

    [ZEPPELIN-4950]. Support for manually specifying the Java version of the Spark Interpreter Scala REPL
    
    ### What is this PR for?
    - Support for manually specifying the Java version (bytecode target) of the Spark Interpreter Scala REPL, set via `spark.repl.target` (as sketched below).
    - This feature can be used to resolve runtime errors caused by referencing third-party libraries that require a newer Java version than the REPL's default bytecode target.
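    
    A minimal sketch of setting the property from a notebook paragraph (the property name comes from this patch; the paragraph form is standard `%spark.conf` usage):
    
    ```
    %spark.conf
    spark.repl.target jvm-1.8
    ```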
     
    ### What type of PR is it?
    [Feature]
    
    ### Todos
    * [ ] - Task
    
    ### What is the Jira issue?
    *  https://issues.apache.org/jira/projects/ZEPPELIN/issues/ZEPPELIN-4950
    
    ### How should this be tested?
    * manually tested
        *  Specify `spark.repl.target` through `%spark.conf` or when configuring the Spark interpreter.
        *  From the Spark interpreter, reference and call a Java method in a third-party library that was built with Java 8 and uses Java 8 features such as interface static methods (see the sketch below). If the call succeeds, the test passes.
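    
    A hypothetical paragraph for such a check, using `java.util.Comparator.naturalOrder` (a static method declared on a Java 8 interface) as the Java 8 feature; the specific method is an illustrative assumption, not taken from the patch:
    
    ```scala
    // Compiling a call to a static method declared on a Java 8 interface
    // requires the jvm-1.8 bytecode target; under the old jvm-1.6 default
    // the Scala REPL rejects the call.
    val cmp = java.util.Comparator.naturalOrder[Integer]()
    println(cmp.compare(1, 2) < 0)  // true: 1 sorts before 2
    ```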
    
    ### Screenshots (if appropriate)
    - A runtime error is triggered when we reference a third-party library that uses new Java 8 features if we keep the default value `jvm-1.6` of `settings.target.value`:
    
![image](https://user-images.githubusercontent.com/26395958/87297066-6581c580-c53a-11ea-989a-ad11001501f4.png)
    
    ### Questions:
    * Do the license files need updating? NO
    * Are there breaking changes for older versions? NO
    * Does this need documentation? Yes
    
    Author: xiejiajun <xiejiaju...@163.com>
    Author: JakeXie <xiejiaju...@163.com>
    Author: xie-jia-jun <xiejiaju...@163.com>
    
    Closes #3852 from xiejiajun/spark_repl_target and squashes the following commits:
    
    d131f19e0 [xiejiajun] spark.md docs fix
    68720543a [xiejiajun] add spark.repl.target to spark.md docs
    6a05b4f83 [xiejiajun] Support for manually specifying the Java version of Spark Interpreter Scala REPL
    6a2cf04f3 [xiejiajun] Merge branch 'branch-0.9' of https://github.com/apache/zeppelin into branch-0.9
    076668acb [JakeXie] Merge pull request #7 from apache/branch-0.9
    00e14dd8f [JakeXie] Merge pull request #5 from apache/branch-0.9
    9bb7341ff [xiejiajun] Merge remote-tracking branch 'origin/branch-0.9' into branch-0.9
    9bc56056b [xiejiajun] bug fix: when removing a paragraph, the interpreter process refused to connect because it had stopped abnormally, which eventually caused the paragraph removal to fail.
    97d271487 [xie-jia-jun] Merge pull request #2 from apache/branch-0.9
    9b3c744a0 [xiejiajun] added a timeout for getting the Thrift client, to avoid situations where the interpreter may not be restarted when the interpreter process exits unexpectedly
    
    (cherry picked from commit 054651f9e887d5edb24d000cdae83177fd3307da)
    Signed-off-by: Jeff Zhang <zjf...@apache.org>
---
 docs/interpreter/spark.md                                             | 5 +++++
 .../scala/org/apache/zeppelin/spark/SparkScala210Interpreter.scala    | 3 +++
 .../scala/org/apache/zeppelin/spark/SparkScala211Interpreter.scala    | 2 ++
 .../scala/org/apache/zeppelin/spark/SparkScala212Interpreter.scala    | 3 +++
 4 files changed, 13 insertions(+)

diff --git a/docs/interpreter/spark.md b/docs/interpreter/spark.md
index 5fc9305..21069b4 100644
--- a/docs/interpreter/spark.md
+++ b/docs/interpreter/spark.md
@@ -199,6 +199,11 @@ You can also set other Spark properties which are not listed in the table. For a
     <td>false</td>
     <td>whether use yarn proxy url as spark weburl, e.g. http://localhost:8088/proxy/application_1583396598068_0004</td>
   </tr>
+  <tr>
+    <td>spark.repl.target</td>
+    <td>jvm-1.8</td>
+    <td>Manually specify the Java bytecode version of the Spark Interpreter Scala REPL. Available options: [jvm-1.5, jvm-1.6, jvm-1.7, jvm-1.8]</td>
+  </tr>
 </table>
 
 Without any configuration, Spark interpreter works out of box in local mode. But if you want to connect to your Spark cluster, you'll need to follow below two simple steps.
diff --git a/spark/scala-2.10/src/main/scala/org/apache/zeppelin/spark/SparkScala210Interpreter.scala b/spark/scala-2.10/src/main/scala/org/apache/zeppelin/spark/SparkScala210Interpreter.scala
index 0eac200..f59f137 100644
--- a/spark/scala-2.10/src/main/scala/org/apache/zeppelin/spark/SparkScala210Interpreter.scala
+++ b/spark/scala-2.10/src/main/scala/org/apache/zeppelin/spark/SparkScala210Interpreter.scala
@@ -67,10 +67,13 @@ class SparkScala210Interpreter(override val conf: SparkConf,
       sparkHttpServer = server
       conf.set("spark.repl.class.uri", uri)
     }
+    val target = conf.get("spark.repl.target", "jvm-1.8")
 
     val settings = new Settings()
     settings.embeddedDefaults(sparkInterpreterClassLoader)
     settings.usejavacp.value = true
+    settings.target.value = target
+
     this.userJars = getUserJars()
     LOGGER.info("UserJars: " + userJars.mkString(File.pathSeparator))
     settings.classpath.value = userJars.mkString(File.pathSeparator)
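
A standalone sketch of the pattern this patch applies in each interpreter (assumptions: Scala 2.11 with scala-compiler on the classpath; `ReplTargetSketch` and the system-property lookup are illustrative, not Zeppelin code):

```scala
import scala.tools.nsc.Settings
import scala.tools.nsc.interpreter.IMain

object ReplTargetSketch {
  def main(args: Array[String]): Unit = {
    // Read the desired bytecode target, falling back to jvm-1.8 as the patch does.
    val target = sys.props.getOrElse("spark.repl.target", "jvm-1.8")

    val settings = new Settings()
    settings.usejavacp.value = true
    settings.target.value = target // classfiles are now emitted for this JVM target

    val repl = new IMain(settings)
    repl.interpret("println(1 + 1)")
    repl.close()
  }
}
```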
diff --git a/spark/scala-2.11/src/main/scala/org/apache/zeppelin/spark/SparkScala211Interpreter.scala b/spark/scala-2.11/src/main/scala/org/apache/zeppelin/spark/SparkScala211Interpreter.scala
index cb5a016..d2fb971 100644
--- a/spark/scala-2.11/src/main/scala/org/apache/zeppelin/spark/SparkScala211Interpreter.scala
+++ b/spark/scala-2.11/src/main/scala/org/apache/zeppelin/spark/SparkScala211Interpreter.scala
@@ -66,12 +66,14 @@ class SparkScala211Interpreter(override val conf: SparkConf,
       sparkHttpServer = server
       conf.set("spark.repl.class.uri", uri)
     }
+    val target = conf.get("spark.repl.target", "jvm-1.8")
 
     val settings = new Settings()
     settings.processArguments(List("-Yrepl-class-based",
       "-Yrepl-outdir", s"${outputDir.getAbsolutePath}"), true)
     settings.embeddedDefaults(sparkInterpreterClassLoader)
     settings.usejavacp.value = true
+    settings.target.value = target
 
     this.userJars = getUserJars()
     LOGGER.info("UserJars: " + userJars.mkString(File.pathSeparator))
diff --git a/spark/scala-2.12/src/main/scala/org/apache/zeppelin/spark/SparkScala212Interpreter.scala b/spark/scala-2.12/src/main/scala/org/apache/zeppelin/spark/SparkScala212Interpreter.scala
index 2b04a1d..7f35125 100644
--- a/spark/scala-2.12/src/main/scala/org/apache/zeppelin/spark/SparkScala212Interpreter.scala
+++ b/spark/scala-2.12/src/main/scala/org/apache/zeppelin/spark/SparkScala212Interpreter.scala
@@ -60,12 +60,15 @@ class SparkScala212Interpreter(override val conf: SparkConf,
     LOGGER.info("Scala shell repl output dir: " + outputDir.getAbsolutePath)
     outputDir.deleteOnExit()
     conf.set("spark.repl.class.outputDir", outputDir.getAbsolutePath)
+    val target = conf.get("spark.repl.target", "jvm-1.8")
 
     val settings = new Settings()
     settings.processArguments(List("-Yrepl-class-based",
       "-Yrepl-outdir", s"${outputDir.getAbsolutePath}"), true)
     settings.embeddedDefaults(sparkInterpreterClassLoader)
     settings.usejavacp.value = true
+    settings.target.value = target
+
     this.userJars = getUserJars()
     LOGGER.info("UserJars: " + userJars.mkString(File.pathSeparator))
     settings.classpath.value = userJars.mkString(File.pathSeparator)
