This is an automated email from the ASF dual-hosted git repository.

pdallig pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/zeppelin.git

The following commit(s) were added to refs/heads/master by this push:
     new f6bb339e09 Small typos in Spark introduction
f6bb339e09 is described below

commit f6bb339e095d1745b8831b6f506911ef8d897e36
Author: Anton <68559435+antonvasile...@users.noreply.github.com>
AuthorDate: Fri Apr 29 13:17:21 2022 +0300

    Small typos in Spark introduction

    ### What is this PR for?
    Small typos in Spark introduction

    ### What type of PR is it?
    Documentation

    ### What is the Jira issue?
    No issue

    ### How should this be tested?
    Proofread modified strings.

    ### Screenshots (if appropriate)

    ### Questions:
    * Does the licenses files need to update? No
    * Is there breaking changes for older versions? No
    * Does this needs documentation? No

    Author: Anton <68559435+antonvasile...@users.noreply.github.com>

    Closes #4367 from antonvasilev52/patch-1 and squashes the following commits:

    c4d1048f0 [Anton] Small typos in Spark introduction
---
 .../1. Spark Interpreter Introduction_2F8KN6TKK.zpln | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/notebook/Spark Tutorial/1. Spark Interpreter Introduction_2F8KN6TKK.zpln b/notebook/Spark Tutorial/1. Spark Interpreter Introduction_2F8KN6TKK.zpln
index 0f3cb7a399..f0d55c698d 100644
--- a/notebook/Spark Tutorial/1. Spark Interpreter Introduction_2F8KN6TKK.zpln
+++ b/notebook/Spark Tutorial/1. Spark Interpreter Introduction_2F8KN6TKK.zpln
@@ -87,7 +87,7 @@
     },
     {
       "title": "Generic Inline Configuration",
-      "text": "%spark.conf\n\nSPARK_HOME \u003cPATH_TO_SPARK_HOME\u003e\n\n# set driver memory to 8g\nspark.driver.memory 8g\n\n# set executor number to be 6\nspark.executor.instances 6\n\n# set executor memory 4g\nspark.executor.memory 4g\n\n# Any other spark properties can be set here. Here\u0027s avaliable spark configruation you can set. (http://spark.apache.org/docs/latest/configuration.html)\n",
+      "text": "%spark.conf\n\nSPARK_HOME \u003cPATH_TO_SPARK_HOME\u003e\n\n# set driver memory to 8g\nspark.driver.memory 8g\n\n# set executor number to be 6\nspark.executor.instances 6\n\n# set executor memory 4g\nspark.executor.memory 4g\n\n# Any other spark properties can be set here. Here\u0027s available spark configuration you can set. (http://spark.apache.org/docs/latest/configuration.html)\n",
       "user": "anonymous",
       "dateUpdated": "2020-04-30 10:56:30.840",
       "config": {
@@ -117,7 +117,7 @@
     },
     {
       "title": "Use Third Party Library",
-      "text": "%md\n\nThere\u0027re 2 ways to add third party libraries.\n\n* `Generic Inline Configuration` It is the recommended way to add third party jars/packages. Use `spark.jars` for adding local jar file and `spark.jars.packages` for adding packages\n* `Interpreter Setting` You can also config `spark.jars` and `spark.jars.packages` in interpreter setting, but since adding third party libraries is usually application specific. It is recommended to use `Generic Inline Configura [...]
+      "text": "%md\n\nThere\u0027re 2 ways to add third party libraries.\n\n* `Generic Inline Configuration` It is the recommended way to add third party jars/packages. Use `spark.jars` for adding local jar file and `spark.jars.packages` for adding packages\n* `Interpreter Setting` You can also config `spark.jars` and `spark.jars.packages` in interpreter setting, but since adding third party libraries is usually application specific. It is recommended to use `Generic Inline Configura [...]
"user": "anonymous", "dateUpdated": "2020-04-30 10:59:35.270", "config": { @@ -435,7 +435,7 @@ }, { "title": "Enable Impersonation", - "text": "%md\n\nBy default, all the spark interpreter will run as user who launch zeppelin server. This is OK for single user, but expose potential issue for multiple user scenaior. For multiple user scenaior, it is better to enable impersonation for Spark Interpreter in yarn mode.\nThere are 3 steps you need to do to enable impersonation.\n\n1. Enable it in Spark Interpreter Setting. You have to choose Isolated Per User mode, and then click the impersonation option as following sc [...] + "text": "%md\n\nBy default, all the spark interpreter will run as user who launch zeppelin server. This is OK for single user, but expose potential issue for multiple user scenario. For multiple user scenario, it is better to enable impersonation for Spark Interpreter in yarn mode.\nThere are 3 steps you need to do to enable impersonation.\n\n1. Enable it in Spark Interpreter Setting. You have to choose Isolated Per User mode, and then click the impersonation option as following sc [...] "user": "anonymous", "dateUpdated": "2020-04-30 11:11:51.383", "config": { @@ -463,7 +463,7 @@ "msg": [ { "type": "HTML", - "data": "\u003cdiv class\u003d\"markdown-body\"\u003e\n\u003cp\u003eBy default, all the spark interpreter will run as user who launch zeppelin server. This is OK for single user, but expose potential issue for multiple user scenaior. For multiple user scenaior, it is better to enable impersonation for Spark Interpreter in yarn mode.\u003cbr /\u003e\nThere are 3 steps you need to do to enable impersonation.\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003eEnable it in Spark Interp [...] + "data": "\u003cdiv class\u003d\"markdown-body\"\u003e\n\u003cp\u003eBy default, all the spark interpreter will run as user who launch zeppelin server. This is OK for single user, but expose potential issue for multiple user scenario. For multiple user scenario, it is better to enable impersonation for Spark Interpreter in yarn mode.\u003cbr /\u003e\nThere are 3 steps you need to do to enable impersonation.\u003c/p\u003e\n\u003col\u003e\n\u003cli\u003eEnable it in Spark Interp [...] } ] }, @@ -505,4 +505,4 @@ "isZeppelinNotebookCronEnable": false }, "info": {} -} \ No newline at end of file +}