Modified: zeppelin/site/docs/0.7.0-SNAPSHOT/search_data.json URL: http://svn.apache.org/viewvc/zeppelin/site/docs/0.7.0-SNAPSHOT/search_data.json?rev=1774116&r1=1774115&r2=1774116&view=diff ============================================================================== --- zeppelin/site/docs/0.7.0-SNAPSHOT/search_data.json (original) +++ zeppelin/site/docs/0.7.0-SNAPSHOT/search_data.json Wed Dec 14 00:05:19 2016 @@ -5,7 +5,7 @@ "/development/howtocontribute.html": { "title": "Contributing to Apache Zeppelin (Code)", - "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Contributing to Apache Zeppelin ( Code )NOTE : Apache Zeppelin is an Apache2 License Software.Any contributions to Zeppelin (Source code, Documents, Image, Website) means you agree to license all your contributions as Apache2 License.Setting upHere are some tools you will need to build and test Zeppelin.Software Configuration Management ( SCM )Since Zeppelin uses Git for its SCM system, you need a git client installed on your development machine.Integrated Development Environment ( IDE )You are free to use whatever IDE you prefer, or your favorite command line editor.Build ToolsTo build the code, installOracle Java 7Apache MavenGetting the source codeFirst of all, you need Zeppelin source code. 
The official location of Zeppelin is http://git.apache.org/zeppelin.git.git accessGet the source code on your development machine using git.git clone git://git.apache.org/zeppelin.git zeppelinYou may also want to develop against a specific branch. For example, for branch-0.5.6git clone -b branch-0.5.6 git://git.apache.org/zeppelin.git zeppelinApache Zeppelin follows Fork &amp; Pull as a source control workflow.If you want to not only build Zeppelin but also make any changes, then you need to fork the Zeppelin github mirror repository and make a pull request.Buildmvn installTo skip testsmvn install -DskipTestsTo build with specific spark / hadoop versionmvn install -Dspark.version=x.x.x -Dhadoop.version=x.x.xFor the further Run Zeppelin server in development modecd zeppelin-serverHADOOP_HOME=YOUR_HADOOP_HOME JAVA_HOME=YOUR_JAVA_HOME mvn exec:java -Dexec.mainClass=&quot;org.apache.zeppelin.server.ZeppelinServer&quot; -Dexec.args=&quot;&quot;Note: Make sure you first run mvn clean install -DskipTests on your zeppelin root directory, otherwise your server build will fail to find the required dependencies in the local repo.or use daemon scriptbin/zeppelin-daemon startServer will be run on http://localhost:8080.Generating Thrift CodeSome portions of the Zeppelin code are generated by Thrift. For most Zeppelin changes, you don&#39;t need to worry about this. But if you modify any of the Thrift IDL files (e.g. zeppelin-interpreter/src/main/thrift/*.thrift), then you also need to regenerate these files and submit their updated version as part of your patch.To regenerate the code, install thrift-0.9.2 and change directory into Zeppelin source directory. 
and then run the following commandthrift -out zeppelin-interpreter/src/main/java/ --gen java zeppelin-interpreter/src/main/thrift/RemoteInterpreterService.thriftWhere to StartYou can find issues for beginner &amp; newbieStay involvedContributors should join the Zeppelin mailing lists....@zeppelin.apache.org is for people who want to contribute code to Zeppelin. subscribe, unsubscribe, archivesIf you have any issues, create a ticket in JIRA.", + "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Contributing to Apache Zeppelin ( Code )NOTE : Apache Zeppelin is an Apache2 License Software.Any contributions to Zeppelin (Source code, Documents, Image, Website) means you agree to license all your contributions as Apache2 License.Setting upHere are some tools you will need to build and test Zeppelin.Software Configuration Management ( SCM )Since Zeppelin uses Git for its SCM system, you need a git client installed on your development machine.Integrated Development Environment ( IDE )You are free to use whatever IDE you prefer, or your favorite command line editor.Build ToolsTo build the code, installOracle Java 7Apache MavenGetting the source codeFirst of all, you need Zeppelin source code. The official location of Zeppelin is http://git.apache.org/zeppelin.git.git accessGet the source code on your development machine using git.git clone git://git.apache.org/zeppelin.git zeppelinYou may also want to develop against a specific branch. 
For example, for branch-0.5.6git clone -b branch-0.5.6 git://git.apache.org/zeppelin.git zeppelinApache Zeppelin follows Fork &amp; Pull as a source control workflow.If you want to not only build Zeppelin but also make any changes, then you need to fork the Zeppelin github mirror repository and make a pull request.Buildmvn installTo skip testsmvn install -DskipTestsTo build with specific spark / hadoop versionmvn install -Dspark.version=x.x.x -Dhadoop.version=x.x.xFor the further Run Zeppelin server in development modecd zeppelin-serverHADOOP_HOME=YOUR_HADOOP_HOME JAVA_HOME=YOUR_JAVA_HOME mvn exec:java -Dexec.mainClass=&quot;org.apache.zeppelin.server.ZeppelinServer&quot; -Dexec.args=&quot;&quot;Note: Make sure you first run mvn clean install -DskipTests on your zeppelin root directory, otherwise your server build will fail to find the required dependencies in the local repo.or use daemon scriptbin/zeppelin-daemon startServer will be run on http://localhost:8080.Generating Thrift CodeSome portions of the Zeppelin code are generated by Thrift. For most Zeppelin changes, you don&#39;t need to worry about this. But if you modify any of the Thrift IDL files (e.g. zeppelin-interpreter/src/main/thrift/*.thrift), then you also need to regenerate these files and submit their updated version as part of your patch.To regenerate the code, install thrift-0.9.2 and then run the following command to generate thrift code.cd &lt;zeppelin_home&gt;/zeppelin-interpreter/src/main/thrift./genthrift.shWhere to StartYou can find issues for beginner &amp; newbieStay involvedContributors should join the Zeppelin mailing lists....@zeppelin.apache.org is for people who want to contribute code to Zeppelin. subscribe, unsubscribe, archivesIf you have any issues, create a ticket in JIRA.", "url": " /development/howtocontribute.html", "group": "development", "excerpt": "How can you contribute to Apache Zeppelin project? 
This document covers everything from setting up your development environment to making a pull request on Github." @@ -60,7 +60,7 @@ "/displaysystem/basicdisplaysystem.html": { "title": "Basic Display System in Apache Zeppelin", - "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Basic Display System in Apache ZeppelinTextBy default, Apache Zeppelin prints interpreter response as plain text using the text display system.You can explicitly say you&#39;re using the text display system.HtmlWith %html directive, Zeppelin treats your output as HTMLMathematical expressionsHTML display system automatically formats mathematical expressions using MathJax. You can use( INLINE EXPRESSION ) and $$ EXPRESSION $$ to format. 
For exampleTableIf you have data with rows separated by &#39;n&#39; (newline) and columns separated by &#39;t&#39; (tab) with the first row as header row, for exampleYou can simply use the %table display system to leverage Zeppelin&#39;s built-in visualization.If table contents start with %html, it is interpreted as HTML.Note : Display system is backend independent.", + "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Basic Display System in Apache ZeppelinTextBy default, Apache Zeppelin prints interpreter response as plain text using the text display system.You can explicitly say you&#39;re using the text display system.HtmlWith %html directive, Zeppelin treats your output as HTMLMathematical expressionsHTML display system automatically formats mathematical expressions using MathJax. You can use( INLINE EXPRESSION ) and $$ EXPRESSION $$ to format. For exampleTableIf you have data with rows separated by n (newline) and columns separated by t (tab) with the first row as header row, for exampleYou can simply use the %table display system to leverage Zeppelin&#39;s built-in visualization.If table contents start with %html, it is interpreted as HTML.Note : Display system is backend independent.", "url": " /displaysystem/basicdisplaysystem.html", "group": "display", "excerpt": "There are 3 basic display systems in Apache Zeppelin. By default, Zeppelin prints interpreter response as plain text using the text display system. With %html directive, Zeppelin treats your output as HTML. 
You can also simply use %table display system..." @@ -83,7 +83,7 @@ "/install/build.html": { "title": "Build from Source", - "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Building from SourceIf you want to build from source, you must first install the following dependencies: Name Value Git (Any Version) Maven 3.1.x or higher JDK 1.7 If you haven&#39;t installed Git and Maven yet, check the Build requirements section and follow the step-by-step instructions from there.1. Clone the Apache Zeppelin repositorygit clone https://github.com/apache/zeppelin.git2. Build sourceYou can build Zeppelin with the following maven command:mvn clean package -DskipTests [Options]If you&#39;re unsure about the options, use the same commands that create the official binary package.# update all pom.xml to use scala 2.11./dev/change_scala_version.sh 2.11# build zeppelin with all interpreters and include latest version of Apache spark support for local mode.mvn clean package -DskipTests -Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pr -Pscala-2.113. 
DoneYou can directly start Zeppelin by running after successful build:./bin/zeppelin-daemon.sh startCheck build-profiles section for further build options.If you are behind proxy, follow instructions in Proxy setting section.If you&#39;re interested in contribution, please check Contributing to Apache Zeppelin (Code) and Contributing to Apache Zeppelin (Website).Build profilesSpark InterpreterTo build with a specific Spark version, Hadoop version or specific features, define one or more of the following profiles and options:-Pspark-[version]Set spark major versionAvailable profiles are-Pspark-2.0-Pspark-1.6-Pspark-1.5-Pspark-1.4-Pcassandra-spark-1.5-Pcassandra-spark-1.4-Pcassandra-spark-1.3-Pcassandra-spark-1.2-Pcassandra-spark-1.1minor version can be adjusted by -Dspark.version=x.x.x-Phadoop-[version]set hadoop major versionAvailable profiles are-Phadoop-0.23-Phadoop-1-Phadoop-2.2-Phadoop-2.3-Phadoop-2.4-Phadoop-2.6minor version can be adjusted by -Dhadoop.version=x.x.x-Pscala-[version] (optional)set scala version (default 2.10)Available profiles are-Pscala-2.10-Pscala-2.11-Pyarn (optional)enable YARN support for local modeYARN for local mode is not supported for Spark v1.5.0 or higher. Set SPARK_HOME instead.-Ppyspark (optional)enable PySpark support for local mode.-Pr (optional)enable R support with SparkR integration.-Psparkr (optional)another R support with SparkR integration as well as local mode support.-Pvendor-repo (optional)enable 3rd party vendor repository (cloudera)-Pmapr[version] (optional)For the MapR Hadoop Distribution, these profiles will handle the Hadoop version. As MapR allows different versions of Spark to be installed, you should specify which version of Spark is installed on the cluster by adding a Spark profile (-Pspark-1.6, -Pspark-2.0, etc.) 
as needed.The correct Maven artifacts can be found for every version of MapR at http://doc.mapr.comAvailable profiles are-Pmapr3-Pmapr40-Pmapr41-Pmapr50-Pmapr51-Pexamples (optional)Build examples under zeppelin-examples directoryBuild command examplesHere are some examples with several options:# build with spark-2.0, scala-2.11./dev/change_scala_version.sh 2.11mvn clean package -Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pscala-2.11 -DskipTests# build with spark-1.6, scala-2.10mvn clean package -Pspark-1.6 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -DskipTests# spark-cassandra integrationmvn clean package -Pcassandra-spark-1.5 -Dhadoop.version=2.6.0 -Phadoop-2.6 -DskipTests# with CDHmvn clean package -Pspark-1.5 -Dhadoop.version=2.6.0-cdh5.5.0 -Phadoop-2.6 -Pvendor-repo -DskipTests# with MapRmvn clean package -Pspark-1.5 -Pmapr50 -DskipTestsIgnite Interpretermvn clean package -Dignite.version=1.6.0 -DskipTestsScalding Interpretermvn clean package -Pscalding -DskipTestsBuild requirementsInstall requirementsIf you don&#39;t have the requirements prepared, install them. (The installation method may vary according to your environment; the example is for Ubuntu.)sudo apt-get updatesudo apt-get install gitsudo apt-get install openjdk-7-jdksudo apt-get install npmsudo apt-get install libfontconfigInstall mavenwget http://www.eu.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gzsudo tar -zxf apache-maven-3.3.9-bin.tar.gz -C /usr/local/sudo ln -s /usr/local/apache-maven-3.3.9/bin/mvn /usr/local/bin/mvnNotes: - Ensure node is installed by running node --version - Ensure maven is running version 3.1.x or higher with mvn -version - Configure maven to use more memory than usual by export MAVEN_OPTS=&quot;-Xmx2g -XX:MaxPermSize=1024m&quot;Proxy setting (optional)If you&#39;re behind a proxy, you&#39;ll need to configure maven and npm to pass through it.First of all, configure maven in your ~/.m2/settings.xml.&lt;settings&gt; &lt;proxies&gt; 
&lt;proxy&gt; &lt;id&gt;proxy-http&lt;/id&gt; &lt;active&gt;true&lt;/active&gt; &lt;protocol&gt;http&lt;/protocol&gt; &lt;host&gt;localhost&lt;/host&gt; &lt;port&gt;3128&lt;/port&gt; &lt;!-- &lt;username&gt;usr&lt;/username&gt; &lt;password&gt;pwd&lt;/password&gt; --&gt; &lt;nonProxyHosts&gt;localhost|127.0.0.1&lt;/nonProxyHosts&gt; &lt;/proxy&gt; &lt;proxy&gt; &lt;id&gt;proxy-https&lt;/id&gt; &lt;active&gt;true&lt;/active&gt; &lt;protocol&gt;https&lt;/protocol&gt; &lt;host&gt;localhost&lt;/host&gt; &lt;port&gt;3128&lt;/port&gt; &lt;!-- &lt;username&gt;usr&lt;/username&gt; &lt;password&gt;pwd&lt;/password&gt; --&gt; &lt;nonProxyHosts&gt;localhost|127.0.0.1&lt;/nonProxyHosts&gt; &lt;/proxy&gt; &lt;/proxies&gt;&lt;/settings&gt;Then, the next commands will configure npm.npm config set proxy http://localhost:3128npm config set https-proxy http://localhost:3128npm config set registry &quot;http://registry.npmjs.org/&quot;npm config set strict-ssl falseConfigure git as wellgit config --global http.proxy http://localhost:3128git config --global https.proxy http://localhost:3128git config --global url.&quot;http://&quot;.insteadOf git://To clean up, set active false in Maven settings.xml and run these commands.npm config rm proxynpm config rm https-proxygit config --global --unset http.proxygit config --global --unset https.proxygit config --global --unset url.&quot;http://&quot;.insteadOfNotes: - If you are behind an NTLM proxy you can use Cntlm Authentication Proxy. - Replace localhost:3128 with the standard pattern http://user:pwd@host:port.PackageTo package the final distribution including the compressed archive, run:mvn clean package -Pbuild-distrTo build a distribution with specific profiles, run:mvn clean package -Pbuild-distr -Pspark-1.5 -Phadoop-2.4 -Pyarn -PpysparkThe profiles -Pspark-1.5 -Phadoop-2.4 -Pyarn -Ppyspark can be adjusted if you wish to build for a specific spark version, or omit support such as yarn. 
The archive is generated under zeppelin-distribution/target directoryRun end-to-end testsZeppelin comes with a set of end-to-end acceptance tests driving a headless selenium browser# assumes zeppelin-server running on localhost:8080 (use -Durl=.. to override)mvn verify# or take care of starting/stopping zeppelin-server from packaged zeppelin-distribution/targetmvn verify -P using-packaged-distr", + "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Building from SourceIf you want to build from source, you must first install the following dependencies: Name Value Git (Any Version) Maven 3.1.x or higher JDK 1.7 If you haven&#39;t installed Git and Maven yet, check the Build requirements section and follow the step-by-step instructions from there.1. Clone the Apache Zeppelin repositorygit clone https://github.com/apache/zeppelin.git2. Build sourceYou can build Zeppelin with the following maven command:mvn clean package -DskipTests [Options]If you&#39;re unsure about the options, use the same commands that create the official binary package.# update all pom.xml to use scala 2.11./dev/change_scala_version.sh 2.11# build zeppelin with all interpreters and include latest version of Apache spark support for local mode.mvn clean package -DskipTests -Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pr -Pscala-2.113. 
DoneYou can directly start Zeppelin by running after successful build:./bin/zeppelin-daemon.sh startCheck build-profiles section for further build options.If you are behind proxy, follow instructions in Proxy setting section.If you&#39;re interested in contribution, please check Contributing to Apache Zeppelin (Code) and Contributing to Apache Zeppelin (Website).Build profilesSpark InterpreterTo build with a specific Spark version, Hadoop version or specific features, define one or more of the following profiles and options:-Pspark-[version]Set spark major versionAvailable profiles are-Pspark-2.0-Pspark-1.6-Pspark-1.5-Pspark-1.4-Pcassandra-spark-1.5-Pcassandra-spark-1.4-Pcassandra-spark-1.3-Pcassandra-spark-1.2-Pcassandra-spark-1.1minor version can be adjusted by -Dspark.version=x.x.x-Phadoop-[version]set hadoop major versionAvailable profiles are-Phadoop-0.23-Phadoop-1-Phadoop-2.2-Phadoop-2.3-Phadoop-2.4-Phadoop-2.6-Phadoop-2.7minor version can be adjusted by -Dhadoop.version=x.x.x-Pscala-[version] (optional)set scala version (default 2.10)Available profiles are-Pscala-2.10-Pscala-2.11-Pyarn (optional)enable YARN support for local modeYARN for local mode is not supported for Spark v1.5.0 or higher. Set SPARK_HOME instead.-Ppyspark (optional)enable PySpark support for local mode.-Pr (optional)enable R support with SparkR integration.-Psparkr (optional)another R support with SparkR integration as well as local mode support.-Pvendor-repo (optional)enable 3rd party vendor repository (cloudera)-Pmapr[version] (optional)For the MapR Hadoop Distribution, these profiles will handle the Hadoop version. As MapR allows different versions of Spark to be installed, you should specify which version of Spark is installed on the cluster by adding a Spark profile (-Pspark-1.6, -Pspark-2.0, etc.) 
as needed.The correct Maven artifacts can be found for every version of MapR at http://doc.mapr.comAvailable profiles are-Pmapr3-Pmapr40-Pmapr41-Pmapr50-Pmapr51-Pexamples (optional)Build examples under zeppelin-examples directoryBuild command examplesHere are some examples with several options:# build with spark-2.0, scala-2.11./dev/change_scala_version.sh 2.11mvn clean package -Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pscala-2.11 -DskipTests# build with spark-1.6, scala-2.10mvn clean package -Pspark-1.6 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -DskipTests# spark-cassandra integrationmvn clean package -Pcassandra-spark-1.5 -Dhadoop.version=2.6.0 -Phadoop-2.6 -DskipTests# with CDHmvn clean package -Pspark-1.5 -Dhadoop.version=2.6.0-cdh5.5.0 -Phadoop-2.6 -Pvendor-repo -DskipTests# with MapRmvn clean package -Pspark-1.5 -Pmapr50 -DskipTestsIgnite Interpretermvn clean package -Dignite.version=1.6.0 -DskipTestsScalding Interpretermvn clean package -Pscalding -DskipTestsBuild requirementsInstall requirementsIf you don&#39;t have the requirements prepared, install them. (The installation method may vary according to your environment; the example is for Ubuntu.)sudo apt-get updatesudo apt-get install gitsudo apt-get install openjdk-7-jdksudo apt-get install npmsudo apt-get install libfontconfigInstall mavenwget http://www.eu.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gzsudo tar -zxf apache-maven-3.3.9-bin.tar.gz -C /usr/local/sudo ln -s /usr/local/apache-maven-3.3.9/bin/mvn /usr/local/bin/mvnNotes: - Ensure node is installed by running node --version - Ensure maven is running version 3.1.x or higher with mvn -version - Configure maven to use more memory than usual by export MAVEN_OPTS=&quot;-Xmx2g -XX:MaxPermSize=1024m&quot;Proxy setting (optional)If you&#39;re behind a proxy, you&#39;ll need to configure maven and npm to pass through it.First of all, configure maven in your ~/.m2/settings.xml.&lt;settings&gt; &lt;proxies&gt; 
&lt;proxy&gt; &lt;id&gt;proxy-http&lt;/id&gt; &lt;active&gt;true&lt;/active&gt; &lt;protocol&gt;http&lt;/protocol&gt; &lt;host&gt;localhost&lt;/host&gt; &lt;port&gt;3128&lt;/port&gt; &lt;!-- &lt;username&gt;usr&lt;/username&gt; &lt;password&gt;pwd&lt;/password&gt; --&gt; &lt;nonProxyHosts&gt;localhost|127.0.0.1&lt;/nonProxyHosts&gt; &lt;/proxy&gt; &lt;proxy&gt; &lt;id&gt;proxy-https&lt;/id&gt; &lt;active&gt;true&lt;/active&gt; &lt;protocol&gt;https&lt;/protocol&gt; &lt;host&gt;localhost&lt;/host&gt; &lt;port&gt;3128&lt;/port&gt; &lt;!-- &lt;username&gt;usr&lt;/username&gt; &lt;password&gt;pwd&lt;/password&gt; --&gt; &lt;nonProxyHosts&gt;localhost|127.0.0.1&lt;/nonProxyHosts&gt; &lt;/proxy&gt; &lt;/proxies&gt;&lt;/settings&gt;Then, the next commands will configure npm.npm config set proxy http://localhost:3128npm config set https-proxy http://localhost:3128npm config set registry &quot;http://registry.npmjs.org/&quot;npm config set strict-ssl falseConfigure git as wellgit config --global http.proxy http://localhost:3128git config --global https.proxy http://localhost:3128git config --global url.&quot;http://&quot;.insteadOf git://To clean up, set active false in Maven settings.xml and run these commands.npm config rm proxynpm config rm https-proxygit config --global --unset http.proxygit config --global --unset https.proxygit config --global --unset url.&quot;http://&quot;.insteadOfNotes: - If you are behind an NTLM proxy you can use Cntlm Authentication Proxy. - Replace localhost:3128 with the standard pattern http://user:pwd@host:port.PackageTo package the final distribution including the compressed archive, run:mvn clean package -Pbuild-distrTo build a distribution with specific profiles, run:mvn clean package -Pbuild-distr -Pspark-1.5 -Phadoop-2.4 -Pyarn -PpysparkThe profiles -Pspark-1.5 -Phadoop-2.4 -Pyarn -Ppyspark can be adjusted if you wish to build for a specific spark version, or omit support such as yarn. 
The archive is generated under zeppelin-distribution/target directoryRun end-to-end testsZeppelin comes with a set of end-to-end acceptance tests driving a headless selenium browser# assumes zeppelin-server running on localhost:8080 (use -Durl=.. to override)mvn verify# or take care of starting/stopping zeppelin-server from packaged zeppelin-distribution/targetmvn verify -P using-packaged-distr", "url": " /install/build.html", "group": "install", "excerpt": "How to build Zeppelin from source" @@ -127,7 +127,7 @@ "/install/upgrade.html": { "title": "Manual Zeppelin version upgrade procedure", - "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Manual upgrade procedure for ZeppelinBasically, a newer version of Zeppelin works with the previous version&#39;s notebook directory and configurations.So, copying notebook and conf directory should be enough.InstructionsStop Zeppelinbin/zeppelin-daemon.sh stopCopy your notebook and conf directory into a backup directoryDownload a newer version of Zeppelin and install it. See Install page.Copy backup notebook and conf directory into the newer version of Zeppelin notebook and conf directoryStart Zeppelinbin/zeppelin-daemon.sh startMigration GuideUpgrading from Zeppelin 0.6 to 0.7From 0.7, we don&#39;t use ZEPPELIN_JAVA_OPTS as default value of ZEPPELIN_INTP_JAVA_OPTS and also the same for ZEPPELIN_MEM/ZEPPELIN_INTP_MEM. If you want to configure the JVM opts of the interpreter process, please set ZEPPELIN_INTP_JAVA_OPTS and ZEPPELIN_INTP_MEM explicitly. 
If you don&#39;t set ZEPPELIN_INTP_MEM, Zeppelin will set it to -Xms1024m -Xmx1024m -XX:MaxPermSize=512m by default.Mapping from %jdbc(prefix) to %prefix is no longer available. Instead, you can use %[interpreter alias] with multiple interpreter settings on GUI.Usage of ZEPPELIN_PORT is not supported in ssl mode. Instead use ZEPPELIN_SSL_PORT to configure the ssl port. Value from ZEPPELIN_PORT is used only when ZEPPELIN_SSL is set to false.Support for Spark 1.1.x to 1.3.x is deprecated.From 0.7, we use pegdown as the markdown.parser.type option for the %md interpreter. Rendered markdown might be different from what you expected", + "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Manual upgrade procedure for ZeppelinBasically, a newer version of Zeppelin works with the previous version&#39;s notebook directory and configurations.So, copying notebook and conf directory should be enough.InstructionsStop Zeppelinbin/zeppelin-daemon.sh stopCopy your notebook and conf directory into a backup directoryDownload a newer version of Zeppelin and install it. See Install page.Copy backup notebook and conf directory into the newer version of Zeppelin notebook and conf directoryStart Zeppelinbin/zeppelin-daemon.sh startMigration GuideUpgrading from Zeppelin 0.6 to 0.7From 0.7, we don&#39;t use ZEPPELIN_JAVA_OPTS as default value of ZEPPELIN_INTP_JAVA_OPTS and also the same for ZEPPELIN_MEM/ZEPPELIN_INTP_MEM. 
If you want to configure the JVM opts of the interpreter process, please set ZEPPELIN_INTP_JAVA_OPTS and ZEPPELIN_INTP_MEM explicitly. If you don&#39;t set ZEPPELIN_INTP_MEM, Zeppelin will set it to -Xms1024m -Xmx1024m -XX:MaxPermSize=512m by default.Mapping from %jdbc(prefix) to %prefix is no longer available. Instead, you can use %[interpreter alias] with multiple interpreter settings on GUI.Usage of ZEPPELIN_PORT is not supported in ssl mode. Instead use ZEPPELIN_SSL_PORT to configure the ssl port. Value from ZEPPELIN_PORT is used only when ZEPPELIN_SSL is set to false.Support for Spark 1.1.x to 1.3.x is deprecated.From 0.7, we use pegdown as the markdown.parser.type option for the %md interpreter. Rendered markdown might be different from what you expectedFrom 0.7 the note.json format has been changed to support multiple outputs in a paragraph. Zeppelin will automatically convert the old format to the new format. 0.6 or lower versions can read the new note.json format but output will not be displayed. For details, see ZEPPELIN-212 and pullrequest.", "url": " /install/upgrade.html", "group": "install", "excerpt": "This document will guide you through the procedure of manually upgrading your Apache Zeppelin instance to a newer version. Apache Zeppelin keeps backward compatibility for the notebook file format." 
@@ -424,7 +424,7 @@ "/interpreter/spark.html": { "title": "Apache Spark Interpreter for Apache Zeppelin", - "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Spark Interpreter for Apache ZeppelinOverviewApache Spark is a fast and general-purpose cluster computing system.It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs.Apache Spark is supported in Zeppelin with Spark interpreter group which consists of the five interpreters below. Name Class Description %spark SparkInterpreter Creates a SparkContext and provides a Scala environment %spark.pyspark PySparkInterpreter Provides a Python environment %spark.r SparkRInterpreter Provides an R environment with SparkR support %spark.sql SparkSQLInterpreter Provides a SQL environment %spark.dep DepInterpreter Dependency loader ConfigurationThe Spark interpreter can be configured with properties provided by Zeppelin.You can also set other Spark properties which are not listed in the table. For a list of additional properties, refer to Spark Available Properties. Property Default Description args Spark commandline args master local[*] Spark master uri. ex) spark://masterhost:7077 spark.app.name Zeppelin The name of spark application. spark.cores.max Total number of cores to use. Empty value uses all available cores. spark.executor.memory 1g Executor memory per worker instance. 
ex) 512m, 32g zeppelin.dep.additionalRemoteRepository spark-packages, http://dl.bintray.com/spark-packages/maven, false; A list of id,remote-repository-URL,is-snapshot; for each remote repository. zeppelin.dep.localrepo local-repo Local repository for dependency loader zeppelin.pyspark.python python Python command to run pyspark with zeppelin.spark.concurrentSQL false Execute multiple SQL concurrently if set true. zeppelin.spark.maxResult 1000 Max number of Spark SQL result to display. zeppelin.spark.printREPLOutput true Print REPL output zeppelin.spark.useHiveContext true Use HiveContext instead of SQLContext if it is true. zeppelin.spark.importImplicit true Import implicits, UDF collection, and sql if set true. Without any configuration, the Spark interpreter works out of the box in local mode. But if you want to connect to your Spark cluster, you&#39;ll need to follow the two simple steps below.1. Export SPARK_HOMEIn conf/zeppelin-env.sh, export the SPARK_HOME environment variable with your Spark installation path.For example,export SPARK_HOME=/usr/lib/sparkYou can optionally export HADOOP_CONF_DIR and SPARK_SUBMIT_OPTIONSexport HADOOP_CONF_DIR=/usr/lib/hadoopexport SPARK_SUBMIT_OPTIONS=&quot;--packages com.databricks:spark-csv_2.10:1.2.0&quot;For Windows, ensure you have winutils.exe in %HADOOP_HOME%bin. Please see Problems running Hadoop on Windows for the details.2. Set master in Interpreter menuAfter starting Zeppelin, go to the Interpreter menu and edit the master property in your Spark interpreter setting. The value may vary depending on your Spark cluster deployment type.For example,local[*] in local modespark://master:7077 in standalone clusteryarn-client in Yarn client modemesos://host:5050 in Mesos clusterThat&#39;s it. Zeppelin will work with any version of Spark and any deployment type without rebuilding Zeppelin in this way.
For further information about Spark &amp; Zeppelin version compatibility, please refer to the &quot;Available Interpreters&quot; section in the Zeppelin download page.Note that without exporting SPARK_HOME, it&#39;s running in local mode with the included version of Spark. The included version may vary depending on the build profile.SparkContext, SQLContext, SparkSession, ZeppelinContextSparkContext, SQLContext and ZeppelinContext are automatically created and exposed as variable names sc, sqlContext and z, respectively, in Scala, Python and R environments.Starting from 0.6.1, SparkSession is available as variable spark when you are using Spark 2.x.Note that the Scala/Python/R environments share the same SparkContext, SQLContext and ZeppelinContext instance. Dependency ManagementThere are two ways to load external libraries in the Spark interpreter. The first is using the interpreter setting menu and the second is loading Spark properties.1. Setting Dependencies via Interpreter SettingPlease see Dependency Management for the details.2. Loading Spark PropertiesOnce SPARK_HOME is set in conf/zeppelin-env.sh, Zeppelin uses spark-submit as the spark interpreter runner. spark-submit supports two ways to load configurations. The first is command line options such as --master, and Zeppelin can pass these options to spark-submit by exporting SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh. The second is reading configuration options from SPARK_HOME/conf/spark-defaults.conf. Spark properties that the user can set to distribute libraries are: spark-defaults.conf SPARK_SUBMIT_OPTIONS Description spark.jars --jars Comma-separated list of local jars to include on the driver and executor classpaths. spark.jars.packages --packages Comma-separated list of maven coordinates of jars to include on the driver and executor classpaths. Will search the local maven repo, then maven central and any additional remote repositories given by --repositories. The format for the coordinates should be groupId:artifactId:version.
spark.files --files Comma-separated list of files to be placed in the working directory of each executor. Here are a few examples:SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.shexport SPARK_SUBMIT_OPTIONS=&quot;--packages com.databricks:spark-csv_2.10:1.2.0 --jars /path/mylib1.jar,/path/mylib2.jar --files /path/mylib1.py,/path/mylib2.zip,/path/mylib3.egg&quot;SPARK_HOME/conf/spark-defaults.confspark.jars /path/mylib1.jar,/path/mylib2.jarspark.jars.packages com.databricks:spark-csv_2.10:1.2.0spark.files /path/mylib1.py,/path/mylib2.egg,/path/mylib3.zip3. Dynamic Dependency Loading via %spark.dep interpreterNote: the %spark.dep interpreter loads libraries to %spark and %spark.pyspark but not to the %spark.sql interpreter. So we recommend you to use the first option instead.When your code requires an external library, instead of doing download/copy/restart Zeppelin, you can easily do the following jobs using the %spark.dep interpreter.Load libraries recursively from maven repositoryLoad libraries from local filesystemAdd additional maven repositoryAutomatically add libraries to SparkCluster (You can turn off)Dep interpreter leverages Scala environment.
So you can write any Scala code here.Note that the %spark.dep interpreter should be used before %spark, %spark.pyspark, %spark.sql.Here are some usage examples.%spark.depz.reset() // clean up previously added artifact and repository// add maven repositoryz.addRepo(&quot;RepoName&quot;).url(&quot;RepoURL&quot;)// add maven snapshot repositoryz.addRepo(&quot;RepoName&quot;).url(&quot;RepoURL&quot;).snapshot()// add credentials for private maven repositoryz.addRepo(&quot;RepoName&quot;).url(&quot;RepoURL&quot;).username(&quot;username&quot;).password(&quot;password&quot;)// add artifact from filesystemz.load(&quot;/path/to.jar&quot;)// add artifact from maven repository, with no dependencyz.load(&quot;groupId:artifactId:version&quot;).excludeAll()// add artifact recursivelyz.load(&quot;groupId:artifactId:version&quot;)// add artifact recursively except comma separated GroupID:ArtifactId listz.load(&quot;groupId:artifactId:version&quot;).exclude(&quot;groupId:artifactId,groupId:artifactId, ...&quot;)// exclude with patternz.load(&quot;groupId:artifactId:version&quot;).exclude(*)z.load(&quot;groupId:artifactId:version&quot;).exclude(&quot;groupId:artifactId:*&quot;)z.load(&quot;groupId:artifactId:version&quot;).exclude(&quot;groupId:*&quot;)// local() skips adding artifact to spark clusters (skipping sc.addJar())z.load(&quot;groupId:artifactId:version&quot;).local()ZeppelinContextZeppelin automatically injects ZeppelinContext as variable z in your Scala/Python environment. ZeppelinContext provides some additional functions and utilities.Object ExchangeZeppelinContext extends map and it&#39;s shared between Scala and Python environments.So you can put some objects from Scala and read them from Python, and vice versa.
// Put object from scala%sparkval myObject = ...z.put(&quot;objName&quot;, myObject) # Get object from python%spark.pysparkmyObject = z.get(&quot;objName&quot;) Form CreationZeppelinContext provides functions for creating forms.In Scala and Python environments, you can create forms programmatically. %spark/* Create text input form */z.input(&quot;formName&quot;)/* Create text input form with default value */z.input(&quot;formName&quot;, &quot;defaultValue&quot;)/* Create select form */z.select(&quot;formName&quot;, Seq((&quot;option1&quot;, &quot;option1DisplayName&quot;), (&quot;option2&quot;, &quot;option2DisplayName&quot;)))/* Create select form with default value*/z.select(&quot;formName&quot;, &quot;option1&quot;, Seq((&quot;option1&quot;, &quot;option1DisplayName&quot;), (&quot;option2&quot;, &quot;option2DisplayName&quot;))) %spark.pyspark# Create text input formz.input(&quot;formName&quot;)# Create text input form with default valuez.input(&quot;formName&quot;, &quot;defaultValue&quot;)# Create select formz.select(&quot;formName&quot;, [(&quot;option1&quot;, &quot;option1DisplayName&quot;), (&quot;option2&quot;, &quot;option2DisplayName&quot;)])# Create select form with default valuez.select(&quot;formName&quot;, [(&quot;option1&quot;, &quot;option1DisplayName&quot;), (&quot;option2&quot;, &quot;option2DisplayName&quot;)], &quot;option1&quot;) In the sql environment, you can create a form using a simple template.%spark.sqlselect * from ${table=defaultTableName} where text like &#39;%${search}%&#39;To learn more about dynamic forms, check out Dynamic Form.Matplotlib Integration (pyspark)Both the python and pyspark interpreters have built-in support for inline visualization using matplotlib, a popular plotting library for python. More details can be found in the python interpreter documentation, since matplotlib support is identical.
More advanced interactive plotting can be done with pyspark through utilizing Zeppelin&#39;s built-in Angular Display System, as shown below:Interpreter setting optionYou can choose one of the shared, scoped and isolated options when you configure the Spark interpreter. The Spark interpreter creates a separate Scala compiler per notebook but shares a single SparkContext in scoped mode (experimental). It creates a separate SparkContext per notebook in isolated mode.Setting up Zeppelin with KerberosLogical setup with Zeppelin, Kerberos Key Distribution Center (KDC), and Spark on YARN:Configuration SetupOn the server that Zeppelin is installed on, install the Kerberos client modules and configuration, krb5.conf.This is to make the server communicate with the KDC.Set SPARK_HOME in [ZEPPELIN_HOME]/conf/zeppelin-env.sh to use spark-submit(Additionally, you might have to set export HADOOP_CONF_DIR=/etc/hadoop/conf)Add the two properties below to the Spark configuration ([SPARK_HOME]/conf/spark-defaults.conf):spark.yarn.principalspark.yarn.keytabNOTE: If you do not have permission to access the above spark-defaults.conf file, optionally, you can add the above lines to the Spark Interpreter setting through the Interpreter tab in the Zeppelin UI.That&#39;s it.
Play with Zeppelin!", + "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Spark Interpreter for Apache ZeppelinOverviewApache Spark is a fast and general-purpose cluster computing system.It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs.Apache Spark is supported in Zeppelin with the Spark interpreter group, which consists of the five interpreters below. Name Class Description %spark SparkInterpreter Creates a SparkContext and provides a Scala environment %spark.pyspark PySparkInterpreter Provides a Python environment %spark.r SparkRInterpreter Provides an R environment with SparkR support %spark.sql SparkSQLInterpreter Provides a SQL environment %spark.dep DepInterpreter Dependency loader ConfigurationThe Spark interpreter can be configured with properties provided by Zeppelin.You can also set other Spark properties which are not listed in the table. For a list of additional properties, refer to Spark Available Properties. Property Default Description args Spark commandline args master local[*] Spark master uri. ex) spark://masterhost:7077 spark.app.name Zeppelin The name of spark application. spark.cores.max Total number of cores to use. Empty value uses all available cores. spark.executor.memory 1g Executor memory per worker instance. ex) 512m, 32g zeppelin.dep.additionalRemoteRepository spark-packages, http://dl.bintray.com/spark-packages/maven, false; A list of id,remote-repository-URL,is-snapshot; for each remote repository.
zeppelin.dep.localrepo local-repo Local repository for dependency loader zeppelin.pyspark.python python Python command to run pyspark with zeppelin.spark.concurrentSQL false Execute multiple SQL concurrently if set true. zeppelin.spark.maxResult 1000 Max number of Spark SQL result to display. zeppelin.spark.printREPLOutput true Print REPL output zeppelin.spark.useHiveContext true Use HiveContext instead of SQLContext if it is true. zeppelin.spark.importImplicit true Import implicits, UDF collection, and sql if set true. Without any configuration, the Spark interpreter works out of the box in local mode. But if you want to connect to your Spark cluster, you&#39;ll need to follow the two simple steps below.1. Export SPARK_HOMEIn conf/zeppelin-env.sh, export the SPARK_HOME environment variable with your Spark installation path.For example,export SPARK_HOME=/usr/lib/sparkYou can optionally set more environment variables# set hadoop conf direxport HADOOP_CONF_DIR=/usr/lib/hadoop# set options to pass spark-submit commandexport SPARK_SUBMIT_OPTIONS=&quot;--packages com.databricks:spark-csv_2.10:1.2.0&quot;# extra classpath. e.g. set classpath for hive-site.xmlexport ZEPPELIN_INTP_CLASSPATH_OVERRIDES=/etc/hive/confFor Windows, ensure you have winutils.exe in %HADOOP_HOME%bin. Please see Problems running Hadoop on Windows for the details.2. Set master in Interpreter menuAfter starting Zeppelin, go to the Interpreter menu and edit the master property in your Spark interpreter setting. The value may vary depending on your Spark cluster deployment type.For example,local[*] in local modespark://master:7077 in standalone clusteryarn-client in Yarn client modemesos://host:5050 in Mesos clusterThat&#39;s it. Zeppelin will work with any version of Spark and any deployment type without rebuilding Zeppelin in this way.
For further information about Spark &amp; Zeppelin version compatibility, please refer to the &quot;Available Interpreters&quot; section in the Zeppelin download page.Note that without exporting SPARK_HOME, it&#39;s running in local mode with the included version of Spark. The included version may vary depending on the build profile.SparkContext, SQLContext, SparkSession, ZeppelinContextSparkContext, SQLContext and ZeppelinContext are automatically created and exposed as variable names sc, sqlContext and z, respectively, in Scala, Python and R environments.Starting from 0.6.1, SparkSession is available as variable spark when you are using Spark 2.x.Note that the Scala/Python/R environments share the same SparkContext, SQLContext and ZeppelinContext instance. Dependency ManagementThere are two ways to load external libraries in the Spark interpreter. The first is using the interpreter setting menu and the second is loading Spark properties.1. Setting Dependencies via Interpreter SettingPlease see Dependency Management for the details.2. Loading Spark PropertiesOnce SPARK_HOME is set in conf/zeppelin-env.sh, Zeppelin uses spark-submit as the spark interpreter runner. spark-submit supports two ways to load configurations. The first is command line options such as --master, and Zeppelin can pass these options to spark-submit by exporting SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh. The second is reading configuration options from SPARK_HOME/conf/spark-defaults.conf. Spark properties that the user can set to distribute libraries are: spark-defaults.conf SPARK_SUBMIT_OPTIONS Description spark.jars --jars Comma-separated list of local jars to include on the driver and executor classpaths. spark.jars.packages --packages Comma-separated list of maven coordinates of jars to include on the driver and executor classpaths. Will search the local maven repo, then maven central and any additional remote repositories given by --repositories. The format for the coordinates should be groupId:artifactId:version.
spark.files --files Comma-separated list of files to be placed in the working directory of each executor. Here are a few examples:SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.shexport SPARK_SUBMIT_OPTIONS=&quot;--packages com.databricks:spark-csv_2.10:1.2.0 --jars /path/mylib1.jar,/path/mylib2.jar --files /path/mylib1.py,/path/mylib2.zip,/path/mylib3.egg&quot;SPARK_HOME/conf/spark-defaults.confspark.jars /path/mylib1.jar,/path/mylib2.jarspark.jars.packages com.databricks:spark-csv_2.10:1.2.0spark.files /path/mylib1.py,/path/mylib2.egg,/path/mylib3.zip3. Dynamic Dependency Loading via %spark.dep interpreterNote: the %spark.dep interpreter loads libraries to %spark and %spark.pyspark but not to the %spark.sql interpreter. So we recommend you to use the first option instead.When your code requires an external library, instead of doing download/copy/restart Zeppelin, you can easily do the following jobs using the %spark.dep interpreter.Load libraries recursively from maven repositoryLoad libraries from local filesystemAdd additional maven repositoryAutomatically add libraries to SparkCluster (You can turn off)Dep interpreter leverages Scala environment.
So you can write any Scala code here.Note that the %spark.dep interpreter should be used before %spark, %spark.pyspark, %spark.sql.Here are some usage examples.%spark.depz.reset() // clean up previously added artifact and repository// add maven repositoryz.addRepo(&quot;RepoName&quot;).url(&quot;RepoURL&quot;)// add maven snapshot repositoryz.addRepo(&quot;RepoName&quot;).url(&quot;RepoURL&quot;).snapshot()// add credentials for private maven repositoryz.addRepo(&quot;RepoName&quot;).url(&quot;RepoURL&quot;).username(&quot;username&quot;).password(&quot;password&quot;)// add artifact from filesystemz.load(&quot;/path/to.jar&quot;)// add artifact from maven repository, with no dependencyz.load(&quot;groupId:artifactId:version&quot;).excludeAll()// add artifact recursivelyz.load(&quot;groupId:artifactId:version&quot;)// add artifact recursively except comma separated GroupID:ArtifactId listz.load(&quot;groupId:artifactId:version&quot;).exclude(&quot;groupId:artifactId,groupId:artifactId, ...&quot;)// exclude with patternz.load(&quot;groupId:artifactId:version&quot;).exclude(*)z.load(&quot;groupId:artifactId:version&quot;).exclude(&quot;groupId:artifactId:*&quot;)z.load(&quot;groupId:artifactId:version&quot;).exclude(&quot;groupId:*&quot;)// local() skips adding artifact to spark clusters (skipping sc.addJar())z.load(&quot;groupId:artifactId:version&quot;).local()ZeppelinContextZeppelin automatically injects ZeppelinContext as variable z in your Scala/Python environment. ZeppelinContext provides some additional functions and utilities.Object ExchangeZeppelinContext extends map and it&#39;s shared between Scala and Python environments.So you can put some objects from Scala and read them from Python, and vice versa.
// Put object from scala%sparkval myObject = ...z.put(&quot;objName&quot;, myObject)// Exchanging data framesmyScalaDataFrame = ...z.put(&quot;myScalaDataFrame&quot;, myScalaDataFrame)val myPythonDataFrame = z.get(&quot;myPythonDataFrame&quot;).asInstanceOf[DataFrame] # Get object from python%spark.pysparkmyObject = z.get(&quot;objName&quot;)# Exchanging data framesmyPythonDataFrame = ...z.put(&quot;myPythonDataFrame&quot;, postsDf._jdf)myScalaDataFrame = DataFrame(z.get(&quot;myScalaDataFrame&quot;), sqlContext) Form CreationZeppelinContext provides functions for creating forms.In Scala and Python environments, you can create forms programmatically. %spark/* Create text input form */z.input(&quot;formName&quot;)/* Create text input form with default value */z.input(&quot;formName&quot;, &quot;defaultValue&quot;)/* Create select form */z.select(&quot;formName&quot;, Seq((&quot;option1&quot;, &quot;option1DisplayName&quot;), (&quot;option2&quot;, &quot;option2DisplayName&quot;)))/* Create select form with default value*/z.select(&quot;formName&quot;, &quot;option1&quot;, Seq((&quot;option1&quot;, &quot;option1DisplayName&quot;), (&quot;option2&quot;, &quot;option2DisplayName&quot;))) %spark.pyspark# Create text input formz.input(&quot;formName&quot;)# Create text input form with default valuez.input(&quot;formName&quot;, &quot;defaultValue&quot;)# Create select formz.select(&quot;formName&quot;, [(&quot;option1&quot;, &quot;option1DisplayName&quot;), (&quot;option2&quot;, &quot;option2DisplayName&quot;)])# Create select form with default valuez.select(&quot;formName&quot;, [(&quot;option1&quot;, &quot;option1DisplayName&quot;), (&quot;option2&quot;, &quot;option2DisplayName&quot;)], &quot;option1&quot;) In the sql environment, you can create a form using a simple template.%spark.sqlselect * from ${table=defaultTableName} where text like &#39;%${search}%&#39;To learn more about dynamic forms, check out Dynamic Form.Matplotlib Integration (pyspark)Both the python and pyspark
interpreters have built-in support for inline visualization using matplotlib, a popular plotting library for python. More details can be found in the python interpreter documentation, since matplotlib support is identical. More advanced interactive plotting can be done with pyspark through utilizing Zeppelin&#39;s built-in Angular Display System, as shown below:Interpreter setting optionYou can choose one of the shared, scoped and isolated options when you configure the Spark interpreter. The Spark interpreter creates a separate Scala compiler per notebook but shares a single SparkContext in scoped mode (experimental). It creates a separate SparkContext per notebook in isolated mode.Setting up Zeppelin with KerberosLogical setup with Zeppelin, Kerberos Key Distribution Center (KDC), and Spark on YARN:Configuration SetupOn the server that Zeppelin is installed on, install the Kerberos client modules and configuration, krb5.conf.This is to make the server communicate with the KDC.Set SPARK_HOME in [ZEPPELIN_HOME]/conf/zeppelin-env.sh to use spark-submit(Additionally, you might have to set export HADOOP_CONF_DIR=/etc/hadoop/conf)Add the two properties below to the Spark configuration ([SPARK_HOME]/conf/spark-defaults.conf):spark.yarn.principalspark.yarn.keytabNOTE: If you do not have permission to access the above spark-defaults.conf file, optionally, you can add the above lines to the Spark Interpreter setting through the Interpreter tab in the Zeppelin UI.That&#39;s it. Play with Zeppelin!", "url": " /interpreter/spark.html", "group": "interpreter", "excerpt": "Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs."
@@ -512,7 +512,7 @@ "/manual/userimpersonation.html": { "title": "Run zeppelin interpreter process as web front end user", - "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Run zeppelin interpreter process as web front end userEnable shiro auth in shiro.ini[users]user1 = password1, role1user2 = password2, role2Enable password-less ssh for the user you want to impersonate (say user1).adduser user1#ssh-keygen (optional if you don&#39;t already have a generated ssh-key.)ssh user1@localhost mkdir -p .sshcat ~/.ssh/id_rsa.pub | ssh user1@localhost &#39;cat &gt;&gt; .ssh/authorized_keys&#39;Start zeppelin server.
Screenshot Go to the interpreter setting page, and enable &quot;User Impersonate&quot; in any of the interpreters (in my example it&#39;s the shell interpreter)Test with a simple paragraph%shwhoami", + "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Run zeppelin interpreter process as web front end userEnable shiro auth in shiro.ini[users]user1 = password1, role1user2 = password2, role2Enable password-less ssh for the user you want to impersonate (say user1).adduser user1#ssh-keygen (optional if you don&#39;t already have a generated ssh-key.)ssh user1@localhost mkdir -p .sshcat ~/.ssh/id_rsa.pub | ssh user1@localhost &#39;cat &gt;&gt; .ssh/authorized_keys&#39;Alternatively, instead of password-less ssh, you can override ZEPPELIN_IMPERSONATE_CMD in zeppelin-env.shexport ZEPPELIN_IMPERSONATE_CMD=&#39;sudo -H -u ${ZEPPELIN_IMPERSONATE_USER} bash -c &#39;Start zeppelin server. Screenshot Go to the interpreter setting page, and enable &quot;User Impersonate&quot; in any of the interpreters (in my example it&#39;s the shell interpreter)Test with a simple paragraph%shwhoami", "url": " /manual/userimpersonation.html", "group": "manual", "excerpt": "Set up zeppelin interpreter process as web front end user."
@@ -590,7 +590,7 @@ "/rest-api/rest-notebook.html": { "title": "Apache Zeppelin Notebook REST API", - "content" : "<!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.-->Apache Zeppelin Notebook REST APIOverviewApache Zeppelin provides several REST APIs for interaction and remote activation of zeppelin functionality.All REST APIs are available starting with the following endpoint http://[zeppelin-server]:[zeppelin-port]/api. Note that Apache Zeppelin REST APIs receive or return JSON objects; it is recommended for you to install a JSON viewer such as JSONView.If you work with Apache Zeppelin and find a need for an additional REST API, please file an issue or send us an email.Notebook REST API ListThe Notebook REST API supports the following operations: List, Create, Get, Delete, Clone, Run, Export, and Import, as detailed in the following tables.List of the notes Description This GET method lists the available notes on your server. Notebook JSON contains the name and id of all notes. URL http://[zeppelin-server]:[zeppelin-port]/api/notebook Success code 200 Fail code 500 sample JSON response { &quot;status&quot;: &quot;OK&quot;, &quot;message&quot;: &quot;&quot;, &quot;body&quot;: [ { &quot;name&quot;:&quot;Homepage&quot;, &quot;id&quot;:&quot;2AV4WUEMK&quot; }, { &quot;name&quot;:&quot;Zeppelin Tutorial&quot;, &quot;id&quot;:&quot;2A94M5J1Z&quot; } ]} Create a new note Description This POST method creates a new note using the given name or default name if none given.
The body field of the returned JSON contains the new note id. URL http://[zeppelin-server]:[zeppelin-port]/api/notebook Success code 201 Fail code 500 sample JSON input (without paragraphs) {&quot;name&quot;: &quot;name of new note&quot;} sample JSON input (with initial paragraphs) { &quot;name&quot;: &quot;name of new note&quot;, &quot;paragraphs&quot;: [ { &quot;title&quot;: &quot;paragraph title1&quot;, &quot;text&quot;: &quot;paragraph text1&quot; }, { &quot;title&quot;: &quot;paragraph title2&quot;, &quot;text&quot;: &quot;paragraph text2&quot; } ]} sample JSON response { &quot;status&quot;: &quot;CREATED&quot;, &quot;message&quot;: &quot;&quot;, &quot;body&quot;: &quot;2AZPHY918&quot;} Get an existing note information Description This GET method retrieves an existing note&#39;s information using the given id. The body field of the returned JSON contains information about the paragraphs in the note. URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId] Success code 200 Fail code 500 sample JSON response { &quot;status&quot;: &quot;OK&quot;, &quot;message&quot;: &quot;&quot;, &quot;body&quot;: { &quot;paragraphs&quot;: [ { &quot;text&quot;: &quot;%sql nselect age, count(1) valuenfrom bank nwhere age &lt; 30 ngroup by age norder by age&quot;, &quot;config&quot;: { &quot;colWidth&quot;: 4, &quot;graph&quot;: { &quot;mode&quot;: &quot;multiBarChart&quot;, &quot;height&quot;: 300, &quot;optionOpen&quot;: false, &quot;keys&quot;: [ { &quot;name&quot;: &quot;age&quot;, &quot;index&quot;: 0, &quot;aggr&quot;: &quot;sum&quot; } ], &quot;values&quot;: [ { &quot;name&quot;: &quot;value&quot;, &quot;index&quot;: 1, &quot;aggr&quot;: &quot;sum&quot; } ], &quot;groups&quot;: [], &quot;scatter&quot;: { &quot;xAxis&quot;: { &quot;name&quot;: &quot;age&quot;, &quot;index&quot;: 0, &quot;aggr&quot;: &quot;sum&quot; }, &quot;yAxis&quot;: { &quot;name&quot;: &quot;value&quot;, &quot;index&quot;: 1, &quot;aggr&quot;: &quot;sum&quot; } } } }, &quot;settings&quot;: {
&quot;params&quot;: {}, &quot;forms&quot;: {} }, &quot;jobName&quot;: &quot;paragraph_1423500782552_-1439281894&quot;, &quot;id&quot;: &quot;20150210-015302_1492795503&quot;, &quot;result&quot;: { &quot;code&quot;: &quot;SUCCESS&quot;, &quot;type&quot;: &quot;TABLE&quot;, &quot;msg&quot;: &quot;agetvaluen19t4n20t3n21t7n22t9n23t20n24t24n25t44n26t77n27t94n28t103n29t97n&quot; }, &quot;dateCreated&quot;: &quot;Feb 10, 2015 1:53:02 AM&quot;, &quot;dateStarted&quot;: &quot;Jul 3, 2015 1:43:17 PM&quot;, &quot;dateFinished&quot;: &quot;Jul 3, 2015 1:43:23 PM&quot;, &quot;status&quot;: &quot;FINISHED&quot;, &quot;progressUpdateIntervalMs&quot;: 500 } ], &quot;name&quot;: &quot;Zeppelin Tutorial&quot;, &quot;id&quot;: &quot;2A94M5J1Z&quot;, &quot;angularObjects&quot;: {}, &quot;config&quot;: { &quot;looknfeel&quot;: &quot;default&quot; }, &quot;info&quot;: {} }} Delete a note Description This DELETE method deletes a note by the given note id. URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId] Success code 200 Fail code 500 sample JSON response {&quot;status&quot;: &quot;OK&quot;,&quot;message&quot;: &quot;&quot;} Clone a note Description This POST method clones a note by the given id and creates a new note using the given name or default name if none given. The body field of the returned JSON contains the new note id. URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId] Success code 201 Fail code 500 sample JSON input {&quot;name&quot;: &quot;name of new note&quot;} sample JSON response { &quot;status&quot;: &quot;CREATED&quot;, &quot;message&quot;: &quot;&quot;, &quot;body&quot;: &quot;2AZPHY918&quot;} Run all paragraphs Description This POST method runs all paragraphs in the given note id. If the note id can not be found, a 404 is returned. If there is a problem with the interpreter, a 412 error is returned.
URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/job/[noteId] Success code 200 Fail code 404 or 412 sample JSON response {&quot;status&quot;: &quot;OK&quot;} sample JSON error response { &quot;status&quot;: &quot;NOTFOUND&quot;, &quot;message&quot;: &quot;note not found.&quot; } { &quot;status&quot;: &quot;PRECONDITIONFAILED&quot;, &quot;message&quot;: &quot;paragraph1469771130099-278315611 Not selected or Invalid Interpreter bind&quot; } Stop all paragraphs Description This DELETE method stops all paragraphs in the given note id. URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/job/[noteId] Success code 200 Fail code 500 sample JSON response {&quot;status&quot;:&quot;OK&quot;} Get the status of all paragraphs Description This GET method gets the status of all paragraphs by the given note id. The body field of the returned JSON contains an array composed of the paragraph id, paragraph status, paragraph finish date, and paragraph started date. URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/job/[noteId] Success code 200 Fail code 500 sample JSON response { &quot;status&quot;: &quot;OK&quot;, &quot;body&quot;: [ { &quot;id&quot;:&quot;20151121-212654_766735423&quot;, &quot;status&quot;:&quot;FINISHED&quot;, &quot;finished&quot;:&quot;Tue Nov 24 14:21:40 KST 2015&quot;, &quot;started&quot;:&quot;Tue Nov 24 14:21:39 KST 2015&quot; }, { &quot;progress&quot;:&quot;1&quot;, &quot;id&quot;:&quot;20151121-212657_730976687&quot;, &quot;status&quot;:&quot;RUNNING&quot;, &quot;finished&quot;:&quot;Tue Nov 24 14:21:35 KST 2015&quot;, &quot;started&quot;:&quot;Tue Nov 24 14:21:40 KST 2015&quot; } ]} Get the status of a single paragraph Description This GET method gets the status of a single paragraph by the given note and paragraph id. The body field of the returned JSON contains an array composed of the paragraph id, paragraph status, paragraph finish date, and paragraph started date.
URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/job/[noteId]/[paragraphId] Success code 200 Fail code 500 sample JSON response { &quot;status&quot;: &quot;OK&quot;, &quot;body&quot;: { &quot;id&quot;:&quot;20151121-212654_766735423&quot;, &quot;status&quot;:&quot;FINISHED&quot;, &quot;finished&quot;:&quot;Tue Nov 24 14:21:40 KST 2015&quot;, &quot;started&quot;:&quot;Tue Nov 24 14:21:39 KST 2015&quot; }} Run a paragraph asynchronously Description This POST method runs the paragraph asynchronously by the given note and paragraph id. This API always returns SUCCESS even if the execution of the paragraph fails later, because the API is asynchronous. URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/job/[noteId]/[paragraphId] Success code 200 Fail code 500 sample JSON input (optional, only needed if you want to update a dynamic form&#39;s value) { &quot;name&quot;: &quot;name of new note&quot;, &quot;params&quot;: { &quot;formLabel1&quot;: &quot;value1&quot;, &quot;formLabel2&quot;: &quot;value2&quot; }} sample JSON response {&quot;status&quot;: &quot;OK&quot;} Run a paragraph synchronously Description This POST method runs the paragraph synchronously by the given note and paragraph id.
This API can return SUCCESS or ERROR depending on the outcome of the paragraph execution. URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/run/[noteId]/[paragraphId] Success code 200 Fail code 500 sample JSON input (optional, only needed if you want to update a dynamic form&#39;s value) { &quot;name&quot;: &quot;name of new note&quot;, &quot;params&quot;: { &quot;formLabel1&quot;: &quot;value1&quot;, &quot;formLabel2&quot;: &quot;value2&quot; }} sample JSON response {&quot;status&quot;: &quot;OK&quot;} sample JSON error { &quot;status&quot;: &quot;INTERNAL_SERVER_ERROR&quot;, &quot;body&quot;: { &quot;code&quot;: &quot;ERROR&quot;, &quot;type&quot;: &quot;TEXT&quot;, &quot;msg&quot;: &quot;bash: -c: line 0: unexpected EOF while looking for matching ``&#39;nbash: -c: line 1: syntax error: unexpected end of filenExitValue: 2&quot; }} Stop a paragraph Description This DELETE method stops the paragraph by the given note and paragraph id. URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/job/[noteId]/[paragraphId] Success code 200 Fail code 500 sample JSON response {&quot;status&quot;: &quot;OK&quot;} Add Cron Job Description This POST method adds a cron job for the given note id. URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/cron/[noteId] Success code 200 Fail code 500 sample JSON input {&quot;cron&quot;: &quot;cron expression of note&quot;} sample JSON response {&quot;status&quot;: &quot;OK&quot;} Remove Cron Job Description This DELETE method removes the cron job of the given note id. URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/cron/[noteId] Success code 200 Fail code 500 sample JSON response {&quot;status&quot;: &quot;OK&quot;} Get Cron Job Description This GET method gets the cron expression of the given note id. The body field of the returned JSON contains the cron expression.
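As documented above, the asynchronous and synchronous run endpoints differ only in the path segment (`job` vs `run`), and both accept an optional body carrying dynamic form values. A minimal sketch; the base URL is an assumption and the helper names are hypothetical.

```python
import json

# The async and sync paragraph-run endpoints described above differ only in
# the path segment ("job" vs "run"). Base URL is an assumed default.

BASE_URL = "http://localhost:8080"

def run_paragraph_url(note_id, paragraph_id, synchronous=False):
    """Async runs use /api/notebook/job/..., sync runs use /api/notebook/run/..."""
    segment = "run" if synchronous else "job"
    return "{}/api/notebook/{}/{}/{}".format(BASE_URL, segment, note_id, paragraph_id)

def dynamic_form_body(params):
    """Optional request body updating dynamic form values before the run."""
    return json.dumps({"params": params})

print(run_paragraph_url("2A94M5J1Z", "20150210-015302_1492795503", synchronous=True))
```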
URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/cron/[noteId] Success code 200 Fail code 500 sample JSON response {&quot;status&quot;: &quot;OK&quot;, &quot;body&quot;: &quot;* * * * * ?&quot;} Full text search through the paragraphs in all notes Description This GET request returns a list of matching paragraphs. URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/search?q=[query] Success code 200 Fail code 500 Sample JSON response { &quot;status&quot;: &quot;OK&quot;, &quot;body&quot;: [ { &quot;id&quot;: &quot;/paragraph/&quot;, &quot;name&quot;:&quot;Note Name&quot;, &quot;snippet&quot;:&quot;&quot;, &quot;text&quot;:&quot;&quot; } ]} Create a new paragraph Description This POST method creates a new paragraph using a JSON payload. The body field of the returned JSON contains the new paragraph id. URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId]/paragraph Success code 201 Fail code 500 sample JSON input (add to the last) { &quot;title&quot;: &quot;Paragraph insert revised&quot;, &quot;text&quot;: &quot;%sparknprintln(&quot;Paragraph insert revised&quot;)&quot;} sample JSON input (add to specific index) { &quot;title&quot;: &quot;Paragraph insert revised&quot;, &quot;text&quot;: &quot;%sparknprintln(&quot;Paragraph insert revised&quot;)&quot;, &quot;index&quot;: 0} sample JSON response { &quot;status&quot;: &quot;CREATED&quot;, &quot;message&quot;: &quot;&quot;, &quot;body&quot;: &quot;20151218-100330_1754029574&quot;} Get paragraph information Description This GET method retrieves an existing paragraph&#39;s information using the given id. The body field of the returned JSON contains information about the paragraph.
URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId]/paragraph/[paragraphId] Success code 200 Fail code 500 sample JSON response { &quot;status&quot;: &quot;OK&quot;, &quot;message&quot;: &quot;&quot;, &quot;body&quot;: { &quot;title&quot;: &quot;Paragraph2&quot;, &quot;text&quot;: &quot;%sparknnprintln(&quot;it&#39;s paragraph2&quot;)&quot;, &quot;dateUpdated&quot;: &quot;Dec 18, 2015 7:33:54 AM&quot;, &quot;config&quot;: { &quot;colWidth&quot;: 12, &quot;graph&quot;: { &quot;mode&quot;: &quot;table&quot;, &quot;height&quot;: 300, &quot;optionOpen&quot;: false, &quot;keys&quot;: [], &quot;values&quot;: [], &quot;groups&quot;: [], &quot;scatter&quot;: {} }, &quot;enabled&quot;: true, &quot;title&quot;: true, &quot;editorMode&quot;: &quot;ace/mode/scala&quot; }, &quot;settings&quot;: { &quot;params&quot;: {}, &quot;forms&quot;: {} }, &quot;jobName&quot;: &quot;paragraph_1450391574392_-1890856722&quot;, &quot;id&quot;: &quot;20151218-073254_1105602047&quot;, &quot;result&quot;: { &quot;code&quot;: &quot;SUCCESS&quot;, &quot;type&quot;: &quot;TEXT&quot;, &quot;msg&quot;: &quot;it&#39;s paragraph2n&quot; }, &quot;dateCreated&quot;: &quot;Dec 18, 2015 7:32:54 AM&quot;, &quot;dateStarted&quot;: &quot;Dec 18, 2015 7:33:55 AM&quot;, &quot;dateFinished&quot;: &quot;Dec 18, 2015 7:33:55 AM&quot;, &quot;status&quot;: &quot;FINISHED&quot;, &quot;progressUpdateIntervalMs&quot;: 500 }} Update paragraph configuration Description This PUT method updates the paragraph configuration using the given id, so that users can change paragraph settings such as graph type, showing or hiding the editor/result, paragraph size, etc. You can update only the fields you want; for example, you can update the colWidth field alone by sending a request with the payload {&quot;colWidth&quot;: 12.0}.
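The partial-update behaviour described above (only fields present in the payload are changed) can be sketched as a request builder. The note and paragraph ids reuse the sample values from this page; the base URL and helper name are assumptions.

```python
import json

# Sketch of the partial paragraph-config update described above: only the
# fields included in the payload are changed. Base URL is an assumed default.

def config_update_request(note_id, paragraph_id, **fields):
    """Return (url, body) for a PUT that updates only the given config fields."""
    url = "http://localhost:8080/api/notebook/{}/paragraph/{}/config".format(
        note_id, paragraph_id)
    return url, json.dumps(fields)

# Update only colWidth, leaving the rest of the configuration untouched.
url, body = config_update_request(
    "2A94M5J1Z", "20150210-015302_1492795503", colWidth=12.0)
print(url)
print(body)
```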
URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId]/paragraph/[paragraphId]/config Success code 200 Bad Request code 400 Forbidden code 403 Not Found code 404 Fail code 500 sample JSON input { &quot;colWidth&quot;: 6.0, &quot;graph&quot;: { &quot;mode&quot;: &quot;lineChart&quot;, &quot;height&quot;: 200.0, &quot;optionOpen&quot;: false, &quot;keys&quot;: [ { &quot;name&quot;: &quot;age&quot;, &quot;index&quot;: 0.0, &quot;aggr&quot;: &quot;sum&quot; } ], &quot;values&quot;: [ { &quot;name&quot;: &quot;value&quot;, &quot;index&quot;: 1.0, &quot;aggr&quot;: &quot;sum&quot; } ], &quot;groups&quot;: [], &quot;scatter&quot;: {} }, &quot;editorHide&quot;: true, &quot;editorMode&quot;: &quot;ace/mode/markdown&quot;, &quot;tableHide&quot;: false} sample JSON response { &quot;status&quot;:&quot;OK&quot;, &quot;message&quot;:&quot;&quot;, &quot;body&quot;:{ &quot;text&quot;:&quot;%sql nselect age, count(1) valuenfrom bank nwhere age u003c 30 ngroup by age norder by age&quot;, &quot;config&quot;:{ &quot;colWidth&quot;:6.0, &quot;graph&quot;:{ &quot;mode&quot;:&quot;lineChart&quot;, &quot;height&quot;:200.0, &quot;optionOpen&quot;:false, &quot;keys&quot;:[ { &quot;name&quot;:&quot;age&quot;, &quot;index&quot;:0.0, &quot;aggr&quot;:&quot;sum&quot; } ], &quot;values&quot;:[ { &quot;name&quot;:&quot;value&quot;, &quot;index&quot;:1.0, &quot;aggr&quot;:&quot;sum&quot; } ], &quot;groups&quot;:[], &quot;scatter&quot;:{} }, &quot;tableHide&quot;:false, &quot;editorMode&quot;:&quot;ace/mode/markdown&quot;, &quot;editorHide&quot;:true }, &quot;settings&quot;:{ &quot;params&quot;:{}, &quot;forms&quot;:{} }, &quot;apps&quot;:[], &quot;jobName&quot;:&quot;paragraph_1423500782552_-1439281894&quot;, &quot;id&quot;:&quot;20150210-015302_1492795503&quot;, &quot;result&quot;:{ &quot;code&quot;:&quot;SUCCESS&quot;, &quot;type&quot;:&quot;TABLE&quot;, &quot;msg&quot;:&quot;agetvaluen19t4n20t3n21t7n22t9n23t20n24t24n25t44n26t77n27t94n28t103n29t97n&quot; },
&quot;dateCreated&quot;:&quot;Feb 10, 2015 1:53:02 AM&quot;, &quot;dateStarted&quot;:&quot;Jul 3, 2015 1:43:17 PM&quot;, &quot;dateFinished&quot;:&quot;Jul 3, 2015 1:43:23 PM&quot;, &quot;status&quot;:&quot;FINISHED&quot;, &quot;progressUpdateIntervalMs&quot;:500 }} Move a paragraph to a specific index Description This POST method moves a paragraph to the specified index (order) within the note. URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId]/paragraph/[paragraphId]/move/[newIndex] Success code 200 Fail code 500 sample JSON response {&quot;status&quot;: &quot;OK&quot;,&quot;message&quot;: &quot;&quot;} Delete a paragraph Description This DELETE method deletes a paragraph by the given note and paragraph id. URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId]/paragraph/[paragraphId] Success code 200 Fail code 500 sample JSON response {&quot;status&quot;: &quot;OK&quot;,&quot;message&quot;: &quot;&quot;} Export a note Description This GET method exports a note by the given id and generates its JSON representation. URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/export/[noteId] Success code 201 Fail code 500 sample JSON response { &quot;paragraphs&quot;: [ { &quot;text&quot;: &quot;%md This is my new paragraph in my new note&quot;, &quot;dateUpdated&quot;: &quot;Jan 8, 2016 4:49:38 PM&quot;, &quot;config&quot;: { &quot;enabled&quot;: true }, &quot;settings&quot;: { &quot;params&quot;: {}, &quot;forms&quot;: {} }, &quot;jobName&quot;: &quot;paragraph_1452300578795_1196072540&quot;, &quot;id&quot;: &quot;20160108-164938_1685162144&quot;, &quot;dateCreated&quot;: &quot;Jan 8, 2016 4:49:38 PM&quot;, &quot;status&quot;: &quot;READY&quot;, &quot;progressUpdateIntervalMs&quot;: 500 } ], &quot;name&quot;: &quot;source note for export&quot;, &quot;id&quot;: &quot;2B82H3RR1&quot;, &quot;angularObjects&quot;: {}, &quot;config&quot;: {}, &quot;info&quot;: {}} Import a note Description This POST method imports a note from the note JSON input. URL
http://[zeppelin-server]:[zeppelin-port]/api/notebook/import Success code 201 Fail code 500 sample JSON input { &quot;paragraphs&quot;: [ { &quot;text&quot;: &quot;%md This is my new paragraph in my new note&quot;, &quot;dateUpdated&quot;: &quot;Jan 8, 2016 4:49:38 PM&quot;, &quot;config&quot;: { &quot;enabled&quot;: true }, &quot;settings&quot;: { &quot;params&quot;: {}, &quot;forms&quot;: {} }, &quot;jobName&quot;: &quot;paragraph_1452300578795_1196072540&quot;, &quot;id&quot;: &quot;20160108-164938_1685162144&quot;, &quot;dateCreated&quot;: &quot;Jan 8, 2016 4:49:38 PM&quot;, &quot;status&quot;: &quot;READY&quot;, &quot;progressUpdateIntervalMs&quot;: 500 } ], &quot;name&quot;: &quot;source note for export&quot;, &quot;id&quot;: &quot;2B82H3RR1&quot;, &quot;angularObjects&quot;: {}, &quot;config&quot;: {}, &quot;info&quot;: {}} sample JSON response { &quot;status&quot;: &quot;CREATED&quot;, &quot;message&quot;: &quot;&quot;, &quot;body&quot;: &quot;2AZPHY918&quot;} Clear all paragraph results Description This PUT method clears all paragraph results from the note of the given id. URL http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId]/clear Success code 200 Forbidden code 401 Not Found code 404 Fail code 500 sample JSON response {&quot;status&quot;: &quot;OK&quot;} ",
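A note payload for the import endpoint can be built from plain paragraph texts, shaped like the export sample above. Only the fields needed for the illustration are kept; the base URL and the builder name are assumptions, not part of the API.

```python
import json

# Minimal note payload for the import endpoint, shaped like the export
# sample (name plus a list of paragraphs). Base URL is an assumed default.

def import_note_request(name, paragraph_texts):
    """Return (url, body) for POSTing a note built from plain paragraph texts."""
    note = {
        "name": name,
        "paragraphs": [{"text": t} for t in paragraph_texts],
    }
    return "http://localhost:8080/api/notebook/import", json.dumps(note)

url, body = import_note_request(
    "source note for export",
    ["%md This is my new paragraph in my new note"])
print(url)
```

On success the endpoint returns 201 with the new note id in the body field, as in the sample response above.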
[... 14 lines stripped ...]