Modified: zeppelin/site/docs/0.7.0-SNAPSHOT/search_data.json
URL: 
http://svn.apache.org/viewvc/zeppelin/site/docs/0.7.0-SNAPSHOT/search_data.json?rev=1774116&r1=1774115&r2=1774116&view=diff
==============================================================================
--- zeppelin/site/docs/0.7.0-SNAPSHOT/search_data.json (original)
+++ zeppelin/site/docs/0.7.0-SNAPSHOT/search_data.json Wed Dec 14 00:05:19 2016
@@ -5,7 +5,7 @@
 
     "/development/howtocontribute.html": {
       "title": "Contributing to Apache Zeppelin (Code)",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Contributing to Apache 
Zeppelin ( Code )NOTE : Apache Zeppelin is Apache2-licensed software.Any 
contribution to Zeppelin (Source code, Documents, Image, Website) means you 
agree to license all your contributions under the Apache2 License.Setting upHere 
are some tools you will need to build and test Zeppelin.Software Configuration 
Management ( SCM )Since Zeppelin uses Git as its SCM system, you need a git 
client installed on your development machine.Integrated Development Environment 
( IDE )You are free 
to use whatever IDE you prefer, or your favorite command line editor.Build 
ToolsTo build the code, installOracle Java 7Apache MavenGetting the source 
codeFirst of all, you need Zeppelin source code. The official location of 
Zeppelin is http://git.apache.org/zeppelin.git.git accessGet the source code on 
your development machine using git.git clone git://git.apache.org/zeppelin.git 
zeppelinYou may also want to develop against a specific branch. For example, 
for branch-0.5.6git clone -b branch-0.5.6 git://git.apache.org/zeppelin.git 
zeppelinApache Zeppelin follows Fork & Pull as a source control 
workflow.If you want to not only build Zeppelin but also make any changes, then 
you need to fork Zeppelin github mirror repository and make a pull 
request.Buildmvn installTo skip testsmvn install -DskipTestsTo build with a 
specific spark / hadoop versionmvn install -Dspark.version=x.x.x 
-Dhadoop.version=x.x.xFor
  the further Run Zeppelin server in development modecd 
zeppelin-serverHADOOP_HOME=YOUR_HADOOP_HOME JAVA_HOME=YOUR_JAVA_HOME mvn 
exec:java 
-Dexec.mainClass="org.apache.zeppelin.server.ZeppelinServer" 
-Dexec.args=""Note: Make sure you first run mvn clean install 
-DskipTests in your zeppelin root directory, otherwise your server build will 
fail to find the required dependencies in the local repo.or use daemon 
scriptbin/zeppelin-daemon startServer will be run on 
http://localhost:8080.Generating Thrift CodeSome portions of the Zeppelin code 
are generated by Thrift. For most Zeppelin changes, you don't need to 
worry about this. But if you modify any of the Thrift IDL files (e.g. 
zeppelin-interpreter/src/main/thrift/*.thrift), then you also need to 
regenerate these files and submit their updated version as part of your 
patch.To regenerate the code, install thrift-0.9.2, change directory into the 
Zeppelin source directory, and then run the following command:thrift -out 
zeppelin-interpreter/src/main/java/ --gen java 
zeppelin-interpreter/src/main/thrift/RemoteInterpreterService.thriftWhere to 
StartYou can find issues for beginner & newbieStay involvedContributors 
should join the Zeppelin mailing lists....@zeppelin.apache.org is for people 
who want to contribute code to Zeppelin. subscribe, unsubscribe, archivesIf you 
have any issues, create a ticket in JIRA.",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Contributing to Apache 
Zeppelin ( Code )NOTE : Apache Zeppelin is Apache2-licensed software.Any 
contribution to Zeppelin (Source code, Documents, Image, Website) means you 
agree to license all your contributions under the Apache2 License.Setting upHere 
are some tools you will need to build and test Zeppelin.Software Configuration 
Management ( SCM )Since Zeppelin uses Git as its SCM system, you need a git 
client installed on your development machine.Integrated Development Environment 
( IDE )You are free 
to use whatever IDE you prefer, or your favorite command line editor.Build 
ToolsTo build the code, installOracle Java 7Apache MavenGetting the source 
codeFirst of all, you need Zeppelin source code. The official location of 
Zeppelin is http://git.apache.org/zeppelin.git.git accessGet the source code on 
your development machine using git.git clone git://git.apache.org/zeppelin.git 
zeppelinYou may also want to develop against a specific branch. For example, 
for branch-0.5.6git clone -b branch-0.5.6 git://git.apache.org/zeppelin.git 
zeppelinApache Zeppelin follows Fork & Pull as a source control 
workflow.If you want to not only build Zeppelin but also make any changes, then 
you need to fork Zeppelin github mirror repository and make a pull 
request.Buildmvn installTo skip testsmvn install -DskipTestsTo build with a 
specific spark / hadoop versionmvn install -Dspark.version=x.x.x 
-Dhadoop.version=x.x.xFor
  the further Run Zeppelin server in development modecd 
zeppelin-serverHADOOP_HOME=YOUR_HADOOP_HOME JAVA_HOME=YOUR_JAVA_HOME mvn 
exec:java 
-Dexec.mainClass="org.apache.zeppelin.server.ZeppelinServer" 
-Dexec.args=""Note: Make sure you first run mvn clean install 
-DskipTests in your zeppelin root directory, otherwise your server build will 
fail to find the required dependencies in the local repo.or use daemon 
scriptbin/zeppelin-daemon startServer will be run on 
http://localhost:8080.Generating Thrift CodeSome portions of the Zeppelin code 
are generated by Thrift. For most Zeppelin changes, you don't need to 
worry about this. But if you modify any of the Thrift IDL files (e.g. 
zeppelin-interpreter/src/main/thrift/*.thrift), then you also need to 
regenerate these files and submit their updated version as part of your 
patch.To regenerate the code, install thrift-0.9.2 and then run the following 
command to generate thrift code.cd 
<zeppelin_home>/zeppelin-interpreter/src/main/thrift./genthrift.shWhere to 
StartYou can find issues for beginner & newbieStay involvedContributors 
should join the Zeppelin mailing lists....@zeppelin.apache.org is for people 
who want to contribute code to Zeppelin. subscribe, unsubscribe, archivesIf you 
have any issues, create a ticket in JIRA.",
       "url": " /development/howtocontribute.html",
       "group": "development",
       "excerpt": "How can you contribute to Apache Zeppelin project? This 
document covers from setting up your develop environment to making a pull 
request on Github."
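For quick reference, the build-and-run sequence this entry describes condenses
to the shell session below (YOUR_HADOOP_HOME and YOUR_JAVA_HOME are the
placeholders used by the entry itself):

    # clone the official repository (add -b <branch> to track a release branch)
    git clone git://git.apache.org/zeppelin.git zeppelin
    cd zeppelin
    # build from the root first, or the dev-mode server cannot resolve its dependencies
    mvn clean install -DskipTests
    # run the server in development mode; it comes up on http://localhost:8080
    cd zeppelin-server
    HADOOP_HOME=YOUR_HADOOP_HOME JAVA_HOME=YOUR_JAVA_HOME mvn exec:java \
      -Dexec.mainClass="org.apache.zeppelin.server.ZeppelinServer" -Dexec.args=""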
@@ -60,7 +60,7 @@
 
     "/displaysystem/basicdisplaysystem.html": {
       "title": "Basic Display System in Apache Zeppelin",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Basic Display System in 
Apache ZeppelinTextBy default, Apache Zeppelin prints interpreter response as 
plain text using the text display system.You can explicitly say you're 
using text display system.HtmlWith %html directive, Zeppelin treats your output 
as HTMLMathematical expressionsHTML display system automatically formats 
mathematical expression using MathJax. You can use( INLINE EXPRESSION ) and $$ 
EXPRESSION $$ to format.
  For exampleTableIf you have data with rows separated by \n 
(newline) and columns separated by \t (tab), with the first row as a 
header row, for exampleYou can simply use the %table display system to leverage 
Zeppelin's built in visualization.If table contents start with %html, 
it is interpreted as HTML.Note : Display system is backend independent.",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Basic Display System in 
Apache ZeppelinTextBy default, Apache Zeppelin prints interpreter response as 
plain text using the text display system.You can explicitly say you're 
using text display system.HtmlWith %html directive, Zeppelin treats your output 
as HTMLMathematical expressionsHTML display system automatically formats 
mathematical expression using MathJax. You can use( INLINE EXPRESSION ) and $$ 
EXPRESSION $$ to format.
  For exampleTableIf you have data with rows separated by \n (newline) and 
columns separated by \t (tab), with the first row as a header row, for 
exampleYou can simply use the %table display system to leverage 
Zeppelin's built in visualization.If table contents start with %html, 
it is interpreted as HTML.Note : Display system is backend independent.",
       "url": " /displaysystem/basicdisplaysystem.html",
       "group": "display",
       "excerpt": "There are 3 basic display systems in Apache Zeppelin. By 
default, Zeppelin prints interpreter response as plain text using the text 
display system. With %html directive, Zeppelin treats your output as HTML. You 
can also simply use %table display system..."
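A minimal sketch of the %table display system described in this entry; the
%sh paragraph is an assumption for illustration (the entry notes the display
system is backend independent, so any interpreter output works the same way):

    %sh
    # rows are newline-separated, columns tab-separated, first row is the header;
    # %% escapes the percent sign inside printf's format string
    printf "%%table name\tvalue\nalpha\t1\nbeta\t2\n"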
@@ -83,7 +83,7 @@
 
     "/install/build.html": {
       "title": "Build from Source",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Building from SourceIf you 
want to build from source, you must first install the following dependencies:   
   Name    Value        Git    (Any Version)        Maven    3.1.x or higher    
    JDK    1.7  If you haven't installed Git and Maven yet, check the 
Build requirements section and follow the step by step instructions from 
there.1. Clone the Apache Zeppelin repositorygit clone 
https://github.com/apache/zeppelin.git2. B
 uild sourceYou can build Zeppelin with the following maven command:mvn clean 
package -DskipTests [Options]If you're unsure about the options, use 
the same commands that create the official binary package.# update all pom.xml to 
use scala 2.11./dev/change_scala_version.sh 2.11# build zeppelin with all 
interpreters and include latest version of Apache spark support for local 
mode.mvn clean package -DskipTests -Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark 
-Psparkr -Pr -Pscala-2.113. DoneYou can directly start Zeppelin by running 
after successful build:./bin/zeppelin-daemon.sh startCheck build-profiles 
section for further build options.If you are behind a proxy, follow the 
instructions in the Proxy setting section.If you're interested in 
contributing, please 
check Contributing to Apache Zeppelin (Code) and Contributing to Apache 
Zeppelin (Website).Build profilesSpark InterpreterTo build with a specific 
Spark version, Hadoop version or specific features, define one or more of the 
following pr
 ofiles and options:-Pspark-[version]Set spark major versionAvailable profiles 
are-Pspark-2.0-Pspark-1.6-Pspark-1.5-Pspark-1.4-Pcassandra-spark-1.5-Pcassandra-spark-1.4-Pcassandra-spark-1.3-Pcassandra-spark-1.2-Pcassandra-spark-1.1minor
 version can be adjusted by -Dspark.version=x.x.x-Phadoop-[version]set hadoop 
major versionAvailable profiles 
are-Phadoop-0.23-Phadoop-1-Phadoop-2.2-Phadoop-2.3-Phadoop-2.4-Phadoop-2.6minor 
version can be adjusted by -Dhadoop.version=x.x.x-Pscala-[version] 
(optional)set scala version (default 2.10)Available profiles 
are-Pscala-2.10-Pscala-2.11-Pyarn (optional)enable YARN support for local 
modeYARN for local mode is not supported for Spark v1.5.0 or higher. Set 
SPARK_HOME instead.-Ppyspark (optional)enable PySpark support for local 
mode.-Pr (optional)enable R support with SparkR integration.-Psparkr 
(optional)another R support with SparkR integration as well as local mode 
support.-Pvendor-repo (optional)enable 3rd party vendor repository 
(cloudera)-Pmap
 r[version] (optional)For the MapR Hadoop Distribution, these profiles will 
handle the Hadoop version. As MapR allows different versions of Spark to be 
installed, you should specify which version of Spark is installed on the 
cluster by adding a Spark profile (-Pspark-1.6, -Pspark-2.0, etc.) as 
needed.The correct Maven artifacts can be found for every version of MapR at 
http://doc.mapr.comAvailable profiles 
are-Pmapr3-Pmapr40-Pmapr41-Pmapr50-Pmapr51-Pexamples (optional)Build examples 
under zeppelin-examples directoryBuild command examplesHere are some examples 
with several options:# build with spark-2.0, 
scala-2.11./dev/change_scala_version.sh 2.11mvn clean package -Pspark-2.0 
-Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pscala-2.11 -DskipTests# build with 
spark-1.6, scala-2.10mvn clean package -Pspark-1.6 -Phadoop-2.4 -Pyarn 
-Ppyspark -Psparkr -DskipTests# spark-cassandra integrationmvn clean package 
-Pcassandra-spark-1.5 -Dhadoop.version=2.6.0 -Phadoop-2.6 -DskipTests# with C
 DHmvn clean package -Pspark-1.5 -Dhadoop.version=2.6.0-cdh5.5.0 -Phadoop-2.6 
-Pvendor-repo -DskipTests# with MapRmvn clean package -Pspark-1.5 -Pmapr50 
-DskipTestsIgnite Interpretermvn clean package -Dignite.version=1.6.0 
-DskipTestsScalding Interpretermvn clean package -Pscalding -DskipTestsBuild 
requirementsInstall requirementsIf you don't have requirements 
prepared, install them.(The installation method may vary according to your 
environment; this example is for Ubuntu.)sudo apt-get updatesudo apt-get install 
gitsudo apt-get install openjdk-7-jdksudo apt-get install npmsudo apt-get 
install libfontconfigInstall mavenwget 
http://www.eu.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gzsudo
 tar -zxf apache-maven-3.3.9-bin.tar.gz -C /usr/local/sudo ln -s 
/usr/local/apache-maven-3.3.9/bin/mvn /usr/local/bin/mvnNotes: - Ensure node is 
installed by running node --version - Ensure maven is running version 3.1.x or 
higher with mvn -version - Configure maven to use
  more memory than usual by export MAVEN_OPTS="-Xmx2g 
-XX:MaxPermSize=1024m"Proxy setting (optional)If you're behind 
the proxy, you'll need to configure maven and npm to pass through 
it.First of all, configure maven in your 
~/.m2/settings.xml.<settings>  <proxies>    
<proxy>      <id>proxy-http</id>      
<active>true</active>      
<protocol>http</protocol>      
<host>localhost</host>      
<port>3128</port>      <!-- 
<username>usr</username>      
<password>pwd</password> -->      
<nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>  
  </proxy>    <proxy>      
<id>proxy-https</id>      
<active>true</acti
 ve>      <protocol>https</protocol>      
<host>localhost</host>      
<port>3128</port>      <!-- 
<username>usr</username>      
<password>pwd</password> -->      
<nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>  
  </proxy>  
</proxies></settings>Then, next commands will 
configure npm.npm config set proxy http://localhost:3128npm config set 
https-proxy http://localhost:3128npm config set registry 
"http://registry.npmjs.org/"npm config set strict-ssl 
falseConfigure git as wellgit config --global http.proxy 
http://localhost:3128git config --global https.proxy http://localhost:3128git 
config --global url."http://".insteadOf git://To clean up, 
set active false in Maven settings.xml and run these commands.npm config rm 
proxynp
 m config rm https-proxygit config --global --unset http.proxygit config 
--global --unset https.proxygit config --global --unset 
url."http://".insteadOfNotes: - If you are behind NTLM proxy 
you can use Cntlm Authentication Proxy. - Replace localhost:3128 with the 
standard pattern http://user:pwd@host:port.PackageTo package the final 
distribution including the compressed archive, run:mvn clean package 
-Pbuild-distrTo build a distribution with specific profiles, run:mvn clean 
package -Pbuild-distr -Pspark-1.5 -Phadoop-2.4 -Pyarn -PpysparkThe profiles 
-Pspark-1.5 -Phadoop-2.4 -Pyarn -Ppyspark can be adjusted if you wish to build 
for a specific spark version, or omit support such as yarn.  The archive is 
generated under zeppelin-distribution/target directoryRun end-to-end 
testsZeppelin comes with a set of end-to-end acceptance tests driving headless 
selenium browser# assumes zeppelin-server running on localhost:8080 (use 
-Durl=.. to override)mvn verify# or take care of 
 starting/stopping zeppelin-server from packaged zeppelin-distribution/targetmvn 
verify -P using-packaged-distr",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Building from SourceIf you 
want to build from source, you must first install the following dependencies:   
   Name    Value        Git    (Any Version)        Maven    3.1.x or higher    
    JDK    1.7  If you haven't installed Git and Maven yet, check the 
Build requirements section and follow the step by step instructions from 
there.1. Clone the Apache Zeppelin repositorygit clone 
https://github.com/apache/zeppelin.git2. B
 uild sourceYou can build Zeppelin with the following maven command:mvn clean 
package -DskipTests [Options]If you're unsure about the options, use 
the same commands that create the official binary package.# update all pom.xml to 
use scala 2.11./dev/change_scala_version.sh 2.11# build zeppelin with all 
interpreters and include latest version of Apache spark support for local 
mode.mvn clean package -DskipTests -Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark 
-Psparkr -Pr -Pscala-2.113. DoneYou can directly start Zeppelin by running 
after successful build:./bin/zeppelin-daemon.sh startCheck build-profiles 
section for further build options.If you are behind a proxy, follow the 
instructions in the Proxy setting section.If you're interested in 
contributing, please 
check Contributing to Apache Zeppelin (Code) and Contributing to Apache 
Zeppelin (Website).Build profilesSpark InterpreterTo build with a specific 
Spark version, Hadoop version or specific features, define one or more of the 
following pr
 ofiles and options:-Pspark-[version]Set spark major versionAvailable profiles 
are-Pspark-2.0-Pspark-1.6-Pspark-1.5-Pspark-1.4-Pcassandra-spark-1.5-Pcassandra-spark-1.4-Pcassandra-spark-1.3-Pcassandra-spark-1.2-Pcassandra-spark-1.1minor
 version can be adjusted by -Dspark.version=x.x.x-Phadoop-[version]set hadoop 
major versionAvailable profiles 
are-Phadoop-0.23-Phadoop-1-Phadoop-2.2-Phadoop-2.3-Phadoop-2.4-Phadoop-2.6-Phadoop-2.7minor
 version can be adjusted by -Dhadoop.version=x.x.x-Pscala-[version] 
(optional)set scala version (default 2.10)Available profiles 
are-Pscala-2.10-Pscala-2.11-Pyarn (optional)enable YARN support for local 
modeYARN for local mode is not supported for Spark v1.5.0 or higher. Set 
SPARK_HOME instead.-Ppyspark (optional)enable PySpark support for local 
mode.-Pr (optional)enable R support with SparkR integration.-Psparkr 
(optional)another R support with SparkR integration as well as local mode 
support.-Pvendor-repo (optional)enable 3rd party vendor repository (cl
 oudera)-Pmapr[version] (optional)For the MapR Hadoop Distribution, these 
profiles will handle the Hadoop version. As MapR allows different versions of 
Spark to be installed, you should specify which version of Spark is installed 
on the cluster by adding a Spark profile (-Pspark-1.6, -Pspark-2.0, etc.) as 
needed.The correct Maven artifacts can be found for every version of MapR at 
http://doc.mapr.comAvailable profiles 
are-Pmapr3-Pmapr40-Pmapr41-Pmapr50-Pmapr51-Pexamples (optional)Build examples 
under zeppelin-examples directoryBuild command examplesHere are some examples 
with several options:# build with spark-2.0, 
scala-2.11./dev/change_scala_version.sh 2.11mvn clean package -Pspark-2.0 
-Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pscala-2.11 -DskipTests# build with 
spark-1.6, scala-2.10mvn clean package -Pspark-1.6 -Phadoop-2.4 -Pyarn 
-Ppyspark -Psparkr -DskipTests# spark-cassandra integrationmvn clean package 
-Pcassandra-spark-1.5 -Dhadoop.version=2.6.0 -Phadoop-2.6 -DskipTests# with 
CDHmvn clean package -Pspark-1.5 -Dhadoop.version=2.6.0-cdh5.5.0 
-Phadoop-2.6 -Pvendor-repo -DskipTests# with MapRmvn clean package -Pspark-1.5 
-Pmapr50 -DskipTestsIgnite Interpretermvn clean package -Dignite.version=1.6.0 
-DskipTestsScalding Interpretermvn clean package -Pscalding -DskipTestsBuild 
requirementsInstall requirementsIf you don't have requirements 
prepared, install them.(The installation method may vary according to your 
environment; this example is for Ubuntu.)sudo apt-get updatesudo apt-get install 
gitsudo apt-get install openjdk-7-jdksudo apt-get install npmsudo apt-get 
install libfontconfigInstall mavenwget 
http://www.eu.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gzsudo
 tar -zxf apache-maven-3.3.9-bin.tar.gz -C /usr/local/sudo ln -s 
/usr/local/apache-maven-3.3.9/bin/mvn /usr/local/bin/mvnNotes: - Ensure node is 
installed by running node --version - Ensure maven is running version 3.1.x or 
higher with mvn -version - Configure 
 maven to use more memory than usual by export MAVEN_OPTS="-Xmx2g 
-XX:MaxPermSize=1024m"Proxy setting (optional)If you're behind 
the proxy, you'll need to configure maven and npm to pass through 
it.First of all, configure maven in your 
~/.m2/settings.xml.<settings>  <proxies>    
<proxy>      <id>proxy-http</id>      
<active>true</active>      
<protocol>http</protocol>      
<host>localhost</host>      
<port>3128</port>      <!-- 
<username>usr</username>      
<password>pwd</password> -->      
<nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>  
  </proxy>    <proxy>      
<id>proxy-https</id>      
<active>true&
 amp;lt;/active>      
<protocol>https</protocol>      
<host>localhost</host>      
<port>3128</port>      <!-- 
<username>usr</username>      
<password>pwd</password> -->      
<nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>  
  </proxy>  
</proxies></settings>Then, next commands will 
configure npm.npm config set proxy http://localhost:3128npm config set 
https-proxy http://localhost:3128npm config set registry 
"http://registry.npmjs.org/"npm config set strict-ssl 
falseConfigure git as wellgit config --global http.proxy 
http://localhost:3128git config --global https.proxy http://localhost:3128git 
config --global url."http://".insteadOf git://To clean up, 
set active false in Maven settings.xml and run these commands.npm confi
 g rm proxynpm config rm https-proxygit config --global --unset http.proxygit 
config --global --unset https.proxygit config --global --unset 
url."http://".insteadOfNotes: - If you are behind NTLM proxy 
you can use Cntlm Authentication Proxy. - Replace localhost:3128 with the 
standard pattern http://user:pwd@host:port.PackageTo package the final 
distribution including the compressed archive, run:mvn clean package 
-Pbuild-distrTo build a distribution with specific profiles, run:mvn clean 
package -Pbuild-distr -Pspark-1.5 -Phadoop-2.4 -Pyarn -PpysparkThe profiles 
-Pspark-1.5 -Phadoop-2.4 -Pyarn -Ppyspark can be adjusted if you wish to build 
for a specific spark version, or omit support such as yarn.  The archive is 
generated under zeppelin-distribution/target directoryRun end-to-end 
testsZeppelin comes with a set of end-to-end acceptance tests driving headless 
selenium browser# assumes zeppelin-server running on localhost:8080 (use 
-Durl=.. to override)mvn verify# or t
 ake care of starting/stopping zeppelin-server from packaged 
zeppelin-distribution/targetmvn verify -P using-packaged-distr",
       "url": " /install/build.html",
       "group": "install",
       "excerpt": "How to build Zeppelin from source"
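Pulling the steps of this entry together, one possible end-to-end build on
Ubuntu might look like the following (the profile set is the spark-2.0 /
scala-2.11 example given above):

    # prerequisites named in the Build requirements section
    sudo apt-get update
    sudo apt-get install git openjdk-7-jdk npm libfontconfig
    # give Maven more memory than usual, per the entry's note
    export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=1024m"
    git clone https://github.com/apache/zeppelin.git
    cd zeppelin
    ./dev/change_scala_version.sh 2.11     # update all pom.xml to Scala 2.11
    mvn clean package -DskipTests -Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pscala-2.11
    ./bin/zeppelin-daemon.sh start         # serves http://localhost:8080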
@@ -127,7 +127,7 @@
 
     "/install/upgrade.html": {
       "title": "Manual Zeppelin version upgrade procedure",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Manual upgrade procedure for 
ZeppelinBasically, a newer version of Zeppelin works with a previous 
version's notebook directory and configurations, so copying the notebook and 
conf directories should be enough.InstructionsStop Zeppelinbin/zeppelin-daemon.sh 
stopCopy your notebook and conf directory into a backup directoryDownload the 
newer version of Zeppelin and install it. See the Install page.Copy the backup 
notebook and conf directory 
into newer version o
 f Zeppelin notebook and conf directoryStart Zeppelinbin/zeppelin-daemon.sh 
startMigration GuideUpgrading from Zeppelin 0.6 to 0.7From 0.7, we 
don't use ZEPPELIN_JAVA_OPTS as default value of 
ZEPPELIN_INTP_JAVA_OPTS and also the same for ZEPPELIN_MEM/ZEPPELIN_INTP_MEM. 
If you want to configure the jvm opts of the interpreter process, please set 
ZEPPELIN_INTP_JAVA_OPTS and ZEPPELIN_INTP_MEM explicitly. If you don't 
set ZEPPELIN_INTP_MEM, Zeppelin will set it to -Xms1024m -Xmx1024m 
-XX:MaxPermSize=512m by default.Mapping from %jdbc(prefix) to %prefix is no 
longer available. Instead, you can use %[interpreter alias] with multiple 
interpreter settings on the GUI.Usage of ZEPPELIN_PORT is not supported in ssl 
mode. Instead use ZEPPELIN_SSL_PORT to configure the ssl port. Value from 
ZEPPELIN_PORT is used only when ZEPPELIN_SSL is set to false.The support on 
Spark 1.1.x to 1.3.x is deprecated.From 0.7, we use pegdown as the 
markdown.parser.type option for the %md interpreter. Ren
 dered markdown might be different from what you expected",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Manual upgrade procedure for 
ZeppelinBasically, a newer version of Zeppelin works with a previous 
version's notebook directory and configurations, so copying the notebook and 
conf directories should be enough.InstructionsStop Zeppelinbin/zeppelin-daemon.sh 
stopCopy your notebook and conf directory into a backup directoryDownload the 
newer version of Zeppelin and install it. See the Install page.Copy the backup 
notebook and conf directory 
into newer version o
 f Zeppelin notebook and conf directoryStart Zeppelinbin/zeppelin-daemon.sh 
startMigration GuideUpgrading from Zeppelin 0.6 to 0.7From 0.7, we 
don't use ZEPPELIN_JAVA_OPTS as default value of 
ZEPPELIN_INTP_JAVA_OPTS and also the same for ZEPPELIN_MEM/ZEPPELIN_INTP_MEM. 
If you want to configure the jvm opts of the interpreter process, please set 
ZEPPELIN_INTP_JAVA_OPTS and ZEPPELIN_INTP_MEM explicitly. If you don't 
set ZEPPELIN_INTP_MEM, Zeppelin will set it to -Xms1024m -Xmx1024m 
-XX:MaxPermSize=512m by default.Mapping from %jdbc(prefix) to %prefix is no 
longer available. Instead, you can use %[interpreter alias] with multiple 
interpreter settings on the GUI.Usage of ZEPPELIN_PORT is not supported in ssl 
mode. Instead use ZEPPELIN_SSL_PORT to configure the ssl port. Value from 
ZEPPELIN_PORT is used only when ZEPPELIN_SSL is set to false.The support on 
Spark 1.1.x to 1.3.x is deprecated.From 0.7, we use pegdown as the 
markdown.parser.type option for the %md interpreter. Ren
 dered markdown might be different from what you expected.From 0.7, the 
note.json format has been changed to support multiple outputs in a paragraph. 
Zeppelin will automatically convert the old format to the new format. 0.6 or 
lower versions can read the new note.json format but outputs will not be 
displayed. For details, see ZEPPELIN-212 and the pull request.",
       "url": " /install/upgrade.html",
       "group": "install",
       "excerpt": "This document will guide you through a procedure of manual 
upgrade your Apache Zeppelin instance to a newer version. Apache Zeppelin keeps 
backward compatibility for the notebook file format."
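The upgrade procedure above, written out as a shell sketch; OLD_ZEPPELIN,
NEW_ZEPPELIN and the backup path are illustrative placeholders, not names from
the entry:

    # 1. stop the running instance
    $OLD_ZEPPELIN/bin/zeppelin-daemon.sh stop
    # 2. back up the notebook and conf directories
    cp -r $OLD_ZEPPELIN/notebook $OLD_ZEPPELIN/conf /tmp/zeppelin-backup/
    # 3. download and unpack the newer Zeppelin into $NEW_ZEPPELIN (see the Install page)
    # 4. restore the backup into the new installation
    cp -r /tmp/zeppelin-backup/notebook /tmp/zeppelin-backup/conf $NEW_ZEPPELIN/
    # 5. start the new instance
    $NEW_ZEPPELIN/bin/zeppelin-daemon.sh start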
@@ -424,7 +424,7 @@
 
     "/interpreter/spark.html": {
       "title": "Apache Spark Interpreter for Apache Zeppelin",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Spark Interpreter for Apache 
ZeppelinOverviewApache Spark is a fast and general-purpose cluster computing 
system.It provides high-level APIs in Java, Scala, Python and R, and an 
optimized engine that supports general execution graphs.Apache Spark is 
supported in Zeppelin with the Spark interpreter group, which consists of the 
five interpreters below.      Name    Class    Description        %spark    
SparkInterpreter    Creates a SparkConte
 xt and provides a Scala environment        %spark.pyspark    
PySparkInterpreter    Provides a Python environment        %spark.r    
SparkRInterpreter    Provides an R environment with SparkR support        
%spark.sql    SparkSQLInterpreter    Provides a SQL environment        
%spark.dep    DepInterpreter    Dependency loader  ConfigurationThe Spark 
interpreter can be configured with properties provided by Zeppelin.You can also 
set other Spark properties which are not listed in the table. For a list of 
additional properties, refer to Spark Available Properties.      Property    
Default    Description        args        Spark commandline args      master    
local[*]    Spark master uri.  ex) spark://masterhost:7077      spark.app.name  
  Zeppelin    The name of spark application.        spark.cores.max        
Total number of cores to use.  Empty value uses all available cores.        
spark.executor.memory     1g    Executor memory per worker instance.  ex) 512m, 
32g        zeppelin.dep
 .additionalRemoteRepository    spark-packages,  
http://dl.bintray.com/spark-packages/maven,  false;    A list of 
id,remote-repository-URL,is-snapshot;  for each remote repository.        
zeppelin.dep.localrepo    local-repo    Local repository for dependency loader  
      zeppelin.pyspark.python    python    Python command to run pyspark with   
     zeppelin.spark.concurrentSQL    false    Execute multiple SQL concurrently 
if set true.        zeppelin.spark.maxResult    1000    Max number of Spark SQL 
result to display.        zeppelin.spark.printREPLOutput    true    Print REPL 
output        zeppelin.spark.useHiveContext    true    Use HiveContext instead 
of SQLContext if it is true.        zeppelin.spark.importImplicit    true    
Import implicits, UDF collection, and sql if set true.  Without any 
configuration, Spark interpreter works out of the box in local mode. But if you 
want to connect to your Spark cluster, you'll need to follow the two 
simple steps below.1. Export SPARK_HOM
 EIn conf/zeppelin-env.sh, export SPARK_HOME environment variable with your 
Spark installation path.For example,export SPARK_HOME=/usr/lib/sparkYou can 
optionally export HADOOP_CONF_DIR and SPARK_SUBMIT_OPTIONSexport 
HADOOP_CONF_DIR=/usr/lib/hadoopexport SPARK_SUBMIT_OPTIONS="--packages 
com.databricks:spark-csv_2.10:1.2.0"For Windows, ensure you have 
winutils.exe in %HADOOP_HOME%bin. Please see Problems running Hadoop on Windows 
for the details.2. Set master in Interpreter menuAfter starting Zeppelin, go to 
the Interpreter menu and edit the master property in your Spark interpreter setting. 
The value may vary depending on your Spark cluster deployment type.For 
example,local[*] in local modespark://master:7077 in standalone 
clusteryarn-client in Yarn client modemesos://host:5050 in Mesos 
clusterThat's it. Zeppelin will work with any version of Spark and any 
deployment type without rebuilding Zeppelin in this way. For the further 
information about Spark & Zeppeli
 n version compatibility, please refer to "Available 
Interpreters" section in Zeppelin download page.Note that without 
exporting SPARK_HOME, it's running in local mode with included version 
of Spark. The included version may vary depending on the build 
profile.SparkContext, SQLContext, SparkSession, ZeppelinContextSparkContext, 
SQLContext and ZeppelinContext are automatically created and exposed as 
variable names sc, sqlContext and z, respectively, in Scala, Python and R 
environments.Starting from 0.6.1, SparkSession is available as the variable 
spark when you are using Spark 2.x.Note that the Scala/Python/R environments 
share the same SparkContext, SQLContext and ZeppelinContext instance. Dependency 
ManagementThere are two ways to load external libraries in the Spark 
interpreter. The first is using the interpreter setting menu, and the second is 
loading Spark properties.1. Setting Dependencies via Interpreter SettingPlease see Dependency 
Management for the details.2. Loading Spark Pr
 opertiesOnce SPARK_HOME is set in conf/zeppelin-env.sh, Zeppelin uses 
spark-submit as spark interpreter runner. spark-submit supports two ways to 
load configurations. The first is command line options such as --master and 
Zeppelin can pass these options to spark-submit by exporting 
SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh. Second is reading configuration 
options from SPARK_HOME/conf/spark-defaults.conf. Spark properties that user 
can set to distribute libraries are:      spark-defaults.conf    
SPARK_SUBMIT_OPTIONS    Description        spark.jars    --jars    
Comma-separated list of local jars to include on the driver and executor 
classpaths.        spark.jars.packages    --packages    Comma-separated list of 
maven coordinates of jars to include on the driver and executor classpaths. 
Will search the local maven repo, then maven central and any additional remote 
repositories given by --repositories. The format for the coordinates should be 
groupId:artifactId:version.        spark
 .files    --files    Comma-separated list of files to be placed in the working 
directory of each executor.  Here are a few examples:SPARK_SUBMIT_OPTIONS in 
conf/zeppelin-env.shexport SPARK_SUBMIT_OPTIONS="--packages 
com.databricks:spark-csv_2.10:1.2.0 --jars /path/mylib1.jar,/path/mylib2.jar 
--files 
/path/mylib1.py,/path/mylib2.zip,/path/mylib3.egg"SPARK_HOME/conf/spark-defaults.confspark.jars
        /path/mylib1.jar,/path/mylib2.jarspark.jars.packages   
com.databricks:spark-csv_2.10:1.2.0spark.files       
/path/mylib1.py,/path/mylib2.egg,/path/mylib3.zip3. Dynamic Dependency Loading 
via %spark.dep interpreterNote: the %spark.dep interpreter loads libraries to 
%spark and %spark.pyspark but not to the %spark.sql interpreter. So we 
recommend using the first option instead.When your code requires an external 
library, instead of downloading/copying/restarting Zeppelin, you can easily do 
the following jobs using the %spark.dep interpreter.Load libraries recursively from maven 
repositoryL
 oad libraries from local filesystemAdd additional maven 
repositoryAutomatically add libraries to SparkCluster (You can turn off)Dep 
interpreter leverages Scala environment. So you can write any Scala code 
here.Note that %spark.dep interpreter should be used before %spark, 
%spark.pyspark, %spark.sql.The usage is as follows.%spark.depz.reset() // clean 
up previously added artifact and repository// add maven 
repositoryz.addRepo("RepoName").url("RepoURL")//
 add maven snapshot 
repositoryz.addRepo("RepoName").url("RepoURL").snapshot()//
 add credentials for private maven 
repositoryz.addRepo("RepoName").url("RepoURL").username("username").password("password")//
 add artifact from filesystemz.load("/path/to.jar")// add 
artifact from maven repository, with no 
dependencyz.load("groupId:artifactId:version").excludeAll()// add artifact 
recursivelyz.load("groupId:artifactId:version")// add 
artifact recursively except comma separated GroupID:ArtifactId 
listz.load("groupId:artifactId:version").exclude("groupId:artifactId,groupId:artifactId,
 ...")// exclude with 
patternz.load("groupId:artifactId:version").exclude(*)z.load("groupId:artifactId:version").exclude("groupId:artifactId:*")z.load("groupId:artifactId:version").exclude("groupId:*")//
 local() skips adding artifact to spark clusters (skipping 
sc.addJar())z.load("groupId:artifactId:version").local()ZeppelinContextZeppelin
 automatically injects ZeppelinContext as variable z in your Scala/Python 
environment. ZeppelinContext provides some additional functions and 
utilities.Object ExchangeZeppelinContext extends map and it's shared 
between the Scala and Python environments.So you can put an object from Scala 
and read it from Python, and vice versa.  // Put object from 
scala%sparkval myObject = ...z.put("objName", myObject)    # 
Get object from python%spark.pysparkmyObject = 
z.get("objName")  Form CreationZeppelinContext provides 
functions for creating forms.In Scala and Python environments, you can create 
forms programmatically.  %spark/* Create text input form 
*/z.input("formName")/* Create text input form with default 
value */z.input("formName", 
"defaultValue")/* Create select form 
*/z.select("formName", Seq(("option1", 
"option1DisplayName"),                         
("option2", "option2DisplayName")))/* 
Create select form with default value*/z.select("formName", 
"option1", Seq(("option1", 
"option1DisplayName"),          
                           ("option2", 
"option2DisplayName")))    %spark.pyspark# Create text input 
formz.input("formName")# Create text input form with default 
valuez.input("formName", "defaultValue")# 
Create select formz.select("formName", 
[("option1", "option1DisplayName"),         
             ("option2", 
"option2DisplayName")])# Create select form with default 
valuez.select("formName", [("option1", 
"option1DisplayName"),                      
("option2", "option2DisplayName")], 
"option1")  In sql environment, you can create form in simple 
template.%spark.sqlselect * from ${table=defaultTableName} where text like 
'%${search}%'To learn more about dynamic form, checkout Dynamic 
Form.M
 atplotlib Integration (pyspark)Both the python and pyspark interpreters have 
built-in support for inline visualization using matplotlib, a popular plotting 
library for python. More details can be found in the python interpreter 
documentation, since matplotlib support is identical. More advanced interactive 
plotting can be done with pyspark through utilizing Zeppelin's built-in 
Angular Display System, as shown below:Interpreter setting optionYou can choose 
one of the shared, scoped and isolated options when you configure the Spark 
interpreter. The Spark interpreter creates a separate Scala compiler per 
notebook but shares a single SparkContext in scoped mode (experimental). It 
creates a separate SparkContext per notebook in isolated mode.Setting up 
Zeppelin with KerberosLogical setup with Zeppelin, Kerberos Key Distribution 
Center (KDC), and Spark on YARN:Configuration SetupOn the server where Zeppelin 
is installed, install the Kerberos client modules and configuration, 
krb5.conf.This is to make the server communicate with the KDC.Set SPARK_HOME in 
[ZEPPELIN_HOME]/conf/zeppelin-env.sh to use spark-submit(Additionally, you 
might have to set export HADOOP_CONF_DIR=/etc/hadoop/conf)Add the two 
properties below to Spark configuration 
([SPARK_HOME]/conf/spark-defaults.conf):spark.yarn.principalspark.yarn.keytabNOTE:
 If you do not have permission to access the above spark-defaults.conf 
file, optionally, you can add the above lines to the Spark Interpreter setting 
through the Interpreter tab in the Zeppelin UI.That's it. Play with 
Zeppelin!",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Spark Interpreter for Apache 
ZeppelinOverviewApache Spark is a fast and general-purpose cluster computing 
system.It provides high-level APIs in Java, Scala, Python and R, and an 
optimized engine that supports general execution graphs.Apache Spark is 
supported in Zeppelin with the Spark interpreter group, which consists of the 
five interpreters below.      Name    Class    Description        %spark    
SparkInterpreter    Creates a SparkConte
 xt and provides a Scala environment        %spark.pyspark    
PySparkInterpreter    Provides a Python environment        %spark.r    
SparkRInterpreter    Provides an R environment with SparkR support        
%spark.sql    SparkSQLInterpreter    Provides a SQL environment        
%spark.dep    DepInterpreter    Dependency loader  ConfigurationThe Spark 
interpreter can be configured with properties provided by Zeppelin.You can also 
set other Spark properties which are not listed in the table. For a list of 
additional properties, refer to Spark Available Properties.      Property    
Default    Description        args        Spark commandline args      master    
local[*]    Spark master uri.  ex) spark://masterhost:7077      spark.app.name  
  Zeppelin    The name of spark application.        spark.cores.max        
Total number of cores to use.  Empty value uses all available cores.        
spark.executor.memory     1g    Executor memory per worker instance.  ex) 512m, 
32g        zeppelin.dep
 .additionalRemoteRepository    spark-packages,  
http://dl.bintray.com/spark-packages/maven,  false;    A list of 
id,remote-repository-URL,is-snapshot;  for each remote repository.        
zeppelin.dep.localrepo    local-repo    Local repository for dependency loader  
      zeppelin.pyspark.python    python    Python command to run pyspark with   
     zeppelin.spark.concurrentSQL    false    Execute multiple SQL concurrently 
if set true.        zeppelin.spark.maxResult    1000    Max number of Spark SQL 
result to display.        zeppelin.spark.printREPLOutput    true    Print REPL 
output        zeppelin.spark.useHiveContext    true    Use HiveContext instead 
of SQLContext if it is true.        zeppelin.spark.importImplicit    true    
Import implicits, UDF collection, and sql if set true.  Without any 
configuration, Spark interpreter works out of the box in local mode. But if you 
want to connect to your Spark cluster, you'll need to follow the two 
simple steps below.1. Export SPARK_HOM
 EIn conf/zeppelin-env.sh, export SPARK_HOME environment variable with your 
Spark installation path.For example,export SPARK_HOME=/usr/lib/sparkYou can 
optionally set more environment variables# set hadoop conf direxport 
HADOOP_CONF_DIR=/usr/lib/hadoop# set options to pass spark-submit commandexport 
SPARK_SUBMIT_OPTIONS="--packages 
com.databricks:spark-csv_2.10:1.2.0"# extra classpath. e.g. set 
classpath for hive-site.xmlexport 
ZEPPELIN_INTP_CLASSPATH_OVERRIDES=/etc/hive/confFor Windows, ensure you have 
winutils.exe in %HADOOP_HOME%bin. Please see Problems running Hadoop on Windows 
for the details.2. Set master in Interpreter menuAfter starting Zeppelin, go to 
the Interpreter menu and edit the master property in your Spark interpreter setting. 
The value may vary depending on your Spark cluster deployment type.For 
example,local[*] in local modespark://master:7077 in standalone 
clusteryarn-client in Yarn client modemesos://host:5050 in Mesos 
clusterThat's it. Zeppelin wi
 ll work with any version of Spark and any deployment type without rebuilding 
Zeppelin in this way. For the further information about Spark & 
Zeppelin version compatibility, please refer to "Available 
Interpreters" section in Zeppelin download page.Note that without 
exporting SPARK_HOME, it's running in local mode with included version 
of Spark. The included version may vary depending on the build 
profile.SparkContext, SQLContext, SparkSession, ZeppelinContextSparkContext, 
SQLContext and ZeppelinContext are automatically created and exposed as 
variable names sc, sqlContext and z, respectively, in Scala, Python and R 
environments.Starting from 0.6.1, SparkSession is available as the variable 
spark when you are using Spark 2.x.Note that the Scala/Python/R environments 
share the same SparkContext, SQLContext and ZeppelinContext instance. Dependency 
ManagementThere are two ways to load external libraries in the Spark 
interpreter. The first is using the interpreter setting menu, and the second 
is loading Spark properties.1. Setting Dependencies via 
Interpreter SettingPlease see Dependency Management for the details.2. Loading 
Spark PropertiesOnce SPARK_HOME is set in conf/zeppelin-env.sh, Zeppelin uses 
spark-submit as spark interpreter runner. spark-submit supports two ways to 
load configurations. The first is command line options such as --master and 
Zeppelin can pass these options to spark-submit by exporting 
SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh. Second is reading configuration 
options from SPARK_HOME/conf/spark-defaults.conf. Spark properties that user 
can set to distribute libraries are:      spark-defaults.conf    
SPARK_SUBMIT_OPTIONS    Description        spark.jars    --jars    
Comma-separated list of local jars to include on the driver and executor 
classpaths.        spark.jars.packages    --packages    Comma-separated list of 
maven coordinates of jars to include on the driver and executor classpaths. 
Will search the local maven repo, then mav
 en central and any additional remote repositories given by --repositories. The 
format for the coordinates should be groupId:artifactId:version.        
spark.files    --files    Comma-separated list of files to be placed in the 
working directory of each executor.  Here are a few examples:SPARK_SUBMIT_OPTIONS 
in conf/zeppelin-env.shexport SPARK_SUBMIT_OPTIONS="--packages 
com.databricks:spark-csv_2.10:1.2.0 --jars /path/mylib1.jar,/path/mylib2.jar 
--files 
/path/mylib1.py,/path/mylib2.zip,/path/mylib3.egg"SPARK_HOME/conf/spark-defaults.confspark.jars
        /path/mylib1.jar,/path/mylib2.jarspark.jars.packages   
com.databricks:spark-csv_2.10:1.2.0spark.files       
/path/mylib1.py,/path/mylib2.egg,/path/mylib3.zip3. Dynamic Dependency Loading 
via %spark.dep interpreterNote: the %spark.dep interpreter loads libraries to 
%spark and %spark.pyspark but not to the %spark.sql interpreter. So we 
recommend using the first option instead.When your code requires an external 
library, instead of downloading/copying/restarting Zeppelin, you can easily do 
the following jobs using the %spark.dep interpreter.Load libraries recursively from maven 
repositoryLoad libraries from local filesystemAdd additional maven 
repositoryAutomatically add libraries to SparkCluster (You can turn off)Dep 
interpreter leverages Scala environment. So you can write any Scala code 
here.Note that %spark.dep interpreter should be used before %spark, 
%spark.pyspark, %spark.sql.The usage is as follows.%spark.depz.reset() // clean 
up previously added artifact and repository// add maven 
repositoryz.addRepo("RepoName").url("RepoURL")//
 add maven snapshot 
repositoryz.addRepo("RepoName").url("RepoURL").snapshot()//
 add credentials for private maven 
repositoryz.addRepo("RepoName").url("RepoURL").username("username").password("password")//
 add artifact from filesystemz.load("/path/to.jar")// add 
artifact from maven repository, with no 
dependencyz.load("groupId:artifactId:version").excludeAll()// 
add artifact 
recursivelyz.load("groupId:artifactId:version")// add 
artifact recursively except comma separated GroupID:ArtifactId 
listz.load("groupId:artifactId:version").exclude("groupId:artifactId,groupId:artifactId,
 ...")// exclude with 
patternz.load("groupId:artifactId:version").exclude(*)z.load("groupId:artifactId:version").exclude("groupId:artifactId:*")z.load("groupId:artifactId:version").exclude("groupId:*")//
 local() skips adding artifact to spark clusters (skipping 
sc.addJar())z.load("groupId:artifactId:version").local()ZeppelinContextZeppelin
 automatically injects ZeppelinContext as variable z in your Scala/Python 
environment. ZeppelinContext provides some a
 dditional functions and utilities.Object ExchangeZeppelinContext extends map 
and it's shared between the Scala and Python environments.So you can 
put an object from Scala and read it from Python, and vice versa.  // Put object 
from scala%sparkval myObject = ...z.put("objName", 
myObject)// Exchanging data framesmyScalaDataFrame = 
...z.put("myScalaDataFrame", myScalaDataFrame)val 
myPythonDataFrame = 
z.get("myPythonDataFrame").asInstanceOf[DataFrame]    # Get 
object from python%spark.pysparkmyObject = z.get("objName")# 
Exchanging data framesmyPythonDataFrame = 
...z.put("myPythonDataFrame", postsDf._jdf)myScalaDataFrame = 
DataFrame(z.get("myScalaDataFrame"), sqlContext)  Form 
CreationZeppelinContext provides functions for creating forms.In Scala and 
Python environments, you can create forms programmatically.  %spark/* Create 
text input form */z.input("formName"
 )/* Create text input form with default value 
*/z.input("formName", "defaultValue")/* 
Create select form */z.select("formName", 
Seq(("option1", "option1DisplayName"),      
                   ("option2", 
"option2DisplayName")))/* Create select form with default 
value*/z.select("formName", "option1", 
Seq(("option1", "option1DisplayName"),      
                              ("option2", 
"option2DisplayName")))    %spark.pyspark# Create text input 
formz.input("formName")# Create text input form with default 
valuez.input("formName", "defaultValue")# 
Create select formz.select("formName", 
[("option1", "option1DisplayName"),         
             (&quo
 t;option2", "option2DisplayName")])# Create select 
form with default valuez.select("formName", 
[("option1", "option1DisplayName"),         
             ("option2", 
"option2DisplayName")], "option1")  In sql 
environment, you can create form in simple template.%spark.sqlselect * from 
${table=defaultTableName} where text like '%${search}%'To learn 
more about dynamic form, check out Dynamic Form.Matplotlib Integration 
(pyspark)Both the python and pyspark interpreters have built-in support for 
inline visualization using matplotlib, a popular plotting library for python. 
More details can be found in the python interpreter documentation, since 
matplotlib support is identical. More advanced interactive plotting can be done 
with pyspark through utilizing Zeppelin's built-in Angular Display 
System, as shown below:Interpreter setting opti
 onYou can choose one of the shared, scoped and isolated options when you 
configure the Spark interpreter. The Spark interpreter creates a separate Scala 
compiler per notebook but shares a single SparkContext in scoped mode 
(experimental). It creates a separate SparkContext per notebook in isolated 
mode.Setting up 
Zeppelin with KerberosLogical setup with Zeppelin, Kerberos Key Distribution 
Center (KDC), and Spark on YARN:Configuration SetupOn the server where Zeppelin 
is installed, install the Kerberos client modules and configuration, 
krb5.conf.This is to make the server communicate with the KDC.Set SPARK_HOME in 
[ZEPPELIN_HOME]/conf/zeppelin-env.sh to use spark-submit(Additionally, you 
might have to set export HADOOP_CONF_DIR=/etc/hadoop/conf)Add the two 
properties below to Spark configuration 
([SPARK_HOME]/conf/spark-defaults.conf):spark.yarn.principalspark.yarn.keytabNOTE:
 If you do not have permission to access the above spark-defaults.conf 
file, optionally, you can add the above lines to
  the Spark Interpreter setting through the Interpreter tab in the Zeppelin 
UI.That's it. Play with Zeppelin!",
       "url": " /interpreter/spark.html",
       "group": "interpreter",
       "excerpt": "Apache Spark is a fast and general-purpose cluster computing 
system. It provides high-level APIs in Java, Scala, Python and R, and an 
optimized engine that supports general execution graphs."
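The cluster-connection step of this entry as a conf/zeppelin-env.sh sketch;
every value below is the example value quoted in the entry and should be
replaced with your own paths:

    # point Zeppelin at your Spark installation so spark-submit is used as the runner
    export SPARK_HOME=/usr/lib/spark
    # optional extras mentioned in the entry
    export HADOOP_CONF_DIR=/usr/lib/hadoop
    export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.2.0"
    # extra classpath, e.g. to pick up hive-site.xml
    export ZEPPELIN_INTP_CLASSPATH_OVERRIDES=/etc/hive/conf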
@@ -512,7 +512,7 @@
 
     "/manual/userimpersonation.html": {
       "title": "Run zeppelin interpreter process as web front end user",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Run zeppelin interpreter 
process as web front end userEnable shiro auth in shiro.ini[users]user1 = 
password1, role1user2 = password2, role2Enable password-less ssh for the user 
you want to impersonate (say user1).adduser user1#ssh-keygen (optional if you 
don't already have a generated ssh-key)ssh user1@localhost mkdir -p 
.sshcat ~/.ssh/id_rsa.pub | ssh user1@localhost 'cat >> 
.ssh/authorized_keys'Start zeppelin server.            Screenshot                                  
      Go to the interpreter setting page, and enable "User 
Impersonate" in any of the interpreters (in this example, the shell 
interpreter)Test with a simple paragraph%shwhoami",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Run zeppelin interpreter 
process as web front end userEnable shiro auth in shiro.ini[users]user1 = 
password1, role1user2 = password2, role2Enable password-less ssh for the user 
you want to impersonate (say user1).adduser user1#ssh-keygen (optional if you 
don't already have a generated ssh-key)ssh user1@localhost mkdir -p 
.sshcat ~/.ssh/id_rsa.pub | ssh user1@localhost 'cat >> 
.ssh/authorized_keys&#39
 ;Alternatively instead of password-less, user can override 
ZEPPELINIMPERSONATECMD in zeppelin-env.shexport 
ZEPPELIN_IMPERSONATE_CMD='sudo -H -u ${ZEPPELIN_IMPERSONATE_USER} bash 
-c 'Start zeppelin server.            Screenshot                        
                Go to interpreter setting page, and enable "User 
Impersonate" in any of the interpreter (in my example its shell 
interpreter)Test with a simple paragraph%shwhoami",
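As a quick sanity check (a minimal sketch assuming the user1 setup above), password-less ssh should now work before restarting Zeppelin:

    # should print "user1" without prompting for a password
    ssh user1@localhost whoami
    # restart Zeppelin so the impersonation settings take effect
    bin/zeppelin-daemon restart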
       "url": " /manual/userimpersonation.html",
       "group": "manual",
       "excerpt": "Set up zeppelin interpreter process as web front end user."
@@ -590,7 +590,7 @@
 
     "/rest-api/rest-notebook.html": {
       "title": "Apache Zeppelin Notebook REST API",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Apache Zeppelin Notebook 
REST APIOverviewApache Zeppelin provides several REST APIs for interacting with and remotely activating Zeppelin functionality.All REST APIs are available under the following endpoint: http://[zeppelin-server]:[zeppelin-port]/api. Note that Apache Zeppelin REST APIs receive and return JSON objects, so it is recommended that you install a JSON viewer such as JSONView.If you work with Apache Zeppelin and find a need for an additional REST API, please file an issue or send us an email.Notebook REST API ListThe Notebook REST API supports the following operations: List, Create, Get, Delete, Clone, Run, Export and Import, as detailed in the following tables.List of the notes              Description      This 
GET method lists the available notes on your server.          Notebook JSON 
contains the name and id of all notes.                    URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook              Success code 
     200               Fail code       500                sample JSON response  
     {  "status": "OK",  
"message": "",  "body": [ 
   {      "name":"Homepage",      
"id":"2AV4WUEMK"    },    {      
"name":"Zeppelin Tutorial",      
"id":"2A94M5J1Z&a
 mp;quot;    }  ]}      Create a new note              Description      This 
POST method creates a new note using the given name, or a default name if none is given.          The body field of the returned JSON contains the new note id.   
                 URL      http://[zeppelin-server]:[zeppelin-port]/api/notebook 
             Success code      201               Fail code       500            
    sample JSON input (without paragraphs)       {"name": 
"name of new note"}               sample JSON input (with 
initial paragraphs)       {  "name": "name of new 
note",  "paragraphs": [    {      
"title": "paragraph title1",      
"text": "paragraph text1"    },    {      
"title": "paragraph title2",      
"text": "paragraph text2"    }  ]}          
     sample JSON 
 response       {  "status": "CREATED",  
"message": "",  "body": 
"2AZPHY918"}      Get an existing note information            
  Description      This GET method retrieves an existing note's 
information using the given id.          The body field of the returned JSON contains information about the paragraphs in the note.                    URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId]              
Success code      200               Fail code       500                sample 
JSON response       {  "status": "OK",  
"message": "",  "body": { 
   "paragraphs": [      {        "text": 
"%sql nselect age, count(1) valuenfrom bank nwhere age < 30 
ngroup by age norder by age",        "config":
  {          "colWidth": 4,          
"graph": {            "mode": 
"multiBarChart",            "height": 300,  
          "optionOpen": false,            
"keys": [              {                
"name": "age",                
"index": 0,                "aggr": 
"sum"              }            ],            
"values": [              {                
"name": "value",                
"index": 1,                "aggr": 
"sum"              }            ],            
"groups": [],            "scatter": {       
       "xAxis": {                "name": 
"age",                "index&am
 p;quot;: 0,                "aggr": "sum"   
           },              "yAxis": {                
"name": "value",                
"index": 1,                "aggr": 
"sum"              }            }          }        },        
"settings": {          "params": {},        
  "forms": {}        },        "jobName": 
"paragraph_1423500782552_-1439281894",        
"id": "20150210-015302_1492795503",        
"result": {          "code": 
"SUCCESS",          "type": 
"TABLE",          "msg": 
"agetvaluen19t4n20t3n21t7n22t9n23t20n24t24n25t44n26t77n27t94n28t103n29t97n"
        },        "dateCreated&
 quot;: "Feb 10, 2015 1:53:02 AM",        
"dateStarted": "Jul 3, 2015 1:43:17 PM",    
    "dateFinished": "Jul 3, 2015 1:43:23 
PM",        "status": "FINISHED",  
      "progressUpdateIntervalMs": 500      }    ],    
"name": "Zeppelin Tutorial",    
"id": "2A94M5J1Z",    
"angularObjects": {},    "config": {      
"looknfeel": "default"    },    
"info": {}  }}      Delete a note              Description    
  This DELETE method deletes a note by the given note id.                    
URL      http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId]         
     Success code      200               Fail code       500                
sample JSON response       {"status&quot
 ;: "OK","message": ""}   
   Clone a note              Description      This POST method clones a note by the given id and creates a new note using the given name, or a default name if none is given.          The body field of the returned JSON contains the 
new note id.                    URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId]              
Success code      201               Fail code       500                sample 
JSON input       {"name": "name of new 
note"}               sample JSON response       {  
"status": "CREATED",  
"message": "",  "body": 
"2AZPHY918"}      Run all paragraphs              Description 
           This POST method runs all paragraphs in the given note id.       If the note id cannot be found, a 404 is returned.      If there is a problem with the interpreter, a 412 error is returned.                    URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/job/[noteId]              
Success code      200               Fail code       404 or 412               
sample JSON response       {"status": "OK"} 
               sample JSON error response                            {          
   "status": "NOTFOUND",             
"message": "note not found."           }    
                         {             "status": 
"PRECONDITIONFAILED",             
"message": "paragraph1469771130099-278315611 Not 
selected or Invalid Interpreter bind"           }                      
Stop all paragraphs              Description      This DELETE method stops all 
paragraphs in the given note id.                    URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/job/[noteId]              Success code      200               Fail 
code       500                sample JSON response       
{"status":"OK"}      Get the status of all 
paragraphs              Description      This GET method gets the status of all 
paragraphs by the given note id.          The body field of the returned JSON contains an array composed of the paragraph id, status, finish date, and start date.                    URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/job/[noteId]              
Success code      200               Fail code       500                sample 
JSON response       {  "status": "OK",  
"body": [    {      
"id":"20151121-212654_766735423",      
"status":"FINISHED",      
"finished":"Tue Nov 24 14:21:40 KST 2015&a
 mp;quot;,      "started":"Tue Nov 24 14:21:39 KST 
2015"    },    {      
"progress":"1",      
"id":"20151121-212657_730976687",      
"status":"RUNNING",      
"finished":"Tue Nov 24 14:21:35 KST 2015",  
    "started":"Tue Nov 24 14:21:40 KST 
2015"    }  ]}      Get the status of a single paragraph              
Description      This GET method gets the status of a single paragraph by the 
given note and paragraph id.          The body field of the returned JSON contains the paragraph id, status, finish date, and start date.                    URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/job/[noteId]/[paragraphId]
              Success code      200               Fail code       500           
     sample JSON response       {  "status": "OK",  
"body": {      
"id":"20151121-212654_766735423",      
"status":"FINISHED",      
"finished":"Tue Nov 24 14:21:40 KST 2015",  
    "started":"Tue Nov 24 14:21:39 KST 
2015"    }}      Run a paragraph asynchronously              
Description      This POST method runs the paragraph asynchronously by the given note and paragraph id. Because the API is asynchronous, it always returns SUCCESS even if the execution of the paragraph fails later.                    
URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/job/[noteId]/[paragraphId]
              Success code      200               Fail code       500           
     sample JSON input (optional, only needed if you want to update a dynamic form's value)       {  "name": "name of new note",  "params": {    
"formLabel1": "value1",    
"formLabel2": "value2"  }}               
sample JSON response       {"status": "OK"} 
     Run a paragraph synchronously              Description      This POST method runs the paragraph synchronously by the given note and paragraph id. This API can return SUCCESS or ERROR depending on the outcome of the paragraph execution.                    URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/run/[noteId]/[paragraphId]
              Success code      200               Fail code       500           
     sample JSON input (optional, only needed if you want to update a dynamic form's value)       {  "name": "name of new note",  "params": {    "formLabel1": "value1",    "formLabel2": "value2"  }}               
sample JSON response       {"status": "OK"} 
              sample JSON error       {   "status": 
"INTERNAL_SERVER_ERROR",   "body": {       
"code": "ERROR",       
"type": "TEXT",       
"msg": "bash: -c: line 0: unexpected EOF while 
looking for matching ``'nbash: -c: line 1: syntax error: unexpected end 
of filenExitValue: 2"   }}      Stop a paragraph              
Description      This DELETE method stops the paragraph by given note and 
paragraph id.                    URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/job/[noteId]/[paragraphId]
              Success code      200               Fail code       500           
     sample JSON response       {"status": "OK&
 amp;quot;}      Add Cron Job              Description      This POST method 
adds cron job by the given note id.                    URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/cron/[noteId]             
 Success code      200               Fail code       500                sample 
JSON input       {"cron": "cron expression of 
note"}               sample JSON response       
{"status": "OK"}      Remove Cron Job       
       Description      This DELETE method removes the cron job from the given note id.                    URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/cron/[noteId]             
 Success code      200               Fail code       500                sample 
JSON response       {"status": "OK"}      
Get Cron Job              Description      This GET method gets the cron job expression of the given note id.          The body field of the returned JSON contains the cron expression.                    URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/cron/[noteId]             
 Success code      200               Fail code       500                sample 
JSON response       {"status": "OK", 
"body": "* * * * * ?"}      Full text 
search through the paragraphs in all notes              Description      This GET request returns a list of matching paragraphs.                    URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/search?q=[query]          
    Success code      200              Fail code       500               Sample 
JSON response       {  "status": "OK",  
"body": [    {      "id": 
"/paragraph/",      "name":"Note 
Name",       "snippet":"",      
"text&qu
 ot;:""    }  ]}      Create a new paragraph              
Description      This POST method creates a new paragraph using the JSON payload.          The body field of the returned JSON contains the new paragraph id.         
           URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId]/paragraph        
      Success code      201               Fail code       500                
sample JSON input (add to the last)       {  "title": 
"Paragraph insert revised",  "text": 
"%sparknprintln("Paragraph insert 
revised")"}               sample JSON input (add to specific 
index)       {  "title": "Paragraph insert 
revised",  "text": 
"%sparknprintln("Paragraph insert 
revised")",  "index": 0}               
sample JSON response       {  "status": &quo
 t;CREATED",  "message": "",  
"body": "20151218-100330_1754029574"}      
Get a paragraph information              Description      This GET method 
retrieves an existing paragraph's information using the given id.          The body field of the returned JSON contains information about the paragraph.     
               URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId]/paragraph/[paragraphId]
              Success code      200               Fail code       500           
     sample JSON response       {  "status": 
"OK",  "message": "",  
"body": {    "title": 
"Paragraph2",    "text": 
"%sparknnprintln("it's 
paragraph2")",    "dateUpdated": 
"Dec 18, 2015 7:33:54 AM"
 ,    "config": {      "colWidth": 12,      
"graph": {        "mode": 
"table",        "height": 300,        
"optionOpen": false,        "keys": [],     
   "values": [],        "groups": [],       
 "scatter": {}      },      "enabled": 
true,      "title": true,      
"editorMode": "ace/mode/scala"    },    
"settings": {      "params": {},      
"forms": {}    },    "jobName": 
"paragraph_1450391574392_-1890856722",    
"id": "20151218-073254_1105602047",    
"result": {      "code": 
"SUCCESS",      "type": 
"TEXT"
 ,      "msg": "it's paragraph2n"   
 },    "dateCreated": "Dec 18, 2015 7:32:54 
AM",    "dateStarted": "Dec 18, 2015 
7:33:55 AM",    "dateFinished": "Dec 18, 
2015 7:33:55 AM",    "status": 
"FINISHED",    "progressUpdateIntervalMs": 
500  }}      Update paragraph configuration              Description      This PUT method updates the paragraph configuration using the given id, so that a user can change paragraph settings such as the graph type, showing or hiding the editor/result, the paragraph size, etc. You can update only the fields you want; for example, you can update the colWidth field alone by sending a request with the payload {"colWidth": 12.0}.                    URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId]/paragraph/[paragraphId]/config
              Success code      200              Bad Request code      400              Forbidden code      
403              Not Found code      404              Fail code      500        
      sample JSON input      {  "colWidth": 6.0,  
"graph": {    "mode": 
"lineChart",    "height": 200.0,    
"optionOpen": false,    "keys": [      {    
    "name": "age",        
"index": 0.0,        "aggr": 
"sum"      }    ],    "values": [      {    
    "name": "value",        
"index": 1.0,        "aggr": 
"sum"      }    ],    "groups": [],    
"scatter": {}  },  "editorHide": true,  
"editorMode": "ace/mode/markdown",  &amp
 ;quot;tableHide": false}              sample JSON response      {  
"status":"OK",  
"message":"",  "body":{   
 "text":"%sql nselect age, count(1) valuenfrom bank 
nwhere age u003c 30 ngroup by age norder by age",    
"config":{      "colWidth":6.0,      
"graph":{        
"mode":"lineChart",        
"height":200.0,        "optionOpen":false,  
      "keys":[          {            
"name":"age",            
"index":0.0,            
"aggr":"sum"          }        ],        
"values":[          {            
"name":"value",            
"index":1.0,            &q
 uot;aggr":"sum"          }        ],        
"groups":[],        "scatter":{}      },    
  "tableHide":false,      
"editorMode":"ace/mode/markdown",      
"editorHide":true    },    "settings":{     
 "params":{},      "forms":{}    },    
"apps":[],    
"jobName":"paragraph1423500782552-1439281894",
    "id":"20150210-015302_1492795503",    
"result":{      
"code":"SUCCESS",      
"type":"TABLE",      
"msg":"agetvaluen19t4n20t3n21t7n22t9n23t20n24t24n25t44n26t77n27t94n28t103n29t97n"
    },    "dateCreated":"Feb 10, 2015 1:53:02 
AM",    "dateStarted&a
 mp;quot;:"Jul 3, 2015 1:43:17 PM",    
"dateFinished":"Jul 3, 2015 1:43:23 PM",    
"status":"FINISHED",    
"progressUpdateIntervalMs":500  }}      Move a paragraph to 
the specific index              Description      This POST method moves a paragraph to the specified index (order) within the note.                    URL   
   
http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId]/paragraph/[paragraphId]/move/[newIndex]
              Success code      200               Fail code       500           
     sample JSON response       {"status": 
"OK","message": ""}      
Delete a paragraph              Description      This DELETE method deletes a 
paragraph by the given note and paragraph id.                    URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId]/paragraph/[paragraphId]
      
         Success code      200               Fail code       500                
sample JSON response       {"status": 
"OK","message": ""}      
Export a note              Description      This GET method exports a note by the given id and generates the corresponding JSON.                    URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/export/[noteId]           
   Success code      201               Fail code       500          sample JSON 
response       {  "paragraphs": [    {      
"text": "%md This is my new paragraph in my new 
note",      "dateUpdated": "Jan 8, 2016 
4:49:38 PM",      "config": {        
"enabled": true      },      "settings": {  
      "params": {},        "forms": {}      
},      "jobName": &quo
 t;paragraph_1452300578795_1196072540",      "id": 
"20160108-164938_1685162144",      
"dateCreated": "Jan 8, 2016 4:49:38 PM",    
  "status": "READY",      
"progressUpdateIntervalMs": 500    }  ],  
"name": "source note for export",  
"id": "2B82H3RR1",  
"angularObjects": {},  "config": {},  
"info": {}}      Import a note              Description      
This POST method imports a note from the note JSON input                    URL 
     http://[zeppelin-server]:[zeppelin-port]/api/notebook/import              
Success code      201               Fail code       500               sample 
JSON input      {  "paragraphs": [    {      
"text": "%md This is my new paragraph in my new 
note",      "dateUpdated": "Jan 8, 2016 4:49:38 
PM",      "config": {        
"enabled": true      },      "settings": {  
      "params": {},        "forms": {}      
},      "jobName": 
"paragraph_1452300578795_1196072540",      
"id": "20160108-164938_1685162144",      
"dateCreated": "Jan 8, 2016 4:49:38 PM",    
  "status": "READY",      
"progressUpdateIntervalMs": 500    }  ],  
"name": "source note for export",  
"id": "2B82H3RR1",  
"angularObjects": {},  "config": {},  
"info": {}}              sample JSON response      {  
"status": "CREATED",  
 "message": "",  "body": 
"2AZPHY918"}      Clear all paragraph result              
Description      This PUT method clear all paragraph results from note of given 
id.                    URL      
http://[zeppelin-server]:[zeppelin-port]/api/notebook/[noteId]/clear            
  Success code      200              Unauthorized code      401              Not 
Found code      404              Fail code      500              sample JSON 
response      {"status": "OK"}          ",

[... 14 lines stripped ...]
