Modified: zeppelin/site/docs/0.8.0-SNAPSHOT/search_data.json
URL: 
http://svn.apache.org/viewvc/zeppelin/site/docs/0.8.0-SNAPSHOT/search_data.json?rev=1789786&r1=1789785&r2=1789786&view=diff
==============================================================================
--- zeppelin/site/docs/0.8.0-SNAPSHOT/search_data.json (original)
+++ zeppelin/site/docs/0.8.0-SNAPSHOT/search_data.json Sat Apr  1 10:03:36 2017
@@ -105,7 +105,7 @@
 
     "/install/build.html": {
       "title": "Build from Source",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Building from SourceIf you 
want to build from source, you must first install the following dependencies:   
   Name    Value        Git    (Any Version)        Maven    3.1.x or higher    
    JDK    1.7  If you haven't installed Git and Maven yet, check the 
Build requirements section and follow the step by step instructions from 
there.1. Clone the Apache Zeppelin repositorygit clone 
https://github.com/apache/zeppelin.git2. Build sourceYou can build Zeppelin 
with the following maven command:mvn clean 
package -DskipTests [Options]If you're unsure about the options, use 
the same commands that create the official binary package.# update all pom.xml to 
use scala 2.11./dev/change_scala_version.sh 2.11# build zeppelin with all 
interpreters and include latest version of Apache spark support for local 
mode.mvn clean package -DskipTests -Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark 
-Psparkr -Pr -Pscala-2.113. DoneYou can directly start Zeppelin by running 
after successful build:./bin/zeppelin-daemon.sh startCheck build-profiles 
section for further build options.If you are behind a proxy, follow the instructions 
in the Proxy setting section.If you're interested in contributing, please 
check Contributing to Apache Zeppelin (Code) and Contributing to Apache 
Zeppelin (Website).Build profilesSpark InterpreterTo build with a specific 
Spark version, Hadoop version or specific features, define one or more of the 
following profiles and options:-Pspark-[version]Set spark major versionAvailable profiles 
are-Pspark-2.1-Pspark-2.0-Pspark-1.6-Pspark-1.5-Pspark-1.4-Pcassandra-spark-1.5-Pcassandra-spark-1.4-Pcassandra-spark-1.3-Pcassandra-spark-1.2-Pcassandra-spark-1.1minor
 version can be adjusted by -Dspark.version=x.x.x-Phadoop-[version]set hadoop 
major versionAvailable profiles 
are-Phadoop-0.23-Phadoop-1-Phadoop-2.2-Phadoop-2.3-Phadoop-2.4-Phadoop-2.6-Phadoop-2.7minor
 version can be adjusted by -Dhadoop.version=x.x.x-Pscala-[version] 
(optional)set scala version (default 2.10)Available profiles 
are-Pscala-2.10-Pscala-2.11-Pyarn (optional)enable YARN support for local 
modeYARN for local mode is not supported for Spark v1.5.0 or higher. Set 
SPARK_HOME instead.-Ppyspark (optional)enable PySpark support for local 
mode.-Pr (optional)enable R support with SparkR integration.-Psparkr 
(optional)another R support with SparkR integration as well as local mode 
support.-Pvendor-repo (optional)enable 3rd party vendor repository 
(cloudera)-Pmapr[version] (optional)For the MapR Hadoop Distribution, 
these profiles will handle the Hadoop version. As MapR allows different 
versions of Spark to be installed, you should specify which version of Spark is 
installed on the cluster by adding a Spark profile (-Pspark-1.6, -Pspark-2.0, 
etc.) as needed.The correct Maven artifacts can be found for every version of 
MapR at http://doc.mapr.comAvailable profiles 
are-Pmapr3-Pmapr40-Pmapr41-Pmapr50-Pmapr51-Pexamples (optional)Build examples 
under zeppelin-examples directoryBuild command examplesHere are some examples 
with several options:# build with spark-2.1, 
scala-2.11./dev/change_scala_version.sh 2.11mvn clean package -Pspark-2.1 
-Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pscala-2.11 -DskipTests# build with 
spark-2.0, scala-2.11./dev/change_scala_version.sh 2.11mvn clean package 
-Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pscala-2.11 -DskipTests# 
build with spark-1.6, scala-2.10mvn clean package -Pspark-1.6 -Phadoop-2.4 
-Pyarn -Ppyspark -Psparkr -DskipTests# spark-cassandra 
integrationmvn clean package -Pcassandra-spark-1.5 -Dhadoop.version=2.6.0 
-Phadoop-2.6 -DskipTests# with CDHmvn clean package -Pspark-1.5 
-Dhadoop.version=2.6.0-cdh5.5.0 -Phadoop-2.6 -Pvendor-repo -DskipTests# with 
MapRmvn clean package -Pspark-1.5 -Pmapr50 -DskipTestsIgnite Interpretermvn 
clean package -Dignite.version=1.8.0 -DskipTestsScalding Interpretermvn clean 
package -Pscalding -DskipTestsBuild requirementsInstall requirementsIf you 
don't have the requirements prepared, install them. (The installation 
method may vary according to your environment; the example is for Ubuntu.)sudo apt-get 
updatesudo apt-get install gitsudo apt-get install openjdk-7-jdksudo apt-get 
install npmsudo apt-get install libfontconfigInstall mavenwget 
http://www.eu.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gzsudo
 tar -zxf apache-maven-3.3.9-bin.tar.gz -C /usr/local/sudo ln -s 
/usr/local/apache-maven-3.3.9/bin/mvn /usr/local/bin/mvnNotes: - Ensure node 
is installed by running 
node --version - Ensure maven is running version 3.1.x or higher with mvn 
-version - Configure maven to use more memory than usual by export 
MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=1024m"Proxy setting 
(optional)If you're behind a proxy, you'll need to configure 
maven and npm to pass through it.First of all, configure maven in your 
~/.m2/settings.xml.<settings>  <proxies>    
<proxy>      <id>proxy-http</id>      
<active>true</active>      
<protocol>http</protocol>      
<host>localhost</host>      
<port>3128</port>      <!-- 
<username>usr</username>      
<password>pwd</password> -->      
<nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>    
</proxy>    
<proxy>      <id>proxy-https</id>      
<active>true</active>      
<protocol>https</protocol>      
<host>localhost</host>      
<port>3128</port>      <!-- 
<username>usr</username>      
<password>pwd</password> -->      
<nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>  
  </proxy>  
</proxies></settings>Then, the following commands will 
configure npm.npm config set proxy http://localhost:3128npm config set 
https-proxy http://localhost:3128npm config set registry 
"http://registry.npmjs.org/"npm config set strict-ssl 
falseConfigure git as wellgit config --global http.proxy 
http://localhost:3128git config --global https.proxy http://localhost:3128git 
config --global 
url."http://".insteadOf git://To clean up, set active false 
in Maven settings.xml and run these commands.npm config rm proxynpm config rm 
https-proxygit config --global --unset http.proxygit config --global --unset 
https.proxygit config --global --unset 
url."http://".insteadOfNotes: - If you are behind NTLM proxy 
you can use Cntlm Authentication Proxy. - Replace localhost:3128 with the 
standard pattern http://user:pwd@host:port.PackageTo package the final 
distribution including the compressed archive, run:mvn clean package 
-Pbuild-distrTo build a distribution with specific profiles, run:mvn clean 
package -Pbuild-distr -Pspark-1.5 -Phadoop-2.4 -Pyarn -PpysparkThe profiles 
-Pspark-1.5 -Phadoop-2.4 -Pyarn -Ppyspark can be adjusted if you wish to build 
for a specific Spark version, or omit support such as YARN.  The archive is 
generated under zeppelin-distribution/target directoryRun end-to-end 
testsZeppelin comes with a set of end-to-end acceptance tests driving a 
headless selenium browser# assumes zeppelin-server running on localhost:8080 
(use -Durl=.. to override)mvn verify# or take care of starting/stopping 
zeppelin-server from packaged zeppelin-distribution/targetmvn verify -P using-packaged-distr",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Building from SourceIf you 
want to build from source, you must first install the following dependencies:   
   Name    Value        Git    (Any Version)        Maven    3.1.x or higher    
    JDK    1.7  If you haven't installed Git and Maven yet, check the 
Build requirements section and follow the step by step instructions from 
there.1. Clone the Apache Zeppelin repositorygit clone 
https://github.com/apache/zeppelin.git2. Build sourceYou can build Zeppelin 
with the following maven command:mvn clean 
package -DskipTests [Options]If you're unsure about the options, use 
the same commands that create the official binary package.# update all pom.xml to 
use scala 2.11./dev/change_scala_version.sh 2.11# build zeppelin with all 
interpreters and include latest version of Apache spark support for local 
mode.mvn clean package -DskipTests -Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark 
-Psparkr -Pr -Pscala-2.113. DoneYou can directly start Zeppelin by running 
after successful build:./bin/zeppelin-daemon.sh startCheck build-profiles 
section for further build options.If you are behind a proxy, follow the instructions 
in the Proxy setting section.If you're interested in contributing, please 
check Contributing to Apache Zeppelin (Code) and Contributing to Apache 
Zeppelin (Website).Build profilesSpark InterpreterTo build with a specific 
Spark version, Hadoop version or specific features, define one or more of the 
following profiles and options:-Pspark-[version]Set spark major versionAvailable profiles 
are-Pspark-2.1-Pspark-2.0-Pspark-1.6-Pspark-1.5-Pspark-1.4-Pcassandra-spark-1.5-Pcassandra-spark-1.4-Pcassandra-spark-1.3-Pcassandra-spark-1.2-Pcassandra-spark-1.1minor
 version can be adjusted by -Dspark.version=x.x.x-Phadoop-[version]set hadoop 
major versionAvailable profiles 
are-Phadoop-0.23-Phadoop-1-Phadoop-2.2-Phadoop-2.3-Phadoop-2.4-Phadoop-2.6-Phadoop-2.7minor
 version can be adjusted by -Dhadoop.version=x.x.x-Pscala-[version] 
(optional)set scala version (default 2.10)Available profiles 
are-Pscala-2.10-Pscala-2.11-Pyarn (optional)enable YARN support for local 
modeYARN for local mode is not supported for Spark v1.5.0 or higher. Set 
SPARK_HOME instead.-Ppyspark (optional)enable PySpark support for local 
mode.-Pr (optional)enable R support with SparkR integration.-Psparkr 
(optional)another R support with SparkR integration as well as local mode 
support.-Pvendor-repo (optional)enable 3rd party vendor repository 
(cloudera)-Pmapr[version] (optional)For the MapR Hadoop Distribution, 
these profiles will handle the Hadoop version. As MapR allows different 
versions of Spark to be installed, you should specify which version of Spark is 
installed on the cluster by adding a Spark profile (-Pspark-1.6, -Pspark-2.0, 
etc.) as needed.The correct Maven artifacts can be found for every version of 
MapR at http://doc.mapr.comAvailable profiles 
are-Pmapr3-Pmapr40-Pmapr41-Pmapr50-Pmapr51-Pexamples (optional)Build examples 
under zeppelin-examples directoryBuild command examplesHere are some examples 
with several options:# build with spark-2.1, 
scala-2.11./dev/change_scala_version.sh 2.11mvn clean package -Pspark-2.1 
-Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pscala-2.11 -DskipTests# build with 
spark-2.0, scala-2.11./dev/change_scala_version.sh 2.11mvn clean package 
-Pspark-2.0 -Phadoop-2.4 -Pyarn -Ppyspark -Psparkr -Pscala-2.11 -DskipTests# 
build with spark-1.6, scala-2.10mvn clean package -Pspark-1.6 -Phadoop-2.4 
-Pyarn -Ppyspark -Psparkr -DskipTests# spark-cassandra 
integrationmvn clean package -Pcassandra-spark-1.5 -Dhadoop.version=2.6.0 
-Phadoop-2.6 -DskipTests# with CDHmvn clean package -Pspark-1.5 
-Dhadoop.version=2.6.0-cdh5.5.0 -Phadoop-2.6 -Pvendor-repo -DskipTests# with 
MapRmvn clean package -Pspark-1.5 -Pmapr50 -DskipTestsIgnite Interpretermvn 
clean package -Dignite.version=1.9.0 -DskipTestsScalding Interpretermvn clean 
package -Pscalding -DskipTestsBuild requirementsInstall requirementsIf you 
don't have the requirements prepared, install them. (The installation 
method may vary according to your environment; the example is for Ubuntu.)sudo apt-get 
updatesudo apt-get install gitsudo apt-get install openjdk-7-jdksudo apt-get 
install npmsudo apt-get install libfontconfigInstall mavenwget 
http://www.eu.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gzsudo
 tar -zxf apache-maven-3.3.9-bin.tar.gz -C /usr/local/sudo ln -s 
/usr/local/apache-maven-3.3.9/bin/mvn /usr/local/bin/mvnNotes: - Ensure node 
is installed by running 
node --version - Ensure maven is running version 3.1.x or higher with mvn 
-version - Configure maven to use more memory than usual by export 
MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=1024m"Proxy setting 
(optional)If you're behind a proxy, you'll need to configure 
maven and npm to pass through it.First of all, configure maven in your 
~/.m2/settings.xml.<settings>  <proxies>    
<proxy>      <id>proxy-http</id>      
<active>true</active>      
<protocol>http</protocol>      
<host>localhost</host>      
<port>3128</port>      <!-- 
<username>usr</username>      
<password>pwd</password> -->      
<nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>    
</proxy>    
<proxy>      <id>proxy-https</id>      
<active>true</active>      
<protocol>https</protocol>      
<host>localhost</host>      
<port>3128</port>      <!-- 
<username>usr</username>      
<password>pwd</password> -->      
<nonProxyHosts>localhost|127.0.0.1</nonProxyHosts>  
  </proxy>  
</proxies></settings>Then, the following commands will 
configure npm.npm config set proxy http://localhost:3128npm config set 
https-proxy http://localhost:3128npm config set registry 
"http://registry.npmjs.org/"npm config set strict-ssl 
falseConfigure git as wellgit config --global http.proxy 
http://localhost:3128git config --global https.proxy http://localhost:3128git 
config --global 
url."http://".insteadOf git://To clean up, set active false 
in Maven settings.xml and run these commands.npm config rm proxynpm config rm 
https-proxygit config --global --unset http.proxygit config --global --unset 
https.proxygit config --global --unset 
url."http://".insteadOfNotes: - If you are behind NTLM proxy 
you can use Cntlm Authentication Proxy. - Replace localhost:3128 with the 
standard pattern http://user:pwd@host:port.PackageTo package the final 
distribution including the compressed archive, run:mvn clean package 
-Pbuild-distrTo build a distribution with specific profiles, run:mvn clean 
package -Pbuild-distr -Pspark-1.5 -Phadoop-2.4 -Pyarn -PpysparkThe profiles 
-Pspark-1.5 -Phadoop-2.4 -Pyarn -Ppyspark can be adjusted if you wish to build 
for a specific Spark version, or omit support such as YARN.  The archive is 
generated under zeppelin-distribution/target directoryRun end-to-end 
testsZeppelin comes with a set of end-to-end acceptance tests driving a 
headless selenium browser# assumes zeppelin-server running on localhost:8080 
(use -Durl=.. to override)mvn verify# or take care of starting/stopping 
zeppelin-server from packaged zeppelin-distribution/targetmvn verify -P using-packaged-distr",
       "url": " /install/build.html",
       "group": "install",
       "excerpt": "How to build Zeppelin from source"
@@ -127,7 +127,7 @@
 
     "/install/configuration.html": {
       "title": "Apache Zeppelin Configuration",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Apache Zeppelin 
ConfigurationZeppelin PropertiesThere are two locations you can configure 
Apache Zeppelin.Environment variables can be defined in 
conf/zeppelin-env.sh (conf/zeppelin-env.cmd for Windows). Java properties can be 
defined in conf/zeppelin-site.xml.If both are defined, then the environment 
variables will take priority.Hover over each property and click it, then you 
can get a link for that.      zeppelin-env.sh    zeppelin-site.xml    
Default value    Description        ZEPPELIN_PORT    
zeppelin.server.port    8080    Zeppelin server port        ZEPPELIN_SSL_PORT   
 zeppelin.server.ssl.port    8443    Zeppelin Server ssl port (used when ssl 
environment/property is set to true)        ZEPPELIN_MEM    N/A    -Xmx1024m 
-XX:MaxPermSize=512m    JVM mem options        ZEPPELIN_INTP_MEM    N/A    
ZEPPELIN_MEM    JVM mem options for interpreter process        
ZEPPELIN_JAVA_OPTS    N/A        JVM options        ZEPPELIN_ALLOWED_ORIGINS    
zeppelin.server.allowed.origins    *    Enables a way to specify a ',' 
separated list of allowed origins for REST and websockets.  e.g. 
http://localhost:8080          N/A    zeppelin.anonymous.allowed    true    The 
anonymous user is allowed by default.        ZEPPELIN_SERVER_CONTEXT_PATH    
zeppelin.server.context.path    /    Context path of the web application        
ZEPPELIN_SSL    zeppelin.ssl    false            ZEPPELIN_SSL_CLIENT_AUTH    
zeppelin.ssl.client.auth    false            ZEPPELIN_SSL_KEYSTORE_PATH    
zeppelin.ssl.keystore.path 
   keystore            ZEPPELIN_SSL_KEYSTORE_TYPE    zeppelin.ssl.keystore.type 
   JKS            ZEPPELIN_SSL_KEYSTORE_PASSWORD    
zeppelin.ssl.keystore.password                ZEPPELIN_SSL_KEY_MANAGER_PASSWORD 
   zeppelin.ssl.key.manager.password                
ZEPPELIN_SSL_TRUSTSTORE_PATH    zeppelin.ssl.truststore.path                
ZEPPELIN_SSL_TRUSTSTORE_TYPE    zeppelin.ssl.truststore.type                
ZEPPELIN_SSL_TRUSTSTORE_PASSWORD    zeppelin.ssl.truststore.password            
    ZEPPELIN_NOTEBOOK_HOMESCREEN    zeppelin.notebook.homescreen        Display 
note IDs on the Apache Zeppelin homescreen e.g. 2A94M5J1Z        
ZEPPELIN_NOTEBOOK_HOMESCREEN_HIDE    zeppelin.notebook.homescreen.hide    false 
   Hide the note ID set by ZEPPELIN_NOTEBOOK_HOMESCREEN on the Apache Zeppelin 
homescreen. For further information, please read Customize your Zeppelin 
homepage.        ZEPPELIN_WAR_TEMPDIR    
 zeppelin.war.tempdir    webapps    Location of the jetty temporary directory   
     ZEPPELIN_NOTEBOOK_DIR    zeppelin.notebook.dir    notebook    The root 
directory where notebook directories are saved        
ZEPPELIN_NOTEBOOK_S3_BUCKET    zeppelin.notebook.s3.bucket    zeppelin    S3 
Bucket where notebook files will be saved        ZEPPELIN_NOTEBOOK_S3_USER    
zeppelin.notebook.s3.user    user    User name of an S3 bucket e.g. 
bucket/user/notebook/2A94M5J1Z/note.json        ZEPPELIN_NOTEBOOK_S3_ENDPOINT   
 zeppelin.notebook.s3.endpoint    s3.amazonaws.com    Endpoint for the bucket   
     ZEPPELIN_NOTEBOOK_S3_KMS_KEY_ID    zeppelin.notebook.s3.kmsKeyID        
AWS KMS Key ID to use for encrypting data in S3 (optional)        
ZEPPELIN_NOTEBOOK_S3_EMP    zeppelin.notebook.s3.encryptionMaterialsProvider    
    Class name of a custom S3 encryption materials provider implementation to 
use for encrypting data in S3 (optional)        ZEPPELIN_NOTEBOOK_S3_SSE    
zeppelin.notebook.s3.sse    false    Save notebooks to S3 with server-side 
encryption enabled        
ZEPPELIN_NOTEBOOK_AZURE_CONNECTION_STRING    
zeppelin.notebook.azure.connectionString        The Azure storage account 
connection string e.g. 
DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>
        ZEPPELIN_NOTEBOOK_AZURE_SHARE    zeppelin.notebook.azure.share    
zeppelin    Azure Share where the notebook files will be saved        
ZEPPELIN_NOTEBOOK_AZURE_USER    zeppelin.notebook.azure.user    user    
Optional user name of an Azure file share e.g. 
share/user/notebook/2A94M5J1Z/note.json        ZEPPELIN_NOTEBOOK_STORAGE    
zeppelin.notebook.storage    org.apache.zeppelin.notebook.repo.GitNotebookRepo  
  Comma separated list of notebook storage locations        
ZEPPELIN_NOTEBOOK_ONE_WAY_SYNC    zeppelin.notebook.one.way.sync    false    If 
there are multiple notebook storage locations, should we treat the first one as 
the only source of truth?        ZEPPELIN_NOTEBOOK_PUBLIC    
zeppelin.notebook.public    true    Make notebook 
public (set only owners) by default when created/imported. If set to false will 
add user to readers and writers as well, making it private and invisible to 
other users unless permissions are granted.        ZEPPELIN_INTERPRETERS    
zeppelin.interpreters      
org.apache.zeppelin.spark.SparkInterpreter,org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkSqlInterpreter,org.apache.zeppelin.spark.DepInterpreter,org.apache.zeppelin.markdown.Markdown,org.apache.zeppelin.shell.ShellInterpreter,
    ...              Comma separated interpreter configurations [Class]       
NOTE: This property is deprecated since Zeppelin-0.6.0 and will not be 
supported from Zeppelin-0.7.0.            ZEPPELIN_INTERPRETER_DIR    
zeppelin.interpreter.dir    interpreter    Interpreter directory        
ZEPPELIN_INTERPRETER_OUTPUT_LIMIT    zeppelin.interpreter.output.limit    
102400    Output message from interpreter exceeding the limit will be 
truncated        
ZEPPELIN_INTERPRETER_CONNECT_TIMEOUT    zeppelin.interpreter.connect.timeout    
30000    Interpreter process connect timeout in msec.  
      ZEPPELIN_WEBSOCKET_MAX_TEXT_MESSAGE_SIZE    
zeppelin.websocket.max.text.message.size    1024000    Size(in characters) of 
the maximum text message that can be received by websocket.        
ZEPPELIN_SERVER_DEFAULT_DIR_ALLOWED    zeppelin.server.default.dir.allowed    
false    Enable directory listings on server.  SSL ConfigurationEnabling SSL 
requires a few configuration changes. First, you need to create certificates 
and then update necessary configurations to enable server side SSL and/or 
client side certificate authentication.Creating and configuring the 
CertificatesInformation about how to generate certificates and a keystore can 
be found here.A condensed example can be found in the top answer to this 
StackOverflow post.The keystore holds the private key and certificate on the 
server end. The truststore holds the trusted client certificates. Be sure 
that the path and password for these two stores are correctly configured in the 
password fields below. They can be obfuscated using the Jetty password tool. 
After Maven pulls in all the dependencies to build Zeppelin, one of the Jetty 
jars contains the Password tool. Invoke this command from the Zeppelin home 
build directory with the appropriate version, user, and password.java -cp 
./zeppelin-server/target/lib/jetty-all-server-<version>.jar 
org.eclipse.jetty.util.security.Password <user> 
<password>If you are using a self-signed certificate, one signed by 
an untrusted CA, or if client authentication is enabled, then the client must 
have a browser create exceptions for both the normal HTTPS port and WebSocket 
port. This can be done by trying to establish an HTTPS connection to both ports 
in a browser (e.g. if the ports are 443 and 8443, then visit 
https://127.0.0.1:443 and https://127.0.0.1:8443). This step can be skipped if 
the server certificate is 
signed by a trusted CA and client auth is disabled.Configuring server side 
SSLThe following properties need to be updated in the zeppelin-site.xml in 
order to enable server side SSL.<property>  
<name>zeppelin.server.ssl.port</name>  
<value>8443</value>  
<description>Server ssl port. (used when ssl property is set to 
true)</description></property><property>
  <name>zeppelin.ssl</name>  
<value>true</value>  
<description>Should SSL be used by the 
servers?</description></property><property>
  <name>zeppelin.ssl.keystore.path</name>  
<value>keystore</value>  
<description>Path to keystore relative to Zeppelin 
configuration 
directory</description></property><property>
  <name>zeppelin.ssl.keystore.type</name>  
<value>JKS</value>  <description>The 
format of the given keystore (e.g. JKS or 
PKCS12)</description></property><property>
  <name>zeppelin.ssl.keystore.password</name>  
<value>change me</value>  
<description>Keystore password. Can be obfuscated by the Jetty 
Password 
tool</description></property><property>
  <name>zeppelin.ssl.key.manager.password</name>  
<value>change me</value>  
<description>Key Manager password. Defaults to keystore password. 
Can be obfuscated.</description></property>Enabling 
client side certificate authenticationThe following properties need to be 
updated in the zeppelin-site.xml in 
order to enable client side certificate authentication.<property> 
 <name>zeppelin.server.ssl.port</name>  
<value>8443</value>  
<description>Server ssl port. (used when ssl property is set to 
true)</description></property><property>
  <name>zeppelin.ssl.client.auth</name>  
<value>true</value>  
<description>Should client authentication be used for SSL 
connections?</description></property><property>
  <name>zeppelin.ssl.truststore.path</name>  
<value>truststore</value>  
<description>Path to truststore relative to Zeppelin 
configuration directory. Defaults to the keystore 
path</description></property><property>  
<name>zeppelin.ssl.truststore.type</name>  
<value>JKS</value>  <description>The 
format of the given truststore (e.g. JKS or PKCS12). Defaults to the same type 
as the keystore 
type</description></property><property>
  <name>zeppelin.ssl.truststore.password</name>  
<value>change me</value>  
<description>Truststore password. Can be obfuscated by the Jetty 
Password tool. Defaults to the keystore 
password</description></property>Obfuscating 
Passwords using the Jetty Password ToolSecurity best practices advise not to 
use plain text passwords, and Jetty provides a password tool to help obfuscate 
the passwords used to access the KeyStore and TrustStore.The Password tool 
documentation can be found here.After using the tool:java -cp 
$ZEPPELIN_HOME/zeppelin-server/target/lib/jetty-util-9.2.15.v20160210.jar          
org.eclipse.jetty.util.security.Password           password2016-12-15 
10:46:47.931:INFO::main: Logging initialized 
@101mspasswordOBF:1v2j1uum1xtv1zej1zer1xtn1uvk1v1vMD5:5f4dcc3b5aa765d61d8327deb882cf99update 
your configuration with the obfuscated password:<property>  
<name>zeppelin.ssl.keystore.password</name>  
<value>OBF:1v2j1uum1xtv1zej1zer1xtn1uvk1v1v</value> 
 <description>Keystore password. Can be obfuscated by the Jetty 
Password tool</description></property>Note: After 
updating these configurations, Zeppelin server needs to be restarted.",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Apache Zeppelin 
ConfigurationZeppelin PropertiesThere are two locations you can configure 
Apache Zeppelin.Environment variables can be defined in 
conf/zeppelin-env.sh (conf/zeppelin-env.cmd for Windows). Java properties can be 
defined in conf/zeppelin-site.xml.If both are defined, then the environment 
variables will take priority.Hover over each property and click it, then you 
can get a link for that.      zeppelin-env.sh    zeppelin-site.xml    
Default value    Description        ZEPPELIN_PORT    
zeppelin.server.port    8080    Zeppelin server port        Note: Please make 
sure you're not using the same port as the      Zeppelin web application 
development port (default: 9000).        ZEPPELIN_SSL_PORT    
zeppelin.server.ssl.port    8443    Zeppelin Server ssl port (used when ssl 
environment/property is set to true)        ZEPPELIN_MEM    N/A    -Xmx1024m 
-XX:MaxPermSize=512m    JVM mem options        ZEPPELIN_INTP_MEM    N/A    
ZEPPELIN_MEM    JVM mem options for interpreter process        
ZEPPELIN_JAVA_OPTS    N/A        JVM options        ZEPPELIN_ALLOWED_ORIGINS    
zeppelin.server.allowed.origins    *    Enables a way to specify a ',' 
separated list of allowed origins for REST and websockets.  e.g. 
http://localhost:8080          N/A    zeppelin.anonymous.allowed    true    The 
anonymous user is allowed by default.        ZEPPELIN_SERVER_CONTEXT_PATH    
zeppelin.server.context.path    /    Context path of the web application        
ZEPPELIN_SSL    zeppelin.ssl    false         
   ZEPPELIN_SSL_CLIENT_AUTH    zeppelin.ssl.client.auth    false            
ZEPPELIN_SSL_KEYSTORE_PATH    zeppelin.ssl.keystore.path    keystore            
ZEPPELIN_SSL_KEYSTORE_TYPE    zeppelin.ssl.keystore.type    JKS            
ZEPPELIN_SSL_KEYSTORE_PASSWORD    zeppelin.ssl.keystore.password                
ZEPPELIN_SSL_KEY_MANAGER_PASSWORD    zeppelin.ssl.key.manager.password          
      ZEPPELIN_SSL_TRUSTSTORE_PATH    zeppelin.ssl.truststore.path              
  ZEPPELIN_SSL_TRUSTSTORE_TYPE    zeppelin.ssl.truststore.type                
ZEPPELIN_SSL_TRUSTSTORE_PASSWORD    zeppelin.ssl.truststore.password            
    ZEPPELIN_NOTEBOOK_HOMESCREEN    zeppelin.notebook.homescreen        Display 
note IDs on the Apache Zeppelin homescreen e.g. 2A94M5J1Z        
ZEPPELIN_NOTEBOOK_HOMESCREEN_HIDE    zeppelin.notebook.homescreen.hide    false 
   Hide the note ID set by ZEPPELIN_NOTEBOOK_HOMESCREEN on the Apache Zeppelin 
homescreen. For further information, please read Customize 
your Zeppelin homepage.        ZEPPELIN_WAR_TEMPDIR    zeppelin.war.tempdir    
webapps    Location of the jetty temporary directory        
ZEPPELIN_NOTEBOOK_DIR    zeppelin.notebook.dir    notebook    The root 
directory where notebook directories are saved        
ZEPPELIN_NOTEBOOK_S3_BUCKET    zeppelin.notebook.s3.bucket    zeppelin    S3 
Bucket where notebook files will be saved        ZEPPELIN_NOTEBOOK_S3_USER    
zeppelin.notebook.s3.user    user    User name of an S3 bucket e.g. 
bucket/user/notebook/2A94M5J1Z/note.json        ZEPPELIN_NOTEBOOK_S3_ENDPOINT   
 zeppelin.notebook.s3.endpoint    s3.amazonaws.com    Endpoint for the bucket   
     ZEPPELIN_NOTEBOOK_S3_KMS_KEY_ID    zeppelin.notebook.s3.kmsKeyID        
AWS KMS Key ID to use for encrypting data in S3 (optional)        
ZEPPELIN_NOTEBOOK_S3_EMP    zeppelin.notebook.s3.encryptionMaterialsProvider    
    Class name of a custom S3 encryption materials
  provider implementation to use for encrypting data in S3 (optional)        
ZEPPELIN_NOTEBOOK_S3_SSE    zeppelin.notebook.s3.sse    false    Save notebooks 
to S3 with server-side encryption enabled        
ZEPPELIN_NOTEBOOK_AZURE_CONNECTION_STRING    
zeppelin.notebook.azure.connectionString        The Azure storage account 
connection string e.g. 
DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>
        ZEPPELIN_NOTEBOOK_AZURE_SHARE    zeppelin.notebook.azure.share    
zeppelin    Azure Share where the notebook files will be saved        
ZEPPELIN_NOTEBOOK_AZURE_USER    zeppelin.notebook.azure.user    user    
Optional user name of an Azure file share e.g. 
share/user/notebook/2A94M5J1Z/note.json        ZEPPELIN_NOTEBOOK_STORAGE    
zeppelin.notebook.storage    org.apache.zeppelin.notebook.repo.GitNotebookRepo  
  Comma separated list of notebook storage locations        
ZEPPELIN_NOTEBOOK_ONE_WAY_SYNC    zeppelin.notebook.one.way.sync 
    false    If there are multiple notebook storage locations, should we treat 
the first one as the only source of truth?        ZEPPELIN_NOTEBOOK_PUBLIC    
zeppelin.notebook.public    true    Make notebook public (set only owners) by 
default when created/imported. If set to false will add user to readers and 
writers as well, making it private and invisible to other users unless 
permissions are granted.        ZEPPELIN_INTERPRETERS    zeppelin.interpreters  
    
org.apache.zeppelin.spark.SparkInterpreter,org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkSqlInterpreter,org.apache.zeppelin.spark.DepInterpreter,org.apache.zeppelin.markdown.Markdown,org.apache.zeppelin.shell.ShellInterpreter,
    ...              Comma separated interpreter configurations [Class]       
NOTE: This property is deprecated since Zeppelin-0.6.0 and will not be 
supported from Zeppelin-0.7.0.            ZEPPELIN_INTERPRETER_DIR    
zeppelin.interpreter.dir    interpreter    Interpreter directory        
ZEPPELIN_INTERPRETER_DEP_MVNREPO    
zeppelin.interpreter.dep.mvnRepo    http://repo1.maven.org/maven2/    Remote 
principal repository for interpreter's additional dependency loading        
ZEPPELIN_INTERPRETER_OUTPUT_LIMIT    zeppelin.interpreter.output.limit    
102400    Output message from interpreter exceeding the limit will be truncated 
       ZEPPELIN_INTERPRETER_CONNECT_TIMEOUT    
zeppelin.interpreter.connect.timeout    30000    Interpreter process connect 
timeout in msec.        ZEPPELIN_DEP_LOCALREPO 
   zeppelin.dep.localrepo    local-repo    Local repository for dependency 
loader, e.g. visualization modules of npm.        ZEPPELIN_HELIUM_NPM_REGISTRY    
zeppelin.helium.npm.registry    http://registry.npmjs.org/    Remote Npm 
registry for Helium dependency loader        
ZEPPELIN_WEBSOCKET_MAX_TEXT_MESSAGE_SIZE    
zeppelin.websocket.max.text.message.size    1024000    Size(in characters) of 
the maximum text message that can be received 
 by websocket.        ZEPPELIN_SERVER_DEFAULT_DIR_ALLOWED    
zeppelin.server.default.dir.allowed    false    Enable directory listings on 
server.  SSL ConfigurationEnabling SSL requires a few configuration changes. 
First, you need to create certificates and then update necessary configurations 
to enable server side SSL and/or client side certificate 
authentication.Creating and configuring the CertificatesInformation about how 
to generate certificates and a keystore can be found here.A condensed example 
can be found in the top answer to this StackOverflow post.The keystore holds 
the private key and certificate on the server end. The truststore holds the 
trusted client certificates. Be sure that the path and password for these two 
stores are correctly configured in the password fields below. They can be 
obfuscated using the Jetty password tool. After Maven pulls in all the 
dependencies to build Zeppelin, one of the Jetty jars contains the Password tool. 
Invoke this command from the Zeppelin
  home build directory with the appropriate version, user, and password.java 
-cp ./zeppelin-server/target/lib/jetty-all-server-<version>.jar 
org.eclipse.jetty.util.security.Password <user> 
<password>If you are using a self-signed certificate, one signed by 
an untrusted CA, or if client authentication is enabled, then the client must 
have a browser create exceptions for both the normal HTTPS port and WebSocket 
port. This can be done by trying to establish an HTTPS connection to both ports 
in a browser (e.g. if the ports are 443 and 8443, then visit 
https://127.0.0.1:443 and https://127.0.0.1:8443). This step can be skipped if 
the server certificate is signed by a trusted CA and client auth is 
disabled.Configuring server side SSLThe following properties need to be 
updated in the zeppelin-site.xml in order to enable server side 
SSL.<property>  
<name>zeppelin.server.ssl.port</name>  
<value>8443</value>  <description>Server ssl port. 
(used 
when ssl property is set to 
true)</description></property><property>
  <name>zeppelin.ssl</name>  
<value>true</value>  
<description>Should SSL be used by the 
servers?</description></property><property>
  <name>zeppelin.ssl.keystore.path</name>  
<value>keystore</value>  
<description>Path to keystore relative to Zeppelin configuration 
directory</description></property><property>
  <name>zeppelin.ssl.keystore.type</name>  
<value>JKS</value>  <description>The 
format of the given keystore (e.g. JKS or 
PKCS12)</description></property><property>
  
 <name>zeppelin.ssl.keystore.password</name>  
<value>change me</value>  
<description>Keystore password. Can be obfuscated by the Jetty 
Password 
tool</description></property><property>
  <name>zeppelin.ssl.key.manager.password</name>  
<value>change me</value>  
<description>Key Manager password. Defaults to keystore password. 
Can be obfuscated.</description></property>Enabling 
client side certificate authenticationThe following properties need to be 
updated in the zeppelin-site.xml in order to enable client side certificate 
authentication.<property>  
<name>zeppelin.server.ssl.port</name>  
<value>8443</value>  
<description>Server ssl port. (used when ssl property is set to 
true)</description></property><property>  
<name>zeppelin.ssl.client.auth</name>  
<value>true</value>  
<description>Should client authentication be used for SSL 
connections?</description></property><property>
  <name>zeppelin.ssl.truststore.path</name>  
<value>truststore</value>  
<description>Path to truststore relative to Zeppelin 
configuration directory. Defaults to the keystore 
path</description></property><property>
  <name>zeppelin.ssl.truststore.type</name>  
<value>JKS</value>  <description>The 
format of the given truststore (e.g. JKS or PKCS12). Defaults to the same type 
as the keystore 
type</description></property><property>  
<name>zeppelin.ssl.truststore.password</name>  
<value>change me</value>  
<description>Truststore password. Can be obfuscated by the Jetty 
Password tool. Defaults to the keystore 
password</description></property>Obfuscating 
Passwords using the Jetty Password ToolSecurity best practices advise not to 
use plain text passwords, and Jetty provides a password tool to help obfuscate 
the passwords used to access the KeyStore and TrustStore.The Password tool 
documentation can be found here.After using the tool:java -cp 
$ZEPPELIN_HOME/zeppelin-server/target/lib/jetty-util-9.2.15.v20160210.jar       
   org.eclipse.jetty.util.security.Password           password2016-12-15 
10:46:47.931:INFO::main: Logging initialized 
@101mspasswordOBF:1v2j1uum1xtv1zej1zer1xtn1uvk1v1vMD5:5f4dcc3b5aa765d61d8327deb882cf99update 
your configuration with the obfuscated password:<property>  
<name>zeppelin.ssl.keystore.password</name>  
<value>OBF:1v2j1uum1xtv1zej1zer1xtn1uvk1v1v</value> 
 <description>Keystore password. Can be obfuscated by the Jetty 
Password tool</description></property>Note: After 
updating these configurations, Zeppelin server needs to be restarted.",
       "url": " /install/configuration.html",
       "group": "install",
       "excerpt": "This page will guide you to configure Apache Zeppelin using 
either environment variables or Java properties. Also, you can configure SSL 
for Zeppelin."
@@ -171,7 +171,7 @@
 
     "/install/upgrade.html": {
       "title": "Manual Zeppelin version upgrade procedure",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Manual upgrade procedure for 
ZeppelinBasically, newer version of Zeppelin works with previous version 
notebook directory and configurations.So, copying notebook and conf directory 
should be enough.InstructionsStop Zeppelinbin/zeppelin-daemon.sh stopCopy your 
notebook and conf directory into a backup directoryDownload newer version of 
Zeppelin and Install. See Install page.Copy backup notebook and conf directory 
into newer version of Zeppelin notebook and conf directoryStart 
Zeppelinbin/zeppelin-daemon.sh 
startMigration GuideUpgrading from Zeppelin 0.6 to 0.7From 0.7, we 
don't use ZEPPELIN_JAVA_OPTS as default value of 
ZEPPELIN_INTP_JAVA_OPTS and also the same for ZEPPELIN_MEM/ZEPPELIN_INTP_MEM. 
If you want to configure the jvm opts of interpreter process, please set 
ZEPPELIN_INTP_JAVA_OPTS and ZEPPELIN_INTP_MEM explicitly. If you don't 
set ZEPPELIN_INTP_MEM, Zeppelin will set it to -Xms1024m -Xmx1024m 
-XX:MaxPermSize=512m by default.Mapping from %jdbc(prefix) to %prefix is no 
longer available. Instead, you can use %[interpreter alias] with multiple 
interpreter settings in the GUI.Usage of ZEPPELIN_PORT is not supported in ssl 
mode. Instead use ZEPPELIN_SSL_PORT to configure the ssl port. Value from 
ZEPPELIN_PORT is used only when ZEPPELIN_SSL is set to false.The support on 
Spark 1.1.x to 1.3.x is deprecated.From 0.7, we use pegdown as the 
markdown.parser.type option for the %md interpreter. Rendered markdown might 
be different from what you expected.From 0.7 note.json 
format has been changed to support multiple outputs in a paragraph. Zeppelin 
will automatically convert old format to new format. 0.6 or lower version can 
read new note.json format but output will not be displayed. For the detail, see 
ZEPPELIN-212 and pull request.From 0.7 note storage layer will utilize 
GitNotebookRepo by default instead of VFSNotebookRepo storage layer, which is 
an extension of the latter with versioning capabilities on top of it.",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Manual upgrade procedure for 
ZeppelinBasically, a newer version of Zeppelin works with the previous 
version's notebook directory and configurations. So, copying the notebook and conf directories 
should be enough.InstructionsStop Zeppelinbin/zeppelin-daemon.sh stopCopy your 
notebook and conf directory into a backup directoryDownload newer version of 
Zeppelin and Install. See Install page.Copy backup notebook and conf directory 
into newer version of Zeppelin notebook and conf directoryStart 
Zeppelinbin/zeppelin-daemon.sh 
startMigration GuideUpgrading from Zeppelin 0.6 to 0.7From 0.7, we 
don't use ZEPPELIN_JAVA_OPTS as default value of 
ZEPPELIN_INTP_JAVA_OPTS and also the same for ZEPPELIN_MEM/ZEPPELIN_INTP_MEM. 
If you want to configure the jvm opts of interpreter process, please set 
ZEPPELIN_INTP_JAVA_OPTS and ZEPPELIN_INTP_MEM explicitly. If you don't 
set ZEPPELIN_INTP_MEM, Zeppelin will set it to -Xms1024m -Xmx1024m 
-XX:MaxPermSize=512m by default.Mapping from %jdbc(prefix) to %prefix is no 
longer available. Instead, you can use %[interpreter alias] with multiple 
interpreter settings in the GUI.Usage of ZEPPELIN_PORT is not supported in ssl 
mode. Instead use ZEPPELIN_SSL_PORT to configure the ssl port. Value from 
ZEPPELIN_PORT is used only when ZEPPELIN_SSL is set to false.The support on 
Spark 1.1.x to 1.3.x is deprecated.From 0.7, we use pegdown as the 
markdown.parser.type option for the %md interpreter. Rendered markdown might 
be different from what you expected.From 0.7 note.json 
format has been changed to support multiple outputs in a paragraph. Zeppelin 
will automatically convert old format to new format. 0.6 or lower version can 
read new note.json format but output will not be displayed. For the detail, see 
ZEPPELIN-212 and pull request.From 0.7 note storage layer will utilize 
GitNotebookRepo by default instead of VFSNotebookRepo storage layer, which is 
an extension of the latter with versioning capabilities on top of it.Upgrading 
from Zeppelin 0.7 to 0.8From 0.8, we recommend using PYSPARK_PYTHON and 
PYSPARK_DRIVER_PYTHON instead of zeppelin.pyspark.python, as 
zeppelin.pyspark.python only affects the driver. You can use PYSPARK_PYTHON and 
PYSPARK_DRIVER_PYTHON just as you would in Spark.",
       "url": " /install/upgrade.html",
       "group": "install",
       "excerpt": "This document will guide you through a procedure of manual 
upgrade your Apache Zeppelin instance to a newer version. Apache Zeppelin keeps 
backward compatibility for the notebook file format."
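
Spelled out as shell commands, the upgrade procedure above looks roughly like
this (a sketch; the old-zeppelin/new-zeppelin directory names and the backup
path are illustrative):

    # stop the old instance and set its notebook and conf directories aside
    old-zeppelin/bin/zeppelin-daemon.sh stop
    cp -r old-zeppelin/notebook old-zeppelin/conf /path/to/backup/
    # download and unpack the newer release (see the Install page), then
    # restore the backup into it and start the new instance
    cp -r /path/to/backup/notebook /path/to/backup/conf new-zeppelin/
    new-zeppelin/bin/zeppelin-daemon.sh start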
@@ -325,7 +325,7 @@
 
     "/interpreter/jdbc.html": {
       "title": "Generic JDBC Interpreter for Apache Zeppelin",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Generic JDBC Interpreter for 
Apache ZeppelinOverviewJDBC interpreter lets you create a JDBC connection to 
any data source seamlessly.Inserts, Updates, and Upserts are applied 
immediately after running each statement.So far, it has been tested with:       
             Postgresql -      JDBC Driver              Mysql -      JDBC 
Driver              MariaDB -      JDBC Driver              Redshift -      
JDBC Driver              Apache Hive -       JDBC Driver              Apache 
Phoenix itself is a JDBC 
driver              Apache Drill -       JDBC Driver              Apache Tajo - 
      JDBC Driver      If you are using other databases not in the above list, 
please feel free to share your use case. It would be helpful to improve the 
functionality of JDBC interpreter.Create a new JDBC InterpreterFirst, click + 
Create button at the top-right corner in the interpreter setting page.Fill 
Interpreter name field with whatever you want to use as the alias(e.g. mysql, 
mysql2, hive, redshift, etc.). Please note that this alias will be used as 
%interpreter_name to call the interpreter in the paragraph. Then select jdbc as 
an Interpreter group. The default driver of JDBC interpreter is set as 
PostgreSQL. It means Zeppelin includes PostgreSQL driver jar in itself.So you 
don't need to add any dependencies(e.g. the artifact name or path for 
PostgreSQL driver jar) for PostgreSQL connection.The JDBC interpreter 
properties are defined by default like below.      Name    Default Value    
Description        common.max_count    1000    The maximum number of SQL result 
to display        default.driver    org.postgresql.Driver    JDBC Driver Name   
     default.password        The JDBC user password        default.url    
jdbc:postgresql://localhost:5432/    The URL for JDBC        default.user    
gpadmin    The JDBC user name  If you want to connect other databases such as 
Mysql, Redshift and Hive, you need to edit the property values.You can also use 
Credential for JDBC authentication.If default.user and default.password 
properties are deleted(using X button) for database connection in the 
interpreter setting page,the JDBC interpreter will get the account information 
from Credential.The below example is for Mysql connection.The last step is 
Dependency Setting. Since Zeppelin only includes PostgreSQL driver jar by 
default, you need to add each driver's maven coordinates or JDBC 
driver&amp
 ;#39;s jar file path for the other databases.That's it. You can find 
more JDBC connection setting examples(Mysql, MariaDB, Redshift, Apache Hive, 
Apache Phoenix, and Apache Tajo) in this section.More propertiesThere are more 
JDBC interpreter properties you can specify like below.      Property Name    
Description        common.max_result    Max number of SQL result to display to 
prevent browser overload. This is a common property for all connections    
    zeppelin.jdbc.auth.type    Types of authentications' methods supported 
are SIMPLE, and KERBEROS        zeppelin.jdbc.principal    The principal name 
to load from the keytab        zeppelin.jdbc.keytab.location    The path to the 
keytab file        default.jceks.file    jceks store path (e.g: 
jceks://file/tmp/zeppelin.jceks)        default.jceks.credentialKey    jceks 
credential key  You can also add more properties by using this method.For 
example, if a connection needs a schema parameter, it would have to add the
  property as follows:      name    value        default.schema    schema_name  
Binding JDBC interpreter to notebookTo bind the interpreters created in the 
interpreter setting page, click the gear icon at the top-right 
corner.Select(blue) or deselect(white) the interpreter buttons depending on 
your use cases. If you need to use more than one interpreter in the notebook, 
activate several buttons.Don't forget to click Save button, or you will 
face Interpreter *** is not found error.How to useRun the paragraph with JDBC 
interpreterTo test whether your databases and Zeppelin are successfully 
connected or not, type %jdbc_interpreter_name(e.g. %mysql) at the top of the 
paragraph and run show databases.%jdbc_interpreter_nameshow databasesIf the 
paragraph is FINISHED without any errors, a new paragraph will be automatically 
added after the previous one with %jdbc_interpreter_name.So you don't 
need to type this prefix in every paragraph's header.Apply Zeppelin 
Dynamic FormsYou can leverage Zeppelin Dynamic Form inside your queries. You 
can use 
both the text input and select form parametrization 
features.%jdbc_interpreter_nameSELECT name, country, performerFROM 
demo.performersWHERE name='{{performer=Sheryl Crow|Doof|Fanfarlo|Los 
Paranoia}}'ExamplesHere are some examples you can refer to. Including 
the below connectors, you can connect every databases as long as it can be 
configured with it's JDBC driver.PostgresProperties      Name    Value  
      default.driver    org.postgresql.Driver        default.url    
jdbc:postgresql://localhost:5432/        default.user    postgres_user        
default.password    postgres_password  Postgres JDBC Driver DocsDependencies      
Artifact    Excludes        org.postgresql:postgresql:9.4.1211      Maven 
Repository: org.postgresql:postgresqlMysqlProperties      Name    Value        
default.driver    com.mysql.jdbc.Driver        default.url    
jdbc:mysql://localhost:3306/        default.user    mysql_user        
default.password    mysql_password  Mysql JDBC Driver 
DocsDependencies      Artifact    Excludes        
mysql:mysql-connector-java:5.1.38      Maven Repository: 
mysql:mysql-connector-javaMariaDBProperties      Name    Value        
default.driver    org.mariadb.jdbc.Driver        default.url    
jdbc:mariadb://localhost:3306        default.user    mariadb_user        
default.password    mariadb_password  MariaDB JDBC Driver DocsDependencies      
Artifact    Excludes        org.mariadb.jdbc:mariadb-java-client:1.5.4      
Maven Repository: org.mariadb.jdbc:mariadb-java-clientRedshiftProperties      
Name    Value        default.driver    com.amazon.redshift.jdbc42.Driver        
default.url    
jdbc:redshift://your-redshift-instance-address.redshift.amazonaws.com:5439/your-database
        default.user    redshift_user        default.password    
redshift_password  AWS Redshift JDBC Driver DocsDependencies      Artifact    
Excludes        com.amazonaws:aws-java-sdk-redshift:1.11.51      Maven 
Repository: com.amazonaws:aws-java-sdk-redshiftApache 
HiveProperties      Name    Value        default.driver    
org.apache.hive.jdbc.HiveDriver        default.url    
jdbc:hive2://localhost:10000        default.user    hive_user        
default.password    hive_password  Apache Hive 1 JDBC Driver DocsApache Hive 2 
JDBC Driver DocsDependencies      Artifact    Excludes        
org.apache.hive:hive-jdbc:0.14.0            
org.apache.hadoop:hadoop-common:2.6.0      Maven Repository : 
org.apache.hive:hive-jdbcApache PhoenixPhoenix supports thick and thin 
connection types:Thick client is faster, but must connect directly to ZooKeeper 
and HBase RegionServers.Thin client has fewer dependencies and connects through 
a Phoenix Query Server instance.Use the appropriate default.driver, 
default.url, and the dependency artifact for your connection type.Thick client 
connectionProperties      Name    Value        default.driver    
org.apache.phoenix.jdbc.PhoenixDriver        default.url    
jdbc:phoenix:localhost:2181:/hbase-unsecure        default.user    
phoenix_user        default.password    phoenix_password  Dependencies      
Artifact    Excludes        org.apache.phoenix:phoenix-core:4.4.0-HBase-1.0     
 Maven Repository: org.apache.phoenix:phoenix-coreThin client 
connectionProperties      Name    Value        default.driver    
org.apache.phoenix.queryserver.client.Driver        default.url    
jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF        
default.user    phoenix_user        default.password    phoenix_password  
DependenciesBefore Adding one of the below dependencies, check the Phoenix 
version first.      Artifact    Excludes    Description        
org.apache.phoenix:phoenix-server-client:4.7.0-HBase-1.1        For Phoenix 4.7 
       org.apache.phoenix:phoenix-queryserver-client:4.8.0-HBase-1.2        For 
Phoenix 4.8+  Maven Repository: 
org.apache.phoenix:phoenix-queryserver-clientApache TajoProperties      Name    
Value        default.driver    org.apache.tajo.jdbc.TajoDriver        
default.url    
jdbc:tajo://localhost:26002/default  Apache Tajo JDBC Driver DocsDependencies   
   Artifact    Excludes        org.apache.tajo:tajo-jdbc:0.11.0      Maven 
Repository: org.apache.tajo:tajo-jdbcBug reportingIf you find a bug using JDBC 
interpreter, please create a JIRA ticket.",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Generic JDBC Interpreter for 
Apache ZeppelinOverviewJDBC interpreter lets you create a JDBC connection to 
any data source seamlessly.Inserts, Updates, and Upserts are applied 
immediately after running each statement.So far, it has been tested with:       
             Postgresql -      JDBC Driver              Mysql -      JDBC 
Driver              MariaDB -      JDBC Driver              Redshift -      
JDBC Driver              Apache Hive -       JDBC Driver              Apache 
Phoenix itself is a JDBC 
driver              Apache Drill -       JDBC Driver              Apache Tajo - 
      JDBC Driver      If you are using other databases not in the above list, 
please feel free to share your use case. It would be helpful to improve the 
functionality of JDBC interpreter.Create a new JDBC InterpreterFirst, click + 
Create button at the top-right corner in the interpreter setting page.Fill 
Interpreter name field with whatever you want to use as the alias(e.g. mysql, 
mysql2, hive, redshift, etc.). Please note that this alias will be used as 
%interpreter_name to call the interpreter in the paragraph. Then select jdbc as 
an Interpreter group. The default driver of JDBC interpreter is set as 
PostgreSQL. It means Zeppelin includes PostgreSQL driver jar in itself.So you 
don't need to add any dependencies(e.g. the artifact name or path for 
PostgreSQL driver jar) for PostgreSQL connection.The JDBC interpreter 
properties are defined by default like below.      Name    Default Value    
Description        common.max_count    1000    The maximum number of SQL results 
to display        default.driver    org.postgresql.Driver    JDBC Driver Name   
     default.password        The JDBC user password        default.url    
jdbc:postgresql://localhost:5432/    The URL for JDBC        default.user    
gpadmin    The JDBC user name        default.precode        Some SQL which 
executes while opening connection  If you want to connect to other databases such 
as Mysql, Redshift and Hive, you need to edit the property values.You can also 
use Credential for JDBC authentication.If default.user and default.password 
properties are deleted(using X button) for database connection in the 
interpreter setting page, the JDBC interpreter will get the account information 
from Credential.The below example is for Mysql connection.The last step is 
Dependency Setting. Since Zeppelin only includes the PostgreSQL driver jar by 
default, you need to add each driver&#39;s maven coordinates or JDBC 
driver's jar file path for the other databases.That's it. You 
can find more JDBC connection setting examples(Mysql, MariaDB, Redshift, Apache 
Hive, Apache Phoenix, and Apache Tajo) in this section.More propertiesThere are 
more JDBC interpreter properties you can specify like below.      Property Name 
   Description        common.max_result    Max number of SQL result to display 
to prevent overloading the browser. This is a common property for all connections 
       zeppelin.jdbc.auth.type    The authentication methods 
supported are SIMPLE and KERBEROS        zeppelin.jdbc.principal    The 
principal name to load from the keytab        zeppelin.jdbc.keytab.location    
The path to the keytab file        default.jceks.file    jceks store path (e.g: 
jceks://file/tmp/zeppelin.jceks)        default.jceks.credentialKey    jceks 
credential key  You can also add more properties by using this method.
 For example, if a connection needs a schema parameter, you would add 
the property as follows:      name    value        default.schema    
schema_name  
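To see what such a key/value pair amounts to on the JDBC side, here is a 
minimal Java sketch, assuming a plain DriverManager connection and pgJDBC's 
currentSchema property; the values are illustrative and this is not 
Zeppelin's actual wiring:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class SchemaPropertySketch {
        public static void main(String[] args) throws Exception {
            // Illustrative only: a default.* pair from the interpreter
            // setting passed as an ordinary JDBC connection property.
            Properties props = new Properties();
            props.setProperty("user", "gpadmin");              // default.user
            props.setProperty("password", "secret");           // default.password
            props.setProperty("currentSchema", "schema_name"); // default.schema
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/", props)) {
                System.out.println(conn.getSchema()); // expected: schema_name
            }
        }
    }
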
Binding JDBC interpreter to notebookTo bind the interpreters created 
in the interpreter setting page, click the gear icon at the top-right 
corner.Select(blue) or deselect(white) the interpreter buttons depending on 
your use cases. If you need to use more than one interpreter in the notebook, 
activate several buttons.Don't forget to click Save button, or you will 
face Interpreter *** is not found error.How to useRun the paragraph with JDBC 
interpreterTo test whether your databases and Zeppelin are successfully 
connected or not, type %jdbc_interpreter_name(e.g. %mysql) at the top of the 
paragraph and run show databases.%jdbc_interpreter_nameshow databasesIf the 
paragraph is FINISHED without any errors, a new paragraph will be automatically 
added after the previous one with %jdbc_interpreter_name.So you don&#39;t 
need to type this prefix in every paragraph&#39;s header.Apply Zeppelin Dynamic 
FormsYou can leverage Zeppelin Dynamic Form inside your queries. You can use 
both the text input and select form parametrization 
features.%jdbc_interpreter_nameSELECT name, country, performerFROM 
demo.performersWHERE name='{{performer=Sheryl Crow|Doof|Fanfarlo|Los 
Paranoia}}'Usage precodeYou can set precode for each data source. Code 
runs once while opening the connection.PropertiesExample settings of an 
interpreter for two data sources, each of which has its own precode parameter.  
    Property Name    Value        default.driver    org.postgresql.Driver       
 default.password    1        default.url    jdbc:postgresql://localhost:5432/  
      default.user    postgres        default.precode    set 
search_path='test_path'        mysql.driver    com.mysql.jdbc.Driver    
    mysql.password    1        mysql.url    jdbc:mysql://localhost:3306/        
mysql.user    root        mysql.precode    set @v=12  UsageTest of precode 
execution for each data source. 
%jdbcshow search_pathReturns the value of search_path which is set in the 
default.precode.%jdbc(mysql)select @vReturns the value of v which is set in the 
mysql.precode.
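For intuition, here is a minimal Java sketch of the precode semantics, 
assuming plain JDBC against the PostgreSQL settings above; illustrative 
only, not the interpreter's actual implementation:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PrecodeSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/", "postgres", "1")) {
                // The default.precode statement runs once, right after
                // the connection is opened.
                try (Statement st = conn.createStatement()) {
                    st.execute("set search_path='test_path'");
                }
                // Later statements on the same connection see its effect.
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("show search_path")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1)); // test_path
                    }
                }
            }
        }
    }
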
ExamplesHere are some examples you can refer to. Beyond the below connectors, 
you can connect to any database as long as it can be configured with its JDBC 
driver.PostgresProperties      Name    Value  
      default.driver    org.postgresql.Driver        default.url    
jdbc:postgresql://localhost:5432/        default.user    postgres_user        
default.password    postgres_password  Postgres JDBC Driver DocsDependencies      
Artifact    Excludes        org.postgresql:postgresql:9.4.1211      Maven 
Repository: org.postgresql:postgresqlMysqlProperties      Name    Value        
default.driver    com.mysql.jdbc.Driver        default.url    
jdbc:mysql://localhost:3306/        default.user    mysql_user        
default.password    mysql_password  Mysql JDBC Driver DocsDependencies      
Artifact    Excludes        
mysql:mysql-connector-java:5.1.38      Maven Repository: 
mysql:mysql-connector-javaMariaDBProperties      Name    Value        
default.driver    org.mariadb.jdbc.Driver        default.url    
jdbc:mariadb://localhost:3306        default.user    mariadb_user        
default.password    mariadb_password  MariaDB JDBC Driver DocsDependencies      
Artifact    Excludes        org.mariadb.jdbc:mariadb-java-client:1.5.4      
Maven Repository: org.mariadb.jdbc:mariadb-java-clientRedshiftProperties      
Name    Value        default.driver    com.amazon.redshift.jdbc42.Driver        
default.url    
jdbc:redshift://your-redshift-instance-address.redshift.amazonaws.com:5439/your-database
        default.user    redshift_user        default.password    
redshift_password  AWS Redshift JDBC Driver DocsDependencies      Artifact    
Excludes        com.amazonaws:aws-java-sdk-redshift:1.11.51      Maven 
Repository: com.amazonaws:aws-java-sdk-redshiftApache HiveProperties      
Name    Value        default.driver    
org.apache.hive.jdbc.HiveDriver        default.url    
jdbc:hive2://localhost:10000        default.user    hive_user        
default.password    hive_password        hive.proxy.user    true or false  
Connecting to Hive JDBC as a proxy user is enabled by default; set the 
hive.proxy.user property to false to disable it.Apache Hive 1 JDBC Driver 
DocsApache Hive 2 JDBC Driver DocsDependencies      Artifact    Excludes        
org.apache.hive:hive-jdbc:0.14.0            
org.apache.hadoop:hadoop-common:2.6.0      Maven Repository: 
org.apache.hive:hive-jdbc
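As a rough illustration of the proxy-user behavior, the sketch below appends 
HiveServer2's standard hive.server2.proxy.user parameter to the JDBC URL; the 
parameter placement and user name are assumptions for illustration, not 
Zeppelin's exact connection code:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class HiveProxyUserSketch {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            // Hypothetical logged-in user "alice" proxied through hive_user.
            String url = "jdbc:hive2://localhost:10000/default"
                    + ";hive.server2.proxy.user=alice";
            try (Connection conn = DriverManager.getConnection(
                    url, "hive_user", "hive_password")) {
                System.out.println("connected: " + !conn.isClosed());
            }
        }
    }
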
Apache PhoenixPhoenix supports thick and thin connection types:Thick client 
is faster, but must connect directly to ZooKeeper 
and HBase RegionServers.Thin client has fewer dependencies and connects through 
a Phoenix Query Server instance.Use the appropriate default.driver, 
default.url, and the dependency artifact for your connection type.Thick client 
connectionProperties      Name    Value        default.driver    
org.apache.phoenix.jdbc.PhoenixDriver     
   default.url    jdbc:phoenix:localhost:2181:/hbase-unsecure        
default.user    phoenix_user        default.password    phoenix_password  
Dependencies      Artifact    Excludes        
org.apache.phoenix:phoenix-core:4.4.0-HBase-1.0      Maven Repository: 
org.apache.phoenix:phoenix-coreThin client connectionProperties      Name    
Value        default.driver    org.apache.phoenix.queryserver.client.Driver     
   default.url    
jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF        
default.user    phoenix_user        default.password    phoenix_password  
DependenciesBefore adding one of the below dependencies, check the Phoenix 
version first.      Artifact    Excludes    Description        
org.apache.phoenix:phoenix-server-client:4.7.0-HBase-1.1        For Phoenix 4.7 
       org.apache.phoenix:phoenix-queryserver-client:4.8.0-HBase-1.2        For 
Phoenix 4.8+  Maven Repository: org.apache.phoenix:phoenix-queryserver-client
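The difference between the two modes is easiest to see side by side. A 
minimal Java sketch using the driver classes and URLs from the tables above 
(illustrative; it assumes local Phoenix/HBase services are running):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class PhoenixConnectionSketch {
        public static void main(String[] args) throws Exception {
            // Thick client: talks to ZooKeeper and HBase RegionServers directly.
            Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
            try (Connection thick = DriverManager.getConnection(
                    "jdbc:phoenix:localhost:2181:/hbase-unsecure")) {
                System.out.println("thick client connected");
            }
            // Thin client: goes through a Phoenix Query Server instance.
            Class.forName("org.apache.phoenix.queryserver.client.Driver");
            try (Connection thin = DriverManager.getConnection(
                    "jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF")) {
                System.out.println("thin client connected");
            }
        }
    }
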
Apache TajoProperties      Name    Value        default.driver    
org.apache.tajo.jdbc.TajoDriver        
default.url    jdbc:tajo://localhost:26002/default  Apache Tajo JDBC Driver 
DocsDependencies      Artifact    Excludes        
org.apache.tajo:tajo-jdbc:0.11.0      Maven Repository: 
org.apache.tajo:tajo-jdbcBug reportingIf you find a bug using JDBC interpreter, 
please create a JIRA ticket.",
       "url": " /interpreter/jdbc.html",
       "group": "interpreter",
       "excerpt": "Generic JDBC Interpreter lets you create a JDBC connection 
to any data source. You can use Postgres, MySql, MariaDB, Redshift, Apache 
Hive, Apache Phoenix, Apache Drill and Apache Tajo using JDBC interpreter."
@@ -358,7 +358,7 @@
 
     "/interpreter/livy.html": {
       "title": "Livy Interpreter for Apache Zeppelin",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Livy Interpreter for Apache 
ZeppelinOverviewLivy is an open source REST interface for interacting with 
Spark from anywhere. It supports executing snippets of code or programs in a 
Spark context that runs locally or in YARN.Interactive Scala, Python and R 
shellsBatch submissions in Scala, Java, PythonMulti users can share the same 
server (impersonation support)Can be used for submitting jobs from anywhere 
with RESTDoes not require any code change to your 
programsRequirementsAdditional requirements for the 
Livy interpreter are:Spark 1.3 or above.Livy server.ConfigurationWe added some 
common configurations for spark, and you can set any configuration you want.You 
can find all Spark configurations in here.And instead of starting property with 
spark. it should be replaced with livy.spark..Example: spark.driver.memory to 
livy.spark.driver.memory      Property    Default    Description        
zeppelin.livy.url    http://localhost:8998    URL where livy server is running  
      zeppelin.livy.spark.maxResult    1000    Max number of Spark SQL result 
to display.        zeppelin.livy.session.create_timeout    120    Timeout in 
seconds for session creation        zeppelin.livy.displayAppInfo    false    
Whether to display app info        zeppelin.livy.pull_status.interval.millis    
1000    The interval for checking paragraph execution status        
livy.spark.driver.cores        Driver cores. ex) 1, 2.          
livy.spark.driver.memory        Driver memory. ex) 512m, 32g.          
livy.spark.executor.instances        Executor instances. ex) 1, 4.          
livy.spark.executor.cores        Num cores per executor. ex) 1, 4.        
livy.spark.executor.memory        Executor memory per worker instance. ex) 
512m, 32g.        livy.spark.dynamicAllocation.enabled        Use dynamic 
resource allocation. ex) True, False.        
livy.spark.dynamicAllocation.cachedExecutorIdleTimeout        Remove an 
executor which has cached data blocks.        
livy.spark.dynamicAllocation.minExecutors        Lower bound for the number of 
executors.        livy.spark.dynamicAllocation.initialExecutors        Initial 
number of executors to run.        livy.spark.dynamicAllocation.maxExecutors    
    Upper bound for the number of executors.            
livy.spark.jars.packages            Adding extra libraries to livy interpreter  
  We removed livy.spark.master in Zeppelin 0.7 because we suggest using Livy 
0.3 with Zeppelin 0.7, and Livy 0.3 does not allow specifying 
livy.spark.master; it enforces yarn-cluster mode.Adding External librariesYou 
can load a dynamic library into the livy interpreter by setting the 
livy.spark.jars.packages property to a 
comma-separated list of maven coordinates of jars to include on the driver and 
executor classpaths. The format for the coordinates should be 
groupId:artifactId:version. Example      Property    Example    Description     
     livy.spark.jars.packages      io.spray:spray-json_2.10:1.3.1      Adding 
extra libraries to livy interpreter        How to useBasically, you can 
usespark%livy.sparksc.versionpyspark%livy.pysparkprint 
"1"sparkR%livy.sparkrhello <- function( name ) {    
sprintf( "Hello, %s", name 
);}hello("livy")ImpersonationWhen Zeppelin server is running 
with authentication enabled, then this interpreter utilizes Livy’s user 
impersonation feature i.e. sends extra parameter for creating and running a 
 session ("proxyUser": "${loggedInUser}"). 
This is particularly useful when multi users are sharing a Notebook 
server.Apply Zeppelin Dynamic FormsYou can leverage Zeppelin Dynamic Form. You 
can use both the text input and select form parameterization 
features.%livy.pysparkprint 
"${group_by=product_id,product_id|product_name|customer_id|store_id}"FAQLivy
 debugging: If you see any of these in error consoleConnect to livyhost:8998 
[livyhost/127.0.0.1, livyhost/0:0:0:0:0:0:0:1] failed: Connection refusedLooks 
like the livy server is not up yet or the config is wrongException: Session not 
found, Livy server would have restarted, or lost session.The session would have 
timed out, you may need to restart the interpreter.Blacklisted configuration 
values in session config: spark.masterEdit conf/spark-blacklist.conf file in 
livy server and comment out #spark.master line.If you choose to work on livy in 
apps/spark/java directory in https://github.com/cloudera/hue,copy 
spark-user-configurable-options.template to 
spark-user-configurable-options.conf file in livy server and comment out 
#spark.master. ",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Livy Interpreter for Apache 
ZeppelinOverviewLivy is an open source REST interface for interacting with 
Spark from anywhere. It supports executing snippets of code or programs in a 
Spark context that runs locally or in YARN.Interactive Scala, Python and R 
shellsBatch submissions in Scala, Java, PythonMultiple users can share the same 
server (impersonation support)Can be used for submitting jobs from anywhere 
with RESTDoes not require any code change to your 
programsRequirementsAdditional requirements for the 
Livy interpreter are:Spark 1.3 or above.Livy server.ConfigurationWe added some 
common configurations for Spark, and you can set any configuration you 
want.You can find all Spark configurations here.Instead of starting a 
property with spark., it should start with livy.spark..Example: 
spark.driver.memory to 
livy.spark.driver.memory      Property    Default    Description        
zeppelin.livy.url    http://localhost:8998    URL where livy server is running  
      zeppelin.livy.spark.maxResult    1000    Max number of Spark SQL result 
to display.        zeppelin.livy.session.create_timeout    120    Timeout in 
seconds for session creation        zeppelin.livy.displayAppInfo    false    
Whether to display app info        zeppelin.livy.pull_status.interval.millis    
1000    The interval for checking paragraph execution status        
livy.spark.driver.cores        Driver cores. ex) 1, 2.          
livy.spark.driver.memory        Driver memory. ex) 512m, 32g.          
livy.spark.executor.instances        Executor instances. ex) 1, 4.          
livy.spark.executor.cores        Num cores per executor. ex) 1, 4.        
livy.spark.executor.memory        Executor memory per worker instance. ex) 
512m, 32g.        livy.spark.dynamicAllocation.enabled        Use dynamic 
resource allocation. ex) True, False.        
livy.spark.dynamicAllocation.cachedExecutorIdleTimeout        Remove an 
executor which has cached data blocks.        
livy.spark.dynamicAllocation.minExecutors        Lower bound for the number of 
executors.        livy.spark.dynamicAllocation.initialExecutors        Initial 
number of executors to run.        livy.spark.dynamicAllocation.maxExecutors    
    Upper bound for the number of executors.            
livy.spark.jars.packages            Adding extra libraries to livy interpreter  
        zeppelin.livy.ssl.trustStore        client trustStore file. Used when 
livy ssl is enabled  
       zeppelin.livy.ssl.trustStorePassword        password for trustStore 
file. Used when livy ssl is enabled    We removed livy.spark.master in 
Zeppelin 0.7 because we suggest using Livy 0.3 with Zeppelin 0.7, and Livy 
0.3 does not allow specifying livy.spark.master; it enforces yarn-cluster 
mode.Adding External librariesYou can load a dynamic library into the livy 
interpreter by setting the livy.spark.jars.packages property to a 
comma-separated list of 
maven coordinates of jars to include on the driver and executor classpaths. The 
format for the coordinates should be groupId:artifactId:version. Example      
Property    Example    Description          livy.spark.jars.packages      
io.spray:spray-json_2.10:1.3.1      Adding extra libraries to livy interpreter  
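Because Livy is a plain REST service, the interpreter's work can be 
reproduced by hand. Below is a rough Java sketch of the equivalent raw calls 
(POST /sessions, then POST /sessions/{id}/statements, per the public Livy 
REST API); it assumes a server at the zeppelin.livy.url default and Java 9+, 
and it is not Zeppelin's actual client code:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class LivyRestSketch {
        static String post(String url, String json) throws Exception {
            HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
            c.setRequestMethod("POST");
            c.setRequestProperty("Content-Type", "application/json");
            c.setDoOutput(true);
            try (OutputStream out = c.getOutputStream()) {
                out.write(json.getBytes(StandardCharsets.UTF_8));
            }
            return new String(c.getInputStream().readAllBytes(),
                    StandardCharsets.UTF_8);
        }

        public static void main(String[] args) throws Exception {
            String base = "http://localhost:8998"; // zeppelin.livy.url
            // 1. Create a Scala session (kind: spark).
            System.out.println(post(base + "/sessions", "{\"kind\": \"spark\"}"));
            // 2. Once the session is idle, run a statement (session id 0 assumed).
            System.out.println(post(base + "/sessions/0/statements",
                    "{\"code\": \"sc.version\"}"));
        }
    }

Polling the statement status afterwards is presumably what 
zeppelin.livy.pull_status.interval.millis paces.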
      How to useBasically, you can 
usespark%livy.sparksc.versionpyspark%livy.pysparkprint 
"1"sparkR%livy.sparkrhello <- function( name ) {    
sprintf( "Hello, %s", name );}hello("liv
 y")ImpersonationWhen Zeppelin server is running with authentication 
enabled, then this interpreter utilizes Livy’s user impersonation feature 
i.e. sends extra parameter for creating and running a session 
("proxyUser": "${loggedInUser}"). This is 
particularly useful when multiple users are sharing a Notebook server.Apply 
Zeppelin Dynamic FormsYou can leverage Zeppelin Dynamic Form. You can use both 
the text input and select form parameterization features.%livy.pysparkprint 
"${group_by=product_id,product_id|product_name|customer_id|store_id}"FAQLivy
 debugging: If you see any of these in error consoleConnect to livyhost:8998 
[livyhost/127.0.0.1, livyhost/0:0:0:0:0:0:0:1] failed: Connection refusedLooks 
like the livy server is not up yet or the config is wrongException: Session not 
found, Livy server would have restarted, or lost session.The session would have 
timed out, you may need to restart the interpreter.Blacklisted configuration 
values in session config: spark.masterEdit 
conf/spark-blacklist.conf file in livy server and comment out #spark.master 
line.If you choose to work on livy in apps/spark/java directory in 
https://github.com/cloudera/hue,copy spark-user-configurable-options.template 
to spark-user-configurable-options.conf file in livy server and comment out 
#spark.master. ",
       "url": " /interpreter/livy.html",
       "group": "interpreter",
       "excerpt": "Livy is an open source REST interface for interacting with 
Spark from anywhere. It supports executing snippets of code or programs in a 
Spark context that runs locally or in YARN."
@@ -391,7 +391,7 @@
 
     "/interpreter/pig.html": {
       "title": "Pig Interpreter for Apache Zeppelin",
-      "content"  : "Pig Interpreter for Apache ZeppelinOverviewApache Pig is a 
platform for analyzing large data sets that consists of a high-level language 
for expressing data analysis programs, coupled with infrastructure for 
evaluating these programs. The salient property of Pig programs is that their 
structure is amenable to substantial parallelization, which in turn enables 
them to handle very large data sets.Supported interpreter type%pig.script 
(default)All the pig script can run in this type of interpreter, and display 
type is plain text.%pig.queryAlmost the same as %pig.script. The only 
difference is that you don't need to add alias in the last statement. 
And the display type is table.   Supported runtime modeLocalMapReduceTez_Local 
(Only Tez 0.7 is supported)Tez  (Only Tez 0.7 is supported)How to useHow to 
setup PigLocal ModeNothing needs to be done for local modeMapReduce 
ModeHADOOP_CONF_DIR needs to be specified in 
ZEPPELIN_HOME/conf/zeppelin-env.sh.Tez Local ModeNothing needs to be done 
for tez local modeTez ModeHADOOP_CONF_DIR and 
TEZ_CONF_DIR needs to be specified in ZEPPELIN_HOME/conf/zeppelin-env.sh.How to 
configure interpreterAt the Interpreters menu, you have to create a new Pig 
interpreter. Pig interpreter has below properties by default.And you can set 
any pig properties here which will be passed to pig engine. (like 
tez.queue.name & mapred.job.queue.name).Besides, we use paragraph title 
as job name if it exists, else use the last line of pig script. So you can use 
that to find app running in YARN RM UI.            Property        Default      
  Description                zeppelin.pig.execType        mapreduce        
Execution mode for pig runtime. local | mapreduce | tez_local | tez             
    zeppelin.pig.includeJobStats        false        whether display jobStats 
info in %pig.script                zeppelin.pig.maxResult        1000        
max row number displayed in %pig.query                tez.queue.name        
 default        queue name for tez engine                mapred.job.queue.name  
      default        queue name for mapreduce engine      
Examplepig%pigraw_data = load 'dataset/sf_crime/train.csv' 
using PigStorage(',') as 
(Dates,Category,Descript,DayOfWeek,PdDistrict,Resolution,Address,X,Y);b = group 
raw_data all;c = foreach b generate COUNT($1);dump c;pig.query%pig.queryb = 
foreach raw_data generate Category;c = group b by Category;foreach c generate 
group as category, COUNT($1) as count;Data is shared between %pig and 
%pig.query, so that you can do some common work in %pig, and do different kinds 
of query based on the data of %pig. Besides, we recommend you to specify alias 
explicitly so that the visualization can display the column name correctly. 
Here, we name COUNT($1) as count, if you don't do this,then we will 
name it using position, here we will use col_1 to represent COUNT($1) if you 
don't specify alias for it. There's 
 one pig tutorial note in zeppelin for your reference.",
+      "content"  : "Pig Interpreter for Apache ZeppelinOverviewApache Pig is a 
platform for analyzing large data sets that consists of a high-level language 
for expressing data analysis programs, coupled with infrastructure for 
evaluating these programs. The salient property of Pig programs is that their 
structure is amenable to substantial parallelization, which in turn enables 
them to handle very large data sets.Supported interpreter type%pig.script 
(default Pig interpreter, so you can use %pig)%pig.script is like the Pig grunt 
shell. Anything you can run in the Pig grunt shell can be run in the 
%pig.script interpreter. It is used for running Pig scripts where you 
don’t need to visualize the data, and it is suitable for data munging. 
%pig.query%pig.query is a 
little different compared with %pig.script. It is used for exploratory data 
analysis via Pig Latin where you can leverage Zeppelin’s visualization 
ability. There&#39;re 2 minor differences in the last statement between 
%pig.script and %pig.query:No pig alias in the last statement in %pig.query (read 
the examples below).The last statement must be in single line in 
%pig.querySupported runtime modeLocalMapReduceTez_Local (Only Tez 0.7 is 
supported)Tez  (Only Tez 0.7 is supported)How to useHow to setup PigLocal 
ModeNothing needs to be done for local modeMapReduce ModeHADOOP_CONF_DIR needs 
to be specified in ZEPPELIN_HOME/conf/zeppelin-env.sh.Tez Local ModeNothing 
needs to be done for tez local modeTez ModeHADOOP_CONF_DIR and TEZ_CONF_DIR 
needs to be specified in ZEPPELIN_HOME/conf/zeppelin-env.sh.How to configure 
interpreterAt the Interpreters menu, you have to create a new Pig interpreter. 
Pig interpreter has below properties by default.And you can set any Pig 
properties here which will be passed to Pig engine. (like tez.queue.name 
& mapred.job.queue.name).Besides, we use paragraph title as job name if 
it exists, else use the last line of Pig script. So you can use that to find 
app running in the YARN RM UI.            Property        Default        
Description                
zeppelin.pig.execType        mapreduce        Execution mode for pig runtime. 
local | mapreduce | tez_local | tez                 
zeppelin.pig.includeJobStats        false        whether to display jobStats info 
in %pig.script                zeppelin.pig.maxResult        1000        max row 
number displayed in %pig.query                tez.queue.name        default     
   queue name for tez engine                mapred.job.queue.name        
default        queue name for mapreduce engine      Examplepig%pigbankText = 
load 'bank.csv' using PigStorage(';');bank = 
foreach bankText generate $0 as age, $1 as job, $2 as marital, $3 as education, 
$5 as balance; bank = filter bank by age != 
'"age"';bank = foreach bank generate 
(int)age, REPLACE(job,'"','') as job, 
REPLACE(marital, &#39;&quot;&#39;, &#39;&#39;) as marital, (int)(REPLACE(balance, 
'"', '')) as balance;store bank into 
'clean_bank.csv' using PigStorage(';'); -- this 
statement is optional, it just shows you that most of the time %pig.script is used 
for data munging before querying the data. pig.queryGet the number of each age 
where age is less than 30%pig.querybank_data = filter bank by age < 30;b 
= group bank_data by age;foreach b generate group, COUNT($1);The same as above, 
but use a dynamic text form so that users can specify the variable maxAge in a 
textbox. (See screenshot below). Dynamic form is a very cool feature of 
Zeppelin, you can refer to this link for details.%pig.querybank_data = filter 
bank by age < ${maxAge=40};b = group bank_data by age;foreach b generate 
group, COUNT($1) as count;Get the number of each age for specific marital type, 
also use dynamic form here. User can choose the marital type in the dropdown 
list (see screenshot below).%pig.querybank_data = filter bank by 
marital=='${marital=single,single|divorced|married}';b = group 
bank_data by age;foreach b generate group, COUNT($1) as count;The above 
examples are in the Pig tutorial note in Zeppelin, you can check that for 
details. Here's the screenshot.Data is shared between %pig and 
%pig.query, so that you can do some common work in %pig, and do different kinds 
of query based on the data of %pig. Besides, we recommend you to specify alias 
explicitly so that the visualization can display the column name correctly. In 
the above example 2 and 3 of %pig.query, we name COUNT($1) as count. If you 
don't do this,then we will name it using position. E.g. in the above 
first example of %pig.query, we will use col_1 in chart to represent 
COUNT($1).",
       "url": " /interpreter/pig.html",
       "group": "manual",
       "excerpt": "Apache Pig is a platform for analyzing large data sets that 
consists of a high-level language for expressing data analysis programs, 
coupled with infrastructure for evaluating these programs."
@@ -468,7 +468,7 @@
 
     "/interpreter/spark.html": {
       "title": "Apache Spark Interpreter for Apache Zeppelin",

[... 42 lines stripped ...]
