Modified: zeppelin/site/docs/0.9.0/search_data.json
URL: 
http://svn.apache.org/viewvc/zeppelin/site/docs/0.9.0/search_data.json?rev=1890433&r1=1890432&r2=1890433&view=diff
==============================================================================
--- zeppelin/site/docs/0.9.0/search_data.json (original)
+++ zeppelin/site/docs/0.9.0/search_data.json Thu Jun  3 15:43:41 2021
@@ -25,7 +25,7 @@
 
     "/interpreter/pig.html": {
       "title": "Pig Interpreter for Apache Zeppelin",
-      "content"  : "Pig Interpreter for Apache ZeppelinOverviewApache Pig is a 
platform for analyzing large data sets that consists of a high-level language 
for expressing data analysis programs, coupled with infrastructure for 
evaluating these programs. The salient property of Pig programs is that their 
structure is amenable to substantial parallelization, which in turns enables 
them to handle very large data sets.Supported interpreter type%pig.script 
(default Pig interpreter, so you can use %pig)%pig.script is like the Pig grunt 
shell. Anything you can run in Pig grunt shell can be run in %pig.script 
interpreter, it is used for running Pig script where you don’t need to 
visualize the data, it is suitable for data munging. %pig.query%pig.query is a 
little different compared with %pig.script. It is used for exploratory data 
analysis via Pig latin where you can leverage Zeppelin’s visualization 
ability. There're 2 minor differences in the last statement between %pig
 .script and %pig.queryNo pig alias in the last statement in %pig.query (read 
the examples below).The last statement must be in single line in %pig.queryHow 
to useHow to setup Pig execution modes.Local ModeSet zeppelin.pig.execType as 
local.MapReduce ModeSet zeppelin.pig.execType as mapreduce. HADOOP_CONF_DIR 
needs to be specified in ZEPPELIN_HOME/conf/zeppelin-env.sh.Tez Local ModeOnly 
Tez 0.7 is supported. Set zeppelin.pig.execType as tez_local.Tez ModeOnly Tez 
0.7 is supported. Set zeppelin.pig.execType as tez. HADOOP_CONF_DIR and 
TEZ_CONF_DIR needs to be specified in ZEPPELIN_HOME/conf/zeppelin-env.sh.Spark 
Local ModeOnly Spark 1.6.x is supported, by default it is Spark 1.6.3. Set 
zeppelin.pig.execType as spark_local.Spark ModeOnly Spark 1.6.x is supported, 
by default it is Spark 1.6.3. Set zeppelin.pig.execType as spark. For now, only 
yarn-client mode is supported. To enable it, you need to set property 
SPARK_MASTER to yarn-client and set SPARK_JAR to the spark assembly jar.How 
 to choose custom Spark VersionBy default, Pig Interpreter would use Spark 
1.6.3 built with scala 2.10, if you want to use another spark version or scala 
version, you need to rebuild Zeppelin by specifying the custom Spark version 
via -Dpig.spark.version= and scala version via -Dpig.scala.version= in the 
maven build command.How to configure interpreterAt the Interpreters menu, you 
have to create a new Pig interpreter. Pig interpreter has below properties by 
default.And you can set any Pig properties here which will be passed to Pig 
engine. (like tez.queue.name & mapred.job.queue.name).Besides, we use 
paragraph title as job name if it exists, else use the last line of Pig script. 
So you can use that to find app running in YARN RM UI.            Property      
  Default        Description                zeppelin.pig.execType        
mapreduce        Execution mode for pig runtime. local | mapreduce | tez_local 
| tez | spark_local | spark                 zeppelin.pig.includeJobSta
 ts        false        whether display jobStats info in %pig.script            
    zeppelin.pig.maxResult        1000        max row number displayed in 
%pig.query                tez.queue.name        default        queue name for 
tez engine                mapred.job.queue.name        default        queue 
name for mapreduce engine                SPARK_MASTER        local        local 
| yarn-client                SPARK_JAR                The spark assembly jar, 
both jar in local or hdfs is supported. Put it on hdfs could have        
performance benefit      Examplepig%pigbankText = load 
'bank.csv' using PigStorage(';');bank = foreach 
bankText generate $0 as age, $1 as job, $2 as marital, $3 as education, $5 as 
balance; bank = filter bank by age != 
'"age"';bank = foreach bank generate 
(int)age, REPLACE(job,'"','') as job, 
REPLACE(marital, '"', '&amp;#39;) as marital, (int)(REPLACE(balance, '"', 
'')) as balance;store bank into 
'clean_bank.csv' using PigStorage(';'); -- this 
statement is optional, it just show you that most of time %pig.script is used 
for data munging before querying the data. pig.queryGet the number of each age 
where age is less than 30%pig.querybank_data = filter bank by age < 30;b 
= group bank_data by age;foreach b generate group, COUNT($1);The same as above, 
but use dynamic text form so that use can specify the variable maxAge in 
textbox. (See screenshot below). Dynamic form is a very cool feature of 
Zeppelin, you can refer this link) for details.%pig.querybank_data = filter 
bank by age < ${maxAge=40};b = group bank_data by age;foreach b generate 
group, COUNT($1) as count;Get the number of each age for specific marital type, 
also use dynamic form here. User can choose the marital type in the dropdown 
list (see screenshot
  below).%pig.querybank_data = filter bank by 
marital=='${marital=single,single|divorced|married}';b = group 
bank_data by age;foreach b generate group, COUNT($1) as count;The above 
examples are in the Pig tutorial note in Zeppelin, you can check that for 
details. Here's the screenshot.Data is shared between %pig and 
%pig.query, so that you can do some common work in %pig, and do different kinds 
of query based on the data of %pig. Besides, we recommend you to specify alias 
explicitly so that the visualization can display the column name correctly. In 
the above example 2 and 3 of %pig.query, we name COUNT($1) as count. If you 
don't do this, then we will name it using position. E.g. in the above 
first example of %pig.query, we will use col_1 in chart to represent 
COUNT($1).",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Pig Interpreter for Apache 
ZeppelinOverviewApache Pig is a platform for analyzing large data sets that 
consists of a high-level language for expressing data analysis programs, 
coupled with infrastructure for evaluating these programs. The salient property 
of Pig programs is that their structure is amenable to substantial 
parallelization, which in turn enables them to handle very large data 
sets.Supported interpreter type%pig.scrip
 t (default Pig interpreter, so you can use %pig)%pig.script is like the Pig 
grunt shell. Anything you can run in Pig grunt shell can be run in %pig.script 
interpreter. It is used for running Pig scripts where you don’t need to 
visualize the data; it is suitable for data munging. %pig.query%pig.query is a 
little different compared with %pig.script. It is used for exploratory data 
analysis via Pig latin where you can leverage Zeppelin’s visualization 
ability. There're 2 minor differences in the last statement between 
%pig.script and %pig.queryNo pig alias in the last statement in %pig.query 
(read the examples below). The last statement must be on a single line in 
%pig.queryHow to useHow to setup Pig execution modes.Local ModeSet 
zeppelin.pig.execType as local.MapReduce ModeSet zeppelin.pig.execType as 
mapreduce. HADOOP_CONF_DIR needs to be specified in 
ZEPPELIN_HOME/conf/zeppelin-env.sh.Tez Local ModeOnly Tez 0.7 is supported. Set 
zeppelin.pig.execType as tez_local.Tez M
 odeOnly Tez 0.7 is supported. Set zeppelin.pig.execType as tez. 
HADOOP_CONF_DIR and TEZ_CONF_DIR need to be specified in 
ZEPPELIN_HOME/conf/zeppelin-env.sh.Spark Local ModeOnly Spark 1.6.x is 
supported, by default it is Spark 1.6.3. Set zeppelin.pig.execType as 
spark_local.Spark ModeOnly Spark 1.6.x is supported, by default it is Spark 
1.6.3. Set zeppelin.pig.execType as spark. For now, only yarn-client mode is 
supported. To enable it, you need to set property SPARK_MASTER to yarn-client 
and set SPARK_JAR to the spark assembly jar.How to choose custom Spark 
VersionBy default, the Pig Interpreter uses Spark 1.6.3 built with scala 2.10. If 
you want to use another spark version or scala version, you need to rebuild 
Zeppelin by specifying the custom Spark version via -Dpig.spark.version= and 
scala version via -Dpig.scala.version= in the maven build command.How to 
configure interpreterAt the Interpreters menu, you have to create a new Pig 
interpreter. The Pig interpreter has the properties below by 
default. And you can set any Pig properties here, which will be passed to the 
Pig engine. (like tez.queue.name & mapred.job.queue.name).Besides, we 
use the paragraph title as the job name if it exists, otherwise the last line of the Pig 
script is used. So you can use that to find the app running in the YARN RM UI.            
Property        Default        Description                zeppelin.pig.execType 
       mapreduce        Execution mode for pig runtime. local | mapreduce | 
tez_local | tez | spark_local | spark                 
zeppelin.pig.includeJobStats        false        whether display jobStats info 
in %pig.script                zeppelin.pig.maxResult        1000        max row 
number displayed in %pig.query                tez.queue.name        default     
   queue name for tez engine                mapred.job.queue.name        
default        queue name for mapreduce engine                SPARK_MASTER      
  local        local | yarn-client                SPARK_JAR                The 
spark assembly jar, both local and hdfs jars are supported. Putting it on hdfs 
could have a performance benefit      Examplepig%pigbankText = load 
'bank.csv' using PigStorage(';');bank = foreach 
bankText generate $0 as age, $1 as job, $2 as marital, $3 as education, $5 as 
balance; bank = filter bank by age != 
'"age"';bank = foreach bank generate 
(int)age, REPLACE(job,'"','') as job, 
REPLACE(marital, '"', '') as marital, 
(int)(REPLACE(balance, '"', '')) as 
balance;store bank into 'clean_bank.csv' using 
PigStorage(';'); -- this statement is optional, it just shows 
you that most of the time %pig.script is used for data munging before querying the 
data. pig.queryGet the number of each age where age is less than 
30%pig.querybank_data = filter bank by age < 30;b = group bank_dat
 a by age;foreach b generate group, COUNT($1);The same as above, but use 
dynamic text form so that the user can specify the variable maxAge in a textbox. (See 
screenshot below). Dynamic form is a very cool feature of Zeppelin, you can 
refer to this link for details.%pig.querybank_data = filter bank by age < 
${maxAge=40};b = group bank_data by age;foreach b generate group, COUNT($1) as 
count;Get the number of each age for specific marital type, also use dynamic 
form here. User can choose the marital type in the dropdown list (see 
screenshot below).%pig.querybank_data = filter bank by 
marital=='${marital=single,single|divorced|married}';b = group 
bank_data by age;foreach b generate group, COUNT($1) as count;The above 
examples are in the Pig tutorial note in Zeppelin, you can check that for 
details. Here's the screenshot.Data is shared between %pig and 
%pig.query, so that you can do some common work in %pig, and run different kinds 
of queries based on the data of %pig. Besides, we recommend specifying aliases 
explicitly so that the visualization can display the column names correctly. In 
examples 2 and 3 of %pig.query above, we name COUNT($1) as count. If you 
don't do this, it will be named by position. E.g. in the first example of 
%pig.query above, col_1 will be used in the chart to represent COUNT($1).",
       "url": " /interpreter/pig.html",
       "group": "manual",
       "excerpt": "Apache Pig is a platform for analyzing large data sets that 
consists of a high-level language for expressing data analysis programs, 
coupled with infrastructure for evaluating these programs."
@@ -157,7 +157,7 @@
 
     "/interpreter/sap.html": {
       "title": "SAP BusinessObjects Interpreter for Apache Zeppelin",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->SAP BusinessObjects 
(Universe) Interpreter for Apache ZeppelinOverviewSAP BusinessObjects BI 
platform (universes) can simplify the lives of business users and IT staff. SAP 
BusinessObjects is based on universes. The universe contains dual-semantic 
layer model. The users make queries upon universes. This interpreter is new 
interface for universes.Disclaimer SAP interpreter is not official interpreter 
for SAP BusinessObjects BI platf
 orm. It uses BI Semantic Layer REST APIThis interpreter is not directly 
supported by SAP AG.Tested with versions 4.2SP3 (14.2.3.2220) and 4.2SP5. There 
is no support for filters in UNX-universes converted from old UNV format.The 
universe name must be unique.Configuring SAP Universe InterpreterAt the 
"Interpreters" menu, you can edit SAP interpreter or create 
new one. Zeppelin provides these properties for SAP.      Property Name    
Value    Description        universe.api.url    http://localhost:6405/biprws    
The base url for the SAP BusinessObjects BI platform. You have to edit 
"localhost" that you may use (ex. http://0.0.0.0:6405/biprws)        
universe.authType    secEnterprise    The type of authentication for API of 
Universe. Available values: secEnterprise, secLDAP, secWinAD, secSAPR3        
universe.password        The BI platform user password        universe.user    
Administrator    The BI platform user login  How to use Choose the universe Choo
 se dimensions and measures in select statement Define conditions in where 
statementYou can compare two dimensions/measures or use Filter (without value). 
Dimesions/Measures can be compared with static values, may be is null or is not 
null, contains or not in list.Available the nested conditions (using braces 
"()"). "and" operator have more priority 
than "or". If generated query contains promtps, then promtps 
will appear as dynamic form after paragraph submitting.Example 
query%sapuniverse [Universe Name];select  [Folder1].[Dimension2],  
[Folder2].[Dimension3],  [Measure1]where  [Filter1]  and [Date] > 
'2018-01-01 00:00:00'  and [Folder1].[Dimension4] is not null  
and [Folder1].[Dimension5] in ('Value1', 
'Value2');distinct keywordYou can write keyword distinct after 
keyword select to return only distinct (different) values.Example 
query```sql%sapuniverse [Universe Name];select 
 distinct  [Folder1].[Dimension2], [Measure1]where  [Filter1];```limit 
keywordYou can write keyword limit and limit value in the end of query to limit 
the number of records returned based on a limit value.Example 
query```sql%sapuniverse [Universe Name];select  [Folder1].[Dimension2], 
[Measure1]where  [Filter1]limit 100;```Object InterpolationThe SAP interpreter 
also supports interpolation of ZeppelinContext objects into the paragraph 
text.To enable this feature set universe.interpolation to true. The following 
example shows one use of this facility:In Scala 
cell:z.put("curr_date", "2018-01-01 
00:00:00")In later SAP cell:where   [Filter1]   and [Date] > 
'{curr_date}'",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->SAP BusinessObjects 
(Universe) Interpreter for Apache ZeppelinOverviewSAP BusinessObjects BI 
platform (universes) can simplify the lives of business users and IT staff. SAP 
BusinessObjects is based on universes. The universe contains a dual-semantic 
layer model. Users make queries upon universes. This interpreter is a new 
interface for universes.Disclaimer The SAP interpreter is not an official interpreter 
for SAP BusinessObjects BI platf
 orm. It uses BI Semantic Layer REST APIThis interpreter is not directly 
supported by SAP AG.Tested with versions 4.2SP3 (14.2.3.2220) and 4.2SP5. There 
is no support for filters in UNX-universes converted from old UNV format.The 
universe name must be unique.Configuring SAP Universe InterpreterAt the 
"Interpreters" menu, you can edit SAP interpreter or create 
new one. Zeppelin provides these properties for SAP.      Property Name    
Value    Description        universe.api.url    http://localhost:6405/biprws    
The base url for the SAP BusinessObjects BI platform. You have to replace 
"localhost" with the host you actually use (ex. http://0.0.0.0:6405/biprws)        
universe.authType    secEnterprise    The type of authentication for API of 
Universe. Available values: secEnterprise, secLDAP, secWinAD, secSAPR3        
universe.password        The BI platform user password        universe.user    
Administrator    The BI platform user login  How to use Choose the universe 
Choose dimensions and measures in the select statement Define conditions in the where 
statementYou can compare two dimensions/measures or use a Filter (without a value). 
Dimensions/Measures can be compared with static values, checked with is null or is not 
null, contains, or not in list. Nested conditions are available (using parentheses 
"()"). The "and" operator has higher priority 
than "or". If the generated query contains prompts, then the prompts 
will appear as dynamic forms after the paragraph is submitted.Example 
query%sapuniverse [Universe Name];select  [Folder1].[Dimension2],  
[Folder2].[Dimension3],  [Measure1]where  [Filter1]  and [Date] > 
'2018-01-01 00:00:00'  and [Folder1].[Dimension4] is not null  
and [Folder1].[Dimension5] in ('Value1', 
'Value2');distinct keywordYou can write keyword distinct after 
keyword select to return only distinct (different) values.Example 
query%sapuniverse [Universe Name];select distin
 ct  [Folder1].[Dimension2], [Measure1]where  [Filter1];limit keywordYou can 
write the keyword limit and a limit value at the end of the query to limit the number of 
records returned based on a limit value.Example query%sapuniverse [Universe 
Name];select  [Folder1].[Dimension2], [Measure1]where  [Filter1]limit 
100;Object InterpolationThe SAP interpreter also supports interpolation of 
ZeppelinContext objects into the paragraph text.To enable this feature, set 
universe.interpolation to true. The following example shows one use of this 
facility:In Scala cell:z.put("curr_date", 
"2018-01-01 00:00:00")In later SAP cell:where   [Filter1]   
and [Date] > '{curr_date}'",
       "url": " /interpreter/sap.html",
       "group": "interpreter",
       "excerpt": "SAP BusinessObjects BI platform can simplify the lives of 
business users and IT staff. SAP BusinessObjects is based on universes. The 
universe contains a dual-semantic layer model. Users make queries upon 
universes. This interpreter is a new interface for universes."
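The SAP content above only shows interpolation of a hard-coded date. Below is a hedged sketch of the same ZeppelinContext pattern with a computed date; the %spark paragraph type, the start_date key, and the universe object names are illustrative assumptions (not from the docs), and universe.interpolation must be set to true as described above.

```scala
%spark
// Sketch: publish a rolling 30-day window start via ZeppelinContext so a
// later %sapuniverse paragraph can reference it as {start_date}.
import java.time.LocalDate
import java.time.format.DateTimeFormatter

val windowStart = LocalDate.now()
  .minusDays(30)
  .format(DateTimeFormatter.ofPattern("yyyy-MM-dd")) + " 00:00:00"

// z is the ZeppelinContext instance injected into Scala paragraphs.
z.put("start_date", windowStart)

// A later SAP paragraph (not Scala) could then read:
//   %sapuniverse [Universe Name];
//   select [Folder1].[Dimension2], [Measure1]
//   where [Filter1] and [Date] > '{start_date}';
```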
@@ -256,7 +256,7 @@
 
     "/interpreter/flink.html": {
       "title": "Flink Interpreter for Apache Zeppelin",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Flink interpreter for Apache 
ZeppelinOverviewApache Flink is an open source platform for distributed stream 
and batch data processing. Flink’s core is a streaming dataflow engine that 
provides data distribution, communication, and fault tolerance for distributed 
computations over data streams. Flink also builds batch processing on top of 
the streaming engine, overlaying native iteration support, managed memory, and 
program opt
 imization.In Zeppelin 0.9, we refactor the Flink interpreter in Zeppelin to 
support the latest version of Flink. Only Flink 1.10+ is supported, old version 
of flink won't work.Apache Flink is supported in Zeppelin with Flink 
interpreter group which consists of below five interpreters.      Name    Class 
   Description        %flink    FlinkInterpreter    Creates 
ExecutionEnvironment/StreamExecutionEnvironment/BatchTableEnvironment/StreamTableEnvironment
 and provides a Scala environment        %flink.pyflink    PyFlinkInterpreter   
 Provides a python environment        %flink.ipyflink    IPyFlinkInterpreter    
Provides an ipython environment        %flink.ssql    FlinkStreamSqlInterpreter 
   Provides a stream sql environment        %flink.bsql    
FlinkBatchSqlInterpreter    Provides a batch sql environment  
PrerequisitesDownload Flink 1.10 for scala 2.11 (Only scala-2.11 is supported, 
scala-2.12 is not supported yet in Zeppelin)ConfigurationThe Flink interpreter 
can be config
 ured with properties provided by Zeppelin (as following table).You can also 
add and set other flink properties which are not listed in the table. For a 
list of additional properties, refer to Flink Available Properties.      
Property    Default    Description        FLINK_HOME        Location of flink 
installation. It is must be specified, otherwise you can not use flink in 
Zeppelin        HADOOP_CONF_DIR        Location of hadoop conf, this is must be 
set if running in yarn mode        HIVE_CONF_DIR        Location of hive conf, 
this is must be set if you want to connect to hive metastore        
flink.execution.mode    local    Execution mode of flink, e.g. local | yarn | 
remote        flink.execution.remote.host        Host name of running 
JobManager. Only used for remote mode        flink.execution.remote.port        
Port of running JobManager. Only used for remote mode        flink.jm.memory    
1024    Total number of memory(mb) of JobManager        flink.tm.memory    1024 
   To
 tal number of memory(mb) of TaskManager        flink.tm.slot    1    Number of 
slot per TaskManager        local.number-taskmanager    4    Total number of 
TaskManagers in local mode        flink.yarn.appName    Zeppelin Flink Session  
  Yarn app name        flink.yarn.queue    default    queue name of yarn app    
    flink.webui.yarn.useProxy    false    whether use yarn proxy url as flink 
weburl, e.g. http://resource-manager:8088/proxy/application15833965980680004    
    flink.webui.yarn.address        Set this value only when your yarn address 
is mapped to some other address, e.g. some cloud vender will map 
http://resource-manager:8088 to https://xxx-yarn.yy.cn/gateway/kkk/yarn        
flink.udf.jars        Flink udf jars (comma separated), zeppelin will register 
udf in this jar automatically for user. These udf jars could be either local 
files or hdfs files if you have hadoop installed. The udf name is the class 
name.        flink.udf.jars.packages        Packages (comma separate
 d) that would be searched for the udf defined in flink.udf.jars.        
flink.execution.jars        Additional user jars (comma separated), these jars 
could be either local files or hdfs files if you have hadoop installed.        
flink.execution.packages        Additional user packages (comma separated), 
e.g. 
org.apache.flink:flink-connector-kafka2.11:1.10,org.apache.flink:flink-connector-kafka-base2.11:1.10.0,org.apache.flink:flink-json:1.10.0
        zeppelin.flink.concurrentBatchSql.max    10    Max concurrent sql of 
Batch Sql (%flink.bsql)        zeppelin.flink.concurrentStreamSql.max    10    
Max concurrent sql of Stream Sql (%flink.ssql)        zeppelin.pyflink.python   
 python    Python binary executable for PyFlink        
table.exec.resource.default-parallelism    1    Default parallelism for flink 
sql job        zeppelin.flink.scala.color    true    Whether display scala 
shell output in colorful format      zeppelin.flink.enableHive    false    
Whether enable hive        zep
 pelin.flink.hive.version    2.3.4    Hive version that you would like to 
connect        zeppelin.flink.module.enableHive    false    Whether enable hive 
module, hive udf take precedence over flink udf if hive module is enabled.      
  zeppelin.flink.maxResult    1000    max number of row returned by sql 
interpreter        flink.interpreter.close.shutdown_cluster    true    Whether 
shutdown application when closing interpreter        
zeppelin.interpreter.close.cancel_job    true    Whether cancel flink job when 
closing interpreter        zeppelin.flink.job.check_interval    1000    Check 
interval (in milliseconds) to check flink job progress  
StreamExecutionEnvironment, ExecutionEnvironment, StreamTableEnvironment, 
BatchTableEnvironmentZeppelin will create 6 variables as flink scala (%flink) 
entry point:senv    (StreamExecutionEnvironment), benv     
(ExecutionEnvironment)stenv   (StreamTableEnvironment for blink planner) btenv  
 (BatchTableEnvironment for blink planner)stenv_2   (Str
 eamTableEnvironment for flink planner) btenv_2   (BatchTableEnvironment for 
flink planner)And will create 6 variables as pyflink (%flink.pyflink or 
%flink.ipyflink) entry point:s_env    (StreamExecutionEnvironment), b_env     
(ExecutionEnvironment)st_env   (StreamTableEnvironment for blink planner) 
bt_env   (BatchTableEnvironment for blink planner)st_env_2   
(StreamTableEnvironment for flink planner) bt_env_2   (BatchTableEnvironment 
for flink planner)Blink/Flink PlannerThere're 2 planners supported by 
Flink's table api: flink & blink.If you want to use DataSet 
api, and convert it to flink table then please use flink planner (btenv_2 and 
stenv_2).In other cases, we would always recommend you to use blink planner. 
This is also what flink batch/streaming sql interpreter use (%flink.bsql 
& %flink.ssql)Check this page for the difference between flink planner 
and blink planner.Execution mode (Local/Remote/Yarn)Flink in Zeppelin supports 
3 execution modes (
 flink.execution.mode):LocalRemoteYarnRun Flink in Local ModeRunning Flink in 
Local mode will start a MiniCluster in local JVM. By default, the local 
MiniCluster will use port 8081, so make sure this port is available in your 
machine,otherwise you can configure rest.port to specify another port. You can 
also specify local.number-taskmanager and flink.tm.slot to customize the number 
of TM and number of slots per TM, because by default it is only 4 TM with 1 
Slots which may not be enough for some cases.Run Flink in Remote ModeRunning 
Flink in remote mode will connect to a existing flink cluster which could be 
standalone cluster or yarn session cluster. Besides specifying 
flink.execution.mode to be remote. You also need to 
specifyflink.execution.remote.host and flink.execution.remote.port to point to 
flink job manager.Run Flink in Yarn ModeIn order to run flink in Yarn mode, you 
need to make the following settings:Set flink.execution.mode to yarnSet 
HADOOP_CONF_DIR in flink's in
 terpreter setting.Make sure hadoop command is on your PATH. Because internally 
flink will call command hadoop classpath and load all the hadoop related jars 
in the flink interpreter processHow to use HiveIn order to use Hive in Flink, 
you have to make the following setting.Set zeppelin.flink.enableHive to be 
trueSet zeppelin.flink.hive.version to be the hive version you are using.Set 
HIVE_CONF_DIR to be the location where hive-site.xml is located. Make sure hive 
metastore is started and you have configured hive.metastore.uris in 
hive-site.xmlCopy the following dependencies to the lib folder of flink 
installation. 
flink-connector-hive_2.11–1.10.0.jarflink-hadoop-compatibility_2.11–1.10.0.jarhive-exec-2.x.jar
 (for hive 1.x, you need to copy hive-exec-1.x.jar, hive-metastore-1.x.jar, 
libfb303–0.9.2.jar and libthrift-0.9.2.jar)Flink Batch SQL%flink.bsql is used 
for flink's batch sql. You can type help to get all the available 
commands. It supports all the flink
  sql, including DML/DDL/DQL.Use insert into statement for batch ETLUse select 
statement for batch data analytics Flink Streaming SQL%flink.ssql is used for 
flink's streaming sql. You just type help to get all the available 
commands. It supports all the flink sql, including DML/DDL/DQL.Use insert into 
statement for streaming ETLUse select statement for streaming data 
analyticsStreaming Data VisualizationZeppelin supports 3 types of streaming 
data analytics:* Single* Update* Appendtype=singleSingle mode is for the case 
when the result of sql statement is always one row, such as the following 
example. The output format is HTML, and you can specify paragraph local 
property template for the final output content template. And you can use {i} as 
placeholder for the ith column of result.    type=updateUpdate mode is suitable 
for the case when the output is more than one rows, and always will be updated 
continuously. Here’s one example where we use group by.    type=appendAppend
  mode is suitable for the scenario where output data is always appended. E.g. 
the following example which use tumble window.    Flink UDFYou can use Flink 
scala UDF or Python UDF in sql. UDF for batch and streaming sql is the same. 
Here're 2 examples.Scala UDF%flinkclass ScalaUpper extends 
ScalarFunction {  def eval(str: String) = 
str.toUpperCase}btenv.registerFunction("scala_upper", new 
ScalaUpper())Python UDF%flink.pyflinkclass PythonUpper(ScalarFunction):  def 
eval(self, s):    return 
s.upper()bt_env.register_function("python_upper", 
udf(PythonUpper(), DataTypes.STRING(), DataTypes.STRING()))Zeppelin only 
supports scala and python for flink interpreter, if you want to write java udf 
or the udf is pretty complicated which make it not suitable to write in 
Zeppelin,then you can write the udf in IDE and build a udf jar.In Zeppelin you 
just need to specify flink.udf.jars to this jar, and flinkinterpreter will 
detect all the udfs in this jar 
 and register all the udfs to TableEnvironment, the udf name is the class 
name.PyFlink(%flink.pyflink)In order to use PyFlink in Zeppelin, you just need 
to do the following configuration.* Install apache-flink (e.g. pip install 
apache-flink)* Set zeppelin.pyflink.python to the python executable where 
apache-flink is installed in case you have multiple python installed.* Copy 
flink-python_2.11–1.10.0.jar from flink opt folder to flink lib folderAnd 
PyFlink will create 6 variables for you:s_env    (StreamExecutionEnvironment), 
b_env     (ExecutionEnvironment)st_env   (StreamTableEnvironment for blink 
planner) bt_env   (BatchTableEnvironment for blink planner)st_env_2   
(StreamTableEnvironment for flink planner) bt_env_2   (BatchTableEnvironment 
for flink planner)IPython Support(%flink.ipyflink)By default, zeppelin would 
use IPython in %flink.pyflink when IPython is available, Otherwise it would 
fall back to the original python implementation.For the IPython features, you 
can refer
  docPython InterpreterZeppelinContextZeppelin automatically injects 
ZeppelinContext as variable z in your Scala/Python environment. ZeppelinContext 
provides some additional functions and utilities.See Zeppelin-Context for more 
details. You can use z to display both flink DataSet and batch/stream 
table.Display DataSetDisplay Batch TableDisplay Stream TableParagraph local 
propertiesIn the section of Streaming Data Visualization, we demonstrate the 
different visualization type via paragraph local properties: type. In this 
section, we will list and explain all the supported local properties in flink 
interpreter.      Property    Default    Description        type        Used in 
%flink.ssql to specify the streaming visualization type (single, update, 
append)        refreshInterval    3000    Used in `%flink.ssql` to specify 
frontend refresh interval for streaming data visualization.        template    
{0}    Used in `%flink.ssql` to specify html template for `single` type of 
streaming da
 ta visualization, And you can use `{i}` as placeholder for the {i}th column of 
the result.        parallelism        Used in %flink.ssql & %flink.bsql to 
specify the flink sql job parallelism        maxParallelism        Used in 
%flink.ssql & %flink.bsql to specify the flink sql job max parallelism in 
case you want to change parallelism later. For more details, refer this 
[link](https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/parallel.html#setting-the-maximum-parallelism)
         savepointDir        If you specify it, then when you cancel your flink 
job in Zeppelin, it would also do savepoint and store state in this directory. 
And when you resume your job, it would resume from this savepoint.        
execution.savepoint.path        When you resume your job, it would resume from 
this savepoint path.        resumeFromSavepoint        Resume flink job from 
savepoint if you specify savepointDir.        resumeFromLatestCheckpoint        
Resume flink job from lates
 t checkpoint if you enable checkpoint.        runAsOne    false    All the 
insert into sql will run in a single flink job if this is true.  Tutorial 
NotesZeppelin is shipped with several Flink tutorial notes which may be helpful 
for you. You check more features in the tutorial notes.",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Flink interpreter for Apache 
ZeppelinOverviewApache Flink is an open source platform for distributed stream 
and batch data processing. Flink’s core is a streaming dataflow engine that 
provides data distribution, communication, and fault tolerance for distributed 
computations over data streams. Flink also builds batch processing on top of 
the streaming engine, overlaying native iteration support, managed memory, and 
program opt
 imization.In Zeppelin 0.9, we refactored the Flink interpreter to 
support the latest version of Flink. Only Flink 1.10+ is supported, old 
versions of flink won't work.Apache Flink is supported in Zeppelin with 
the Flink interpreter group which consists of the five interpreters listed 
below.      Name    Class    Description        %flink    FlinkInterpreter    
Creates 
ExecutionEnvironment/StreamExecutionEnvironment/BatchTableEnvironment/StreamTableEnvironment
 and provides a Scala environment        %flink.pyflink    PyFlinkInterpreter   
 Provides a python environment        %flink.ipyflink    IPyFlinkInterpreter    
Provides an ipython environment        %flink.ssql    FlinkStreamSqlInterpreter 
   Provides a stream sql environment        %flink.bsql    
FlinkBatchSqlInterpreter    Provides a batch sql environment  
PrerequisitesDownload Flink 1.10 for scala 2.11 (Only scala-2.11 is supported, 
scala-2.12 is not supported yet in Zeppelin)ConfigurationThe Flink interpret
 er can be configured with properties provided by Zeppelin (as following 
table).You can also add and set other flink properties which are not listed in 
the table. For a list of additional properties, refer to Flink Available 
Properties.      Property    Default    Description        FLINK_HOME        
Location of flink installation. It must be specified, otherwise you can not 
use flink in Zeppelin        HADOOP_CONF_DIR        Location of hadoop conf, 
this must be set if running in yarn mode        HIVE_CONF_DIR        
Location of hive conf, this must be set if you want to connect to hive 
metastore        flink.execution.mode    local    Execution mode of flink, e.g. 
local | yarn | remote        flink.execution.remote.host        Host name of 
running JobManager. Only used for remote mode        
flink.execution.remote.port        Port of running JobManager. Only used for 
remote mode        flink.jm.memory    1024    Total number of memory(mb) of 
JobManager        flink.tm.memo
 ry    1024    Total number of memory(mb) of TaskManager        flink.tm.slot   
 1    Number of slot per TaskManager        local.number-taskmanager    4    
Total number of TaskManagers in local mode        flink.yarn.appName    
Zeppelin Flink Session    Yarn app name        flink.yarn.queue    default    
queue name of yarn app        flink.webui.yarn.useProxy    false    whether to use the 
yarn proxy url as the flink web url, e.g. 
http://resource-manager:8088/proxy/application_1583396598068_0004        
flink.webui.yarn.address        Set this value only when your yarn address is 
mapped to some other address, e.g. some cloud vendor will map 
http://resource-manager:8088 to https://xxx-yarn.yy.cn/gateway/kkk/yarn        
flink.udf.jars        Flink udf jars (comma separated), zeppelin will register 
udfs in these jars automatically for the user. These udf jars could be either local 
files or hdfs files if you have hadoop installed. The udf name is the class 
name.        flink.udf.jars.packages        Packages
  (comma separated) that would be searched for the udf defined in 
flink.udf.jars.        flink.execution.jars        Additional user jars (comma 
separated), these jars could be either local files or hdfs files if you have 
hadoop installed.        flink.execution.packages        Additional user 
packages (comma separated), e.g. 
org.apache.flink:flink-connector-kafka_2.11:1.10,org.apache.flink:flink-connector-kafka-base_2.11:1.10.0,org.apache.flink:flink-json:1.10.0
        zeppelin.flink.concurrentBatchSql.max    10    Max concurrent sql of 
Batch Sql (%flink.bsql)        zeppelin.flink.concurrentStreamSql.max    10    
Max concurrent sql of Stream Sql (%flink.ssql)        zeppelin.pyflink.python   
 python    Python binary executable for PyFlink        
table.exec.resource.default-parallelism    1    Default parallelism for flink 
sql job        zeppelin.flink.scala.color    true    Whether display scala 
shell output in colorful format      zeppelin.flink.enableHive    false    
Whether enable
  hive        zeppelin.flink.hive.version    2.3.4    Hive version that you 
would like to connect        zeppelin.flink.module.enableHive    false    
Whether enable hive module, hive udf take precedence over flink udf if hive 
module is enabled.        zeppelin.flink.maxResult    1000    max number of row 
returned by sql interpreter        flink.interpreter.close.shutdown_cluster    
true    Whether shutdown application when closing interpreter        
zeppelin.interpreter.close.cancel_job    true    Whether cancel flink job when 
closing interpreter        zeppelin.flink.job.check_interval    1000    Check 
interval (in milliseconds) to check flink job progress  
StreamExecutionEnvironment, ExecutionEnvironment, StreamTableEnvironment, 
BatchTableEnvironmentZeppelin will create 6 variables as flink scala (%flink) 
entry point:senv    (StreamExecutionEnvironment), benv     
(ExecutionEnvironment)stenv   (StreamTableEnvironment for blink planner) btenv  
 (BatchTableEnvironment for blink planne
 r)stenv_2   (StreamTableEnvironment for flink planner) btenv_2   
(BatchTableEnvironment for flink planner)And will create 6 variables as pyflink 
(%flink.pyflink or %flink.ipyflink) entry point:s_env    
(StreamExecutionEnvironment), b_env     (ExecutionEnvironment)st_env   
(StreamTableEnvironment for blink planner) bt_env   (BatchTableEnvironment for 
blink planner)st_env_2   (StreamTableEnvironment for flink planner) bt_env_2   
(BatchTableEnvironment for flink planner)Blink/Flink PlannerThere are 2 
planners supported by Flink's table api: flink & blink.If you 
want to use the DataSet api and convert it to a flink table, then please use the flink 
planner (btenv_2 and stenv_2). In other cases, we would always recommend using the 
blink planner. This is also what the flink batch/streaming sql interpreters use 
(%flink.bsql & %flink.ssql)Check this page for the difference between 
flink planner and blink planner.Execution mode (Local/Remote/Yarn/Yarn 
Application)Flink in Zeppelin su
 pports 4 execution modes (flink.execution.mode):LocalRemoteYarnYarn 
ApplicationRun Flink in Local ModeRunning Flink in Local mode will start a 
MiniCluster in local JVM. By default, the local MiniCluster will use port 8081, 
so make sure this port is available on your machine, otherwise you can configure 
rest.port to specify another port. You can also specify 
local.number-taskmanager and flink.tm.slot to customize the number of TM and 
number of slots per TM, because by default it is only 4 TMs with 1 slot each, which 
may not be enough for some cases.Run Flink in Remote ModeRunning Flink in 
remote mode will connect to an existing flink cluster which could be a standalone 
cluster or a yarn session cluster. Besides specifying flink.execution.mode to be 
remote, you also need to specify flink.execution.remote.host and 
flink.execution.remote.port to point to the flink job manager.Run Flink in Yarn 
ModeIn order to run flink in Yarn mode, you need to make the following 
settings:Set flink.execution.mode to ya
 rnSet HADOOP_CONF_DIR in flink's interpreter setting or 
zeppelin-env.sh.Make sure hadoop command is on your PATH, because internally 
flink will call the command hadoop classpath and load all the hadoop related jars 
in the flink interpreter processRun Flink in Yarn Application ModeIn the above 
yarn mode, there will be a separate flink interpreter process. This may run 
out of resources when there are many interpreter processes. So it is 
recommended to use yarn application mode if you are using flink 1.11 or 
later (yarn application mode is only supported from flink 1.11 onwards). In this 
mode the flink interpreter runs in the JobManager, which is in a yarn container.In 
order to run flink in yarn application mode, you need to make the following 
settings:Set flink.execution.mode to yarn-applicationSet HADOOP_CONF_DIR in 
flink's interpreter setting or zeppelin-env.sh.Make sure hadoop command 
is on your PATH, because internally flink will call the command hadoop classpath 
and load al
 l the hadoop related jars in the flink interpreter processHow to use HiveIn 
order to use Hive in Flink, you have to make the following setting.Set 
zeppelin.flink.enableHive to be trueSet zeppelin.flink.hive.version to be the 
hive version you are using.Set HIVE_CONF_DIR to be the location where 
hive-site.xml is located. Make sure hive metastore is started and you have 
configured hive.metastore.uris in hive-site.xmlCopy the following dependencies 
to the lib folder of flink installation. 
flink-connector-hive_2.11-1.10.0.jarflink-hadoop-compatibility_2.11-1.10.0.jarhive-exec-2.x.jar
 (for hive 1.x, you need to copy hive-exec-1.x.jar, hive-metastore-1.x.jar, 
libfb303-0.9.2.jar and libthrift-0.9.2.jar)Flink Batch SQL%flink.bsql is used 
for flink's batch sql. You can type help to get all the available 
commands. It supports all the flink sql, including DML/DDL/DQL.Use insert into 
statement for batch ETLUse select statement for batch data analytics Flink 
Streaming SQ
 L%flink.ssql is used for flink's streaming sql. You just type help to 
get all the available commands. It supports all the flink sql, including 
DML/DDL/DQL.Use insert into statement for streaming ETLUse select statement for 
streaming data analyticsStreaming Data VisualizationZeppelin supports 3 types 
of streaming data analytics:* Single* Update* Appendtype=singleSingle mode is 
for the case when the result of sql statement is always one row, such as the 
following example. The output format is HTML, and you can specify paragraph 
local property template for the final output content template. And you can use 
{i} as placeholder for the ith column of result.    type=updateUpdate mode is 
suitable for the case when the output is more than one row and will always be 
updated continuously. Here’s one example where we use group by.    
type=appendAppend mode is suitable for the scenario where output data is always 
appended. E.g. the following example, which uses a tumble window.    Fli
 nk UDFYou can use Flink scala UDF or Python UDF in sql. UDF for batch and 
streaming sql is the same. Here're 2 examples.Scala UDF%flinkclass 
ScalaUpper extends ScalarFunction {  def eval(str: String) = 
str.toUpperCase}btenv.registerFunction("scala_upper", new 
ScalaUpper())Python UDF%flink.pyflinkclass PythonUpper(ScalarFunction):  def 
eval(self, s):    return 
s.upper()bt_env.register_function("python_upper", 
udf(PythonUpper(), DataTypes.STRING(), DataTypes.STRING()))Zeppelin only 
supports scala and python for the flink interpreter. If you want to write a java 
udf, or the udf is pretty complicated which makes it not suitable to write in 
Zeppelin, then you can write the udf in an IDE and build a udf jar.In Zeppelin you 
just need to specify flink.udf.jars to this jar, and the flink interpreter will 
detect all the udfs in this jar and register all the udfs to TableEnvironment, 
the udf name is the class name.PyFlink(%flink.pyflink)In order to use PyFlink 
in 
 Zeppelin, you just need to do the following configuration.* Install 
apache-flink (e.g. pip install apache-flink)* Set zeppelin.pyflink.python to 
the python executable where apache-flink is installed in case you have multiple 
pythons installed.* Copy flink-python_2.11-1.10.0.jar from the flink opt folder to 
flink lib folderAnd PyFlink will create 6 variables for you:s_env    
(StreamExecutionEnvironment), b_env     (ExecutionEnvironment)st_env   
(StreamTableEnvironment for blink planner) bt_env   (BatchTableEnvironment for 
blink planner)st_env_2   (StreamTableEnvironment for flink planner) bt_env_2   
(BatchTableEnvironment for flink planner)IPython Support(%flink.ipyflink)By 
default, zeppelin would use IPython in %flink.pyflink when IPython is 
available, otherwise it would fall back to the original python 
implementation.For the IPython features, you can refer docPython 
InterpreterZeppelinContextZeppelin automatically injects ZeppelinContext as 
variable z in your Scala/Python environme
 nt. ZeppelinContext provides some additional functions and utilities.See 
Zeppelin-Context for more details. You can use z to display both flink DataSet 
and batch/stream table.Display DataSetDisplay Batch TableDisplay Stream 
TableParagraph local propertiesIn the section of Streaming Data Visualization, 
we demonstrate the different visualization types via the paragraph local property 
type. In this section, we will list and explain all the supported local 
properties in flink interpreter.      Property    Default    Description        
type        Used in %flink.ssql to specify the streaming visualization type 
(single, update, append)        refreshInterval    3000    Used in 
`%flink.ssql` to specify frontend refresh interval for streaming data 
visualization.        template    {0}    Used in `%flink.ssql` to specify html 
template for the `single` type of streaming data visualization, and you can use 
`{i}` as placeholder for the {i}th column of the result.        parallelism     
   Used in %fl
 ink.ssql & %flink.bsql to specify the flink sql job parallelism        
maxParallelism        Used in %flink.ssql & %flink.bsql to specify the 
flink sql job max parallelism in case you want to change parallelism later. For 
more details, refer to this 
[link](https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/parallel.html#setting-the-maximum-parallelism)
         savepointDir        If you specify it, then when you cancel your flink 
job in Zeppelin, it would also do savepoint and store state in this directory. 
And when you resume your job, it would resume from this savepoint.        
execution.savepoint.path        When you resume your job, it would resume from 
this savepoint path.        resumeFromSavepoint        Resume flink job from 
savepoint if you specify savepointDir.        resumeFromLatestCheckpoint        
Resume flink job from latest checkpoint if you enable checkpoint.        
runAsOne    false    All the insert into sql will run in a single flink job if 
thi
 s is true.  Tutorial NotesZeppelin is shipped with several Flink tutorial 
notes which may be helpful for you. You can check for more features in the 
tutorial notes.",
       "url": " /interpreter/flink.html",
       "group": "interpreter",
       "excerpt": "Apache Flink is an open source platform for distributed 
stream and batch data processing."
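As a quick reference for the flink entry points listed above, here is a hedged sketch of a %flink paragraph that uses only the benv (ExecutionEnvironment) variable the interpreter creates; the word list is made-up sample data, and the standard Flink Scala API imports are assumed to be preloaded by the interpreter.

```scala
%flink
// Minimal DataSet word count on the benv entry point described above.
val words = benv.fromElements("to", "be", "or", "not", "to", "be")

words
  .map(w => (w, 1)) // pair each word with a count of 1
  .groupBy(0)       // group by the word (tuple field 0)
  .sum(1)           // sum the counts (tuple field 1)
  .print()          // print the result to the paragraph output
```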
@@ -467,7 +467,7 @@
 
     "/development/writing_zeppelin_interpreter.html": {
       "title": "Writing a New Interpreter",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Writing a New 
InterpreterWhat is Apache Zeppelin InterpreterApache Zeppelin Interpreter is a 
language backend. For example to use scala code in Zeppelin, you need a scala 
interpreter.Every Interpreters belongs to an InterpreterGroup.Interpreters in 
the same InterpreterGroup can reference each other. For example, 
SparkSqlInterpreter can reference SparkInterpreter to get SparkContext from it 
while they're in the same group.In
 terpreterSetting is configuration of a given InterpreterGroup and a unit of 
start/stop interpreter.All Interpreters in the same InterpreterSetting are 
launched in a single, separate JVM process. The Interpreter communicates with 
Zeppelin engine via Thrift.In 'Separate Interpreter(scoped / isolated) 
for each note' mode which you can see at the Interpreter Setting menu 
when you create a new interpreter, new interpreter instance will be created per 
note. But it still runs on the same JVM while they're in the same 
InterpreterSettings.Make your own InterpreterCreating a new interpreter is 
quite simple. Just extend org.apache.zeppelin.interpreter abstract class and 
implement some methods.For your interpreter project, you need to make 
interpreter-parent as your parent project and use plugin maven-enforcer-plugin, 
maven-dependency-plugin and maven-resources-plugin. Here's one sample 
pom.xml <project xmlns="http://maven.apache.org/POM/4.0.0&amp;quot; xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/maven-v4_0_0.xsd">    
<modelVersion>4.0.0</modelVersion>    
<parent>        
<artifactId>interpreter-parent</artifactId>        
<groupId>org.apache.zeppelin</groupId>        
<version>0.9.0-SNAPSHOT</version>        
<relativePath>../interpreter-parent</relativePath>  
  </parent>    ...    <dependencies>        
<dependency>            
<groupId>org.apache.zeppelin</groupId>            
<artifactId>zeppelin-interpreter</artifactId>       
     <version>${project.version}</version>          
  <scope>provided</scope&amp;gt;        </dependency>    </dependencies>    
<build>        <plugins>            
<plugin>                
<artifactId>maven-enforcer-plugin</artifactId>      
      </plugin>            <plugin>                
<artifactId>maven-dependency-plugin</artifactId>    
        </plugin>            <plugin>               
 <artifactId>maven-resources-plugin</artifactId>    
        </plugin>        </plugins>    
</build></project>You should include 
org.apache.zeppelin:zeppelin-interpreter:[VERSION] as your 
interpreter's dependency in pom.xml. BesAnd you should put your jars 
under your interpreter directory with a specific directory name. Zeppelin 
server reads interpreter directories recursively and initializes interpreter
 s including your own interpreter.There are three locations where you can store 
your interpreter group, name and other information. Zeppelin server tries to 
find the location below. Next, Zeppelin tries to find interpreter-setting.json 
in your interpreter 
jar.{ZEPPELIN_INTERPRETER_DIR}/{YOUR_OWN_INTERPRETER_DIR}/interpreter-setting.jsonHere
 is an example of interpreter-setting.json on your own interpreter.[  {    
"group": "your-group",    
"name": "your-name",    
"className": "your.own.interpreter.class",  
  "properties": {      "properties1": {     
   "envName": null,        "propertyName": 
"property.1.name",        "defaultValue": 
"propertyDefaultValue",        
"description": "Property description",      
  "t
 ype": "textarea"      },      
"properties2": {        "envName": 
PROPERTIES_2,        "propertyName": null,        
"defaultValue": "property2DefaultValue",    
    "description": "Property 2 
description",        "type": 
"textarea"      }, ...    },    "editor": { 
     "language": 
"your-syntax-highlight-language",      
"editOnDblClick": false,      
"completionKey": "TAB"    },    
"config": {      "runOnSelectionChange": 
true/false,      "title": true/false,      
"checkEmpty": true/false    }  },  {    ...  }]Finally, 
Zeppelin uses static initialization with the following:static {  
Interpreter.register("MyInterpret
 erName", MyClassName.class.getName());}Static initialization is 
deprecated and will be supported until 0.6.0.The name will appear later in the 
interpreter name option box during the interpreter configuration process.The 
name of the interpreter is what you later write to identify a paragraph which 
should be interpreted using this interpreter.%MyInterpreterNamesome interpreter 
specific code...Editor setting for InterpreterYou can add editor object to 
interpreter-setting.json file to specify paragraph editor settings.LanguageIf 
the interpreter uses a specific programming language (like Scala, Python, SQL), 
it is generally recommended to add a syntax highlighting supported for that to 
the note paragraph editor.To check out the list of languages supported, see the 
mode-*.js files under zeppelin-web/bower_components/ace-builds/src-noconflict 
or from github.com/ajaxorg/ace-builds.If you want to add a new set of syntax 
highlighting,  Add the mode-*.js file to zeppelin-web/bower.jso
 n (when built, zeppelin-web/src/index.html will be changed automatically).Add 
language field to editor object. Note that if you don't specify 
language field, your interpreter will use plain text mode for syntax 
highlighting. Let's say you want to set your language to java, then 
add:"editor": {  "language": 
"java"}Edit on double clickIf your interpreter uses mark-up 
language such as markdown or HTML, set editOnDblClick to true so that text 
editor opens on pargraph double click and closes on paragraph run. Otherwise 
set it to false."editor": {  
"editOnDblClick": false}Completion key (Optional)By default, 
Ctrl+dot(.) brings autocompletion list in the editor.Through completionKey, 
each interpreter can configure autocompletion key.Currently TAB is only 
available option."editor": {  
"completionKey": "TAB"}Notebook paragraph 
display
  title (Optional)The notebook paragraph does not display the title by 
default.You can have the title of the notebook display the title by 
config.title=true."config": {  "title": 
true  # default: false}Notebook run on selection change (Optional)The dynamic 
form in the notebook triggers execution when the selection is modified.You can 
make the dynamic form in the notebook not trigger execution after selecting the 
modification by setting 
config.runOnSelectionChange=false."config": {  
"runOnSelectionChange": false # default: true}Check if the 
paragraph is empty before running (Optional)The notebook's paragraph 
default will not run if it is empty.You can set config.checkEmpty=false, to run 
even when the paragraph of the notebook is empty."config": {  
"checkEmpty": false # default: true}Install your interpreter 
binaryOnce you have built your interpreter, you can place it und
 er the interpreter directory with all its 
dependencies.[ZEPPELIN_HOME]/interpreter/[INTERPRETER_NAME]/Configure your 
interpreterTo configure your interpreter you need to follow these steps:Start 
Zeppelin by running ./bin/zeppelin-daemon.sh start.In the interpreter page, 
click the +Create button and configure your interpreter properties.Now you are 
done and ready to use your interpreter.Note : Interpreters released with 
zeppelin have a default configuration which is used when there is no 
conf/zeppelin-site.xml.Use your interpreter0.5.0Inside of a note, 
%[INTERPRETER_NAME] directive will call your interpreter.Note that the first 
interpreter configuration in zeppelin.interpreters will be the default one.For 
example,%myintpval a = "My interpreter"println(a)0.6.0 and 
laterInside of a note, %[INTERPRETER_GROUP].[INTERPRETER_NAME] directive will 
call your interpreter.You can omit either [INTERPRETER_GROUP] or 
[INTERPRETER_NAME]. If you omit [INTERPRETER_NAME], then first 
 available interpreter will be selected in the [INTERPRETER_GROUP].Likewise, if 
you skip [INTERPRETER_GROUP], then [INTERPRETER_NAME] will be chosen from 
default interpreter group.For example, if you have two interpreter myintp1 and 
myintp2 in group mygrp, you can call myintp1 like%mygrp.myintp1codes for 
myintp1and you can call myintp2 like%mygrp.myintp2codes for myintp2If you omit 
your interpreter name, it'll select first available interpreter in the 
group ( myintp1 ).%mygrpcodes for myintp1You can only omit your interpreter 
group when your interpreter group is selected as a default group.%myintp2codes 
for myintp2ExamplesCheckout some interpreters released with Zeppelin by 
default.sparkmarkdownshelljdbcContributing a new Interpreter to Zeppelin 
releasesWe welcome contribution to a new interpreter. Please follow these few 
steps:First, check out the general contribution guide here.Follow the steps in 
Make your own Interpreter section and Editor setting for Interpreter above.Ad
 d your interpreter as in the Configure your interpreter section above; also 
add it to the example template zeppelin-site.xml.template.Add tests! They are 
run by Travis for all changes and it is important that they are 
self-contained.Include your interpreter as a module in pom.xml.Add 
documentation on how to use your interpreter under docs/interpreter/. Follow 
the Markdown style as this example. Make sure you list config settings and 
provide working examples on using your interpreter in code boxes in Markdown. 
Link to images as appropriate (images should go to 
docs/assets/themes/zeppelin/img/docs-img/). And add a link to your 
documentation in the navigation menu 
(docs/_includes/themes/zeppelin/_navigation.html).Most importantly, ensure 
licenses of the transitive closure of all dependencies are list in license 
file.Commit your changes and open a Pull Request on the project Mirror on 
GitHub; check to make sure Travis CI build is passing.",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Writing a New 
InterpreterWhat is Apache Zeppelin InterpreterApache Zeppelin Interpreter is a 
language backend. For example, to use Scala code in Zeppelin, you need a Scala interpreter.Every Interpreter belongs to an InterpreterGroup.Interpreters in 
the same InterpreterGroup can reference each other. For example, 
SparkSqlInterpreter can reference SparkInterpreter to get SparkContext from it 
while they're in the same group.InterpreterSetting is the configuration of a given InterpreterGroup and the unit of starting and stopping interpreters.All Interpreters in the same InterpreterSetting are 
launched in a single, separate JVM process. The Interpreter communicates with 
Zeppelin engine via Thrift.In 'Separate Interpreter(scoped / isolated) 
for each note' mode which you can see at the Interpreter Setting menu 
when you create a new interpreter, a new interpreter instance will be created per 
note. But it still runs on the same JVM while they're in the same 
InterpreterSettings.Make your own InterpreterCreating a new interpreter is quite simple. Just extend the org.apache.zeppelin.interpreter.Interpreter abstract class and implement some of its methods.
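For orientation, here is a minimal sketch of such a subclass. The method names (open, close, interpret, cancel, getFormType, getProgress) follow the Interpreter base class, but the class name MyOwnInterpreter and the echo behaviour are made up for this example, so treat it as an illustration rather than a reference implementation.

package your.own.interpreter;

import java.util.Properties;

import org.apache.zeppelin.interpreter.Interpreter;
import org.apache.zeppelin.interpreter.InterpreterContext;
import org.apache.zeppelin.interpreter.InterpreterResult;

// Minimal interpreter sketch: it simply echoes the paragraph text back.
// Real interpreters usually start a backend session in open() and tear it down in close().
public class MyOwnInterpreter extends Interpreter {

  public MyOwnInterpreter(Properties properties) {
    super(properties);
  }

  @Override
  public void open() {
    // allocate resources, start the backend process, open connections, ...
  }

  @Override
  public void close() {
    // release whatever open() allocated
  }

  @Override
  public InterpreterResult interpret(String st, InterpreterContext context) {
    // run the paragraph text and report the outcome back to Zeppelin
    return new InterpreterResult(InterpreterResult.Code.SUCCESS, "echo: " + st);
  }

  @Override
  public void cancel(InterpreterContext context) {
    // best-effort cancellation of a running paragraph
  }

  @Override
  public FormType getFormType() {
    return FormType.SIMPLE;
  }

  @Override
  public int getProgress(InterpreterContext context) {
    return 0; // 0-100, shown as the paragraph progress bar
  }
}

The Maven setup described next packages a class like this so Zeppelin can load it.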
For your interpreter project, use interpreter-parent as the parent project and the maven-enforcer-plugin, maven-dependency-plugin and maven-resources-plugin plugins. Here's one sample 
pom.xml <project xmlns="http://maven.apache.org/POM/4.0.0&amp
 ;quot; xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/maven-v4_0_0.xsd">    
<modelVersion>4.0.0</modelVersion>    
<parent>        
<artifactId>interpreter-parent</artifactId>        
<groupId>org.apache.zeppelin</groupId>        
<version>0.9.0-SNAPSHOT</version>        
<relativePath>../interpreter-parent</relativePath>  
  </parent>    ...    <dependencies>        
<dependency>            
<groupId>org.apache.zeppelin</groupId>            
<artifactId>zeppelin-interpreter</artifactId>       
     <version>${project.version}</version>          
  <scope>provided</scope&a
 mp;gt;        </dependency>    </dependencies>    
<build>        <plugins>            
<plugin>                
<artifactId>maven-enforcer-plugin</artifactId>      
      </plugin>            <plugin>                
<artifactId>maven-dependency-plugin</artifactId>    
        </plugin>            <plugin>               
 <artifactId>maven-resources-plugin</artifactId>    
        </plugin>        </plugins>    
</build></project>You should include 
org.apache.zeppelin:zeppelin-interpreter:[VERSION] as your 
interpreter's dependency in pom.xml. And you should put your jars under your interpreter directory with a specific directory name. Zeppelin server reads interpreter directories recursively and initializes interpreters, including your own interpreter.There are three locations where you can store 
your interpreter group, name and other information. Zeppelin server tries to 
find the location below. Next, Zeppelin tries to find interpreter-setting.json 
in your interpreter 
jar.{ZEPPELIN_INTERPRETER_DIR}/{YOUR_OWN_INTERPRETER_DIR}/interpreter-setting.jsonHere is an example of interpreter-setting.json for your own interpreter.[  {    
"group": "your-group",    
"name": "your-name",    
"className": "your.own.interpreter.class",  
  "properties": {      "properties1": {     
   "envName": null,        "propertyName": 
"property.1.name",        "defaultValue": 
"propertyDefaultValue",        
"description": "Property description",      
  "t
 ype": "textarea"      },      
"properties2": {        "envName": 
PROPERTIES_2,        "propertyName": null,        
"defaultValue": "property2DefaultValue",    
    "description": "Property 2 
description",        "type": 
"textarea"      }, ...    },    "editor": { 
     "language": 
"your-syntax-highlight-language",      
"editOnDblClick": false,      
"completionKey": "TAB"    },    
"config": {      "runOnSelectionChange": 
true/false,      "title": true/false,      
"checkEmpty": true/false    }  },  {    ...  }]Finally, 
Zeppelin uses static initialization with the following:static {  
Interpreter.register("MyInterpreterName", MyClassName.class.getName());}Static initialization is 
deprecated and will be supported until 0.6.0.The name will appear later in the 
interpreter name option box during the interpreter configuration process.The 
name of the interpreter is what you later write to identify a paragraph which 
should be interpreted using this interpreter.%MyInterpreterNamesome interpreter 
specific code...Editor setting for InterpreterYou can add editor object to 
interpreter-setting.json file to specify paragraph editor settings.LanguageIf 
the interpreter uses a specific programming language (like Scala, Python, SQL), 
it is generally recommended to add syntax highlighting support for it to the note paragraph editor.To check out the list of supported languages, see the 
mode-*.js files under zeppelin-web/bower_components/ace-builds/src-noconflict 
or from github.com/ajaxorg/ace-builds.If you want to add a new set of syntax 
highlighting, add the mode-*.js file to zeppelin-web/bower.json (when built, zeppelin-web/src/index.html will be changed automatically).Add 
language field to editor object. Note that if you don't specify 
language field, your interpreter will use plain text mode for syntax 
highlighting. Let's say you want to set your language to java, then 
add:"editor": {  "language": 
"java"}Edit on double clickIf your interpreter uses mark-up 
language such as markdown or HTML, set editOnDblClick to true so that text 
editor opens on paragraph double click and closes on paragraph run. Otherwise 
set it to false."editor": {  
"editOnDblClick": false}Completion key (Optional)By default, 
Ctrl+dot(.) brings up the autocompletion list in the editor.Through completionKey, each interpreter can configure its autocompletion key.Currently TAB is the only 
available option."editor": {  "completionKey": "TAB"}
Notebook paragraph display title (Optional)The notebook paragraph does not display its title by default.You can make the paragraph display its title by setting 
config.title=true."config": {  "title": 
true  # default: false}Notebook run on selection change (Optional)The dynamic 
form in the notebook triggers execution when the selection is modified.You can 
stop the dynamic form in the notebook from triggering execution on selection change by setting 
config.runOnSelectionChange=false."config": {  
"runOnSelectionChange": false # default: true}Check if the 
paragraph is empty before running (Optional)By default, an empty notebook paragraph will not run.You can set config.checkEmpty=false to run the paragraph even when it is empty."config": {  
"checkEmpty": false # default: true}Install your interpreter 
binaryOnce you have built your interpreter, you can place it under the interpreter directory with all its 
dependencies.[ZEPPELIN_HOME]/interpreter/[INTERPRETER_NAME]/Configure your 
interpreterTo configure your interpreter you need to follow these steps:Start 
Zeppelin by running ./bin/zeppelin-daemon.sh start.In the interpreter page, 
click the +Create button and configure your interpreter properties.Now you are 
done and ready to use your interpreter.Note: Interpreters released with Zeppelin have a default configuration which is used when there is no 
conf/zeppelin-site.xml.Use your interpreter0.5.0Inside of a note, 
%[INTERPRETER_NAME] directive will call your interpreter.Note that the first 
interpreter configuration in zeppelin.interpreters will be the default one.For 
example,%myintpval a = "My interpreter"println(a)0.6.0 and 
laterInside of a note, %[INTERPRETER_GROUP].[INTERPRETER_NAME] directive will 
call your interpreter.You can omit either [INTERPRETER_GROUP] or 
[INTERPRETER_NAME]. If you omit [INTERPRETER_NAME], then the first available interpreter will be selected in the [INTERPRETER_GROUP].Likewise, if 
you skip [INTERPRETER_GROUP], then [INTERPRETER_NAME] will be chosen from 
default interpreter group.For example, if you have two interpreters, myintp1 and 
myintp2 in group mygrp, you can call myintp1 like%mygrp.myintp1codes for 
myintp1and you can call myintp2 like%mygrp.myintp2codes for myintp2If you omit 
your interpreter name, it'll select the first available interpreter in the 
group ( myintp1 ).%mygrpcodes for myintp1You can only omit your interpreter 
group when your interpreter group is selected as a default group.%myintp2codes 
for myintp2ExamplesCheck out some interpreters released with Zeppelin by default: spark, markdown, shell, jdbc.Contributing a new Interpreter to Zeppelin 
releasesWe welcome contributions of new interpreters. Please follow these few steps:First, check out the general contribution guide here.Follow the steps in 
the Make your own Interpreter section and Editor setting for Interpreter above.Add your interpreter as in the Configure your interpreter section above; also 
add it to the example template zeppelin-site.xml.template.Add tests! They are 
run for all changes and it is important that they are self-contained.
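As an illustration of a self-contained test, here is a plain JUnit 4 sketch that exercises the hypothetical MyOwnInterpreter from earlier without a running Zeppelin server or any external service:

import static org.junit.Assert.assertEquals;

import java.util.Properties;

import org.apache.zeppelin.interpreter.InterpreterResult;
import org.junit.Test;

public class MyOwnInterpreterTest {

  @Test
  public void interpretReturnsSuccess() {
    // Construct the interpreter directly; no Zeppelin server is needed.
    MyOwnInterpreter interpreter = new MyOwnInterpreter(new Properties());
    interpreter.open();
    InterpreterResult result = interpreter.interpret("1 + 1", null);
    assertEquals(InterpreterResult.Code.SUCCESS, result.code());
    interpreter.close();
  }
}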
Include your interpreter as a module in pom.xml.Add documentation on how to use your 
interpreter under docs/interpreter/. Follow the Markdown style as this example. 
Make sure you list config settings and provide working examples on using your 
interpreter in code boxes in Markdown. Link to images as appropriate (images 
should go to docs/assets/themes/zeppelin/img/docs-img/). And add a link to your 
documentation in the navigation menu 
(docs/_includes/themes/zeppelin/_navigation.html).Most importantly, ensure 
licenses of the transitive closure of all dependencies are listed in the license file.Commit your changes and open a Pull Request on the project mirror on 
GitHub; check to make sure Travis CI build is passing.",
       "url": " /development/writing_zeppelin_interpreter.html",
       "group": "development",
       "excerpt": "Apache Zeppelin Interpreter is a language backend. Every 
Interpreter belongs to an InterpreterGroup. Interpreters in the same 
InterpreterGroup can reference each other."
@@ -489,7 +489,7 @@
 
     "/development/contribution/how_to_contribute_code.html": {
       "title": "Contributing to Apache Zeppelin (Code)",
-      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Contributing to Apache 
Zeppelin ( Code )NOTE : Apache Zeppelin is an Apache2 License Software.Any 
contributions to Zeppelin (Source code, Documents, Image, Website) means you 
agree with license all your contributions as Apache2 License.Setting upHere are 
some tools you will need to build and test Zeppelin.Software Configuration 
Management ( SCM )Since Zeppelin uses Git for it's SCM system, you need 
git client installed in y
 our development machine.Integrated Development Environment ( IDE )You are free 
to use whatever IDE you prefer, or your favorite command line editor.Build 
ToolsTo build the code, installOracle Java 8Apache MavenGetting the source 
codeFirst of all, you need Zeppelin source code. The official location of 
Zeppelin is https://gitbox.apache.org/repos/asf/zeppelin.git.git accessGet the 
source code on your development machine using git.git clone 
git://gitbox.apache.org/repos/asf/zeppelin.git zeppelinYou may also want to 
develop against a specific branch. For example, for branch-0.5.6git clone -b 
branch-0.5.6 git://gitbox.apache.org/repos/asf/zeppelin.git zeppelinApache 
Zeppelin follows Fork & Pull as a source control workflow.If you want 
to not only build Zeppelin but also make any changes, then you need to fork 
Zeppelin github mirror repository and make a pull request.Before making a pull 
request, please take a look Contribution Guidelines.Buildmvn installTo skip 
testmvn install -D
 skipTestsTo build with specific spark / hadoop versionmvn install 
-Dspark.version=x.x.x -Dhadoop.version=x.x.xFor the further Run Zeppelin server 
in development modeOption 1 - Command LineCopy the 
conf/zeppelin-site.xml.template to 
zeppelin-server/src/main/resources/zeppelin-site.xml and change the 
configurations in this file if requiredRun the following commandcd 
zeppelin-serverHADOOP_HOME=YOUR_HADOOP_HOME JAVA_HOME=YOUR_JAVA_HOME mvn 
exec:java 
-Dexec.mainClass="org.apache.zeppelin.server.ZeppelinServer" 
-Dexec.args=""Option 2 - Daemon ScriptNote: Make sure you 
first run mvn clean install -DskipTestsin your zeppelin root directory, 
otherwise your server build will fail to find the required dependencies in the 
local repro.or use daemon scriptbin/zeppelin-daemon startServer will be run on 
http://localhost:8080.Option 3 - IDECopy the conf/zeppelin-site.xml.template to 
zeppelin-server/src/main/resources/zeppelin-site.xml and change the 
configurations
  in this file if requiredZeppelinServer.java Main classGenerating Thrift 
CodeSome portions of the Zeppelin code are generated by Thrift. For most 
Zeppelin changes, you don't need to worry about this. But if you modify 
any of the Thrift IDL files (e.g. 
zeppelin-interpreter/src/main/thrift/*.thrift), then you also need to 
regenerate these files and submit their updated version as part of your 
patch.To regenerate the code, install thrift-0.9.2 and then run the following 
command to generate thrift code.cd 
<zeppelin_home>/zeppelin-interpreter/src/main/thrift./genthrift.shRun
 Selenium testZeppelin has set of integration tests using Selenium. To run 
these test, first build and run Zeppelin and make sure Zeppelin is running on 
port 8080. Then you can run test using following commandTEST_SELENIUM=true mvn 
test -Dtest=[TEST_NAME] -DfailIfNoTests=false -pl 
'zeppelin-interpreter,zeppelin-zengine,zeppelin-server'For 
example, to run ParagraphActionIT,TEST_SEL
 ENIUM=true mvn test -Dtest=ParagraphActionsIT -DfailIfNoTests=false -pl 
'zeppelin-interpreter,zeppelin-zengine,zeppelin-server'You'll
 need Firefox web browser installed in your development environment. While CI 
server uses Firefox 31.0 to run selenium test, it is good idea to install the 
same version (disable auto update to keep the version).Where to StartYou can 
find issues for beginner & newbieStay involvedContributors should join 
the Zeppelin mailing lists....@zeppelin.apache.org is for people who want to 
contribute code to Zeppelin. subscribe, unsubscribe, archivesIf you have any 
issues, create a ticket in JIRA.",
+      "content"  : "<!--Licensed under the Apache License, Version 2.0 (the 
"License");you may not use this file except in compliance with the 
License.You may obtain a copy of the License 
athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law 
or agreed to in writing, softwaredistributed under the License is distributed 
on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 
either express or implied.See the License for the specific language governing 
permissions andlimitations under the License.-->Contributing to Apache 
Zeppelin ( Code )NOTE : Apache Zeppelin is an Apache2 License Software.Any 
contributions to Zeppelin (Source code, Documents, Image, Website) mean you agree to license all your contributions under the Apache2 License.Setting upHere are 
some tools you will need to build and test Zeppelin.Software Configuration 
Management ( SCM )Since Zeppelin uses Git for its SCM system, you need a git client installed on your development machine.Integrated Development Environment ( IDE )You are free 
to use whatever IDE you prefer, or your favorite command line editor.Build 
ToolsTo build the code, install Oracle Java 8 and Apache Maven.Getting the source 
codeFirst of all, you need the Zeppelin source code. The official location of 
Zeppelin is https://gitbox.apache.org/repos/asf/zeppelin.git.git accessGet the 
source code on your development machine using git.git clone 
git://gitbox.apache.org/repos/asf/zeppelin.git zeppelinYou may also want to 
develop against a specific branch. For example, for branch-0.5.6git clone -b 
branch-0.5.6 git://gitbox.apache.org/repos/asf/zeppelin.git zeppelinApache 
Zeppelin follows Fork & Pull as a source control workflow.If you want 
to not only build Zeppelin but also make any changes, then you need to fork 
the Zeppelin GitHub mirror repository and make a pull request.Before making a pull request, please take a look at the Contribution Guidelines.Buildmvn installTo skip 
testmvn install -D
 skipTestsTo build with specific spark / hadoop versionmvn install 
-Dspark.version=x.x.x -Dhadoop.version=x.x.xFor the further Run Zeppelin server 
in development modeOption 1 - Command LineCopy the 
conf/zeppelin-site.xml.template to 
zeppelin-server/src/main/resources/zeppelin-site.xml and change the 
configurations in this file if requiredRun the following commandcd 
zeppelin-serverHADOOP_HOME=YOUR_HADOOP_HOME JAVA_HOME=YOUR_JAVA_HOME mvn 
exec:java 
-Dexec.mainClass="org.apache.zeppelin.server.ZeppelinServer" 
-Dexec.args=""Option 2 - Daemon ScriptNote: Make sure you 
first run mvn clean install -DskipTests in your Zeppelin root directory, otherwise your server build will fail to find the required dependencies in the local repo.or use daemon scriptbin/zeppelin-daemon startServer will be run on 
http://localhost:8080.Option 3 - IDECopy the conf/zeppelin-site.xml.template to 
zeppelin-server/src/main/resources/zeppelin-site.xml and change the 
configurations in this file if requiredZeppelinServer.java Main classGenerating Thrift 
CodeSome portions of the Zeppelin code are generated by Thrift. For most 
Zeppelin changes, you don't need to worry about this. But if you modify 
any of the Thrift IDL files (e.g. 
zeppelin-interpreter/src/main/thrift/*.thrift), then you also need to 
regenerate these files and submit their updated version as part of your 
patch.To regenerate the code, install thrift-0.9.2 and then run the following 
command to generate thrift code.cd 
<zeppelin_home>/zeppelin-interpreter/src/main/thrift./genthrift.shRun
 Selenium testZeppelin has set of integration tests using Selenium. To run 
these test, first build and run Zeppelin and make sure Zeppelin is running on 
port 8080. Then you can run test using following commandTEST_SELENIUM=true mvn 
test -Dtest=[TEST_NAME] -DfailIfNoTests=false -pl 
'zeppelin-interpreter,zeppelin-zengine,zeppelin-server'For 
example, to run ParagraphActionsIT,TEST_SELENIUM=true mvn test -Dtest=ParagraphActionsIT -DfailIfNoTests=false -pl 
‘zeppelin-interpreter,zeppelin-zengine,zeppelin-server’You’ll need the Firefox web browser installed in your development environment.Where to 
StartYou can find issues for beginner & newbieStay involvedContributors 
should join the Zeppelin mailing lists....@zeppelin.apache.org is for people 
who want to contribute code to Zeppelin. subscribe, unsubscribe, archivesIf you 
have any issues, create a ticket in JIRA.",
       "url": " /development/contribution/how_to_contribute_code.html",
       "group": "development/contribution",
       "excerpt": "How can you contribute to Apache Zeppelin project? This 
document covers everything from setting up your development environment to making a pull request on GitHub."
@@ -534,7 +534,7 @@
 
     "/setup/operation/configuration.html": {
       "title": "Apache Zeppelin Configuration",

[... 197 lines stripped ...]
Modified: zeppelin/site/docs/0.9.0/setup/basics/hadoop_integration.html
URL: 
http://svn.apache.org/viewvc/zeppelin/site/docs/0.9.0/setup/basics/hadoop_integration.html?rev=1890433&r1=1890432&r2=1890433&view=diff
==============================================================================
--- zeppelin/site/docs/0.9.0/setup/basics/hadoop_integration.html (original)
+++ zeppelin/site/docs/0.9.0/setup/basics/hadoop_integration.html Thu Jun  3 
15:43:41 2021
@@ -125,6 +125,9 @@
                 <li><a 
href="/docs/0.9.0/usage/rest_api/configuration.html">Configuration API</a></li>
                 <li><a 
href="/docs/0.9.0/usage/rest_api/credential.html">Credential API</a></li>
                 <li><a href="/docs/0.9.0/usage/rest_api/helium.html">Helium 
API</a></li>
+                <li class="title"><span>Zeppelin SDK</span></li>
+                <li><a 
href="/docs/0.9.0/usage/zeppelin_sdk/client_api.html">Client API</a></li>
+                <li><a 
href="/docs/0.9.0/usage/zeppelin_sdk/session_api.html">Session API</a></li>
               </ul>
             </li>
 
@@ -192,17 +195,21 @@
                 <li><a href="/docs/0.9.0/interpreter/hdfs.html">HDFS</a></li>
                 <li><a href="/docs/0.9.0/interpreter/hive.html">Hive</a></li>
                 <li><a 
href="/docs/0.9.0/interpreter/ignite.html">Ignite</a></li>
+                <li><a 
href="/docs/0.9.0/interpreter/influxdb.html">influxDB</a></li>
                 <li><a href="/docs/0.9.0/interpreter/java.html">Java</a></li>
                 <li><a 
href="/docs/0.9.0/interpreter/jupyter.html">Jupyter</a></li>
                 <li><a 
href="/docs/0.9.0/interpreter/kotlin.html">Kotlin</a></li>
+                <li><a href="/docs/0.9.0/interpreter/ksql.html">KSQL</a></li>
                 <li><a href="/docs/0.9.0/interpreter/kylin.html">Kylin</a></li>
                 <li><a href="/docs/0.9.0/interpreter/lens.html">Lens</a></li>
                 <li><a href="/docs/0.9.0/interpreter/livy.html">Livy</a></li>
+                <li><a 
href="/docs/0.9.0/interpreter/mahout.html">Mahout</a></li>
                 <li><a 
href="/docs/0.9.0/interpreter/markdown.html">Markdown</a></li>
                 <li><a 
href="/docs/0.9.0/interpreter/mongodb.html">MongoDB</a></li>
                 <li><a href="/docs/0.9.0/interpreter/neo4j.html">Neo4j</a></li>
                 <li><a href="/docs/0.9.0/interpreter/pig.html">Pig</a></li>
                 <li><a 
href="/docs/0.9.0/interpreter/postgresql.html">Postgresql, HAWQ</a></li>
+                <li><a href="/docs/0.9.0/interpreter/sap.html">SAP</a></li>
                 <li><a 
href="/docs/0.9.0/interpreter/scalding.html">Scalding</a></li>
                 <li><a href="/docs/0.9.0/interpreter/scio.html">Scio</a></li>
                 <li><a href="/docs/0.9.0/interpreter/shell.html">Shell</a></li>
@@ -285,7 +292,7 @@ limitations under the License.
 
 <h2>Requirements</h2>
 
-<p>In Zeppelin 0.9 doesn&#39;t ship with hadoop dependencies, you need to 
include hadoop jars by yourself via the following steps</p>
+<p>Zeppelin 0.9 doesn&#39;t ship with hadoop dependencies, you need to include 
hadoop jars by yourself via the following steps</p>
 
 <ul>
 <li>Hadoop client (both 2.x and 3.x are supported) is installed.</li>
@@ -299,7 +306,7 @@ limitations under the License.
 
       <hr>
       <footer>
-        <!-- <p>&copy; 2020 The Apache Software Foundation</p>-->
+        <!-- <p>&copy; 2021 The Apache Software Foundation</p>-->
       </footer>
     </div>
 

