http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/install/virtual_machine.md
----------------------------------------------------------------------
diff --git a/docs/install/virtual_machine.md b/docs/install/virtual_machine.md
deleted file mode 100644
index d726004..0000000
--- a/docs/install/virtual_machine.md
+++ /dev/null
@@ -1,192 +0,0 @@
----
-layout: page
-title: "Apache Zeppelin on Vagrant Virtual Machine"
-description: "Apache Zeppelin provides a script for running a virtual machine 
for development through Vagrant. The script will create a virtual machine with 
core dependencies pre-installed, required for developing Apache Zeppelin."
-group: install
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-# Apache Zeppelin on Vagrant Virtual Machine
-
-<div id="toc"></div>
-
-## Overview
-
-The Apache Zeppelin distribution includes a script directory
-
- `scripts/vagrant/zeppelin-dev`
-
-This script creates a virtual machine that launches a repeatable, known set of 
core dependencies required for developing Zeppelin. It can also be used to run 
an existing Zeppelin build if you don't plan to build from source.
-For PySpark users, this script includes several helpful [Python 
Libraries](#python-extras).
-For SparkR users, this script includes several helpful [R 
Libraries](#r-extras).
-
-### Prerequisites
-
-This script requires three applications, 
[Ansible](http://docs.ansible.com/ansible/intro_installation.html#latest-releases-via-pip
 "Ansible"), [Vagrant](http://www.vagrantup.com "Vagrant") and [Virtual 
Box](https://www.virtualbox.org/ "Virtual Box").  All of these applications are 
freely available as Open Source projects and extremely easy to set up on most 
operating systems.
-
-## Create a Zeppelin Ready VM
-
-If you are running Windows and don't yet have Python installed, [install Python 2.7.x](https://www.python.org/downloads/release/python-2710/) first.
-
-1. Download and Install Vagrant:  [Vagrant 
Downloads](http://www.vagrantup.com/downloads.html)
-2. Install Ansible:  [Ansible Python pip 
install](http://docs.ansible.com/ansible/intro_installation.html#latest-releases-via-pip)
-
-    ```
-    sudo easy_install pip
-    sudo pip install ansible
-    ansible --version
-    ```
-    Then check that it reports **ansible version 1.9.2 or higher**.
-
-3. Install Virtual Box: [Virtual Box Downloads](https://www.virtualbox.org/ 
"Virtual Box")
-4. Type `vagrant up`  from within the `/scripts/vagrant/zeppelin-dev` directory
-
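-The numbered steps above can be sketched as a single shell session (a sketch for macOS/Linux hosts; package installation details vary by OS, and it assumes pip and VirtualBox are already installed):
-
-```bash
-sudo pip install 'ansible>=1.9.2'    # the provisioning playbook needs 1.9.2 or higher
-ansible --version                    # confirm the reported version
-
-cd scripts/vagrant/zeppelin-dev      # script directory shipped with the Zeppelin source
-vagrant up                           # provision the VM; the first run downloads the base box
-```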
-That's it! You can now run `vagrant ssh`, which will place you at the guest machine's terminal prompt.
-
-If you don't wish to build Zeppelin from scratch, run the z-manager installer 
script while running in the guest VM:
-
-```
-curl -fsSL 
https://raw.githubusercontent.com/NFLabs/z-manager/master/zeppelin-installer.sh 
| bash
-```
-
-
-## Building Zeppelin
-
-You can now 
-
-```
-git clone git://git.apache.org/zeppelin.git
-```
-
-into a directory on your host machine, or directly in your virtual machine.
-
-Cloning Zeppelin into the `/scripts/vagrant/zeppelin-dev` directory from the 
host, will allow the directory to be shared between your host and the guest 
machine.
-
-Cloning the project again may seem counterintuitive, since this script likely originated from the project repository. Consider copying just the vagrant/zeppelin-dev script from the Zeppelin project as a standalone directory, then clone the specific branch you wish to build.
-
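-One way to follow that suggestion (the paths are illustrative):
-
-```bash
-# Copy only the vagrant script out of an existing checkout...
-cp -r zeppelin/scripts/vagrant/zeppelin-dev ~/zeppelin-dev
-cd ~/zeppelin-dev
-# ...then clone the branch you want to build into the shared directory
-git clone git://git.apache.org/zeppelin.git
-vagrant up    # the clone appears at /vagrant/zeppelin inside the guest
-```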
-Synced folders enable Vagrant to sync a folder on the host machine to the 
guest machine, allowing you to continue working on your project's files on your 
host machine, but use the resources in the guest machine to compile or run your 
project. _[(1) Synced Folder Description from Vagrant 
Up](https://docs.vagrantup.com/v2/synced-folders/index.html)_
-
-By default, Vagrant will share your project directory (the directory with the Vagrantfile) to `/vagrant`, which means you should be able to build within the guest machine after you run
-`cd /vagrant/zeppelin`
-
-
-## What's in this VM?
-
-Running the following commands in the guest machine should display these 
expected versions:
-
-`node --version` should report *v0.12.7*
-`mvn --version` should report *Apache Maven 3.3.9* and *Java version: 1.7.0_85*
-
-The virtual machine consists of:
-
- - Ubuntu Server 14.04 LTS
- - Node.js 0.12.7
- - npm 2.11.3
- - ruby 1.9.3 + rake, make and bundler (only required if building jekyll 
documentation)
- - Maven 3.3.9
- - Git
- - Unzip
 - libfontconfig to avoid PhantomJS missing dependency issues
- - openjdk-7-jdk
- - Python addons: pip, matplotlib, scipy, numpy, pandas
- - [R](https://www.r-project.org/) and R Packages required to run the R 
Interpreter and the related R tutorial notebook, including:  Knitr, devtools, 
repr, rCharts, ggplot2, googleVis, mplot, htmltools, base64enc, data.table
-
-## How to build & run Zeppelin
-
-This assumes you've already cloned the project either on the host machine in 
the zeppelin-dev directory (to be shared with the guest machine) or cloned 
directly into a directory while running inside the guest machine.  The 
following build steps will also include Python and R support via PySpark and 
SparkR:
-
-```
-cd /zeppelin
-mvn clean package -Pspark-1.6 -Phadoop-2.4 -DskipTests
-./bin/zeppelin-daemon.sh start
-```
-
-On your host machine browse to `http://localhost:8080/`
-
-If you [turned off port forwarding](#tweaking-the-virtual-machine) in the 
`Vagrantfile` browse to `http://192.168.51.52:8080`
-
-
-## Tweaking the Virtual Machine
-
-If you plan to run this virtual machine alongside other Vagrant images, you may wish to bind the virtual machine to a specific IP address and not use port forwarding from your local host.
-
-Comment out the `forwarded_port` line, and uncomment the `private_network` line in the Vagrantfile. The subnet that works best for your local network will vary, so adjust `192.168.*.*` accordingly.
-
-```
-#config.vm.network "forwarded_port", guest: 8080, host: 8080
-config.vm.network "private_network", ip: "192.168.51.52"
-```
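-The same two-line change can be applied non-interactively with `sed` (a sketch; it assumes the two lines appear exactly as in the default Vagrantfile):
-
-```bash
-# Comment out port forwarding, uncomment the private network address
-sed -i.bak \
-  -e 's|^config.vm.network "forwarded_port"|#config.vm.network "forwarded_port"|' \
-  -e 's|^#config.vm.network "private_network"|config.vm.network "private_network"|' \
-  Vagrantfile
-```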
-
-`vagrant halt` followed by `vagrant up` will restart the guest machine bound to the IP address `192.168.51.52`.
-This approach is typically required when running other virtual machines that discover each other directly by IP address, such as Spark masters and slaves, as well as Cassandra nodes, Elasticsearch nodes, and other Spark data sources. You may wish to launch nodes in virtual machines with IP addresses in a subnet that works for your local network, such as 192.168.51.53, 192.168.51.54, 192.168.51.55, etc.
-
-## Extras
-### Python Extras
-
-With Zeppelin running, **NumPy**, **SciPy**, **Pandas** and **Matplotlib** will be available. Create a PySpark notebook and try the code below.
-
-```python
-%pyspark
-
-import numpy
-import scipy
-import pandas
-import matplotlib
-
-print "numpy " + numpy.__version__
-print "scipy " + scipy.__version__
-print "pandas " + pandas.__version__
-print "matplotlib " + matplotlib.__version__
-```
-
-To test plotting with Matplotlib into a rendered `%html` SVG image, try
-
-```python
-%pyspark
-
-import matplotlib
-matplotlib.use('Agg')   # turn off interactive charting so this works for server-side SVG rendering
-import matplotlib.pyplot as plt
-import numpy as np
-import StringIO
-
-# clear out any previous plots on this note
-plt.clf()
-
-def show(p):
-    img = StringIO.StringIO()
-    p.savefig(img, format='svg')
-    img.seek(0)
-    print "%html <div style='width:600px'>" + img.getvalue() + "</div>"
-
-# Example data
-people = ('Tom', 'Dick', 'Harry', 'Slim', 'Jim')
-y_pos = np.arange(len(people))
-performance = 3 + 10 * np.random.rand(len(people))
-error = np.random.rand(len(people))
-
-plt.barh(y_pos, performance, xerr=error, align='center', alpha=0.4)
-plt.yticks(y_pos, people)
-plt.xlabel('Performance')
-plt.title('How fast do you want to go today?')
-
-show(plt)
-```
-
-### R Extras
-
-With Zeppelin running, an R Tutorial notebook will be available. The R packages required to run the examples and graphs in this tutorial notebook were installed by this virtual machine.
-The installed R Packages include: Knitr, devtools, repr, rCharts, ggplot2, 
googleVis, mplot, htmltools, base64enc, data.table

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/install/yarn_install.md
----------------------------------------------------------------------
diff --git a/docs/install/yarn_install.md b/docs/install/yarn_install.md
deleted file mode 100644
index e242754..0000000
--- a/docs/install/yarn_install.md
+++ /dev/null
@@ -1,172 +0,0 @@
----
-layout: page
-title: "Install Zeppelin to connect with existing YARN cluster"
-description: "This page describes how to pre-configure a bare metal node, 
configure Apache Zeppelin and connect it to existing YARN cluster running 
Hortonworks flavour of Hadoop."
-group: install
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-## Introduction
-This page describes how to pre-configure a bare metal node, configure Zeppelin, and connect it to an existing YARN cluster running the Hortonworks flavour of Hadoop. It also describes the steps to configure the Spark interpreter in Zeppelin.
-
-## Prepare Node
-
-### Zeppelin user (Optional)
-This step is optional; however, it is good practice to run Zeppelin under its own user. If you ever decide to stop using Zeppelin (hopefully not), the user can then be deleted along with all the packages that were installed for Zeppelin, the Zeppelin binary itself, and the associated directories.
-
-Create a zeppelin user and switch to it, or if the zeppelin user already exists, log in as zeppelin.
-
-```bash
-useradd zeppelin
-su - zeppelin
-whoami
-```
-Assuming the zeppelin user was created, running the `whoami` command should return
-
-```bash
-zeppelin
-```
-
-The rest of this document assumes that the zeppelin user exists and that the installation instructions below are performed as the zeppelin user.
-
-### List of Prerequisites
-
- * CentOS 6.x, Mac OSX, Ubuntu 14.X
- * Java 1.7
- * Hadoop client
- * Spark
- * Internet connection is required.
-
-It is assumed that the node has CentOS 6.x installed, although any Linux distribution should work fine.
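-
-A quick way to sanity-check the prerequisites before proceeding (a sketch; the paths are the ones assumed in this document):
-
-```bash
-java -version          # expect 1.7.x
-hadoop version         # expect a 2.x Hadoop client
-ls /usr/lib/spark      # expect an unpacked Spark distribution
-ping -c 1 apache.org   # confirm internet connectivity
-```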
-
-#### Hadoop client
-Zeppelin can work with multiple versions and distributions of Hadoop. A complete list is available [here](https://github.com/apache/zeppelin#build). This document assumes that Hadoop 2.7.x client libraries, including configuration files, are installed on the Zeppelin node, and that /etc/hadoop/conf contains the various Hadoop configuration files. The location of Hadoop configuration files may vary, so use the appropriate location.
-
-```bash
-hadoop version
-Hadoop 2.7.1.2.3.1.0-2574
-Subversion g...@github.com:hortonworks/hadoop.git -r 
f66cf95e2e9367a74b0ec88b2df33458b6cff2d0
-Compiled by jenkins on 2015-07-25T22:36Z
-Compiled with protoc 2.5.0
-From source with checksum 54f9bbb4492f92975e84e390599b881d
-This command was run using 
/usr/hdp/2.3.1.0-2574/hadoop/lib/hadoop-common-2.7.1.2.3.1.0-2574.jar
-```
-
-#### Spark
-Spark is supported out of the box; to take advantage of this, you need to download the appropriate version of the Spark binary package from the [Spark Download page](http://spark.apache.org/downloads.html) and unpack it.
-Zeppelin can work with multiple versions of Spark. A complete list is 
available [here](https://github.com/apache/zeppelin#build).
-This document assumes Spark 1.6.0 is installed at /usr/lib/spark.
-> Note: Spark should be installed on the same node as Zeppelin.
-
-> Note: Spark's pre-built package for CDH 4 doesn't support yarn.
-
-#### Zeppelin
-
-Checkout source code from 
[git://git.apache.org/zeppelin.git](https://github.com/apache/zeppelin.git) or 
download binary package from [Download 
page](https://zeppelin.apache.org/download.html).
-Refer to the [Install](install.html) page for details.
-This document assumes that Zeppelin is located under `/home/zeppelin/zeppelin`.
-
-## Zeppelin Configuration
-Zeppelin's configuration needs to be modified to connect to the YARN cluster. Create a copy of the Zeppelin environment shell script.
-
-```bash
-cp /home/zeppelin/zeppelin/conf/zeppelin-env.sh.template 
/home/zeppelin/zeppelin/conf/zeppelin-env.sh
-```
-
-Set the following properties
-
-```bash
-export JAVA_HOME="/usr/java/jdk1.7.0_79"
-export HADOOP_CONF_DIR="/etc/hadoop/conf"
-export ZEPPELIN_JAVA_OPTS="-Dhdp.version=2.3.1.0-2574"
-export SPARK_HOME="/usr/lib/spark"
-```
-
-As /etc/hadoop/conf contains the various configurations of the YARN cluster, Zeppelin can now submit Spark/Hive jobs to the YARN cluster from its web interface. The value of hdp.version is set to 2.3.1.0-2574; it can be obtained by running the following command
-
-```bash
-hdp-select status hadoop-client | sed 's/hadoop-client - \(.*\)/\1/'
-# It returned  2.3.1.0-2574
-```
-
-## Start/Stop
-### Start Zeppelin
-
-```
-cd /home/zeppelin/zeppelin
-bin/zeppelin-daemon.sh start
-```
-After successful start, visit http://[zeppelin-server-host-name]:8080 with 
your web browser.
-
-### Stop Zeppelin
-
-```
-bin/zeppelin-daemon.sh stop
-```
-
-## Interpreter
-Zeppelin supports various distributed processing frameworks, including Spark, JDBC, Ignite, and Lens. This document describes how to configure the JDBC and Spark interpreters.
-
-### Hive
-Zeppelin supports Hive through the JDBC interpreter. The connection information required to use Hive can be found in your hive-site.xml.
-
-Once the Zeppelin server has started successfully, visit http://[zeppelin-server-host-name]:8080 with your web browser. Click on the Interpreter tab next to the Notebook dropdown. Look for the Hive configurations and set them to match the Hive installation on the YARN cluster.
-Click on the Save button. Once these configurations are updated, Zeppelin will prompt you to restart the interpreter. Accept the prompt and the interpreter will reload the configurations.
-Click on Save button. Once these configurations are updated, Zeppelin will 
prompt you to restart the interpreter. Accept the prompt and the interpreter 
will reload the configurations.
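-
-The host and port for the JDBC URL can usually be read out of hive-site.xml; the property names below are the standard HiveServer2 ones (adjust the configuration path to your installation):
-
-```bash
-grep -A1 'hive.server2.thrift.bind.host' /etc/hive/conf/hive-site.xml
-grep -A1 'hive.server2.thrift.port' /etc/hive/conf/hive-site.xml
-# combine the values into a URL of the form jdbc:hive2://<host>:<port>
-```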
-
-### Spark
-It is assumed that Spark 1.6.0 is installed at /usr/lib/spark. Look for the Spark configurations and click the edit button to add the following properties
-
-<table class="table-configuration">
-  <tr>
-    <th>Property Name</th>
-    <th>Property Value</th>
-    <th>Remarks</th>
-  </tr>
-  <tr>
-    <td>master</td>
-    <td>yarn-client</td>
-    <td>In yarn-client mode, the driver runs in the client process, and the 
application master is only used for requesting resources from YARN.</td>
-  </tr>
-  <tr>
-    <td>spark.driver.extraJavaOptions</td>
-    <td>-Dhdp.version=2.3.1.0-2574</td>
-    <td></td>
-  </tr>
-  <tr>
-    <td>spark.yarn.am.extraJavaOptions</td>
-    <td>-Dhdp.version=2.3.1.0-2574</td>
-    <td></td>
-  </tr>
-</table>
-
-Click on Save button. Once these configurations are updated, Zeppelin will 
prompt you to restart the interpreter. Accept the prompt and the interpreter 
will reload the configurations.
-
-Spark & Hive notebooks can be written with Zeppelin now. The resulting Spark & 
Hive jobs will run on configured YARN cluster.
-
-## Debug
-Zeppelin does not emit detailed error messages in the web interface when a notebook/paragraph is run; if a paragraph fails, it only displays ERROR. The reason for the failure must be looked up in the log files, which are present in the logs directory under the Zeppelin installation base directory. Zeppelin creates a log file for each kind of interpreter.
-
-```bash
-[zeppelin@zeppelin-3529 logs]$ pwd
-/home/zeppelin/zeppelin/logs
-[zeppelin@zeppelin-3529 logs]$ ls -l
-total 844
--rw-rw-r-- 1 zeppelin zeppelin  14648 Aug  3 14:45 
zeppelin-interpreter-hive-zeppelin-zeppelin-3529.log
--rw-rw-r-- 1 zeppelin zeppelin 625050 Aug  3 16:05 
zeppelin-interpreter-spark-zeppelin-zeppelin-3529.log
--rw-rw-r-- 1 zeppelin zeppelin 200394 Aug  3 21:15 
zeppelin-zeppelin-zeppelin-3529.log
--rw-rw-r-- 1 zeppelin zeppelin  16162 Aug  3 14:03 
zeppelin-zeppelin-zeppelin-3529.out
-[zeppelin@zeppelin-3529 logs]$
-```

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/interpreter/alluxio.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/alluxio.md b/docs/interpreter/alluxio.md
index 4c41fdc..c3613df 100644
--- a/docs/interpreter/alluxio.md
+++ b/docs/interpreter/alluxio.md
@@ -246,5 +246,5 @@ Following steps are performed:
 *  using the sh interpreter, the existence of the new file copied from Alluxio is checked and its content is shown
 
 <center>
-  ![Alluxio Interpreter 
Example](../assets/themes/zeppelin/img/docs-img/alluxio-example.png)
+  ![Alluxio Interpreter 
Example](/assets/themes/zeppelin/img/docs-img/alluxio-example.png)
 </center>

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/interpreter/cassandra.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/cassandra.md b/docs/interpreter/cassandra.md
index 5d8929b..96d509a 100644
--- a/docs/interpreter/cassandra.md
+++ b/docs/interpreter/cassandra.md
@@ -41,9 +41,9 @@ limitations under the License.
 In a notebook, to enable the **Cassandra** interpreter, click on the **Gear** 
icon and select **Cassandra**
 
  <center>
- ![Interpreter 
Binding](../assets/themes/zeppelin/img/docs-img/cassandra-InterpreterBinding.png)
+ ![Interpreter 
Binding](/assets/themes/zeppelin/img/docs-img/cassandra-InterpreterBinding.png)
 
- ![Interpreter 
Selection](../assets/themes/zeppelin/img/docs-img/cassandra-InterpreterSelection.png)
+ ![Interpreter 
Selection](/assets/themes/zeppelin/img/docs-img/cassandra-InterpreterSelection.png)
  </center>
 
 ## Using the Cassandra Interpreter
@@ -53,7 +53,7 @@ In a paragraph, use **_%cassandra_** to select the 
**Cassandra** interpreter and
 To access the interactive help, type **HELP;**
 
  <center>
-   ![Interactive 
Help](../assets/themes/zeppelin/img/docs-img/cassandra-InteractiveHelp.png)
+   ![Interactive 
Help](/assets/themes/zeppelin/img/docs-img/cassandra-InteractiveHelp.png)
  </center>
 
 ## Interpreter Commands
@@ -312,7 +312,7 @@ The schema objects (cluster, keyspace, table, type, 
function and aggregate) are
 There is a drop-down menu in the top left corner to expand object details. The icon legend is shown in the top right menu.
 
 <center>
-  ![Describe 
Schema](../assets/themes/zeppelin/img/docs-img/cassandra-DescribeSchema.png)
+  ![Describe 
Schema](/assets/themes/zeppelin/img/docs-img/cassandra-DescribeSchema.png)
 </center>
 
 ## Runtime Parameters
@@ -821,12 +821,11 @@ Below are the configuration parameters and their default 
value.
  If you encounter a bug for this interpreter, please create a **[JIRA]** 
ticket and ping me on Twitter
  at **[@doanduyhai]**
 
-
 [Cassandra Java Driver]: https://github.com/datastax/java-driver
 [standard CQL syntax]: 
http://docs.datastax.com/en/cql/3.1/cql/cql_using/use_collections_c.html
 [Tuple CQL syntax]: 
http://docs.datastax.com/en/cql/3.1/cql/cql_reference/tupleType.html
 [UDT CQL syntax]: 
http://docs.datastax.com/en/cql/3.1/cql/cql_using/cqlUseUDT.html
-[Zeppelin dynamic form]: 
http://zeppelin.apache.org/docs/0.6.0-SNAPSHOT/manual/dynamicform.html
-[Interpreter Binding Mode]: 
http://zeppelin.apache.org/docs/0.6.0-SNAPSHOT/manual/interpreters.html
+[Zeppelin dynamic form]: ../usage/dynamic_form/intro.html
+[Interpreter Binding Mode]: ../usage/interpreter/interpreter_binding_mode.html
 [JIRA]: 
https://issues.apache.org/jira/browse/ZEPPELIN-382?jql=project%20%3D%20ZEPPELIN
 [@doanduyhai]: https://twitter.com/doanduyhai

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/interpreter/elasticsearch.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/elasticsearch.md 
b/docs/interpreter/elasticsearch.md
index 165116b..8efd533 100644
--- a/docs/interpreter/elasticsearch.md
+++ b/docs/interpreter/elasticsearch.md
@@ -24,7 +24,9 @@ limitations under the License.
 <div id="toc"></div>
 
 ## Overview
-[Elasticsearch](https://www.elastic.co/products/elasticsearch) is a highly 
scalable open-source full-text search and analytics engine. It allows you to 
store, search, and analyze big volumes of data quickly and in near real time. 
It is generally used as the underlying engine/technology that powers 
applications that have complex search features and requirements.
+[Elasticsearch](https://www.elastic.co/products/elasticsearch) is a highly 
scalable open-source full-text search and analytics engine. 
+It allows you to store, search, and analyze big volumes of data quickly and in 
near real time. 
+It is generally used as the underlying engine/technology that powers 
applications that have complex search features and requirements.
 
 ## Configuration
 <table class="table-configuration">
@@ -71,7 +73,7 @@ limitations under the License.
 </table>
 
 <center>
-  ![Interpreter 
configuration](../assets/themes/zeppelin/img/docs-img/elasticsearch-config.png)
+  ![Interpreter 
configuration](/assets/themes/zeppelin/img/docs-img/elasticsearch-config.png)
 </center>
 
 > **Note #1 :** You can add more properties to configure the Elasticsearch 
 > client.
@@ -82,7 +84,8 @@ limitations under the License.
 In a notebook, to enable the **Elasticsearch** interpreter, click the **Gear** 
icon and select **Elasticsearch**.
 
 ## Using the Elasticsearch Interpreter
-In a paragraph, use `%elasticsearch` to select the Elasticsearch interpreter 
and then input all commands. To get the list of available commands, use `help`.
+In a paragraph, use `%elasticsearch` to select the Elasticsearch interpreter 
and then input all commands. 
+To get the list of available commands, use `help`.
 
 ```bash
 %elasticsearch
@@ -118,7 +121,7 @@ get /index/type/id
 ```
 
 Example:
-![Elasticsearch - 
Get](../assets/themes/zeppelin/img/docs-img/elasticsearch-get.png)
+![Elasticsearch - 
Get](/assets/themes/zeppelin/img/docs-img/elasticsearch-get.png)
 
 ### Search
 With the `search` command, you can send a search query to Elasticsearch. There 
are two formats of query:
@@ -142,7 +145,8 @@ size 50
 search /index1,index2,.../type1,type2,...  <JSON document containing the query 
or query_string elements>
 ```
 
-> A search query can also contain 
[aggregations](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations.html).
 If there is at least one aggregation, the result of the first aggregation is 
shown, otherwise, you get the search hits.
+> A search query can also contain 
[aggregations](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations.html).
 
+If there is at least one aggregation, the result of the first aggregation is 
shown, otherwise, you get the search hits.
 
 Examples:
 
@@ -202,25 +206,25 @@ content_length | date | request.headers[0] | 
request.headers[1] | request.method
 Examples:
 
 * With a table containing the results:
-![Elasticsearch - Search - 
table](../assets/themes/zeppelin/img/docs-img/elasticsearch-search-table.png)
+![Elasticsearch - Search - 
table](/assets/themes/zeppelin/img/docs-img/elasticsearch-search-table.png)
 
 * You can also use a predefined diagram:
-![Elasticsearch - Search - 
diagram](../assets/themes/zeppelin/img/docs-img/elasticsearch-search-pie.png)
+![Elasticsearch - Search - 
diagram](/assets/themes/zeppelin/img/docs-img/elasticsearch-search-pie.png)
 
 * With a JSON query:
-![Elasticsearch - Search with 
query](../assets/themes/zeppelin/img/docs-img/elasticsearch-search-json-query-table.png)
+![Elasticsearch - Search with 
query](/assets/themes/zeppelin/img/docs-img/elasticsearch-search-json-query-table.png)
 
 * With a JSON query containing a `fields` parameter (for filtering the fields 
in the response): in this case, all the fields values in the response are 
arrays, so, after flattening the result, the format of all the field names is 
`field_name[x]`
-![Elasticsearch - Search with query and a fields 
param](../assets/themes/zeppelin/img/docs-img/elasticsearch-query-with-fields-param.png)
+![Elasticsearch - Search with query and a fields 
param](/assets/themes/zeppelin/img/docs-img/elasticsearch-query-with-fields-param.png)
 
 * With a query string:
-![Elasticsearch - Search with query 
string](../assets/themes/zeppelin/img/docs-img/elasticsearch-query-string.png)
+![Elasticsearch - Search with query 
string](/assets/themes/zeppelin/img/docs-img/elasticsearch-query-string.png)
 
 * With a query containing a multi-value metric aggregation:
-![Elasticsearch - Search with aggregation (multi-value 
metric)](../assets/themes/zeppelin/img/docs-img/elasticsearch-agg-multi-value-metric.png)
+![Elasticsearch - Search with aggregation (multi-value 
metric)](/assets/themes/zeppelin/img/docs-img/elasticsearch-agg-multi-value-metric.png)
 
 * With a query containing a multi-bucket aggregation:
-![Elasticsearch - Search with aggregation 
(multi-bucket)](../assets/themes/zeppelin/img/docs-img/elasticsearch-agg-multi-bucket-pie.png)
+![Elasticsearch - Search with aggregation 
(multi-bucket)](/assets/themes/zeppelin/img/docs-img/elasticsearch-agg-multi-bucket-pie.png)
 
 ### Count
 With the `count` command, you can count documents available in some indices 
and types. You can also provide a query.
@@ -233,10 +237,10 @@ count /index1,index2,.../type1,type2,... <JSON document 
containing the query OR
 Examples:
 
 * Without query:
-![Elasticsearch - 
Count](../assets/themes/zeppelin/img/docs-img/elasticsearch-count.png)
+![Elasticsearch - 
Count](/assets/themes/zeppelin/img/docs-img/elasticsearch-count.png)
 
 * With a query:
-![Elasticsearch - Count with 
query](../assets/themes/zeppelin/img/docs-img/elasticsearch-count-with-query.png)
+![Elasticsearch - Count with 
query](/assets/themes/zeppelin/img/docs-img/elasticsearch-count-with-query.png)
 
 ### Index
 With the `index` command, you can insert/update a document in Elasticsearch.
@@ -258,7 +262,7 @@ delete /index/type/id
 ```
 
 ### Apply Zeppelin Dynamic Forms
-You can leverage [Zeppelin Dynamic Form](../manual/dynamicform.html) inside 
your queries. You can use both the `text input` and `select form` 
parameterization features.
+You can leverage [Zeppelin Dynamic Form](../usage/dynamic_form/intro.html) 
inside your queries. You can use both the `text input` and `select form` 
parameterization features.
 
 ```bash
 %elasticsearch

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/interpreter/geode.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/geode.md b/docs/interpreter/geode.md
index f833f9d..436c308 100644
--- a/docs/interpreter/geode.md
+++ b/docs/interpreter/geode.md
@@ -37,7 +37,8 @@ limitations under the License.
   </tr>
 </table>
 
-This interpreter supports the [Geode](http://geode.incubator.apache.org/) 
[Object Query Language 
(OQL)](http://geode-docs.cfapps.io/docs/developing/querying_basics/oql_compared_to_sql.html).
  With the OQL-based querying language:
+This interpreter supports the [Geode](http://geode.incubator.apache.org/) 
[Object Query Language 
(OQL)](http://geode-docs.cfapps.io/docs/developing/querying_basics/oql_compared_to_sql.html).
  
+With the OQL-based querying language:
 
 [<img align="right" src="http://img.youtube.com/vi/zvzzA9GXu3Q/3.jpg"; 
alt="zeppelin-view" hspace="10" 
width="200"></img>](https://www.youtube.com/watch?v=zvzzA9GXu3Q)
 
@@ -53,17 +54,24 @@ This [Video 
Tutorial](https://www.youtube.com/watch?v=zvzzA9GXu3Q) illustrates s
 ## Create Interpreter
 By default Zeppelin creates one `Geode/OQL` instance. You can remove it or 
create more instances.
 
-Multiple Geode instances can be created, each configured to the same or 
different backend Geode cluster. But over time a  `Notebook` can have only one 
Geode interpreter instance `bound`. That means you _cannot_ connect to 
different Geode clusters in the same `Notebook`. This is a known Zeppelin 
limitation.
+Multiple Geode instances can be created, each configured to the same or 
different backend Geode cluster. 
+But over time a  `Notebook` can have only one Geode interpreter instance 
`bound`. 
+That means you _cannot_ connect to different Geode clusters in the same 
`Notebook`. 
+This is a known Zeppelin limitation.
 
-To create new Geode instance open the `Interpreter` section and click the 
`+Create` button. Pick a `Name` of your choice and from the `Interpreter` 
drop-down select `geode`.  Then follow the configuration instructions and 
`Save` the new instance.
+To create new Geode instance open the `Interpreter` section and click the 
`+Create` button. 
+Pick a `Name` of your choice and from the `Interpreter` drop-down select 
`geode`.  
+Then follow the configuration instructions and `Save` the new instance.
 
 > Note: The `Name` of the instance is used only to distinguish the instances 
 > while binding them to the `Notebook`. The `Name` is irrelevant inside the 
 > `Notebook`. In the `Notebook` you must use `%geode.oql` tag.
 
 ## Bind to Notebook
-In the `Notebook` click on the `settings` icon in the top right corner. The 
select/deselect the interpreters to be bound with the `Notebook`.
+In the `Notebook` click on the `settings` icon in the top right corner. 
+Then select/deselect the interpreters to be bound to the `Notebook`.
 
 ## Configuration
-You can modify the configuration of the Geode from the `Interpreter` section.  
The Geode interpreter expresses the following properties:
+You can modify the configuration of the Geode from the `Interpreter` section. 
+The Geode interpreter expresses the following properties:
 
 <table class="table-configuration">
   <tr>
@@ -94,7 +102,10 @@ You can modify the configuration of the Geode from the 
`Interpreter` section.  T
 > *Tip 2: Always start the paragraphs with the full `%geode.oql` prefix tag! The short notation `%geode` would still be able to run the OQL queries, but syntax highlighting and auto-completion will be disabled.*
 
 ### Create / Destroy Regions
-The OQL specification does not support  [Geode 
Regions](https://cwiki.apache.org/confluence/display/GEODE/Index#Index-MainConceptsandComponents)
 mutation operations. To `create`/`destroy` regions one should use the 
[GFSH](http://geode-docs.cfapps.io/docs/tools_modules/gfsh/chapter_overview.html)
 shell tool instead. In the following it is assumed that the GFSH is colocated 
with Zeppelin server.
+
+The OQL specification does not support  [Geode 
Regions](https://cwiki.apache.org/confluence/display/GEODE/Index#Index-MainConceptsandComponents)
 mutation operations. 
+To `create`/`destroy` regions one should use the 
[GFSH](http://geode-docs.cfapps.io/docs/tools_modules/gfsh/chapter_overview.html)
 shell tool instead. 
+In the following it is assumed that the GFSH is colocated with Zeppelin server.
 
 ```bash
 %sh
@@ -112,7 +123,10 @@ gfsh << EOF
 EOF
 ```
 
-Above snippet re-creates two regions: `regionEmployee` and `regionCompany`. 
Note that you have to explicitly specify the locator host and port. The values 
should match those you have used in the Geode Interpreter configuration. 
Comprehensive list of [GFSH Commands by Functional 
Area](http://geode-docs.cfapps.io/docs/tools_modules/gfsh/gfsh_quick_reference.html).
+The snippet above re-creates two regions: `regionEmployee` and `regionCompany`. 
+Note that you have to explicitly specify the locator host and port. 
+The values should match those you have used in the Geode interpreter 
configuration. 
+See the comprehensive list of [GFSH Commands by Functional 
Area](http://geode-docs.cfapps.io/docs/tools_modules/gfsh/gfsh_quick_reference.html).
 
 ### Basic OQL
 ```sql
@@ -163,7 +177,7 @@ gfsh -e "connect" -e "list members"
 ```
 
 ### Apply Zeppelin Dynamic Forms
-You can leverage [Zeppelin Dynamic Form](../manual/dynamicform.html) inside 
your OQL queries. You can use both the `text input` and `select form` 
parameterization features
+You can leverage [Zeppelin Dynamic Form](../usage/dynamic_form/intro.html) 
inside your OQL queries. You can use both the `text input` and `select form` 
parameterization features.
 
 ```sql
 %geode.oql

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/interpreter/hive.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/hive.md b/docs/interpreter/hive.md
index ba6614b..86602fc 100644
--- a/docs/interpreter/hive.md
+++ b/docs/interpreter/hive.md
@@ -24,7 +24,10 @@ limitations under the License.
 <div id="toc"></div>
 
 ## Important Notice
-Hive Interpreter will be deprecated and merged into JDBC Interpreter. You can 
use Hive Interpreter by using JDBC Interpreter with same functionality. See the 
example below of settings and dependencies.
+
+Hive Interpreter will be deprecated and merged into the JDBC Interpreter. 
+You can use the JDBC Interpreter to get the same functionality. 
+See the example settings and dependencies below.
 
 ### Properties
 <table class="table-configuration">
@@ -130,7 +133,11 @@ This interpreter provides multiple configuration with 
`${prefix}`. User can set
 
 ## Overview
 
-The [Apache Hive](https://hive.apache.org/) ™ data warehouse software 
facilitates querying and managing large datasets residing in distributed 
storage. Hive provides a mechanism to project structure onto this data and 
query the data using a SQL-like language called HiveQL. At the same time this 
language also allows traditional map/reduce programmers to plug in their custom 
mappers and reducers when it is inconvenient or inefficient to express this 
logic in HiveQL.
+The [Apache Hive](https://hive.apache.org/) ™ data warehouse software 
facilitates querying and managing large datasets 
+residing in distributed storage. Hive provides a mechanism to project 
structure onto 
+this data and query the data using a SQL-like language called HiveQL. 
+At the same time this language also allows traditional map/reduce programmers 
to 
+plug in their custom mappers and reducers when it is inconvenient or 
inefficient to express this logic in HiveQL.
 
 ## How to use
 Basically, you can use
@@ -151,7 +158,8 @@ select * from my_table;
 You can also run multiple queries up to 10 by default. Changing these settings 
is not implemented yet.
 
 ### Apply Zeppelin Dynamic Forms
-You can leverage [Zeppelin Dynamic Form](../manual/dynamicform.html) inside 
your queries. You can use both the `text input` and `select form` 
parameterization features.
+You can leverage [Zeppelin Dynamic Form](../usage/dynamic_form/intro.html) 
inside your queries. 
+You can use both the `text input` and `select form` parameterization features.
 
 ```sql
 %hive

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/interpreter/ignite.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/ignite.md b/docs/interpreter/ignite.md
index 677952f..8406e04 100644
--- a/docs/interpreter/ignite.md
+++ b/docs/interpreter/ignite.md
@@ -26,7 +26,7 @@ limitations under the License.
 ## Overview
 [Apache Ignite](https://ignite.apache.org/) In-Memory Data Fabric is a 
high-performance, integrated and distributed in-memory platform for computing 
and transacting on large-scale data sets in real-time, orders of magnitude 
faster than possible with traditional disk-based or flash technologies.
 
-![Apache Ignite](../assets/themes/zeppelin/img/docs-img/ignite-logo.png)
+![Apache Ignite](/assets/themes/zeppelin/img/docs-img/ignite-logo.png)
 
 You can use Zeppelin to retrieve distributed data from cache using Ignite SQL 
interpreter. Moreover, Ignite interpreter allows you to execute any Scala code 
in cases when SQL doesn't fit to your requirements. For example, you can 
populate data into your caches or execute distributed computations.
 
@@ -82,14 +82,14 @@ At the "Interpreters" menu, you may edit Ignite interpreter 
or create new one. Z
   </tr>
 </table>
 
-![Configuration of Ignite 
Interpreter](../assets/themes/zeppelin/img/docs-img/ignite-interpreter-setting.png)
+![Configuration of Ignite 
Interpreter](/assets/themes/zeppelin/img/docs-img/ignite-interpreter-setting.png)
 
 ## How to use
 After configuring Ignite interpreter, create your own notebook. Then you can 
bind interpreters like below image.
 
-![Binding 
Interpreters](../assets/themes/zeppelin/img/docs-img/ignite-interpreter-binding.png)
+![Binding 
Interpreters](/assets/themes/zeppelin/img/docs-img/ignite-interpreter-binding.png)
 
-For more interpreter binding information see 
[here](../manual/interpreters.html#what-is-interpreter-setting).
+For more interpreter binding information see 
[here](../usage/interpreter/overview.html#what-is-interpreter-setting).
 
 ### Ignite SQL interpreter
 In order to execute SQL query, use ` %ignite.ignitesql ` prefix. <br>
@@ -101,7 +101,7 @@ For example, you can select top 10 words in the words cache 
using the following
 select _val, count(_val) as cnt from String group by _val order by cnt desc 
limit 10
 ```
 
-![IgniteSql on 
Zeppelin](../assets/themes/zeppelin/img/docs-img/ignite-sql-example.png)
+![IgniteSql on 
Zeppelin](/assets/themes/zeppelin/img/docs-img/ignite-sql-example.png)
 
 As long as your Ignite version and Zeppelin Ignite version is same, you can 
also use scala code. Please check the Zeppelin Ignite version before you 
download your own Ignite.
 
@@ -123,6 +123,6 @@ val res = cache.query(qry).getAll()
 collectionAsScalaIterable(res).foreach(println _)
 ```
 
-![Using Scala 
Code](../assets/themes/zeppelin/img/docs-img/ignite-scala-example.png)
+![Using Scala 
Code](/assets/themes/zeppelin/img/docs-img/ignite-scala-example.png)
 
 Apache Ignite also provides a guide docs for Zeppelin ["Ignite with Apache 
Zeppelin"](https://apacheignite.readme.io/docs/data-analysis-with-apache-zeppelin)

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/interpreter/jdbc.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/jdbc.md b/docs/interpreter/jdbc.md
index 0fe9d68..57adc76 100644
--- a/docs/interpreter/jdbc.md
+++ b/docs/interpreter/jdbc.md
@@ -33,7 +33,7 @@ By now, it has been tested with:
 
 <div class="row" style="margin: 30px auto;">
   <div class="col-md-6">
-    <img src="../assets/themes/zeppelin/img/docs-img/tested_databases.png" 
width="300px"/>
+    <img src="/assets/themes/zeppelin/img/docs-img/tested_databases.png" 
width="300px"/>
   </div>
   <div class="col-md-6">
     <li style="padding-bottom: 5px; list-style: circle">
@@ -76,12 +76,13 @@ If you are using other databases not in the above list, 
please feel free to shar
 
 First, click `+ Create` button at the top-right corner in the interpreter 
setting page.
 
-<img src="../assets/themes/zeppelin/img/docs-img/click_create_button.png" 
width="600px"/>
+<img src="/assets/themes/zeppelin/img/docs-img/click_create_button.png" 
width="600px"/>
 
-Fill `Interpreter name` field with whatever you want to use as the alias(e.g. 
mysql, mysql2, hive, redshift, and etc..). Please note that this alias will be 
used as `%interpreter_name` to call the interpreter in the paragraph. 
+Fill the `Interpreter name` field with whatever you want to use as the alias 
(e.g. mysql, mysql2, hive, redshift, etc.). 
+Please note that this alias will be used as `%interpreter_name` to call the 
interpreter in a paragraph. 
 Then select `jdbc` as an `Interpreter group`. 
 
-<img src="../assets/themes/zeppelin/img/docs-img/select_name_and_group.png" 
width="200px"/>
+<img src="/assets/themes/zeppelin/img/docs-img/select_name_and_group.png" 
width="200px"/>
 
 The default driver of JDBC interpreter is set as `PostgreSQL`. It means 
Zeppelin includes `PostgreSQL` driver jar in itself.
 So you don't need to add any dependencies(e.g. the artifact name or path for 
`PostgreSQL` driver jar) for `PostgreSQL` connection.
@@ -121,7 +122,7 @@ The JDBC interpreter properties are defined by default like 
below.
   <tr>
     <td>default.precode</td>
     <td></td>
-    <td>Some SQL which executes every time after initialization of the 
interpreter (see [Binding 
mode](../manual/interpreters.md#interpreter-binding-mode))</td>
+    <td>Some SQL which executes every time after initialization of the 
interpreter (see <a 
href="../usage/interpreter/overview.html#interpreter-binding-mode">Binding 
mode</a>)</td>
   </tr>
   <tr>
     <td>default.completer.schemaFilters</td>
@@ -141,17 +142,17 @@ The JDBC interpreter properties are defined by default 
like below.
 </table>
 
 If you want to connect other databases such as `Mysql`, `Redshift` and `Hive`, 
you need to edit the property values.
-You can also use [Credential](../security/datasource_authorization.html) for 
JDBC authentication.
+You can also use [Credential](../setup/security/datasource_authorization.html) 
for JDBC authentication.
 If `default.user` and `default.password` properties are deleted(using X 
button) for database connection in the interpreter setting page,
-the JDBC interpreter will get the account information from 
[Credential](../security/datasource_authorization.html).
+the JDBC interpreter will get the account information from 
[Credential](../setup/security/datasource_authorization.html).
 
 The below example is for `Mysql` connection.
 
-<img src="../assets/themes/zeppelin/img/docs-img/edit_properties.png" 
width="600px" />
+<img src="/assets/themes/zeppelin/img/docs-img/edit_properties.png" 
width="600px" />
 
 The last step is **Dependency Setting**. Since Zeppelin only includes 
`PostgreSQL` driver jar by default, you need to add each driver's maven 
coordinates or JDBC driver's jar file path for the other databases.
 
-<img src="../assets/themes/zeppelin/img/docs-img/edit_dependencies.png" 
width="600px" />
+<img src="/assets/themes/zeppelin/img/docs-img/edit_dependencies.png" 
width="600px" />
 
 That's it. You can find more JDBC connection setting examples([Mysql](#mysql), 
[MariaDB](#mariadb), [Redshift](#redshift), [Apache Hive](#apache-hive), 
[Apache Phoenix](#apache-phoenix), and [Apache Tajo](#apache-tajo)) in [this 
section](#examples).
 
@@ -210,13 +211,13 @@ For example, if a connection needs a schema parameter, it 
would have to add the
 ## Binding JDBC interpter to notebook
 To bind the interpreters created in the interpreter setting page, click the 
gear icon at the top-right corner.
 
-<img 
src="../assets/themes/zeppelin/img/docs-img/click_interpreter_binding_button.png"
 width="600px" />
+<img 
src="/assets/themes/zeppelin/img/docs-img/click_interpreter_binding_button.png" 
width="600px" />
 
 Select(blue) or deselect(white) the interpreter buttons depending on your use 
cases.
 If you need to use more than one interpreter in the notebook, activate several 
buttons.
 Don't forget to click `Save` button, or you will face `Interpreter *** is not 
found` error.
 
-<img src="../assets/themes/zeppelin/img/docs-img/jdbc_interpreter_binding.png" 
width="550px" />
+<img src="/assets/themes/zeppelin/img/docs-img/jdbc_interpreter_binding.png" 
width="550px" />
 
 ## How to use
 ### Run the paragraph with JDBC interpreter
@@ -229,11 +230,11 @@ show databases
 If the paragraph is `FINISHED` without any errors, a new paragraph will be 
automatically added after the previous one with `%jdbc_interpreter_name`.
 So you don't need to type this prefix in every paragraphs' header.
 
-<img src="../assets/themes/zeppelin/img/docs-img/run_paragraph_with_jdbc.png" 
width="600px" />
+<img src="/assets/themes/zeppelin/img/docs-img/run_paragraph_with_jdbc.png" 
width="600px" />
 
 ### Apply Zeppelin Dynamic Forms
 
-You can leverage [Zeppelin Dynamic Form](../manual/dynamicform.html) inside 
your queries. You can use both the `text input` and `select form` 
parametrization features.
+You can leverage [Zeppelin Dynamic Form](../usage/dynamic_form/intro.html) 
inside your queries. You can use both the `text input` and `select form` 
parametrization features.
 
 ```sql
 %jdbc_interpreter_name
@@ -316,7 +317,7 @@ Here are some examples you can refer to. Including the 
below connectors, you can
 
 ### Postgres
 
-<img src="../assets/themes/zeppelin/img/docs-img/postgres_setting.png" 
width="600px" />
+<img src="/assets/themes/zeppelin/img/docs-img/postgres_setting.png" 
width="600px" />
 
 ##### Properties
 <table class="table-configuration">
@@ -360,7 +361,7 @@ Here are some examples you can refer to. Including the 
below connectors, you can
 
 ### Mysql
 
-<img src="../assets/themes/zeppelin/img/docs-img/mysql_setting.png" 
width="600px" />
+<img src="/assets/themes/zeppelin/img/docs-img/mysql_setting.png" 
width="600px" />
 
 ##### Properties
 <table class="table-configuration">
@@ -404,7 +405,7 @@ Here are some examples you can refer to. Including the 
below connectors, you can
 
 ### MariaDB
 
-<img src="../assets/themes/zeppelin/img/docs-img/mariadb_setting.png" 
width="600px" />
+<img src="/assets/themes/zeppelin/img/docs-img/mariadb_setting.png" 
width="600px" />
 
 ##### Properties
 <table class="table-configuration">
@@ -448,7 +449,7 @@ Here are some examples you can refer to. Including the 
below connectors, you can
 
 ### Redshift
 
-<img src="../assets/themes/zeppelin/img/docs-img/redshift_setting.png" 
width="600px" />
+<img src="/assets/themes/zeppelin/img/docs-img/redshift_setting.png" 
width="600px" />
 
 ##### Properties
 <table class="table-configuration">
@@ -492,7 +493,7 @@ Here are some examples you can refer to. Including the 
below connectors, you can
 
 ### Apache Hive
 
-<img src="../assets/themes/zeppelin/img/docs-img/hive_setting.png" 
width="600px" />
+<img src="/assets/themes/zeppelin/img/docs-img/hive_setting.png" width="600px" 
/>
 
 ##### Properties
 <table class="table-configuration">
@@ -544,13 +545,16 @@ Here are some examples you can refer to. Including the 
below connectors, you can
 [Maven Repository : 
org.apache.hive:hive-jdbc](https://mvnrepository.com/artifact/org.apache.hive/hive-jdbc)
 
 ##### Impersonation
-When Zeppelin server is running with authentication enabled, then the 
interpreter can utilize Hive's user proxy feature i.e. send extra parameter for 
creating and running a session ("hive.server2.proxy.user=": "${loggedInUser}"). 
This is particularly useful when multiple users are sharing a notebook.
+When the Zeppelin server is running with authentication enabled, the 
interpreter can utilize Hive's user proxy feature, 
+i.e. send an extra parameter for creating and running a session 
("hive.server2.proxy.user=": "${loggedInUser}"). 
+This is particularly useful when multiple users are sharing a notebook.
 
 To enable this set following:
 
   - `zeppelin.jdbc.auth.type` as `SIMPLE` or `KERBEROS` (if required) in the 
interpreter setting.
   - `${prefix}.proxy.user.property` as `hive.server2.proxy.user`
-
+  
+See [User Impersonation in 
interpreter](../usage/interpreter/user_impersonation.html) for more information.
 
 ##### Sample configuration
 <table class="table-configuration">
@@ -592,7 +596,7 @@ Use the appropriate `default.driver`, `default.url`, and 
the dependency artifact
 
 #### Thick client connection
 
-<img src="../assets/themes/zeppelin/img/docs-img/phoenix_thick_setting.png" 
width="600px" />
+<img src="/assets/themes/zeppelin/img/docs-img/phoenix_thick_setting.png" 
width="600px" />
 
 ##### Properties
 <table class="table-configuration">
@@ -634,7 +638,7 @@ Use the appropriate `default.driver`, `default.url`, and 
the dependency artifact
 
 #### Thin client connection
 
-<img src="../assets/themes/zeppelin/img/docs-img/phoenix_thin_setting.png" 
width="600px" />
+<img src="/assets/themes/zeppelin/img/docs-img/phoenix_thin_setting.png" 
width="600px" />
 
 ##### Properties
 <table class="table-configuration">
@@ -686,7 +690,7 @@ Before Adding one of the below dependencies, check the 
Phoenix version first.
 
 ### Apache Tajo
 
-<img src="../assets/themes/zeppelin/img/docs-img/tajo_setting.png" 
width="600px" />
+<img src="/assets/themes/zeppelin/img/docs-img/tajo_setting.png" width="600px" 
/>
 
 ##### Properties
 <table class="table-configuration">

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/interpreter/lens.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/lens.md b/docs/interpreter/lens.md
index b929220..dfa6849 100644
--- a/docs/interpreter/lens.md
+++ b/docs/interpreter/lens.md
@@ -26,7 +26,7 @@ limitations under the License.
 ## Overview
 [Apache Lens](https://lens.apache.org/) provides an Unified Analytics 
interface. Lens aims to cut the Data Analytics silos by providing a single view 
of data across multiple tiered data stores and optimal execution environment 
for the analytical query. It seamlessly integrates Hadoop with traditional data 
warehouses to appear like one.
 
-![Apache Lens](../assets/themes/zeppelin/img/docs-img/lens-logo.png)
+![Apache Lens](/assets/themes/zeppelin/img/docs-img/lens-logo.png)
 
 ## Installing and Running Lens
 In order to use Lens interpreters, you may install Apache Lens in some simple 
steps:
@@ -90,14 +90,14 @@ At the "Interpreters" menu, you can edit Lens interpreter 
or create new one. Zep
   </tr>
 </table>
 
-![Apache Lens Interpreter 
Setting](../assets/themes/zeppelin/img/docs-img/lens-interpreter-setting.png)
+![Apache Lens Interpreter 
Setting](/assets/themes/zeppelin/img/docs-img/lens-interpreter-setting.png)
 
 ### Interpreter Binding for Zeppelin Notebook
 After configuring Lens interpreter, create your own notebook, then you can 
bind interpreters like below image.
 
-![Zeppelin Notebook Interpreter 
Binding](../assets/themes/zeppelin/img/docs-img/lens-interpreter-binding.png)
+![Zeppelin Notebook Interpreter 
Binding](/assets/themes/zeppelin/img/docs-img/lens-interpreter-binding.png)
 
-For more interpreter binding information see 
[here](http://zeppelin.apache.org/docs/manual/interpreters.html).
+For more interpreter binding information see 
[here](../usage/interpreter/overview.html#interpreter-binding-mode).
 
 ### How to use
 You can analyze your data by using [OLAP 
Cube](http://lens.apache.org/user/olap-cube.html) 
[QL](http://lens.apache.org/user/cli.html) which is a high level SQL like 
language to query and describe data sets organized in data cubes.
@@ -174,11 +174,11 @@ fact add partitions --fact_name sales_raw_fact 
--storage_name local --path your/
 query execute cube select customer_city_name, product_details.description, 
product_details.category, product_details.color, store_sales from sales where 
time_range_in(delivery_time, '2015-04-11-00', '2015-04-13-00')
 ```
 
-![Lens Query Result](../assets/themes/zeppelin/img/docs-img/lens-result.png)
+![Lens Query Result](/assets/themes/zeppelin/img/docs-img/lens-result.png)
 
 These are just examples that provided in advance by Lens. If you want to 
explore whole tutorials of Lens, see the [tutorial 
video](https://cwiki.apache.org/confluence/display/LENS/2015/07/13/20+Minute+video+demo+of+Apache+Lens+through+examples).
 
 ## Lens UI Service
 Lens also provides web UI service. Once the server starts up, you can open the 
service on http://serverhost:19999/index.html and browse. You may also check 
the structure that you made and use query easily here.
 
-![Lens UI Service](../assets/themes/zeppelin/img/docs-img/lens-ui-service.png)
+![Lens UI Service](/assets/themes/zeppelin/img/docs-img/lens-ui-service.png)

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/interpreter/livy.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/livy.md b/docs/interpreter/livy.md
index a7b776c..d85b02a 100644
--- a/docs/interpreter/livy.md
+++ b/docs/interpreter/livy.md
@@ -197,11 +197,13 @@ hello("livy")
 ```
 
 ## Impersonation
-When Zeppelin server is running with authentication enabled, then this 
interpreter utilizes Livy’s user impersonation feature i.e. sends extra 
parameter for creating and running a session ("proxyUser": "${loggedInUser}"). 
This is particularly useful when multi users are sharing a Notebook server.
-
+When the Zeppelin server is running with authentication enabled, 
+this interpreter utilizes Livy's user impersonation feature, 
+i.e. sends an extra parameter for creating and running a session ("proxyUser": 
"${loggedInUser}"). 
+This is particularly useful when multiple users are sharing a Notebook server.
 
 ## Apply Zeppelin Dynamic Forms
-You can leverage [Zeppelin Dynamic Form](../manual/dynamicform.html). You can 
use both the `text input` and `select form` parameterization features.
+You can leverage [Zeppelin Dynamic Form](../usage/dynamic_form/intro.html). 
You can use both the `text input` and `select form` parameterization features.
 
 ```
 %livy.pyspark

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/interpreter/markdown.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/markdown.md b/docs/interpreter/markdown.md
index 46fd170..07fa1b1 100644
--- a/docs/interpreter/markdown.md
+++ b/docs/interpreter/markdown.md
@@ -31,17 +31,18 @@ In Zeppelin notebook, you can use ` %md ` in the beginning 
of a paragraph to inv
 
 In Zeppelin, Markdown interpreter is enabled by default and uses the 
[pegdown](https://github.com/sirthias/pegdown) parser.
 
-<img 
src="../assets/themes/zeppelin/img/docs-img/markdown-interpreter-setting.png" 
width="60%" />
+<img 
src="/assets/themes/zeppelin/img/docs-img/markdown-interpreter-setting.png" 
width="60%" />
 
 ## Example
 
 The following example demonstrates the basic usage of Markdown in a Zeppelin 
notebook.
 
-<img src="../assets/themes/zeppelin/img/docs-img/markdown-example.png" 
width="70%" />
+<img src="/assets/themes/zeppelin/img/docs-img/markdown-example.png" 
width="70%" />
 
 ## Mathematical expression
 
-Markdown interpreter leverages %html display system internally. That means you 
can mix mathematical expressions with markdown syntax. For more information, 
please see [Mathematical 
Expression](../displaysystem/basicdisplaysystem.html#mathematical-expressions) 
section.
+The Markdown interpreter leverages the `%html` display system internally. That 
means you can mix mathematical expressions with markdown syntax. 
+For more information, please see the [Mathematical 
Expression](../usage/display_system/basic.html#mathematical-expressions) 
section.
 
 ## Configuration
 <table class="table-configuration">
@@ -62,11 +63,11 @@ Markdown interpreter leverages %html display system 
internally. That means you c
 
 `pegdown` parser provides github flavored markdown.
 
-<img 
src="../assets/themes/zeppelin/img/docs-img/markdown-example-pegdown-parser.png"
 width="70%" />
+<img 
src="/assets/themes/zeppelin/img/docs-img/markdown-example-pegdown-parser.png" 
width="70%" />
 
 `pegdown` parser provides [YUML](http://yuml.me/) and 
[Websequence](https://www.websequencediagrams.com/) plugins also. 
 
-<img 
src="../assets/themes/zeppelin/img/docs-img/markdown-example-pegdown-parser-plugins.png"
 width="70%" />
+<img 
src="/assets/themes/zeppelin/img/docs-img/markdown-example-pegdown-parser-plugins.png"
 width="70%" />
 
 ### Markdown4j Parser
 

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/interpreter/pig.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/pig.md b/docs/interpreter/pig.md
index d1f18fa..2e633a2 100644
--- a/docs/interpreter/pig.md
+++ b/docs/interpreter/pig.md
@@ -12,7 +12,11 @@ group: manual
 <div id="toc"></div>
 
 ## Overview
-[Apache Pig](https://pig.apache.org/) is a platform for analyzing large data 
sets that consists of a high-level language for expressing data analysis 
programs, coupled with infrastructure for evaluating these programs. The 
salient property of Pig programs is that their structure is amenable to 
substantial parallelization, which in turns enables them to handle very large 
data sets.
+[Apache Pig](https://pig.apache.org/) is a platform for analyzing large data 
sets that consists of 
+a high-level language for expressing data analysis programs, 
+coupled with infrastructure for evaluating these programs. 
+The salient property of Pig programs is that their structure is amenable to 
substantial parallelization, 
+which in turn enables them to handle very large data sets.
 
 ## Supported interpreter type
   - `%pig.script` (default Pig interpreter, so you can use `%pig`)
@@ -55,7 +59,8 @@ group: manual
 
 At the Interpreters menu, you have to create a new Pig interpreter. Pig 
interpreter has below properties by default.
 And you can set any Pig properties here which will be passed to Pig engine. 
(like tez.queue.name & mapred.job.queue.name).
-Besides, we use paragraph title as job name if it exists, else use the last 
line of Pig script. So you can use that to find app running in YARN RM UI.
+Besides, we use the paragraph title as the job name if it exists, otherwise 
the last line of the Pig script. 
+So you can use that to find the app in the YARN RM UI.
 
 <table class="table-configuration">
     <tr>
@@ -116,7 +121,8 @@ b = group bank_data by age;
 foreach b generate group, COUNT($1);
 ```
 
-The same as above, but use dynamic text form so that use can specify the 
variable maxAge in textbox. (See screenshot below). Dynamic form is a very cool 
feature of Zeppelin, you can refer this [link]((../manual/dynamicform.html)) 
for details.
+The same as above, but using a dynamic text form so that the user can specify 
the variable maxAge in a textbox (see screenshot below). 
+Dynamic form is a very cool feature of Zeppelin; you can refer to this 
[link](../usage/dynamic_form/intro.html) for details.
 
 ```
 %pig.query
@@ -126,7 +132,8 @@ b = group bank_data by age;
 foreach b generate group, COUNT($1) as count;
 ```
 
-Get the number of each age for specific marital type, also use dynamic form 
here. User can choose the marital type in the dropdown list (see screenshot 
below).
+Get the number of each age for a specific marital type, also using a dynamic 
form here. 
+The user can choose the marital type in the dropdown list (see screenshot 
below).
 
 ```
 %pig.query
@@ -138,11 +145,14 @@ foreach b generate group, COUNT($1) as count;
 
 The above examples are in the Pig tutorial note in Zeppelin, you can check 
that for details. Here's the screenshot.
 
-<img class="img-responsive" width="1024px" style="margin:0 auto; padding: 
26px;" src="../assets/themes/zeppelin/img/pig_zeppelin_tutorial.png" />
+<img class="img-responsive" width="1024px" style="margin:0 auto; padding: 
26px;" src="/assets/themes/zeppelin/img/pig_zeppelin_tutorial.png" />
 
 
-Data is shared between `%pig` and `%pig.query`, so that you can do some common 
work in `%pig`, and do different kinds of query based on the data of `%pig`. 
-Besides, we recommend you to specify alias explicitly so that the 
visualization can display the column name correctly. In the above example 2 and 
3 of `%pig.query`, we name `COUNT($1)` as `count`. If you don't do this,
-then we will name it using position. E.g. in the above first example of 
`%pig.query`, we will use `col_1` in chart to represent `COUNT($1)`.
+Data is shared between `%pig` and `%pig.query`, so that you can do some common 
work in `%pig`, 
+and run different kinds of queries based on the data of `%pig`. 
+Besides, we recommend you specify aliases explicitly so that the 
visualization can display 
+the column names correctly. In examples 2 and 3 of `%pig.query` above, we 
name `COUNT($1)` as `count`. 
+If you don't do this, then we will name it by position. 
+E.g. in the first `%pig.query` example above, we will use `col_1` in the chart 
to represent `COUNT($1)`.
 
 

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/interpreter/postgresql.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/postgresql.md b/docs/interpreter/postgresql.md
index 52dac16..6aed010 100644
--- a/docs/interpreter/postgresql.md
+++ b/docs/interpreter/postgresql.md
@@ -25,5 +25,6 @@ limitations under the License.
                
 ## Important Notice            
                
-
-Postgresql interpreter is deprecated and merged into [JDBC 
Interpreter](./jdbc.html). You can use it with JDBC Interpreter as same 
functionality. See [Postgresql setting example](./jdbc.html#postgres) for more 
detailed information.
+Postgresql interpreter is deprecated and merged into the [JDBC 
Interpreter](./jdbc.html). 
+The JDBC Interpreter provides the same functionality. 
+See the [Postgresql setting example](./jdbc.html#postgres) for more detailed 
information.

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/interpreter/python.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/python.md b/docs/interpreter/python.md
index 8d5d4b8..1963f55 100644
--- a/docs/interpreter/python.md
+++ b/docs/interpreter/python.md
@@ -68,23 +68,36 @@ The interpreter can use all modules already installed (with 
pip, easy_install...
 
 #### Usage
 
-List your environments
+- get the Conda information: 
 
-```
-%python.conda
-```
+    ```%python.conda info```
+    
+- list the Conda environments: 
 
-Activate an environment
+    ```%python.conda env list```
 
-```
-%python.conda activate [ENVIRONMENT_NAME]
-```
+- create a Conda environment: 
+    ```%python.conda create --name [ENV NAME]```
+    
+- activate an environment (python interpreter will be restarted): 
 
-Deactivate
+    ```%python.conda activate [ENV NAME]```
 
-```
-%python.conda deactivate
-```
+- deactivate
+
+    ```%python.conda deactivate```
+    
+- get installed package list inside the current environment
+
+    ```%python.conda list```
+    
+- install package
+
+    ```%python.conda install [PACKAGE NAME]```
+  
+- uninstall package
+  
+    ```%python.conda uninstall [PACKAGE NAME]```
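+
+Run as separate notebook paragraphs, the commands above form a typical 
workflow; a minimal sketch (`myenv` and the package name are hypothetical 
examples):

```
%python.conda create --name myenv

%python.conda activate myenv

%python.conda install pandas
```

Each command goes in its own paragraph; as noted above, `activate` restarts 
the python interpreter.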
 
 ### Docker
 
@@ -92,21 +105,22 @@ Deactivate
 
 #### Usage
 
-Activate an environment
+- activate an environment
 
-```
-%python.docker activate [Repository]
-%python.docker activate [Repository:Tag]
-%python.docker activate [Image Id]
-```
+    ```
+    %python.docker activate [Repository]
+    %python.docker activate [Repository:Tag]
+    %python.docker activate [Image Id]
+    ```
 
-Deactivate
+- deactivate
 
-```
-%python.docker deactivate
-```
+    ```
+    %python.docker deactivate
+    ```
 
-Example
+<br/>
+Here is an example:
 
 ```
 # activate latest tensorflow image as a python environment
@@ -114,7 +128,7 @@ Example
 ```
 
 ## Using Zeppelin Dynamic Forms
-You can leverage [Zeppelin Dynamic 
Form]({{BASE_PATH}}/manual/dynamicform.html) inside your Python code.
+You can leverage [Zeppelin Dynamic 
Form]({{BASE_PATH}}/usage/dynamic_form/intro.html) inside your Python code.
 
 **Zeppelin Dynamic Form can only be used if py4j Python library is installed 
in your system. If not, you can install it with `pip install py4j`.**
 
@@ -148,9 +162,13 @@ z.configure_mpl(width=400, height=300, fmt='svg')
 plt.plot([1, 2, 3])
 ```
 
-Will produce a 400x300 image in SVG format, which by default are normally 
600x400 and PNG respectively. In the future, another option called `angular` 
can be used to make it possible to update a plot produced from one paragraph 
directly from another (the output will be `%angular` instead of `%html`). 
However, this feature is already available in the `pyspark` interpreter. More 
details can be found in the included "Zeppelin Tutorial: Python - matplotlib 
basic" tutorial notebook. 
+This will produce a 400x300 image in SVG format (the defaults are 600x400 and PNG, respectively). 
+In the future, another option called `angular` can be used to make it possible 
to update a plot produced from one paragraph directly from another 
+(the output will be `%angular` instead of `%html`). However, this feature is 
already available in the `pyspark` interpreter. 
+More details can be found in the included "Zeppelin Tutorial: Python - 
matplotlib basic" tutorial notebook. 
 
-If Zeppelin cannot find the matplotlib backend files (which should usually be 
found in `$ZEPPELIN_HOME/interpreter/lib/python`) in your `PYTHONPATH`, then 
the backend will automatically be set to agg, and the (otherwise deprecated) 
instructions below can be used for more limited inline plotting.
+If Zeppelin cannot find the matplotlib backend files (which should usually be 
found in `$ZEPPELIN_HOME/interpreter/lib/python`) in your `PYTHONPATH`, 
+then the backend will automatically be set to agg, and the (otherwise 
deprecated) instructions below can be used for more limited inline plotting.
 
 If you are unable to load the inline backend, use `z.show(plt)`:
  ```python
@@ -168,11 +186,13 @@ The `z.show()` function can take optional parameters to 
adapt graph dimensions (
 z.show(plt, width='50px')
 z.show(plt, height='150px', fmt='svg')
 ```
-<img class="img-responsive" 
src="../assets/themes/zeppelin/img/docs-img/pythonMatplotlib.png" />
+<img class="img-responsive" 
src="/assets/themes/zeppelin/img/docs-img/pythonMatplotlib.png" />
 
 
 ## Pandas integration
-Apache Zeppelin [Table Display 
System](../displaysystem/basicdisplaysystem.html#table) provides built-in data 
visualization capabilities. Python interpreter leverages it to visualize Pandas 
DataFrames though similar `z.show()` API, same as with [Matplotlib 
integration](#matplotlib-integration).
+Apache Zeppelin [Table Display 
System](../usage/display_system/basic.html#table) provides built-in data 
visualization capabilities. 
+The Python interpreter leverages it to visualize Pandas DataFrames through a similar `z.show()` API, 
+same as with [Matplotlib integration](#matplotlib-integration).
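
The `%table` string format behind the Table Display System can also be produced from plain Python.
A minimal stdlib sketch of that format (the helper name `to_zeppelin_table` is hypothetical; inside
a note you would normally just call `z.show(df)`):

```python
# Sketch of the string format consumed by Zeppelin's Table Display System:
# a "%table" prefix, tab-separated columns, newline-separated rows.
def to_zeppelin_table(columns, rows):
    header = "\t".join(columns)
    body = "\n".join("\t".join(str(v) for v in row) for row in rows)
    return "%table " + header + "\n" + body

rates = [("EUR", 1.09), ("JPY", 0.0067)]
print(to_zeppelin_table(["currency", "rate"], rates))
```

Printing such a string from a `%python` paragraph renders as an interactive table, the same display
mechanism `z.show()` drives for DataFrames.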
 
 Example:
 
@@ -184,7 +204,9 @@ z.show(rates)
 
 ## SQL over Pandas DataFrames
 
-There is a convenience `%python.sql` interpreter that matches Apache Spark 
experience in Zeppelin and enables usage of SQL language to query [Pandas 
DataFrames](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html)
 and visualization of results though built-in [Table Display 
System](../displaysystem/basicdisplaysystem.html#table).
+There is a convenience `%python.sql` interpreter that matches the Apache Spark experience in Zeppelin and 
+enables using SQL to query [Pandas DataFrames](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) and 
+visualize the results through the built-in [Table Display System](../usage/display_system/basic.html#table).
 
 **Prerequisites**
 
@@ -217,7 +239,9 @@ For in-depth technical details on current implementation 
please refer to [python
 
 ### Some features not yet implemented in the Python Interpreter
 
-* Interrupt a paragraph execution (`cancel()` method) is currently only 
supported in Linux and MacOs. If interpreter runs in another operating system 
(for instance MS Windows) , interrupt a paragraph will close the whole 
interpreter. A JIRA ticket 
([ZEPPELIN-893](https://issues.apache.org/jira/browse/ZEPPELIN-893)) is opened 
to implement this feature in a next release of the interpreter.
+* Interrupting a paragraph execution (the `cancel()` method) is currently only supported on Linux and macOS. 
+If the interpreter runs on another operating system (for instance MS Windows), interrupting a paragraph will close the whole interpreter. 
+A JIRA ticket ([ZEPPELIN-893](https://issues.apache.org/jira/browse/ZEPPELIN-893)) is open to implement this feature in a future release of the interpreter.
 * Progress bar in the web UI (`getProgress()` method) is currently not implemented.
 * Code-completion is currently not implemented.
 

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/interpreter/r.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/r.md b/docs/interpreter/r.md
index 2b22491..1f68f89 100644
--- a/docs/interpreter/r.md
+++ b/docs/interpreter/r.md
@@ -69,23 +69,23 @@ By default, the R Interpreter appears as two Zeppelin 
Interpreters, `%r` and `%k
 
 `%r` will behave like an ordinary REPL.  You can execute commands as in the 
CLI.   
 
-<img class="img-responsive" 
src="../assets/themes/zeppelin/img/docs-img/repl2plus2.png" width="700px"/>
+<img class="img-responsive" 
src="/assets/themes/zeppelin/img/docs-img/repl2plus2.png" width="700px"/>
 
 R base plotting is fully supported
 
-<img class="img-responsive" 
src="../assets/themes/zeppelin/img/docs-img/replhist.png" width="550px"/>
+<img class="img-responsive" 
src="/assets/themes/zeppelin/img/docs-img/replhist.png" width="550px"/>
 
 If you return a data.frame, Zeppelin will attempt to display it using 
Zeppelin's built-in visualizations.
 
-<img class="img-responsive" 
src="../assets/themes/zeppelin/img/docs-img/replhead.png" width="550px"/>
+<img class="img-responsive" 
src="/assets/themes/zeppelin/img/docs-img/replhead.png" width="550px"/>
 
 `%knitr` interfaces directly against `knitr`, with chunk options on the first 
line:
 
-<img class="img-responsive" 
src="../assets/themes/zeppelin/img/docs-img/knitgeo.png" width="550px"/>
+<img class="img-responsive" 
src="/assets/themes/zeppelin/img/docs-img/knitgeo.png" width="550px"/>
 
-<img class="img-responsive" 
src="../assets/themes/zeppelin/img/docs-img/knitstock.png" width="550px"/>
+<img class="img-responsive" 
src="/assets/themes/zeppelin/img/docs-img/knitstock.png" width="550px"/>
 
-<img class="img-responsive" 
src="../assets/themes/zeppelin/img/docs-img/knitmotion.png" width="550px"/>
+<img class="img-responsive" 
src="/assets/themes/zeppelin/img/docs-img/knitmotion.png" width="550px"/>
 
 The two interpreters share the same environment.  If you define a variable 
from `%r`, it will be within-scope if you then make a call using `knitr`.
 
@@ -93,23 +93,23 @@ The two interpreters share the same environment.  If you 
define a variable from
 
 If `SPARK_HOME` is set, the `SparkR` package will be loaded automatically:
 
-<img class="img-responsive" 
src="../assets/themes/zeppelin/img/docs-img/sparkrfaithful.png" width="550px"/>
+<img class="img-responsive" 
src="/assets/themes/zeppelin/img/docs-img/sparkrfaithful.png" width="550px"/>
 
 The Spark Context and SQL Context are created and injected into the local 
environment automatically as `sc` and `sql`.
 
 The same context are shared with the `%spark`, `%sql` and `%pyspark` 
interpreters:
 
-<img class="img-responsive" 
src="../assets/themes/zeppelin/img/docs-img/backtoscala.png" width="700px"/>
+<img class="img-responsive" 
src="/assets/themes/zeppelin/img/docs-img/backtoscala.png" width="700px"/>
 
 You can also make an ordinary R variable accessible in scala and Python:
 
-<img class="img-responsive" 
src="../assets/themes/zeppelin/img/docs-img/varr1.png" width="550px"/>
+<img class="img-responsive" 
src="/assets/themes/zeppelin/img/docs-img/varr1.png" width="550px"/>
 
 And vice versa:
 
-<img class="img-responsive" 
src="../assets/themes/zeppelin/img/docs-img/varscala.png" width="550px"/>
+<img class="img-responsive" 
src="/assets/themes/zeppelin/img/docs-img/varscala.png" width="550px"/>
 
-<img class="img-responsive" 
src="../assets/themes/zeppelin/img/docs-img/varr2.png" width="550px"/>
+<img class="img-responsive" 
src="/assets/themes/zeppelin/img/docs-img/varr2.png" width="550px"/>
 
 ## Caveats & Troubleshooting
 

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/interpreter/scalding.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/scalding.md b/docs/interpreter/scalding.md
index 22027f2..bc04a16 100644
--- a/docs/interpreter/scalding.md
+++ b/docs/interpreter/scalding.md
@@ -37,9 +37,9 @@ In a notebook, to enable the **Scalding** interpreter, click 
on the **Gear** ico
 
 <center>
 
-![Interpreter 
Binding](../assets/themes/zeppelin/img/docs-img/scalding-InterpreterBinding.png)
+![Interpreter 
Binding](/assets/themes/zeppelin/img/docs-img/scalding-InterpreterBinding.png)
 
-![Interpreter 
Selection](../assets/themes/zeppelin/img/docs-img/scalding-InterpreterSelection.png)
+![Interpreter 
Selection](/assets/themes/zeppelin/img/docs-img/scalding-InterpreterSelection.png)
 
 </center>
 
@@ -124,7 +124,7 @@ print("%table " + table)
 ```
 
 If you click on the icon for the pie chart, you should be able to see a chart 
like this:
-![Scalding - Pie - 
Chart](../assets/themes/zeppelin/img/docs-img/scalding-pie.png)
+![Scalding - Pie - 
Chart](/assets/themes/zeppelin/img/docs-img/scalding-pie.png)
 
 
 ### HDFS mode

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/interpreter/shell.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/shell.md b/docs/interpreter/shell.md
index b4b36dd..a3d8cea 100644
--- a/docs/interpreter/shell.md
+++ b/docs/interpreter/shell.md
@@ -63,6 +63,7 @@ At the "Interpreters" menu in Zeppelin dropdown menu, you can 
set the property v
 ## Example
 The following example demonstrates the basic usage of Shell in a Zeppelin 
notebook.
 
-<img src="../assets/themes/zeppelin/img/docs-img/shell-example.png" />
+<img src="/assets/themes/zeppelin/img/docs-img/shell-example.png" />
 
-If you need further information about **Zeppelin Interpreter Setting** for 
using Shell interpreter, please read [What is interpreter 
setting?](../manual/interpreters.html#what-is-interpreter-setting) section 
first.
\ No newline at end of file
+If you need further information about **Zeppelin Interpreter Setting** for 
using Shell interpreter, 
+please read [What is interpreter 
setting?](../usage/interpreter/overview.html#what-is-interpreter-setting) 
section first.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/interpreter/spark.md
----------------------------------------------------------------------
diff --git a/docs/interpreter/spark.md b/docs/interpreter/spark.md
index 59b3430..bfe0e01 100644
--- a/docs/interpreter/spark.md
+++ b/docs/interpreter/spark.md
@@ -200,7 +200,7 @@ Staring from 0.6.1 SparkSession is available as variable 
`spark` when you are us
 There are two ways to load external libraries in Spark interpreter. First is 
using interpreter setting menu and second is loading Spark properties.
 
 ### 1. Setting Dependencies via Interpreter Setting
-Please see [Dependency Management](../manual/dependencymanagement.html) for 
the details.
+Please see [Dependency 
Management](../usage/interpreter/dependency_management.html) for the details.
 
 ### 2. Loading Spark Properties
 Once `SPARK_HOME` is set in `conf/zeppelin-env.sh`, Zeppelin uses 
`spark-submit` as spark interpreter runner. `spark-submit` supports two ways to 
load configurations. 
@@ -389,23 +389,28 @@ In sql environment, you can create form in simple 
template.
 select * from ${table=defaultTableName} where text like '%${search}%'
 ```
 
-To learn more about dynamic form, checkout [Dynamic 
Form](../manual/dynamicform.html).
+To learn more about dynamic form, checkout [Dynamic 
Form](../usage/dynamic_form/intro.html).
 
 
 ## Matplotlib Integration (pyspark)
-Both the `python` and `pyspark` interpreters have built-in support for inline 
visualization using `matplotlib`, a popular plotting library for python. More 
details can be found in the [python interpreter 
documentation](../interpreter/python.html), since matplotlib support is 
identical. More advanced interactive plotting can be done with pyspark through 
utilizing Zeppelin's built-in [Angular Display 
System](../displaysystem/back-end-angular.html), as shown below:
+Both the `python` and `pyspark` interpreters have built-in support for inline visualization using `matplotlib`, 
+a popular plotting library for Python. More details can be found in the [Python interpreter documentation](../interpreter/python.html), 
+since matplotlib support is identical. More advanced interactive plotting can be done with pyspark by 
+utilizing Zeppelin's built-in [Angular Display System](../usage/display_system/angular_backend.html), as shown below:
 
-<img class="img-responsive" 
src="../assets/themes/zeppelin/img/docs-img/matplotlibAngularExample.gif" />
+<img class="img-responsive" 
src="/assets/themes/zeppelin/img/docs-img/matplotlibAngularExample.gif" />
 
 ## Interpreter setting option
 
-You can choose one of `shared`, `scoped` and `isolated` options wheh you 
configure Spark interpreter. Spark interpreter creates separated Scala compiler 
per each notebook but share a single SparkContext in `scoped` mode 
(experimental). It creates separated SparkContext per each notebook in 
`isolated` mode.
+You can choose one of the `shared`, `scoped` and `isolated` options when you configure the Spark interpreter. 
+The Spark interpreter creates a separate Scala compiler per notebook but shares a single SparkContext in `scoped` mode (experimental). 
+It creates a separate SparkContext per notebook in `isolated` mode.
 
 
 ## Setting up Zeppelin with Kerberos
 Logical setup with Zeppelin, Kerberos Key Distribution Center (KDC), and Spark 
on YARN:
 
-<img src="../assets/themes/zeppelin/img/docs-img/kdc_zeppelin.png">
+<img src="/assets/themes/zeppelin/img/docs-img/kdc_zeppelin.png">
 
 ### Configuration Setup
 

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/manual/dependencymanagement.md
----------------------------------------------------------------------
diff --git a/docs/manual/dependencymanagement.md 
b/docs/manual/dependencymanagement.md
deleted file mode 100644
index 44068da..0000000
--- a/docs/manual/dependencymanagement.md
+++ /dev/null
@@ -1,74 +0,0 @@
----
-layout: page
-title: "Dependency Management for Apache Spark Interpreter"
-description: "Include external libraries to Apache Spark Interpreter by 
setting dependencies in interpreter menu."
-group: manual
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-## Dependency Management for Interpreter
-
-You can include external libraries to interpreter by setting dependencies in 
interpreter menu.
-
-When your code requires external library, instead of doing 
download/copy/restart Zeppelin, you can easily do following jobs in this menu.
-
- * Load libraries recursively from Maven repository
- * Load libraries from local filesystem
- * Add additional maven repository
- * Automatically add libraries to SparkCluster
-
-<hr>
-<div class="row">
-  <div class="col-md-6">
-    <a data-lightbox="compiler" 
href="../assets/themes/zeppelin/img/docs-img/interpreter-dependency-loading.png">
-      <img class="img-responsive" 
src="../assets/themes/zeppelin/img/docs-img/interpreter-dependency-loading.png" 
/>
-    </a>
-  </div>
-  <div class="col-md-6" style="padding-top:30px">
-    <b> Load Dependencies to Interpreter </b>
-    <br /><br />
-    <ol>
-      <li> Click 'Interpreter' menu in navigation bar. </li>
-      <li> Click 'edit' button of the interpreter which you want to load 
dependencies to. </li>
-      <li> Fill artifact and exclude field to your needs.
-           You can enter not only groupId:artifactId:version but also local 
file in artifact field. </li>
-      <li> Press 'Save' to restart the interpreter with loaded libraries. </li>
-    </ol>
-  </div>
-</div>
-<hr>
-<div class="row">
-  <div class="col-md-6">
-    <a data-lightbox="compiler" 
href="../assets/themes/zeppelin/img/docs-img/interpreter-add-repo1.png">
-      <img class="img-responsive" 
src="../assets/themes/zeppelin/img/docs-img/interpreter-add-repo1.png" />
-    </a>
-    <a data-lightbox="compiler" 
href="../assets/themes/zeppelin/img/docs-img/interpreter-add-repo2.png">
-      <img class="img-responsive" 
src="../assets/themes/zeppelin/img/docs-img/interpreter-add-repo2.png" />
-    </a>
-  </div>
-  <div class="col-md-6" style="padding-top:30px">
-    <b> Add repository for dependency resolving </b>
-    <br /><br />
-    <ol>
-      <li> Press <i class="fa fa-cog"></i> icon in 'Interpreter' menu on the 
top right side.
-           It will show you available repository lists.</li>
-      <li> If you need to resolve dependencies from other than central maven 
repository or
-          local ~/.m2 repository, hit <i class="fa fa-plus"></i> icon next to 
repository lists. </li>
-      <li> Fill out the form and click 'Add' button, then you will be able to 
see that new repository is added. </li>
-      <li> Optionally, if you are behind a corporate firewall, you can specify 
also all proxy settings so that Zeppelin can download the dependencies using 
the given credentials</li>
-    </ol>
-  </div>
-</div>

http://git-wip-us.apache.org/repos/asf/zeppelin/blob/4b6d3e55/docs/manual/dynamicform.md
----------------------------------------------------------------------
diff --git a/docs/manual/dynamicform.md b/docs/manual/dynamicform.md
deleted file mode 100644
index b42bd15..0000000
--- a/docs/manual/dynamicform.md
+++ /dev/null
@@ -1,187 +0,0 @@
----
-layout: page
-title: "Dynamic Form in Apache Zeppelin"
-description: "Apache Zeppelin dynamically creates input forms. Depending on 
language backend, there're two different ways to create dynamic form."
-group: manual
----
-<!--
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-{% include JB/setup %}
-
-# Dynamic Form
-
-<div id="toc"></div>
-
-Apache Zeppelin dynamically creates input forms. Depending on language 
backend, there're two different ways to create dynamic form.
-Custom language backend can select which type of form creation it wants to use.
-
-## Using form Templates
-
-This mode creates form using simple template language. It's simple and easy to 
use. For example Markdown, Shell, Spark SQL language backend uses it.
-
-### Text input form
-
-To create text input form, use `${formName}` templates.
-
-for example
-
-<img class="img-responsive" 
src="/assets/themes/zeppelin/img/screenshots/form_input.png" width="450px" />
-
-
-Also you can provide default value, using `${formName=defaultValue}`.
-
-<img src="../assets/themes/zeppelin/img/screenshots/form_input_default.png" />
-
-
-### Select form
-
-To create select form, use `${formName=defaultValue,option1|option2...}`
-
-for example
-
-<img src="../assets/themes/zeppelin/img/screenshots/form_select.png" />
-
-Also you can separate option's display name and value, using 
`${formName=defaultValue,option1(DisplayName)|option2(DisplayName)...}`
-
-<img 
src="../assets/themes/zeppelin/img/screenshots/form_select_displayname.png" />
-
-The paragraph will be automatically run after you change your selection by 
default.
-But in case you have multiple types dynamic form in one paragraph, you might 
want to run the paragraph after changing all the selections.
-You can control this by unchecking the below **Run on selection change** 
option in the setting menu.
-
-Even if you uncheck this option, still you can run it by pressing `Enter`.
-
-<img src="../assets/themes/zeppelin/img/screenshots/selectForm-checkbox.png" />
-
-### Checkbox form
-
-For multi-selection, you can create a checkbox form using 
`${checkbox:formName=defaultValue1|defaultValue2...,option1|option2...}`. The 
variable will be substituted by a comma-separated string based on the selected 
items. For example:
-
-<img src="../assets/themes/zeppelin/img/screenshots/form_checkbox.png">
-
-You can specify the delimiter using `${checkbox(delimiter):formName=...}`:
-
-<img 
src="../assets/themes/zeppelin/img/screenshots/form_checkbox_delimiter.png">
-
-Like [select form](#select-form), the paragraph will be automatically run 
after you change your selection by default.
-But in case you have multiple types dynamic form in one paragraph, you might 
want to run the paragraph after changing all the selections.
-You can control this by unchecking the below **Run on selection change** 
option in the setting menu.
-
-Even if you uncheck this option, still you can run it by pressing `Enter`.
-
-<img src="../assets/themes/zeppelin/img/screenshots/selectForm-checkbox.png" />
-
-## Creates Programmatically
-
-Some language backends can programmatically create forms. For example 
[ZeppelinContext](../interpreter/spark.html#zeppelincontext) provides a form 
creation API
-
-Here are some examples:
-
-### Text input form
-<div class="codetabs">
-    <div data-lang="scala" markdown="1">
-
-{% highlight scala %}
-%spark
-println("Hello "+z.input("name"))
-{% endhighlight %}
-
-    </div>
-    <div data-lang="python" markdown="1">
-
-{% highlight python %}
-%pyspark
-print("Hello "+z.input("name"))
-{% endhighlight %}
-
-    </div>
-</div>
-<img src="../assets/themes/zeppelin/img/screenshots/form_input_prog.png" />
-
-### Text input form with default value
-<div class="codetabs">
-    <div data-lang="scala" markdown="1">
-
-{% highlight scala %}
-%spark
-println("Hello "+z.input("name", "sun")) 
-{% endhighlight %}
-
-    </div>
-    <div data-lang="python" markdown="1">
-
-{% highlight python %}
-%pyspark
-print("Hello "+z.input("name", "sun"))
-{% endhighlight %}
-
-    </div>
-</div>
-<img 
src="../assets/themes/zeppelin/img/screenshots/form_input_default_prog.png" />
-
-### Select form
-<div class="codetabs">
-    <div data-lang="scala" markdown="1">
-
-{% highlight scala %}
-%spark
-println("Hello "+z.select("day", Seq(("1","mon"),
-                                    ("2","tue"),
-                                    ("3","wed"),
-                                    ("4","thurs"),
-                                    ("5","fri"),
-                                    ("6","sat"),
-                                    ("7","sun"))))
-{% endhighlight %}
-
-    </div>
-    <div data-lang="python" markdown="1">
-
-{% highlight python %}
-%pyspark
-print("Hello "+z.select("day", [("1","mon"),
-                                ("2","tue"),
-                                ("3","wed"),
-                                ("4","thurs"),
-                                ("5","fri"),
-                                ("6","sat"),
-                                ("7","sun")]))
-{% endhighlight %}
-
-    </div>
-</div>
-<img src="../assets/themes/zeppelin/img/screenshots/form_select_prog.png" />
-
-#### Checkbox form
-<div class="codetabs">
-    <div data-lang="scala" markdown="1">
-
-{% highlight scala %}
-%spark
-val options = Seq(("apple","Apple"), ("banana","Banana"), ("orange","Orange"))
-println("Hello "+z.checkbox("fruit", options).mkString(" and "))
-{% endhighlight %}
-
-    </div>
-    <div data-lang="python" markdown="1">
-
-{% highlight python %}
-%pyspark
-options = [("apple","Apple"), ("banana","Banana"), ("orange","Orange")]
-print("Hello "+ " and ".join(z.checkbox("fruit", options, ["apple"])))
-{% endhighlight %}
-
-    </div>
-</div>
-<img src="../assets/themes/zeppelin/img/screenshots/form_checkbox_prog.png" />
