http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/create_cube.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/create_cube.cn.md 
b/website/_docs20/tutorial/create_cube.cn.md
new file mode 100644
index 0000000..5c28e11
--- /dev/null
+++ b/website/_docs20/tutorial/create_cube.cn.md
@@ -0,0 +1,129 @@
+---
+layout: docs20-cn
+title:  Kylin Cube Creation Tutorial
+categories: tutorial
+permalink: /cn/docs20/tutorial/create_cube.html
+version: v1.2
+since: v0.7.1
+---
+  
+  
+### I. Create a Project
+1. Go to the `Query` page from the top menu bar, then click `Manage Projects`.
+
+   ![](/images/Kylin-Cube-Creation-Tutorial/1 manage-prject.png)
+
+2. Click the `+ Project` button to add a new project.
+
+   ![](/images/Kylin-Cube-Creation-Tutorial/2 %2Bproject.png)
+
+3. Fill in the following form and click the `submit` button to send the request.
+
+   ![](/images/Kylin-Cube-Creation-Tutorial/3 new-project.png)
+
+4. After success, a notification will show at the bottom.
+
+   ![](/images/Kylin-Cube-Creation-Tutorial/3.1 pj-created.png)
+
+### II. Sync up a Table
+1. Click `Tables` in the top menu bar, then click the `+ Sync` button to load Hive table metadata.
+
+   ![](/images/Kylin-Cube-Creation-Tutorial/4 %2Btable.png)
+
+2. Enter the table names and click the `Sync` button to send the request.
+
+   ![](/images/Kylin-Cube-Creation-Tutorial/5 hive-table.png)
+
+### III. Create a Cube
+First, click `Cubes` in the top menu bar. Then click the `+Cube` button to enter the cube designer page.
+
+![](/images/Kylin-Cube-Creation-Tutorial/6 %2Bcube.png)
+
+**Step 1. Cube Info**
+
+Fill in the basic cube information. Click `Next` to go to the next step.
+
+You can use letters, numbers and "_" to name your cube (note that spaces are not allowed in the name).
+
+![](/images/Kylin-Cube-Creation-Tutorial/7 cube-info.png)
+
+**Step 2. Dimensions**
+
+1. Set up the fact table.
+
+    ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-factable.png)
+
+2. Click the `+Dimension` button to add a new dimension.
+
+    ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-%2Bdim.png)
+
+3. Dimensions of different types can be added to a cube. Some of them are listed here for your reference.
+
+    * Dimensions from the fact table.
+          ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-typeA.png)
+
+    * Dimensions from a lookup table.
+        ![]( /images/Kylin-Cube-Creation-Tutorial/8 dim-typeB-1.png)
+
+        ![]( /images/Kylin-Cube-Creation-Tutorial/8 dim-typeB-2.png)
+   
+    * Dimensions from a lookup table with a hierarchy.
+          ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-typeC.png)
+
+    * Dimensions from a lookup table with derived dimensions.
+          ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-typeD.png)
+
+4. Dimensions can be edited after being saved.
+   ![](/images/Kylin-Cube-Creation-Tutorial/8 dim-edit.png)
+
+**Step 3. Measures**
+
+1. Click the `+Measure` button to add a new measure.
+   ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-%2Bmeas.png)
+
+2. There are 5 types of measures according to their expression: `SUM`, `MAX`, `MIN`, `COUNT` and `COUNT_DISTINCT`. Please choose the return type carefully; it is related to the error rate of `COUNT(DISTINCT)`.
+   * SUM
+
+     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-sum.png)
+
+   * MIN
+
+     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-min.png)
+
+   * MAX
+
+     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-max.png)
+
+   * COUNT
+
+     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-count.png)
+
+   * DISTINCT_COUNT
+
+     ![](/images/Kylin-Cube-Creation-Tutorial/9 meas-distinct.png)
+
+**Step 4. Filter**
+
+This step is optional. You can add some condition filters in `SQL` format (see the example below the screenshot).
+
+![](/images/Kylin-Cube-Creation-Tutorial/10 filter.png)
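+
+For illustration, such a filter is a SQL WHERE-style fragment evaluated on the source records; a hypothetical condition (the column names are examples only, not from your schema) could be:
+
+{% highlight Groff markup %}
+PRICE > 0
+{% endhighlight %}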
+
+**Step 5. Refresh Setting**
+
+This step is designed for incremental cube builds.
+
+![](/images/Kylin-Cube-Creation-Tutorial/11 refresh-setting1.png)
+
+Choose the partition type, partition column and start date.
+
+![](/images/Kylin-Cube-Creation-Tutorial/11 refresh-setting2.png)
+
+**Step 6. Advanced Setting**
+
+![](/images/Kylin-Cube-Creation-Tutorial/12 advanced.png)
+
+**Step 7. Overview & Save**
+
+You can review your cube and go back to previous steps to make modifications. Click the `Save` button to complete the cube creation.
+
+![](/images/Kylin-Cube-Creation-Tutorial/13 overview.png)

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/create_cube.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/create_cube.md 
b/website/_docs20/tutorial/create_cube.md
new file mode 100644
index 0000000..ea2216b
--- /dev/null
+++ b/website/_docs20/tutorial/create_cube.md
@@ -0,0 +1,198 @@
+---
+layout: docs20
+title:  Kylin Cube Creation
+categories: tutorial
+permalink: /docs20/tutorial/create_cube.html
+---
+
+This tutorial will guide you through creating a cube. It requires that you have at least one sample table in Hive. If you don't, you can follow this guide to create some sample data.
+  
+### I. Create a Project
+1. Go to `Query` page in top menu bar, then click `Manage Projects`.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/1 manage-prject.png)
+
+2. Click the `+ Project` button to add a new project.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/2 +project.png)
+
+3. Enter a project name, e.g., "Tutorial", with a description (optional), then click the `submit` button to send the request.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/3 new-project.png)
+
+4. After success, the project will show in the table.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/3.1 pj-created.png)
+
+### II. Sync up Hive Table
+1. Click `Model` in the top bar and then click the `Data Source` tab on the left, which lists all the tables loaded into Kylin; click the `Load Hive Table` button.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/4 +table.png)
+
+2. Enter the hive table names, separated by commas, and then click `Sync` to send the request. (A REST alternative is sketched after this list.)
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table.png)
+
+3. [Optional] If you want to browse the hive database to pick tables, click the `Load Hive Table From Tree` button.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/4 +table-tree.png)
+
+4. [Optional] Expand the database node, click to select the table to load, and 
then click `Sync`.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 
hive-table-tree.png)
+
+5. A success message will pop up. In the left `Tables` section, the newly loaded table is added. Clicking the table name will expand the columns.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 
hive-table-info.png)
+
+6. In the background, Kylin will run a MapReduce job to calculate the approximate cardinality of the newly synced table. After the job finishes, refresh the web page and then click the table name; the cardinality will be shown in the table info.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 
hive-table-cardinality.png)
+
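+If you prefer scripting, tables can also be loaded through Kylin's REST API; a sketch, assuming a server on localhost:7070 with the default ADMIN/KYLIN account (check the REST API reference of your Kylin version for the exact endpoint):
+
+{% highlight Groff markup %}
+# load two hive tables into project "Tutorial"
+curl -X POST --user ADMIN:KYLIN http://localhost:7070/kylin/api/tables/DEFAULT.KYLIN_SALES,DEFAULT.KYLIN_CAL_DT/Tutorial
+{% endhighlight %}
+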
+
+### III. Create Data Model
+Before creating a cube, you need to define a data model. The data model defines the star schema. One data model can be reused by multiple cubes.
+
+1. Click `Model` in the top bar, and then click the `Models` tab. Click the `+New` button, and in the drop-down list select `New Model`.
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 +model.png)
+
+2. Enter a name for the model, with an optional description.
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-name.png)
+
+3. In the `Fact Table` box, select the fact table of this data model.
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 
model-fact-table.png)
+
+4. [Optional] Click `Add Lookup Table` button to add a lookup table. Select 
the table name and join type (inner or left).
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 
model-lookup-table.png)
+
+5. [Optional] Click the `New Join Condition` button, select the FK column of the fact table on the left, and the PK column of the lookup table on the right. Repeat this if there is more than one join column.
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 
model-join-condition.png)
+
+6. Click "OK", repeat step 4 and 5 to add more lookup tables if any. After 
finished, click "Next".
+
+7. The "Dimensions" page allows to select the columns that will be used as 
dimension in the child cubes. Click the `Columns` cell of a table, in the 
drop-down list select the column to the list. 
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 
model-dimensions.png)
+
+8. Click "Next" go to the "Measures" page, select the columns that will be 
used in measure/metrics. The measure column can only from fact table. 
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 
model-measures.png)
+
+9. Click "Next" to the "Settings" page. If the data in fact table increases by 
day, select the corresponding date column in the `Partition Date Column`, and 
select the date format, otherwise leave it as blank.
+
+10. [Optional] Select `Cube Size`, which is an indicator of the scale of the cube; by default it is `MEDIUM`.
+
+11. [Optional] If some records should be excluded from the cube, such as dirty data, you can enter the condition in `Filter` (an illustrative condition is sketched after this list).
+
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 
model-partition-column.png)
+
+12. Click `Save` and then select `Yes` to save the data model. After it is created, the data model will be shown in the `Models` list on the left.
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/6 model-created.png)
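+
+As an illustration of the model `Filter` mentioned in step 11: the condition is a SQL WHERE-style fragment evaluated on the source records; a hypothetical condition (the column names are examples, not from your schema) could be:
+
+{% highlight Groff markup %}
+PRICE > 0 AND SELLER_ID IS NOT NULL
+{% endhighlight %}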
+
+### IV. Create Cube
+After the data model is created, you can start to create a cube. 
+
+Click `Model` in the top bar, and then click the `Models` tab. Click the `+New` button, and in the drop-down list select `New Cube`.
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 new-cube.png)
+
+
+**Step 1. Cube Info**
+
+Select the data model and enter the cube name; click `Next` to go to the next step.
+
+You can use letters, numbers and '_' to name your cube (blank spaces are not allowed in the name). `Notification List` is a list of email addresses which will be notified on cube job success/failure.
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 cube-info.png)
+    
+
+**Step 2. Dimensions**
+
+1. Click `Add Dimension`; it pops up two options, "Normal" and "Derived": "Normal" adds a normal, independent dimension column, while "Derived" adds a derived dimension column. Read more in [How to optimize cubes](/docs15/howto/howto_optimize_cubes.html).
+
+2. Click "Normal" and then select a dimension column, give it a meaningful 
name.
+
+    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 
cube-dimension-normal.png)
+    
+3. [Optional] Click "Derived" and then pickup 1 more multiple columns on 
lookup table, give them a meaningful name.
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 
cube-dimension-derived.png)
+
+4. Repeat steps 2 and 3 to add all dimension columns; you can do this in batch for "Normal" dimensions with the `Auto Generator` button. 
+
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/7 
cube-dimension-batch.png)
+
+5. Click "Next" after select all dimensions.
+
+**Step 3. Measures**
+
+1. Click the `+Measure` to add a new measure.
+   ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 meas-+meas.png)
+
+2. There are 6 types of measures according to their expression: `SUM`, `MAX`, `MIN`, `COUNT`, `COUNT_DISTINCT` and `TOP_N`. Properly select the return type for `COUNT_DISTINCT` and `TOP_N`, as it will impact the cube size.
+   * SUM
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-sum.png)
+
+   * MIN
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-min.png)
+
+   * MAX
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-max.png)
+
+   * COUNT
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 
measure-count.png)
+
+   * DISTINCT_COUNT
+   This measure has two implementations: 
+   a) approximate implementation with HyperLogLog: select an acceptable error rate; a lower error rate takes more storage;
+   b) precise implementation with bitmap (see the limitation in https://issues.apache.org/jira/browse/KYLIN-1186). 
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 
measure-distinct.png)
+
+   Please note: distinct count is a very heavy data type; it is slower to build and query compared to other measures.
+
+   * TOP_N
+   The approximate TopN measure pre-calculates the top records in each dimension combination; it provides higher query-time performance than having no pre-calculation. Two parameters need to be specified here: the first is the column that will be used as the metric for top records (aggregated with SUM and then sorted in descending order); the second is the literal ID, which represents the record, like seller_id. (An example of the kind of query this accelerates is sketched below.)
+
+   Properly select the return type, depending on how many top records you need to inspect: top 10, top 100 or top 1000. 
+
+     ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/8 measure-topn.png)
+
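+For intuition, such a TopN measure pre-computes answers for queries of the following shape (a hypothetical query against the sample tables):
+
+{% highlight Groff markup %}
+-- top 100 sellers by total sales in a period
+SELECT SELLER_ID, SUM(PRICE)
+FROM KYLIN_SALES
+WHERE PART_DT BETWEEN '2012-01-01' AND '2012-12-31'
+GROUP BY SELLER_ID
+ORDER BY SUM(PRICE) DESC
+LIMIT 100
+{% endhighlight %}
+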
+
+**Step 4. Refresh Setting**
+
+This step is designed for incremental cube build. 
+
+`Auto Merge Time Ranges (days)`: merge the small segments into medium and large segments automatically. If you don't want auto merge, remove the two default ranges.
+
+`Retention Range (days)`: only keep segments whose data falls within the given number of past days; older segments will be automatically dropped from the head. 0 means this feature is disabled.
+
+`Partition Start Date`: the start date of this cube.
+
+![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/9 refresh-setting1.png)
+
+**Step 5. Advanced Setting**
+
+`Aggregation Groups`: by default Kylin puts all dimensions into one aggregation group; you can create multiple aggregation groups if you know your query patterns well. For the concepts of "Mandatory Dimensions", "Hierarchy Dimensions" and "Joint Dimensions", read this blog: [New Aggregation Group](/blog/2016/02/18/new-aggregation-group/)
+
+`Rowkeys`: the rowkeys are composed of the dimensions' encoded values. "Dictionary" is the default encoding method; if a dimension is not suitable for dictionary encoding (e.g., cardinality > 10 million), select "false" and then enter a fixed length for that dimension, usually the maximum length of that column; if a value is longer than that size, it will be truncated. Please note that without dictionary encoding, the cube size might be much bigger.
+
+You can drag & drop a dimension column to adjust its position in rowkey; Put 
the mandantory dimension at the begining, then followed the dimensions that 
heavily involved in filters (where condition). Put high cardinality dimensions 
ahead of low cardinality dimensions.
+
+
+**Step 6. Overview & Save**
+
+You can review your cube and go back to previous steps to modify it. Click the `Save` button to complete the cube creation.
+
+![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/10 overview.png)
+
+Cheers! Now the cube is created; you can go ahead to build and play with it.

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/cube_build_job.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/cube_build_job.cn.md 
b/website/_docs20/tutorial/cube_build_job.cn.md
new file mode 100644
index 0000000..a0b2a6b
--- /dev/null
+++ b/website/_docs20/tutorial/cube_build_job.cn.md
@@ -0,0 +1,66 @@
+---
+layout: docs20-cn
+title:  Kylin Cube Build and Job Monitoring Tutorial
+categories: tutorial
+permalink: /cn/docs20/tutorial/cube_build_job.html
+version: v1.2
+since: v0.7.1
+---
+
+### Cube Build
+First of all, make sure that you have permission on the cube you want to build.
+
+1. In the `Cubes` page, click the `Action` drop-down button on the right side of a cube row and select the `Build` operation.
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/1 
action-build.png)
+
+2. A pop-up window will appear after the selection.
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/2 pop-up.png)
+
+3. Click the `END DATE` input box to select the end date of this incremental cube build.
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/3 end-date.png)
+
+4. Click `Submit` to send the request.
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4 submit.png)
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4.1 success.png)
+
+   After the request succeeds, you will see the newly created job in the `Jobs` page.
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/5 jobs-page.png)
+
+5. To discard this job, click the `Discard` button.
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/6 discard.png)
+
+### Job Monitoring
+In the `Jobs` page, click the job detail button to see the detailed information shown on the right side.
+
+![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/7 job-steps.png)
+
+The job detail provides a step-by-step record to trace a job. You can hover over a step status icon to see its basic status and information.
+
+![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/8 hover-step.png)
+
+Click the icon buttons shown in each step to see the details: `Parameters`, `Log`, `MRJob`, `EagleMonitoring`.
+
+* Parameters
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 parameters.png)
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 
parameters-d.png)
+
+* Log
+        
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log.png)
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 log-d.png)
+
+* MRJob(MapReduce Job)
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob.png)
+
+   ![]( /images/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 mrjob-d.png)

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/cube_build_job.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/cube_build_job.md 
b/website/_docs20/tutorial/cube_build_job.md
new file mode 100644
index 0000000..0810c5b
--- /dev/null
+++ b/website/_docs20/tutorial/cube_build_job.md
@@ -0,0 +1,67 @@
+---
+layout: docs20
+title:  Kylin Cube Build and Job Monitoring
+categories: tutorial
+permalink: /docs20/tutorial/cube_build_job.html
+---
+
+### Cube Build
+First of all, make sure that you have authority of the cube you want to build.
+
+1. In the `Models` page, click the `Action` drop-down button on the right of a cube and select the `Build` operation.
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/1 
action-build.png)
+
+2. A pop-up window appears after the selection; click the `END DATE` input box to select the end date of this incremental cube build.
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/3 
end-date.png)
+
+4. Click `Submit` to send the build request. After success, you will see the new job in the `Monitor` page. (A REST sketch for triggering builds follows this list.)
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/4 
jobs-page.png)
+
+5. The new job is in "pending" status; after a while, it will be started to 
run and you will see the progress by refresh the web page or click the refresh 
button.
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/5 
job-progress.png)
+
+
+6. Wait for the job to finish. If you want to discard it in between, click the `Actions` -> `Discard` button.
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/6 
discard.png)
+
+7. After the job is 100% finished, the cube's status becomes "Ready", meaning it is ready to serve SQL queries. In the `Model` tab, find the cube and click the cube name to expand the section; the "HBase" tab lists the cube segments. Each segment has a start/end time; its underlying HBase table information is also listed.
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/10 
cube-segment.png)
+
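+Builds can also be triggered without the GUI, through Kylin's REST API; a sketch, assuming the default ADMIN/KYLIN account and a server on localhost:7070 (times are epoch milliseconds; check the REST API reference of your version for the exact payload):
+
+{% highlight Groff markup %}
+# build the cube for the time range [startTime, endTime); 1388534400000 is 2014-01-01 UTC
+curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{"startTime": 0, "endTime": 1388534400000, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/build
+{% endhighlight %}
+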
+If you have more source data, repeat the steps above to build it into the cube.
+
+### Job Monitoring
+In the `Monitor` page, click the job detail button to see the detailed information shown on the right side.
+
+![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/7 
job-steps.png)
+
+The detail information of a job provides a step-by-step record to trace it. You can hover over a step status icon to see the basic status and information.
+
+![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/8 
hover-step.png)
+
+Click the icon buttons showing in each step to see the details: `Parameters`, 
`Log`, `MRJob`.
+
+* Parameters
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 
parameters.png)
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 
parameters-d.png)
+
+* Log
+        
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 
log.png)
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 
log-d.png)
+
+* MRJob(MapReduce Job)
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 
mrjob.png)
+
+   ![](/images/tutorial/1.5/Kylin-Cube-Build-and-Job-Monitoring-Tutorial/9 
mrjob-d.png)
+
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/cube_spark.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/cube_spark.md 
b/website/_docs20/tutorial/cube_spark.md
new file mode 100644
index 0000000..5f7893a
--- /dev/null
+++ b/website/_docs20/tutorial/cube_spark.md
@@ -0,0 +1,166 @@
+---
+layout: docs20
+title:  Build Cube with Spark (beta)
+categories: tutorial
+permalink: /docs20/tutorial/cube_spark.html
+---
+Kylin v2.0 introduces the Spark cube engine, which uses Apache Spark to replace MapReduce in the cube build step; you can check [this blog](/blog/2017/02/23/by-layer-spark-cubing/) for an overall picture. This document uses the sample cube to demonstrate how to try the new engine.
+
+## Preparation
+To finish this tutorial, you need a Hadoop environment which has Kylin v2.0.0 or above installed. Here we will use the Hortonworks HDP 2.4 Sandbox VM, in which the Hadoop components as well as Hive/HBase have already been started. 
+
+## Install Kylin v2.0.0 beta
+
+Download the Kylin v2.0.0 beta for HBase 1.x from Kylin's download page, and then uncompress the tar ball into the */usr/local/* folder:
+
+{% highlight Groff markup %}
+
+wget https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.0.0-beta/apache-kylin-2.0.0-beta-hbase1x.tar.gz -P /tmp
+
+tar -zxvf /tmp/apache-kylin-2.0.0-beta-hbase1x.tar.gz -C /usr/local/
+
+export KYLIN_HOME=/usr/local/apache-kylin-2.0.0-SNAPSHOT-bin
+{% endhighlight %}
+
+## Prepare "kylin.env.hadoop-conf-dir"
+
+To run Spark on Yarn, you need to specify the **HADOOP_CONF_DIR** environment variable, which is the directory that contains the (client side) configuration files for Hadoop. In many Hadoop distributions the directory is "/etc/hadoop/conf"; but Kylin needs to access not only HDFS, Yarn and Hive, but also HBase, so the default directory might not have all the necessary files. In this case, you need to create a new directory and then copy or link those client files (core-site.xml, yarn-site.xml, hive-site.xml and hbase-site.xml) there. In HDP 2.4, there is a conflict between hive-tez and Spark, so you need to change the default engine from "tez" to "mr" when copying for Kylin.
+
+{% highlight Groff markup %}
+
+mkdir $KYLIN_HOME/hadoop-conf
+ln -s /etc/hadoop/conf/core-site.xml $KYLIN_HOME/hadoop-conf/core-site.xml 
+ln -s /etc/hadoop/conf/yarn-site.xml $KYLIN_HOME/hadoop-conf/yarn-site.xml 
+ln -s /etc/hbase/2.4.0.0-169/0/hbase-site.xml $KYLIN_HOME/hadoop-conf/hbase-site.xml 
+cp /etc/hive/2.4.0.0-169/0/hive-site.xml $KYLIN_HOME/hadoop-conf/hive-site.xml 
+vi $KYLIN_HOME/hadoop-conf/hive-site.xml   # change "hive.execution.engine" value from "tez" to "mr"
+
+{% endhighlight %}
+
+Now, let Kylin know this directory with property "kylin.env.hadoop-conf-dir" 
in kylin.properties:
+
+{% highlight Groff markup %}
+kylin.env.hadoop-conf-dir=/usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/hadoop-conf
+{% endhighlight %}
+
+If this property isn't set, Kylin will use the directory that "hive-site.xml" is located in; since that folder may have no "hbase-site.xml", you will get an HBase/ZK connection error in Spark.
+
+## Check Spark configuration
+
+Kylin embeds a Spark binary (v1.6.3) in $KYLIN_HOME/spark; all the Spark configurations can be managed in $KYLIN_HOME/conf/kylin.properties with the prefix *"kylin.engine.spark-conf."*. These properties will be extracted and applied when submitting the Spark job; e.g., if you configure "kylin.engine.spark-conf.spark.executor.memory=4G", Kylin will use "--conf spark.executor.memory=4G" as a parameter when executing "spark-submit".
+
+Before you run Spark cubing, we suggest taking a look at these configurations and customizing them for your cluster. Below are the default configurations, which are also the minimal config for a sandbox (1 executor with 1GB memory); usually in a normal cluster you need many more executors, each with at least 4GB memory and 2 cores:
+
+{% highlight Groff markup %}
+kylin.engine.spark-conf.spark.master=yarn
+kylin.engine.spark-conf.spark.submit.deployMode=cluster
+kylin.engine.spark-conf.spark.yarn.queue=default
+kylin.engine.spark-conf.spark.executor.memory=1G
+kylin.engine.spark-conf.spark.executor.cores=2
+kylin.engine.spark-conf.spark.executor.instances=1
+kylin.engine.spark-conf.spark.eventLog.enabled=true
+kylin.engine.spark-conf.spark.eventLog.dir=hdfs\:///kylin/spark-history
+kylin.engine.spark-conf.spark.history.fs.logDirectory=hdfs\:///kylin/spark-history
+#kylin.engine.spark-conf.spark.yarn.jar=hdfs://namenode:8020/kylin/spark/spark-assembly-1.6.3-hadoop2.6.0.jar
+#kylin.engine.spark-conf.spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec
+
+## uncomment for HDP
+#kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
+#kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
+#kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
+
+{% endhighlight %}
+
+When running on the Hortonworks platform, you need to specify "hdp.version" in the Java options for Yarn containers, so please uncomment the last three lines in kylin.properties. 
+
+Besides, in order to avoid repeatedly uploading the Spark assembly jar to Yarn, you can manually upload it once and then configure the jar's HDFS location; please note that the HDFS location needs to be a fully qualified name.
+
+{% highlight Groff markup %}
+hadoop fs -mkdir -p /kylin/spark/
+hadoop fs -put $KYLIN_HOME/spark/lib/spark-assembly-1.6.3-hadoop2.6.0.jar /kylin/spark/
+{% endhighlight %}
+
+After doing that, the config in kylin.properties will be:
+{% highlight Groff markup %}
+kylin.engine.spark-conf.spark.yarn.jar=hdfs://sandbox.hortonworks.com:8020/kylin/spark/spark-assembly-1.6.3-hadoop2.6.0.jar
+kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
+kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
+kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
+{% endhighlight %}
+
+All the "kylin.engine.spark-conf.*" parameters can be overwritten at Cube or 
Project level, this gives more flexibility to the user.
+
+## Create and modify sample cube
+
+Run the sample.sh to create the sample cube, and then start Kylin server:
+
+{% highlight Groff markup %}
+
+$KYLIN_HOME/bin/sample.sh
+$KYLIN_HOME/bin/kylin.sh start
+
+{% endhighlight %}
+
+After Kylin is started, access the Kylin web GUI, edit the "kylin_sales" cube, and in the "Advanced Setting" page change the "Cube Engine" from "MapReduce" to "Spark (Beta)":
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/1_cube_engine.png)
+
+Click "Next" to the "Configuration Overwrites" page, click "+Property" to add 
property "kylin.engine.spark.rdd-partition-cut-mb" with value "100" (reasons 
below):
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/2_overwrite_partition.png)
+
+The sample cube has two memory-hungry measures: a "COUNT DISTINCT" and a "TOPN(100)". Their size estimation can be inaccurate when the source data is small: the estimated size is much larger than the real size, which causes many more RDD partitions to be split and slows down the build. Here 100 is a more reasonable number for it. Click "Next" and "Save" to save the cube.
+
+
+## Build Cube with Spark
+
+Click "Build", select current date as the build end date. Kylin generates a 
build job in the "Monitor" page, in which the 7th step is the Spark cubing. The 
job engine starts to execute the steps in sequence. 
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/2_job_with_spark.png)
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/3_spark_cubing_step.png)
+
+When Kylin executes this step, you can monitor the status in the Yarn resource manager. Clicking the "Application Master" link will open the Spark web UI, which shows the progress of each stage and detailed information.
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/4_job_on_rm.png)
+
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/5_spark_web_gui.png)
+
+
+After all steps are successfully executed, the cube becomes "Ready" and you can query it as normal.
+
+## Troubleshooting
+
+When you get an error, you should check "logs/kylin.log" first. It contains the full Spark command that Kylin executes, e.g.:
+
+{% highlight Groff markup %}
+2017-03-06 14:44:38,574 INFO  [Job 2d5c1178-c6f6-4b50-8937-8e5e3b39227e-306] 
spark.SparkExecutable:121 : cmd:export 
HADOOP_CONF_DIR=/usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/hadoop-conf && 
/usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/spark/bin/spark-submit --class 
org.apache.kylin.common.util.SparkEntry  --conf spark.executor.instances=1  
--conf 
spark.yarn.jar=hdfs://sandbox.hortonworks.com:8020/kylin/spark/spark-assembly-1.6.3-hadoop2.6.0.jar
  --conf spark.yarn.queue=default  --conf 
spark.yarn.am.extraJavaOptions=-Dhdp.version=current  --conf 
spark.history.fs.logDirectory=hdfs:///kylin/spark-history  --conf 
spark.driver.extraJavaOptions=-Dhdp.version=current  --conf spark.master=yarn  
--conf spark.executor.extraJavaOptions=-Dhdp.version=current  --conf 
spark.executor.memory=1G  --conf spark.eventLog.enabled=true  --conf 
spark.eventLog.dir=hdfs:///kylin/spark-history  --conf spark.executor.cores=2  
--conf spark.submit.deployMode=cluster --files 
/etc/hbase/2.4.0.0-169/0/hbase-site.xml
  --jars 
/usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/spark/lib/spark-assembly-1.6.3-hadoop2.6.0.jar,/usr/hdp/2.4.0.0-169/hbase/lib/htrace-core-3.1.0-incubating.jar,/usr/hdp/2.4.0.0-169/hbase/lib/hbase-client-1.1.2.2.4.0.0-169.jar,/usr/hdp/2.4.0.0-169/hbase/lib/hbase-common-1.1.2.2.4.0.0-169.jar,/usr/hdp/2.4.0.0-169/hbase/lib/hbase-protocol-1.1.2.2.4.0.0-169.jar,/usr/hdp/2.4.0.0-169/hbase/lib/metrics-core-2.2.0.jar,/usr/hdp/2.4.0.0-169/hbase/lib/guava-12.0.1.jar,
 /usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/lib/kylin-job-2.0.0-SNAPSHOT.jar 
-className org.apache.kylin.engine.spark.SparkCubingByLayer -hiveTable 
kylin_intermediate_kylin_sales_cube_555c4d32_40bb_457d_909a_1bb017bf2d9e 
-segmentId 555c4d32-40bb-457d-909a-1bb017bf2d9e -confPath 
/usr/local/apache-kylin-2.0.0-SNAPSHOT-bin/conf -output 
hdfs:///kylin/kylin_metadata/kylin-2d5c1178-c6f6-4b50-8937-8e5e3b39227e/kylin_sales_cube/cuboid/
 -cubename kylin_sales_cube
+
+{% endhighlight %}
+
+You can copy the command and execute it manually in a shell, then tune the parameters quickly; during execution, you can access the Yarn resource manager to check more. If the job has already finished, you can check the history info in the Spark history server. 
+
+By default Kylin outputs the history to "hdfs:///kylin/spark-history"; you need to start the Spark history server on that directory, or change it to use your existing Spark history server's event directory in conf/kylin.properties with the parameters "kylin.engine.spark-conf.spark.eventLog.dir" and "kylin.engine.spark-conf.spark.history.fs.logDirectory".
+
+The following command will start a Spark history server instance on Kylin's output directory; before running it, make sure you have stopped the existing Spark history server in the sandbox:
+
+{% highlight Groff markup %}
+$KYLIN_HOME/spark/sbin/start-history-server.sh 
hdfs://sandbox.hortonworks.com:8020/kylin/spark-history 
+{% endhighlight %}
+
+In a web browser, access "http://sandbox:18080"; it shows the job history:
+
+   ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/9_spark_history.png)
+
+Click a specific job; there you will see the detailed runtime information, which is very helpful for troubleshooting and performance tuning.
+
+## Go further
+
+If you're a Kylin administrator but new to Spark, we suggest you go through the [Spark documents](https://spark.apache.org/docs/1.6.3/), and don't forget to update the configurations accordingly. Spark's performance relies on the cluster's memory and CPU resources, while Kylin's cube build is a heavy task when it has a complex data model and a huge dataset to build at one time. If your cluster resources can't fulfill the demand, errors like "OutOfMemoryError" will be thrown in Spark executors, so please use it properly. For a cube which has UHC dimensions, many combinations (e.g., a full cube with more than 12 dimensions), or memory-hungry measures (Count Distinct, Top-N), we suggest using the MapReduce engine. If your cube model is simple, all measures are SUM/MIN/MAX/COUNT, and the source data is of small to medium scale, the Spark engine would be a good choice. Besides, streaming build isn't supported in this engine so far (KYLIN-2484).
+
+Now the Spark engine is in public beta; if you have any question, comment, or bug fix, you are welcome to discuss it in d...@kylin.apache.org.

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/cube_streaming.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/cube_streaming.md 
b/website/_docs20/tutorial/cube_streaming.md
new file mode 100644
index 0000000..08e5bf9
--- /dev/null
+++ b/website/_docs20/tutorial/cube_streaming.md
@@ -0,0 +1,219 @@
+---
+layout: docs20
+title:  Scalable Cubing from Kafka (beta)
+categories: tutorial
+permalink: /docs20/tutorial/cube_streaming.html
+---
+Kylin v1.6 releases the scalable streaming cubing function; it leverages Hadoop to consume the data from Kafka to build the cube. You can check [this blog](/blog/2016/10/18/new-nrt-streaming/) for the high-level design. This doc is a step-by-step tutorial illustrating how to create and build a sample cube.
+
+## Preparation
+To finish this tutorial, you need a Hadoop environment which has Kylin v1.6.0 or above installed, and also a Kafka (v0.10.0 or above) running; previous Kylin versions have a couple of issues, so please upgrade your Kylin instance first.
+
+In this tutorial, we will use Hortonworks HDP 2.2.4 Sandbox VM + Kafka 
v0.10.0(Scala 2.10) as the environment.
+
+## Install Kafka 0.10.0.0 and Kylin
+Don't use HDP 2.2.4's built-in Kafka as it is too old; stop it first if it is running.
+{% highlight Groff markup %}
+curl -s http://mirrors.tuna.tsinghua.edu.cn/apache/kafka/0.10.0.0/kafka_2.10-0.10.0.0.tgz | tar -xz -C /usr/local/
+
+cd /usr/local/kafka_2.10-0.10.0.0/
+
+bin/kafka-server-start.sh config/server.properties &
+
+{% endhighlight %}
+
+Download Kylin v1.6 from the download page, and expand the tar ball into the /usr/local/ folder.
+
+## Create sample Kafka topic and populate data
+
+Create a sample topic "kylindemo", with 3 partitions:
+
+{% highlight Groff markup %}
+
+bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 3 --topic kylindemo
+Created topic "kylindemo".
+{% endhighlight %}
+
+Put sample data into this topic; Kylin has a utility class which can do this:
+
+{% highlight Groff markup %}
+export KAFKA_HOME=/usr/local/kafka_2.10-0.10.0.0
+export KYLIN_HOME=/usr/local/apache-kylin-1.6.0-bin
+
+cd $KYLIN_HOME
+./bin/kylin.sh org.apache.kylin.source.kafka.util.KafkaSampleProducer --topic kylindemo --broker localhost:9092
+{% endhighlight %}
+
+This tool will send 100 records to Kafka every second. Please keep it running during this tutorial. You can now check the sample messages with kafka-console-consumer.sh:
+
+{% highlight Groff markup %}
+cd $KAFKA_HOME
+bin/kafka-console-consumer.sh --zookeeper localhost:2181 --bootstrap-server localhost:9092 --topic kylindemo --from-beginning
+{"amount":63.50375137330458,"category":"TOY","order_time":1477415932581,"device":"Other","qty":4,"user":{"id":"bf249f36-f593-4307-b156-240b3094a1c3","age":21,"gender":"Male"},"currency":"USD","country":"CHINA"}
+{"amount":22.806058795736583,"category":"ELECTRONIC","order_time":1477415932591,"device":"Andriod","qty":1,"user":{"id":"00283efe-027e-4ec1-bbed-c2bbda873f1d","age":27,"gender":"Female"},"currency":"USD","country":"INDIA"}
+
+ {% endhighlight %}
+
+## Define a table from streaming
+Start the Kylin server with "$KYLIN_HOME/bin/kylin.sh start", log in to the Kylin web GUI at http://sandbox:7070/kylin/, and select an existing project or create a new one; click "Model" -> "Data Source", then click the icon "Add Streaming Table";
+
+   
![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/1_Add_streaming_table.png)
+
+In the pop-up dialogue, enter a sample record which you got from the kafka-console-consumer and click the ">>" button; Kylin parses the JSON message and lists all the properties;
+
+You need to give a logical table name for this streaming data source; the name will be used for SQL queries later; here enter "STREAMING_SALES_TABLE" as an example in the "Table Name" field.
+
+You need to select a timestamp field which will be used to identify the time of a message; Kylin can derive other time values like "year_start" and "quarter_start" from this time column, which gives you more flexibility in building and querying the cube. Here check "order_time". You can deselect the properties which are not needed for the cube. Here let's keep all fields.
+
+Notice that Kylin supports structured (or say "embedded") messages from v1.6; it will convert them into a flat table structure, by default using "_" as the separator of the structured properties.
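+
+For example, the embedded "user" object in the sample message would be flattened into separate columns (a sketch of the default mapping):
+
+{% highlight Groff markup %}
+{"user":{"id":"...","age":21,"gender":"Male"}, ...}
+   -->  columns USER_ID, USER_AGE, USER_GENDER
+{% endhighlight %}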
+
+   
![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/2_Define_streaming_table.png)
+
+
+Click "Next". On this page, provide the Kafka cluster information; Enter 
"kylindemo" as "Topic" name; The cluster has 1 broker, whose host name is 
"sandbox", port is "9092", click "Save".
+
+   ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Kafka_setting.png)
+
+In "Advanced setting" section, the "timeout" and "buffer size" are the 
configurations for connecting with Kafka, keep them. 
+
+In "Parser Setting", by default Kylin assumes your message is JSON format, and 
each record's timestamp column (specified by "tsColName") is a bigint (epoch 
time) value; in this case, you just need set the "tsColumn" to "order_time"; 
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Paser_setting.png)
+
+In a real case, if the timestamp value is a string-valued timestamp like "Jul 20, 2016 9:59:17 AM", you need to specify the parser class with "tsParser" and the time pattern with "tsPattern" like this:
+
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/3_Paser_time.png)
+
+Click "Submit" to save the configurations. Now a "Streaming" table is created.
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/4_Streaming_table.png)
+
+## Define data model
+With the table defined in the previous step, now we can create the data model. The steps are almost the same as creating a normal data model, but there are two requirements:
+
+* Streaming Cube doesn't support joins with lookup tables; when defining the data model, only select the fact table, no lookup tables;
+* Streaming Cube must be partitioned; if you're going to build the cube incrementally at the minute level, select "MINUTE_START" as the cube's partition date column. If at the hour level, select "HOUR_START".
+
+Here we pick 13 dimension columns and 2 measure columns:
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/5_Data_model_dimension.png)
+
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/6_Data_model_measure.png)
+Save the data model.
+
+## Create Cube
+
+The streaming cube is almost the same as a normal cube. A couple of points need your attention:
+
+* The partition time column should be a dimension of the Cube. In Streaming 
OLAP the time is always a query condition, and Kylin will leverage this to 
narrow down the scanned partitions.
+* Don't use "order\_time" as dimension as that is pretty fine-grained; suggest 
to use "mintue\_start", "hour\_start" or other, depends on how you will inspect 
the data.
+* Define "year\_start", "quarter\_start", "month\_start", "day\_start", 
"hour\_start", "minute\_start" as a hierarchy to reduce the combinations to 
calculate.
+* In the "refersh setting" step, create more merge ranges, like 0.5 hour, 4 
hours, 1 day, and then 7 days; This will help to control the cube segment 
number.
+* In the "rowkeys" section, drag&drop the "minute\_start" to the head 
position, as for streaming queries, the time condition is always appeared; 
putting it to head will help to narrow down the scan range.
+
+       
![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/8_Cube_dimension.png)
+
+       
![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/9_Cube_measure.png)
+
+       ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/10_agg_group.png)
+
+       ![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/11_Rowkey.png)
+
+Save the cube.
+
+## Run a build
+
+You can trigger the build from the web GUI by clicking "Actions" -> "Build", or by sending a request to the Kylin RESTful API with the 'curl' command:
+
+{% highlight Groff markup %}
+curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/build2
+{% endhighlight %}
+
+Please note the API endpoint is different from that of a normal cube (this URL ends with "build2").
+
+Here 0 means starting from the last position, and 9223372036854775807 (Long.MAX_VALUE) means going to the end position of the Kafka topic. If it is the first time to build (no previous segment), Kylin will seek to the beginning of the topic as the start position. 
+
+In the "Monitor" page, a new job is generated; Wait it 100% finished.
+
+## Click the "Insight" tab, compose a SQL to run, e.g:
+
+ {% highlight Groff markup %}
+select minute_start, count(*), sum(amount), sum(qty) from streaming_sales_table group by minute_start order by minute_start
+ {% endhighlight %}
+
+The result looks like below.
+![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/13_Query_result.png)
+
+
+## Automate the build
+
+Once the first build and query succeed, you can schedule incremental builds at a certain frequency. Kylin will record the offsets of each build; when it receives a build request, it will start from the last end position and then seek the latest offsets from Kafka. With the REST API you can trigger it with any scheduling tool, like Linux cron:
+
+  {% highlight Groff markup %}
+crontab -e
+*/5 * * * * curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/build2
+ {% endhighlight %}
+
+Now you can sit down and watch the cube be automatically built from streaming. And when the cube segments accumulate to a bigger time range, Kylin will automatically merge them into a bigger segment.
+
+## Troubleshooting
+
+ * You may encounter the following error when running "kylin.sh":
+{% highlight Groff markup %}
+Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/kafka/clients/producer/Producer
+       at java.lang.Class.getDeclaredMethods0(Native Method)
+       at java.lang.Class.privateGetDeclaredMethods(Class.java:2615)
+       at java.lang.Class.getMethod0(Class.java:2856)
+       at java.lang.Class.getMethod(Class.java:1668)
+       at sun.launcher.LauncherHelper.getMainMethod(LauncherHelper.java:494)
+       at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:486)
+Caused by: java.lang.ClassNotFoundException: 
org.apache.kafka.clients.producer.Producer
+       at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
+       at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
+       at java.security.AccessController.doPrivileged(Native Method)
+       at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
+       at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
+       at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
+       at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
+       ... 6 more
+{% endhighlight %}
+
+The reason is that Kylin wasn't able to find the proper Kafka client jars; make sure you have properly set the "KAFKA_HOME" environment variable.
+
+ * Get "killed by admin" error in the "Build Cube" step
+
+ Within a Sandbox VM, YARN may not allocate the requested memory resource to the MR job, as the "inmem" cubing algorithm requests more memory. You can bypass this by requesting less memory: edit "conf/kylin_job_conf_inmem.xml", and change the following two parameters like this:
+
+ {% highlight Groff markup %}
+    <property>
+        <name>mapreduce.map.memory.mb</name>
+        <value>1072</value>
+        <description></description>
+    </property>
+
+    <property>
+        <name>mapreduce.map.java.opts</name>
+        <value>-Xmx800m</value>
+        <description></description>
+    </property>
+ {% endhighlight %}
+
+ * If there is already a bunch of history messages in Kafka and you don't want to build from the very beginning, you can trigger a call to set the current end position as the start for the cube:
+
+{% highlight Groff markup %}
+curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" -d '{ "sourceOffsetStart": 0, "sourceOffsetEnd": 9223372036854775807, "buildType": "BUILD"}' http://localhost:7070/kylin/api/cubes/{your_cube_name}/init_start_offsets
+{% endhighlight %}
+
+ * If some build job gets an error and you discard it, there will be a hole (or say gap) left in the cube. Since Kylin builds from the last position each time, you can't expect the hole to be filled by normal builds. Kylin provides an API to check and fill the holes. 
+
+Check holes:
+ {% highlight Groff markup %}
+curl -X GET --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" http://localhost:7070/kylin/api/cubes/{your_cube_name}/holes
+{% endhighlight %}
+
+If the result is an empty array, there is no hole; otherwise, trigger Kylin to fill them:
+ {% highlight Groff markup %}
+curl -X PUT --user ADMIN:KYLIN -H "Content-Type: application/json;charset=utf-8" http://localhost:7070/kylin/api/cubes/{your_cube_name}/holes
+{% endhighlight %}
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/flink.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/flink.md 
b/website/_docs20/tutorial/flink.md
new file mode 100644
index 0000000..d74f602
--- /dev/null
+++ b/website/_docs20/tutorial/flink.md
@@ -0,0 +1,249 @@
+---
+layout: docs20
+title:  Connect from Apache Flink
+categories: tutorial
+permalink: /docs20/tutorial/flink.html
+---
+
+
+### Introduction
+
+This document describes how to use Kylin as a data source in Apache Flink.
+
+There were several attempts to do this in Scala and JDBC, but none of them worked: 
+
+* 
[attempt1](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/JDBCInputFormat-preparation-with-Flink-1-1-SNAPSHOT-and-Scala-2-11-td5371.html)
  
+* 
[attempt2](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Type-of-TypeVariable-OT-in-class-org-apache-flink-api-common-io-RichInputFormat-could-not-be-determi-td7287.html)
  
+* 
[attempt3](http://stackoverflow.com/questions/36067881/create-dataset-from-jdbc-source-in-flink-using-scala)
  
+* 
[attempt4](https://codegists.com/snippet/scala/jdbcissuescala_zeitgeist_scala); 
+
+We will try to use CreateInput and [JDBCInputFormat](https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/batch/index.html) in batch mode and access Kylin via JDBC. But it isn't implemented in Scala, only in Java ([MailList](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/jdbc-JDBCInputFormat-td9393.html)). This doc will go step by step, solving these problems.
+
+### Pre-requisites
+
+* Need an instance of Kylin, with a Cube; [Sample Cube](kylin_sample.html) 
will be good enough.
+* [Scala](http://www.scala-lang.org/) and [Apache 
Flink](http://flink.apache.org/) Installed
+* [IntelliJ](https://www.jetbrains.com/idea/) Installed and configured for 
Scala/Flink (see [Flink IDE setup 
guide](https://ci.apache.org/projects/flink/flink-docs-release-1.1/internals/ide_setup.html)
 )
+
+### Used software:
+
+* [Apache Flink](http://flink.apache.org/downloads.html) v1.2-SNAPSHOT
+* [Apache Kylin](http://kylin.apache.org/download/) v1.5.2 (v1.6.0 also works)
+* [IntelliJ](https://www.jetbrains.com/idea/download/#section=linux)  v2016.2
+* [Scala](downloads.lightbend.com/scala/2.11.8/scala-2.11.8.tgz)  v2.11
+
+### Starting point:
+
+This can be our initial skeleton: 
+
+{% highlight Groff markup %}
+import org.apache.flink.api.scala._
+val env = ExecutionEnvironment.getExecutionEnvironment
+val inputFormat = JDBCInputFormat.buildJDBCInputFormat()
+  .setDrivername("org.apache.kylin.jdbc.Driver")
+  .setDBUrl("jdbc:kylin://172.17.0.2:7070/learn_kylin")
+  .setUsername("ADMIN")
+  .setPassword("KYLIN")
+  .setQuery("select count(distinct seller_id) as sellers from kylin_sales 
group by part_dt order by part_dt")
+  .finish()
+  val dataset = env.createInput(inputFormat)
+{% endhighlight %}
+
+The first error is: ![alt text](/images/Flink-Tutorial/02.png)
+
+Add to Scala: 
+{% highlight Groff markup %}
+import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
+{% endhighlight %}
+
+The next error is  ![alt text](/images/Flink-Tutorial/03.png)
+
+We can solve the dependencies [(mvn repository: jdbc)](https://mvnrepository.com/artifact/org.apache.flink/flink-jdbc/1.1.2); add this to your pom.xml:
+{% highlight Groff markup %}
+<dependency>
+   <groupId>org.apache.flink</groupId>
+   <artifactId>flink-jdbc</artifactId>
+   <version>${flink.version}</version>
+</dependency>
+{% endhighlight %}
+
+## Solve dependencies of row 
+
+Similar to the previous point, we need to solve the dependencies of the Row class [(mvn repository: Table)](https://mvnrepository.com/artifact/org.apache.flink/flink-table_2.10/1.1.2):
+
+  ![](/images/Flink-Tutorial/03b.png)
+
+
+* In pom.xml add:
+{% highlight Groff markup %}
+<dependency>
+   <groupId>org.apache.flink</groupId>
+   <artifactId>flink-table_2.10</artifactId>
+   <version>${flink.version}</version>
+</dependency>
+{% endhighlight %}
+
+* In Scala: 
+{% highlight Groff markup %}
+import org.apache.flink.api.table.Row
+{% endhighlight %}
+
+## Solve RowTypeInfo property (and their new dependencies)
+
+This is the new error to solve:
+
+  ![](/images/Flink-Tutorial/04.png)
+
+
+* If we check the code of [JDBCInputFormat.java](https://github.com/apache/flink/blob/master/flink-batch-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormat.java#L69), we can see [this new property](https://github.com/apache/flink/commit/09b428bd65819b946cf82ab1fdee305eb5a941f5#diff-9b49a5041d50d9f9fad3f8060b3d1310R69) (a mandatory one) added in Apr 2016 by [FLINK-3750](https://issues.apache.org/jira/browse/FLINK-3750); see the manual of [JDBCInputFormat](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormat.html) v1.2 in Java.
+
+   Add the new Property: **setRowTypeInfo**
+   
+{% highlight Groff markup %}
+val inputFormat = JDBCInputFormat.buildJDBCInputFormat()
+  .setDrivername("org.apache.kylin.jdbc.Driver")
+  .setDBUrl("jdbc:kylin://172.17.0.2:7070/learn_kylin")
+  .setUsername("ADMIN")
+  .setPassword("KYLIN")
+  .setQuery("select count(distinct seller_id) as sellers from kylin_sales 
group by part_dt order by part_dt")
+  .setRowTypeInfo(DB_ROWTYPE)
+  .finish()
+{% endhighlight %}
+
+* How can we configure this property in Scala? In [Attempt4](https://codegists.com/snippet/scala/jdbcissuescala_zeitgeist_scala), there is an incorrect solution.
+   
+   We can check the types using the intellisense: ![alt 
text](/images/Flink-Tutorial/05.png)
+   
+   Then we will need to add more dependencies; add to Scala:
+
+{% highlight Groff markup %}
+import org.apache.flink.api.table.typeutils.RowTypeInfo
+import org.apache.flink.api.common.typeinfo.{BasicTypeInfo, TypeInformation}
+{% endhighlight %}
+
+   Create an Array or Seq of TypeInformation[ ]
+
+  ![](/images/Flink-Tutorial/06.png)
+
+
+   Solution:
+   
+{% highlight Groff markup %}
+   var stringColum: TypeInformation[String] = createTypeInformation[String]
+   val DB_ROWTYPE = new RowTypeInfo(Seq(stringColum))
+{% endhighlight %}
+
+## Solve ClassNotFoundException
+
+  ![](/images/Flink-Tutorial/07.png)
+
+We need to find the kylin-jdbc-x.x.x.jar and then expose it to Flink.
+
+1. Find the Kylin JDBC jar
+
+   From Kylin [Download](http://kylin.apache.org/download/) choose **Binary** 
and the **correct version of Kylin and HBase**
+   
+   Download & Unpack: in ./lib: 
+   
+  ![](/images/Flink-Tutorial/08.png)
+
+
+2. Make this JAR accessible to Flink
+
+   If you execute Flink as a service, you need to put this JAR in your Java class path using your .bashrc. 
+
+  ![](/images/Flink-Tutorial/09.png)
+
+
+  Check the actual value: ![alt text](/images/Flink-Tutorial/10.png)
+  
+  Check the permissions on this file (it must be accessible to you):
+
+  ![](/images/Flink-Tutorial/11.png)
+
+ 
+  If you are executing from the IDE, you need to add the class path manually:
+  
+  On IntelliJ: ![alt text](/images/Flink-Tutorial/12.png)  > ![alt 
text](/images/Flink-Tutorial/13.png) > ![alt 
text](/images/Flink-Tutorial/14.png) > ![alt 
text](/images/Flink-Tutorial/15.png)
+  
+  The result will be similar to: ![alt text](/images/Flink-Tutorial/16.png)
+  
+## Solve "Couldn’t access resultSet" error
+
+  ![](/images/Flink-Tutorial/17.png)
+
+
+It is related to [FLINK-4108](https://issues.apache.org/jira/browse/FLINK-4108) [(MailList)](http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/jdbc-JDBCInputFormat-td9393.html#a9415), and Timo Walther [made a PR](https://github.com/apache/flink/pull/2619).
+
+If you are running Flink <= 1.2, you will need to apply this patch and do a clean install.
+
+## Solve the casting error
+
+  ![](/images/Flink-Tutorial/18.png)
+
+The error message gives you both the problem and the solution... nice ;)
+
+## The result
+
+The output should be similar to this, printing the result of the query to standard output:
+
+  ![](/images/Flink-Tutorial/19.png)
+
+
+## Now, more complex
+
+Try with a multi-column and multi-type query:
+
+{% highlight Groff markup %}
+select part_dt, sum(price) as total_selled, count(distinct seller_id) as 
sellers 
+from kylin_sales 
+group by part_dt 
+order by part_dt
+{% endhighlight %}
+
+We need changes in DB_ROWTYPE:
+
+  ![](/images/Flink-Tutorial/20.png)
+
+
+And import the Java libraries to work with Java data types ![alt text](/images/Flink-Tutorial/21.png)
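+
+A sketch of how the three-column RowTypeInfo could look (an assumption based on the query above; the exact TypeInformation entries must match the types the Kylin JDBC driver returns):
+
+{% highlight Groff markup %}
+import java.sql.Date
+import java.math.BigDecimal
+
+// one TypeInformation per column: part_dt, total_selled, sellers
+var dateColum: TypeInformation[Date] = createTypeInformation[Date]
+var decimalColum: TypeInformation[BigDecimal] = createTypeInformation[BigDecimal]
+var longColum: TypeInformation[Long] = createTypeInformation[Long]
+
+val DB_ROWTYPE = new RowTypeInfo(Seq(dateColum, decimalColum, longColum))
+{% endhighlight %}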
+
+The new result will be: 
+
+  ![](/images/Flink-Tutorial/23.png)
+
+
+## Error:  Reused Connection
+
+
+  ![](/images/Flink-Tutorial/24.png)
+
+Check whether your HBase and Kylin are working. You can also use the Kylin UI for this.
+
+
+## Error:  java.lang.AbstractMethodError:  ….Avatica Connection
+
+See [Kylin 1898](https://issues.apache.org/jira/browse/KYLIN-1898) 
+
+It is a problem with the kylin-jdbc-1.x.x JAR; you need to use Calcite 1.8 or above. The solution is to use Kylin 1.5.4 or above.
+
+  ![](/images/Flink-Tutorial/25.png)
+
+
+
+## Error: can't expand macros compiled by previous versions of scala
+
+This is a problem with Scala versions; check your actual version with "scala -version" and choose the correct POM.
+
+Perhaps you will need IntelliJ > File > Invalidate Caches > Invalidate and Restart.
+
+I added a POM for Scala 2.11.
+
+
+## Final Words
+
+Now you can read Kylin’s data from Apache Flink, great!
+
+[Full Code 
Example](https://github.com/albertoRamon/Flink/tree/master/ReadKylinFromFlink/flink-scala-project)
+
+All integration problems are solved, and it has been tested with different types of data (Long, BigDecimal and Date). The patch was committed on 15 Oct and will be part of Flink 1.2.

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/kylin_client_tool.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/kylin_client_tool.cn.md 
b/website/_docs20/tutorial/kylin_client_tool.cn.md
new file mode 100644
index 0000000..7100b19
--- /dev/null
+++ b/website/_docs20/tutorial/kylin_client_tool.cn.md
@@ -0,0 +1,97 @@
+---
+layout: docs20-cn
+title:  Kylin Client Tool Tutorial
+categories: tutorial
+permalink: /cn/docs20/tutorial/kylin_client_tool.html
+---
+  
+> Kylin-client-tool是一个用python编写的,完全基于kylin的rest 
api的工具。可以实现kylin的cube创建,按时build 
cube,job的提交、调度、查看、取消与恢复。
+  
+## 安装
+1.确认运行环境安装了python2.6/2.7
+
+2.本工具需安装第三方python包
apscheduler和requests,运行setup.sh进行安装
,mac用户运行setup-mac.sh进行安装。也可用setuptools进行安装
+
+## 配置
+修改工具目录下的settings/settings.py文件,进行配置
+
+`KYLIN_USER`  Kylin用户名
+
+`KYLIN_PASSWORD`  Kylin的密码
+
+`KYLIN_REST_HOST`  Kylin的地址
+
+`KYLIN_REST_PORT`  Kylin的端口
+
+`KYLIN_JOB_MAX_COCURRENT`  允许同时build的job数量
+
+`KYLIN_JOB_MAX_RETRY`  cube build出现error后,允许的重启job次数
+
+## 命令行的使用
+本工具使用optparse通过命令行来执行操作,å…
·ä½“用法可通过`python kylin_client_tool.py -h`来查看
+
+## Creating Cubes
+This tool defines a hand-written text format for quick cube creation, as follows:
+
+`cube_name|fact_table_name|dimension1,dimension1_type;dimension2,dimension2_type...|measure1,measure1_expression,measure1_type...|settings|filter|`
+
+The settings field supports the following options:
+
+`no_dictionary`  Set the dimensions whose Rowkeys do not build a dictionary, and their lengths
+
+`mandatory_dimension`  Set the dimensions that are mandatory in Rowkeys
+
+`aggregation_group`  Set the aggregation groups
+
+`partition_date_column`  Set the partition date column
+
+`partition_date_start`  Set the partition start date
+
+See the cube_def.csv file for concrete examples; creating cubes that contain lookup tables is not supported yet. A purely illustrative line is shown below.
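+
+The following definition line is illustrative only (the table, dimension and measure names, and the settings syntax, are assumptions; cube_def.csv remains the authoritative reference):
+
+{% highlight Groff markup %}
+my_cube|KYLIN_SALES|PART_DT,date;LSTG_SITE_ID,int|PRICE,SUM(PRICE),decimal|partition_date_column=PART_DT;partition_date_start=2012-01-01||
+{% endhighlight %}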
+
+Use the `-c` option to create cubes, with `-F` specifying the cube definition file, for example:
+
+`python kylin_client_tool.py -c -F cube_def.csv`
+
+## Building Cubes
+### Build with a cube definition file
+Use the `-b` option, with `-F` specifying the cube definition file. If a partition date column is specified, use `-T` to specify the end date (year-month-day format); if it is not given, the current time is used as the end date. For example:
+
+`python kylin_client_tool.py -b -F cube_def.csv -T 2016-03-01`
+
+### Build with a cube name file
+Use `-f` to specify a file of cube names, one name per line:
+
+`python kylin_client_tool.py -b -f cube_names.csv -T 2016-03-01`
+
+### Build with cube names on the command line
+Use `-C` to specify cube names, separated by commas:
+
+`python kylin_client_tool.py -b -C client_tool_test1,client_tool_test2 -T 
2016-03-01`
+
+## Job Management
+### Check job status
+Use the `-s` option to check job status, with `-f` specifying a cube name file or `-C` specifying cube names; if neither is given, the status of all cubes is shown. Use `-S` to filter by job status: R for `Running`, E for `Error`, F for `Finished`, D for `Discarded`. For example:
+
+`python kylin_client_tool.py -s -C kylin_sales_cube -f cube_names.csv -S F`
+
+### Resume jobs
+Use the `-r` option to resume jobs, with `-f` specifying a cube name file or `-C` specifying cube names; if neither is given, all jobs in Error status are resumed. For example:
+
+`python kylin_client_tool.py -r -C kylin_sales_cube -f cube_names.csv`
+
+### Cancel jobs
+Use the `-k` option to cancel jobs, with `-f` specifying a cube name file or `-C` specifying cube names; if neither is given, all jobs in Running or Error status are cancelled. For example:
+
+`python kylin_client_tool.py -k -C kylin_sales_cube -f cube_names.csv`
+
+## Scheduled Cube Builds
+### Build at fixed intervals
+On top of the cube build command, use `-B i` to build at fixed intervals, with `-O` specifying the interval in hours. For example:
+
+`python kylin_client_tool.py -b -F cube_def.csv -B i -O 1`
+
+### Build at a set time
+Use `-B t` to build at a set time, with `-O` specifying the build time, fields separated by commas:
+
+`python kylin_client_tool.py -b -F cube_def.csv -T 2016-03-04 -B t -O 
2016,3,1,0,0,0`

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/kylin_sample.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/kylin_sample.md 
b/website/_docs20/tutorial/kylin_sample.md
new file mode 100644
index 0000000..d083f10
--- /dev/null
+++ b/website/_docs20/tutorial/kylin_sample.md
@@ -0,0 +1,21 @@
+---
+layout: docs20
+title:  Quick Start with Sample Cube
+categories: tutorial
+permalink: /docs20/tutorial/kylin_sample.html
+---
+
+Kylin provides a script to create a sample cube; the script will also create three sample Hive tables:
+
+1. Run `${KYLIN_HOME}/bin/sample.sh`; restart the Kylin server to flush the caches;
+2. Log on to the Kylin web UI with the default user ADMIN/KYLIN, and select project "learn_kylin" in the project dropdown list (upper left corner);
+3. Select the sample cube "kylin_sales_cube", click "Actions" -> "Build", and pick a date later than 2014-01-01 (to cover all 10,000 sample records);
+4. Check the build progress in the "Monitor" tab until it reaches 100%;
+5. Execute SQL in the "Insight" tab, for example:
+	select part_dt, sum(price) as total_selled, count(distinct seller_id) as sellers from kylin_sales group by part_dt order by part_dt
+6. Verify the query result and compare the response time with Hive;
+
+   
+## What's next
+
+You can create another cube with the sample tables by following the tutorials.

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/odbc.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/odbc.cn.md 
b/website/_docs20/tutorial/odbc.cn.md
new file mode 100644
index 0000000..665b824
--- /dev/null
+++ b/website/_docs20/tutorial/odbc.cn.md
@@ -0,0 +1,34 @@
+---
+layout: docs20-cn
+title:  Kylin ODBC Driver Tutorial
+categories: tutorial
+permalink: /cn/docs20/tutorial/odbc.html
+version: v1.2
+since: v0.7.1
+---
+
+> We provide the Kylin ODBC driver to enable data access from ODBC-compatible client applications.
+> 
+> Both 32-bit and 64-bit versions of the driver are available.
+> 
+> Tested operating systems: Windows 7, Windows Server 2008 R2
+> 
+> Tested applications: Tableau 8.0.4 and Tableau 8.1.3
+
+## Prerequisites
+1. Microsoft Visual C++ 2012 Redistributable
+   * For 32-bit Windows or 32-bit Tableau Desktop, download the [32bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x86.exe)
+   * For 64-bit Windows or 64-bit Tableau Desktop, download the [64bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe)
+
+2. The ODBC driver internally gets results from a REST server; make sure you have access to one.
+
+## Installation
+1. Uninstall any existing Kylin ODBC driver first, if you installed one before.
+2. Download the driver installer from the [download](../../download/) page and run it.
+   * For 32-bit Tableau Desktop, install KylinODBCDriver (x86).exe
+   * For 64-bit Tableau Desktop, install KylinODBCDriver (x64).exe
+
+3. Both drivers are already installed on Tableau Server, so you should be able to publish there without issues.
+
+## Bug Report
+If you hit problems, please report bugs via the Apache Kylin JIRA or send mail to the dev mailing list.

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/odbc.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/odbc.md b/website/_docs20/tutorial/odbc.md
new file mode 100644
index 0000000..f386fd6
--- /dev/null
+++ b/website/_docs20/tutorial/odbc.md
@@ -0,0 +1,49 @@
+---
+layout: docs20
+title:  Kylin ODBC Driver
+categories: tutorial
+permalink: /docs20/tutorial/odbc.html
+since: v0.7.1
+---
+
+> We provide the Kylin ODBC driver to enable data access from ODBC-compatible client applications.
+> 
+> Both 32-bit and 64-bit versions of the driver are available.
+> 
+> Tested operating systems: Windows 7, Windows Server 2008 R2
+> 
+> Tested applications: Tableau 8.0.4, Tableau 8.1.3 and Tableau 9.1
+
+## Prerequisites
+1. Microsoft Visual C++ 2012 Redistributable
+   * For 32-bit Windows or 32-bit Tableau Desktop, download the [32bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x86.exe)
+   * For 64-bit Windows or 64-bit Tableau Desktop, download the [64bit version](http://download.microsoft.com/download/1/6/B/16B06F60-3B20-4FF2-B699-5E9B7962F9AE/VSU_4/vcredist_x64.exe)
+
+2. The ODBC driver internally gets results from a REST server; make sure you have access to one.
+
+## Installation
+1. Uninstall any existing Kylin ODBC driver first, if you installed one before.
+2. Download the ODBC driver from the [download](../../download/) page.
+   * For 32-bit Tableau Desktop, install KylinODBCDriver (x86).exe
+   * For 64-bit Tableau Desktop, install KylinODBCDriver (x64).exe
+
+3. Both drivers are already installed on Tableau Server, so you should be able to publish there without issues.
+
+## DSN configuration
+1. Open the ODBC Data Source Administrator to configure a DSN.
+	* For the 32-bit driver, use the 32-bit version at C:\Windows\SysWOW64\odbcad32.exe
+	* For the 64-bit driver, use the default "Data Sources (ODBC)" in Control Panel/Administrative Tools
+![]( /images/Kylin-ODBC-DSN/1.png)
+
+2. Open "System DSN" tab, and click "Add", you will see KylinODBCDriver listed 
as an option, Click "Finish" to continue.
+![]( /images/Kylin-ODBC-DSN/2.png)
+
+3. In the pop-up dialog, fill in all the blanks. The server host is where your Kylin REST server is running.
+![]( /images/Kylin-ODBC-DSN/3.png)
+
+4. Click "Done", and you will see your new DSN listed in the "System Data 
Sources", you can use this DSN afterwards.
+![]( /images/Kylin-ODBC-DSN/4.png)
+
+## Bug Report
+Please open an Apache Kylin JIRA issue to report bugs, or send mail to the dev mailing list.

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/powerbi.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/powerbi.cn.md 
b/website/_docs20/tutorial/powerbi.cn.md
new file mode 100644
index 0000000..9326a82
--- /dev/null
+++ b/website/_docs20/tutorial/powerbi.cn.md
@@ -0,0 +1,56 @@
+---
+layout: docs20-cn
+title:  MS Excel and Power BI Tutorial
+categories: tutorial
+permalink: /cn/docs20/tutorial/powerbi.html
+version: v1.2
+since: v1.2
+---
+
+Microsoft Excel is one of the most popular data processing applications on Windows, supporting a wide range of data processing functions; with Power Query it can read data from an ODBC data source and load it into spreadsheets.
+
+Microsoft Power BI is a professional business intelligence and analysis tool from Microsoft, providing users with simple yet rich data visualization and analysis functions.
+
+> The current version of Apache Kylin does not support queries on raw data; some queries will fail for this reason and cause application exceptions. Applying the KYLIN-1075 patch is recommended to improve the display of query results.
+
+
+> Power BI and Excel do not support the "connect live" mode; take care to add a where condition when querying very large data sets, to avoid pulling too much data from the server to the client, or even failing the query in some cases.
+
+### Install ODBC Driver
+Refer to the [Kylin ODBC Driver Tutorial](./odbc.html). Please make sure to download and install Kylin ODBC Driver __v1.2__; if an earlier version is installed, uninstall it first.
+
+### Connect Excel to Kylin
+1. Download and install Power Query from Microsoft's website. After installation you will see the Power Query ribbon tab in Excel; click the `From Other Sources` dropdown and select the `From ODBC` item.
+![](/images/tutorial/odbc/ms_tool/Picture1.png)
+
+2. In the pop-up `From ODBC` data connection wizard, enter the connection string of the Apache Kylin server; you may also enter the SQL statement you want to execute in the `SQL` textbox. Click `OK`, and the SQL result will be loaded into the Excel spreadsheet.
+![](/images/tutorial/odbc/ms_tool/Picture2.png)
+
+> To simplify the connection string, creating a DSN for Apache Kylin is recommended, which shortens the connection string to DSN=[YOUR_DSN_NAME]. For DSN creation, refer to [https://support.microsoft.com/en-us/kb/305599](https://support.microsoft.com/en-us/kb/305599).
+
+ 
+3. If you choose not to enter a SQL statement, Power Query will list all the database tables, and you can load data from whole tables as needed. However, since Apache Kylin does not yet support queries on raw data, loading some tables may be limited.
+![](/images/tutorial/odbc/ms_tool/Picture3.png)
+
+4. After a moment, the data is loaded into Excel.
+![](/images/tutorial/odbc/ms_tool/Picture4.png)
+
+5. Once the data is updated on the server side, the data in Excel needs to be synchronized: right-click the data source in the right-hand list and select `Refresh`, and the latest data will be refreshed into the spreadsheet.
+
+6. To improve performance, open the `Query Options` settings in Power Query and enable `Fast data load`; this speeds up data loading but may make the UI temporarily unresponsive.
+
+### Power BI
+1. Launch the installed Power BI Desktop, click the `Get Data` button, and select the ODBC data source.
+![](/images/tutorial/odbc/ms_tool/Picture5.png)
+
+2. In the pop-up `From ODBC` data connection wizard, enter the database connection string of the Apache Kylin server; you may also enter the SQL statement you want to execute in the `SQL` textbox. Click `OK`, and the SQL result will be loaded into Power BI.
+![](/images/tutorial/odbc/ms_tool/Picture6.png)
+
+3. If you choose not to enter a SQL statement, Power BI will list all the tables in the project, and you can load data from whole tables as needed. However, since Apache Kylin does not yet support queries on raw data, loading some tables may be limited.
+![](/images/tutorial/odbc/ms_tool/Picture7.png)
+
+4. Now you can go further and use Power BI for visual analysis:
+![](/images/tutorial/odbc/ms_tool/Picture8.png)
+
+5. Click the `Refresh` button on the toolbar to reload the data and update the charts.
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/powerbi.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/powerbi.md 
b/website/_docs20/tutorial/powerbi.md
new file mode 100644
index 0000000..5465c57
--- /dev/null
+++ b/website/_docs20/tutorial/powerbi.md
@@ -0,0 +1,54 @@
+---
+layout: docs20
+title:  MS Excel and Power BI
+categories: tutorial
+permalink: /docs20/tutorial/powerbi.html
+since: v1.2
+---
+
+Microsoft Excel is one of the most famous data tools on the Windows platform, with plenty of data analysis functions. With Power Query installed as a plug-in, Excel can easily read data from an ODBC data source and fill spreadsheets.
+
+Microsoft Power BI is a business intelligence tool providing rich functionality and experience for data visualization and processing to users.
+
+> Apache Kylin does not support queries on raw data yet; some queries might fail and cause exceptions in the application. Patch KYLIN-1075 is recommended for a better display of query results.
+
+> Power BI and Excel do not support the "connect live" mode for third-party ODBC drivers yet; take care when you query a huge dataset, as it may pull too much data into your client, which will take a while and may even fail in the end.
+
+### Install ODBC Driver
+Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
+Please make sure to download and install Kylin ODBC Driver __v1.2__. If you 
already installed ODBC Driver in your system, please uninstall it first. 
+
+### Kylin and Excel
+1. Download Power Query from Microsoft's website and install it. Then run Excel, switch to the `Power Query` ribbon tab, click the `From Other Sources` dropdown, and select the `ODBC` item.
+![](/images/tutorial/odbc/ms_tool/Picture1.png)
+
+2. You'll see the `From ODBC` dialog; type the database connection string of the Apache Kylin server in the `Connection String` textbox. Optionally you can type a SQL statement in the `SQL statement` textbox. Click `OK`, and the result set will flow into your spreadsheet.
+![](/images/tutorial/odbc/ms_tool/Picture2.png)
+
+> Tip: To simplify the database connection string, a DSN is recommended, which shortens the connection string to `DSN=[YOUR_DSN_NAME]`. For details about DSNs, refer to [https://support.microsoft.com/en-us/kb/305599](https://support.microsoft.com/en-us/kb/305599).
+ 
+3. If you didn't enter a SQL statement in the last step, Power Query will list all the tables in the project, and you can load data from whole tables. But since Apache Kylin cannot query raw data currently, this function may be limited.
+![](/images/tutorial/odbc/ms_tool/Picture3.png)
+
+4. After a short wait, the data is in Excel.
+![](/images/tutorial/odbc/ms_tool/Picture4.png)
+
+5. If you want to sync data with the Kylin server, just right-click the data source in the right panel and select `Refresh`; then you'll see the latest data.
+
+6. To improve data loading performance, you can enable `Fast data load` in Power Query, but this may make your UI unresponsive for a while.
+
+### Power BI
+1. Run Power BI Desktop, click the `Get Data` button, and select `ODBC` as the data source type.
+![](/images/tutorial/odbc/ms_tool/Picture5.png)
+
+2. As with Excel, type the database connection string of the Apache Kylin server in the `Connection String` textbox, and optionally a SQL statement in the `SQL statement` textbox. Click `OK`, and the result set will come into Power BI as a new data source query.
+![](/images/tutorial/odbc/ms_tool/Picture6.png)
+
+3. If you didn't enter a SQL statement in the last step, Power BI will list all the tables in the project, and you can load data from whole tables. But since Apache Kylin cannot query raw data currently, this function may be limited.
+![](/images/tutorial/odbc/ms_tool/Picture7.png)
+
+4.  Now you can start to enjoy analyzing with Power BI.
+![](/images/tutorial/odbc/ms_tool/Picture8.png)
+
+5. To reload the data and redraw the charts, just click the `Refresh` button in the `Home` ribbon tab.
+

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/squirrel.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/squirrel.md 
b/website/_docs20/tutorial/squirrel.md
new file mode 100644
index 0000000..7d0c9d9
--- /dev/null
+++ b/website/_docs20/tutorial/squirrel.md
@@ -0,0 +1,112 @@
+---
+layout: docs20
+title:  Connect from SQuirreL
+categories: tutorial
+permalink: /docs20/tutorial/squirrel.html
+---
+
+### Introduction
+
+[SQuirreL SQL](http://www.squirrelsql.org/) is a multi-platform universal SQL client (GNU License). You can use it to access HBase + Phoenix and Hive. This document introduces how to connect to Kylin from SQuirreL.
+
+### Used Software
+
+* [Kylin v1.6.0](/download/) & ODBC 1.6
+* [SquirreL SQL v3.7.1](http://www.squirrelsql.org/)
+
+## Pre-requisites
+
+* Find the Kylin JDBC driver JAR: from the Kylin download page, choose the binary package for the **correct version of Kylin and HBase**, then download and unpack it; the driver JAR is in **./lib**:
+  ![](/images/SQuirreL-Tutorial/01.png)
+
+
+* You need a Kylin instance with a cube; the [Sample Cube](kylin_sample.html) is enough.
+
+  ![](/images/SQuirreL-Tutorial/02.png)
+
+
+* [Download and install SQuirreL](http://www.squirrelsql.org/#installation)
+
+## Add Kylin JDBC Driver
+
+In the left menu: ![alt text](/images/SQuirreL-Tutorial/03.png) > ![alt text](/images/SQuirreL-Tutorial/04.png) > ![alt text](/images/SQuirreL-Tutorial/05.png) > ![alt text](/images/SQuirreL-Tutorial/06.png)
+
+Then locate the JAR: ![alt text](/images/SQuirreL-Tutorial/07.png)
+
+Configure these parameters:
+
+* Set a name: ![alt text](/images/SQuirreL-Tutorial/08.png)
+* Example URL: ![alt text](/images/SQuirreL-Tutorial/09.png)
+
+  jdbc:kylin://172.17.0.2:7070/learn_kylin
+* Set the class name: ![alt text](/images/SQuirreL-Tutorial/10.png)
+	Tip: if auto-complete does not work, type org.apache.kylin.jdbc.Driver
+       
+Check the Driver List: ![alt text](/images/SQuirreL-Tutorial/11.png)
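+
+Before moving on, you can sanity-check the same driver and URL outside SQuirreL with a small JDBC program. This is a minimal sketch (it assumes the sample cube's kylin_sales table and the default ADMIN/KYLIN credentials; adjust the URL to your server):
+
+```
+import java.sql.DriverManager
+
+object KylinJdbcCheck extends App {
+  // register the Kylin JDBC driver (same class name as configured above)
+  Class.forName("org.apache.kylin.jdbc.Driver")
+  val conn = DriverManager.getConnection(
+    "jdbc:kylin://172.17.0.2:7070/learn_kylin", "ADMIN", "KYLIN")
+  val rs = conn.createStatement()
+    .executeQuery("select count(*) from kylin_sales")
+  while (rs.next()) println(s"rows in kylin_sales: ${rs.getLong(1)}")
+  conn.close()
+}
+```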
+
+## Add Aliases
+
+In the left menu: ![alt text](/images/SQuirreL-Tutorial/12.png) > ![alt text](/images/SQuirreL-Tutorial/13.png) (default login: ADMIN / KYLIN)
+
+  ![](/images/SQuirreL-Tutorial/14.png)
+
+
+The connection is then launched automatically:
+
+  ![](/images/SQuirreL-Tutorial/15.png)
+
+
+## Connect and Execute
+
+The startup window when connected:
+
+  ![](/images/SQuirreL-Tutorial/16.png)
+
+
+Choose the SQL tab and write a query (we use Kylin's sample cube):
+
+  ![](/images/SQuirreL-Tutorial/17.png)
+
+
+```
+select part_dt, sum(price) as total_selled, count(distinct seller_id) as 
sellers 
+from kylin_sales group by part_dt 
+order by part_dt
+```
+
+Execute With: ![alt text](/images/SQuirreL-Tutorial/18.png) 
+
+  ![](/images/SQuirreL-Tutorial/19.png)
+
+
+And it works!
+
+## Tips
+
+SQuirreL isn't the most stable SQL client, but it is very flexible and surfaces a lot of information; it can be used for PoCs and for checking connectivity issues.
+
+List of tables: 
+
+  ![](/images/SQuirreL-Tutorial/21.png)
+
+
+List of columns of a table:
+
+  ![](/images/SQuirreL-Tutorial/22.png)
+
+
+List of columns of a query:
+
+  ![](/images/SQuirreL-Tutorial/23.png)
+
+
+Export query results:
+
+  ![](/images/SQuirreL-Tutorial/24.png)
+
+
+Info about query execution time:
+
+  ![](/images/SQuirreL-Tutorial/25.png)

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/tableau.cn.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/tableau.cn.md 
b/website/_docs20/tutorial/tableau.cn.md
new file mode 100644
index 0000000..e185b38
--- /dev/null
+++ b/website/_docs20/tutorial/tableau.cn.md
@@ -0,0 +1,116 @@
+---
+layout: docs20-cn
+title:  Tableau Tutorial
+categories: tutorial
+permalink: /cn/docs20/tutorial/tableau.html
+version: v1.2
+since: v0.7.1
+---
+
+> There are some limitations of the Kylin ODBC driver with Tableau; please read these instructions carefully before you try it.
+> * Only the "managed" analysis path is supported; the Kylin engine will raise errors for unexpected dimensions or measures
+> * Always select the fact table first, then add lookup tables with the correct join conditions (the join types defined in the cube)
+> * Do not try to join between multiple fact tables or multiple lookup tables;
+> * You can try using high-cardinality dimensions like seller id in a Tableau filter, but for now the engine will only return a limited number of seller ids in the filter.
+> 
+> For more details or any questions, please contact the Kylin team: `kylino...@gmail.com`
+
+
+### For Tableau 9.x Users
+Please refer to the [Tableau 9 Tutorial](./tableau_91.html) for more detailed help.
+
+### Step 1. Install Kylin ODBC Driver
+Refer to the [Kylin ODBC Driver Tutorial](./odbc.html).
+
+### Step 2. Connect to Kylin Server
+> We recommend using Connect Using Driver instead of Using DSN.
+
+Connect Using Driver: select "Other Database (ODBC)" in the left panel and choose "KylinODBCDriver" in the pop-up window.
+
+![](/images/Kylin-and-Tableau-Tutorial/1 odbc.png)
+
+Enter your server location and credentials: server host, port, username and password.
+
+![](/images/Kylin-and-Tableau-Tutorial/2 serverhost.jpg)
+
+Click "Connect" to get the list of projects you have permission to access. See the [Kylin Cube Permission Grant Tutorial](https://github.com/KylinOLAP/Kylin/wiki/Kylin-Cube-Permission-Grant-Tutorial) for details about permissions. Then choose the project you want to connect to in the dropdown list.
+
+![](/images/Kylin-and-Tableau-Tutorial/3 project.jpg)
+
+Click "Done" to complete the connection.
+
+![](/images/Kylin-and-Tableau-Tutorial/4 done.jpg)
+
+### Step 3. Use Single or Multiple Tables
+> Limitations
+>    * The fact table must be selected first
+>    * Selecting from lookup tables only is not supported
+>    * The join conditions must match the cube definition
+
+**Select the fact table**
+
+Select `Multiple Tables`.
+
+![](/images/Kylin-and-Tableau-Tutorial/5 multipleTable.jpg)
+
+Then click `Add Table...` to add a fact table.
+
+![](/images/Kylin-and-Tableau-Tutorial/6 facttable.jpg)
+
+![](/images/Kylin-and-Tableau-Tutorial/6 facttable2.jpg)
+
+**Select lookup tables**
+
+Click `Add Table...` to add a lookup table.
+
+![](/images/Kylin-and-Tableau-Tutorial/7 lkptable.jpg)
+
+Set up the join clause carefully.
+
+![](/images/Kylin-and-Tableau-Tutorial/8 join.jpg)
+
+Keep clicking `Add Table...` to add tables until all lookup tables have been added properly. Give the connection a name for use in Tableau.
+
+![](/images/Kylin-and-Tableau-Tutorial/9 connName.jpg)
+
+**Use Connect Live**
+
+There are three types of `Data Connection`; choose the `Connect Live` option.
+
+![](/images/Kylin-and-Tableau-Tutorial/10 connectLive.jpg)
+
+Then you can enjoy analyzing with Tableau.
+
+![](/images/Kylin-and-Tableau-Tutorial/11 analysis.jpg)
+
+**Add additional lookup tables**
+
+Click `Data` in the top menu bar and select `Edit Tables...` to update the lookup table information.
+
+![](/images/Kylin-and-Tableau-Tutorial/12 edit tables.jpg)
+
+### Step 4. Use Custom SQL
+Using custom SQL is similar to using single/multiple tables: just paste your SQL into the `Custom SQL` tab and follow the same instructions as above.
+
+![](/images/Kylin-and-Tableau-Tutorial/19 custom.jpg)
+
+### Step 5. Publish to Tableau Server
+Once you have finished making a dashboard with Tableau, you can publish it to Tableau Server.
+Click `Server` in the top menu bar and select `Publish Workbook...`.
+
+![](/images/Kylin-and-Tableau-Tutorial/14 publish.jpg)
+
+Then sign in to your Tableau Server and prepare to publish.
+
+![](/images/Kylin-and-Tableau-Tutorial/16 prepare-publish.png)
+
+If you are using Connect Using Driver instead of a DSN connection, you will also need to embed your password. Click the `Authentication` button at the bottom left and select `Embedded Password`. Click `Publish` and you will see the result.
+
+![](/images/Kylin-and-Tableau-Tutorial/17 embedded-pwd.png)
+
+### Tips
+* Hide table names in Tableau
+
+    * Tableau groups and displays columns by source table name, but users may want to organize columns in a different arrangement. Use "Group by Folder" in Tableau and create folders to group different columns.
+
+     ![](/images/Kylin-and-Tableau-Tutorial/18 groupby-folder.jpg)

http://git-wip-us.apache.org/repos/asf/kylin/blob/7ea64f38/website/_docs20/tutorial/tableau.md
----------------------------------------------------------------------
diff --git a/website/_docs20/tutorial/tableau.md 
b/website/_docs20/tutorial/tableau.md
new file mode 100644
index 0000000..e46b4e6
--- /dev/null
+++ b/website/_docs20/tutorial/tableau.md
@@ -0,0 +1,113 @@
+---
+layout: docs20
+title:  Tableau 8
+categories: tutorial
+permalink: /docs20/tutorial/tableau.html
+---
+
+> There are some limitations of the Kylin ODBC driver with Tableau; please read these instructions carefully before you try it.
+> 
+> * Only support "managed" analysis path, Kylin engine will raise exception 
for unexpected dimension or metric
+> * Please always select Fact Table first, then add lookup tables with correct 
join condition (defined join type in cube)
+> * Do not try to join between fact tables or lookup tables;
+> * You can try to use high cardinality dimensions like seller id as Tableau 
Filter, but the engine will only return limited seller id in Tableau's filter 
now.
+
+### For Tableau 9.x Users
+Please refer to the [Tableau 9.x Tutorial](./tableau_91.html) for a detailed guide.
+
+### Step 1. Install Kylin ODBC Driver
+Refer to this guide: [Kylin ODBC Driver Tutorial](./odbc.html).
+
+### Step 2. Connect to Kylin Server
+> We recommend using Connect Using Driver instead of Using DSN.
+
+Connect Using Driver: Select "Other Database(ODBC)" in the left panel and 
choose KylinODBCDriver in the pop-up window. 
+
+![](/images/Kylin-and-Tableau-Tutorial/1 odbc.png)
+
+Enter your server location and credentials: server host, port, username and password.
+
+![]( /images/Kylin-and-Tableau-Tutorial/2 serverhost.jpg)
+
+Click "Connect" to get the list of projects that you have permission to 
access. See details about permission in [Kylin Cube Permission Grant 
Tutorial](./acl.html). Then choose the project you want to connect in the drop 
down list. 
+
+![]( /images/Kylin-and-Tableau-Tutorial/3 project.jpg)
+
+Click "Done" to complete the connection.
+
+![]( /images/Kylin-and-Tableau-Tutorial/4 done.jpg)
+
+### Step 3. Using Single Table or Multiple Tables
+> Limitation
+> 
+>    * The fact table must be selected first
+>    * Selecting from lookup tables only is not supported
+>    * The join conditions must match the cube definition
+
+**Select Fact Table**
+
+Select `Multiple Tables`.
+
+![]( /images/Kylin-and-Tableau-Tutorial/5 multipleTable.jpg)
+
+Then click `Add Table...` to add a fact table.
+
+![]( /images/Kylin-and-Tableau-Tutorial/6 facttable.jpg)
+
+![]( /images/Kylin-and-Tableau-Tutorial/6 facttable2.jpg)
+
+**Select Look-up Table**
+
+Click `Add Table...` to add a look-up table. 
+
+![]( /images/Kylin-and-Tableau-Tutorial/7 lkptable.jpg)
+
+Set up the join clause carefully. 
+
+![]( /images/Kylin-and-Tableau-Tutorial/8 join.jpg)
+
+Keep adding tables by clicking `Add Table...` until all the look-up tables have been added properly. Give the connection a name for use in Tableau.
+
+![]( /images/Kylin-and-Tableau-Tutorial/9 connName.jpg)
+
+**Using Connect Live**
+
+There are three types of `Data Connection`. Choose the `Connect Live` option. 
+
+![]( /images/Kylin-and-Tableau-Tutorial/10 connectLive.jpg)
+
+Then you can enjoy analyzing with Tableau.
+
+![]( /images/Kylin-and-Tableau-Tutorial/11 analysis.jpg)
+
+**Add additional look-up Tables**
+
+Click `Data` in the top menu bar, select `Edit Tables...` to update the 
look-up table information.
+
+![]( /images/Kylin-and-Tableau-Tutorial/12 edit tables.jpg)
+
+### Step 4. Using Customized SQL
+Using customized SQL resembles using single/multiple tables, except that you just paste your SQL into the `Custom SQL` tab and follow the same instructions as above.
+
+![]( /images/Kylin-and-Tableau-Tutorial/19 custom.jpg)
+
+### Step 5. Publish to Tableau Server
+Once you have finished making a dashboard with Tableau, you can publish it to Tableau Server.
+Click `Server` in the top menu bar, select `Publish Workbook...`. 
+
+![]( /images/Kylin-and-Tableau-Tutorial/14 publish.jpg)
+
+Then sign in to your Tableau Server and prepare to publish.
+
+![]( /images/Kylin-and-Tableau-Tutorial/16 prepare-publish.png)
+
+If you're using Connect Using Driver instead of a DSN connection, you'll additionally need to embed your password. Click the `Authentication` button at the bottom left and select `Embedded Password`. Click `Publish` and you will see the result.
+
+![]( /images/Kylin-and-Tableau-Tutorial/17 embedded-pwd.png)
+
+### Tips
+* Hide table names in Tableau
+
+    * Tableau displays columns grouped by source table name, but users may want to organize columns with a different structure. Use "Group by Folder" in Tableau and create folders to group different columns.
+
+     ![]( /images/Kylin-and-Tableau-Tutorial/18 groupby-folder.jpg)
