This is an automated email from the ASF dual-hosted git repository.

billyliu pushed a commit to branch document
in repository https://gitbox.apache.org/repos/asf/kylin.git


The following commit(s) were added to refs/heads/document by this push:
     new 00c9241  fix typo
00c9241 is described below

commit 00c924115f5d382407cfdee37d41d6aa39839cba
Author: Billy Liu <billy...@apache.org>
AuthorDate: Sat Sep 22 13:49:20 2018 +0800

    fix typo
---
 website/_docs/gettingstarted/faq.md              |  6 +++---
 website/_docs/gettingstarted/terminology.md      |  2 +-
 website/_docs/howto/howto_upgrade.md             |  6 +++---
 website/_docs/howto/howto_use_beeline.md         |  2 +-
 website/_docs/install/configuration.md           |  4 ++--
 website/_docs/install/index.md                   |  2 +-
 website/_docs/tutorial/create_cube.md            |  4 ++--
 website/_docs/tutorial/cube_build_performance.md |  4 ++--
 website/_docs/tutorial/cube_spark.md             |  6 +++---
 website/_docs/tutorial/cube_streaming.md         |  4 ++--
 website/_docs/tutorial/flink.md                  |  2 +-
 website/_docs/tutorial/hybrid.md                 |  6 +++---
 website/_docs/tutorial/kylin_client_tool.md      |  2 +-
 website/_docs/tutorial/kylin_sample.md           |  2 +-
 website/_docs/tutorial/odbc.md                   |  2 +-
 website/_docs/tutorial/query_pushdown.md         |  2 +-
 website/_docs/tutorial/setup_jdbc_datasource.md  |  4 ++--
 website/_docs/tutorial/setup_systemcube.md       | 14 +++++++-------
 website/_docs/tutorial/squirrel.md               |  4 ++--
 website/_docs/tutorial/tableau_91.md             |  2 +-
 website/_docs/tutorial/use_dashboard.md          |  8 ++++----
 website/_docs/tutorial/web.md                    |  2 +-
 website/community/poweredby.md                   |  4 ++--
 website/index.md                                 |  2 +-
 24 files changed, 48 insertions(+), 48 deletions(-)

diff --git a/website/_docs/gettingstarted/faq.md 
b/website/_docs/gettingstarted/faq.md
index 99d25a0..9ebcad4 100644
--- a/website/_docs/gettingstarted/faq.md
+++ b/website/_docs/gettingstarted/faq.md
@@ -204,8 +204,8 @@ kylin.engine.spark-conf.spark.yarn.queue=YOUR_QUEUE_NAME
   {% endhighlight %}
 
 
-#### SUM(field) returns a negtive result while all the numbers in this field 
are > 0
-  * If a column is declared as integer in Hive, the SQL engine (calcite) will 
use column's type (integer) as the data type for "SUM(field)", while the 
aggregated value on this field may exceed the scope of integer; in that case 
the cast will cause a negtive value be returned; The workround is, alter that 
column's type to BIGINT in hive, and then sync the table schema to Kylin (the 
cube doesn't need rebuild); Keep in mind that, always declare as BIGINT in hive 
for an integer column which  [...]
+#### SUM(field) returns a negative result while all the numbers in this field 
are > 0
+  * If a column is declared as integer in Hive, the SQL engine (calcite) will 
use column's type (integer) as the data type for "SUM(field)", while the 
aggregated value on this field may exceed the scope of integer; in that case 
the cast will cause a negative value to be returned; The workaround is to alter that 
column's type to BIGINT in hive, and then sync the table schema to Kylin (the 
cube doesn't need rebuild); Keep in mind that, always declare as BIGINT in hive 
for an integer column which [...]
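
For reference, the workaround described above boils down to a one-line Hive DDL
change; the database, table, and column names below are hypothetical:

```
# Alter the integer column to BIGINT in Hive, then sync the table schema
# to Kylin (the cube doesn't need a rebuild). Names are illustrative.
hive -e "ALTER TABLE my_db.my_fact_table CHANGE price price BIGINT;"
```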
 
 #### Why Kylin need extract the distinct columns from Fact Table before 
building cube?
   * Kylin uses dictionary to encode the values in each column, this greatly 
reduce the cube's storage size. To build the dictionary, Kylin need fetch the 
distinct values for each column.
@@ -307,4 +307,4 @@ Restart Kylin to take effective. If you have multiple Kylin 
server as a cluster,
 
 The data in 'hdfs-working-dir' ('hdfs:///kylin/kylin_metadata/' by default) 
includes intermediate files (will be GC) and Cuboid data (won't be GC). The 
Cuboid data is kept for the further segments' merge, as Kylin couldn't merge 
from HBase. If you're sure those segments won't be merged, you can move them to 
other paths or even delete.
 
-Please pay attention to the "resources" sub-folder under 'hdfs-working-dir', 
which persists some big metadata files like  dictionaries and lookup tables' 
snapshots. They shouldn't be moved.
\ No newline at end of file
+Please pay attention to the "resources" sub-folder under 'hdfs-working-dir', 
which persists some big metadata files like  dictionaries and lookup tables' 
snapshots. They shouldn't be moved.
diff --git a/website/_docs/gettingstarted/terminology.md 
b/website/_docs/gettingstarted/terminology.md
index 9037469..2d998b3 100644
--- a/website/_docs/gettingstarted/terminology.md
+++ b/website/_docs/gettingstarted/terminology.md
@@ -8,7 +8,7 @@ since: v0.5.x
  
 
 Here are some domain terms we are using in Apache Kylin, please check them for 
your reference.   
-They are basic knowledge of Apache Kylin which also will help to well 
understand such concept, term, knowledge, theory and others about Data 
Warehouse, Business Intelligence for analycits. 
+They are basic knowledge of Apache Kylin which will also help you understand 
the concepts, terms, and theory of Data Warehouse and Business Intelligence 
for analytics. 
 
 * __Data Warehouse__: a data warehouse (DW or DWH), also known as an 
enterprise data warehouse (EDW), is a system used for reporting and data 
analysis, [wikipedia](https://en.wikipedia.org/wiki/Data_warehouse)
 * __Business Intelligence__: Business intelligence (BI) is the set of 
techniques and tools for the transformation of raw data into meaningful and 
useful information for business analysis purposes, 
[wikipedia](https://en.wikipedia.org/wiki/Business_intelligence)
diff --git a/website/_docs/howto/howto_upgrade.md 
b/website/_docs/howto/howto_upgrade.md
index e11edfa..1bac9c0 100644
--- a/website/_docs/howto/howto_upgrade.md
+++ b/website/_docs/howto/howto_upgrade.md
@@ -22,7 +22,7 @@ Below are versions specific guides:
 ## Upgrade from 2.4 to 2.5.0
 
 * Kylin 2.5 need Java 8; Please upgrade Java if you're running with Java 7.
-* Kylin metadata is compitable between 2.4 and 2.5. No migration is needed.
+* Kylin metadata is compatible between 2.4 and 2.5. No migration is needed.
 * Spark engine will move more steps from MR to Spark, you may see performance 
difference for the same cube after the upgrade.
 * Property `kylin.source.jdbc.sqoop-home` need be the location of sqoop 
installation, not its "bin" subfolder, please modify it if you're using RDBMS 
as the data source. 
 * The Cube planner is enabled by default now; New cubes will be optimized by 
it on first build. System cube and dashboard still need manual enablement.
@@ -72,7 +72,7 @@ Kylin v1.5.4 and v1.6.0 are compatible in metadata. Please 
follow the common upg
 Kylin v1.5.3 and v1.5.4 are compatible in metadata. Please follow the common 
upgrade steps above.
 
 ## Upgrade from 1.5.2 to v1.5.3
-Kylin v1.5.3 metadata is compitible with v1.5.2, your cubes don't need 
rebuilt, as usual, some actions need to be performed:
+Kylin v1.5.3 metadata is compatible with v1.5.2; your cubes don't need to be 
rebuilt. As usual, some actions need to be performed:
 
 #### 1. Update HBase coprocessor
 The HBase tables for existing cubes need be updated to the latest coprocessor; 
Follow [this guide](howto_update_coprocessor.html) to update;
@@ -82,7 +82,7 @@ From 1.5.3, Kylin doesn't need Hive to merge small files 
anymore; For users who
 
 
 ## Upgrade from 1.5.1 to v1.5.2
-Kylin v1.5.2 metadata is compitible with v1.5.1, your cubes don't need 
upgrade, while some actions need to be performed:
+Kylin v1.5.2 metadata is compatible with v1.5.1; your cubes don't need to be 
upgraded, while some actions need to be performed:
 
 #### 1. Update HBase coprocessor
 The HBase tables for existing cubes need be updated to the latest coprocessor; 
Follow [this guide](howto_update_coprocessor.html) to update;
diff --git a/website/_docs/howto/howto_use_beeline.md 
b/website/_docs/howto/howto_use_beeline.md
index 4b06b2e..599d9e9 100644
--- a/website/_docs/howto/howto_use_beeline.md
+++ b/website/_docs/howto/howto_use_beeline.md
@@ -10,5 +10,5 @@ 
Beeline(https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients) is
 Edit $KYLIN_HOME/conf/kylin.properties by:
 
   1. change kylin.hive.client=cli to kylin.hive.client=beeline
-  2. add "kylin.hive.beeline.params", this is where you can specifiy beeline 
commmand parameters. Like username(-n), JDBC URL(-u),etc. There's a sample 
kylin.hive.beeline.params included in default kylin.properties, however it's 
commented. You can modify the sample based on your real environment.
+  2. add "kylin.hive.beeline.params", this is where you can specify beeline 
command parameters. Like username(-n), JDBC URL(-u),etc. There's a sample 
kylin.hive.beeline.params included in default kylin.properties, however it's 
commented. You can modify the sample based on your real environment.
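
For illustration, a minimal kylin.properties setup along these lines might look
like this sketch (the username and JDBC URL are hypothetical placeholders, not
the shipped sample):

```
# Switch the Hive client from cli to beeline
kylin.hive.client=beeline
# Beeline parameters: the username (-n) and JDBC URL (-u) are hypothetical
kylin.hive.beeline.params=-n hadoop -u 'jdbc:hive2://localhost:10000'
```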
 
diff --git a/website/_docs/install/configuration.md 
b/website/_docs/install/configuration.md
index b2bf6ce..a9c6257 100644
--- a/website/_docs/install/configuration.md
+++ b/website/_docs/install/configuration.md
@@ -80,7 +80,7 @@ The main configuration file of Kylin.
 | kylin.dictionary.append-entry-size                    | 10000000             
|                                                              | No             
           |
 | kylin.dictionary.append-max-versions                  | 3                    
|                                                              | No             
           |
 | kylin.dictionary.append-version-ttl                   | 259200000            
|                                                              | No             
           |
-| kylin.dictionary.resuable                             | false                
| Whether reuse dict                                           | Yes            
           |
+| kylin.dictionary.reusable                             | false                
| Whether reuse dict                                           | Yes            
           |
 | kylin.dictionary.shrunken-from-global-enabled         | false                
| Whether shrink global dict                                   | Yes            
           |
 | kylin.snapshot.max-cache-entry                        | 500                  
|                                                              | No             
           |
 | kylin.snapshot.max-mb                                 | 300                  
|                                                              | No             
           |
@@ -217,7 +217,7 @@ The main configuration file of Kylin.
 | kylin.query.security-enabled                          | true                 
|                                                              |                
           |
 | kylin.query.cache-enabled                             | true                 
|                                                              |                
           |
 | kylin.query.timeout-seconds                           | 0                    
|                                                              |                
           |
-| kylin.query.timeout-seconds-coefficient               | 0.5                  
| the coefficient to controll query timeout seconds            | Yes            
           |
+| kylin.query.timeout-seconds-coefficient               | 0.5                  
| the coefficient to control query timeout seconds            | Yes             
          |
 | kylin.query.pushdown.runner-class-name                |                      
|                                                              |                
           |
 | kylin.query.pushdown.update-enabled                   | false                
|                                                              |                
           |
 | kylin.query.pushdown.cache-enabled                    | false                
|                                                              |                
           |
diff --git a/website/_docs/install/index.md b/website/_docs/install/index.md
index 63766ce..c9d89f6 100644
--- a/website/_docs/install/index.md
+++ b/website/_docs/install/index.md
@@ -27,7 +27,7 @@ The server to run Kylin need 4 core CPU, 16 GB memory and 100 
GB disk as the min
 
 Kylin depends on Hadoop cluster to process the massive data set. You need 
prepare a well configured Hadoop cluster for Kylin to run, with the common 
services includes HDFS, YARN, MapReduce, Hive, HBase, Zookeeper and other 
services. It is most common to install Kylin on a Hadoop client machine, from 
which Kylin can talk with the Hadoop cluster via command lines including 
`hive`, `hbase`, `hadoop`, etc. 
 
-Kylin itself can be started in any node of the Hadoop cluster. For simplity, 
you can run it in the master node. But to get better stability, we suggest you 
to deploy it a pure Hadoop client node, on which the command lines like `hive`, 
`hbase`, `hadoop`, `hdfs` already be installed and the client congfigurations 
(core-site.xml, hive-site.xml, hbase-site.xml, etc) are properly configured and 
will be automatically syned with other nodes. The Linux account that running 
Kylin has the permiss [...]
+Kylin itself can be started in any node of the Hadoop cluster. For simplicity, 
you can run it in the master node. But to get better stability, we suggest you 
deploy it on a pure Hadoop client node, on which the command lines like `hive`, 
`hbase`, `hadoop`, `hdfs` are already installed and the client configurations 
(core-site.xml, hive-site.xml, hbase-site.xml, etc) are properly configured and 
will be automatically synced with other nodes. The Linux account running 
Kylin has the permis [...]
 
 ## Installation Kylin
 
diff --git a/website/_docs/tutorial/create_cube.md 
b/website/_docs/tutorial/create_cube.md
index 9d92cbd..47c3c91 100644
--- a/website/_docs/tutorial/create_cube.md
+++ b/website/_docs/tutorial/create_cube.md
@@ -27,7 +27,7 @@ This tutorial will guide you to create a cube. It need you 
have at least 1 sampl
 
    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/4 +table.png)
 
-2. Enter the hive table names, separated with commad, and then click `Sync` .
+2. Enter the hive table names, separated by commas, and then click `Sync`.
 
    ![]( /images/tutorial/1.5/Kylin-Cube-Creation-Tutorial/5 hive-table.png)
 
@@ -185,7 +185,7 @@ For more please read this blog: [New Aggregation 
Group](/blog/2016/02/18/new-agg
 
 `Rowkeys`: the rowkeys are composed by the dimension encoded values. 
"Dictionary" is the default encoding method; If a dimension is not fit with 
dictionary (e.g., cardinality > 10 million), select "false" and then enter the 
fixed length for that dimension, usually that is the max length of that column; 
if a value is longer than that size it will be truncated. Please note, without 
dictionary encoding, the cube size might be much bigger.
 
-You can drag & drop a dimension column to adjust its position in rowkey; Put 
the mandantory dimension at the begining, then followed the dimensions that 
heavily involved in filters (where condition). Put high cardinality dimensions 
ahead of low cardinality dimensions.
+You can drag & drop a dimension column to adjust its position in rowkey; Put 
the mandatory dimension at the beginning, then followed the dimensions that 
heavily involved in filters (where condition). Put high cardinality dimensions 
ahead of low cardinality dimensions.
 
 `Mandatory Cuboids`: Whitelist of the cuboids that you want to build.
 
diff --git a/website/_docs/tutorial/cube_build_performance.md 
b/website/_docs/tutorial/cube_build_performance.md
index 2979836..fbe8b51 100755
--- a/website/_docs/tutorial/cube_build_performance.md
+++ b/website/_docs/tutorial/cube_build_performance.md
@@ -181,8 +181,8 @@ The tunning process has been:
 
 
 Now, there are three types of cubes:
-* Cubes with low cardinality in their dimensions (Like cube 4, most of time is 
usend in flat table steps)
-* Cubes with high cardinality in their dimensions (Like cube 6,most of time is 
usend on Build cube, the flat table steps are lower than 10%)
+* Cubes with low cardinality in their dimensions (like cube 4, most of the 
time is spent in flat table steps)
+* Cubes with high cardinality in their dimensions (like cube 6, most of the 
time is spent on Build cube, the flat table steps are below 10%)
 * The third type, ultra high cardinality (UHC) which is outside the scope of 
this article
 
 
diff --git a/website/_docs/tutorial/cube_spark.md 
b/website/_docs/tutorial/cube_spark.md
index 84e0ae5..9cfe366 100644
--- a/website/_docs/tutorial/cube_spark.md
+++ b/website/_docs/tutorial/cube_spark.md
@@ -29,7 +29,7 @@ To run Spark on Yarn, need specify **HADOOP_CONF_DIR** 
environment variable, whi
 
 ## Check Spark configuration
 
-Kylin embedes a Spark binary (v2.1.0) in $KYLIN_HOME/spark, all the Spark 
configurations can be managed in $KYLIN_HOME/conf/kylin.properties with prefix 
*"kylin.engine.spark-conf."*. These properties will be extracted and applied 
when runs submit Spark job; E.g, if you configure 
"kylin.engine.spark-conf.spark.executor.memory=4G", Kylin will use "--conf 
spark.executor.memory=4G" as parameter when execute "spark-submit".
+Kylin embeds a Spark binary (v2.1.0) in $KYLIN_HOME/spark; all the Spark 
configurations can be managed in $KYLIN_HOME/conf/kylin.properties with prefix 
*"kylin.engine.spark-conf."*. These properties will be extracted and applied 
when Kylin submits the Spark job; e.g., if you configure 
"kylin.engine.spark-conf.spark.executor.memory=4G", Kylin will use "--conf 
spark.executor.memory=4G" as a parameter when executing "spark-submit".
 
 Before you run Spark cubing, suggest take a look on these configurations and 
do customization according to your cluster. Below is the recommended 
configurations:
 
@@ -63,7 +63,7 @@ 
kylin.engine.spark-conf.spark.history.fs.logDirectory=hdfs\:///kylin/spark-histo
 
 {% endhighlight %}
 
-For running on Hortonworks platform, need specify "hdp.version" as Java 
options for Yarn containers, so please uncommment the last three lines in 
kylin.properties. 
+For running on Hortonworks platform, you need to specify "hdp.version" as Java 
options for Yarn containers, so please uncomment the last three lines in 
kylin.properties. 
 
 Besides, in order to avoid repeatedly uploading Spark jars to Yarn, you can 
manually do that once, and then configure the jar's HDFS location; Please note, 
the HDFS location need be full qualified name.
 
@@ -103,7 +103,7 @@ Click "Next" to the "Configuration Overwrites" page, click 
"+Property" to add pr
 
    ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/2_overwrite_partition.png)
 
-The sample cube has two memory hungry measures: a "COUNT DISTINCT" and a 
"TOPN(100)"; Their size estimation can be inaccurate when the source data is 
small: the estimized size is much larger than the real size, that causes much 
more RDD partitions be splitted, which slows down the build. Here 100 is a more 
reasonable number for it. Click "Next" and "Save" to save the cube.
+The sample cube has two memory-hungry measures: a "COUNT DISTINCT" and a 
"TOPN(100)"; their size estimation can be inaccurate when the source data is 
small: the estimated size is much larger than the real size, which causes many 
more RDD partitions to be split and slows down the build. Here 100 is a more 
reasonable number for it. Click "Next" and "Save" to save the cube.
 
 
 ## Build Cube with Spark
diff --git a/website/_docs/tutorial/cube_streaming.md 
b/website/_docs/tutorial/cube_streaming.md
index 779b251..27f1924 100644
--- a/website/_docs/tutorial/cube_streaming.md
+++ b/website/_docs/tutorial/cube_streaming.md
@@ -108,9 +108,9 @@ Save the data model.
 The streaming Cube is almost the same as a normal cube. a couple of points 
need get your attention:
 
 * The partition time column should be a dimension of the Cube. In Streaming 
OLAP the time is always a query condition, and Kylin will leverage this to 
narrow down the scanned partitions.
-* Don't use "order\_time" as dimension as that is pretty fine-grained; suggest 
to use "mintue\_start", "hour\_start" or other, depends on how you will inspect 
the data.
+* Don't use "order\_time" as dimension as that is pretty fine-grained; suggest 
to use "minute\_start", "hour\_start" or other, depends on how you will inspect 
the data.
 * Define "year\_start", "quarter\_start", "month\_start", "day\_start", 
"hour\_start", "minute\_start" as a hierarchy to reduce the combinations to 
calculate.
-* In the "refersh setting" step, create more merge ranges, like 0.5 hour, 4 
hours, 1 day, and then 7 days; This will help to control the cube segment 
number.
+* In the "refresh setting" step, create more merge ranges, like 0.5 hour, 4 
hours, 1 day, and then 7 days; This will help to control the cube segment 
number.
 * In the "rowkeys" section, drag&drop the "minute\_start" to the head 
position, as for streaming queries, the time condition is always appeared; 
putting it to head will help to narrow down the scan range.
 
        
![](/images/tutorial/1.6/Kylin-Cube-Streaming-Tutorial/8_Cube_dimension.png)
diff --git a/website/_docs/tutorial/flink.md b/website/_docs/tutorial/flink.md
index 9bbf19d..4ddfbf3 100644
--- a/website/_docs/tutorial/flink.md
+++ b/website/_docs/tutorial/flink.md
@@ -246,4 +246,4 @@ Now you can read Kylin’s data from Apache Flink, great!
 
 [Full Code 
Example](https://github.com/albertoRamon/Flink/tree/master/ReadKylinFromFlink/flink-scala-project)
 
-Solved all integration problems, and tested with different types of data 
(Long, BigDecimal and Dates). The patch has been comited at 15 Oct, then, will 
be part of Flink 1.2.
+Solved all integration problems, and tested with different types of data 
(Long, BigDecimal and Dates). The patch was committed on 15 Oct and will be 
part of Flink 1.2.
diff --git a/website/_docs/tutorial/hybrid.md b/website/_docs/tutorial/hybrid.md
index fff16d2..83ac594 100644
--- a/website/_docs/tutorial/hybrid.md
+++ b/website/_docs/tutorial/hybrid.md
@@ -6,7 +6,7 @@ permalink: /docs/tutorial/hybrid.html
 since: v2.5.0
 ---
 
-This tutorial will guide you to create a hybrid model. Regarding the concept 
of hybri, please refer to [this 
blog](http://kylin.apache.org/blog/2015/09/25/hybrid-model/).
+This tutorial will guide you to create a hybrid model. Regarding the concept 
of hybrid, please refer to [this 
blog](http://kylin.apache.org/blog/2015/09/25/hybrid-model/).
 
 ### I. Create a hybrid model
 One Hybrid model can refer to multiple cubes.
@@ -25,7 +25,7 @@ One Hybrid model can refer to multiple cubes.
     ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/3 
hybrid-created.png)
 
 ### II. Update a hybrid model
-1. Place the mouse over the hybrid name, then click `Action` button, in the 
drop-down list select `Edit`. You can update the ybrid by adding(> button) or 
deleting(< button) cubes to/from it. 
+1. Place the mouse over the hybrid name, then click the `Action` button; in the 
drop-down list select `Edit`. You can update the hybrid by adding (> button) or 
deleting (< button) cubes to/from it. 
     ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/4 edit-hybrid.png)
 
 2. Click `Submit` to save the Hybrid model. 
@@ -43,4 +43,4 @@ After the hybrid model is created, you can run a query. As 
the hybrid has higher
 Click `Insight` in top bar, input a SQL statement to execute.
     ![]( /images/tutorial/2.5/Kylin-Hybrid-Creation-Tutorial/5 
sql-statement.png)
 
-*Please note, Hybrid model is not suitable for "bitmap" count distinct 
measures's merge across cubes, please have the partition date as a group by 
field in the SQL query. *
\ No newline at end of file
+*Please note, the Hybrid model is not suitable for merging "bitmap" count 
distinct measures across cubes; please have the partition date as a group-by 
field in the SQL query.*
diff --git a/website/_docs/tutorial/kylin_client_tool.md 
b/website/_docs/tutorial/kylin_client_tool.md
index 2bdfc28..ce51a75 100644
--- a/website/_docs/tutorial/kylin_client_tool.md
+++ b/website/_docs/tutorial/kylin_client_tool.md
@@ -33,7 +33,7 @@ Options:
   -p, --password TEXT   Kylin password  [required]
   --project TEXT        Kylin project  [required]
   --prefix TEXT         Kylin RESTful prefix of url, default: "/kylin/api"
-  --debug / --no-debug  show debug infomation
+  --debug / --no-debug  show debug information
   --help                Show this message and exit.
 
 Commands:
diff --git a/website/_docs/tutorial/kylin_sample.md 
b/website/_docs/tutorial/kylin_sample.md
index bfcdf26..8b2c763 100644
--- a/website/_docs/tutorial/kylin_sample.md
+++ b/website/_docs/tutorial/kylin_sample.md
@@ -12,7 +12,7 @@ Kylin provides a script for you to create a sample Cube; the 
script will also cr
 3. Select the sample cube "kylin_sales_cube", click "Actions" -> "Build", pick 
up a date later than 2014-01-01 (to cover all 10000 sample records);
 4. Check the build progress in "Monitor" tab, until 100%;
 5. Execute SQLs in the "Insight" tab, for example:
-       select part_dt, sum(price) as total_selled, count(distinct seller_id) 
as sellers from kylin_sales group by part_dt order by part_dt
+       select part_dt, sum(price) as total_sold, count(distinct seller_id) as 
sellers from kylin_sales group by part_dt order by part_dt
 6. You can verify the query result and compare the response time with hive;
 
    
diff --git a/website/_docs/tutorial/odbc.md b/website/_docs/tutorial/odbc.md
index 4c01f58..d170854 100644
--- a/website/_docs/tutorial/odbc.md
+++ b/website/_docs/tutorial/odbc.md
@@ -23,7 +23,7 @@ since: v0.7.1
 2. ODBC driver internally gets results from a REST server, make sure you have 
access to one
 
 ## Installation
-1. Uninstall existing Kylin ODBC first, if you already installled it before
+1. Uninstall existing Kylin ODBC first, if you already installed it before
 2. Download ODBC Driver from [download](../../download/).
    * For 32 bit Tableau Desktop: Please install KylinODBCDriver (x86).exe
    * For 64 bit Tableau Desktop: Please install KylinODBCDriver (x64).exe
diff --git a/website/_docs/tutorial/query_pushdown.md 
b/website/_docs/tutorial/query_pushdown.md
index 64dd29e..e427bf1 100644
--- a/website/_docs/tutorial/query_pushdown.md
+++ b/website/_docs/tutorial/query_pushdown.md
@@ -51,7 +51,7 @@ kylin.query.pushdown.jdbc.pool-min-idle=0
 
 ### Do Query Pushdown
 
-After Query Pushdown is configured, user is allowed to do flexible queries to 
the imported tables without avaible cubes.
+After Query Pushdown is configured, users are allowed to run flexible queries 
against the imported tables without available cubes.
 
    ![](/images/tutorial/2.1/push_down/push_down_1.png)
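
As a rough sketch, enabling pushdown in kylin.properties looks like the
following, assuming the JDBC pushdown runner shipped with Kylin 2.x; the
connection values are illustrative placeholders:

```
# Route queries that no cube can answer to Hive via the JDBC pushdown runner
kylin.query.pushdown.runner-class-name=org.apache.kylin.query.adhoc.PushDownRunnerJdbcImpl
# Illustrative connection settings for the pushdown engine
kylin.query.pushdown.jdbc.url=jdbc:hive2://localhost:10000/default
kylin.query.pushdown.jdbc.driver=org.apache.hive.jdbc.HiveDriver
kylin.query.pushdown.jdbc.username=hive
kylin.query.pushdown.jdbc.password=
```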
 
diff --git a/website/_docs/tutorial/setup_jdbc_datasource.md 
b/website/_docs/tutorial/setup_jdbc_datasource.md
index c845296..9b59e1a 100644
--- a/website/_docs/tutorial/setup_jdbc_datasource.md
+++ b/website/_docs/tutorial/setup_jdbc_datasource.md
@@ -65,7 +65,7 @@ kylin.source.default=8
 kylin.source.jdbc.filed-delimiter=|
 ```
 
-There is another parameter specifing how many splits should be divided. Sqoop 
would run a mapper for each split.
+There is another parameter specifying how many splits the data should be 
divided into. Sqoop would run a mapper for each split.
 
 ```
 kylin.source.jdbc.sqoop-mapper-num=4
@@ -90,4 +90,4 @@ Click "Sync", Kylin will load the tables' definition through 
the JDBC interface.
 
 ![](/images/docs/jdbc-datasource/load_table_03.png)
 
-Go ahead and design your model and Cube. When building the Cube, Kylin will 
use Sqoop to import the data to HDFS, and then run building over it.
\ No newline at end of file
+Go ahead and design your model and Cube. When building the Cube, Kylin will 
use Sqoop to import the data to HDFS, and then run the build over it.
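
For context, a fuller JDBC data source section in kylin.properties might look
like this sketch; the MySQL connection values are hypothetical:

```
# Illustrative JDBC data source settings (connection values are hypothetical)
kylin.source.default=8
kylin.source.jdbc.connection-url=jdbc:mysql://localhost:3306/sales
kylin.source.jdbc.driver=com.mysql.jdbc.Driver
kylin.source.jdbc.user=kylin
kylin.source.jdbc.pass=changeit
# Point sqoop-home at the Sqoop installation directory, not its "bin" subfolder
kylin.source.jdbc.sqoop-home=/usr/local/sqoop
kylin.source.jdbc.filed-delimiter=|
kylin.source.jdbc.sqoop-mapper-num=4
```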
diff --git a/website/_docs/tutorial/setup_systemcube.md 
b/website/_docs/tutorial/setup_systemcube.md
index 610c664..de9cd33 100644
--- a/website/_docs/tutorial/setup_systemcube.md
+++ b/website/_docs/tutorial/setup_systemcube.md
@@ -42,10 +42,10 @@ Run the following command in KYLIN_HOME folder to generate 
related metadata:
 ```
 ./bin/kylin.sh org.apache.kylin.tool.metrics.systemcube.SCCreator \
 -inputConfig SCSinkTools.json \
--output <output_forder>
+-output <output_folder>
 ```
 
-By this command, the related metadata will be generated and its location is 
under the directory `<output_forder>`. The details are as follows, system_cube 
is our `<output_forder>`:
+By this command, the related metadata will be generated and its location is 
under the directory `<output_folder>`. The details are as follows (system_cube 
is our `<output_folder>`):
 
 ![metadata](/images/SystemCube/metadata.png)
 
@@ -53,7 +53,7 @@ By this command, the related metadata will be generated and 
its location is unde
 Running the following command to create source hive tables:
 
 ```
-hive -f <output_forder>/create_hive_tables_for_system_cubes.sql
+hive -f <output_folder>/create_hive_tables_for_system_cubes.sql
 ```
 
 By this command, the related hive table will be created.
@@ -64,7 +64,7 @@ By this command, the related hive table will be created.
 Then we need to upload metadata to hbase by the following command:
 
 ```
-./bin/metastore.sh restore <output_forder>
+./bin/metastore.sh restore <output_folder>
 ```
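
Taken together, steps 1 to 3 boil down to this sequence, using system_cube as
the output folder as in the walkthrough above:

```
# 1. Generate the System Cube metadata under ./system_cube
./bin/kylin.sh org.apache.kylin.tool.metrics.systemcube.SCCreator \
  -inputConfig SCSinkTools.json \
  -output system_cube
# 2. Create the source Hive tables
hive -f system_cube/create_hive_tables_for_system_cubes.sql
# 3. Restore the generated metadata into Kylin's metastore
./bin/metastore.sh restore system_cube
```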
 
 ### 4. Reload Metadata
@@ -248,7 +248,7 @@ This Cube is for collecting query metrics at the lowest 
level. For a query, the
   </tr>
   <tr>
     <td>MAX, SUM of COUNT_SKIP</td>
-    <td>based on fuzzy filters or else, a few rows will be skiped. This 
indicates the skipped row count</td>
+    <td>based on fuzzy filters or the like, a few rows will be skipped. This 
indicates the skipped row count</td>
   </tr>
   <tr>
     <td>MAX, SUM of SIZE_SCAN</td>
@@ -285,7 +285,7 @@ This Cube is for collecting query metrics at the Cube 
level. The most important
   </tr>
   <tr>
     <td>CUBOID_TARGET</td>
-    <td>target cuboid already precalculated and served for source cuboid</td>
+    <td>target cuboid already pre-calculated and served for source cuboid</td>
   </tr>
   <tr>
     <td>IF_MATCH</td>
@@ -395,7 +395,7 @@ This Cube is for collecting job metrics. The details are as 
follows:
   </tr>
   <tr>
     <td>MIN, MAX, SUM of WAIT_RESOURCE_TIME</td>
-    <td>a job may includes serveral MR(map reduce) jobs. Those MR jobs may 
wait because of lack of Hadoop resources.</td>
+    <td>a job may include several MR (map reduce) jobs. Those MR jobs may wait 
because of lack of Hadoop resources.</td>
   </tr>
 </table>
 
diff --git a/website/_docs/tutorial/squirrel.md 
b/website/_docs/tutorial/squirrel.md
index a9956c8..654ebdc 100644
--- a/website/_docs/tutorial/squirrel.md
+++ b/website/_docs/tutorial/squirrel.md
@@ -53,7 +53,7 @@ On left menu: ![alt text](/images/SQuirreL-Tutorial/12.png)  
> ![alt text](/imag
   ![](/images/SQuirreL-Tutorial/14.png)
 
 
-And automatically launch conection:
+And the connection will launch automatically:
 
   ![](/images/SQuirreL-Tutorial/15.png)
 
@@ -71,7 +71,7 @@ Choose Tab: and write a query  (whe use Kylin’s example cube):
 
 
 ```
-select part_dt, sum(price) as total_selled, count(distinct seller_id) as 
sellers 
+select part_dt, sum(price) as total_sold, count(distinct seller_id) as sellers 
 from kylin_sales group by part_dt 
 order by part_dt
 ```
diff --git a/website/_docs/tutorial/tableau_91.md 
b/website/_docs/tutorial/tableau_91.md
index 39c23ef..7c35ac0 100644
--- a/website/_docs/tutorial/tableau_91.md
+++ b/website/_docs/tutorial/tableau_91.md
@@ -31,7 +31,7 @@ To use customized SQL, click `New Custom SQL` in left panel 
and type SQL stateme
 ![](/images/tutorial/odbc/tableau_91/5.png)
 
 ### Visualization
-Now you can start to enjou analyzing with Tableau 9.1.
+Now you can start to enjoy analyzing with Tableau 9.1.
 ![](/images/tutorial/odbc/tableau_91/6.png)
 
 ### Publish to Tableau Server
diff --git a/website/_docs/tutorial/use_dashboard.md 
b/website/_docs/tutorial/use_dashboard.md
index 1f0be7a..2062fb2 100644
--- a/website/_docs/tutorial/use_dashboard.md
+++ b/website/_docs/tutorial/use_dashboard.md
@@ -17,7 +17,7 @@ Kylin Dashboard shows useful Cube usage statistics, which are 
very important to
 
 To enable Dashboard on WebUI, you need to ensure these are all set:
 * Set **kylin.web.dashboard-enabled=true** in **kylin.properties**.
-* Setup system Cubes according to [toturial](setup_systemcube.html).
+* Set up system Cubes according to the [tutorial](setup_systemcube.html).
 
 ## How to use it
 
@@ -37,7 +37,7 @@ You should now click on the calender to modify the '**Time 
Period**'.
 
 ![SelectPeriod](/images/Dashboard/SelectPeriod.png)
 
-- '**Time period**' is set deafult to **'Last 7 Days**'.
+- '**Time period**' is set by default to **'Last 7 Days**'.
 
 - There are **2** ways to modify the time period, one is *using standard time 
period*s and the other is *customizing your time period*.
 
@@ -54,7 +54,7 @@ You should now click on the calender to modify the '**Time 
Period**'.
 
 #### Step 3:
 
-Now the data analysis will be changed and shown on the same page. (Important 
information has been pixilated.)
+Now the data analysis will be changed and shown on the same page. (Important 
information has been pixelated.)
 
 - Numbers in '**Total Cube Count**' and '**Avg Cube Expansion**' are in 
**Blue**.
 
@@ -62,7 +62,7 @@ Now the data analysis will be changed and shown on the same 
page. (Important inf
 
 - Numbers in '**Query Count**', '**Average Query Latency**', '**Job Count**' 
and '**Average Build Time per MB**' are in **Green**.
 
-  You can click on these four rectangles to get detail infomation about the 
data you selected. The detail information will then be shown as diagrams and 
displayed in '**Data grouped by Project**' and '**Data grouped by Time**' boxes.
+  You can click on these four rectangles to get detailed information about the 
data you selected. The detailed information will then be shown as diagrams and 
displayed in '**Data grouped by Project**' and '**Data grouped by Time**' boxes.
 
   1. '**Query Count**' and '**Average Query Latency**'
 
diff --git a/website/_docs/tutorial/web.md b/website/_docs/tutorial/web.md
index 55eb8e5..d41ef4c 100644
--- a/website/_docs/tutorial/web.md
+++ b/website/_docs/tutorial/web.md
@@ -79,7 +79,7 @@ There's one simple pivot and visualization analysis tool in 
Kylin's web for user
 
 * General Information:
 
-   When the query execute success, it will present a success indictor and also 
a cube's name which be hit. 
+   When the query executes successfully, it will present a success indicator 
and also the name of the cube that was hit. 
    Also it will present how long this query be executed in backend engine (not 
cover network traffic from Kylin server to browser).
 
    ![](/images/tutorial/1.5/Kylin-Web-Tutorial/12 general.png)
diff --git a/website/community/poweredby.md b/website/community/poweredby.md
index 7079d72..dde844a 100644
--- a/website/community/poweredby.md
+++ b/website/community/poweredby.md
@@ -36,7 +36,7 @@ __Companies & Organizations__
 * [LeEco](http://www.leeco.com/), 2017-04-16
     * Apache Kylin has been used as a part of OLAP engine and data query 
engine for big data platform at LeEco, which is powering various big data 
business, such as stream media data and cloud data.     
 * [Gome](https://www.gome.com.cn/), 2017-04-14
-    * Apache Kylin provides a perfect solution for Gome's online Operation 
Advisor. It improved not only the analysis performance, but also the 
productivity. We adopted Kylin in both T+1 batch procesing and real time 
analysis. It is the best OLAP solution in our scenario. 
+    * Apache Kylin provides a perfect solution for Gome's online Operation 
Advisor. It improved not only the analysis performance, but also the 
productivity. We adopted Kylin in both T+1 batch processing and real time 
analysis. It is the best OLAP solution in our scenario. 
 * [Sohu](https://www.sohu.com)   (_NASDAQ: SOHU_), 2017-04-13
     * In Sohu, Apache Kylin is running as the analytics engine on Hadoop, 
serving multiple business domains including advertisement analysis, traffic 
forecast, etc.  
 * [Meizu](https://www.meizu.com), 2017-04-13
@@ -62,4 +62,4 @@ __Companies & Organizations__
 * [JD.com, Inc.](http://www.jd.com)  (_NASDAQ: JD_), 2015-11-05
     * Apache Kylin as Data Analytics Engine to analysis 
[JOS](http://jos.jd.com) API access behavior and report in 
[JCloud](http://www.jcloud.com).
 * [eBay](http://www.ebay.com)  (_NASDAQ: EBAY_), 2015-11-05
-    * Apache Kylin is used at eBay for Big Data Analytics on Hadoop. This 
powers various data products including Behavior Analytics, Traffic Reporting, 
Account Manager Application and Streaming Dashboard.
\ No newline at end of file
+    * Apache Kylin is used at eBay for Big Data Analytics on Hadoop. This 
powers various data products including Behavior Analytics, Traffic Reporting, 
Account Manager Application and Streaming Dashboard.
diff --git a/website/index.md b/website/index.md
index e74e8c4..a0850dd 100644
--- a/website/index.md
+++ b/website/index.md
@@ -18,7 +18,7 @@ title: Home
                 <ol class="none-icon">
                   <li>
                     <span class="li-circle">1</span>
-                    Identify a Star/Snowfalke Schema on Hadoop.
+                    Identify a Star/Snowflake Schema on Hadoop.
                   </li>
                   <li>
                     <span class="li-circle">2</span>
