This is an automated email from the ASF dual-hosted git repository.
xxyu pushed a commit to branch kylin4_on_cloud
in repository https://gitbox.apache.org/repos/asf/kylin.git
The following commit(s) were added to refs/heads/kylin4_on_cloud by this push:
new 85a1674 # update README.md && fix some typo
85a1674 is described below
commit 85a16749c0b17da2f368d27199bd5dc08b04fc30
Author: Mukvin <[email protected]>
AuthorDate: Tue Jan 25 15:18:02 2022 +0800
# update README.md && fix some typo
---
README.md | 65 ++++++++---------------------
bin/init.sh | 18 ++++----
engine_utils.py | 4 +-
kylin_configs.yaml | 1 -
readme/advanced_configs.md | 27 ++++++------
readme/commands.md | 24 +++++------
readme/monitor.md | 4 +-
readme/prerequisites.md | 34 +++++++--------
readme/quick_start_for_multiple_clusters.md | 25 +++++++++++
9 files changed, 98 insertions(+), 104 deletions(-)
diff --git a/README.md b/README.md
index 9189c95..b384960 100644
--- a/README.md
+++ b/README.md
@@ -2,16 +2,14 @@
## Target
-1. Deploy Kylin4 on Ec2 with Spark Standalone mode.
-2. Removed the dependency of hadoop and start quickly.
-3. Support to scale worker nodes for Spark Standalone Cluster quickly and
conveniently.
-4. Improve performance for query in using `Local Cache + Soft Affinity`
feature (`Experimental Feature`), please check the
[details](https://mp.weixin.qq.com/s/jEPvWJwSClQcMLPm64s4fQ).
-5. Support to monitor cluster status with prometheus server and granfana.
-6. Create a Kylin4 cluster on EC2 in 10 minutes.
+1. Deploy Kylin4 on EC2 with Spark Standalone mode in `10` minutes.
+2. Support to scale nodes (Kylin & Spark Worker) quickly and conveniently.
+3. Improve query performance of Kylin4 by using the `Local Cache + Soft Affinity` feature (`Experimental Feature`), please check the [details](https://kylin.apache.org/blog/2021/10/21/Local-Cache-and-Soft-Affinity-Scheduling/).
+4. Support to monitor cluster status with `Prometheus` and `Grafana`.
-## Structure
+## Architecture
-When cluster was created, services and nodes will like below:
+When the cluster is created, the services and nodes will look like below:

@@ -22,15 +20,15 @@ When cluster was created, services and nodes will like
below:
## Quick Start
-1. Initialize aws account credential on local mac, please check
[details](./readme/prerequisites.md#localaws).
+1. Initialize AWS account credentials on the local machine, please check the [details](./readme/prerequisites.md#localaws).
2. Download the source code:
```shell
- git clone https://github.com/Kyligence/kylin-tpch.git && cd kylin-tpch &&
git checkout deploy-kylin-on-aws
+ git clone https://github.com/apache/kylin.git && cd kylin && git checkout
kylin4_on_cloud
```
-3. Modify the `kylin-tpch/kylin_config.yml`.
+3. Modify the `kylin_configs.yaml`.
1. Set the `AWS_REGION`.
@@ -51,7 +49,7 @@ When cluster was created, services and nodes will like below:
4. Init local env.
```shell
-$KYLIN_TPCH_HOME/bin/init.sh
+$ bin/init.sh
```
> Note: Follow the printed information to enter the python virtual env and get the help messages.
@@ -59,7 +57,7 @@ $KYLIN_TPCH_HOME/bin/init.sh
5. Execute commands to deploy a `default` cluster.
```shell
-$ python ./deploy.py --type deploy
+$ python deploy.py --type deploy
```
After `default` cluster is ready, you will see the message `Kylin Cluster
already start successfully.` in the console.
@@ -67,7 +65,7 @@ After `default` cluster is ready, you will see the message
`Kylin Cluster alread
6. Execute commands to list nodes of cluster.
```shell
-$ python ./deploy.py --type list
+$ python deploy.py --type list
```
Then you can check the `public ip` of Kylin Node.
@@ -77,51 +75,24 @@ You can access `Kylin` web by `http://{kylin public
ip}:7070/kylin`.
7. Destroy the `default` cluster.
```shell
-$ python ./deploy.py --type destroy
+$ python deploy.py --type destroy
```
-## Quick Start For Multiple Clusters
-
-> Pre-steps is same as Quick Start steps which is from 1 to 5.
-
-1. Modify the config `CLUSTER_INDEXES` for multiple cluster.
-
- > Note:
- >
- > 1. `CLUSTER_INDEXES` means that cluster index is in the range of
`CLUSTER_INDEXES`.
- > 2. If user create multiple clusters, `default` cluster always be created.
If `CLUSTER_INDEXES` is (1, 3), there will be 4 cluster which contains the
cluster 1, 2, 3 and `default` will be created if user execute the commands.
- > 3. Configs for multiple clusters always are same as the `default` cluster
to read from `kylin-tpch/kylin_configs.yaml`
-
-2. Copy `kylin.properties.template` for expecting clusters to deploy, please
check the [details](./readme/prerequisites.md#cluster).
-
-3. Execute commands to deploy `all` clusters.
-
- ```shell
- python ./deploy.py --type deploy --cluster all
- ```
-
-4. Destroy all clusters.
-
- ```shell
- python ./deploy.py --type destroy --cluster all
- ```
-
-
-
## Notes
+1. More details about `quick start for multiple clusters`, see document [quick start for multiple clusters](./readme/quick_start_for_multiple_clusters.md).
1. More details about `commands` of tool, see document
[commands](./readme/commands.md).
2. More details about `prerequisites` of tool, see document
[prerequisites](./readme/prerequisites.md).
3. More details about `advanced configs` of tool, see document [advanced
configs](./readme/advanced_configs.md).
4. More details about `monitor services` supported by tool, see document
[monitor](./readme/monitor.md).
-1. Current tool already open the port for some services. You can access the
service by `public ip` of related EC2 instance.
+5. Current tool already opens the ports for some services. You can access a service by the `public ip` of the related EC2 instance.
1. `SSH`: 22
2. `Grafana`: 3000
3. `Prometheus`: 9090, 9100
4. `Kylin`: 7070
5. `Spark`: 8080, 4040.
-2. More about cloudformation syntax, please check [aws
website](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html).
-3. Current Kylin version is 4.0.0.
-4. Current Spark version is 3.1.1.
+6. More about CloudFormation syntax, please check the [aws website](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html).
+7. Current Kylin version is 4.0.0.
+8. Current Spark version is 3.1.1.
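
A minimal sketch of the Quick Start lifecycle described in the updated README, assuming AWS credentials and the `kylin_configs.yaml` edits above are already in place; all commands are taken from the README, only the ordering and comments are added here.

```shell
git clone https://github.com/apache/kylin.git && cd kylin && git checkout kylin4_on_cloud

bin/init.sh                      # prepare the python virtual env
source venv/bin/activate         # activate it, as init.sh suggests

python deploy.py --type deploy   # create the `default` cluster
python deploy.py --type list     # find the Kylin node's public ip
# browse http://{kylin public ip}:7070/kylin, then clean up:
python deploy.py --type destroy
```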
diff --git a/bin/init.sh b/bin/init.sh
index 88fb704..5eb5c68 100755
--- a/bin/init.sh
+++ b/bin/init.sh
@@ -40,10 +40,10 @@ function logging() {
esac
}
-if [[ -z "$KYLIN_TPCH_HOME" ]]; then
+if [[ -z "$TOOL_HOME" ]]; then
dir=$(cd -P -- "$(dirname -- "$0")" && pwd -P)
- export KYLIN_TPCH_HOME=`cd "${dir}/../"; pwd`
- logging info "KYLIN_TPCH_HOME not set, will use $KYLIN_TPCH_HOME as
KYLIN_TPCH_HOME."
+ export TOOL_HOME=`cd "${dir}/../"; pwd`
+ logging info "TOOL_HOME not set, will use $TOOL_HOME as TOOL_HOME."
fi
function check_python_version {
@@ -61,16 +61,16 @@ function check_python_version {
function install_env {
check_python_version
- if [[ ! -d $KYLIN_TPCH_HOME/venv ]]; then
- python3 -m venv $KYLIN_TPCH_HOME/venv
- source $KYLIN_TPCH_HOME/venv/bin/activate
+ if [[ ! -d $TOOL_HOME/venv ]]; then
+ python3 -m venv $TOOL_HOME/venv
+ source $TOOL_HOME/venv/bin/activate
logging info "Install dependencies ..."
- pip3 install -r $KYLIN_TPCH_HOME/requirements.txt
+ pip3 install -r $TOOL_HOME/requirements.txt
else
- logging warn "$KYLIN_TPCH_HOME/.venv already existing, skip install again."
+ logging warn "$TOOL_HOME/venv already exists, skip installing again."
fi
logging info "Please use 'source venv/bin/activate' to activate venv and
execute commands."
- logging info "Please use 'python ./deploy.py --h[|--help]' to check the
usage."
+ logging info "Please use 'python deploy.py --help' to check the usage."
logging info "Enjoy it and have fun."
}
diff --git a/engine_utils.py b/engine_utils.py
index 823f42a..2662b5c 100644
--- a/engine_utils.py
+++ b/engine_utils.py
@@ -238,9 +238,9 @@ class EngineUtils:
@staticmethod
def refresh_kylin_properties() -> None:
- logger.info('Start to refresh kylin.poperties in
`kylin-tpch/properties/default`.')
+ logger.info('Start to refresh kylin.properties in `properties/default`.')
Utils.refresh_kylin_properties()
- logger.info('Refresh kylin.poperties in
`kylin-tpch/properties/default` successfully.')
+ logger.info('Refresh kylin.properties in `properties/default` successfully.')
@staticmethod
def refresh_kylin_properties_in_clusters(cluster_nums: List) -> None:
diff --git a/kylin_configs.yaml b/kylin_configs.yaml
index 07b4179..18ec304 100644
--- a/kylin_configs.yaml
+++ b/kylin_configs.yaml
@@ -82,7 +82,6 @@ ALWAYS_DESTROY_ALL: false
ASSOSICATED_PUBLIC_IP: &associated_public_ip 'true'
## Stack Names
-### Note: if need to change stack names please change it in ../utils.py too.
VPC_STACK: ec2-or-emr-vpc-stack
RDS_STACK: ec2-rds-stack
STATIC_SERVICE_STACK: ec2-static-service-stack
diff --git a/readme/advanced_configs.md b/readme/advanced_configs.md
index 86d2ae9..106c1ae 100644
--- a/readme/advanced_configs.md
+++ b/readme/advanced_configs.md
@@ -22,25 +22,25 @@ There are `9` modules params for tools. Introductions as
below:
> Note:
>
- > 1. `KYLIN_SCALE_UP_NODES` is for the range of kylin nodes to scale up.
- > 1. `KYLIN_SCALE_DOWN_NODES` is for the range of kylin nodes to scale
down.
- > 1. The range of `KYLIN_SCALE_UP_NODES` must be contain the range of
`KYLIN_SCALE_DOWN_NODES`.
- > 1. **They are effective to all clusters which is not only `default
cluster` but also other cluster which index is in `${CLUSTER_INDEXES}`.**
+ > 1. `KYLIN_SCALE_UP_NODES` is for the range of kylin nodes to scale up.
+ > 1. `KYLIN_SCALE_DOWN_NODES` is for the range of kylin nodes to scale down.
+ > 1. The range of `KYLIN_SCALE_UP_NODES` must contain the range of `KYLIN_SCALE_DOWN_NODES`.
+ > 1. **They are effective for all clusters, not only the `default cluster` but also every other cluster whose index is in `${CLUSTER_INDEXES}`.**
- EC2_SPARK_SCALE_SLAVE_PARAMS: this params of module are for scaling **Spark
workers**, the range of **Spark Workers ** is related to
`SPARK_WORKER_SCALE_UP_NODES` and `SPARK_WORKER_SCALE_DOWN_NODES`.
> Note:
>
- > 1. `SPARK_WORKER_SCALE_UP_NODES` is for the range for spark workers to
scale up. **It's effective to all clusters which is not only `default cluster`
but also other cluster which index is in `${CLUSTER_INDEXES}`.**
- > 1. `SPARK_WORKER_SCALE_DOWN_NODES` is for the range for spark workers
to scale down. **It's effective to all clusters which is not only `default
cluster` but also other cluster which index is in `${CLUSTER_INDEXES}`.**
- > 1. The range of `SPARK_WORKER_SCALE_UP_NODES` must be contain the range
of `SPARK_WORKER_SCALE_DOWN_NODES`.
- > 1. **They are effective to all clusters which is not only `default
cluster` but also other cluster which index is in `${CLUSTER_INDEXES}`.**
+ > 1. `SPARK_WORKER_SCALE_UP_NODES` is for the range of spark workers to scale up. **It's effective for all clusters, not only the `default cluster` but also every other cluster whose index is in `${CLUSTER_INDEXES}`.**
+ > 1. `SPARK_WORKER_SCALE_DOWN_NODES` is for the range of spark workers to scale down. **It's effective for all clusters, not only the `default cluster` but also every other cluster whose index is in `${CLUSTER_INDEXES}`.**
+ > 1. The range of `SPARK_WORKER_SCALE_UP_NODES` must contain the range of `SPARK_WORKER_SCALE_DOWN_NODES`.
+ > 1. **They are effective for all clusters, not only the `default cluster` but also every other cluster whose index is in `${CLUSTER_INDEXES}`.**
### Customize Configs
-User also can customize the params in `kylin-tpch/kylin_configs.yaml` to
create an expected instances. Such as **the type of instance**, **the volume
size of instance** and **the volumn type of instance** and so on.
+Users can also customize the params in `kylin_configs.yaml` to create the expected instances, such as **the type of instance**, **the volume size of instance**, **the volume type of instance** and so on.
-1. If you want to customize configs for instances, you must modify the
`EC2Mode` from `test` to `product` in the ``kylin-tpch/kylin_configs.yml`.
+1. If you want to customize configs for instances, you must modify the `Ec2Mode` from `test` to `product` in `kylin_configs.yaml`.
2. `Ec2Mode` is only in the params of `EC2_STATIC_SERVICES_PARAMS`, `EC2_ZOOKEEPERS_PARAMS`, `EC2_SPARK_MASTER_PARAMS`, `EC2_KYLIN4_PARAMS`, `EC2_SPARK_WORKER_PARAMS`, `EC2_KYLIN4_SCALE_PARAMS` and `EC2_SPARK_SCALE_SLAVE_PARAMS`.
3. So instances can be customized to affect `Monitor Node` (`EC2_STATIC_SERVICES_PARAMS`), `Zookeeper Nodes` (`EC2_ZOOKEEPERS_PARAMS`), `Spark Master Node` (`EC2_SPARK_MASTER_PARAMS`), `Kylin4 Node` (`EC2_KYLIN4_PARAMS`), `Spark workers` (`EC2_SPARK_WORKER_PARAMS`), `Kylin4 scale nodes` (`EC2_KYLIN4_SCALE_PARAMS`) and `Spark scale workers` (`EC2_SPARK_SCALE_SLAVE_PARAMS`).
4. Now `Ec2Mode` **only affects** the related params `Ec2InstanceTypeFor*`, `Ec2VolumeSizeFor*` and `Ec2VolumnTypeFor*` in the params modules.
@@ -52,11 +52,10 @@ User also can customize the params in
`kylin-tpch/kylin_configs.yaml` to create
As an example in `EC2_STATIC_SERVICES_PARAMS`:
-- change `Ec2Mode ` from `test`to `product`
+- change `Ec2Mode` from `test` to `product`.
- change `Ec2InstanceTypeForStaticServices` from `m5.2xlarge` to `m5.4xlarge`.
-- change `Ec2VolumeSizeForStaticServicesNode` from `'20'` to `'50'.`
+- change `Ec2VolumeSizeForStaticServicesNode` from `'20'` to `'50'`.
- change `Ec2VolumnTypeForStaticServicesNode` from `gp2` to `standard`.
-- Then create the node of static service node will be a ``m5.4xlarge` and it
attach a volume which size is `50` and type is `standard`.
+- Then the created static service node will be an `m5.4xlarge`, and it attaches a volume whose size is `50` and type is `standard`.

-
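
Since the exact YAML layout of the params modules is not shown in this diff, a cautious way to apply the `EC2_STATIC_SERVICES_PARAMS` example above is to locate the keys first and then edit them by hand; the key names below are the ones listed in the document, everything else is an assumption.

```shell
# Locate the customizable keys before switching Ec2Mode to product.
# Note: Ec2Mode appears in several params modules, so edit only the module
# you intend to change (here, EC2_STATIC_SERVICES_PARAMS).
grep -n -E "Ec2Mode|Ec2InstanceTypeForStaticServices|Ec2VolumeSizeForStaticServicesNode|Ec2VolumnTypeForStaticServicesNode" kylin_configs.yaml
# Then, per the example above, set:
#   Ec2Mode: product
#   Ec2InstanceTypeForStaticServices: m5.4xlarge
#   Ec2VolumeSizeForStaticServicesNode: '50'
#   Ec2VolumnTypeForStaticServicesNode: standard
```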
diff --git a/readme/commands.md b/readme/commands.md
index 9aa0834..99e6306 100644
--- a/readme/commands.md
+++ b/readme/commands.md
@@ -3,7 +3,7 @@
Command:
```shell
-python ./deploy.py --type [deploy|destroy|list|scale] --scale-type [up|down]
--node-type [kylin|spark_worker] [--cluster {1..6}|all|default]
+python deploy.py --type [deploy|destroy|list|scale] --scale-type [up|down]
--node-type [kylin|spark_worker] [--cluster {1..6}|all|default]
```
- deploy: create cluster(s).
@@ -25,21 +25,21 @@ python ./deploy.py --type [deploy|destroy|list|scale]
--scale-type [up|down] --n
- Deploy default cluster
```shell
-$ python ./deploy.py --type deploy [--cluster default]
+$ python deploy.py --type deploy [--cluster default]
```
- Deploy a cluster with specific cluster index.
```shell
-$ python ./deploy.py --type deploy --cluster ${cluster num}
+$ python deploy.py --type deploy --cluster ${cluster num}
```
> Note: the `${cluster num}` must be in the range of `CLUSTER_INDEXES`.
- Deploy all clusters, which includes the default cluster and every cluster whose index is in the range of `CLUSTER_INDEXES`.
-```sHe
-$ python ./deploy.py --type deploy --cluster all
+```shell
+$ python deploy.py --type deploy --cluster all
```
### Command for destroy
@@ -51,13 +51,13 @@ $ python ./deploy.py --type deploy --cluster all
- Destroy default cluster
```shell
-$ python ./deploy.py --type destroy [--cluster default]
+$ python deploy.py --type destroy [--cluster default]
```
- Destroy a cluster with specific cluster index.
```shell
-$ python ./deploy.py --type destroy --cluster ${cluster num}
+$ python deploy.py --type destroy --cluster ${cluster num}
```
> Note: the `${cluster num}` must be in the range of `CLUSTER_INDEXES`.
@@ -65,7 +65,7 @@ $ python ./deploy.py --type destroy --cluster ${cluster num}
- Destroy all clusters, which includes the default cluster and every cluster whose index is in the range of `CLUSTER_INDEXES`.
```shell
-$ python ./deploy.py --type destroy --cluster all
+$ python deploy.py --type destroy --cluster all
```
### Command for list
@@ -73,7 +73,7 @@ $ python ./deploy.py --type destroy --cluster all
- List nodes with their **stack name**, **instance id**, **private ip** and **public ip** in **available stacks**.
```shell
-$ python ./deploy.py --type list
+$ python deploy.py --type list
```
### Command for scale
@@ -86,18 +86,18 @@ $ python ./deploy.py --type list
> 4. Scale params which are `KYLIN_SCALE_UP_NODES`, `KYLIN_SCALE_DOWN_NODES`, `SPARK_WORKER_SCALE_UP_NODES` and `SPARK_WORKER_SCALE_DOWN_NODES` take effect on all clusters. So if the user wants to scale nodes for a specific cluster, modify the scale params before **every run**.
> 5. **(Important!!!)** Current cluster is created with default `3` spark
> workers and `1` kylin node. The `3` spark workers can not be scaled down.
> The `1` kylin node also can not be scaled down.
> 6. **(Important!!!)** The cluster can only scale up or down the range of nodes which is in `KYLIN_SCALE_UP_NODES`, `KYLIN_SCALE_DOWN_NODES`, `SPARK_WORKER_SCALE_UP_NODES` and `SPARK_WORKER_SCALE_DOWN_NODES`, not the default `3` spark workers and `1` kylin node in the cluster.
-> 7. **(Important!!!)** If user don't want to create a cluster with `3`
default spark workers, then user can remove the useless node module in the
`Ec2InstanceOfSlave0*` of
`kylin-tpch/cloudformation_templates/ec2-cluster-spark-slave.yaml`. User need
to know about the syntax of cloudformation as also.
+> 7. **(Important!!!)** If the user doesn't want to create a cluster with the `3` default spark workers, the user can remove the useless node module `Ec2InstanceOfSlave0*` in `cloudformation_templates/ec2-cluster-spark-slave.yaml`. The user also needs to know the syntax of CloudFormation.
- Scale up/down kylin/spark workers in default cluster
```shell
-python ./deploy.py --type scale --scale-type up[|down] --node-type
kylin[|spark_worker] [--cluster default]
+python deploy.py --type scale --scale-type up[|down] --node-type
kylin[|spark_worker] [--cluster default]
```
- Scale up/down kylin/spark workers in a specific cluster
```shell
-python ./deploy.py --type scale --scale-type up[|down] --node-type
kylin[|spark_worker] --cluster ${cluster num}
+python deploy.py --type scale --scale-type up[|down] --node-type
kylin[|spark_worker] --cluster ${cluster num}
```
> Note: the `${cluster num}` must be in the range of `CLUSTER_INDEXES`.
\ No newline at end of file
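
To make the scale notes above concrete, here is a hedged sketch of one scale round; the cluster index `2` is illustrative and must fall inside `CLUSTER_INDEXES`.

```shell
# Adjust KYLIN_SCALE_*_NODES / SPARK_WORKER_SCALE_*_NODES in kylin_configs.yaml
# first, then run the scale command against the target cluster.
python deploy.py --type scale --scale-type up   --node-type spark_worker --cluster default
python deploy.py --type scale --scale-type down --node-type kylin        --cluster 2
python deploy.py --type list   # verify which nodes each cluster now has
```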
diff --git a/readme/monitor.md b/readme/monitor.md
index 361f369..65de0e4 100644
--- a/readme/monitor.md
+++ b/readme/monitor.md
@@ -2,7 +2,7 @@
The current tool also supports monitoring the cluster.
-1. User can execute `python ./deploy.py --type list` to get the `public ip` of
`Static Service` .
+1. User can execute `python deploy.py --type list` to get the `public ip` of `Static Service`.
2. User can access the `Prometheus` server at the `public ip` of `Static Service` on port `9090`.
3. User can access the `Grafana` server at the `public ip` of `Static Service` on port `3000`.
@@ -42,7 +42,7 @@ More details about `PromQL` for metric exploer, please check
[official website](

-> Just config the url with syntax `http://${private ip of static
service}:9090` . The `private ip of static service` can be from the `python
./deploy.py --type list` command.
+> Just configure the url with the syntax `http://${private ip of static service}:9090`. The `private ip of static service` can be obtained from the `python deploy.py --type list` command.
3. Add a new panel.
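
A quick way to verify the monitor endpoints described above; the IP is a placeholder taken from `python deploy.py --type list`, and `/-/healthy` and `/api/health` are the standard Prometheus and Grafana health endpoints rather than anything specific to this tool. Port `9100` is listed above for Prometheus and is conventionally the node exporter port.

```shell
STATIC_IP=1.2.3.4    # placeholder: public ip of the Static Service node
curl -s "http://${STATIC_IP}:9090/-/healthy"        # Prometheus server
curl -s "http://${STATIC_IP}:3000/api/health"       # Grafana server
curl -s "http://${STATIC_IP}:9100/metrics" | head   # node exporter metrics
```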
diff --git a/readme/prerequisites.md b/readme/prerequisites.md
index 35f4c88..5bcdac7 100644
--- a/readme/prerequisites.md
+++ b/readme/prerequisites.md
@@ -1,11 +1,11 @@
## Prerequisites
-### Download source code & checkout to branch of `deploy-kylin-on-aws`
+### Download source code & checkout the branch `kylin4_on_cloud`
commands:
```shell
-git clone https://github.com/Kyligence/kylin-tpch.git && cd kylin-tpch && git
checkout deploy-kylin-on-aws
+git clone https://github.com/apache/kylin.git && cd kylin && git checkout
kylin4_on_cloud
```
### Initialize an AWS Account
@@ -16,7 +16,7 @@ git clone https://github.com/Kyligence/kylin-tpch.git && cd
kylin-tpch && git ch
>
> The `IAM` role must have the access which contains `AmazonEC2RoleforSSM`, `AmazonSSMFullAccess` and `AmazonSSMManagedInstanceCore`.
>
-> This `IAM` Role will be used to initialize every ec2 instances which
are for creating an kylin4 cluster on aws. And it will configure in `Initilize
Env of Local Mac` part.
+> This `IAM` Role will be used to initialize every EC2 instance which is for creating a kylin4 cluster on AWS. And it will be configured in the `Initialize Env Of Local Machine` part.
#### II. Create a `User`
@@ -49,7 +49,7 @@ git clone https://github.com/Kyligence/kylin-tpch.git && cd
kylin-tpch && git ch
> Note:
>
-> Please download the generated the csv file of `Access Key`
immediately. Get the `Access Key ` and `Secret Key` to initilize local mac to
access aws.
+> Please download the generated csv file of the `Access Key` immediately. Get the `Access Key` and `Secret Key` to initialize the local machine to access AWS.

@@ -104,7 +104,7 @@ Example: make a directory named `kylin4-aws-test` . You can
also create a direct
-#### (Optional) III. Upload `kylin-tpch/backup/jars/*` to the S3 Path which
suffix is `*/jars`
+#### (Optional) III. Upload `backup/jars/*` to the S3 Path whose suffix is `*/jars`
> Note:
>
Extra jars needed by Kylin4:

-#### (Optional) IV. Upload `kylin-tpch/backup/scripts/*` to the S3 Path which
suffix is `*/scripts`
+#### (Optional) IV. Upload `backup/scripts/*` to the S3 Path whose suffix is `*/scripts`
> Note:
>
@@ -149,11 +149,11 @@ Scripts:

-### Initilize Env Of Local Mac
+### Initialize Env Of Local Machine
-#### I. Initilize an aws account on local mac to access AWS<a
name="localaws"></a>
+#### I. Initialize an aws account on the local machine to access AWS<a name="localaws"></a>
-> Use `Access Key` and `Secret Key ` above to Initilize a aws account on local
mac.
+> Use the `Access Key` and `Secret Key` above to initialize an aws account on the local machine.
```shell
$ aws configure
@@ -173,17 +173,17 @@ Default output format : json
> Note:
>
-> Make sure that your mac already has a Python which version is 3.6.6 or
later.
+> Make sure that your machine already has Python 3.6.6 or later.
commands:
```shell
-$ ./bin/init.sh
+$ bin/init.sh
```
-> Note: Follow the information after `./bin/init.sh` to activate the python
virtual env.
+> Note: Follow the information after `bin/init.sh` to activate the python
virtual env.
-#### III. Configure the `kylin-tpch/kylin_configs.yaml`
+#### III. Configure the `kylin_configs.yaml`
**Required parameters**:
@@ -200,10 +200,10 @@ $ ./bin/init.sh
#### IV. Configure the `kylin.properties` in `backup/properties`
directories.<a name="cluster"></a>
1. The `kylin.properties` is for starting kylin instance in the cluster.
-2. Default cluster will check the `kylin.properties` in the
`kylin-tpch/backup/properties/default`, and other specific cluster will check
the related num directory such as `1`, `2` and `3`.
-3. User need to create new dir for the cluster num in
`kylin-tpch/backup/properties`, and name it to the `${cluster num}`, such as
`1`, `2` ,`3` and so on. The range of cluster num must be in `CLUSTER_INDEXES`
which is configured in the `kylin-tpch/kylin_configs.yml`.
-4. Follow the `2.` step, copy the `kylin.properties.template` which is in
`kylin-tpch/backup/properties/templates` to the related `${cluster num} `
directories, and rename the template to `kylin.properties`.
-5. The range of cluster nums must match the the config `CLUSTER_INDEXES`, such
as `CLUSTER_INDEXES: (1, 3)` then the directories must be `1`, `2`,`3` in the
`kylin-tpch/backup/properties`.
+2. The default cluster will check the `kylin.properties` in `backup/properties/default`, and other specific clusters will check the related num directories such as `1`, `2` and `3`.
+3. The user needs to create a new dir for the cluster num in `backup/properties`, and name it `${cluster num}`, such as `1`, `2`, `3` and so on. The range of cluster num must be in `CLUSTER_INDEXES` which is configured in `kylin_configs.yaml`.
+4. Following step `2.`, copy the `kylin.properties.template` which is in `backup/properties/templates` to the related `${cluster num}` directories, and rename the template to `kylin.properties`.
+5. The range of cluster nums must match the config `CLUSTER_INDEXES`; for example, with `CLUSTER_INDEXES: (1, 3)` the directories must be `1`, `2`, `3` in `backup/properties`.

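
A short sketch of step IV above, assuming `CLUSTER_INDEXES` is set to `(1, 3)`; the paths are the ones named in this document.

```shell
# One directory per cluster index, each holding its own kylin.properties
# copied from the shared template.
for i in 1 2 3; do
  mkdir -p "backup/properties/${i}"
  cp backup/properties/templates/kylin.properties.template \
     "backup/properties/${i}/kylin.properties"
done
ls backup/properties   # expect: 1 2 3 default templates
```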
diff --git a/readme/quick_start_for_multiple_clusters.md
b/readme/quick_start_for_multiple_clusters.md
new file mode 100644
index 0000000..86ef780
--- /dev/null
+++ b/readme/quick_start_for_multiple_clusters.md
@@ -0,0 +1,25 @@
+## Quick Start For Multiple Clusters
+
+> The pre-steps are the same as Quick Start steps 1 to 5.
+
+1. Modify the config `CLUSTER_INDEXES` for multiple clusters.
+
+ > Note:
+ >
+ > 1. `CLUSTER_INDEXES` defines the range of cluster indexes that can be created.
+ > 2. If the user creates multiple clusters, the `default` cluster will always be created. If `CLUSTER_INDEXES` is (1, 3), there will be 4 clusters: clusters 1, 2, 3 and `default` will all be created when the user executes the commands.
+ > 3. Configs for multiple clusters are always the same as the `default` cluster, read from `kylin_configs.yaml`.
+
+2. Copy `kylin.properties.template` for every cluster expected to deploy, please check the [details](./prerequisites.md#cluster).
+
+3. Execute commands to deploy `all` clusters.
+
+ ```shell
+ python deploy.py --type deploy --cluster all
+ ```
+
+4. Destroy all clusters.
+
+ ```shell
+ python deploy.py --type destroy --cluster all
+ ```