This is an automated email from the ASF dual-hosted git repository.

xxyu pushed a commit to branch kylin4_on_cloud
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 7e59491b870c473631566b674e082ffb4ecc0442
Author: Mukvin <boyboys...@163.com>
AuthorDate: Wed Feb 9 12:29:12 2022 +0800

    # update README.md
---
 README.md                                   |  2 +-
 deploy.py                                   |  1 +
 readme/commands.md                          | 25 +++++++++++++------------
 readme/prerequisites.md                     | 14 +++++++-------
 readme/quick_start.md                       | 19 +++++++++----------
 readme/quick_start_for_multiple_clusters.md |  4 ++--
 readme/trouble_shooting.md                  | 20 ++++++++++----------
 7 files changed, 43 insertions(+), 42 deletions(-)

diff --git a/README.md b/README.md
index 6847bc2..c4704e4 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
 
 **Apache Kylin community** released Kylin 4.0 with a new architecture, which 
is dedicated to building a high-performance and low-cost OLAP engine. The 
architecture of Kylin 4.0 supports the separation of storage and computing, 
which enables Kylin users to run Kylin 4.0 by adopting a more flexible and 
elastically scalable cloud deployment method.
 
-For the best practices of Kylin4 on the cloud,  **Apache Kylin community 
contributes a **tool** to deploy kylin4 clusters on **AWS** cloud easily and 
conveniently.
+For the best practices of Kylin4 on the cloud, the Apache Kylin community contributes a **tool** to deploy kylin4 clusters on **AWS** cloud easily and conveniently.
 
 # Introduction About This Tool
 
diff --git a/deploy.py b/deploy.py
index 3fdb5e0..6085064 100644
--- a/deploy.py
+++ b/deploy.py
@@ -52,6 +52,7 @@ def deploy_on_aws(deploy_type: str, scale_type: str, 
node_type: str, cluster: st
         if not cluster or cluster == Cluster.DEFAULT.value:
             aws_engine.destroy_default_cluster()
             aws_engine.refresh_kylin_properties_in_default()
+            aws_engine.destroy_rds_and_vpc()
 
         if cluster and cluster.isdigit():
             aws_engine.destroy_cluster(cluster_num=int(cluster))
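
For context, the changed `destroy` branch above can be sketched in isolation: destroying the `default` cluster now also tears down the RDS and VPC stacks. The stub engine and the `Cluster` enum value `"default"` are assumptions for illustration; only the method names come from the hunk:

```python
from enum import Enum

class Cluster(Enum):
    DEFAULT = "default"  # assumed value of the real Cluster enum

class StubAwsEngine:
    """Stand-in for the real AWS engine; records calls instead of touching AWS."""
    def __init__(self):
        self.calls = []
    def destroy_default_cluster(self):
        self.calls.append("destroy_default_cluster")
    def refresh_kylin_properties_in_default(self):
        self.calls.append("refresh_kylin_properties_in_default")
    def destroy_rds_and_vpc(self):
        self.calls.append("destroy_rds_and_vpc")
    def destroy_cluster(self, cluster_num: int):
        self.calls.append(f"destroy_cluster:{cluster_num}")

def destroy_dispatch(aws_engine, cluster: str) -> None:
    # Mirrors the patched branch: an empty/None or "default" cluster argument
    # destroys the default cluster and, with this commit, the RDS and VPC too;
    # a numeric argument destroys only that numbered cluster.
    if not cluster or cluster == Cluster.DEFAULT.value:
        aws_engine.destroy_default_cluster()
        aws_engine.refresh_kylin_properties_in_default()
        aws_engine.destroy_rds_and_vpc()
    if cluster and cluster.isdigit():
        aws_engine.destroy_cluster(cluster_num=int(cluster))
```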
diff --git a/readme/commands.md b/readme/commands.md
index 99821ec..ab55e32 100644
--- a/readme/commands.md
+++ b/readme/commands.md
@@ -18,11 +18,12 @@ python deploy.py --type [deploy|destroy|list|scale] 
--scale-type [up|down] --nod
   >
   > 1. Current support to scale up/down `kylin` or `spark_worker` for a 
specific cluster.
   > 2. Before scaling up/down `kylin` or `spark_worker` nodes, Cluster 
services must be ready.
-  > 3. If you want to scale a `kylin` or `spark_worker` node to a specified 
cluster, please add the `--cluster ${cluster num}` to specify the expected node 
add to the cluster `${cluster num}`.
+  > 3. If you want to scale a `kylin` or `spark_worker` node in a specific cluster, please add `--cluster ${cluster ID}` to specify that the new node is added to cluster `${cluster ID}`.
+  > 4. For details about the index of a cluster, please check [Indexes of clusters](./prerequisites.md#indexofcluster).
 
 ### Command for deploy
 
-- Deploy default cluster
+- Deploy the default cluster
 
 ```shell
 $ python deploy.py --type deploy [--cluster default]
@@ -31,10 +32,10 @@ $ python deploy.py --type deploy [--cluster default]
 - Deploy a cluster with a specific cluster index. <a name="deploycluster"></a>
 
 ```shell
-$ python deploy.py --type deploy --cluster ${cluster num}
+$ python deploy.py --type deploy --cluster ${cluster ID}
 ```
 
-> Note: the `${cluster num}` must be in the range of `CLUSTER_INDEXES`.
+> Note: the `${cluster ID}` must be in the range of `CLUSTER_INDEXES`.
 
 - Deploy all clusters which contain the default cluster and all clusters whose 
index is in the range of `CLUSTER_INDEXES`.
 
@@ -48,7 +49,7 @@ $ python deploy.py --type deploy --cluster all
 >
 > ​            Destroy all clusters will not delete vpc, rds, and monitor 
 > node. So if user doesn't want to hold the env, please set the 
 > `ALWAYS_DESTROY_ALL` to be `'true'`.
 
-- Destroy default cluster
+- Destroy the default cluster
 
 ```shell
 $ python deploy.py --type destroy [--cluster default]
@@ -57,10 +58,10 @@ $ python deploy.py --type destroy [--cluster default]
 - Destroy a cluster with a specific cluster index. 
 
 ```shell
-$ python deploy.py --type destroy --cluster ${cluster num}
+$ python deploy.py --type destroy --cluster ${cluster ID}
 ```
 
-> Note: the `${cluster num}` must be in the range of `CLUSTER_INDEXES`.
+> Note: the `${cluster ID}` must be in the range of `CLUSTER_INDEXES`.
 
 - Destroy all clusters which contain the default cluster and all clusters 
whose index is in the range of `CLUSTER_INDEXES`.
 
@@ -81,14 +82,14 @@ $ python deploy.py --type list
 > Note:
 >
 > 1. Scale command must be used with `--scale-type` and `--node-type`.
-> 2. If the scale command does not specify a cluster num, then the scaled 
node(Kylin or spark worker) will be added to the `default` cluster.
+> 2. If the scale command does not specify a `cluster ID`, then the scaled node (Kylin or Spark worker) will be added to the `default` cluster.
 > 3. Scale command **not support** to **scale** node (kylin or spark worker) 
 > to **all clusters** at **one time**. It means that `python ./deploy.py 
 > --type scale --scale-type up[|down] --node-type kylin[|spark_worker] 
 > --cluster all` is invalid commad.
 > 4. Scale params which are `KYLIN_SCALE_UP_NODES`, `KYLIN_SCALE_DOWN_NODES`, 
 > `SPARK_WORKER_SCALE_UP_NODES` and `SPARK_WORKER_SCALE_DOWN_NODES` effect on 
 > all cluster. So if user wants to scale a node for a specific cluster, then 
 > modify the scale params before **every run time.**
 > 5. **(Important!!!)** The current cluster is created with default `3` spark 
 > workers and `1` Kylin node. The `3` spark workers can not be scaled down. 
 > The `1`  Kylin node also can not be scaled down.
-> 6. **(Important!!!)** The current cluster can only scale up or down the 
range of nodes which is in  `KYLIN_SCALE_UP_NODES`, `KYLIN_SCALE_DOWN_NODES`, 
`SPARK_WORKER_SCALE_UP_NODES,` and `SPARK_WORKER_SCALE_DOWN_NODES`. Not the 
default `3` spark workers and `1` kylin node in the cluster.
+> 6. **(Important!!!)** The current cluster can only scale up or down the nodes listed in `KYLIN_SCALE_UP_NODES`, `KYLIN_SCALE_DOWN_NODES`, `SPARK_WORKER_SCALE_UP_NODES`, and `SPARK_WORKER_SCALE_DOWN_NODES`, not the default `3` Spark workers and `1` Kylin node in a cluster.
 > 7. **(Important!!!)**  If user doesn't want to create a cluster with `3` 
 > default spark workers, then user can remove the useless node module in the 
 > `Ec2InstanceOfSlave0*` of 
 > `cloudformation_templates/ec2-cluster-spark-slave.yaml`. User needs to know 
 > about the syntax of `cloudformation` as also.
 
-- Scale up/down Kylin/spark workers in default cluster
+- Scale up/down Kylin/spark workers in the default cluster
 
 ```shell
 python deploy.py --type scale --scale-type up[|down] --node-type 
kylin[|spark_worker] [--cluster default]
@@ -97,7 +98,7 @@ python deploy.py --type scale --scale-type up[|down] 
--node-type kylin[|spark_wo
 - Scale up/down kylin/spark workers in a specific cluster
 
 ```shell
-python deploy.py --type scale --scale-type up[|down] --node-type 
kylin[|spark_worker] --cluster ${cluster num}
+python deploy.py --type scale --scale-type up[|down] --node-type 
kylin[|spark_worker] --cluster ${cluster ID}
 ```
 
-> Note: the `${cluster num}` must be in the range of `CLUSTER_INDEXES`.
\ No newline at end of file
+> Note: the `${cluster ID}` must be in the range of `CLUSTER_INDEXES`.
\ No newline at end of file
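
The note repeated throughout `commands.md` — that `${cluster ID}` must be in the range of `CLUSTER_INDEXES` — can be sketched as a small validation helper. The helper name and the example `CLUSTER_INDEXES` value are hypothetical, not part of the tool:

```python
CLUSTER_INDEXES = (1, 3)  # example value, as used in the docs

def is_valid_cluster_id(cluster: str) -> bool:
    """Check a --cluster argument the way the docs describe:
    'default' and 'all' are special names; a numeric ID must fall
    inside the inclusive CLUSTER_INDEXES range. (Hypothetical helper.)"""
    if cluster in ("default", "all"):
        return True
    if cluster.isdigit():
        low, high = CLUSTER_INDEXES
        return low <= int(cluster) <= high
    return False
```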
diff --git a/readme/prerequisites.md b/readme/prerequisites.md
index 97ccdf9..361077d 100644
--- a/readme/prerequisites.md
+++ b/readme/prerequisites.md
@@ -75,7 +75,7 @@ git clone https://github.com/apache/kylin.git && cd kylin && 
git checkout kylin4
 
 > Note: 
 >
-> ​    Please download the generated CSV file of `Access Key` immediately. Get 
the `Access Key` and `Secret Key` to initialize local mac to access aws.
+> ​    Please download the generated CSV file of the `Access Key` immediately. Use the `Access Key` and `Secret Key` to initialize the local machine to access AWS.
 
 ![Access Key](../images/accesskey.png)
 
@@ -186,9 +186,9 @@ Scripts:
 
 ### Initialize Env Of Local Machine
 
-#### I. Initilize an aws account on local mac to access AWS<a 
name="localaws"></a>
+#### I. Initialize an AWS account on the local machine to access AWS<a name="localaws"></a>
 
-> Use `Access Key` and `Secret Key` above to Initialize an AWS account on a 
local mac. 
+> Use the `Access Key` and `Secret Key` above to initialize an AWS account on the local machine.
 
 ```shell
 $ aws configure
@@ -241,7 +241,7 @@ $ bin/init.sh
 >    1. User can modify the `CLUSTER_INDEXES` to be `(1, 3)`, then as 
 > following steps of this module to modify the `kylin.properties` file for 
 > clusters.
 >    2. User can mark the Kylin node of the `default` cluster to be `query` 
 > mode and the Kylin node of the cluster whose index is `1` to be `query` 
 > mode.  User can mark the Kylin node of the cluster which index is `2` to be 
 > `job` mode and the Kylin node of the cluster which index is `3` to job mode.
 >    3. User can also modify the `CLUSTER_INDEXES` to be `(1, 4)`, and mark 
 > the Kylin node of clusters whose index is `1` and `2` to be `query` mode and 
 > the Kylin node of clusters whose index is `3` and `4` to be `job` mode. Just 
 > don't use `default` to mark a cluster and execute exactly deploy cluster 
 > commands. For details about commands, please check to [deploy multiple 
 > clusters](./Commands.md#deploycluster).
-> 4. The list of mark name for clusters will be [`default`, `{cluster num}` 
...] and `{cluster num}` is in the range of `CLUSTER_INDEXES`.
+> 4. The list of mark names for clusters will be [`default`, `{cluster ID}` ...], where `{cluster ID}` is in the range of `CLUSTER_INDEXES`.
 > 5. For example, if `CLUSTER_INDEXES` is (1, 3) means that tool can deploy a 
 > cluster and it can be marked as `default` or `1` or `2` or `3`.  And tool 
 > can execute to deploy total 1(`default`)  + 3(clusters which mark name can 
 > be `1`, `2`, `3`) = 4 clusters.
 > 6. Every cluster contains 3 `Zookeepers Node`, 1 `Kylin Node`, 1 `Spark 
 > Master Node,` and 3 `Spark Slaves Node` after deployed and it can scale 
 > needed nodes of Kylin and Spark workers. 
 > 7. **The difference between clusters only can be the index or customized 
 > configs of EC2 instances or customized properties of Kylin (and spark and 
 > zookeeper).**
@@ -250,9 +250,9 @@ $ bin/init.sh
 
 1. The `kylin.properties` is for starting kylin instance in the cluster. 
 2. The default cluster will check the `kylin.properties` in the 
`backup/properties/default`, and other specific clusters will check the related 
num directory such as `1`, `2,` and `3`.
-3. User needs to create a new dir for the cluster num in `backup/properties`, 
and name it to the `${cluster num}`, such as `1`, `2` ,`3` and so on. 
-4. Following the `2.` step, copy the `kylin.properties.template` which is in 
`backup/properties/templates` to the related `${cluster num} ` directories, and 
rename the template to `kylin.properties`. 
-5. The range of cluster nums must match the config `CLUSTER_INDEXES`, such as 
`CLUSTER_INDEXES: (1, 3)` then the directories must be `1`, `2`,`3` in the 
`backup/properties`.
+3. User needs to create a new directory for each cluster ID in `backup/properties` and name it `${cluster ID}`, such as `1`, `2`, `3`, and so on.
+4. After creating the directories, copy the `kylin.properties.template` from `backup/properties/templates` to the related `${cluster ID}` directories, and rename the template to `kylin.properties`.
+5. The range of cluster IDs must match the config `CLUSTER_INDEXES`; for example, if `CLUSTER_INDEXES` is `(1, 3)`, the directories in `backup/properties` must be `1`, `2`, and `3`.
 
 ![kylin properties](../images/kylinproperties.png)
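
Steps 3-5 above can be sketched as a small helper. The function name is hypothetical; the directory layout (`backup/properties/templates/kylin.properties.template` and `backup/properties/${cluster ID}`) follows the steps:

```python
import shutil
from pathlib import Path

def prepare_cluster_properties(base: Path, cluster_indexes: tuple) -> None:
    """Illustrative sketch of steps 3-5: for every cluster ID in the
    inclusive CLUSTER_INDEXES range, create backup/properties/${cluster ID}
    and copy the template there as kylin.properties."""
    template = base / "properties" / "templates" / "kylin.properties.template"
    low, high = cluster_indexes
    for cluster_id in range(low, high + 1):
        target_dir = base / "properties" / str(cluster_id)
        target_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy(template, target_dir / "kylin.properties")
```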
 
diff --git a/readme/quick_start.md b/readme/quick_start.md
index 76df90e..3c44d55 100644
--- a/readme/quick_start.md
+++ b/readme/quick_start.md
@@ -3,8 +3,6 @@
 ![sketch map](../images/sketch.png)
 
 - **Services are created as the number order from 1 to 4.**
-- **A cluster will be easily created as same as the image of architecture 
above.**
-
 
 
 ## Quick Start
@@ -43,18 +41,18 @@ $ bin/init.sh
 
 > Note: Following the information into a python virtual env and get the help 
 > messages. 
 
-5. Execute commands to deploy a `default` cluster.
+5. Execute commands to deploy a cluster quickly.
 
 ```shell
 $ python deploy.py --type deploy
 ```
 
-After the `default` cluster is ready, you will see the message `Kylin Cluster 
already start successfully.` in the console. 
+After this cluster is ready, you will see the message `Kylin Cluster already 
start successfully.` in the console. 
 
 >  Note: 
 >
-> 1. For details about the properties of kylin4 in a cluster, please check 
[configure kylin.properties](./prerequisites.md#cluster).
-> 2. For details about the index of the cluster,  please check [Indexes of 
clusters](./prerequisites.md#indexofcluster).
+> 1. For more details about the properties of kylin4 in a cluster, please check [configure kylin.properties](./prerequisites.md#cluster).
+> 2. For more details about the index of a cluster, please check [Indexes of clusters](./prerequisites.md#indexofcluster).
 
 6. Execute commands to list nodes of the cluster.
 
@@ -68,13 +66,14 @@ You can access `Kylin` web by `http://{kylin public 
ip}:7070/kylin`.
 
 ![kylin login](../images/kylinlogin.png)
 
-7. Destroy the `default` cluster.
+7. Destroy the cluster quickly.
 
 ```shell
 $ python deploy.py --type destroy
 ```
 
-
-
-> Note: If you want to check about a quick start for multiple clusters, please 
referer to a [quick start for multiple 
clusters](./quick_start_for_multiple_clusters.md).
+> Note:
+>
+> 1. For a quick start for multiple clusters, please refer to the [quick start for multiple clusters](./quick_start_for_multiple_clusters.md).
+> 2. **The current destroy operation keeps some stacks, such as the one containing `RDS`.** So if the user wants to destroy everything, please set `ALWAYS_DESTROY_ALL` in `kylin_configs.yml` to `true` and re-execute the `destroy` command.
 
diff --git a/readme/quick_start_for_multiple_clusters.md 
b/readme/quick_start_for_multiple_clusters.md
index 1b5d50b..14ee9a7 100644
--- a/readme/quick_start_for_multiple_clusters.md
+++ b/readme/quick_start_for_multiple_clusters.md
@@ -7,8 +7,8 @@
    > Note:
    >
    > 1. `CLUSTER_INDEXES` means that cluster index is in the range of 
`CLUSTER_INDEXES`. 
-   > 2. If a user creates multiple clusters, the `default` cluster always is 
created. If `CLUSTER_INDEXES` is (1, 3), there will be 4 cluster that contains 
cluster 1, 2, 3, and `default` will be created if a user executes the commands.
-   > 3. Configs for multiple clusters always are the same as the `default` 
cluster to read from `kylin_configs.yaml`
+   > 2. Configs for multiple clusters are also read from `kylin_configs.yaml`.
+   > 3. For more details about the index of a cluster, please check [Indexes of clusters](./prerequisites.md#indexofcluster).
 
 2. Copy `kylin.properties.template` for expecting clusters to deploy, please 
check the [details](./prerequisites.md#cluster). 
 
diff --git a/readme/trouble_shooting.md b/readme/trouble_shooting.md
index 4f2f053..1cc3d2c 100644
--- a/readme/trouble_shooting.md
+++ b/readme/trouble_shooting.md
@@ -14,9 +14,9 @@ A:
 
 ### The stack of related services is normal, but services aren't started 
normally.
 
-#### `Kylin` starts failing.
+#### The `Kylin` web UI can not be accessed.
 
-Q: `Kylin` starts failing.
+Q: The `Kylin` web UI can not be accessed.
 
 A: 
 
@@ -29,9 +29,9 @@ A:
 
 
 
-#### `Prometheus` starts failing.
+#### `Prometheus` can not be accessed.
 
-Q: `Prometheus` starts failing.
+Q: `Prometheus` can not be accessed.
 
 A:
 
@@ -44,9 +44,9 @@ A:
 
 
 
-#### `Granfana` starts failing.
+#### `Grafana` can not be accessed.
 
-Q: `Granfana` starts failing.
+Q: `Grafana` can not be accessed.
 
 A:
 
@@ -60,9 +60,9 @@ A:
 
 
 
-#### `Spark` starts failing.
+#### `Spark` can not be accessed.
 
-Q: `Spark` starts failing.
+Q: `Spark` can not be accessed.
 
 A:
 
@@ -75,9 +75,9 @@ A:
 
 
 
-#### `Kylin starts` failing because can not connect to the Zookeeper.
+#### `Kylin` can not be accessed and the exception "Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect." appears in kylin.log.
 
-Q: `Kylin starts` failing because can not connect to the Zookeeper.
+Q: `Kylin` can not be accessed and the exception "Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect." appears in kylin.log.
 
 A: 
 
