This is an automated email from the ASF dual-hosted git repository.

xxyu pushed a commit to branch doc5.0
in repository https://gitbox.apache.org/repos/asf/kylin.git


The following commit(s) were added to refs/heads/doc5.0 by this push:
     new e8b9bdf068 Minor remove useless image
e8b9bdf068 is described below

commit e8b9bdf068714ff6d1633a75f20ab97cec8aa371
Author: Mukvin <boyboys...@163.com>
AuthorDate: Fri Nov 25 11:16:20 2022 +0800

    Minor remove useless image
---
 website/docs/modeling/data_modeling.md | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/website/docs/modeling/data_modeling.md b/website/docs/modeling/data_modeling.md
index 955c0368fe..a36d76c58d 100755
--- a/website/docs/modeling/data_modeling.md
+++ b/website/docs/modeling/data_modeling.md
@@ -24,10 +24,7 @@ If it takes 1 minute to query 100 million entries of data records, querying 10 b
 
 ### Accelerate query with Kylin pre-computation
 
-Kylin leverages pre-computation to avoid the computing pressure brought by the growing data volume. That is, Kylin will precompute the combinations of defined model dimensions and then store the aggregated results as indexes to shorten query latency. In addition, Kylin uses parallel computing and columnar storage techniques to improve computing and storage speed.  
-
-![Reduce IO](images/reduceio.png)
-
+Kylin leverages pre-computation to avoid the computing pressure brought by the growing data volume. That is, Kylin will precompute the combinations of defined model dimensions and then store the aggregated results as indexes to shorten query latency. In addition, Kylin uses parallel computing and columnar storage techniques to improve computing and storage speed.
 
 With pre-computation, the number of indexes will be determined by the dimension cardinality only, and will no longer undergo exponential growth as data volume increases. Taking the data analysis of online transactions as an example, with Kylin pre-computation, even if the volume of transaction data increases by 10 times, the query speed against the same analytical dimensions changes little. The computing time complexity can be kept at O(1), helping enterprises to analyze data more efficiently.
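
For illustration, here is a minimal Java sketch of the pre-computation idea described in the paragraph above, under simplified assumptions: a toy in-memory fact table, two dimensions, and a single SUM measure. All class, field, and method names are hypothetical and are not part of Kylin's code base. The point is only that the number of pre-built aggregates depends on the dimension combinations (at most 2^n for n dimensions), not on the row count, so a grouped query becomes a lookup instead of a scan.

import java.util.*;

// Illustrative sketch only (not Kylin's implementation): pre-compute aggregates
// for every combination of dimensions, then answer queries by index lookup.
public class PrecomputeSketch {

    // A fact row: its dimension values plus one measure to aggregate.
    record Row(Map<String, String> dims, long measure) {}

    public static void main(String[] args) {
        List<String> dimensions = List.of("country", "category");
        List<Row> factTable = List.of(
            new Row(Map.of("country", "US", "category", "book"), 10),
            new Row(Map.of("country", "US", "category", "toy"), 5),
            new Row(Map.of("country", "CN", "category", "book"), 7));

        // One "index" per dimension combination: at most 2^n of them,
        // no matter how many rows the fact table holds.
        Map<Set<String>, Map<List<String>, Long>> indexes = new HashMap<>();
        for (Set<String> combo : allCombinations(dimensions)) {
            Map<List<String>, Long> agg = new HashMap<>();
            for (Row row : factTable) {
                List<String> key = combo.stream().sorted()
                        .map(d -> row.dims().get(d)).toList();
                agg.merge(key, row.measure(), Long::sum);
            }
            indexes.put(combo, agg);
        }

        // Query time: SUM(measure) GROUP BY country is a single map lookup.
        System.out.println(indexes.get(Set.of("country")));
        // prints something like {[US]=15, [CN]=7}
    }

    // Enumerate every subset of the dimension list (the power set).
    static List<Set<String>> allCombinations(List<String> dims) {
        List<Set<String>> result = new ArrayList<>();
        int n = dims.size();
        for (int mask = 0; mask < (1 << n); mask++) {
            Set<String> combo = new HashSet<>();
            for (int i = 0; i < n; i++) {
                if ((mask & (1 << i)) != 0) combo.add(dims.get(i));
            }
            result.add(combo);
        }
        return result;
    }
}

In the real system the building and storage of these indexes rely on the parallel computing and columnar storage techniques mentioned in the documentation above, but the lookup-instead-of-scan principle is the same.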
 
