Modified: kylin/site/feed.xml URL: http://svn.apache.org/viewvc/kylin/site/feed.xml?rev=1901905&r1=1901904&r2=1901905&view=diff ============================================================================== --- kylin/site/feed.xml (original)
+++ kylin/site/feed.xml Tue Jun 14 14:08:04 2022 @@ -19,8 +19,8 @@ <description>Apache Kylin Home</description> <link>http://kylin.apache.org/</link> <atom:link href="http://kylin.apache.org/feed.xml" rel="self" type="application/rss+xml"/> - <pubDate>Fri, 13 May 2022 06:59:22 -0700</pubDate> - <lastBuildDate>Fri, 13 May 2022 06:59:22 -0700</lastBuildDate>
+ <pubDate>Tue, 14 Jun 2022 06:59:09 -0700</pubDate>
+ <lastBuildDate>Tue, 14 Jun 2022 06:59:09 -0700</lastBuildDate> <generator>Jekyll v2.5.3</generator> <item> @@ -298,6 +298,317 @@ FROM [covid_trip_dataset] </item> <item>
+ <title>Kylin on Cloud – Build A Data Analysis Platform on the Cloud in Two Hours Part 2</title>
+ <description><p>This is the second part of the blog series; for part 1, see: <a href="../kylin4-on-cloud-part1/">Kylin on Cloud – Build A Data Analysis Platform on the Cloud in Two Hours Part 1</a></p>
+
+<h3 id="video-tutorials">Video Tutorials</h3>
+
+<p><a href="https://youtu.be/LPHxqZ-au4w">Kylin on Cloud – Build A Data Analysis Platform on the Cloud in Two Hours Part 2</a></p>
+
+<h3 id="kylin-query-cluster">Kylin query cluster</h3>
+
+<h4 id="start-kylin-query-cluster">Start Kylin query cluster</h4>
+
+<ol>
+ <li>
+ <p>In addition to the <code class="highlighter-rouge">kylin_configs.yaml</code> settings used for the build cluster, we will also enable MDX with the configuration below:</p>
+
+ <div class="highlighter-rouge"><pre class="highlight"><code>ENABLE_MDX: &amp;ENABLE_MDX 'true'
+</code></pre>
+ </div>
+ </li>
+ <li>
+ <p>Then execute the deploy command to start the cluster:</p>
+
+ <div class="highlighter-rouge"><pre class="highlight"><code>python deploy.py --type deploy --mode query
+</code></pre>
+ </div>
+ </li>
+</ol>
+
+<h4 id="query-with-kylin">Query
with Kylin</h4>
+
+<ol>
+ <li>
+ <p>After the query cluster is successfully started, first execute <code class="highlighter-rouge">python deploy.py --type list</code> to get all node information, and then enter <code class="highlighter-rouge">http://${kylin_node_public_ip}:7070/kylin</code> in your browser to log in to the Kylin web UI:</p>
+
+ <p><img src="/images/blog/kylin4_on_cloud/14_kylin_web_ui.png" alt="" /></p>
+ </li>
+ <li>
+ <p>Execute the same SQL on the Insight page as we did with Spark SQL:</p>
+
+ <div class="highlighter-rouge"><pre class="highlight"><code>select TAXI_TRIP_RECORDS_VIEW.PICKUP_DATE, NEWYORK_ZONE.BOROUGH, count(*), sum(TAXI_TRIP_RECORDS_VIEW.TRIP_TIME_HOUR), sum(TAXI_TRIP_RECORDS_VIEW.TOTAL_AMOUNT)
+from TAXI_TRIP_RECORDS_VIEW
+left join NEWYORK_ZONE
+on TAXI_TRIP_RECORDS_VIEW.PULOCATIONID = NEWYORK_ZONE.LOCATIONID
+group by TAXI_TRIP_RECORDS_VIEW.PICKUP_DATE, NEWYORK_ZONE.BOROUGH;
+</code></pre>
+ </div>
+
+ <p><img src="/images/blog/kylin4_on_cloud/15_query_in_kylin.png" alt="" /></p>
+ </li>
+</ol>
+
+<p>As we can see, when the query hits the cube, that is, when the query is answered directly by the pre-computed data, the result is returned in about 4s, a great reduction from the previous query latency of over 100s.</p>
+
+<h3 id="pre-computation-reduces-query-cost">Pre-computation reduces query cost</h3>
+
+<p>In this test, we used the New York taxi order data, whose fact table contains more than 200 million rows. As we can see from the result, Kylin has significantly improved the query efficiency in this big data analysis scenario with hundreds of millions of data entries.
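<p>The speedup comes from answering the aggregation out of results computed once at build time, instead of scanning the raw table per query. As a rough illustration only (toy rows in plain Python, not Kylin's actual storage), the group-by above reduces to lookups in a pre-aggregated "cuboid":</p>

```python
from collections import defaultdict

# Toy trip records: (pickup_date, borough, trip_time_hour, total_amount).
# The rows are made up for illustration.
trips = [
    ("2020-03-01", "Manhattan", 0.25, 12.5),
    ("2020-03-01", "Manhattan", 0.5, 20.0),
    ("2020-03-01", "Queens", 0.75, 31.0),
    ("2020-03-02", "Queens", 1.0, 40.0),
]

# Build step (done once): pre-aggregate per (pickup_date, borough),
# storing count, sum(trip_time_hour), sum(total_amount).
cuboid = defaultdict(lambda: [0, 0.0, 0.0])
for date, borough, hours, amount in trips:
    cell = cuboid[(date, borough)]
    cell[0] += 1
    cell[1] += hours
    cell[2] += amount

# Query step: the group-by is now a read over a handful of cells,
# not a scan over 200+ million raw rows.
result = {key: tuple(cell) for key, cell in cuboid.items()}
print(result[("2020-03-01", "Manhattan")])  # (2, 0.75, 32.5)
```

<p>Kylin builds such aggregates with Spark and stores them in cube segments; the sketch only mirrors the build-once, query-many idea.</p>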
Moreover, the built data can be reused to answer thousands of subsequent queries, thereby reducing query cost.</p>
+
+<h3 id="configure-semantic-layer">Configure semantic layer</h3>
+
+<h4 id="import-dataset-into-mdx-for-kylin">Import Dataset into MDX for Kylin</h4>
+
+<p>With <code class="highlighter-rouge">MDX for Kylin</code>, you can create a <code class="highlighter-rouge">Dataset</code> based on the Kylin Cube, define Cube relations, and create business metrics. To make it easy for beginners, you can directly download the Dataset file from S3 and import it into <code class="highlighter-rouge">MDX for Kylin</code>:</p>
+
+<ol>
+ <li>
+ <p>Download the dataset to your local machine from S3.</p>
+
+ <div class="highlighter-rouge"><pre class="highlight"><code>wget https://s3.cn-north-1.amazonaws.com.cn/public.kyligence.io/kylin/kylin_demo/covid_trip_project_covid_trip_dataset.json
+</code></pre>
+ </div>
+ </li>
+ <li>
+ <p>Access <code class="highlighter-rouge">MDX for Kylin</code> web UI</p>
+
+ <p>Enter <code class="highlighter-rouge">http://${kylin_node_public_ip}:7080</code> in your browser to access the <code class="highlighter-rouge">MDX for Kylin</code> web UI and log in with the default username and password <code class="highlighter-rouge">ADMIN/KYLIN</code>:</p>
+
+ <p><img src="/images/blog/kylin4_on_cloud/16_mdx_web_ui.png" alt="" /></p>
+ </li>
+ <li>
+ <p>Confirm Kylin connection</p>
+
+ <p><code class="highlighter-rouge">MDX for Kylin</code> is already configured with the information of the Kylin node to be connected.
You only need to type in the username and password (<code class="highlighter-rouge">ADMIN/KYLIN</code>) for the Kylin node when logging in for the first time.</p>
+
+ <p><img src="/images/blog/kylin4_on_cloud/17_connect_to_kylin.png" alt="" /></p>
+
+ <p><img src="/images/blog/kylin4_on_cloud/18_exit_management.png" alt="" /></p>
+ </li>
+ <li>
+ <p>Import Dataset</p>
+
+ <p>After Kylin is successfully connected, click the icon in the upper right corner to exit the management page:</p>
+
+ <p><img src="/images/blog/kylin4_on_cloud/19_kylin_running.png" alt="" /></p>
+
+ <p>Switch to the <code class="highlighter-rouge">covid_trip_project</code> project and click <code class="highlighter-rouge">Import Dataset</code> on the <code class="highlighter-rouge">Dataset</code> page:</p>
+
+ <p><img src="/images/blog/kylin4_on_cloud/20_import_dataset.png" alt="" /></p>
+
+ <p>Select and import the <code class="highlighter-rouge">covid_trip_project_covid_trip_dataset.json</code> file we just downloaded from S3.</p>
+
+ <p><code class="highlighter-rouge">covid_trip_dataset</code> contains specific dimensions and measures for each atomic metric, such as YTD, MTD, annual growth, monthly growth, time hierarchy, and regional hierarchy, as well as various business metrics including the COVID-19 death rate, the average speed of taxi trips, etc.
For more information on how to manually create a dataset, see Create dataset in <code class="highlighter-rouge">MDX for Kylin</code> or the <a href="https://kyligence.github.io/mdx-kylin/">MDX for Kylin User Manual</a>.</p>
+ </li>
+</ol>
+
+<h2 id="data-analysis-with-bi-and-excel">Data analysis with BI and Excel</h2>
+
+<h3 id="data-analysis-using-tableau">Data analysis using Tableau</h3>
+
+<p>Let's take Tableau installed on a local Windows machine as an example to connect to MDX for Kylin for data analysis.</p>
+
+<ol>
+ <li>
+ <p>Select Tableau's built-in <code class="highlighter-rouge">Microsoft Analysis Service</code> to connect to <code class="highlighter-rouge">MDX for Kylin</code>. (Note: Please install the <a href="https://www.tableau.com/support/drivers?_ga=2.104833284.564621013.1647953885-1839825424.1608198275"><code class="highlighter-rouge">Microsoft Analysis Services</code> driver</a> in advance, which can be downloaded from Tableau).</p>
+
+ <p><img src="/images/blog/kylin4_on_cloud/21_tableau_connect.png" alt="" /></p>
+ </li>
+ <li>
+ <p>On the pop-up settings page, enter the <code class="highlighter-rouge">MDX for Kylin</code> server address, username, and password. The server address is <code class="highlighter-rouge">http://${kylin_node_public_ip}:7080/mdx/xmla/covid_trip_project</code>:</p>
+
+ <p><img src="/images/blog/kylin4_on_cloud/22_tableau_server.png" alt="" /></p>
+ </li>
+ <li>
+ <p>Select covid_trip_dataset as the dataset:</p>
+
+ <p><img src="/images/blog/kylin4_on_cloud/23_tableau_dataset.png" alt="" /></p>
+ </li>
+ <li>
+ <p>Then we can run data analysis with the worksheet.
Since we have defined the business metrics with <code class="highlighter-rouge">MDX for Kylin</code>, when we want to generate a business report with Tableau, we can directly drag the pre-defined business metrics into the worksheet to create a report.</p>
+ </li>
+ <li>
+ <p>First, we will analyze the pandemic data and draw the national-level pandemic map with the number of confirmed cases and the mortality rate. We only need to drag and drop <code class="highlighter-rouge">COUNTRY_SHORT_NAME</code> under <code class="highlighter-rouge">REGION_HIERARCHY</code> to the Columns field, drag and drop <code class="highlighter-rouge">SUM_NEW_POSITIVE_CASES</code> and <code class="highlighter-rouge">CFR_COVID19</code> (fatality rate) under Measures to the Rows field, and then select to display the data results as a map:</p>
+
+ <p><img src="/images/blog/kylin4_on_cloud/24_tableau_covid19_map.png" alt="" /></p>
+
+ <p>The size of the symbols represents the number of confirmed COVID-19 cases and the shade of the color represents the level of the mortality rate. According to the pandemic map, the United States and India have more confirmed cases, but the mortality rates in the two countries are not significantly different from the other countries. However, countries with much fewer confirmed cases, such as Peru, Vanuatu, and Mexico, have persistently high death rates.
You can continue to explore the reasons behind this if you are interested.</p> + + <p>Since we have set up a regional hierarchy, we can break down the country-level situation to the provincial/state level to see the pandemic situation in different regions of each country:</p> + + <p><img src="/images/blog/kylin4_on_cloud/25_tableau_province.png" alt="" /></p> + + <p>Zoom in on the COVID map to see the status in each state of the United States:</p> + + <p><img src="/images/blog/kylin4_on_cloud/26_tableau_us_covid19.png" alt="" /></p> + + <p>It can be concluded that there is no significant difference in the mortality rate in each state of the United States, which is around 0.01. In terms of the number of confirmed cases, it is significantly higher in California, Texas, Florida, and New York City. These regions are economically developed and have a large population. This might be the reason behind the higher number of confirmed COVID-19 cases. In the following part, we will combine the pandemic data with the New York taxi dataset to analyze the impact of the pandemic on the New York Taxi industry.</p> + </li> + <li> + <p>For the New York taxi order dataset, we want to compare the order numbers and travel speed in different boroughs.</p> + </li> +</ol> + +<p>Drag and drop <code class="highlighter-rouge">BOROUGH</code> under <code class="highlighter-rouge">PICKUP_NEWYORK_ZONE</code> to Columns, and drag and drop <code class="highlighter-rouge">ORDER_COUNT</code> and <code class="highlighter-rouge">trip_mean_speed</code> under Measures to Rows, and display the results as a map. The color shade represents the average speed and the size of the symbol represents the order number. We can see that taxi orders departing from Manhattan are higher than all the other boroughs combined, but the average speed is the lowest. Queens ranks second in terms of order number while Staten Island has the lowest amount of taxi activities. 
The average speed of taxi trips departing from the Bronx is 82 mph, several times higher than that of the other boroughs. This also reflects the population density and the level of economic development in different New York boroughs.</p>
+
+<p><img src="/images/blog/kylin4_on_cloud/27_tableau_taxi_1.png" alt="" /></p>
+
+<p>Then we will replace the field <code class="highlighter-rouge">BOROUGH</code> from <code class="highlighter-rouge">PICKUP_NEWYORK_ZONE</code> with <code class="highlighter-rouge">BOROUGH</code> from <code class="highlighter-rouge">DROPOFF_NEWYORK_ZONE</code>, to analyze the number of taxi orders and average speed by drop-off ID:</p>
+
+<p><img src="/images/blog/kylin4_on_cloud/27_tableau_taxi_2.png" alt="" /></p>
+
+<p>The pick-up and drop-off data of Brooklyn, Queens, and the Bronx differ greatly. For example, taxi orders to Brooklyn or the Bronx are much higher than those departing from there, while there are far fewer taxi trips to Queens than those starting from it.</p>
+
+<ul>
+ <li>Travel habits change after the pandemic (long-distance vs. short-distance travels)</li>
+</ul>
+
+<p>To see how residents' travel habits changed, we can analyze the average trip mileage: drag and drop the dimension <code class="highlighter-rouge">MONTH_START</code> to Rows, and drag and drop the metric <code class="highlighter-rouge">trip_mean_distance</code> to Columns:</p>
+
+<p><img src="/images/blog/kylin4_on_cloud/28_tableau_taxi_3.png" alt="" /></p>
+
+<p>Based on the histogram, we can see that there have been significant changes in people's travel behavior before and after the outbreak of COVID-19: the average trip mileage has increased significantly since March 2020, in some months even severalfold, and the trip mileage of each month fluctuated greatly.
We can combine these figures with the pandemic data at the month level, so we drag and drop <code class="highlighter-rouge">SUM_NEW_POSITIVE_CASES</code> and <code class="highlighter-rouge">MTD_ORDER_COUNT</code> to Rows and add <code class="highlighter-rouge">PROVINCE_STATE_NAME=New York</code> as the filter condition:</p>
+
+<p><img src="/images/blog/kylin4_on_cloud/29_tableau_taxi_4.png" alt="" /></p>
+
+<p>It is interesting to see that the number of taxi orders decreased sharply at the beginning of the outbreak while the average trip mileage increased, indicating people cut unnecessary short-distance travel or switched to a safer means of transportation. By comparing the data curves, we can see that the severity of the pandemic and people's travel patterns are highly correlated: taxi orders drop and average trip mileage increases when the pandemic worsens, while when the situation improves, taxi orders increase and average trip mileage drops.</p>
+
+<h3 id="data-analysis-via-excel">Data analysis via Excel</h3>
+
+<p>With <code class="highlighter-rouge">MDX for Kylin</code>, we can also use Kylin for big data analysis with Excel.
In this test, we will use Excel installed on a local Windows machine to connect to MDX for Kylin.</p>
+
+<ol>
+ <li>
+ <p>Open Excel, select <code class="highlighter-rouge">Data</code> -&gt; <code class="highlighter-rouge">Get Data</code> -&gt; <code class="highlighter-rouge">From Database</code> -&gt; <code class="highlighter-rouge">From Analysis Services</code>:</p>
+
+ <p><img src="/images/blog/kylin4_on_cloud/30_excel_connect.png" alt="" /></p>
+ </li>
+ <li>
+ <p>In the <code class="highlighter-rouge">Data Connection Wizard</code>, enter the server name: <code class="highlighter-rouge">http://${kylin_node_public_ip}:7080/mdx/xmla/covid_trip_project</code>:</p>
+
+ <p><img src="/images/blog/kylin4_on_cloud/31_excel_server.png" alt="" /></p>
+
+ <p><img src="/images/blog/kylin4_on_cloud/32_tableau_dataset.png" alt="" /></p>
+ </li>
+ <li>
+ <p>Then create a PivotTable for this data connection. We can see the data listed here is the same as what we saw in Tableau. So whether analysts use Tableau or Excel, they work on identical sets of data models, dimensions, and business metrics, thereby realizing unified semantics.</p>
+ </li>
+ <li>
+ <p>We have just created a pandemic map and run a trend analysis using <code class="highlighter-rouge">covid19</code> and <code class="highlighter-rouge">newyork_trip_data</code> with Tableau.
In Excel, we can check more details for the same datasets and data scenarios.</p>
+ </li>
+</ol>
+
+<ul>
+ <li>For COVID-19 related data, we add <code class="highlighter-rouge">REGION_HIERARCHY</code> and the pre-defined <code class="highlighter-rouge">SUM_NEW_POSITIVE_CASES</code> and mortality rate <code class="highlighter-rouge">CFR_COVID19</code> to the PivotTable:</li>
+</ul>
+
+<p><img src="/images/blog/kylin4_on_cloud/33_tableau_covid19_1.png" alt="" /></p>
+
+<p>The highest level of the regional hierarchy is <code class="highlighter-rouge">CONTINENT_NAME</code>, which includes the number of confirmed cases and the mortality rate in each continent. We can see that Europe has the highest number of confirmed cases while Africa has the highest mortality rate. In this PivotTable, we can easily drill down to lower regional levels to check more fine-grained data, such as data from different Asian countries, and sort them in descending order according to the number of confirmed cases:</p>
+
+<p><img src="/images/blog/kylin4_on_cloud/34_excel_covid20_2.png" alt="" /></p>
+
+<p>The data shows that India, Turkey, and Iran are the countries with the highest number of confirmed cases.</p>
+
+<ul>
+ <li>To answer the question of whether the pandemic has a significant impact on taxi orders, we first look at the YTD value and growth rate of taxi orders at the year level by creating a PivotTable with <code class="highlighter-rouge">TIME_HIERARCHY</code> as the time hierarchy dimension and <code class="highlighter-rouge">YOY_ORDER_COUNT</code> and <code class="highlighter-rouge">YTD_ORDER_COUNT</code> as measures:</li>
+</ul>
+
+<p><img src="/images/blog/kylin4_on_cloud/35_excel_taxi_1.png" alt="" /></p>
+
+<p>It can be seen that since the outbreak of the pandemic in 2020, there has been a sharp decrease in taxi orders. The growth rate in 2020 is -0.7079, that is, a reduction of about 70% in taxi orders.
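<p>The YOY figure is plain arithmetic over yearly totals: (current year - previous year) / previous year. A quick sketch with hypothetical order counts (the real totals come from the cube; the helper name here is ours, not part of Kylin):</p>

```python
def yoy(current: float, previous: float) -> float:
    """Year-over-year growth rate; e.g. -0.7079 means a ~70% drop."""
    return (current - previous) / previous

# Hypothetical yearly order counts, for illustration only.
orders_by_year = {2019: 84_000_000, 2020: 24_500_000}
rate = yoy(orders_by_year[2020], orders_by_year[2019])
print(round(rate, 4))  # -0.7083
```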
The growth rate in 2021 is still negative, but the decline is less pronounced than in 2020, when the pandemic had just started.</p>
+
+<p>Click to expand the time hierarchy to view the data at quarter, month, and even day levels. By selecting <code class="highlighter-rouge">MOM_ORDER_COUNT</code> and <code class="highlighter-rouge">ORDER_COUNT</code>, we can check the monthly order growth rate and order numbers in different time hierarchies:</p>
+
+<p><img src="/images/blog/kylin4_on_cloud/36_excel_taxi_2.png" alt="" /></p>
+
+<p>The order growth rate in March 2020 was -0.52, which is already a significant fall. The rate dropped even further to -0.92 in April, that is, a 92% reduction in orders. After that the decline slowed, but taxi orders remained much lower than before the outbreak.</p>
+
+<h3 id="use-api-to-integrate-kylin-with-data-analysis-platform">Use API to integrate Kylin with data analysis platform</h3>
+
+<p>In addition to mainstream BI tools such as Excel and Tableau, many companies also develop their own in-house data analysis platforms. For such self-developed data analysis platforms, users can still use Kylin + MDX for Kylin as the base of the analysis platform by calling its APIs to ensure a unified data definition.
In the following part, we will show you how to send a query to MDX for Kylin through Olap4j, a Java library similar to a JDBC driver that can access OLAP services.</p>
+
+<p>We also provide a simple demo for our users; you may click <a href="https://github.com/apache/kylin/tree/mdx-query-demo">mdx query demo</a> to download the source code.</p>
+
+<ol>
+ <li>
+ <p>Download the jar package for the demo:</p>
+
+ <div class="highlighter-rouge"><pre class="highlight"><code>wget https://s3.cn-north-1.amazonaws.com.cn/public.kyligence.io/kylin/kylin_demo/mdx_query_demo.tgz
+tar -xvf mdx_query_demo.tgz
+cd mdx_query_demo
+</code></pre>
+ </div>
+ </li>
+ <li>
+ <p>Run demo</p>
+ </li>
+</ol>
+
+<p>Make sure Java 8 is installed before running the demo:</p>
+
+<p><img src="/images/blog/kylin4_on_cloud/37_jdk_8.png" alt="" /></p>
+
+<p>Two parameters are needed to run the demo: the IP of the MDX node and the MDX query to be run. The default port is 7080. The MDX node IP here is the public IP of the Kylin node.</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>java -cp olap4j-xmla-1.2.0.jar:olap4j-1.2.0.jar:xercesImpl-2.9.1.jar:mdx-query-demo-0.0.1.jar io.kyligence.mdxquerydemo.MdxQueryDemoApplication "${kylin_node_public_ip}" "${mdx_query}"
+</code></pre>
+</div>
+
+<p>Or you could just enter the IP of the MDX node; the demo will then automatically run the following MDX statement to count the order number and average trip mileage of each borough according to the pickup ID:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>SELECT
+{[Measures].[ORDER_COUNT],
+[Measures].[trip_mean_distance]}
+DIMENSION PROPERTIES [MEMBER_UNIQUE_NAME],[MEMBER_ORDINAL],[MEMBER_CAPTION] ON COLUMNS,
+NON EMPTY [PICKUP_NEWYORK_ZONE].[BOROUGH].[BOROUGH].AllMembers
+DIMENSION PROPERTIES [MEMBER_UNIQUE_NAME],[MEMBER_ORDINAL],[MEMBER_CAPTION] ON ROWS
+FROM [covid_trip_dataset]
+</code></pre>
+</div>
+
+<p>We will also use the default query in this tutorial.
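<p>Under the hood, an Olap4j XMLA connection simply POSTs a SOAP <code class="highlighter-rouge">Execute</code> request to the endpoint (here <code class="highlighter-rouge">http://${kylin_node_public_ip}:7080/mdx/xmla/covid_trip_project</code>, with Basic auth ADMIN/KYLIN). Below is a sketch of what such a request body looks like; the envelope shape is standard XMLA, but treat the exact property list as illustrative rather than a verified drop-in for MDX for Kylin:</p>

```python
def build_xmla_execute(mdx: str, catalog: str) -> str:
    """Wrap an MDX statement in an XMLA SOAP Execute envelope (sketch)."""
    return (
        '<SOAP-ENV:Envelope '
        'xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">'
        "<SOAP-ENV:Body>"
        '<Execute xmlns="urn:schemas-microsoft-com:xml-analysis">'
        f"<Command><Statement>{mdx}</Statement></Command>"
        "<Properties><PropertyList>"
        f"<Catalog>{catalog}</Catalog>"
        "<Format>Multidimensional</Format>"
        "</PropertyList></Properties>"
        "</Execute>"
        "</SOAP-ENV:Body>"
        "</SOAP-ENV:Envelope>"
    )

mdx = "SELECT {[Measures].[ORDER_COUNT]} ON COLUMNS FROM [covid_trip_dataset]"
body = build_xmla_execute(mdx, "covid_trip_dataset")
# `body` would then be POSTed over HTTP to the /mdx/xmla/... endpoint;
# Olap4j does this (plus response parsing) for you.
```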
After the execution is completed, we can get the query result in the command line:</p>
+
+<p><img src="/images/blog/kylin4_on_cloud/38_demo_result.png" alt="" /></p>
+
+<p>As you can see, we have successfully obtained the data needed. The result shows that the largest number of taxi orders is from Manhattan, with an average order distance of only about 2.4 miles, which is reasonable if we consider the area and dense population of Manhattan; meanwhile, the average distance of orders departing from the Bronx is 33 miles, much higher than in any other borough, probably due to the Bronx's remote location.</p>
+
+<p>As with Tableau and Excel, the MDX statement here can directly use the metrics defined in Kylin and MDX for Kylin. Users can do further analysis of the data with their own data analysis platform.</p>
+
+<h3 id="unified-data-definition">Unified data definition</h3>
+
+<p>We have demonstrated 3 ways to work with Kylin + MDX for Kylin, from which we can see that with the help of the Kylin multidimensional database and the MDX for Kylin semantic layer, no matter which data analytics system you are using, you can always use the same data model and business metrics and enjoy the advantages brought by unified semantics.</p>
+
+<h2 id="delete-clusters">Delete clusters</h2>
+
+<h3 id="delete-query-cluster">Delete query cluster</h3>
+
+<p>After the analysis, we can execute the cluster destruction command to delete the query cluster. If you also want to delete the RDS metadata database, the monitor node, and the VPC of Kylin and MDX for Kylin, you can execute the following cluster destroy command:</p>
+
+<div class="highlighter-rouge"><pre class="highlight"><code>python deploy.py --type destroy-all
+</code></pre>
+</div>
+
+<h3 id="check-aws-resources">Check AWS resources</h3>
+
+<p>After all cluster resources are deleted, there should be no Kylin deployment tool-related Stack on <code class="highlighter-rouge">CloudFormation</code>.
If you also want to delete the deployment-related files and data from S3, you can manually delete the following folders under the S3 working directory:</p>
+
+<p><img src="/images/blog/kylin4_on_cloud/39_check_s3_demo.png" alt="" /></p>
+
+<h2 id="summary">Summary</h2>
+
+<p>You only need an AWS account to follow the steps in this tutorial and explore our Kylin deployment tool on the cloud. Kylin + MDX for Kylin, with our pre-computation technology, multi-dimensional models, and basic metrics management capabilities, enables users to build a big data analysis platform on the cloud in a convenient way. In addition, we also support seamless connection to mainstream BI tools, helping our users to better leverage their data with higher efficiency and a lower TCO.</p>
+</description>
+ <pubDate>Wed, 20 Apr 2022 04:00:00 -0700</pubDate>
+ <link>http://kylin.apache.org/blog/2022/04/20/kylin4-on-cloud-part2/</link>
+ <guid isPermaLink="true">http://kylin.apache.org/blog/2022/04/20/kylin4-on-cloud-part2/</guid>
+
+
+ <category>blog</category>
+
+ </item>
+
+ <item> <title>Kylin on Cloud – Quickly Build a Cloud Data Analysis Platform in Two Hours (Part 1)</title> <description><h2 id="section">Background</h2> @@ -661,6 +972,403 @@ sed -i <span class="s2"> </item> <item>
+ <title>Kylin on Cloud – Build A Data Analysis Platform on the Cloud in Two Hours Part 1</title>
+ <description><h2 id="video-tutorials">Video Tutorials</h2>
+
+<p><a href="https://youtu.be/5kKXEMjO1Sc">Kylin on Cloud – Build A Data Analysis Platform on the Cloud in Two Hours Part 1</a></p>
+
+<h2 id="background">Background</h2>
+
+<p>Apache Kylin is a multidimensional database based on pre-computation and multidimensional models. It also supports a standard SQL query interface. In Kylin, users can define table relationships by creating Models, define dimensions and measures by creating Cubes, and run data aggregation with Cube building.
The pre-computed data is saved to answer user queries, and users can also perform further aggregation on the pre-computed data, significantly improving query performance.</p>
+
+<p>With the release of Kylin 4.0, Kylin can now be deployed without a Hadoop environment. To make it easier for users to deploy Kylin on the cloud, the Kylin community recently developed a cloud deployment tool that allows users to obtain a complete Kylin cluster by executing just one command, delivering a fast and efficient analysis experience for the users. Moreover, in January 2022, the Kylin community released MDX for Kylin to enhance the semantic capability of Kylin as a multidimensional database. MDX for Kylin provides an MDX query interface; users can define business metrics based on the multidimensional model and translate the Kylin data models into a business-friendly language to give the data business value, making it easier to integrate with Excel, Tableau, and other BI tools.</p>
+
+<p>With all these innovations, users can easily and quickly deploy Kylin clusters on the cloud, create multi-dimensional models, and enjoy the short query latency brought by pre-computation; what's more, users can also use MDX for Kylin to define and manage business metrics, leveraging both the advantages of data warehouses and business semantics.</p>
+
+<p>With Kylin + MDX for Kylin, users can directly work with BI tools for multidimensional data analysis, or use it as the basis to build complex applications such as metrics platforms.
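<p>The "further aggregation on the pre-computed data" mentioned above is essentially a roll-up: finer-grained partial aggregates (say, per day and borough) can be summed again to answer a coarser query (per borough) without touching the raw rows. A toy sketch of the idea, with made-up numbers and no claim about Kylin's internal layout:</p>

```python
# Pre-computed (day, borough) partial aggregates: (order_count, sum_amount).
# Values are made up for illustration.
day_cuboid = {
    ("2020-03-01", "Manhattan"): (120, 1500.0),
    ("2020-03-02", "Manhattan"): (80, 1000.0),
    ("2020-03-01", "Queens"): (40, 600.0),
}

# Roll up to borough level by re-aggregating the partial results.
by_borough = {}
for (_, borough), (count, amount) in day_cuboid.items():
    total_count, total_amount = by_borough.get(borough, (0, 0.0))
    by_borough[borough] = (total_count + count, total_amount + amount)

print(by_borough["Manhattan"])  # (200, 2500.0)
```

<p>This is also why such cubes store sums and counts rather than averages: sums and counts roll up correctly, and an average can always be derived from them at query time.</p>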
Compared with the solution of building a metrics platform directly on computing engines such as Spark and Hive, which perform join and aggregation at query time, Kylin, with its multidimensional modeling, pre-computation technology, and the semantic layer capabilities provided by MDX for Kylin, offers users key functions such as massive data computation, extremely fast query response, a unified multidimensional model, interfaces to a variety of BI tools, and basic business metrics management capabilities.</p>
+
+<p>This tutorial will start from a data engineer's perspective to show how to build a Kylin on Cloud data analysis platform, which delivers a high-performance query experience over hundreds of millions of rows of data at a lower TCO, the capability to manage business metrics through MDX for Kylin, and a direct connection to BI tools for quick report generation.</p>
+
+<p>Each step of this tutorial is explained in detail with illustrations and checkpoints to help newcomers. All you need to start is an AWS account and 2 hours. Note: The cloud cost to finish this tutorial is around $15.</p>
+
+<p><img src="/images/blog/kylin4_on_cloud/0_deploy_kylin.png" alt="" /></p>
+
+<h2 id="business-scenario">Business scenario</h2>
+
+<p>Since the beginning of 2020, COVID-19 has spread rapidly all over the world, which has greatly changed people's daily lives, especially their travel habits. This tutorial examines the impact of the pandemic on the New York taxi industry based on the pandemic data and New York taxi travel data since 2018; indicators such as positive cases, fatality rate, taxi orders, and average travel mileage will be analyzed.
We hope this analysis could provide some insights for future decision-making.</p> + +<h3 id="business-issues">Business issues</h3> + +<ul> + <li>The severity of the pandemic in different countries and regions</li> + <li>Travel metrics of different blocks in New York City, such as order number, travel mileage, etc.</li> + <li>Does the pandemic have a significant impact on taxi orders?</li> + <li>Travel habits change after the pandemic (long-distance vs. short-distance travels)</li> + <li>Is the severity of the pandemic strongly related to taxi travel?</li> +</ul> + +<h3 id="dataset">Dataset</h3> + +<h4 id="covid-19-dataset">COVID-19 Dataset</h4> + +<p>The COVID-19 dataset includes a fact table <code class="highlighter-rouge">covid_19_activity</code> and a dimension table <code class="highlighter-rouge">lookup_calendar</code>.</p> + +<p><code class="highlighter-rouge">covid_19_activity</code> contains the number of confirmed cases and deaths reported each day in different regions around the world. <code class="highlighter-rouge">lookup_calendar</code> is a date dimension table that holds time-extended information, such as the beginning of the year, and the beginning of the month for each date. 
<code class="highlighter-rouge">covid_19_activity</code> and <code class="highlighter-rouge">lookup_calendar</code> are associated by date.<br />
+COVID-19 dataset information:</p>
+
+<table>
+ <tbody>
+ <tr>
+ <td>Data size</td>
+ <td>235 MB</td>
+ </tr>
+ <tr>
+ <td>Fact table row count</td>
+ <td>2,753,688</td>
+ </tr>
+ <tr>
+ <td>Data range</td>
+ <td>2020-01-21~2022-03-07</td>
+ </tr>
+ <tr>
+ <td>Download address provided by the dataset provider</td>
+ <td>https://data.world/covid-19-data-resource-hub/covid-19-case-counts/workspace/file?filename=COVID-19+Activity.csv</td>
+ </tr>
+ <tr>
+ <td>S3 directory of the dataset</td>
+ <td>s3://public.kyligence.io/kylin/kylin_demo/data/covid19_data/</td>
+ </tr>
+ </tbody>
+</table>
+
+<h4 id="nyc-taxi-order-dataset">NYC taxi order dataset</h4>
+
+<p>The NYC taxi order dataset consists of a fact table <code class="highlighter-rouge">taxi_trip_records_view</code> and two dimension tables, <code class="highlighter-rouge">newyork_zone</code> and <code class="highlighter-rouge">lookup_calendar</code>.</p>
+
+<p>Among them, each record in <code class="highlighter-rouge">taxi_trip_records_view</code> corresponds to one taxi trip and contains information like the pick-up ID, drop-off ID, trip duration, order amount, travel mileage, etc. <code class="highlighter-rouge">newyork_zone</code> records the administrative district corresponding to each location ID. <code class="highlighter-rouge">taxi_trip_records_view</code> is connected with <code class="highlighter-rouge">newyork_zone</code> through the columns PULocationID and DOLocationID to get the information about pick-up and drop-off blocks. <code class="highlighter-rouge">lookup_calendar</code> is the same dimension table as in the COVID-19 dataset.
<code class="highlighter-rouge">taxi_trip_records_view</code> and <code class="highlighter-rouge">lookup_calendar</code> are connected by date.</p> + +<p>NYC taxi order dataset informationï¼</p> + +<table> + <tbody> + <tr> + <td>Data size</td> + <td>19 G</td> + </tr> + <tr> + <td>Fact table row count</td> + <td>226,849,274</td> + </tr> + <tr> + <td>Data range</td> + <td>2018-01-01~2021-07-31</td> + </tr> + <tr> + <td>Download address provided by the dataset provider</td> + <td>https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page</td> + </tr> + <tr> + <td>S3 directory of the dataset</td> + <td>s3://public.kyligence.io/kylin/kylin_demo/data/trip_data_2018-2021/</td> + </tr> + </tbody> +</table> + +<h4 id="er-diagram">ER Diagram</h4> + +<p>The ER diagram of the COVID-19 dataset and NYC taxi order dataset is as follows:</p> + +<p><img src="/images/blog/kylin4_on_cloud/1_table_ER.png" alt="" /></p> + +<h3 id="metrics-design">Metrics design</h3> + +<p>Based on what we try to solve with this model, we designed the following atomic metrics and business metrics:</p> + +<h6 id="atomic-metrics">1. Atomic metrics</h6> + +<p>Atomic metrics refer to measures created in Kylin Cube, which are relatively simple, as they only run aggregated calculations on one column.</p> + +<ul> + <li>Covid19 case count: <code class="highlighter-rouge">sum(covid_19_activity.people_positive_cases_count)</code></li> + <li>Covid19 fatality: <code class="highlighter-rouge">sum(covid_19_activity. people_death_count)</code></li> + <li>Covid19 new positive case count: <code class="highlighter-rouge">sum(covid_19_activity. people_positive_new_cases_count)</code></li> + <li>Covid19 new death count: <code class="highlighter-rouge">sum(covid_19_activity. people_death_new_count)</code></li> + <li>Taxi trip mileage: <code class="highlighter-rouge">sum(taxi_trip_records_view. trip_distance)</code></li> + <li>Taxi order amount: <code class="highlighter-rouge">sum(taxi_trip_records_view. 
total_amount)</code></li> + <li>Taxi trip count: <code class="highlighter-rouge">count()</code></li> + <li>Taxi trip duration: <code class="highlighter-rouge">sum(taxi_trip_records_view.trip_time_hour)</code></li> +</ul> + +<h6 id="business-metrics">2. Business metrics</h6> + +<p>Business metrics are compound calculations over the atomic metrics that carry specific business meanings.</p> + +<ul> + <li>MTD, YTD of each atomic metric</li> + <li>MoM, YoY of each atomic metric</li> + <li>Covid19 fatality rate: death count/positive case count</li> + <li>Average taxi trip speed: taxi trip distance/taxi trip duration</li> + <li>Average taxi trip mileage: taxi trip distance/taxi trip count</li> +</ul> + +<h2 id="operation-overview">Operation Overview</h2> + +<p>The diagram below shows the main steps of building a cloud data analysis platform with Apache Kylin and performing data analysis on it:</p> + +<p><img src="/images/blog/kylin4_on_cloud/2_step_overview.jpg" alt="" /></p> + +<h2 id="cluster-architecture">Cluster architecture</h2> + +<p>Here is the architecture of the Kylin cluster deployed by the cloud deployment tool:</p> + +<p><img src="/images/blog/kylin4_on_cloud/3_kylin_cluster.jpg" alt="" /></p> + +<h2 id="kylin-on-cloud-deployment">Kylin on Cloud deployment</h2> + +<h3 id="prerequisites">Prerequisites</h3> + +<ul> + <li>GitHub Desktop: for downloading the deployment tool</li> + <li>Python 3.6.6 or above: for running the deployment tool</li> +</ul> + +<h3 id="aws-permission-check-and-initialization">AWS permission check and initialization</h3> + +<p>Log in to AWS with your account to check the permission status and then create the Access Key, IAM Role, Key Pair, and S3 working directory according to the document <a href="https://github.com/apache/kylin/blob/kylin4_on_cloud/readme/prerequisites.md">Prerequisites</a>. 
Subsequent AWS operations will be performed with this account.</p> + +<h3 id="configure-the-deployment-tool">Configure the deployment tool</h3> + +<ol> + <li> + <p>Execute the following command to clone the code for the Kylin on AWS deployment tool:</p> + + <div class="highlighter-rouge"><pre class="highlight"><code>git clone -b kylin4_on_cloud --single-branch https://github.com/apache/kylin.git &amp;&amp; cd kylin +</code></pre> + </div> + </li> + <li> + <p>Initialize a Python virtual environment on your local machine.</p> + + <p>Run the command below to check the Python version. Note: Python 3.6.6 or above is needed:</p> + + <div class="highlighter-rouge"><pre class="highlight"><code>python --version +</code></pre> + </div> + + <p>Initialize the virtual environment and install dependencies:</p> + + <div class="highlighter-rouge"><pre class="highlight"><code>bin/init.sh +source venv/bin/activate +</code></pre> + </div> + </li> + <li> + <p>Modify the configuration file <code class="highlighter-rouge">kylin_configs.yaml</code>.</p> + </li> +</ol> + +<p>Open the <code class="highlighter-rouge">kylin_configs.yaml</code> file and replace the configuration items with the actual values:</p> + +<ul> + <li><code class="highlighter-rouge">AWS_REGION</code>: Region for the EC2 instances, the default value is <code class="highlighter-rouge">cn-northwest-1</code></li> + <li><code class="highlighter-rouge">${IAM_ROLE_NAME}</code>: IAM Role just created, e.g. <code class="highlighter-rouge">kylin_deploy_role</code></li> + <li><code class="highlighter-rouge">${S3_URI}</code>: S3 working directory for deploying Kylin, e.g. s3://kylindemo/kylin_demo_dir/</li> + <li><code class="highlighter-rouge">${KEY_PAIR}</code>: Key pair just created, e.g. kylin_deploy_key</li> + <li><code class="highlighter-rouge">${Cidr Ip}</code>: IP address range that is allowed to access the EC2 instances, e.g. 
10.1.0.0/32, usually set to your external IP address to ensure that only you can access these EC2 instances</li> +</ul> + +<p>As Kylin adopts a read-write separation architecture to separate build and query resources, in the following steps we will first start a build cluster to connect to Glue, create tables, load data sources, and submit build jobs for pre-computation, then delete the build cluster while keeping the metadata. After that, we will start a query cluster with MDX for Kylin to create business metrics, connect to BI tools, and perform data analysis. The Kylin on AWS cluster uses RDS to store metadata and S3 to store the built data, and it supports loading data sources from AWS Glue. Except for the EC2 nodes, the resources used are permanent and will not disappear when the nodes are deleted. Therefore, when there is no query or build job, users can delete the build or query clusters and keep only the metadata and the S3 working directory.</p> + +<h3 id="kylin-build-cluster">Kylin build cluster</h3> + +<h4 id="start-kylin-build-cluster">Start Kylin build cluster</h4> + +<ol> + <li> + <p>Start the build cluster with the following command. 
The whole process may take 15-30 minutes depending on your network conditions.</p> + + <div class="highlighter-rouge"><pre class="highlight"><code>python deploy.py --type deploy --mode job +</code></pre> + </div> + </li> + <li> + <p>You may check the terminal to see if the build cluster is successfully deployed:</p> + </li> +</ol> + +<p><img src="/images/blog/kylin4_on_cloud/4_deploy_cluster_successfully.png" alt="" /></p> + +<h4 id="check-aws-service">Check AWS Service</h4> + +<ol> + <li> + <p>Go to CloudFormation on the AWS console, where you can see 7 stacks created by the Kylin deployment tool:</p> + + <p><img src="/images/blog/kylin4_on_cloud/5_check_aws_stacks.png" alt="" /></p> + </li> + <li> + <p>Users can view the details of the EC2 nodes through the AWS console or use the command below to check the names, private IPs, and public IPs of all EC2 nodes.</p> + </li> +</ol> + +<div class="highlighter-rouge"><pre class="highlight"><code>python deploy.py --type list +</code></pre> +</div> + +<p><img src="/images/blog/kylin4_on_cloud/6_list_cluster_node.png" alt="" /></p> + +<h4 id="spark-sql-query-response-time">Spark-SQL query response time</h4> + +<p>Let's first check the query response time in a plain Spark-SQL environment as a baseline for comparison.</p> + +<ol> + <li> + <p>First, log in to the EC2 instance where Kylin is deployed using the public IP of the Kylin node, switch to the root user, and source <code class="highlighter-rouge">~/.bash_profile</code> to apply the environment variables set beforehand.</p> + + <div class="highlighter-rouge"><pre class="highlight"><code>ssh -i "${KEY_PAIR}" ec2-user@${kylin_node_public_ip} +sudo su +source ~/.bash_profile +</code></pre> + </div> + </li> + <li> + <p>Go to <code class="highlighter-rouge">$SPARK_HOME</code> and modify the configuration file <code class="highlighter-rouge">conf/spark-defaults.conf</code>, changing spark_master_node_private_ip to the private IP of the real Spark master node:</p> + + <div class="highlighter-rouge"><pre class="highlight"><code>cd $SPARK_HOME +vim conf/spark-defaults.conf + +## Replace spark_master_node_private_ip with the private IP of the real Spark master node +spark.master spark://spark_master_node_private_ip:7077 +</code></pre> + </div> + + <p>In <code class="highlighter-rouge">spark-defaults.conf</code>, the resource allocation for the driver and executors is the same as that of the Kylin query cluster.</p> + </li> + <li> + <p>Create the tables in Spark-SQL.</p> + + <p>All data of the test dataset is stored in S3 buckets in <code class="highlighter-rouge">cn-north-1</code> and <code class="highlighter-rouge">us-east-1</code>. If your S3 bucket is in <code class="highlighter-rouge">cn-north-1</code> or <code class="highlighter-rouge">us-east-1</code>, you can directly run the SQL to create the tables; otherwise, you will need to execute the following script to copy the data to the S3 working directory set in <code class="highlighter-rouge">kylin_configs.yaml</code> and modify the create-table SQL accordingly:</p> + + <div class="highlighter-rouge"><pre class="highlight"><code>## AWS CN user +aws s3 sync s3://public.kyligence.io/kylin/kylin_demo/data/ ${S3_DATA_DIR} --region cn-north-1 + +## AWS Global user +aws s3 sync s3://public.kyligence.io/kylin/kylin_demo/data/ ${S3_DATA_DIR} --region us-east-1 + +## Modify create table SQL +sed -i "s#s3://public.kyligence.io/kylin/kylin_demo/data/#${S3_DATA_DIR}#g" /home/ec2-user/kylin_demo/create_kylin_demo_table.sql +</code></pre> + </div> + + <p>Execute the SQL for creating the tables:</p> + + <div class="highlighter-rouge"><pre class="highlight"><code>bin/spark-sql -f /home/ec2-user/kylin_demo/create_kylin_demo_table.sql +</code></pre> + </div> + </li> + <li> + <p>Execute a query in Spark-SQL.</p> + + <p>Go to Spark-SQL:</p> + + <div class="highlighter-rouge"><pre class="highlight"><code>bin/spark-sql +</code></pre> + </div> + + <p>Run the query in Spark-SQL:</p> + + <p><code class="highlighter-rouge">sql + use kylin_demo; + select TAXI_TRIP_RECORDS_VIEW.PICKUP_DATE, NEWYORK_ZONE.BOROUGH, count(*), sum(TAXI_TRIP_RECORDS_VIEW.TRIP_TIME_HOUR), sum(TAXI_TRIP_RECORDS_VIEW.TOTAL_AMOUNT) + from 
TAXI_TRIP_RECORDS_VIEW + left join NEWYORK_ZONE + on TAXI_TRIP_RECORDS_VIEW.PULOCATIONID = NEWYORK_ZONE.LOCATIONID + group by TAXI_TRIP_RECORDS_VIEW.PICKUP_DATE, NEWYORK_ZONE.BOROUGH; +</code></p> + + <p>We can see that with the same configuration as the Kylin query cluster, a direct query through Spark-SQL takes over 100 seconds:</p> + + <p><img src="/images/blog/kylin4_on_cloud/7_query_in_spark_sql.png" alt="" /></p> + </li> + <li> + <p>After the query is successfully executed, we should exit Spark-SQL before proceeding to the following steps, to save resources.</p> + </li> +</ol> + +<h4 id="import-kylin-metadata">Import Kylin metadata</h4> + +<ol> + <li> + <p>Go to <code class="highlighter-rouge">$KYLIN_HOME</code>:</p> + + <div class="highlighter-rouge"><pre class="highlight"><code>cd $KYLIN_HOME +</code></pre> + </div> + </li> + <li> + <p>Import the metadata:</p> + + <div class="highlighter-rouge"><pre class="highlight"><code>bin/metastore.sh restore /home/ec2-user/meta_backups/ +</code></pre> + </div> + </li> + <li> + <p>Reload the metadata.</p> + </li> +</ol> + +<p>Type <code class="highlighter-rouge">http://${kylin_node_public_ip}:7070/kylin</code> (replace the IP with the public IP of the EC2 node) in your browser to open the Kylin web UI, and log in with the default username and password ADMIN/KYLIN:</p> + +<p><img src="/images/blog/kylin4_on_cloud/8_kylin_web_ui.png" alt="" /></p> + +<p>Reload the Kylin metadata by clicking System -&gt; Configuration -&gt; Reload Metadata:</p> + +<p><img src="/images/blog/kylin4_on_cloud/9_reload_kylin_metadata.png" alt="" /></p> + +<p>If you'd like to learn how to manually create the Model and Cube included in the Kylin metadata, please refer to <a href="https://cwiki.apache.org/confluence/display/KYLIN/Create+Model+and+Cube+in+Kylin">Create model and cube in Kylin</a>.</p> + +<h4 id="run-build">Run build</h4> + +<p>Submit the Cube build job. 
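As a side note, build submission can also be scripted: Kylin exposes a REST endpoint for cube rebuilds. The sketch below only constructs the request object and does not send it; the host and cube name are placeholder values I chose, not names from this tutorial.

```python
# Hedged sketch: Kylin's REST API offers PUT /kylin/api/cubes/{cube}/rebuild.
# Host and cube name below are placeholders. The code only builds the request;
# urllib.request.urlopen(req) would actually submit the job.
import base64
import json
import urllib.request

def rebuild_request(host, cube):
    """Construct a full-build request (entire time range, no partition column)."""
    payload = json.dumps({
        "startTime": 0,
        "endTime": 9223372036854775807,  # Long.MAX_VALUE: full build
        "buildType": "BUILD",
    }).encode("utf-8")
    token = base64.b64encode(b"ADMIN:KYLIN").decode("ascii")  # default credentials
    return urllib.request.Request(
        url="http://%s:7070/kylin/api/cubes/%s/rebuild" % (host, cube),
        data=payload,
        method="PUT",
        headers={"Authorization": "Basic " + token,
                 "Content-Type": "application/json"},
    )

req = rebuild_request("10.0.0.1", "covid19_cube")  # placeholder host and cube
print(req.get_method(), req.full_url)
```

The tutorial itself uses the web UI; a script like this is only convenient when builds need to be triggered repeatedly.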
Since no partition column is set in the model, we will directly perform a full build for the two cubes:</p> + +<p><img src="/images/blog/kylin4_on_cloud/10_full_build_cube.png.png" alt="" /></p> + +<p><img src="/images/blog/kylin4_on_cloud/11_kylin_job_complete.png" alt="" /></p> + +<h4 id="destroy-build-cluster">Destroy build cluster</h4> + +<p>After the build job is completed, execute the cluster delete command to shut down the build cluster. By default, the RDS stack, monitor stack, and VPC stack will be kept.</p> + +<div class="highlighter-rouge"><pre class="highlight"><code>python deploy.py --type destroy +</code></pre> +</div> + +<p>The cluster is successfully shut down:</p> + +<p><img src="/images/blog/kylin4_on_cloud/12_destroy_job_cluster.png" alt="" /></p> + +<h4 id="check-aws-resource">Check AWS resource</h4> + +<p>After the cluster is successfully deleted, you can go to the <code class="highlighter-rouge">CloudFormation</code> page in the AWS console to confirm whether there are remaining resources. 
Since the metadata RDS, monitor nodes, and VPC nodes are kept by default, you will see only the following three stacks on the page.</p> + +<p><img src="/images/blog/kylin4_on_cloud/13_check_aws_stacks.png" alt="" /></p> + +<p>The resources in the three stacks will still be used when we start the query cluster, to ensure that the query cluster and the build cluster use the same set of metadata.</p> + +<h4 id="intro-to-next-part">Intro to next part</h4> + +<p>That's all for the first part of Kylin on Cloud - Build A Data Analysis Platform on the Cloud in Two Hours; please see part 2 here: <a href="../kylin4-on-cloud-part2/">Kylin on Cloud - Quickly Build Cloud Data Analysis Service Platform within Two Hours</a> (Part 2)</p> +</description> + <pubDate>Wed, 20 Apr 2022 04:00:00 -0700</pubDate> + <link>http://kylin.apache.org/blog/2022/04/20/kylin4-on-cloud-part1/</link> + <guid isPermaLink="true">http://kylin.apache.org/blog/2022/04/20/kylin4-on-cloud-part1/</guid> + + + <category>blog</category> + + </item> + + <item> + <title>How to Query Kylin with Excel (MDX for Kylin)</title> + <description><h2 id="kylin--mdx">Why does Kylin need MDX?</h2> @@ -2337,157 +3045,6 @@ Kylin 4.0 对构建和查 <category>cn_blog</category> - - </item> - - <item> - <title>Performance optimization of Kylin 4.0 in cloud -- local cache and soft affinity scheduling</title> - <description><h2 id="background-introduction">01 Background Introduction</h2> -<p>Recently, the Apache Kylin community released Kylin 4.0.0 with a new architecture. The architecture of Kylin 4.0 supports the separation of storage and computing, which enables Kylin users to run Kylin 4.0 in a more flexible cloud deployment mode with elastic computing resources. With the cloud infrastructure, users can choose cheap and reliable object storage, such as S3, to store cube data. 
However, under a storage-compute separated architecture, we need to consider that compute nodes reading data from remote storage over the network is still a costly operation, which often leads to performance loss.<br /> -In order to improve the query performance of Kylin 4.0 when cloud object storage is used as storage, we try to introduce a local cache mechanism into the Kylin 4.0 query engine: when a query is executed, frequently used data is cached on the local disk to reduce the latency caused by pulling data from the remote object storage and achieve faster query response. In addition, to avoid wasting disk space by caching the same data on a large number of Spark executors at the same time, and so that compute nodes can read more of the required data from the local cache, we introduce a soft affinity scheduling strategy. The soft affinity strategy establishes a mapping between Spark executors and data files so that, in most cases, the same data is always read on the same executor, improving the cache hit rate.</p> - -<h2 id="implementation-principle">02 Implementation Principle</h2> - -<h4 id="local-cache">1. Local Cache</h4> - -<p>When Kylin 4.0 executes a query, it mainly goes through the following stages; the stages where a local cache can be used to improve performance are marked with dotted lines:</p> - -<p><img src="/images/blog/local-cache/Local_cache_stage.png" alt="" /></p> - -<ul> - <li>File list cache: cache the file status on the Spark driver side. When executing a query, the Spark driver needs to read the file list and obtain some file information for subsequent scheduling; the file status information is cached locally to avoid frequently reading remote file directories.</li> - <li>Data cache: cache the data on the Spark executor side. You can set the data cache to memory or disk. 
If it is set to cache to memory, you need to appropriately increase the executor memory to ensure that the executor has enough memory for the data cache; if it is cached to disk, you need to set the data cache directory, preferably on an SSD.</li> -</ul> - -<p>Based on the above design, different types of caches are applied on the driver side and the executor side of the Kylin 4.0 query engine. The basic architecture is as follows:</p> - -<p><img src="/images/blog/local-cache/kylin4_local_cache.png" alt="" /></p> - -<h4 id="soft-affinity-scheduling">2. Soft Affinity Scheduling</h4> - -<p>When caching data on the executor side, if all data were cached on all executors, the total cache size would be considerable, wasting a great deal of disk space, and cached data would easily be evicted frequently. To maximize the cache hit rate of the Spark executors, the Spark driver needs to schedule the tasks for the same file to the same executor as far as possible when the resource conditions are met, so that the data of the same file can be cached on one or a few specific executors and subsequent reads can be served from the cache.<br /> -To this end, we compute the target executor list by hashing the file name and taking the result modulo the number of executors. How many executors cache a given file is determined by the number of data cache replications configured by the user; generally, the more cache replications, the higher the probability of hitting the cache. When the target executors are unreachable or have no resources for scheduling, the scheduler falls back to Spark's random scheduling mechanism. This scheduling method is called the soft affinity scheduling strategy. 
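The selection rule just described can be sketched as follows; this is an illustrative assumption based on the description above, not the actual Kylin/sparder source code.

```python
# Illustrative sketch of soft affinity scheduling (assumption, not Kylin code):
# target executors for a file = hash(file name) modulo executor count, with
# `replications` consecutive executors allowed to cache the same file.
import zlib

def target_executors(file_name, num_executors, replications=1):
    """Executor ids that should cache `file_name` (stable across calls)."""
    start = zlib.crc32(file_name.encode("utf-8")) % num_executors
    return [(start + i) % num_executors for i in range(replications)]

def schedule(file_name, free_executors, num_executors, replications=1):
    """Soft affinity: prefer a target executor; otherwise fall back to
    Spark's normal (random) scheduling, modeled here as 'any free executor'."""
    for executor in target_executors(file_name, num_executors, replications):
        if executor in free_executors:
            return executor
    return next(iter(free_executors), None)  # soft fallback, never blocks
```

Because the fallback is only taken when no target executor has free resources, tasks are never delayed waiting for affinity, which is what makes the affinity "soft".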
Although it cannot guarantee a 100% cache hit, it effectively improves the cache hit rate and avoids the large amount of disk space a full cache would waste, while minimizing performance loss.</p> - -<h2 id="related-configuration">03 Related Configuration</h2> - -<p>Following the above principles, we implemented the basic local cache + soft affinity scheduling functionality in Kylin 4.0 and tested the query performance on the SSB and TPCH datasets respectively.<br /> -Several important configuration items are listed here for reference; the actual configuration used is given in the links at the end:</p> - -<ul> - <li>Enable soft affinity scheduling: kylin.query.spark-conf.spark.kylin.soft-affinity.enabled</li> - <li>Enable local cache: kylin.query.spark-conf.spark.hadoop.spark.kylin.local-cache.enabled</li> - <li>The number of data cache replications, that is, how many executors cache the same data file: kylin.query.spark-conf.spark.kylin.soft-affinity.replications.num</li> - <li>Cache to memory or to a local directory; set BUFF to cache in memory or LOCAL to cache on local disk: kylin.query.spark-conf.spark.hadoop.alluxio.user.client.cache.store.type</li> - <li>Maximum cache capacity: kylin.query.spark-conf.spark.hadoop.alluxio.user.client.cache.size</li> -</ul> - -<h2 id="performance-benchmark">04 Performance Benchmark</h2> - -<p>We conducted performance tests for three scenarios in an AWS EMR environment with scale factor = 10: a single-concurrency query test on the SSB dataset, and a single-concurrency test and a 4-concurrency test on the TPCH dataset. S3 was configured as storage in both the experimental group and the control group; local cache and soft affinity scheduling were enabled in the experimental group, but not in the control group. 
In addition, we also compare the results of the experimental group with the results when HDFS is used as storage in the same environment, so that users can get an intuitive sense of the optimization effect of local cache + soft affinity scheduling when deploying Kylin 4.0 on the cloud with object storage.</p> - -<p><img src="/images/blog/local-cache/local_cache_benchmark_result_ssb.png" alt="" /></p> - -<p><img src="/images/blog/local-cache/local_cache_benchmark_result_tpch1.png" alt="" /></p> - -<p><img src="/images/blog/local-cache/local_cache_benchmark_result_tpch4.png" alt="" /></p> - -<p>As can be seen from the above results:</p> - -<ol> - <li>In the single-concurrency scenario on the SSB dataset with S3 as storage, turning on local cache and soft affinity scheduling achieves about a 3x performance improvement, matching or even exceeding the performance with HDFS.</li> - <li>On the TPCH dataset with S3 as storage, whether with single or multiple concurrent queries, enabling local cache and soft affinity scheduling greatly improves the performance of all queries.</li> -</ol> - -<p>However, in the comparison of Q21 under the 4-concurrency TPCH test, we observed that the result with local cache and soft affinity scheduling enabled is lower than with S3 alone as storage. It may be that the data was not read through the cache for some reason; the underlying cause was not analyzed further in this test and will be addressed step by step in subsequent optimization. 
Moreover, because TPCH queries are complex and of varied SQL types, compared with HDFS the performance of some SQL improves while some falls slightly short, but generally speaking the results are very close to those with HDFS as storage.<br /> -This performance test is a preliminary verification of the improvement brought by local cache + soft affinity scheduling. On the whole, local cache + soft affinity scheduling achieves significant performance gains for both simple and complex queries, though there is some performance loss under highly concurrent queries.<br /> -If users adopt cloud object storage for Kylin 4.0, they can get a good performance experience with local cache + soft affinity scheduling enabled, which provides a performance guarantee for Kylin 4.0's storage-compute separation architecture on the cloud.</p> - -<h2 id="code-implementation">05 Code Implementation</h2> - -<p>Since the current code implementation is still at a basic stage, many details remain to be improved, such as implementing consistent hashing and handling the existing cache when the number of executors changes, so the author has not yet submitted a PR to the community code base. 
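To see why consistent hashing is on that to-do list, here is a hedged sketch (my own illustration, not Kylin code): with plain hash-modulo placement, changing the executor count remaps almost every file and invalidates most of the cache, whereas a consistent-hash ring only remaps the files adjacent to the added or removed executor.

```python
# Illustrative consistent-hash ring (assumption, not Kylin code): when an
# executor is added, only the files that land on its ring segments move;
# everything else keeps its cached location.
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, executors, vnodes=64):
        # Each executor owns `vnodes` points on the ring for smoother balance.
        self._ring = sorted(
            (self._hash("%s#%d" % (e, v)), e) for e in executors for v in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

    def executor_for(self, file_name):
        """Map a data file to the first executor clockwise on the ring."""
        idx = bisect.bisect(self._keys, self._hash(file_name)) % len(self._ring)
        return self._ring[idx][1]
```

Adding a fourth executor to a three-executor ring moves roughly a quarter of the files, and every moved file moves to the new executor; a plain `hash % n` scheme would move about three quarters of them.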
Developers who want to preview in advance can view the source code through the following link:</p> - -<p><a href="https://github.com/zzcclp/kylin/commit/4e75b7fa4059dd2eaed24061fda7797fecaf2e35">The code implementation of local cache and soft affinity scheduling</a></p> - -<h2 id="related-link">06 Related Link</h2> - -<p>You can view the performance test result data and specific configuration through the link:<br /> -<a href="https://github.com/Kyligence/kylin-tpch/issues/9">The benchmark of Kylin 4.0 with local cache and soft affinity scheduling</a></p> -</description> - <pubDate>Thu, 21 Oct 2021 04:00:00 -0700</pubDate> - <link>http://kylin.apache.org/blog/2021/10/21/Local-Cache-and-Soft-Affinity-Scheduling/</link> - <guid isPermaLink="true">http://kylin.apache.org/blog/2021/10/21/Local-Cache-and-Soft-Affinity-Scheduling/</guid> - - - <category>blog</category> - - </item> - - <item> - <title>Kylin 4 Performance Optimization on the Cloud: Local Cache and Soft Affinity Scheduling</title> - <description><h2 id="section">01 Background Introduction</h2> -<p>Recently, the Apache Kylin community released Kylin 4.0 with an all-new architecture. The Kylin 4.0 architecture supports the separation of storage and computing, which allows Kylin users to run Kylin 4.0 with a more flexible cloud deployment and elastically scalable compute resources. With cloud infrastructure, users can choose cheap and reliable object storage, such as S3, to store cube data. However, under a storage-compute separated architecture, we need to consider that compute nodes reading data from remote storage over the network is still a costly operation and often causes performance loss.<br /> -To improve the query performance of Kylin 4.0 when cloud object storage is used as storage, we try to introduce a local cache mechanism into the Kylin 4.0 query engine: when a query is executed, frequently used data is cached on the local disk to reduce the latency caused by pulling data from remote object storage and achieve faster query response. In addition, to avoid wasting disk space by caching the same data on a large number of Spark executors at the same time, and so that compute nodes can read more of the required data from the local cache, we introduce a soft affinity scheduling strategy: a mapping is established between Spark executors and data files so that, in most cases, the same data is always read on the same executor, improving the cache hit rate.</p> - -<h2 id="section-1">02 Implementation Principle</h2> - -<h4 id="section-2">1. Local Cache</h4> -<p>When Kylin 4.0 executes a query, it mainly goes through the following stages; the stages where a local cache can be used to improve performance are marked with dotted lines:</p> - -<p><img src="/images/blog/local-cache/Local_cache_stage.png" alt="" /></p> - -<ul> - <li>File list cache: cache the file status on the Spark driver side. When executing a query, the Spark driver needs to read the file list and obtain some file information for subsequent scheduling; the file status information is cached locally to avoid frequently reading remote file directories.</li> - <li>Data cache: cache the data on the Spark executor side. Users can set the data cache to memory or disk; if set to memory, the executor memory should be increased appropriately so that executors have enough memory for the data cache; if cached to disk, the data cache directory needs to be set, preferably on an SSD. In addition, the maximum cache capacity, the number of replications, and so on can all be adjusted through user configuration.</li> -</ul> - -<p>Based on the above design, different types of caches are applied on the driver side and the executor side of sparder, the query engine of Kylin 4.0. The basic architecture is as follows:</p> - -<p><img src="/images/blog/local-cache/kylin4_local_cache.png" alt="" /></p> - -<h4 id="section-3">2. Soft Affinity Scheduling</h4> -<p>When caching data on the executor side, if all data were cached on all executors, the total cache size would be considerable, wasting a great deal of disk space, and cached data would easily be evicted frequently. To maximize the cache hit rate of the Spark executors, the Spark driver needs to schedule the tasks for the same file to the same executor as far as possible when resources allow, so that the data of the same file can be cached on one or a few specific executors and subsequent reads can be served from the cache.<br /> -To this end, we compute the target executor list by hashing the file name and taking the result modulo the number of executors; how many executors cache a given file is determined by the user-configured number of cache replications. Generally, the more cache replications, the higher the probability of hitting the cache. When the target executors are unreachable or have no resources for scheduling, the scheduler falls back to Spark's random scheduling mechanism. This scheduling method is called the soft affinity scheduling strategy; although it cannot guarantee a 100% cache hit, it effectively improves the cache hit rate and avoids the large amount of disk space a full cache would waste, while minimizing performance loss.</p> - -<h2 id="section-4">03 Related Configuration</h2> -<p>Following the above principles, we implemented the basic local cache + soft affinity scheduling functionality in Kylin 4.0 and tested the query performance on the SSB and TPCH datasets respectively.<br /> -A few important configuration items are listed here for reference; the actual configuration used is given in the links at the end:<br /> -- Enable soft affinity scheduling: kylin.query.spark-conf.spark.kylin.soft-affinity.enabled<br /> -- Enable local cache: kylin.query.spark-conf.spark.hadoop.spark.kylin.local-cache.enabled<br /> -- The number of data cache replications, that is, how many executors cache the same data file: kylin.query.spark-conf.spark.kylin.soft-affinity.replications.num<br /> -- Cache to memory or to a local directory; set BUFF to cache in memory or LOCAL to cache on local disk: kylin.query.spark-conf.spark.hadoop.alluxio.user.client.cache.store.type<br /> -- Maximum cache capacity: kylin.query.spark-conf.spark.hadoop.alluxio.user.client.cache.size</p> - -<h2 id="section-5">04 Performance Comparison</h2> -<p>We conducted performance tests for three scenarios in an AWS EMR environment with scale factor = 10: a single-concurrency query test on the SSB dataset, and a single-concurrency test and a 4-concurrency test on the TPCH dataset. S3 was configured as storage in both the experimental group and the control group; local cache and soft affinity scheduling were enabled in the experimental group, but not in the control group. In addition, we also compare the results of the experimental group with the results when HDFS is used as storage in the same environment, so that users can get an intuitive sense of the optimization effect of local cache + soft affinity scheduling when deploying Kylin 4.0 on the cloud with object storage.</p> - -<p><img src="/images/blog/local-cache/local_cache_benchmark_result_ssb.png" alt="" /></p> - -<p><img src="/images/blog/local-cache/local_cache_benchmark_result_tpch1.png" alt="" /></p> - -<p><img src="/images/blog/local-cache/local_cache_benchmark_result_tpch4.png" alt="" /></p> - -<p>As can be seen from the above results:<br /> -1. In the single-concurrency scenario on the SSB 10 dataset with S3 as storage, turning on local cache and soft affinity scheduling achieves about a 3x performance improvement, matching the performance with HDFS or even exceeding it by about 5%.<br /> -2. On the TPCH 10 dataset with S3 as storage, whether with single or multiple concurrent queries, enabling local cache and soft affinity scheduling greatly improves the performance of basically all queries.</p> - -<p>However, in the comparison of Q21 under the 4-concurrency TPCH 10 test, we observed that the result with local cache and soft affinity scheduling enabled is lower than with S3 alone as storage. It may be that the data was not read through the cache for some reason; the underlying cause was not analyzed further in this test and will be addressed step by step in subsequent optimization. Because TPCH queries are complex and of varied SQL types, compared with HDFS some SQL still falls slightly short, but overall the results are quite close to those with HDFS as storage.<br /> -This performance test is a preliminary verification of the improvement brought by local cache + soft affinity scheduling. On the whole, it achieves significant performance gains for both simple and complex queries, though there is some performance loss under highly concurrent queries.<br /> -If users adopt cloud object storage for Kylin 4.0, they can get a good performance experience with local cache + soft affinity scheduling enabled, which provides a performance guarantee for Kylin 4.0's storage-compute separation architecture on the cloud.</p> - -<h2 id="section-6">05 Code Implementation</h2> -<p>Since the current code implementation is still at a basic stage, many details remain to be improved, such as implementing consistent hashing and handling the existing cache when the number of executors changes, so the author has not yet submitted a PR to the community code base. Developers who want an early preview can view the source code through the following link:<br /> -<a href="https://github.com/zzcclp/kylin/commit/4e75b7fa4059dd2eaed24061fda7797fecaf2e35">The code implementation of local cache and soft affinity scheduling</a></p> - -<h2 id="section-7">06 Related Link</h2> -<p>You can view the performance test result data and specific configuration through the link:<br /> -<a href="https://github.com/Kyligence/kylin-tpch/issues/9">The benchmark of Kylin 4.0 with local cache and soft affinity scheduling</a></p> -</description> - <pubDate>Thu, 21 Oct 2021 04:00:00 -0700</pubDate> - <link>http://kylin.apache.org/cn_blog/2021/10/21/Local-Cache-and-Soft-Affinity-Scheduling/</link> - <guid isPermaLink="true">http://kylin.apache.org/cn_blog/2021/10/21/Local-Cache-and-Soft-Affinity-Scheduling/</guid> - - - <category>cn_blog</category> </item>