This is an automated email from the ASF dual-hosted git repository.

morrysnow pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
     new 50ae9e6b19 [enhancement](planner) support select table sample (#10170)
50ae9e6b19 is described below

commit 50ae9e6b19ecab50515cf18c23de06008ac4f01a
Author: Xinyi Zou <zouxiny...@gmail.com>
AuthorDate: Fri Oct 14 15:05:23 2022 +0800

    [enhancement](planner) support select table sample (#10170)
    
    ### Motivation
    TABLESAMPLE allows you to limit the number of rows read from a table in the FROM clause.
    
    It is useful for data inspection, quick verification of SQL correctness, and table statistics collection.
    
    ### Grammar
    ```
    [TABLET tids] TABLESAMPLE n [ROWS | PERCENT] [REPEATABLE seek]
    ```
    
    Limits the number of rows read from the table in the FROM clause.
    A number of Tablets is selected pseudo-randomly from the table according to the specified row count or percentage,
    and a seed specified with REPEATABLE allows the same sample to be returned again.
    In addition, Tablet IDs can also be specified manually with TABLET.
    Note that this can only be used for OLAP tables.
    
    ### Example
    Q1:
    ```
    SELECT * FROM t1 TABLET(10001,10002) limit 1000;
    ```
    explain:
    ```
    partitions=1/1, tablets=2/12, tabletList=10001,10002
    ```
    Selects only the specified Tablet IDs of t1.
    
    Q2:
    ```
    SELECT * FROM t1 TABLESAMPLE(1000000 ROWS) REPEATABLE 1 limit 1000;
    ```
    explain:
    ```
    partitions=1/1, tablets=3/12, tabletList=10001,10002,10003
    ```
    
    Q3:
    ```
    SELECT * FROM t1 TABLESAMPLE(1000000 ROWS) REPEATABLE 2 limit 1000;
    ```
    explain:
    ```
    partitions=1/1, tablets=3/12, tabletList=10002,10003,10004
    ```
    
    Q2 and Q3 pseudo-randomly sample rows from t1.
    Note that the Tablets are actually selected according to the table statistics,
    and the total number of rows in the selected Tablets may be greater than the requested sample size,
    so if you want to return exactly 1000 rows, you need to add LIMIT.
    
    ### Design
    First, determine how many rows to sample from each partition, based on the number of partitions.
    Then determine the number of Tablets to select in each partition, based on the average number of rows per Tablet.
    If seek is not specified, that number of Tablets is selected pseudo-randomly from each partition.
    If seek is specified, Tablets are selected sequentially starting from the seek-th Tablet of the partition.
    Finally, any manually specified Tablet IDs are added to the selected Tablets.
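    
    Below is a minimal, standalone sketch of this per-partition calculation. It mirrors the formula used by the new OlapScanNode.computeSampleTabletIds(); the class name SampleSketch and the row counts in main() are illustrative assumptions, not part of this patch.
    ```java
    public class SampleSketch {
        /**
         * Number of Tablets to pick from one partition.
         * sampleRows       - total rows requested by TABLESAMPLE
         * partitionNums    - number of partitions in the table
         * partitionRows    - row count of this partition
         * partitionTablets - tablet count of this partition (assumed > 0; empty partitions are skipped in the real code)
         */
        static long tabletsToSample(long sampleRows, int partitionNums, long partitionRows, int partitionTablets) {
            // rows each partition should contribute to the sample
            long avgRowsPerPartition = sampleRows / Math.max(partitionNums, 1);
            // average rows per Tablet inside this partition
            long avgRowsPerTablet = Math.max(partitionRows / partitionTablets, 1);
            // ceil(avgRowsPerPartition / avgRowsPerTablet), at least 1, at most all Tablets
            long tabletCounts = Math.max(
                    avgRowsPerPartition / avgRowsPerTablet
                            + (avgRowsPerPartition % avgRowsPerTablet != 0 ? 1 : 0), 1);
            return Math.min(tabletCounts, partitionTablets);
        }
    
        public static void main(String[] args) {
            // e.g. TABLESAMPLE(1000000 ROWS) on a table with 1 partition, 12 Tablets and 4,800,000 rows
            System.out.println(tabletsToSample(1_000_000L, 1, 4_800_000L, 12)); // prints 3
        }
    }
    ```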
---
 .../Manipulation/SELECT.md                         |  12 ++
 .../Manipulation/SELECT.md                         |  14 ++-
 fe/fe-core/pom.xml                                 |   2 +-
 fe/fe-core/src/main/cup/sql_parser.cup             |  56 +++++++++-
 .../org/apache/doris/analysis/BaseTableRef.java    |   1 +
 .../java/org/apache/doris/analysis/TableRef.java   | 119 +++++++++++---------
 .../org/apache/doris/analysis/TableSample.java     | 100 +++++++++++++++++
 .../org/apache/doris/analysis/TupleDescriptor.java |  16 +--
 .../org/apache/doris/planner/OlapScanNode.java     |  93 ++++++++++++++++
 .../apache/doris/planner/SingleNodePlanner.java    |   2 +
 fe/fe-core/src/main/jflex/sql_scanner.flex         |   2 +
 .../org/apache/doris/analysis/SelectStmtTest.java  | 123 +++++++++++++++++++++
 .../doris/nereids/rules/mv/SelectRollupTest.java   |   2 +
 13 files changed, 480 insertions(+), 62 deletions(-)

diff --git 
a/docs/en/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/SELECT.md
 
b/docs/en/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/SELECT.md
index 91a9c630e7..c5d88583e4 100644
--- 
a/docs/en/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/SELECT.md
+++ 
b/docs/en/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/SELECT.md
@@ -43,6 +43,9 @@ SELECT
     select_expr [, select_expr ...]
     [FROM table_references
       [PARTITION partition_list]
+      [TABLET tabletid_list]
+      [TABLESAMPLE sample_value [ROWS | PERCENT]
+        [REPEATABLE pos_seek]]
     [WHERE where_condition]
     [GROUP BY {col_name | expr | position}
       [ASC | DESC], ... [WITH ROLLUP]]
@@ -81,6 +84,8 @@ SELECT
 
    10. SELECT supports explicit partition selection using PARTITION containing 
a list of partitions or subpartitions (or both) following the name of the table 
in `table_reference`
 
+   11. `[TABLET tids] TABLESAMPLE n [ROWS | PERCENT] [REPEATABLE seek]`: Limits the number of rows read from the table in the FROM clause. A number of Tablets is selected pseudo-randomly from the table according to the specified row count or percentage, and a seed specified with REPEATABLE allows the same sample to be returned again. In addition, you can also manually specify Tablet IDs with TABLET. Note that this can only be used for OLAP tables.
+
 **Syntax constraints:**
 
 1. SELECT can also be used to retrieve calculated rows without referencing any 
table.
@@ -281,6 +286,13 @@ A CTE can refer to itself to define a recursive CTE. 
Common applications of recu
     +------+------+------+------+
     ```
 
+15. TABLESAMPLE
+
+    ```sql
+    -- Pseudo-randomly sample 1000 rows from t1. Note that several Tablets are actually selected according to the table statistics, and the total number of rows in the selected Tablets may be greater than 1000, so if you want to return exactly 1000 rows you need to add LIMIT.
+    SELECT * FROM t1 TABLET(10001) TABLESAMPLE(1000 ROWS) REPEATABLE 2 limit 
1000;
+    ```
+
 ### keywords
 
     SELECT
diff --git 
a/docs/zh-CN/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/SELECT.md
 
b/docs/zh-CN/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/SELECT.md
index 948e6390c3..4af6ea5e99 100644
--- 
a/docs/zh-CN/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/SELECT.md
+++ 
b/docs/zh-CN/docs/sql-manual/sql-reference/Data-Manipulation-Statements/Manipulation/SELECT.md
@@ -43,6 +43,9 @@ SELECT
     select_expr [, select_expr ...]
     [FROM table_references
       [PARTITION partition_list]
+      [TABLET tabletid_list]
+      [TABLESAMPLE sample_value [ROWS | PERCENT]
+        [REPEATABLE pos_seek]]
     [WHERE where_condition]
     [GROUP BY {col_name | expr | position}
       [ASC | DESC], ... [WITH ROLLUP]]
@@ -79,7 +82,9 @@ SELECT
 
    通常来说 `having` 要和聚合函数(例如 :`COUNT(), SUM(), AVG(), MIN(), MAX()`)以及 `group 
by` 从句一起使用。
 
-11. SELECT 支持使用 PARTITION 显式分区选择,其中包含 `table_reference` 中表的名称后面的分区或子分区(或两者)列表
+10. SELECT 支持使用 PARTITION 显式分区选择,其中包含 `table_reference` 中表的名称后面的分区或子分区(或两者)列表。
+
+11. `[TABLET tids] TABLESAMPLE n [ROWS | PERCENT] [REPEATABLE seek]`: 
在FROM子句中限制表的读取行数,根据指定的行数或百分比从表中伪随机的选择数个Tablet,REPEATABLE指定种子数可使选择的样本再次返回,此外也可手动指定TableID,注意这只能用于OLAP表。
 
 **语法约束:**
 
@@ -281,6 +286,13 @@ CTE 可以引用自身来定义递归 CTE 。 递归 CTE 的常见应用包括
     +------+------+------+------+
     ```
 
+15. TABLESAMPLE
+
+    ```sql
+    
--在t1中伪随机的抽样1000行。注意实际是根据表的统计信息选择若干Tablet,被选择的Tablet总行数可能大于1000,所以若想明确返回1000行需要加上Limit。
+    SELECT * FROM t1 TABLET(10001) TABLESAMPLE(1000 ROWS) REPEATABLE 2 limit 
1000;
+    ```
+
 ### keywords
 
     SELECT
diff --git a/fe/fe-core/pom.xml b/fe/fe-core/pom.xml
index fa42aefb99..ea91b6a275 100644
--- a/fe/fe-core/pom.xml
+++ b/fe/fe-core/pom.xml
@@ -781,7 +781,7 @@ under the License.
             <plugin>
                 <groupId>com.github.os72</groupId>
                 <artifactId>protoc-jar-maven-plugin</artifactId>
-                <version>3.11.1</version>
+                <version>3.11.4</version>
                 <executions>
                     <execution>
                         <phase>generate-sources</phase>
diff --git a/fe/fe-core/src/main/cup/sql_parser.cup 
b/fe/fe-core/src/main/cup/sql_parser.cup
index b479f171e2..e62edbb310 100644
--- a/fe/fe-core/src/main/cup/sql_parser.cup
+++ b/fe/fe-core/src/main/cup/sql_parser.cup
@@ -478,6 +478,7 @@ terminal String
     KW_PLUGINS,
     KW_POLICY,
     KW_PRECEDING,
+    KW_PERCENT,
     KW_PROC,
     KW_PROCEDURE,
     KW_PROCESSLIST,
@@ -551,6 +552,7 @@ terminal String
     KW_SYSTEM,
     KW_TABLE,
     KW_TABLES,
+    KW_TABLESAMPLE,
     KW_TABLET,
     KW_TABLETS,
     KW_TASK,
@@ -675,6 +677,8 @@ nonterminal ArrayList<Expr> expr_pipe_list;
 nonterminal String select_alias, opt_table_alias, lock_alias;
 nonterminal ArrayList<String> ident_list;
 nonterminal PartitionNames opt_partition_names, partition_names;
+nonterminal ArrayList<Long> opt_tablet_list, tablet_list;
+nonterminal TableSample opt_table_sample, table_sample;
 nonterminal ClusterName cluster_name;
 nonterminal ClusterName des_cluster_name;
 nonterminal TableName table_name, opt_table_name;
@@ -4947,9 +4951,9 @@ base_table_ref_list ::=
   ;
 
 base_table_ref ::=
-    table_name:name opt_partition_names:partitionNames opt_table_alias:alias 
opt_common_hints:commonHints
+    table_name:name opt_partition_names:partitionNames 
opt_tablet_list:tabletIds opt_table_alias:alias opt_table_sample:tableSample 
opt_common_hints:commonHints
     {:
-        RESULT = new TableRef(name, alias, partitionNames, commonHints);
+        RESULT = new TableRef(name, alias, partitionNames, tabletIds, 
tableSample, commonHints);
     :}
     ;
 
@@ -4998,6 +5002,24 @@ opt_partition_names ::=
     :}
     ;
 
+opt_tablet_list ::=
+    /* empty */
+    {:
+        RESULT = null;
+    :}
+    | tablet_list:tabletList
+    {:
+        RESULT = tabletList;
+    :}
+    ;
+
+tablet_list ::=
+    KW_TABLET LPAREN integer_list:tabletIds RPAREN
+    {:
+        RESULT = Lists.newArrayList(tabletIds);
+    :}
+    ;
+
 partition_names ::=
     KW_PARTITION LPAREN ident_list:partitions RPAREN
     {:
@@ -5025,6 +5047,36 @@ partition_names ::=
     :}
     ;
 
+opt_table_sample ::=
+    /* empty */
+    {:
+        RESULT = null;
+    :}
+    | table_sample:tableSample
+    {:
+        RESULT = tableSample;
+    :}
+    ;
+
+table_sample ::=
+    KW_TABLESAMPLE LPAREN INTEGER_LITERAL:sampleValue KW_PERCENT RPAREN
+    {:
+        RESULT = new TableSample(true, sampleValue);
+    :}
+    | KW_TABLESAMPLE LPAREN INTEGER_LITERAL:sampleValue KW_ROWS RPAREN
+    {:
+        RESULT = new TableSample(false, sampleValue);
+    :}
+    | KW_TABLESAMPLE LPAREN INTEGER_LITERAL:sampleValue KW_PERCENT RPAREN 
KW_REPEATABLE INTEGER_LITERAL:seek
+    {:
+        RESULT = new TableSample(true, sampleValue, seek);
+    :}
+    | KW_TABLESAMPLE LPAREN INTEGER_LITERAL:sampleValue KW_ROWS RPAREN 
KW_REPEATABLE INTEGER_LITERAL:seek
+    {:
+        RESULT = new TableSample(false, sampleValue, seek);
+    :}
+    ;
+
 opt_lateral_view_ref_list ::=
     /* empty */
     {:
diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/analysis/BaseTableRef.java 
b/fe/fe-core/src/main/java/org/apache/doris/analysis/BaseTableRef.java
index d33405ec3b..ccfc508cd1 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/analysis/BaseTableRef.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/analysis/BaseTableRef.java
@@ -74,5 +74,6 @@ public class BaseTableRef extends TableRef {
         analyzeJoin(analyzer);
         analyzeSortHints();
         analyzeHints();
+        analyzeSample();
     }
 }
diff --git a/fe/fe-core/src/main/java/org/apache/doris/analysis/TableRef.java 
b/fe/fe-core/src/main/java/org/apache/doris/analysis/TableRef.java
index ee7cdfb86c..6a8099c719 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/analysis/TableRef.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/analysis/TableRef.java
@@ -79,8 +79,6 @@ import java.util.Set;
 public class TableRef implements ParseNode, Writable {
     private static final Logger LOG = LogManager.getLogger(TableRef.class);
     protected TableName name;
-    private PartitionNames partitionNames = null;
-
     // Legal aliases of this table ref. Contains the explicit alias as its 
sole element if
     // there is one. Otherwise, contains the two implicit aliases. Implicit 
aliases are set
     // in the c'tor of the corresponding resolved table ref (subclasses of 
TableRef) during
@@ -88,29 +86,20 @@ public class TableRef implements ParseNode, Writable {
     // contains the fully-qualified implicit alias to ensure that aliases_[0] 
always
     // uniquely identifies this table ref regardless of whether it has an 
explicit alias.
     protected String[] aliases;
-
+    protected List<Long> sampleTabletIds;
     // Indicates whether this table ref is given an explicit alias,
     protected boolean hasExplicitAlias;
-
     protected JoinOperator joinOp;
     protected List<String> usingColNames;
-    private ArrayList<String> joinHints;
-    private ArrayList<String> sortHints;
-    private ArrayList<String> commonHints; //The Hints is set by user
     protected ArrayList<LateralViewRef> lateralViewRefs;
-    private boolean isForcePreAggOpened;
-    // ///////////////////////////////////////
-    // BEGIN: Members that need to be reset()
-
     protected Expr onClause;
-
     // the ref to the left of us, if we're part of a JOIN clause
     protected TableRef leftTblRef;
+    protected TableSample tableSample;
 
     // true if this TableRef has been analyzed; implementing subclass should 
set it to true
     // at the end of analyze() call.
     protected boolean isAnalyzed;
-
     // Lists of table ref ids and materialized tuple ids of the full sequence 
of table
     // refs up to and including this one. These ids are cached during analysis 
because
     // we may alter the chain of table refs during plan generation, but we 
still rely
@@ -118,16 +107,20 @@ public class TableRef implements ParseNode, Writable {
     // Populated in analyzeJoin().
     protected List<TupleId> allTableRefIds = Lists.newArrayList();
     protected List<TupleId> allMaterializedTupleIds = Lists.newArrayList();
-
+    // ///////////////////////////////////////
+    // BEGIN: Members that need to be reset()
     // All physical tuple ids that this table ref is correlated with:
     // Tuple ids of root descriptors from outer query blocks that this table 
ref
     // (if a CollectionTableRef) or contained CollectionTableRefs (if an 
InlineViewRef)
     // are rooted at. Populated during analysis.
     protected List<TupleId> correlatedTupleIds = Lists.newArrayList();
-
     // analysis output
     protected TupleDescriptor desc;
-
+    private PartitionNames partitionNames = null;
+    private ArrayList<String> joinHints;
+    private ArrayList<String> sortHints;
+    private ArrayList<String> commonHints; //The Hints is set by user
+    private boolean isForcePreAggOpened;
     // set after analyzeJoinHints(); true if explicitly set via hints
     private boolean isBroadcastJoin;
     private boolean isPartitionJoin;
@@ -149,6 +142,14 @@ public class TableRef implements ParseNode, Writable {
     }
 
     public TableRef(TableName name, String alias, PartitionNames 
partitionNames, ArrayList<String> commonHints) {
+        this(name, alias, partitionNames, null, null, commonHints);
+    }
+
+    /**
+     * Constructs a TableRef with optional sample tablet ids and a table sample clause.
+     */
+    public TableRef(TableName name, String alias, PartitionNames 
partitionNames, ArrayList<Long> sampleTabletIds,
+                    TableSample tableSample, ArrayList<String> commonHints) {
         this.name = name;
         if (alias != null) {
             if (Env.isStoredTableNamesLowerCase()) {
@@ -160,6 +161,8 @@ public class TableRef implements ParseNode, Writable {
             hasExplicitAlias = false;
         }
         this.partitionNames = partitionNames;
+        this.sampleTabletIds = sampleTabletIds;
+        this.tableSample = tableSample;
         this.commonHints = commonHints;
         isAnalyzed = false;
     }
@@ -178,6 +181,7 @@ public class TableRef implements ParseNode, Writable {
                 (other.sortHints != null) ? 
Lists.newArrayList(other.sortHints) : null;
         onClause = (other.onClause != null) ? other.onClause.clone().reset() : 
null;
         partitionNames = (other.partitionNames != null) ? new 
PartitionNames(other.partitionNames) : null;
+        tableSample = (other.tableSample != null) ? new 
TableSample(other.tableSample) : null;
         commonHints = other.commonHints;
 
         usingColNames =
@@ -197,6 +201,7 @@ public class TableRef implements ParseNode, Writable {
                 lateralViewRefs.add((LateralViewRef) viewRef.clone());
             }
         }
+        this.sampleTabletIds = other.sampleTabletIds;
     }
 
     public PartitionNames getPartitionNames() {
@@ -208,6 +213,27 @@ public class TableRef implements ParseNode, Writable {
         
ErrorReport.reportAnalysisException(ErrorCode.ERR_UNRESOLVED_TABLE_REF, 
tableRefToSql());
     }
 
+    @Override
+    public String toSql() {
+        if (joinOp == null) {
+            // prepend "," if we're part of a sequence of table refs w/o an
+            // explicit JOIN clause
+            return (leftTblRef != null ? ", " : "") + tableRefToSql();
+        }
+
+        StringBuilder output = new StringBuilder(" " + joinOpToSql() + " ");
+        if (joinHints != null && !joinHints.isEmpty()) {
+            output.append("[").append(Joiner.on(", 
").join(joinHints)).append("] ");
+        }
+        output.append(tableRefToSql()).append(" ");
+        if (usingColNames != null) {
+            output.append("USING (").append(Joiner.on(", 
").join(usingColNames)).append(")");
+        } else if (onClause != null) {
+            output.append("ON ").append(onClause.toSql());
+        }
+        return output.toString();
+    }
+
     /**
      * Creates and returns a empty TupleDescriptor registered with the 
analyzer. The
      * returned tuple descriptor must have its source table set via 
descTbl.setTable()).
@@ -218,7 +244,6 @@ public class TableRef implements ParseNode, Writable {
         return null;
     }
 
-
     public JoinOperator getJoinOp() {
         // if it's not explicitly set, we're doing an inner join
         return (joinOp == null ? JoinOperator.INNER_JOIN : joinOp);
@@ -240,6 +265,14 @@ public class TableRef implements ParseNode, Writable {
         return name;
     }
 
+    public List<Long> getSampleTabletIds() {
+        return sampleTabletIds;
+    }
+
+    public TableSample getTableSample() {
+        return tableSample;
+    }
+
     /**
      * This method should only be called after the TableRef has been analyzed.
      */
@@ -305,14 +338,14 @@ public class TableRef implements ParseNode, Writable {
         return desc.getTable();
     }
 
-    public void setUsingClause(List<String> colNames) {
-        this.usingColNames = colNames;
-    }
-
     public List<String> getUsingClause() {
         return this.usingColNames;
     }
 
+    public void setUsingClause(List<String> colNames) {
+        this.usingColNames = colNames;
+    }
+
     public TableRef getLeftTblRef() {
         return leftTblRef;
     }
@@ -325,14 +358,14 @@ public class TableRef implements ParseNode, Writable {
         return joinHints;
     }
 
-    public boolean hasJoinHints() {
-        return CollectionUtils.isNotEmpty(joinHints);
-    }
-
     public void setJoinHints(ArrayList<String> hints) {
         this.joinHints = hints;
     }
 
+    public boolean hasJoinHints() {
+        return CollectionUtils.isNotEmpty(joinHints);
+    }
+
     public boolean isBroadcastJoin() {
         return isBroadcastJoin;
     }
@@ -353,14 +386,14 @@ public class TableRef implements ParseNode, Writable {
         return sortColumn;
     }
 
-    public void setLateralViewRefs(ArrayList<LateralViewRef> lateralViewRefs) {
-        this.lateralViewRefs = lateralViewRefs;
-    }
-
     public ArrayList<LateralViewRef> getLateralViewRefs() {
         return lateralViewRefs;
     }
 
+    public void setLateralViewRefs(ArrayList<LateralViewRef> lateralViewRefs) {
+        this.lateralViewRefs = lateralViewRefs;
+    }
+
     protected void analyzeLateralViewRef(Analyzer analyzer) throws 
UserException {
         if (lateralViewRefs == null) {
             return;
@@ -380,6 +413,13 @@ public class TableRef implements ParseNode, Writable {
         }
     }
 
+    protected void analyzeSample() throws AnalysisException {
+        if ((sampleTabletIds != null || tableSample != null) && 
desc.getTable().getType() != TableIf.TableType.OLAP) {
+            throw new AnalysisException("Sample table " + 
desc.getTable().getName()
+                + " type " + desc.getTable().getType() + " is not OLAP");
+        }
+    }
+
     /**
      * Parse hints.
      */
@@ -645,27 +685,6 @@ public class TableRef implements ParseNode, Writable {
         return tableRefToSql();
     }
 
-    @Override
-    public String toSql() {
-        if (joinOp == null) {
-            // prepend "," if we're part of a sequence of table refs w/o an
-            // explicit JOIN clause
-            return (leftTblRef != null ? ", " : "") + tableRefToSql();
-        }
-
-        StringBuilder output = new StringBuilder(" " + joinOpToSql() + " ");
-        if (joinHints != null && !joinHints.isEmpty()) {
-            output.append("[").append(Joiner.on(", 
").join(joinHints)).append("] ");
-        }
-        output.append(tableRefToSql()).append(" ");
-        if (usingColNames != null) {
-            output.append("USING (").append(Joiner.on(", 
").join(usingColNames)).append(")");
-        } else if (onClause != null) {
-            output.append("ON ").append(onClause.toSql());
-        }
-        return output.toString();
-    }
-
     public String toDigest() {
         if (joinOp == null) {
             // prepend "," if we're part of a sequence of table refs w/o an
diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/analysis/TableSample.java 
b/fe/fe-core/src/main/java/org/apache/doris/analysis/TableSample.java
new file mode 100644
index 0000000000..cce7fe0328
--- /dev/null
+++ b/fe/fe-core/src/main/java/org/apache/doris/analysis/TableSample.java
@@ -0,0 +1,100 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.analysis;
+
+import org.apache.doris.common.AnalysisException;
+
+/*
+ * To represent following stmt:
+ *      TABLESAMPLE (10 PERCENT)
+ *      TABLESAMPLE (100 ROWS)
+ *      TABLESAMPLE (10 PERCENT) REPEATABLE 2
+ *      TABLESAMPLE (100 ROWS) REPEATABLE 2
+ *
+ * references:
+ *      
https://simplebiinsights.com/sql-server-tablesample-retrieving-random-data-from-sql-server/
+ *      https://sqlrambling.net/2018/01/24/tablesample-basic-examples/
+ */
+public class TableSample implements ParseNode {
+
+    private final Long sampleValue;
+    private final boolean isPercent;
+    private Long seek = -1L;
+
+    public TableSample(boolean isPercent, Long sampleValue) {
+        this.sampleValue = sampleValue;
+        this.isPercent = isPercent;
+    }
+
+    public TableSample(boolean isPercent, Long sampleValue, Long seek) {
+        this.sampleValue = sampleValue;
+        this.isPercent = isPercent;
+        this.seek = seek;
+    }
+
+    public TableSample(TableSample other) {
+        this.sampleValue = other.sampleValue;
+        this.isPercent = other.isPercent;
+        this.seek = other.seek;
+    }
+
+    public Long getSampleValue() {
+        return sampleValue;
+    }
+
+    public boolean isPercent() {
+        return isPercent;
+    }
+
+    public Long getSeek() {
+        return seek;
+    }
+
+    @Override
+    public void analyze(Analyzer analyzer) throws AnalysisException {
+        if (sampleValue <= 0 || (isPercent && sampleValue > 100)) {
+            throw new AnalysisException("table sample value must be greater than 0, and percent must not exceed 100.");
+        }
+    }
+
+    @Override
+    public String toSql() {
+        if (sampleValue == null) {
+            return "";
+        }
+        StringBuilder sb = new StringBuilder();
+        sb.append("TABLESAMPLE ( ");
+        sb.append(sampleValue);
+        if (isPercent) {
+            sb.append(" PERCENT ");
+        } else {
+            sb.append(" ROWS ");
+        }
+        sb.append(")");
+        if (seek != 0) {
+            sb.append(" REPEATABLE ");
+            sb.append(seek);
+        }
+        return sb.toString();
+    }
+
+    @Override
+    public String toString() {
+        return toSql();
+    }
+}
diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/analysis/TupleDescriptor.java 
b/fe/fe-core/src/main/java/org/apache/doris/analysis/TupleDescriptor.java
index 7dff51e8d9..55f7d141f5 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/analysis/TupleDescriptor.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/analysis/TupleDescriptor.java
@@ -120,14 +120,14 @@ public class TupleDescriptor {
         return null;
     }
 
-    public void setCardinality(long cardinality) {
-        this.cardinality = cardinality;
-    }
-
     public long getCardinality() {
         return cardinality;
     }
 
+    public void setCardinality(long cardinality) {
+        this.cardinality = cardinality;
+    }
+
     public ArrayList<SlotDescriptor> getMaterializedSlots() {
         ArrayList<SlotDescriptor> result = Lists.newArrayList();
         for (SlotDescriptor slot : slots) {
@@ -167,14 +167,14 @@ public class TupleDescriptor {
         return isMaterialized;
     }
 
-    public boolean isMaterialized() {
-        return isMaterialized;
-    }
-
     public void setIsMaterialized(boolean value) {
         isMaterialized = value;
     }
 
+    public boolean isMaterialized() {
+        return isMaterialized;
+    }
+
     public float getAvgSerializedSize() {
         return avgSerializedSize;
     }
diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/planner/OlapScanNode.java 
b/fe/fe-core/src/main/java/org/apache/doris/planner/OlapScanNode.java
index c9d2858727..a445e777db 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/planner/OlapScanNode.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/planner/OlapScanNode.java
@@ -28,6 +28,7 @@ import org.apache.doris.analysis.PartitionNames;
 import org.apache.doris.analysis.SlotDescriptor;
 import org.apache.doris.analysis.SlotRef;
 import org.apache.doris.analysis.SortInfo;
+import org.apache.doris.analysis.TableSample;
 import org.apache.doris.analysis.TupleDescriptor;
 import org.apache.doris.analysis.TupleId;
 import org.apache.doris.catalog.ColocateTableIndex;
@@ -160,6 +161,9 @@ public class OlapScanNode extends ScanNode {
     // List of tablets will be scanned by current olap_scan_node
     private ArrayList<Long> scanTabletIds = Lists.newArrayList();
 
+    private Set<Long> sampleTabletIds = Sets.newHashSet();
+    private TableSample tableSample;
+
     private HashSet<Long> scanBackendIds = new HashSet<>();
 
     private Map<Long, Integer> tabletId2BucketSeq = Maps.newHashMap();
@@ -191,6 +195,20 @@ public class OlapScanNode extends ScanNode {
         return canTurnOnPreAggr;
     }
 
+    public Set<Long> getSampleTabletIds() {
+        return sampleTabletIds;
+    }
+
+    public void setSampleTabletIds(List<Long> sampleTablets) {
+        if (sampleTablets != null) {
+            this.sampleTabletIds.addAll(sampleTablets);
+        }
+    }
+
+    public void setTableSample(TableSample tSample) {
+        this.tableSample = tSample;
+    }
+
     public void setCanTurnOnPreAggr(boolean canChangePreAggr) {
         this.canTurnOnPreAggr = canChangePreAggr;
     }
@@ -431,6 +449,7 @@ public class OlapScanNode extends ScanNode {
         computeColumnFilter();
         computePartitionInfo();
         computeTupleState(analyzer);
+        computeSampleTabletIds();
 
         /**
          * Compute InAccurate cardinality before mv selector and tablet 
pruning.
@@ -756,6 +775,76 @@ public class OlapScanNode extends ScanNode {
         LOG.debug("distribution prune cost: {} ms", 
(System.currentTimeMillis() - start));
     }
 
+    /**
+     * First, determine how many rows to sample from each partition, based on the number of partitions.
+     * Then determine the number of Tablets to select in each partition, based on the average number of rows per Tablet.
+     * If seek is not specified, that number of Tablets is selected pseudo-randomly from each partition.
+     * If seek is specified, Tablets are selected sequentially starting from the seek-th Tablet of the partition.
+     * Finally, the manually specified Tablet ids are added to the selected Tablets.
+     * sampleTabletNums = sampleRows / partitionNums / (partitionRows / partitionTabletNums)
+     */
+    public void computeSampleTabletIds() {
+        if (tableSample == null) {
+            return;
+        }
+        OlapTable olapTable = (OlapTable) desc.getTable();
+        long sampleRows; // The total number of sample rows
+        long hitRows = 1; // The total number of rows hit by the tablet
+        long totalRows = 0; // The total number of partition rows hit
+        long totalTablet = 0; // The total number of tablets in the hit 
partition
+        if (tableSample.isPercent()) {
+            sampleRows = (long) Math.max(olapTable.getRowCount() * 
(tableSample.getSampleValue() / 100.0), 1);
+        } else {
+            sampleRows = Math.max(tableSample.getSampleValue(), 1);
+        }
+
+        // calculate the number of tablets by each partition
+        long avgRowsPerPartition = sampleRows / 
Math.max(olapTable.getPartitions().size(), 1);
+
+        for (Partition p : olapTable.getPartitions()) {
+            List<Long> ids = p.getBaseIndex().getTabletIdsInOrder();
+
+            if (ids.isEmpty()) {
+                continue;
+            }
+
+            // Skip partitions whose row count is less than half of the rows expected to be sampled per partition.
+            // Sampling from fewer, larger partitions avoids an uneven distribution of the sampling results.
+            if (p.getBaseIndex().getRowCount() < (avgRowsPerPartition / 2)) {
+                continue;
+            }
+
+            long avgRowsPerTablet = Math.max(p.getBaseIndex().getRowCount() / 
ids.size(), 1);
+            long tabletCounts = Math.max(
+                    avgRowsPerPartition / avgRowsPerTablet + 
(avgRowsPerPartition % avgRowsPerTablet != 0 ? 1 : 0), 1);
+            tabletCounts = Math.min(tabletCounts, ids.size());
+
+            long seek = tableSample.getSeek() != -1
+                    ? tableSample.getSeek() : (long) (Math.random() * 
ids.size());
+            for (int i = 0; i < tabletCounts; i++) {
+                int seekTid = (int) ((i + seek) % ids.size());
+                sampleTabletIds.add(ids.get(seekTid));
+            }
+
+            hitRows += avgRowsPerTablet * tabletCounts;
+            totalRows += p.getBaseIndex().getRowCount();
+            totalTablet += ids.size();
+        }
+
+        // all hit, direct full
+        if (totalRows < sampleRows) {
+            // can't fill full sample rows
+            sampleTabletIds.clear();
+        } else if (sampleTabletIds.size() == totalTablet) {
+            // TODO add limit
+            sampleTabletIds.clear();
+        } else if (!sampleTabletIds.isEmpty()) {
+            // TODO add limit
+        }
+    }
+
     private void computeTabletInfo() throws UserException {
         /**
          * The tablet info could be computed only once.
@@ -769,6 +858,10 @@ public class OlapScanNode extends ScanNode {
             final List<Tablet> tablets = Lists.newArrayList();
             final Collection<Long> tabletIds = 
distributionPrune(selectedTable, partition.getDistributionInfo());
             LOG.debug("distribution prune tablets: {}", tabletIds);
+            if (sampleTabletIds.size() != 0) {
+                tabletIds.retainAll(sampleTabletIds);
+                LOG.debug("after sample tablets: {}", tabletIds);
+            }
 
             List<Long> allTabletIds = selectedTable.getTabletIdsInOrder();
             if (tabletIds != null) {
diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/planner/SingleNodePlanner.java 
b/fe/fe-core/src/main/java/org/apache/doris/planner/SingleNodePlanner.java
index 4370d69f98..4035392c93 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/planner/SingleNodePlanner.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/planner/SingleNodePlanner.java
@@ -1895,6 +1895,8 @@ public class SingleNodePlanner {
                 OlapScanNode olapNode = new OlapScanNode(ctx.getNextNodeId(), 
tblRef.getDesc(),
                         "OlapScanNode");
                 olapNode.setForceOpenPreAgg(tblRef.isForcePreAggOpened());
+                olapNode.setSampleTabletIds(tblRef.getSampleTabletIds());
+                olapNode.setTableSample(tblRef.getTableSample());
                 scanNode = olapNode;
                 break;
             case ODBC:
diff --git a/fe/fe-core/src/main/jflex/sql_scanner.flex 
b/fe/fe-core/src/main/jflex/sql_scanner.flex
index 3360978d2a..5b7df473e7 100644
--- a/fe/fe-core/src/main/jflex/sql_scanner.flex
+++ b/fe/fe-core/src/main/jflex/sql_scanner.flex
@@ -336,6 +336,7 @@ import org.apache.doris.qe.SqlModeHelper;
         keywordMap.put("plugins", new Integer(SqlParserSymbols.KW_PLUGINS));
         keywordMap.put("policy", new Integer(SqlParserSymbols.KW_POLICY));
         keywordMap.put("preceding", new 
Integer(SqlParserSymbols.KW_PRECEDING));
+        keywordMap.put("percent", new Integer(SqlParserSymbols.KW_PERCENT));
         keywordMap.put("proc", new Integer(SqlParserSymbols.KW_PROC));
         keywordMap.put("procedure", new 
Integer(SqlParserSymbols.KW_PROCEDURE));
         keywordMap.put("processlist", new 
Integer(SqlParserSymbols.KW_PROCESSLIST));
@@ -412,6 +413,7 @@ import org.apache.doris.qe.SqlModeHelper;
         keywordMap.put("system", new Integer(SqlParserSymbols.KW_SYSTEM));
         keywordMap.put("table", new Integer(SqlParserSymbols.KW_TABLE));
         keywordMap.put("tables", new Integer(SqlParserSymbols.KW_TABLES));
+        keywordMap.put("tablesample", new 
Integer(SqlParserSymbols.KW_TABLESAMPLE));
         keywordMap.put("tablet", new Integer(SqlParserSymbols.KW_TABLET));
         keywordMap.put("tablets", new Integer(SqlParserSymbols.KW_TABLETS));
         keywordMap.put("task", new Integer(SqlParserSymbols.KW_TASK));
diff --git 
a/fe/fe-core/src/test/java/org/apache/doris/analysis/SelectStmtTest.java 
b/fe/fe-core/src/test/java/org/apache/doris/analysis/SelectStmtTest.java
index 4945a59690..ecb81927ab 100755
--- a/fe/fe-core/src/test/java/org/apache/doris/analysis/SelectStmtTest.java
+++ b/fe/fe-core/src/test/java/org/apache/doris/analysis/SelectStmtTest.java
@@ -17,9 +17,17 @@
 
 package org.apache.doris.analysis;
 
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.Env;
+import org.apache.doris.catalog.MaterializedIndex;
+import org.apache.doris.catalog.MaterializedIndex.IndexExtState;
+import org.apache.doris.catalog.OlapTable;
+import org.apache.doris.catalog.Partition;
+import org.apache.doris.catalog.Tablet;
 import org.apache.doris.common.AnalysisException;
 import org.apache.doris.common.Config;
 import org.apache.doris.common.util.Util;
+import org.apache.doris.planner.OlapScanNode;
 import org.apache.doris.planner.OriginalPlanner;
 import org.apache.doris.qe.ConnectContext;
 import org.apache.doris.qe.VariableMgr;
@@ -814,4 +822,119 @@ public class SelectStmtTest {
         FunctionCallExpr expr = (FunctionCallExpr) 
stmt1.getSelectList().getItems().get(1).getExpr();
         Assert.assertTrue(expr.getFnParams().isDistinct());
     }
+
+    @Test
+    public void testSelectTablet() throws Exception {
+        String sql1 = "SELECT * FROM db1.table1 TABLET(10031,10032,10033)";
+        OriginalPlanner planner = (OriginalPlanner) 
dorisAssert.query(sql1).internalExecuteOneAndGetPlan();
+        Set<Long> sampleTabletIds = ((OlapScanNode) 
planner.getScanNodes().get(0)).getSampleTabletIds();
+        Assert.assertTrue(sampleTabletIds.contains(10031L));
+        Assert.assertTrue(sampleTabletIds.contains(10032L));
+        Assert.assertTrue(sampleTabletIds.contains(10033L));
+    }
+
+    @Test
+    public void testSelectSampleTable() throws Exception {
+        Database db = 
Env.getCurrentInternalCatalog().getDbOrMetaException("default_cluster:db1");
+        OlapTable tbl = (OlapTable) db.getTableOrMetaException("table1");
+        long tabletId = 10031L;
+        for (Partition partition : tbl.getPartitions()) {
+            for (MaterializedIndex mIndex : 
partition.getMaterializedIndices(IndexExtState.VISIBLE)) {
+                mIndex.setRowCount(10000);
+                for (Tablet tablet : mIndex.getTablets()) {
+                    tablet.setTabletId(tabletId);
+                    tabletId += 1;
+                }
+            }
+        }
+
+        // 1. TABLESAMPLE ROWS
+        String sql1 = "SELECT * FROM db1.table1 TABLESAMPLE(10 ROWS)";
+        OriginalPlanner planner1 = (OriginalPlanner) 
dorisAssert.query(sql1).internalExecuteOneAndGetPlan();
+        Set<Long> sampleTabletIds1 = ((OlapScanNode) 
planner1.getScanNodes().get(0)).getSampleTabletIds();
+        Assert.assertEquals(1, sampleTabletIds1.size());
+
+        String sql2 = "SELECT * FROM db1.table1 TABLESAMPLE(1000 ROWS)";
+        OriginalPlanner planner2 = (OriginalPlanner) 
dorisAssert.query(sql2).internalExecuteOneAndGetPlan();
+        Set<Long> sampleTabletIds2 = ((OlapScanNode) 
planner2.getScanNodes().get(0)).getSampleTabletIds();
+        Assert.assertEquals(1, sampleTabletIds2.size());
+
+        String sql3 = "SELECT * FROM db1.table1 TABLESAMPLE(1001 ROWS)";
+        OriginalPlanner planner3 = (OriginalPlanner) 
dorisAssert.query(sql3).internalExecuteOneAndGetPlan();
+        Set<Long> sampleTabletIds3 = ((OlapScanNode) 
planner3.getScanNodes().get(0)).getSampleTabletIds();
+        Assert.assertEquals(2, sampleTabletIds3.size());
+
+        String sql4 = "SELECT * FROM db1.table1 TABLESAMPLE(9500 ROWS)";
+        OriginalPlanner planner4 = (OriginalPlanner) 
dorisAssert.query(sql4).internalExecuteOneAndGetPlan();
+        Set<Long> sampleTabletIds4 = ((OlapScanNode) 
planner4.getScanNodes().get(0)).getSampleTabletIds();
+        Assert.assertEquals(0, sampleTabletIds4.size()); // no sample, all 
tablet
+
+        String sql5 = "SELECT * FROM db1.table1 TABLESAMPLE(11000 ROWS)";
+        OriginalPlanner planner5 = (OriginalPlanner) 
dorisAssert.query(sql5).internalExecuteOneAndGetPlan();
+        Set<Long> sampleTabletIds5 = ((OlapScanNode) 
planner5.getScanNodes().get(0)).getSampleTabletIds();
+        Assert.assertEquals(0, sampleTabletIds5.size()); // no sample, all 
tablet
+
+        String sql6 = "SELECT * FROM db1.table1 TABLET(10033) TABLESAMPLE(900 
ROWS)";
+        OriginalPlanner planner6 = (OriginalPlanner) 
dorisAssert.query(sql6).internalExecuteOneAndGetPlan();
+        Set<Long> sampleTabletIds6 = ((OlapScanNode) 
planner6.getScanNodes().get(0)).getSampleTabletIds();
+        Assert.assertTrue(sampleTabletIds6.size() >= 1 && 
sampleTabletIds6.size() <= 2);
+        Assert.assertTrue(sampleTabletIds6.contains(10033L));
+
+        // 2. TABLESAMPLE PERCENT
+        String sql7 = "SELECT * FROM db1.table1 TABLESAMPLE(10 PERCENT)";
+        OriginalPlanner planner7 = (OriginalPlanner) 
dorisAssert.query(sql7).internalExecuteOneAndGetPlan();
+        Set<Long> sampleTabletIds7 = ((OlapScanNode) 
planner7.getScanNodes().get(0)).getSampleTabletIds();
+        Assert.assertEquals(1, sampleTabletIds7.size());
+
+        String sql8 = "SELECT * FROM db1.table1 TABLESAMPLE(15 PERCENT)";
+        OriginalPlanner planner8 = (OriginalPlanner) 
dorisAssert.query(sql8).internalExecuteOneAndGetPlan();
+        Set<Long> sampleTabletIds8 = ((OlapScanNode) 
planner8.getScanNodes().get(0)).getSampleTabletIds();
+        Assert.assertEquals(2, sampleTabletIds8.size());
+
+        String sql9 = "SELECT * FROM db1.table1 TABLESAMPLE(100 PERCENT)";
+        OriginalPlanner planner9 = (OriginalPlanner) 
dorisAssert.query(sql9).internalExecuteOneAndGetPlan();
+        Set<Long> sampleTabletIds9 = ((OlapScanNode) 
planner9.getScanNodes().get(0)).getSampleTabletIds();
+        Assert.assertEquals(0, sampleTabletIds9.size());
+
+        String sql10 = "SELECT * FROM db1.table1 TABLESAMPLE(110 PERCENT)";
+        OriginalPlanner planner10 = (OriginalPlanner) 
dorisAssert.query(sql10).internalExecuteOneAndGetPlan();
+        Set<Long> sampleTabletIds10 = ((OlapScanNode) 
planner10.getScanNodes().get(0)).getSampleTabletIds();
+        Assert.assertEquals(0, sampleTabletIds10.size());
+
+        String sql11 = "SELECT * FROM db1.table1 TABLET(10033) TABLESAMPLE(5 
PERCENT)";
+        OriginalPlanner planner11 = (OriginalPlanner) 
dorisAssert.query(sql11).internalExecuteOneAndGetPlan();
+        Set<Long> sampleTabletIds11 = ((OlapScanNode) 
planner11.getScanNodes().get(0)).getSampleTabletIds();
+        Assert.assertTrue(sampleTabletIds11.size() >= 1 && 
sampleTabletIds11.size() <= 2);
+        Assert.assertTrue(sampleTabletIds11.contains(10033L));
+
+        // 3. TABLESAMPLE REPEATABLE
+        String sql12 = "SELECT * FROM db1.table1 TABLESAMPLE(900 ROWS)";
+        OriginalPlanner planner12 = (OriginalPlanner) 
dorisAssert.query(sql12).internalExecuteOneAndGetPlan();
+        Set<Long> sampleTabletIds12 = ((OlapScanNode) 
planner12.getScanNodes().get(0)).getSampleTabletIds();
+        Assert.assertEquals(1, sampleTabletIds12.size());
+
+        String sql13 = "SELECT * FROM db1.table1 TABLESAMPLE(900 ROWS) 
REPEATABLE 2";
+        OriginalPlanner planner13 = (OriginalPlanner) 
dorisAssert.query(sql13).internalExecuteOneAndGetPlan();
+        Set<Long> sampleTabletIds13 = ((OlapScanNode) 
planner13.getScanNodes().get(0)).getSampleTabletIds();
+        Assert.assertEquals(1, sampleTabletIds13.size());
+        Assert.assertTrue(sampleTabletIds13.contains(10033L));
+
+        String sql14 = "SELECT * FROM db1.table1 TABLESAMPLE(900 ROWS) 
REPEATABLE 10";
+        OriginalPlanner planner14 = (OriginalPlanner) 
dorisAssert.query(sql14).internalExecuteOneAndGetPlan();
+        Set<Long> sampleTabletIds14 = ((OlapScanNode) 
planner14.getScanNodes().get(0)).getSampleTabletIds();
+        Assert.assertEquals(1, sampleTabletIds14.size());
+        Assert.assertTrue(sampleTabletIds14.contains(10031L));
+
+        String sql15 = "SELECT * FROM db1.table1 TABLESAMPLE(900 ROWS) 
REPEATABLE 0";
+        OriginalPlanner planner15 = (OriginalPlanner) 
dorisAssert.query(sql15).internalExecuteOneAndGetPlan();
+        Set<Long> sampleTabletIds15 = ((OlapScanNode) 
planner15.getScanNodes().get(0)).getSampleTabletIds();
+        Assert.assertEquals(1, sampleTabletIds15.size());
+        Assert.assertTrue(sampleTabletIds15.contains(10031L));
+
+        // 4. select returns 900 rows of results
+        String sql16 = "SELECT * FROM (SELECT * FROM db1.table1 
TABLESAMPLE(900 ROWS) REPEATABLE 9999999 limit 900) t";
+        OriginalPlanner planner16 = (OriginalPlanner) 
dorisAssert.query(sql16).internalExecuteOneAndGetPlan();
+        Set<Long> sampleTabletIds16 = ((OlapScanNode) 
planner16.getScanNodes().get(0)).getSampleTabletIds();
+        Assert.assertEquals(1, sampleTabletIds16.size());
+    }
 }
diff --git 
a/fe/fe-core/src/test/java/org/apache/doris/nereids/rules/mv/SelectRollupTest.java
 
b/fe/fe-core/src/test/java/org/apache/doris/nereids/rules/mv/SelectRollupTest.java
index d4d996aa2f..256aec0b66 100644
--- 
a/fe/fe-core/src/test/java/org/apache/doris/nereids/rules/mv/SelectRollupTest.java
+++ 
b/fe/fe-core/src/test/java/org/apache/doris/nereids/rules/mv/SelectRollupTest.java
@@ -54,6 +54,8 @@ class SelectRollupTest extends TestWithFeService implements 
PatternMatchSupporte
                 + "\"disable_auto_compaction\" = \"false\"\n"
                 + ");");
         addRollup("alter table t add rollup r1(k2, v1)");
+        // wait for the table state to become NORMAL
+        Thread.sleep(500);
         addRollup("alter table t add rollup r2(k2, k3, v1)");
 
         createTable("CREATE TABLE `duplicate_tbl` (\n"

