(doris) branch master updated (d1d87295688 -> 91439bfeb75)
This is an automated email from the ASF dual-hosted git repository. zhangchen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from d1d87295688 [Chore](GA)Use github's codeowner to implement maintainer review (#36852) add 91439bfeb75 [Fix](delete command) Mark delete sign when do delete command in MoW table (#35917) No new revisions were added by this update. Summary of changes: .../main/java/org/apache/doris/common/Config.java | 9 + .../apache/doris/alter/SchemaChangeHandler.java| 9 + .../java/org/apache/doris/analysis/DeleteStmt.java | 3 +- .../analysis/ModifyTablePropertiesClause.java | 33 ++- .../main/java/org/apache/doris/catalog/Env.java| 7 + .../java/org/apache/doris/catalog/OlapTable.java | 8 + .../org/apache/doris/catalog/TableProperty.java| 10 + .../apache/doris/common/util/PropertyAnalyzer.java | 24 ++ .../apache/doris/datasource/InternalCatalog.java | 14 ++ .../trees/plans/commands/DeleteFromCommand.java| 7 + .../plans/commands/DeleteFromUsingCommand.java | 10 +- .../analysis/CreateTableAsSelectStmtTest.java | 3 +- .../data/compaction/test_full_compaction.out | 2 + .../test_full_compaction_by_table_id.out | 2 + .../test_delete_generated_column.out | 4 +- .../data/delete_p0/test_delete_on_value.out| 14 +- .../delete/delete_mow_partial_update.out | 12 +- .../nereids_p0/explain/test_pushdown_explain.out | 4 +- .../data/query_p0/system/test_table_options.out| 12 +- .../test_new_partial_update_delete.out | 129 +++ .../test_partial_update_after_delete.out} | 10 +- .../partial_update/test_partial_update_delete.out | 12 +- .../suites/show_p0/test_show_delete.groovy | 2 +- .../test_new_partial_update_delete.groovy | 252 + .../test_partial_update_after_delete.groovy| 79 +++ 25 files changed, 626 insertions(+), 45 deletions(-) create mode 100644 regression-test/data/unique_with_mow_p0/partial_update/test_new_partial_update_delete.out copy regression-test/data/{correctness_p0/test_select_decimal.out => unique_with_mow_p0/partial_update/test_partial_update_after_delete.out} (73%) create mode 100644 regression-test/suites/unique_with_mow_p0/partial_update/test_new_partial_update_delete.groovy create mode 100644 regression-test/suites/unique_with_mow_p0/partial_update/test_partial_update_after_delete.groovy - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated (91439bfeb75 -> 2021aedf7be)
This is an automated email from the ASF dual-hosted git repository. zhangstar333 pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from 91439bfeb75 [Fix](delete command) Mark delete sign when do delete command in MoW table (#35917) add 2021aedf7be [case](udf) Only one backend, skip scp udf file (#36810) No new revisions were added by this update. Summary of changes: .../groovy/org/apache/doris/regression/suite/Suite.groovy | 15 +++ 1 file changed, 11 insertions(+), 4 deletions(-) - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris-website) branch master updated: [doc] Add the release note of version 2.1.4 in 2.1 & dev version (#790)
This is an automated email from the ASF dual-hosted git repository. luzhijing pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris-website.git The following commit(s) were added to refs/heads/master by this push: new bcc34252436 [doc] Add the release note of version 2.1.4 in 2.1 & dev version (#790) bcc34252436 is described below commit bcc342524366418af8f5352e1dc78afec1f3541d Author: lishiqi_amy AuthorDate: Wed Jun 26 15:19:58 2024 +0800 [doc] Add the release note of version 2.1.4 in 2.1 & dev version (#790) --- docs/releasenotes/release-2.1.4.md | 281 + .../current/releasenotes/release-2.1.4.md | 278 .../version-2.1/releasenotes/release-2.1.4.md | 278 sidebars.json | 1 + .../version-2.1/releasenotes/release-2.1.4.md | 281 + versioned_sidebars/version-2.1-sidebars.json | 1 + 6 files changed, 1120 insertions(+) diff --git a/docs/releasenotes/release-2.1.4.md b/docs/releasenotes/release-2.1.4.md new file mode 100644 index 000..bff23d629b5 --- /dev/null +++ b/docs/releasenotes/release-2.1.4.md @@ -0,0 +1,281 @@ +--- +{ +"title": "Release 2.1.4", +"language": "en" +} +--- + + + +Apache Doris version 2.1.4 was officially released on June 26, 2024. In this update, we have optimized various functional experiences for data lakehouse scenarios, with a focus on resolving the abnormal memory usage issue in the previous version. Additionally, we have implemented several improvements and bug fixes to enhance the stability. Welcome to download and use it. + + +**Quick Download:** https://doris.apache.org/download/ + +**GitHub Release:** https://github.com/apache/doris/releases + + +## Behavior Changed + +- Non-existent files will be ignored when querying external tables such as Hive. [#35319](https://github.com/apache/doris/pull/35319) + + The file list is obtained from the meta cache, and it may not be consistent with the actual file list. + + Ignoring non-existent files helps to avoid query errors. + +- By default, creating a Bitmap Index will no longer be automatically changed to an Inverted Index. [#35521](https://github.com/apache/doris/pull/35521) + + This behavior is controlled by the FE configuration item `enable_create_bitmap_index_as_inverted_index`, which defaults to false. + +- When starting FE and BE processes using `--console`, all logs will be output to the standard output and differentiated by prefixes indicating the log type. [#35679](https://github.com/apache/doris/pull/35679) + + For more information, please see the documentation: + + - [Log Management - FE Log](https://doris.apache.org/docs/admin-manual/log-management/fe-log) + + - [Log Management - BE Log](https://doris.apache.org/docs/admin-manual/log-management/be-log) + +- If no table comment is provided when creating a table, the default comment will be empty instead of using the table type as the default comment. [#36025](https://github.com/apache/doris/pull/36025) + +- The default precision of DECIMALV3 has been adjusted from (9, 0) to (38, 9) to maintain compatibility with the version in which this feature was initially released. [#36316](https://github.com/apache/doris/pull/36316) + +## New Features + +### 01 Query Optimizer + +- Support FE flame graph tool + + For more information, see the [documentation](https://doris.apache.org/community/developer-guide/fe-profiler) + +- Support `SELECT DISTINCT` to be used with aggregation. + +- Support single table query rewrite without `GROUP BY`. This is useful for complex filters or expressions.
[#35242](https://github.com/apache/doris/pull/35242). + +- The new optimizer fully supports point query functionality [#36205](https://github.com/apache/doris/pull/36205). + +### 02 Lakehouse + +- Support native reader of Apache Paimon deletion vector [#35241](https://github.com/apache/doris/pull/35241) + +- Support using Resource in Table Valued Functions [#35139](https://github.com/apache/doris/pull/35139) + +- Access controller with Hive Ranger plugin supports Data Mask + +### 03 Asynchronous Materialized Views + +- Support partition roll-up during construction. [#31812](https://github.com/apache/doris/pull/31812) + +- Support triggered updates during construction. [#34548](https://github.com/apache/doris/pull/34548) + +- Support specifying the `store_row_column` and `storage_medium` attribute during construction. [#35860](https://github.com/apache/doris/pull/35860) + +- Transparent rewrite supports single table asynchronous materialized views. [#34646](https://github.com/apache/doris/pull/34646) + +- Transparent rewrite supports `AGG_STATE` type aggregation roll-up. [#35026](https://github.com/apache/doris/pull/35026) + +### 04 Others + +- Added function `replace_empty`. + + For more inform
Error while running notifications feature from refs/heads/master:.asf.yaml in doris-website!
An error occurred while running notifications feature in .asf.yaml!: Invalid notification target 'comm...@foo.apache.org'. Must be a valid @doris.apache.org list! - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated (2021aedf7be -> c46816a2bba)
This is an automated email from the ASF dual-hosted git repository. wangbo pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from 2021aedf7be [case](udf) Only one backend, skip scp udf file (#36810) add c46816a2bba [test] fix workload policy test failed (#36837) No new revisions were added by this update. Summary of changes: .../test_workload_sched_policy.groovy | 31 ++ 1 file changed, 20 insertions(+), 11 deletions(-) - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated (c46816a2bba -> f0b7422c239)
This is an automated email from the ASF dual-hosted git repository. lihaopeng pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from c46816a2bba [test] fix workload policy test failed (#36837) add f0b7422c239 [fix](array)fix array with empty arg in be behavior (#36845) No new revisions were added by this update. Summary of changes: be/src/vec/functions/array/function_array_constructor.cpp | 10 +++--- .../data/nereids_function_p0/scalar_function/Array.out | 6 ++ .../suites/nereids_function_p0/scalar_function/Array.groovy| 5 + 3 files changed, 18 insertions(+), 3 deletions(-) - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
svn commit: r69962 - /dev/doris/2.1.4-rc03/ /release/doris/2.1.4-rc03/
Author: morningman Date: Wed Jun 26 07:40:30 2024 New Revision: 69962 Log: move doris 2.1.4 rc03 to release Added: release/doris/2.1.4-rc03/ - copied from r69961, dev/doris/2.1.4-rc03/ Removed: dev/doris/2.1.4-rc03/ - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
svn commit: r69963 - in /release/doris: 2.1.4-rc03/ 2.1/2.1.4-rc03/
Author: morningman Date: Wed Jun 26 07:42:13 2024 New Revision: 69963 Log: move doris 2.1.4 rc03 to release Added: release/doris/2.1/2.1.4-rc03/ - copied from r69962, release/doris/2.1.4-rc03/ Removed: release/doris/2.1.4-rc03/ - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
svn commit: r69964 - in /release/doris/2.1: 2.1.4-rc03/ 2.1.4/
Author: morningman Date: Wed Jun 26 07:47:52 2024 New Revision: 69964 Log: move doris 2.1.4 rc03 to release Added: release/doris/2.1/2.1.4/ - copied from r69963, release/doris/2.1/2.1.4-rc03/ Removed: release/doris/2.1/2.1.4-rc03/ - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated (f0b7422c239 -> 6d9d2dd242c)
This is an automated email from the ASF dual-hosted git repository. morrysnow pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from f0b7422c239 [fix](array)fix array with empty arg in be behavior (#36845) add 6d9d2dd242c [test](mtmv)Add group by aggregate negative case (#36562) No new revisions were added by this update. Summary of changes: .../dimension_join_agg_negative.groovy | 480 + 1 file changed, 480 insertions(+) create mode 100644 regression-test/suites/nereids_rules_p0/mv/dimension_2_join_agg/dimension_join_agg_negative.groovy - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris-kafka-connector) branch master updated: [feature]Implement DorisAvroConverter to support parsing schema from avro avsc file path (#32)
This is an automated email from the ASF dual-hosted git repository. diwu pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris-kafka-connector.git The following commit(s) were added to refs/heads/master by this push: new e43e59f [feature]Implement DorisAvroConverter to support parsing schema from avro avsc file path (#32) e43e59f is described below commit e43e59f5d42eda6a3844a9b8825a9971bcefe4ad Author: wudongliang <46414265+donglian...@users.noreply.github.com> AuthorDate: Wed Jun 26 16:49:26 2024 +0800 [feature]Implement DorisAvroConverter to support parsing schema from avro avsc file path (#32) --- .github/workflows/license-eyes.yml | 4 + .../workflows/license-eyes.yml => .licenserc.yaml | 31 ++-- .../kafka/connector/decode/DorisConverter.java | 40 + .../kafka/connector/decode/DorisJsonSchema.java| 90 ++ .../connector/decode/avro/DorisAvroConverter.java | 194 + .../connector/exception/DataDecodeException.java | 35 .../decode/avro/DorisAvroConverterTest.java| 96 ++ src/test/resources/decode/avro/product.avsc| 18 ++ src/test/resources/decode/avro/user.avsc | 18 ++ 9 files changed, 507 insertions(+), 19 deletions(-) diff --git a/.github/workflows/license-eyes.yml b/.github/workflows/license-eyes.yml index 02d108a..ba0c303 100644 --- a/.github/workflows/license-eyes.yml +++ b/.github/workflows/license-eyes.yml @@ -23,6 +23,7 @@ on: push: branches: - master + jobs: license-check: name: "License Check" @@ -30,7 +31,10 @@ jobs: steps: - name: "Checkout ${{ github.ref }} ( ${{ github.sha }} )" uses: actions/checkout@v2 + - name: Check License uses: apache/skywalking-eyes@v0.2.0 +with: + config-path: ./.licenserc.yaml env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/license-eyes.yml b/.licenserc.yaml similarity index 66% copy from .github/workflows/license-eyes.yml copy to .licenserc.yaml index 02d108a..ca50638 100644 --- a/.github/workflows/license-eyes.yml +++ b/.licenserc.yaml @@ -15,22 +15,15 @@ # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. -# -name: License Check -on: - pull_request: - push: -branches: - - master -jobs: - license-check: -name: "License Check" -runs-on: ubuntu-latest -steps: - - name: "Checkout ${{ github.ref }} ( ${{ github.sha }} )" -uses: actions/checkout@v2 - - name: Check License -uses: apache/skywalking-eyes@v0.2.0 -env: - GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} + +header: + license: +spdx-id: Apache-2.0 +copyright-owner: Apache Software Foundation + + paths-ignore: +- 'LICENSE' +- '.gitignore' +- 'src/test/resources/decode/avro/**' + + comment: on-failure \ No newline at end of file diff --git a/src/main/java/org/apache/doris/kafka/connector/decode/DorisConverter.java b/src/main/java/org/apache/doris/kafka/connector/decode/DorisConverter.java new file mode 100644 index 000..6b0e960 --- /dev/null +++ b/src/main/java/org/apache/doris/kafka/connector/decode/DorisConverter.java @@ -0,0 +1,40 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.doris.kafka.connector.decode; + +import java.util.Map; +import org.apache.doris.kafka.connector.exception.DataDecodeException; +import org.apache.kafka.connect.data.Schema; +import org.apache.kafka.connect.storage.Converter; + +public abstract class DorisConverter implements Converter { + +/** unused */ +@Override +public void configure(final Map map, final boolean b) { +// not necessary +} + +/** doesn't support data source connector */ +@Override +public byte[] fromConnectData(String topic, Schema schema, Object value) { +throw new DataDecodeException("DorisConverter doesn't support data source connector yet."); +} +} diff --git a/src/main/java/org/apache/doris/kafka/connector/decode/DorisJsonSchema.java b/src/main/java/org/ap
(doris) branch master updated: [opt](Nereids) Optimize findValidItems method to handle circular dependencies (#36839)
This is an automated email from the ASF dual-hosted git repository. xiejiann pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new 015f051f73a [opt](Nereids) Optimize findValidItems method to handle circular dependencies (#36839) 015f051f73a is described below commit 015f051f73a8fe6d0e50dc643b2fcd114838b465 Author: 谢健 AuthorDate: Wed Jun 26 16:51:39 2024 +0800 [opt](Nereids) Optimize findValidItems method to handle circular dependencies (#36839) ## Proposed changes These optimizations allow the findValidItems method to correctly handle circular dependencies while maintaining the required output slots. The code is now more efficient and ensures that the necessary edges and items are preserved during the traversal process. --- .../apache/doris/nereids/properties/FuncDeps.java | 25 +- .../nereids/rules/rewrite/EliminateGroupByKey.java | 2 +- .../doris/nereids/properties/FuncDepsTest.java | 13 +-- .../rules/rewrite/EliminateGroupByKeyTest.java | 21 -- 4 files changed, 43 insertions(+), 18 deletions(-) diff --git a/fe/fe-core/src/main/java/org/apache/doris/nereids/properties/FuncDeps.java b/fe/fe-core/src/main/java/org/apache/doris/nereids/properties/FuncDeps.java index 6c1b302d7dc..c17fd2eee57 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/nereids/properties/FuncDeps.java +++ b/fe/fe-core/src/main/java/org/apache/doris/nereids/properties/FuncDeps.java @@ -27,6 +27,7 @@ import java.util.HashSet; import java.util.Map; import java.util.Objects; import java.util.Set; +import java.util.stream.Collectors; /** * Function dependence items. @@ -96,11 +97,25 @@ public class FuncDeps { } } -// find item that not in a circle -private Set findValidItems() { +// Find items that are not part of a circular dependency. +// To keep the slots in requireOutputs, we need to always keep the edges that start with output slots. +// Note: We reduce the last edge in a circular dependency, +// so we need to traverse from parents that contain the required output slots. 
+private Set findValidItems(Set requireOutputs) { Set circleItem = new HashSet<>(); Set> visited = new HashSet<>(); -for (Set parent : edges.keySet()) { +Set> parentInOutput = edges.keySet().stream() +.filter(requireOutputs::containsAll) +.collect(Collectors.toSet()); +for (Set parent : parentInOutput) { +if (!visited.contains(parent)) { +dfs(parent, visited, circleItem); +} +} +Set> otherParent = edges.keySet().stream() +.filter(parent -> !parentInOutput.contains(parent)) +.collect(Collectors.toSet()); +for (Set parent : otherParent) { if (!visited.contains(parent)) { dfs(parent, visited, circleItem); } @@ -126,10 +141,10 @@ public class FuncDeps { * @param slots the initial set of slot sets to be reduced * @return the minimal set of slot sets after applying all possible reductions */ -public Set> eliminateDeps(Set> slots) { +public Set> eliminateDeps(Set> slots, Set requireOutputs) { Set> minSlotSet = Sets.newHashSet(slots); Set> eliminatedSlots = new HashSet<>(); -Set validItems = findValidItems(); +Set validItems = findValidItems(requireOutputs); for (FuncDepsItem funcDepsItem : validItems) { if (minSlotSet.contains(funcDepsItem.dependencies) && minSlotSet.contains(funcDepsItem.determinants)) { diff --git a/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/rewrite/EliminateGroupByKey.java b/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/rewrite/EliminateGroupByKey.java index 9e205f85809..fbe0988daff 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/rewrite/EliminateGroupByKey.java +++ b/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/rewrite/EliminateGroupByKey.java @@ -91,7 +91,7 @@ public class EliminateGroupByKey implements RewriteRuleFactory { return null; } -Set> minGroupBySlots = funcDeps.eliminateDeps(new HashSet<>(groupBySlots.values())); +Set> minGroupBySlots = funcDeps.eliminateDeps(new HashSet<>(groupBySlots.values()), requireOutput); Set removeExpression = new HashSet<>(); for (Entry> entry : groupBySlots.entrySet()) { if (!minGroupBySlots.contains(entry.getValue()) diff --git a/fe/fe-core/src/test/java/org/apache/doris/nereids/properties/FuncDepsTest.java b/fe/fe-core/src/test/java/org/apache/doris/nereids/properties/FuncDepsTest.java index 64df33acd60..6b17305ed7a 100644 --- a/fe/fe-core/src/test/java/org/apache/doris/nereids/properties/FuncD
(doris) branch master updated (015f051f73a -> 665bcc91a36)
This is an automated email from the ASF dual-hosted git repository. gavinchou pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from 015f051f73a [opt](Nereids) Optimize findValidItems method to handle circular dependencies (#36839) add 665bcc91a36 [enhance](Azure) Check delete operation's response on Azure Blob Storage (#36800) No new revisions were added by this update. Summary of changes: be/src/io/fs/azure_obj_storage_client.cpp | 110 ++-- cloud/src/recycler/azure_obj_client.cpp | 138 +- 2 files changed, 181 insertions(+), 67 deletions(-) - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris-spark-connector) branch master updated: [fix](compatible) Fix cast eror when select data from doris 2.0 (#209)
This is an automated email from the ASF dual-hosted git repository. diwu pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris-spark-connector.git The following commit(s) were added to refs/heads/master by this push: new 1df1c96 [fix](compatible) Fix cast eror when select data from doris 2.0 (#209) 1df1c96 is described below commit 1df1c96b516cf3c5a07b61d0375edfc9eeff6801 Author: Lijia Liu AuthorDate: Wed Jun 26 17:07:59 2024 +0800 [fix](compatible) Fix cast eror when select data from doris 2.0 (#209) --- .../src/main/scala/org/apache/doris/spark/sql/Utils.scala | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/spark-doris-connector/src/main/scala/org/apache/doris/spark/sql/Utils.scala b/spark-doris-connector/src/main/scala/org/apache/doris/spark/sql/Utils.scala index 8910389..7cffbe5 100644 --- a/spark-doris-connector/src/main/scala/org/apache/doris/spark/sql/Utils.scala +++ b/spark-doris-connector/src/main/scala/org/apache/doris/spark/sql/Utils.scala @@ -25,7 +25,7 @@ import org.apache.spark.sql.sources._ import org.slf4j.Logger import java.sql.{Date, Timestamp} -import java.time.Duration +import java.time.{Duration, LocalDate} import java.util.concurrent.locks.LockSupport import scala.annotation.tailrec import scala.reflect.ClassTag @@ -106,6 +106,7 @@ private[spark] object Utils { case stringValue: String => s"'${escapeSql(stringValue)}'" case timestampValue: Timestamp => "'" + timestampValue + "'" case dateValue: Date => "'" + dateValue + "'" +case dateValue: LocalDate => "'" + dateValue + "'" case arrayValue: Array[Any] => arrayValue.map(compileValue).mkString(", ") case _ => value } - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated: [fix](spill) fix memory orphan check failure of partitioned hash join (#36806)
This is an automated email from the ASF dual-hosted git repository. jacktengg pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new 10ea08fa856 [fix](spill) fix memory orphan check failure of partitioned hash join (#36806) 10ea08fa856 is described below commit 10ea08fa85624c2f6c4de62c33c2aa1e24c0c550 Author: TengJianPing <18241664+jackte...@users.noreply.github.com> AuthorDate: Wed Jun 26 17:08:57 2024 +0800 [fix](spill) fix memory orphan check failure of partitioned hash join (#36806) Also add fault injection regression test cases for spill. --- .../exec/partitioned_hash_join_sink_operator.cpp | 22 ++- .../spill/partitioned_agg_fault_injection.groovy | 149 ++ .../partitioned_hash_join_fault_injection.groovy | 216 + .../spill/spill_sort_fault_injection.groovy| 158 +++ 4 files changed, 535 insertions(+), 10 deletions(-) diff --git a/be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp b/be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp index 81bc2253657..cb104cfc7cd 100644 --- a/be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp +++ b/be/src/pipeline/exec/partitioned_hash_join_sink_operator.cpp @@ -126,13 +126,8 @@ Status PartitionedHashJoinSinkLocalState::_revoke_unpartitioned_block(RuntimeSta _shared_state->shared_from_this(); auto query_id = state->query_id(); auto mem_tracker = state->get_query_ctx()->query_mem_tracker; -auto spill_func = [build_blocks = std::move(build_blocks), state, num_slots, this]() mutable { -Defer defer {[&]() { -// need to reset build_block here, or else build_block will be destructed -// after SCOPED_ATTACH_TASK_WITH_ID and will trigger memory_orphan_check failure -build_blocks.clear(); -}}; - +auto spill_func = [state, num_slots, + this](std::vector& build_blocks) mutable { auto& p = _parent->cast(); auto& partitioned_blocks = _shared_state->partitioned_build_blocks; std::vector> partitions_indexes(p._partition_count); @@ -216,9 +211,16 @@ Status PartitionedHashJoinSinkLocalState::_revoke_unpartitioned_block(RuntimeSta _dependency->set_ready(); }; -auto exception_catch_func = [spill_func, shared_state_holder, execution_context, state, - query_id, mem_tracker, this]() mutable { +auto exception_catch_func = [build_blocks = std::move(build_blocks), spill_func, + shared_state_holder, execution_context, state, query_id, + mem_tracker, this]() mutable { SCOPED_ATTACH_TASK_WITH_ID(mem_tracker, query_id); +Defer defer {[&]() { +// need to reset build_block here, or else build_block will be destructed +// after SCOPED_ATTACH_TASK_WITH_ID and will trigger memory_orphan_check failure +build_blocks.clear(); +}}; + std::shared_ptr execution_context_lock; auto shared_state_sptr = shared_state_holder.lock(); if (shared_state_sptr) { @@ -230,7 +232,7 @@ Status PartitionedHashJoinSinkLocalState::_revoke_unpartitioned_block(RuntimeSta } auto status = [&]() { -RETURN_IF_CATCH_EXCEPTION(spill_func()); +RETURN_IF_CATCH_EXCEPTION(spill_func(build_blocks)); return Status::OK(); }(); diff --git a/regression-test/suites/fault_injection_p0/spill/partitioned_agg_fault_injection.groovy b/regression-test/suites/fault_injection_p0/spill/partitioned_agg_fault_injection.groovy new file mode 100644 index 000..0cefbba0657 --- /dev/null +++ b/regression-test/suites/fault_injection_p0/spill/partitioned_agg_fault_injection.groovy @@ -0,0 +1,149 @@ +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. 
See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. + +suite("partitioned_agg_fault_injection", "nonConcurrent") { +multi_sql """ +use regression_test_tpch_unique_sql_zstd_p0; +set enable_force_spill=true; +set min_revocable_mem=1024; +""" +de
Error while running notifications feature from refs/heads/master:.asf.yaml in doris-website!
An error occurred while running notifications feature in .asf.yaml!: Invalid notification target 'comm...@foo.apache.org'. Must be a valid @doris.apache.org list! - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris-website) branch master updated: [download](2.1.4) Update 2.1.4 release of download and blog (#789)
This is an automated email from the ASF dual-hosted git repository. luzhijing pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris-website.git The following commit(s) were added to refs/heads/master by this push: new ce211281063 [download](2.1.4) Update 2.1.4 release of download and blog (#789) ce211281063 is described below commit ce21128106399e5e46f7cb3652067c7fb540f6ba Author: KassieZ <139741991+kass...@users.noreply.github.com> AuthorDate: Wed Jun 26 16:20:55 2024 +0700 [download](2.1.4) Update 2.1.4 release of download and blog (#789) --- blog/apache-doris-vs-rockset.md| 2 +- ...olution-of-the-apache-doris-execution-engine.md | 2 +- blog/job-scheduler-for-task-automation.md | 2 +- blog/release-note-2.0.11.md| 2 - blog/release-note-2.1.4.md | 284 + src/components/recent-blogs/recent-blogs.data.ts | 8 +- src/constant/download.data.ts | 70 +++-- src/constant/newsletter.data.ts| 15 +- src/pages/download/index.tsx | 2 +- static/images/2.1.4.jpg| Bin 0 -> 415574 bytes 10 files changed, 352 insertions(+), 35 deletions(-) diff --git a/blog/apache-doris-vs-rockset.md b/blog/apache-doris-vs-rockset.md index d7c80c97ccc..07e8ded74ab 100644 --- a/blog/apache-doris-vs-rockset.md +++ b/blog/apache-doris-vs-rockset.md @@ -7,7 +7,7 @@ 'author': 'Zaki Lu', 'tags': ['Top News'], 'picked': "true", -'order': "1", +'order': "2", "image": '/images/doris-vs-rockset.jpeg' } diff --git a/blog/evolution-of-the-apache-doris-execution-engine.md b/blog/evolution-of-the-apache-doris-execution-engine.md index e9c00f4395d..a75d49fc9da 100644 --- a/blog/evolution-of-the-apache-doris-execution-engine.md +++ b/blog/evolution-of-the-apache-doris-execution-engine.md @@ -7,7 +7,7 @@ 'author': 'Apache Doris', 'tags': ['Tech Sharing'], 'picked': "true", -'order': "2", +'order': "3", "image": '/images/evolution-of-the-apache-doris-execution-engine.jpg' } diff --git a/blog/job-scheduler-for-task-automation.md b/blog/job-scheduler-for-task-automation.md index f4e9e67fbc2..9f234828071 100644 --- a/blog/job-scheduler-for-task-automation.md +++ b/blog/job-scheduler-for-task-automation.md @@ -7,7 +7,7 @@ 'author': 'Apache Doris', 'tags': ['Tech Sharing'], 'picked': "true", -'order': "3", +'order': "4", "image": '/images/job-scheduler-for-task-automation.jpg' } diff --git a/blog/release-note-2.0.11.md b/blog/release-note-2.0.11.md index 1de88be7e5e..1f103abac63 100644 --- a/blog/release-note-2.0.11.md +++ b/blog/release-note-2.0.11.md @@ -6,8 +6,6 @@ 'date': '2024-06-05', 'author': 'Apache Doris', 'tags': ['Release Notes'], -'picked': "true", -'order': "4", "image": '/images/2.0.11.jpg' } --- diff --git a/blog/release-note-2.1.4.md b/blog/release-note-2.1.4.md new file mode 100644 index 000..a924e1da7fd --- /dev/null +++ b/blog/release-note-2.1.4.md @@ -0,0 +1,284 @@ +--- +{ +'title': 'Apache Doris 2.1.4 just released', +'summary': 'In this update, we have optimized various functional experiences for data lakehouse, with a focus on resolving the abnormal memory usage issue in the previous version.', +'description': 'In this update, we have optimized various functional experiences for data lakehouse scenarios, with a focus on resolving the abnormal memory usage issue in the previous version.', +'date': '2024-06-26', +'author': 'Apache Doris', +'tags': ['Release Notes'], +'picked': "true", +'order': "1", +"image": '/images/2.1.4.jpg' +} +--- + + + +Dear community, Apache Doris version 2.1.4 was released on June 26, 2024. 
In this update, we have optimized various functional experiences for data lakehouse scenarios, with a focus on resolving the abnormal memory usage issue in the previous version. Additionally, we have implemented several improvements and bug fixes to enhance the stability. Welcome to download and use it. + +**Quick Download:** https://doris.apache.org/download/ + +**GitHub Release:** https://github.com/apache/doris/releases + +## Behavior Changed + +- Non-existent files will be ignored when querying external tables such as Hive. [#35319](https://github.com/apache/doris/pull/35319) + + The file list is obtained from the meta cache, and it may not be consistent with the actual file list. + + Ignoring non-existent files helps to avoid query errors. + +- By default, creating a Bitmap Index will no longer be automatically changed to an Inverted Index. [#35521](https://github.com/apache/doris/pull/35521) + + This behavior is controlled by the FE configuration item `enable_create_bitmap_index_as_inverted_index`, which defaults to false. + +- When starting FE and BE processes using `--console`, al
(doris) branch master updated: [refactor](inverted index) Refactor the idx storage format to avoid cyclic references in Thrift files. (#36757)
This is an automated email from the ASF dual-hosted git repository. airborne pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new e7b931b3cbf [refactor](inverted index) Refactor the idx storage format to avoid cyclic references in Thrift files. (#36757) e7b931b3cbf is described below commit e7b931b3cbfc55f566dc10cfb62d6ec6835edde2 Author: Sun Chenyang AuthorDate: Wed Jun 26 17:25:17 2024 +0800 [refactor](inverted index) Refactor the idx storage format to avoid cyclic references in Thrift files. (#36757) ## Proposed changes `TInvertedIndexStorageFormat` is defined in the `AgentService.thrift`. When other Thirft files use `TInvertedIndexStorageFormat`, need to `include AgentService.thrift`. However, `AgentService.thrift` might already include that file, causing cyclic references. I have defined `TInvetedIndexFileStorageFormat` in `Types.thrift` to represent the inverted index file format. `TInvertedIndexStorageFormat` will be deprecated. This will not impact the upgrade. --- be/src/olap/tablet_meta.cpp| 28 +- be/src/olap/tablet_meta.h | 4 ++-- .../apache/doris/alter/SchemaChangeHandler.java| 8 --- .../org/apache/doris/alter/SchemaChangeJobV2.java | 3 ++- .../java/org/apache/doris/backup/RestoreJob.java | 2 +- .../main/java/org/apache/doris/catalog/Env.java| 2 +- .../java/org/apache/doris/catalog/OlapTable.java | 14 +-- .../org/apache/doris/catalog/TableProperty.java| 16 ++--- .../apache/doris/common/util/PropertyAnalyzer.java | 28 +++--- .../apache/doris/datasource/InternalCatalog.java | 10 .../org/apache/doris/master/ReportHandler.java | 4 ++-- .../java/org/apache/doris/qe/ShowExecutor.java | 6 ++--- .../org/apache/doris/task/CreateReplicaTask.java | 22 + gensrc/thrift/AgentService.thrift | 4 +++- gensrc/thrift/Types.thrift | 9 +++ 15 files changed, 96 insertions(+), 64 deletions(-) diff --git a/be/src/olap/tablet_meta.cpp b/be/src/olap/tablet_meta.cpp index 26ed8d3ee57..cced41a86ee 100644 --- a/be/src/olap/tablet_meta.cpp +++ b/be/src/olap/tablet_meta.cpp @@ -62,6 +62,22 @@ TabletMetaSharedPtr TabletMeta::create( if (request.__isset.binlog_config) { binlog_config = request.binlog_config; } +TInvertedIndexFileStorageFormat::type inverted_index_file_storage_format = +request.inverted_index_file_storage_format; + +// We will discard this format. Don't make any further changes here. 
+if (request.__isset.inverted_index_storage_format) { +switch (request.inverted_index_storage_format) { +case TInvertedIndexStorageFormat::V1: +inverted_index_file_storage_format = TInvertedIndexFileStorageFormat::V1; +break; +case TInvertedIndexStorageFormat::V2: +inverted_index_file_storage_format = TInvertedIndexFileStorageFormat::V2; +break; +default: +break; +} +} return std::make_shared( request.table_id, request.partition_id, request.tablet_id, request.replica_id, request.tablet_schema.schema_hash, shard_id, request.tablet_schema, next_unique_id, @@ -76,7 +92,7 @@ TabletMetaSharedPtr TabletMeta::create( request.time_series_compaction_file_count_threshold, request.time_series_compaction_time_threshold_seconds, request.time_series_compaction_empty_rowsets_threshold, -request.time_series_compaction_level_threshold, request.inverted_index_storage_format); +request.time_series_compaction_level_threshold, inverted_index_file_storage_format); } TabletMeta::TabletMeta() @@ -97,7 +113,7 @@ TabletMeta::TabletMeta(int64_t table_id, int64_t partition_id, int64_t tablet_id int64_t time_series_compaction_time_threshold_seconds, int64_t time_series_compaction_empty_rowsets_threshold, int64_t time_series_compaction_level_threshold, - TInvertedIndexStorageFormat::type inverted_index_storage_format) + TInvertedIndexFileStorageFormat::type inverted_index_file_storage_format) : _tablet_uid(0, 0), _schema(new TabletSchema), _delete_bitmap(new DeleteBitmap(tablet_id)) { @@ -175,15 +191,15 @@ TabletMeta::TabletMeta(int64_t table_id, int64_t partition_id, int64_t tablet_id break; } -switch (inverted_index_storage_format) { -case TInvertedIndexStorageFormat::V1: +switch (inverted_index_file_storage_format) { +case TInvertedIndexFileStorageFormat::V1: schema->set_inverted_index_storage_format(Inve
(doris) branch master updated: [fix](pipeline) fix exception safety issue in MultiCastDataStreamer (#36748)
This is an automated email from the ASF dual-hosted git repository. mrhhsg pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new 9f15d93f7b9 [fix](pipeline) fix exception safety issue in MultiCastDataStreamer (#36748) 9f15d93f7b9 is described below commit 9f15d93f7b9b4adf298d484e265b2f1eebf9c955 Author: Jerry Hu AuthorDate: Wed Jun 26 17:33:20 2024 +0800 [fix](pipeline) fix exception safety issue in MultiCastDataStreamer (#36748) ## Proposed changes ```cpp RETURN_IF_ERROR(vectorized::MutableBlock(block).merge(*pos_to_pull->_block)) ``` this line may throw an exception(cannot allocate) --- be/src/pipeline/exec/multi_cast_data_streamer.cpp | 20 +--- be/src/pipeline/exec/multi_cast_data_streamer.h | 2 -- 2 files changed, 1 insertion(+), 21 deletions(-) diff --git a/be/src/pipeline/exec/multi_cast_data_streamer.cpp b/be/src/pipeline/exec/multi_cast_data_streamer.cpp index d3047c42a2d..deebf7d11bb 100644 --- a/be/src/pipeline/exec/multi_cast_data_streamer.cpp +++ b/be/src/pipeline/exec/multi_cast_data_streamer.cpp @@ -41,9 +41,9 @@ Status MultiCastDataStreamer::pull(int sender_idx, doris::vectorized::Block* blo pos_to_pull++; _multi_cast_blocks.pop_front(); } else { -pos_to_pull->_used_count--; pos_to_pull->_block->create_same_struct_block(0)->swap(*block); RETURN_IF_ERROR(vectorized::MutableBlock(block).merge(*pos_to_pull->_block)); +pos_to_pull->_used_count--; pos_to_pull++; } } @@ -54,24 +54,6 @@ Status MultiCastDataStreamer::pull(int sender_idx, doris::vectorized::Block* blo return Status::OK(); } -void MultiCastDataStreamer::close_sender(int sender_idx) { -std::lock_guard l(_mutex); -auto& pos_to_pull = _sender_pos_to_read[sender_idx]; -while (pos_to_pull != _multi_cast_blocks.end()) { -if (pos_to_pull->_used_count == 1) { -DCHECK(pos_to_pull == _multi_cast_blocks.begin()); -_cumulative_mem_size -= pos_to_pull->_mem_size; -pos_to_pull++; -_multi_cast_blocks.pop_front(); -} else { -pos_to_pull->_used_count--; -pos_to_pull++; -} -} -_closed_sender_count++; -_block_reading(sender_idx); -} - Status MultiCastDataStreamer::push(RuntimeState* state, doris::vectorized::Block* block, bool eos) { auto rows = block->rows(); COUNTER_UPDATE(_process_rows, rows); diff --git a/be/src/pipeline/exec/multi_cast_data_streamer.h b/be/src/pipeline/exec/multi_cast_data_streamer.h index 0a1276c4f1b..2112ebaaf20 100644 --- a/be/src/pipeline/exec/multi_cast_data_streamer.h +++ b/be/src/pipeline/exec/multi_cast_data_streamer.h @@ -52,8 +52,6 @@ public: Status pull(int sender_idx, vectorized::Block* block, bool* eos); -void close_sender(int sender_idx); - Status push(RuntimeState* state, vectorized::Block* block, bool eos); const RowDescriptor& row_desc() { return _row_desc; } - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
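Editor's note: the ordering rule behind this MultiCastDataStreamer fix (and its branch-2.0 pick below) is general. The following sketch is illustrative only — Java is used for brevity, and the class and field names are hypothetical, not the Doris BE code. The point it shows: shared bookkeeping such as a use count should only be mutated after every operation that can throw has succeeded; otherwise an exception leaves the shared state half-updated.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

// Hypothetical stand-in for a multi-cast block with a per-consumer use count.
class RefCountedBlock {
    int usedCount;        // how many consumers still need this block
    final int[] payload;

    RefCountedBlock(int consumers, int[] payload) {
        this.usedCount = consumers;
        this.payload = payload;
    }
}

class MultiCastQueue {
    private final Deque<RefCountedBlock> blocks = new ArrayDeque<>();

    void push(RefCountedBlock block) {
        blocks.addLast(block);
    }

    // Buggy ordering (mirrors the code before the patch): the shared counter is
    // decremented before an operation that may fail, so a thrown error leaves the
    // block looking "already consumed" by this reader.
    int[] pullBuggy() {
        RefCountedBlock b = blocks.peekFirst();
        b.usedCount--;                                      // mutated too early
        return Arrays.copyOf(b.payload, b.payload.length);  // may throw (e.g. OutOfMemoryError)
    }

    // Fixed ordering (mirrors the patch): do the work that can fail first, and only
    // touch the shared bookkeeping once it has succeeded.
    int[] pullSafe() {
        RefCountedBlock b = blocks.peekFirst();
        int[] copy = Arrays.copyOf(b.payload, b.payload.length);
        b.usedCount--;                                      // nothing after this can throw
        return copy;
    }

    public static void main(String[] args) {
        MultiCastQueue q = new MultiCastQueue();
        q.push(new RefCountedBlock(2, new int[] {1, 2, 3}));
        System.out.println(Arrays.toString(q.pullSafe())); // [1, 2, 3]
    }
}
```

In the actual patch the same effect is achieved by moving `pos_to_pull->_used_count--;` after the `merge()` call that may throw.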
(doris) branch branch-2.0 updated: [fix](pipeline) fix exception safety issue in MultiCastDataStreamer (#36813)
This is an automated email from the ASF dual-hosted git repository. mrhhsg pushed a commit to branch branch-2.0 in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/branch-2.0 by this push: new b4186c718a0 [fix](pipeline) fix exception safety issue in MultiCastDataStreamer (#36813) b4186c718a0 is described below commit b4186c718a0d9c675331f2ce578205e641383e2e Author: Jerry Hu AuthorDate: Wed Jun 26 17:34:33 2024 +0800 [fix](pipeline) fix exception safety issue in MultiCastDataStreamer (#36813) ## Proposed changes pick #36748 ```cpp RETURN_IF_ERROR(vectorized::MutableBlock(block).merge(*pos_to_pull->_block)) ``` this line may throw an exception(cannot allocate) --- be/src/pipeline/exec/multi_cast_data_streamer.cpp | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/be/src/pipeline/exec/multi_cast_data_streamer.cpp b/be/src/pipeline/exec/multi_cast_data_streamer.cpp index 3948ef0b9cd..373a896852e 100644 --- a/be/src/pipeline/exec/multi_cast_data_streamer.cpp +++ b/be/src/pipeline/exec/multi_cast_data_streamer.cpp @@ -39,9 +39,9 @@ Status MultiCastDataStreamer::pull(int sender_idx, doris::vectorized::Block* blo pos_to_pull++; _multi_cast_blocks.pop_front(); } else { -pos_to_pull->_used_count--; pos_to_pull->_block->create_same_struct_block(0)->swap(*block); RETURN_IF_ERROR(vectorized::MutableBlock(block).merge(*pos_to_pull->_block)); +pos_to_pull->_used_count--; pos_to_pull++; } } - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated (9f15d93f7b9 -> 0a8eb216ac2)
This is an automated email from the ASF dual-hosted git repository. lihaopeng pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from 9f15d93f7b9 [fix](pipeline) fix exception safety issue in MultiCastDataStreamer (#36748) add 0a8eb216ac2 [test]add check for query release when p0 finish (#36660) No new revisions were added by this update. Summary of changes: .../apache/doris/regression/RegressionTest.groovy | 17 - .../check_before_quit/check_before_quit.groovy | 74 ++ 2 files changed, 90 insertions(+), 1 deletion(-) create mode 100644 regression-test/suites/check_before_quit/check_before_quit.groovy - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch branch-2.1 updated: [fix](intersect) fix coredump caused by intersect of nullable and not nullable children #36401 (#36441)
This is an automated email from the ASF dual-hosted git repository. lihaopeng pushed a commit to branch branch-2.1 in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/branch-2.1 by this push: new 25fb30c723b [fix](intersect) fix coredump caused by intersect of nullable and not nullable children #36401 (#36441) 25fb30c723b is described below commit 25fb30c723b1ad4255a2262752d95b80db2d7667 Author: TengJianPing <18241664+jackte...@users.noreply.github.com> AuthorDate: Wed Jun 26 17:45:21 2024 +0800 [fix](intersect) fix coredump caused by intersect of nullable and not nullable children #36401 (#36441) ## Proposed changes Pick #36765 --- be/src/vec/exec/vset_operation_node.cpp| 1 + .../sql/intersect_nullable_not_nullable.out| 19 +++ .../sql/intersect_nullable_not_nullable.groovy | 171 + 3 files changed, 191 insertions(+) diff --git a/be/src/vec/exec/vset_operation_node.cpp b/be/src/vec/exec/vset_operation_node.cpp index 294ac58482a..afc3273f459 100644 --- a/be/src/vec/exec/vset_operation_node.cpp +++ b/be/src/vec/exec/vset_operation_node.cpp @@ -121,6 +121,7 @@ Status VSetOperationNode::open(RuntimeState* state) { RETURN_IF_ERROR(child(i)->open(state)); eos = false; +_probe_block.clear(); while (!eos) { release_block_memory(_probe_block, i); RETURN_IF_CANCELLED(state); diff --git a/regression-test/data/query_p0/set_operations/sql/intersect_nullable_not_nullable.out b/regression-test/data/query_p0/set_operations/sql/intersect_nullable_not_nullable.out new file mode 100644 index 000..8728992c789 --- /dev/null +++ b/regression-test/data/query_p0/set_operations/sql/intersect_nullable_not_nullable.out @@ -0,0 +1,19 @@ +-- This file is automatically generated. You should know what you did if you want to edit this +-- !intersect_nullable_not_nullable_1 -- +c + +-- !intersect_nullable_not_nullable_2 -- +c + +-- !intersect_nullable_not_nullable_3 -- +a + +-- !intersect_nullable_not_nullable_4 -- +a + +-- !intersect_nullable_not_nullable_5 -- +a + +-- !intersect_nullable_not_nullable_6 -- +a + diff --git a/regression-test/suites/query_p0/set_operations/sql/intersect_nullable_not_nullable.groovy b/regression-test/suites/query_p0/set_operations/sql/intersect_nullable_not_nullable.groovy new file mode 100644 index 000..dbec73f3c4e --- /dev/null +++ b/regression-test/suites/query_p0/set_operations/sql/intersect_nullable_not_nullable.groovy @@ -0,0 +1,171 @@ +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. 
+ +suite("intersect_nullable_not_nullable") { +sql """ +set experimental_enable_pipeline_x_engine = false; +""" +sql """ +set experimental_enable_pipeline_engine = false; +""" +sql """ +drop table if exists intersect_nullable_not_nullable_t1; +""" +sql """ +drop table if exists intersect_nullable_not_nullable_t2; +""" +sql """ +drop table if exists intersect_nullable_not_nullable_t3; +""" +sql """ +drop table if exists intersect_nullable_not_nullable_t4; +""" +sql """ +create table intersect_nullable_not_nullable_t1 (k1 char(255) not null) distributed by hash(k1) properties("replication_num"="1"); +""" +sql """ +insert into intersect_nullable_not_nullable_t1 values("a"), ("b"), ("c"), ("d"), ("e"); +""" + +sql """ +create table intersect_nullable_not_nullable_t2 (kk0 int, kk1 char(100) not null) distributed by hash(kk0) properties("replication_num"="1"); +""" +sql """ +insert into intersect_nullable_not_nullable_t2 values(1, "b"), (2, "c"), (3, "d"), (4, "e"); +""" + +sql """ +create table intersect_nullable_not_nullable_t3 (kkk0 int, kkk1 char(100) ) distributed by hash(kkk0) properties("replication_num"="1"); +""" +sql """ +insert into intersect_nullable_not_nullable_t3 values(1, "c"), (2, "d"), (3, "e"); +""" + +sql """ +create table intersect_nullable_not_nullable_t4 (1 char(100) ) distributed by hash(1) properties("replication_num"="1"); +""" +sql ""
(doris) branch master updated: [bug](meta) fix can't deserialize meta from gson about polymorphic function class (#36847)
This is an automated email from the ASF dual-hosted git repository. zhangstar333 pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new f413ec422a8 [bug](meta) fix can't deserialize meta from gson about polymorphic function class (#36847) f413ec422a8 is described below commit f413ec422a86a817d4f392cd80a00cc062930fbb Author: zhangstar333 <87313068+zhangstar...@users.noreply.github.com> AuthorDate: Wed Jun 26 18:14:34 2024 +0800 [bug](meta) fix can't deserialize meta from gson about polymorphic function class (#36847) in PR #36552, start to serialize meta by Gson and function class is polymorphic ``` Function.class ---ScalarFunction.class ---AggregateFunction.class ---AliasFunction.class ``` --- .../main/java/org/apache/doris/persist/gson/GsonUtils.java | 12 1 file changed, 12 insertions(+) diff --git a/fe/fe-core/src/main/java/org/apache/doris/persist/gson/GsonUtils.java b/fe/fe-core/src/main/java/org/apache/doris/persist/gson/GsonUtils.java index 28e7cbc1a13..90a1c507011 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/persist/gson/GsonUtils.java +++ b/fe/fe-core/src/main/java/org/apache/doris/persist/gson/GsonUtils.java @@ -61,6 +61,8 @@ import org.apache.doris.analysis.VirtualSlotRef; import org.apache.doris.backup.BackupJob; import org.apache.doris.backup.RestoreJob; import org.apache.doris.catalog.AggStateType; +import org.apache.doris.catalog.AggregateFunction; +import org.apache.doris.catalog.AliasFunction; import org.apache.doris.catalog.AnyElementType; import org.apache.doris.catalog.AnyStructType; import org.apache.doris.catalog.AnyType; @@ -71,6 +73,7 @@ import org.apache.doris.catalog.DistributionInfo; import org.apache.doris.catalog.Env; import org.apache.doris.catalog.EsResource; import org.apache.doris.catalog.EsTable; +import org.apache.doris.catalog.Function; import org.apache.doris.catalog.FunctionGenTable; import org.apache.doris.catalog.HMSResource; import org.apache.doris.catalog.HashDistributionInfo; @@ -99,6 +102,7 @@ import org.apache.doris.catalog.RangePartitionItem; import org.apache.doris.catalog.Replica; import org.apache.doris.catalog.Resource; import org.apache.doris.catalog.S3Resource; +import org.apache.doris.catalog.ScalarFunction; import org.apache.doris.catalog.ScalarType; import org.apache.doris.catalog.SchemaTable; import org.apache.doris.catalog.SinglePartitionInfo; @@ -486,6 +490,13 @@ public class GsonUtils { .registerSubtype(FrontendHbResponse.class, FrontendHbResponse.class.getSimpleName()) .registerSubtype(BrokerHbResponse.class, BrokerHbResponse.class.getSimpleName()); +// runtime adapter for class "Function" +private static RuntimeTypeAdapterFactory functionAdapterFactory += RuntimeTypeAdapterFactory.of(Function.class, "clazz") +.registerSubtype(ScalarFunction.class, ScalarFunction.class.getSimpleName()) +.registerSubtype(AggregateFunction.class, AggregateFunction.class.getSimpleName()) +.registerSubtype(AliasFunction.class, AliasFunction.class.getSimpleName()); + // runtime adapter for class "CloudReplica". 
private static RuntimeTypeAdapterFactory replicaTypeAdapterFactory = RuntimeTypeAdapterFactory .of(Replica.class, "clazz") @@ -585,6 +596,7 @@ public class GsonUtils { .registerTypeAdapterFactory(partitionTypeAdapterFactory) .registerTypeAdapterFactory(partitionInfoTypeAdapterFactory) .registerTypeAdapterFactory(hbResponseTypeAdapterFactory) +.registerTypeAdapterFactory(functionAdapterFactory) .registerTypeAdapterFactory(rdsTypeAdapterFactory) .registerTypeAdapterFactory(jobExecutorRuntimeTypeAdapterFactory) .registerTypeAdapterFactory(mtmvSnapshotTypeAdapterFactory) - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
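Editor's note: for readers unfamiliar with why the extra `RuntimeTypeAdapterFactory` registration above is needed, the sketch below illustrates the underlying Gson behavior. It is not Doris code: the `Shape`/`Circle`/`Holder` names are hypothetical, and `RuntimeTypeAdapterFactory` is not part of core Gson — it comes from the gson-extras sources (Doris bundles its own copy under `org.apache.doris.persist.gson`).

```java
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
// From the gson-extras sources; Doris keeps its own copy of this class.
import com.google.gson.typeadapters.RuntimeTypeAdapterFactory;

abstract class Shape {}

class Circle extends Shape {
    double radius = 1.0;
}

class Holder {
    // Declared as the abstract base type but holding a concrete subtype,
    // just like a field declared as Function holding a ScalarFunction.
    Shape shape = new Circle();
}

class PolymorphicGsonDemo {
    public static void main(String[] args) {
        // Plain Gson writes the subtype's fields, but the JSON carries no type tag,
        // so reading it back cannot tell which subclass to instantiate.
        String untagged = new Gson().toJson(new Holder());
        System.out.println(untagged); // {"shape":{"radius":1.0}}

        // The runtime type adapter embeds a discriminator field ("clazz", matching
        // the field name used in the Doris patch) and uses it when deserializing.
        RuntimeTypeAdapterFactory<Shape> shapeAdapter =
                RuntimeTypeAdapterFactory.of(Shape.class, "clazz")
                        .registerSubtype(Circle.class, Circle.class.getSimpleName());
        Gson gson = new GsonBuilder().registerTypeAdapterFactory(shapeAdapter).create();

        String tagged = gson.toJson(new Holder());
        System.out.println(tagged); // {"shape":{"clazz":"Circle","radius":1.0}}

        Holder restored = gson.fromJson(tagged, Holder.class);
        System.out.println(restored.shape instanceof Circle); // true
    }
}
```

Without such a factory registered for the `Function` hierarchy, metadata containing a `ScalarFunction`, `AggregateFunction`, or `AliasFunction` stored in a field declared as `Function` could not be read back, which is what the patch above fixes.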
(doris) branch master updated: [enhance](mtmv)support partition tvf (#36479)
This is an automated email from the ASF dual-hosted git repository. morningman pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new d39b8e1be51 [enhance](mtmv)support partition tvf (#36479) d39b8e1be51 is described below commit d39b8e1be510ec0257415d24c6df95f5904a6bc7 Author: zhangdong <493738...@qq.com> AuthorDate: Wed Jun 26 18:56:12 2024 +0800 [enhance](mtmv)support partition tvf (#36479) The displayed content is consistent with the show partitions ``` mysql> select * from partitions("catalog"="internal","database"="zd","table"="ss")\G *** 1. row *** PartitionId: 14004 PartitionName: p1 VisibleVersion: 1 VisibleVersionTime: 2024-06-18 17:41:07 State: NORMAL PartitionKey: k3 Range: [types: [INT]; keys: [1]; ] DistributionKey: k2 Buckets: 2 ReplicationNum: 1 StorageMedium: HDD CooldownTime: -12-31 23:59:59 RemoteStoragePolicy: LastConsistencyCheckTime: \N DataSize: 0.000 IsInMemory: 0 ReplicaAllocation: tag.location.default: 1 IsMutable: 1 SyncWithBaseTables: 1 UnsyncTables: \N 1 row in set (0.02 sec) ``` --- be/src/vec/exec/scan/vmeta_scanner.cpp | 23 ++ be/src/vec/exec/scan/vmeta_scanner.h | 2 + .../doris/catalog/BuiltinTableValuedFunctions.java | 2 + .../doris/common/proc/PartitionsProcDir.java | 77 +-- .../expressions/functions/table/Partitions.java| 58 + .../visitor/TableValuedFunctionVisitor.java| 5 + .../doris/tablefunction/MetadataGenerator.java | 114 +- .../tablefunction/MetadataTableValuedFunction.java | 2 + .../PartitionsTableValuedFunction.java | 243 + .../doris/tablefunction/TableValuedFunctionIf.java | 2 + gensrc/thrift/FrontendService.thrift | 1 + gensrc/thrift/PlanNodes.thrift | 8 + gensrc/thrift/Types.thrift | 3 +- .../tvf/test_hms_partitions_tvf.out| 8 + .../external_table_p0/tvf/test_partitions_tvf.out | 23 ++ .../tvf/test_hms_partitions_tvf.groovy | 46 .../tvf/test_partitions_tvf.groovy | 78 +++ 17 files changed, 675 insertions(+), 20 deletions(-) diff --git a/be/src/vec/exec/scan/vmeta_scanner.cpp b/be/src/vec/exec/scan/vmeta_scanner.cpp index 44a8624f405..64819cfaaa2 100644 --- a/be/src/vec/exec/scan/vmeta_scanner.cpp +++ b/be/src/vec/exec/scan/vmeta_scanner.cpp @@ -233,6 +233,9 @@ Status VMetaScanner::_fetch_metadata(const TMetaScanRange& meta_scan_range) { case TMetadataType::MATERIALIZED_VIEWS: RETURN_IF_ERROR(_build_materialized_views_metadata_request(meta_scan_range, &request)); break; +case TMetadataType::PARTITIONS: +RETURN_IF_ERROR(_build_partitions_metadata_request(meta_scan_range, &request)); +break; case TMetadataType::JOBS: RETURN_IF_ERROR(_build_jobs_metadata_request(meta_scan_range, &request)); break; @@ -401,6 +404,26 @@ Status VMetaScanner::_build_materialized_views_metadata_request( return Status::OK(); } +Status VMetaScanner::_build_partitions_metadata_request(const TMetaScanRange& meta_scan_range, + TFetchSchemaTableDataRequest* request) { +VLOG_CRITICAL << "VMetaScanner::_build_partitions_metadata_request"; +if (!meta_scan_range.__isset.partitions_params) { +return Status::InternalError( +"Can not find TPartitionsMetadataParams from meta_scan_range."); +} + +// create request +request->__set_schema_table_name(TSchemaTableName::METADATA_TABLE); + +// create TMetadataTableRequestParams +TMetadataTableRequestParams metadata_table_params; +metadata_table_params.__set_metadata_type(TMetadataType::PARTITIONS); + metadata_table_params.__set_partitions_metadata_params(meta_scan_range.partitions_params); + 
+request->__set_metada_table_params(metadata_table_params); +return Status::OK(); +} + Status VMetaScanner::_build_jobs_metadata_request(const TMetaScanRange& meta_scan_range, TFetchSchemaTableDataRequest* request) { VLOG_CRITICAL << "VMetaScanner::_build_jobs_metadata_request"; diff --git a/be/src/vec/exec/scan/vmeta_scanner.h b/be/src/vec/exec/scan/vmeta_scanner.h index 7bf308cd56a..5936130069c 100644 --- a/be/src/vec/exec/scan/vmeta_scanner.h +++ b/be/src/vec/exec/scan/vmeta_scanner.h @@ -81,6 +81,8 @@ private:
(doris) branch master updated (d39b8e1be51 -> 3a8dbe48c3e)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from d39b8e1be51 [enhance](mtmv)support partition tvf (#36479) add 3a8dbe48c3e [improvement](meta) Switch meta serialization to gson 4 (#36568) No new revisions were added by this update. Summary of changes: .../org/apache/doris/common/FeMetaVersion.java | 4 +- .../apache/doris/analysis/ImportColumnDesc.java| 3 + .../org/apache/doris/analysis/StorageBackend.java | 2 + .../java/org/apache/doris/backup/Repository.java | 48 ++--- .../org/apache/doris/backup/RepositoryMgr.java | 36 +-- .../org/apache/doris/fs/PersistentFileSystem.java | 28 +++-- .../doris/load/loadv2/LoadJobFinalOperation.java | 28 + .../load/loadv2/MiniLoadTxnCommitAttachment.java | 16 +-- .../doris/load/routineload/KafkaProgress.java | 48 - .../load/routineload/KafkaRoutineLoadJob.java | 25 ++--- .../routineload/RLTaskTxnCommitAttachment.java | 13 +-- .../doris/load/routineload/RoutineLoadJob.java | 120 +++-- .../load/routineload/RoutineLoadProgress.java | 11 +- .../load/routineload/RoutineLoadStatistic.java | 10 +- .../doris/mysql/privilege/UserPropertyInfo.java| 32 -- .../apache/doris/persist/AnalyzeDeletionLog.java | 17 ++- .../org/apache/doris/persist/CreateTableInfo.java | 17 ++- .../java/org/apache/doris/persist/HbPackage.java | 22 ++-- .../apache/doris/persist/PartitionPersistInfo.java | 1 + .../apache/doris/persist/ReplicaPersistInfo.java | 65 +++ .../apache/doris/persist/RoutineLoadOperation.java | 21 +++- ...ostProcessable.java => GsonPreProcessable.java} | 4 +- .../org/apache/doris/persist/gson/GsonUtils.java | 39 ++- .../java/org/apache/doris/qe/OriginStatement.java | 10 +- .../java/org/apache/doris/task/LoadTaskInfo.java | 3 + .../apache/doris/transaction/TransactionState.java | 1 + .../doris/transaction/TxnCommitAttachment.java | 48 + .../routineload/RoutineLoadTaskSchedulerTest.java | 3 +- .../transaction/GlobalTransactionMgrTest.java | 5 +- 29 files changed, 385 insertions(+), 295 deletions(-) copy fe/fe-core/src/main/java/org/apache/doris/persist/gson/{GsonPostProcessable.java => GsonPreProcessable.java} (90%) - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated (3a8dbe48c3e -> 8170df94bfe)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from 3a8dbe48c3e [improvement](meta) Switch meta serialization to gson 4 (#36568) add 8170df94bfe [fix](load) fix no error url if no partition can be found (#36831) No new revisions were added by this update. Summary of changes: be/src/vec/sink/vrow_distribution.cpp | 17 ++-- .../data/load_p0/stream_load/test_error_url_1.csv | 1 + .../stream_load/test_stream_load_error_url.groovy | 94 ++ 3 files changed, 104 insertions(+), 8 deletions(-) create mode 100644 regression-test/data/load_p0/stream_load/test_error_url_1.csv - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch branch-2.0 updated: [fix](clone) Fix clone and alter tablet use same tablet path #34889 (#36791)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a commit to branch branch-2.0 in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/branch-2.0 by this push: new fb72f2ab627 [fix](clone) Fix clone and alter tablet use same tablet path #34889 (#36791) fb72f2ab627 is described below commit fb72f2ab627c1f77c70c60efea7f32f543aa6da7 Author: deardeng <565620...@qq.com> AuthorDate: Wed Jun 26 19:24:05 2024 +0800 [fix](clone) Fix clone and alter tablet use same tablet path #34889 (#36791) cherry pick from #34889 --- be/src/olap/data_dir.cpp | 40 +++--- be/src/olap/data_dir.h | 4 +- be/src/olap/delta_writer.cpp | 5 - be/src/olap/schema_change.cpp | 10 -- be/src/olap/storage_engine.cpp | 2 + be/src/olap/tablet_manager.cpp | 137 + be/src/olap/tablet_manager.h | 16 ++- be/src/olap/task/engine_clone_task.cpp | 31 +++-- be/src/olap/task/engine_storage_migration_task.cpp | 10 +- .../test_drop_clone_tablet_path_race.groovy| 85 + 10 files changed, 266 insertions(+), 74 deletions(-) diff --git a/be/src/olap/data_dir.cpp b/be/src/olap/data_dir.cpp index a12c9155439..12086586b41 100644 --- a/be/src/olap/data_dir.cpp +++ b/be/src/olap/data_dir.cpp @@ -627,16 +627,6 @@ Status DataDir::load() { return Status::OK(); } -void DataDir::add_pending_ids(const std::string& id) { -std::lock_guard wr_lock(_pending_path_mutex); -_pending_path_ids.insert(id); -} - -void DataDir::remove_pending_ids(const std::string& id) { -std::lock_guard wr_lock(_pending_path_mutex); -_pending_path_ids.erase(id); -} - void DataDir::perform_path_gc() { std::unique_lock lck(_check_path_mutex); _check_path_cv.wait(lck, [this] { @@ -684,6 +674,8 @@ void DataDir::_perform_path_gc_by_tablet() { // could find the tablet, then skip check it continue; } +// data_dir_path/data/8/10031/1785511963 +// data_dir_path/ std::string data_dir_path = io::Path(path).parent_path().parent_path().parent_path().parent_path(); DataDir* data_dir = StorageEngine::instance()->get_store(data_dir_path); @@ -691,7 +683,19 @@ void DataDir::_perform_path_gc_by_tablet() { LOG(WARNING) << "could not find data dir for tablet path " << path; continue; } -_tablet_manager->try_delete_unused_tablet_path(data_dir, tablet_id, schema_hash, path); +// data_dir_path/data/8 +std::string shard_path = io::Path(path).parent_path().parent_path(); +std::filesystem::path sp(shard_path); +int16_t shard_id = -1; +try { +// 8 +shard_id = std::stoi(sp.filename().string()); +} catch (const std::exception&) { +LOG(WARNING) << "failed to stoi shard_id, shard name=" << sp.filename().string(); +continue; +} +_tablet_manager->try_delete_unused_tablet_path(data_dir, tablet_id, schema_hash, path, + shard_id); } _all_tablet_schemahash_paths.clear(); LOG(INFO) << "finished one time path gc by tablet."; @@ -840,11 +844,6 @@ void DataDir::_process_garbage_path(const std::string& path) { } } -bool DataDir::_check_pending_ids(const std::string& id) { -std::shared_lock rd_lock(_pending_path_mutex); -return _pending_path_ids.find(id) != _pending_path_ids.end(); -} - Status DataDir::update_capacity() { RETURN_IF_ERROR(io::global_local_filesystem()->get_space_info(_path, &_disk_capacity_bytes, &_available_bytes)); @@ -947,8 +946,16 @@ Status DataDir::move_to_trash(const std::string& tablet_path) { } // 5. 
check parent dir of source file, delete it when empty +RETURN_IF_ERROR(delete_tablet_parent_path_if_empty(tablet_path)); + +return Status::OK(); +} + +Status DataDir::delete_tablet_parent_path_if_empty(const std::string& tablet_path) { +auto fs_tablet_path = io::Path(tablet_path); std::string source_parent_dir = fs_tablet_path.parent_path(); // tablet_id level std::vector sub_files; +bool exists = true; RETURN_IF_ERROR( io::global_local_filesystem()->list(source_parent_dir, false, &sub_files, &exists)); if (sub_files.empty()) { @@ -956,7 +963,6 @@ Status DataDir::move_to_trash(const std::string& tablet_path) { // no need to exam return status io::global_local_filesystem()->delete_directory(source_parent_dir); } - return Status::OK(); } diff --git a/be/src/olap/data_dir.h b/be/src/olap/data_dir.h index 81c74f3bb2e..cf587b6d0db 100644 --- a/be/src/olap/da
(doris) branch master updated: [improvement](clone) dead be will abort sched task (#36795)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new cfb878863b3 [improvement](clone) dead be will abort sched task (#36795) cfb878863b3 is described below commit cfb878863b38f6a251c04e92c8631836c8393219 Author: yujun AuthorDate: Wed Jun 26 19:29:11 2024 +0800 [improvement](clone) dead be will abort sched task (#36795) When be is down, its related clone task need to abort. Otherwise this task need to wait until timeout. --- .../org/apache/doris/clone/TabletScheduler.java| 9 ++ .../apache/doris/common/util/DebugPointUtil.java | 10 +- .../java/org/apache/doris/system/HeartbeatMgr.java | 10 ++ .../apache/doris/clone/BeDownCancelCloneTest.java | 148 + .../apache/doris/utframe/MockedBackendFactory.java | 8 ++ 5 files changed, 183 insertions(+), 2 deletions(-) diff --git a/fe/fe-core/src/main/java/org/apache/doris/clone/TabletScheduler.java b/fe/fe-core/src/main/java/org/apache/doris/clone/TabletScheduler.java index a8093e33e20..55e3ba2e341 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/clone/TabletScheduler.java +++ b/fe/fe-core/src/main/java/org/apache/doris/clone/TabletScheduler.java @@ -1866,10 +1866,13 @@ public class TabletScheduler extends MasterDaemon { * If task is timeout, remove the tablet. */ public void handleRunningTablets() { +Set aliveBeIds = Sets.newHashSet(Env.getCurrentSystemInfo().getAllBackendIds(true)); // 1. remove the tablet ctx if timeout List cancelTablets = Lists.newArrayList(); synchronized (this) { for (TabletSchedCtx tabletCtx : runningTablets.values()) { +long srcBeId = tabletCtx.getSrcBackendId(); +long destBeId = tabletCtx.getDestBackendId(); if (Config.disable_tablet_scheduler) { tabletCtx.setErrMsg("tablet scheduler is disabled"); cancelTablets.add(tabletCtx); @@ -1880,6 +1883,12 @@ public class TabletScheduler extends MasterDaemon { tabletCtx.setErrMsg("timeout"); cancelTablets.add(tabletCtx); stat.counterCloneTaskTimeout.incrementAndGet(); +} else if (destBeId > 0 && !aliveBeIds.contains(destBeId)) { +tabletCtx.setErrMsg("dest be " + destBeId + " is dead"); +cancelTablets.add(tabletCtx); +} else if (srcBeId > 0 && !aliveBeIds.contains(srcBeId)) { +tabletCtx.setErrMsg("src be " + srcBeId + " is dead"); +cancelTablets.add(tabletCtx); } } } diff --git a/fe/fe-core/src/main/java/org/apache/doris/common/util/DebugPointUtil.java b/fe/fe-core/src/main/java/org/apache/doris/common/util/DebugPointUtil.java index da06232f0c0..420cee77631 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/common/util/DebugPointUtil.java +++ b/fe/fe-core/src/main/java/org/apache/doris/common/util/DebugPointUtil.java @@ -134,12 +134,18 @@ public class DebugPointUtil { addDebugPoint(name, new DebugPoint()); } -public static void addDebugPointWithValue(String name, E value) { +public static void addDebugPointWithParams(String name, Map params) { DebugPoint debugPoint = new DebugPoint(); -debugPoint.params.put("value", String.format("%s", value)); +debugPoint.params = params; addDebugPoint(name, debugPoint); } +public static void addDebugPointWithValue(String name, E value) { +Map params = Maps.newHashMap(); +params.put("value", String.format("%s", value)); +addDebugPointWithParams(name, params); +} + public static void removeDebugPoint(String name) { DebugPoint debugPoint = debugPoints.remove(name); LOG.info("remove debug point: name={}, exists={}", name, debugPoint != null); diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/system/HeartbeatMgr.java b/fe/fe-core/src/main/java/org/apache/doris/system/HeartbeatMgr.java index 5f49e88ba6a..93bc7df083c 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/system/HeartbeatMgr.java +++ b/fe/fe-core/src/main/java/org/apache/doris/system/HeartbeatMgr.java @@ -24,6 +24,7 @@ import org.apache.doris.common.Config; import org.apache.doris.common.FeConstants; import org.apache.doris.common.ThreadPoolManager; import org.apache.doris.common.Version; +import org.apache.doris.common.util.DebugPointUtil; import org.apache.doris.common.util.MasterDaemon; import org.apache.doris.persist.HbPackage; import org.apache.doris.resource.Tag; @@ -56,6 +57,7 @@ import com.google.common.collect.Maps; import org.apache.logging.log4j.LogManager; import org.apache.logging.log4j.Logger; +import java.util.Arrays; impo
(doris) branch master updated (cfb878863b3 -> b2747c5a811)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from cfb878863b3 [improvement](clone) dead be will abort sched task (#36795) add b2747c5a811 [improvement](balance) partition rebalance chose disk by rr (#36826) No new revisions were added by this update. Summary of changes: .../apache/doris/clone/PartitionRebalancer.java| 5 +- .../org/apache/doris/clone/TabletScheduler.java| 48 +--- .../java/org/apache/doris/clone/PathSlotTest.java | 64 ++ 3 files changed, 93 insertions(+), 24 deletions(-) create mode 100644 fe/fe-core/src/test/java/org/apache/doris/clone/PathSlotTest.java - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated (b2747c5a811 -> c567ef4ff39)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from b2747c5a811 [improvement](balance) partition rebalance chose disk by rr (#36826) add c567ef4ff39 [chore](rpc) Throw exception when use RPC in ckpt thread or the compatiblility mode (#36856) No new revisions were added by this update. Summary of changes: .../java/org/apache/doris/cloud/rpc/MetaServiceProxy.java | 11 +++ 1 file changed, 11 insertions(+) - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated: [fix](load) Fix wrong results for high-concurrent loading (#36841)
This is an automated email from the ASF dual-hosted git repository. gabriellee pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new 4649faf2a84 [fix](load) Fix wrong results for high-concurrent loading (#36841) 4649faf2a84 is described below commit 4649faf2a841d3d421ee48640770ed1f4e764dbf Author: Gabriel AuthorDate: Wed Jun 26 19:41:42 2024 +0800 [fix](load) Fix wrong results for high-concurrent loading (#36841) --- be/src/runtime/group_commit_mgr.cpp | 31 +++ be/src/runtime/group_commit_mgr.h | 1 + 2 files changed, 20 insertions(+), 12 deletions(-) diff --git a/be/src/runtime/group_commit_mgr.cpp b/be/src/runtime/group_commit_mgr.cpp index ab11b795ed5..111780c9a42 100644 --- a/be/src/runtime/group_commit_mgr.cpp +++ b/be/src/runtime/group_commit_mgr.cpp @@ -287,18 +287,25 @@ Status GroupCommitTable::get_first_block_load_queue( if (!_is_creating_plan_fragment) { _is_creating_plan_fragment = true; create_plan_dep->block(); -RETURN_IF_ERROR( -_thread_pool->submit_func([&, be_exe_version, mem_tracker, dep = create_plan_dep] { -Defer defer {[&, dep = dep]() { -dep->set_ready(); -std::unique_lock l(_lock); -_is_creating_plan_fragment = false; -}}; -auto st = _create_group_commit_load(be_exe_version, mem_tracker); -if (!st.ok()) { -LOG(WARNING) << "create group commit load error, st=" << st.to_string(); -} -})); +RETURN_IF_ERROR(_thread_pool->submit_func([&, be_exe_version, mem_tracker, + dep = create_plan_dep] { +Defer defer {[&, dep = dep]() { +dep->set_ready(); +std::unique_lock l(_lock); +for (auto it : _create_plan_deps) { +it->set_ready(); +} +std::vector> {}.swap(_create_plan_deps); +_is_creating_plan_fragment = false; +}}; +auto st = _create_group_commit_load(be_exe_version, mem_tracker); +if (!st.ok()) { +LOG(WARNING) << "create group commit load error, st=" << st.to_string(); +} +})); +} else { +create_plan_dep->block(); +_create_plan_deps.push_back(create_plan_dep); } return try_to_get_matched_queue(); } diff --git a/be/src/runtime/group_commit_mgr.h b/be/src/runtime/group_commit_mgr.h index 702ebb9c746..c668197e8dc 100644 --- a/be/src/runtime/group_commit_mgr.h +++ b/be/src/runtime/group_commit_mgr.h @@ -187,6 +187,7 @@ private: // fragment_instance_id to load_block_queue std::unordered_map> _load_block_queues; bool _is_creating_plan_fragment = false; +std::vector> _create_plan_deps; }; class GroupCommitMgr { - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
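For context, the race fixed above sits on the group commit path, where many concurrent small writes share one internally created load plan. A rough usage sketch of the feature whose plan creation is being serialized here — the table name is hypothetical, and `group_commit` is assumed to be a session-level variable:

```sql
-- Minimal sketch: route small inserts through group commit (async mode),
-- the high-concurrency loading path addressed by this fix.
SET group_commit = async_mode;
INSERT INTO small_events VALUES (1, 'a'), (2, 'b');
```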
(doris) branch master updated: [feat](Nereids) after partition prune, output rows of scan node only contains rows from selected partitions (#36760)
This is an automated email from the ASF dual-hosted git repository. englefly pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new 29eab1ae532 [feat](Nereids) after partition prune, output rows of scan node only contains rows from selected partitions (#36760) 29eab1ae532 is described below commit 29eab1ae532352e708832999aadc558a9165c245 Author: minghong AuthorDate: Wed Jun 26 19:54:45 2024 +0800 [feat](Nereids) after partition prune, output rows of scan node only contains rows from selected partitions (#36760) 1. update rowcount if some partitions are pruned 2. refactor StatsCalcualtor for Scan --- .../org/apache/doris/nereids/StatementContext.java | 12 +- .../doris/nereids/rules/rewrite/ColumnPruning.java | 2 +- .../doris/nereids/stats/StatsCalculator.java | 391 + .../nereids/trees/plans/algebra/OlapScan.java | 6 + .../logical/LogicalDeferMaterializeOlapScan.java | 5 + .../trees/plans/logical/LogicalOlapScan.java | 8 + .../physical/PhysicalDeferMaterializeOlapScan.java | 5 + .../trees/plans/physical/PhysicalOlapScan.java | 8 + .../apache/doris/statistics/StatisticsBuilder.java | 7 +- .../nereids_p0/stats/partition_col_stats.groovy| 2 +- 10 files changed, 297 insertions(+), 149 deletions(-) diff --git a/fe/fe-core/src/main/java/org/apache/doris/nereids/StatementContext.java b/fe/fe-core/src/main/java/org/apache/doris/nereids/StatementContext.java index 58ccaae34d0..e79f079129d 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/nereids/StatementContext.java +++ b/fe/fe-core/src/main/java/org/apache/doris/nereids/StatementContext.java @@ -18,7 +18,6 @@ package org.apache.doris.nereids; import org.apache.doris.analysis.StatementBase; -import org.apache.doris.catalog.Column; import org.apache.doris.catalog.TableIf; import org.apache.doris.catalog.constraint.TableIdentifier; import org.apache.doris.common.Id; @@ -32,6 +31,7 @@ import org.apache.doris.nereids.trees.expressions.ExprId; import org.apache.doris.nereids.trees.expressions.Expression; import org.apache.doris.nereids.trees.expressions.Placeholder; import org.apache.doris.nereids.trees.expressions.Slot; +import org.apache.doris.nereids.trees.expressions.SlotReference; import org.apache.doris.nereids.trees.expressions.StatementScopeIdGenerator; import org.apache.doris.nereids.trees.plans.ObjectId; import org.apache.doris.nereids.trees.plans.PlaceholderId; @@ -135,7 +135,7 @@ public class StatementContext implements Closeable { private final Map slotToRelation = Maps.newHashMap(); // the columns in Plan.getExpressions(), such as columns in join condition or filter condition, group by expression -private final Set keyColumns = Sets.newHashSet(); +private final Set keySlots = Sets.newHashSet(); private BitSet disableRules; // table locks @@ -516,12 +516,12 @@ public class StatementContext implements Closeable { } } -public void addKeyColumn(Column column) { -keyColumns.add(column); +public void addKeySlot(SlotReference slot) { +keySlots.add(slot); } -public boolean isKeyColumn(Column column) { -return keyColumns.contains(column); +public boolean isKeySlot(SlotReference slot) { +return keySlots.contains(slot); } /** Get table id with lazy */ diff --git a/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/rewrite/ColumnPruning.java b/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/rewrite/ColumnPruning.java index e8d4c6d96ab..20a91ca5657 100644 --- 
a/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/rewrite/ColumnPruning.java +++ b/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/rewrite/ColumnPruning.java @@ -138,7 +138,7 @@ public class ColumnPruning extends DefaultPlanRewriter implements if (stmtContext != null) { for (Slot key : keys) { if (key instanceof SlotReference) { -((SlotReference) key).getColumn().ifPresent(stmtContext::addKeyColumn); +stmtContext.addKeySlot((SlotReference) key); } } } diff --git a/fe/fe-core/src/main/java/org/apache/doris/nereids/stats/StatsCalculator.java b/fe/fe-core/src/main/java/org/apache/doris/nereids/stats/StatsCalculator.java index a96ec287f76..bf78fe2a0bf 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/nereids/stats/StatsCalculator.java +++ b/fe/fe-core/src/main/java/org/apache/doris/nereids/stats/StatsCalculator.java @@ -18,7 +18,9 @@ package org.apache.doris.nereids.stats; import org.apache.doris.analysis.IntLiteral; +import org.apache.doris.catalog.Column; import org.apache.doris.catalog.Env; +import org.apache.doris.catalog.MTMV; import org.apache.doris.catalog.OlapTable
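The effect of this change is easiest to see in a plan: once a predicate prunes partitions, the scan node's estimated row count is derived from the surviving partitions only, rather than from the whole table. A sketch under assumed names (`sales`, range-partitioned by `dt`), not taken from the patch itself:

```sql
-- Hypothetical range-partitioned table; only the June 2024 partition survives pruning,
-- so the OlapScan cardinality estimate now comes from that partition alone.
EXPLAIN SELECT count(*) FROM sales
WHERE dt >= '2024-06-01' AND dt < '2024-07-01';
```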
(doris) branch master updated: [chore](query) print query id when killed by timeout checker (#36868)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new f71a416d4c6 [chore](query) print query id when killed by timeout checker (#36868) f71a416d4c6 is described below commit f71a416d4c63f67e4b4a1e75598c37de5ca259c0 Author: hui lai <1353307...@qq.com> AuthorDate: Wed Jun 26 20:10:08 2024 +0800 [chore](query) print query id when killed by timeout checker (#36868) ## Proposed changes ``` 2024-06-26 14:58:30,917 WARN (connect-scheduler-check-timer-0|92) [ConnectContext.checkTimeout():776] kill query timeout, remote: :XX, query timeout: 90 ``` It is hard to know which query is killed when timeout, so it is necessary to print query id. --- fe/fe-core/src/main/java/org/apache/doris/qe/ConnectContext.java | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/fe/fe-core/src/main/java/org/apache/doris/qe/ConnectContext.java b/fe/fe-core/src/main/java/org/apache/doris/qe/ConnectContext.java index 0c276f55686..6b6278db7d6 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/qe/ConnectContext.java +++ b/fe/fe-core/src/main/java/org/apache/doris/qe/ConnectContext.java @@ -967,8 +967,8 @@ public class ConnectContext { // to ms long timeout = getExecTimeout() * 1000L; if (delta > timeout) { -LOG.warn("kill {} timeout, remote: {}, query timeout: {}", -timeoutTag, getRemoteHostPortString(), timeout); +LOG.warn("kill {} timeout, remote: {}, query timeout: {}, query id: {}", +timeoutTag, getRemoteHostPortString(), timeout, queryId); killFlag = true; } } - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch branch-2.1 updated: [improvement](stream load)(cherry-pick) support hll_from_base64 for stream load column mapping (#36819)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a commit to branch branch-2.1 in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/branch-2.1 by this push: new a6a84b8ecc3 [improvement](stream load)(cherry-pick) support hll_from_base64 for stream load column mapping (#36819) a6a84b8ecc3 is described below commit a6a84b8ecc324834f79f3a61e5e587366890a06e Author: gnehil AuthorDate: Wed Jun 26 20:12:40 2024 +0800 [improvement](stream load)(cherry-pick) support hll_from_base64 for stream load column mapping (#36819) picked from https://github.com/apache/doris/pull/35923 --- .../java/org/apache/doris/catalog/FunctionSet.java | 1 + .../org/apache/doris/planner/FileLoadScanNode.java | 4 ++- .../data/load_p0/http_stream/test_http_stream.out | 12 +++ .../stream_load/test_stream_load_hll_type.csv | 10 ++ .../load_p0/stream_load/test_stream_load_new.out | 12 +++ .../load_p0/http_stream/test_http_stream.groovy| 41 + .../stream_load/test_stream_load_new.groovy| 42 ++ 7 files changed, 121 insertions(+), 1 deletion(-) diff --git a/fe/fe-core/src/main/java/org/apache/doris/catalog/FunctionSet.java b/fe/fe-core/src/main/java/org/apache/doris/catalog/FunctionSet.java index b0d4c654531..2db943993dd 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/catalog/FunctionSet.java +++ b/fe/fe-core/src/main/java/org/apache/doris/catalog/FunctionSet.java @@ -178,6 +178,7 @@ public class FunctionSet { public static final String HLL_UNION_AGG = "hll_union_agg"; public static final String HLL_RAW_AGG = "hll_raw_agg"; public static final String HLL_CARDINALITY = "hll_cardinality"; +public static final String HLL_FROM_BASE64 = "hll_from_base64"; public static final String TO_BITMAP = "to_bitmap"; public static final String TO_BITMAP_WITH_CHECK = "to_bitmap_with_check"; diff --git a/fe/fe-core/src/main/java/org/apache/doris/planner/FileLoadScanNode.java b/fe/fe-core/src/main/java/org/apache/doris/planner/FileLoadScanNode.java index ca0324a51d0..0d674a70517 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/planner/FileLoadScanNode.java +++ b/fe/fe-core/src/main/java/org/apache/doris/planner/FileLoadScanNode.java @@ -280,9 +280,11 @@ public class FileLoadScanNode extends FileScanNode { } FunctionCallExpr fn = (FunctionCallExpr) expr; if (!fn.getFnName().getFunction().equalsIgnoreCase(FunctionSet.HLL_HASH) && !fn.getFnName() -.getFunction().equalsIgnoreCase("hll_empty")) { +.getFunction().equalsIgnoreCase("hll_empty") +&& !fn.getFnName().getFunction().equalsIgnoreCase(FunctionSet.HLL_FROM_BASE64)) { throw new AnalysisException("HLL column must use " + FunctionSet.HLL_HASH + " function, like " + destSlotDesc.getColumn().getName() + "=" + FunctionSet.HLL_HASH + "(xxx) or " ++ destSlotDesc.getColumn().getName() + "=" + FunctionSet.HLL_FROM_BASE64 + "(xxx) or " + destSlotDesc.getColumn().getName() + "=hll_empty()"); } expr.setType(org.apache.doris.catalog.Type.HLL); diff --git a/regression-test/data/load_p0/http_stream/test_http_stream.out b/regression-test/data/load_p0/http_stream/test_http_stream.out index 7ce24eea095..2475ed24961 100644 --- a/regression-test/data/load_p0/http_stream/test_http_stream.out +++ b/regression-test/data/load_p0/http_stream/test_http_stream.out @@ -620,3 +620,15 @@ 1 test 2 test +-- !sql19 -- +buag 1 1 +huang 1 1 +jfin 1 1 +koga 1 1 +kon1 1 +lofn 1 1 +lojn 1 1 +nfubg 1 1 +nhga 1 1 +nijg 1 1 + diff --git a/regression-test/data/load_p0/stream_load/test_stream_load_hll_type.csv 
b/regression-test/data/load_p0/stream_load/test_stream_load_hll_type.csv new file mode 100644 index 000..0b1d798782c --- /dev/null +++ b/regression-test/data/load_p0/stream_load/test_stream_load_hll_type.csv @@ -0,0 +1,10 @@ +1001,koga,AQEMYSmSmfh+mA== +1002,nijg,AQGs1RXTaA+hkQ== +1003,lojn,AQFyJr4rwn+S0A== +1004,lofn,AQFvE0bU6Pc9uw== +1005,jfin,AQEmxbO3VGItCA== +1006,kon,AQEm5d0Gw4uvZw== +1007,nhga,AQHOpocenFnBwQ== +1008,nfubg,AQFzYsFz+NIgUg== +1009,huang,AQH2slI7qAUmYA== +1010,buag,AQGBXZ3xnU79YA== \ No newline at end of file diff --git a/regression-test/data/load_p0/stream_load/test_stream_load_new.out b/regression-test/data/load_p0/stream_load/test_stream_load_new.out index 52440d98436..f251042a9df 100644 --- a/regression-test/data/load_p0/stream_load/test_stream_load_new.out +++ b/regression-test/data/load_p0/stream_load/test_stream_load_new.out @@ -124,3 +124,15 @@ 10009 jj 10010 kk +-- !sql13 -- +buag 1 1 +huang 1 1 +jfin
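The FE check above now accepts three mapping shapes for an HLL destination column: `hll_hash(...)`, `hll_empty()`, and the newly allowed `hll_from_base64(...)`. As a rough sketch of the target side (the table and column names are made up), a stream load with a column mapping such as `pv_hll=hll_from_base64(pv_b64)` could feed a table like:

```sql
-- Hypothetical HLL aggregate table used as a stream load target; base64-encoded HLL
-- values in the CSV (as in test_stream_load_hll_type.csv) would map onto pv_hll.
CREATE TABLE IF NOT EXISTS uv_agg (
    dt     DATE,
    name   VARCHAR(32),
    pv_hll HLL HLL_UNION
)
AGGREGATE KEY(dt, name)
DISTRIBUTED BY HASH(dt) BUCKETS 2
PROPERTIES ("replication_num" = "1");
```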
(doris) branch master updated (f71a416d4c6 -> ec79a94113a)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from f71a416d4c6 [chore](query) print query id when killed by timeout checker (#36868) add ec79a94113a [fix](regression test) Disable the case in cloud mode (#36769) No new revisions were added by this update. Summary of changes: .../suites/inverted_index_p0/test_variant_index_format_v1.groovy | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch branch-2.1 updated: [fix](schema-change) Fix schema-change from non-null to null (#36389)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a commit to branch branch-2.1 in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/branch-2.1 by this push: new 23cf494b485 [fix](schema-change) Fix schema-change from non-null to null (#36389) 23cf494b485 is described below commit 23cf494b4850dc5b7e53ea1ee3c9fe0bb86a76bf Author: Lightman <31928846+lchangli...@users.noreply.github.com> AuthorDate: Wed Jun 26 20:20:50 2024 +0800 [fix](schema-change) Fix schema-change from non-null to null (#36389) https://github.com/apache/doris/pull/32913 --- be/src/olap/schema_change.cpp | 23 +- .../org/apache/doris/alter/SchemaChangeJobV2.java | 2 +- 2 files changed, 19 insertions(+), 6 deletions(-) diff --git a/be/src/olap/schema_change.cpp b/be/src/olap/schema_change.cpp index e7ef0464ffa..a4ed6a527bf 100644 --- a/be/src/olap/schema_change.cpp +++ b/be/src/olap/schema_change.cpp @@ -341,7 +341,7 @@ Status BlockChanger::change_block(vectorized::Block* ref_block, assert_cast(new_col->assume_mutable().get()); new_nullable_col->change_nested_column(ref_col); - new_nullable_col->get_null_map_data().resize_fill(new_nullable_col->size()); + new_nullable_col->get_null_map_data().resize_fill(ref_col->size()); } else { // nullable to not nullable: // suppose column `c_phone` is originally varchar(16) NOT NULL, @@ -389,11 +389,24 @@ Status BlockChanger::_check_cast_valid(vectorized::ColumnPtr ref_column, return Status::DataQualityError("Null data is changed to not nullable"); } } else { -const auto* new_null_map = +const auto& null_map_column = vectorized::check_and_get_column(new_column) -->get_null_map_column() -.get_data() -.data(); +->get_null_map_column(); +const auto& nested_column = + vectorized::check_and_get_column(new_column) +->get_nested_column(); +const auto* new_null_map = null_map_column.get_data().data(); + +if (null_map_column.size() != new_column->size() || +nested_column.size() != new_column->size()) { +DCHECK(false) << "null_map_column_size=" << null_map_column.size() + << " new_column_size=" << new_column->size() + << " nested_column_size=" << nested_column.size(); +return Status::InternalError( +"null_map_column size is changed, null_map_column_size={}, " +"new_column_size={}", +null_map_column.size(), new_column->size()); +} bool is_changed = false; for (size_t i = 0; i < ref_column->size(); i++) { diff --git a/fe/fe-core/src/main/java/org/apache/doris/alter/SchemaChangeJobV2.java b/fe/fe-core/src/main/java/org/apache/doris/alter/SchemaChangeJobV2.java index b58568d3945..919fc673648 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/alter/SchemaChangeJobV2.java +++ b/fe/fe-core/src/main/java/org/apache/doris/alter/SchemaChangeJobV2.java @@ -443,7 +443,7 @@ public class SchemaChangeJobV2 extends AlterJobV2 { if (indexColumnMap.containsKey(SchemaChangeHandler.SHADOW_NAME_PREFIX + column.getName())) { Column newColumn = indexColumnMap.get( SchemaChangeHandler.SHADOW_NAME_PREFIX + column.getName()); -if (newColumn.getType() != column.getType()) { +if (!newColumn.getType().equals(column.getType())) { try { SlotRef slot = new SlotRef(destSlotDesc); slot.setCol(column.getName()); - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
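The fix lands in the column conversion used during schema change when a NOT NULL column becomes nullable: the new column's null map is now sized from the source column rather than the still-empty target. DDL of the following shape exercises that path; the table and column are hypothetical, echoing the `c_phone` comment in the diff:

```sql
-- Hypothetical table whose NOT NULL column is relaxed to nullable; converting the
-- existing rowsets during this ALTER is what the resize fix above covers.
ALTER TABLE customer MODIFY COLUMN c_phone VARCHAR(16) NULL;
```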
(doris) branch master updated: [Featrue](default value) add pi as default value (#36280)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new 543576227db [Featrue](default value) add pi as default value (#36280) 543576227db is described below commit 543576227db1521e66c2d32a6fa522c1a7a7aa61 Author: Liu Zhenlong <49094455+dragonliu2...@users.noreply.github.com> AuthorDate: Wed Jun 26 20:21:34 2024 +0800 [Featrue](default value) add pi as default value (#36280) ## Proposed changes Issue Number: refers https://github.com/apache/doris/issues/34228 add pi as default value, such as: ```sql CREATE TABLE IF NOT EXISTS t1 ( k TINYINT, v double not null DEFAULT PI ) UNIQUE KEY(k) DISTRIBUTED BY HASH(k) PROPERTIES("replication_num" = "1"); ``` --- .../antlr4/org/apache/doris/nereids/DorisLexer.g4 | 1 + .../antlr4/org/apache/doris/nereids/DorisParser.g4 | 3 +- .../antlr4/org/apache/doris/nereids/PLParser.g4| 1 + .../java/org/apache/doris/analysis/ColumnDef.java | 9 ++ .../doris/nereids/parser/LogicalPlanBuilder.java | 2 + .../trees/plans/commands/info/DefaultValue.java| 5 + .../data/correctness_p0/test_default_pi.out| 28 + .../correctness_p0/test_default_pi_streamload.csv | 2 + .../suites/correctness_p0/test_default_pi.groovy | 123 + 9 files changed, 173 insertions(+), 1 deletion(-) diff --git a/fe/fe-core/src/main/antlr4/org/apache/doris/nereids/DorisLexer.g4 b/fe/fe-core/src/main/antlr4/org/apache/doris/nereids/DorisLexer.g4 index 9971d277ebe..5ed655c212a 100644 --- a/fe/fe-core/src/main/antlr4/org/apache/doris/nereids/DorisLexer.g4 +++ b/fe/fe-core/src/main/antlr4/org/apache/doris/nereids/DorisLexer.g4 @@ -415,6 +415,7 @@ PERCENT: 'PERCENT'; PERIOD: 'PERIOD'; PERMISSIVE: 'PERMISSIVE'; PHYSICAL: 'PHYSICAL'; +PI: 'PI'; PLACEHOLDER: '?'; PLAN: 'PLAN'; PRIVILEGES: 'PRIVILEGES'; diff --git a/fe/fe-core/src/main/antlr4/org/apache/doris/nereids/DorisParser.g4 b/fe/fe-core/src/main/antlr4/org/apache/doris/nereids/DorisParser.g4 index 68da90f9da1..ee9b91fdf6f 100644 --- a/fe/fe-core/src/main/antlr4/org/apache/doris/nereids/DorisParser.g4 +++ b/fe/fe-core/src/main/antlr4/org/apache/doris/nereids/DorisParser.g4 @@ -588,7 +588,7 @@ columnDef ((GENERATED ALWAYS)? AS LEFT_PAREN generatedExpr=expression RIGHT_PAREN)? ((NOT)? nullable=NULL)? (AUTO_INCREMENT (LEFT_PAREN autoIncInitValue=number RIGHT_PAREN)?)? -(DEFAULT (nullValue=NULL | INTEGER_VALUE | DECIMAL_VALUE | stringValue=STRING_LITERAL +(DEFAULT (nullValue=NULL | INTEGER_VALUE | DECIMAL_VALUE | PI | stringValue=STRING_LITERAL | CURRENT_DATE | defaultTimestamp=CURRENT_TIMESTAMP (LEFT_PAREN defaultValuePrecision=number RIGHT_PAREN)?))? (ON UPDATE CURRENT_TIMESTAMP (LEFT_PAREN onUpdateValuePrecision=number RIGHT_PAREN)?)? (COMMENT comment=STRING_LITERAL)? 
@@ -1237,6 +1237,7 @@ nonReserved | PERIOD | PERMISSIVE | PHYSICAL +| PI | PLAN | PLUGIN | PLUGINS diff --git a/fe/fe-core/src/main/antlr4/org/apache/doris/nereids/PLParser.g4 b/fe/fe-core/src/main/antlr4/org/apache/doris/nereids/PLParser.g4 index f8dc6039145..e967da2f955 100644 --- a/fe/fe-core/src/main/antlr4/org/apache/doris/nereids/PLParser.g4 +++ b/fe/fe-core/src/main/antlr4/org/apache/doris/nereids/PLParser.g4 @@ -884,6 +884,7 @@ non_reserved_words : // Tokens that are not reserved words | PART_LOC | PCTFREE | PCTUSED + | PI | PRECISION | PRESERVE | PRINT diff --git a/fe/fe-core/src/main/java/org/apache/doris/analysis/ColumnDef.java b/fe/fe-core/src/main/java/org/apache/doris/analysis/ColumnDef.java index f306bb782fe..e01a4f11793 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/analysis/ColumnDef.java +++ b/fe/fe-core/src/main/java/org/apache/doris/analysis/ColumnDef.java @@ -99,6 +99,7 @@ public class ColumnDef { this.defaultValueExprDef = new DefaultValueExprDef(exprName, precision); } +public static String PI = "PI"; public static String CURRENT_DATE = "CURRENT_DATE"; // default "CURRENT_TIMESTAMP", only for DATETIME type public static String CURRENT_TIMESTAMP = "CURRENT_TIMESTAMP"; @@ -526,6 +527,14 @@ public class ColumnDef { throw new AnalysisException("Types other than DATE and DATEV2 " + "cannot use current_date as the default value"); } +} else if (null != defaultValueExprDef +&& defaultValueExprDef.getExprName().equalsIgnoreCase(DefaultValue.PI)) { +switch (primitiveType) { +case DOUBLE: +break; +default: +throw
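Following the commit's own example, a row that omits `v` should pick up pi as the stored default. A small follow-up sketch (the expected value is approximate and shown only for illustration):

```sql
-- Assumes the t1 table from the commit message above, with v double NOT NULL DEFAULT PI.
INSERT INTO t1 (k) VALUES (1);
SELECT k, v FROM t1;
-- v is expected to be pi, i.e. roughly 3.141592653589793
```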
(doris) branch master updated (543576227db -> 4e1dbb0a9e7)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from 543576227db [Featrue](default value) add pi as default value (#36280) add 4e1dbb0a9e7 [Enhance](Routine Load) enhance routine load get topic metadata (#35651) No new revisions were added by this update. Summary of changes: be/src/runtime/routine_load/data_consumer.cpp | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris-shade) branch paimon-0.8.1 created (now a093837)
This is an automated email from the ASF dual-hosted git repository. kirs pushed a change to branch paimon-0.8.1 in repository https://gitbox.apache.org/repos/asf/doris-shade.git at a093837 upgrade hive-catalog-shade dependencies (#45) No new revisions were added by this update. - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris-shade) branch paimon-0.8.1 updated: Upgrade paimon to 0.8.1
This is an automated email from the ASF dual-hosted git repository. kirs pushed a commit to branch paimon-0.8.1 in repository https://gitbox.apache.org/repos/asf/doris-shade.git The following commit(s) were added to refs/heads/paimon-0.8.1 by this push: new 538dac4 Upgrade paimon to 0.8.1 538dac4 is described below commit 538dac47d2d21586aafa060b579e5aaed3e8b6f7 Author: Calvin Kirs AuthorDate: Wed Jun 26 20:32:24 2024 +0800 Upgrade paimon to 0.8.1 --- CHANGE-LOG.md | 2 ++ hive-catalog-shade/pom.xml | 2 +- 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/CHANGE-LOG.md b/CHANGE-LOG.md index de331b7..57116d0 100644 --- a/CHANGE-LOG.md +++ b/CHANGE-LOG.md @@ -4,6 +4,8 @@ 2.1.0 - Upgrade paimon to 0.8.0 - Upgrade bcpkix-jdkon15 dependency to bcpkix-jdkon18 + 2.1.1 +- Upgrade paimon to 0.8.1 ### 2.0 2.0.0 - upgrade avro to 1.11.3 diff --git a/hive-catalog-shade/pom.xml b/hive-catalog-shade/pom.xml index 53baa3f..6100968 100644 --- a/hive-catalog-shade/pom.xml +++ b/hive-catalog-shade/pom.xml @@ -31,7 +31,7 @@ under the License. 3.3.6 2.8.1 1.4.3 -0.8.0 +0.8.1 1.11.3 2.5.2 1.13.1 - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated: [fix](test)fix regression test case failure (#36391)
This is an automated email from the ASF dual-hosted git repository. starocean999 pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new 1def10f3f58 [fix](test)fix regression test case failure (#36391) 1def10f3f58 is described below commit 1def10f3f58fa10c0d79965589aa8560d8475089 Author: starocean999 <40539150+starocean...@users.noreply.github.com> AuthorDate: Wed Jun 26 21:30:19 2024 +0800 [fix](test)fix regression test case failure (#36391) ## Proposed changes Issue Number: close #xxx --- .../suites/correctness_p0/test_ctas_mv/test_ctas_mv.groovy | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/regression-test/suites/correctness_p0/test_ctas_mv/test_ctas_mv.groovy b/regression-test/suites/correctness_p0/test_ctas_mv/test_ctas_mv.groovy index 81c71221756..932ffdbdfb4 100644 --- a/regression-test/suites/correctness_p0/test_ctas_mv/test_ctas_mv.groovy +++ b/regression-test/suites/correctness_p0/test_ctas_mv/test_ctas_mv.groovy @@ -66,9 +66,9 @@ suite("test_ctas_mv") { sql """ insert into test_table_t1 values(); """ sql """ insert into test_table_t2 values(); """ -sql """ CREATE MATERIALIZED VIEW test_table_view As +createMV(""" CREATE MATERIALIZED VIEW test_table_view As select a1,a3,a4,DATE_FORMAT(a5, 'MMdd') QUERY_TIME,DATE_FORMAT(a6 ,'MMdd') CREATE_TIME -from test_table_t1 where DATE_FORMAT(a5, 'MMdd') =20230131; """ +from test_table_t1 where DATE_FORMAT(a5, 'MMdd') =20230131; """) sql """ create table szjdf_zjb PROPERTIES("replication_num"="1") as select disf.a2, disf.a1, disf.a3, disf.a4, @@ -90,7 +90,7 @@ suite("test_ctas_mv") { """ sql """ -drop table if exists test_table_t1 FORCE; +drop table if exists test_table_t1; """ sql """ - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated: [enhancement](compaction) adjust compaction concurrency based on compaction score and workload (#36672)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new 2c3a96ed97b [enhancement](compaction) adjust compaction concurrency based on compaction score and workload (#36672) 2c3a96ed97b is described below commit 2c3a96ed97bef5c9653424fdc61b7b40ff8119a3 Author: Luwei <814383...@qq.com> AuthorDate: Wed Jun 26 22:00:23 2024 +0800 [enhancement](compaction) adjust compaction concurrency based on compaction score and workload (#36672) 1 Resolved the issue where the priority queue did not reserve slots for cumulative compaction. 2 When considering compaction task priorities, introduced metrics for CPU and memory usage rates. When the compaction score is low, and CPU or memory usage is high, reduce the number of compaction tasks generated and allocate CPU and memory resources to queries or load. 3 Integrated the logic of the priority queue and concurrency control together, removing the previous priority code. --- be/src/common/config.cpp | 6 ++- be/src/common/config.h | 2 +- be/src/olap/olap_server.cpp| 85 +++--- be/src/olap/storage_engine.cpp | 20 -- be/src/util/system_metrics.cpp | 4 ++ be/src/util/system_metrics.h | 2 + 6 files changed, 57 insertions(+), 62 deletions(-) diff --git a/be/src/common/config.cpp b/be/src/common/config.cpp index 5eb0e8d26ba..4460e477c8f 100644 --- a/be/src/common/config.cpp +++ b/be/src/common/config.cpp @@ -437,6 +437,8 @@ DEFINE_Validator(compaction_task_num_per_disk, [](const int config) -> bool { return config >= 2; }); DEFINE_Validator(compaction_task_num_per_fast_disk, [](const int config) -> bool { return config >= 2; }); +DEFINE_Validator(low_priority_compaction_task_num_per_disk, + [](const int config) -> bool { return config >= 2; }); // How many rounds of cumulative compaction for each round of base compaction when compaction tasks generation. DEFINE_mInt32(cumulative_compaction_rounds_for_each_base_compaction_round, "9"); @@ -458,8 +460,8 @@ DEFINE_mInt64(pick_rowset_to_compact_interval_sec, "86400"); // Compaction priority schedule DEFINE_mBool(enable_compaction_priority_scheduling, "true"); -DEFINE_mInt32(low_priority_compaction_task_num_per_disk, "1"); -DEFINE_mDouble(low_priority_tablet_version_num_ratio, "0.7"); +DEFINE_mInt32(low_priority_compaction_task_num_per_disk, "2"); +DEFINE_mInt32(low_priority_compaction_score_threshold, "200"); // Thread count to do tablet meta checkpoint, -1 means use the data directories count. DEFINE_Int32(max_meta_checkpoint_threads, "-1"); diff --git a/be/src/common/config.h b/be/src/common/config.h index 592e96d1dc0..dbf18002704 100644 --- a/be/src/common/config.h +++ b/be/src/common/config.h @@ -507,7 +507,7 @@ DECLARE_mInt64(pick_rowset_to_compact_interval_sec); // Compaction priority schedule DECLARE_mBool(enable_compaction_priority_scheduling); DECLARE_mInt32(low_priority_compaction_task_num_per_disk); -DECLARE_mDouble(low_priority_tablet_version_num_ratio); +DECLARE_mInt32(low_priority_compaction_score_threshold); // Thread count to do tablet meta checkpoint, -1 means use the data directories count. 
DECLARE_Int32(max_meta_checkpoint_threads); diff --git a/be/src/olap/olap_server.cpp b/be/src/olap/olap_server.cpp index 67d171f1a22..ec4e5843b26 100644 --- a/be/src/olap/olap_server.cpp +++ b/be/src/olap/olap_server.cpp @@ -845,6 +845,42 @@ void StorageEngine::get_tablet_rowset_versions(const PGetTabletVersionsRequest* response->mutable_status()->set_status_code(0); } +bool need_generate_compaction_tasks(int count, int thread_per_disk, CompactionType compaction_type, +bool all_base) { +if (count >= thread_per_disk) { +// Return if no available slot +return false; +} else if (count >= thread_per_disk - 1) { +// Only one slot left, check if it can be assigned to base compaction task. +if (compaction_type == CompactionType::BASE_COMPACTION) { +if (all_base) { +return false; +} +} +} +return true; +} + +int get_concurrent_per_disk(int max_score, int thread_per_disk) { +if (!config::enable_compaction_priority_scheduling) { +return thread_per_disk; +} + +double load_average = DorisMetrics::instance()->system_metrics()->get_load_average_1_min(); +int num_cores = doris::CpuInfo::num_cores(); +bool cpu_usage_high = load_average > num_cores * 0.8; + +auto process_memory_usage = doris::GlobalMemoryArbitrator::process_memory_usage(); +bool memory_usage_high = process_memory_usage > MemInfo::soft_mem_limit() * 0.8; + +if (max_score <= config::low_priority_compaction_s
(doris) branch master updated: [regression](kerberos)add hive kerberos docker regression env (#36430)
This is an automated email from the ASF dual-hosted git repository. morningman pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new 8b8be642a92 [regression](kerberos)add hive kerberos docker regression env (#36430) 8b8be642a92 is described below commit 8b8be642a92950027bdf172ef04ae94141924244 Author: slothever <18522955+w...@users.noreply.github.com> AuthorDate: Thu Jun 27 00:23:42 2024 +0800 [regression](kerberos)add hive kerberos docker regression env (#36430) add kerberos docker regression environment add cases for two different hive kerberos --- .../create_kerberos_credential_cache_files.sh | 33 +++ .../kerberos/common/conf/doris-krb5.conf | 52 +++ .../common/hadoop/apply-config-overrides.sh| 31 +++ .../kerberos/common/hadoop/hadoop-run.sh | 42 + .../kerberos/entrypoint-hive-master-2.sh | 36 .../kerberos/entrypoint-hive-master.sh | 34 +++ .../kerberos/health-checks/hadoop-health-check.sh | 39 .../kerberos/health-checks/health.sh | 34 +++ .../docker-compose/kerberos/kerberos.yaml.tpl | 73 +++ .../kerberos/sql/create_kerberos_hive_table.sql| 17 .../kerberos/two-kerberos-hives/auth-to-local.xml | 29 ++ .../two-kerberos-hives/hive2-default-fs-site.xml | 25 + .../kerberos/two-kerberos-hives/update-location.sh | 25 + docker/thirdparties/run-thirdparties-docker.sh | 30 +- .../common/security/authentication/HadoopUGI.java | 4 +- .../apache/doris/datasource/ExternalCatalog.java | 9 +- .../doris/datasource/hive/HMSExternalCatalog.java | 6 +- .../doris/datasource/hive/HiveMetaStoreCache.java | 4 +- .../datasource/hive/HiveMetaStoreClientHelper.java | 4 +- .../datasource/iceberg/IcebergMetadataCache.java | 5 +- .../datasource/paimon/PaimonExternalCatalog.java | 3 +- .../apache/doris/fs/remote/RemoteFileSystem.java | 5 + .../org/apache/doris/fs/remote/S3FileSystem.java | 3 +- .../apache/doris/fs/remote/dfs/DFSFileSystem.java | 13 ++- regression-test/conf/regression-conf.groovy| 4 + .../kerberos/test_single_hive_kerberos.out | 6 ++ .../kerberos/test_two_hive_kerberos.out| 12 +++ regression-test/pipeline/external/conf/be.conf | 3 + regression-test/pipeline/external/conf/fe.conf | 2 + .../pipeline/external/conf/regression-conf.groovy | 5 + .../kerberos/test_single_hive_kerberos.groovy | 101 + .../kerberos/test_two_hive_kerberos.groovy | 72 +++ 32 files changed, 742 insertions(+), 19 deletions(-) diff --git a/docker/thirdparties/docker-compose/kerberos/ccache/create_kerberos_credential_cache_files.sh b/docker/thirdparties/docker-compose/kerberos/ccache/create_kerberos_credential_cache_files.sh new file mode 100644 index 000..2bba3f928b1 --- /dev/null +++ b/docker/thirdparties/docker-compose/kerberos/ccache/create_kerberos_credential_cache_files.sh @@ -0,0 +1,33 @@ +#!/usr/bin/env bash +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. 
See the License for the +# specific language governing permissions and limitations +# under the License. + +set -exuo pipefail + +TICKET_LIFETIME='30m' + +kinit -l "$TICKET_LIFETIME" -f -c /etc/trino/conf/presto-server-krbcc \ + -kt /etc/trino/conf/presto-server.keytab presto-server/$(hostname -f)@LABS.TERADATA.COM + +kinit -l "$TICKET_LIFETIME" -f -c /etc/trino/conf/hive-presto-master-krbcc \ + -kt /etc/trino/conf/hive-presto-master.keytab hive/$(hostname -f)@LABS.TERADATA.COM + +kinit -l "$TICKET_LIFETIME" -f -c /etc/trino/conf/hdfs-krbcc \ + -kt /etc/hadoop/conf/hdfs.keytab hdfs/hadoop-mas...@labs.teradata.com + +kinit -l "$TICKET_LIFETIME" -f -c /etc/trino/conf/hive-krbcc \ + -kt /etc/hive/conf/hive.keytab hive/hadoop-mas...@labs.teradata.com diff --git a/docker/thirdparties/docker-compose/kerberos/common/conf/doris-krb5.conf b/docker/thirdparties/docker-compose/kerberos/common/conf/doris-krb5.conf new file mode 100644 index 000..7624b94e6ad --- /dev/null +++ b/docker/thirdparties/docker-compose/kerberos/common/conf/doris-krb5.conf
(doris-website) branch asf-site updated (71feec695e7 -> 17b152b8579)
This is an automated email from the ASF dual-hosted git repository. github-bot pushed a change to branch asf-site in repository https://gitbox.apache.org/repos/asf/doris-website.git discard 71feec695e7 Automated deployment with doris branch @ ce21128106399e5e46f7cb3652067c7fb540f6ba new 17b152b8579 Automated deployment with doris branch @ ce21128106399e5e46f7cb3652067c7fb540f6ba This update added new revisions after undoing existing revisions. That is to say, some revisions that were in the old version of the branch are not in the new version. This situation occurs when a user --force pushes a change and generates a repository containing something like this: * -- * -- B -- O -- O -- O (71feec695e7) \ N -- N -- N refs/heads/asf-site (17b152b8579) You should already have received notification emails for all of the O revisions, and so the following emails describe only the N revisions from the common base, B. Any revisions marked "omit" are not gone; other references still refer to them. Any revisions marked "discard" are gone forever. The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: docs/1.2/search-index.json | 2 +- docs/2.0/search-index.json | 2 +- docs/dev/search-index.json | 2 +- search-index.json| 2 +- zh-CN/docs/1.2/search-index.json | 2 +- zh-CN/docs/2.0/search-index.json | 2 +- zh-CN/docs/dev/search-index.json | 2 +- zh-CN/search-index.json | 2 +- 8 files changed, 8 insertions(+), 8 deletions(-) - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated: [test](auth)add upgrade and downgrade compatibility test case (#34489)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new 5c344d8371d [test](auth)add upgrade and downgrade compatibility test case (#34489) 5c344d8371d is described below commit 5c344d8371d3d3e32e9a3be51d3b235c3274ee7a Author: zfr95 <87513668+zfr9...@users.noreply.github.com> AuthorDate: Thu Jun 27 09:14:01 2024 +0800 [test](auth)add upgrade and downgrade compatibility test case (#34489) [test](auth)add upgrade and downgrade compatibility test case --- .../test_master_slave_consistency_auth.groovy | 323 + .../suites/auth_p0/test_select_column_auth.groovy | 126 regression-test/suites/auth_up_down_p0/load.groovy | 191 .../auth_up_down_p0/test_grant_revoke_auth.groovy | 75 + 4 files changed, 715 insertions(+) diff --git a/regression-test/suites/auth_p0/test_master_slave_consistency_auth.groovy b/regression-test/suites/auth_p0/test_master_slave_consistency_auth.groovy new file mode 100644 index 000..379ea68f3ce --- /dev/null +++ b/regression-test/suites/auth_p0/test_master_slave_consistency_auth.groovy @@ -0,0 +1,323 @@ +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. 
+ +suite ("test_follower_consistent_auth","p0,auth") { + +def get_follower_ip = { +def result = sql """show frontends;""" +for (int i = 0; i < result.size(); i++) { +if (result[i][7] == "FOLLOWER" && result[i][8] == "false") { +return result[i][1] +} +} +return "null" +} +def switch_ip = get_follower_ip() +if (switch_ip != "null") { +logger.info("switch_ip: " + switch_ip) +def new_jdbc_url = context.config.jdbcUrl.replaceAll(/\/\/[0-9.]+:/, "//${switch_ip}:") +logger.info("new_jdbc_url: " + new_jdbc_url) + +String user = 'test_follower_consistent_user' +String pwd = 'C123_567p' +String dbName = 'test_select_column_auth_db' +String tableName = 'test_select_column_auth_table' +String role = 'test_select_column_auth_role' +String wg = 'test_select_column_auth_wg' +String rg = 'test_select_column_auth_rg' +try_sql("DROP role ${role}") +sql """CREATE ROLE ${role}""" +sql """drop WORKLOAD GROUP if exists '${wg}'""" +sql """CREATE WORKLOAD GROUP "${wg}" +PROPERTIES ( +"cpu_share"="10" +);""" +sql """DROP RESOURCE if exists ${rg}""" +sql """ +CREATE RESOURCE IF NOT EXISTS "${rg}" +PROPERTIES( +"type"="hdfs", +"fs.defaultFS"="127.0.0.1:8120", +"hadoop.username"="hive", +"hadoop.password"="hive", +"dfs.nameservices" = "my_ha", +"dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2", +"dfs.namenode.rpc-address.my_ha.my_namenode1" = "127.0.0.1:1", +"dfs.namenode.rpc-address.my_ha.my_namenode2" = "127.0.0.1:1", +"dfs.client.failover.proxy.provider" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider" +); +""" +try_sql("drop user ${user}") +try_sql """drop table if exists ${dbName}.${tableName}""" +sql """drop database if exists ${dbName}""" +sql """create database ${dbName}""" +sql """ +CREATE TABLE IF NOT EXISTS ${dbName}.`${tableName}` ( +id BIGINT, +username VARCHAR(20) +) +DISTRIBUTED BY HASH(id) BUCKETS 2 +PROPERTIES ( +"replication_num" = "1" +); +""" + +sql """create view ${dbName}.v1 as select * from ${dbName}.${tableName};""" +sql """alter table ${dbName}.${tableName} add rollup rollup1(username)""" +sleep(5 * 1000) +sql """create materialized view mv1 as select username from ${dbName}.${tableName}""" +sleep(5 * 1000) +sql """CREATE MATERIALIZED VIEW ${dbName}.mtmv1 +BUILD IMMEDIATE REFRESH AUTO ON MANUAL +
(doris) branch master updated: [Chore](test) add WideInteger unit test from clickhouse (#36752)
This is an automated email from the ASF dual-hosted git repository. yiguolei pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new 9631d3c99fa [Chore](test) add WideInteger unit test from clickhouse (#36752) 9631d3c99fa is described below commit 9631d3c99fa6658ae695e598b618d076a48ea9f6 Author: Pxl AuthorDate: Thu Jun 27 09:24:30 2024 +0800 [Chore](test) add WideInteger unit test from clickhouse (#36752) add WideInteger unit test from clickhouse --- be/test/vec/common/pod_array_test.cpp| 3 + be/test/vec/common/wide_integer_test.cpp | 197 +++ 2 files changed, 200 insertions(+) diff --git a/be/test/vec/common/pod_array_test.cpp b/be/test/vec/common/pod_array_test.cpp index c8525d23b97..e8b0a67ffa0 100644 --- a/be/test/vec/common/pod_array_test.cpp +++ b/be/test/vec/common/pod_array_test.cpp @@ -14,6 +14,9 @@ // KIND, either express or implied. See the License for the // specific language governing permissions and limitations // under the License. +// This file is copied from +// https://github.com/ClickHouse/ClickHouse/blob/master/src/Common/tests/gtest_pod_array.cpp +// and modified by Doris #include "vec/common/pod_array.h" diff --git a/be/test/vec/common/wide_integer_test.cpp b/be/test/vec/common/wide_integer_test.cpp new file mode 100644 index 000..21dde380da5 --- /dev/null +++ b/be/test/vec/common/wide_integer_test.cpp @@ -0,0 +1,197 @@ +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. 
+// This file is copied from +// https://github.com/ClickHouse/ClickHouse/blob/master/src/Common/tests/gtest_wide_integer.cpp +// and modified by Doris + +#include + +#include "vec/common/uint128.h" +#include "vec/core/types.h" + +namespace doris::vectorized { +TEST(WideInteger, Conversions) { +ASSERT_EQ(UInt64(UInt128(12345678901234567890ULL)), 12345678901234567890ULL); +ASSERT_EQ(UInt64(UInt256(12345678901234567890ULL)), 12345678901234567890ULL); + +ASSERT_EQ(__uint128_t(UInt128(12345678901234567890ULL)), 12345678901234567890ULL); +ASSERT_EQ(__uint128_t(UInt256(12345678901234567890ULL)), 12345678901234567890ULL); + +ASSERT_EQ((UInt64(UInt128(123.456))), 123); +ASSERT_EQ((UInt64(UInt256(123.456))), 123); + +ASSERT_EQ(UInt64(UInt128(123.456F)), 123); +ASSERT_EQ(UInt64(UInt256(123.456F)), 123); + +ASSERT_EQ(Float64(UInt128(1) * 10 * 10 * 10 * 10), 1e36); + +ASSERT_EQ(Float64(UInt256(1) * 10 * 10 * 10 * 10 * 10 * + 10 * 10 * 10), + 1e72); +} + +TEST(WideInteger, Arithmetic) { +Int128 minus_one = -1; +Int128 zero = 0; + +zero += -1; +ASSERT_EQ(zero, -1); +ASSERT_EQ(zero, minus_one); + +zero += minus_one; +#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ +ASSERT_EQ(0, memcmp(&zero, "\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFE", +sizeof(zero))); +#else +ASSERT_EQ(0, memcmp(&zero, "\xFE\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF\xFF", +sizeof(zero))); +#endif +zero += 2; +ASSERT_EQ(zero, 0); + +ASSERT_EQ(UInt256(12345678901234567890ULL) * 12345678901234567890ULL / 12345678901234567890ULL, + 12345678901234567890ULL); +ASSERT_EQ(UInt256(12345678901234567890ULL) * UInt256(12345678901234567890ULL) / + 12345678901234567890ULL, + 12345678901234567890ULL); +ASSERT_EQ(UInt256(12345678901234567890ULL) * 12345678901234567890ULL / + UInt256(12345678901234567890ULL), + 12345678901234567890ULL); +ASSERT_EQ(UInt256(12345678901234567890ULL) * 12345678901234567890ULL / 12345678901234567890ULL, + UInt256(12345678901234567890ULL)); +ASSERT_EQ(UInt128(12345678901234567890ULL) * 12345678901234567890ULL / + UInt128(12345678901234567890ULL), + 12345678901234567890ULL); +ASSERT_EQ(UInt256(12345678901234567890ULL) * UInt128(12345678901234567890ULL) / +
(doris) branch master updated (9631d3c99fa -> 66efefca3c3)
This is an automated email from the ASF dual-hosted git repository. yiguolei pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from 9631d3c99fa [Chore](test) add WideInteger unit test from clickhouse (#36752) add 66efefca3c3 [Refactor](scanner) remove the unless timer in scanner (#36746) No new revisions were added by this update. Summary of changes: be/src/pipeline/exec/scan_operator.cpp | 1 - be/src/pipeline/exec/scan_operator.h | 2 -- 2 files changed, 3 deletions(-) - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated (66efefca3c3 -> 929f040b1a3)
This is an automated email from the ASF dual-hosted git repository. gavinchou pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from 66efefca3c3 [Refactor](scanner) remove the unless timer in scanner (#36746) add 929f040b1a3 [feature](Azure) Implement generate_presigned_url function for Azure object client (#36829) No new revisions were added by this update. Summary of changes: be/src/io/fs/azure_obj_storage_client.cpp | 44 +-- be/src/io/fs/azure_obj_storage_client.h | 5 +--- be/src/io/fs/obj_storage_client.h | 12 - be/src/io/fs/s3_file_system.cpp | 3 ++- be/src/io/fs/s3_obj_storage_client.cpp| 21 --- be/src/io/fs/s3_obj_storage_client.h | 2 +- be/src/util/s3_util.h | 1 + cloud/src/recycler/azure_obj_client.cpp | 20 +++--- cloud/src/recycler/obj_store_accessor.h | 1 + 9 files changed, 75 insertions(+), 34 deletions(-) - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris-website) branch master updated: add some note for workload group/resource group (#793)
This is an automated email from the ASF dual-hosted git repository. wangbo pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris-website.git The following commit(s) were added to refs/heads/master by this push: new 9e6eb168170 add some note for workload group/resource group (#793) 9e6eb168170 is described below commit 9e6eb16817029b178971356e6bf7f6d333e80683 Author: wangbo AuthorDate: Thu Jun 27 09:58:16 2024 +0800 add some note for workload group/resource group (#793) --- docs/admin-manual/resource-admin/multi-tenant.md | 2 ++ .../admin-manual/resource-admin/multi-tenant.md| 3 ++- .../admin-manual/resource-admin/workload-group.md | 21 ++-- .../admin-manual/resource-admin/multi-tenant.md| 3 ++- .../admin-manual/resource-admin/workload-group.md | 23 +++--- .../admin-manual/resource-admin/multi-tenant.md| 2 ++ 6 files changed, 39 insertions(+), 15 deletions(-) diff --git a/docs/admin-manual/resource-admin/multi-tenant.md b/docs/admin-manual/resource-admin/multi-tenant.md index ef7cf47613f..1a6c1d505ba 100644 --- a/docs/admin-manual/resource-admin/multi-tenant.md +++ b/docs/admin-manual/resource-admin/multi-tenant.md @@ -181,6 +181,8 @@ At present, Doris's resource restrictions on single queries are mainly divided i 2. CPU limitations + > Note: Since Doris 2.1, cpu_resource_limit will gradually be replaced by workload group, so it is not recommended to use it. + Users can limit the CPU resources of the query in the following ways: ```sql diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/resource-admin/multi-tenant.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/resource-admin/multi-tenant.md index f4daa8d8ddc..9c5b0f86610 100644 --- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/resource-admin/multi-tenant.md +++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/resource-admin/multi-tenant.md @@ -180,6 +180,8 @@ FE 不参与用户数据的处理计算等工作,因此是一个资源消耗 2. CPU 限制 + >注:从Doris 2.1 之后,cpu_resource_limit 将逐渐被 workload group 替代,因此不建议使用该参数。 + 用户可以通过以下方式限制查询的 CPU 资源: ```sql @@ -192,7 +194,6 @@ FE 不参与用户数据的处理计算等工作,因此是一个资源消耗 `cpu_resource_limit` 的取值是一个相对值,取值越大则能够使用的 CPU 资源越多。但一个查询能使用的 CPU 上限也取决于表的分区分桶数。原则上,一个查询的最大 CPU 使用量和查询涉及到的 tablet 数量正相关。极端情况下,假设一个查询仅涉及到一个 tablet,则即使 `cpu_resource_limit` 设置一个较大值,也仅能使用 1 个 CPU 资源。 通过内存和 CPU 的资源限制。我们可以在一个资源组内,将用户的查询进行更细粒度的资源划分。比如我们可以让部分时效性要求不高,但是计算量很大的离线任务使用更少的 CPU 资源和更多的内存资源。而部分延迟敏感的在线任务,使用更多的 CPU 资源以及合理的内存资源。 -注:在 Doris 2.1 之后,cpu_resource_limit 将逐渐被 workload group 替代。 ## 最佳实践和向前兼容 diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/admin-manual/resource-admin/workload-group.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/admin-manual/resource-admin/workload-group.md index a1212e1b9f4..aa20fb79415 100644 --- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/admin-manual/resource-admin/workload-group.md +++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-2.0/admin-manual/resource-admin/workload-group.md @@ -40,14 +40,23 @@ Workload Group 可限制组内任务在单个 BE 节点上的计算资源和内 ## Workload Group 使用 -1. 开启 experimental_enable_workload_group 配置项,在 fe.conf 中设置: +1. 手动创建名为`normal`的Workload Group,该Group不可删除。也可以在打开Workload Group开关后重启FE,会自动创建这个Group。 +``` +create workload group if not exists normal +properties ( +"cpu_share"="10", +"memory_limit"="30%", +"enable_memory_overcommit"="true" +); +``` + +2. 
开启 experimental_enable_workload_group 配置项,在 fe.conf 中设置: ```bash experimental_enable_workload_group=true ``` -在开启该配置后系统会自动创建名为`normal`的默认 Workload Group。 -2. 创建 Workload Group: +3. 创建 Workload Group: ``` create workload group if not exists g1 @@ -60,13 +69,13 @@ properties ( 创建 workload group 详细可参考:[CREATE-WORKLOAD-GROUP](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-WORKLOAD-GROUP),另删除 Workload Group 可参考[DROP-WORKLOAD-GROUP](../../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-WORKLOAD-GROUP);修改 Workload Group 可参考:[ALTER-WORKLOAD-GROUP](../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-WORKLOAD-GROUP);查看 Workload Group 可参考:[WORKLOAD_GROUPS()](../../sql-manual/sql-functions/table-functions/workload [...] -3. 开启 Pipeline 执行引擎,Workload Group CPU 隔离基于 Pipeline 执行引擎实现,因此需开启 Session 变量: +4. 开启 Pipeline 执行引擎,Workload Group CPU 隔离基于 Pipeline 执行引擎实现,因此需开启 Session 变量: ```bash set experimental_enable_pipeline_engine = true; ``` -4. 绑定 Workload Group。 +5. 绑定 Workload Group。 * 通过设置 user property 将 user 默认绑定到 workload group,默认为`normal`: ``` @@ -84,7 +93,7 @@ session 变量`workload_group`优先于 user property `default_workload_group`, 如果是非 Admin 用户,需要先执行[SHOW-WORKLOAD-GROUPS](../../sql-manual/sql-reference/Show-Statements/SHOW-WORKLOAD-GROUPS) 确认下当前用户能否看到该 workload group,不能看到的 workload group 可能不存在或者当前用户没有权限,执行查询时会报错。给 worklaod gr
Error while running notifications feature from refs/heads/master:.asf.yaml in doris-website!
An error occurred while running notifications feature in .asf.yaml!: Invalid notification target 'comm...@foo.apache.org'. Must be a valid @doris.apache.org list! - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris-shade) branch master updated: Upgrade paimon to 0.8.1 (#46)
This is an automated email from the ASF dual-hosted git repository. diwu pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris-shade.git The following commit(s) were added to refs/heads/master by this push: new c84872a Upgrade paimon to 0.8.1 (#46) c84872a is described below commit c84872a09edae7c6c89d1c6a0f0e27439f2f68c4 Author: Calvin Kirs AuthorDate: Thu Jun 27 10:11:58 2024 +0800 Upgrade paimon to 0.8.1 (#46) --- CHANGE-LOG.md | 2 ++ hive-catalog-shade/pom.xml | 2 +- 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/CHANGE-LOG.md b/CHANGE-LOG.md index de331b7..57116d0 100644 --- a/CHANGE-LOG.md +++ b/CHANGE-LOG.md @@ -4,6 +4,8 @@ 2.1.0 - Upgrade paimon to 0.8.0 - Upgrade bcpkix-jdkon15 dependency to bcpkix-jdkon18 + 2.1.1 +- Upgrade paimon to 0.8.1 ### 2.0 2.0.0 - upgrade avro to 1.11.3 diff --git a/hive-catalog-shade/pom.xml b/hive-catalog-shade/pom.xml index 53baa3f..6100968 100644 --- a/hive-catalog-shade/pom.xml +++ b/hive-catalog-shade/pom.xml @@ -31,7 +31,7 @@ under the License. 3.3.6 2.8.1 1.4.3 -0.8.0 +0.8.1 1.11.3 2.5.2 1.13.1 - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris-shade) branch paimon-0.8.1 deleted (was 538dac4)
This is an automated email from the ASF dual-hosted git repository. kirs pushed a change to branch paimon-0.8.1 in repository https://gitbox.apache.org/repos/asf/doris-shade.git was 538dac4 Upgrade paimon to 0.8.1 The revisions that were on this branch are still contained in other references; therefore, this change does not discard any commits from the repository. - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
Error while running notifications feature from refs/heads/master:.asf.yaml in doris-website!
An error occurred while running notifications feature in .asf.yaml!: Invalid notification target 'comm...@foo.apache.org'. Must be a valid @doris.apache.org list! - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris-website) branch master updated: Add config thrift_max_message_size (#775)
This is an automated email from the ASF dual-hosted git repository. luzhijing pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris-website.git The following commit(s) were added to refs/heads/master by this push: new c9327ea132c Add config thrift_max_message_size (#775) c9327ea132c is described below commit c9327ea132c9d2805d2659fba89305dcd67ebee1 Author: walter AuthorDate: Thu Jun 27 10:14:12 2024 +0800 Add config thrift_max_message_size (#775) Co-authored-by: Luzhijing <82810928+luzhij...@users.noreply.github.com> --- docs/admin-manual/config/be-config.md | 8 +++ docs/admin-manual/config/fe-config.md | 8 +++ .../current/admin-manual/config/be-config.md | 72 -- .../current/admin-manual/config/fe-config.md | 8 +++ .../version-2.0/admin-manual/config/be-config.md | 8 +++ .../version-2.0/admin-manual/config/fe-config.md | 8 +++ .../version-2.1/admin-manual/config/be-config.md | 8 +++ .../version-2.1/admin-manual/config/fe-config.md | 8 +++ .../version-2.0/admin-manual/config/be-config.md | 8 +++ .../version-2.0/admin-manual/config/fe-config.md | 8 +++ .../version-2.1/admin-manual/config/be-config.md | 8 +++ .../version-2.1/admin-manual/config/fe-config.md | 8 +++ 12 files changed, 128 insertions(+), 32 deletions(-) diff --git a/docs/admin-manual/config/be-config.md b/docs/admin-manual/config/be-config.md index f9ecb796111..b0a2342ef3c 100644 --- a/docs/admin-manual/config/be-config.md +++ b/docs/admin-manual/config/be-config.md @@ -277,6 +277,14 @@ There are two ways to configure BE configuration items: - If the parameter is `THREAD_POOL`, the model is a blocking I/O model. + `thrift_max_message_size` + + + +Default: 100MB + +The maximum size of a (received) message of the thrift server, in bytes. If the size of the message sent by the client exceeds this limit, the Thrift server will reject the request and close the connection. As a result, the client will encounter the error: "connection has been closed by peer." In this case, you can try increasing this parameter. + `txn_commit_rpc_timeout_ms` * Description:txn submit rpc timeout diff --git a/docs/admin-manual/config/fe-config.md b/docs/admin-manual/config/fe-config.md index 60e0d531036..ec4dcb03247 100644 --- a/docs/admin-manual/config/fe-config.md +++ b/docs/admin-manual/config/fe-config.md @@ -491,6 +491,14 @@ The connection timeout and socket timeout config for thrift server. The value for thrift_client_timeout_ms is set to be zero to prevent read timeout. + `thrift_max_message_size` + + + +Default: 100MB + +The maximum size of a (received) message of the thrift server, in bytes. If the size of the message sent by the client exceeds this limit, the Thrift server will reject the request and close the connection. As a result, the client will encounter the error: "connection has been closed by peer." In this case, you can try increasing this parameter. 
+ `use_compact_thrift_rpc` Default: true diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/config/be-config.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/config/be-config.md index 1728818c37c..1c9173eedbf 100644 --- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/config/be-config.md +++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/admin-manual/config/be-config.md @@ -102,19 +102,19 @@ BE 重启后该配置将失效。如果想持久化修改结果,使用如下 `be_port` * 类型:int32 -* 描述:BE 上 thrift server 的端口号,用于接收来自 FE 的请求 +* 描述:BE 上 Thrift Server 的端口号,用于接收来自 FE 的请求 * 默认值:9060 `heartbeat_service_port` * 类型:int32 -* 描述:BE 上心跳服务端口(thrift),用于接收来自 FE 的心跳 +* 描述:BE 上心跳服务端口(Thrift),用于接收来自 FE 的心跳 * 默认值:9050 `webserver_port` * 类型:int32 -* 描述:BE 上的 http server 的服务端口 +* 描述:BE 上的 HTTP Server 的服务端口 * 默认值:8040 `brpc_port` @@ -126,7 +126,7 @@ BE 重启后该配置将失效。如果想持久化修改结果,使用如下 `arrow_flight_sql_port` * 类型:int32 -* 描述:FE 上的 Arrow Flight SQL server 的端口,用于从 Arrow Flight Client 和 BE 之间通讯 +* 描述:FE 上的 Arrow Flight SQL Server 的端口,用于从 Arrow Flight Client 和 BE 之间通讯 * 默认值:-1 `enable_https` @@ -265,29 +265,37 @@ BE 重启后该配置将失效。如果想持久化修改结果,使用如下 `thrift_rpc_timeout_ms` -* 描述:thrift 默认超时时间 +* 描述:Thrift 默认超时时间 * 默认值:6 `thrift_client_retry_interval_ms` * 类型:int64 -* 描述:用来为 be 的 thrift 客户端设置重试间隔,避免 fe 的 thrift server 发生雪崩问题,单位为 ms。 +* 描述:用来为 be 的 thrift 客户端设置重试间隔,避免 FE 的 Thrift Server 发生雪崩问题,单位为 ms。 * 默认值:1000 `thrift_connect_timeout_seconds` -* 描述:默认 thrift 客户端连接超时时间 +* 描述:默认 Thrift 客户端连接超时时间 * 默认值:3 (s) `thrift_server_type_of_fe` * 类型:string -* 描述:该配置表示 FE 的 Thrift 服务使用的服务模型,类型为 string, 大小写不敏感,该参数需要和 fe 的 thrift_server_type 参数的设置保持一致。目前该参数的取值有两个,`THREADED`和`THREAD_POOL`。 +* 描述:该配置表示 FE 的 Thrift 服务使用的服务模型,类型为 string, 大小写不敏感,该参数需要和 FE 的 thrift_server_type 参数的设置保持一致。目前该参数的取值有两个,`THREADED`和`THREAD_POOL`
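A quick takeaway from the `thrift_max_message_size` documentation added above: the parameter caps, in bytes, the size of a message the Thrift server will accept, and the client-side "connection has been closed by peer" error is the usual symptom of exceeding it. As a purely hypothetical illustration, raising the cap to about 200 MB would mean setting `thrift_max_message_size = 209715200` in the relevant fe.conf or be.conf; that concrete value is illustrative and not taken from the patch.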
(doris-shade) annotated tag doris-shade-2.1.1 updated (8bb84d8 -> 1856978)
This is an automated email from the ASF dual-hosted git repository. kirs pushed a change to annotated tag doris-shade-2.1.1 in repository https://gitbox.apache.org/repos/asf/doris-shade.git *** WARNING: tag doris-shade-2.1.1 was modified! *** from 8bb84d8 (commit) to 1856978 (tag) tagging 8bb84d81173449357701b45cfc21f09e5d562d7d (commit) by Calvin Kirs on Thu Jun 27 10:35:22 2024 +0800 - Log - [maven-release-plugin] copy for tag doris-shade-2.1.1 --- No new revisions were added by this update. Summary of changes: - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated (929f040b1a3 -> 233b0b5dd9d)
This is an automated email from the ASF dual-hosted git repository. lihaopeng pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from 929f040b1a3 [feature](Azure) Implement generate_presigned_url function for Azure object client (#36829) add 233b0b5dd9d [fix](bitmap) incorrect type of BitmapValue with fastunion (#36834) No new revisions were added by this update. Summary of changes: be/src/util/bitmap_value.h | 5 +++-- be/test/util/bitmap_value_test.cpp | 23 +++ 2 files changed, 26 insertions(+), 2 deletions(-) - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
svn commit: r70001 - in /dev/doris/doris-shade/2.1.1: ./ apache-doris-shade-2.1.1-src.tar.gz apache-doris-shade-2.1.1-src.tar.gz.asc apache-doris-shade-2.1.1-src.tar.gz.sha512
Author: kirs Date: Thu Jun 27 03:08:49 2024 New Revision: 70001 Log: release shade 2.1.1 Added: dev/doris/doris-shade/2.1.1/ dev/doris/doris-shade/2.1.1/apache-doris-shade-2.1.1-src.tar.gz (with props) dev/doris/doris-shade/2.1.1/apache-doris-shade-2.1.1-src.tar.gz.asc dev/doris/doris-shade/2.1.1/apache-doris-shade-2.1.1-src.tar.gz.sha512 Added: dev/doris/doris-shade/2.1.1/apache-doris-shade-2.1.1-src.tar.gz == Binary file - no diff available. Propchange: dev/doris/doris-shade/2.1.1/apache-doris-shade-2.1.1-src.tar.gz -- svn:mime-type = application/octet-stream Added: dev/doris/doris-shade/2.1.1/apache-doris-shade-2.1.1-src.tar.gz.asc == --- dev/doris/doris-shade/2.1.1/apache-doris-shade-2.1.1-src.tar.gz.asc (added) +++ dev/doris/doris-shade/2.1.1/apache-doris-shade-2.1.1-src.tar.gz.asc Thu Jun 27 03:08:49 2024 @@ -0,0 +1,16 @@ +-BEGIN PGP SIGNATURE- + +iQIzBAABCAAdFiEEtBNwzwcRfz7pC9k4HTdcbn3eZEIFAmZ802QACgkQHTdcbn3e +ZEIV/BAAwXsz0fEJ4Rl1niLojJMInZXVHNmBbqAZEc8Ebg1jm9TFhS7AmB9V55Rz +Jeb0ayQ0PbuEOCbIHULda8HuD/6UhGp3mUwFr+5XaGZPk9NUg9B+tHrt+Y5limt7 +D18yZxhPdsu7AfDB9MbInAxAjXC6b+LkF1+6s+JlST82vLSBF3xIIvNutgtH8oZm +VGlYmdDgGEpb4EalSWAFS7B4xFFPvmJlaGO8w2z6xkbwLeu0qqiXGpAvqPjlcTv9 +GTVNqKpjXyUiPXWpzsEcnDphFa7uy/w2g/yVpHpt81mHkDzS52+hFfmhyAx5SuKq +CYEjYM+QGCW2sUK78rbzflfkX2QWmw++U4P4kc4hNFUV6yQ4N1x/ReOpq+Kgpp15 +IYZw/fDuP1qR5Y8bngGJkjPwXwgtqUWbsvGsdx5ZxteglQMckiCiiNQGwCQKlVfl +cuzKAThMv85Qb/uHdMzUh8Yb/SaVzvrFCQHdkVS/uDXT6n9cxWFE/pyw++UW1t/C +DbTh3knGPTzepdOIjG2jB5wTpg1eIFB6IxRKuLbh83pEhXFpDeiQV3c21u3PvEBT +yvzcUdvOMDLNw1CQkHDqUVUVeuhTxS1/vNrsEVtjF/sIX4akcU5pBopg5HTyj8JN +SYV355iYlG6q0umt4U6u4oAWJd5v1frPBKMeNRx0h8MaDAvRFIY= +=UFTk +-END PGP SIGNATURE- Added: dev/doris/doris-shade/2.1.1/apache-doris-shade-2.1.1-src.tar.gz.sha512 == --- dev/doris/doris-shade/2.1.1/apache-doris-shade-2.1.1-src.tar.gz.sha512 (added) +++ dev/doris/doris-shade/2.1.1/apache-doris-shade-2.1.1-src.tar.gz.sha512 Thu Jun 27 03:08:49 2024 @@ -0,0 +1 @@ +5fd5f630496a04071bca2fb3d106d05f4d43a5a3442053e54b2f39c00a37e99bc514cc93243ded61fbba5f5a8912010bc029fdfe078a646408814ac69be8de3c apache-doris-shade-2.1.1-src.tar.gz - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated (233b0b5dd9d -> 23b0b7b28d4)
This is an automated email from the ASF dual-hosted git repository. morningman pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from 233b0b5dd9d [fix](bitmap) incorrect type of BitmapValue with fastunion (#36834) add 23b0b7b28d4 [fix](tvf) Partition columns in CTAS need to be compatible with the STRING type of external tables/TVF (#35489) No new revisions were added by this update. Summary of changes: .../apache/doris/analysis/DistributionDesc.java| 8 + .../doris/analysis/HashDistributionDesc.java | 1 + .../org/apache/doris/analysis/PartitionDesc.java | 4 + .../doris/analysis/RandomDistributionDesc.java | 5 + .../apache/doris/datasource/InternalCatalog.java | 14 +- .../datasource/hive/ThriftHMSCachedClient.java | 2 +- .../doris/nereids/parser/PartitionTableInfo.java | 48 +-- .../trees/plans/commands/CreateTableCommand.java | 17 +- .../trees/plans/commands/info/CreateTableInfo.java | 8 + .../commands/info/DistributionDescriptor.java | 4 + .../ExternalFileTableValuedFunction.java | 2 +- .../hive/write/test_hive_ctas_to_doris.out | 37 ++ .../external_table_p0/tvf/test_ctas_with_hdfs.out | 383 + .../hive/ddl/test_hive_ddl.groovy | 4 +- .../hive/write/test_hive_ctas_to_doris.groovy | 100 ++ .../tvf/test_ctas_with_hdfs.groovy | 129 +++ 16 files changed, 735 insertions(+), 31 deletions(-) create mode 100644 regression-test/data/external_table_p0/hive/write/test_hive_ctas_to_doris.out create mode 100644 regression-test/data/external_table_p0/tvf/test_ctas_with_hdfs.out create mode 100644 regression-test/suites/external_table_p0/hive/write/test_hive_ctas_to_doris.groovy create mode 100644 regression-test/suites/external_table_p0/tvf/test_ctas_with_hdfs.groovy - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris-website) branch master updated (c9327ea132c -> b4d6865cee7)
This is an automated email from the ASF dual-hosted git repository. luzhijing pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris-website.git from c9327ea132c Add config thrift_max_message_size (#775) add b4d6865cee7 [Doc]modify doc for superset (#730) No new revisions were added by this update. Summary of changes: docs/ecosystem/bi/apache-superset.md | 32 +- .../current/ecosystem/bi/apache-superset.md| 36 +++-- .../version-2.0/ecosystem/bi/apache-superset.md| 34 +-- .../version-2.1/ecosystem/bi/apache-superset.md| 34 +-- static/images/bi-superset-en-1.png | Bin 0 -> 17077 bytes static/images/bi-superset-en-2.jpeg| Bin 0 -> 36566 bytes static/images/bi-superset-en-2.png | Bin 0 -> 36566 bytes static/images/bi-superset-en-3.jpeg| Bin 0 -> 61291 bytes static/images/bi-superset-en-3.png | Bin 0 -> 61291 bytes static/images/bi-superset-en-4.jpeg| Bin 0 -> 74679 bytes static/images/bi-superset-en-4.png | Bin 0 -> 397160 bytes .../version-2.0/ecosystem/bi/apache-superset.md| 16 + .../version-2.1/ecosystem/bi/apache-superset.md| 16 + 13 files changed, 132 insertions(+), 36 deletions(-) create mode 100644 static/images/bi-superset-en-1.png create mode 100644 static/images/bi-superset-en-2.jpeg create mode 100644 static/images/bi-superset-en-2.png create mode 100644 static/images/bi-superset-en-3.jpeg create mode 100644 static/images/bi-superset-en-3.png create mode 100644 static/images/bi-superset-en-4.jpeg create mode 100644 static/images/bi-superset-en-4.png - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
Error while running notifications feature from refs/heads/master:.asf.yaml in doris-website!
An error occurred while running notifications feature in .asf.yaml!: Invalid notification target 'comm...@foo.apache.org'. Must be a valid @doris.apache.org list! - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated: [Improvement](multicatalog) support read tencent dlc table on lakefs (#36823)
This is an automated email from the ASF dual-hosted git repository. lide pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new 507dfd8b7ed [Improvement](multicatalog) support read tencent dlc table on lakefs (#36823) 507dfd8b7ed is described below commit 507dfd8b7ed73d7d0cfd211fe64edde3cf916c9e Author: Yulei-Yang AuthorDate: Thu Jun 27 12:02:51 2024 +0800 [Improvement](multicatalog) support read tencent dlc table on lakefs (#36823) --- .../main/java/org/apache/doris/common/FeConstants.java| 1 + .../java/org/apache/doris/common/util/LocationPath.java | 15 +++ .../doris/datasource/property/PropertyConverter.java | 3 ++- .../doris/datasource/property/PropertyConverterTest.java | 2 +- 4 files changed, 19 insertions(+), 2 deletions(-) diff --git a/fe/fe-core/src/main/java/org/apache/doris/common/FeConstants.java b/fe/fe-core/src/main/java/org/apache/doris/common/FeConstants.java index 9567304be2f..1c24ca69d4f 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/common/FeConstants.java +++ b/fe/fe-core/src/main/java/org/apache/doris/common/FeConstants.java @@ -74,6 +74,7 @@ public class FeConstants { public static final String FS_PREFIX_BOS = "bos"; public static final String FS_PREFIX_COS = "cos"; public static final String FS_PREFIX_COSN = "cosn"; +public static final String FS_PREFIX_LAKEFS = "lakefs"; public static final String FS_PREFIX_OBS = "obs"; public static final String FS_PREFIX_OFS = "ofs"; public static final String FS_PREFIX_GFS = "gfs"; diff --git a/fe/fe-core/src/main/java/org/apache/doris/common/util/LocationPath.java b/fe/fe-core/src/main/java/org/apache/doris/common/util/LocationPath.java index dd1641126bf..eccb483578a 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/common/util/LocationPath.java +++ b/fe/fe-core/src/main/java/org/apache/doris/common/util/LocationPath.java @@ -62,6 +62,7 @@ public class LocationPath { COSN, // Tencent OFS, // Tencent CHDFS GFS, // Tencent GooseFs, +LAKEFS, // used by Tencent DLC OSS, // Alibaba, OSS_HDFS, // JindoFS on OSS JFS, // JuiceFS, @@ -163,6 +164,10 @@ public class LocationPath { locationType = LocationType.COSN; this.location = location; break; +case FeConstants.FS_PREFIX_LAKEFS: +locationType = LocationType.COSN; +this.location = normalizedLakefsPath(location); +break; case FeConstants.FS_PREFIX_VIEWFS: locationType = LocationType.VIEWFS; this.location = location; @@ -277,6 +282,15 @@ public class LocationPath { } } +private static String normalizedLakefsPath(String location) { +int atIndex = location.indexOf("@dlc"); +if (atIndex != -1) { +return "lakefs://" + location.substring(atIndex + 1); +} else { +return location; +} +} + public static Pair getFSIdentity(String location, String bindBrokerName) { LocationPath locationPath = new LocationPath(location); FileSystemType fsType = (bindBrokerName != null) ? FileSystemType.BROKER : locationPath.getFileSystemType(); @@ -351,6 +365,7 @@ public class LocationPath { case GCS: // ATTN, for COSN, on FE side, use HadoopFS to access, but on BE, use S3 client to access. 
case COSN: +case LAKEFS: // now we only support S3 client for object storage on BE return TFileType.FILE_S3; case HDFS: diff --git a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/PropertyConverter.java b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/PropertyConverter.java index fd0c8846029..9b9a92f7f32 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/PropertyConverter.java +++ b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/PropertyConverter.java @@ -195,7 +195,7 @@ public class PropertyConverter { return OBSFileSystem.class.getName(); } else if (fsScheme.equalsIgnoreCase("oss")) { return AliyunOSSFileSystem.class.getName(); -} else if (fsScheme.equalsIgnoreCase("cosn")) { +} else if (fsScheme.equalsIgnoreCase("cosn") || fsScheme.equalsIgnoreCase("lakefs")) { return CosFileSystem.class.getName(); } else { return S3AFileSystem.class.getName(); @@ -361,6 +361,7 @@ public class PropertyConverter { cosProperties.put(CosNConfigKeys.COSN_ENDPOINT_SUFFIX_KEY, props.get(CosProperties.ENDPOINT)); cosProperties.put("fs.cosn.impl.disable.cache", "true"); cosProperties.put("fs.cosn.impl", getHadoopFSIm
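For readers skimming the LocationPath diff above, the heart of the change is `normalizedLakefsPath`: a Tencent DLC location is rewritten so that everything after the `@dlc` marker becomes the authority of a plain `lakefs://` URI, which then rides the existing COSN/S3 access path. Below is a minimal standalone sketch of that rewrite; the class name and the sample locations are hypothetical, and only the `indexOf("@dlc")` / `substring` behaviour is taken from the patch.

```java
// Standalone sketch of the lakefs location rewrite shown in the LocationPath
// diff above. LakefsPathSketch and the sample paths are hypothetical; the
// "@dlc" cut-over rule mirrors the patch.
public final class LakefsPathSketch {

    // Rewrite "lakefs://<user-part>@dlc<rest>" to "lakefs://dlc<rest>";
    // locations without an "@dlc" marker are returned unchanged.
    static String normalizedLakefsPath(String location) {
        int atIndex = location.indexOf("@dlc");
        if (atIndex != -1) {
            return "lakefs://" + location.substring(atIndex + 1);
        }
        return location;
    }

    public static void main(String[] args) {
        // Hypothetical DLC-style location, for illustration only.
        String raw = "lakefs://4000012345@dlcgz-123/warehouse/db/tbl/part-0.parquet";
        System.out.println(normalizedLakefsPath(raw));
        // -> lakefs://dlcgz-123/warehouse/db/tbl/part-0.parquet
        System.out.println(normalizedLakefsPath("cosn://bucket/key"));
        // -> cosn://bucket/key (no "@dlc" marker, unchanged)
    }
}
```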
(doris) branch branch-2.0 updated: [Improvement](multicatalog) support read tencent dlc on lakefs (#36807)
This is an automated email from the ASF dual-hosted git repository. lide pushed a commit to branch branch-2.0 in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/branch-2.0 by this push: new 17873d00871 [Improvement](multicatalog) support read tencent dlc on lakefs (#36807) 17873d00871 is described below commit 17873d00871c6a517a2af371b8d2b24b4f97 Author: Yulei-Yang AuthorDate: Thu Jun 27 12:03:16 2024 +0800 [Improvement](multicatalog) support read tencent dlc on lakefs (#36807) --- .../main/java/org/apache/doris/common/FeConstants.java| 1 + .../java/org/apache/doris/common/util/LocationPath.java | 15 +++ .../doris/datasource/property/PropertyConverter.java | 3 ++- .../doris/datasource/property/PropertyConverterTest.java | 2 +- 4 files changed, 19 insertions(+), 2 deletions(-) diff --git a/fe/fe-core/src/main/java/org/apache/doris/common/FeConstants.java b/fe/fe-core/src/main/java/org/apache/doris/common/FeConstants.java index 3012f6a62e7..1afa12856f0 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/common/FeConstants.java +++ b/fe/fe-core/src/main/java/org/apache/doris/common/FeConstants.java @@ -73,6 +73,7 @@ public class FeConstants { public static final String FS_PREFIX_BOS = "bos"; public static final String FS_PREFIX_COS = "cos"; public static final String FS_PREFIX_COSN = "cosn"; +public static final String FS_PREFIX_LAKEFS = "lakefs"; public static final String FS_PREFIX_OBS = "obs"; public static final String FS_PREFIX_OFS = "ofs"; public static final String FS_PREFIX_GFS = "gfs"; diff --git a/fe/fe-core/src/main/java/org/apache/doris/common/util/LocationPath.java b/fe/fe-core/src/main/java/org/apache/doris/common/util/LocationPath.java index fd7da29e519..a307ff63699 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/common/util/LocationPath.java +++ b/fe/fe-core/src/main/java/org/apache/doris/common/util/LocationPath.java @@ -61,6 +61,7 @@ public class LocationPath { COSN, // Tencent OFS, // Tencent CHDFS GFS, // Tencent GooseFs, +LAKEFS, // used by Tencent DLC OSS, // Alibaba, OSS_HDFS, // JindoFS on OSS JFS, // JuiceFS, @@ -158,6 +159,10 @@ public class LocationPath { locationType = LocationType.COSN; this.location = location; break; +case FeConstants.FS_PREFIX_LAKEFS: +locationType = LocationType.COSN; +this.location = normalizedLakefsPath(location); +break; case FeConstants.FS_PREFIX_VIEWFS: locationType = LocationType.VIEWFS; this.location = location; @@ -270,6 +275,15 @@ public class LocationPath { } } +private static String normalizedLakefsPath(String location) { +int atIndex = location.indexOf("@dlc"); +if (atIndex != -1) { +return "lakefs://" + location.substring(atIndex + 1); +} else { +return location; +} +} + public static Pair getFSIdentity(String location, String bindBrokerName) { LocationPath locationPath = new LocationPath(location); FileSystemType fsType = (bindBrokerName != null) ? FileSystemType.BROKER : locationPath.getFileSystemType(); @@ -337,6 +351,7 @@ public class LocationPath { case GCS: // ATTN, for COSN, on FE side, use HadoopFS to access, but on BE, use S3 client to access. 
case COSN: +case LAKEFS: // now we only support S3 client for object storage on BE return TFileType.FILE_S3; case HDFS: diff --git a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/PropertyConverter.java b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/PropertyConverter.java index c10594c4977..a2ea2cee47e 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/PropertyConverter.java +++ b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/PropertyConverter.java @@ -185,7 +185,7 @@ public class PropertyConverter { return OBSFileSystem.class.getName(); } else if (fsScheme.equalsIgnoreCase("oss")) { return AliyunOSSFileSystem.class.getName(); -} else if (fsScheme.equalsIgnoreCase("cosn")) { +} else if (fsScheme.equalsIgnoreCase("cosn") || fsScheme.equalsIgnoreCase("lakefs")) { return CosFileSystem.class.getName(); } else { return S3AFileSystem.class.getName(); @@ -341,6 +341,7 @@ public class PropertyConverter { cosProperties.put(CosNConfigKeys.COSN_ENDPOINT_SUFFIX_KEY, props.get(CosProperties.ENDPOINT)); cosProperties.put("fs.cosn.impl.disable.cache", "true"); cosProperties.put("fs.cosn.impl", getHadoopFSImplByS
(doris) branch branch-2.1 updated (23cf494b485 -> a8e9c89dc6e)
This is an automated email from the ASF dual-hosted git repository. morrysnow pushed a change to branch branch-2.1 in repository https://gitbox.apache.org/repos/asf/doris.git from 23cf494b485 [fix](schema-change) Fix schema-change from non-null to null (#36389) add a8e9c89dc6e [Fix](nereids) fix NormalizeAgg, change the upper project projections rewrite logic (#36161) (#36622) No new revisions were added by this update. Summary of changes: .../nereids/rules/analysis/NormalizeAggregate.java | 57 ++ .../normalize_aggregate_test.out} | 3 +- .../normalize_aggregate_test.groovy} | 24 +++-- 3 files changed, 57 insertions(+), 27 deletions(-) copy regression-test/data/{compaction/test_single_compaction_with_variant_inverted_index.out => nereids_rules_p0/normalize_aggregate/normalize_aggregate_test.out} (67%) copy regression-test/suites/{nereids_p0/empty_table/load.groovy => nereids_rules_p0/normalize_aggregate/normalize_aggregate_test.groovy} (66%) - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris-thirdparty) branch clucene updated: add error code FileNotFound (#227)
This is an automated email from the ASF dual-hosted git repository. airborne pushed a commit to branch clucene in repository https://gitbox.apache.org/repos/asf/doris-thirdparty.git The following commit(s) were added to refs/heads/clucene by this push: new 5db9db68e4 add error code FileNotFound (#227) 5db9db68e4 is described below commit 5db9db68e448b8ccfd360d02666bbac44e6f8d1a Author: Sun Chenyang AuthorDate: Thu Jun 27 13:03:47 2024 +0800 add error code FileNotFound (#227) --- src/core/CLucene/debug/error.h | 1 + 1 file changed, 1 insertion(+) diff --git a/src/core/CLucene/debug/error.h b/src/core/CLucene/debug/error.h index 0c48db3d28..5c5c620352 100644 --- a/src/core/CLucene/debug/error.h +++ b/src/core/CLucene/debug/error.h @@ -34,6 +34,7 @@ #define CL_ERR_OutOfMemory 23 #define CL_ERR_FieldReader 24 #define CL_ERR_MaxBytesLength 25 +#define CL_ERR_FileNotFound 26 //error try/throw/catch definitions - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch branch-2.1 updated: [improvement](clone) dead be will abort sched task #36795 (#36897)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a commit to branch branch-2.1 in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/branch-2.1 by this push: new f80750faede [improvement](clone) dead be will abort sched task #36795 (#36897) f80750faede is described below commit f80750faedecb6fe71f5a829e458432e462efc28 Author: yujun AuthorDate: Thu Jun 27 13:35:51 2024 +0800 [improvement](clone) dead be will abort sched task #36795 (#36897) cherry pick from #36795 --- .../org/apache/doris/clone/TabletScheduler.java| 9 ++ .../apache/doris/common/util/DebugPointUtil.java | 10 +- .../java/org/apache/doris/system/HeartbeatMgr.java | 10 ++ .../apache/doris/clone/BeDownCancelCloneTest.java | 148 + .../apache/doris/utframe/MockedBackendFactory.java | 8 ++ 5 files changed, 183 insertions(+), 2 deletions(-) diff --git a/fe/fe-core/src/main/java/org/apache/doris/clone/TabletScheduler.java b/fe/fe-core/src/main/java/org/apache/doris/clone/TabletScheduler.java index 094beca0425..b92d9fa86b7 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/clone/TabletScheduler.java +++ b/fe/fe-core/src/main/java/org/apache/doris/clone/TabletScheduler.java @@ -1867,10 +1867,13 @@ public class TabletScheduler extends MasterDaemon { * If task is timeout, remove the tablet. */ public void handleRunningTablets() { +Set aliveBeIds = Sets.newHashSet(Env.getCurrentSystemInfo().getAllBackendIds(true)); // 1. remove the tablet ctx if timeout List cancelTablets = Lists.newArrayList(); synchronized (this) { for (TabletSchedCtx tabletCtx : runningTablets.values()) { +long srcBeId = tabletCtx.getSrcBackendId(); +long destBeId = tabletCtx.getDestBackendId(); if (Config.disable_tablet_scheduler) { tabletCtx.setErrMsg("tablet scheduler is disabled"); cancelTablets.add(tabletCtx); @@ -1881,6 +1884,12 @@ public class TabletScheduler extends MasterDaemon { tabletCtx.setErrMsg("timeout"); cancelTablets.add(tabletCtx); stat.counterCloneTaskTimeout.incrementAndGet(); +} else if (destBeId > 0 && !aliveBeIds.contains(destBeId)) { +tabletCtx.setErrMsg("dest be " + destBeId + " is dead"); +cancelTablets.add(tabletCtx); +} else if (srcBeId > 0 && !aliveBeIds.contains(srcBeId)) { +tabletCtx.setErrMsg("src be " + srcBeId + " is dead"); +cancelTablets.add(tabletCtx); } } } diff --git a/fe/fe-core/src/main/java/org/apache/doris/common/util/DebugPointUtil.java b/fe/fe-core/src/main/java/org/apache/doris/common/util/DebugPointUtil.java index da06232f0c0..420cee77631 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/common/util/DebugPointUtil.java +++ b/fe/fe-core/src/main/java/org/apache/doris/common/util/DebugPointUtil.java @@ -134,12 +134,18 @@ public class DebugPointUtil { addDebugPoint(name, new DebugPoint()); } -public static void addDebugPointWithValue(String name, E value) { +public static void addDebugPointWithParams(String name, Map params) { DebugPoint debugPoint = new DebugPoint(); -debugPoint.params.put("value", String.format("%s", value)); +debugPoint.params = params; addDebugPoint(name, debugPoint); } +public static void addDebugPointWithValue(String name, E value) { +Map params = Maps.newHashMap(); +params.put("value", String.format("%s", value)); +addDebugPointWithParams(name, params); +} + public static void removeDebugPoint(String name) { DebugPoint debugPoint = debugPoints.remove(name); LOG.info("remove debug point: name={}, exists={}", name, debugPoint != null); diff --git 
a/fe/fe-core/src/main/java/org/apache/doris/system/HeartbeatMgr.java b/fe/fe-core/src/main/java/org/apache/doris/system/HeartbeatMgr.java index 9d13218ae06..9a5058f8d02 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/system/HeartbeatMgr.java +++ b/fe/fe-core/src/main/java/org/apache/doris/system/HeartbeatMgr.java @@ -24,6 +24,7 @@ import org.apache.doris.common.Config; import org.apache.doris.common.FeConstants; import org.apache.doris.common.ThreadPoolManager; import org.apache.doris.common.Version; +import org.apache.doris.common.util.DebugPointUtil; import org.apache.doris.common.util.MasterDaemon; import org.apache.doris.persist.HbPackage; import org.apache.doris.resource.Tag; @@ -56,6 +57,7 @@ import com.google.common.collect.Maps; import org.apache.logging.log4j.LogManager; import org.apache.logging.log4j.Logger; +import java.util.Arrays; import java.util.List; import java.util.Map; import java.util.
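Stripped of Doris-specific types, the behavioural change in `TabletScheduler.handleRunningTablets` above is a liveness sweep: snapshot the alive backend ids once, then cancel every running task whose source or destination backend is no longer in that set. The sketch below shows that pattern in isolation; `Task` and the ids used in `main` are invented for illustration, while the alive-set checks follow the diff.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Standalone sketch of the "dead BE aborts sched task" check from the
// TabletScheduler diff above. Task and the sample ids are hypothetical;
// only the alive-set filtering mirrors the patch.
public final class DeadBackendSweepSketch {

    // A running scheduling task bound to a source and a destination backend.
    // A non-positive id means "no backend bound on that side".
    record Task(long tabletId, long srcBeId, long destBeId) {}

    // Collect cancellation messages for tasks whose backend has died.
    static List<String> sweep(List<Task> running, Set<Long> aliveBeIds) {
        List<String> cancelled = new ArrayList<>();
        for (Task t : running) {
            if (t.destBeId() > 0 && !aliveBeIds.contains(t.destBeId())) {
                cancelled.add("tablet " + t.tabletId() + ": dest be " + t.destBeId() + " is dead");
            } else if (t.srcBeId() > 0 && !aliveBeIds.contains(t.srcBeId())) {
                cancelled.add("tablet " + t.tabletId() + ": src be " + t.srcBeId() + " is dead");
            }
        }
        return cancelled;
    }

    public static void main(String[] args) {
        List<Task> running = List.of(
                new Task(1001, 10, 11),  // both backends alive
                new Task(1002, 10, 12),  // destination 12 has died
                new Task(1003, 13, 11)); // source 13 has died
        sweep(running, Set.of(10L, 11L)).forEach(System.out::println);
    }
}
```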
(doris) branch master updated: [fix](ms) Fix txn approximate size (#36880)
This is an automated email from the ASF dual-hosted git repository. gavinchou pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new 228237b205a [fix](ms) Fix txn approximate size (#36880) 228237b205a is described below commit 228237b205a72e238f47de6c81405f5f57c38ee8 Author: walter AuthorDate: Thu Jun 27 13:37:05 2024 +0800 [fix](ms) Fix txn approximate size (#36880) --- cloud/src/meta-service/txn_kv.cpp | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/cloud/src/meta-service/txn_kv.cpp b/cloud/src/meta-service/txn_kv.cpp index f48ac8f9912..75da65c8109 100644 --- a/cloud/src/meta-service/txn_kv.cpp +++ b/cloud/src/meta-service/txn_kv.cpp @@ -265,7 +265,7 @@ void Transaction::put(std::string_view key, std::string_view val) { ++num_put_keys_; put_bytes_ += key.size() + val.size(); -approximate_bytes_ = key.size() * 3 + val.size(); // See fdbclient/ReadYourWrites.actor.cpp +approximate_bytes_ += key.size() * 3 + val.size(); // See fdbclient/ReadYourWrites.actor.cpp } // return 0 for success otherwise error - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch branch-2.0 updated: [improvement](balance) partition rebalance chose disk by rr #36826 (#36901)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a commit to branch branch-2.0 in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/branch-2.0 by this push: new f5c058527ce [improvement](balance) partition rebalance chose disk by rr #36826 (#36901) f5c058527ce is described below commit f5c058527cee79ef4518e1e0726b05fcb3fca9f7 Author: yujun AuthorDate: Thu Jun 27 13:37:07 2024 +0800 [improvement](balance) partition rebalance chose disk by rr #36826 (#36901) cherry pick from #36826 --- .../apache/doris/clone/PartitionRebalancer.java| 5 +- .../org/apache/doris/clone/TabletScheduler.java| 48 +--- .../java/org/apache/doris/clone/PathSlotTest.java | 64 ++ 3 files changed, 93 insertions(+), 24 deletions(-) diff --git a/fe/fe-core/src/main/java/org/apache/doris/clone/PartitionRebalancer.java b/fe/fe-core/src/main/java/org/apache/doris/clone/PartitionRebalancer.java index a6b8bf04b12..fd70e5116f2 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/clone/PartitionRebalancer.java +++ b/fe/fe-core/src/main/java/org/apache/doris/clone/PartitionRebalancer.java @@ -39,7 +39,6 @@ import java.util.List; import java.util.Map; import java.util.NavigableSet; import java.util.Random; -import java.util.Set; import java.util.concurrent.atomic.AtomicLong; import java.util.stream.Collectors; @@ -266,9 +265,9 @@ public class PartitionRebalancer extends Rebalancer { Preconditions.checkNotNull(slot, "unable to get slot of toBe " + move.toBe); List paths = beStat.getPathStatistics(); -Set availPath = paths.stream().filter(path -> path.getStorageMedium() == tabletCtx.getStorageMedium() +List availPath = paths.stream().filter(path -> path.getStorageMedium() == tabletCtx.getStorageMedium() && path.isFit(tabletCtx.getTabletSize(), false) == BalanceStatus.OK) - .map(RootPathLoadStatistic::getPathHash).collect(Collectors.toSet()); + .map(RootPathLoadStatistic::getPathHash).collect(Collectors.toList()); long pathHash = slot.takeAnAvailBalanceSlotFrom(availPath); if (pathHash == -1) { throw new SchedException(SchedException.Status.SCHEDULE_FAILED, SchedException.SubCode.WAITING_SLOT, diff --git a/fe/fe-core/src/main/java/org/apache/doris/clone/TabletScheduler.java b/fe/fe-core/src/main/java/org/apache/doris/clone/TabletScheduler.java index 8ae51be7a96..3341f5bb305 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/clone/TabletScheduler.java +++ b/fe/fe-core/src/main/java/org/apache/doris/clone/TabletScheduler.java @@ -1933,9 +1933,12 @@ public class TabletScheduler extends MasterDaemon { // path hash -> slot num private Map pathSlots = Maps.newConcurrentMap(); private long beId; +// only use in takeAnAvailBalanceSlotFrom, make pick RR +private long lastPickPathHash; public PathSlot(Map paths, long beId) { this.beId = beId; +this.lastPickPathHash = -1; for (Map.Entry entry : paths.entrySet()) { pathSlots.put(entry.getKey(), new Slot(entry.getValue())); } @@ -2046,19 +2049,6 @@ public class TabletScheduler extends MasterDaemon { return num; } -/** - * get path whose balance slot num is larger than 0 - */ -public synchronized Set getAvailPathsForBalance() { -Set pathHashs = Sets.newHashSet(); -for (Map.Entry entry : pathSlots.entrySet()) { -if (entry.getValue().getAvailableBalance() > 0) { -pathHashs.add(entry.getKey()); -} -} -return pathHashs; -} - public synchronized List> getSlotInfo(long beId) { List> results = Lists.newArrayList(); pathSlots.forEach((key, value) -> { @@ -2091,15 +2081,31 @@ public class TabletScheduler 
extends MasterDaemon { return -1; } -public synchronized long takeAnAvailBalanceSlotFrom(Set pathHashs) { -for (Long pathHash : pathHashs) { -Slot slot = pathSlots.get(pathHash); -if (slot == null) { -continue; +public long takeAnAvailBalanceSlotFrom(List pathHashs) { +if (pathHashs.isEmpty()) { +return -1; +} + +Collections.sort(pathHashs); +synchronized (this) { +int preferSlotIndex = pathHashs.indexOf(lastPickPathHash) + 1; +if (preferSlotIndex < 0 || preferSlotIndex >= pathHashs.size()) { +preferSlotIndex = 0; } -if (slot.balanceUsed < slot.getBalanceTotal()) { -slot.balanceUsed++; -
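The visible part of the `PathSlot.takeAnAvailBalanceSlotFrom` rewrite above sorts the candidate path hashes and starts the search just after the path picked last time (`lastPickPathHash`), so balance slots rotate round-robin across disks instead of always favouring the first available path. The remainder of the method is not shown above, so in the sketch below the scan-and-consume step is an assumption; the sorted candidates and the "start after the last pick" index come from the patch.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Sketch of round-robin disk selection modelled on the visible part of the
// PathSlot.takeAnAvailBalanceSlotFrom change above. The free-slot map and
// the consume step are assumptions; the sorted list and the preferred start
// index mirror the patch.
public final class RoundRobinPathPickSketch {

    private long lastPickPathHash = -1;

    // pathHashs: candidate disks; freeSlots: remaining balance slots per disk.
    // Returns the chosen path hash, or -1 if no candidate has a free slot.
    synchronized long takeAnAvailBalanceSlotFrom(List<Long> pathHashs, Map<Long, Integer> freeSlots) {
        if (pathHashs.isEmpty()) {
            return -1;
        }
        List<Long> sorted = new ArrayList<>(pathHashs);
        Collections.sort(sorted);
        int preferSlotIndex = sorted.indexOf(lastPickPathHash) + 1;
        if (preferSlotIndex < 0 || preferSlotIndex >= sorted.size()) {
            preferSlotIndex = 0;
        }
        // Assumed continuation: scan one full cycle from the preferred index
        // and take the first path that still has a free balance slot.
        for (int i = 0; i < sorted.size(); i++) {
            long pathHash = sorted.get((preferSlotIndex + i) % sorted.size());
            int free = freeSlots.getOrDefault(pathHash, 0);
            if (free > 0) {
                freeSlots.put(pathHash, free - 1);
                lastPickPathHash = pathHash;
                return pathHash;
            }
        }
        return -1;
    }
}
```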
(doris) branch branch-2.1 updated (f80750faede -> 89fc55d833d)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a change to branch branch-2.1 in repository https://gitbox.apache.org/repos/asf/doris.git from f80750faede [improvement](clone) dead be will abort sched task #36795 (#36897) add 89fc55d833d [improvement](balance) partition rebalance chose disk by rr #36826 (#36900) No new revisions were added by this update. Summary of changes: .../apache/doris/clone/PartitionRebalancer.java| 5 +- .../org/apache/doris/clone/TabletScheduler.java| 48 +--- .../java/org/apache/doris/clone/PathSlotTest.java | 64 ++ 3 files changed, 93 insertions(+), 24 deletions(-) create mode 100644 fe/fe-core/src/test/java/org/apache/doris/clone/PathSlotTest.java - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated: [chore](autobucket) add autobucket test and log (#36874)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new 6101c5a2e8e [chore](autobucket) add autobucket test and log (#36874) 6101c5a2e8e is described below commit 6101c5a2e8eef27776fc22ff796def55ea3a220e Author: yujun AuthorDate: Thu Jun 27 13:44:18 2024 +0800 [chore](autobucket) add autobucket test and log (#36874) We met unexpect autobucket case for online env. Add log for investigation. --- .../doris/clone/DynamicPartitionScheduler.java | 59 +- .../doris/catalog/DynamicPartitionTableTest.java | 38 ++ 2 files changed, 84 insertions(+), 13 deletions(-) diff --git a/fe/fe-core/src/main/java/org/apache/doris/clone/DynamicPartitionScheduler.java b/fe/fe-core/src/main/java/org/apache/doris/clone/DynamicPartitionScheduler.java index 8c5f4f669c5..d17af1836fe 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/clone/DynamicPartitionScheduler.java +++ b/fe/fe-core/src/main/java/org/apache/doris/clone/DynamicPartitionScheduler.java @@ -50,6 +50,7 @@ import org.apache.doris.common.util.MasterDaemon; import org.apache.doris.common.util.PropertyAnalyzer; import org.apache.doris.common.util.RangeUtils; import org.apache.doris.common.util.TimeUtils; +import org.apache.doris.rpc.RpcException; import org.apache.doris.thrift.TStorageMedium; import com.google.common.base.Strings; @@ -71,6 +72,7 @@ import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.Set; +import java.util.stream.Collectors; /** * This class is used to periodically add or drop partition on an olapTable which specify dynamic partition properties @@ -186,7 +188,7 @@ public class DynamicPartitionScheduler extends MasterDaemon { } private static int getBucketsNum(DynamicPartitionProperty property, OlapTable table, -String nowPartitionName, boolean executeFirstTime) { +String partitionName, String nowPartitionName, boolean executeFirstTime) { // if execute first time, all partitions no contain data if (!table.isAutoBucket() || executeFirstTime) { return property.getBuckets(); @@ -194,27 +196,56 @@ public class DynamicPartitionScheduler extends MasterDaemon { // auto bucket // get all history partitions -ArrayList partitionSizeArray = Lists.newArrayList(); RangePartitionInfo info = (RangePartitionInfo) (table.getPartitionInfo()); List> idToItems = new ArrayList<>(info.getIdToItem(false).entrySet()); idToItems.sort(Comparator.comparing(o -> ((RangePartitionItem) o.getValue()).getItems().upperEndpoint())); -for (Map.Entry idToItem : idToItems) { -Partition partition = table.getPartition(idToItem.getKey()); -// exclude current partition because its data isn't enough one week/day/hour. 
-if (partition != null && !partition.getName().equals(nowPartitionName) -&& partition.getVisibleVersion() >= 2) { -partitionSizeArray.add(partition.getAllDataSize(true)); +List partitions = idToItems.stream() +.map(entry -> table.getPartition(entry.getKey())) +.filter(partition -> partition != null && !partition.getName().equals(nowPartitionName)) +.collect(Collectors.toList()); +List visibleVersions = null; +try { +visibleVersions = Partition.getVisibleVersions(partitions); +} catch (RpcException e) { +LOG.warn("autobucket use property's buckets get visible version fail, table: [{}-{}], " ++ "partition: {}, buckets num: {}, exception: ", +table.getName(), table.getId(), partitionName, property.getBuckets(), e); +return property.getBuckets(); +} + +List hasDataPartitions = Lists.newArrayList(); +for (int i = 0; i < partitions.size(); i++) { +if (visibleVersions.get(i) >= 2) { +hasDataPartitions.add(partitions.get(i)); } } // no exist history partition data -if (partitionSizeArray.isEmpty()) { +if (hasDataPartitions.isEmpty()) { +LOG.info("autobucket use property's buckets due to all partitions no data, table: [{}-{}], " ++ "partition: {}, buckets num: {}", +table.getName(), table.getId(), partitionName, property.getBuckets()); return property.getBuckets(); } +ArrayList partitionSizeArray = hasDataPartitions.stream() +.map(partition -> partition.getAllDataSize(true)) +.collect(Collectors.toCollection(ArrayList::new)); +long estimatePartitionSize = getNextPartitionSize(partitionSizeArray
(doris) branch branch-2.1 updated: [improvement](compaction) be do not compact invisible version to avoid query error -230 #28082 (#36222)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a commit to branch branch-2.1 in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/branch-2.1 by this push: new 22cb7b8fcbb [improvement](compaction) be do not compact invisible version to avoid query error -230 #28082 (#36222) 22cb7b8fcbb is described below commit 22cb7b8fcbb938e9342361c2d5bc1ec1fe23a579 Author: yujun AuthorDate: Thu Jun 27 13:45:21 2024 +0800 [improvement](compaction) be do not compact invisible version to avoid query error -230 #28082 (#36222) cherry pick from #28082 --- be/src/agent/agent_server.cpp | 13 + be/src/agent/agent_server.h| 1 + be/src/agent/task_worker_pool.cpp | 13 + be/src/agent/task_worker_pool.h| 2 + be/src/common/config.cpp | 7 + be/src/common/config.h | 7 + be/src/olap/full_compaction.cpp| 2 +- be/src/olap/olap_common.h | 21 + be/src/olap/tablet.cpp | 74 +++- be/src/olap/tablet.h | 13 + be/src/olap/tablet_manager.cpp | 72 +++- be/src/olap/tablet_manager.h | 19 +- be/src/olap/tablet_meta.cpp| 10 + be/src/olap/tablet_meta.h | 1 + be/src/olap/task/engine_clone_task.cpp | 2 + .../main/java/org/apache/doris/common/Config.java | 6 +- .../java/org/apache/doris/alter/AlterHandler.java | 3 +- .../java/org/apache/doris/backup/RestoreJob.java | 3 +- .../main/java/org/apache/doris/catalog/Env.java| 8 +- .../org/apache/doris/catalog/MetadataViewer.java | 2 +- .../java/org/apache/doris/catalog/Replica.java | 75 ++-- .../main/java/org/apache/doris/catalog/Tablet.java | 22 +- .../apache/doris/catalog/TabletInvertedIndex.java | 50 ++- .../org/apache/doris/catalog/TabletStatMgr.java| 11 +- .../org/apache/doris/clone/TabletSchedCtx.java | 13 +- .../org/apache/doris/clone/TabletScheduler.java| 6 +- .../apache/doris/common/proc/ReplicasProcNode.java | 5 +- .../doris/common/proc/TabletHealthProcDir.java | 3 +- .../apache/doris/common/proc/TabletsProcDir.java | 11 +- .../apache/doris/datasource/InternalCatalog.java | 5 +- ...oCollector.java => PartitionInfoCollector.java} | 47 ++- .../org/apache/doris/master/ReportHandler.java | 67 +-- .../java/org/apache/doris/system/Diagnoser.java| 5 +- .../java/org/apache/doris/task/AgentBatchTask.java | 10 + .../doris/task/UpdateVisibleVersionTask.java | 40 ++ .../doris/transaction/DatabaseTransactionMgr.java | 24 +- .../doris/transaction/GlobalTransactionMgr.java| 5 +- .../doris/transaction/PublishVersionDaemon.java| 30 +- .../org/apache/doris/alter/RollupJobV2Test.java| 10 +- .../apache/doris/alter/SchemaChangeJobV2Test.java | 6 +- .../org/apache/doris/analysis/ShowReplicaTest.java | 3 +- .../java/org/apache/doris/catalog/ReplicaTest.java | 21 +- .../doris/clone/DiskReblanceWhenSchedulerIdle.java | 3 +- .../org/apache/doris/clone/RebalancerTestUtil.java | 5 +- .../org/apache/doris/clone/RepairVersionTest.java | 8 +- .../doris/clone/TabletReplicaTooSlowTest.java | 4 +- .../org/apache/doris/clone/TabletSchedCtxTest.java | 16 +- .../org/apache/doris/planner/QueryPlanTest.java| 12 +- .../transaction/DatabaseTransactionMgrTest.java| 6 +- .../transaction/GlobalTransactionMgrTest.java | 29 +- gensrc/thrift/AgentService.thrift | 5 + gensrc/thrift/BackendService.thrift| 5 +- gensrc/thrift/MasterService.thrift | 4 +- gensrc/thrift/Types.thrift | 3 +- .../test_compaction_with_visible_version.out | 448 + .../doris/regression/suite/SuiteCluster.groovy | 3 + .../test_compaction_with_visible_version.groovy| 275 + 57 files changed, 1333 insertions(+), 241 deletions(-) diff --git a/be/src/agent/agent_server.cpp 
b/be/src/agent/agent_server.cpp index e7217cdcff0..5355c037b19 100644 --- a/be/src/agent/agent_server.cpp +++ b/be/src/agent/agent_server.cpp @@ -163,6 +163,10 @@ void AgentServer::start_workers(ExecEnv* exec_env) { _clean_trash_workers = std::make_unique( "CLEAN_TRASH", 1, [&engine](auto&& task) {return clean_trash_callback(engine, task); }); + +_update_visible_version_workers = std::make_unique( +"UPDATE_VISIBLE_VERSION", 1, [&engine](auto&& task) { return visible_version_callback(engine, task); }); + // clang-format on } @@ -278,6 +282,15 @@ void AgentServer::submit_tasks(TAgentResult& agent_result, "task(signature={}) has wrong request member = cle
(doris) branch branch-2.1 updated: [Improvement](multicatalog) support read tencent dlc table on lakefs (#36891)
This is an automated email from the ASF dual-hosted git repository. lide pushed a commit to branch branch-2.1 in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/branch-2.1 by this push: new 8a1ebba1cc9 [Improvement](multicatalog) support read tencent dlc table on lakefs (#36891) 8a1ebba1cc9 is described below commit 8a1ebba1cc90b2b4d05dea596d03b66fbd1bbe79 Author: Yulei-Yang AuthorDate: Thu Jun 27 14:03:48 2024 +0800 [Improvement](multicatalog) support read tencent dlc table on lakefs (#36891) bp #36823 --- .../main/java/org/apache/doris/common/FeConstants.java| 1 + .../java/org/apache/doris/common/util/LocationPath.java | 15 +++ .../doris/datasource/property/PropertyConverter.java | 3 ++- .../doris/datasource/property/PropertyConverterTest.java | 2 +- 4 files changed, 19 insertions(+), 2 deletions(-) diff --git a/fe/fe-core/src/main/java/org/apache/doris/common/FeConstants.java b/fe/fe-core/src/main/java/org/apache/doris/common/FeConstants.java index a502d79e032..f137c4cab49 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/common/FeConstants.java +++ b/fe/fe-core/src/main/java/org/apache/doris/common/FeConstants.java @@ -71,6 +71,7 @@ public class FeConstants { public static final String FS_PREFIX_BOS = "bos"; public static final String FS_PREFIX_COS = "cos"; public static final String FS_PREFIX_COSN = "cosn"; +public static final String FS_PREFIX_LAKEFS = "lakefs"; public static final String FS_PREFIX_OBS = "obs"; public static final String FS_PREFIX_OFS = "ofs"; public static final String FS_PREFIX_GFS = "gfs"; diff --git a/fe/fe-core/src/main/java/org/apache/doris/common/util/LocationPath.java b/fe/fe-core/src/main/java/org/apache/doris/common/util/LocationPath.java index dd1641126bf..eccb483578a 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/common/util/LocationPath.java +++ b/fe/fe-core/src/main/java/org/apache/doris/common/util/LocationPath.java @@ -62,6 +62,7 @@ public class LocationPath { COSN, // Tencent OFS, // Tencent CHDFS GFS, // Tencent GooseFs, +LAKEFS, // used by Tencent DLC OSS, // Alibaba, OSS_HDFS, // JindoFS on OSS JFS, // JuiceFS, @@ -163,6 +164,10 @@ public class LocationPath { locationType = LocationType.COSN; this.location = location; break; +case FeConstants.FS_PREFIX_LAKEFS: +locationType = LocationType.COSN; +this.location = normalizedLakefsPath(location); +break; case FeConstants.FS_PREFIX_VIEWFS: locationType = LocationType.VIEWFS; this.location = location; @@ -277,6 +282,15 @@ public class LocationPath { } } +private static String normalizedLakefsPath(String location) { +int atIndex = location.indexOf("@dlc"); +if (atIndex != -1) { +return "lakefs://" + location.substring(atIndex + 1); +} else { +return location; +} +} + public static Pair getFSIdentity(String location, String bindBrokerName) { LocationPath locationPath = new LocationPath(location); FileSystemType fsType = (bindBrokerName != null) ? FileSystemType.BROKER : locationPath.getFileSystemType(); @@ -351,6 +365,7 @@ public class LocationPath { case GCS: // ATTN, for COSN, on FE side, use HadoopFS to access, but on BE, use S3 client to access. 
case COSN: +case LAKEFS: // now we only support S3 client for object storage on BE return TFileType.FILE_S3; case HDFS: diff --git a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/PropertyConverter.java b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/PropertyConverter.java index 425ea6cdcfe..9dea5eb3802 100644 --- a/fe/fe-core/src/main/java/org/apache/doris/datasource/property/PropertyConverter.java +++ b/fe/fe-core/src/main/java/org/apache/doris/datasource/property/PropertyConverter.java @@ -188,7 +188,7 @@ public class PropertyConverter { return OBSFileSystem.class.getName(); } else if (fsScheme.equalsIgnoreCase("oss")) { return AliyunOSSFileSystem.class.getName(); -} else if (fsScheme.equalsIgnoreCase("cosn")) { +} else if (fsScheme.equalsIgnoreCase("cosn") || fsScheme.equalsIgnoreCase("lakefs")) { return CosFileSystem.class.getName(); } else { return S3AFileSystem.class.getName(); @@ -354,6 +354,7 @@ public class PropertyConverter { cosProperties.put(CosNConfigKeys.COSN_ENDPOINT_SUFFIX_KEY, props.get(CosProperties.ENDPOINT)); cosProperties.put("fs.cosn.impl.disable.cache", "true"); cosProperties.put("f
(doris) branch master updated (6101c5a2e8e -> c2f402dd9b6)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from 6101c5a2e8e [chore](autobucket) add autobucket test and log (#36874) add c2f402dd9b6 [fix](oom) avoid oom when a lot of tablets fail on load (#36873) No new revisions were added by this update. Summary of changes: be/src/agent/heartbeat_server.cpp | 7 ++- .../doris/transaction/DatabaseTransactionMgr.java | 65 -- 2 files changed, 55 insertions(+), 17 deletions(-) - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated (c2f402dd9b6 -> 992c27c78d5)
This is an automated email from the ASF dual-hosted git repository. wangbo pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from c2f402dd9b6 [fix](oom) avoid oom when a lot of tablets fail on load (#36873) add 992c27c78d5 [Fix]add log when npe (#36876) No new revisions were added by this update. Summary of changes: .../main/java/org/apache/doris/qe/Coordinator.java | 21 + 1 file changed, 21 insertions(+) - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated (992c27c78d5 -> 943bafcf15f)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from 992c27c78d5 [Fix]add log when npe (#36876) add 943bafcf15f [Regression test]Add some partition test for streamload 2pc regression test (#36379) No new revisions were added by this update. Summary of changes: .../load_p0/stream_load/test_stream_load_2pc.out | 464 +++- ...ic_data.csv => two_phase_commit_basic_data.csv} | 40 +- .../stream_load/test_stream_load_2pc.groovy| 582 ++--- 3 files changed, 992 insertions(+), 94 deletions(-) copy regression-test/data/load_p0/stream_load/{basic_data.csv => two_phase_commit_basic_data.csv} (81%) - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
(doris) branch master updated: [regression] add regression-test for s3 tvf (#35406)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/doris.git The following commit(s) were added to refs/heads/master by this push: new 40638f542a1 [regression] add regression-test for s3 tvf (#35406) 40638f542a1 is described below commit 40638f542a1d7280c8ebace5ab559ec9b9345a57 Author: 133tosakarin <97331129+133tosaka...@users.noreply.github.com> AuthorDate: Thu Jun 27 14:12:29 2024 +0800 [regression] add regression-test for s3 tvf (#35406) I had added some regression test for s3 tvf, all requirse of test cases refer to https://github.com/apache/doris/issues/32576 --- regression-test/data/load_p0/tvf/test_s3_tvf.out | 358 -- .../suites/load_p0/tvf/ddl/nest_tbl_basic_tvf.sql | 29 ++ .../load_p0/tvf/ddl/nest_tbl_basic_tvf_drop.sql| 1 + .../suites/load_p0/tvf/test_s3_tvf.groovy | 526 - 4 files changed, 859 insertions(+), 55 deletions(-) diff --git a/regression-test/data/load_p0/tvf/test_s3_tvf.out b/regression-test/data/load_p0/tvf/test_s3_tvf.out index 88b962cfd6c..1e1da717ad9 100644 --- a/regression-test/data/load_p0/tvf/test_s3_tvf.out +++ b/regression-test/data/load_p0/tvf/test_s3_tvf.out @@ -11,17 +11,44 @@ -- !select -- 20 +-- !select -- +40 + +-- !select -- +20 + +-- !select -- +20 + +-- !select -- +20 + +-- !select -- +60 + +-- !select -- +20 + +-- !select -- +20 + +-- !select -- +20 + +-- !select -- +20 + -- !select -- 18 -- !select -- -16 +18 -- !select -- -16 +79 -- !select -- -40 +20 -- !select -- 20 @@ -29,20 +56,38 @@ -- !select -- 20 +-- !select -- +39 + +-- !select -- +18 + +-- !select -- +18 + +-- !select -- +98 + -- !select -- 20 -- !select -- -36 +20 -- !select -- -16 +20 -- !select -- -16 +58 -- !select -- -60 +18 + +-- !select -- +18 + +-- !select -- +117 -- !select -- 20 @@ -54,16 +99,16 @@ 20 -- !select -- -54 +77 -- !select -- -16 +18 -- !select -- -16 +18 -- !select -- -60 +137 -- !select -- 20 @@ -75,16 +120,16 @@ 20 -- !select -- -54 +95 -- !select -- -16 +18 -- !select -- -16 +18 -- !select -- -70 +157 -- !select -- 20 @@ -96,16 +141,19 @@ 20 -- !select -- -64 +113 + +-- !select -- +18 -- !select -- -16 +18 -- !select -- -16 +5 -- !select -- -90 +177 -- !select -- 20 @@ -117,16 +165,16 @@ 20 -- !select -- -82 +131 -- !select -- -16 +18 -- !select -- -16 +18 -- !select -- -110 +197 -- !select -- 20 @@ -138,16 +186,16 @@ 20 -- !select -- -100 +149 -- !select -- -16 +18 -- !select -- -16 +18 -- !select -- -130 +217 -- !select -- 20 @@ -159,16 +207,16 @@ 20 -- !select -- -118 +167 -- !select -- -16 +18 -- !select -- -16 +18 -- !select -- -142 +237 -- !select -- 20 @@ -180,13 +228,19 @@ 20 -- !select -- -123 +185 -- !select -- -16 +18 -- !select -- -16 +18 + +-- !select -- +257 + +-- !select -- +20 -- !select -- 20 @@ -195,7 +249,16 @@ 20 -- !select -- -162 +203 + +-- !select -- +18 + +-- !select -- +18 + +-- !select -- +277 -- !select -- 20 @@ -207,16 +270,16 @@ 20 -- !select -- -141 +221 -- !select -- -16 +18 -- !select -- -16 +18 -- !select -- -182 +277 -- !select -- 20 @@ -228,16 +291,16 @@ 20 -- !select -- -159 +221 -- !select -- -16 +18 -- !select -- -16 +18 -- !select -- -202 +287 -- !select -- 20 @@ -249,16 +312,100 @@ 20 -- !select -- -177 +231 + +-- !select -- +18 + +-- !select -- +18 + +-- !select -- +307 + +-- !select -- +20 + +-- !select -- +20 + +-- !select -- +20 + +-- !select -- +249 + +-- !select -- +18 + +-- !select -- +18 + +-- !select -- +327 + +-- !select -- +20 + +-- !select -- +20 + +-- !select -- +20 + +-- !select -- +267 + +-- !select 
-- +18 + +-- !select -- +18 + +-- !select -- +347 + +-- !select -- +20 + +-- !select -- +20 + +-- !select -- +20 + +-- !select -- +285 + +-- !select -- +18 + +-- !select -- +18 + +-- !select -- +367 -- !select -- -16 +20 -- !select -- -16 +20 + +-- !select -- +20 -- !select -- -222 +305 + +-- !select -- +18 + +-- !select -- +18 + +-- !select -- +379 -- !select -- 20 @@ -270,11 +417,122 @@ 20 -- !select -- -195 +310 -- !select -- -16 +18 + +-- !select -- +18 + +-- !select -- +20 + +-- !select -- +20 + +-- !select -- +399 + +-- !select -- +20 + +-- !select -- +20 + +-- !select -- +20 + +-- !select -- +328 + +-- !select -- +18 + +-- !select -- +18 + +-- !select -- +419 + +-- !select -- +20 + +-- !select -- +20 -- !select -- -16 +20 + +-- !select -- +346 + +-- !select -- +18 + +-- !select -- +18 + +-- !select -- +439 + +-- !select -- +20 + +-- !select -- +20 + +-- !select -- +20 + +-- !select -- +364 + +-- !select -- +18 + +-- !select -- +18 + +-- !select -- +459 + +-- !select -- +20 + +-- !select -- +20 + +-- !select -- +20 + +-- !select -- +382 + +-- !select -- +18 + +-- !select -- +18 + +-- !select -- +479 + +-- !sele
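The .out diff above only records expected row counts for the new cases. For context, a minimal sketch of the kind of s3() table-valued-function query such regression cases exercise, issued over Doris's MySQL protocol via JDBC, is shown below. This is not taken from the commit: the connection URL, credentials, bucket, object name, and property values are all placeholder assumptions, and the MySQL JDBC driver is assumed to be on the classpath.

```java
// Hypothetical sketch of an s3() TVF query similar to those in test_s3_tvf.groovy.
// Requires the MySQL JDBC driver; all connection details below are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class S3TvfExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://127.0.0.1:9030/demo"; // FE query port (assumed)
        try (Connection conn = DriverManager.getConnection(url, "root", "");
             Statement stmt = conn.createStatement()) {
            // Count the rows of a CSV object read through the s3() table-valued function.
            String sql = "SELECT count(*) FROM s3("
                    + "\"uri\" = \"http://127.0.0.1:9312/example-bucket/basic_data.csv\", "
                    + "\"format\" = \"csv\", "
                    + "\"s3.access_key\" = \"ak\", "
                    + "\"s3.secret_key\" = \"sk\", "
                    + "\"use_path_style\" = \"true\")";
            try (ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.println("row count = " + rs.getLong(1));
                }
            }
        }
    }
}
```

The regression suite compares such counts against the expected values recorded in test_s3_tvf.out, which is why the diff above consists almost entirely of updated "-- !select --" result blocks.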
(doris) branch master updated (40638f542a1 -> 3493a10c3fa)
This is an automated email from the ASF dual-hosted git repository. dataroaring pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/doris.git from 40638f542a1 [regression] add regression-test for s3 tvf (#35406) add 3493a10c3fa [opt](log)password should not be output in log (#34324) No new revisions were added by this update. Summary of changes: be/src/vec/exec/scan/mysql_scanner.cpp | 6 +- 1 file changed, 5 insertions(+), 1 deletion(-) - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org
svn commit: r70004 - /dev/doris/2.0.12/ /release/doris/2.0/2.0.12/
Author: kxiao Date: Thu Jun 27 06:55:30 2024 New Revision: 70004 Log: move doris 2.0.12 to release Added: release/doris/2.0/2.0.12/ - copied from r70003, dev/doris/2.0.12/ Removed: dev/doris/2.0.12/ - To unsubscribe, e-mail: commits-unsubscr...@doris.apache.org For additional commands, e-mail: commits-h...@doris.apache.org