This is an automated email from the ASF dual-hosted git repository.
englefly pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git
The following commit(s) were added to refs/heads/master by this push:
new 3ed3b8dc3ff [test](doris-compose) add test readme (#54726)
3ed3b8dc3ff is described below
commit 3ed3b8dc3ffd5808a092d25c4e5711daea4a39cc
Author: yujun <[email protected]>
AuthorDate: Thu Aug 14 16:24:04 2025 +0800
[test](doris-compose) add test readme (#54726)
### What problem does this PR solve?
---
docker/runtime/doris-compose/Readme.md | 16 ++++++-
docker/runtime/doris-compose/requirements.txt | 7 +--
regression-test/README.md | 10 ++++
.../org/apache/doris/regression/suite/Suite.groovy | 5 ++
.../doris/regression/suite/SuiteCluster.groovy | 20 ++++++++
.../suites/demo_p0/docker_action.groovy | 56 +++++++++++++++-------
6 files changed, 91 insertions(+), 23 deletions(-)
diff --git a/docker/runtime/doris-compose/Readme.md b/docker/runtime/doris-compose/Readme.md
index 34bab457872..f304069c3c1 100644
--- a/docker/runtime/doris-compose/Readme.md
+++ b/docker/runtime/doris-compose/Readme.md
@@ -117,7 +117,7 @@ So if multiple users use different `LOCAL_DORIS_PATH`, their clusters may have d
### Create a cluster or recreate its containers
```shell
-python docker/runtime/doris-compose/doris-compose.py up <cluster-name> <image?>
+python docker/runtime/doris-compose/doris-compose.py up <cluster-name> <image?>
--add-fe-num <add-fe-num> --add-be-num <add-be-num>
[--fe-id <fd-id> --be-id <be-id>]
...
@@ -176,11 +176,23 @@ Otherwise it will just list summary of each clusters.
There are more options about doris-compose. Just try
```shell
-python docker/runtime/doris-compose/doris-compose.py <command> -h
+python docker/runtime/doris-compose/doris-compose.py <command> -h
```
+### Docker suite in regression test
+
+The regression test supports running a suite in a docker doris cluster.
+
+See the example [demo_p0/docker_action.groovy](https://github.com/apache/doris/blob/master/regression-test/suites/demo_p0/docker_action.groovy).
+
+A docker suite can specify the number of fe and be nodes, and can add/drop/start/stop/restart them.
+
+Before running a docker suite, read the comments in `demo_p0/docker_action.groovy` carefully.
+
### Generate regression custom conf file
+Provide a command that lets the regression test connect to a docker cluster.
+
```shell
python docker/runtime/doris-compose/doris-compose.py config <cluster-name> <doris-root-path> [-q] [--connect-follow-fe]
```
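For orientation, a complete session using the two commands documented above might look like the following. This is a hypothetical walkthrough: the cluster name `mycluster`, the image tag, and the paths are placeholders, not values taken from this commit.

```shell
# Hypothetical walkthrough; cluster name, image, and paths are placeholders.
# 1. Create a cluster with one extra FE and three extra BEs:
python docker/runtime/doris-compose/doris-compose.py up mycluster apache/doris:latest --add-fe-num 1 --add-be-num 3

# 2. Generate the regression custom conf so the regression test can connect to it:
python docker/runtime/doris-compose/doris-compose.py config mycluster /path/to/doris -q
```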
diff --git a/docker/runtime/doris-compose/requirements.txt b/docker/runtime/doris-compose/requirements.txt
index 46eebbd0a3f..e519f99af03 100644
--- a/docker/runtime/doris-compose/requirements.txt
+++ b/docker/runtime/doris-compose/requirements.txt
@@ -15,9 +15,6 @@
# specific language governing permissions and limitations
# under the License.
-# if install docker failed, specific pyyaml version and docker version
-#pyyaml==5.3.1
-#docker==6.1.3
docker
docker-compose
filelock
@@ -26,3 +23,7 @@ prettytable
pymysql
python-dateutil
requests<=2.31.0
+
+# NOTICE: if installing docker fails, pin specific pyyaml and docker versions
+#pyyaml==5.3.1
+#docker==6.1.3
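The relocated NOTICE describes a fallback for when the docker package fails to install; a sketch of how it would be applied (the pinned versions are the ones commented in the file, everything else is illustrative):

```shell
# Fallback sketch: pin the versions from the NOTICE, then retry the install.
pip install 'pyyaml==5.3.1' 'docker==6.1.3'
pip install -r docker/runtime/doris-compose/requirements.txt
```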
diff --git a/regression-test/README.md b/regression-test/README.md
index fb7bdde2ee2..e28865d8e14 100644
--- a/regression-test/README.md
+++ b/regression-test/README.md
@@ -84,6 +84,16 @@ under the License.
8. Cases injected should be marked as nonConcurrent and ensured injection to be removed after running the case.
+9. Docker cases run in a docker cluster. The docker cluster is newly created and independent; it contains no history data and does not affect other clusters.
+
+   Docker cases can add/drop/start/stop/restart fe and be nodes, and specify the number of fe and be nodes.
+
+   For an example, see [demo_p0/docker_action.groovy](https://github.com/apache/doris/blob/master/regression-test/suites/demo_p0/docker_action.groovy)
+
+   Read the comments in the example file carefully.
+
+   Also read the [doris-compose](https://github.com/apache/doris/tree/master/docker/runtime/doris-compose) readme.
+
## Compatibility case
Refers to the resources or rules created on the initial cluster during FE
testing or upgrade testing, which can still be used normally after the cluster
restart or upgrade, such as permissions, UDF, etc.
diff --git a/regression-test/framework/src/main/groovy/org/apache/doris/regression/suite/Suite.groovy b/regression-test/framework/src/main/groovy/org/apache/doris/regression/suite/Suite.groovy
index dccdffb597f..8852e8c7e99 100644
--- a/regression-test/framework/src/main/groovy/org/apache/doris/regression/suite/Suite.groovy
+++ b/regression-test/framework/src/main/groovy/org/apache/doris/regression/suite/Suite.groovy
@@ -302,6 +302,7 @@ class Suite implements GroovyInterceptable {
// more explaination can see example file: demo_p0/docker_action.groovy
public void docker(ClusterOptions options = new ClusterOptions(), Closure actionSupplier) throws Exception {
if (context.config.excludeDockerTest) {
+ logger.info("do not run the docker suite {}, because regression config excludeDockerTest=true", name)
return
}
@@ -319,9 +320,13 @@ class Suite implements GroovyInterceptable {
}
} else {
if (options.cloudMode == true && context.config.runMode == RunMode.NOT_CLOUD) {
+ logger.info("do not run the docker suite {}, because the suite's ClusterOptions.cloudMode=true "
+ + "but regression test is local mode", name)
return
}
if (options.cloudMode == false && context.config.runMode == RunMode.CLOUD) {
+ logger.info("do not run the docker suite {}, because the suite's ClusterOptions.cloudMode=false "
+ + "but regression test is cloud mode", name)
return
}
dockerImpl(options, options.cloudMode, actionSupplier)
diff --git a/regression-test/framework/src/main/groovy/org/apache/doris/regression/suite/SuiteCluster.groovy b/regression-test/framework/src/main/groovy/org/apache/doris/regression/suite/SuiteCluster.groovy
index 8f834aa3ca4..d6d25026604 100644
--- a/regression-test/framework/src/main/groovy/org/apache/doris/regression/suite/SuiteCluster.groovy
+++ b/regression-test/framework/src/main/groovy/org/apache/doris/regression/suite/SuiteCluster.groovy
@@ -119,6 +119,7 @@ class ListHeader {
class ServerNode {
+ // all node indices start from 1, not 0
int index
String host
int httpPort
@@ -565,48 +566,57 @@ class SuiteCluster {
int START_WAIT_TIMEOUT = 120
int STOP_WAIT_TIMEOUT = 60
+ // indices start from 1, not 0
// if not specific fe indices, then start all frontends
void startFrontends(int... indices) {
runFrontendsCmd(START_WAIT_TIMEOUT + 5, "start --wait-timeout ${START_WAIT_TIMEOUT}".toString(), indices)
}
+ // indices start from 1, not 0
// if not specific be indices, then start all backends
void startBackends(int... indices) {
runBackendsCmd(START_WAIT_TIMEOUT + 5, "start --wait-timeout ${START_WAIT_TIMEOUT}".toString(), indices)
}
+ // indices start from 1, not 0
// if not specific fe indices, then stop all frontends
void stopFrontends(int... indices) {
runFrontendsCmd(STOP_WAIT_TIMEOUT + 5, "stop --wait-timeout ${STOP_WAIT_TIMEOUT}".toString(), indices)
waitHbChanged()
}
+ // indices start from 1, not 0
// if not specific be indices, then stop all backends
void stopBackends(int... indices) {
runBackendsCmd(STOP_WAIT_TIMEOUT + 5, "stop --wait-timeout ${STOP_WAIT_TIMEOUT}".toString(), indices)
waitHbChanged()
}
+ // indices start from 1, not 0
// if not specific fe indices, then restart all frontends
void restartFrontends(int... indices) {
runFrontendsCmd(START_WAIT_TIMEOUT + 5, "restart --wait-timeout ${START_WAIT_TIMEOUT}".toString(), indices)
}
+ // indices start from 1, not 0
// if not specific be indices, then restart all backends
void restartBackends(int... indices) {
runBackendsCmd(START_WAIT_TIMEOUT + 5, "restart --wait-timeout ${START_WAIT_TIMEOUT}".toString(), indices)
}
+ // indices start from 1, not 0
// if not specific ms indices, then restart all ms
void restartMs(int... indices) {
runMsCmd(START_WAIT_TIMEOUT + 5, "restart --wait-timeout ${START_WAIT_TIMEOUT}".toString(), indices)
}
+ // indices start from 1, not 0
// if not specific recycler indices, then restart all recyclers
void restartRecyclers(int... indices) {
runRecyclerCmd(START_WAIT_TIMEOUT + 5, "restart --wait-timeout ${START_WAIT_TIMEOUT}".toString(), indices)
}
+ // indices start from 1, not 0
// if not specific fe indices, then drop all frontends
void dropFrontends(boolean clean, int... indices) {
def cmd = 'down'
@@ -616,6 +626,7 @@ class SuiteCluster {
runFrontendsCmd(60, cmd, indices)
}
+ // indices start from 1, not 0
// if not specific be indices, then decommission all backends
void decommissionBackends(boolean clean, int... indices) {
def cmd = 'down'
@@ -625,6 +636,7 @@ class SuiteCluster {
runBackendsCmd(300, cmd, indices)
}
+ // indices start from 1, not 0
// if not specific be indices, then drop force all backends
void dropForceBackends(boolean clean, int... indices) {
def cmd = 'down --drop-force'
@@ -634,6 +646,7 @@ class SuiteCluster {
runBackendsCmd(60, cmd, indices)
}
+ // index starts from 1, not 0
void checkFeIsAlive(int index, boolean isAlive) {
def fe = getFeByIndex(index)
assert fe != null : 'frontend with index ' + index + ' not exists!'
@@ -641,6 +654,7 @@ class SuiteCluster {
: 'frontend with index ' + index + ' dead')
}
+ // index starts from 1, not 0
void checkBeIsAlive(int index, boolean isAlive) {
def be = getBeByIndex(index)
assert be != null : 'backend with index ' + index + ' not exists!'
@@ -648,6 +662,7 @@ class SuiteCluster {
: 'backend with index ' + index + ' dead')
}
+ // index starts from 1, not 0
void checkFeIsExists(int index, boolean isExists) {
def fe = getFeByIndex(index)
if (isExists) {
@@ -657,6 +672,7 @@ class SuiteCluster {
}
}
+ // index starts from 1, not 0
void checkBeIsExists(int index, boolean isExists) {
def be = getBeByIndex(index)
if (isExists) {
@@ -676,21 +692,25 @@ class SuiteCluster {
Thread.sleep(7000)
}
+ // indices start from 1, not 0
private void runFrontendsCmd(int timeoutSecond, String op, int... indices) {
def cmd = op + ' ' + name + ' --fe-id ' + indices.join(' ')
runCmd(cmd, timeoutSecond)
}
+ // indices start from 1, not 0
private void runBackendsCmd(int timeoutSecond, String op, int... indices) {
def cmd = op + ' ' + name + ' --be-id ' + indices.join(' ')
runCmd(cmd, timeoutSecond)
}
+ // indices start from 1, not 0
private void runMsCmd(int timeoutSecond, String op, int... indices) {
def cmd = op + ' ' + name + ' --ms-id ' + indices.join(' ')
runCmd(cmd, timeoutSecond)
}
+ // indices start from 1, not 0
private void runRecyclerCmd(int timeoutSecond, String op, int... indices) {
def cmd = op + ' ' + name + ' --recycle-id ' + indices.join(' ')
runCmd(cmd, timeoutSecond)
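The `run*Cmd` helpers above all build the same command-line shape: the operation, the cluster name, an id flag, then the 1-based indices (an empty list means the command targets all nodes). A minimal Python sketch of that string construction, with illustrative names that are not part of the framework:

```python
def build_node_cmd(op: str, cluster: str, id_flag: str, indices: tuple = ()) -> str:
    """Mirror the string building in runFrontendsCmd/runBackendsCmd:
    '<op> <cluster> <id-flag> <i1> <i2> ...'. Indices are 1-based,
    and an empty tuple means the command applies to every node."""
    return op + ' ' + cluster + ' ' + id_flag + ' ' + ' '.join(str(i) for i in indices)

# e.g. starting frontends 1 and 2 of a hypothetical cluster "mycluster":
print(build_node_cmd('start --wait-timeout 120', 'mycluster', '--fe-id', (1, 2)))
```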
diff --git a/regression-test/suites/demo_p0/docker_action.groovy b/regression-test/suites/demo_p0/docker_action.groovy
index 7e111b48285..b0569cb930d 100644
--- a/regression-test/suites/demo_p0/docker_action.groovy
+++ b/regression-test/suites/demo_p0/docker_action.groovy
@@ -17,32 +17,51 @@
import org.apache.doris.regression.suite.ClusterOptions
+// Every docker suite will connect to a docker cluster.
+// The docker cluster is newly created and independent: it contains no history data,
+// and it affects neither the external doris cluster nor other docker clusters.
+
// Run docker suite steps:
-// 1. Read 'docker/runtime/doris-compose/Readme.md', make sure you can setup a doris docker cluster;
-// 2. update regression-conf-custom.groovy with config:
+// 1. Before running the docker regression test, make sure you can set up a doris docker cluster.
+//    Read the readme in [doris-compose](https://github.com/apache/doris/tree/master/docker/runtime/doris-compose)
+//    to set up a docker doris cluster;
+// 2. Then run the docker suite, with regression-conf-custom.groovy set up with the following config:
// image = "xxxx" // your doris docker image
// excludeDockerTest = false // do run docker suite, default is true
// dockerEndDeleteFiles = false // after run docker suite, whether delete contains's log and data in directory '/tmp/doris/<suite-name>'
-// When run docker suite, then no need an external doris cluster.
+// When running a docker suite, the regression test does not need to connect to an external doris cluster,
+// but it can still connect to one, just like running a non-docker suite.
// But whether run a docker suite, need more check.
-// Firstly, get the pipeline's run mode (cloud or not_cloud):
-// If there's an external doris cluster, then fetch pipeline's runMode from it.
-// If there's no external doris cluster, then set pipeline's runMode with command args.
-// for example: sh run-regression-test.sh --run docker_action -runMode=cloud/not_cloud
-// Secondly, compare ClusterOptions.cloudMode and pipeline's runMode
-// If ClusterOptions.cloudMode = null then let ClusterOptions.cloudMode = pipeline's cloudMode, and run docker suite.
-// if ClusterOptions.cloudMode = true or false, if cloudMode == pipeline's cloudMode or pipeline's cloudMode is unknown,
-// then run docker suite, otherwise don't run docker suite.
+// Firstly, get the regression test's run mode (cloud or not_cloud):
+//   a) If the regression test connects to an external doris cluster,
+//      then the external doris cluster's runMode (cloud or not_cloud) is used as the regression runMode.
+//   b) If there's no external doris cluster, the user can set the regression runMode with the command arg `-runMode`.
+//      for example:
+//      `sh run-regression-test.sh --run -d demo_p0 -s docker_action -runMode=cloud/not_cloud`
+//      What's more, if the docker suite does not contain 'isCloudMode()', there is no need to specify the command arg `-runMode`.
+//      For example, if the regression test does not connect to an external doris cluster, then the command:
+//      `sh run-regression-test.sh --run -d demo_p0 -s docker_action`
+//      will run both the cloud case and the not_cloud case.
+// Secondly, compare ClusterOptions.cloudMode and the regression's runMode:
+//   a) If ClusterOptions.cloudMode = null, then let ClusterOptions.cloudMode = the regression's cloudMode, and run the docker suite.
+//   b) If ClusterOptions.cloudMode = true or false, and the regression's runMode equals the suite's cloudMode
+//      or the regression's cloudMode is unknown, then run the docker suite; otherwise skip it.
+//
+//
+// By default, after a docker suite runs, whether it succeeds or fails, the suite's related docker cluster is destroyed automatically.
+// If the user doesn't want to destroy the docker cluster after the test, they can specify the arg `-noKillDocker`.
+// for example:
+//      `sh run-regression-test.sh --run -d demo_p0 -s docker_action -noKillDocker`
+// will run 3 docker clusters, and the last docker cluster will not be destroyed.
-// NOTICE:
+// NOTICE, for code:
// 1. Need add 'docker' to suite's group, and don't add 'nonConcurrent' to it;
-// 2. In docker closure:
-//    a. remove function dockerAwaitUntil(...), should use 'Awaitility.await()...until(f)' directly or use 'awaitUntil(...)';
-// 3. No need to use code ` if (isCloudMode()) { return } ` in docker suites,
-// instead should use `ClusterOptions.cloudMode = true/false` is enough.
-// Because when run docker suite without an external doris cluster, if suite use code `isCloudMode()`, it need specific -runMode=cloud/not_cloud.
-// On the contrary, `ClusterOptions.cloudMode = true/false` no need specific -runMode=cloud/not_cloud when no external doris cluster exists.
+// 2. No need to use code ` if (isCloudMode()) { return } ` in docker suites;
+//    using `ClusterOptions.cloudMode = true/false` is enough.
+//    When a docker suite runs without an external doris cluster, a suite using `isCloudMode()` needs `-runMode=cloud/not_cloud` to be specified.
+//    On the contrary, `ClusterOptions.cloudMode = true/false` doesn't need `-runMode=cloud/not_cloud` when no external doris cluster exists.
+// 3. For more options and function usage, read the file `suite/SuiteCluster.groovy` in the regression framework.
suite('docker_action', 'docker') {
// run a new docker
@@ -52,6 +71,7 @@ suite('docker_action', 'docker') {
cluster.checkBeIsAlive(2, true)
+ // fe and be indices start from 1, not 0.
// stop backend 2, 3
cluster.stopBackends(2, 3)
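The cloudMode/runMode rules described in the comment block above (and the matching checks added to `Suite.docker()` in this commit) reduce to a small decision function. A Python sketch, under the assumption that the run mode is one of 'cloud', 'not_cloud', or 'unknown'; the function and parameter names are illustrative, not framework APIs:

```python
def should_run_docker_suite(exclude_docker_test: bool,
                            suite_cloud_mode,       # True, False, or None (follow the regression runMode)
                            regression_run_mode):   # 'cloud', 'not_cloud', or 'unknown'
    """Skip the suite when docker tests are excluded, or when the suite
    pins a cloudMode that contradicts the regression test's known runMode.
    An unknown runMode never blocks a pinned cloudMode."""
    if exclude_docker_test:
        return False
    if suite_cloud_mode is True and regression_run_mode == 'not_cloud':
        return False
    if suite_cloud_mode is False and regression_run_mode == 'cloud':
        return False
    return True
```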
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]