github-actions[bot] commented on PR #21788:
URL: https://github.com/apache/doris/pull/21788#issuecomment-1641806034

   #### `sh-checker report`
   
   To get the full details, please check in the [job](https://github.com/apache/doris/actions/runs/5597780824) output.
   
   <details>
   <summary>shellcheck errors</summary>
   
   ```
   
   'shellcheck ' returned error 1 finding the following syntactical issues:
   
   ----------
   
   In tools/emr_storage_regression/emr_tools.sh line 75:
               source "${PROFILE}"
                      ^----------^ SC1091 (info): Not following: $(pwd)/default_emr_env.sh: openBinaryFile: does not exist (No such file or directory)
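
    # A hedged fix sketch for SC1091: guard the source and point shellcheck at
    # /dev/null so it stops trying to follow a file that may not exist yet
    # (the existence guard is an assumption about the intended behaviour):
    if [[ -f "${PROFILE}" ]]; then
        # shellcheck source=/dev/null
        source "${PROFILE}"
    fi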
   
   
   In tools/emr_storage_regression/emr_tools.sh line 118:
           shift
        ^---^ SC2317 (info): Command appears to be unreachable. Check usage (or ignore if invoked indirectly).
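
    # Hedged note on SC2317: the checker thinks this 'shift' can never run,
    # typically because it follows an unconditional exit/return or sits in a
    # function only invoked indirectly. If the code path is real, annotate it
    # instead of deleting it:
    # shellcheck disable=SC2317
    shift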
   
   
   In tools/emr_storage_regression/ping_test/ping_poc.sh line 33:
   FE_HOST=${FE_HOST}
    ^-----^ SC2269 (info): This variable is assigned to itself, so the assignment does nothing.
   
   
   In tools/emr_storage_regression/ping_test/ping_poc.sh line 34:
   FE_QUERY_PORT=${FE_QUERY_PORT}
    ^-----------^ SC2269 (info): This variable is assigned to itself, so the assignment does nothing.
   
   
   In tools/emr_storage_regression/ping_test/ping_poc.sh line 35:
   USER=${USER}
    ^--^ SC2269 (info): This variable is assigned to itself, so the assignment does nothing.
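
    # Hedged fix sketch for the three SC2269 hits: if the intent is "take the
    # value from the environment, else a default", parameter expansion says so
    # explicitly (the default values below are assumptions, not the script's):
    FE_HOST=${FE_HOST:-127.0.0.1}
    FE_QUERY_PORT=${FE_QUERY_PORT:-9030}
    USER=${USER:-root}
    # If the variables are always inherited from the environment, the original
    # self-assignments can simply be removed.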
   
   
   In tools/emr_storage_regression/ping_test/ping_poc.sh line 116:
        sed -e 's#DLF_ENDPOINT#'"${DLF_ENDPOINT}"'#g' emr_catalog.sql > emr_catalog.sql
                                                      ^-------------^ SC2094 (info): Make sure not to read and write the same file in the same pipeline.
                                                                        ^-------------^ SC2094 (info): Make sure not to read and write the same file in the same pipeline.
   
   
   In tools/emr_storage_regression/ping_test/ping_poc.sh line 117:
           sed -e 's#JINDO_ENDPOINT#'"${JINDO_ENDPOINT}"'#g' emr_catalog.sql > 
emr_catalog.sql
                                                             ^-------------^ 
SC2094 (info): Make sure not to read and write the same file in the same 
pipeline.
                                                                               
^-------------^ SC2094 (info): Make sure not to read and write the same file in 
the same pipeline.
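
    # Hedged fix sketch for SC2094: '> emr_catalog.sql' truncates the file
    # before sed reads it, so the substitutions are lost. Write to a temporary
    # file and move it back (or use 'sed -i' where GNU sed is guaranteed):
    sed -e 's#DLF_ENDPOINT#'"${DLF_ENDPOINT}"'#g' emr_catalog.sql >emr_catalog.sql.tmp &&
        mv emr_catalog.sql.tmp emr_catalog.sql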
   
   
   In tools/emr_storage_regression/standard_set/gen_spark_create_sql.sh line 27:
    sh gen_tbl/gen_ssb_create_sql.sh  "${BUCKET}"/ssb/ssb100_parquet ssb100_parquet_"${TYPE}" parquet >> create_"${TYPE}".sql
    ^-- SC2129 (style): Consider using { cmd1; cmd2; } >> file instead of individual redirects.
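
    # Hedged sketch of the grouped-redirect style SC2129 suggests, so the
    # output file is opened once instead of once per command:
    {
        sh gen_tbl/gen_ssb_create_sql.sh "${BUCKET}"/ssb/ssb100_orc ssb100_orc_"${TYPE}" orc
        sh gen_tbl/gen_ssb_create_sql.sh "${BUCKET}"/ssb/ssb100_parquet ssb100_parquet_"${TYPE}" parquet
    } >create_"${TYPE}".sql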
   
   
    In tools/emr_storage_regression/standard_set/gen_tbl/gen_clickbench_create_sql.sh line 44:
    USE '"${db}"';
                ^-- SC2016 (info): Expressions don't expand in single quotes, use double quotes for that.
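
    # Hedged note on SC2016: if the generated SQL should contain the expanded
    # database name, printing it through a double-quoted format string makes
    # the intent unambiguous (the printf form is an illustrative assumption):
    printf 'USE %s;\n' "${db}"
    # If the expansion already happens outside the single quotes by design,
    # this info-level finding can be suppressed with a disable comment instead.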
   
   
    In tools/emr_storage_regression/standard_set/gen_tbl/gen_ssb_create_sql.sh line 37:
    if [[ -z "$3" ]]] then
    ^-- SC1073 (error): Couldn't parse this if expression. Fix to allow more checks.
                    ^-- SC1050 (error): Expected 'then'.
                    ^-- SC1072 (error): Expected 'then'. Fix any mentioned problems and try again.
                    ^-- SC1140 (error): Unexpected parameters after condition. Missing &&/||, or bad expression?
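
    # All four parse errors stem from one typo: a stray ']' where ';' belongs
    # (shfmt reports the same ']]]' below). The line should read:
    if [[ -z "$3" ]]; then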
   
   
    In tools/emr_storage_regression/standard_set/gen_tbl/gen_tpch_create_sql.sh line 45:
    USE '"${db}"';
                ^-- SC2016 (info): Expressions don't expand in single quotes, use double quotes for that.
   
   
    In tools/emr_storage_regression/standard_set/gen_tbl/gen_tpch_create_sql.sh line 57:
    LOCATION "'"${db_loc}"/customer'";
                                   ^-- SC2016 (info): Expressions don't expand in single quotes, use double quotes for that.
   
   
    In tools/emr_storage_regression/standard_set/gen_tbl/gen_tpch_create_sql.sh line 77:
    LOCATION "'"${db_loc}"/lineitem'";
                                   ^-- SC2016 (info): Expressions don't expand in single quotes, use double quotes for that.
   
   
    In tools/emr_storage_regression/standard_set/gen_tbl/gen_tpch_create_sql.sh line 85:
    LOCATION "'"${db_loc}"/nation'";
                                 ^-- SC2016 (info): Expressions don't expand in single quotes, use double quotes for that.
   
   
    In tools/emr_storage_regression/standard_set/gen_tbl/gen_tpch_create_sql.sh line 98:
    LOCATION "'"${db_loc}"/orders'";
                                 ^-- SC2016 (info): Expressions don't expand in single quotes, use double quotes for that.
   
   
    In tools/emr_storage_regression/standard_set/gen_tbl/gen_tpch_create_sql.sh line 111:
    LOCATION "'"${db_loc}"/part'";
                               ^-- SC2016 (info): Expressions don't expand in single quotes, use double quotes for that.
   
   
    In tools/emr_storage_regression/standard_set/gen_tbl/gen_tpch_create_sql.sh line 120:
    LOCATION "'"${db_loc}"/partsupp'";
                                   ^-- SC2016 (info): Expressions don't expand in single quotes, use double quotes for that.
   
   
    In tools/emr_storage_regression/standard_set/gen_tbl/gen_tpch_create_sql.sh line 127:
    LOCATION "'"${db_loc}"/region'";
                                 ^-- SC2016 (info): Expressions don't expand in single quotes, use double quotes for that.
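
    # Hedged note: the LOCATION findings are the same SC2016 pattern; emitting
    # the clause with printf keeps the single quotes in the generated SQL and
    # the expansion in the shell (illustrative sketch, not the script's own
    # helper):
    printf "LOCATION '%s/region';\n" "${db_loc}"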
   
   For more information:
     https://www.shellcheck.net/wiki/SC1050 -- Expected 'then'.
      https://www.shellcheck.net/wiki/SC1140 -- Unexpected parameters after condi...
      https://www.shellcheck.net/wiki/SC1091 -- Not following: $(pwd)/default_emr...
   ----------
   
   You can address the above issues in one of three ways:
   1. Manually correct the issue in the offending shell script;
   2. Disable specific issues by adding the comment:
     # shellcheck disable=NNNN
   above the line that contains the issue, where NNNN is the error code;
   3. Add '-e NNNN' to the SHELLCHECK_OPTS setting in your .yml action file.
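
    # Hedged sketch of option 3 for the sh-checker action (the step layout and
    # version tag are assumptions; adjust to the workflow actually in use):
    #   - uses: luizm/action-sh-checker@v0.5.0
    #     env:
    #       SHELLCHECK_OPTS: -e SC2269 -e SC2317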
   
   
   
   ```
   </details>
   
   <details>
   <summary>shfmt errors</summary>
   
   ```
   
   'shfmt ' returned error 1 finding the following formatting issues:
   
   ----------
   --- tools/emr_storage_regression/default_emr_env.sh.orig
   +++ tools/emr_storage_regression/default_emr_env.sh
   @@ -36,20 +36,16 @@
        export SK=sk
        export ENDPOINT=oss-cn-beijing-internal.aliyuncs.com
        export REGION=oss-cn-beijing
   -elif [[ ${SERVICE} == 'hw' ]]; then 
   +elif [[ ${SERVICE} == 'hw' ]]; then
        export CASE=ping
        export AK=ak
        export SK=sk
        export ENDPOINT=obs.cn-north-4.myhuaweicloud.com
        export REGION=cn-north-4
   -elif [[ ${SERVICE} == 'tx' ]]; then 
   +elif [[ ${SERVICE} == 'tx' ]]; then
        export CASE=ping
        export AK=ak
        export SK=sk
        export ENDPOINT=cos.ap-beijing.mycloud.com
        export REGION=ap-beijing
    fi
   -
   -
   -
   -
   --- tools/emr_storage_regression/emr_tools.sh.orig
   +++ tools/emr_storage_regression/emr_tools.sh
   @@ -149,9 +149,9 @@
     sh ping_test/ping_poc.sh "${ENDPOINT}" "${REGION}" "${SERVICE}" "${AK}" "${SK}" "${HMS_META_URI}" "${HMS_WAREHOUSE}" "${BEELINE_URI}"
    elif [[ ${CASE} == 'data_set' ]]; then
        if [[ ${SERVICE} == 'tx' ]]; then
   -          BUCKET=cosn://datalake-bench-cos-1308700295
   +        BUCKET=cosn://datalake-bench-cos-1308700295
        elif [[ ${SERVICE} == 'ali' ]]; then
   -          BUCKET=oss://benchmark-oss
   +        BUCKET=oss://benchmark-oss
        fi
        # gen table for spark
        if ! sh stardard_set/gen_spark_create_sql.sh "${BUCKET}" obj; then
   @@ -168,7 +168,7 @@
     TYPE=hdfs sh stardard_set/run_standard_set.sh "${FE_HOST}" "${USER}" "${PORT}" hms_hdfs
     TYPE=hdfs sh stardard_set/run_standard_set.sh "${FE_HOST}" "${USER}" "${PORT}" iceberg_hms
        if [[ ${SERVICE} == 'tx' ]]; then
    -        sh stardard_set/run_standard_set.sh "${FE_HOST}" "${USER}" "${PORT}" hms_cos 
    +        sh stardard_set/run_standard_set.sh "${FE_HOST}" "${USER}" "${PORT}" hms_cos
             sh stardard_set/run_standard_set.sh "${FE_HOST}" "${USER}" "${PORT}" iceberg_hms_cos
        elif [[ ${SERVICE} == 'ali' ]]; then
         sh stardard_set/run_standard_set.sh "${FE_HOST}" "${USER}" "${PORT}" hms_oss
   --- tools/emr_storage_regression/ping_test/ping_poc.sh.orig
   +++ tools/emr_storage_regression/ping_test/ping_poc.sh
   @@ -36,10 +36,10 @@
    
    DLF_ENDPOINT=datalake-vpc.cn-beijing.aliyuncs.com
    JINDO_ENDPOINT=cn-beijing.oss-dls.aliyuncs.com
   -        
   +
    if [[ -z ${HMS_WAREHOUSE} ]]; then
   -  echo "Need warehouse for ${SERVICE}"
   -fi 
   +    echo "Need warehouse for ${SERVICE}"
   +fi
    cd "$(dirname "$0")" || exit
    
    run_spark_create_sql() {
   @@ -55,7 +55,7 @@
                --conf spark.sql.catalog.hms.type=hive \
                --conf spark.sql.defaultCatalog=hms \
                --conf spark.sql.catalog.hms.warehouse=${HMS_WAREHOUSE} \
   -            -f data/create_spark_ping.sql" 2> spark_create.log
   +            -f data/create_spark_ping.sql" 2>spark_create.log
        elif [[ ${SERVICE} == 'tx' ]]; then
         PARAM="--jars /usr/local/service/iceberg/iceberg-spark-runtime-3.2_2.12-0.13.1.jar \
               --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
   @@ -64,9 +64,9 @@
               --conf spark.sql.catalog.local=org.apache.iceberg.spark.SparkCatalog \
                  --conf spark.sql.catalog.local.type=hadoop \
                  --conf spark.sql.catalog.local.warehouse=/usr/hive/warehouse \
   -              -f data/create_spark_ping.sql"  2> spark_create.log
   +              -f data/create_spark_ping.sql" 2>spark_create.log
        elif [[ ${SERVICE} == 'hw' ]]; then
   -        PARAM="-f data/create_spark_ping.sql"  2> spark_create.log
   +        PARAM="-f data/create_spark_ping.sql" 2>spark_create.log
        else
            echo "Unknown service type: ${SERVICE}"
            exit 1
   @@ -77,19 +77,19 @@
    run_spark_create_sql
    run_hive_create_sql() {
        if [[ ${SERVICE} == 'hw' ]]; then
    -        beeline -u "${BEELINE_URI}" -f data/create_hive_ping.sql  2> hive_create.log
    +        beeline -u "${BEELINE_URI}" -f data/create_hive_ping.sql 2>hive_create.log
        elif [[ ${SERVICE} == 'ali' ]]; then
   -        hive -f data/create_hive_ping.sql  2> hive_create.log
   +        hive -f data/create_hive_ping.sql 2>hive_create.log
        else
   -        hive -f data/create_hive_ping.sql  2> hive_create.log
   +        hive -f data/create_hive_ping.sql 2>hive_create.log
        fi
    }
    
    run_hive_create_sql
    
    ## Step 2: make ping data
   -spark-sql -f data/data_for_spark.sql >> spark_data.log
   -hive -f data/data_for_hive.sql >> hive_data.log
   +spark-sql -f data/data_for_spark.sql >>spark_data.log
   +hive -f data/data_for_hive.sql >>hive_data.log
    
    run_query() {
        QUERY_NUM=1
   @@ -100,38 +100,38 @@
            echo -n "create catalog ${QUERY_NUM},"
            for i in $(seq 1 "${TRIES}"); do
                if [[ -n ${catalog} ]]; then
   -              query="switch ${catalog};${query}"
   +                query="switch ${catalog};${query}"
                fi
             RES=$(mysql -vvv -h"${FE_HOST}" -u"${USER}" -P"${FE_QUERY_PORT}" -e "${query}")
                echo -n "${RES}"
                [[ "${i}" != "${TRIES}" ]] && echo -n ","
            done
            QUERY_NUM=$((QUERY_NUM + 1))
   -    done < "${sql_file}"
   +    done <"${sql_file}"
    }
    
    ## Step 3: create external catalog in doris
    case "${SERVICE}" in
   -    ali)
    -        sed -e 's#DLF_ENDPOINT#'"${DLF_ENDPOINT}"'#g' emr_catalog.sql > emr_catalog.sql
    -        sed -e 's#JINDO_ENDPOINT#'"${JINDO_ENDPOINT}"'#g' emr_catalog.sql > emr_catalog.sql
    -        sed -e 's#ENDPOINT#'"${ENDPOINT}"'#g' -e 's#META_URI#'"${HMS_META_URI}"'#g' -e 's#AK_INPUT#'"${AK}"'#g' -e 's#SK_INPUT#'"${SK}"'#g'  create_catalog_aliyun.sql > emr_catalog.sql
   -        ;;
   -    tx)
    -        sed -e 's#ENDPOINT#'"${ENDPOINT}"'#g' -e 's#META_URI#'"${HMS_META_URI}"'#g' -e 's#AK_INPUT#'"${AK}"'#g' -e 's#SK_INPUT#'"${SK}"'#g'  create_catalog_tx.sql > emr_catalog.sql
   -        ;;
   -    aws)
    -        sed -e 's#ENDPOINT#'"${ENDPOINT}"'#g' -e 's#META_URI#'"${HMS_META_URI}"'#g' -e 's#AK_INPUT#'"${AK}"'#g' -e 's#SK_INPUT#'"${SK}"'#g'  create_catalog_aws.sql > emr_catalog.sql
   -        ;;
   -    hw)
    -        sed -e 's#ENDPOINT#'"${ENDPOINT}"'#g' -e 's#META_URI#'"${HMS_META_URI}"'#g' -e 's#AK_INPUT#'"${AK}"'#g' -e 's#SK_INPUT#'"${SK}"'#g'  create_catalog_hw.sql > emr_catalog.sql
   -        ;;
   -    *)
   -        echo "Internal error"
   -        exit 1
   -        ;;
   +ali)
    +    sed -e 's#DLF_ENDPOINT#'"${DLF_ENDPOINT}"'#g' emr_catalog.sql >emr_catalog.sql
    +    sed -e 's#JINDO_ENDPOINT#'"${JINDO_ENDPOINT}"'#g' emr_catalog.sql >emr_catalog.sql
    +    sed -e 's#ENDPOINT#'"${ENDPOINT}"'#g' -e 's#META_URI#'"${HMS_META_URI}"'#g' -e 's#AK_INPUT#'"${AK}"'#g' -e 's#SK_INPUT#'"${SK}"'#g' create_catalog_aliyun.sql >emr_catalog.sql
   +    ;;
   +tx)
    +    sed -e 's#ENDPOINT#'"${ENDPOINT}"'#g' -e 's#META_URI#'"${HMS_META_URI}"'#g' -e 's#AK_INPUT#'"${AK}"'#g' -e 's#SK_INPUT#'"${SK}"'#g' create_catalog_tx.sql >emr_catalog.sql
   +    ;;
   +aws)
    +    sed -e 's#ENDPOINT#'"${ENDPOINT}"'#g' -e 's#META_URI#'"${HMS_META_URI}"'#g' -e 's#AK_INPUT#'"${AK}"'#g' -e 's#SK_INPUT#'"${SK}"'#g' create_catalog_aws.sql >emr_catalog.sql
   +    ;;
   +hw)
    +    sed -e 's#ENDPOINT#'"${ENDPOINT}"'#g' -e 's#META_URI#'"${HMS_META_URI}"'#g' -e 's#AK_INPUT#'"${AK}"'#g' -e 's#SK_INPUT#'"${SK}"'#g' create_catalog_hw.sql >emr_catalog.sql
   +    ;;
   +*)
   +    echo "Internal error"
   +    exit 1
   +    ;;
    esac
   -    
   +
    run_query emr_catalog.sql
    
    ## Step 4: query ping
   @@ -141,6 +141,6 @@
    for c in $(echo "${EMR_CATALOG}"); do
        if [[ ${SERVICE} == 'ali' ]]; then
            run_query ping_aliyun.sql "${c}"
   -    fi 
   +    fi
        run_query ping.sql "${c}"
    done
   --- tools/emr_storage_regression/standard_set/gen_spark_create_sql.sh.orig
   +++ tools/emr_storage_regression/standard_set/gen_spark_create_sql.sh
   @@ -23,16 +23,15 @@
    BUCKET=$1
    TYPE=$2
    cd "$(dirname "$0")" || exit
    -sh gen_tbl/gen_ssb_create_sql.sh  "${BUCKET}"/ssb/ssb100_orc ssb100_orc_"${TYPE}" orc > create_"${TYPE}".sql
    -sh gen_tbl/gen_ssb_create_sql.sh  "${BUCKET}"/ssb/ssb100_parquet ssb100_parquet_"${TYPE}" parquet >> create_"${TYPE}".sql
    +sh gen_tbl/gen_ssb_create_sql.sh "${BUCKET}"/ssb/ssb100_orc ssb100_orc_"${TYPE}" orc >create_"${TYPE}".sql
    +sh gen_tbl/gen_ssb_create_sql.sh "${BUCKET}"/ssb/ssb100_parquet ssb100_parquet_"${TYPE}" parquet >>create_"${TYPE}".sql
    # tpch
    -sh gen_tbl/gen_tpch_create_sql.sh "${BUCKET}"/tpch/tpch100_orc tpch100_orc_"${TYPE}" orc >> create_"${TYPE}".sql
    -sh gen_tbl/gen_tpch_create_sql.sh "${BUCKET}"/tpch/tpch100_parquet tpch100_parquet_"${TYPE}" parquet >> create_"${TYPE}".sql
    +sh gen_tbl/gen_tpch_create_sql.sh "${BUCKET}"/tpch/tpch100_orc tpch100_orc_"${TYPE}" orc >>create_"${TYPE}".sql
    +sh gen_tbl/gen_tpch_create_sql.sh "${BUCKET}"/tpch/tpch100_parquet tpch100_parquet_"${TYPE}" parquet >>create_"${TYPE}".sql
    # clickbench
    -sh gen_tbl/gen_clickbench_create_sql.sh "${BUCKET}"/clickbench/hits_parquet clickbench_parquet_"${TYPE}" parquet >> create_"${TYPE}".sql
    -sh gen_tbl/gen_clickbench_create_sql.sh "${BUCKET}"/clickbench/hits_orc  clickbench_orc_"${TYPE}" orc >> create_"${TYPE}".sql
    +sh gen_tbl/gen_clickbench_create_sql.sh "${BUCKET}"/clickbench/hits_parquet clickbench_parquet_"${TYPE}" parquet >>create_"${TYPE}".sql
    +sh gen_tbl/gen_clickbench_create_sql.sh "${BUCKET}"/clickbench/hits_orc clickbench_orc_"${TYPE}" orc >>create_"${TYPE}".sql
    # iceberg
     # sh gen_tbl/gen_ssb_create_sql.sh  oss://benchmark-oss/ssb/ssb100_iceberg ssb100_iceberg iceberg >> create_"${TYPE}".sql
     # sh gen_tbl/gen_tpch_create_sql.sh oss://benchmark-oss/tpch/tpch100_iceberg tpch100_iceberg iceberg >> create_"${TYPE}".sql
     # sh gen_tbl/gen_clickbench_create_sql.sh oss://benchmark-oss/clickbench/hits_iceberg clickbench_iceberg_hdfs >> create_"${TYPE}".sql
   -
    --- tools/emr_storage_regression/standard_set/gen_tbl/gen_clickbench_create_sql.sh.orig
    +++ tools/emr_storage_regression/standard_set/gen_tbl/gen_clickbench_create_sql.sh
   @@ -23,7 +23,7 @@
    if [[ -z "$1" ]]; then
        echo 'the first argument is database location'
        exit
   -else 
   +else
        db_loc=$1
    fi
    
   
    tools/emr_storage_regression/standard_set/gen_tbl/gen_ssb_create_sql.sh:37:15: not a valid test operator: ]]]
    --- tools/emr_storage_regression/standard_set/gen_tbl/gen_tpch_create_sql.sh.orig
    +++ tools/emr_storage_regression/standard_set/gen_tbl/gen_tpch_create_sql.sh
   @@ -23,7 +23,7 @@
    if [[ -z "$1" ]]; then
        echo 'the first argument is database location'
        exit
   -else 
   +else
        db_loc=$1
    fi
    
   --- tools/emr_storage_regression/standard_set/run_queries.sh.orig
   +++ tools/emr_storage_regression/standard_set/run_queries.sh
   @@ -43,4 +43,4 @@
        echo "" | tee -a "${RESULT_FILE}"
    
        QUERY_NUM=$((QUERY_NUM + 1))
   -done < "$5"
   +done <"$5"
   --- tools/emr_storage_regression/standard_set/run_standard_set.sh.orig
   +++ tools/emr_storage_regression/standard_set/run_standard_set.sh
   @@ -27,18 +27,18 @@
    if [[ -z "$4" ]]; then
        echo 'need catalog name'
        exit
   -else 
   +else
        catalog_name=$4
    fi
    
    if [[ -z "$5" ]]; then
        echo "run all test"
   -else 
   +else
        case=$5
    fi
    
    if [[ -z ${TYPE} ]]; then
   -  TYPE=obj
   +    TYPE=obj
    fi
    echo "execute ${case} benchmark for ${TYPE}..."
    
   ----------
   
   You can reformat the above files to meet shfmt's requirements by typing:
   
      shfmt -w filename
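
      # Hedged example: the diffs above use 4-space indents, which suggests
      # the check runs shfmt with '-i 4' (an assumption; confirm against the
      # CI configuration). To match it locally:
      shfmt -i 4 -w tools/emr_storage_regression/emr_tools.sh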
   
   
   ```
   </details>
   
   
   

