This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/pinot.git


The following commit(s) were added to refs/heads/master by this push:
     new b0159416ee8 [cleanup]Remove unused contrib/pinot-druid-benchmark directory (#18007)
b0159416ee8 is described below

commit b0159416ee801976c422c66032227a4e856cd296
Author: Xiang Fu <[email protected]>
AuthorDate: Fri Mar 27 18:04:55 2026 -0700

    [cleanup]Remove unused contrib/pinot-druid-benchmark directory (#18007)
    
    * Remove unused contrib/pinot-druid-benchmark directory
    
    This directory contains an unmaintained Druid vs Pinot benchmark module that:
    - Is not part of the standard build
    - Is already excluded from source distributions
    - Has been stale and unmaintained
    - Is not referenced elsewhere in the codebase
    
    Also removes the corresponding exclusion in pinot-source-assembly.xml.
    
    Co-Authored-By: Claude Opus 4.6 <[email protected]>
    
    * Remove duplicate thirdeye exclusion in pinot-source-assembly.xml
    
    Address review comment: the thirdeye/** exclude pattern was
    listed twice. Remove the redundant entry.
    
    Co-Authored-By: Claude Opus 4.6 <[email protected]>
    
    ---------
    
    Co-authored-by: Pinot Cleanup Agent <[email protected]>
    Co-authored-by: Claude Opus 4.6 <[email protected]>
---
 contrib/pinot-druid-benchmark/README.md            | 297 ---------------------
 contrib/pinot-druid-benchmark/pom.xml              |  99 -------
 contrib/pinot-druid-benchmark/run_benchmark.sh     | 125 ---------
 .../org/apache/pinotdruidbenchmark/DataMerger.java |  96 -------
 .../apache/pinotdruidbenchmark/DataSeparator.java  |  69 -----
 .../pinotdruidbenchmark/DruidResponseTime.java     | 145 ----------
 .../pinotdruidbenchmark/DruidThroughput.java       | 128 ---------
 .../pinotdruidbenchmark/PinotResponseTime.java     | 136 ----------
 .../pinotdruidbenchmark/PinotThroughput.java       | 120 ---------
 .../druid-0.9.2_index_tpch_lineitem_yearly.json    |  91 -------
 .../config/druid_broker_runtime.properties         |  32 ---
 .../src/main/resources/config/druid_jvm.config     |   8 -
 .../main/resources/config/pinot_csv_config.json    |   4 -
 .../resources/config/pinot_generator_config.json   |  12 -
 .../src/main/resources/config/pinot_schema.json    |  71 -----
 .../main/resources/config/pinot_startree_spec.json |   6 -
 .../src/main/resources/config/pinot_table.json     |  18 --
 .../src/main/resources/druid_queries/0.json        |  18 --
 .../src/main/resources/druid_queries/1.json        |  18 --
 .../src/main/resources/druid_queries/2.json        |  13 -
 .../src/main/resources/druid_queries/3.json        |  16 --
 .../src/main/resources/druid_queries/4.json        |  21 --
 .../src/main/resources/druid_queries/5.json        |  16 --
 .../src/main/resources/druid_queries/6.json        |  34 ---
 .../src/main/resources/pinot_queries/0.pql         |   1 -
 .../src/main/resources/pinot_queries/1.pql         |   1 -
 .../src/main/resources/pinot_queries/2.pql         |   1 -
 .../src/main/resources/pinot_queries/3.pql         |   1 -
 .../src/main/resources/pinot_queries/4.pql         |   1 -
 .../src/main/resources/pinot_queries/5.pql         |   1 -
 .../src/main/resources/pinot_queries/6.pql         |   1 -
 pinot-distribution/pinot-source-assembly.xml       |   2 -
 32 files changed, 1602 deletions(-)

diff --git a/contrib/pinot-druid-benchmark/README.md b/contrib/pinot-druid-benchmark/README.md
deleted file mode 100644
index 3b32f3b039a..00000000000
--- a/contrib/pinot-druid-benchmark/README.md
+++ /dev/null
@@ -1,297 +0,0 @@
-# Running the benchmark
-
-For instructions on how to run the Pinot/Druid benchmark please refer to the
-```run_benchmark.sh``` file. 
-
-In order to run the Apache Pinot benchmark you'll need to create the appropriate
-data segments. They are too large to be included in this GitHub repository, and
-they may need to be recreated for new Apache Pinot versions.
-
-To create the necessary segment data for the benchmark please follow the
-instructions below.
-
-# Creating Apache Pinot benchmark segments from TPC-H data
-
-To run the Pinot/Druid benchmark with Apache Pinot you'll need to download and
-run the TPC-H tools to generate the benchmark data sets.
-
-## Downloading and building the TPC-H tools
-
-The TPC-H tools can be downloaded from the [TPC-H Website](http://www.tpc.org/tpch/default5.asp).
-Registration is required.
-
-**Note:** The instructions below for dbgen assume a Linux OS.
-
-After downloading and extracting the TPC-H tools, you'll need to build the
-db generator tool: ```dbgen```. To do so, extract the package that you have 
-downloaded from TPC-H's website and inside the dbgen sub directory edit the 
-```makefile``` file.
-
-Set the following variables in the makefile to:
-
-```
-CC      = gcc
-...
-DATABASE= SQLSERVER
-MACHINE = LINUX
-WORKLOAD = TPCH
-```
-
-Next, build the dbgen tool as per the README instructions in the dbgen directory.
-
-## Generating the TPC-H data and converting them for use in Apache Pinot
-
-After building ```dbgen``` run the following command line in the ```dbgen``` directory:
-
-```
-./dbgen -TL -s8
-```
-
-The command above will generate a single large file called ```lineitem.tbl```.
-This is the data file for the TPC-H benchmark, which we'll need to post-process
-a bit before importing it into Apache Pinot.
-
-Next, build the Pinot/Druid Benchmark code if you haven't done so already.
-
-**Note:** Apache Pinot has JDK11 support; however, for now it's
-best to use JDK8 for all build and run operations in this manual.
-
-Inside ```pinot_directory/contrib/pinot-druid-benchmark``` run:
-
-```
-./mvnw clean install
-```
-
-Next, inside the same directory split the ```lineitem``` table:
-
-```
-./target/appassembler/bin/data-separator.sh <Path to lineitem.tbl> <Output Directory>
-```
-
-Use the output directory from the split as the input directory for the merge
-command below:
-
-```
-./target/appassembler/bin/data-merger.sh <Input Directory> <Output Directory> YEAR
-```
-
-If all went well you should see a few CSV files produced, 1992.csv through 1998.csv.
-
-These files are the starting point for creating our Apache Pinot segments.
-
-## Create the Apache Pinot segments
-
-The first step in the process is to launch a standalone Apache Pinot cluster on a
-single server. This cluster will serve as a host to hold the initial segments,
-which we'll extract and copy for later re-use in the benchmark.
-
-Follow the steps outlined in the Apache Pinot Manual Cluster setup to launch the
-cluster:
-
-https://docs.pinot.apache.org/basics/getting-started/advanced-pinot-setup
-
-You don't need the Kafka service as we won't be using it.
-
-Next, we need to follow instructions similar to the ones described in
-the [Batch Import Example](https://docs.pinot.apache.org/basics/getting-started/pushing-your-data-to-pinot)
-in the Apache Pinot documentation.
-
-### Create the Apache Pinot tables
-
-Run:
-
-```
-pinot-admin.sh AddTable \
-  -tableConfigFile /absolute/path/to/table_config.json \
-  -schemaFile /absolute/path/to/schema.json -exec
-```
-
-For the command above you'll need the following configuration files:
-
-```table_config.json```
-```
-{
-  "tableName": "tpch_lineitem",
-  "segmentsConfig" : {
-    "replication" : "1",
-    "segmentAssignmentStrategy" : "BalanceNumSegmentAssignmentStrategy"
-  },
-  "tenants" : {
-    "broker":"DefaultTenant",
-    "server":"DefaultTenant"
-  },
-  "tableIndexConfig" : {
-    "starTreeIndexConfigs":[{
-      "maxLeafRecords": 100,
-      "functionColumnPairs": ["SUM__l_extendedprice", "SUM__l_discount", "SUM__l_quantity"],
-      "dimensionsSplitOrder": ["l_receiptdate", "l_shipdate", "l_shipmode", "l_returnflag"],
-      "skipStarNodeCreationForDimensions": [],
-      "skipMaterializationForDimensions": ["l_partkey", "l_commitdate", "l_linestatus", "l_comment", "l_orderkey", "l_shipinstruct", "l_linenumber", "l_suppkey"]
-    }]
-  },
-  "tableType":"OFFLINE",
-  "metadata": {}
-}
-```
-
-```schema.json```
-```
-{
-  "schemaName": "tpch_lineitem",
-  "dimensionFieldSpecs": [
-    {
-      "name": "l_orderkey",
-      "dataType": "INT"
-    },
-    {
-      "name": "l_partkey",
-      "dataType": "INT"
-    },
-    {
-      "name": "l_suppkey",
-      "dataType": "INT"
-    },
-    {
-      "name": "l_linenumber",
-      "dataType": "INT"
-    },
-    {
-      "name": "l_returnflag",
-      "dataType": "STRING"
-    },
-    {
-      "name": "l_linestatus",
-      "dataType": "STRING"
-    },
-    {
-      "name": "l_shipdate",
-      "dataType": "STRING"
-    },
-    {
-      "name": "l_commitdate",
-      "dataType": "STRING"
-    },
-    {
-      "name": "l_receiptdate",
-      "dataType": "STRING"
-    },
-    {
-      "name": "l_shipinstruct",
-      "dataType": "STRING"
-    },
-    {
-      "name": "l_shipmode",
-      "dataType": "STRING"
-    },
-    {
-      "name": "l_comment",
-      "dataType": "STRING"
-    }
-  ],
-  "metricFieldSpecs": [
-    {
-      "name": "l_quantity",
-      "dataType": "LONG"
-    },
-    {
-      "name": "l_extendedprice",
-      "dataType": "DOUBLE"
-    },
-    {
-      "name": "l_discount",
-      "dataType": "DOUBLE"
-    },
-    {
-      "name": "l_tax",
-      "dataType": "DOUBLE"
-    }
-  ]
-}
-```
-
-**Note:** The configuration as specified above will give you
-the data with the **optimal star tree index**. The index configuration is
-specified in the ```tableIndexConfig``` section in the ```table_config.json``` file. If
-you want to generate a different type of indexed segment, then you
-should modify the tableIndexConfig section to reflect the correct index
-type as described in the [Indexing Section](https://docs.pinot.apache.org/basics/features/indexing)
-of the Apache Pinot Documentation.
-
-### Create the Apache Pinot segments
-
-Next, we'll create the segments for this Apache Pinot table using the optimal
-star tree index configuration. 
-
-For this purpose you'll need a job specification YAML file. Here's an example
-that does the TPC-H data import:
-
-```job-spec.yml```
-```
-executionFrameworkSpec:
-  name: 'standalone'
-  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
-  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'
-  segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentUriPushJobRunner'
-jobType: SegmentCreationAndTarPush
-inputDirURI: '/absolute/path/to/pinot/contrib/pinot-druid-benchmark/data_out/raw_data/'
-includeFileNamePattern: 'glob:**/*.csv'
-outputDirURI: '/absolute/path/to/pinot/contrib/pinot-druid-benchmark/data_out/segments/'
-overwriteOutput: true
-pinotFSSpecs:
-  - scheme: file
-    className: org.apache.pinot.spi.filesystem.LocalPinotFS
-recordReaderSpec:
-  dataFormat: 'csv'
-  className: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReader'
-  configClassName: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReaderConfig'
-  configs:
-    delimiter: '|'
-    multiValueDelimiterEnabled: false
-    header: 'l_orderkey|l_partkey|l_suppkey|l_linenumber|l_quantity|l_extendedprice|l_discount|l_tax|l_returnflag|l_linestatus|l_shipdate|l_commitdate|l_receiptdate|l_shipinstruct|l_shipmode|l_comment|'
-tableSpec:
-  tableName: 'tpch_lineitem'
-  schemaURI: 'http://localhost:9000/tables/tpch_lineitem/schema'
-  tableConfigURI: 'http://localhost:9000/tables/tpch_lineitem'
-pinotClusterSpecs:
-  - controllerURI: 'http://localhost:9000'
-```
-
-**Note:** Make sure you modify the absolute paths for **inputDirURI** and **outputDirURI**
-above. The inputDirURI should point to the directory where you have
-generated the seven YEAR CSV files, 1992.csv through 1998.csv.
-
-After you have modified the input and output directories, run the job as described in the
-[Batch Import Example](https://docs.pinot.apache.org/basics/getting-started/pushing-your-data-to-pinot) document:
-
-
-```
-pinot-admin.sh LaunchDataIngestionJob \
-    -jobSpecFile /absolute/path/to/job-spec.yml
-```
-
-The segment creation output on the console will tell you where Apache Pinot will
-store the created segments (it should be your output directory). You should see a
-line like this in the output:
-
-```
-...
-outputDirURI: /absolute/path/to/pinot/contrib/pinot-druid-benchmark/data_out/segments/
-...
-```
-
-Inside it you'll find the tpch_lineitem_OFFLINE directory with 7 separate
-segments, 0 through 6. Wait for segment creation to finish, then tar/gzip the
-whole directory; this will be the optimal_startree_small_yearly temp segment
-that the benchmark requires.
-
-Try a few queries to ensure that the segments are working. You can find some
-sample queries under the benchmark directory ```src/main/resources/pinot_queries```.
-Watch the console output from the Apache Pinot cluster as you run the queries,
-and make sure there are no complaints that the queries were slow because an
-index wasn't found. If you see a message saying a query was slow, it means the
-indexes weren't created properly. With the optimal star tree index your total
-query time should be a few milliseconds at most.
-
-You can now shut down the Apache Pinot cluster that you started manually; when you
-launch the benchmark server cluster it will pick up your new segments.
-
diff --git a/contrib/pinot-druid-benchmark/pom.xml b/contrib/pinot-druid-benchmark/pom.xml
deleted file mode 100644
index a58229a76f4..00000000000
--- a/contrib/pinot-druid-benchmark/pom.xml
+++ /dev/null
@@ -1,99 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
-
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0"
-         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-
-  <groupId>org.apache.pinotdruidbenchmark</groupId>
-  <artifactId>pinot-druid-benchmark</artifactId>
-  <version>1.0-SNAPSHOT</version>
-  <packaging>jar</packaging>
-
-  <dependencies>
-    <dependency>
-      <groupId>org.apache.httpcomponents.client5</groupId>
-      <artifactId>httpclient5</artifactId>
-      <version>5.3.1</version>
-    </dependency>
-  </dependencies>
-
-  <build>
-    <plugins>
-      <plugin>
-        <artifactId>maven-compiler-plugin</artifactId>
-        <configuration>
-          <source>1.8</source>
-          <target>1.8</target>
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>org.codehaus.mojo</groupId>
-        <artifactId>appassembler-maven-plugin</artifactId>
-        <configuration>
-          <programs>
-            <program>
-              <mainClass>org.apache.pinotdruidbenchmark.DataSeparator</mainClass>
-              <name>data-separator</name>
-            </program>
-            <program>
-              <mainClass>org.apache.pinotdruidbenchmark.DataMerger</mainClass>
-              <name>data-merger</name>
-            </program>
-            <program>
-              <mainClass>org.apache.pinotdruidbenchmark.PinotResponseTime</mainClass>
-              <name>pinot-response-time</name>
-            </program>
-            <program>
-              <mainClass>org.apache.pinotdruidbenchmark.DruidResponseTime</mainClass>
-              <name>druid-response-time</name>
-            </program>
-            <program>
-              <mainClass>org.apache.pinotdruidbenchmark.PinotThroughput</mainClass>
-              <name>pinot-throughput</name>
-            </program>
-            <program>
-              <mainClass>org.apache.pinotdruidbenchmark.DruidThroughput</mainClass>
-              <name>druid-throughput</name>
-            </program>
-          </programs>
-          <binFileExtensions>
-            <unix>.sh</unix>
-          </binFileExtensions>
-          <platforms>
-            <platform>unix</platform>
-          </platforms>
-          <repositoryLayout>flat</repositoryLayout>
-          <repositoryName>lib</repositoryName>
-        </configuration>
-        <executions>
-          <execution>
-            <phase>package</phase>
-            <goals>
-              <goal>assemble</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-</project>
diff --git a/contrib/pinot-druid-benchmark/run_benchmark.sh b/contrib/pinot-druid-benchmark/run_benchmark.sh
deleted file mode 100755
index 7121268544a..00000000000
--- a/contrib/pinot-druid-benchmark/run_benchmark.sh
+++ /dev/null
@@ -1,125 +0,0 @@
-#!/bin/bash
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-
-echo "Compiling the benchmark driver..."
-mvn clean install > /dev/null
-rm -rf temp results
-mkdir temp results
-
-echo "Untaring Pinot Segments Without Extra Index..."
-tar -zxf pinot_non_startree.tar.gz -C temp
-echo "Untaring Pinot Segments With Inverted Index..."
-tar -zxf pinot_non_startree_inverted_index.tar.gz -C temp
-#echo "Untaring Pinot Segments With Default Startree Index..."
-#tar -zxf pinot_default_startree.tar.gz -C temp
-#echo "Untaring Pinot Segments With Optimal Startree Index..."
-#tar -zxf pinot_optimal_startree.tar.gz -C temp
-echo "Untaring Druid Segments..."
-tar -zxf druid_segment_cache.tar.gz -C temp
-
-cd temp
-echo "Downloading Druid..."
-curl -O http://static.druid.io/artifacts/releases/druid-0.9.2-bin.tar.gz
-tar -zxf druid-0.9.2-bin.tar.gz
-rm druid-0.9.2-bin.tar.gz
-echo "Downloading ZooKeeper..."
-curl -O http://apache.claz.org/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
-tar -zxf zookeeper-3.4.6.tar.gz
-rm zookeeper-3.4.6.tar.gz
-cd ..
-
-echo "Benchmarking Pinot without Extra Index..."
-java -jar pinot-tool-launcher-jar-with-dependencies.jar PerfBenchmarkRunner -mode startAll -dataDir temp/non_startree_small_yearly -tableNames tpch_lineitem_OFFLINE > /dev/null 2>&1 &
-PINOT_PROCESS_ID=$!
-echo ${PINOT_PROCESS_ID}
-echo "Wait 30 seconds so that cluster is ready for processing queries..."
-sleep 30
-echo "Starting response time benchmark..."
-./target/appassembler/bin/pinot-response-time.sh src/main/resources/pinot_queries http://localhost:8099/query 20 20 results/pinot_non_startree | tee results/pinot_non_startree_response_time.txt
-echo "Starting throughput benchmark..."
-./target/appassembler/bin/pinot-throughput.sh src/main/resources/pinot_queries http://localhost:8099/query 5 60 | tee results/pinot_non_startree_throughput.txt
-kill -9 ${PINOT_PROCESS_ID}
-
-echo "Benchmarking Pinot with Inverted Index..."
-java -jar pinot-tool-launcher-jar-with-dependencies.jar PerfBenchmarkRunner -mode startAll -dataDir temp/non_startree_small_yearly_inverted_index -tableNames tpch_lineitem_OFFLINE -invertedIndexColumns l_receiptdate,l_shipmode > /dev/null 2>&1 &
-PINOT_PROCESS_ID=$!
-echo "Wait 30 seconds so that cluster is ready for processing queries..."
-sleep 30
-echo "Starting response time benchmark..."
-./target/appassembler/bin/pinot-response-time.sh src/main/resources/pinot_queries http://localhost:8099/query 20 20 results/pinot_non_startree_inverted_index | tee results/pinot_non_startree_inverted_index_response_time.txt
-echo "Starting throughput benchmark..."
-./target/appassembler/bin/pinot-throughput.sh src/main/resources/pinot_queries http://localhost:8099/query 5 60 | tee results/pinot_non_startree_inverted_index_throughput.txt
-kill -9 ${PINOT_PROCESS_ID}
-
-#echo "Benchmarking Pinot with Default Startree Index..."
-#java -jar pinot-tool-launcher-jar-with-dependencies.jar PerfBenchmarkRunner -mode startAll -dataDir temp/default_startree_small_yearly -tableNames tpch_lineitem_OFFLINE > /dev/null 2>&1 &
-#PINOT_PROCESS_ID=$!
-#echo "Wait 30 seconds so that cluster is ready for processing queries..."
-#sleep 30
-#echo "Starting response time benchmark..."
-#./target/appassembler/bin/pinot-response-time.sh src/main/resources/pinot_queries http://localhost:8099/query 20 20 results/pinot_default_startree | tee results/pinot_default_startree_response_time.txt
-#echo "Starting throughput benchmark..."
-#./target/appassembler/bin/pinot-throughput.sh src/main/resources/pinot_queries http://localhost:8099/query 5 60 | tee results/pinot_default_startree_throughput.txt
-#kill -9 ${PINOT_PROCESS_ID}
-#
-#echo "Benchmarking Pinot with Optimal Startree Index..."
-#java -jar pinot-tool-launcher-jar-with-dependencies.jar PerfBenchmarkRunner -mode startAll -dataDir temp/optimal_startree_small_yearly -tableNames tpch_lineitem_OFFLINE > /dev/null 2>&1 &
-#PINOT_PROCESS_ID=$!
-#echo "Wait 30 seconds so that cluster is ready for processing queries..."
-#sleep 30
-#echo "Starting response time benchmark..."
-#./target/appassembler/bin/pinot-response-time.sh src/main/resources/pinot_queries http://localhost:8099/query 20 20 results/pinot_optimal_startree | tee results/pinot_optimal_startree_response_time.txt
-#echo "Starting throughput benchmark..."
-#./target/appassembler/bin/pinot-throughput.sh src/main/resources/pinot_queries http://localhost:8099/query 5 60 | tee results/pinot_optimal_startree_throughput.txt
-#kill -9 ${PINOT_PROCESS_ID}
-
-echo "Benchmarking Druid with Inverted Index (Default Setting)..."
-cd temp/druid-0.9.2
-./bin/init
-rm -rf var/druid/segment-cache
-mv ../segment-cache var/druid/segment-cache
-#Start ZooKeeper
-../zookeeper-3.4.6/bin/zkServer.sh start ../zookeeper-3.4.6/conf/zoo_sample.cfg > /dev/null 2>&1
-#Replace Druid JVM config and broker runtime properties
-cp ../../src/main/resources/config/druid_jvm.config conf/druid/broker/jvm.config
-cp ../../src/main/resources/config/druid_jvm.config conf/druid/historical/jvm.config
-cp ../../src/main/resources/config/druid_broker_runtime.properties conf/druid/broker/runtime.properties
-#Start Druid cluster
-java `cat conf/druid/broker/jvm.config | xargs` -cp conf-quickstart/druid/_common:conf/druid/broker:lib/* io.druid.cli.Main server broker > /dev/null 2>&1 &
-DRUID_BROKER_PROCESS_ID=$!
-java `cat conf/druid/historical/jvm.config | xargs` -cp conf-quickstart/druid/_common:conf/druid/historical:lib/* io.druid.cli.Main server historical > /dev/null 2>&1 &
-DRUID_SERVER_PROCESS_ID=$!
-#Run benchmark
-cd ../..
-echo "Wait 30 seconds so that cluster is ready for processing queries..."
-sleep 30
-echo "Starting response time benchmark..."
-./target/appassembler/bin/druid-response-time.sh src/main/resources/druid_queries http://localhost:8082/druid/v2/?pretty 20 20 results/druid | tee results/druid_response_time.txt
-echo "Starting throughput benchmark..."
-./target/appassembler/bin/druid-throughput.sh src/main/resources/druid_queries http://localhost:8082/druid/v2/?pretty 5 60 | tee results/druid_throughput.txt
-kill -9 ${DRUID_BROKER_PROCESS_ID}
-kill -9 ${DRUID_SERVER_PROCESS_ID}
-temp/zookeeper-3.4.6/bin/zkServer.sh stop temp/zookeeper-3.4.6/conf/zoo_sample.cfg > /dev/null 2>&1
-
-echo "********************************************************************"
-echo "* Benchmark finished. Results can be found in 'results' directory. *"
-echo "********************************************************************"
-
-exit 0
diff --git a/contrib/pinot-druid-benchmark/src/main/java/org/apache/pinotdruidbenchmark/DataMerger.java b/contrib/pinot-druid-benchmark/src/main/java/org/apache/pinotdruidbenchmark/DataMerger.java
deleted file mode 100644
index 981b3c69601..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/java/org/apache/pinotdruidbenchmark/DataMerger.java
+++ /dev/null
@@ -1,96 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-package org.apache.pinotdruidbenchmark;
-
-import java.io.BufferedReader;
-import java.io.BufferedWriter;
-import java.io.File;
-import java.io.FileReader;
-import java.io.FileWriter;
-import java.util.Arrays;
-
-
-/**
- * Merge multiple chunks of data into one according to <code>l_shipdate</code>.
- */
-public class DataMerger {
-  private DataMerger() {
-  }
-
-  private enum MergeGranularity {
-    MONTH, YEAR
-  }
-
-  public static void main(String[] args)
-      throws Exception {
-    if (args.length != 3) {
-      System.err.println("3 arguments required: INPUT_DIR, OUTPUT_DIR, MERGE_GRANULARITY.");
-      return;
-    }
-
-    File inputDir = new File(args[0]);
-    File outputDir = new File(args[1]);
-    if (!outputDir.exists()) {
-      if (!outputDir.mkdirs()) {
-        throw new RuntimeException("Failed to create output directory: " + outputDir);
-      }
-    }
-
-    int subStringLength;
-    switch (MergeGranularity.valueOf(args[2])) {
-      case MONTH:
-        subStringLength = 7;
-        break;
-      case YEAR:
-        subStringLength = 4;
-        break;
-      default:
-        throw new IllegalArgumentException("Unsupported merge granularity: " + args[2]);
-    }
-
-    String[] inputFileNames = inputDir.list();
-    assert inputFileNames != null;
-    Arrays.sort(inputFileNames);
-
-    String currentOutputFileName = "";
-    BufferedWriter writer = null;
-    for (String inputFileName : inputFileNames) {
-      BufferedReader reader = new BufferedReader(new FileReader(new File(inputDir, inputFileName)));
-      String expectedOutputFileName = inputFileName.substring(0, subStringLength) + ".csv";
-      if (!currentOutputFileName.equals(expectedOutputFileName)) {
-        if (writer != null) {
-          writer.close();
-        }
-        currentOutputFileName = expectedOutputFileName;
-        writer = new BufferedWriter(new FileWriter(new File(outputDir, currentOutputFileName)));
-      }
-
-      assert writer != null;
-      String line;
-      while ((line = reader.readLine()) != null) {
-        writer.write(line);
-        writer.newLine();
-      }
-      reader.close();
-    }
-    if (writer != null) {
-      writer.close();
-    }
-  }
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/java/org/apache/pinotdruidbenchmark/DataSeparator.java b/contrib/pinot-druid-benchmark/src/main/java/org/apache/pinotdruidbenchmark/DataSeparator.java
deleted file mode 100644
index 9211599b487..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/java/org/apache/pinotdruidbenchmark/DataSeparator.java
+++ /dev/null
@@ -1,69 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-package org.apache.pinotdruidbenchmark;
-
-import java.io.BufferedReader;
-import java.io.BufferedWriter;
-import java.io.File;
-import java.io.FileReader;
-import java.io.FileWriter;
-import java.util.HashMap;
-import java.util.Map;
-
-
-/**
- * Separate data set into multiple chunks according to <code>l_shipdate</code>.
- */
-public final class DataSeparator {
-  private DataSeparator() {
-  }
-
-  public static void main(String[] args)
-      throws Exception {
-    if (args.length != 2) {
-      System.err.println("2 arguments required: INPUT_FILE_PATH, OUTPUT_DIR.");
-      return;
-    }
-
-    File inputFile = new File(args[0]);
-    File outputDir = new File(args[1]);
-    if (!outputDir.exists()) {
-      if (!outputDir.mkdirs()) {
-        throw new RuntimeException("Failed to create output directory: " + outputDir);
-      }
-    }
-
-    BufferedReader reader = new BufferedReader(new FileReader(inputFile));
-    Map<String, BufferedWriter> writerMap = new HashMap<>();
-    String line;
-    while ((line = reader.readLine()) != null) {
-      String shipDate = line.split("\\|")[10];
-      BufferedWriter writer = writerMap.get(shipDate);
-      if (writer == null) {
-        writer = new BufferedWriter(new FileWriter(new File(outputDir, shipDate + ".csv")));
-        writerMap.put(shipDate, writer);
-      }
-      writer.write(line);
-      writer.newLine();
-    }
-    for (BufferedWriter writer : writerMap.values()) {
-      writer.close();
-    }
-  }
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/java/org/apache/pinotdruidbenchmark/DruidResponseTime.java b/contrib/pinot-druid-benchmark/src/main/java/org/apache/pinotdruidbenchmark/DruidResponseTime.java
deleted file mode 100644
index a7f7c5b5203..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/java/org/apache/pinotdruidbenchmark/DruidResponseTime.java
+++ /dev/null
@@ -1,145 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-package org.apache.pinotdruidbenchmark;
-
-import java.io.BufferedInputStream;
-import java.io.BufferedReader;
-import java.io.BufferedWriter;
-import java.io.File;
-import java.io.FileReader;
-import java.io.FileWriter;
-import java.util.Arrays;
-import org.apache.hc.client5.http.classic.methods.HttpPost;
-import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
-import org.apache.hc.client5.http.impl.classic.CloseableHttpResponse;
-import org.apache.hc.client5.http.impl.classic.HttpClients;
-import org.apache.hc.core5.http.io.entity.StringEntity;
-
-
-/**
- * Test single query response time for Druid.
- */
-public class DruidResponseTime {
-  private DruidResponseTime() {
-  }
-
-  private static final byte[] BYTE_BUFFER = new byte[4096];
-  private static final char[] CHAR_BUFFER = new char[4096];
-
-  public static void main(String[] args)
-      throws Exception {
-    if (args.length != 4 && args.length != 5) {
-      System.err.println(
-          "4 or 5 arguments required: QUERY_DIR, RESOURCE_URL, WARM_UP_ROUNDS, 
TEST_ROUNDS, RESULT_DIR (optional).");
-      return;
-    }
-
-    File queryDir = new File(args[0]);
-    String resourceUrl = args[1];
-    int warmUpRounds = Integer.parseInt(args[2]);
-    int testRounds = Integer.parseInt(args[3]);
-    File resultDir;
-    if (args.length == 4) {
-      resultDir = null;
-    } else {
-      resultDir = new File(args[4]);
-      if (!resultDir.exists()) {
-        if (!resultDir.mkdirs()) {
-          throw new RuntimeException("Failed to create result directory: " + resultDir);
-        }
-      }
-    }
-
-    File[] queryFiles = queryDir.listFiles();
-    assert queryFiles != null;
-    Arrays.sort(queryFiles);
-
-    try (CloseableHttpClient httpClient = HttpClients.createDefault()) {
-      HttpPost httpPost = new HttpPost(resourceUrl);
-      httpPost.addHeader("content-type", "application/json");
-
-      for (File queryFile : queryFiles) {
-        StringBuilder stringBuilder = new StringBuilder();
-        try (BufferedReader bufferedReader = new BufferedReader(new FileReader(queryFile))) {
-          int length;
-          while ((length = bufferedReader.read(CHAR_BUFFER)) > 0) {
-            stringBuilder.append(new String(CHAR_BUFFER, 0, length));
-          }
-        }
-        String query = stringBuilder.toString();
-        httpPost.setEntity(new StringEntity(query));
-
-        System.out.println("--------------------------------------------------------------------------------");
-        System.out.println("Running query: " + query);
-        System.out.println("--------------------------------------------------------------------------------");
-
-        // Warm-up Rounds
-        System.out.println("Run " + warmUpRounds + " times to warm up...");
-        for (int i = 0; i < warmUpRounds; i++) {
-          try (CloseableHttpResponse httpResponse = httpClient.execute(httpPost)) {
-            // httpResponse will be auto closed
-          }
-          System.out.print('*');
-        }
-        System.out.println();
-
-        // Test Rounds
-        System.out.println("Run " + testRounds + " times to get response time 
statistics...");
-        long[] responseTimes = new long[testRounds];
-        long totalResponseTime = 0L;
-        for (int i = 0; i < testRounds; i++) {
-          long startTime = System.currentTimeMillis();
-          try (CloseableHttpResponse httpResponse = httpClient.execute(httpPost)) {
-            // httpResponse will be auto closed
-          }
-          long responseTime = System.currentTimeMillis() - startTime;
-          responseTimes[i] = responseTime;
-          totalResponseTime += responseTime;
-          System.out.print(responseTime + "ms ");
-        }
-        System.out.println();
-
-        // Store result.
-        if (resultDir != null) {
-          File resultFile = new File(resultDir, queryFile.getName() + ".result");
-          try (CloseableHttpResponse httpResponse = httpClient.execute(httpPost)) {
-            try (BufferedInputStream bufferedInputStream = new BufferedInputStream(
-                httpResponse.getEntity().getContent());
-                BufferedWriter bufferedWriter = new BufferedWriter(new FileWriter(resultFile))) {
-              int length;
-              while ((length = bufferedInputStream.read(BYTE_BUFFER)) > 0) {
-                bufferedWriter.write(new String(BYTE_BUFFER, 0, length));
-              }
-            }
-          }
-        }
-
-        // Process response times.
-        double averageResponseTime = (double) totalResponseTime / testRounds;
-        double temp = 0;
-        for (long responseTime : responseTimes) {
-          temp += (responseTime - averageResponseTime) * (responseTime - averageResponseTime);
-        }
-        double standardDeviation = Math.sqrt(temp / testRounds);
-        System.out.println("Average response time: " + averageResponseTime + 
"ms");
-        System.out.println("Standard deviation: " + standardDeviation);
-      }
-    }
-  }
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/java/org/apache/pinotdruidbenchmark/DruidThroughput.java b/contrib/pinot-druid-benchmark/src/main/java/org/apache/pinotdruidbenchmark/DruidThroughput.java
deleted file mode 100644
index c1277a84fe4..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/java/org/apache/pinotdruidbenchmark/DruidThroughput.java
+++ /dev/null
@@ -1,128 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-package org.apache.pinotdruidbenchmark;
-
-import java.io.BufferedReader;
-import java.io.File;
-import java.io.FileReader;
-import java.io.IOException;
-import java.util.Arrays;
-import java.util.Random;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-import java.util.concurrent.atomic.AtomicInteger;
-import java.util.concurrent.atomic.AtomicLong;
-import org.apache.hc.client5.http.classic.methods.HttpPost;
-import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
-import org.apache.hc.client5.http.impl.classic.CloseableHttpResponse;
-import org.apache.hc.client5.http.impl.classic.HttpClients;
-import org.apache.hc.core5.http.io.entity.StringEntity;
-
-
-/**
- * Test throughput for Druid.
- */
-public class DruidThroughput {
-  private DruidThroughput() {
-  }
-
-  private static final long RANDOM_SEED = 123456789L;
-  private static final Random RANDOM = new Random(RANDOM_SEED);
-  private static final char[] CHAR_BUFFER = new char[4096];
-
-  private static final int MILLIS_PER_SECOND = 1000;
-  private static final int REPORT_INTERVAL_MILLIS = 3000;
-
-  @SuppressWarnings("InfiniteLoopStatement")
-  public static void main(String[] args)
-      throws Exception {
-    if (args.length != 3 && args.length != 4) {
-      System.err.println("3 or 4 arguments required: QUERY_DIR, RESOURCE_URL, 
NUM_CLIENTS, TEST_TIME (seconds).");
-      return;
-    }
-
-    File queryDir = new File(args[0]);
-    String resourceUrl = args[1];
-    final int numClients = Integer.parseInt(args[2]);
-    final long endTime;
-    if (args.length == 3) {
-      endTime = Long.MAX_VALUE;
-    } else {
-      endTime = System.currentTimeMillis() + Integer.parseInt(args[3]) * MILLIS_PER_SECOND;
-    }
-
-    File[] queryFiles = queryDir.listFiles();
-    assert queryFiles != null;
-    Arrays.sort(queryFiles);
-
-    final int numQueries = queryFiles.length;
-    final HttpPost[] httpPosts = new HttpPost[numQueries];
-    for (int i = 0; i < numQueries; i++) {
-      HttpPost httpPost = new HttpPost(resourceUrl);
-      httpPost.addHeader("content-type", "application/json");
-      StringBuilder stringBuilder = new StringBuilder();
-      try (BufferedReader bufferedReader = new BufferedReader(new FileReader(queryFiles[i]))) {
-        int length;
-        while ((length = bufferedReader.read(CHAR_BUFFER)) > 0) {
-          stringBuilder.append(new String(CHAR_BUFFER, 0, length));
-        }
-      }
-      String query = stringBuilder.toString();
-      httpPost.setEntity(new StringEntity(query));
-      httpPosts[i] = httpPost;
-    }
-
-    final AtomicInteger counter = new AtomicInteger(0);
-    final AtomicLong totalResponseTime = new AtomicLong(0L);
-    final ExecutorService executorService = Executors.newFixedThreadPool(numClients);
-
-    for (int i = 0; i < numClients; i++) {
-      executorService.submit(new Runnable() {
-        @Override
-        public void run() {
-          try (CloseableHttpClient httpClient = HttpClients.createDefault()) {
-            while (System.currentTimeMillis() < endTime) {
-              long startTime = System.currentTimeMillis();
-              try (CloseableHttpResponse httpResponse = httpClient.execute(httpPosts[RANDOM.nextInt(numQueries)])) {
-                // httpResponse will be auto closed
-              }
-              long responseTime = System.currentTimeMillis() - startTime;
-              counter.getAndIncrement();
-              totalResponseTime.getAndAdd(responseTime);
-            }
-          } catch (IOException e) {
-            e.printStackTrace();
-          }
-        }
-      });
-    }
-    executorService.shutdown();
-
-    long startTime = System.currentTimeMillis();
-    while (System.currentTimeMillis() < endTime) {
-      Thread.sleep(REPORT_INTERVAL_MILLIS);
-      double timePassedSeconds = ((double) (System.currentTimeMillis() - startTime)) / MILLIS_PER_SECOND;
-      int count = counter.get();
-      double avgResponseTime = ((double) totalResponseTime.get()) / count;
-      System.out.println(
-          "Time Passed: " + timePassedSeconds + "s, Query Executed: " + count 
+ ", QPS: " + count / timePassedSeconds
-              + ", Avg Response Time: " + avgResponseTime + "ms");
-    }
-  }
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/java/org/apache/pinotdruidbenchmark/PinotResponseTime.java b/contrib/pinot-druid-benchmark/src/main/java/org/apache/pinotdruidbenchmark/PinotResponseTime.java
deleted file mode 100644
index 436c40721b1..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/java/org/apache/pinotdruidbenchmark/PinotResponseTime.java
+++ /dev/null
@@ -1,136 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-package org.apache.pinotdruidbenchmark;
-
-import java.io.BufferedInputStream;
-import java.io.BufferedReader;
-import java.io.BufferedWriter;
-import java.io.File;
-import java.io.FileReader;
-import java.io.FileWriter;
-import java.util.Arrays;
-import org.apache.hc.client5.http.classic.methods.HttpPost;
-import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
-import org.apache.hc.client5.http.impl.classic.CloseableHttpResponse;
-import org.apache.hc.client5.http.impl.classic.HttpClients;
-import org.apache.hc.core5.http.io.entity.StringEntity;
-
-
-/**
- * Test single query response time for Pinot.
- */
-public class PinotResponseTime {
-  private PinotResponseTime() {
-  }
-
-  private static final byte[] BYTE_BUFFER = new byte[4096];
-
-  public static void main(String[] args)
-      throws Exception {
-    if (args.length != 4 && args.length != 5) {
-      System.err.println(
-          "4 or 5 arguments required: QUERY_DIR, RESOURCE_URL, WARM_UP_ROUNDS, 
TEST_ROUNDS, RESULT_DIR (optional).");
-      return;
-    }
-
-    File queryDir = new File(args[0]);
-    String resourceUrl = args[1];
-    int warmUpRounds = Integer.parseInt(args[2]);
-    int testRounds = Integer.parseInt(args[3]);
-    File resultDir;
-    if (args.length == 4) {
-      resultDir = null;
-    } else {
-      resultDir = new File(args[4]);
-      if (!resultDir.exists()) {
-        if (!resultDir.mkdirs()) {
-          throw new RuntimeException("Failed to create result directory: " + resultDir);
-        }
-      }
-    }
-
-    File[] queryFiles = queryDir.listFiles();
-    assert queryFiles != null;
-    Arrays.sort(queryFiles);
-
-    try (CloseableHttpClient httpClient = HttpClients.createDefault()) {
-      HttpPost httpPost = new HttpPost(resourceUrl);
-
-      for (File queryFile : queryFiles) {
-        String query = new BufferedReader(new FileReader(queryFile)).readLine();
-        httpPost.setEntity(new StringEntity("{\"pql\":\"" + query + "\"}"));
-
-        System.out.println("--------------------------------------------------------------------------------");
-        System.out.println("Running query: " + query);
-        System.out.println("--------------------------------------------------------------------------------");
-
-        // Warm-up Rounds
-        System.out.println("Run " + warmUpRounds + " times to warm up...");
-        for (int i = 0; i < warmUpRounds; i++) {
-          try (CloseableHttpResponse httpResponse = httpClient.execute(httpPost)) {
-            // httpResponse will be auto closed
-          }
-          System.out.print('*');
-        }
-        System.out.println();
-
-        // Test Rounds
-        System.out.println("Run " + testRounds + " times to get response time 
statistics...");
-        long[] responseTimes = new long[testRounds];
-        long totalResponseTime = 0L;
-        for (int i = 0; i < testRounds; i++) {
-          long startTime = System.currentTimeMillis();
-          try (CloseableHttpResponse httpResponse = httpClient.execute(httpPost)) {
-            // http response will be auto closed
-          }
-          long responseTime = System.currentTimeMillis() - startTime;
-          responseTimes[i] = responseTime;
-          totalResponseTime += responseTime;
-          System.out.print(responseTime + "ms ");
-        }
-        System.out.println();
-
-        // Store result.
-        if (resultDir != null) {
-          File resultFile = new File(resultDir, queryFile.getName() + ".result");
-          try (CloseableHttpResponse httpResponse = httpClient.execute(httpPost)) {
-            try (BufferedInputStream bufferedInputStream = new BufferedInputStream(
-                httpResponse.getEntity().getContent());
-                BufferedWriter bufferedWriter = new BufferedWriter(new FileWriter(resultFile))) {
-              int length;
-              while ((length = bufferedInputStream.read(BYTE_BUFFER)) > 0) {
-                bufferedWriter.write(new String(BYTE_BUFFER, 0, length));
-              }
-            }
-          }
-        }
-
-        // Process response times.
-        double averageResponseTime = (double) totalResponseTime / testRounds;
-        double temp = 0;
-        for (long responseTime : responseTimes) {
-          temp += (responseTime - averageResponseTime) * (responseTime - averageResponseTime);
-        }
-        double standardDeviation = Math.sqrt(temp / testRounds);
-        System.out.println("Average response time: " + averageResponseTime + 
"ms");
-        System.out.println("Standard deviation: " + standardDeviation);
-      }
-    }
-  }
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/java/org/apache/pinotdruidbenchmark/PinotThroughput.java b/contrib/pinot-druid-benchmark/src/main/java/org/apache/pinotdruidbenchmark/PinotThroughput.java
deleted file mode 100644
index 0122fef6b51..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/java/org/apache/pinotdruidbenchmark/PinotThroughput.java
+++ /dev/null
@@ -1,120 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-package org.apache.pinotdruidbenchmark;
-
-import java.io.BufferedReader;
-import java.io.File;
-import java.io.FileReader;
-import java.io.IOException;
-import java.util.Arrays;
-import java.util.Random;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-import java.util.concurrent.atomic.AtomicInteger;
-import java.util.concurrent.atomic.AtomicLong;
-import org.apache.hc.client5.http.classic.methods.HttpPost;
-import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
-import org.apache.hc.client5.http.impl.classic.CloseableHttpResponse;
-import org.apache.hc.client5.http.impl.classic.HttpClients;
-import org.apache.hc.core5.http.io.entity.EntityUtils;
-import org.apache.hc.core5.http.io.entity.StringEntity;
-
-
-/**
- * Test throughput for Pinot.
- */
-public class PinotThroughput {
-  private PinotThroughput() {
-  }
-
-  private static final long RANDOM_SEED = 123456789L;
-  private static final Random RANDOM = new Random(RANDOM_SEED);
-
-  private static final int MILLIS_PER_SECOND = 1000;
-  private static final int REPORT_INTERVAL_MILLIS = 3000;
-
-  @SuppressWarnings("InfiniteLoopStatement")
-  public static void main(String[] args)
-      throws Exception {
-    if (args.length != 3 && args.length != 4) {
-      System.err.println("3 or 4 arguments required: QUERY_DIR, RESOURCE_URL, 
NUM_CLIENTS, TEST_TIME (seconds).");
-      return;
-    }
-
-    File queryDir = new File(args[0]);
-    String resourceUrl = args[1];
-    final int numClients = Integer.parseInt(args[2]);
-    final long endTime;
-    if (args.length == 3) {
-      endTime = Long.MAX_VALUE;
-    } else {
-      endTime = System.currentTimeMillis() + Integer.parseInt(args[3]) * MILLIS_PER_SECOND;
-    }
-
-    File[] queryFiles = queryDir.listFiles();
-    assert queryFiles != null;
-    Arrays.sort(queryFiles);
-
-    final int numQueries = queryFiles.length;
-    final HttpPost[] httpPosts = new HttpPost[numQueries];
-    for (int i = 0; i < numQueries; i++) {
-      HttpPost httpPost = new HttpPost(resourceUrl);
-      String query = new BufferedReader(new FileReader(queryFiles[i])).readLine();
-      httpPost.setEntity(new StringEntity("{\"pql\":\"" + query + "\"}"));
-      httpPosts[i] = httpPost;
-    }
-
-    final AtomicInteger counter = new AtomicInteger(0);
-    final AtomicLong totalResponseTime = new AtomicLong(0L);
-    final ExecutorService executorService = Executors.newFixedThreadPool(numClients);
-
-    for (int i = 0; i < numClients; i++) {
-      executorService.submit(new Runnable() {
-        @Override
-        public void run() {
-          try (CloseableHttpClient httpClient = HttpClients.createDefault()) {
-            while (System.currentTimeMillis() < endTime) {
-              long startTime = System.currentTimeMillis();
-              try (CloseableHttpResponse httpResponse = httpClient.execute(httpPosts[RANDOM.nextInt(numQueries)])) {
-                EntityUtils.consume(httpResponse.getEntity());
-              }
-              long responseTime = System.currentTimeMillis() - startTime;
-              counter.getAndIncrement();
-              totalResponseTime.getAndAdd(responseTime);
-            }
-          } catch (IOException e) {
-            e.printStackTrace();
-          }
-        }
-      });
-    }
-    executorService.shutdown();
-
-    long startTime = System.currentTimeMillis();
-    while (System.currentTimeMillis() < endTime) {
-      Thread.sleep(REPORT_INTERVAL_MILLIS);
-      double timePassedSeconds = ((double) (System.currentTimeMillis() - startTime)) / MILLIS_PER_SECOND;
-      int count = counter.get();
-      double avgResponseTime = ((double) totalResponseTime.get()) / count;
-      System.out.println(
-          "Time Passed: " + timePassedSeconds + "s, Query Executed: " + count 
+ ", QPS: " + count / timePassedSeconds
-              + ", Avg Response Time: " + avgResponseTime + "ms");
-    }
-  }
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/config/druid-0.9.2_index_tpch_lineitem_yearly.json b/contrib/pinot-druid-benchmark/src/main/resources/config/druid-0.9.2_index_tpch_lineitem_yearly.json
deleted file mode 100644
index 28a6955b608..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/config/druid-0.9.2_index_tpch_lineitem_yearly.json
+++ /dev/null
@@ -1,91 +0,0 @@
-{
-  "type": "index_hadoop",
-  "spec": {
-    "ioConfig": {
-      "type": "hadoop",
-      "inputSpec": {
-        "type": "static",
-        "paths": "<CSV FILE PATH>"
-      }
-    },
-    "dataSchema": {
-      "dataSource": "tpch_lineitem",
-      "granularitySpec": {
-        "segmentGranularity": "year",
-        "intervals": [
-          "1992/1999"
-        ]
-      },
-      "parser": {
-        "type": "hadoopyString",
-        "parseSpec": {
-          "format": "tsv",
-          "timestampSpec": {
-            "column": "l_shipdate"
-          },
-          "dimensionsSpec": {
-            "dimensions": [
-              "l_orderkey",
-              "l_partkey",
-              "l_suppkey",
-              "l_linenumber",
-              "l_returnflag",
-              "l_linestatus",
-              "l_shipdate",
-              "l_commitdate",
-              "l_receiptdate",
-              "l_shipinstruct",
-              "l_shipmode",
-              "l_comment"
-            ]
-          },
-          "delimiter": "|",
-          "columns": [
-            "l_orderkey",
-            "l_partkey",
-            "l_suppkey",
-            "l_linenumber",
-            "l_quantity",
-            "l_extendedprice",
-            "l_discount",
-            "l_tax",
-            "l_returnflag",
-            "l_linestatus",
-            "l_shipdate",
-            "l_commitdate",
-            "l_receiptdate",
-            "l_shipinstruct",
-            "l_shipmode",
-            "l_comment"
-          ]
-        }
-      },
-      "metricsSpec": [
-        {
-          "type": "count",
-          "name": "count"
-        },
-        {
-          "type": "longSum",
-          "name": "l_quantity",
-          "fieldName": "l_quantity"
-        },
-        {
-          "type": "doubleSum",
-          "name": "l_extendedprice",
-          "fieldName": "l_extendedprice"
-        },
-        {
-          "type": "doubleSum",
-          "name": "l_discount",
-          "fieldName": "l_discount"
-        },
-        {
-          "type": "doubleSum",
-          "name": "l_tax",
-          "fieldName": "l_tax"
-        }
-      ]
-    }
-  }
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/config/druid_broker_runtime.properties b/contrib/pinot-druid-benchmark/src/main/resources/config/druid_broker_runtime.properties
deleted file mode 100644
index cd39b08c710..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/config/druid_broker_runtime.properties
+++ /dev/null
@@ -1,32 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-
-druid.service=druid/broker
-druid.port=8082
-
-# HTTP server threads
-druid.broker.http.numConnections=5
-druid.server.http.numThreads=25
-
-# Processing threads and buffers
-druid.processing.buffer.sizeBytes=536870912
-druid.processing.numThreads=7
-
-# Query cache
-druid.broker.cache.useCache=false
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/config/druid_jvm.config b/contrib/pinot-druid-benchmark/src/main/resources/config/druid_jvm.config
deleted file mode 100644
index eed737204d4..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/config/druid_jvm.config
+++ /dev/null
@@ -1,8 +0,0 @@
--server
--Xms4g
--Xmx4g
--XX:MaxDirectMemorySize=30G
--Duser.timezone=UTC
--Dfile.encoding=UTF-8
--Djava.io.tmpdir=var/tmp
--Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/config/pinot_csv_config.json b/contrib/pinot-druid-benchmark/src/main/resources/config/pinot_csv_config.json
deleted file mode 100644
index 724a7c5531d..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/config/pinot_csv_config.json
+++ /dev/null
@@ -1,4 +0,0 @@
-{
-  "CsvHeader": 
"l_orderkey|l_partkey|l_suppkey|l_linenumber|l_quantity|l_extendedprice|l_discount|l_tax|l_returnflag|l_linestatus|l_shipdate|l_commitdate|l_receiptdate|l_shipinstruct|l_shipmode|l_comment|",
-  "CsvDelimiter": "|"
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/config/pinot_generator_config.json b/contrib/pinot-druid-benchmark/src/main/resources/config/pinot_generator_config.json
deleted file mode 100644
index 4a5d53e798b..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/config/pinot_generator_config.json
+++ /dev/null
@@ -1,12 +0,0 @@
-{
-  "dataDir": "<RAW DATA DIR>",
-  "format": "CSV",
-  "outDir": "<OUTPUT DIR>",
-  "overwrite": true,
-  "tableName": "tpch_lineitem",
-  "segmentName": "tpch_lineitem",
-  "schemaFile": "config/pinot_schema.json",
-  "readerConfigFile": "config/pinot_csv_config.json",
-  "enableStarTreeIndex": true,
-  "starTreeIndexSpecFile": "config/pinot_startree_spec.json"
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/config/pinot_schema.json b/contrib/pinot-druid-benchmark/src/main/resources/config/pinot_schema.json
deleted file mode 100644
index 589311c116e..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/config/pinot_schema.json
+++ /dev/null
@@ -1,71 +0,0 @@
-{
-  "schemaName": "tpch_lineitem",
-  "dimensionFieldSpecs": [
-    {
-      "name": "l_orderkey",
-      "dataType": "INT"
-    },
-    {
-      "name": "l_partkey",
-      "dataType": "INT"
-    },
-    {
-      "name": "l_suppkey",
-      "dataType": "INT"
-    },
-    {
-      "name": "l_linenumber",
-      "dataType": "INT"
-    },
-    {
-      "name": "l_returnflag",
-      "dataType": "STRING"
-    },
-    {
-      "name": "l_linestatus",
-      "dataType": "STRING"
-    },
-    {
-      "name": "l_shipdate",
-      "dataType": "STRING"
-    },
-    {
-      "name": "l_commitdate",
-      "dataType": "STRING"
-    },
-    {
-      "name": "l_receiptdate",
-      "dataType": "STRING"
-    },
-    {
-      "name": "l_shipinstruct",
-      "dataType": "STRING"
-    },
-    {
-      "name": "l_shipmode",
-      "dataType": "STRING"
-    },
-    {
-      "name": "l_comment",
-      "dataType": "STRING"
-    }
-  ],
-  "metricFieldSpecs": [
-    {
-      "name": "l_quantity",
-      "dataType": "LONG"
-    },
-    {
-      "name": "l_extendedprice",
-      "dataType": "DOUBLE"
-    },
-    {
-      "name": "l_discount",
-      "dataType": "DOUBLE"
-    },
-    {
-      "name": "l_tax",
-      "dataType": "DOUBLE"
-    }
-  ]
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/config/pinot_startree_spec.json b/contrib/pinot-druid-benchmark/src/main/resources/config/pinot_startree_spec.json
deleted file mode 100644
index 8e4dfc88107..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/config/pinot_startree_spec.json
+++ /dev/null
@@ -1,6 +0,0 @@
-{
-  "maxLeafRecords": 100,
-  "dimensionsSplitOrder": ["l_receiptdate", "l_shipdate", "l_shipmode", 
"l_returnflag"],
-  "skipStarNodeCreationForDimensions": [],
-  "skipMaterializationForDimensions": ["l_partkey", "l_commitdate", 
"l_linestatus", "l_comment", "l_orderkey", "l_shipinstruct", "l_linenumber", 
"l_suppkey"]
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/config/pinot_table.json b/contrib/pinot-druid-benchmark/src/main/resources/config/pinot_table.json
deleted file mode 100644
index c3e37fe9245..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/config/pinot_table.json
+++ /dev/null
@@ -1,18 +0,0 @@
-{
-  "tableName": "tpch_lineitem",
-  "segmentsConfig" : {
-    "replication" : "1",
-    "segmentAssignmentStrategy" : "BalanceNumSegmentAssignmentStrategy"
-  },
-  "tenants" : {
-    "broker":"brokerOne",
-    "server":"serverOne"
-  },
-  "tableIndexConfig" : {
-    "invertedIndexColumns" : [],
-    "loadMode"  : "HEAP",
-    "lazyLoad"  : "false"
-  },
-  "tableType":"OFFLINE",
-  "metadata": {}
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/0.json b/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/0.json
deleted file mode 100644
index 90a886f90b6..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/0.json
+++ /dev/null
@@ -1,18 +0,0 @@
-{
-  "queryType": "timeseries",
-  "dataSource": "tpch_lineitem",
-  "intervals": ["1992-01-01/1999-01-01"],
-  "granularity": "all",
-  "aggregations": [
-    {
-      "type": "doubleSum",
-      "name": "l_extendedprice_sum",
-      "fieldName": "l_extendedprice"
-    },
-    {
-      "type": "doubleSum",
-      "name": "l_discount_sum",
-      "fieldName": "l_discount"
-    }
-  ]
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/1.json b/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/1.json
deleted file mode 100644
index 82e450223c8..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/1.json
+++ /dev/null
@@ -1,18 +0,0 @@
-{
-  "queryType": "timeseries",
-  "dataSource": "tpch_lineitem",
-  "intervals": ["1992-01-01/1999-01-01"],
-  "granularity": "all",
-  "filter": {
-    "type": "selector",
-    "dimension": "l_returnflag",
-    "value": "R"
-  },
-  "aggregations": [
-    {
-      "type": "doubleSum",
-      "name": "l_extendedprice_sum",
-      "fieldName": "l_extendedprice"
-    }
-  ]
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/2.json b/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/2.json
deleted file mode 100644
index 28748dbc8f2..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/2.json
+++ /dev/null
@@ -1,13 +0,0 @@
-{
-  "queryType": "timeseries",
-  "dataSource": "tpch_lineitem",
-  "intervals": ["1996-12-01/1997-01-01"],
-  "granularity": "all",
-  "aggregations": [
-    {
-      "type": "doubleSum",
-      "name": "l_extendedprice_sum",
-      "fieldName": "l_extendedprice"
-    }
-  ]
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/3.json b/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/3.json
deleted file mode 100644
index e565ca2b3f3..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/3.json
+++ /dev/null
@@ -1,16 +0,0 @@
-{
-  "queryType": "topN",
-  "dataSource": "tpch_lineitem",
-  "intervals": ["1992-01-01/1999-01-01"],
-  "granularity": "all",
-  "aggregations": [
-    {
-      "type": "doubleSum",
-      "name": "l_extendedprice_sum",
-      "fieldName": "l_extendedprice"
-    }
-  ],
-  "dimension": "l_shipdate",
-  "threshold": 10,
-  "metric": "l_extendedprice_sum"
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/4.json b/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/4.json
deleted file mode 100644
index b81f9da4ee1..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/4.json
+++ /dev/null
@@ -1,21 +0,0 @@
-{
-  "queryType": "topN",
-  "dataSource": "tpch_lineitem",
-  "intervals": ["1992-01-01/1999-01-01"],
-  "granularity": "all",
-  "aggregations": [
-    {
-      "type": "doubleSum",
-      "name": "l_extendedprice_sum",
-      "fieldName": "l_extendedprice"
-    },
-    {
-      "type": "longSum",
-      "name": "l_quantity_sum",
-      "fieldName": "l_quantity"
-    }
-  ],
-  "dimension": "l_shipdate",
-  "threshold": 10,
-  "metric": "l_extendedprice_sum"
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/5.json b/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/5.json
deleted file mode 100644
index ad55d3f50b8..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/5.json
+++ /dev/null
@@ -1,16 +0,0 @@
-{
-  "queryType": "topN",
-  "dataSource": "tpch_lineitem",
-  "intervals": ["1995-01-01/1997-01-01"],
-  "granularity": "all",
-  "aggregations": [
-    {
-      "type": "doubleSum",
-      "name": "l_extendedprice_sum",
-      "fieldName": "l_extendedprice"
-    }
-  ],
-  "dimension": "l_shipdate",
-  "threshold": 10,
-  "metric": "l_extendedprice_sum"
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/6.json b/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/6.json
deleted file mode 100644
index 6bf7b0fefe4..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/druid_queries/6.json
+++ /dev/null
@@ -1,34 +0,0 @@
-{
-  "queryType": "topN",
-  "dataSource": "tpch_lineitem",
-  "intervals": ["1992-01-01/1999-01-01"],
-  "granularity": "all",
-  "filter": {
-    "type": "and",
-    "fields": [
-      {
-        "type": "in",
-        "dimension": "l_shipmode",
-        "values": ["RAIL", "FOB"]
-      },
-      {
-        "type": "bound",
-        "dimension": "l_receiptdate",
-        "lower": "1997-01-01",
-        "upper": "1998-01-01",
-        "upperStrict": true,
-        "alphaNumeric": false
-      }
-    ]
-  },
-  "aggregations": [
-    {
-      "type": "doubleSum",
-      "name": "l_extendedprice_sum",
-      "fieldName": "l_extendedprice"
-    }
-  ],
-  "dimension": "l_shipmode",
-  "threshold": 10,
-  "metric": "l_extendedprice_sum"
-}
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/0.pql b/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/0.pql
deleted file mode 100644
index a7226472178..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/0.pql
+++ /dev/null
@@ -1 +0,0 @@
-SELECT SUM(l_extendedprice), SUM(l_discount) FROM tpch_lineitem
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/1.pql b/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/1.pql
deleted file mode 100644
index 4d04804b252..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/1.pql
+++ /dev/null
@@ -1 +0,0 @@
-SELECT SUM(l_extendedprice) FROM tpch_lineitem WHERE l_returnflag = 'R'
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/2.pql b/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/2.pql
deleted file mode 100644
index 21884e4a90f..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/2.pql
+++ /dev/null
@@ -1 +0,0 @@
-SELECT SUM(l_extendedprice) FROM tpch_lineitem WHERE l_shipdate BETWEEN '1996-12-01' AND '1996-12-31'
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/3.pql b/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/3.pql
deleted file mode 100644
index 316b5a66e35..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/3.pql
+++ /dev/null
@@ -1 +0,0 @@
-SELECT SUM(l_extendedprice) FROM tpch_lineitem GROUP BY l_shipdate
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/4.pql b/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/4.pql
deleted file mode 100644
index 3325281611a..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/4.pql
+++ /dev/null
@@ -1 +0,0 @@
-SELECT SUM(l_extendedprice), SUM(l_quantity) FROM tpch_lineitem GROUP BY l_shipdate
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/5.pql b/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/5.pql
deleted file mode 100644
index 65c7d4e21e2..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/5.pql
+++ /dev/null
@@ -1 +0,0 @@
-SELECT SUM(l_extendedprice) FROM tpch_lineitem WHERE l_shipdate BETWEEN '1995-01-01' AND '1996-12-31' GROUP BY l_shipdate
diff --git a/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/6.pql b/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/6.pql
deleted file mode 100644
index ec14c398ccb..00000000000
--- a/contrib/pinot-druid-benchmark/src/main/resources/pinot_queries/6.pql
+++ /dev/null
@@ -1 +0,0 @@
-SELECT SUM(l_extendedprice) FROM tpch_lineitem WHERE l_shipmode in ('RAIL', 'FOB') AND l_receiptdate BETWEEN '1997-01-01' AND '1997-12-31' GROUP BY l_shipmode
diff --git a/pinot-distribution/pinot-source-assembly.xml b/pinot-distribution/pinot-source-assembly.xml
index f0a04da48dd..b3d0362fb6d 100644
--- a/pinot-distribution/pinot-source-assembly.xml
+++ b/pinot-distribution/pinot-source-assembly.xml
@@ -53,8 +53,6 @@
         <!-- Do not include docker, kubernetes related files -->
         <exclude>kubernetes/**</exclude>
         <exclude>docker/**</exclude>
-        <exclude>contrib/pinot-druid-benchmark/**</exclude>
-        <exclude>thirdeye/**</exclude>
         <exclude>thirdeye/**</exclude>
       </excludes>
     </fileSet>


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
