yashmayya commented on code in PR #14830:
URL: https://github.com/apache/pinot/pull/14830#discussion_r1925144245


##########
pinot-query-planner/src/main/java/org/apache/pinot/query/QueryEnvironment.java:
##########
@@ -118,9 +111,15 @@ public QueryEnvironment(Config config) {
     String database = config.getDatabase();
     PinotCatalog catalog = new PinotCatalog(config.getTableCache(), database);
     CalciteSchema rootSchema = CalciteSchema.createRootSchema(false, false, database, catalog);
+    Properties connectionConfigProperties = new Properties();
+    connectionConfigProperties.setProperty(CalciteConnectionProperty.CASE_SENSITIVE.camelName(), Boolean.toString(
+        config.getTableCache() == null

Review Comment:
   Will the table cache ever be `null` here? Doesn't look like it should be.
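   For illustration, if the table cache is in fact guaranteed non-null, the property setup could be simplified along these lines (just a sketch; it assumes the case-sensitivity flag ultimately comes from `TableCache#isIgnoreCase`, which the truncated hunk above doesn't show):

       // Sketch: with a non-null guarantee, the null branch goes away entirely.
       TableCache tableCache = Objects.requireNonNull(config.getTableCache(), "table cache");
       Properties connectionConfigProperties = new Properties();
       // Calcite's CASE_SENSITIVE property is the inverse of Pinot's ignore-case flag
       // (an assumption based on the flag's name, not on this hunk).
       connectionConfigProperties.setProperty(CalciteConnectionProperty.CASE_SENSITIVE.camelName(),
           Boolean.toString(!tableCache.isIgnoreCase()));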
##########
pinot-integration-tests/src/test/java/org/apache/pinot/integration/tests/MultiStageEngineIntegrationTest.java:
##########
@@ -1347,6 +1347,15 @@ public void testConcurrentQueries() {
     executorService.shutdownNow();
   }

+  @Test
+  public void testCaseInsensitiveNames() throws Exception {
+    String query = "select ACTualELAPsedTIMe from mYtABLE where actUALelAPSedTIMe > 0 limit 1";
+    JsonNode jsonNode = postQuery(query);

Review Comment:
   This uses the broker query API; let's also add a call to the controller query API (via `postQueryToController`) to test that path as well here - I see that you've made the necessary changes for that to work already.
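   A sketch of what the combined coverage could look like (assuming `postQueryToController` and an `assertNoError` helper are available on this test class, per the suggestions in this review):

       @Test
       public void testCaseInsensitiveNames() throws Exception {
         String query = "select ACTualELAPsedTIMe from mYtABLE where actUALelAPSedTIMe > 0 limit 1";
         // Broker query API path (what the new test already exercises).
         assertNoError(postQuery(query));
         // Controller query API path, as suggested above.
         assertNoError(postQueryToController(query));
       }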
"DAYSSinceEpOch"))); for (String query : queries) { - testQueryError(query, QueryException.QUERY_PLANNING_ERROR_CODE); + //testQueryError(query, QueryException.QUERY_PLANNING_ERROR_CODE); + JsonNode response = postQuery(query); + assertTrue(response.get("numSegmentsProcessed").asLong() >= 1L, "Query: " + query + " failed"); Review Comment: If the intention is to change the assertion to verify that the query no longer fails (since we're case insensitive by default now), we can use the `assertNoError` method here. ########## pinot-integration-tests/src/test/java/org/apache/pinot/integration/tests/OfflineClusterIntegrationTest.java: ########## @@ -2994,25 +2994,29 @@ public void testCaseSensitivityV2() int daysSinceEpoch = 16138; int hoursSinceEpoch = 16138 * 24; int secondsSinceEpoch = 16138 * 24 * 60 * 60; - List<String> baseQueries = Arrays.asList("SELECT * FROM mytable", - "SELECT DaysSinceEpoch, timeConvert(DaysSinceEpoch,'DAYS','SECONDS') FROM mytable", + List<String> baseQueries = Arrays.asList("SELECT * FROM mytable limit 10000", + "SELECT DaysSinceEpoch, timeConvert(DaysSinceEpoch,'DAYS','SECONDS') FROM mytable limit 10000", "SELECT DaysSinceEpoch, timeConvert(DaysSinceEpoch,'DAYS','SECONDS') FROM mytable order by DaysSinceEpoch " + "limit 10000", "SELECT DaysSinceEpoch, timeConvert(DaysSinceEpoch,'DAYS','SECONDS') FROM mytable order by timeConvert" + "(DaysSinceEpoch,'DAYS','SECONDS') DESC limit 10000", - "SELECT count(*) FROM mytable WHERE DaysSinceEpoch = " + daysSinceEpoch, - "SELECT count(*) FROM mytable WHERE timeConvert(DaysSinceEpoch,'DAYS','HOURS') = " + hoursSinceEpoch, - "SELECT count(*) FROM mytable WHERE timeConvert(DaysSinceEpoch,'DAYS','SECONDS') = " + secondsSinceEpoch, - "SELECT MAX(timeConvert(DaysSinceEpoch,'DAYS','SECONDS')) FROM mytable", + "SELECT count(*) FROM mytable WHERE DaysSinceEpoch = " + daysSinceEpoch + " limit 10000", + "SELECT count(*) FROM mytable WHERE timeConvert(DaysSinceEpoch,'DAYS','HOURS') = " + hoursSinceEpoch + + " limit 10000", + "SELECT count(*) FROM mytable WHERE timeConvert(DaysSinceEpoch,'DAYS','SECONDS') = " + secondsSinceEpoch + + " limit 10000", + "SELECT MAX(timeConvert(DaysSinceEpoch,'DAYS','SECONDS')) FROM mytable limit 10000", Review Comment: What are these changes for? 
##########
pinot-integration-tests/src/test/java/org/apache/pinot/integration/tests/OfflineClusterIntegrationTest.java:
##########
@@ -2994,25 +2994,29 @@ public void testCaseSensitivityV2()
     int daysSinceEpoch = 16138;
     int hoursSinceEpoch = 16138 * 24;
     int secondsSinceEpoch = 16138 * 24 * 60 * 60;
-    List<String> baseQueries = Arrays.asList("SELECT * FROM mytable",
-        "SELECT DaysSinceEpoch, timeConvert(DaysSinceEpoch,'DAYS','SECONDS') FROM mytable",
+    List<String> baseQueries = Arrays.asList("SELECT * FROM mytable limit 10000",
+        "SELECT DaysSinceEpoch, timeConvert(DaysSinceEpoch,'DAYS','SECONDS') FROM mytable limit 10000",
         "SELECT DaysSinceEpoch, timeConvert(DaysSinceEpoch,'DAYS','SECONDS') FROM mytable order by DaysSinceEpoch "
             + "limit 10000",
         "SELECT DaysSinceEpoch, timeConvert(DaysSinceEpoch,'DAYS','SECONDS') FROM mytable order by timeConvert"
             + "(DaysSinceEpoch,'DAYS','SECONDS') DESC limit 10000",
-        "SELECT count(*) FROM mytable WHERE DaysSinceEpoch = " + daysSinceEpoch,
-        "SELECT count(*) FROM mytable WHERE timeConvert(DaysSinceEpoch,'DAYS','HOURS') = " + hoursSinceEpoch,
-        "SELECT count(*) FROM mytable WHERE timeConvert(DaysSinceEpoch,'DAYS','SECONDS') = " + secondsSinceEpoch,
-        "SELECT MAX(timeConvert(DaysSinceEpoch,'DAYS','SECONDS')) FROM mytable",
+        "SELECT count(*) FROM mytable WHERE DaysSinceEpoch = " + daysSinceEpoch + " limit 10000",
+        "SELECT count(*) FROM mytable WHERE timeConvert(DaysSinceEpoch,'DAYS','HOURS') = " + hoursSinceEpoch
+            + " limit 10000",
+        "SELECT count(*) FROM mytable WHERE timeConvert(DaysSinceEpoch,'DAYS','SECONDS') = " + secondsSinceEpoch
+            + " limit 10000",
+        "SELECT MAX(timeConvert(DaysSinceEpoch,'DAYS','SECONDS')) FROM mytable limit 10000",

Review Comment:
   What are these changes for?


##########
pinot-integration-tests/src/test/java/org/apache/pinot/integration/tests/OfflineClusterIntegrationTest.java:
##########
@@ -2994,25 +2994,29 @@ public void testCaseSensitivityV2()
     int daysSinceEpoch = 16138;
     int hoursSinceEpoch = 16138 * 24;
     int secondsSinceEpoch = 16138 * 24 * 60 * 60;
-    List<String> baseQueries = Arrays.asList("SELECT * FROM mytable",
-        "SELECT DaysSinceEpoch, timeConvert(DaysSinceEpoch,'DAYS','SECONDS') FROM mytable",
+    List<String> baseQueries = Arrays.asList("SELECT * FROM mytable limit 10000",
+        "SELECT DaysSinceEpoch, timeConvert(DaysSinceEpoch,'DAYS','SECONDS') FROM mytable limit 10000",
         "SELECT DaysSinceEpoch, timeConvert(DaysSinceEpoch,'DAYS','SECONDS') FROM mytable order by DaysSinceEpoch "
             + "limit 10000",
         "SELECT DaysSinceEpoch, timeConvert(DaysSinceEpoch,'DAYS','SECONDS') FROM mytable order by timeConvert"
             + "(DaysSinceEpoch,'DAYS','SECONDS') DESC limit 10000",
-        "SELECT count(*) FROM mytable WHERE DaysSinceEpoch = " + daysSinceEpoch,
-        "SELECT count(*) FROM mytable WHERE timeConvert(DaysSinceEpoch,'DAYS','HOURS') = " + hoursSinceEpoch,
-        "SELECT count(*) FROM mytable WHERE timeConvert(DaysSinceEpoch,'DAYS','SECONDS') = " + secondsSinceEpoch,
-        "SELECT MAX(timeConvert(DaysSinceEpoch,'DAYS','SECONDS')) FROM mytable",
+        "SELECT count(*) FROM mytable WHERE DaysSinceEpoch = " + daysSinceEpoch + " limit 10000",
+        "SELECT count(*) FROM mytable WHERE timeConvert(DaysSinceEpoch,'DAYS','HOURS') = " + hoursSinceEpoch
+            + " limit 10000",
+        "SELECT count(*) FROM mytable WHERE timeConvert(DaysSinceEpoch,'DAYS','SECONDS') = " + secondsSinceEpoch
+            + " limit 10000",
+        "SELECT MAX(timeConvert(DaysSinceEpoch,'DAYS','SECONDS')) FROM mytable limit 10000",
         "SELECT COUNT(*) FROM mytable GROUP BY dateTimeConvert(DaysSinceEpoch,'1:DAYS:EPOCH','1:HOURS:EPOCH',"
-            + "'1:HOURS')");
+            + "'1:HOURS') limit 10000");
     List<String> queries = new ArrayList<>();
     baseQueries.forEach(q -> queries.add(q.replace("mytable", "MYTABLE").replace("DaysSinceEpoch", "DAYSSinceEpOch")));
     baseQueries.forEach(
-        q -> queries.add(q.replace("mytable", "MYDB.MYTABLE").replace("DaysSinceEpoch", "DAYSSinceEpOch")));
+        q -> queries.add(q.replace("mytable", "DEFAULT.MYTABLE").replace("DaysSinceEpoch", "DAYSSinceEpOch")));

Review Comment:
   Why did the database name need to be changed here?


##########
pinot-query-planner/src/main/java/org/apache/pinot/query/catalog/PinotCatalog.java:
##########
@@ -80,8 +80,9 @@ public Table getTable(String name) {
    */
   @Override
   public Set<String> getTableNames() {
-    return _tableCache.getTableNameMap().keySet().stream().filter(n -> DatabaseUtils.isPartOfDatabase(n, _databaseName))
-        .collect(Collectors.toSet());
+    //return _tableCache.getTableNameMap().keySet().stream().filter(n -> DatabaseUtils.isPartOfDatabase(n, _databaseName))
+    //    .collect(Collectors.toSet());
+    return _tableCache.getTableNameMap().keySet();

Review Comment:
   > As it was implemented, this method got rid of all non-prefixed tableNames when _databaseName != null, and probably that was not intended.

   The non-prefixed table names are assumed to belong to the "default" database IIUC. So shouldn't we want to exclude them when there's a non-default database being used? cc - @shounakmk219
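   Under that reading, the filter would stay rather than be removed - roughly the commented-out logic above, restored (treating non-prefixed names as part of the "default" database is an assumption about `DatabaseUtils.isPartOfDatabase` semantics):

       @Override
       public Set<String> getTableNames() {
         // Keep only the tables visible to this catalog's database; names without a
         // database prefix are assumed to belong to the "default" database.
         return _tableCache.getTableNameMap().keySet().stream()
             .filter(n -> DatabaseUtils.isPartOfDatabase(n, _databaseName))
             .collect(Collectors.toSet());
       }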
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.