yashmayya commented on code in PR #15263:
URL: https://github.com/apache/pinot/pull/15263#discussion_r2000253759
##########
pinot-query-planner/src/main/java/org/apache/pinot/query/type/TypeSystem.java:
##########
@@ -62,16 +62,36 @@ public boolean shouldConvertRaggedUnionTypesToVarying() {
     return true;
   }
 
+  @Override
+  public int getMaxScale(SqlTypeName typeName) {
+    return typeName == SqlTypeName.DECIMAL ? MAX_DECIMAL_SCALE : super.getMaxScale(typeName);
+  }
+
   @Override
   public int getMaxNumericScale() {
     return MAX_DECIMAL_SCALE;
   }
 
+  @Override
+  public int getDefaultScale(SqlTypeName typeName) {
+    return typeName == SqlTypeName.DECIMAL ? MAX_DECIMAL_SCALE : super.getDefaultScale(typeName);
+  }
+
+  @Override
+  public int getMaxPrecision(SqlTypeName typeName) {
+    return typeName == SqlTypeName.DECIMAL ? MAX_DECIMAL_PRECISION : super.getMaxPrecision(typeName);
+  }
+
   @Override
   public int getMaxNumericPrecision() {
     return MAX_DECIMAL_PRECISION;
   }
 
+  @Override
+  public int getDefaultPrecision(SqlTypeName typeName) {
+    return typeName == SqlTypeName.DECIMAL ? MAX_DECIMAL_PRECISION : super.getDefaultPrecision(typeName);

Review Comment:
   Okay, so using a smaller default scale/precision in the range of 1-10 doesn't seem like a great idea - that would truncate values unnecessarily. For instance, if we used a default scale of 2, `SELECT CAST(123.4567 AS DECIMAL) FROM mytable LIMIT 1;` would return `123.45` (Postgres returns `123.4567` for reference). So I think it makes sense to use our max scale/precision as the default scale/precision as well, to avoid unnecessarily truncating/rounding values?
   
   Also, using a max/default precision and scale of 1000 doesn't mean we're limited to values in the range of (-1, 1). That would only be the case if we used all 1000 digits after the decimal point. For instance, a value like `1234567890.0123456789`, which has precision 20 and scale 10, can be represented just fine as well.
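   The scale/precision semantics above can be illustrated directly with `java.math.BigDecimal` - a standalone sketch (not Pinot code) using the same example values as the comment:

   ```java
   import java.math.BigDecimal;
   import java.math.RoundingMode;

   public class DecimalScaleDemo {
     public static void main(String[] args) {
       BigDecimal v = new BigDecimal("123.4567");

       // A small default scale loses digits on CAST: with scale 2,
       // 123.4567 is truncated to 123.45.
       System.out.println(v.setScale(2, RoundingMode.DOWN)); // 123.45

       // A scale of 0 really means zero fractional digits, not "unlimited":
       System.out.println(v.setScale(0, RoundingMode.DOWN)); // 123

       // A large max scale does NOT restrict values to (-1, 1); scale only
       // caps digits after the decimal point. Precision 20 / scale 10 works:
       BigDecimal big = new BigDecimal("1234567890.0123456789");
       System.out.println(big.precision() + " " + big.scale()); // 20 10

       // Why a Double representation of a decimal literal is imprecise:
       // the double 0.1 is not exactly the decimal value 0.1.
       System.out.println(new BigDecimal(0.1).equals(new BigDecimal("0.1"))); // false
     }
   }
   ```
   
   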
   On a side note, the weird and (IMO) inexplicable logic we have [here](https://github.com/apache/pinot/blob/22fde8d9280ec03e7f7613d18c1325f6035771d9/pinot-query-planner/src/main/java/org/apache/pinot/query/planner/logical/RelToPlanNodeConverter.java#L445-L465) causes strange and unexpected results: if a decimal literal's precision is <= 30 (with non-zero scale), we use `Double` instead of `BigDecimal`, leading to imprecision, so we actually see perfectly accurate results only when using numbers with precision > 30. But this is orthogonal to the Calcite upgrade and we can discuss/solve it separately.
   
   Edit: Another important point is that the default `DECIMAL` scale of 0 actually does mean a scale of 0, not unlimited - i.e., `CAST(123.4567 AS DECIMAL)` would become `123`.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

For additional commands, e-mail: commits-h...@pinot.apache.org