gortiz commented on code in PR #15263: URL: https://github.com/apache/pinot/pull/15263#discussion_r2014099147
##########
pinot-query-planner/src/main/java/org/apache/pinot/query/type/TypeSystem.java:
##########

```diff
@@ -62,16 +62,36 @@ public boolean shouldConvertRaggedUnionTypesToVarying() {
     return true;
   }
 
+  @Override
+  public int getMaxScale(SqlTypeName typeName) {
+    return typeName == SqlTypeName.DECIMAL ? MAX_DECIMAL_SCALE : super.getMaxScale(typeName);
+  }
+
   @Override
   public int getMaxNumericScale() {
     return MAX_DECIMAL_SCALE;
   }
 
+  @Override
+  public int getDefaultScale(SqlTypeName typeName) {
+    return typeName == SqlTypeName.DECIMAL ? MAX_DECIMAL_SCALE : super.getDefaultScale(typeName);
+  }
+
+  @Override
+  public int getMaxPrecision(SqlTypeName typeName) {
+    return typeName == SqlTypeName.DECIMAL ? MAX_DECIMAL_PRECISION : super.getMaxPrecision(typeName);
+  }
+
   @Override
   public int getMaxNumericPrecision() {
     return MAX_DECIMAL_PRECISION;
   }
 
+  @Override
+  public int getDefaultPrecision(SqlTypeName typeName) {
+    return typeName == SqlTypeName.DECIMAL ? MAX_DECIMAL_PRECISION : super.getDefaultPrecision(typeName);
```

Review Comment:

Sorry, I wanted to say `DECIMAL(1000, 1000)`.

> Also using a max / default precision and scale of 1000 doesn't mean we're limited to values in the range of (-1, 1).

I think that is wrong. For example, in Postgres:

```
postgres=# select cast(10 as DECIMAL(10, 10));
ERROR:  numeric field overflow
DETAIL:  A field with precision 10, scale 10 must round to an absolute value less than 1.

postgres=# select cast(10 as DECIMAL(10, 0));
 numeric
---------
      10
(1 row)

postgres=# select cast(10 as DECIMAL(1, 0));
ERROR:  numeric field overflow
DETAIL:  A field with precision 1, scale 0 must round to an absolute value less than 10^1.

postgres=# select cast(10 as DECIMAL(2, 0));
 numeric
---------
      10
(1 row)

postgres=# select cast(10 as DECIMAL(2, 1));
ERROR:  numeric field overflow
DETAIL:  A field with precision 2, scale 1 must round to an absolute value less than 10^1.
```

AFAIU `DECIMAL(1000, 1000)` can only represent values from -0.999... to 0.999... with 1000 digits to the right of the decimal point. If that is the case, we _must_ change the default used for scale. And even if it isn't... shouldn't we use a smaller scale by default? That would be more efficient once implemented as `BigDecimal`.
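To see the same rule outside Postgres, here is a minimal, self-contained Java sketch (the `fits` helper and the sample values are purely illustrative, not Pinot or Calcite code): a value fits `DECIMAL(p, s)` only if it needs no more than `p - s` digits to the left of the decimal point, so `p == s` leaves room only for values strictly between -1 and 1.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalScaleDemo {

  /** True if {@code value} can be stored as DECIMAL(precision, scale) without overflow. */
  static boolean fits(BigDecimal value, int precision, int scale) {
    BigDecimal rounded = value.setScale(scale, RoundingMode.HALF_UP);
    // A value overflows when it needs more integer digits than DECIMAL(p, s) allows,
    // i.e. when precision() - scale() of the rounded value exceeds p - s.
    return rounded.precision() - rounded.scale() <= precision - scale;
  }

  public static void main(String[] args) {
    // DECIMAL(10, 10): all 10 digits are fractional, so zero integer digits are allowed.
    System.out.println(fits(new BigDecimal("0.1234567891"), 10, 10)); // true
    System.out.println(fits(BigDecimal.TEN, 10, 10));                 // false, same overflow as Postgres
    System.out.println(fits(BigDecimal.TEN, 10, 0));                  // true
  }
}
```

Running it prints `true`, `false`, `true`, matching the Postgres results above.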
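And if a smaller default does turn out to be the right call, a possible shape for it could be something like the sketch below. To be clear, `DEFAULT_DECIMAL_SCALE = 16` is just an illustrative value picked for this sketch, not something from this PR; the maximum scale would stay at `MAX_DECIMAL_SCALE`.

```java
  // Hypothetical variant, not this PR's code: keep the maximum scale as-is but give an
  // unparameterized DECIMAL a smaller default scale, so it still has integer digits.
  // DEFAULT_DECIMAL_SCALE is an illustrative constant chosen only for this sketch.
  private static final int DEFAULT_DECIMAL_SCALE = 16;

  @Override
  public int getDefaultScale(SqlTypeName typeName) {
    return typeName == SqlTypeName.DECIMAL ? DEFAULT_DECIMAL_SCALE : super.getDefaultScale(typeName);
  }
```

With something along those lines, a bare `DECIMAL` would keep the full default precision but leave most of its digits to the left of the decimal point.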