nizarhejazi commented on issue #8418:
URL: https://github.com/apache/pinot/issues/8418#issuecomment-1099926108

   @Jackie-Jiang Supporting fixed-precision BigDecimal w/ configurable 
precision and scale (`DECIMAL(p, s)`) is a big undertaking and requires:
   
   - Optimizing storage based on different precision values (1-9, 10-18, 19-38).
   - Updating PinotDataType to not be an enum.
   - Arithmetic operations that are precision-aware and scale-aware ([example 
impl.](https://prestodb.io/docs/current/functions/decimal.html#binary-arithmetic-decimal-operators)).
   - Aggregate transforms that are scale-aware (e.g., the result of the AVG 
aggregate function has precision 38 and scale = max(scale of input column, 6)).
   - etc.
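
   To make the arithmetic point concrete, here is a minimal sketch of precision/scale inference for `DECIMAL` addition and multiplication, following the rules documented for Presto's decimal operators (results capped at precision 38). The class and method names are illustrative, not part of any existing Pinot API:

   ```java
   // Hypothetical sketch: result-type inference for DECIMAL arithmetic,
   // per the Presto decimal operator rules linked above.
   public final class DecimalTypeMath {
       static final int MAX_PRECISION = 38;

       // DECIMAL(p1, s1) + DECIMAL(p2, s2)
       //   scale     = max(s1, s2)
       //   precision = min(38, max(p1 - s1, p2 - s2) + max(s1, s2) + 1)
       static int[] addType(int p1, int s1, int p2, int s2) {
           int s = Math.max(s1, s2);
           int p = Math.min(MAX_PRECISION, Math.max(p1 - s1, p2 - s2) + s + 1);
           return new int[] {p, s};
       }

       // DECIMAL(p1, s1) * DECIMAL(p2, s2)
       //   scale     = s1 + s2
       //   precision = min(38, p1 + p2)
       static int[] multiplyType(int p1, int s1, int p2, int s2) {
           return new int[] {Math.min(MAX_PRECISION, p1 + p2), s1 + s2};
       }
   }
   ```

   Every operator, and each aggregate, needs its own such rule, which is part of what makes `DECIMAL(p, s)` a large change.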
   
   I wonder if it makes sense to add support now for a `BIG_DECIMAL` data type 
(arbitrary-precision integer unscaled value, plus a 16-bit integer scale). 
Someone else can take on adding the fixed-precision `DECIMAL(p,s)` data type later.
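
   For illustration, a minimal sketch of one possible `BIG_DECIMAL` <-> `byte[]` encoding matching that description: a 2-byte (16-bit) scale followed by the arbitrary-precision unscaled value in two's-complement form. The exact layout here is an assumption, not necessarily the format used by the PR:

   ```java
   import java.math.BigDecimal;
   import java.math.BigInteger;
   import java.nio.ByteBuffer;

   // Hypothetical encoding sketch: [2-byte scale][unscaled two's-complement bytes].
   public final class BigDecimalCodec {
       static byte[] serialize(BigDecimal value) {
           byte[] unscaled = value.unscaledValue().toByteArray();
           ByteBuffer buf = ByteBuffer.allocate(2 + unscaled.length);
           buf.putShort((short) value.scale()); // 16-bit scale
           buf.put(unscaled);                   // arbitrary-precision unscaled value
           return buf.array();
       }

       static BigDecimal deserialize(byte[] bytes) {
           ByteBuffer buf = ByteBuffer.wrap(bytes);
           short scale = buf.getShort();
           byte[] unscaled = new byte[bytes.length - 2];
           buf.get(unscaled);
           return new BigDecimal(new BigInteger(unscaled), scale);
       }
   }
   ```

   The variable-length unscaled value is what gives the type arbitrary precision, at the cost of a variable-width on-disk representation.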
   
   My first PR [8468](https://github.com/apache/pinot/pull/8468) introduces the 
`BIG_DECIMAL` data type and performs the conversion between `byte[]` and `BigDecimal` 
close to the data storage layer, for two reasons:
   - It avoids updating comparison code everywhere (since comparing `BigDecimal` 
values is not equivalent to comparing their corresponding `byte[]` representations).
   - It avoids the cost of converting `byte[]` to `BigDecimal` multiple times.
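
   A small self-contained example of the first point: lexicographic comparison of serialized bytes does not match `BigDecimal.compareTo`. Here 0.5 < 2 numerically, but the unscaled-value bytes order the other way because the scale is not taken into account:

   ```java
   import java.math.BigDecimal;
   import java.util.Arrays;

   // Illustration only: raw byte ordering disagrees with numeric ordering.
   public final class ByteOrderMismatch {
       public static void main(String[] args) {
           BigDecimal a = new BigDecimal("0.5"); // unscaled value 5, scale 1
           BigDecimal b = new BigDecimal("2");   // unscaled value 2, scale 0
           int numeric = a.compareTo(b);         // negative: 0.5 < 2
           int bytewise = Arrays.compare(a.unscaledValue().toByteArray(),
                                         b.unscaledValue().toByteArray()); // positive: [5] > [2]
           System.out.println(numeric < 0 && bytewise > 0); // true
       }
   }
   ```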
   
   Let me know if I should proceed w/ supporting the arbitrary-precision 
`BIG_DECIMAL` data type.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

