mbutrovich commented on PR #3144:
URL: https://github.com/apache/datafusion-comet/pull/3144#issuecomment-4245069440

   Thanks @andygrove! Claude summarized my notes for me:
   
   > ## PR #3144: `date_from_unix_date`
   > 
   > Nice straightforward expression. A few things I noticed:
   > 
   > ### Scalar path returns Array instead of Scalar
   > 
   > The scalar branch in `date_from_unix_date.rs` converts the input to a 
1-element array and returns `ColumnarValue::Array`. This breaks the 
scalar-in/scalar-out contract DataFusion expects for broadcast semantics: when 
`date_from_unix_date(0)` is used as a literal, a 1-element array is not 
recognized as a constant, so downstream columnar operations cannot broadcast it 
against longer columns.
   > 
   > The fix would be to extract the `ScalarValue::Int32` directly and return 
`ScalarValue::Date32`:
   > 
   > ```rust
   > ColumnarValue::Scalar(scalar) => {
   >     match scalar {
   >         ScalarValue::Int32(Some(days)) => {
   >             Ok(ColumnarValue::Scalar(ScalarValue::Date32(Some(days))))
   >         }
   >         ScalarValue::Int32(None) | ScalarValue::Null => {
   >             Ok(ColumnarValue::Scalar(ScalarValue::Date32(None)))
   >         }
   >         _ => Err(DataFusionError::Execution(
   >             "date_from_unix_date expects Int32 scalar input".to_string(),
   >         )),
   >     }
   > }
   > ```
   > 
   > ### Docs
   > 
   > `docs/spark_expressions_support.md` still has `date_from_unix_date` marked 
as `[ ]`. Should be `[x]`.
   > 
   > ### Tests
   > 
   > It might be worth adding Int32 boundary values (`2147483647`, 
`-2147483648`) to the test INSERT to match Spark's own `testIntegralInput` 
coverage. The implementation is identity so it can't overflow, but boundary 
tests document that the behavior is intentional.
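   > 
   > Since the implementation is an identity map, the boundary cases are easy to state. A minimal sketch of what such a test would assert, using plain `Option<i32>` in place of the Arrow arrays (the `date_from_unix_date` helper below is a stand-in for illustration, not the PR's actual kernel):
   > 
   > ```rust
   > // Date32 stores days since the Unix epoch as an i32, so converting
   > // an Int32 unix-date is a pass-through of the same i32 value.
   > fn date_from_unix_date(days: Option<i32>) -> Option<i32> {
   >     days // identity: no arithmetic, so no overflow is possible
   > }
   > 
   > fn main() {
   >     // Int32 boundary values mirroring Spark's testIntegralInput
   >     assert_eq!(date_from_unix_date(Some(2147483647)), Some(i32::MAX));
   >     assert_eq!(date_from_unix_date(Some(-2147483648)), Some(i32::MIN));
   >     assert_eq!(date_from_unix_date(None), None);
   >     println!("boundary cases pass");
   > }
   > ```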


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

