kosiew commented on code in PR #21436:
URL: https://github.com/apache/datafusion/pull/21436#discussion_r3045814575


##########
datafusion/physical-plan/src/sorts/builder.rs:
##########
@@ -336,43 +296,21 @@ mod tests {
     }
 
     #[test]
-    fn test_recover_offset_overflow_from_panic() {
-        let error = recover_offset_overflow_from_panic(
-            || -> std::result::Result<(), ArrowError> { panic!("offset overflow") },
-        )
-        .unwrap_err();
-
-        assert!(is_offset_overflow(&error));
+    fn test_is_offset_overflow_matches_arrow_error() {

Review Comment:
   Would it make sense to move this coverage up a layer and exercise 
`BatchBuilder::build_record_batch`, or even the sort-preserving merge drain 
path, using a real `ArrowError::OffsetOverflowError`?
   
   Right now the tests only stub `retry_interleave`, so they might miss 
regressions if `interleave` starts surfacing the overflow differently again.
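   For illustration, a layer-crossing test could look roughly like the sketch below. The types and helpers here (`interleave_stub`, `build_stub`, the local error enums) are stand-ins for the real arrow/datafusion APIs, not the actual signatures:
   
   ```rust
   // Stand-in for ArrowError; the real enum lives in the arrow crate.
   #[derive(Debug)]
   enum ArrowError {
       OffsetOverflowError(i64),
       Other(String),
   }
   
   // Stand-in for DataFusionError wrapping the Arrow error.
   #[derive(Debug)]
   enum DataFusionError {
       ArrowError(ArrowError),
   }
   
   impl From<ArrowError> for DataFusionError {
       fn from(e: ArrowError) -> Self {
           DataFusionError::ArrowError(e)
       }
   }
   
   // Hypothetical stand-in for the interleave kernel: the overflow now
   // surfaces as a plain Err rather than a panic.
   fn interleave_stub(overflow: bool) -> Result<(), ArrowError> {
       if overflow {
           Err(ArrowError::OffsetOverflowError(i32::MAX as i64 + 1))
       } else {
           Ok(())
       }
   }
   
   // Mirrors the reviewed call site: propagate the error via Into.
   fn build_stub(overflow: bool) -> Result<(), DataFusionError> {
       interleave_stub(overflow).map_err(Into::into)
   }
   
   fn is_offset_overflow(e: &DataFusionError) -> bool {
       matches!(
           e,
           DataFusionError::ArrowError(ArrowError::OffsetOverflowError(_))
       )
   }
   
   fn main() {
       // The assertion that matters: the overflow variant is still
       // recognizable after crossing the builder layer.
       let err = build_stub(true).unwrap_err();
       assert!(is_offset_overflow(&err));
       assert!(build_stub(false).is_ok());
   }
   ```
   
   The point of testing at this level is that the assertion survives even if the inner kernel changes how the overflow is produced, as long as the variant still reaches the caller.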



##########
datafusion/physical-plan/src/sorts/builder.rs:
##########
@@ -143,7 +141,7 @@ impl BatchBuilder {
                     .iter()
                     .map(|(_, batch)| batch.column(column_idx).as_ref())
                     .collect();
-                recover_offset_overflow_from_panic(|| interleave(&arrays, indices))
+                interleave(&arrays, indices).map_err(Into::into)

Review Comment:
   Maybe worth adding a short comment here noting that this now relies on Arrow 
58.1.0+ returning `OffsetOverflowError` directly.
   
   That would make the cleanup easier to understand in this file, especially 
since the removed shim was guarding this exact call site.
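   The idiom being relied on is just error conversion through a `From` impl; a minimal sketch with placeholder types (`Inner`/`Outer` are illustrative, not the real arrow/datafusion errors):
   
   ```rust
   // Placeholder for the kernel-level error (think ArrowError).
   #[derive(Debug, PartialEq)]
   struct Inner(&'static str);
   
   // Placeholder for the engine-level error (think DataFusionError).
   #[derive(Debug, PartialEq)]
   struct Outer(&'static str);
   
   impl From<Inner> for Outer {
       fn from(e: Inner) -> Self {
           Outer(e.0)
       }
   }
   
   fn kernel() -> Result<(), Inner> {
       Err(Inner("offset overflow"))
   }
   
   fn caller() -> Result<(), Outer> {
       // map_err(Into::into) works only because From<Inner> for Outer
       // exists; if the kernel stopped returning Inner here, this line
       // would fail to compile rather than silently misbehave.
       kernel().map_err(Into::into)
   }
   
   fn main() {
       assert_eq!(caller(), Err(Outer("offset overflow")));
   }
   ```
   
   A one-line comment at the call site tying the cleanup to the Arrow version that started returning the error directly would make that dependency explicit for future readers.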



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

