mbutrovich opened a new pull request, #3893:
URL: https://github.com/apache/datafusion-comet/pull/3893

   ## Which issue does this PR close?
   
   <!--
   We generally require a GitHub issue to be filed for all bug fixes and 
enhancements and this helps us generate change logs for our releases. You can 
link an issue to this PR using the GitHub syntax. For example `Closes #123` 
indicates that this PR will close issue #123.
   -->
   
   Closes #3846.
   
   ## Rationale for this change
   
   <!--
    Why are you proposing this change? If this is already explained clearly in 
the issue then this section is not needed.
    Explaining clearly why changes are proposed helps reviewers understand your 
changes and offer better suggestions for fixes.
   -->
   
   Native shuffle above a `COUNT` from a native scan produces `RecordBatch`es 
with an empty schema but a valid row count. Native shuffle currently panics 
trying to `interleave` those batches, but we can fast-path this scenario with a 
special partitioner. It is similar to the `SinglePartitionShufflePartitioner`, 
but instead of concatenating batches to write to a shuffle file for a single 
partition, it accumulates the row count, writes a single zero-column IPC batch 
carrying that count, and ensures the index file still has the expected number 
of partitions.
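   The accumulate-then-write behavior can be sketched in plain Rust. This is a 
hedged illustration, not the Comet implementation: the type and method names 
(`EmptySchemaShufflePartitioner`, `insert_batch`, `finish`) mirror the PR's 
description, but signatures and the IPC serialization are placeholders.

```rust
// Illustrative sketch of the empty-schema fast path described above.
// Names and signatures are assumptions, not the actual Comet API.
struct EmptySchemaShufflePartitioner {
    num_output_partitions: usize,
    row_count: usize,
}

impl EmptySchemaShufflePartitioner {
    fn new(num_output_partitions: usize) -> Self {
        Self { num_output_partitions, row_count: 0 }
    }

    /// Incoming batches carry no columns, so only the row count matters.
    fn insert_batch(&mut self, num_rows: usize) {
        self.row_count += num_rows;
    }

    /// Write one zero-column batch holding `row_count` rows into partition 0,
    /// then build the index: every later partition shares the same end
    /// offset, i.e. it is empty.
    fn finish(&self) -> (usize, Vec<u64>) {
        let batch_len = self.serialized_len();
        let mut offsets = Vec::with_capacity(self.num_output_partitions + 1);
        offsets.push(0u64);
        for _ in 0..self.num_output_partitions {
            offsets.push(batch_len as u64);
        }
        (self.row_count, offsets)
    }

    fn serialized_len(&self) -> usize {
        // Placeholder for the IPC-encoded size of a zero-column batch.
        8
    }
}

fn main() {
    let mut p = EmptySchemaShufflePartitioner::new(10);
    p.insert_batch(3);
    p.insert_batch(7);
    let (rows, offsets) = p.finish();
    println!("rows={rows} offsets={offsets:?}");
}
```

   The key property is that readers of any partition other than 0 see 
`offsets[i] == offsets[i + 1]` and therefore read zero bytes, while partition 0 
carries the full row count.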
   
   ## What changes are included in this PR?
   
   <!--
   There is no need to duplicate the description in the issue here but it is 
sometimes worth providing a summary of the individual changes in this PR.
   -->
   
   - `native/shuffle/src/partitioners/empty_schema.rs`: new 
`EmptySchemaShufflePartitioner` that accumulates row count, writes a single 
zero-column IPC batch to partition 0, and fills the index with equal offsets 
for all other partitions
   - `native/shuffle/src/partitioners/mod.rs`: exports the new partitioner
   - `native/shuffle/src/shuffle_writer.rs`: branches on 
`schema.fields().is_empty()` before falling through to 
`MultiPartitionShuffleRepartitioner`; added Rust test verifying row count 
roundtrip and index structure
   - `spark/.../CometNativeShuffleSuite.scala`: integration test from PR #3858 
for `repartition(10).count()` with native DataFusion scan
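   The dispatch added in `shuffle_writer.rs` can be approximated as follows. 
This is a simplified stand-in: the enum, function name, and parameters are 
hypothetical, and the real code branches on `schema.fields().is_empty()` when 
constructing the actual partitioner types.

```rust
// Hypothetical sketch of the partitioner selection; the real constructors
// in Comet take schemas and writer state, not bare counts.
enum Partitioner {
    EmptySchema { num_partitions: usize },
    MultiPartition { num_partitions: usize },
}

fn choose_partitioner(num_fields: usize, num_partitions: usize) -> Partitioner {
    if num_fields == 0 {
        // Empty schema: batches have rows but no columns, so skip
        // interleave-based repartitioning entirely.
        Partitioner::EmptySchema { num_partitions }
    } else {
        Partitioner::MultiPartition { num_partitions }
    }
}

fn main() {
    let p = choose_partitioner(0, 10);
    let fast_path = matches!(p, Partitioner::EmptySchema { .. });
    println!("empty-schema fast path taken: {fast_path}");
}
```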
   
   ## How are these changes tested?
   
   <!--
   We typically require tests for all PRs in order to:
   1. Prevent the code from being accidentally broken by subsequent changes
   2. Serve as another way to document the expected behavior of the code
   
   If tests are not included in your PR, please explain why (for example, are 
they covered by existing tests)?
   -->
   
   A new test ported from #3858 that reproduces the failure reported in #3846, 
plus a new Rust unit test in `shuffle_writer.rs`.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

