0lai0 opened a new pull request, #4068:
URL: https://github.com/apache/datafusion-comet/pull/4068
## Which issue does this PR close?
<!--
We generally require a GitHub issue to be filed for all bug fixes and
enhancements and this helps us generate change logs for our releases. You can
link an issue to this PR using the GitHub syntax. For example `Closes #123`
indicates that this PR will close issue #123.
-->
Closes #1212
Part of #3996
## Rationale for this change
<!--
Why are you proposing this change? If this is already explained clearly in
the issue then this section is not needed.
Explaining clearly why changes are proposed helps reviewers understand your
changes and offer better suggestions for fixes.
-->
Comet already reports encoding/compression metrics for native shuffle, but
JVM columnar shuffle (`CometColumnarExchange`) either showed 0 ms or lacked a
useful task-level distribution in the SQL UI. This made it difficult to compare
shuffle behavior between `spark.comet.shuffle.mode=native` and `jvm`, and
reduced observability when tuning JVM shuffle performance.
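To reproduce the comparison, the same workload can be run once per shuffle mode and the `encode_time` metric inspected on the exchange node in the SQL UI. A minimal config sketch (key names taken from this description; the exact keys may differ between Comet versions):

```
# hypothetical spark-defaults fragment for an A/B comparison of shuffle modes
spark.comet.enabled          true
spark.comet.shuffle.mode     jvm    # run again with: native
```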
## What changes are included in this PR?
<!--
There is no need to duplicate the description in the issue here but it is
sometimes worth providing a summary of the individual changes in this PR.
-->
- Aligned the JVM/native shuffle spill contract so encode/compression timing
is propagated end-to-end: native spill results are consumed as
`(written_bytes, checksum, encode_nanos)`, with corresponding JNI/Java updates.
- Added shared `AtomicLong` encode-time accumulators in `SpillWriter`,
`CometShuffleExternalSorterSync`/`Async`, and `CometDiskBlockWriter` so encode
time is aggregated correctly across batches, spills, and concurrent
sorter/writer instances.
- Added `getEncodeNanos()` to `CometShuffleExternalSorter` and wired the
accumulated value into the `encode_time` `SQLMetric` in both
`CometUnsafeShuffleWriter` and `CometBypassMergeSortShuffleWriter`.
- Ensured JVM columnar shuffle dependencies carry `shuffleWriteMetrics`,
allowing `CometShuffleManager` to retrieve and pass `encode_time` into the
shuffle writers.
- Added a regression test verifying that `encode_time` exists and is greater
than zero for columnar shuffle workloads.
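The shared-accumulator pattern described above can be sketched as follows. This is an illustrative minimal sketch, not Comet's actual implementation: apart from `getEncodeNanos()`, the class and method names are hypothetical.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the shared encode-time accumulator pattern: several
// concurrent writers add per-batch encode durations into one AtomicLong, and
// the final value is read once to populate the encode_time SQLMetric.
public class EncodeTimeSketch {
    static final class EncodeTimeAccumulator {
        private final AtomicLong encodeNanos = new AtomicLong();

        // Called by each sorter/writer after encoding a batch or spill.
        void add(long nanos) {
            encodeNanos.addAndGet(nanos);
        }

        // Read at task end to feed the encode_time SQLMetric.
        long getEncodeNanos() {
            return encodeNanos.get();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        EncodeTimeAccumulator acc = new EncodeTimeAccumulator();
        // Simulate two concurrent writers accumulating encode time.
        Thread t1 = new Thread(() -> acc.add(150_000L));
        Thread t2 = new Thread(() -> acc.add(250_000L));
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(acc.getEncodeNanos()); // prints 400000
    }
}
```

Using `AtomicLong` rather than a plain `long` matters here because sync/async sorters and disk block writers may encode batches from different threads; `addAndGet` keeps the aggregate lock-free and correct.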
## How are these changes tested?
<!--
We typically require tests for all PRs in order to:
1. Prevent the code from being accidentally broken by subsequent changes
2. Serve as another way to document the expected behavior of the code
If tests are not included in your PR, please explain why (for example, are
they covered by existing tests)?
-->
<img width="433" height="705" alt="Screenshot 2026-04-24 at 11 42 26 PM"
src="https://github.com/user-attachments/assets/879aa231-ad4e-4caf-9b02-6c153ba58d9d"
/>
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]