This is an automated email from the ASF dual-hosted git repository.

kabhwan pushed a commit to branch branch-4.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-4.0 by this push:
     new 13d9c914f09d [SPARK-51351][SS] Do not materialize the output in Python worker for TWS
13d9c914f09d is described below

commit 13d9c914f09dc18f02cf7d64a626d6b8845e2e55
Author: Jungtaek Lim <kabhwan.opensou...@gmail.com>
AuthorDate: Sat Mar 1 07:04:56 2025 +0900

    [SPARK-51351][SS] Do not materialize the output in Python worker for TWS
    
    ### What changes were proposed in this pull request?
    
    This PR proposes to fix the serializer logic in the PySpark TWS (transformWithState) implementation so that it does NOT materialize the output entirely. Instead of building a list, the serializer now creates a generator, so the output can be consumed lazily.
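    
    In essence, the change replaces an eager list comprehension with a lazy generator. A simplified sketch of the pattern (the real change in serializers.py appears in the diff below):
    
        # Before: the whole output is built in memory before anything is written
        result = [(batch, t) for x in iterator for y, t in x for batch in y]
    
        # After: elements are produced one at a time as the stream is written
        def flatten_iterator():
            for x in iterator:
                for y, t in x:
                    for batch in y:
                        yield (batch, t)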
    
    ### Why are the changes needed?
    
    Without this PR, all the outputs are materialized when the JVM signals to the Python worker that there is no further input (at task completion), which raises two critical issues:
    
    * the downstream operator can only see outputs after the TWS operator has processed all of its inputs
    * all the outputs are materialized in memory in the Python worker, which could lead to memory issues (see the illustration below)
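    
    As a standalone illustration of the memory difference (hypothetical sizes, unrelated to the actual workload):
    
        # Eager: roughly 100 MB of payload is resident at once
        eager = [bytes(1024) for _ in range(100_000)]
    
        # Lazy: only one 1 KB chunk is alive at a time while consuming
        lazy = (bytes(1024) for _ in range(100_000))
        for chunk in lazy:
            pass  # each chunk can be garbage-collected after use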
    
    ### Does this PR introduce _any_ user-facing change?
    
    No.
    
    ### How was this patch tested?
    
    Existing UTs. I've also confirmed the fix manually, as described below:
    
    * Before this PR, all the outputs become available only after all inputs have been processed
    * After this PR, outputs become available while inputs are still being processed
    
    The change I made to verify the fix manually:
    
    https://github.com/HeartSaVioR/spark/commit/cd30db0746bd59dde032d7f209e8657f4f7d93c5
    
    If we call run_test() in testcode.py in PySpark, with this fix the log messages `Spark pulls the iterators` and `The data is being retrieved from Python worker` are interleaved. Without the fix, the messages appear in two blocks: all `Spark pulls the iterators` messages come first, and then all `The data is being retrieved from Python worker` messages come later.
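    
    A minimal standalone analogue of that experiment (hypothetical code, not the actual testcode.py):
    
        def produce():
            for i in range(3):
                print("Spark pulls the iterators")
                yield i
    
        # Lazy consumption: the two messages interleave (with the fix)
        for _ in produce():
            print("The data is being retrieved from Python worker")
    
        # Eager consumption: all producer messages print first (without the fix)
        for _ in list(produce()):
            print("The data is being retrieved from Python worker")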
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    No.
    
    Closes #50110 from HeartSaVioR/SPARK-51351.
    
    Authored-by: Jungtaek Lim <kabhwan.opensou...@gmail.com>
    Signed-off-by: Jungtaek Lim <kabhwan.opensou...@gmail.com>
    (cherry picked from commit 496fe7a50919ccf291836bfb789b22402d7221e9)
    Signed-off-by: Jungtaek Lim <kabhwan.opensou...@gmail.com>
---
 python/pyspark/sql/pandas/serializers.py | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/python/pyspark/sql/pandas/serializers.py b/python/pyspark/sql/pandas/serializers.py
index c9a51dbd1967..c7f40a360d8f 100644
--- a/python/pyspark/sql/pandas/serializers.py
+++ b/python/pyspark/sql/pandas/serializers.py
@@ -1223,8 +1223,17 @@ class TransformWithStateInPandasSerializer(ArrowStreamPandasUDFSerializer):
         Read through an iterator of (iterator of pandas DataFrame), serialize them to Arrow
         RecordBatches, and write batches to stream.
         """
-        result = [(b, t) for x in iterator for y, t in x for b in y]
-        super().dump_stream(result, stream)
+
+        def flatten_iterator():
+            # iterator: iter[list[(iter[pandas.DataFrame], pdf_type)]]
+            for packed in iterator:
+                iter_pdf_with_type = packed[0]
+                iter_pdf = iter_pdf_with_type[0]
+                pdf_type = iter_pdf_with_type[1]
+                for pdf in iter_pdf:
+                    yield (pdf, pdf_type)
+
+        super().dump_stream(flatten_iterator(), stream)
 
 
 class TransformWithStateInPandasInitStateSerializer(TransformWithStateInPandasSerializer):


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
