Repository: spark
Updated Branches:
  refs/heads/branch-1.2 e63516855 -> 76c88c668


SPARK-785 [CORE] ClosureCleaner not invoked on most PairRDDFunctions

This looks like a simple but important fix: `combineByKey` should clean the 
closures of its function arguments, and that in turn covers essentially all 
remaining functions in `PairRDDFunctions`, since they delegate to it.
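
For context, the delegation works roughly like this (a simplified paraphrase of 
branch-1.2, not the verbatim source): `reduceByKey`, `foldByKey`, `aggregateByKey` 
and `groupByKey` all bottom out in `combineByKey`, so cleaning its three function 
arguments covers the whole family.

  // Paraphrased sketch of the delegation, not verbatim source: reduceByKey reuses
  // its merge function as both mergeValue and mergeCombiners, with an identity
  // createCombiner, so cleaning inside combineByKey also cleans `func`.
  def reduceByKey(partitioner: Partitioner, func: (V, V) => V): RDD[(K, V)] = {
    combineByKey[V]((v: V) => v, func, func, partitioner)
  }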

Author: Sean Owen <so...@cloudera.com>

Closes #3690 from srowen/SPARK-785 and squashes the following commits:

8df68fe [Sean Owen] Clean context of most remaining functions in 
PairRDDFunctions, which ultimately call combineByKey

(cherry picked from commit 2a28bc61009a170af3853c78f7f36970898a6d56)
Signed-off-by: Josh Rosen <joshro...@databricks.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/76c88c66
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/76c88c66
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/76c88c66

Branch: refs/heads/branch-1.2
Commit: 76c88c6687033bd3cb9a686ea922098f4c0212ad
Parents: e635168
Author: Sean Owen <so...@cloudera.com>
Authored: Mon Dec 15 16:06:15 2014 -0800
Committer: Josh Rosen <joshro...@databricks.com>
Committed: Wed Dec 17 12:20:29 2014 -0800

----------------------------------------------------------------------
 core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/76c88c66/core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala b/core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
index 8c2c959..0a0f0c3 100644
--- a/core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
+++ b/core/src/main/scala/org/apache/spark/rdd/PairRDDFunctions.scala
@@ -86,7 +86,10 @@ class PairRDDFunctions[K, V](self: RDD[(K, V)])
         throw new SparkException("Default partitioner cannot partition array keys.")
       }
     }
-    val aggregator = new Aggregator[K, V, C](createCombiner, mergeValue, mergeCombiners)
+    val aggregator = new Aggregator[K, V, C](
+      self.context.clean(createCombiner),
+      self.context.clean(mergeValue),
+      self.context.clean(mergeCombiners))
     if (self.partitioner == Some(partitioner)) {
       self.mapPartitions(iter => {
         val context = TaskContext.get()

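As a quick illustration of what the cleaning buys (a minimal, hypothetical sketch, 
not part of this commit; the object and class names below are made up): 
`SparkContext.clean` runs ClosureCleaner over each function and by default checks 
that it serializes, so a closure that captures a non-serializable object is now 
rejected at the `reduceByKey`/`combineByKey` call with "Task not serializable", 
instead of surfacing only later when an action submits the job.

import org.apache.spark.{SparkConf, SparkContext, SparkException}
import org.apache.spark.SparkContext._   // pair-RDD implicits on branch-1.2

object Spark785Sketch {
  class Unserializable { val factor = 10 }   // deliberately not Serializable

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("spark-785-sketch").setMaster("local[*]"))
    val bad = new Unserializable
    val pairs = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3)))
    try {
      // With this patch, combineByKey (which reduceByKey delegates to) cleans the
      // closure, and its serializability check rejects the captured `bad` right here.
      val summed = pairs.reduceByKey((x, y) => (x + y) * bad.factor)
      // Before the patch, the same failure showed up only here, when the action
      // finally serialized the task closure and submitted the job.
      summed.collect()
    } catch {
      case e: SparkException => println("Rejected closure: " + e.getMessage)
    }
    sc.stop()
  }
}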
