Repository: spark
Updated Branches:
refs/heads/master 7d05a6245 -> 4a39b5a1b
[SPARK-11958][SPARK-11957][ML][DOC] SQLTransformer user guide and example code
Add ```SQLTransformer``` user guide and example code, and make the Scala API doc
clearer.
Author: Yanbo Liang
Closes #10006 from yanboliang
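For context: a SQLTransformer applies a SQL statement in which the placeholder `__THIS__` stands for the input dataset. A rough pure-Python sketch of that substitution idea using sqlite3 (the table schema and data here are illustrative, not Spark's implementation):

```python
import sqlite3

def sql_transform(rows, statement, table="this_dataset"):
    """Run a SQL statement in which __THIS__ refers to the input rows."""
    conn = sqlite3.connect(":memory:")
    conn.execute(f"CREATE TABLE {table} (id INTEGER, v1 REAL, v2 REAL)")
    conn.executemany(f"INSERT INTO {table} VALUES (?, ?, ?)", rows)
    result = conn.execute(statement.replace("__THIS__", table)).fetchall()
    conn.close()
    return result

rows = [(0, 1.0, 3.0), (2, 2.0, 5.0)]
# Derive a new column v3 from the existing ones via a SELECT over __THIS__.
out = sql_transform(rows, "SELECT *, (v1 + v2) AS v3 FROM __THIS__")
print(out)
```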
Repository: spark
Updated Branches:
refs/heads/branch-1.6 8652fc03c -> 5c8216920
[SPARK-11958][SPARK-11957][ML][DOC] SQLTransformer user guide and example code
Add ```SQLTransformer``` user guide and example code, and make the Scala API doc
clearer.
Author: Yanbo Liang
Closes #10006 from yanbol
Repository: spark
Updated Branches:
refs/heads/branch-1.6 3c683ed5f -> 8652fc03c
[SPARK-10259][ML] Add @since annotation to ml.classification
Add since annotation to ml.classification
Author: Takahashi Hiroshi
Closes #8534 from taishi-oss/issue10259.
(cherry picked from commit 7d05a624510f
Repository: spark
Updated Branches:
refs/heads/master 73896588d -> 7d05a6245
[SPARK-10259][ML] Add @since annotation to ml.classification
Add since annotation to ml.classification
Author: Takahashi Hiroshi
Closes #8534 from taishi-oss/issue10259.
Project: http://git-wip-us.apache.org/repo
Repository: spark
Updated Branches:
refs/heads/branch-1.5 3868ab644 -> 2f30927a5
[SPARK-12160][MLLIB] Use SQLContext.getOrCreate in MLlib - 1.5 backport
This backports [https://github.com/apache/spark/pull/10161] to Spark 1.5, with
the difference that ChiSqSelector does not require modificati
Repository: spark
Updated Branches:
refs/heads/master 78209b0cc -> 73896588d
Closes #10098
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/73896588
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/73896588
Diff: htt
Repository: spark
Updated Branches:
refs/heads/branch-1.6 115bfbdae -> 3c683ed5f
http://git-wip-us.apache.org/repos/asf/spark/blob/3c683ed5/examples/src/main/scala/org/apache/spark/examples/ml/StringIndexerExample.scala
--
diff
[SPARK-11551][DOC][EXAMPLE] Replace example code in ml-features.md using
include_example
Made a new patch containing only the markdown examples moved to the examples/
folder. Only three Java examples were not shifted since they contained
compilation errors; these classes are
1) StandardScale 2) NormalizerExampl
http://git-wip-us.apache.org/repos/asf/spark/blob/3c683ed5/examples/src/main/java/org/apache/spark/examples/ml/JavaPolynomialExpansionExample.java
--
diff --git
a/examples/src/main/java/org/apache/spark/examples/ml/JavaPolynomialE
http://git-wip-us.apache.org/repos/asf/spark/blob/78209b0c/examples/src/main/java/org/apache/spark/examples/ml/JavaPolynomialExpansionExample.java
--
diff --git
a/examples/src/main/java/org/apache/spark/examples/ml/JavaPolynomialE
[SPARK-11551][DOC][EXAMPLE] Replace example code in ml-features.md using
include_example
Made a new patch containing only the markdown examples moved to the examples/
folder. Only three Java examples were not shifted since they contained
compilation errors; these classes are
1) StandardScale 2) NormalizerExampl
Repository: spark
Updated Branches:
refs/heads/master 3e7e05f5e -> 78209b0cc
http://git-wip-us.apache.org/repos/asf/spark/blob/78209b0c/examples/src/main/scala/org/apache/spark/examples/ml/StringIndexerExample.scala
--
diff --g
Repository: spark
Updated Branches:
refs/heads/master 36282f78b -> 3e7e05f5e
[SPARK-12160][MLLIB] Use SQLContext.getOrCreate in MLlib
Switched from using SQLContext constructor to using getOrCreate, mainly in
model save/load methods.
This covers all instances in spark.mllib. There were no u
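The change replaces direct construction with an accessor that reuses an already-active context. A minimal sketch of the singleton pattern involved (the `Context` class is hypothetical, not Spark's SQLContext):

```python
class Context:
    """Hypothetical context object with a getOrCreate-style accessor."""
    _active = None

    @classmethod
    def get_or_create(cls):
        # Reuse the already-active instance instead of constructing a second
        # one, as the commit does for SQLContext in model save/load paths.
        if cls._active is None:
            cls._active = cls()
        return cls._active

saver = Context.get_or_create()
loader = Context.get_or_create()
print(saver is loader)  # both call sites share the single active context
```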
Repository: spark
Updated Branches:
refs/heads/branch-1.6 cdeb89b34 -> 115bfbdae
[SPARK-12160][MLLIB] Use SQLContext.getOrCreate in MLlib
Switched from using SQLContext constructor to using getOrCreate, mainly in
model save/load methods.
This covers all instances in spark.mllib. There were
Repository: spark
Updated Branches:
refs/heads/branch-1.6 c8aa5f201 -> cdeb89b34
[SPARK-12184][PYTHON] Make Python API doc for pivot consistent with Scala doc
In SPARK-11946 the pivot API was changed slightly and its doc was updated, but
the doc changes were not made for the Python API. This
Repository: spark
Updated Branches:
refs/heads/master 84b809445 -> 36282f78b
[SPARK-12184][PYTHON] Make Python API doc for pivot consistent with Scala doc
In SPARK-11946 the pivot API was changed slightly and its doc was updated, but
the doc changes were not made for the Python API. This PR u
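For readers unfamiliar with pivot: distinct values of one column become output columns, with an aggregate over a value column. A toy pure-Python sketch under hypothetical data (Spark's implementation and API differ):

```python
from collections import defaultdict

def pivot(rows, group_col, pivot_col, value_col, values=None):
    """Distinct pivot_col values become output columns; value_col is summed."""
    if values is None:  # like Spark, infer the pivot values when not given
        values = sorted({r[pivot_col] for r in rows})
    table = defaultdict(lambda: dict.fromkeys(values, 0))
    for r in rows:
        if r[pivot_col] in values:
            table[r[group_col]][r[pivot_col]] += r[value_col]
    return dict(table)

rows = [
    {"year": 2011, "course": "dotNET", "earnings": 10000},
    {"year": 2011, "course": "Java", "earnings": 20000},
    {"year": 2012, "course": "dotNET", "earnings": 5000},
]
pivoted = pivot(rows, "year", "course", "earnings")
print(pivoted)
```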
Repository: spark
Updated Branches:
refs/heads/master 871e85d9c -> 84b809445
[SPARK-11884] Drop multiple columns in the DataFrame API
See the thread Ben started:
http://search-hadoop.com/m/q3RTtveEuhjsr7g/
This PR adds a drop() method to DataFrame which accepts multiple column names
Author: te
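A sketch of the intended semantics of dropping several columns at once, on plain dict rows rather than a DataFrame (hypothetical helper and data, not the PR's code):

```python
def drop(rows, *col_names):
    """Return rows with the named columns removed; unknown names are ignored,
    matching DataFrame.drop's tolerant behavior."""
    return [{k: v for k, v in r.items() if k not in col_names} for r in rows]

rows = [{"id": 1, "a": 10, "b": 20}, {"id": 2, "a": 30, "b": 40}]
print(drop(rows, "a", "b"))  # only the id column remains
```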
Repository: spark
Updated Branches:
refs/heads/branch-1.6 539914f1a -> c8aa5f201
[SPARK-11963][DOC] Add docs for QuantileDiscretizer
https://issues.apache.org/jira/browse/SPARK-11963
Author: Xusen Yin
Closes #9962 from yinxusen/SPARK-11963.
(cherry picked from commit 871e85d9c14c6b19068cc7
Repository: spark
Updated Branches:
refs/heads/master 3f4efb5c2 -> 871e85d9c
[SPARK-11963][DOC] Add docs for QuantileDiscretizer
https://issues.apache.org/jira/browse/SPARK-11963
Author: Xusen Yin
Closes #9962 from yinxusen/SPARK-11963.
Project: http://git-wip-us.apache.org/repos/asf/spar
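For context: QuantileDiscretizer bins a continuous column into buckets whose split points sit at (approximately) even quantiles. A pure-Python sketch using exact quantiles (Spark uses approximate sampling; the names and data here are illustrative):

```python
def quantile_splits(values, num_buckets):
    """Compute bucket boundaries at evenly spaced quantiles (exact, for
    illustration only)."""
    s = sorted(values)
    return [s[int(len(s) * i / num_buckets)] for i in range(1, num_buckets)]

def discretize(values, splits):
    """Map each value to the number of split points it meets or exceeds."""
    return [sum(v >= cut for cut in splits) for v in values]

values = [0.1, 0.4, 0.5, 0.7, 1.2, 2.0]
splits = quantile_splits(values, 3)
buckets = discretize(values, splits)
print(splits, buckets)
```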
Repository: spark
Updated Branches:
refs/heads/branch-1.5 93a0510a5 -> 3868ab644
[SPARK-12101][CORE] Fix thread pools that cannot cache tasks in Worker and
AppClient (backport 1.5)
backport #10108 to branch 1.5
Author: Shixiong Zhu
Closes #10135 from zsxwing/fix-threadpool-1.5.
Project:
Repository: spark
Updated Branches:
refs/heads/master 5d80d8c6a -> 3f4efb5c2
[SPARK-12060][CORE] Avoid memory copy in JavaSerializerInstance.serialize
Merged #10051 again since #10083 is resolved.
This reverts commit 328b757d5d4486ea3c2e246780792d7a57ee85e5.
Author: Shixiong Zhu
Closes #10
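The change avoids materializing an intermediate byte-array copy of the serialization buffer. A Python analogue of the copy-vs-view distinction (`getvalue` copies, `getbuffer` exposes the same memory; the actual Java fix is different code):

```python
import io
import pickle

obj = {"task": 42}
buf = io.BytesIO()
pickle.dump(obj, buf)

copied = buf.getvalue()  # allocates a new bytes object: an extra memory copy
view = buf.getbuffer()   # zero-copy view over the stream's internal buffer

same = bytes(view) == copied
print(same)  # identical contents, but the view needed no copy
```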
Repository: spark
Updated Branches:
refs/heads/branch-1.6 fed453821 -> 539914f1a
[SPARK-11932][STREAMING] Partition previous TrackStateRDD if partitioner not
present
The reason is that TrackStateRDDs generated by trackStateByKey expect the
previous batch's TrackStateRDDs to have a partitione
Repository: spark
Updated Branches:
refs/heads/master ef3f047c0 -> 5d80d8c6a
[SPARK-11932][STREAMING] Partition previous TrackStateRDD if partitioner not
present
The reason is that TrackStateRDDs generated by trackStateByKey expect the
previous batch's TrackStateRDDs to have a partitioner. H
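For context: trackStateByKey needs the previous batch's state RDD to share the same partitioner, so each key's state stays on one partition across batches and no shuffle is needed. A plain-Python sketch of that stability property (hypothetical keys, not Spark code):

```python
def hash_partition(key, num_partitions):
    """Deterministic key -> partition assignment, like a HashPartitioner."""
    return hash(key) % num_partitions

# With the same partitioner across batches, a key's state never moves.
keys = ["user-1", "user-2", "user-3"]
batch1 = {k: hash_partition(k, 4) for k in keys}
batch2 = {k: hash_partition(k, 4) for k in keys}
print(batch1 == batch2)  # co-located state across batches
```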
Repository: spark
Updated Branches:
refs/heads/branch-1.6 3f230f7b3 -> fed453821
[SPARK-12132] [PYSPARK] raise KeyboardInterrupt inside SIGINT handler
Currently, the current line is not cleared by Ctrl-C.
After this patch:
```
>>> asdfasdf^C
Traceback (most recent call last):
File "~/spark/py
```
Repository: spark
Updated Branches:
refs/heads/master 39d677c8f -> ef3f047c0
[SPARK-12132] [PYSPARK] raise KeyboardInterrupt inside SIGINT handler
Currently, the current line is not cleared by Ctrl-C.
After this patch:
```
>>> asdfasdf^C
Traceback (most recent call last):
File "~/spark/python
```
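A minimal sketch of the fix's mechanism: install a SIGINT handler that raises KeyboardInterrupt, so the statement being typed or executed is abandoned cleanly (plain Python on a Unix system, not the pyspark shell code):

```python
import os
import signal

def handler(signum, frame):
    # Raising here aborts the statement being executed when Ctrl-C arrives.
    raise KeyboardInterrupt

signal.signal(signal.SIGINT, handler)

try:
    os.kill(os.getpid(), signal.SIGINT)  # simulate pressing Ctrl-C
    interrupted = False
except KeyboardInterrupt:
    interrupted = True

print(interrupted)
```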
Repository: spark
Updated Branches:
refs/heads/branch-1.6 c54b698ec -> 3f230f7b3
http://git-wip-us.apache.org/repos/asf/spark/blob/3f230f7b/R/pkg/inst/tests/testthat/test_sparkSQL.R
--
diff --git a/R/pkg/inst/tests/testthat/tes
Repository: spark
Updated Branches:
refs/heads/master 9cde7d5fa -> 39d677c8f
http://git-wip-us.apache.org/repos/asf/spark/blob/39d677c8/R/pkg/inst/tests/testthat/test_sparkSQL.R
--
diff --git a/R/pkg/inst/tests/testthat/test_sp
http://git-wip-us.apache.org/repos/asf/spark/blob/3f230f7b/R/pkg/inst/tests/testthat/test_binaryFile.R
--
diff --git a/R/pkg/inst/tests/testthat/test_binaryFile.R
b/R/pkg/inst/tests/testthat/test_binaryFile.R
new file mode 100644
http://git-wip-us.apache.org/repos/asf/spark/blob/39d677c8/R/pkg/inst/tests/test_sparkSQL.R
--
diff --git a/R/pkg/inst/tests/test_sparkSQL.R b/R/pkg/inst/tests/test_sparkSQL.R
deleted file mode 100644
index 6ef03ae..000
--- a/R
http://git-wip-us.apache.org/repos/asf/spark/blob/39d677c8/R/pkg/inst/tests/testthat/test_binaryFile.R
--
diff --git a/R/pkg/inst/tests/testthat/test_binaryFile.R
b/R/pkg/inst/tests/testthat/test_binaryFile.R
new file mode 100644
[SPARK-12034][SPARKR] Eliminate warnings in SparkR test cases.
This PR:
1. Suppress all known warnings.
2. Cleanup test cases and fix some errors in test cases.
3. Fix errors in HiveContext related test cases. These test cases are actually
not run previously due to a bug of creating TestHiveConte
http://git-wip-us.apache.org/repos/asf/spark/blob/3f230f7b/R/pkg/inst/tests/test_sparkSQL.R
--
diff --git a/R/pkg/inst/tests/test_sparkSQL.R b/R/pkg/inst/tests/test_sparkSQL.R
deleted file mode 100644
index 6ef03ae..000
--- a/R
[SPARK-12034][SPARKR] Eliminate warnings in SparkR test cases.
This PR:
1. Suppress all known warnings.
2. Cleanup test cases and fix some errors in test cases.
3. Fix errors in HiveContext related test cases. These test cases are actually
not run previously due to a bug of creating TestHiveConte
Repository: spark
Updated Branches:
refs/heads/master 6fd9e70e3 -> 9cde7d5fa
[SPARK-12032] [SQL] Re-order inner joins to do join with conditions first
Currently, the order of joins is exactly the same as in the SQL query; some
conditions may not be pushed down to the correct join, and then those joins will
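The idea, roughly: among inner joins, run the ones that carry join conditions before unconditioned (cross-join-like) ones. A toy sketch of that reordering rule on a hypothetical plan representation (not Catalyst's actual rule):

```python
def reorder_joins(join_plans):
    """Stable sort: inner joins with a condition come first, so selective
    joins run before unconditioned (cross-join-like) ones."""
    return sorted(join_plans, key=lambda j: j["condition"] is None)

plans = [
    {"tables": ("a", "b"), "condition": None},           # no condition: costly
    {"tables": ("b", "c"), "condition": "b.id = c.id"},  # selective: do first
]
print([p["tables"] for p in reorder_joins(plans)])
```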
Repository: spark
Updated Branches:
refs/heads/branch-1.6 82a71aba0 -> c54b698ec
[SPARK-12106][STREAMING][FLAKY-TEST] BatchedWAL test transiently flaky when
Jenkins load is high
We need to make sure that the last entry is indeed the last entry in the queue.
Author: Burak Yavuz
Closes #1011
Repository: spark
Updated Branches:
refs/heads/master 80a824d36 -> 6fd9e70e3
[SPARK-12106][STREAMING][FLAKY-TEST] BatchedWAL test transiently flaky when
Jenkins load is high
We need to make sure that the last entry is indeed the last entry in the queue.
Author: Burak Yavuz
Closes #10110 fr