Repository: spark
Updated Branches:
refs/heads/branch-1.5 00ccb2173 -> a0d52eb30
[SPARK-9958] [SQL] Make HiveThriftServer2Listener thread-safe and update the
tab name to "JDBC/ODBC Server"
This PR fixes the thread-safety issue of HiveThriftServer2Listener and also
changes the tab name to "JDBC/ODBC Server".
Repository: spark
Updated Branches:
refs/heads/master 7c7c7529a -> c8677d736
[SPARK-9958] [SQL] Make HiveThriftServer2Listener thread-safe and update the
tab name to "JDBC/ODBC Server"
This PR fixes the thread-safety issue of HiveThriftServer2Listener and also
changes the tab name to "JDBC/ODBC Server".
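The thread-safety concern is easy to illustrate. Below is a minimal sketch, not the actual patch: a listener whose mutable session bookkeeping is written by engine threads and read by the web UI thread, so every access is synchronized. All names are illustrative.

```scala
import scala.collection.mutable

// Illustrative listener: engine threads call the onX handlers, the web UI
// thread calls liveSessions, so all access to the map is synchronized.
class SessionListenerSketch {
  private val sessionStartTimes = mutable.HashMap.empty[String, Long]

  def onSessionCreated(sessionId: String): Unit = synchronized {
    sessionStartTimes(sessionId) = System.currentTimeMillis()
  }

  def onSessionClosed(sessionId: String): Unit = synchronized {
    sessionStartTimes.remove(sessionId)
  }

  // Return an immutable snapshot so the UI never iterates over live state.
  def liveSessions: Map[String, Long] = synchronized {
    sessionStartTimes.toMap
  }
}
```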
Repository: spark
Updated Branches:
refs/heads/branch-1.5 703e3f1ea -> 00ccb2173
[MINOR] [SQL] Remove canEqual in Row
As `InternalRow` does not extend `Row` now, I think we can remove it.
Author: Liang-Chi Hsieh
Closes #8170 from viirya/remove_canequal.
(cherry picked from commit 7c7c7529a
Repository: spark
Updated Branches:
refs/heads/master bd35385d5 -> 7c7c7529a
[MINOR] [SQL] Remove canEqual in Row
As `InternalRow` does not extend `Row` now, I think we can remove it.
Author: Liang-Chi Hsieh
Closes #8170 from viirya/remove_canequal.
Project: http://git-wip-us.apache.org/r
Repository: spark
Updated Branches:
refs/heads/branch-1.5 9df2a2d76 -> 703e3f1ea
[SPARK-9945] [SQL] pageSize should be calculated from executor.memory
Currently, the pageSize of TungstenSort is calculated from driver.memory, but it
should use executor.memory instead.
Also, in the worst case, the saf
Repository: spark
Updated Branches:
refs/heads/master 8187b3ae4 -> bd35385d5
[SPARK-9945] [SQL] pageSize should be calculated from executor.memory
Currently, the pageSize of TungstenSort is calculated from driver.memory, but it
should use executor.memory instead.
Also, in the worst case, the safeFac
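A rough sketch of the intent, assuming a plain string-valued config map; the helper, the memory-string parsing, and the safety factor below are illustrative stand-ins, not Spark's actual implementation.

```scala
// Hypothetical helper: derive a sort page size from the executor heap
// (spark.executor.memory), not the driver heap.
object PageSizeSketch {
  def pageSizeBytes(conf: Map[String, String], cores: Int): Long = {
    val executorMemoryMb = conf.getOrElse("spark.executor.memory", "1g") match {
      case s if s.endsWith("g") => s.dropRight(1).toLong * 1024
      case s if s.endsWith("m") => s.dropRight(1).toLong
      case s                    => s.toLong / (1024 * 1024) // assume plain bytes
    }
    val safetyFactor = 16L // head-room so one task cannot grab the whole heap
    val bytes = executorMemoryMb * 1024L * 1024L / cores / safetyFactor
    // Clamp to a sane range so tiny or huge heaps still get a usable page.
    math.max(1L << 20, math.min(bytes, 64L << 20))
  }
}
```

Under these assumptions, a 4 GB executor with 4 cores yields a 64 MB page (the upper clamp).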
Repository: spark
Updated Branches:
refs/heads/branch-1.5 b318b1141 -> 9df2a2d76
http://git-wip-us.apache.org/repos/asf/spark/blob/9df2a2d7/sql/hive/src/test/scala/org/apache/spark/sql/hive/MultiDatabaseSuite.scala
http://git-wip-us.apache.org/repos/asf/spark/blob/9df2a2d7/sql/core/src/test/scala/org/apache/spark/sql/TestData.scala
Repository: spark
Updated Branches:
refs/heads/master c50f97daf -> 8187b3ae4
http://git-wip-us.apache.org/repos/asf/spark/blob/8187b3ae/sql/hive/src/test/scala/org/apache/spark/sql/hive/MultiDatabaseSuite.scala
[SPARK-9580] [SQL] Replace singletons in SQL tests
A fundamental limitation of the existing SQL tests is that *there is simply no
way to create your own `SparkContext`*. This is a serious limitation because
the user may wish to use a different master or config. As a case in point,
`BroadcastJoi
http://git-wip-us.apache.org/repos/asf/spark/blob/8187b3ae/sql/core/src/test/scala/org/apache/spark/sql/TestData.scala
http://git-wip-us.apache.org/repos/asf/spark/blob/8187b3ae/sql/core/src/test/scala/org/apache/spark/sql/execution/joins/InnerJoinSuite.scala
http://git-wip-us.apache.org/repos/asf/spark/blob/9df2a2d7/sql/core/src/test/scala/org/apache/spark/sql/execution/joins/InnerJoinSuite.scala
[SPARK-9580] [SQL] Replace singletons in SQL tests
A fundamental limitation of the existing SQL tests is that *there is simply no
way to create your own `SparkContext`*. This is a serious limitation because
the user may wish to use a different master or config. As a case in point,
`BroadcastJoi
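The point about owning your own `SparkContext` can be sketched as a small test mixin. This is an illustration of the idea only, not the code from the PR; all names are hypothetical.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Hypothetical mixin: each suite builds (and owns) its own contexts, so a
// suite that needs a different master or config simply overrides sparkConf.
trait LocalSQLContext {
  protected def sparkConf: SparkConf =
    new SparkConf().setMaster("local[2]").setAppName(getClass.getSimpleName)

  protected lazy val sc: SparkContext = new SparkContext(sparkConf)
  protected lazy val sqlContext: SQLContext = new SQLContext(sc)

  protected def stopContext(): Unit = sc.stop()
}
```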
Repository: spark
Updated Branches:
refs/heads/branch-1.5 cadc3b7d2 -> b318b1141
[SPARK-9943] [SQL] deserialized UnsafeHashedRelation should be serializable
When the free memory in the executor runs low, the cached broadcast objects need
to be serialized to disk, but currently the deserialized Uns
Repository: spark
Updated Branches:
refs/heads/master 693949ba4 -> c50f97daf
[SPARK-9943] [SQL] deserialized UnsafeHashedRelation should be serializable
When the free memory in the executor runs low, the cached broadcast objects need
to be serialized to disk, but currently the deserialized UnsafeH
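The underlying requirement is that the relation stays writable even after it has been deserialized once. Below is a hedged sketch of that pattern using `java.io.Externalizable`; the class and field names are invented for illustration and are not Spark's.

```scala
import java.io.{Externalizable, ObjectInput, ObjectOutput}
import java.util.{HashMap => JHashMap}

// A hash relation that can be written out again after deserialization,
// e.g. when low executor memory forces the cached object back to disk.
class SerializableRelationSketch(private var table: JHashMap[String, Array[Byte]])
  extends Externalizable {

  def this() = this(new JHashMap)  // no-arg constructor required by Externalizable

  override def writeExternal(out: ObjectOutput): Unit = {
    out.writeInt(table.size())
    val it = table.entrySet().iterator()
    while (it.hasNext) {
      val e = it.next()
      out.writeUTF(e.getKey)
      out.writeInt(e.getValue.length)
      out.write(e.getValue)
    }
  }

  override def readExternal(in: ObjectInput): Unit = {
    val n = in.readInt()
    table = new JHashMap[String, Array[Byte]](n)
    var i = 0
    while (i < n) {
      val key = in.readUTF()
      val value = new Array[Byte](in.readInt())
      in.readFully(value)
      table.put(key, value)
      i += 1
    }
  }
}
```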
Repository: spark
Updated Branches:
refs/heads/branch-1.4 041e720ec -> db71ea482
[SPARK-8976] [PYSPARK] fix open mode in python3
This bug only happens on Python 3 and Windows.
I tested this manually with Python 3 and the Python daemon disabled; no unit test
yet.
Author: Davies Liu
Closes #8181
Repository: spark
Updated Branches:
refs/heads/master 6c5858bc6 -> 693949ba4
[SPARK-8976] [PYSPARK] fix open mode in python3
This bug only happens on Python 3 and Windows.
I tested this manually with Python 3 and the Python daemon disabled; no unit test
yet.
Author: Davies Liu
Closes #8181 from
Repository: spark
Updated Branches:
refs/heads/branch-1.5 2b6b1d12f -> cadc3b7d2
[SPARK-8976] [PYSPARK] fix open mode in python3
This bug only happens on Python 3 and Windows.
I tested this manually with Python 3 and the Python daemon disabled; no unit test
yet.
Author: Davies Liu
Closes #8181
Repository: spark
Updated Branches:
refs/heads/branch-1.5 2c7f8da58 -> 2b6b1d12f
[SPARK-9922] [ML] rename StringIndexerReverse to IndexToString
What `StringIndexerInverse` does is not strictly associated with
`StringIndexer`, and the name does not clearly describe the transformation.
Renamin
Repository: spark
Updated Branches:
refs/heads/master c2520f501 -> 6c5858bc6
[SPARK-9922] [ML] rename StringIndexerReverse to IndexToString
What `StringIndexerInverse` does is not strictly associated with
`StringIndexer`, and the name does not clearly describe the transformation.
Renaming to
Repository: spark
Updated Branches:
refs/heads/master a8d2f4c5f -> c2520f501
[SPARK-9935] [SQL] EqualNotNull not processed in ORC
https://issues.apache.org/jira/browse/SPARK-9935
Author: hyukjinkwon
Closes #8163 from HyukjinKwon/master.
Project: http://git-wip-us.apache.org/repos/asf/spar
Repository: spark
Updated Branches:
refs/heads/branch-1.5 875ecc7f6 -> 2c7f8da58
[SPARK-9942] [PYSPARK] [SQL] ignore exceptions while try to import pandas
If pandas is broken (can't be imported, or raises exceptions other than
ImportError), pyspark can't be imported; we should ignore all th
Repository: spark
Updated Branches:
refs/heads/master 864de8eaf -> a8d2f4c5f
[SPARK-9942] [PYSPARK] [SQL] ignore exceptions while try to import pandas
If pandas is broken (can't be imported, or raises exceptions other than
ImportError), pyspark can't be imported; we should ignore all the ex
Repository: spark
Updated Branches:
refs/heads/master 8815ba2f6 -> 864de8eaf
[SPARK-9661] [MLLIB] [ML] Java compatibility
I skimmed through the docs for various instances of Object and replaced them
with Java-compatible versions of the same.
1. Some methods in LDAModel.
2. runMiniBatchSGD
3. k
Repository: spark
Updated Branches:
refs/heads/branch-1.5 30460206f -> 875ecc7f6
[SPARK-9661] [MLLIB] [ML] Java compatibility
I skimmed through the docs for various instances of Object and replaced them
with Java-compatible versions of the same.
1. Some methods in LDAModel.
2. runMiniBatchSGD
Repository: spark
Updated Branches:
refs/heads/branch-1.4 8ce86b23f -> 041e720ec
[SPARK-9649] Fix flaky test MasterSuite - randomize ports
```
Error Message
Failed to bind to: /127.0.0.1:7093: Service 'sparkMaster' failed after 16
retries!
Stacktrace
java.net.BindException: Failed to
Repository: spark
Updated Branches:
refs/heads/branch-1.5 883c7d35f -> 30460206f
[SPARK-9649] Fix MasterSuite, third time's a charm
This particular test did not load the default configurations, so
it continued to start the REST server, which caused port bind
exceptions.
Project: http://git-wi
Repository: spark
Updated Branches:
refs/heads/master 65fec798c -> 8815ba2f6
[SPARK-9649] Fix MasterSuite, third time's a charm
This particular test did not load the default configurations, so
it continued to start the REST server, which caused port bind
exceptions.
Project: http://git-wip-us
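A minimal sketch of the general remedy described above, using standard SparkConf keys; the exact change made in the suite may differ.

```scala
import org.apache.spark.SparkConf

// Give the test Master a conf that cannot collide on a fixed port:
// turn the standalone REST server off and let the OS pick free ports.
val conf = new SparkConf()
  .set("spark.master.rest.enabled", "false")
  .set("spark.ui.port", "0") // 0 = any free port
```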
Repository: spark
Updated Branches:
refs/heads/branch-1.5 2b1353249 -> 883c7d35f
[MINOR] [DOC] fix mllib pydoc warnings
Switch to correct Sphinx syntax. MechCoder
Author: Xiangrui Meng
Closes #8169 from mengxr/mllib-pydoc-fix.
(cherry picked from commit 65fec798ce52ca6b8b0fe14b78a16712778a
Repository: spark
Updated Branches:
refs/heads/master 4b70798c9 -> 65fec798c
[MINOR] [DOC] fix mllib pydoc warnings
Switch to correct Sphinx syntax. MechCoder
Author: Xiangrui Meng
Closes #8169 from mengxr/mllib-pydoc-fix.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit:
Repository: spark
Updated Branches:
refs/heads/branch-1.5 49085b56c -> 2b1353249
[MINOR] [ML] change MultilayerPerceptronClassifierModel to
MultilayerPerceptronClassificationModel
To follow the naming rule of ML, change `MultilayerPerceptronClassifierModel`
to `MultilayerPerceptronClassifica
Repository: spark
Updated Branches:
refs/heads/master 7a539ef3b -> 4b70798c9
[MINOR] [ML] change MultilayerPerceptronClassifierModel to
MultilayerPerceptronClassificationModel
To follow the naming rule of ML, change `MultilayerPerceptronClassifierModel`
to `MultilayerPerceptronClassification
Repository: spark
Updated Branches:
refs/heads/branch-1.5 fe05142f5 -> 49085b56c
[SPARK-8965] [DOCS] Add ml-guide Python Example: Estimator, Transformer, and
Param
Added ml-guide Python Example: Estimator, Transformer, and Param
/docs/_site/ml-guide.html
Author: Rosstin
Closes #8081 from R
Repository: spark
Updated Branches:
refs/heads/master 2932e25da -> 7a539ef3b
[SPARK-8965] [DOCS] Add ml-guide Python Example: Estimator, Transformer, and
Param
Added ml-guide Python Example: Estimator, Transformer, and Param
/docs/_site/ml-guide.html
Author: Rosstin
Closes #8081 from Rosst
Repository: spark
Updated Branches:
refs/heads/master 699303101 -> 2932e25da
[SPARK-9073] [ML] spark.ml Models copy() should call setParent when there is a
parent
Copied ML models must have the same parent as the original ones
Author: lewuathe
Author: Lewuathe
Closes #7447 from Lewuathe/SPARK
Repository: spark
Updated Branches:
refs/heads/branch-1.5 5592d162a -> fe05142f5
[SPARK-9073] [ML] spark.ml Models copy() should call setParent when there is a
parent
Copied ML models must have the same parent as the original ones
Author: lewuathe
Author: Lewuathe
Closes #7447 from Lewuathe/S
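The contract being enforced can be shown with a small toy example. This is not the spark.ml API, just the shape of the fix: a copied model keeps pointing at the estimator that produced the original.

```scala
// Toy classes that mimic the Estimator/Model relationship.
class ToyEstimator

class ToyModel(val uid: String) {
  private var parentEstimator: ToyEstimator = _

  def setParent(p: ToyEstimator): this.type = { parentEstimator = p; this }
  def parent: ToyEstimator = parentEstimator

  // The fix: copy() must propagate the parent to the new instance.
  def copy(): ToyModel = new ToyModel(uid).setParent(parentEstimator)
}
```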
Repository: spark
Updated Branches:
refs/heads/branch-1.5 2a600daab -> 5592d162a
[SPARK-9757] [SQL] Fixes persistence of Parquet relation with decimal column
PR #7967 enables us to save data source relations to the metastore in a
Hive-compatible format when possible. But it fails to persist Parquet
Repository: spark
Updated Branches:
refs/heads/master 84a27916a -> 699303101
[SPARK-9757] [SQL] Fixes persistence of Parquet relation with decimal column
PR #7967 enables us to save data source relations to the metastore in a
Hive-compatible format when possible. But it fails to persist Parquet rel
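A hedged reproduction sketch of the scenario (the table name is invented, and an existing `HiveContext` named `hiveContext` is assumed): persist a Parquet data source table with a decimal column through the metastore, then read it back.

```scala
// Save a Parquet-backed table whose schema has a DECIMAL column, then
// round-trip it through the Hive metastore.
val df = hiveContext.sql("SELECT CAST(3.14 AS DECIMAL(10, 2)) AS price")
df.write.format("parquet").saveAsTable("parquet_decimal_test")
hiveContext.table("parquet_decimal_test").printSchema()
```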
Repository: spark
Updated Branches:
refs/heads/branch-1.5 ae18342a5 -> 2a600daab
[SPARK-9885] [SQL] Also pass barrierPrefixes and sharedPrefixes to
IsolatedClientLoader when hiveMetastoreJars is set to maven.
https://issues.apache.org/jira/browse/SPARK-9885
cc marmbrus liancheng
Author: Yin
Repository: spark
Updated Branches:
refs/heads/master 68f995714 -> 84a27916a
[SPARK-9885] [SQL] Also pass barrierPrefixes and sharedPrefixes to
IsolatedClientLoader when hiveMetastoreJars is set to maven.
https://issues.apache.org/jira/browse/SPARK-9885
cc marmbrus liancheng
Author: Yin Hua
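For reference, a hedged sketch of the configuration combination involved; the prefix values below are only examples.

```scala
import org.apache.spark.SparkConf

// Resolve the Hive metastore jars from Maven while still handing the isolated
// client loader the shared/barrier class-name prefixes.
val conf = new SparkConf()
  .set("spark.sql.hive.metastore.jars", "maven")
  .set("spark.sql.hive.metastore.sharedPrefixes", "com.mysql.jdbc")
  .set("spark.sql.hive.metastore.barrierPrefixes", "org.apache.spark.sql.hive.thriftserver")
```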