spark git commit: [SPARK-20843][CORE] Add a config to set driver terminate timeout

2017-05-26 Thread zsxwing
Repository: spark Updated Branches: refs/heads/branch-2.1 6e6adccea -> ebd72f453 [SPARK-20843][CORE] Add a config to set driver terminate timeout ## What changes were proposed in this pull request? Add a `worker` configuration to set how long to wait before forcibly killing driver. ## How w

spark git commit: [SPARK-20843][CORE] Add a config to set driver terminate timeout

2017-05-26 Thread zsxwing
Repository: spark Updated Branches: refs/heads/branch-2.2 39f76657e -> f2408bdd7 [SPARK-20843][CORE] Add a config to set driver terminate timeout ## What changes were proposed in this pull request? Add a `worker` configuration to set how long to wait before forcibly killing driver. ## How w

spark git commit: [SPARK-20843][CORE] Add a config to set driver terminate timeout

2017-05-26 Thread zsxwing
Repository: spark Updated Branches: refs/heads/master a0f8a072e -> 6c1dbd6fc [SPARK-20843][CORE] Add a config to set driver terminate timeout ## What changes were proposed in this pull request? Add a `worker` configuration to set how long to wait before forcibly killing driver. ## How was t
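The pattern the commit describes (ask the driver to exit, then force-kill after a configurable grace period) can be sketched outside of Spark with a plain subprocess. The config key name in the comment is an assumption; the actual key added by SPARK-20843 is not shown in the truncated summary.

```python
import subprocess
import sys

# Hypothetical grace period mirroring a worker config such as
# spark.worker.driverTerminateTimeoutMs (the exact key name is an assumption).
TERMINATE_TIMEOUT_S = 2.0

def stop_driver(proc: subprocess.Popen) -> int:
    """Ask the driver process to exit; force-kill it once the timeout elapses."""
    proc.terminate()  # polite SIGTERM first
    try:
        return proc.wait(timeout=TERMINATE_TIMEOUT_S)
    except subprocess.TimeoutExpired:
        proc.kill()  # forcible SIGKILL after the grace period
        return proc.wait()

if __name__ == "__main__":
    # A stand-in "driver" that simply sleeps until signalled.
    child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
    print("driver exit code:", stop_driver(child))
```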

spark git commit: [SPARK-20748][SQL] Add built-in SQL function CH[A]R.

2017-05-26 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 1d62f8aca -> a0f8a072e [SPARK-20748][SQL] Add built-in SQL function CH[A]R. ## What changes were proposed in this pull request? Add built-in SQL function `CH[A]R`: For `CHR(bigint|double n)`, returns the ASCII character having the binary e
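The described `CHR` semantics can be sketched in plain Python. The edge-case behavior (empty string for negative input, wrap-around modulo 256 for large input) is an assumption about details cut off in the truncated summary.

```python
def chr_sql(n) -> str:
    """Sketch of SQL CHR(bigint|double n): the ASCII character whose code is n.
    Negative inputs yield "" and values wrap modulo 256 (assumed edge cases)."""
    n = int(n)  # CHR accepts bigint or double; truncate toward zero
    if n < 0:
        return ""
    return chr(n % 256)

print(chr_sql(65))  # "A"
```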

spark git commit: [SPARK-19659][CORE][FOLLOW-UP] Fetch big blocks to disk when shuffle-read

2017-05-26 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.2 fc799d730 -> 39f76657e [SPARK-19659][CORE][FOLLOW-UP] Fetch big blocks to disk when shuffle-read ## What changes were proposed in this pull request? This PR includes some minor improvement for the comments and tests in https://github.

spark git commit: [SPARK-19659][CORE][FOLLOW-UP] Fetch big blocks to disk when shuffle-read

2017-05-26 Thread wenchen
Repository: spark Updated Branches: refs/heads/master 4af378129 -> 1d62f8aca [SPARK-19659][CORE][FOLLOW-UP] Fetch big blocks to disk when shuffle-read ## What changes were proposed in this pull request? This PR includes some minor improvement for the comments and tests in https://github.com/

spark git commit: [SPARK-10643][CORE] Make spark-submit download remote files to local in client mode

2017-05-26 Thread lixiao
Repository: spark Updated Branches: refs/heads/branch-2.2 30922dec8 -> fc799d730 [SPARK-10643][CORE] Make spark-submit download remote files to local in client mode ## What changes were proposed in this pull request? This PR makes spark-submit script download remote files to local file syste

spark git commit: [SPARK-10643][CORE] Make spark-submit download remote files to local in client mode

2017-05-26 Thread lixiao
Repository: spark Updated Branches: refs/heads/master c491e2ed9 -> 4af378129 [SPARK-10643][CORE] Make spark-submit download remote files to local in client mode ## What changes were proposed in this pull request? This PR makes spark-submit script download remote files to local file system f
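The core mechanic here (fetch a remotely-referenced resource to the local file system before launching in client mode) can be sketched with the standard library; this is not the spark-submit implementation, just the pattern it describes.

```python
import os
import tempfile
import urllib.request
from urllib.parse import urlparse

def download_to_local(uri: str, target_dir: str) -> str:
    """Sketch: fetch a resource referenced by URI into a local directory
    and return the local path, so later stages only see local files."""
    name = os.path.basename(urlparse(uri).path) or "download"
    local_path = os.path.join(target_dir, name)
    urllib.request.urlretrieve(uri, local_path)
    return local_path

if __name__ == "__main__":
    src_dir, dst_dir = tempfile.mkdtemp(), tempfile.mkdtemp()
    src = os.path.join(src_dir, "job.jar")
    with open(src, "wb") as f:
        f.write(b"payload")
    print(download_to_local("file://" + src, dst_dir))
```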

spark git commit: [SPARK-20873][SQL] Improve the error message for unsupported Column Type

2017-05-26 Thread lixiao
Repository: spark Updated Branches: refs/heads/master ae33abf71 -> c491e2ed9 [SPARK-20873][SQL] Improve the error message for unsupported Column Type ## What changes were proposed in this pull request? Upon encountering an invalid column type, the column type object is printed rather than the
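The improvement the commit aims at, naming the offending type in the error message instead of dumping a raw type object, can be sketched like this. The supported-type set and message wording are illustrative assumptions, not Spark's actual lists.

```python
def check_column_type(dtype: object) -> None:
    """Sketch: reject an unsupported column type with an error that names
    the type, rather than printing the opaque type object itself."""
    supported = {"int", "long", "double", "string"}  # hypothetical set
    name = getattr(dtype, "__name__", str(dtype))
    if name not in supported:
        raise TypeError(
            f"Unsupported column type: {name!r}; "
            f"supported types are {sorted(supported)}"
        )
```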

spark git commit: [SPARK-20694][DOCS][SQL] Document DataFrameWriter partitionBy, bucketBy and sortBy in SQL guide

2017-05-26 Thread lixiao
Repository: spark Updated Branches: refs/heads/branch-2.2 2b59ed4f1 -> 30922dec8 [SPARK-20694][DOCS][SQL] Document DataFrameWriter partitionBy, bucketBy and sortBy in SQL guide ## What changes were proposed in this pull request? - Add Scala, Python and Java examples for `partitionBy`, `sortB

spark git commit: [SPARK-20694][DOCS][SQL] Document DataFrameWriter partitionBy, bucketBy and sortBy in SQL guide

2017-05-26 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 473d7552a -> ae33abf71 [SPARK-20694][DOCS][SQL] Document DataFrameWriter partitionBy, bucketBy and sortBy in SQL guide ## What changes were proposed in this pull request? - Add Scala, Python and Java examples for `partitionBy`, `sortBy` a
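For readers without a Spark session at hand, the on-disk effect of `partitionBy` (one `col=value` subdirectory per distinct partition value) can be imitated with a small sketch; the file naming and CSV format here are stand-ins, not what Spark actually writes.

```python
import csv
import os
from collections import defaultdict

def write_partitioned(rows, partition_col, out_dir):
    """Sketch of DataFrameWriter.partitionBy's layout: one subdirectory
    per distinct partition value, named col=value."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[partition_col]].append(row)
    for value, group in groups.items():
        part_dir = os.path.join(out_dir, f"{partition_col}={value}")
        os.makedirs(part_dir, exist_ok=True)
        # The partition column is encoded in the path, so drop it from rows.
        cols = [c for c in group[0] if c != partition_col]
        with open(os.path.join(part_dir, "part-00000.csv"), "w", newline="") as f:
            w = csv.DictWriter(f, fieldnames=cols)
            w.writeheader()
            w.writerows({c: r[c] for c in cols} for r in group)
```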

spark git commit: [SPARK-20014] Optimize mergeSpillsWithFileStream method

2017-05-26 Thread zsxwing
Repository: spark Updated Branches: refs/heads/master d935e0a9d -> 473d7552a [SPARK-20014] Optimize mergeSpillsWithFileStream method ## What changes were proposed in this pull request? When the individual partition size in a spill is small, mergeSpillsWithTransferTo method does many small di
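The problem the commit targets is many small disk I/Os when per-partition spill segments are tiny; the alternative it favors, merging through large buffered stream copies, can be sketched as follows (the buffer size is an arbitrary choice, and Java's FileChannel/stream APIs are not reproduced here).

```python
import shutil

def merge_spills(spill_paths, out_path, buffer_size=1 << 20):
    """Sketch: concatenate spill files with large buffered reads instead of
    many small per-partition transfer calls."""
    with open(out_path, "wb") as out:
        for path in spill_paths:
            with open(path, "rb") as spill:
                shutil.copyfileobj(spill, out, length=buffer_size)
```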

spark git commit: [SPARK-20844] Remove experimental from Structured Streaming APIs

2017-05-26 Thread zsxwing
Repository: spark Updated Branches: refs/heads/branch-2.2 92837aeb4 -> 2b59ed4f1 [SPARK-20844] Remove experimental from Structured Streaming APIs Now that Structured Streaming has been out for several Spark releases and has large production use cases, the `Experimental` label is no longer appr

spark git commit: [SPARK-20844] Remove experimental from Structured Streaming APIs

2017-05-26 Thread zsxwing
Repository: spark Updated Branches: refs/heads/master 0fd84b05d -> d935e0a9d [SPARK-20844] Remove experimental from Structured Streaming APIs Now that Structured Streaming has been out for several Spark releases and has large production use cases, the `Experimental` label is no longer appropri

spark git commit: [SPARK-19372][SQL] Fix throwing a Java exception at df.filter() due to 64KB bytecode size limit

2017-05-26 Thread zsxwing
Repository: spark Updated Branches: refs/heads/branch-2.2 f99456b5f -> 92837aeb4 [SPARK-19372][SQL] Fix throwing a Java exception at df.filter() due to 64KB bytecode size limit ## What changes were proposed in this pull request? When an expression for `df.filter()` has many nodes (e.g. 400),

spark git commit: [SPARK-20393][WEBUI] Strengthen Spark to prevent XSS vulnerabilities

2017-05-26 Thread srowen
Repository: spark Updated Branches: refs/heads/branch-2.2 fafe28327 -> f99456b5f [SPARK-20393][WEBUI] Strengthen Spark to prevent XSS vulnerabilities ## What changes were proposed in this pull request? Add stripXSS and stripXSSMap to Spark Core's UIUtils. Calling these functions at any poi
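The idea behind stripXSS/stripXSSMap, sanitizing request parameters before they are echoed into a page, can be sketched with standard HTML escaping. Whether the real stripXSS strips characters or escapes them is not visible in the truncated summary, so escaping here is an assumption.

```python
import html

def strip_xss(value):
    """Sketch: neutralize injected markup in a request parameter before it
    reaches a rendered page. (Escaping rather than stripping is an assumption
    about the real stripXSS.)"""
    if value is None:
        return None
    return html.escape(value, quote=True)

def strip_xss_map(params):
    """Apply the same sanitization across a whole parameter map."""
    return {k: strip_xss(v) for k, v in params.items()}
```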

spark-website git commit: Add message directing security issues to priv...@spark.apache.org

2017-05-26 Thread srowen
Repository: spark-website Updated Branches: refs/heads/asf-site 5ed41c8d8 -> 80f50ecca Add message directing security issues to priv...@spark.apache.org Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/80f50

spark git commit: [SPARK-20835][CORE] It should exit directly when the --total-executor-cores parameter is set to less than 0 when submitting an application

2017-05-26 Thread srowen
Repository: spark Updated Branches: refs/heads/master 629f38e17 -> 0fd84b05d [SPARK-20835][CORE] It should exit directly when the --total-executor-cores parameter is set to less than 0 when submitting an application ## What changes were proposed in this pull request? In my test, the submitted app
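The fail-fast validation described, exiting at submit time when the flag value is negative, can be sketched as a small argument check; the exact message and whether zero is also rejected are assumptions.

```python
import sys

def validate_total_executor_cores(value: str) -> int:
    """Sketch: fail fast at submit time rather than letting a negative
    core count reach the cluster. (Rejecting only values < 0 follows the
    commit title; treatment of 0 is not shown there.)"""
    cores = int(value)
    if cores < 0:
        print(f"Error: --total-executor-cores must be >= 0, got {cores}",
              file=sys.stderr)
        sys.exit(1)
    return cores
```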

spark git commit: [SPARK-20887][CORE] support alternative keys in ConfigBuilder

2017-05-26 Thread wenchen
Repository: spark Updated Branches: refs/heads/master b6f2017a6 -> 629f38e17 [SPARK-20887][CORE] support alternative keys in ConfigBuilder ## What changes were proposed in this pull request? `ConfigBuilder` builds `ConfigEntry`, which can only read a value with one key; if we want to change the c
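The fallback-key mechanism, letting a config entry answer to older alternative names while a key is being renamed, can be sketched as a simple ordered lookup; this is the pattern, not ConfigBuilder's actual API.

```python
def read_config(conf: dict, key: str, alternatives=(), default=None):
    """Sketch: resolve a config value by trying the primary key first,
    then each alternative (e.g. deprecated) key, then a default."""
    for k in (key, *alternatives):
        if k in conf:
            return conf[k]
    return default
```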

spark git commit: [MINOR] document edge case of updateFunc usage

2017-05-26 Thread srowen
Repository: spark Updated Branches: refs/heads/master d9ad78908 -> b6f2017a6 [MINOR] document edge case of updateFunc usage ## What changes were proposed in this pull request? Include documentation of the fact that the updateFunc is sometimes called with no new values. This is documented in
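The edge case being documented, that `updateStateByKey`'s updateFunc can be invoked with an empty list of new values for a key that received no records in the current batch, is easy to show with a running-count update function written to tolerate it.

```python
def update_func(new_values, state):
    """Running-count updateFunc for updateStateByKey. new_values may be an
    empty list when a key got no records this batch, so the function must
    not assume there is at least one value."""
    if not new_values and state is None:
        return None  # nothing to track for this key
    return (state or 0) + sum(new_values)
```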

spark git commit: [SPARK-20868][CORE] UnsafeShuffleWriter should verify the position after FileChannel.transferTo

2017-05-26 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.0 ef0ebdde0 -> 9846a3c4b [SPARK-20868][CORE] UnsafeShuffleWriter should verify the position after FileChannel.transferTo ## What changes were proposed in this pull request? Long time ago we fixed a [bug](https://issues.apache.org/jira/

spark git commit: [SPARK-20868][CORE] UnsafeShuffleWriter should verify the position after FileChannel.transferTo

2017-05-26 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.1 4f6fccf15 -> 6e6adccea [SPARK-20868][CORE] UnsafeShuffleWriter should verify the position after FileChannel.transferTo ## What changes were proposed in this pull request? Long time ago we fixed a [bug](https://issues.apache.org/jira/

spark git commit: [SPARK-20868][CORE] UnsafeShuffleWriter should verify the position after FileChannel.transferTo

2017-05-26 Thread wenchen
Repository: spark Updated Branches: refs/heads/master a97c49704 -> d9ad78908 [SPARK-20868][CORE] UnsafeShuffleWriter should verify the position after FileChannel.transferTo ## What changes were proposed in this pull request? Long time ago we fixed a [bug](https://issues.apache.org/jira/brow
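The defensive pattern the fix adds, checking after each transfer that the channel position actually advanced by the number of bytes reported as written, can be sketched with plain Python file objects standing in for Java's FileChannel.

```python
def copy_verified(src_path, dst_path, chunk=1 << 16):
    """Sketch: copy a file in chunks and, after each write, verify the
    source position advanced by exactly the bytes transferred, raising
    instead of silently producing a corrupt output."""
    copied = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            before = src.tell()
            data = src.read(chunk)
            if not data:
                break
            written = dst.write(data)
            copied += written
            if src.tell() != before + written:
                raise IOError("position did not advance as expected")
    return copied
```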

spark git commit: [SPARK-20868][CORE] UnsafeShuffleWriter should verify the position after FileChannel.transferTo

2017-05-26 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.2 289dd170c -> fafe28327 [SPARK-20868][CORE] UnsafeShuffleWriter should verify the position after FileChannel.transferTo ## What changes were proposed in this pull request? Long time ago we fixed a [bug](https://issues.apache.org/jira/