spark git commit: [SPARK-24288][SQL] Add a JDBC Option to enable preventing predicate pushdown

2018-07-26 Thread lixiao
Repository: spark Updated Branches: refs/heads/master e6e9031d7 -> 21fcac164 [SPARK-24288][SQL] Add a JDBC Option to enable preventing predicate pushdown ## What changes were proposed in this pull request? Add a JDBC Option "pushDownPredicate" (default `true`) to allow/disallow predicate pus
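The effect of such an option can be sketched in plain Python (this is a conceptual illustration, not Spark's JDBC source): with pushdown enabled, the filter travels into the SQL sent to the database; with it disabled, every row is fetched and the predicate must be applied client-side.

```python
# Conceptual sketch (not Spark's implementation) of predicate pushdown
# against a JDBC source. Table and column names are illustrative.

def build_query(table, predicate, push_down_predicate=True):
    """Return (sql_sent_to_database, predicate_left_for_client_side)."""
    if push_down_predicate and predicate:
        # Filter is pushed into the remote query; nothing to filter locally.
        return f"SELECT * FROM {table} WHERE {predicate}", None
    # Filter stays on the client; the database returns all rows.
    return f"SELECT * FROM {table}", predicate

sql, local = build_query("people", "age > 30")
# pushdown on: the WHERE clause is part of the remote query
sql2, local2 = build_query("people", "age > 30", push_down_predicate=False)
# pushdown off: full scan remotely, predicate applied locally
```

In Spark itself the commit describes this as a reader option named `pushDownPredicate` with a default of `true`; how it interacts with other JDBC options is not shown in the truncated message above.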

spark git commit: [SPARK-24865] Remove AnalysisBarrier

2018-07-26 Thread wenchen
Repository: spark Updated Branches: refs/heads/master f9c9d80e4 -> e6e9031d7 [SPARK-24865] Remove AnalysisBarrier ## What changes were proposed in this pull request? AnalysisBarrier was introduced in SPARK-20392 to improve analysis speed (don't re-analyze nodes that have already been analyzed
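The idea behind removing a barrier node can be sketched abstractly (hypothetical names, not Spark's classes): instead of wrapping already-analyzed subtrees in a special wrapper node that every rule must look through, mark the node itself so the analyzer skips it.

```python
# Conceptual sketch: avoid re-analysis via a flag on the plan node rather
# than a wrapper "barrier" node. Class and function names are illustrative.

class Plan:
    def __init__(self, children=()):
        self.children = list(children)
        self.analyzed = False  # set once analysis has run on this subtree

calls = 0  # counts how many nodes the analyzer actually visits

def analyze(plan):
    global calls
    if plan.analyzed:          # skip subtrees that are already analyzed
        return plan
    calls += 1
    for child in plan.children:
        analyze(child)
    plan.analyzed = True
    return plan

leaf = analyze(Plan())          # analyzed once: calls == 1
root = Plan([leaf])
analyze(root)                   # leaf is skipped on this second pass
```

The flag-based approach means no extra node type exists for optimizer and analyzer rules to accidentally mishandle, which is the class of problem the commit message alludes to.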

spark git commit: [SPARK-24929][INFRA] Make merge script don't swallow KeyboardInterrupt

2018-07-26 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master dc3713cca -> f9c9d80e4 [SPARK-24929][INFRA] Make merge script don't swallow KeyboardInterrupt ## What changes were proposed in this pull request? If you want to get out of the loop that assigns the JIRA user with Command+C (KeyboardInterrupt), I
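The pattern at issue can be shown with a small stand-in loop (hypothetical code, not the actual merge script): catching `Exception` collects ordinary failures while still letting Ctrl+C escape, because `KeyboardInterrupt` subclasses `BaseException`, not `Exception`; a bare `except:` would swallow it.

```python
# Sketch of not swallowing KeyboardInterrupt in a processing loop.
# Names are illustrative, not from the real merge script.

def process_all(items, handle):
    failures = []
    for item in items:
        try:
            handle(item)
        except Exception as exc:   # KeyboardInterrupt is NOT caught here
            failures.append((item, exc))
    return failures

def handler(item):
    if item == "bad":
        raise ValueError("cannot assign")
    if item == "ctrl-c":
        raise KeyboardInterrupt   # simulates the user pressing Command+C

failures = process_all(["ok", "bad"], handler)   # ValueError is collected
try:
    process_all(["ctrl-c"], handler)             # Ctrl+C propagates out
    escaped = False
except KeyboardInterrupt:
    escaped = True
```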

spark git commit: [SPARK-24829][STS] In Spark Thrift Server, CAST AS FLOAT inconsistent with spark-shell or spark-sql

2018-07-26 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master 094aa5971 -> dc3713cca [SPARK-24829][STS] In Spark Thrift Server, CAST AS FLOAT inconsistent with spark-shell or spark-sql ## What changes were proposed in this pull request? For SELECT CAST('4.56' AS FLOAT), the result is 4.55942779541
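The root cause of this kind of inconsistency is that 4.56 is not exactly representable as a 32-bit IEEE-754 FLOAT; when the float32 value is widened back to a double and printed at full precision, the extra digits appear. This can be reproduced with nothing but the standard `struct` module:

```python
# Demonstrates why CAST('4.56' AS FLOAT) prints "extra" digits: round-trip
# a Python float (float64) through a 32-bit float and compare.
import struct

def to_float32(x):
    """Round a float64 to the nearest IEEE-754 float32, widened back."""
    return struct.unpack("f", struct.pack("f", x))[0]

v = to_float32(4.56)
assert v != 4.56             # not exactly representable in 32 bits
assert abs(v - 4.56) < 1e-6  # but extremely close to it
```

Which of the two renderings (short "4.56" vs the full-precision expansion) is correct is a formatting decision; the commit's point is that the Thrift Server should agree with spark-shell and spark-sql.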

svn commit: r28387 - in /dev/spark/2.4.0-SNAPSHOT-2018_07_26_20_01-fa09d91-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-07-26 Thread pwendell
Author: pwendell Date: Fri Jul 27 03:16:20 2018 New Revision: 28387 Log: Apache Spark 2.4.0-SNAPSHOT-2018_07_26_20_01-fa09d91 docs [This commit notification would consist of 1470 parts, which exceeds the limit of 50, so it was shortened to this summary.]

spark git commit: [SPARK-24801][CORE] Avoid memory waste by empty byte[] arrays in SaslEncryption$EncryptedMessage

2018-07-26 Thread irashid
Repository: spark Updated Branches: refs/heads/master fa09d9192 -> 094aa5971 [SPARK-24801][CORE] Avoid memory waste by empty byte[] arrays in SaslEncryption$EncryptedMessage ## What changes were proposed in this pull request? Initialize SaslEncryption$EncryptedMessage.byteChannel lazily, so
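The fix pattern, lazy initialization, is easy to sketch in a few lines (hypothetical class, not the Scala original): allocate the backing buffer only on first access, so instances that never need it never pay for the allocation.

```python
# Sketch of lazy initialization to avoid wasted allocations.
# Class name mirrors the commit's subject; the body is illustrative.

class EncryptedMessage:
    def __init__(self, payload):
        self.payload = payload
        self._byte_channel = None          # NOT allocated eagerly

    @property
    def byte_channel(self):
        if self._byte_channel is None:     # allocate on first use only
            self._byte_channel = bytearray(32 * 1024)
        return self._byte_channel

msg = EncryptedMessage(b"data")
# No buffer exists yet; it appears only when byte_channel is first touched.
```

The memory waste in the original report came from many such objects eagerly holding arrays they never used; deferring the allocation makes the common no-use case free.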

spark git commit: [SPARK-24919][BUILD] New linter rule for sparkContext.hadoopConfiguration

2018-07-26 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 2c8274568 -> fa09d9192 [SPARK-24919][BUILD] New linter rule for sparkContext.hadoopConfiguration ## What changes were proposed in this pull request? In most cases, we should use `spark.sessionState.newHadoopConf()` instead of `sparkContex
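What such a linter rule checks can be sketched with a simple pattern scan (the real rule lives in Spark's Scala style configuration; this Python stand-in is only illustrative): flag direct uses of `sparkContext.hadoopConfiguration` so authors reach for `spark.sessionState.newHadoopConf()` instead.

```python
# Minimal sketch of a source linter rule: report line numbers that use the
# discouraged sparkContext.hadoopConfiguration accessor.
import re

PATTERN = re.compile(r"sparkContext\.hadoopConfiguration")

def lint(source):
    """Return the 1-based line numbers that violate the rule."""
    return [i for i, line in enumerate(source.splitlines(), 1)
            if PATTERN.search(line)]

good = "val conf = spark.sessionState.newHadoopConf()"
bad = "val conf = sparkContext.hadoopConfiguration"
# lint(good) reports nothing; lint(bad) reports line 1
```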

svn commit: r28384 - in /dev/spark/2.4.0-SNAPSHOT-2018_07_26_16_01-2c82745-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-07-26 Thread pwendell
Author: pwendell Date: Thu Jul 26 23:16:00 2018 New Revision: 28384 Log: Apache Spark 2.4.0-SNAPSHOT-2018_07_26_16_01-2c82745 docs [This commit notification would consist of 1470 parts, which exceeds the limit of 50, so it was shortened to this summary.]

svn commit: r28381 - in /dev/spark/2.4.0-SNAPSHOT-2018_07_26_12_01-5ed7660-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-07-26 Thread pwendell
Author: pwendell Date: Thu Jul 26 19:16:07 2018 New Revision: 28381 Log: Apache Spark 2.4.0-SNAPSHOT-2018_07_26_12_01-5ed7660 docs [This commit notification would consist of 1469 parts, which exceeds the limit of 50, so it was shortened to this summary.]

spark git commit: [SPARK-24307][CORE] Add conf to revert to old code.

2018-07-26 Thread lixiao
Repository: spark Updated Branches: refs/heads/master e3486e1b9 -> 2c8274568 [SPARK-24307][CORE] Add conf to revert to old code. In case there are any issues in converting FileSegmentManagedBuffer to ChunkedByteBuffer, add a conf to go back to the old code path. Followup to 7e847646d1f377f46dc315
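This is the classic "escape hatch" pattern: a config key selects between the new code path and the old one, so a regression in the new path can be worked around without a rebuild. A generic sketch (config key and function names are illustrative, not Spark's):

```python
# Sketch of a config-driven fallback between two code paths.

def old_transfer(data):
    return ("old", data)      # previous, battle-tested path

def new_transfer(data):
    return ("new", data)      # new default path

def transfer(data, conf):
    # The escape hatch: one config key flips back to the old behavior.
    if conf.get("useOldTransferPath", "false") == "true":
        return old_transfer(data)
    return new_transfer(data)

# Default: new path.  With the flag set: old path, no code change needed.
```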

spark git commit: [SPARK-24795][CORE] Implement barrier execution mode

2018-07-26 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 5ed7660d1 -> e3486e1b9 [SPARK-24795][CORE] Implement barrier execution mode ## What changes were proposed in this pull request? Propose new APIs and modify job/task scheduling to support barrier execution mode, which requires all tasks in
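The core idea of barrier execution can be shown with the standard library's `threading.Barrier` (Spark's actual mechanism is scheduler support across a cluster, not Python threads): no task proceeds past the barrier until every task in the stage has reached it.

```python
# Illustration of barrier semantics with threading.Barrier: all N tasks
# must arrive at the barrier before any of them continues.
import threading

N = 4
barrier = threading.Barrier(N)
order = []   # append is atomic under the GIL, safe across these threads

def task(i):
    order.append(("start", i))
    barrier.wait()                 # block until all N tasks arrive
    order.append(("after", i))

threads = [threading.Thread(target=task, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

starts = [k for k, (tag, _) in enumerate(order) if tag == "start"]
afters = [k for k, (tag, _) in enumerate(order) if tag == "after"]
# Every "start" event precedes every "after" event.
```

This all-or-nothing stage semantics is what distributed training frameworks need: gang-scheduled tasks that synchronize at known points, which ordinary independent Spark tasks cannot guarantee.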

spark git commit: [SPARK-24802][SQL][FOLLOW-UP] Add a new config for Optimization Rule Exclusion

2018-07-26 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 58353d7f4 -> 5ed7660d1 [SPARK-24802][SQL][FOLLOW-UP] Add a new config for Optimization Rule Exclusion ## What changes were proposed in this pull request? This is an extension to the original PR, in which rule exclusion did not work for cl
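Rule exclusion driven by a config string can be sketched as a simple filter (rule names here are illustrative; in Spark the relevant config takes a comma-separated list of rule names):

```python
# Sketch of excluding optimizer rules named in a comma-separated config value.

def active_rules(all_rules, excluded_conf):
    """Drop any rule whose name appears in the comma-separated config."""
    excluded = {r.strip() for r in excluded_conf.split(",") if r.strip()}
    return [r for r in all_rules if r not in excluded]

rules = ["PushDownPredicate", "ConstantFolding", "CollapseProject"]
# Empty config: everything runs.  Named rules are skipped.
```

The follow-up nature of the commit (exclusion "did not work for" some class of rules, per the truncated message) suggests the original filter missed a category of rules; the sketch above only shows the basic mechanism.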

svn commit: r28377 - in /dev/spark/2.4.0-SNAPSHOT-2018_07_26_04_02-58353d7-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-07-26 Thread pwendell
Author: pwendell Date: Thu Jul 26 11:18:10 2018 New Revision: 28377 Log: Apache Spark 2.4.0-SNAPSHOT-2018_07_26_04_02-58353d7 docs [This commit notification would consist of 1469 parts, which exceeds the limit of 50, so it was shortened to this summary.]

spark git commit: [SPARK-24924][SQL] Add mapping for built-in Avro data source

2018-07-26 Thread gurwls223
Repository: spark Updated Branches: refs/heads/master c9b233d41 -> 58353d7f4 [SPARK-24924][SQL] Add mapping for built-in Avro data source ## What changes were proposed in this pull request? This PR aims at the following. 1. Like the `com.databricks.spark.csv` mapping, we had better map `com.dat
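The mapping described is a simple name-aliasing table: a legacy third-party package name resolves to the built-in implementation, mirroring the existing CSV mapping. A sketch (the alias table here is illustrative, not Spark's actual registry):

```python
# Sketch of data-source name aliasing: legacy package names resolve to
# built-in short names, so old user code keeps working.

ALIASES = {
    "com.databricks.spark.csv": "csv",
    "com.databricks.spark.avro": "avro",   # the mapping this commit adds
}

def resolve_source(name):
    """Map a legacy data-source name to its built-in short name, if any."""
    return ALIASES.get(name, name)

# Legacy Avro package name now resolves to the built-in source; unknown
# names pass through unchanged.
```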

spark git commit: [SPARK-24878][SQL] Fix reverse function for array type of primitive type containing null.

2018-07-26 Thread wenchen
Repository: spark Updated Branches: refs/heads/master d2e7deb59 -> c9b233d41 [SPARK-24878][SQL] Fix reverse function for array type of primitive type containing null. ## What changes were proposed in this pull request? If we use the `reverse` function on an array of a primitive type containing
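The correct behavior being fixed can be stated with a plain-Python stand-in for the SQL function: reversing an array must carry null entries to their mirrored positions rather than dropping or corrupting them.

```python
# Reference behavior for reversing an array that may contain nulls (None):
# nulls are preserved and end up in the mirrored positions.

def reverse_array(arr):
    """None input stays None; otherwise return the reversed list."""
    return None if arr is None else list(reversed(arr))

# [1, None, 3] -> [3, None, 1]: the null moves with its position.
```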