svn commit: r28362 - in /dev/spark/2.4.0-SNAPSHOT-2018_07_25_20_01-d2e7deb-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-07-25 Thread pwendell
Author: pwendell Date: Thu Jul 26 03:16:39 2018 New Revision: 28362 Log: Apache Spark 2.4.0-SNAPSHOT-2018_07_25_20_01-d2e7deb docs [This commit notification would consist of 1469 parts, which exceeds the limit of 50, so it was shortened to a summary.] ---

svn commit: r28360 - in /dev/spark/2.3.3-SNAPSHOT-2018_07_25_18_01-fa552c3-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-07-25 Thread pwendell
Author: pwendell Date: Thu Jul 26 01:16:14 2018 New Revision: 28360 Log: Apache Spark 2.3.3-SNAPSHOT-2018_07_25_18_01-fa552c3 docs [This commit notification would consist of 1443 parts, which exceeds the limit of 50, so it was shortened to a summary.] ---

spark git commit: [SPARK-24867][SQL] Add AnalysisBarrier to DataFrameWriter

2018-07-25 Thread lixiao
Repository: spark Updated Branches: refs/heads/branch-2.3 740606eb8 -> fa552c3c1 [SPARK-24867][SQL] Add AnalysisBarrier to DataFrameWriter ```Scala val udf1 = udf({(x: Int, y: Int) => x + y}) val df = spark.range(0, 3).toDF("a") .withColumn("b", udf1($"a", udf1($"a", lit(10

spark git commit: [SPARK-24867][SQL] Add AnalysisBarrier to DataFrameWriter

2018-07-25 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 17f469bc8 -> d2e7deb59 [SPARK-24867][SQL] Add AnalysisBarrier to DataFrameWriter ## What changes were proposed in this pull request? ```Scala val udf1 = udf({(x: Int, y: Int) => x + y}) val df = spark.range(0, 3).toDF("a")
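The code preview above is cut off by the archive. Below is a minimal, hypothetical sketch of the kind of reproduction the PR describes, not the PR's exact snippet: a nested UDF column whose already-analyzed plan used to be re-analyzed on the DataFrameWriter path before the AnalysisBarrier was added. Object name and output path are illustrative.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{lit, udf}

object AnalysisBarrierSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("analysis-barrier-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // A nested UDF column, as in the (truncated) snippet above.
    val udf1 = udf { (x: Int, y: Int) => x + y }
    val df = spark.range(0, 3).toDF("a")
      .withColumn("b", udf1($"a", udf1($"a", lit(10))))

    // Writing the DataFrame goes through DataFrameWriter which, per the PR title,
    // now wraps the already-analyzed plan in an AnalysisBarrier so it is not
    // analyzed a second time.
    df.write.mode("overwrite").parquet("/tmp/analysis-barrier-sketch")

    spark.stop()
  }
}
```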

svn commit: r28354 - in /dev/spark/2.4.0-SNAPSHOT-2018_07_25_16_02-17f469b-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-07-25 Thread pwendell
Author: pwendell Date: Wed Jul 25 23:16:20 2018 New Revision: 28354 Log: Apache Spark 2.4.0-SNAPSHOT-2018_07_25_16_02-17f469b docs [This commit notification would consist of 1469 parts, which exceeds the limit of 50, so it was shortened to a summary.] ---

spark git commit: [SPARK-24860][SQL] Support setting of partitionOverWriteMode in output options for writing DataFrame

2018-07-25 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 0c83f718e -> 17f469bc8 [SPARK-24860][SQL] Support setting of partitionOverWriteMode in output options for writing DataFrame ## What changes were proposed in this pull request? Besides spark setting spark.sql.sources.partitionOverwriteMode
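As a hedged sketch of the behavior described above: the per-write `partitionOverwriteMode` option on DataFrameWriter applies to that write only, in addition to the session-wide `spark.sql.sources.partitionOverwriteMode` conf. The path and sample data below are illustrative.

```scala
import org.apache.spark.sql.SparkSession

object PartitionOverwriteModeSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("partition-overwrite-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val df = Seq((1, "2018-07-25"), (2, "2018-07-26")).toDF("id", "dt")

    df.write
      .mode("overwrite")
      .option("partitionOverwriteMode", "dynamic") // overrides the session conf for this write only
      .partitionBy("dt")
      .parquet("/tmp/partition-overwrite-sketch")

    spark.stop()
  }
}
```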

svn commit: r28352 - in /dev/spark/2.4.0-SNAPSHOT-2018_07_25_12_01-2f77616-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-07-25 Thread pwendell
Author: pwendell Date: Wed Jul 25 19:16:39 2018 New Revision: 28352 Log: Apache Spark 2.4.0-SNAPSHOT-2018_07_25_12_01-2f77616 docs [This commit notification would consist of 1469 parts, which exceeds the limit of 50, so it was shortened to a summary.] ---

spark git commit: [SPARK-23146][K8S][TESTS] Enable client mode integration test.

2018-07-25 Thread mcheah
Repository: spark Updated Branches: refs/heads/master 2f77616e1 -> 0c83f718e [SPARK-23146][K8S][TESTS] Enable client mode integration test. ## What changes were proposed in this pull request? Enable client mode integration test after merging from master. ## How was this patch tested? Check

spark git commit: [SPARK-24849][SPARK-24911][SQL] Converting a value of StructType to a DDL string

2018-07-25 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 571a6f057 -> 2f77616e1 [SPARK-24849][SPARK-24911][SQL] Converting a value of StructType to a DDL string ## What changes were proposed in this pull request? In the PR, I propose to extend the `StructType`/`StructField` classes by new metho
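A small, hedged sketch of the feature named in the title, assuming the new method ends up as `StructType.toDDL`, mirroring the existing `StructType.fromDDL` parser; the exact output format shown in the comment is approximate.

```scala
import org.apache.spark.sql.types._

object StructTypeToDDLSketch {
  def main(args: Array[String]): Unit = {
    val schema = StructType(Seq(
      StructField("id", LongType),
      StructField("name", StringType),
      StructField("scores", ArrayType(DoubleType))
    ))

    // Roughly: `id` BIGINT,`name` STRING,`scores` ARRAY<DOUBLE>
    println(schema.toDDL)

    // The DDL string can be parsed back into an equivalent StructType.
    val parsed = StructType.fromDDL(schema.toDDL)
    assert(parsed.fieldNames.sameElements(schema.fieldNames))
  }
}
```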

spark git commit: [SPARK-23146][K8S] Support client mode.

2018-07-25 Thread mcheah
Repository: spark Updated Branches: refs/heads/master c44eb561e -> 571a6f057 [SPARK-23146][K8S] Support client mode. ## What changes were proposed in this pull request? Support client mode for the Kubernetes scheduler. Client mode works more or less identically to cluster mode. However, in c
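A hedged sketch of what client mode against a Kubernetes cluster looks like from application code, under the assumption that the driver runs in the submitting JVM and executors are requested as pods; the master URL, container image, and namespace are placeholders, not values from this commit.

```scala
import org.apache.spark.sql.SparkSession

object K8sClientModeSketch {
  def main(args: Array[String]): Unit = {
    // Building the session directly runs the driver in this JVM (client mode);
    // executor pods are requested from the Kubernetes API server and must be
    // able to reach this driver over the network.
    val spark = SparkSession.builder()
      .appName("k8s-client-mode-sketch")
      .master("k8s://https://kubernetes.example.com:6443")                        // placeholder API server URL
      .config("spark.kubernetes.container.image", "example/spark:2.4.0-SNAPSHOT") // placeholder executor image
      .config("spark.kubernetes.namespace", "default")
      .getOrCreate()

    println(spark.range(100).count())
    spark.stop()
  }
}
```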

[2/3] spark-website git commit: spark summit eu 2018

2018-07-25 Thread lixiao
http://git-wip-us.apache.org/repos/asf/spark-website/blob/d86cffd1/site/news/spark-2-2-1-released.html -- diff --git a/site/news/spark-2-2-1-released.html b/site/news/spark-2-2-1-released.html index df7c2f0..b9d465f 100644 --- a/s

[1/3] spark-website git commit: spark summit eu 2018

2018-07-25 Thread lixiao
Repository: spark-website Updated Branches: refs/heads/asf-site f5d7dfafe -> d86cffd19 http://git-wip-us.apache.org/repos/asf/spark-website/blob/d86cffd1/site/releases/spark-release-1-1-1.html -- diff --git a/site/releases/spar

[3/3] spark-website git commit: spark summit eu 2018

2018-07-25 Thread lixiao
spark summit eu 2018 Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/d86cffd1 Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/d86cffd1 Diff: http://git-wip-us.apache.org/repos/asf/spark-website/

spark git commit: [SPARK-24768][FOLLOWUP][SQL] Avro migration followup: change artifactId to spark-avro

2018-07-25 Thread lixiao
Repository: spark Updated Branches: refs/heads/master 7a5fd4a91 -> c44eb561e [SPARK-24768][FOLLOWUP][SQL] Avro migration followup: change artifactId to spark-avro ## What changes were proposed in this pull request? After rethinking on the artifactId, I think it should be `spark-avro` instead
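For reference, a hedged sbt sketch of what depending on the renamed module would look like, assuming the usual org.apache.spark group id and a Scala 2.11 cross-build suffix; the version is a placeholder for the eventual 2.4.0 release.

```scala
// build.sbt (sketch): the module is consumed as spark-avro rather than the old artifactId.
libraryDependencies += "org.apache.spark" %% "spark-avro" % "2.4.0"
```

With the dependency on the classpath, the source is typically selected with `.format("avro")` in the usual DataFrameReader/DataFrameWriter calls.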

svn commit: r28345 - in /dev/spark/2.3.3-SNAPSHOT-2018_07_25_06_01-740606e-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-07-25 Thread pwendell
Author: pwendell Date: Wed Jul 25 13:17:06 2018 New Revision: 28345 Log: Apache Spark 2.3.3-SNAPSHOT-2018_07_25_06_01-740606e docs [This commit notification would consist of 1443 parts, which exceeds the limit of 50, so it was shortened to a summary.] ---

spark git commit: [SPARK-24891][FOLLOWUP][HOT-FIX][2.3] Fix the Compilation Errors

2018-07-25 Thread gurwls223
Repository: spark Updated Branches: refs/heads/branch-2.3 6a5999286 -> 740606eb8 [SPARK-24891][FOLLOWUP][HOT-FIX][2.3] Fix the Compilation Errors ## What changes were proposed in this pull request? This PR is to fix the compilation failure in 2.3 build. https://amplab.cs.berkeley.edu/jenkins

svn commit: r28341 - in /dev/spark/2.4.0-SNAPSHOT-2018_07_25_00_01-7a5fd4a-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _s

2018-07-25 Thread pwendell
Author: pwendell Date: Wed Jul 25 07:16:31 2018 New Revision: 28341 Log: Apache Spark 2.4.0-SNAPSHOT-2018_07_25_00_01-7a5fd4a docs [This commit notification would consist of 1469 parts, which exceeds the limit of 50, so it was shortened to a summary.] ---