This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new e823afad625e [SPARK-50917][EXAMPLES] Add Pi Scala example to work both for Connect and Classic
e823afad625e is described below

commit e823afad625e1914baf265d823d472cad183355e
Author: Kent Yao <[email protected]>
AuthorDate: Mon Feb 10 11:42:07 2025 -0800

    [SPARK-50917][EXAMPLES] Add Pi Scala example to work both for Connect and Classic
    
    ### What changes were proposed in this pull request?
    
    This PR adds a SparkDataFramePi Scala example that works with both Spark Connect and Classic
    
    ### Why are the changes needed?
    
    The SparkPi example is often the first step for users getting to know Spark, so it should also be able to run in Spark Connect mode.
    
    ### Does this PR introduce _any_ user-facing change?
    no
    
    ### How was this patch tested?
    Manually build and test
    
    ```log
    bin/spark-submit --remote 'sc://localhost' --class org.apache.spark.examples.sql.SparkDataFramePi examples/jars/spark-examples_2.13-4.1.0-SNAPSHOT.jar
    WARNING: Using incubator modules: jdk.incubator.vector
    25/01/23 15:00:03 INFO BaseAllocator: Debug mode disabled. Enable with the VM option -Darrow.memory.debug.allocator=true.
    25/01/23 15:00:03 INFO DefaultAllocationManagerOption: allocation manager type not specified, using netty as the default type
    25/01/23 15:00:03 INFO CheckAllocator: Using DefaultAllocationManager at memory/netty/DefaultAllocationManagerFactory.class
    Pi is roughly 3.1388756943784717
    25/01/23 15:00:04 INFO ShutdownHookManager: Shutdown hook called
    25/01/23 15:00:04 INFO ShutdownHookManager: Deleting directory /private/var/folders/84/dgr9ykwn6yndcmq1kjxqvk200000gn/T/spark-25ed842e-5888-47ce-bb0b-442385d643cb
    ```
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    no
    
    Closes #49617 from yaooqinn/SPARK-50917.
    
    Authored-by: Kent Yao <[email protected]>
    Signed-off-by: Dongjoon Hyun <[email protected]>
---
 .../spark/examples/sql/SparkDataFramePi.scala      | 42 ++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/examples/src/main/scala/org/apache/spark/examples/sql/SparkDataFramePi.scala b/examples/src/main/scala/org/apache/spark/examples/sql/SparkDataFramePi.scala
new file mode 100644
index 000000000000..0102b2d291e9
--- /dev/null
+++ b/examples/src/main/scala/org/apache/spark/examples/sql/SparkDataFramePi.scala
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+// scalastyle:off println
+package org.apache.spark.examples.sql
+
+import org.apache.spark.sql.SparkSession
+import org.apache.spark.sql.functions._
+
+/** Computes an approximation to pi with SparkSession/DataFrame APIs */
+object SparkDataFramePi {
+  def main(args: Array[String]): Unit = {
+    val spark = SparkSession
+      .builder()
+      .appName("Spark DataFrame Pi")
+      .getOrCreate()
+    import spark.implicits._
+    val slices = if (args.length > 0) args(0).toInt else 2
+    val n = math.min(100000L * slices, Int.MaxValue).toInt // avoid overflow
+    val count = spark.range(0, n, 1, slices)
+      .select((pow(rand() * 2 - 1, lit(2)) + pow(rand() * 2 - 1, lit(2))).as("v"))
+      .where($"v" <= 1)
+      .count()
+    println(s"Pi is roughly ${4.0 * count / (n - 1)}")
+    spark.stop()
+  }
+}
+// scalastyle:on println
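The new example uses the same Monte Carlo method as the classic SparkPi: sample points uniformly in the unit square, count the fraction that lands inside the unit circle (probability pi/4), and multiply by 4. For readers who want to see the estimator without a Spark cluster, a minimal plain-Scala sketch (object and method names here are illustrative, not part of the commit):

```scala
import scala.util.Random

object MonteCarloPi {
  // Estimate pi by drawing n points (x, y) uniformly in [-1, 1] x [-1, 1]
  // and counting those inside the unit circle: pi ~= 4 * inside / n.
  def estimate(n: Int, seed: Long = 42L): Double = {
    val rng = new Random(seed)
    val inside = (1 to n).count { _ =>
      val x = rng.nextDouble() * 2 - 1
      val y = rng.nextDouble() * 2 - 1
      x * x + y * y <= 1
    }
    4.0 * inside / n
  }

  def main(args: Array[String]): Unit = {
    println(s"Pi is roughly ${estimate(1000000)}")
  }
}
```

The DataFrame version in the diff expresses the same computation declaratively: `rand() * 2 - 1` generates each coordinate, `pow(..., 2)` squares them, and the `where($"v" <= 1).count()` pair replaces the local `count` loop, which is what lets the job run on either a Classic cluster or a Connect server.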


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
