This is an automated email from the ASF dual-hosted git repository.

ruifengz pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 3c831ebf7cd8 [MINOR][DOCS] Add new dataframe methods to API references
3c831ebf7cd8 is described below

commit 3c831ebf7cd81c2ed774c692b2da7a35ce0fa252
Author: Ruifeng Zheng <[email protected]>
AuthorDate: Fri Jan 10 19:41:45 2025 +0800

    [MINOR][DOCS] Add new dataframe methods to API references
    
    ### What changes were proposed in this pull request?
    Add new dataframe methods to API references
    
    ### Why are the changes needed?
    These new methods are missing from the docs.
    
    ### Does this PR introduce _any_ user-facing change?
    Yes, docs-only changes.
    
    ### How was this patch tested?
    CI
    
    ### Was this patch authored or co-authored using generative AI tooling?
    No
    
    Closes #49439 from zhengruifeng/py_doc_missing_df.
    
    Authored-by: Ruifeng Zheng <[email protected]>
    Signed-off-by: Ruifeng Zheng <[email protected]>
---
 python/docs/source/reference/pyspark.sql/dataframe.rst | 6 ++++++
 python/pyspark/sql/dataframe.py                        | 6 +++---
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/python/docs/source/reference/pyspark.sql/dataframe.rst b/python/docs/source/reference/pyspark.sql/dataframe.rst
index 569c5cec6955..5aaea4c32577 100644
--- a/python/docs/source/reference/pyspark.sql/dataframe.rst
+++ b/python/docs/source/reference/pyspark.sql/dataframe.rst
@@ -30,6 +30,7 @@ DataFrame
     DataFrame.agg
     DataFrame.alias
     DataFrame.approxQuantile
+    DataFrame.asTable
     DataFrame.cache
     DataFrame.checkpoint
     DataFrame.coalesce
@@ -56,6 +57,7 @@ DataFrame
     DataFrame.dtypes
     DataFrame.exceptAll
     DataFrame.executionInfo
+    DataFrame.exists
     DataFrame.explain
     DataFrame.fillna
     DataFrame.filter
@@ -75,9 +77,11 @@ DataFrame
     DataFrame.isStreaming
     DataFrame.join
     DataFrame.limit
+    DataFrame.lateralJoin
     DataFrame.localCheckpoint
     DataFrame.mapInPandas
     DataFrame.mapInArrow
+    DataFrame.metadataColumn
     DataFrame.melt
     DataFrame.na
     DataFrame.observe
@@ -96,6 +100,7 @@ DataFrame
     DataFrame.sameSemantics
     DataFrame.sample
     DataFrame.sampleBy
+    DataFrame.scalar
     DataFrame.schema
     DataFrame.select
     DataFrame.selectExpr
@@ -117,6 +122,7 @@ DataFrame
     DataFrame.toLocalIterator
     DataFrame.toPandas
     DataFrame.transform
+    DataFrame.transpose
     DataFrame.union
     DataFrame.unionAll
     DataFrame.unionByName
diff --git a/python/pyspark/sql/dataframe.py b/python/pyspark/sql/dataframe.py
index f2c0bc815582..2d12704485ad 100644
--- a/python/pyspark/sql/dataframe.py
+++ b/python/pyspark/sql/dataframe.py
@@ -6609,10 +6609,10 @@ class DataFrame:
         After obtaining a TableArg from a DataFrame using this method, you can specify partitioning
         and ordering for the table argument by calling methods such as `partitionBy`, `orderBy`, and
         `withSinglePartition` on the `TableArg` instance.
-        - partitionBy(*cols): Partitions the data based on the specified columns. This method cannot
+        - partitionBy: Partitions the data based on the specified columns. This method cannot
         be called after withSinglePartition() has been called.
-        - orderBy(*cols): Orders the data within partitions based on the specified columns.
-        - withSinglePartition(): Indicates that the data should be treated as a single partition.
+        - orderBy: Orders the data within partitions based on the specified columns.
+        - withSinglePartition: Indicates that the data should be treated as a single partition.
         This method cannot be called after partitionBy() has been called.
 
         .. versionadded:: 4.0.0
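
For context, a hedged sketch of the TableArg flow that docstring describes: the UDTF below is a
toy example (the class name, schema, and data are illustrative, not part of this commit), and it
assumes a Spark 4.0 session.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import udtf

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(1, 10), (1, 20), (2, 30)], ["id", "v"])

    # A toy UDTF that counts the rows it sees; one instance handles one
    # partition of the table argument.
    @udtf(returnType="id: int, n: int")
    class CountRows:
        def __init__(self):
            self._id = None
            self._n = 0

        def eval(self, row):
            # Called once per input row of the table argument.
            self._id = row["id"]
            self._n += 1

        def terminate(self):
            # Called at the end of each partition; emits one summary row.
            yield self._id, self._n

    # Partition the table argument by "id" and order rows within each
    # partition by "v". Per the docstring above, partitionBy cannot be
    # combined with withSinglePartition.
    CountRows(df.asTable().partitionBy("id").orderBy("v")).show()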

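Similarly, a minimal look at one of the newly indexed methods, DataFrame.transpose (added in
Spark 4.0); the column names and data here are made up for illustration:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([("A", 1, 2), ("B", 3, 4)], ["name", "x", "y"])

    # transpose() flips rows and columns, using the first column ("name")
    # as the header column by default; an index column can also be passed
    # explicitly, e.g. df.transpose(df["name"]).
    df.transpose().show()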

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
