This is an automated email from the ASF dual-hosted git repository.

ruifengz pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 1fce8291425 [MINOR][PYTHON] Fix python linter in master
1fce8291425 is described below

commit 1fce8291425d8cf62ad4f8f53c510db34347802e
Author: Ruifeng Zheng <[email protected]>
AuthorDate: Thu Oct 5 10:46:21 2023 +0800

    [MINOR][PYTHON] Fix python linter in master
    
    ### What changes were proposed in this pull request?
    Fix the Python linter failure in the master branch.
    
    ### Why are the changes needed?
    https://github.com/apache/spark/actions/runs/6413129615/job/17411605510
    
    ### Does this PR introduce _any_ user-facing change?
    No
    
    ### How was this patch tested?
    Tested locally by running `dev/lint-python`:
    
    ```
    (spark_dev_311) ➜  spark git:(minor_fix_python_linter) dev/lint-python
    starting python compilation test...
    python compilation succeeded.
    
    starting black test...
    black checks passed.
    
    starting flake8 test...
    flake8 checks passed.
    
    starting mypy annotations test...
    annotations passed mypy checks.
    
    starting mypy examples test...
    examples passed mypy checks.
    
    starting mypy data test...
    annotations passed data checks.
    
    all lint-python tests passed!
    ```
    
    ### Was this patch authored or co-authored using generative AI tooling?
    no
    
    Closes #43222 from zhengruifeng/minor_fix_python_linter.
    
    Authored-by: Ruifeng Zheng <[email protected]>
    Signed-off-by: Ruifeng Zheng <[email protected]>
---
 python/pyspark/sql/connect/dataframe.py | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/python/pyspark/sql/connect/dataframe.py b/python/pyspark/sql/connect/dataframe.py
index 9e46c8e1bf3..5197a0db968 100644
--- a/python/pyspark/sql/connect/dataframe.py
+++ b/python/pyspark/sql/connect/dataframe.py
@@ -1677,9 +1677,6 @@ class DataFrame:
 
     def __getitem__(self, item: Union[int, str, Column, List, Tuple]) -> Union[Column, "DataFrame"]:
         if isinstance(item, str):
-            if self._plan is None:
-                raise SparkConnectException("Cannot analyze on empty plan.")
-
             # validate the column name
             if not hasattr(self._session, "is_mock_session"):
                 self.select(item).isLocal()

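For context, the type-dispatching `__getitem__` pattern this diff touches can be sketched in isolation. The class below is a minimal, hypothetical stand-in (`ToyFrame` and its internals are illustrative only, not PySpark's actual implementation) showing string keys being validated eagerly, analogous to the `self.select(item).isLocal()` check retained in the patched method:

```python
from typing import List, Union


class ToyFrame:
    """Minimal stand-in for a DataFrame-like object whose __getitem__
    dispatches on the type of the key."""

    def __init__(self, columns: List[str]) -> None:
        self._columns = columns

    def __getitem__(self, item: Union[int, str]) -> str:
        if isinstance(item, str):
            # Validate the column name eagerly, mirroring the
            # validation step in the diff above.
            if item not in self._columns:
                raise KeyError(f"Cannot resolve column name: {item}")
            return item
        elif isinstance(item, int):
            # Positional lookup.
            return self._columns[item]
        raise TypeError(f"Unexpected key type: {type(item).__name__}")


df = ToyFrame(["id", "value"])
print(df["id"])   # lookup by name -> "id"
print(df[1])      # lookup by position -> "value"
```

With the dead `self._plan is None` guard removed, validation for string keys proceeds directly, which is what made the guard (and the exception it raised) flaggable by the linter as unreachable in practice.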

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
