martin-g commented on code in PR #1579:
URL: https://github.com/apache/datafusion-ballista/pull/1579#discussion_r3152032360


##########
docs/source/user-guide/python/jupyter.md:
##########
@@ -0,0 +1,99 @@
+<!---
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing,
+  software distributed under the License is distributed on an
+  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  KIND, either express or implied.  See the License for the
+  specific language governing permissions and limitations
+  under the License.
+-->
+
+# Jupyter Notebooks
+
+Ballista works well in Jupyter notebooks. DataFrames automatically render as formatted
+HTML tables when displayed in a notebook cell.
+
+## Basic Usage
+
+```python
+from ballista import BallistaSessionContext
+
+ctx = BallistaSessionContext("df://localhost:50050")
+
+ctx.register_parquet("trips", "/path/to/nyctaxi.parquet")
+
+# The result renders as an HTML table when this is the last expression in a cell
+ctx.sql("SELECT * FROM trips LIMIT 10")
+```
+
+When a DataFrame is the last expression in a cell, Jupyter automatically calls its
+`_repr_html_()` method, which renders a styled table with formatted column headers,
+expandable cells for long text, and scrollable display for wide tables.
+
+## Converting Results
+
+DataFrames can be converted to various formats for further analysis:
+
+```python
+df = ctx.sql("SELECT * FROM trips WHERE fare_amount > 50")
+
+pandas_df = df.to_pandas()
+arrow_table = df.to_arrow_table()
+polars_df = df.to_polars()
+batches = df.collect()
+```
+
+## Example Workflow
+
+```python
+# Cell 1: Setup
+from ballista import BallistaSessionContext
+from datafusion import col, lit
+
+ctx = BallistaSessionContext("df://localhost:50050")
+ctx.register_parquet("orders", "/data/orders.parquet")
+ctx.register_parquet("customers", "/data/customers.parquet")
+
+# Cell 2: Explore the data
+ctx.sql("SELECT * FROM orders LIMIT 5")

Review Comment:
   ```suggestion
   df = ctx.sql("SELECT * FROM orders LIMIT 5")
   ```



##########
docs/source/user-guide/python/quickstart.md:
##########
@@ -0,0 +1,180 @@
+<!---
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing,
+  software distributed under the License is distributed on an
+  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  KIND, either express or implied.  See the License for the
+  specific language governing permissions and limitations
+  under the License.
+-->
+
+# Ballista Python Bindings
+
+Ballista provides Python bindings, allowing SQL and DataFrame queries to be executed
+from Python. Like PySpark, you build a plan through SQL or a DataFrame API against
+Parquet, CSV, JSON, and other file formats, run it in a distributed environment, and
+collect the results back in Python.
+
+## Connecting to a Cluster
+
+`BallistaSessionContext` is the entry point for both remote and in-process clusters.
+
+**Remote cluster** — connect to an already-running scheduler:
+
+```python
+from ballista import BallistaSessionContext
+
+ctx = BallistaSessionContext("df://localhost:50050")
+```
+
+**In-process cluster** — start a scheduler and executor in the current Python process.
+Useful for development, testing, and notebooks:
+
+```python
+from ballista import BallistaSessionContext, setup_test_cluster
+
+host, port = setup_test_cluster()
+ctx = BallistaSessionContext(f"df://{host}:{port}")
+```
+
+## Configuration
+
+### Target Partitions
+
+Set `datafusion.execution.target_partitions` to match your cluster capacity
+(`executors × concurrent_tasks_per_executor`) by passing `cluster_config` to the
+constructor. The default inherits from DataFusion and is based on the client's CPU
+count, which is far too low for distributed execution:
+
+```python
+from ballista import BallistaSessionContext
+
+executors = 4
+concurrent_tasks = 8
+target_partitions = executors * concurrent_tasks
+
+ctx = BallistaSessionContext(
+    "df://localhost:50050",
+    cluster_config={
+        "datafusion.execution.target_partitions": str(target_partitions),
+    },
+)
+```
+
+`cluster_config` is propagated to the scheduler-side session for distributed planning
+and execution, and is also applied to the local context so settings consulted during
+table registration (e.g. `datafusion.execution.listing_table_factory_infer_partitions`)
+take effect before the plan is shipped.
+
+This controls how many parallel tasks the scheduler creates per stage. Setting it too
+low leaves executor capacity idle; setting it too high creates unnecessary scheduling
+overhead.
+
+## SQL
+
+### Registering Tables
+
+Before running SQL queries, register tables with the context using a `register_*`
+method or a `CREATE EXTERNAL TABLE` statement:
+
+```python
+ctx.register_parquet("trips", "/mnt/bigdata/nyctaxi")
+```
+
+```python
+ctx.sql("CREATE EXTERNAL TABLE trips STORED AS PARQUET LOCATION '/mnt/bigdata/nyctaxi'")
+```
+
+### Executing Queries
+
+The `sql` method returns a `DataFrame`. The query runs when you call an action like
+`show` or `collect`:
+
+```python
+df = ctx.sql("SELECT count(*) FROM trips")
+```
+
+### Showing Query Results
+
+```python
+>>> df.show()
++-----------------+
+| COUNT(UInt8(1)) |
++-----------------+
+| 9071244         |
++-----------------+
+```
+
+### Collecting Query Results
+
+`collect` executes the query and returns the results as
+[PyArrow](https://arrow.apache.org/docs/python/index.html) record batches:
+
+```python
+>>> df.collect()

Review Comment:
   ```suggestion
   df.collect()
   ```



##########
docs/source/user-guide/python/jupyter.md:
##########
@@ -0,0 +1,99 @@
+<!---
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing,
+  software distributed under the License is distributed on an
+  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  KIND, either express or implied.  See the License for the
+  specific language governing permissions and limitations
+  under the License.
+-->
+
+# Jupyter Notebooks
+
+Ballista works well in Jupyter notebooks. DataFrames automatically render as formatted
+HTML tables when displayed in a notebook cell.
+
+## Basic Usage
+
+```python
+from ballista import BallistaSessionContext
+
+ctx = BallistaSessionContext("df://localhost:50050")
+
+ctx.register_parquet("trips", "/path/to/nyctaxi.parquet")
+
+# The result renders as an HTML table when this is the last expression in a cell
+ctx.sql("SELECT * FROM trips LIMIT 10")
+```
+
+When a DataFrame is the last expression in a cell, Jupyter automatically calls its
+`_repr_html_()` method, which renders a styled table with formatted column headers,
+expandable cells for long text, and scrollable display for wide tables.
+
+## Converting Results
+
+DataFrames can be converted to various formats for further analysis:
+
+```python
+df = ctx.sql("SELECT * FROM trips WHERE fare_amount > 50")
+
+pandas_df = df.to_pandas()
+arrow_table = df.to_arrow_table()
+polars_df = df.to_polars()
+batches = df.collect()
+```
+
+## Example Workflow
+
+```python
+# Cell 1: Setup
+from ballista import BallistaSessionContext
+from datafusion import col, lit
+
+ctx = BallistaSessionContext("df://localhost:50050")
+ctx.register_parquet("orders", "/data/orders.parquet")
+ctx.register_parquet("customers", "/data/customers.parquet")
+
+# Cell 2: Explore the data
+ctx.sql("SELECT * FROM orders LIMIT 5")
+
+# Cell 3: Run analysis — DataFrame renders as an HTML table
+ctx.sql("""

Review Comment:
   ```suggestion
   df = ctx.sql("""
   ```



##########
docs/source/user-guide/python/index.rst:
##########
@@ -0,0 +1,26 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+
+..   http://www.apache.org/licenses/LICENSE-2.0
+
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+
+Python Client
+=============
+
+.. toctree::
+   :maxdepth: 2
+
+   Quick Start <quickstart>

Review Comment:
   Should there be a page that explains how to install the `ballista` Python library?
   There is https://pypi.org/project/ballista/ but it is for `Helpers for training with pytorch`.
   It would be good to have a page explaining how to build and install it locally.
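
   For the page proposed above, a local build-and-install walkthrough could look roughly like the sketch below. Note this is only an illustration: the `python/` directory location and the use of `maturin` as the build tool are assumptions about the repository layout, not something confirmed by this PR.

   ```shell
   # Hypothetical local build of the Ballista Python bindings.
   # Assumes the bindings live under python/ and are built with maturin.
   git clone https://github.com/apache/datafusion-ballista.git
   cd datafusion-ballista/python
   python3 -m venv .venv && source .venv/bin/activate
   pip install maturin
   # Compile the Rust extension and install the package into the active venv
   maturin develop --release
   # Verify the local build shadows the unrelated PyPI "ballista" package
   python3 -c "import ballista; print(ballista.__file__)"
   ```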



##########
docs/source/user-guide/python/quickstart.md:
##########
@@ -0,0 +1,180 @@
+<!---
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing,
+  software distributed under the License is distributed on an
+  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  KIND, either express or implied.  See the License for the
+  specific language governing permissions and limitations
+  under the License.
+-->
+
+# Ballista Python Bindings
+
+Ballista provides Python bindings, allowing SQL and DataFrame queries to be executed
+from Python. Like PySpark, you build a plan through SQL or a DataFrame API against
+Parquet, CSV, JSON, and other file formats, run it in a distributed environment, and
+collect the results back in Python.
+
+## Connecting to a Cluster
+
+`BallistaSessionContext` is the entry point for both remote and in-process clusters.
+
+**Remote cluster** — connect to an already-running scheduler:
+
+```python
+from ballista import BallistaSessionContext
+
+ctx = BallistaSessionContext("df://localhost:50050")
+```
+
+**In-process cluster** — start a scheduler and executor in the current Python process.
+Useful for development, testing, and notebooks:
+
+```python
+from ballista import BallistaSessionContext, setup_test_cluster
+
+host, port = setup_test_cluster()
+ctx = BallistaSessionContext(f"df://{host}:{port}")
+```
+
+## Configuration
+
+### Target Partitions
+
+Set `datafusion.execution.target_partitions` to match your cluster capacity
+(`executors × concurrent_tasks_per_executor`) by passing `cluster_config` to the
+constructor. The default inherits from DataFusion and is based on the client's CPU
+count, which is far too low for distributed execution:
+
+```python
+from ballista import BallistaSessionContext
+
+executors = 4
+concurrent_tasks = 8
+target_partitions = executors * concurrent_tasks
+
+ctx = BallistaSessionContext(
+    "df://localhost:50050",
+    cluster_config={
+        "datafusion.execution.target_partitions": str(target_partitions),
+    },
+)
+```
+
+`cluster_config` is propagated to the scheduler-side session for distributed planning
+and execution, and is also applied to the local context so settings consulted during
+table registration (e.g. `datafusion.execution.listing_table_factory_infer_partitions`)
+take effect before the plan is shipped.
+
+This controls how many parallel tasks the scheduler creates per stage. Setting it too
+low leaves executor capacity idle; setting it too high creates unnecessary scheduling
+overhead.
+
+## SQL
+
+### Registering Tables
+
+Before running SQL queries, register tables with the context using a `register_*`
+method or a `CREATE EXTERNAL TABLE` statement:
+
+```python
+ctx.register_parquet("trips", "/mnt/bigdata/nyctaxi")
+```
+
+```python
+ctx.sql("CREATE EXTERNAL TABLE trips STORED AS PARQUET LOCATION '/mnt/bigdata/nyctaxi'")
+```
+
+### Executing Queries
+
+The `sql` method returns a `DataFrame`. The query runs when you call an action like
+`show` or `collect`:
+
+```python
+df = ctx.sql("SELECT count(*) FROM trips")
+```
+
+### Showing Query Results
+
+```python
+>>> df.show()
++-----------------+
+| COUNT(UInt8(1)) |
++-----------------+
+| 9071244         |
++-----------------+
+```
+
+### Collecting Query Results
+
+`collect` executes the query and returns the results as
+[PyArrow](https://arrow.apache.org/docs/python/index.html) record batches:
+
+```python
+>>> df.collect()
+[pyarrow.RecordBatch
+COUNT(UInt8(1)): int64]
+```
+
+### Viewing Query Plans
+
+`explain` shows the logical and physical plans for a query:
+
+```python
+>>> df.explain()

Review Comment:
   ```suggestion
   df.explain()
   ```



##########
docs/source/user-guide/python/quickstart.md:
##########
@@ -0,0 +1,180 @@
+<!---
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing,
+  software distributed under the License is distributed on an
+  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  KIND, either express or implied.  See the License for the
+  specific language governing permissions and limitations
+  under the License.
+-->
+
+# Ballista Python Bindings
+
+Ballista provides Python bindings, allowing SQL and DataFrame queries to be executed
+from Python. Like PySpark, you build a plan through SQL or a DataFrame API against
+Parquet, CSV, JSON, and other file formats, run it in a distributed environment, and
+collect the results back in Python.
+
+## Connecting to a Cluster
+
+`BallistaSessionContext` is the entry point for both remote and in-process clusters.
+
+**Remote cluster** — connect to an already-running scheduler:
+
+```python
+from ballista import BallistaSessionContext
+
+ctx = BallistaSessionContext("df://localhost:50050")
+```
+
+**In-process cluster** — start a scheduler and executor in the current Python process.
+Useful for development, testing, and notebooks:
+
+```python
+from ballista import BallistaSessionContext, setup_test_cluster
+
+host, port = setup_test_cluster()
+ctx = BallistaSessionContext(f"df://{host}:{port}")
+```
+
+## Configuration
+
+### Target Partitions
+
+Set `datafusion.execution.target_partitions` to match your cluster capacity
+(`executors × concurrent_tasks_per_executor`) by passing `cluster_config` to the
+constructor. The default inherits from DataFusion and is based on the client's CPU
+count, which is far too low for distributed execution:
+
+```python
+from ballista import BallistaSessionContext
+
+executors = 4
+concurrent_tasks = 8
+target_partitions = executors * concurrent_tasks
+
+ctx = BallistaSessionContext(
+    "df://localhost:50050",
+    cluster_config={
+        "datafusion.execution.target_partitions": str(target_partitions),
+    },
+)
+```
+
+`cluster_config` is propagated to the scheduler-side session for distributed planning
+and execution, and is also applied to the local context so settings consulted during
+table registration (e.g. `datafusion.execution.listing_table_factory_infer_partitions`)
+take effect before the plan is shipped.
+
+This controls how many parallel tasks the scheduler creates per stage. Setting it too
+low leaves executor capacity idle; setting it too high creates unnecessary scheduling
+overhead.
+
+## SQL
+
+### Registering Tables
+
+Before running SQL queries, register tables with the context using a `register_*`
+method or a `CREATE EXTERNAL TABLE` statement:
+
+```python
+ctx.register_parquet("trips", "/mnt/bigdata/nyctaxi")
+```
+
+```python
+ctx.sql("CREATE EXTERNAL TABLE trips STORED AS PARQUET LOCATION '/mnt/bigdata/nyctaxi'")
+```
+
+### Executing Queries
+
+The `sql` method returns a `DataFrame`. The query runs when you call an action like
+`show` or `collect`:
+
+```python
+df = ctx.sql("SELECT count(*) FROM trips")
+```
+
+### Showing Query Results
+
+```python
+>>> df.show()

Review Comment:
   ```suggestion
   df.show()
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

