sumedhsakdeo commented on code in PR #3046:
URL: https://github.com/apache/iceberg-python/pull/3046#discussion_r2831076454
##########
pyiceberg/io/pyarrow.py:
##########
@@ -1710,6 +1685,76 @@ def _read_all_delete_files(io: FileIO, tasks: Iterable[FileScanTask]) -> dict[st
return deletes_per_file
+_QUEUE_SENTINEL = object()
+
+
+def _bounded_concurrent_batches(
+ tasks: list[FileScanTask],
+ batch_fn: Callable[[FileScanTask], Iterator[pa.RecordBatch]],
+ concurrent_files: int,
+ max_buffered_batches: int = 16,
Review Comment:
Refactored and parameterized `concurrent_streams`, `batch_size`, and
`max_buffered_batches` for advanced memory tuning:
```text
Peak memory ≈ concurrent_streams × batch_size × max_buffered_batches ×
(average row size in bytes)
Where:
- `concurrent_streams`: Number of files read in parallel (default: 8)
- `batch_size`: Number of rows per batch (default: 131,072; can be set via the
ArrivalOrder constructor)
- `max_buffered_batches`: Internal buffering parameter (default: 16; can be
tuned for advanced use cases)
- Average row size: depends on your schema and data; the first three factors
give a row count, so multiply by the average row size to estimate bytes.
```
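As a rough worked example with the defaults above (the 100-byte average row size is only an assumed figure for illustration):
```text
8 streams × 131,072 rows/batch × 16 batches × 100 bytes/row
  ≈ 1.68 × 10^9 bytes ≈ 1.6 GiB of peak buffered data
```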
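For context, here is a minimal, self-contained sketch of the bounded producer-consumer pattern that the `_bounded_concurrent_batches` signature suggests; it is only an illustration, not the code in this PR, and error propagation and cancellation are omitted for brevity. Readers for up to `concurrent_files` tasks feed a queue that holds at most `max_buffered_batches` batches, and producers block once the queue is full, which is what keeps peak memory near the estimate above.
```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Callable, Iterator, List

_QUEUE_SENTINEL = object()  # marks the end of the stream


def _bounded_concurrent_batches_sketch(
    tasks: List[Any],
    batch_fn: Callable[[Any], Iterator[Any]],
    concurrent_files: int,
    max_buffered_batches: int = 16,
) -> Iterator[Any]:
    # Bounded queue: put() blocks once max_buffered_batches items are waiting,
    # so a slow consumer throttles the readers instead of growing memory.
    buffer: queue.Queue = queue.Queue(maxsize=max_buffered_batches)

    def produce(task: Any) -> None:
        for batch in batch_fn(task):
            buffer.put(batch)

    def produce_all() -> None:
        # Read at most `concurrent_files` tasks at a time.
        with ThreadPoolExecutor(max_workers=concurrent_files) as pool:
            list(pool.map(produce, tasks))
        buffer.put(_QUEUE_SENTINEL)

    threading.Thread(target=produce_all, daemon=True).start()

    while (item := buffer.get()) is not _QUEUE_SENTINEL:
        yield item
```
The bounded `queue.Queue` is the key piece: once the consumer falls behind, `put()` blocks rather than letting batches accumulate, so the buffer never holds more than `max_buffered_batches` batches at a time.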
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]