ehsantn commented on code in PR #2167:
URL: https://github.com/apache/iceberg-python/pull/2167#discussion_r2193638945


##########
tests/integration/test_writes/test_partitioned_writes.py:
##########
@@ -451,6 +451,11 @@ def test_dynamic_partition_overwrite_unpartitioned_evolve_to_identity_transform(
 
 @pytest.mark.integration
 def test_summaries_with_null(spark: SparkSession, session_catalog: Catalog, arrow_table_with_null: pa.Table) -> None:
+    import pyarrow
+    from packaging import version
+
+    under_20_arrow = version.parse(pyarrow.__version__) < version.parse("20.0.0")
+

Review Comment:
   Any ideas? Maybe use a range of "safe" values instead of a single file size value? I'd be happy to open another PR if there is more work to do here.
   
   Bodo is currently pinned to Arrow 19, since the current release of PyIceberg supports Arrow only up to version 19. Bodo uses Arrow C++, which currently requires pinning to a single Arrow version for pip wheels to work (conda-forge builds against the 4 latest Arrow versions in this case, but pip doesn't support this yet). It'd be great if PyIceberg didn't set an upper version bound for Arrow, if possible.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

