Fokko commented on code in PR #6775:
URL: https://github.com/apache/iceberg/pull/6775#discussion_r1111835645


##########
python/pyiceberg/io/pyarrow.py:
##########
@@ -470,13 +472,25 @@ def expression_to_pyarrow(expr: BooleanExpression) -> pc.Expression:
     return boolean_expression_visit(expr, _ConvertToArrowExpression())
 
 
+def _read_deletes(fs: FileSystem, file_path: str) -> Dict[str, pa.ChunkedArray]:
+    _, path = PyArrowFileIO.parse_location(file_path)
+    table = pq.read_table(
+        source=path, pre_buffer=True, buffer_size=8 * ONE_MEGABYTE, read_dictionary=["file_path"], filesystem=fs
+    )
+    # unify_dictionaries() returns a new table; the result must be reassigned
+    table = table.unify_dictionaries()
+    return {
+        file.as_py(): table.filter(pc.field("file_path") == file).column("pos")
+        for file in table.columns[0].chunks[0].dictionary
+    }

Review Comment:
   > Also, what happens if the content isn't dictionary-encoded? It looks like you can use dictionary_encode() to ensure an array is dictionary-encoded. Maybe we can do the same and make sure this is present?
   
   The docstring states: `List of column names to read directly as DictionaryArray.` This is unrelated to how the column is encoded on disk. If it is dictionary-encoded on disk, reading it is even more efficient, but either way the column is decoded into a [`DictionaryArray`](https://arrow.apache.org/docs/python/generated/pyarrow.DictionaryArray.html). Since we know that `file_path` is probably low cardinality, I think it makes sense to decode it directly into a `DictionaryArray`.
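
   A quick sketch of the distinction being made here (the file names and the temp-file setup are made up for illustration): `dictionary_encode()` converts an in-memory array, while `read_dictionary` tells the Parquet reader to produce a `DictionaryArray` regardless of the on-disk encoding.

   ```python
   import os
   import tempfile

   import pyarrow as pa
   import pyarrow.parquet as pq

   # A plain (non-dictionary) array with low cardinality, like the
   # `file_path` column of a positional delete file.
   plain = pa.array(["a.parquet", "a.parquet", "b.parquet"])

   # dictionary_encode() turns it into a DictionaryArray: each distinct
   # value is stored once, and rows become integer indices into them.
   encoded = plain.dictionary_encode()

   # read_dictionary is independent of how the column was written: the
   # column comes back as a DictionaryArray either way.
   table = pa.table({"file_path": plain, "pos": pa.array([0, 1, 0], type=pa.int64())})
   path = os.path.join(tempfile.mkdtemp(), "deletes.parquet")
   pq.write_table(table, path)
   read_back = pq.read_table(path, read_dictionary=["file_path"])
   ```

   With low-cardinality data this keeps the distinct values deduplicated in memory, which is what makes the per-file `filter` pattern in `_read_deletes` cheap to drive from the dictionary.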



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

