corleyma commented on code in PR #786:
URL: https://github.com/apache/iceberg-python/pull/786#discussion_r1637425676


##########
pyiceberg/io/pyarrow.py:
##########
@@ -1795,15 +1873,19 @@ def write_file(io: FileIO, table_metadata: TableMetadata, tasks: Iterator[WriteT
 
     def write_parquet(task: WriteTask) -> DataFile:
         table_schema = task.schema
-        arrow_table = pa.Table.from_batches(task.record_batches)
+
         # if schema needs to be transformed, use the transformed schema and adjust the arrow table accordingly
         # otherwise use the original schema
         if (sanitized_schema := sanitize_column_names(table_schema)) != table_schema:
             file_schema = sanitized_schema
-            arrow_table = to_requested_schema(requested_schema=file_schema, file_schema=table_schema, table=arrow_table)
+            batches = [
+                to_requested_schema(requested_schema=file_schema, file_schema=table_schema, batch=batch)
+                for batch in task.record_batches
+            ]
         else:
             file_schema = table_schema
-
+            batches = task.record_batches
+        arrow_table = pa.Table.from_batches(batches)

Review Comment:
   i.e., maybe we could do something like the following:
   ```python
   def sanitize_batches(batches: Iterator[RecordBatch], table_schema: Schema, sanitized_schema: Schema) -> Iterator[RecordBatch]:
       if sanitized_schema != table_schema:
           for batch in batches:
               yield to_requested_schema(requested_schema=sanitized_schema, file_schema=table_schema, batch=batch)
       else:
           yield from batches

   def write_parquet(task: WriteTask) -> DataFile:
       table_schema = task.schema

       # Check if schema needs to be transformed
       sanitized_schema = sanitize_column_names(table_schema)

       file_path = f'{table_metadata.location}/data/{task.generate_data_file_path("parquet")}'
       fo = io.new_output(file_path)

       with fo.create(overwrite=True) as fos:
           with pq.ParquetWriter(fos, schema=sanitized_schema.as_arrow(), **parquet_writer_kwargs) as writer:
               for sanitized_batch in sanitize_batches(task.record_batches, table_schema, sanitized_schema):
                   writer.write_table(pa.Table.from_batches([sanitized_batch]), row_group_size=row_group_size)
   ```
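   
   FWIW, if the pyarrow version floor allows it, `pq.ParquetWriter.write_batch` accepts a `RecordBatch` directly, which would drop the single-batch `Table` wrapping above; a minimal sketch of just the inner loop, assuming that API is available:
   ```python
   for sanitized_batch in sanitize_batches(task.record_batches, table_schema, sanitized_schema):
       # write_batch takes the RecordBatch as-is, so no pa.Table.from_batches indirection is needed
       writer.write_batch(sanitized_batch, row_group_size=row_group_size)
   ```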


