robtandy commented on code in PR #41:
URL: https://github.com/apache/iceberg-python/pull/41#discussion_r1440972324
##########
pyiceberg/io/pyarrow.py:
##########

```diff
@@ -1565,13 +1564,54 @@ def fill_parquet_file_metadata(
         del upper_bounds[field_id]
         del null_value_counts[field_id]

-    df.file_format = FileFormat.PARQUET
     df.record_count = parquet_metadata.num_rows
-    df.file_size_in_bytes = file_size
     df.column_sizes = column_sizes
     df.value_counts = value_counts
     df.null_value_counts = null_value_counts
     df.nan_value_counts = nan_value_counts
     df.lower_bounds = lower_bounds
     df.upper_bounds = upper_bounds
     df.split_offsets = split_offsets
+
+
+def write_file(table: Table, tasks: Iterator[WriteTask]) -> Iterator[DataFile]:
+    task = next(tasks)
+
+    try:
+        _ = next(tasks)
+        # If there are more tasks, raise an exception
+        raise ValueError("Only unpartitioned writes are supported: https://github.com/apache/iceberg-python/issues/208")
+    except StopIteration:
+        pass
+
+    df = task.df
+
+    file_path = f'{table.location()}/data/{_generate_datafile_filename("parquet")}'
+    file_schema = schema_to_pyarrow(table.schema())
```

Review Comment:

If I have

```python
df = pa.Table.from_pylist([{'a': "hello"}, {'a': "world"}])
```

should I expect `df` to have a `pa.Schema` that can be converted with `pyarrow_to_schema`? Even the modified one presented in the diff above? I wasn't able to get either branch to work, as the schema of `df` above has no `metadata` for its fields.
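As context for the question above, here is a minimal pyarrow-only sketch of the observation: a `Table` built with `from_pylist` gets an inferred schema whose fields carry no metadata, so an id-based conversion such as `pyarrow_to_schema` has nothing to read. The `PARQUET:field_id` key used below is an assumption about the convention such a converter looks for, not something confirmed by this diff.

```python
import pyarrow as pa

# Schema inferred from plain Python dicts: the field carries no metadata at all,
# so there is no field-id for an id-based converter to read.
df = pa.Table.from_pylist([{"a": "hello"}, {"a": "world"}])
print(df.schema.field("a").metadata)  # None

# Hypothetical workaround: build the table against a schema that carries explicit
# field-id metadata (the key name is assumed, following the Parquet convention).
schema_with_ids = pa.schema(
    [pa.field("a", pa.string(), metadata={b"PARQUET:field_id": b"1"})]
)
df_with_ids = pa.Table.from_pylist(
    [{"a": "hello"}, {"a": "world"}], schema=schema_with_ids
)
print(df_with_ids.schema.field("a").metadata)  # {b'PARQUET:field_id': b'1'}
```

This only demonstrates where the missing `metadata` comes from; whether attaching field-ids this way satisfies the modified `pyarrow_to_schema` branch discussed in the PR is left open here.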