capDoYeonLee commented on issue #13671:
URL: https://github.com/apache/iceberg/issues/13671#issuecomment-3128998720
Hello @nastra,
I added the missing fields to the internal `copy()` method of `FileMetadata`:
```java
this.referencedDataFile = toCopy.referencedDataFile();
this.contentOffset = toCopy.contentOffset();
this.contentSizeInBytes = toCopy.contentSizeInBytes();
```
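For context, the pattern at play is a copy constructor that must carry over every field, including the DV-only ones. A minimal standalone sketch of that pattern (the class and field names only loosely mirror the real `FileMetadata` builder, so treat them as illustrative):

```java
// Illustrative sketch, NOT the real Iceberg FileMetadata class.
class DeleteFileSketch {
    final String referencedDataFile;
    final Long contentOffset;      // required for DVs under format-version=3
    final Long contentSizeInBytes; // required for DVs under format-version=3

    DeleteFileSketch(String referencedDataFile, Long contentOffset, Long contentSizeInBytes) {
        this.referencedDataFile = referencedDataFile;
        this.contentOffset = contentOffset;
        this.contentSizeInBytes = contentSizeInBytes;
    }

    // Copy constructor: omitting any of these assignments reproduces a
    // downstream "Content offset is required for DV"-style failure.
    DeleteFileSketch(DeleteFileSketch toCopy) {
        this.referencedDataFile = toCopy.referencedDataFile;
        this.contentOffset = toCopy.contentOffset;
        this.contentSizeInBytes = toCopy.contentSizeInBytes;
    }
}
```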
After adding them, errors such as `java.lang.IllegalArgumentException: Content offset is required for DV` no longer occur. However, when testing with `format-version=3`, an `Unsupported file format: PUFFIN` exception now occurs:
```
java.lang.UnsupportedOperationException: Unsupported file format: PUFFIN
	at org.apache.iceberg.spark.actions.RewriteTablePathSparkAction.positionDeletesReader(RewriteTablePathSparkAction.java:705)
	at org.apache.iceberg.spark.actions.RewriteTablePathSparkAction$SparkPositionDeleteReaderWriter.reader(RewriteTablePathSparkAction.java:642)
	at org.apache.iceberg.RewriteTablePathUtil.rewritePositionDeleteFile(RewriteTablePathUtil.java:605)
	at org.apache.iceberg.spark.actions.RewriteTablePathSparkAction.lambda$rewritePositionDelete$ab76677f$1(RewriteTablePathSparkAction.java:668)
	at org.apache.spark.sql.Dataset.$anonfun$foreach$2(Dataset.scala:3507)
	at org.apache.spark.sql.Dataset.$anonfun$foreach$2$adapted(Dataset.scala:3507)
	at scala.collection.Iterator.foreach(Iterator.scala:943)
	at scala.collection.Iterator.foreach$(Iterator.scala:943)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
	at org.apache.spark.rdd.RDD.$anonfun$foreach$2(RDD.scala:1031)
	at org.apache.spark.rdd.RDD.$anonfun$foreach$2$adapted(RDD.scala:1031)
	at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2433)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
	at org.apache.spark.scheduler.Task.run(Task.scala:141)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:621)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
	at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:624)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:840)
```
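The top frame suggests `positionDeletesReader` dispatches on the delete file's format and has no branch for `PUFFIN` (the container format deletion vectors are written in under format-version 3). A standalone sketch of that failure shape, with an illustrative enum and factory that are assumptions, not the real Spark action code:

```java
// Illustrative only: mimics a reader factory that handles the classic
// delete-file formats but not PUFFIN, reproducing the exception shape above.
enum FormatSketch { PARQUET, ORC, AVRO, PUFFIN }

class ReaderFactorySketch {
    static String readerFor(FormatSketch format) {
        switch (format) {
            case PARQUET: return "parquet-reader";
            case ORC:     return "orc-reader";
            case AVRO:    return "avro-reader";
            default:
                // DV delete files reach this branch because they are PUFFIN.
                throw new UnsupportedOperationException("Unsupported file format: " + format);
        }
    }
}
```

If this is the cause, the fix would presumably be a dedicated code path for Puffin-backed DVs rather than routing them through the row-based position-delete reader, but I would appreciate guidance on the intended approach.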
I am stuck on this `Unsupported file format: PUFFIN` exception. Could you please advise me on how to resolve it?
Thank you!
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]