qinghui-xu commented on issue #14232:
URL: https://github.com/apache/iceberg/issues/14232#issuecomment-3363045480

   Hello @nastra 
   Here is a trivial example that reproduces the same kind of error:
   ```
   $ spark-shell --deploy-mode client --jars path/to/iceberg.jar \
       --conf spark.sql.catalog.iceberg=org.apache.iceberg.spark.SparkCatalog \
       --conf spark.sql.catalog.iceberg.warehouse=/path/to/dwh/root \
       --conf spark.sql.catalog.iceberg.type=hadoop
   
   scala> spark.sql("create table iceberg.test (uname STRING, create_date INT, 
uid INT) TBLPROPERTIES ('format' = 'iceberg/parquet', 'format-version' = '2',  
'identifier-fields' = '[uid]', 'bwrite.distribution-mode' = 'hash', 
'write.upsert.enabled' = 'true')")
   
   scala> spark.sql("select * from iceberg.test").show()
   +-----+-----------+---+
   |uname|create_date|uid|
   +-----+-----------+---+
   +-----+-----------+---+
   
   scala> spark.sql("delete from iceberg.test where uid = 0")
   java.lang.NoSuchMethodError: 'org.apache.avro.LogicalTypes$TimestampNanos org.apache.avro.LogicalTypes.timestampNanos()'
     at org.apache.iceberg.avro.TypeToSchema.<clinit>(TypeToSchema.java:50)
     at org.apache.iceberg.avro.AvroSchemaUtil.convert(AvroSchemaUtil.java:64)
     at org.apache.iceberg.avro.AvroSchemaUtil.convert(AvroSchemaUtil.java:59)
     at org.apache.iceberg.avro.Avro$WriteBuilder.build(Avro.java:211)
     at org.apache.iceberg.ManifestListWriter$V2Writer.newAppender(ManifestListWriter.java:219)
     at org.apache.iceberg.ManifestListWriter.<init>(ManifestListWriter.java:34)
     at org.apache.iceberg.ManifestListWriter$V2Writer.<init>(ManifestListWriter.java:196)
     at org.apache.iceberg.ManifestLists.write(ManifestLists.java:65)
     at org.apache.iceberg.SnapshotProducer.apply(SnapshotProducer.java:266)
     at org.apache.iceberg.StreamingDelete.apply(StreamingDelete.java:24)
     at org.apache.iceberg.SnapshotProducer.lambda$commit$2(SnapshotProducer.java:440)
     at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:413)
     at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:219)
     at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:203)
     at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:196)
     at org.apache.iceberg.SnapshotProducer.commit(SnapshotProducer.java:438)
     at org.apache.iceberg.StreamingDelete.commit(StreamingDelete.java:24)
     at org.apache.iceberg.spark.source.SparkTable.deleteWhere(SparkTable.java:411)
     at org.apache.spark.sql.execution.datasources.v2.DeleteFromTableExec.run(DeleteFromTableExec.scala:31)
     at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
     at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
     at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
     at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:107)
     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:125)
     at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:201)
     at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:108)
     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:66)
     at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:107)
     at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
     at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:461)
     at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:76)
     at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:461)
     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:32)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
     at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:437)
     at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:98)
     at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:85)
     at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:83)
     at org.apache.spark.sql.Dataset.<init>(Dataset.scala:220)
     at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
     at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
     at org.apache.spark.sql.SparkSession.$anonfun$sql$4(SparkSession.scala:691)
     at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
     at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:682)
     at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:713)
     at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:744)
     ... 47 elided
   ```
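   
   For context, this looks to me like an Avro version conflict on the driver classpath: `TypeToSchema` calls `org.apache.avro.LogicalTypes.timestampNanos()`, which (as far as I can tell) only exists in Avro 1.12.0 and later, while the Avro bundled with the Spark distribution appears to be older. The reflection sketch below, run from the same spark-shell, checks which Avro jar is actually loaded and whether it has that method; the jar path it prints is deployment-specific:
   ```
   scala> // Locate the jar that provides org.apache.avro.LogicalTypes on this JVM.
   scala> classOf[org.apache.avro.LogicalTypes].getProtectionDomain.getCodeSource.getLocation
   
   scala> // Probe for timestampNanos(); a NoSuchMethodException here means the
   scala> // loaded Avro predates 1.12.0, matching the NoSuchMethodError above.
   scala> classOf[org.apache.avro.LogicalTypes].getMethod("timestampNanos")
   ```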
   

