rafoid opened a new issue, #8926:
URL: https://github.com/apache/iceberg/issues/8926

   ### Apache Iceberg version
   
   1.4.0
   
   ### Query engine
   
   Spark
   
   ### Please describe the bug 🐞
   
   Starting `spark-sql` with the following config params:
   ```
   spark-sql \
   --packages org.apache.iceberg:iceberg-spark-runtime-3.4_2.12:1.4.0,org.apache.iceberg:iceberg-aws-bundle:1.4.0 \
   --conf spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkSessionCatalog \
   --conf spark.sql.catalog.spark_catalog.type=hive \
   --conf spark.sql.catalog.local=org.apache.iceberg.spark.SparkCatalog \
   --conf spark.sql.catalog.local.type=hive \
   --conf spark.sql.catalog.local.uri=thrift://<host_omitted>:9083 \
   --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
   --conf spark.hadoop.fs.s3a.access.key=<omitted> \
   --conf spark.hadoop.fs.s3a.secret.key=<omitted> \
   --conf spark.hadoop.fs.s3a.endpoint=http://<host_omitted>:9000
   ```
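   For reference, the catalog settings can be echoed back from inside the session to confirm they were picked up (just a sanity check, not part of the repro; `SET <key>` simply prints the current value of that config):
   ```
   -- echo the catalog config values that were passed on the command line
   SET spark.sql.catalog.local;
   SET spark.sql.catalog.local.type;
   ```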
   Then create a table with the following properties:
   ```
   create table local.test.test_tab_2 (c1 int, c2 string) using iceberg
     tblproperties ('format-version'='2', 'write.delete.mode'='merge-on-read');
   ```
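   To double-check that the properties took effect, SHOW TBLPROPERTIES can be used (shown for completeness; the values are expected to match what was set above):
   ```
   -- verify the format version and delete mode on the new table
   SHOW TBLPROPERTIES local.test.test_tab_2;
   SHOW TBLPROPERTIES local.test.test_tab_2 ('write.delete.mode');
   ```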
   Then insert records from an existing table, which has 25k rows:
   ```
   insert into test_tab_2 select * from test_tab_1;
   ```
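   (The "only 1 matching record" claim below comes from checks along these lines:)
   ```
   -- sanity checks: row count after the copy, and the row targeted by the delete
   SELECT count(*) FROM test_tab_2;
   SELECT * FROM test_tab_2 WHERE c1 = 5;
   ```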
   Now, when attempting to delete a record from test_tab_2 (it has only 1 matching record), I get an exception with the following (relevant) stack:
   ```
   spark-sql (test)> delete from test_tab_2 where c1 = 5;
   23/10/26 10:48:28 ERROR SparkSQLDriver: Failed in [delete from test_tab_2 where c1 = 5]
   java.lang.IllegalArgumentException: info must be ExtendedLogicalWriteInfo
        at org.apache.iceberg.relocated.com.google.common.base.Preconditions.checkArgument(Preconditions.java:145)
        at org.apache.iceberg.spark.source.SparkPositionDeltaOperation.newWriteBuilder(SparkPositionDeltaOperation.java:89)
        at org.apache.iceberg.spark.source.SparkPositionDeltaOperation.newWriteBuilder(SparkPositionDeltaOperation.java:38)
        at org.apache.spark.sql.connector.write.RowLevelOperationTable.newWriteBuilder(RowLevelOperationTable.scala:50)
        at org.apache.spark.sql.execution.datasources.v2.V2Writes$.org$apache$spark$sql$execution$datasources$v2$V2Writes$$newWriteBuilder(V2Writes.scala:144)
        at org.apache.spark.sql.execution.datasources.v2.V2Writes$$anonfun$apply$1.applyOrElse(V2Writes.scala:100)
        at org.apache.spark.sql.execution.datasources.v2.V2Writes$$anonfun$apply$1.applyOrElse(V2Writes.scala:43)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:512)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:104)
   ```
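   One isolation step I can think of (sketched below, not yet verified to change anything) is to flip the table back to copy-on-write deletes and retry the same statement, to see whether the failure is specific to merge-on-read:
   ```
   -- switch the delete mode back to copy-on-write and retry the delete
   ALTER TABLE local.test.test_tab_2 SET TBLPROPERTIES ('write.delete.mode'='copy-on-write');
   DELETE FROM test_tab_2 WHERE c1 = 5;
   ```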
   It's not clear to me whether I missed another spark-sql config param or table property, or whether this is simply a bug. The exception message doesn't make it obvious how to proceed, and searching for it didn't turn up much either. I'm using the following Spark version:
   ```
   spark-sql --version
   Welcome to
         ____              __
        / __/__  ___ _____/ /__
       _\ \/ _ \/ _ `/ __/  '_/
      /___/ .__/\_,_/_/ /_/\_\   version 3.4.1
         /_/
                           
   Using Scala version 2.12.17, Java HotSpot(TM) 64-Bit Server VM, 17.0.5
   Branch HEAD
   Compiled by user centos on 2023-06-19T23:01:01Z
   Revision 6b1ff22dde1ead51cbf370be6e48a802daae58b6
   Url https://github.com/apache/spark
   Type --help for more information.
   ```

