jiantao-vungle commented on PR #9017:
URL: https://github.com/apache/iceberg/pull/9017#issuecomment-1818197335

   hi @huaxingao, we tried to execute a SQL `INSERT` into an Iceberg table but encountered the exception stack below. There is no Iceberg API call anywhere in the stack, and it looks like the same issue as https://github.com/apache/iceberg/issues/8904. My questions:
   - Can it be fixed by this PR (noting that no Iceberg API call appears in the stack)?
   - Is there any release plan for this PR and for https://github.com/apache/iceberg/pull/9028?
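
   For context, the write is a plain Spark SQL `INSERT` issued from our driver, roughly of the shape sketched below. This is a minimal sketch for illustration only; the catalog, table, and column names are hypothetical placeholders, not our real schema.

   ```scala
   // Minimal repro sketch. Assumptions: a SparkSession configured with an
   // Iceberg catalog; `my_catalog`, `db.staging_table`, `source_view`, and
   // the column names are hypothetical placeholders.
   import org.apache.spark.sql.SparkSession

   object InsertRepro {
     def main(args: Array[String]): Unit = {
       val spark = SparkSession.builder()
         .appName("iceberg-insert-repro")
         .getOrCreate()

       // A plain SQL INSERT into an Iceberg table; in our run this is where
       // "Commit denied for partition ..." surfaces on retried task attempts.
       spark.sql(
         """INSERT INTO my_catalog.db.staging_table
           |SELECT id, event_ts, payload
           |FROM source_view""".stripMargin)

       spark.stop()
     }
   }
   ```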
   
   
   ```
    Exception in thread "main" java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:63)
        at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
    Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 186 in stage 9.0 failed 4 times, most recent failure: Lost task 186.4 in stage 9.0 (TID 2992) (172.26.2.81 executor 20): org.apache.spark.SparkException: Commit denied for partition 186 (task 2992, attempt 4, stage 9.0).
        at org.apache.spark.sql.errors.QueryExecutionErrors$.commitDeniedError(QueryExecutionErrors.scala:929)
        at org.apache.spark.sql.execution.datasources.v2.WritingSparkTask.$anonfun$run$1(WriteToDataSourceV2Exec.scala:485)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1563)
        at org.apache.spark.sql.execution.datasources.v2.WritingSparkTask.run(WriteToDataSourceV2Exec.scala:509)
        at org.apache.spark.sql.execution.datasources.v2.WritingSparkTask.run$(WriteToDataSourceV2Exec.scala:448)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:514)
        at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:411)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
        at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
        at org.apache.spark.scheduler.Task.run(Task.scala:139)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)

    Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2785)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2721)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2720)
        at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
        at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2720)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1206)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1206)
        at scala.Option.foreach(Option.scala:407)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1206)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2984)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2923)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2912)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:971)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:2263)
        at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:408)
        at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2$(WriteToDataSourceV2Exec.scala:382)
        at org.apache.spark.sql.execution.datasources.v2.AppendDataExec.writeWithV2(WriteToDataSourceV2Exec.scala:248)
        at org.apache.spark.sql.execution.datasources.v2.V2ExistingTableWriteExec.run(WriteToDataSourceV2Exec.scala:360)
        at org.apache.spark.sql.execution.datasources.v2.V2ExistingTableWriteExec.run$(WriteToDataSourceV2Exec.scala:359)
        at org.apache.spark.sql.execution.datasources.v2.AppendDataExec.run(WriteToDataSourceV2Exec.scala:248)
        at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
        at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
        at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:118)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:195)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:103)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:512)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:104)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:512)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:31)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:488)
        at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)
        at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)
        at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)
        at org.apache.spark.sql.Dataset.<init>(Dataset.scala:219)
        at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
        at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:640)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:630)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:671)
        at com.xxxxxx.xxxx.BoilerplateSparkMain.withSaveIcebergStagingTable(Boilerplate.scala:1685)
        at com.xxxxxx.xxxx.BoilerplateSparkMain.withSaveIcebergStagingTable$(Boilerplate.scala:1667)
        at com.xxxxxx.xxxx.hbp.notifications_attribution.SparkMain$.withSaveIcebergStagingTable(SparkMain.scala:13)
        at com.xxxxxx.xxxx.hbp.notifications_attribution.SparkMain$.process(SparkMain.scala:156)
        at com.xxxxxx.xxxx.hbp.notifications_attribution.SparkMain$.$anonfun$run$3(SparkMain.scala:221)
        at com.xxxxxx.xxxx.hbp.notifications_attribution.SparkMain$.$anonfun$run$3$adapted(SparkMain.scala:218)
        at com.xxxxxx.xxxx.BoilerplateSparkMain.withCoba2TempViewInRange(Boilerplate.scala:1354)
        at com.xxxxxx.xxxx.BoilerplateSparkMain.withCoba2TempViewInRange$(Boilerplate.scala:1343)
        at com.xxxxxx.xxxx.hbp.notifications_attribution.SparkMain$.withCoba2TempViewInRange(SparkMain.scala:13)
        at com.xxxxxx.xxxx.hbp.notifications_attribution.SparkMain$.$anonfun$run$2(SparkMain.scala:218)
        at com.xxxxxx.xxxx.hbp.notifications_attribution.SparkMain$.$anonfun$run$2$adapted(SparkMain.scala:202)
        at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:985)
        at scala.collection.immutable.List.foreach(List.scala:431)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:984)
        at com.xxxxxx.xxxx.hbp.notifications_attribution.SparkMain$.run(SparkMain.scala:202)
        at com.xxxxxx.xxxx.BoilerplateSparkMain.main(Boilerplate.scala:2625)
        at com.xxxxxx.xxxx.BoilerplateSparkMain.main$(Boilerplate.scala:2605)
        at com.xxxxxx.xxxx.hbp.notifications_attribution.SparkMain$.main(SparkMain.scala:13)
        at com.xxxxxx.xxxx.hbp.notifications_attribution.SparkMain.main(SparkMain.scala)
        ... 6 more
    Caused by: org.apache.spark.SparkException: Commit denied for partition 186 (task 2992, attempt 4, stage 9.0).
        at org.apache.spark.sql.errors.QueryExecutionErrors$.commitDeniedError(QueryExecutionErrors.scala:929)
        at org.apache.spark.sql.execution.datasources.v2.WritingSparkTask.$anonfun$run$1(WriteToDataSourceV2Exec.scala:485)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1563)
        at org.apache.spark.sql.execution.datasources.v2.WritingSparkTask.run(WriteToDataSourceV2Exec.scala:509)
        at org.apache.spark.sql.execution.datasources.v2.WritingSparkTask.run$(WriteToDataSourceV2Exec.scala:448)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:514)
        at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:411)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
        at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
        at org.apache.spark.scheduler.Task.run(Task.scala:139)
        at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
   ```


