flisboac commented on issue #9650:
URL: https://github.com/apache/iceberg/issues/9650#issuecomment-2054163238

   Just to contribute to this report: I can replicate this error with a table that's configured as merge-on-read for inserts and updates, and as copy-on-write for deletes. It's a CDC table that uses MERGE INTO to integrate changes from an external table backed by exported JSON files.
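
   For completeness, this is roughly how those write modes are set on the table. A minimal sketch using the standard Iceberg write-mode table properties and the Java API; the catalog handle, class name, and table identifier below are placeholders, not our real names:

   ```java
   import org.apache.iceberg.Table;
   import org.apache.iceberg.catalog.Catalog;
   import org.apache.iceberg.catalog.TableIdentifier;

   public class ConfigureCdcWriteModes {
     // Placeholder identifiers; the property values mirror our table's setup.
     public static void configure(Catalog catalog) {
       Table table = catalog.loadTable(TableIdentifier.of("redacted_db", "redacted_table"));
       table
           .updateProperties()
           .set("write.update.mode", "merge-on-read") // MOR for updates
           .set("write.merge.mode", "merge-on-read")  // MOR for MERGE INTO
           .set("write.delete.mode", "copy-on-write") // COW for deletes
           .commit();
     }
   }
   ```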
   
   In this CDC table, we have a timestamp column `dt_geracao` that records when a specific item was changed. A simple `max(dt_geracao)` is enough to trigger the error. Curiously, it only happens when the query runs under Spark inside an EMR cluster, which corroborates @nastra's rationale; the same query doesn't break when executed in Athena (which, as far as I know, runs on a different engine, Trino).
   
   Luckily, this is not an essential part of our pipeline, but it's a worrisome 
error regardless. What could be causing it?
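
   For context on the exception itself: the JDK throws `Comparison method violates its general contract!` when TimSort detects that the supplied `Comparator` is not a consistent total order (for example, not transitive). I have no visibility into what comparator `HeadTailBinPackingAMZN` applies to the scan tasks, but that kind of contract violation is easy to reproduce in isolation. Here's a minimal, hypothetical Java sketch that typically fails with the same message:

   ```java
   import java.util.ArrayList;
   import java.util.Collections;
   import java.util.List;
   import java.util.Random;

   public class BrokenComparatorDemo {
     public static void main(String[] args) {
       List<Long> values = new ArrayList<>();
       Random random = new Random(42);
       for (int i = 0; i < 100_000; i++) {
         values.add(random.nextLong());
       }
       // Broken on purpose: the subtraction overflows for distant values, so the
       // sign of the result no longer defines a transitive order. TimSort notices
       // the inconsistency while merging runs and usually throws
       // IllegalArgumentException("Comparison method violates its general contract!").
       // The correct comparator would be Long::compare.
       Collections.sort(values, (a, b) -> (int) (a - b));
     }
   }
   ```

   If the EMR-specific bin packing sorts its scan tasks with a comparator along those lines, that would also explain why only Spark on EMR is affected.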
   
   Below is the stack trace from a simple `max` query executed in `spark-sql`:
   
   ```text
   24/04/14 19:23:43 ERROR SparkSQLDriver: Failed in [select max(dt_geracao) from REDACTED_DB.REDACTED_TABLE]
   java.lang.IllegalArgumentException: Comparison method violates its general contract!
           at java.util.TimSort.mergeHi(TimSort.java:903) ~[?:?]
           at java.util.TimSort.mergeAt(TimSort.java:520) ~[?:?]
           at java.util.TimSort.mergeCollapse(TimSort.java:448) ~[?:?]
           at java.util.TimSort.sort(TimSort.java:245) ~[?:?]
           at java.util.Arrays.sort(Arrays.java:1307) ~[?:?]
           at java.util.ArrayList.sort(ArrayList.java:1721) ~[?:?]
           at java.util.Collections.sort(Collections.java:179) ~[?:?]
           at org.apache.iceberg.util.HeadTailBinPackingAMZN$PackingIterator.<init>(HeadTailBinPackingAMZN.java:125) ~[iceberg-spark-runtime-3.5_2.12-1.4.2-amzn-0.jar:?]
           at org.apache.iceberg.util.HeadTailBinPackingAMZN$PackingIterator.<init>(HeadTailBinPackingAMZN.java:93) ~[iceberg-spark-runtime-3.5_2.12-1.4.2-amzn-0.jar:?]
           at org.apache.iceberg.util.HeadTailBinPackingAMZN$PackingIterable.iterator(HeadTailBinPackingAMZN.java:89) ~[iceberg-spark-runtime-3.5_2.12-1.4.2-amzn-0.jar:?]
           at org.apache.iceberg.io.CloseableIterable$2.iterator(CloseableIterable.java:72) ~[iceberg-spark-runtime-3.5_2.12-1.4.2-amzn-0.jar:?]
           at org.apache.iceberg.io.CloseableIterable$7$1.<init>(CloseableIterable.java:188) ~[iceberg-spark-runtime-3.5_2.12-1.4.2-amzn-0.jar:?]
           at org.apache.iceberg.io.CloseableIterable$7.iterator(CloseableIterable.java:187) ~[iceberg-spark-runtime-3.5_2.12-1.4.2-amzn-0.jar:?]
           at org.apache.iceberg.io.CloseableIterable$7.iterator(CloseableIterable.java:179) ~[iceberg-spark-runtime-3.5_2.12-1.4.2-amzn-0.jar:?]
           at org.apache.iceberg.relocated.com.google.common.collect.Lists.newArrayList(Lists.java:132) ~[iceberg-spark-runtime-3.5_2.12-1.4.2-amzn-0.jar:?]
           at org.apache.iceberg.spark.source.SparkPartitioningAwareScan.taskGroups(SparkPartitioningAwareScan.java:210) ~[iceberg-spark-runtime-3.5_2.12-1.4.2-amzn-0.jar:?]
           at org.apache.iceberg.spark.source.SparkPartitioningAwareScan.outputPartitioning(SparkPartitioningAwareScan.java:106) ~[iceberg-spark-runtime-3.5_2.12-1.4.2-amzn-0.jar:?]
           at org.apache.spark.sql.execution.datasources.v2.V2ScanPartitioningAndOrdering$$anonfun$partitioning$1.applyOrElse(V2ScanPartitioningAndOrdering.scala:44) ~[spark-sql_2.12-3.5.0-amzn-0.jar:?]
           at org.apache.spark.sql.execution.datasources.v2.V2ScanPartitioningAndOrdering$$anonfun$partitioning$1.applyOrElse(V2ScanPartitioningAndOrdering.scala:42) ~[spark-sql_2.12-3.5.0-amzn-0.jar:?]
           at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:503) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:76) ~[spark-sql-api_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:503) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:33) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:33) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:33) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$3(TreeNode.scala:508) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren(TreeNode.scala:1288) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren$(TreeNode.scala:1287) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.plans.logical.Aggregate.mapChildren(basicLogicalOperators.scala:1426) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:508) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:33) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:33) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:33) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:479) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.datasources.v2.V2ScanPartitioningAndOrdering$.partitioning(V2ScanPartitioningAndOrdering.scala:42) ~[spark-sql_2.12-3.5.0-amzn-0.jar:?]
           at org.apache.spark.sql.execution.datasources.v2.V2ScanPartitioningAndOrdering$.$anonfun$apply$1(V2ScanPartitioningAndOrdering.scala:35) ~[spark-sql_2.12-3.5.0-amzn-0.jar:?]
           at org.apache.spark.sql.execution.datasources.v2.V2ScanPartitioningAndOrdering$.$anonfun$apply$3(V2ScanPartitioningAndOrdering.scala:38) ~[spark-sql_2.12-3.5.0-amzn-0.jar:?]
           at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126) ~[scala-library-2.12.17.jar:?]
           at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122) ~[scala-library-2.12.17.jar:?]
           at scala.collection.immutable.List.foldLeft(List.scala:91) ~[scala-library-2.12.17.jar:?]
           at org.apache.spark.sql.execution.datasources.v2.V2ScanPartitioningAndOrdering$.apply(V2ScanPartitioningAndOrdering.scala:37) ~[spark-sql_2.12-3.5.0-amzn-0.jar:?]
           at org.apache.spark.sql.execution.datasources.v2.V2ScanPartitioningAndOrdering$.apply(V2ScanPartitioningAndOrdering.scala:33) ~[spark-sql_2.12-3.5.0-amzn-0.jar:?]
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:239) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126) ~[scala-library-2.12.17.jar:?]
           at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122) ~[scala-library-2.12.17.jar:?]
           at scala.collection.immutable.List.foldLeft(List.scala:91) ~[scala-library-2.12.17.jar:?]
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeBatch$1(RuleExecutor.scala:236) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$6(RuleExecutor.scala:319) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23) ~[scala-library-2.12.17.jar:?]
           at org.apache.spark.sql.catalyst.rules.RuleExecutor$RuleExecutionContext$.withContext(RuleExecutor.scala:368) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$5(RuleExecutor.scala:319) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$5$adapted(RuleExecutor.scala:309) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at scala.collection.immutable.List.foreach(List.scala:431) ~[scala-library-2.12.17.jar:?]
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:309) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:195) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.optimizer.BaseOptimizer.super$execute(BaseOptimizer.scala:28) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.optimizer.BaseOptimizer.$anonfun$execute$1(BaseOptimizer.scala:28) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.optimizer.OptimizationContext$.withOptimizationContext(OptimizationContext.scala:80) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.optimizer.BaseOptimizer.execute(BaseOptimizer.scala:28) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.optimizer.BaseOptimizer.execute(BaseOptimizer.scala:23) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:191) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:182) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:108) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:182) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.QueryExecution.$anonfun$optimizedPlan$1(QueryExecution.scala:161) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:219) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:256) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:625) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:256) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:255) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:157) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:153) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.QueryExecution.$anonfun$writePlans$4(QueryExecution.scala:340) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.plans.QueryPlan$.append(QueryPlan.scala:716) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.QueryExecution.writePlans(QueryExecution.scala:340) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:357) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$explainString(QueryExecution.scala:311) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.QueryExecution.explainString(QueryExecution.scala:289) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:121) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$9(SQLExecution.scala:165) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:108) ~[spark-catalyst_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:255) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$8(SQLExecution.scala:165) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:276) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:164) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:70) ~[spark-sql_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:76) ~[spark-hive-thriftserver_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:501) ~[spark-hive-thriftserver_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1(SparkSQLCLIDriver.scala:619) ~[spark-hive-thriftserver_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1$adapted(SparkSQLCLIDriver.scala:613) ~[spark-hive-thriftserver_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at scala.collection.Iterator.foreach(Iterator.scala:943) [scala-library-2.12.17.jar:?]
           at scala.collection.Iterator.foreach$(Iterator.scala:943) [scala-library-2.12.17.jar:?]
           at scala.collection.AbstractIterator.foreach(Iterator.scala:1431) [scala-library-2.12.17.jar:?]
           at scala.collection.IterableLike.foreach(IterableLike.scala:74) [scala-library-2.12.17.jar:?]
           at scala.collection.IterableLike.foreach$(IterableLike.scala:73) [scala-library-2.12.17.jar:?]
           at scala.collection.AbstractIterable.foreach(Iterable.scala:56) [scala-library-2.12.17.jar:?]
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processLine(SparkSQLCLIDriver.scala:613) [spark-hive-thriftserver_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:310) [spark-hive-thriftserver_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala) [spark-hive-thriftserver_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
           at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[?:?]
           at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
           at java.lang.reflect.Method.invoke(Method.java:568) ~[?:?]
           at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52) [spark-core_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:1075) [spark-core_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:194) [spark-core_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:217) [spark-core_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:91) [spark-core_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1167) [spark-core_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1176) [spark-core_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
           at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) [spark-core_2.12-3.5.0-amzn-0.jar:3.5.0-amzn-0]
   ```
   
   

