2MD opened a new issue, #12130:
URL: https://github.com/apache/iceberg/issues/12130

   ### Apache Iceberg version
   
   1.6.0
   
   ### Query engine
   
   Spark
   
   ### Please describe the bug 🐞
   
   ```
   val icebergVersion = "1.6.0"
   scalaVersion := "2.12.15"
   val sparkVersion = "3.3.2"

   "org.apache.iceberg" %% "iceberg-spark-runtime-3.3" % icebergVersion
   "org.apache.iceberg" % "iceberg-hive-metastore" % icebergVersion
   "org.apache.spark" %% "spark-hive" % sparkVersion % Provided
   ("org.apache.iceberg" % "iceberg-hive-metastore" % icebergVersion % IntegrationTest).classifier("tests")

   .......
   dependencyOverrides ++= Seq(
     "com.fasterxml.jackson.core" % "jackson-annotations" % "2.12.6", // Force shaded version for regular Jackson
     "com.fasterxml.jackson.core" % "jackson-core" % "2.12.6",
     "com.fasterxml.jackson.core" % "jackson-databind" % "2.12.6",
     "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.12.6",
     "org.apache.iceberg.shaded.com.fasterxml.jackson.core" % "jackson-annotations" % "2.12.6",
     "org.apache.iceberg.shaded.com.fasterxml.jackson.core" % "jackson-core" % "2.12.6",
     "org.apache.iceberg.shaded.com.fasterxml.jackson.core" % "jackson-databind" % "2.12.6",
     "org.apache.iceberg.shaded.com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.12.6"
   )

   ...
   val testAssemblySettings = Seq(
     Test / assembly / assemblyMergeStrategy := {
       case PathList("META-INF", "MANIFEST.MF") => MergeStrategy.discard
       case PathList("META-INF", xs @ _*) => MergeStrategy.last
       case n if n.startsWith("org.apache.iceberg.shaded.com.fasterxml.jackson") => MergeStrategy.first
       case x =>
         val oldStrategy = (assembly / assemblyMergeStrategy).value
         oldStrategy(x)
     },

     Test / assembly / fullClasspath := {
       val cp = (Test / fullClasspath).value
       val providedDependencies = update.map(f => f.select(configurationFilter("provided"))).value

       cp.filter { f =>
         !providedDependencies.contains(f.data)
       }
     },
     Test / assembly / assemblyJarName := s"${name.value}-assembly-${version.value}-test.jar"
   )
   ```
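
   In case it is relevant: as far as I understand, `iceberg-spark-runtime-3.3` already bundles the Hive metastore support with Jackson relocated under `org.apache.iceberg.shaded`, so also adding the plain `iceberg-hive-metastore` jar may put two copies of the same Iceberg classes on the test classpath. A throwaway sbt task like the sketch below (the task name is made up) lists which jars actually provide `JsonUtil`:

   ```
   // Sketch of a temporary diagnostic task for build.sbt: list every jar on the Test
   // classpath that contains org.apache.iceberg.util.JsonUtil. If both the bundled
   // runtime jar and a plain iceberg jar show up, two copies of the class are competing.
   lazy val whoHasJsonUtil = taskKey[Unit]("List Test-classpath jars containing JsonUtil")

   whoHasJsonUtil := {
     import java.util.zip.ZipFile
     (Test / fullClasspath).value.map(_.data).filter(_.getName.endsWith(".jar")).foreach { jar =>
       val zip = new ZipFile(jar)
       try {
         if (zip.getEntry("org/apache/iceberg/util/JsonUtil.class") != null)
           println(s"JsonUtil found in ${jar.getName}")
       } finally zip.close()
     }
   }
   ```
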
   Spark 3.3.2 uses Jackson 2.12.6, while Iceberg 1.6.0 **shades** Jackson 2.12.4!
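
   A quick way to double-check which Jackson versions are actually loaded at runtime (just a sketch I would run from the test JVM; the shaded class name is guessed from the relocation prefix used in the overrides above):

   ```
   // Print the version of the regular jackson-databind on the classpath and of the copy
   // relocated by the Iceberg runtime jar (loaded reflectively, since the relocated
   // package only exists when iceberg-spark-runtime is on the classpath).
   import com.fasterxml.jackson.databind.ObjectMapper

   object JacksonVersions extends App {
     println(s"regular jackson-databind: ${new ObjectMapper().version()}")

     val shadedMapper = Class
       .forName("org.apache.iceberg.shaded.com.fasterxml.jackson.databind.ObjectMapper")
       .getDeclaredConstructor()
       .newInstance()
     println(s"shaded jackson-databind: ${shadedMapper.getClass.getMethod("version").invoke(shadedMapper)}")
   }
   ```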
   
   I have an integration test:
   
   ```
   override def beforeAll(): Unit = {
     metastore = new HiveMetastoreTest()
     metastore.start()
     uris = metastore.getMetastoreUris
     spark = SparkSession.builder
       .master("local[*]")
       .config("spark.hadoop.hive.metastore.uris", uris)
       .config("spark.sql.legacy.parquet.nanosAsLong", "false")
       .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
       .config(s"spark.sql.catalog.$icebergCatalog", "org.apache.iceberg.spark.SparkCatalog")
       .config(s"spark.sql.catalog.$icebergCatalog.type", "hive")
       .config("spark.hadoop.fs.s3a.endpoint", minioContainer.getHostAddress)
       .config("spark.hadoop.fs.s3a.access.key", minioContainer.getMinioAccessValue)
       .config("spark.hadoop.fs.s3a.secret.key", minioContainer.getMinioAccessValue)
       .config("spark.hadoop.fs.s3a.path.style.access", "true")
       .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
       .config("spark.sql.iceberg.check-ordering", "false")
       .config("write.metadata.delete-after-commit.enabled", "false")
       .config("spark.sql.iceberg.handle-timestamp-without-timezone", "true")
       .enableHiveSupport()
       .getOrCreate()

     s3.createBucket(bucket)
   }
   ```
   When I run
   > sbt "IntegrationTest/testOnly"
   
   I have a test that appends data to an existing, unpartitioned Iceberg table:
   ```df.writeTo(path).options(options).append()```
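
   For context, the failing step has roughly the following shape (just a sketch; the catalog, table, and column names here are placeholders, not from my real project):

   ```
   // Placeholder repro shape: an existing, unpartitioned Iceberg table in the Hive catalog,
   // followed by a DataSource V2 append. The commit then fails with the stack trace below.
   spark.sql(
     s"CREATE TABLE IF NOT EXISTS $icebergCatalog.db.events (id BIGINT, payload STRING) USING iceberg")

   val df = spark.range(10).selectExpr("id", "cast(id as string) as payload")
   df.writeTo(s"$icebergCatalog.db.events").append()
   ```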
   
   ```
   Exception in thread "stream execution thread for kbzxyspcyd [id = 5e64e67c-a7ae-4816-a977-317cbef0cdea, runId = ff9bfeca-4f23-4f3f-8b13-b02c7396f8cd]" java.lang.NoSuchMethodError: 'com.fasterxml.jackson.databind.ObjectMapper org.apache.iceberg.util.JsonUtil.mapper()'
           at org.apache.iceberg.hive.HiveTableOperations.setSnapshotSummary(HiveTableOperations.java:406)
           at org.apache.iceberg.hive.HiveTableOperations.setSnapshotStats(HiveTableOperations.java:397)
           at org.apache.iceberg.hive.HiveTableOperations.setHmsTableParameters(HiveTableOperations.java:376)
           at org.apache.iceberg.hive.HiveTableOperations.doCommit(HiveTableOperations.java:233)
           at org.apache.iceberg.BaseMetastoreTableOperations.commit(BaseMetastoreTableOperations.java:128)
           at org.apache.iceberg.SnapshotProducer.lambda$commit$2(SnapshotProducer.java:412)
           at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:413)
           at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:219)
           at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:203)
           at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:196)
           at org.apache.iceberg.SnapshotProducer.commit(SnapshotProducer.java:384)
           at org.apache.iceberg.spark.source.SparkWrite.commitOperation(SparkWrite.java:216)
           at org.apache.iceberg.spark.source.SparkWrite.access$1300(SparkWrite.java:83)
           at org.apache.iceberg.spark.source.SparkWrite$BatchAppend.commit(SparkWrite.java:279)
           at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:392)
           at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2$(WriteToDataSourceV2Exec.scala:353)
           at org.apache.spark.sql.execution.datasources.v2.AppendDataExec.writeWithV2(WriteToDataSourceV2Exec.scala:244)
           at org.apache.spark.sql.execution.datasources.v2.V2ExistingTableWriteExec.run(WriteToDataSourceV2Exec.scala:332)
   ```
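
   To me this looks like two copies of the Iceberg classes getting mixed: the `HiveTableOperations` that runs expects `JsonUtil.mapper()` to return the plain `com.fasterxml.jackson.databind.ObjectMapper`, while the `JsonUtil` that actually gets loaded (presumably from the shaded runtime jar) returns the relocated type. A reflective check like the sketch below (run on the same test classpath) should confirm where each class comes from and what `mapper()` is declared to return:

   ```
   // Diagnostic sketch: show which jar each of the conflicting classes is loaded from and
   // the declared return type of JsonUtil.mapper(). A mismatch between the return type and
   // what HiveTableOperations was compiled against would explain the NoSuchMethodError above.
   object WhichIceberg extends App {
     def where(cls: Class[_]): String =
       Option(cls.getProtectionDomain.getCodeSource).map(_.getLocation.toString).getOrElse("<unknown>")

     val jsonUtil = Class.forName("org.apache.iceberg.util.JsonUtil")
     val hiveOps  = Class.forName("org.apache.iceberg.hive.HiveTableOperations")

     println(s"JsonUtil loaded from:            ${where(jsonUtil)}")
     println(s"HiveTableOperations loaded from: ${where(hiveOps)}")
     println(s"JsonUtil.mapper() returns:       ${jsonUtil.getMethod("mapper").getReturnType.getName}")
   }
   ```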
   
   If I update the Iceberg version to 1.7.1, I hit the same error.
   I found https://issues.apache.org/jira/browse/HIVE-28229, which looks like the same bug.
   
   ### Willingness to contribute
   
   - [ ] I can contribute a fix for this bug independently
   - [x] I would be willing to contribute a fix for this bug with guidance from the Iceberg community
   - [ ] I cannot contribute a fix for this bug at this time

