lk-1984 opened a new issue, #12557:
URL: https://github.com/apache/iceberg/issues/12557

   ### Apache Iceberg version
   
   1.8.1 (latest release)
   
   ### Query engine
   
   None
   
   ### Please describe the bug 🐞
   
   I am running the 1.8.1 Iceberg Kafka Connect Sink, with Iceberg 1.7.1 in 
Trino. When I create an Iceberg table, I cannot write into it with the Iceberg 
Kafka Connect Sink.
   
   The setup uses Hive Metastore and MinIO.
   
   The sink task fails with:
   
   `java.lang.NumberFormatException: For input string: "60s"`
   
   The table's metadata JSON contains nothing that refers to "60s". The Avro 
schema has only a single field, a plain string, and the Avro message content 
does not contain "60s" either.
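   For what it's worth, the bottom of the stack trace shows 
`Configuration.getLong` → `Long.parseLong`, and `Long.parseLong` rejects any 
duration-suffixed value such as "60s", so the string likely comes from a Hadoop 
S3A configuration default rather than from the table or the messages. A minimal 
demonstration of that parse failure:
   
   ```java
   public class ParseCheck {
       public static void main(String[] args) {
           // Hadoop's Configuration.getLong() ultimately delegates to
           // Long.parseLong(), which only accepts plain integers and
           // throws on duration-style values like "60s".
           try {
               Long.parseLong("60s");
               System.out.println("parsed");
           } catch (NumberFormatException e) {
               System.out.println("NumberFormatException: " + e.getMessage());
           }
       }
   }
   ```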
   
   ```
   2025-03-17 20:46:02 [2025-03-17 18:46:02,234] ERROR 
[iceberg-sink-connector|task-0] WorkerSinkTask{id=iceberg-sink-connector-0} 
Task threw an uncaught and unrecoverable exception. Task is being killed and 
will not recover until manually restarted 
(org.apache.kafka.connect.runtime.WorkerTask:234)
   2025-03-17 20:46:02 org.apache.kafka.connect.errors.ConnectException: 
Exiting WorkerSinkTask due to unrecoverable exception.
   2025-03-17 20:46:02     at 
org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:636)
   2025-03-17 20:46:02     at 
org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:345)
   2025-03-17 20:46:02     at 
org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:247)
   2025-03-17 20:46:02     at 
org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:216)
   2025-03-17 20:46:02     at 
org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:226)
   2025-03-17 20:46:02     at 
org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:281)
   2025-03-17 20:46:02     at 
org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:238)
   2025-03-17 20:46:02     at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
   2025-03-17 20:46:02     at 
java.base/java.util.concurrent.FutureTask.run(Unknown Source)
   2025-03-17 20:46:02     at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
   2025-03-17 20:46:02     at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
   2025-03-17 20:46:02     at java.base/java.lang.Thread.run(Unknown Source)
   2025-03-17 20:46:02 Caused by: java.lang.NumberFormatException: For input 
string: "60s"
   2025-03-17 20:46:02     at 
java.base/java.lang.NumberFormatException.forInputString(Unknown Source)
   2025-03-17 20:46:02     at java.base/java.lang.Long.parseLong(Unknown Source)
   2025-03-17 20:46:02     at java.base/java.lang.Long.parseLong(Unknown Source)
   2025-03-17 20:46:02     at 
org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1607)
   2025-03-17 20:46:02     at 
org.apache.hadoop.fs.s3a.S3AUtils.longOption(S3AUtils.java:1024)
   2025-03-17 20:46:02     at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initThreadPools(S3AFileSystem.java:719)
   2025-03-17 20:46:02     at 
org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:498)
   2025-03-17 20:46:02     at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3615)
   2025-03-17 20:46:02     at 
org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:172)
   2025-03-17 20:46:02     at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3716)
   2025-03-17 20:46:02     at 
org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3667)
   2025-03-17 20:46:02     at 
org.apache.hadoop.fs.FileSystem.get(FileSystem.java:557)
   2025-03-17 20:46:02     at 
org.apache.hadoop.fs.Path.getFileSystem(Path.java:366)
   2025-03-17 20:46:02     at org.apache.iceberg.hadoop.Util.getFs(Util.java:56)
   2025-03-17 20:46:02     at 
org.apache.iceberg.hadoop.HadoopInputFile.fromLocation(HadoopInputFile.java:56)
   2025-03-17 20:46:02     at 
org.apache.iceberg.hadoop.HadoopFileIO.newInputFile(HadoopFileIO.java:87)
   2025-03-17 20:46:02     at 
org.apache.iceberg.TableMetadataParser.read(TableMetadataParser.java:275)
   2025-03-17 20:46:02     at 
org.apache.iceberg.BaseMetastoreTableOperations.lambda$refreshFromMetadataLocation$0(BaseMetastoreTableOperations.java:179)
   2025-03-17 20:46:02     at 
org.apache.iceberg.BaseMetastoreTableOperations.lambda$refreshFromMetadataLocation$1(BaseMetastoreTableOperations.java:198)
   2025-03-17 20:46:02     at 
org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:413)
   2025-03-17 20:46:02     at 
org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:219)
   2025-03-17 20:46:02     at 
org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:203)
   2025-03-17 20:46:02     at 
org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:196)
   2025-03-17 20:46:02     at 
org.apache.iceberg.BaseMetastoreTableOperations.refreshFromMetadataLocation(BaseMetastoreTableOperations.java:198)
   2025-03-17 20:46:02     at 
org.apache.iceberg.BaseMetastoreTableOperations.refreshFromMetadataLocation(BaseMetastoreTableOperations.java:175)
   2025-03-17 20:46:02     at 
org.apache.iceberg.BaseMetastoreTableOperations.refreshFromMetadataLocation(BaseMetastoreTableOperations.java:170)
   2025-03-17 20:46:02     at 
org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:167)
   2025-03-17 20:46:02     at 
org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:87)
   2025-03-17 20:46:02     at 
org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:70)
   2025-03-17 20:46:02     at 
org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:49)
   2025-03-17 20:46:02     at 
org.apache.iceberg.connect.data.IcebergWriterFactory.createWriter(IcebergWriterFactory.java:59)
   2025-03-17 20:46:02     at 
org.apache.iceberg.connect.data.SinkWriter.lambda$writerForTable$3(SinkWriter.java:139)
   2025-03-17 20:46:02     at 
java.base/java.util.HashMap.computeIfAbsent(Unknown Source)
   2025-03-17 20:46:02     at 
org.apache.iceberg.connect.data.SinkWriter.writerForTable(SinkWriter.java:138)
   2025-03-17 20:46:02     at 
org.apache.iceberg.connect.data.SinkWriter.lambda$routeRecordStatically$1(SinkWriter.java:98)
   2025-03-17 20:46:02     at 
java.base/java.util.Arrays$ArrayList.forEach(Unknown Source)
   2025-03-17 20:46:02     at 
org.apache.iceberg.connect.data.SinkWriter.routeRecordStatically(SinkWriter.java:96)
   2025-03-17 20:46:02     at 
org.apache.iceberg.connect.data.SinkWriter.save(SinkWriter.java:85)
   2025-03-17 20:46:02     at java.base/java.util.ArrayList.forEach(Unknown 
Source)
   2025-03-17 20:46:02     at 
org.apache.iceberg.connect.data.SinkWriter.save(SinkWriter.java:68)
   2025-03-17 20:46:02     at 
org.apache.iceberg.connect.channel.Worker.save(Worker.java:124)
   2025-03-17 20:46:02     at 
org.apache.iceberg.connect.channel.CommitterImpl.save(CommitterImpl.java:88)
   2025-03-17 20:46:02     at 
org.apache.iceberg.connect.IcebergSinkTask.put(IcebergSinkTask.java:87)
   2025-03-17 20:46:02     at 
org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:606)
   2025-03-17 20:46:02     ... 11 more
   ```
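   A possible workaround, offered only as a guess: since the trace points at 
`S3AFileSystem.initThreadPools` calling `S3AUtils.longOption`, the failing key 
may be an S3A thread-pool option such as `fs.s3a.threads.keepalivetime`, whose 
default switched to a duration string in newer Hadoop releases while older 
`hadoop-aws` code still parses it as a plain long. If the sink's 
`iceberg.hadoop.*` pass-through properties apply here (an assumption, not 
verified against this setup), pinning the value to a bare number might avoid 
the parse error:
   
   ```json
   {
     "iceberg.hadoop.fs.s3a.threads.keepalivetime": "60"
   }
   ```
   
   Both the property name and the pass-through mechanism are hypotheses; a 
mismatch between the `hadoop-common` and `hadoop-aws` versions on the connector 
classpath would also be worth checking.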
   
   ### Willingness to contribute
   
   - [ ] I can contribute a fix for this bug independently
   - [ ] I would be willing to contribute a fix for this bug with guidance from 
the Iceberg community
   - [ ] I cannot contribute a fix for this bug at this time

