Souldiv opened a new issue, #12719:
URL: https://github.com/apache/iceberg/issues/12719

   ### Query engine
   
   Trino
   
   ### Question
   
   I have been experimenting with Kafka Connect and Iceberg, using HMS as the catalog. I followed the Iceberg 1.8.1 Kafka Connect documentation for the single-destination-table example. After creating the table with Trino, I enabled the sink; it reported the task as running, but once I pushed data into the topic it failed with the following error:
   
   ```
   Apr 04 03:04:30 hudi connect-distributed[702929]: [2025-04-04 03:04:30,835] 
ERROR [events-sink|task-0] WorkerSinkTask{id=events-sink-0} Task threw an 
uncaught and unrecoverable exception. Task is being killed and will not recover 
until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:237)
   Apr 04 03:04:30 hudi connect-distributed[702929]: 
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to 
unrecoverable exception.
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:628)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:340)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:238)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:207)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:229)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:284)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
java.base/java.lang.Thread.run(Thread.java:840)
   Apr 04 03:04:30 hudi connect-distributed[702929]: Caused by: 
java.lang.RuntimeException: Failed to get table info from metastore 
default.events
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:160)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:87)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:70)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:49)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.connect.data.IcebergWriterFactory.createWriter(IcebergWriterFactory.java:59)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.connect.data.SinkWriter.lambda$writerForTable$3(SinkWriter.java:139)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
java.base/java.util.HashMap.computeIfAbsent(HashMap.java:1220)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.connect.data.SinkWriter.writerForTable(SinkWriter.java:138)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.connect.data.SinkWriter.lambda$routeRecordStatically$1(SinkWriter.java:98)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
java.base/java.util.Arrays$ArrayList.forEach(Arrays.java:4204)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.connect.data.SinkWriter.routeRecordStatically(SinkWriter.java:96)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.connect.data.SinkWriter.save(SinkWriter.java:85)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.connect.data.SinkWriter.save(SinkWriter.java:68)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.connect.channel.Worker.save(Worker.java:124)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.connect.channel.CommitterImpl.save(CommitterImpl.java:88)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.connect.IcebergSinkTask.put(IcebergSinkTask.java:87)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:593)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         ... 11 more
   Apr 04 03:04:30 hudi connect-distributed[702929]: Caused by: 
org.apache.thrift.TApplicationException: Invalid method name: 'get_table'
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:79)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table(ThriftHiveMetastore.java:1514)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table(ThriftHiveMetastore.java:1500)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1346)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
java.base/java.lang.reflect.Method.invoke(Method.java:569)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:169)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
jdk.proxy4/jdk.proxy4.$Proxy54.getTable(Unknown Source)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.hive.HiveTableOperations.lambda$doRefresh$0(HiveTableOperations.java:147)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:72)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:65)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:122)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         at 
org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:147)
   Apr 04 03:04:30 hudi connect-distributed[702929]:         ... 28 more
   ```
   This is my sink config for the `events` topic:
   ```json
   {
     "name": "events-sink",
     "config": {
       "iceberg.catalog.uri": "thrift://localhost:9083",
       "iceberg.catalog.type": "hive",
       "iceberg.hadoop-conf-dir": "/home/conuser/files/hadoop-3.4.1/etc/hadoop",
       "name": "events-sink",
       "connector.class": "org.apache.iceberg.connect.IcebergSinkConnector",
       "tasks.max": "2",
       "topics": "events",
       "iceberg.tables": "default.events"
     }
   }
   ```
   How can I resolve this?
   
   Hive version: 4.0.1
   Iceberg version: 1.8.1


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

