CalvinKirs opened a new pull request, #51245:
URL: https://github.com/apache/doris/pull/51245

   
   
   ### What problem does this PR solve?
   
When using HadoopCatalog with Kerberos authentication, write operations fail 
during `doCommit` because the underlying FileSystem is accessed without valid 
credentials.
   
   This patch ensures that Kerberos credentials are available during 
commit-time FileSystem operations, allowing data writes to succeed in Kerberos 
environments.
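   The failure mode can be sketched with the standard Hadoop `UserGroupInformation.doAs` pattern: Hadoop RPC reads the caller's login context at call time, so commit-time FileSystem calls must run inside the authenticated user's context or the NameNode rejects them. The example below is a self-contained mock of that pattern (the `ThreadLocal` "login context", `DoAsSketch`, and `metadataFileExists` are illustrative stand-ins, not actual Doris or Hadoop code):

   ```java
import java.util.function.Supplier;

// Minimal mock of the UserGroupInformation.doAs credential-propagation pattern.
// All names here are illustrative; real code would use Hadoop's
// UserGroupInformation.loginUserFromKeytabAndReturnUGI(...) and ugi.doAs(...).
public final class DoAsSketch {
    // Stand-in for the thread's current login context.
    private static final ThreadLocal<String> CURRENT_USER =
            ThreadLocal.withInitial(() -> "anonymous");

    // Mimics UGI.doAs: attach credentials for the duration of the action.
    public static <T> T doAs(String principal, Supplier<T> action) {
        String previous = CURRENT_USER.get();
        CURRENT_USER.set(principal);
        try {
            return action.get();
        } finally {
            CURRENT_USER.set(previous); // restore the previous context
        }
    }

    // Stand-in for a secured NameNode RPC such as getFileInfo().
    public static boolean metadataFileExists() {
        if ("anonymous".equals(CURRENT_USER.get())) {
            throw new SecurityException("Client cannot authenticate via:[TOKEN, KERBEROS]");
        }
        return true;
    }

    public static void main(String[] args) {
        // Without doAs the RPC fails, matching the reported stack trace.
        try {
            metadataFileExists();
        } catch (SecurityException e) {
            System.out.println("unauthenticated: " + e.getMessage());
        }
        // Inside doAs the same call succeeds.
        boolean ok = doAs("doris/hadoop-master@EXAMPLE.COM", DoAsSketch::metadataFileExists);
        System.out.println("authenticated: exists=" + ok);
    }
}
   ```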
   
   
   ```
Caused by: org.apache.iceberg.exceptions.RuntimeIOException: Failed to refresh the table
    at org.apache.iceberg.hadoop.HadoopTableOperations.refresh(HadoopTableOperations.java:128) ~[iceberg-core-1.6.1.jar:?]
    at org.apache.iceberg.Transactions.newTransaction(Transactions.java:63) ~[iceberg-core-1.6.1.jar:?]
    at org.apache.iceberg.BaseTable.newTransaction(BaseTable.java:240) ~[iceberg-core-1.6.1.jar:?]
    at org.apache.doris.datasource.iceberg.IcebergTransaction.beginInsert(IcebergTransaction.java:79) ~[doris-fe.jar:1.2-SNAPSHOT]
    at org.apache.doris.nereids.trees.plans.commands.insert.IcebergInsertExecutor.beforeExec(IcebergInsertExecutor.java:55) ~[doris-fe.jar:1.2-SNAPSHOT]
    at org.apache.doris.nereids.trees.plans.commands.insert.AbstractInsertExecutor.executeSingleInsert(AbstractInsertExecutor.java:198) ~[doris-fe.jar:1.2-SNAPSHOT]
    at org.apache.doris.nereids.trees.plans.commands.insert.InsertIntoTableCommand.runInternal(InsertIntoTableCommand.java:472) ~[doris-fe.jar:1.2-SNAPSHOT]
    at org.apache.doris.nereids.trees.plans.commands.insert.InsertIntoTableCommand.run(InsertIntoTableCommand.java:166) ~[doris-fe.jar:1.2-SNAPSHOT]
    at org.apache.doris.qe.StmtExecutor.executeByNereids(StmtExecutor.java:771) ~[doris-fe.jar:1.2-SNAPSHOT]
    ... 17 more
Caused by: java.io.IOException: DestHost:destPort hadoop-master:8520 , LocalHost:localPort vm-204/172.20.57.204:0. Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
    at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
    at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77) ~[?:?]
    at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:?]
    at java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499) ~[?:?]
    at java.lang.reflect.Constructor.newInstance(Constructor.java:480) ~[?:?]
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:930) ~[hadoop-common-3.3.6.jar:?]
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:905) ~[hadoop-common-3.3.6.jar:?]
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1571) ~[hadoop-common-3.3.6.jar:?]
    at org.apache.hadoop.ipc.Client.call(Client.java:1513) ~[hadoop-common-3.3.6.jar:?]
    at org.apache.hadoop.ipc.Client.call(Client.java:1410) ~[hadoop-common-3.3.6.jar:?]
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258) ~[hadoop-common-3.3.6.jar:?]
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:139) ~[hadoop-common-3.3.6.jar:?]
    at jdk.proxy3.$Proxy162.getFileInfo(Unknown Source) ~[?:?]
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:966) ~[hadoop-hdfs-client-3.3.6.jar:?]
    at jdk.internal.reflect.GeneratedMethodAccessor53.invoke(Unknown Source) ~[?:?]
    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:568) ~[?:?]
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:433) ~[hadoop-common-3.3.6.jar:?]
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166) ~[hadoop-common-3.3.6.jar:?]
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158) ~[hadoop-common-3.3.6.jar:?]
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96) ~[hadoop-common-3.3.6.jar:?]
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362) ~[hadoop-common-3.3.6.jar:?]
    at jdk.proxy3.$Proxy163.getFileInfo(Unknown Source) ~[?:?]
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1739) ~[hadoop-hdfs-client-3.3.6.jar:?]
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1829) ~[hadoop-hdfs-client-3.3.6.jar:?]
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1826) ~[hadoop-hdfs-client-3.3.6.jar:?]
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-3.3.6.jar:?]
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1841) ~[hadoop-hdfs-client-3.3.6.jar:?]
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1862) ~[hadoop-common-3.3.6.jar:?]
    at org.apache.iceberg.hadoop.HadoopTableOperations.getMetadataFile(HadoopTableOperations.java:243) ~[iceberg-core-1.6.1.jar:?]
    at org.apache.iceberg.hadoop.HadoopTableOperations.refresh(HadoopTableOperations.java:108) ~[iceberg-core-1.6.1.jar:?]
    at org.apache.iceberg.Transactions.newTransaction(Transactions.java:63) ~[iceberg-core-1.6.1.jar:?]
    at org.apache.iceberg.BaseTable.newTransaction(BaseTable.java:240) ~[iceberg-core-1.6.1.jar:?]
    at org.apache.doris.datasource.iceberg.IcebergTransaction.beginInsert(IcebergTransaction.java:79) ~[doris-fe.jar:1.2-SNAPSHOT]
    at org.apache.doris.nereids.trees.plans.commands.insert.IcebergInsertExecutor.beforeExec(IcebergInsertExecutor.java:55) ~[doris-fe.jar:1.2-SNAPSHOT]
    at org.apache.doris.nereids.trees.plans.commands.insert.AbstractInsertExecutor.executeSingleInsert(AbstractInsertExecutor.java:198) ~[doris-fe.jar:1.2-SNAPSHOT]
    at org.apache.doris.nereids.trees.plans.commands.insert.InsertIntoTableCommand.runInternal(InsertIntoTableCommand.java:472) ~[doris-fe.jar:1.2-SNAPSHOT]
    at org.apache.doris.nereids.trees.plans.commands.insert.InsertIntoTableCommand.run(InsertIntoTableCommand.java:166) ~[doris-fe.jar:1.2-SNAPSHOT]
    at org.apache.doris.qe.StmtExecutor.executeByNereids(StmtExecutor.java:771) ~[doris-fe.jar:1.2-SNAPSHOT]
    ... 17 more
   ```
   
   ### Release note
   
   None
   
   ### Check List (For Author)
   
   - Test <!-- At least one of them must be included. -->
       - [ ] Regression test
       - [ ] Unit Test
       - [ ] Manual test (add detailed scripts or steps below)
       - [ ] No need to test or manual test. Explain why:
           - [ ] This is a refactor/code format and no logic has been changed.
           - [ ] Previous test can cover this change.
           - [ ] No code files have been changed.
           - [ ] Other reason <!-- Add your reason?  -->
   
   - Behavior changed:
       - [ ] No.
       - [ ] Yes. <!-- Explain the behavior change -->
   
   - Does this need documentation?
       - [ ] No.
       - [ ] Yes. <!-- Add document PR link here. eg: 
https://github.com/apache/doris-website/pull/1214 -->
   
   ### Check List (For Reviewer who merge this PR)
   
   - [ ] Confirm the release note
   - [ ] Confirm test cases
   - [ ] Confirm document
   - [ ] Add branch pick label <!-- Add branch pick label that this PR should 
merge into -->
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
