snazy opened a new issue, #3853:
URL: https://github.com/apache/polaris/issues/3853

   ### Describe the bug
   
When running the guides integration tests (#3553) locally, the tests for Ceph and Ozone fail during the Spark SQL shell execution with `IllegalArgumentException: Credential vending was requested for table ns.t1, but no credentials are available`.
   
   Timing (or something similar) seems to matter here: I can reproduce the issue locally almost every time, yet it does not happen in GitHub workflows and others do not seem to run into it.
   
   Adding `--conf spark.sql.catalog.polaris.header.X-Iceberg-Access-Delegation=""` to the Spark SQL shell invocation makes the tests pass, but that appears to be a workaround rather than a proper fix.
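
   For reference, a sketch of a spark-sql invocation with the workaround applied. The catalog name `polaris` matches the SQL script below, but the URI, credentials, and warehouse values are placeholders, not the exact settings used by the guides integration tests:
   ```shell
   # Hypothetical spark-sql invocation; URI, credential, and warehouse
   # values are placeholders, not the actual guide test configuration.
   spark-sql \
     --conf spark.sql.catalog.polaris=org.apache.iceberg.spark.SparkCatalog \
     --conf spark.sql.catalog.polaris.type=rest \
     --conf spark.sql.catalog.polaris.uri=http://localhost:8181/api/catalog \
     --conf spark.sql.catalog.polaris.credential=<client-id>:<client-secret> \
     --conf spark.sql.catalog.polaris.warehouse=quickstart_catalog \
     --conf "spark.sql.catalog.polaris.header.X-Iceberg-Access-Delegation=" \
     -f script.sql
   ```
   Setting the header to an empty value effectively tells the catalog not to request vended credentials, which is why the failure disappears.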
   
   The issue happens both with Docker images built locally from a couple of commits on `main` and with the released images for 1.3.0-incubating (aka `apache/polaris:latest` as of today) pulled from Docker Hub.
   
   More specifically, the issue happens during the execution of
   ```sql
   CREATE TABLE ns.t1 AS SELECT 'abc';
   ```
   of the SQL script
   ```sql
   USE polaris;
   
   CREATE NAMESPACE ns;
   
   CREATE TABLE ns.t1 AS SELECT 'abc';
   
   SELECT * FROM ns.t1;
   ```
   It also fails for `CREATE TABLE ns.t1 (something int)`.
   
   Full Spark SQL exception:
   ```
   java.lang.IllegalArgumentException: Credential vending was requested for table ns.t1, but no credentials are available
        at org.apache.iceberg.rest.ErrorHandlers$DefaultErrorHandler.accept(ErrorHandlers.java:232)
        at org.apache.iceberg.rest.ErrorHandlers$TableErrorHandler.accept(ErrorHandlers.java:124)
        at org.apache.iceberg.rest.ErrorHandlers$TableErrorHandler.accept(ErrorHandlers.java:108)
        at org.apache.iceberg.rest.HTTPClient.throwFailure(HTTPClient.java:240)
        at org.apache.iceberg.rest.HTTPClient.execute(HTTPClient.java:336)
        at org.apache.iceberg.rest.HTTPClient.execute(HTTPClient.java:297)
        at org.apache.iceberg.rest.BaseHTTPClient.post(BaseHTTPClient.java:100)
        at org.apache.iceberg.rest.RESTSessionCatalog$Builder.stageCreate(RESTSessionCatalog.java:923)
        at org.apache.iceberg.rest.RESTSessionCatalog$Builder.createTransaction(RESTSessionCatalog.java:801)
        at org.apache.iceberg.CachingCatalog$CachingTableBuilder.createTransaction(CachingCatalog.java:282)
        at org.apache.iceberg.spark.SparkCatalog.stageCreate(SparkCatalog.java:267)
        at org.apache.spark.sql.connector.catalog.StagingTableCatalog.stageCreate(StagingTableCatalog.java:94)
        at org.apache.spark.sql.execution.datasources.v2.AtomicCreateTableAsSelectExec.run(WriteToDataSourceV2Exec.scala:121)
        at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
        at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
        at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:107)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:125)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:201)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:108)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:66)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:107)
        at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:461)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:76)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:461)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:32)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:32)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:437)
        at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:98)
        at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:85)
        at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:83)
        at org.apache.spark.sql.Dataset.<init>(Dataset.scala:220)
        at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
        at org.apache.spark.sql.SparkSession.$anonfun$sql$4(SparkSession.scala:691)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:682)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:713)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:744)
        at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:651)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:68)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:501)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1(SparkSQLCLIDriver.scala:619)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1$adapted(SparkSQLCLIDriver.scala:613)
        at scala.collection.Iterator.foreach(Iterator.scala:943)
        at scala.collection.Iterator.foreach$(Iterator.scala:943)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
        at scala.collection.IterableLike.foreach(IterableLike.scala:74)
        at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processLine(SparkSQLCLIDriver.scala:613)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
        at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:474)
        at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:490)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:229)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:75)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:52)
        at java.base/java.lang.reflect.Method.invoke(Method.java:580)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:1029)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:194)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:217)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:91)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1120)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1129)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   ```
   
   The relevant output from Polaris:
   ```
   polaris-1        | 2026-02-20T14:22:46+01:00 2026-02-20 13:22:46,347 INFO  [org.apa.pol.ser.cat.ice.IcebergCatalogHandler] [7a55cc90-0186-4274-bc4b-495012d0af42_0000000000000000010,POLARIS] [,,,] (executor-thread-1) Initializing non-federated catalog
   polaris-1        | 2026-02-20T14:22:46+01:00 2026-02-20 13:22:46,347 INFO  [org.apa.pol.ser.cat.ice.IcebergCatalogHandler] [7a55cc90-0186-4274-bc4b-495012d0af42_0000000000000000010,POLARIS] [,,,] (executor-thread-1) Catalog type: INTERNAL
   polaris-1        | 2026-02-20T14:22:46+01:00 2026-02-20 13:22:46,347 INFO  [org.apa.pol.ser.cat.ice.IcebergCatalogHandler] [7a55cc90-0186-4274-bc4b-495012d0af42_0000000000000000010,POLARIS] [,,,] (executor-thread-1) allow external catalog credential vending: true
   polaris-1        | 2026-02-20T14:22:46+01:00 2026-02-20 13:22:46,350 INFO  [org.apa.ice.BaseMetastoreCatalog] [7a55cc90-0186-4274-bc4b-495012d0af42_0000000000000000010,POLARIS] [,,,] (executor-thread-1) Table properties set at catalog level through catalog properties: {}
   polaris-1        | 2026-02-20T14:22:46+01:00 2026-02-20 13:22:46,351 INFO  [org.apa.ice.BaseMetastoreCatalog] [7a55cc90-0186-4274-bc4b-495012d0af42_0000000000000000010,POLARIS] [,,,] (executor-thread-1) Table properties enforced at catalog level through catalog properties: {}
   polaris-1        | 2026-02-20T14:22:46+01:00 2026-02-20 13:22:46,372 INFO  [org.apa.pol.ser.exc.IcebergExceptionMapper] [7a55cc90-0186-4274-bc4b-495012d0af42_0000000000000000010,POLARIS] [,,,] (executor-thread-1) Handling runtimeException Credential vending was requested for table ns.t1, but no credentials are available
   polaris-1        | 2026-02-20T14:22:46+01:00 2026-02-20 13:22:46,373 INFO  [io.qua.htt.access-log] [7a55cc90-0186-4274-bc4b-495012d0af42_0000000000000000010,POLARIS] [,,,] (executor-thread-1) 10.89.0.6 - root [20/Feb/2026:13:22:46 +0000] "POST /api/catalog/v1/quickstart_catalog/namespaces/ns/tables HTTP/1.1" 400 151
   ```
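   
   The failing request can likely be replayed directly against the endpoint shown in the access log. A hedged curl sketch for that; the host/port, the `$TOKEN` variable, and the minimal create-table payload are assumptions rather than a capture of what Spark actually sent:
   ```shell
   # Assumed host/port and bearer token; the JSON body is a minimal
   # Iceberg REST create-table request, not the one Spark submitted.
   curl -sS -X POST \
     "http://localhost:8181/api/catalog/v1/quickstart_catalog/namespaces/ns/tables" \
     -H "Authorization: Bearer ${TOKEN}" \
     -H "Content-Type: application/json" \
     -H "X-Iceberg-Access-Delegation: vended-credentials" \
     -d '{
           "name": "t1",
           "schema": {
             "type": "struct",
             "fields": [
               {"id": 1, "name": "something", "type": "int", "required": false}
             ]
           },
           "stage-create": true
         }'
   ```
   If the 400 reproduces with this request but disappears when the `X-Iceberg-Access-Delegation` header is dropped, that would confirm the failure is isolated to the credential-vending path on the server side.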
   
   Related PRs: #3591 + #3744
   
   
   ### To Reproduce
   
   _No response_
   
   ### Actual Behavior
   
   _No response_
   
   ### Expected Behavior
   
   _No response_
   
   ### Additional context
   
   _No response_
   
   ### System information
   
   _No response_

