gaocho commented on issue #3640: URL: https://github.com/apache/polaris/issues/3640#issuecomment-3885896648
> Hi [@gaocho](https://github.com/gaocho), I once read an issue saying that, to bypass Credential Vending, you must store the S3 keys in the environment of the query engine (Spark, Trino, ...). So, if you have applied the config:
>
> ```
> stsUnavailable: true
> ```
>
> then add the S3 configuration to the Spark environment:
>
> ```yaml
> spark-jupyter:
>   build: .
>   container_name: polaris-spark-jupyter
>   depends_on:
>     polaris-setup:
>       condition: service_completed_successfully
>   environment:
>     AWS_REGION: "us-east-1"
>     AWS_ACCESS_KEY_ID: "abcd"
>     AWS_SECRET_ACCESS_KEY: "abcd"
>   ports:
>     - "8888:8888"
>     - "4040:4040"
>   volumes:
>     - ./notebooks:/home/jovyan/work
>   healthcheck:
>     test: "curl localhost:8888"
>     interval: 5s
>     retries: 15
>   networks:
>     - polaris_net
> ```

Hey @tuanit03, thanks for the suggestion. I'm not running Polaris in Docker, but I did set the S3 creds/region in the Spark runtime environment (`AWS_REGION` / `AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY`) and also in the Spark config (`spark.hadoop.fs.s3a.*`) to match the NetApp endpoint.

Even with that in place (and with the Polaris catalog showing `storageConfigInfo.endpoint` set correctly plus `stsUnavailable=true`), `CREATE NAMESPACE` works, but `CREATE TABLE` fails with:

```
ForbiddenException: The AWS Access Key Id you provided does not exist in our records (403)
```

So it looks like during table creation Polaris/Iceberg is still using some vended/sub-scoped credential (or otherwise not the static key I configured in Spark), and StorageGRID rejects it since there's no STS on the NetApp side.
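For concreteness, this is roughly how my Spark session is set up. It's only a sketch: the endpoint, keys, catalog name, warehouse, namespace/table names and Polaris URI are placeholders rather than my real values, and the Iceberg Spark runtime and hadoop-aws jars are assumed to already be on the classpath.

```python
import os
from pyspark.sql import SparkSession

# Static StorageGRID credentials/region in the Spark runtime environment
os.environ["AWS_REGION"] = "us-east-1"
os.environ["AWS_ACCESS_KEY_ID"] = "<storagegrid-access-key>"
os.environ["AWS_SECRET_ACCESS_KEY"] = "<storagegrid-secret-key>"

spark = (
    SparkSession.builder.appName("polaris-storagegrid")
    # Iceberg REST catalog pointing at Polaris
    .config("spark.sql.catalog.polaris", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.polaris.type", "rest")
    .config("spark.sql.catalog.polaris.uri", "http://<polaris-host>:8181/api/catalog")
    .config("spark.sql.catalog.polaris.warehouse", "<catalog-name>")
    .config("spark.sql.catalog.polaris.credential", "<client-id>:<client-secret>")
    .config("spark.sql.catalog.polaris.scope", "PRINCIPAL_ROLE:ALL")
    # Same static key and endpoint in the Hadoop S3A config, pointed at StorageGRID
    .config("spark.hadoop.fs.s3a.endpoint", "https://<storagegrid-endpoint>")
    .config("spark.hadoop.fs.s3a.access.key", "<storagegrid-access-key>")
    .config("spark.hadoop.fs.s3a.secret.key", "<storagegrid-secret-key>")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

# CREATE NAMESPACE succeeds with this setup; CREATE TABLE is what returns the 403 above.
spark.sql("CREATE NAMESPACE IF NOT EXISTS polaris.demo_ns")
spark.sql("CREATE TABLE polaris.demo_ns.demo_tbl (id BIGINT) USING iceberg")
```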

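And for reference, this is how I checked the catalog's storage configuration through the Polaris management API. Again just a sketch: host, token, catalog name and bucket are placeholders, and the printed output below is paraphrased from what I see rather than copied verbatim.

```python
import requests

# Inspect the catalog's storage config through the Polaris management API
resp = requests.get(
    "http://<polaris-host>:8181/api/management/v1/catalogs/<catalog-name>",
    headers={"Authorization": "Bearer <access-token>"},
)
print(resp.json()["storageConfigInfo"])
# Roughly:
# {'storageType': 'S3',
#  'allowedLocations': ['s3://<bucket>/<prefix>/'],
#  'endpoint': 'https://<storagegrid-endpoint>',
#  'stsUnavailable': True,
#  ...}
```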