RussellSpitzer commented on code in PR #12931:
URL: https://github.com/apache/iceberg/pull/12931#discussion_r2067366792
##########
docs/docs/spark-configuration.md:
##########
@@ -145,6 +145,59 @@ Using those SQL commands requires adding Iceberg extensions to your Spark enviro

 ## Runtime configuration

+### Precedence of Configuration Settings
+Iceberg allows configurations to be specified at different levels. The effective configuration for a read or write operation is determined based on the following order of precedence:
+
+1. Read/Write Options – Explicitly passed to `.option(...)` in a read/write operation.
+
+2. Table Properties – Defined on the Iceberg table via `ALTER TABLE SET TBLPROPERTIES`.
+
+3. Spark SQL Configurations – Set globally in Spark via `spark.conf.set(...)`, `spark-defaults.conf`, or `--conf` in spark-submit.

Review Comment:
   Technically these are not all Spark SQL configurations; they are Spark session configurations. The first sets the Spark session conf; the second sets the Spark context configuration, but the session inherits this; and the last also sets the Spark context configuration. I would probably just change "Spark SQL Configuration" to "Spark Session Configuration".
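To make the precedence concrete, here is a minimal Spark Scala sketch of the three levels described in the quoted docs. The catalog name `local`, the warehouse path, the table `local.db.events`, and the DataFrame `df` are placeholders, not anything from the PR; the write option `write-format` and the table property `write.format.default` are the standard Iceberg names for the per-write and per-table file format settings.

```scala
// Sketch of the three configuration levels, lowest to highest precedence.
// Assumes the Iceberg Spark runtime is on the classpath and that `df` is an
// existing DataFrame whose schema matches the target table.

// 3. Spark session configuration: register a (hypothetical) Hadoop catalog
//    named "local". These settings are more commonly passed at launch time
//    via spark-defaults.conf or --conf.
spark.conf.set("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
spark.conf.set("spark.sql.catalog.local.type", "hadoop")
spark.conf.set("spark.sql.catalog.local.warehouse", "/tmp/warehouse")

// 2. Table property: set the table-level default write format.
spark.sql("ALTER TABLE local.db.events SET TBLPROPERTIES ('write.format.default' = 'parquet')")

// 1. Write option: overrides the table property for this write only.
df.writeTo("local.db.events")
  .option("write-format", "avro")
  .append()
```

Reads follow the same ordering, with options passed via `spark.read.option(...)` taking precedence over table properties and session configuration.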