RussellSpitzer commented on issue #2040:
URL: https://github.com/apache/iceberg/issues/2040#issuecomment-1414466758

   Both Spark and Iceberg run their own checks to determine whether an input schema is valid for writing to a given table. The Spark checks run first and require that every column present in the target table also be present in the DataFrame being written to it. To work around this, a flag can be set that disables Spark's schema-compatibility check so that only the Iceberg check is used. This is done by setting `write.spark.accept-any-schema` in the table properties.
   
   ```
   public static final String SPARK_WRITE_ACCEPT_ANY_SCHEMA = "write.spark.accept-any-schema";
   ```
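   
   As a usage sketch (the catalog name `local`, the table `local.db.events`, and the column layout below are hypothetical, not from this issue), setting the property via Spark SQL and then appending a DataFrame that omits some of the table's columns could look like this:
   
   ```
   import org.apache.spark.sql.Dataset;
   import org.apache.spark.sql.Row;
   import org.apache.spark.sql.SparkSession;
   
   public class AcceptAnySchemaExample {
     public static void main(String[] args) throws Exception {
       SparkSession spark = SparkSession.builder()
           .appName("accept-any-schema-example")
           .getOrCreate();
   
       // Disable Spark's own schema-compatibility check for this table so
       // that only the Iceberg check runs on write.
       spark.sql("ALTER TABLE local.db.events "
           + "SET TBLPROPERTIES ('write.spark.accept-any-schema'='true')");
   
       // This DataFrame may omit columns of the target table; with the
       // property set, Spark no longer rejects it and Iceberg performs the
       // schema validation instead.
       Dataset<Row> df = spark.sql("SELECT 1 AS id, 'click' AS event_type");
       df.writeTo("local.db.events").append();
     }
   }
   ```
   
   Note that the Iceberg check still applies, so a write that omits required (non-null) columns, for example, would still be rejected.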
   
   You may also 



