Akshay-connectwise commented on issue #2040:
URL: https://github.com/apache/iceberg/issues/2040#issuecomment-1420569509

   Hi @RussellSpitzer,
   I have tried setting the table property, but it didn't work for me. Below are the table creation query and the error I get when there is an extra column "comment". This is using PySpark in a Jupyter notebook.
   -->spark.sql("CREATE TABLE iceberg.employee ( \
        firstname string, \
        middlename string, \
        lastname string, \
        id string, \
        gender string, \
        salary bigint ) \
   USING iceberg \
   TBLPROPERTIES ('SPARK_WRITE_ACCEPT_ANY_SCHEMA'='write.spark.accept-any-schema')")
   
   Error:
   AnalysisException:
   Cannot write to 'local_schema_evo.iceberg.employee', too many data columns:
   Table columns: 'firstname', 'middlename', 'lastname', 'id', 'gender', 'salary'
   Data columns: 'firstname', 'middlename', 'lastname', 'id', 'gender', 'salary', 'comment'
   
   When I check the table properties, this property shows up as set, but writes still fail. Please suggest whether there is any other way to evolve the schema without using ALTER queries.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.