jhchee opened a new issue, #8199:
URL: https://github.com/apache/iceberg/issues/8199

   ### Feature Request / Improvement
   
   Currently, if the insert statement specifies fewer columns than the target table has, the following exception is thrown:
   
   Exception in thread "main" org.apache.spark.sql.AnalysisException: Cannot 
find column 'col_1' of the target table among the INSERT columns: col_2, col_3. 
INSERT clauses must provide values for all columns of the target table.
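   For context, a minimal reproduction might look like the following. The table and column names (`db.target`, `db.source`, `col_1` etc.) are hypothetical, chosen only to match the column names in the exception message:

   ```sql
   -- Hypothetical Iceberg target table with three columns
   CREATE TABLE db.target (col_1 INT, col_2 STRING, col_3 STRING) USING iceberg;

   -- The INSERT clause lists only col_2 and col_3, so Spark raises the
   -- AnalysisException above because col_1 is not provided
   MERGE INTO db.target t
   USING db.source s
   ON t.col_2 = s.col_2
   WHEN NOT MATCHED THEN
     INSERT (col_2, col_3) VALUES (s.col_2, s.col_3);
   ```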
   
   For a wide table with 1000 columns, the user is forced to list every column and fill the unspecified ones with NULL to avoid this exception. Can we support partial inserts in the MERGE INTO command (defaulting unspecified columns to NULL) so developers can keep their SQL statements clean? The workaround and the requested behavior are sketched below.
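   A sketch of today's workaround versus the requested behavior, using the same hypothetical tables as above:

   ```sql
   -- Today's workaround: every target column must be listed explicitly,
   -- with NULL for columns the source does not provide
   MERGE INTO db.target t
   USING db.source s
   ON t.col_2 = s.col_2
   WHEN NOT MATCHED THEN
     INSERT (col_1, col_2, col_3) VALUES (NULL, s.col_2, s.col_3);

   -- Requested: accept the partial column list below, implicitly writing
   -- NULL for col_1 and any other omitted columns
   -- WHEN NOT MATCHED THEN
   --   INSERT (col_2, col_3) VALUES (s.col_2, s.col_3);
   ```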
   
   E.g., the Delta Lake MERGE command already supports this:
   https://docs.databricks.com/sql/language-manual/delta-merge-into.html
   
   ### Query engine
   
   Spark

