anuragmantri commented on issue #5069:
URL: https://github.com/apache/iceberg/issues/5069#issuecomment-2368871238

   There is a very old Spark JIRA, 
[SPARK-19842](https://issues.apache.org/jira/browse/SPARK-19842), for adding 
unenforced (informational) referential integrity constraints. That work is 
still pending, so we could check with the Spark community on its status. As 
@kris-sea mentioned, even if Spark adds native support, the constraints will 
not be enforced, so records written through Spark can still contain duplicate 
keys.
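
To make the "unenforced" point concrete, here is a minimal sketch (not from 
this thread) of how identifier fields behave today when writing through Spark. 
The catalog name, warehouse path, and table name are illustrative assumptions; 
the point is that the duplicate insert succeeds rather than being rejected.

```scala
// Minimal sketch. Assumptions: a local Hadoop catalog named `local` and the
// Iceberg Spark SQL extensions enabled; names and paths are illustrative.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("unenforced-keys-sketch")
  .config("spark.sql.extensions",
    "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
  .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
  .config("spark.sql.catalog.local.type", "hadoop")
  .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg-warehouse")
  .getOrCreate()

spark.sql("CREATE NAMESPACE IF NOT EXISTS local.db")
spark.sql("CREATE TABLE local.db.events (id BIGINT NOT NULL, data STRING) USING iceberg")

// Mark `id` as an identifier field: this is schema metadata, not an enforced
// primary key, so it does not reject conflicting writes.
spark.sql("ALTER TABLE local.db.events SET IDENTIFIER FIELDS id")

// Both rows are written; nothing in the Spark write path checks uniqueness.
spark.sql("INSERT INTO local.db.events VALUES (1, 'a'), (1, 'b')")
spark.sql("SELECT * FROM local.db.events").show()
```

Given that behavior, deduplication would still have to be handled by the 
writer (for example via MERGE INTO) or by a downstream maintenance job, 
regardless of any constraint syntax Spark eventually adds.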


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
