pvary commented on issue #10360:
URL: https://github.com/apache/iceberg/issues/10360#issuecomment-2121152930
Currently the Flink Sink is not able to handle schema updates; you need to restart the job to pick up schema changes. One hacky solution is to throw a `SuppressRestartsException` and use an external tool, like the Flink Kubernetes Operator, to restart the failed job (`kubernetes.operator.job.restart.failed` does this for you). This recreates the job graph and updates the sink schema. Is this the feature that you are looking for?
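
For illustration, here is a minimal sketch of that workaround: a guard operator placed upstream of the Iceberg sink that fails the job terminally when the incoming schema drifts from the schema the job graph was built with. This is only an assumption of how you might wire it up: the `SchemaAware` record contract and the `SchemaChangeGuard` class are hypothetical names, while `SuppressRestartsException` comes from `flink-runtime` and the operator-side restart relies on `kubernetes.operator.job.restart.failed: true`.

```java
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.runtime.execution.SuppressRestartsException;
import org.apache.iceberg.Schema;

/**
 * Illustrative guard placed upstream of the Iceberg sink: it compares each
 * record's schema against the schema the job graph was built with and fails
 * the job terminally when they diverge.
 */
public class SchemaChangeGuard<T extends SchemaChangeGuard.SchemaAware>
    implements MapFunction<T, T> {

  /** Placeholder contract: your record type is assumed to expose its own schema. */
  public interface SchemaAware {
    Schema schema();
  }

  // Schema the Iceberg sink was created with (org.apache.iceberg.Schema is Serializable).
  private final Schema sinkSchema;

  public SchemaChangeGuard(Schema sinkSchema) {
    this.sinkSchema = sinkSchema;
  }

  @Override
  public T map(T record) {
    if (!sinkSchema.sameSchema(record.schema())) {
      // SuppressRestartsException bypasses Flink's own restart strategy, so the job
      // fails for good instead of being restarted with the stale job graph. An external
      // tool (e.g. the Flink Kubernetes Operator with kubernetes.operator.job.restart.failed
      // set to true) then redeploys the job, which rebuilds the job graph and picks up
      // the new table schema.
      throw new SuppressRestartsException(
          new IllegalStateException(
              "Incoming schema no longer matches the sink schema; "
                  + "failing so the job can be restarted externally"));
    }
    return record;
  }
}
```

You would apply this map right before building the Iceberg sink; the exact place where you detect the schema change (and how your records expose their schema) depends on your pipeline.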