RussellSpitzer commented on issue #12078:
URL: https://github.com/apache/iceberg/issues/12078#issuecomment-2657726189
Ah, so you are setting a table property in Hive, but then running Expire
Snapshots in Spark, and Spark isn't using the table properties that you set.
Can you check the u
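For reference, retention settings can also be passed to Spark's expire-snapshots procedure directly instead of relying on table properties set elsewhere. A minimal sketch, assuming a Spark session with the Iceberg extensions and a catalog named `spark_catalog`; the timestamp and `retain_last` values are illustrative, and only the table name comes from this thread.

```sql
-- Sketch only: catalog name, timestamp, and retain_last are assumptions.
CALL spark_catalog.system.expire_snapshots(
  table => 'test.iceberg_v1',
  older_than => TIMESTAMP '2025-01-01 00:00:00',
  retain_last => 5
);
```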
cosen-wu commented on issue #12078:
URL: https://github.com/apache/iceberg/issues/12078#issuecomment-2613723439
So sorry, I didn't express myself clearly.
What I'm confused about is why the new properties haven't been updated in
the metadata file after they were added in Hive.
This
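For context, the Hive-side steps being described might look like the following sketch. The exact property is not shown in this thread, so a snapshot-retention property such as `history.expire.max-snapshot-age-ms` is assumed here.

```sql
-- Assumed property name; substitute the property that was actually set.
ALTER TABLE test.iceberg_v1
SET TBLPROPERTIES ('history.expire.max-snapshot-age-ms' = '86400000');

-- Read the properties back to confirm what Hive recorded for the table.
SHOW TBLPROPERTIES test.iceberg_v1;
```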
RussellSpitzer commented on issue #12078:
URL: https://github.com/apache/iceberg/issues/12078#issuecomment-2612815164
I'm not sure what you are asking here.
Are you saying that the table properties are not being set correctly in the
metadata JSON when they are set in Flink?
Or are you saying t
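One way to check this from Spark is to look at the table's metadata log; a property change committed through Iceberg should appear as a new metadata.json entry. A sketch, assuming a Spark catalog named `spark_catalog`.

```sql
-- Each row is a committed metadata.json file; a recent property change
-- should show up as a new entry with a newer timestamp.
SELECT * FROM spark_catalog.test.iceberg_v1.metadata_log_entries;
```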
cosen-wu opened a new issue, #12078:
URL: https://github.com/apache/iceberg/issues/12078
### Query engine
hive,flink,spark
### Question
> Create an Iceberg table in Hive:
```sql
create table test.iceberg_v1 (
  a int,
  b string,
  c string,
  d string
)
partitioned b
```
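For comparison, a complete form of such a DDL in Hive names the partition column, the Iceberg storage handler, and any table properties in one statement. The sketch below is illustrative only: the partition column `dt` is hypothetical, since the original statement is cut off above, and the exact `STORED BY` form depends on the Hive version.

```sql
create table test.iceberg_v1 (
  a int,
  b string,
  c string,
  d string
)
partitioned by (dt string)  -- hypothetical partition column
stored by 'org.apache.iceberg.mr.hive.HiveIcebergStorageHandler'
tblproperties ('format-version' = '1');
```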