linhr commented on issue #14529:
URL: https://github.com/apache/iceberg/issues/14529#issuecomment-3505884197
After some more experimentation, the following works for me.
```bash
pyspark \
--packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.10.0 \
--conf spark.sql.catalog.local=org.apache.iceberg.spark.SparkCatalog \
--conf spark.sql.catalog.local.type=hadoop \
--conf spark.sql.catalog.local.warehouse=$PWD/warehouse
```
```python
# Create and populate the table through the catalog (SQL path)
spark.sql("CREATE TABLE local.db.table (id bigint) USING iceberg")
spark.sql("INSERT INTO local.db.table VALUES (1), (2)")

# Path-based reads and writes via the v1 DataFrame API
spark.read.format("iceberg").load("warehouse/db/table")
spark.range(5).write.format("iceberg").mode("overwrite").save("warehouse/db/table")
spark.range(5).write.format("iceberg").mode("append").save("warehouse/db/table")
```
Note that I have to explicitly specify the `overwrite` or `append` mode for the path-based writes; the default `error` mode, as well as `ignore`, fails.
So it seems the v1 writer only works in certain save modes against an existing table written by the v2 writer. Could this be improved?
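For comparison, the `DataFrameWriterV2` API (`df.writeTo(...)`, available since Spark 3.0) sidesteps save modes entirely by using explicit verbs, though it addresses the table through the catalog rather than by path. A minimal sketch, assuming the same `local` Hadoop catalog configured above and a live `spark` session:

```python
# Sketch only: assumes the pyspark session launched with the command
# above, so the `local` catalog is already configured.
df = spark.range(5)

# DataFrameWriterV2 uses explicit verbs instead of v1 save modes:
df.writeTo("local.db.table").append()               # analogous to mode("append")
df.writeTo("local.db.table").overwritePartitions()  # dynamic partition overwrite
df.writeTo("local.db.table").createOrReplace()      # drop-and-recreate semantics
```

This is not a fix for the path-based v1 behavior reported above, just a workaround when catalog-qualified names are acceptable.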