pietro-agilelab commented on issue #8377:
URL: https://github.com/apache/iceberg/issues/8377#issuecomment-1734973961

   Hi @sebastien-namy-cbp, after running some tests I came to the following conclusions:
   
   ### Hadoop catalog
   If you are using a catalog of type "hadoop", then you can't set a custom table location via `tableProperty("location", "s3://...")` with the DataFrameWriterV2 API; doing so raises [an error](https://github.com/apache/iceberg/issues/8377#issue-1862786604).
   
   Similarly, creating a table with `df.write.format("iceberg").mode("overwrite").save("/path/to/my/table")` raises [an error](https://github.com/apache/iceberg/issues/8377#issuecomment-1691790773) as well.
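   
   For context, here is a minimal sketch of the Spark session configuration assumed in the workaround below (the catalog name "local" and the bucket are placeholders):
   ```python
   from pyspark.sql import SparkSession
   
   # Hypothetical Hadoop catalog named "local", rooted at s3://my-bucket/warehouse
   # (assumes the Iceberg Spark runtime jar is already on the classpath)
   spark = (
       SparkSession.builder
       .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
       .config("spark.sql.catalog.local.type", "hadoop")
       .config("spark.sql.catalog.local.warehouse", "s3://my-bucket/warehouse")
       .getOrCreate()
   )
   ```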
   
   The only workaround I found is to use the DataFrameWriterV2 API and let the catalog derive the table location: assuming you configured an Iceberg catalog called "local" with warehouse path `s3://my-bucket/warehouse`, then writing
   ```python
   df.writeTo("local.db.my.table").using("iceberg").createOrReplace()
   ```
   will create a table at location `s3://my-bucket/warehouse/db/my/table`, since the Hadoop catalog resolves table locations as `<warehouse>/<namespace>/<table>`.
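   
   To verify where the table ended up, you can read it back through the same catalog and inspect its metadata (same assumed catalog name):
   ```python
   # Show the table contents and its resolved location
   spark.table("local.db.my.table").show()
   spark.sql("DESCRIBE TABLE EXTENDED local.db.my.table").show(truncate=False)
   ```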
   
   
   ### Hive catalog
   If instead you are using a catalog of type "hive", then you can follow the example in the [documentation](https://iceberg.apache.org/docs/latest/spark-writes/#creating-tables) and write:
   ```python
   # creates table `iceberg.db.table` at location `s3://my-bucket/path/to/location`
   (
       df.writeTo("iceberg.db.table")
       .tableProperty("location", "s3://my-bucket/path/to/location")
       .createOrReplace()
   )
   ```
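   
   The same can be expressed in Spark SQL; a sketch assuming the source DataFrame is registered as a temp view (the view name is a placeholder):
   ```python
   # Hypothetical temp view over the source DataFrame
   df.createOrReplaceTempView("source_view")
   
   spark.sql("""
       CREATE OR REPLACE TABLE iceberg.db.table
       USING iceberg
       LOCATION 's3://my-bucket/path/to/location'
       AS SELECT * FROM source_view
   """)
   ```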

