ahmedriza commented on issue #3546:
URL: https://github.com/apache/iceberg/issues/3546#issuecomment-1641086986

   In the doc on [JDBC 
integration](https://iceberg.apache.org/docs/latest/jdbc/) we have the following
   ```java
   Class.forName("com.mysql.cj.jdbc.Driver"); // ensure the JDBC driver is on the runtime classpath
   Map<String, String> properties = new HashMap<>();
   properties.put(CatalogProperties.CATALOG_IMPL, JdbcCatalog.class.getName());
   properties.put(CatalogProperties.URI, "jdbc:mysql://localhost:3306/test");
   properties.put(JdbcCatalog.PROPERTY_PREFIX + "user", "admin");
   properties.put(JdbcCatalog.PROPERTY_PREFIX + "password", "pass");
   properties.put(CatalogProperties.WAREHOUSE_LOCATION, "s3://warehouse/path");
   Configuration hadoopConf = new Configuration(); // configs if you use HadoopFileIO
   JdbcCatalog catalog = (JdbcCatalog) CatalogUtil.buildIcebergCatalog("test_jdbc_catalog", properties, hadoopConf);
   ```
   
   This configuration uses HadoopFileIO to write the warehouse data to S3. Isn't this unsafe as well, since the actual writes take place on S3?
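   For comparison, I believe the FileIO implementation can also be selected explicitly rather than defaulting to HadoopFileIO. A sketch extending the same `properties` map from the doc example (assuming the `iceberg-aws` module and the AWS SDK are on the classpath — this is a config fragment, not a full program):

   ```java
   // Use Iceberg's native S3FileIO instead of HadoopFileIO for the warehouse writes
   properties.put(CatalogProperties.FILE_IO_IMPL, "org.apache.iceberg.aws.s3.S3FileIO");
   ```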


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
