nastra commented on code in PR #6338:
URL: https://github.com/apache/iceberg/pull/6338#discussion_r1037875766


##########
core/src/main/java/org/apache/iceberg/jdbc/JdbcUtil.java:
##########
@@ -72,9 +72,9 @@ final class JdbcUtil {
           + TABLE_NAME
           + " VARCHAR(255) NOT NULL,"
           + METADATA_LOCATION
-          + " VARCHAR(5500),"
+          + " VARCHAR(1000),"

Review Comment:
   I've rerun some testing today to verify yesterday's findings from when I made
this PR. It turns out that adjusting `NAMESPACE_PROPERTY_KEY` from 5500 to 255
was the only change that was actually necessary. I was misled by the stack trace
   ```
   at 
com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1903)
   at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:1242)
   at 
org.apache.iceberg.jdbc.JdbcCatalog.lambda$initializeCatalogTables$1(JdbcCatalog.java:152)
   at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:58)
   at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:51)
   at 
org.apache.iceberg.jdbc.JdbcCatalog.initializeCatalogTables(JdbcCatalog.java:135)
   at org.apache.iceberg.jdbc.JdbcCatalog.initialize(JdbcCatalog.java:103)
   ... 87 more
   org.apache.iceberg.jdbc.UncheckedSQLException: Cannot initialize JDBC catalog
   at org.apache.iceberg.jdbc.JdbcCatalog.initialize(JdbcCatalog.java:109)
   at org.apache.iceberg.CatalogUtil.loadCatalog(CatalogUtil.java:212)
   ```
   thinking that `JdbcCatalog.initializeCatalogTables(JdbcCatalog.java:135)` pointed at the
[CREATE_TABLE_STATEMENT](https://github.com/apache/iceberg/blob/383d9caec804293e27b23b50c2cd89168346a848/core/src/main/java/org/apache/iceberg/jdbc/JdbcCatalog.java#L135),
while in all 3 cases the issue was actually in the `CREATE_NAMESPACE_PROPERTIES_TABLE`
statement.
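   For context, here is a rough, self-contained sketch of the kind of DDL that
`CREATE_NAMESPACE_PROPERTIES_TABLE` builds, with `NAMESPACE_PROPERTY_KEY` brought
down to 255. The column names, the value-column size, and the surrounding class are
approximations for illustration, not the verbatim `JdbcUtil` code:
   ```java
   // Hypothetical, simplified reconstruction of the namespace-properties DDL;
   // everything except the 255 for the key column is a placeholder.
   public class NamespacePropertiesDdlSketch {
     static final String NAMESPACE_PROPERTIES_TABLE_NAME = "iceberg_namespace_properties";
     static final String CATALOG_NAME = "catalog_name";
     static final String NAMESPACE_NAME = "namespace";
     static final String NAMESPACE_PROPERTY_KEY = "property_key";
     static final String NAMESPACE_PROPERTY_VALUE = "property_value";

     static final String CREATE_NAMESPACE_PROPERTIES_TABLE =
         "CREATE TABLE "
             + NAMESPACE_PROPERTIES_TABLE_NAME
             + " ("
             + CATALOG_NAME + " VARCHAR(255) NOT NULL, "
             + NAMESPACE_NAME + " VARCHAR(255) NOT NULL, "
             + NAMESPACE_PROPERTY_KEY + " VARCHAR(255), "    // reduced from 5500 to 255
             + NAMESPACE_PROPERTY_VALUE + " VARCHAR(1000), " // size still under discussion below
             + "PRIMARY KEY ("
             + CATALOG_NAME + ", " + NAMESPACE_NAME + ", " + NAMESPACE_PROPERTY_KEY
             + "))";

     public static void main(String[] args) {
       System.out.println(CREATE_NAMESPACE_PROPERTIES_TABLE);
     }
   }
   ```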
   
   So should we restore those varchars from 1000 back to 5500? 
   
   `Although InnoDB supports row sizes larger than 65,535 bytes internally,
MySQL itself imposes a row-size limit of 65,535 for the combined size of all
columns` <-- we could also try to set them to a higher value by calculating
(65535 - (255 * 3)) / 2 for `metadata_location` and `previous_metadata_location`,
as sketched below. But if we ever add another column, we'd have to re-calculate that.
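   To make that arithmetic concrete, here is a tiny back-of-the-envelope sketch. It
assumes single-byte characters and ignores per-column length prefixes and other
overhead, so with a multi-byte charset like utf8mb4 the real headroom is considerably
smaller; the identifier-column names in the comments are approximations:
   ```java
   // Illustrative row-size budget for the catalog table; not how JdbcUtil computes anything.
   public class RowSizeBudgetSketch {
     public static void main(String[] args) {
       int rowSizeLimit = 65_535;        // MySQL's limit for the combined size of all columns
       int fixedIdentifiers = 3 * 255;   // e.g. catalog name, namespace, table name columns
       int wideColumns = 2;              // metadata_location + previous_metadata_location

       int budgetPerWideColumn = (rowSizeLimit - fixedIdentifiers) / wideColumns;
       System.out.println("budget per location column: " + budgetPerWideColumn); // prints 32385
     }
   }
   ```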
   
   As for your second question about truncation: if a value would be truncated,
MySQL throws an exception instead: `Data truncation: Data too long for column
'metadata_location' at row 1`
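   Purely as an illustration of what that looks like from JDBC, here is a minimal
sketch that catches the failure. The connection URL, credentials, and the
`iceberg_tables` table name are assumptions made up for the example, and it assumes
MySQL's default strict SQL mode, where the over-long value fails the statement
rather than being silently truncated:
   ```java
   import java.sql.Connection;
   import java.sql.DriverManager;
   import java.sql.PreparedStatement;
   import java.sql.SQLException;

   public class TruncationCheckSketch {
     public static void main(String[] args) throws Exception {
       // Hypothetical connection details, not from the PR.
       try (Connection conn =
               DriverManager.getConnection("jdbc:mysql://localhost/iceberg", "user", "pass");
           PreparedStatement stmt =
               conn.prepareStatement(
                   "UPDATE iceberg_tables SET metadata_location = ? WHERE table_name = ?")) {
         stmt.setString(1, "x".repeat(10_000)); // deliberately longer than the column allows
         stmt.setString(2, "some_table");
         stmt.executeUpdate();
       } catch (SQLException e) {
         // In strict mode MySQL reports SQLSTATE 22001 ("string data, right truncation") with a
         // message like: Data truncation: Data too long for column 'metadata_location' at row 1
         if ("22001".equals(e.getSQLState())) {
           System.err.println("Value too long for column: " + e.getMessage());
         } else {
           throw e;
         }
       }
     }
   }
   ```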


