cgpoh commented on issue #6606:
URL: https://github.com/apache/iceberg/issues/6606#issuecomment-1397847153

   After looking into the code, I realised that instead of setting `s3.connection.maximum` in the Flink configuration, I should set the value in the Hadoop configuration and pass that configuration to the HiveCatalog.
   
   ```kotlin
   import org.apache.hadoop.conf.Configuration
   import org.apache.iceberg.flink.CatalogLoader

   // Set the S3A connection pool size on the Hadoop configuration
   // and pass it to the Hive catalog loader.
   val conf = Configuration()
   conf.setInt("fs.s3a.connection.maximum", 100)
   val catalogLoader = CatalogLoader.hive(hiveCatalogName, conf, properties)
   ```
   
   With this, I can see in the logs that the maximum connection count is set to 100. I will close this issue for now, as my job has been running for 24 hours.
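
   For context, a minimal sketch of how that catalog loader might then be wired into a Flink write path; the table identifier (`db.events`), the `RowData` stream, and the helper function are assumptions for illustration, not part of the original job.

   ```kotlin
   import org.apache.flink.streaming.api.datastream.DataStream
   import org.apache.flink.table.data.RowData
   import org.apache.iceberg.catalog.TableIdentifier
   import org.apache.iceberg.flink.CatalogLoader
   import org.apache.iceberg.flink.TableLoader
   import org.apache.iceberg.flink.sink.FlinkSink

   // Hypothetical wiring: build a TableLoader from the CatalogLoader above
   // and append an Iceberg sink to an existing RowData stream.
   fun addIcebergSink(stream: DataStream<RowData>, catalogLoader: CatalogLoader) {
       val tableLoader = TableLoader.fromCatalog(catalogLoader, TableIdentifier.of("db", "events"))
       FlinkSink.forRowData(stream)
           .tableLoader(tableLoader)
           .append()
   }
   ```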

