dmgcodevil opened a new issue, #7506: URL: https://github.com/apache/iceberg/issues/7506
### Query engine

Trino

### Question

I've found a new compaction option, `max-file-group-size-bytes`, which is very useful because it allows compacting very large data sets while still using sorting. However, is there an option to control the maximum number of file groups (partitions) rewritten in a single run? I could not find one. It would be useful when a large number of partitions needs to be compacted, since we'd like to limit the execution time of our compaction Spark job.
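For context, here is a minimal sketch of how a compaction run can be scoped today with the Iceberg Spark actions API. It is an illustration under assumptions, not the answer to the question: the table name `db.events`, the partition column `event_date`, and the 100 GiB group size are hypothetical. `max-file-group-size-bytes` caps the size of each file group, and a `filter(...)` expression restricts the rewrite to a subset of partitions, which indirectly bounds per-run work in the absence of a dedicated max-group-count option.

```java
import org.apache.iceberg.Table;
import org.apache.iceberg.actions.RewriteDataFiles;
import org.apache.iceberg.expressions.Expressions;
import org.apache.iceberg.spark.Spark3Util;
import org.apache.iceberg.spark.actions.SparkActions;
import org.apache.spark.sql.SparkSession;

public class CompactPartitionSlice {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("iceberg-compaction-sketch")
                .getOrCreate();

        // Hypothetical Iceberg table, assumed to be partitioned by event_date
        // and to have a table sort order defined (required for .sort()).
        Table table = Spark3Util.loadIcebergTable(spark, "db.events");

        RewriteDataFiles.Result result = SparkActions.get(spark)
                .rewriteDataFiles(table)
                // Rewrite using the table's existing sort order.
                .sort()
                // The option mentioned in the question: cap each file group at ~100 GiB.
                .option(RewriteDataFiles.MAX_FILE_GROUP_SIZE_BYTES,
                        String.valueOf(100L * 1024 * 1024 * 1024))
                // Limit the run to one partition slice; looping over slices across
                // jobs is a workaround for bounding how many groups one run touches.
                .filter(Expressions.equal("event_date", "2023-05-01"))
                .execute();

        System.out.println("Rewrote " + result.rewrittenDataFilesCount() + " data files");
        spark.stop();
    }
}
```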
