rdblue commented on code in PR #8660:
URL: https://github.com/apache/iceberg/pull/8660#discussion_r1338902603
##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/SparkWriteConf.java:
##########
@@ -649,6 +668,39 @@ private String deleteOrcCompressionStrategy() {
.option(SparkWriteOptions.COMPRESSION_STRATEGY)
.sessionConf(SparkSQLProperties.COMPRESSION_STRATEGY)
.tableProperty(DELETE_ORC_COMPRESSION_STRATEGY)
- .parseOptional();
+ .defaultValue(orcCompressionStrategy())
+ .parse();
+ }
+
+ private long dataAdvisoryPartitionSize() {
+ long defaultValue =
+        advisoryPartitionSize(DATA_FILE_SIZE, dataFileFormat(), dataCompressionCodec());
+ return advisoryPartitionSize(defaultValue);
+ }
+
+ private long deleteAdvisoryPartitionSize() {
+ long defaultValue =
+        advisoryPartitionSize(DELETE_FILE_SIZE, deleteFileFormat(), deleteCompressionCodec());
+ return advisoryPartitionSize(defaultValue);
+ }
+
+ private long advisoryPartitionSize(long defaultValue) {
+ return confParser
+ .longConf()
+ .option(SparkWriteOptions.ADVISORY_PARTITION_SIZE)
+ .sessionConf(SparkSQLProperties.ADVISORY_PARTITION_SIZE)
+        .tableProperty(TableProperties.SPARK_WRITE_ADVISORY_PARTITION_SIZE_BYTES)
+ .defaultValue(defaultValue)
+ .parse();
+ }
+
+ private long advisoryPartitionSize(
+ long targetFileSize, FileFormat outputFileFormat, String outputCodec) {
Review Comment:
It's a little odd to call this `targetFileSize` when it isn't the table's
target file size. I don't have a much better name though. Maybe a comment here
to clarify?
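
The `confParser` chain in the diff resolves the advisory partition size by precedence: write option first, then session conf, then table property, falling back to the computed default. A minimal standalone sketch of that precedence pattern follows; the `resolve` helper and the plain-`Map` lookups are hypothetical stand-ins, not Iceberg's actual `ConfParser` API, and the property key strings are illustrative.

```java
import java.util.Map;
import java.util.Optional;

public class AdvisoryPartitionSizeSketch {
  // Hypothetical resolver mirroring the option > session conf > table property > default order.
  static long resolve(
      Map<String, String> writeOptions,
      Map<String, String> sessionConf,
      Map<String, String> tableProps,
      long defaultValue) {
    return Optional.ofNullable(writeOptions.get("advisory-partition-size"))
        .or(() -> Optional.ofNullable(sessionConf.get("spark.sql.iceberg.advisory-partition-size")))
        .or(() -> Optional.ofNullable(tableProps.get("write.spark.advisory-partition-size-bytes")))
        .map(Long::parseLong)
        .orElse(defaultValue);
  }

  public static void main(String[] args) {
    // No overrides anywhere: the computed default wins.
    System.out.println(resolve(Map.of(), Map.of(), Map.of(), 128L * 1024 * 1024));
    // A write option takes precedence over a table property.
    System.out.println(
        resolve(
            Map.of("advisory-partition-size", "67108864"),
            Map.of(),
            Map.of("write.spark.advisory-partition-size-bytes", "268435456"),
            128L * 1024 * 1024));
  }
}
```

Because the new code passes the computed default into `defaultValue(...)` and calls `parse()` instead of `parseOptional()`, the resolved value is always non-null, which is what lets `dataAdvisoryPartitionSize()` and `deleteAdvisoryPartitionSize()` return a primitive `long`.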