guykhazma commented on code in PR #11615:
URL: https://github.com/apache/iceberg/pull/11615#discussion_r1854912088


##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/SparkScan.java:
##########
@@ -248,6 +296,88 @@ protected Statistics estimateStatistics(Snapshot snapshot) {
     return new Stats(sizeInBytes, rowsCount, colStatsMap);
   }

+  private Map<Integer, Long> calculateNullCount(
+      Map<String, Map<Integer, Long>> distinctDataFilesNullCount) {
+    Map<Integer, Long> nullCount;
+    // get the null counts from the manifest file
+    nullCount =
+        distinctDataFilesNullCount.values().stream() // Stream<Map<Integer, Long>>
+            .flatMap(
+                innerMap ->
+                    innerMap.entrySet().stream()) // Flatten to Stream<Map.Entry<Integer, Long>>
+            .collect(
+                Collectors.toMap(
+                    Map.Entry::getKey, // Key is the column id
+                    Map.Entry::getValue,
+                    Long::sum,
+                    HashMap::new));
+    return nullCount;
+  }
+
+  private Object toSparkType(Type type, Object value) {

Review Comment:
   @RussellSpitzer we saw a similar conversion in `BaseReader`:
   https://github.com/apache/iceberg/blob/f2b1b91d304039027333570451477e7575f7d39d/spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/source/BaseReader.java#L217
   However, it is not extracted into a helper function there. In this case we also don't need the logic for strings/binary, since strings are not supported and binary does not support min/max.

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org
For additional commands, e-mail: issues-h...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
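
For context, the `calculateNullCount` method in the diff merges per-data-file null-count maps (column id → null count, as recorded in the manifest metrics) into a single map by summing counts per column id. A minimal standalone sketch of that merge is below; the class and method names (`NullCountMergeSketch`, `mergeNullCounts`) and the sample file names are illustrative, not part of the Iceberg code base.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class NullCountMergeSketch {

  // Given one (column id -> null count) map per data file, sum the counts
  // per column id across all files, mirroring the stream pipeline in the diff.
  static Map<Integer, Long> mergeNullCounts(
      Map<String, Map<Integer, Long>> perFileNullCounts) {
    return perFileNullCounts.values().stream() // Stream<Map<Integer, Long>>
        .flatMap(innerMap -> innerMap.entrySet().stream()) // flatten entries
        .collect(
            Collectors.toMap(
                Map.Entry::getKey,   // column id
                Map.Entry::getValue, // null count contributed by one file
                Long::sum,           // merge function: sum counts on key collision
                HashMap::new));
  }

  public static void main(String[] args) {
    Map<String, Map<Integer, Long>> perFile = new HashMap<>();
    perFile.put("file-a.parquet", Map.of(1, 3L, 2, 0L));
    perFile.put("file-b.parquet", Map.of(1, 2L, 2, 4L));

    Map<Integer, Long> merged = mergeNullCounts(perFile);
    // column 1: 3 + 2 = 5, column 2: 0 + 4 = 4
    System.out.println(merged.get(1) + " " + merged.get(2)); // prints "5 4"
  }
}
```

The `Long::sum` merge function is what makes `Collectors.toMap` tolerate the same column id appearing in several files; without it, a duplicate key would throw `IllegalStateException`.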
########## spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/SparkScan.java: ########## @@ -248,6 +296,88 @@ protected Statistics estimateStatistics(Snapshot snapshot) { return new Stats(sizeInBytes, rowsCount, colStatsMap); } + private Map<Integer, Long> calculateNullCount( + Map<String, Map<Integer, Long>> distinctDataFilesNullCount) { + Map<Integer, Long> nullCount; + // get the null counts from the manifest file + nullCount = + distinctDataFilesNullCount.values().stream() // Stream<Map<Integer, Long>> + .flatMap( + innerMap -> + innerMap.entrySet().stream()) // Flatten to Stream<Map.Entry<Integer, Long>> + .collect( + Collectors.toMap( + Map.Entry::getKey, // Key is the column id + Map.Entry::getValue, + Long::sum, + HashMap::new)); + return nullCount; + } + + private Object toSparkType(Type type, Object value) { Review Comment: @RussellSpitzer we saw a similar conversion in the `BaseReader`: https://github.com/apache/iceberg/blob/f2b1b91d304039027333570451477e7575f7d39d/spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/source/BaseReader.java#L217 However, it is not extracted to some helper function. And in this case we don't need the logic for Strings/Binary as strings are not supported and binary don't support min/max -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org --------------------------------------------------------------------- To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org For additional commands, e-mail: issues-h...@iceberg.apache.org