aokolnychyi commented on code in PR #10176:
URL: https://github.com/apache/iceberg/pull/10176#discussion_r1697282703


##########
core/src/main/java/org/apache/iceberg/BaseScan.java:
##########
@@ -289,4 +289,22 @@ private static Schema lazyColumnProjection(TableScanContext context, Schema sche
   public ThisT metricsReporter(MetricsReporter reporter) {
     return newRefinedScan(table, schema, context.reportWith(reporter));
   }
+
+  /**
+   * Retrieves a list of column names based on the type of manifest content provided.
+   *
+   * @param content the manifest content type, which specifies whether the columns are for data or

Review Comment:
   Minor: What about shortening the description here to stay on one line? 
   Like simply `the manifest content type to scan`?



##########
core/src/main/java/org/apache/iceberg/PartitionStatsUtil.java:
##########
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg;
+
+import java.util.Map;
+import org.apache.iceberg.data.GenericRecord;
+import org.apache.iceberg.data.Record;
+import org.apache.iceberg.io.CloseableIterable;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.types.Types;
+import org.apache.iceberg.util.PartitionUtil;
+
+public class PartitionStatsUtil {
+
+  private PartitionStatsUtil() {}
+
+  public enum Column {
+    PARTITION,
+    SPEC_ID,
+    DATA_RECORD_COUNT,
+    DATA_FILE_COUNT,
+    TOTAL_DATA_FILE_SIZE_IN_BYTES,
+    POSITION_DELETE_RECORD_COUNT,
+    POSITION_DELETE_FILE_COUNT,
+    EQUALITY_DELETE_RECORD_COUNT,
+    EQUALITY_DELETE_FILE_COUNT,
+    TOTAL_RECORD_COUNT,
+    LAST_UPDATED_AT,
+    LAST_UPDATED_SNAPSHOT_ID
+  }
+
+  /**
+   * Generates a Schema object as per partition statistics spec based on the given partition type.
+   *
+   * @param partitionType the struct type that defines the structure of the partition.
+   * @return a Schema object that corresponds to the provided partition type.

Review Comment:
   Minor: `a Schema object` -> `a schema`?



##########
core/src/main/java/org/apache/iceberg/PartitionStatsUtil.java:
##########
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg;
+
+import java.util.Map;
+import org.apache.iceberg.data.GenericRecord;
+import org.apache.iceberg.data.Record;
+import org.apache.iceberg.io.CloseableIterable;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.types.Types;
+import org.apache.iceberg.util.PartitionUtil;
+
+public class PartitionStatsUtil {
+
+  private PartitionStatsUtil() {}
+
+  public enum Column {
+    PARTITION,
+    SPEC_ID,
+    DATA_RECORD_COUNT,
+    DATA_FILE_COUNT,
+    TOTAL_DATA_FILE_SIZE_IN_BYTES,
+    POSITION_DELETE_RECORD_COUNT,
+    POSITION_DELETE_FILE_COUNT,
+    EQUALITY_DELETE_RECORD_COUNT,
+    EQUALITY_DELETE_FILE_COUNT,
+    TOTAL_RECORD_COUNT,
+    LAST_UPDATED_AT,
+    LAST_UPDATED_SNAPSHOT_ID
+  }
+
+  /**
+   * Generates a Schema object as per partition statistics spec based on the given partition type.
+   *
+   * @param partitionType the struct type that defines the structure of the partition.
+   * @return a Schema object that corresponds to the provided partition type.
+   */
+  public static Schema schema(Types.StructType partitionType) {
+    if (partitionType.fields().isEmpty()) {
+      throw new IllegalArgumentException("getting schema for an unpartitioned table");

Review Comment:
   Question: Is there a particular reason why we are not using `Preconditions` 
here?
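
   For illustration, the same guard expressed with the relocated Guava `Preconditions` that this file already imports might look like the sketch below (a suggestion only, not the author's code):

   ```java
   // Sketch only: same check and message, expressed via Preconditions.
   // checkArgument throws IllegalArgumentException, so the observable behavior would be unchanged.
   Preconditions.checkArgument(
       !partitionType.fields().isEmpty(), "getting schema for an unpartitioned table");
   ```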



##########
core/src/main/java/org/apache/iceberg/PartitionStatsUtil.java:
##########
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg;
+
+import java.util.List;
+import java.util.Map;
+import org.apache.iceberg.data.GenericRecord;
+import org.apache.iceberg.data.Record;
+import org.apache.iceberg.io.CloseableIterable;
+import org.apache.iceberg.types.Types;
+import org.apache.iceberg.util.PartitionUtil;
+
+public class PartitionStatsUtil {
+
+  private PartitionStatsUtil() {}
+
+  public enum Column {
+    PARTITION_DATA,
+    SPEC_ID,
+    DATA_RECORD_COUNT,
+    DATA_FILE_COUNT,
+    DATA_FILE_SIZE_IN_BYTES,
+    POSITION_DELETE_RECORD_COUNT,
+    POSITION_DELETE_FILE_COUNT,
+    EQUALITY_DELETE_RECORD_COUNT,
+    EQUALITY_DELETE_FILE_COUNT,
+    TOTAL_RECORD_COUNT,
+    LAST_UPDATED_AT,
+    LAST_UPDATED_SNAPSHOT_ID
+  }
+
+  public static Schema schema(Types.StructType partitionType) {
+    if (partitionType.fields().isEmpty()) {
+      throw new IllegalArgumentException("getting schema for an unpartitioned table");
+    }
+
+    return new Schema(
+        Types.NestedField.required(1, Column.PARTITION_DATA.name(), partitionType),
+        Types.NestedField.required(2, Column.SPEC_ID.name(), Types.IntegerType.get()),
+        Types.NestedField.required(3, Column.DATA_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.required(4, Column.DATA_FILE_COUNT.name(), Types.IntegerType.get()),
+        Types.NestedField.required(5, Column.DATA_FILE_SIZE_IN_BYTES.name(), Types.LongType.get()),
+        Types.NestedField.optional(

Review Comment:
   Question: We made these counts nullable because we think not all implementations 
   will populate these values? We will still write 0 if needed in the current 
   implementation, though? Except the total record count, which may be expensive to compute.



##########
core/src/main/java/org/apache/iceberg/PartitionStatsUtil.java:
##########
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg;
+
+import java.util.Map;
+import org.apache.iceberg.data.GenericRecord;
+import org.apache.iceberg.data.Record;
+import org.apache.iceberg.io.CloseableIterable;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.types.Types;
+import org.apache.iceberg.util.PartitionUtil;
+
+public class PartitionStatsUtil {
+
+  private PartitionStatsUtil() {}
+
+  public enum Column {
+    PARTITION,
+    SPEC_ID,
+    DATA_RECORD_COUNT,
+    DATA_FILE_COUNT,
+    TOTAL_DATA_FILE_SIZE_IN_BYTES,
+    POSITION_DELETE_RECORD_COUNT,
+    POSITION_DELETE_FILE_COUNT,
+    EQUALITY_DELETE_RECORD_COUNT,
+    EQUALITY_DELETE_FILE_COUNT,
+    TOTAL_RECORD_COUNT,
+    LAST_UPDATED_AT,
+    LAST_UPDATED_SNAPSHOT_ID
+  }
+
+  /**
+   * Generates a Schema object as per partition statistics spec based on the given partition type.
+   *
+   * @param partitionType the struct type that defines the structure of the partition.
+   * @return a Schema object that corresponds to the provided partition type.
+   */
+  public static Schema schema(Types.StructType partitionType) {
+    if (partitionType.fields().isEmpty()) {
+      throw new IllegalArgumentException("getting schema for an unpartitioned table");

Review Comment:
   Question: Are we saying it is illegal to compute partition stats for 
unpartitioned tables given that most of the information is actually available 
in the snapshot summary?



##########
core/src/main/java/org/apache/iceberg/PartitionStatsUtil.java:
##########
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg;
+
+import java.util.Map;
+import org.apache.iceberg.data.GenericRecord;
+import org.apache.iceberg.data.Record;
+import org.apache.iceberg.io.CloseableIterable;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.types.Types;
+import org.apache.iceberg.util.PartitionUtil;
+
+public class PartitionStatsUtil {
+
+  private PartitionStatsUtil() {}
+
+  public enum Column {
+    PARTITION,
+    SPEC_ID,
+    DATA_RECORD_COUNT,
+    DATA_FILE_COUNT,
+    TOTAL_DATA_FILE_SIZE_IN_BYTES,
+    POSITION_DELETE_RECORD_COUNT,
+    POSITION_DELETE_FILE_COUNT,
+    EQUALITY_DELETE_RECORD_COUNT,
+    EQUALITY_DELETE_FILE_COUNT,
+    TOTAL_RECORD_COUNT,
+    LAST_UPDATED_AT,
+    LAST_UPDATED_SNAPSHOT_ID
+  }
+
+  /**
+   * Generates a Schema object as per partition statistics spec based on the given partition type.
+   *
+   * @param partitionType the struct type that defines the structure of the partition.
+   * @return a Schema object that corresponds to the provided partition type.
+   */
+  public static Schema schema(Types.StructType partitionType) {
+    if (partitionType.fields().isEmpty()) {
+      throw new IllegalArgumentException("getting schema for an unpartitioned table");
+    }
+
+    return new Schema(
+        Types.NestedField.required(1, Column.PARTITION.name(), partitionType),
+        Types.NestedField.required(2, Column.SPEC_ID.name(), Types.IntegerType.get()),
+        Types.NestedField.required(3, Column.DATA_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.required(4, Column.DATA_FILE_COUNT.name(), Types.IntegerType.get()),
+        Types.NestedField.required(
+            5, Column.TOTAL_DATA_FILE_SIZE_IN_BYTES.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            6, Column.POSITION_DELETE_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            7, Column.POSITION_DELETE_FILE_COUNT.name(), Types.IntegerType.get()),
+        Types.NestedField.optional(
+            8, Column.EQUALITY_DELETE_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            9, Column.EQUALITY_DELETE_FILE_COUNT.name(), Types.IntegerType.get()),
+        Types.NestedField.optional(10, Column.TOTAL_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.optional(11, Column.LAST_UPDATED_AT.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            12, Column.LAST_UPDATED_SNAPSHOT_ID.name(), Types.LongType.get()));
+  }
+
+  /**
+   * Creates an iterable of partition stats records from a given manifest file, using the specified
+   * table and record schema.
+   *
+   * @param table the table from which the manifest file is derived.
+   * @param manifest the manifest file containing metadata about the records.
+   * @param recordSchema the schema defining the structure of the records.
+   * @return a CloseableIterable of partition stats records as defined by the manifest file and
+   *     record schema.
+   */
+  public static CloseableIterable<Record> fromManifest(
+      Table table, ManifestFile manifest, Schema recordSchema) {
+    return CloseableIterable.transform(
+        ManifestFiles.open(manifest, table.io(), table.specs())
+            .select(BaseScan.scanColumns(manifest.content()))
+            .liveEntries(),
+        entry -> fromManifestEntry(entry, table, recordSchema));
+  }
+
+  /**
+   * Appends statistics from one Record to another.
+   *
+   * @param toRecord the Record to which statistics will be appended.
+   * @param fromRecord the Record from which statistics will be sourced.
+   */
+  public static void appendStatsFromRecord(Record toRecord, Record fromRecord) {
+    Preconditions.checkState(toRecord != null, "Record to update cannot be null");
+    Preconditions.checkState(fromRecord != null, "Record to update from cannot be null");
+
+    toRecord.set(
+        Column.SPEC_ID.ordinal(),
+        Math.max(
+            (int) toRecord.get(Column.SPEC_ID.ordinal()),
+            (int) fromRecord.get(Column.SPEC_ID.ordinal())));
+    checkAndIncrementLong(toRecord, fromRecord, Column.DATA_RECORD_COUNT);

Review Comment:
   It feels a bit fragile to use the enum ordinals. Let me think.
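   
   One direction this could take (a rough sketch only, not something in this PR) is to address stats columns by name through `Record.getField`/`setField`, so the layout no longer depends on the enum declaration order:
   
   ```java
   // Hypothetical helper with an illustrative name: accumulate a long-valued stats column
   // by column name instead of by enum ordinal. getField/setField are defined on
   // org.apache.iceberg.data.Record, and the schema above uses Column.name() as field names.
   private static void addLongByName(Record target, Record source, Column column) {
     Long delta = (Long) source.getField(column.name());
     if (delta != null) {
       Long current = (Long) target.getField(column.name());
       target.setField(column.name(), current == null ? delta : current + delta);
     }
   }
   ```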



##########
core/src/main/java/org/apache/iceberg/PartitionStatsUtil.java:
##########
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg;
+
+import java.util.Map;
+import org.apache.iceberg.data.GenericRecord;
+import org.apache.iceberg.data.Record;
+import org.apache.iceberg.io.CloseableIterable;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.types.Types;
+import org.apache.iceberg.util.PartitionUtil;
+
+public class PartitionStatsUtil {
+
+  private PartitionStatsUtil() {}
+
+  public enum Column {
+    PARTITION,
+    SPEC_ID,
+    DATA_RECORD_COUNT,
+    DATA_FILE_COUNT,
+    TOTAL_DATA_FILE_SIZE_IN_BYTES,
+    POSITION_DELETE_RECORD_COUNT,
+    POSITION_DELETE_FILE_COUNT,
+    EQUALITY_DELETE_RECORD_COUNT,
+    EQUALITY_DELETE_FILE_COUNT,
+    TOTAL_RECORD_COUNT,
+    LAST_UPDATED_AT,
+    LAST_UPDATED_SNAPSHOT_ID
+  }
+
+  /**
+   * Generates a Schema object as per partition statistics spec based on the given partition type.
+   *
+   * @param partitionType the struct type that defines the structure of the partition.
+   * @return a Schema object that corresponds to the provided partition type.
+   */
+  public static Schema schema(Types.StructType partitionType) {
+    if (partitionType.fields().isEmpty()) {
+      throw new IllegalArgumentException("getting schema for an unpartitioned table");
+    }
+
+    return new Schema(
+        Types.NestedField.required(1, Column.PARTITION.name(), partitionType),
+        Types.NestedField.required(2, Column.SPEC_ID.name(), Types.IntegerType.get()),
+        Types.NestedField.required(3, Column.DATA_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.required(4, Column.DATA_FILE_COUNT.name(), Types.IntegerType.get()),
+        Types.NestedField.required(
+            5, Column.TOTAL_DATA_FILE_SIZE_IN_BYTES.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            6, Column.POSITION_DELETE_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            7, Column.POSITION_DELETE_FILE_COUNT.name(), Types.IntegerType.get()),
+        Types.NestedField.optional(
+            8, Column.EQUALITY_DELETE_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            9, Column.EQUALITY_DELETE_FILE_COUNT.name(), Types.IntegerType.get()),
+        Types.NestedField.optional(10, Column.TOTAL_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.optional(11, Column.LAST_UPDATED_AT.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            12, Column.LAST_UPDATED_SNAPSHOT_ID.name(), Types.LongType.get()));
+  }
+
+  /**
+   * Creates an iterable of partition stats records from a given manifest file, using the specified
+   * table and record schema.
+   *
+   * @param table the table from which the manifest file is derived.
+   * @param manifest the manifest file containing metadata about the records.
+   * @param recordSchema the schema defining the structure of the records.
+   * @return a CloseableIterable of partition stats records as defined by the manifest file and
+   *     record schema.
+   */
+  public static CloseableIterable<Record> fromManifest(
+      Table table, ManifestFile manifest, Schema recordSchema) {
+    return CloseableIterable.transform(
+        ManifestFiles.open(manifest, table.io(), table.specs())
+            .select(BaseScan.scanColumns(manifest.content()))
+            .liveEntries(),
+        entry -> fromManifestEntry(entry, table, recordSchema));
+  }
+
+  /**
+   * Appends statistics from one Record to another.
+   *
+   * @param toRecord the Record to which statistics will be appended.
+   * @param fromRecord the Record from which statistics will be sourced.
+   */
+  public static void appendStatsFromRecord(Record toRecord, Record fromRecord) {
+    Preconditions.checkState(toRecord != null, "Record to update cannot be null");
+    Preconditions.checkState(fromRecord != null, "Record to update from cannot be null");
+
+    toRecord.set(
+        Column.SPEC_ID.ordinal(),
+        Math.max(
+            (int) toRecord.get(Column.SPEC_ID.ordinal()),
+            (int) fromRecord.get(Column.SPEC_ID.ordinal())));
+    checkAndIncrementLong(toRecord, fromRecord, Column.DATA_RECORD_COUNT);
+    checkAndIncrementInt(toRecord, fromRecord, Column.DATA_FILE_COUNT);
+    checkAndIncrementLong(toRecord, fromRecord, Column.TOTAL_DATA_FILE_SIZE_IN_BYTES);
+    checkAndIncrementLong(toRecord, fromRecord, Column.POSITION_DELETE_RECORD_COUNT);
+    checkAndIncrementInt(toRecord, fromRecord, Column.POSITION_DELETE_FILE_COUNT);
+    checkAndIncrementLong(toRecord, fromRecord, Column.EQUALITY_DELETE_RECORD_COUNT);
+    checkAndIncrementInt(toRecord, fromRecord, Column.EQUALITY_DELETE_FILE_COUNT);
+    checkAndIncrementLong(toRecord, fromRecord, Column.TOTAL_RECORD_COUNT);
+    if (fromRecord.get(Column.LAST_UPDATED_AT.ordinal()) != null) {
+      if (toRecord.get(Column.LAST_UPDATED_AT.ordinal()) == null

Review Comment:
   Hm, can we actually compute `last_updated_at` if at least one side has null? 
I thought such computations will require knowing this value for all records.
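   
   To make the concern concrete, here is a sketch (an assumption about possible semantics, not code from the PR) where an unknown timestamp on either side keeps the merged value unknown instead of taking a partial max:
   
   ```java
   // Sketch of "null-poisoning" merge semantics for last_updated_at / last_updated_snapshot_id:
   // if either side is unknown, the merged values stay null rather than using a partial max.
   Long toTs = (Long) toRecord.get(Column.LAST_UPDATED_AT.ordinal());
   Long fromTs = (Long) fromRecord.get(Column.LAST_UPDATED_AT.ordinal());
   if (toTs == null || fromTs == null) {
     toRecord.set(Column.LAST_UPDATED_AT.ordinal(), null);
     toRecord.set(Column.LAST_UPDATED_SNAPSHOT_ID.ordinal(), null);
   } else if (fromTs > toTs) {
     toRecord.set(Column.LAST_UPDATED_AT.ordinal(), fromTs);
     toRecord.set(
         Column.LAST_UPDATED_SNAPSHOT_ID.ordinal(),
         fromRecord.get(Column.LAST_UPDATED_SNAPSHOT_ID.ordinal()));
   }
   ```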



##########
core/src/main/java/org/apache/iceberg/PartitionStatsUtil.java:
##########
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg;
+
+import java.util.Map;
+import org.apache.iceberg.data.GenericRecord;
+import org.apache.iceberg.data.Record;
+import org.apache.iceberg.io.CloseableIterable;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.types.Types;
+import org.apache.iceberg.util.PartitionUtil;
+
+public class PartitionStatsUtil {
+
+  private PartitionStatsUtil() {}
+
+  public enum Column {
+    PARTITION,
+    SPEC_ID,
+    DATA_RECORD_COUNT,
+    DATA_FILE_COUNT,
+    TOTAL_DATA_FILE_SIZE_IN_BYTES,
+    POSITION_DELETE_RECORD_COUNT,
+    POSITION_DELETE_FILE_COUNT,
+    EQUALITY_DELETE_RECORD_COUNT,
+    EQUALITY_DELETE_FILE_COUNT,
+    TOTAL_RECORD_COUNT,
+    LAST_UPDATED_AT,
+    LAST_UPDATED_SNAPSHOT_ID
+  }
+
+  /**
+   * Generates a Schema object as per partition statistics spec based on the given partition type.

Review Comment:
   Minor: `a Schema object` -> `a schema`?



##########
core/src/main/java/org/apache/iceberg/PartitionStatsUtil.java:
##########
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg;
+
+import java.util.Map;
+import org.apache.iceberg.data.GenericRecord;
+import org.apache.iceberg.data.Record;
+import org.apache.iceberg.io.CloseableIterable;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.types.Types;
+import org.apache.iceberg.util.PartitionUtil;
+
+public class PartitionStatsUtil {
+
+  private PartitionStatsUtil() {}
+
+  public enum Column {
+    PARTITION,
+    SPEC_ID,
+    DATA_RECORD_COUNT,
+    DATA_FILE_COUNT,
+    TOTAL_DATA_FILE_SIZE_IN_BYTES,
+    POSITION_DELETE_RECORD_COUNT,
+    POSITION_DELETE_FILE_COUNT,
+    EQUALITY_DELETE_RECORD_COUNT,
+    EQUALITY_DELETE_FILE_COUNT,
+    TOTAL_RECORD_COUNT,
+    LAST_UPDATED_AT,
+    LAST_UPDATED_SNAPSHOT_ID
+  }
+
+  /**
+   * Generates a Schema object as per partition statistics spec based on the given partition type.
+   *
+   * @param partitionType the struct type that defines the structure of the partition.
+   * @return a Schema object that corresponds to the provided partition type.
+   */
+  public static Schema schema(Types.StructType partitionType) {
+    if (partitionType.fields().isEmpty()) {
+      throw new IllegalArgumentException("getting schema for an unpartitioned table");
+    }
+
+    return new Schema(
+        Types.NestedField.required(1, Column.PARTITION.name(), partitionType),
+        Types.NestedField.required(2, Column.SPEC_ID.name(), Types.IntegerType.get()),
+        Types.NestedField.required(3, Column.DATA_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.required(4, Column.DATA_FILE_COUNT.name(), Types.IntegerType.get()),
+        Types.NestedField.required(
+            5, Column.TOTAL_DATA_FILE_SIZE_IN_BYTES.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            6, Column.POSITION_DELETE_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            7, Column.POSITION_DELETE_FILE_COUNT.name(), Types.IntegerType.get()),
+        Types.NestedField.optional(
+            8, Column.EQUALITY_DELETE_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            9, Column.EQUALITY_DELETE_FILE_COUNT.name(), Types.IntegerType.get()),
+        Types.NestedField.optional(10, Column.TOTAL_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.optional(11, Column.LAST_UPDATED_AT.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            12, Column.LAST_UPDATED_SNAPSHOT_ID.name(), Types.LongType.get()));
+  }
+
+  /**
+   * Creates an iterable of partition stats records from a given manifest file, using the specified
+   * table and record schema.
+   *
+   * @param table the table from which the manifest file is derived.
+   * @param manifest the manifest file containing metadata about the records.
+   * @param recordSchema the schema defining the structure of the records.
+   * @return a CloseableIterable of partition stats records as defined by the manifest file and
+   *     record schema.
+   */
+  public static CloseableIterable<Record> fromManifest(
+      Table table, ManifestFile manifest, Schema recordSchema) {
+    return CloseableIterable.transform(
+        ManifestFiles.open(manifest, table.io(), table.specs())
+            .select(BaseScan.scanColumns(manifest.content()))
+            .liveEntries(),
+        entry -> fromManifestEntry(entry, table, recordSchema));
+  }
+
+  /**
+   * Appends statistics from one Record to another.
+   *
+   * @param toRecord the Record to which statistics will be appended.
+   * @param fromRecord the Record from which statistics will be sourced.
+   */
+  public static void appendStatsFromRecord(Record toRecord, Record fromRecord) {
+    Preconditions.checkState(toRecord != null, "Record to update cannot be null");
+    Preconditions.checkState(fromRecord != null, "Record to update from cannot be null");
+
+    toRecord.set(
+        Column.SPEC_ID.ordinal(),
+        Math.max(
+            (int) toRecord.get(Column.SPEC_ID.ordinal()),
+            (int) fromRecord.get(Column.SPEC_ID.ordinal())));
+    checkAndIncrementLong(toRecord, fromRecord, Column.DATA_RECORD_COUNT);
+    checkAndIncrementInt(toRecord, fromRecord, Column.DATA_FILE_COUNT);
+    checkAndIncrementLong(toRecord, fromRecord, Column.TOTAL_DATA_FILE_SIZE_IN_BYTES);
+    checkAndIncrementLong(toRecord, fromRecord, Column.POSITION_DELETE_RECORD_COUNT);
+    checkAndIncrementInt(toRecord, fromRecord, Column.POSITION_DELETE_FILE_COUNT);
+    checkAndIncrementLong(toRecord, fromRecord, Column.EQUALITY_DELETE_RECORD_COUNT);
+    checkAndIncrementInt(toRecord, fromRecord, Column.EQUALITY_DELETE_FILE_COUNT);
+    checkAndIncrementLong(toRecord, fromRecord, Column.TOTAL_RECORD_COUNT);
+    if (fromRecord.get(Column.LAST_UPDATED_AT.ordinal()) != null) {
+      if (toRecord.get(Column.LAST_UPDATED_AT.ordinal()) == null
+          || ((long) toRecord.get(Column.LAST_UPDATED_AT.ordinal())
+              < (long) fromRecord.get(Column.LAST_UPDATED_AT.ordinal()))) {
+        toRecord.set(
+            Column.LAST_UPDATED_AT.ordinal(), fromRecord.get(Column.LAST_UPDATED_AT.ordinal()));
+        toRecord.set(
+            Column.LAST_UPDATED_SNAPSHOT_ID.ordinal(),
+            fromRecord.get(Column.LAST_UPDATED_SNAPSHOT_ID.ordinal()));
+      }
+    }
+  }
+
+  /**
+   * Converts the given {@link PartitionData} into a {@link Record} based on the specified partition
+   * schema.
+   *
+   * @param partitionSchema the schema defining the structure of the partition data.
+   * @param partitionData the data to be converted into a Record.
+   * @return a Record that represents the partition data as per the given schema.
+   */
+  public static Record partitionDataToRecord(
+      Types.StructType partitionSchema, PartitionData partitionData) {
+    GenericRecord genericRecord = GenericRecord.create(partitionSchema);
+    for (int index = 0; index < partitionData.size(); index++) {
+      genericRecord.set(index, partitionData.get(index));
+    }
+
+    return genericRecord;
+  }
+
+  private static Record fromManifestEntry(
+      ManifestEntry<?> entry, Table table, Schema recordSchema) {
+    GenericRecord record = GenericRecord.create(recordSchema);
+    Types.StructType partitionType =
+        recordSchema.findField(Column.PARTITION.name()).type().asStructType();
+    PartitionData partitionData = coercedPartitionData(entry.file(), table.specs(), partitionType);
+    record.set(Column.PARTITION.ordinal(), partitionDataToRecord(partitionType, partitionData));
+    record.set(Column.SPEC_ID.ordinal(), entry.file().specId());
+
+    Snapshot snapshot = table.snapshot(entry.snapshotId());
+    if (snapshot != null) {
+      record.set(Column.LAST_UPDATED_SNAPSHOT_ID.ordinal(), snapshot.snapshotId());
+      record.set(Column.LAST_UPDATED_AT.ordinal(), snapshot.timestampMillis());
+    }
+
+    switch (entry.file().content()) {
+      case DATA:
+        record.set(Column.DATA_FILE_COUNT.ordinal(), 1);
+        record.set(Column.DATA_RECORD_COUNT.ordinal(), entry.file().recordCount());
+        record.set(Column.TOTAL_DATA_FILE_SIZE_IN_BYTES.ordinal(), entry.file().fileSizeInBytes());
+        break;
+      case POSITION_DELETES:
+        record.set(Column.POSITION_DELETE_FILE_COUNT.ordinal(), 1);

Review Comment:
   If there are no deletes, what kind of values will we have as delete counts? 
Null or 0?



##########
data/src/main/java/org/apache/iceberg/data/GeneratePartitionStats.java:
##########
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.data;
+
+import java.io.IOException;
+import java.io.UncheckedIOException;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentSkipListMap;
+import org.apache.iceberg.ImmutableGenericPartitionStatisticsFile;
+import org.apache.iceberg.ManifestFile;
+import org.apache.iceberg.PartitionStatisticsFile;
+import org.apache.iceberg.PartitionStatsUtil;
+import org.apache.iceberg.Partitioning;
+import org.apache.iceberg.Schema;
+import org.apache.iceberg.Snapshot;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.io.CloseableIterable;
+import org.apache.iceberg.io.OutputFile;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.types.Comparators;
+import org.apache.iceberg.types.Types;
+import org.apache.iceberg.util.SnapshotUtil;
+import org.apache.iceberg.util.Tasks;
+import org.apache.iceberg.util.ThreadPools;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class GeneratePartitionStats {

Review Comment:
   We usually name classes as nouns. What about something like 
`PartitionStatsGenerator`?



##########
data/src/main/java/org/apache/iceberg/data/GeneratePartitionStats.java:
##########
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.data;
+
+import java.io.IOException;
+import java.io.UncheckedIOException;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentSkipListMap;
+import org.apache.iceberg.ImmutableGenericPartitionStatisticsFile;
+import org.apache.iceberg.ManifestFile;
+import org.apache.iceberg.PartitionStatisticsFile;
+import org.apache.iceberg.PartitionStatsUtil;
+import org.apache.iceberg.Partitioning;
+import org.apache.iceberg.Schema;
+import org.apache.iceberg.Snapshot;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.io.CloseableIterable;
+import org.apache.iceberg.io.OutputFile;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.types.Comparators;
+import org.apache.iceberg.types.Types;
+import org.apache.iceberg.util.SnapshotUtil;
+import org.apache.iceberg.util.Tasks;
+import org.apache.iceberg.util.ThreadPools;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class GeneratePartitionStats {
+  private static final Logger LOG = LoggerFactory.getLogger(GeneratePartitionStats.class);
+
+  private final Table table;
+  private String branch;
+
+  public GeneratePartitionStats(Table table) {
+    this.table = table;
+  }
+
+  public GeneratePartitionStats(Table table, String branch) {
+    this.table = table;
+    this.branch = branch;
+  }
+
+  /**
+   * Computes the partition stats for the current snapshot and writes it into the metadata folder.
+   *
+   * @return {@link PartitionStatisticsFile} for the latest snapshot id or null if table doesn't
+   *     have any snapshot.
+   */
+  public PartitionStatisticsFile generate() {
+    Snapshot currentSnapshot = SnapshotUtil.latestSnapshot(table, branch);
+    if (currentSnapshot == null) {
+      Preconditions.checkArgument(
+          branch == null, "Couldn't find the snapshot for the branch %s", branch);

Review Comment:
   Can we have empty branches (without snapshots)?



##########
data/src/main/java/org/apache/iceberg/data/GeneratePartitionStats.java:
##########
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.data;
+
+import java.io.IOException;
+import java.io.UncheckedIOException;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentSkipListMap;
+import org.apache.iceberg.ImmutableGenericPartitionStatisticsFile;
+import org.apache.iceberg.ManifestFile;
+import org.apache.iceberg.PartitionStatisticsFile;
+import org.apache.iceberg.PartitionStatsUtil;
+import org.apache.iceberg.Partitioning;
+import org.apache.iceberg.Schema;
+import org.apache.iceberg.Snapshot;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.io.CloseableIterable;
+import org.apache.iceberg.io.OutputFile;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.types.Comparators;
+import org.apache.iceberg.types.Types;
+import org.apache.iceberg.util.SnapshotUtil;
+import org.apache.iceberg.util.Tasks;
+import org.apache.iceberg.util.ThreadPools;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class GeneratePartitionStats {
+  private static final Logger LOG = LoggerFactory.getLogger(GeneratePartitionStats.class);
+
+  private final Table table;
+  private String branch;
+
+  public GeneratePartitionStats(Table table) {
+    this.table = table;
+  }
+
+  public GeneratePartitionStats(Table table, String branch) {
+    this.table = table;
+    this.branch = branch;
+  }
+
+  /**
+   * Computes the partition stats for the current snapshot and writes it into the metadata folder.
+   *
+   * @return {@link PartitionStatisticsFile} for the latest snapshot id or null if table doesn't
+   *     have any snapshot.
+   */
+  public PartitionStatisticsFile generate() {
+    Snapshot currentSnapshot = SnapshotUtil.latestSnapshot(table, branch);
+    if (currentSnapshot == null) {
+      Preconditions.checkArgument(
+          branch == null, "Couldn't find the snapshot for the branch %s", branch);
+      return null;
+    }
+
+    Types.StructType partitionType = Partitioning.partitionType(table);
+    // Map of partitionData, partition-stats-entry per partitionData.
+    // Sorting the records based on partition as per spec.
+    Map<Record, Record> partitionEntryMap =
+        new ConcurrentSkipListMap<>(Comparators.forType(partitionType));

Review Comment:
   I am worried about the overhead of maintaining the map sorted on writes 
(there will be a lot of writes). I wonder whether using `ConcurrentHashMap` and 
sorting once at the end will be more efficient. Can we add a JMH benchmark to 
verify this?
   
   If we notice thread-safe collections cause significant overhead, we can 
switch to aggregating manifests concurrently and doing one more pass for the 
final aggregation. Then we will be able to avoid concurrency problems without 
overhead.
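   
   For reference, a rough sketch of the hash-map-plus-single-sort variant described above (identifier names are illustrative, the usual `java.util` imports are assumed, and `Comparators.forType` is the same comparator already used in the PR):
   
   ```java
   // Sketch only: aggregate into a plain ConcurrentHashMap while manifests are processed
   // concurrently, then sort once at the end to get the partition ordering the spec expects.
   Map<Record, Record> statsByPartition = new ConcurrentHashMap<>();
   
   // ... concurrent aggregation merges per-partition stats into statsByPartition ...
   
   List<Map.Entry<Record, Record>> sortedStats = new ArrayList<>(statsByPartition.entrySet());
   sortedStats.sort(Map.Entry.comparingByKey(Comparators.forType(partitionType)));
   ```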



##########
core/src/main/java/org/apache/iceberg/PartitionStatsUtil.java:
##########
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg;
+
+import java.util.Map;
+import org.apache.iceberg.data.GenericRecord;
+import org.apache.iceberg.data.Record;
+import org.apache.iceberg.io.CloseableIterable;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.types.Types;
+import org.apache.iceberg.util.PartitionUtil;
+
+public class PartitionStatsUtil {
+
+  private PartitionStatsUtil() {}
+
+  public enum Column {
+    PARTITION,
+    SPEC_ID,
+    DATA_RECORD_COUNT,
+    DATA_FILE_COUNT,
+    TOTAL_DATA_FILE_SIZE_IN_BYTES,
+    POSITION_DELETE_RECORD_COUNT,
+    POSITION_DELETE_FILE_COUNT,
+    EQUALITY_DELETE_RECORD_COUNT,
+    EQUALITY_DELETE_FILE_COUNT,
+    TOTAL_RECORD_COUNT,
+    LAST_UPDATED_AT,
+    LAST_UPDATED_SNAPSHOT_ID
+  }
+
+  /**
+   * Generates a Schema object as per partition statistics spec based on the given partition type.
+   *
+   * @param partitionType the struct type that defines the structure of the partition.
+   * @return a Schema object that corresponds to the provided partition type.
+   */
+  public static Schema schema(Types.StructType partitionType) {
+    if (partitionType.fields().isEmpty()) {
+      throw new IllegalArgumentException("getting schema for an unpartitioned table");
+    }
+
+    return new Schema(
+        Types.NestedField.required(1, Column.PARTITION.name(), partitionType),
+        Types.NestedField.required(2, Column.SPEC_ID.name(), Types.IntegerType.get()),
+        Types.NestedField.required(3, Column.DATA_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.required(4, Column.DATA_FILE_COUNT.name(), Types.IntegerType.get()),
+        Types.NestedField.required(
+            5, Column.TOTAL_DATA_FILE_SIZE_IN_BYTES.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            6, Column.POSITION_DELETE_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            7, Column.POSITION_DELETE_FILE_COUNT.name(), Types.IntegerType.get()),
+        Types.NestedField.optional(
+            8, Column.EQUALITY_DELETE_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            9, Column.EQUALITY_DELETE_FILE_COUNT.name(), Types.IntegerType.get()),
+        Types.NestedField.optional(10, Column.TOTAL_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.optional(11, Column.LAST_UPDATED_AT.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            12, Column.LAST_UPDATED_SNAPSHOT_ID.name(), Types.LongType.get()));
+  }
+
+  /**
+   * Creates an iterable of partition stats records from a given manifest file, using the specified
+   * table and record schema.
+   *
+   * @param table the table from which the manifest file is derived.
+   * @param manifest the manifest file containing metadata about the records.
+   * @param recordSchema the schema defining the structure of the records.
+   * @return a CloseableIterable of partition stats records as defined by the manifest file and
+   *     record schema.
+   */
+  public static CloseableIterable<Record> fromManifest(
+      Table table, ManifestFile manifest, Schema recordSchema) {
+    return CloseableIterable.transform(
+        ManifestFiles.open(manifest, table.io(), table.specs())
+            .select(BaseScan.scanColumns(manifest.content()))
+            .liveEntries(),

Review Comment:
   Does selecting live entries affect the computation of `last_updated_at`?



##########
core/src/main/java/org/apache/iceberg/PartitionStatsUtil.java:
##########
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg;
+
+import java.util.List;
+import java.util.Map;
+import org.apache.iceberg.data.GenericRecord;
+import org.apache.iceberg.data.Record;
+import org.apache.iceberg.io.CloseableIterable;
+import org.apache.iceberg.types.Types;
+import org.apache.iceberg.util.PartitionUtil;
+
+public class PartitionStatsUtil {
+
+  private PartitionStatsUtil() {}
+
+  public enum Column {
+    PARTITION_DATA,
+    SPEC_ID,
+    DATA_RECORD_COUNT,
+    DATA_FILE_COUNT,
+    DATA_FILE_SIZE_IN_BYTES,
+    POSITION_DELETE_RECORD_COUNT,
+    POSITION_DELETE_FILE_COUNT,
+    EQUALITY_DELETE_RECORD_COUNT,
+    EQUALITY_DELETE_FILE_COUNT,
+    TOTAL_RECORD_COUNT,
+    LAST_UPDATED_AT,
+    LAST_UPDATED_SNAPSHOT_ID
+  }
+
+  public static Schema schema(Types.StructType partitionType) {
+    if (partitionType.fields().isEmpty()) {
+      throw new IllegalArgumentException("getting schema for an unpartitioned table");
+    }
+
+    return new Schema(
+        Types.NestedField.required(1, Column.PARTITION_DATA.name(), partitionType),
+        Types.NestedField.required(2, Column.SPEC_ID.name(), Types.IntegerType.get()),
+        Types.NestedField.required(3, Column.DATA_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.required(4, Column.DATA_FILE_COUNT.name(), Types.IntegerType.get()),
+        Types.NestedField.required(5, Column.DATA_FILE_SIZE_IN_BYTES.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            6, Column.POSITION_DELETE_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            7, Column.POSITION_DELETE_FILE_COUNT.name(), Types.IntegerType.get()),
+        Types.NestedField.optional(
+            8, Column.EQUALITY_DELETE_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            9, Column.EQUALITY_DELETE_FILE_COUNT.name(), Types.IntegerType.get()),
+        Types.NestedField.optional(10, Column.TOTAL_RECORD_COUNT.name(), Types.LongType.get()),
+        Types.NestedField.optional(11, Column.LAST_UPDATED_AT.name(), Types.LongType.get()),
+        Types.NestedField.optional(
+            12, Column.LAST_UPDATED_SNAPSHOT_ID.name(), Types.LongType.get()));
+  }
+
+  public static CloseableIterable<Record> fromManifest(
+      Table table, ManifestFile manifest, Schema recordSchema) {
+    CloseableIterable<? extends ManifestEntry<? extends ContentFile<?>>> entries =
+        CloseableIterable.transform(
+            ManifestFiles.open(manifest, table.io(), table.specs())
+                .select(scanColumns(manifest.content())) // don't select stats columns
+                .liveEntries(),
+                .liveEntries(),
+            t ->
+                (ManifestEntry<? extends ContentFile<?>>)
+                    // defensive copy of manifest entry without stats columns
+                    t.copyWithoutStats());
+
+    return CloseableIterable.transform(
+        entries, entry -> fromManifestEntry(entry, table, recordSchema));
+  }
+
+  public static void updateRecord(Record toUpdate, Record fromRecord) {
+    toUpdate.set(
+        Column.SPEC_ID.ordinal(),
+        Math.max(
+            (int) toUpdate.get(Column.SPEC_ID.ordinal()),
+            (int) fromRecord.get(Column.SPEC_ID.ordinal())));
+    incrementLong(toUpdate, fromRecord, Column.DATA_RECORD_COUNT);
+    incrementInt(toUpdate, fromRecord, Column.DATA_FILE_COUNT);
+    incrementLong(toUpdate, fromRecord, Column.DATA_FILE_SIZE_IN_BYTES);
+    checkAndIncrementLong(toUpdate, fromRecord, Column.POSITION_DELETE_RECORD_COUNT);
+    checkAndIncrementInt(toUpdate, fromRecord, Column.POSITION_DELETE_FILE_COUNT);
+    checkAndIncrementLong(toUpdate, fromRecord, Column.EQUALITY_DELETE_RECORD_COUNT);
+    checkAndIncrementInt(toUpdate, fromRecord, Column.EQUALITY_DELETE_FILE_COUNT);
+    checkAndIncrementLong(toUpdate, fromRecord, Column.TOTAL_RECORD_COUNT);
+    if (toUpdate.get(Column.LAST_UPDATED_AT.ordinal()) != null
+        && fromRecord.get(Column.LAST_UPDATED_AT.ordinal()) != null
+        && ((long) toUpdate.get(Column.LAST_UPDATED_AT.ordinal())
+            < (long) fromRecord.get(Column.LAST_UPDATED_AT.ordinal()))) {
+      toUpdate.set(
+          Column.LAST_UPDATED_AT.ordinal(), fromRecord.get(Column.LAST_UPDATED_AT.ordinal()));
+      toUpdate.set(
+          Column.LAST_UPDATED_SNAPSHOT_ID.ordinal(),
+          fromRecord.get(Column.LAST_UPDATED_SNAPSHOT_ID.ordinal()));
+    }
+  }
+
+  public static Record partitionDataToRecord(
+      Types.StructType partitionSchema, PartitionData partitionData) {
+    GenericRecord genericRecord = GenericRecord.create(partitionSchema);
+    for (int index = 0; index < partitionData.size(); index++) {
+      genericRecord.set(index, partitionData.get(index));
+    }
+
+    return genericRecord;
+  }
+
+  private static Record fromManifestEntry(
+      ManifestEntry<?> entry, Table table, Schema recordSchema) {
+    GenericRecord record = GenericRecord.create(recordSchema);
+    Types.StructType partitionType =
+        recordSchema.findField(Column.PARTITION_DATA.name()).type().asStructType();
+    PartitionData partitionData = coercedPartitionData(entry.file(), table.specs(), partitionType);
+    record.set(
+        Column.PARTITION_DATA.ordinal(), partitionDataToRecord(partitionType, partitionData));
+    record.set(Column.SPEC_ID.ordinal(), entry.file().specId());
+
+    Snapshot snapshot = table.snapshot(entry.snapshotId());
+    if (snapshot != null) {
+      record.set(Column.LAST_UPDATED_SNAPSHOT_ID.ordinal(), snapshot.snapshotId());
+      record.set(Column.LAST_UPDATED_AT.ordinal(), snapshot.timestampMillis());
+    }
+
+    switch (entry.file().content()) {
+      case DATA:
+        record.set(Column.DATA_FILE_COUNT.ordinal(), 1);
+        record.set(Column.DATA_RECORD_COUNT.ordinal(), entry.file().recordCount());
+        record.set(Column.DATA_FILE_SIZE_IN_BYTES.ordinal(), entry.file().fileSizeInBytes());
+        break;
+      case POSITION_DELETES:
+        record.set(Column.POSITION_DELETE_FILE_COUNT.ordinal(), 1);
+        record.set(Column.POSITION_DELETE_RECORD_COUNT.ordinal(), entry.file().recordCount());
+        break;
+      case EQUALITY_DELETES:
+        record.set(Column.EQUALITY_DELETE_FILE_COUNT.ordinal(), 1);
+        record.set(Column.EQUALITY_DELETE_RECORD_COUNT.ordinal(), entry.file().recordCount());
+        break;
+      default:
+        throw new UnsupportedOperationException(
+            "Unsupported file content type: " + entry.file().content());
+    }
+
+    // TODO: optionally compute TOTAL_RECORD_COUNT based on the flag

Review Comment:
   I am not sure we would want to extend this path to compute deletes. We may 
also reconsider how position deletes are handled in V3, so it may be possible 
to compute this without scanning the data.
   
   I'd probably drop this TODO for now.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

