szehon-ho commented on code in PR #10288:
URL: https://github.com/apache/iceberg/pull/10288#discussion_r1691054962


##########
api/src/main/java/org/apache/iceberg/actions/ComputeTableStats.java:
##########
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.actions;
+
+import org.apache.iceberg.StatisticsFile;
+
+/** An action that collects statistics of an Iceberg table and writes to Puffin files. */
+public interface ComputeTableStats extends Action<ComputeTableStats, ComputeTableStats.Result> {
+  /**
+   * Choose the set of columns to collect stats, by default all columns are chosen.
+   *
+   * @param columns a set of column names to be analyzed
+   * @return this for method chaining
+   */
+  ComputeTableStats columns(String... columns);
+
+  /**
+   * Choose the table snapshot to compute stats, by default the current snapshot is used.
+   *
+   * @param snapshotId long ID of the snapshot for which stats need to be computed
+   * @return this for method chaining
+   */
+  ComputeTableStats snapshot(long snapshotId);
+
+  /** The action result that contains summaries of the stats computation. */
+  interface Result {
+
+    /** Returns statistics file. */

Review Comment:
   'or none if no statistics were collected'?



##########
api/src/main/java/org/apache/iceberg/actions/ComputeTableStats.java:
##########
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.actions;
+
+import org.apache.iceberg.StatisticsFile;
+
+/** An action that collects statistics of an Iceberg table and writes to Puffin files. */
+public interface ComputeTableStats extends Action<ComputeTableStats, ComputeTableStats.Result> {
+  /**
+   * Choose the set of columns to collect stats, by default all columns are chosen.
+   *
+   * @param columns a set of column names to be analyzed
+   * @return this for method chaining
+   */
+  ComputeTableStats columns(String... columns);
+
+  /**
+   * Choose the table snapshot to compute stats, by default the current snapshot is used.
+   *
+   * @param snapshotId long ID of the snapshot for which stats need to be computed
+   * @return this for method chaining
+   */
+  ComputeTableStats snapshot(long snapshotId);
+
+  /** The action result that contains summaries of the stats computation. */

Review Comment:
   Should we update it? (It's not summaries anymore, right?)
   Maybe just:
   `The result of table statistics collection.`



##########
api/src/main/java/org/apache/iceberg/actions/ComputeTableStats.java:
##########
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.actions;
+
+import org.apache.iceberg.StatisticsFile;
+
+/** An action that collects statistics of an Iceberg table and writes to Puffin files. */
+public interface ComputeTableStats extends Action<ComputeTableStats, ComputeTableStats.Result> {
+  /**

Review Comment:
   Nit: newline



##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/actions/ComputeTableStatsSparkAction.java:
##########
@@ -0,0 +1,146 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.spark.actions;
+
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import org.apache.iceberg.GenericBlobMetadata;
+import org.apache.iceberg.GenericStatisticsFile;
+import org.apache.iceberg.HasTableOperations;
+import org.apache.iceberg.Snapshot;
+import org.apache.iceberg.StatisticsFile;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.TableOperations;
+import org.apache.iceberg.actions.ComputeTableStats;
+import org.apache.iceberg.actions.ImmutableComputeTableStats;
+import org.apache.iceberg.exceptions.ValidationException;
+import org.apache.iceberg.io.FileIO;
+import org.apache.iceberg.io.OutputFile;
+import org.apache.iceberg.puffin.Blob;
+import org.apache.iceberg.puffin.Puffin;
+import org.apache.iceberg.puffin.PuffinWriter;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableList;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableSet;
+import org.apache.iceberg.spark.JobGroupInfo;
+import org.apache.iceberg.types.Types;
+import org.apache.spark.sql.SparkSession;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/** Computes the statistic of the given columns and stores it as Puffin files. */
+public class ComputeTableStatsSparkAction extends BaseSparkAction<ComputeTableStatsSparkAction>
+    implements ComputeTableStats {
+
+  private static final Logger LOG = LoggerFactory.getLogger(ComputeTableStatsSparkAction.class);
+
+  private final Table table;
+  private Set<String> columns;
+  private long snapshotId;
+
+  ComputeTableStatsSparkAction(SparkSession spark, Table table) {
+    super(spark);
+    this.table = table;
+    Snapshot snapshot = table.currentSnapshot();
+    if (snapshot != null) {
+      this.snapshotId = snapshot.snapshotId();
+    }
+    this.columns =
+        table.schema().columns().stream().map(Types.NestedField::name).collect(Collectors.toSet());
+  }
+
+  @Override
+  protected ComputeTableStatsSparkAction self() {
+    return this;
+  }
+
+  @Override
+  public Result execute() {
+    String desc =
+        String.format("Computing stats for %s for snapshot id %s", table.name(), snapshotId);
+    JobGroupInfo info = newJobGroupInfo("COMPUTE-TABLE-STATS", desc);
+    return withJobGroupInfo(info, this::doExecute);
+  }
+
+  private Result doExecute() {
+    if (snapshotId == 0L) {

Review Comment:
   This is a bit messy now; it took me a while to understand this check (it relies on the default long value).
   
   How about we replace the 'snapshotId' member variable with 'snapshot'?
   
   in Constructor:
   ``` 
   this.snapshot = table.currentSnapshot(); 
   ```
   
   In snapshot(long id):
   ```
   this.snapshot = table.snapshot(id);
   ```
   
   In doExecute:
   ```
   if (snapshot == null) {
     return ImmutableComputeTableStats.Result.builder().build();
   }
   ```
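The pitfall behind this suggestion can be reproduced outside Iceberg: a primitive `long` field defaults to `0`, so a `snapshotId == 0L` check conflates "no snapshot selected" with a snapshot whose id legitimately is 0, while a nullable reference makes the empty case explicit. A minimal self-contained sketch (the `SnapshotHolder` class is hypothetical, not from the PR):

```java
// Hypothetical illustration of the default-long-value pitfall: a nullable
// boxed field stands in for the Snapshot reference the comment suggests holding.
class SnapshotHolder {
  // null unambiguously means "no snapshot selected"; a primitive long
  // could not distinguish that state from a snapshot id of 0.
  private Long snapshotId;

  void select(long id) {
    this.snapshotId = id;
  }

  boolean hasSnapshot() {
    return snapshotId != null;
  }
}
```

With a primitive field, selecting snapshot id 0 would still look like "unset"; with the nullable reference it does not.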



##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/actions/ComputeTableStatsSparkAction.java:
##########
@@ -0,0 +1,146 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.spark.actions;
+
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import org.apache.iceberg.GenericBlobMetadata;
+import org.apache.iceberg.GenericStatisticsFile;
+import org.apache.iceberg.HasTableOperations;
+import org.apache.iceberg.Snapshot;
+import org.apache.iceberg.StatisticsFile;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.TableOperations;
+import org.apache.iceberg.actions.ComputeTableStats;
+import org.apache.iceberg.actions.ImmutableComputeTableStats;
+import org.apache.iceberg.exceptions.ValidationException;
+import org.apache.iceberg.io.FileIO;
+import org.apache.iceberg.io.OutputFile;
+import org.apache.iceberg.puffin.Blob;
+import org.apache.iceberg.puffin.Puffin;
+import org.apache.iceberg.puffin.PuffinWriter;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableList;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableSet;
+import org.apache.iceberg.spark.JobGroupInfo;
+import org.apache.iceberg.types.Types;
+import org.apache.spark.sql.SparkSession;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/** Computes the statistic of the given columns and stores it as Puffin files. */

Review Comment:
   Nit: statistic -> statistics ?



##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/actions/ComputeTableStatsSparkAction.java:
##########
@@ -0,0 +1,146 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.spark.actions;
+
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import org.apache.iceberg.GenericBlobMetadata;
+import org.apache.iceberg.GenericStatisticsFile;
+import org.apache.iceberg.HasTableOperations;
+import org.apache.iceberg.Snapshot;
+import org.apache.iceberg.StatisticsFile;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.TableOperations;
+import org.apache.iceberg.actions.ComputeTableStats;
+import org.apache.iceberg.actions.ImmutableComputeTableStats;
+import org.apache.iceberg.exceptions.ValidationException;
+import org.apache.iceberg.io.FileIO;
+import org.apache.iceberg.io.OutputFile;
+import org.apache.iceberg.puffin.Blob;
+import org.apache.iceberg.puffin.Puffin;
+import org.apache.iceberg.puffin.PuffinWriter;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableList;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableSet;
+import org.apache.iceberg.spark.JobGroupInfo;
+import org.apache.iceberg.types.Types;
+import org.apache.spark.sql.SparkSession;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/** Computes the statistic of the given columns and stores it as Puffin files. */
+public class ComputeTableStatsSparkAction extends BaseSparkAction<ComputeTableStatsSparkAction>
+    implements ComputeTableStats {
+
+  private static final Logger LOG = LoggerFactory.getLogger(ComputeTableStatsSparkAction.class);
+
+  private final Table table;
+  private Set<String> columns;
+  private long snapshotId;
+
+  ComputeTableStatsSparkAction(SparkSession spark, Table table) {
+    super(spark);
+    this.table = table;
+    Snapshot snapshot = table.currentSnapshot();
+    if (snapshot != null) {
+      this.snapshotId = snapshot.snapshotId();
+    }
+    this.columns =
+        table.schema().columns().stream().map(Types.NestedField::name).collect(Collectors.toSet());
+  }
+
+  @Override
+  protected ComputeTableStatsSparkAction self() {
+    return this;
+  }
+
+  @Override
+  public Result execute() {
+    String desc =
+        String.format("Computing stats for %s for snapshot id %s", table.name(), snapshotId);
+    JobGroupInfo info = newJobGroupInfo("COMPUTE-TABLE-STATS", desc);
+    return withJobGroupInfo(info, this::doExecute);
+  }
+
+  private Result doExecute() {
+    if (snapshotId == 0L) {
+      return ImmutableComputeTableStats.Result.builder().build();
+    }
+    LOG.info("Computing stats of {} for snapshot {}", table.name(), snapshotId);
+    List<Blob> blobs = generateNDVBlobs();
+    StatisticsFile statisticFile;
+    try {
+      statisticFile = writeAndCommitPuffin(blobs);
+    } catch (Exception e) {
+      throw new RuntimeException(e);
+    }
+    return ImmutableComputeTableStats.Result.builder().statisticsFile(statisticFile).build();
+  }
+
+  private StatisticsFile writeAndCommitPuffin(List<Blob> blobs) throws Exception {
+    LOG.info("Writing stats to puffin files for table {}", table.name());
+    TableOperations operations = ((HasTableOperations) table).operations();
+    FileIO fileIO = operations.io();
+    String path = operations.metadataFileLocation(String.format("%s.stats", UUID.randomUUID()));
+    OutputFile outputFile = fileIO.newOutputFile(path);
+    GenericStatisticsFile statisticsFile;

Review Comment:
   Nit: how about an extra method, to avoid having the weird declaration/assignment inside the try clause:
   
   ```
   GenericStatisticsFile statisticsFile = writePuffinFile();
   ```
   
   ```
   GenericStatisticsFile writePuffinFile() {
     try (PuffinWriter writer =
         Puffin.write(outputFile).createdBy("Iceberg ComputeTableStats action").build()) {
       blobs.forEach(writer::add);
       writer.finish();
       return new GenericStatisticsFile(
           snapshotId,
           path,
           writer.fileSize(),
           writer.footerSize(),
           writer.writtenBlobsMetadata().stream()
               .map(GenericBlobMetadata::from)
               .collect(ImmutableList.toImmutableList()));
     }
   }
   ```
   



##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/actions/ComputeTableStatsSparkAction.java:
##########
@@ -0,0 +1,146 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.spark.actions;
+
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import org.apache.iceberg.GenericBlobMetadata;
+import org.apache.iceberg.GenericStatisticsFile;
+import org.apache.iceberg.HasTableOperations;
+import org.apache.iceberg.Snapshot;
+import org.apache.iceberg.StatisticsFile;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.TableOperations;
+import org.apache.iceberg.actions.ComputeTableStats;
+import org.apache.iceberg.actions.ImmutableComputeTableStats;
+import org.apache.iceberg.exceptions.ValidationException;
+import org.apache.iceberg.io.FileIO;
+import org.apache.iceberg.io.OutputFile;
+import org.apache.iceberg.puffin.Blob;
+import org.apache.iceberg.puffin.Puffin;
+import org.apache.iceberg.puffin.PuffinWriter;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableList;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableSet;
+import org.apache.iceberg.spark.JobGroupInfo;
+import org.apache.iceberg.types.Types;
+import org.apache.spark.sql.SparkSession;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/** Computes the statistic of the given columns and stores it as Puffin files. */
+public class ComputeTableStatsSparkAction extends BaseSparkAction<ComputeTableStatsSparkAction>
+    implements ComputeTableStats {
+
+  private static final Logger LOG = LoggerFactory.getLogger(ComputeTableStatsSparkAction.class);
+
+  private final Table table;
+  private Set<String> columns;
+  private long snapshotId;
+
+  ComputeTableStatsSparkAction(SparkSession spark, Table table) {
+    super(spark);
+    this.table = table;
+    Snapshot snapshot = table.currentSnapshot();
+    if (snapshot != null) {
+      this.snapshotId = snapshot.snapshotId();
+    }
+    this.columns =
+        table.schema().columns().stream().map(Types.NestedField::name).collect(Collectors.toSet());
+  }
+
+  @Override
+  protected ComputeTableStatsSparkAction self() {
+    return this;
+  }
+
+  @Override
+  public Result execute() {
+    String desc =
+        String.format("Computing stats for %s for snapshot id %s", table.name(), snapshotId);
+    JobGroupInfo info = newJobGroupInfo("COMPUTE-TABLE-STATS", desc);
+    return withJobGroupInfo(info, this::doExecute);
+  }
+
+  private Result doExecute() {
+    if (snapshotId == 0L) {
+      return ImmutableComputeTableStats.Result.builder().build();
+    }
+    LOG.info("Computing stats of {} for snapshot {}", table.name(), snapshotId);
+    List<Blob> blobs = generateNDVBlobs();
+    StatisticsFile statisticFile;
+    try {
+      statisticFile = writeAndCommitPuffin(blobs);
+    } catch (Exception e) {
+      throw new RuntimeException(e);
+    }
+    return ImmutableComputeTableStats.Result.builder().statisticsFile(statisticFile).build();
+  }
+
+  private StatisticsFile writeAndCommitPuffin(List<Blob> blobs) throws Exception {
+    LOG.info("Writing stats to puffin files for table {}", table.name());
+    TableOperations operations = ((HasTableOperations) table).operations();
+    FileIO fileIO = operations.io();
+    String path = operations.metadataFileLocation(String.format("%s.stats", UUID.randomUUID()));
+    OutputFile outputFile = fileIO.newOutputFile(path);
+    GenericStatisticsFile statisticsFile;
+    try (PuffinWriter writer =
+        Puffin.write(outputFile).createdBy("Iceberg ComputeTableStats action").build()) {
+      blobs.forEach(writer::add);
+      writer.finish();
+      statisticsFile =
+          new GenericStatisticsFile(
+              snapshotId,
+              path,
+              writer.fileSize(),
+              writer.footerSize(),
+              writer.writtenBlobsMetadata().stream()
+                  .map(GenericBlobMetadata::from)
+                  .collect(ImmutableList.toImmutableList()));
+    }
+    table.updateStatistics().setStatistics(snapshotId, statisticsFile).commit();
+    return statisticsFile;
+  }
+
+  private List<Blob> generateNDVBlobs() {
+    return NDVSketchGenerator.generateNDVSketchesAndBlobs(spark(), table, snapshotId, columns);
+  }
+
+  @Override
+  public ComputeTableStats columns(String... columnNames) {
+    Preconditions.checkArgument(
+        columnNames != null && columnNames.length > 0, "Columns cannot be null/empty");
+    for (String columnName : columnNames) {
+      Types.NestedField field = table.schema().findField(columnName);
+      if (field == null) {
+        throw new ValidationException("No column with %s name in the table", columnName);

Review Comment:
   Should this also throw IllegalArgumentException? (The Preconditions check above throws that.)
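For reference, the consistency being asked for (both checks failing with the same exception type) can be sketched without Guava; the `ColumnValidator` helper below is hypothetical and only mirrors what `Preconditions.checkArgument` plus the suggested change would do:

```java
import java.util.Set;

// Hypothetical helper mirroring the suggested behavior: both the null/empty
// check and the unknown-column check throw IllegalArgumentException, matching
// what Preconditions.checkArgument throws for the first case.
class ColumnValidator {
  static void validate(Set<String> knownColumns, String... columnNames) {
    if (columnNames == null || columnNames.length == 0) {
      throw new IllegalArgumentException("Columns cannot be null/empty");
    }
    for (String name : columnNames) {
      if (!knownColumns.contains(name)) {
        throw new IllegalArgumentException("Can't find column " + name + " in schema");
      }
    }
  }
}
```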



##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/actions/ThetaSketchJavaSerializable.java:
##########
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.spark.actions;
+
+import java.io.IOException;
+import java.io.ObjectInputStream;
+import java.io.ObjectOutputStream;
+import java.io.Serializable;
+import java.nio.ByteBuffer;
+import org.apache.datasketches.memory.Memory;
+import org.apache.datasketches.theta.CompactSketch;
+import org.apache.datasketches.theta.SetOperationBuilder;
+import org.apache.datasketches.theta.Sketch;
+import org.apache.datasketches.theta.UpdateSketch;
+
+class ThetaSketchJavaSerializable implements Serializable {
+
+  private Sketch sketch;
+
+  ThetaSketchJavaSerializable() {}
+
+  ThetaSketchJavaSerializable(Sketch sketch) {
+    this.sketch = sketch;
+  }
+
+  Sketch getSketch() {
+    return sketch;
+  }
+
+  CompactSketch getCompactSketch() {
+    if (sketch == null) {
+      return null;
+    }
+
+    if (sketch instanceof UpdateSketch) {
+      return sketch.compact();
+    }
+
+    return (CompactSketch) sketch;
+  }
+
+  void update(ByteBuffer value) {
+    if (sketch == null) {
+      sketch = UpdateSketch.builder().build();
+    }
+    if (sketch instanceof UpdateSketch) {
+      ((UpdateSketch) sketch).update(value);
+    } else {
+      throw new RuntimeException("update() on read-only sketch");

Review Comment:
   This is still unresolved.
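One conventional resolution (a sketch, not the PR's code): a mutation attempt on a read-only instance is a capability error, for which `UnsupportedOperationException` is the idiomatic JDK choice over a bare `RuntimeException`. The `Accumulator` class below is a hypothetical stand-in for the sketch wrapper, so the example stays self-contained:

```java
// Hypothetical stand-in for the theta-sketch wrapper: once compacted the
// instance is read-only, and update() signals that as a capability error.
class Accumulator {
  private boolean compacted = false;
  private long count = 0;

  void update(long value) {
    if (compacted) {
      throw new UnsupportedOperationException("update() called on read-only accumulator");
    }
    count++;
  }

  void compact() {
    this.compacted = true;
  }

  long count() {
    return count;
  }
}
```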



##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/actions/ComputeTableStatsSparkAction.java:
##########
@@ -0,0 +1,146 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.spark.actions;
+
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import org.apache.iceberg.GenericBlobMetadata;
+import org.apache.iceberg.GenericStatisticsFile;
+import org.apache.iceberg.HasTableOperations;
+import org.apache.iceberg.Snapshot;
+import org.apache.iceberg.StatisticsFile;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.TableOperations;
+import org.apache.iceberg.actions.ComputeTableStats;
+import org.apache.iceberg.actions.ImmutableComputeTableStats;
+import org.apache.iceberg.exceptions.ValidationException;
+import org.apache.iceberg.io.FileIO;
+import org.apache.iceberg.io.OutputFile;
+import org.apache.iceberg.puffin.Blob;
+import org.apache.iceberg.puffin.Puffin;
+import org.apache.iceberg.puffin.PuffinWriter;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableList;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableSet;
+import org.apache.iceberg.spark.JobGroupInfo;
+import org.apache.iceberg.types.Types;
+import org.apache.spark.sql.SparkSession;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/** Computes the statistic of the given columns and stores it as Puffin files. */
+public class ComputeTableStatsSparkAction extends BaseSparkAction<ComputeTableStatsSparkAction>
+    implements ComputeTableStats {
+
+  private static final Logger LOG = LoggerFactory.getLogger(ComputeTableStatsSparkAction.class);
+
+  private final Table table;
+  private Set<String> columns;
+  private long snapshotId;
+
+  ComputeTableStatsSparkAction(SparkSession spark, Table table) {
+    super(spark);
+    this.table = table;
+    Snapshot snapshot = table.currentSnapshot();
+    if (snapshot != null) {
+      this.snapshotId = snapshot.snapshotId();
+    }
+    this.columns =
+        table.schema().columns().stream().map(Types.NestedField::name).collect(Collectors.toSet());
+  }
+
+  @Override
+  protected ComputeTableStatsSparkAction self() {
+    return this;
+  }
+
+  @Override
+  public Result execute() {
+    String desc =
+        String.format("Computing stats for %s for snapshot id %s", table.name(), snapshotId);
+    JobGroupInfo info = newJobGroupInfo("COMPUTE-TABLE-STATS", desc);
+    return withJobGroupInfo(info, this::doExecute);
+  }
+
+  private Result doExecute() {
+    if (snapshotId == 0L) {
+      return ImmutableComputeTableStats.Result.builder().build();
+    }
+    LOG.info("Computing stats of {} for snapshot {}", table.name(), snapshotId);
+    List<Blob> blobs = generateNDVBlobs();
+    StatisticsFile statisticFile;
+    try {
+      statisticFile = writeAndCommitPuffin(blobs);
+    } catch (Exception e) {
+      throw new RuntimeException(e);

Review Comment:
   RuntimeIOException
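The shape of the suggested change, sketched with the stdlib so it stays self-contained (Iceberg's own `RuntimeIOException` would play the role `UncheckedIOException` plays here; the `readOrWrap` method is hypothetical):

```java
import java.io.IOException;
import java.io.UncheckedIOException;

// Hypothetical sketch: catch the checked IOException specifically and rethrow
// as an IO-specific unchecked type, rather than catching Exception and
// wrapping it in a bare RuntimeException.
class IoWrap {
  static String readOrWrap(boolean fail) {
    try {
      if (fail) {
        throw new IOException("simulated write failure");
      }
      return "ok";
    } catch (IOException e) {
      throw new UncheckedIOException("Failed to write stats file", e);
    }
  }
}
```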



##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/actions/NDVSketchGenerator.java:
##########
@@ -0,0 +1,125 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.spark.actions;
+
+import java.nio.ByteBuffer;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+import org.apache.datasketches.theta.Sketch;
+import org.apache.iceberg.Schema;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.data.Record;
+import org.apache.iceberg.puffin.Blob;
+import org.apache.iceberg.puffin.StandardBlobTypes;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableList;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableMap;
+import org.apache.iceberg.relocated.com.google.common.collect.Lists;
+import org.apache.iceberg.relocated.com.google.common.collect.Maps;
+import org.apache.iceberg.spark.SparkReadOptions;
+import org.apache.iceberg.spark.SparkValueConverter;
+import org.apache.iceberg.types.Conversions;
+import org.apache.iceberg.types.Types;
+import org.apache.spark.api.java.JavaPairRDD;
+import org.apache.spark.sql.Column;
+import org.apache.spark.sql.Dataset;
+import org.apache.spark.sql.Row;
+import org.apache.spark.sql.SparkSession;
+import org.apache.spark.sql.functions;
+import scala.Tuple2;
+
+public class NDVSketchGenerator {
+
+  private NDVSketchGenerator() {}
+
+  public static final String APACHE_DATASKETCHES_THETA_V1_NDV_PROPERTY = "ndv";
+
+  static List<Blob> generateNDVSketchesAndBlobs(
+      SparkSession spark, Table table, long snapshotId, Set<String> columnsToBeAnalyzed) {
+    Map<Integer, ThetaSketchJavaSerializable> columnToSketchMap =
+        computeNDVSketches(spark, table, snapshotId, columnsToBeAnalyzed);
+    return generateBlobs(table, columnsToBeAnalyzed, columnToSketchMap, snapshotId);
+  }
+
+  private static List<Blob> generateBlobs(
+      Table table,
+      Set<String> columns,
+      Map<Integer, ThetaSketchJavaSerializable> sketchMap,
+      long snapshotId) {
+    return columns.stream()
+        .map(
+            columnName -> {
+              Types.NestedField field = table.schema().findField(columnName);
+              Sketch sketch = sketchMap.get(field.fieldId()).getSketch();
+              long ndv = (long) sketch.getEstimate();

Review Comment:
   Had a question: the sketch returns a double, so why not preserve it?
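The precision point can be seen in isolation: `Sketch.getEstimate()` returns a `double`, and a `(long)` cast truncates toward zero. A tiny sketch (the helper class and numbers are made up):

```java
// Hypothetical illustration: the cast used in the PR truncates the fractional
// part of the estimate, while Math.round at least rounds to the nearest long
// if an integral value is required downstream.
class NdvCast {
  static long truncate(double estimate) {
    return (long) estimate; // truncates toward zero
  }

  static long round(double estimate) {
    return Math.round(estimate); // rounds to nearest
  }
}
```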



##########
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/actions/ComputeTableStatsSparkAction.java:
##########
@@ -0,0 +1,146 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.spark.actions;
+
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import org.apache.iceberg.GenericBlobMetadata;
+import org.apache.iceberg.GenericStatisticsFile;
+import org.apache.iceberg.HasTableOperations;
+import org.apache.iceberg.Snapshot;
+import org.apache.iceberg.StatisticsFile;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.TableOperations;
+import org.apache.iceberg.actions.ComputeTableStats;
+import org.apache.iceberg.actions.ImmutableComputeTableStats;
+import org.apache.iceberg.exceptions.ValidationException;
+import org.apache.iceberg.io.FileIO;
+import org.apache.iceberg.io.OutputFile;
+import org.apache.iceberg.puffin.Blob;
+import org.apache.iceberg.puffin.Puffin;
+import org.apache.iceberg.puffin.PuffinWriter;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableList;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableSet;
+import org.apache.iceberg.spark.JobGroupInfo;
+import org.apache.iceberg.types.Types;
+import org.apache.spark.sql.SparkSession;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/** Computes statistics for the given columns and stores them as Puffin files. */
+public class ComputeTableStatsSparkAction extends BaseSparkAction<ComputeTableStatsSparkAction>
+    implements ComputeTableStats {
+
+  private static final Logger LOG = LoggerFactory.getLogger(ComputeTableStatsSparkAction.class);
+
+  private final Table table;
+  private Set<String> columns;
+  private long snapshotId;
+
+  ComputeTableStatsSparkAction(SparkSession spark, Table table) {
+    super(spark);
+    this.table = table;
+    Snapshot snapshot = table.currentSnapshot();
+    if (snapshot != null) {
+      this.snapshotId = snapshot.snapshotId();
+    }
+    this.columns =
+        table.schema().columns().stream().map(Types.NestedField::name).collect(Collectors.toSet());
+  }
+
+  @Override
+  protected ComputeTableStatsSparkAction self() {
+    return this;
+  }
+
+  @Override
+  public Result execute() {
+    String desc =
+        String.format("Computing stats for %s for snapshot id %s", table.name(), snapshotId);
+    JobGroupInfo info = newJobGroupInfo("COMPUTE-TABLE-STATS", desc);
+    return withJobGroupInfo(info, this::doExecute);
+  }
+
+  private Result doExecute() {
+    if (snapshotId == 0L) {
+      return ImmutableComputeTableStats.Result.builder().build();
+    }
+    LOG.info("Computing stats of {} for snapshot {}", table.name(), snapshotId);
+    List<Blob> blobs = generateNDVBlobs();
+    StatisticsFile statisticFile;
+    try {
+      statisticFile = writeAndCommitPuffin(blobs);
+    } catch (Exception e) {
+      throw new RuntimeException(e);
+    }
+    return ImmutableComputeTableStats.Result.builder().statisticsFile(statisticFile).build();
+  }
+
+  private StatisticsFile writeAndCommitPuffin(List<Blob> blobs) throws Exception {
+    LOG.info("Writing stats to puffin files for table {}", table.name());
+    TableOperations operations = ((HasTableOperations) table).operations();
+    FileIO fileIO = operations.io();

Review Comment:
   Nit: fileIO is not used anywhere else; how about just assigning the outputFile variable in one line instead?
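One way the suggested inlining could look (a sketch against the method body quoted above; `path` stands in for whatever location the action actually computes, and whether `fileIO` is truly unused elsewhere is for the author to confirm):

```java
// Instead of:
//   TableOperations operations = ((HasTableOperations) table).operations();
//   FileIO fileIO = operations.io();
// the OutputFile can be obtained in a single expression:
OutputFile outputFile = ((HasTableOperations) table).operations().io().newOutputFile(path);
```

This avoids holding a `FileIO` local that is never referenced again, at the cost of a slightly longer call chain.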



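For context, a hedged sketch of how the new action might be invoked, assuming it is exposed through `SparkActions` like the other Spark actions in this module (the table and column names are illustrative, not from the PR):

```java
import org.apache.iceberg.StatisticsFile;
import org.apache.iceberg.Table;
import org.apache.iceberg.actions.ComputeTableStats;
import org.apache.iceberg.spark.actions.SparkActions;
import org.apache.spark.sql.SparkSession;

// 'spark' and 'table' are assumed to be initialized elsewhere
ComputeTableStats.Result result =
    SparkActions.get(spark)
        .computeTableStats(table)
        .columns("id", "data") // optional: defaults to all columns
        .execute();

// The resulting Puffin statistics file, per the Result interface above
StatisticsFile statsFile = result.statisticsFile();
```

Per the interface javadoc, `snapshot(long)` could also be chained to target a snapshot other than the current one.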
-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

