flyrain commented on code in PR #6582: URL: https://github.com/apache/iceberg/pull/6582#discussion_r1084414607
##########
core/src/main/java/org/apache/iceberg/puffin/StandardBlobTypes.java:
##########

@@ -26,4 +26,6 @@ private StandardBlobTypes() {}
    * href="https://datasketches.apache.org/">Apache DataSketches</a> library
    */
   public static final String APACHE_DATASKETCHES_THETA_V1 = "apache-datasketches-theta-v1";
+
+  public static final String NDV_BLOB = "ndv-blob";

Review Comment:
   Thanks @findepi. Does Trino update the NDV sketch every time a write happens? What if a table is written by both Trino and Spark? In that case, I believe the updates from the Spark side would be missing.

   There are two general directions:

   1. Update the metrics in each write commit. This keeps the NDV always up to date, and no extra async operation is needed. However, it requires changes to every writer across engines, which is not a trivial effort.
   2. An asynchronous operation like this procedure, with two variants:
      2.1. Don't use the same blob type; only output the NDV number.
      2.2. Use the same blob type, and update the sketch as well.

   If we pick option 2, I feel 2.2 could be the solution: the metric would be shared by both Spark and Trino, and both engines could update it. WDYT?

-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
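Option 2.2 rests on the fact that theta-style sketches are mergeable: each engine can build its own sketch and the union still yields a valid NDV estimate, which a bare NDV number (option 2.1) cannot do. As an illustrative sketch of that property, here is a minimal K-minimum-values (KMV) estimator in plain Java. It is a hypothetical stand-in written for this discussion, not the Apache DataSketches theta implementation or any Iceberg API:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.TreeSet;

/**
 * Minimal K-minimum-values NDV sketch (illustrative stand-in for a theta sketch).
 * Keeps the k smallest normalized hashes seen; the k-th smallest hash T lets us
 * estimate NDV as (k - 1) / T. Unions of KMV sketches are again KMV sketches,
 * which is the property that would let Spark and Trino update a shared blob.
 */
public class KmvSketch {
  private final int k;
  private final TreeSet<Double> hashes = new TreeSet<>();

  public KmvSketch(int k) {
    this.k = k;
  }

  // Map a value to a pseudo-uniform double in [0, 1) via SHA-1.
  private static double hash(String value) {
    try {
      byte[] d = MessageDigest.getInstance("SHA-1")
          .digest(value.getBytes(StandardCharsets.UTF_8));
      long bits = 0;
      for (int i = 0; i < 8; i++) {
        bits = (bits << 8) | (d[i] & 0xffL);
      }
      return (bits >>> 1) / (double) Long.MAX_VALUE;
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  public void update(String value) {
    hashes.add(hash(value));
    if (hashes.size() > k) {
      hashes.pollLast(); // keep only the k smallest hashes
    }
  }

  // Merging is just "union the hash sets, keep the k smallest" -- order of
  // updates across engines does not matter.
  public KmvSketch merge(KmvSketch other) {
    KmvSketch merged = new KmvSketch(k);
    merged.hashes.addAll(this.hashes);
    merged.hashes.addAll(other.hashes);
    while (merged.hashes.size() > k) {
      merged.hashes.pollLast();
    }
    return merged;
  }

  public double estimate() {
    if (hashes.size() < k) {
      return hashes.size(); // exact while below capacity
    }
    return (k - 1) / hashes.last();
  }

  public static void main(String[] args) {
    KmvSketch spark = new KmvSketch(256);
    for (int i = 0; i < 10000; i++) spark.update("row-" + i);
    KmvSketch trino = new KmvSketch(256);
    for (int i = 5000; i < 15000; i++) trino.update("row-" + i);
    // True NDV of the union is 15000; the merged estimate lands close to it.
    System.out.println("spark ~ " + spark.estimate());
    System.out.println("merged ~ " + spark.merge(trino).estimate());
  }
}
```

With k = 256 the relative standard error is roughly 1/sqrt(k), about 6%, so either engine's local estimate and the merged estimate both track the true NDV. A plain NDV number cannot be combined this way, since the overlap between the two engines' writes is unknown.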