gaborkaszab commented on code in PR #12461:
URL: https://github.com/apache/iceberg/pull/12461#discussion_r2009820381


##########
hive-metastore/src/main/java/org/apache/iceberg/hive/HMSTablePropertyHelper.java:
##########
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.hive;
+
+import static org.apache.iceberg.TableProperties.GC_ENABLED;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.metastore.api.Table;
+import org.apache.hadoop.hive.metastore.api.hive_metastoreConstants;
+import org.apache.iceberg.BaseMetastoreTableOperations;
+import org.apache.iceberg.PartitionSpecParser;
+import org.apache.iceberg.Schema;
+import org.apache.iceberg.SchemaParser;
+import org.apache.iceberg.Snapshot;
+import org.apache.iceberg.SnapshotSummary;
+import org.apache.iceberg.SortOrderParser;
+import org.apache.iceberg.TableMetadata;
+import org.apache.iceberg.TableProperties;
+import org.apache.iceberg.relocated.com.google.common.annotations.VisibleForTesting;
+import org.apache.iceberg.relocated.com.google.common.collect.BiMap;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableBiMap;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableMap;
+import org.apache.iceberg.relocated.com.google.common.collect.Maps;
+import org.apache.iceberg.util.JsonUtil;
+import org.apache.parquet.hadoop.ParquetOutputFormat;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class HMSTablePropertyHelper {
+  private static final Logger LOG = LoggerFactory.getLogger(HMSTablePropertyHelper.class);
+
+  public static final String TABLE_TYPE_PROP = "table_type";

Review Comment:
   Do we need to create these variables here? They are already defined in `BaseMetastoreTableOperations`, and I think we could use those instead of introducing the same ones here.
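
   For illustration, the helper could reference the existing constants instead of redefining them. A minimal stand-alone sketch (a stub replaces the real `BaseMetastoreTableOperations`, which already exposes these fields):

   ```java
   // Stub standing in for org.apache.iceberg.BaseMetastoreTableOperations,
   // which already defines these constants per the comment above.
   class BaseMetastoreTableOperationsStub {
     public static final String TABLE_TYPE_PROP = "table_type";
     public static final String ICEBERG_TABLE_TYPE_VALUE = "iceberg";
   }

   public class ReuseExistingConstants {
     public static void main(String[] args) {
       // the helper would reference the existing constants directly,
       // rather than introducing duplicates
       System.out.println(BaseMetastoreTableOperationsStub.TABLE_TYPE_PROP);
       System.out.println(BaseMetastoreTableOperationsStub.ICEBERG_TABLE_TYPE_VALUE);
     }
   }
   ```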



##########
hive-metastore/src/main/java/org/apache/iceberg/hive/HMSTablePropertyHelper.java:
##########
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.hive;
+
+import static org.apache.iceberg.TableProperties.GC_ENABLED;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.metastore.api.Table;
+import org.apache.hadoop.hive.metastore.api.hive_metastoreConstants;
+import org.apache.iceberg.BaseMetastoreTableOperations;
+import org.apache.iceberg.PartitionSpecParser;
+import org.apache.iceberg.Schema;
+import org.apache.iceberg.SchemaParser;
+import org.apache.iceberg.Snapshot;
+import org.apache.iceberg.SnapshotSummary;
+import org.apache.iceberg.SortOrderParser;
+import org.apache.iceberg.TableMetadata;
+import org.apache.iceberg.TableProperties;
+import org.apache.iceberg.relocated.com.google.common.annotations.VisibleForTesting;
+import org.apache.iceberg.relocated.com.google.common.collect.BiMap;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableBiMap;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableMap;
+import org.apache.iceberg.relocated.com.google.common.collect.Maps;
+import org.apache.iceberg.util.JsonUtil;
+import org.apache.parquet.hadoop.ParquetOutputFormat;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class HMSTablePropertyHelper {
+  private static final Logger LOG = LoggerFactory.getLogger(HMSTablePropertyHelper.class);
+
+  public static final String TABLE_TYPE_PROP = "table_type";
+  public static final String ICEBERG_TABLE_TYPE_VALUE = "iceberg";
+
+  private static final BiMap<String, String> ICEBERG_TO_HMS_TRANSLATION =
+      ImmutableBiMap.of(
+          // gc.enabled in Iceberg and external.table.purge in Hive are meant to do the same things
+          // but with different names
+          GC_ENABLED,
+          "external.table.purge",
+          TableProperties.PARQUET_COMPRESSION,
+          ParquetOutputFormat.COMPRESSION,
+          TableProperties.PARQUET_ROW_GROUP_SIZE_BYTES,
+          ParquetOutputFormat.BLOCK_SIZE);
+
+  private HMSTablePropertyHelper() {}
+
+  public static void setHmsTableParameters(
+      String newMetadataLocation,
+      Table tbl,
+      TableMetadata metadata,
+      Set<String> obsoleteProps,
+      boolean hiveEngineEnabled,
+      long maxHiveTablePropertySize,
+      String currentLocation) {
+    Map<String, String> parameters =
+        Optional.ofNullable(tbl.getParameters()).orElseGet(Maps::newHashMap);
+    Map<String, String> summary =
+        Optional.ofNullable(metadata.currentSnapshot())
+            .map(Snapshot::summary)
+            .orElseGet(ImmutableMap::of);
+    // push all Iceberg table properties into HMS
+    metadata.properties().entrySet().stream()
+        .filter(entry -> !entry.getKey().equalsIgnoreCase(HiveCatalog.HMS_TABLE_OWNER))
+        .forEach(
+            entry -> {
+              String key = entry.getKey();
+              // translate key names between Iceberg and HMS where needed
+              String hmsKey = ICEBERG_TO_HMS_TRANSLATION.getOrDefault(key, key);
+              parameters.put(hmsKey, entry.getValue());
+            });
+    if (metadata.uuid() != null) {
+      parameters.put(TableProperties.UUID, metadata.uuid());
+    }
+
+    // remove any props from HMS that are no longer present in Iceberg table props
+    if (obsoleteProps != null) {
+      obsoleteProps.forEach(parameters::remove);
+    }
+    parameters.put(TABLE_TYPE_PROP, ICEBERG_TABLE_TYPE_VALUE.toUpperCase(Locale.ENGLISH));
+    parameters.put(BaseMetastoreTableOperations.METADATA_LOCATION_PROP, newMetadataLocation);
+
+    if (currentLocation != null && !currentLocation.isEmpty()) {
+      tbl.getParameters()
+          .put(BaseMetastoreTableOperations.PREVIOUS_METADATA_LOCATION_PROP, currentLocation);
+    }
+    setStorageHandler(parameters, hiveEngineEnabled);
+
+    // Set the basic statistics
+    if (summary.get(SnapshotSummary.TOTAL_DATA_FILES_PROP) != null) {
+      parameters.put(StatsSetupConst.NUM_FILES, summary.get(SnapshotSummary.TOTAL_DATA_FILES_PROP));
+    }
+    if (summary.get(SnapshotSummary.TOTAL_RECORDS_PROP) != null) {
+      parameters.put(StatsSetupConst.ROW_COUNT, summary.get(SnapshotSummary.TOTAL_RECORDS_PROP));
+    }
+    if (summary.get(SnapshotSummary.TOTAL_FILE_SIZE_PROP) != null) {
+      parameters.put(StatsSetupConst.TOTAL_SIZE, summary.get(SnapshotSummary.TOTAL_FILE_SIZE_PROP));
+    }
+
+    setSnapshotStats(metadata, parameters, maxHiveTablePropertySize);
+    setSchema(metadata.schema(), parameters, maxHiveTablePropertySize);
+    setPartitionSpec(metadata, parameters, maxHiveTablePropertySize);
+    setSortOrder(metadata, parameters, maxHiveTablePropertySize);
+
+    tbl.setParameters(parameters);
+  }
+
+  private static void setStorageHandler(Map<String, String> parameters, boolean hiveEngineEnabled) {
+    // If needed set the 'storage_handler' property to enable query from Hive
+    if (hiveEngineEnabled) {
+      parameters.put(
+          hive_metastoreConstants.META_TABLE_STORAGE,
+          HiveOperationsBase.HIVE_ICEBERG_STORAGE_HANDLER);
+    } else {
+      parameters.remove(hive_metastoreConstants.META_TABLE_STORAGE);
+    }
+  }
+
+  @VisibleForTesting
+  static void setSnapshotStats(
+      TableMetadata metadata, Map<String, String> parameters, long maxHiveTablePropertySize) {
+    parameters.remove(TableProperties.CURRENT_SNAPSHOT_ID);
+    parameters.remove(TableProperties.CURRENT_SNAPSHOT_TIMESTAMP);
+    parameters.remove(TableProperties.CURRENT_SNAPSHOT_SUMMARY);
+
+    Snapshot currentSnapshot = metadata.currentSnapshot();
+    if (exposeInHmsProperties(maxHiveTablePropertySize) && currentSnapshot != null) {
+      parameters.put(
+          TableProperties.CURRENT_SNAPSHOT_ID, String.valueOf(currentSnapshot.snapshotId()));
+      parameters.put(
+          TableProperties.CURRENT_SNAPSHOT_TIMESTAMP,
+          String.valueOf(currentSnapshot.timestampMillis()));
+      setSnapshotSummary(parameters, currentSnapshot, maxHiveTablePropertySize);
+    }
+
+    parameters.put(TableProperties.SNAPSHOT_COUNT, String.valueOf(metadata.snapshots().size()));
+  }
+
+  @VisibleForTesting
+  static void setSnapshotSummary(
+      Map<String, String> parameters, Snapshot currentSnapshot, long maxHiveTablePropertySize) {
+    try {
+      String summary = JsonUtil.mapper().writeValueAsString(currentSnapshot.summary());
+      if (summary.length() <= maxHiveTablePropertySize) {
+        parameters.put(TableProperties.CURRENT_SNAPSHOT_SUMMARY, summary);
+      } else {
+        LOG.warn(
+            "Not exposing the current snapshot({}) summary in HMS since it exceeds {} characters",
+            currentSnapshot.snapshotId(),
+            maxHiveTablePropertySize);
+      }
+    } catch (JsonProcessingException e) {
+      LOG.warn(
+          "Failed to convert current snapshot({}) summary to a json string",
+          currentSnapshot.snapshotId(),
+          e);
+    }
+  }
+
+  @VisibleForTesting
+  static void setPartitionSpec(
+      TableMetadata metadata, Map<String, String> parameters, long maxHiveTablePropertySize) {
+    parameters.remove(TableProperties.DEFAULT_PARTITION_SPEC);
+    if (exposeInHmsProperties(maxHiveTablePropertySize)
+        && metadata.spec() != null
+        && metadata.spec().isPartitioned()) {
+      String spec = PartitionSpecParser.toJson(metadata.spec());
+      setField(parameters, TableProperties.DEFAULT_PARTITION_SPEC, spec, maxHiveTablePropertySize);
+    }
+  }
+
+  @VisibleForTesting
+  static void setSortOrder(
+      TableMetadata metadata, Map<String, String> parameters, long maxHiveTablePropertySize) {
+    parameters.remove(TableProperties.DEFAULT_SORT_ORDER);
+    if (exposeInHmsProperties(maxHiveTablePropertySize)
+        && metadata.sortOrder() != null
+        && metadata.sortOrder().isSorted()) {
+      String sortOrder = SortOrderParser.toJson(metadata.sortOrder());
+      setField(parameters, TableProperties.DEFAULT_SORT_ORDER, sortOrder, maxHiveTablePropertySize);
+    }
+  }
+
+  public static void setSchema(
+      Schema schema, Map<String, String> parameters, long maxHiveTablePropertySize) {
+    parameters.remove(TableProperties.CURRENT_SCHEMA);
+    if (exposeInHmsProperties(maxHiveTablePropertySize) && schema != null) {
+      String jsonSchema = SchemaParser.toJson(schema);
+      setField(parameters, TableProperties.CURRENT_SCHEMA, jsonSchema, maxHiveTablePropertySize);
+    }
+  }
+
+  private static void setField(

Review Comment:
   This still seems to be duplicated with the one in `HiveOperationsBase`. I quickly checked, and I believe the other one could be dropped now, given the removal of `setSchema()`.
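
   For reference, the duplicated helper in both places presumably boils down to the same size-guarded put. A self-contained sketch of that logic (the method name mirrors the diff, but the exact signature and log message are assumptions):

   ```java
   import java.util.HashMap;
   import java.util.Map;

   public class SetFieldSketch {
     // size-guarded put: only expose the value in HMS when it fits under the
     // configured table-property size limit (sketch of the duplicated helper)
     static void setField(Map<String, String> parameters, String key, String value, long maxSize) {
       if (value.length() <= maxSize) {
         parameters.put(key, value);
       } else {
         System.out.printf("Not expanding the value of %s since it exceeds %d characters%n", key, maxSize);
       }
     }

     public static void main(String[] args) {
       Map<String, String> params = new HashMap<>();
       setField(params, "current-schema", "{\"type\":\"struct\"}", 4000);
       setField(params, "current-snapshot-summary", "x".repeat(5000), 4000);
       System.out.println(params.keySet());
     }
   }
   ```

   Keeping this in one class means the limit handling cannot drift between the table and view code paths.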



##########
hive-metastore/src/test/java/org/apache/iceberg/hive/TestHiveCatalog.java:
##########
@@ -1015,7 +1015,7 @@ public void testSnapshotStatsTableProperties() throws Exception {
   @Test
   public void testSetSnapshotSummary() throws Exception {
     Configuration conf = new Configuration();

Review Comment:
   Creating the conf instance can now be moved to L1020; there is no need to hold it in a variable.



##########
hive-metastore/src/main/java/org/apache/iceberg/hive/HMSTablePropertyHelper.java:
##########
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.hive;
+
+import static org.apache.iceberg.TableProperties.GC_ENABLED;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.metastore.api.Table;
+import org.apache.hadoop.hive.metastore.api.hive_metastoreConstants;
+import org.apache.iceberg.BaseMetastoreTableOperations;
+import org.apache.iceberg.PartitionSpecParser;
+import org.apache.iceberg.Schema;
+import org.apache.iceberg.SchemaParser;
+import org.apache.iceberg.Snapshot;
+import org.apache.iceberg.SnapshotSummary;
+import org.apache.iceberg.SortOrderParser;
+import org.apache.iceberg.TableMetadata;
+import org.apache.iceberg.TableProperties;
+import org.apache.iceberg.relocated.com.google.common.annotations.VisibleForTesting;
+import org.apache.iceberg.relocated.com.google.common.collect.BiMap;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableBiMap;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableMap;
+import org.apache.iceberg.relocated.com.google.common.collect.Maps;
+import org.apache.iceberg.util.JsonUtil;
+import org.apache.parquet.hadoop.ParquetOutputFormat;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class HMSTablePropertyHelper {
+  private static final Logger LOG = LoggerFactory.getLogger(HMSTablePropertyHelper.class);
+
+  public static final String TABLE_TYPE_PROP = "table_type";
+  public static final String ICEBERG_TABLE_TYPE_VALUE = "iceberg";
+
+  private static final BiMap<String, String> ICEBERG_TO_HMS_TRANSLATION =
+      ImmutableBiMap.of(
+          // gc.enabled in Iceberg and external.table.purge in Hive are meant to do the same things
+          // but with different names
+          GC_ENABLED,
+          "external.table.purge",
+          TableProperties.PARQUET_COMPRESSION,
+          ParquetOutputFormat.COMPRESSION,
+          TableProperties.PARQUET_ROW_GROUP_SIZE_BYTES,
+          ParquetOutputFormat.BLOCK_SIZE);

Review Comment:
   Ohh, I hadn't noticed this. I'd highly recommend making the behaviour changes in separate PRs from this refactor change. It could be misleading for reviewers, and in general people expect a refactor PR not to change internal behaviour.
   
   Additionally, as I wrote in some of my other comments, duplicating code is not desirable, especially not in a refactor PR. With this change we'd have two versions of `ICEBERG_TO_HMS_TRANSLATION`, which could be misleading.
   I just quickly checked the code, so I might be missing something, but with the proposed changes the only remaining use of this map would be in `HiveTableOperations.translateToIcebergProp(String hmsProp)`. Can't you change that function to reach out to the new helper util class, so that the mapping could be removed from `HiveTableOperations` entirely?
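
   To sketch the delegation being suggested: plain `java.util` maps stand in here for the relocated Guava `BiMap`, only the `gc.enabled`/`external.table.purge` pair from the diff is shown, and the exact wiring is an assumption, not the PR's code.

   ```java
   import java.util.HashMap;
   import java.util.Map;

   public class TranslationSketch {
     // single shared forward map: Iceberg property name -> HMS property name
     private static final Map<String, String> ICEBERG_TO_HMS = new HashMap<>();
     // inverse view derived once, so the helper serves lookups in both directions
     private static final Map<String, String> HMS_TO_ICEBERG = new HashMap<>();

     static {
       ICEBERG_TO_HMS.put("gc.enabled", "external.table.purge");
       for (Map.Entry<String, String> e : ICEBERG_TO_HMS.entrySet()) {
         HMS_TO_ICEBERG.put(e.getValue(), e.getKey());
       }
     }

     // what HiveTableOperations.translateToIcebergProp could delegate to,
     // letting the second copy of the map be removed
     static String translateToIcebergProp(String hmsProp) {
       return HMS_TO_ICEBERG.getOrDefault(hmsProp, hmsProp);
     }

     public static void main(String[] args) {
       System.out.println(translateToIcebergProp("external.table.purge"));
       System.out.println(translateToIcebergProp("some.other.prop"));
     }
   }
   ```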



##########
hive-metastore/src/main/java/org/apache/iceberg/hive/HiveOperationsBase.java:
##########
@@ -46,6 +44,7 @@
 interface HiveOperationsBase {
 
   Logger LOG = LoggerFactory.getLogger(HiveOperationsBase.class);
+  String HIVE_ICEBERG_STORAGE_HANDLER = "org.apache.iceberg.mr.hive.HiveIcebergStorageHandler";

Review Comment:
   I found the only usage of this in the new util class. Can you move it there?



##########
hive-metastore/src/main/java/org/apache/iceberg/hive/HMSTablePropertyHelper.java:
##########
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.hive;
+
+import static org.apache.iceberg.TableProperties.GC_ENABLED;
+
+import com.fasterxml.jackson.core.JsonProcessingException;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Optional;
+import java.util.Set;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.metastore.api.Table;
+import org.apache.hadoop.hive.metastore.api.hive_metastoreConstants;
+import org.apache.iceberg.BaseMetastoreTableOperations;
+import org.apache.iceberg.PartitionSpecParser;
+import org.apache.iceberg.Schema;
+import org.apache.iceberg.SchemaParser;
+import org.apache.iceberg.Snapshot;
+import org.apache.iceberg.SnapshotSummary;
+import org.apache.iceberg.SortOrderParser;
+import org.apache.iceberg.TableMetadata;
+import org.apache.iceberg.TableProperties;
+import org.apache.iceberg.relocated.com.google.common.annotations.VisibleForTesting;
+import org.apache.iceberg.relocated.com.google.common.collect.BiMap;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableBiMap;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableMap;
+import org.apache.iceberg.relocated.com.google.common.collect.Maps;
+import org.apache.iceberg.util.JsonUtil;
+import org.apache.parquet.hadoop.ParquetOutputFormat;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class HMSTablePropertyHelper {
+  private static final Logger LOG = LoggerFactory.getLogger(HMSTablePropertyHelper.class);
+
+  public static final String TABLE_TYPE_PROP = "table_type";
+  public static final String ICEBERG_TABLE_TYPE_VALUE = "iceberg";
+
+  private static final BiMap<String, String> ICEBERG_TO_HMS_TRANSLATION =
+      ImmutableBiMap.of(
+          // gc.enabled in Iceberg and external.table.purge in Hive are meant to do the same things
+          // but with different names
+          GC_ENABLED,
+          "external.table.purge",
+          TableProperties.PARQUET_COMPRESSION,
+          ParquetOutputFormat.COMPRESSION,
+          TableProperties.PARQUET_ROW_GROUP_SIZE_BYTES,
+          ParquetOutputFormat.BLOCK_SIZE);
+
+  private HMSTablePropertyHelper() {}
+
+  public static void setHmsTableParameters(
+      String newMetadataLocation,
+      Table tbl,
+      TableMetadata metadata,
+      Set<String> obsoleteProps,
+      boolean hiveEngineEnabled,
+      long maxHiveTablePropertySize,
+      String currentLocation) {
+    Map<String, String> parameters =
+        Optional.ofNullable(tbl.getParameters()).orElseGet(Maps::newHashMap);
+    Map<String, String> summary =

Review Comment:
   nit: add a line break before and after



##########
hive-metastore/src/main/java/org/apache/iceberg/hive/HiveViewOperations.java:
##########
@@ -299,7 +299,10 @@ private void setHmsTableParameters(
          BaseMetastoreTableOperations.PREVIOUS_METADATA_LOCATION_PROP, currentMetadataLocation());
     }
 
-    setSchema(metadata.schema(), parameters);
+    HMSTablePropertyHelper.setSchema(

Review Comment:
   I know the intention of this PR, but to me the refactor seems half done if we don't also move `HiveViewOperations.setHmsTableParameters()` to the util class. In the current state we have a util class that produces the params required for HMS, but it only does so for tables, and only partially for views. I'd find it beneficial to include views too.
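
   To make the suggestion concrete, a single helper entry point could fill the parameters common to tables and views. This is a stand-alone sketch with illustrative method and key names, not the PR's actual API:

   ```java
   import java.util.HashMap;
   import java.util.Map;

   public class SharedHmsParamsSketch {
     // one code path for the parameters shared by tables and views:
     // the metadata location plus the size-guarded schema JSON
     static void setCommonParameters(
         Map<String, String> parameters, String metadataLocation, String schemaJson, long maxSize) {
       parameters.put("metadata_location", metadataLocation);
       if (schemaJson.length() <= maxSize) {
         parameters.put("current-schema", schemaJson);
       }
     }

     public static void main(String[] args) {
       Map<String, String> tableParams = new HashMap<>();
       Map<String, String> viewParams = new HashMap<>();
       // both HiveTableOperations and HiveViewOperations would delegate here
       setCommonParameters(tableParams, "s3://bucket/tbl/metadata/v2.json", "{\"type\":\"struct\"}", 4000);
       setCommonParameters(viewParams, "s3://bucket/view/metadata/v5.json", "{\"type\":\"struct\"}", 4000);
       System.out.println(tableParams.get("current-schema").equals(viewParams.get("current-schema")));
     }
   }
   ```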



##########
hive-metastore/src/test/java/org/apache/iceberg/hive/TestHiveCatalog.java:
##########
@@ -1015,7 +1015,7 @@ public void testSnapshotStatsTableProperties() throws Exception {
   @Test
   public void testSetSnapshotSummary() throws Exception {
     Configuration conf = new Configuration();
-    conf.set("iceberg.hive.table-property-max-size", "4000");
+    long maxHiveTablePropertySize = 4000;

Review Comment:
   Can this be `final`?



##########
build.gradle:
##########
@@ -706,7 +706,6 @@ project(':iceberg-hive-metastore') {
       exclude group: 'com.google.code.findbugs', module: 'jsr305'
       exclude group: 'org.eclipse.jetty.aggregate', module: 'jetty-all'
       exclude group: 'org.eclipse.jetty.orbit', module: 'javax.servlet'
-      exclude group: 'org.apache.parquet', module: 'parquet-hadoop-bundle'

Review Comment:
   Not sure if this change is intentional. It doesn't seem relevant to the PR to me.



##########
hive-metastore/src/test/java/org/apache/iceberg/hive/TestHiveCatalog.java:
##########
@@ -1048,7 +1048,7 @@ public void testSetSnapshotSummary() throws Exception {
   @Test
   public void testNotExposeTableProperties() {
     Configuration conf = new Configuration();

Review Comment:
   Same as above, for this line and for the one below it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

