lirui-apache commented on code in PR #12637:
URL: https://github.com/apache/iceberg/pull/12637#discussion_r2013782944


##########
hive-metastore/src/test/java/org/apache/iceberg/hive/HiveMetastoreExtension.java:
##########
@@ -40,16 +40,10 @@ private HiveMetastoreExtension(String databaseName, Map<String, String> hiveConf
 
   @Override
   public void beforeAll(ExtensionContext extensionContext) throws Exception {
-    metastore = new TestHiveMetastore();
-    HiveConf hiveConfWithOverrides = new HiveConf(TestHiveMetastore.class);
-    if (hiveConfOverride != null) {
-      for (Map.Entry<String, String> kv : hiveConfOverride.entrySet()) {
-        hiveConfWithOverrides.set(kv.getKey(), kv.getValue());
-      }
-    }
+    metastore = new TestHiveMetastore(hiveConfOverride);
 
-    metastore.start(hiveConfWithOverrides);
-    metastoreClient = new HiveMetaStoreClient(hiveConfWithOverrides);
+    metastore.start(new HiveConf(TestHiveMetastore.class));
+    metastoreClient = new HiveMetaStoreClient(metastore.hiveConf());

Review Comment:
   Enabling direct SQL globally seems to fix the [weird test case](https://github.com/apache/iceberg/blob/af99da6c2daa8e53876638c839b0f729f2f3a86e/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestAddFilesProcedure.java#L741). I did some debugging and believe it's because direct SQL gets the partition filter pushdown correct and retrieves the corresponding partitions. Should I update the test cases here, or leave that to another PR?
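
   For reference, a minimal, self-contained sketch (plain `java.util` only, no Hive dependency) of the override-merging pattern the deleted loop performed and that the `TestHiveMetastore(hiveConfOverride)` constructor presumably now encapsulates. The `hive.metastore.try.direct.sql` key is the standard Hive property for enabling direct SQL in the metastore; its use here is illustrative, not taken from this PR:

   ```java
   import java.util.LinkedHashMap;
   import java.util.Map;

   public class DirectSqlOverrideSketch {
     public static void main(String[] args) {
       // Hypothetical override map; hive.metastore.try.direct.sql is the
       // standard Hive key that toggles direct SQL in the metastore.
       Map<String, String> hiveConfOverride = new LinkedHashMap<>();
       hiveConfOverride.put("hive.metastore.try.direct.sql", "true");

       // Before this PR, each call site applied the overrides by hand,
       // entry by entry, onto a freshly built conf:
       Map<String, String> conf = new LinkedHashMap<>();
       if (hiveConfOverride != null) {
         conf.putAll(hiveConfOverride);
       }
       System.out.println(conf.get("hive.metastore.try.direct.sql"));
     }
   }
   ```

   Centralizing this merge inside `TestHiveMetastore` means every extension sees the same effective conf via `metastore.hiveConf()`, instead of each call site reimplementing the loop.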



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

