lirui-apache commented on code in PR #6698:
URL: https://github.com/apache/iceberg/pull/6698#discussion_r1137025153


##########
hive-metastore/src/main/java/org/apache/iceberg/hive/CachedClientPool.java:
##########
@@ -53,26 +67,27 @@ public class CachedClientPool implements ClientPool<IMetaStoreClient, TException
             properties,
             CatalogProperties.CLIENT_POOL_CACHE_EVICTION_INTERVAL_MS,
             CatalogProperties.CLIENT_POOL_CACHE_EVICTION_INTERVAL_MS_DEFAULT);
+    this.key = extractKey(properties.get(CatalogProperties.CLIENT_POOL_CACHE_KEYS), conf);
     init();
   }
 
   @VisibleForTesting
   HiveClientPool clientPool() {
-    return clientPoolCache.get(metastoreUri, k -> new HiveClientPool(clientPoolSize, conf));
+    return clientPoolCache.get(key, k -> new HiveClientPool(clientPoolSize, conf));
   }
 
   private synchronized void init() {
     if (clientPoolCache == null) {
       clientPoolCache =
           Caffeine.newBuilder()
               .expireAfterAccess(evictionInterval, TimeUnit.MILLISECONDS)
-              .removalListener((key, value, cause) -> ((HiveClientPool) value).close())
+              .removalListener((ignored, value, cause) -> ((HiveClientPool) value).close())

Review Comment:
   This is actually required; otherwise checkstyle fails because the `key` here now hides a class member.
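For readers unfamiliar with the checkstyle rule being referenced: a lambda parameter with the same name as a field hides that field inside the lambda body. A minimal standalone sketch of the rename applied above (not Iceberg code; `HidingSketch` and its members are hypothetical):

```java
import java.util.function.BiFunction;

// Hypothetical sketch of the hiding issue: the class has a field `key`
// (as CachedClientPool now does), so a lambda parameter also named `key`
// would hide it. Renaming the parameter to `ignored` keeps the field
// visible inside the lambda body.
class HidingSketch {
  private final String key = "cache-key";

  String lookup() {
    // The first lambda argument is the cache-entry key supplied by the
    // caller; it is deliberately ignored so the body reads the field `key`.
    BiFunction<String, String, String> fn = (ignored, value) -> key + ":" + value;
    return fn.apply("entry-key", "pool");
  }
}

public class Main {
  public static void main(String[] args) {
    System.out.println(new HidingSketch().lookup()); // prints cache-key:pool
  }
}
```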



##########
core/src/main/java/org/apache/iceberg/CatalogProperties.java:
##########
@@ -119,6 +119,26 @@ private CatalogProperties() {}
       "client.pool.cache.eviction-interval-ms";
   public static final long CLIENT_POOL_CACHE_EVICTION_INTERVAL_MS_DEFAULT =
       TimeUnit.MINUTES.toMillis(5);
+  /**
+   * A comma separated list of elements that are used to compose the key of the client pool cache.
+   *
+   * <p>The following elements are supported:
+   *
+   * <ul>
+   *   <li>URI - as specified by {@link CatalogProperties#URI}. URI will be the only element when

Review Comment:
   Yeah, we're using spark-server and we use UGI in the key (as suggested by our Spark team). I suppose Spark maintains a HiveCatalog for each user session, which means different sessions won't share the underlying pool even though they belong to the same end user.



##########
hive-metastore/src/main/java/org/apache/iceberg/hive/CachedClientPool.java:
##########
@@ -87,4 +102,89 @@ public <R> R run(Action<R, IMetaStoreClient, TException> action, boolean retry)
       throws TException, InterruptedException {
     return clientPool().run(action, retry);
   }
+
+  @VisibleForTesting
+  static Key extractKey(String cacheKeys, Configuration conf) {
+    // generate key elements in a certain order, so that the Key instances are comparable
+    List<Object> elements = Lists.newArrayList();
+    elements.add(conf.get(HiveConf.ConfVars.METASTOREURIS.varname, ""));
+    if (cacheKeys == null || cacheKeys.isEmpty()) {

Review Comment:
   OK, let's leave it to another PR.



##########
hive-metastore/src/main/java/org/apache/iceberg/hive/CachedClientPool.java:
##########
@@ -87,4 +102,89 @@ public <R> R run(Action<R, IMetaStoreClient, TException> action, boolean retry)
       throws TException, InterruptedException {
     return clientPool().run(action, retry);
   }
+
+  @VisibleForTesting
+  static Key extractKey(String cacheKeys, Configuration conf) {
+    // generate key elements in a certain order, so that the Key instances are comparable
+    List<Object> elements = Lists.newArrayList();
+    elements.add(conf.get(HiveConf.ConfVars.METASTOREURIS.varname, ""));
+    if (cacheKeys == null || cacheKeys.isEmpty()) {
+      return Key.of(elements);
+    }
+
+    Set<KeyElementType> types = Sets.newTreeSet(Comparator.comparingInt(Enum::ordinal));
+    Map<String, String> confElements = Maps.newTreeMap();
+    for (String element : cacheKeys.split(",", -1)) {
+      String trimmed = element.trim();
+      if (trimmed.toLowerCase(Locale.ROOT).startsWith(CONF_ELEMENT_PREFIX)) {
+        String key = trimmed.substring(CONF_ELEMENT_PREFIX.length());
+        ValidationException.check(
+            !confElements.containsKey(key), "Conf key element %s already specified", key);
+        confElements.put(key, conf.get(key));

Review Comment:
   `confElements` is a `TreeMap` so that the conf keys are iterated in sorted order, keeping the composed key elements deterministic.
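To illustrate the ordering guarantee (a hedged sketch, not the PR's code; `composeKey` and the property names are made up): a `TreeMap` iterates its entries in sorted key order, so two specs listing the same conf elements in different orders compose to the same cache key.

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: compose a cache-key string from "k=v" pairs. The
// TreeMap sorts entries by conf key name, so the order in the spec string
// does not affect the composed key.
public class Main {
  static String composeKey(String spec) {
    Map<String, String> elements = new TreeMap<>(); // sorted by conf key
    for (String entry : spec.split(",")) {
      String[] kv = entry.trim().split("=", 2);
      elements.put(kv[0], kv[1]);
    }
    return elements.toString();
  }

  public static void main(String[] args) {
    String a = composeKey("hive.metastore.uris=thrift://a:9083,user.name=alice");
    String b = composeKey("user.name=alice,hive.metastore.uris=thrift://a:9083");
    System.out.println(a);           // {hive.metastore.uris=thrift://a:9083, user.name=alice}
    System.out.println(a.equals(b)); // true
  }
}
```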



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

