ExplorData24 commented on issue #2359:
URL: https://github.com/apache/iceberg/issues/2359#issuecomment-1847038152

@coolderli @RussellSpitzer @hunter-cloud09 @dixingxing0 @pvary

Hello everyone.
   
I am using a Hive catalog to create Iceberg tables, with Spark as the execution engine:
   
import pyspark
from pyspark.sql import SparkSession

# HIVE_URI, WAREHOUSE, AWS_S3_ENDPOINT, AWS_ACCESS_KEY and AWS_SECRET_KEY
# are defined earlier in the notebook.
conf = (
    pyspark.SparkConf()
        .setAppName('app_name')
        # Maven packages pulled in at startup
        .set('spark.jars.packages', 'org.apache.iceberg:iceberg-spark-runtime-3.3_2.12:1.0.0,software.amazon.awssdk:bundle:2.17.178,software.amazon.awssdk:url-connection-client:2.17.178')
        # SQL extensions
        .set('spark.sql.extensions', 'org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions')
        # catalog configuration
        .set('spark.sql.catalog.catalog_hive', 'org.apache.iceberg.spark.SparkCatalog')
        .set('spark.sql.catalog.catalog_hive.type', 'hive')
        .set('spark.sql.catalog.catalog_hive.uri', HIVE_URI)
        .set('spark.sql.catalog.catalog_hive.warehouse', WAREHOUSE)
        .set('spark.sql.catalog.catalog_hive.endpoint', AWS_S3_ENDPOINT)
        .set('spark.sql.catalog.catalog_hive.io-impl', 'org.apache.iceberg.aws.s3.S3FileIO')
        .set('spark.hadoop.fs.s3a.access.key', AWS_ACCESS_KEY)
        .set('spark.hadoop.fs.s3a.secret.key', AWS_SECRET_KEY)
)
spark = SparkSession.builder.config(conf=conf).getOrCreate()
print("Spark Running")
spark.sql("CREATE TABLE catalog_hive.default.table (name STRING) USING iceberg;").show()
   
When I run the CREATE TABLE command, I get the following output and exception:
   
Spark Running
23/12/07 16:47:16 WARN metastore: Failed to connect to the MetaStore Server...
23/12/07 16:47:17 WARN metastore: Failed to connect to the MetaStore Server...
23/12/07 16:47:18 WARN metastore: Failed to connect to the MetaStore Server...
   
Py4JJavaError                             Traceback (most recent call last)
Cell In[2], line 36
     34 print("Spark Running")
     35 ## Create a Table
---> 36 spark.sql("CREATE TABLE catalog_hive.default.table (name STRING) USING iceberg;").show()
     37 ## Insert Some Data
     38 spark.sql("INSERT INTO catalog_hive.default.table VALUES ('Alex Merced'), ('Dipankar Mazumdar'), ('Jason Hughes')").show()

File ~/.local/lib/python3.10/site-packages/pyspark/sql/session.py:1034, in SparkSession.sql(self, sqlQuery, **kwargs)
   1032 sqlQuery = formatter.format(sqlQuery, **kwargs)
   1033 try:
-> 1034     return DataFrame(self._jsparkSession.sql(sqlQuery), self)
   1035 finally:
   1036     if len(kwargs) > 0:

File ~/.local/lib/python3.10/site-packages/py4j/java_gateway.py:1321, in JavaMember.__call__(self, *args)
   1315 command = proto.CALL_COMMAND_NAME +\
   1316     self.command_header +\
   1317     args_command +\
   1318     proto.END_COMMAND_PART
   1320 answer = self.gateway_client.send_command(command)
-> 1321 return_value = get_return_value(
   1322     answer, self.gateway_client, self.target_id, self.name)
   1324 for temp_arg in temp_args:
   1325     temp_arg._detach()

File ~/.local/lib/python3.10/site-packages/pyspark/sql/utils.py:190, in capture_sql_exception.<locals>.deco(*a, **kw)
    188 def deco(*a: Any, **kw: Any) -> Any:
    189     try:
--> 190         return f(*a, **kw)
    191     except Py4JJavaError as e:
    192         converted = convert_exception(e.java_exception)

File ~/.local/lib/python3.10/site-packages/py4j/protocol.py:326, in get_return_value(answer, gateway_client, target_id, name)
    324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325 if answer[1] == REFERENCE_TYPE:
--> 326     raise Py4JJavaError(
    327         "An error occurred while calling {0}{1}{2}.\n".
    328         format(target_id, ".", name), value)
    329 else:
    330     raise Py4JError(
    331         "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
    332         format(target_id, ".", name, value))
   
Py4JJavaError: An error occurred while calling o47.sql.
: org.apache.iceberg.hive.RuntimeMetaException: Failed to connect to Hive Metastore
    at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:84)
    at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:34)
    at org.apache.iceberg.ClientPoolImpl.get(ClientPoolImpl.java:125)
    at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:56)
    at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:51)
    at org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:82)
    at org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:205)
    at org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:95)
    at org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:78)
    at org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:43)
    at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2406)
    at java.base/java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1908)
    at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2404)
    at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2387)
    at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)
    at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:62)
    at org.apache.iceberg.CachingCatalog.loadTable(CachingCatalog.java:166)
    at org.apache.iceberg.spark.SparkCatalog.load(SparkCatalog.java:587)
    at org.apache.iceberg.spark.SparkCatalog.loadTable(SparkCatalog.java:142)
    at org.apache.iceberg.spark.SparkCatalog.loadTable(SparkCatalog.java:99)
    at org.apache.spark.sql.connector.catalog.TableCatalog.tableExists(TableCatalog.java:156)
    at org.apache.spark.sql.execution.datasources.v2.CreateTableExec.run(CreateTableExec.scala:43)
    at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
    at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
    at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
    at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
    at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
    at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:584)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:584)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:560)
    at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)
    at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)
    at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:220)
    at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
    at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:622)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:617)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
    at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1742)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:97)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:60)
    at org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:72)
    at org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:185)
    at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:63)
    ... 63 more
Caused by: java.lang.reflect.InvocationTargetException
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1740)
    ... 75 more
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.UnknownHostException: nessie
    at org.apache.thrift.transport.TSocket.open(TSocket.java:226)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:478)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:245)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
    at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1740)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:83)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:97)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.apache.iceberg.common.DynMethods$UnboundMethod.invokeChecked(DynMethods.java:60)
    at org.apache.iceberg.common.DynMethods$UnboundMethod.invoke(DynMethods.java:72)
    at org.apache.iceberg.common.DynMethods$StaticMethod.invoke(DynMethods.java:185)
    at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:63)
    at org.apache.iceberg.hive.HiveClientPool.newClient(HiveClientPool.java:34)
    at org.apache.iceberg.ClientPoolImpl.get(ClientPoolImpl.java:125)
    at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:56)
    at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:51)
    at org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:82)
    at org.apache.iceberg.hive.HiveTableOperations.doRefresh(HiveTableOperations.java:205)
    at org.apache.iceberg.BaseMetastoreTableOperations.refresh(BaseMetastoreTableOperations.java:95)
    at org.apache.iceberg.BaseMetastoreTableOperations.current(BaseMetastoreTableOperations.java:78)
    at org.apache.iceberg.BaseMetastoreCatalog.loadTable(BaseMetastoreCatalog.java:43)
    at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2406)
    at java.base/java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1908)
    at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2404)
    at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2387)
    at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)
    at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:62)
    at org.apache.iceberg.CachingCatalog.loadTable(CachingCatalog.java:166)
    at org.apache.iceberg.spark.SparkCatalog.load(SparkCatalog.java:587)
    at org.apache.iceberg.spark.SparkCatalog.loadTable(SparkCatalog.java:142)
    at org.apache.iceberg.spark.SparkCatalog.loadTable(SparkCatalog.java:99)
    at org.apache.spark.sql.connector.catalog.TableCatalog.tableExists(TableCatalog.java:156)
    at org.apache.spark.sql.execution.datasources.v2.CreateTableExec.run(CreateTableExec.scala:43)
    at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
    at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
    at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
    at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
    at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
    at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
    at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:584)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:176)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:584)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
    at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:560)
    at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)
    at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)
    at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:220)
    at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
    at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:622)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:617)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
    at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.net.UnknownHostException: nessie
    at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:229)
    at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.base/java.net.Socket.connect(Socket.java:609)
    at org.apache.thrift.transport.TSocket.open(TSocket.java:221)
    ... 82 more
)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:527)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:245)
    ... 80 more
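
If I read the trace correctly, the innermost cause is java.net.UnknownHostException: nessie, i.e. the host name inside my HIVE_URI (nessie) cannot be resolved from the machine running the Spark driver, which is also what the reachability check above tests for. If nessie is a Docker Compose service name, my understanding is that it only resolves from containers attached to the same Docker network.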
   
   Please let me know what I might be doing wrong.
   Thanks in advance.

