AwasthiSomesh commented on issue #11297:
URL: https://github.com/apache/iceberg/issues/11297#issuecomment-2405237112
@nastra here is the full Spark session setup we are using:
import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder()
  .master("local[*]")
  .appName("Test")
  //.config("spark.sql.defaultCatalog", "spark_catalog")
  // ADLS Gen2 OAuth credentials
  .config("fs.azure.account.auth.type.someshadlsgen2.dfs.core.windows.net", "OAuth")
  .config("fs.azure.account.oauth.provider.type.someshadlsgen2.dfs.core.windows.net", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
  .config("fs.azure.account.oauth2.client.id.someshadlsgen2.dfs.core.windows.net", "XXXXXXX")
  .config("fs.azure.account.oauth2.client.secret.someshadlsgen2.dfs.core.windows.net", "XXXXXX")
  .config("fs.azure.account.oauth2.client.endpoint.someshadlsgen2.dfs.core.windows.net", "https://login.microsoftonline.com/XXXXXXXXXXXXXXXXx/oauth2/token")
  // Iceberg catalog backed by the Hive Metastore
  .config("spark.sql.catalog.spark_catalog", "org.apache.iceberg.spark.SparkCatalog")
  .config("spark.sql.catalog.spark_catalog.uri", "thrift://hostname:9083")
  .config("spark.sql.catalog.spark_catalog.warehouse.dir", "/opt/hive/data/warehouse/")
  .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
  // Note: this entry reuses the same key and therefore overrides the
  // SparkCatalog setting above; only SparkSessionCatalog takes effect.
  .config("spark.sql.catalog.spark_catalog", "org.apache.iceberg.spark.SparkSessionCatalog")
  .config("spark.sql.spark_catalog.io-impl", "org.apache.iceberg.azure.adlsv2.ADLSFileIO")
  .config("spark.sql.hive.hiveserver2.jdbc.url", "jdbc:hive2://hostname:10000/default")
  .config("spark.sql.spark_catalog.include-credentials", "true")
  .config("spark.sql.catalog.spark_catalog.type", "hive")
  .config("iceberg.engine.hive.enabled", true)
  .enableHiveSupport()
  .getOrCreate()
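For reference, the statement that triggers the failure appears to be a plain unqualified SELECT. The line below is reconstructed from the analyzer plan in the error ('Project [*] over 'UnresolvedRelation [iceberg1table2]); the exact text of line 36 in Hive4StandAloneAzureAdlsgen2.scala is an assumption:

  // Reconstructed from the analyzer output below; the actual statement may differ.
  spark.sql("select * from iceberg1table2").show()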
Error:
24/10/02 23:29:11 INFO HiveMetaStoreClient: HMS client filtering is enabled.
24/10/02 23:29:11 INFO HiveMetaStoreClient: Resolved metastore uris: [thrift://invsh005vm0101.informatica.com:9083]
24/10/02 23:29:11 INFO HiveMetaStoreClient: Trying to connect to metastore with URI (thrift://invsh005vm0101.informatica.com:9083) in binary transport mode
24/10/02 23:29:11 INFO HiveMetaStoreClient: Opened a connection to metastore, URI (thrift://invsh005vm0101.informatica.com:9083) current connections: 1
24/10/02 23:29:11 INFO RetryingMetaStoreClient: RetryingMetaStoreClient proxy=class org.apache.hadoop.hive.metastore.HiveMetaStoreClient ugi=soawasth (auth:SIMPLE) retries=1 delay=1 lifetime=0
Exception in thread "main" org.apache.spark.sql.AnalysisException: [TABLE_OR_VIEW_NOT_FOUND] The table or view `iceberg1table2` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog. To tolerate the error on drop use DROP VIEW IF EXISTS or DROP TABLE IF EXISTS.; line 1 pos 14;
'Project [*]
+- 'UnresolvedRelation [iceberg1table2], [], false
	at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.tableNotFound(package.scala:87)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$2(CheckAnalysis.scala:202)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$2$adapted(CheckAnalysis.scala:182)
	at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:244)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:243)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:243)
	at scala.collection.Iterator.foreach(Iterator.scala:929)
	at scala.collection.Iterator.foreach$(Iterator.scala:929)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
	at scala.collection.IterableLike.foreach(IterableLike.scala:71)
	at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:243)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0(CheckAnalysis.scala:182)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0$(CheckAnalysis.scala:164)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis0(Analyzer.scala:188)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:160)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:150)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:188)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:211)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:330)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:208)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:77)
	at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:138)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:219)
	at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:546)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:219)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
	at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:218)
	at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:77)
	at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:74)
	at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:66)
	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
	at org.apache.spark.sql.SparkSession.$anonfun$sql$4(SparkSession.scala:691)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:900)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:682)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:713)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:744)
	at Hive4StandAloneAzureAdlsgen2$.main(Hive4StandAloneAzureAdlsgen2.scala:36)
	at Hive4StandAloneAzureAdlsgen2.main(Hive4StandAloneAzureAdlsgen2.scala)
24/10/02 23:29:12 INFO SparkContext: Invoking stop() from shutdown hook
24/10/02 23:29:12 INFO SparkContext: SparkContext is stopping with exitCode 0.
24/10/02 23:29:12 INFO SparkUI: Stopped Spark web UI at http://invw16con46.informatica.com:4040/
24/10/02 23:29:12 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
24/10/02 23:29:12 INFO MemoryStore: MemoryStore cleared
24/10/02 23:29:12 INFO BlockManager: BlockManager stopped
24/10/02 23:29:12 INFO BlockManagerMaster: BlockManagerMaster stopped
24/10/02 23:29:12 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
24/10/02 23:29:12 INFO SparkContext: Successfully stopped SparkContext
24/10/02 23:29:12 INFO ShutdownHookManager: Shutdown hook called
24/10/02 23:29:12 INFO ShutdownHookManager: Deleting directory C:\Users\soawasth\AppData\Local\Temp\spark-274
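The error message itself suggests verifying the schema and qualifying the table name with the correct catalog. A minimal sketch of that check, assuming the table was created in the default database of spark_catalog (both names are assumptions, not confirmed above):

  // Assumption: the table lives in the `default` database of the Hive-backed
  // spark_catalog; adjust both names to wherever the table was actually created.
  spark.sql("SHOW TABLES IN spark_catalog.default").show()
  spark.sql("SELECT * FROM spark_catalog.default.iceberg1table2").show()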
Please let me know what we are missing.