2MD opened a new issue, #11984:
URL: https://github.com/apache/iceberg/issues/11984

   ### Query engine
   
   Iceberg version = 1.7.1 and Spark = 3.3.2
   
     "org.apache.iceberg" %% "iceberg-spark-runtime-3.3" % icebergVersion,
     "org.apache.iceberg" % "iceberg-aws-bundle" % icebergVersion,
     "org.apache.iceberg" % "iceberg-hive-metastore" % icebergVersion
     "org.apache.hadoop" % "hadoop-aws" % "3.3.1" % Provided,
     "org.apache.hadoop" % "hadoop-common" % "3.3.1" % Provided,
     "org.apache.hive" % "hive-metastore" % "3.1.3" % Provided,
     ("org.apache.hive" % "hive-exec" % "3.1.3").classifier("core") % Provided,
     "org.apache.spark" %% "spark-hive" % sparkVersion % Provided,
   
   ### Question
   
   I am trying to create a table using Iceberg + a Hive catalog + location = "s3a://....."
   
    val tableS3WarehousePath = s"s3a://$bucket/warehouse"
    val icebergCatalog = "iceberg_catalog"
    val icebergTable = "iceberg_table"
    val icebergDatabase = "iceberg_schema"
   
   
    SparkSession.builder
      .master("local[*]")
      .config("spark.hadoop.hive.metastore.uris", uris)
      .config("hive.metastore.warehouse.dir", tableS3WarehousePath)
      .config("spark.sql.legacy.parquet.nanosAsLong", "false")
      .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
      .config(s"spark.sql.catalog.$icebergCatalog", "org.apache.iceberg.spark.SparkCatalog")
      .config(s"spark.sql.catalog.$icebergCatalog.type", "hive")
      .config("spark.sql.catalogImplementation", "hive")
      .config(s"spark.sql.catalog.$icebergCatalog.warehouse", tableS3WarehousePath)
      .config(s"spark.sql.catalog.$icebergCatalog.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
      /*
      .config(s"spark.sql.catalog.$icebergCatalog.hadoop.fs.s3a.region", *****)
      .config(s"spark.sql.catalog.$icebergCatalog.hadoop.awsAccessKeyId", minioContainer.getMinioAccessValue)
      .config(s"spark.sql.catalog.$icebergCatalog.hadoop.awsSecretAccessKey", minioContainer.getMinioAccessValue)
      .config(s"spark.sql.catalog.$icebergCatalog.hadoop.fs.s3a.access.key", minioContainer.getMinioAccessValue)
      .config(s"spark.sql.catalog.$icebergCatalog.hadoop.fs.s3a.secret.key", minioContainer.getMinioAccessValue)
      .config(s"spark.sql.catalog.$icebergCatalog.hadoop.fs.s3a.endpoint", minioContainer.getHostAddress)
      */
      .config(s"spark.sql.catalog.$icebergCatalog.s3.endpoint", minioContainer.getHostAddress)
      .config(s"spark.sql.catalog.$icebergCatalog.s3.region", *****)
      .config(s"spark.sql.catalog.$icebergCatalog.s3.access-key-id", minioContainer.getMinioAccessValue)
      .config(s"spark.sql.catalog.$icebergCatalog.s3.secret-access-key", minioContainer.getMinioAccessValue)
      .config(s"spark.sql.catalog.$icebergCatalog.s3.path-style-access", "true")
      .config(s"spark.sql.catalog.$icebergCatalog.s3.ssl.enabled", "false")
      .enableHiveSupport()
      .getOrCreate()
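
   For completeness, the same MinIO values can also be handed to the S3A layer up front as spark.hadoop.* options on the builder, instead of setting them on hadoopConfiguration after the session is created (as I also do below). Just a sketch, reusing the accessors above:

    // Sketch: pass the S3A settings as spark.hadoop.* builder options so they
    // are already in hadoopConfiguration when the session starts.
    // (minioContainer accessors as in the snippet above.)
    SparkSession.builder
      .config("spark.hadoop.fs.s3a.endpoint", minioContainer.getHostAddress)
      .config("spark.hadoop.fs.s3a.access.key", minioContainer.getMinioAccessValue)
      .config("spark.hadoop.fs.s3a.secret.key", minioContainer.getMinioAccessValue)
      .config("spark.hadoop.fs.s3a.path.style.access", "true")
      .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
      // ...plus the Iceberg catalog settings shown above...
      .getOrCreate()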
   
   
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.endpoint", minioContainer.getHostAddress)
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.access.key", minioContainer.getMinioAccessValue)
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.secret.key", minioContainer.getMinioAccessValue)
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.connection.timeout", connectionTimeOut)
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.impl.disable.cache", "true")
    spark.sparkContext.hadoopConfiguration.set("spark.sql.debug.maxToStringFields", "100")
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.path.style.access", "true")
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
   
   // I know... it's too much... but I tried many variations.
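
   To double-check which of these values actually land in the live configuration, they can be read back like this (just a sketch):

    // Sketch: read the effective values back from the Hadoop configuration that
    // the S3A filesystem will use (Configuration.get returns null when unset).
    val hadoopConf = spark.sparkContext.hadoopConfiguration
    println(hadoopConf.get("fs.s3a.endpoint"))
    println(hadoopConf.get("fs.s3a.access.key"))
    println(hadoopConf.get("fs.s3a.path.style.access"))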
   
   
    val writer = df // some DataFrame
      .writeTo(tableInfo.tablePath)
      .using("iceberg")
      .partitionedBy(days(col(partitionColumn)))

    df.sparkSession.sql(s"CREATE NAMESPACE iceberg_catalog.iceberg_schema") // throws the error below
    writer.create()
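
   For reference, the same flow spelled out against the fully qualified names from above (a sketch; it assumes tableInfo.tablePath resolves to iceberg_catalog.iceberg_schema.iceberg_table):

    import org.apache.spark.sql.functions.{col, days}

    // Sketch: create the namespace first, then create the partitioned Iceberg
    // table from the DataFrame via the DataFrameWriterV2 API.
    spark.sql(s"CREATE NAMESPACE IF NOT EXISTS $icebergCatalog.$icebergDatabase")

    df.writeTo(s"$icebergCatalog.$icebergDatabase.$icebergTable")
      .using("iceberg")
      .partitionedBy(days(col(partitionColumn)))
      .create()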
   
   
   
   
   
   Caused by: MetaException(message:Got exception: java.nio.file.AccessDeniedException s3a://*****/warehouse/iceberg_table.db: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by TemporaryAWSCredentialsProvider SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider IAMInstanceCredentialsProvider : com.amazonaws.SdkClientException: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)))
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_database_result$create_database_resultStandardScheme.read(ThriftHiveMetastore.java:39343)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_database_result$create_database_resultStandardScheme.read(ThriftHiveMetastore.java:39311)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_database_result.read(ThriftHiveMetastore.java:39245)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:88)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_create_database(ThriftHiveMetastore.java:1106)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.create_database(ThriftHiveMetastore.java:1093)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createDatabase(HiveMetaStoreClient.java:809)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:208)
        at com.sun.proxy.$Proxy55.createDatabase(Unknown Source)
        at org.apache.iceberg.hive.HiveCatalog.lambda$createNamespace$10(HiveCatalog.java:431)
        at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:72)
        at org.apache.iceberg.ClientPoolImpl.run(ClientPoolImpl.java:65)
        at org.apache.iceberg.hive.CachedClientPool.run(CachedClientPool.java:122)
        at org.apache.iceberg.hive.HiveCatalog.createNamespace(HiveCatalog.java:429)
        ... 70 more
   
   
   What is wrong?
   
   Thanks!

