y-bowen opened a new issue, #6325:
URL: https://github.com/apache/iceberg/issues/6325

   ### Query engine
   
   - Flink 1.14.5
   - [iceberg-flink-runtime-1.14-1.1.0.jar](https://repo1.maven.org/maven2/org/apache/iceberg/iceberg-flink-runtime-1.14/1.1.0/iceberg-flink-runtime-1.14-1.1.0.jar)
   - Hive Standalone Metastore 3.1.3
   - MinIO
   
   
   ### Question
   
   ```sql
   CREATE TABLE IF NOT EXISTS sample (
       id BIGINT COMMENT 'unique id',
       data STRING)
   PARTITIONED BY (data)
   WITH (
       'connector'='iceberg',
       'catalog-name'='iceberg22',
       'uri'='thrift://10.43.133.33:9083',
       'catalog-type'='hive',
       'warehouse'='s3a://datalake/warehouse2/iceberg22'
   );

   INSERT INTO sample VALUES (1, 'AAA'), (2, 'BBB'), (3, 'CCC');
   ```
   
   I can see the directory 'datalake/warehouse2/iceberg22/default_database.db/sample/' in the bucket, but it is empty.
   
   Error message:

   ```
   org.apache.flink.table.api.ValidationException: Unable to create a sink for writing table 'default_catalog.default_database.sample'.

   Table options are:

   'catalog-name'='iceberg22'
   'catalog-type'='hive'
   'connector'='iceberg'
   'uri'='thrift://10.43.133.33:9083'
   'warehouse'='s3a://datalake/warehouse2/iceberg22'

   Caused by: org.apache.iceberg.exceptions.RuntimeIOException: Failed to get file system for path: s3a://datalake/warehouse2/iceberg22/default_database.db/sample/metadata/00000-f33288bf-f3ae-465b-b1cf-4916552c98f5.metadata.json
       at org.apache.iceberg.hadoop.Util.getFs(Util.java:54)
       at org.apache.iceberg.hadoop.HadoopOutputFile.fromPath(HadoopOutputFile.java:53)
       at org.apache.iceberg.hadoop.HadoopFileIO.newOutputFile(HadoopFileIO.java:83)
       at org.apache.iceberg.BaseMetastoreTableOperations.writeNewMetadata(BaseMetastoreTableOperations.java:159)
       at org.apache.iceberg.hive.HiveTableOperations.doCommit(HiveTableOperations.java:252)
       at org.apache.iceberg.BaseMetastoreTableOperations.commit(BaseMetastoreTableOperations.java:135)
       at org.apache.iceberg.BaseMetastoreCatalog$BaseMetastoreCatalogTableBuilder.create(BaseMetastoreCatalog.java:196)
       at org.apache.iceberg.CachingCatalog$CachingTableBuilder.lambda$create$0(CachingCatalog.java:261)
       at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.lambda$doComputeIfAbsent$14(BoundedLocalCache.java:2406)
       at java.util.concurrent.ConcurrentHashMap.compute(ConcurrentHashMap.java:1853)
       at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.doComputeIfAbsent(BoundedLocalCache.java:2404)
       at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.BoundedLocalCache.computeIfAbsent(BoundedLocalCache.java:2387)
       at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalCache.computeIfAbsent(LocalCache.java:108)
       at org.apache.iceberg.shaded.com.github.benmanes.caffeine.cache.LocalManualCache.get(LocalManualCache.java:62)
       at org.apache.iceberg.CachingCatalog$CachingTableBuilder.create(CachingCatalog.java:257)
       at org.apache.iceberg.catalog.Catalog.createTable(Catalog.java:75)
       at org.apache.iceberg.flink.FlinkCatalog.createIcebergTable(FlinkCatalog.java:410)
       at org.apache.iceberg.flink.FlinkDynamicTableFactory.createTableLoader(FlinkDynamicTableFactory.java:187)
       at org.apache.iceberg.flink.FlinkDynamicTableFactory.createDynamicTableSink(FlinkDynamicTableFactory.java:118)
       at org.apache.flink.table.factories.FactoryUtil.createTableSink(FactoryUtil.java:181)
       ... 90 more
   Caused by: org.apache.hadoop.fs.s3a.AWSBadRequestException: doesBucketExist on datalake: com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 172C8800F6871804; S3 Extended Request ID: null; Proxy: null), S3 Extended Request ID: null:400 Bad Request: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 172C8800F6871804; S3 Extended Request ID: null; Proxy: null)
       at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:224)
       at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:111)
       at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:265)
       at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:322)
       at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:261)
       at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:236)
       at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExists(S3AFileSystem.java:391)
       at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:322)
       at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3375)
       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:125)
       at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3424)
       at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3392)
       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:485)
       at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
       at org.apache.iceberg.hadoop.Util.getFs(Util.java:52)
       ... 109 more
   Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 172C8800F6871804; S3 Extended Request ID: null; Proxy: null), S3 Extended Request ID: null
       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1819)
       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1403)
       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1372)
       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145)
       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
       at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
       at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
       at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
       at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
       at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5259)
       at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5206)
       at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1438)
       at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1374)
       at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$verifyBucketExists$1(S3AFileSystem.java:392)
       at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
       ... 122 more
   ```
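
   For reference, a 400 Bad Request from `doesBucketExist` against a MinIO-backed bucket usually means the S3A client never reached MinIO (without an explicit endpoint it talks to the default AWS endpoint) or the addressing/signing style does not match what MinIO expects. Below is a minimal sketch, assuming MinIO with path-style access and using placeholder endpoint and credentials (not taken from this issue), that exercises the same `Path.getFileSystem(conf)` call the stack trace fails in; the equivalent `fs.s3a.*` properties would normally live in the `core-site.xml` or Hadoop configuration visible to the Flink cluster:

   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;

   // Standalone check of the S3A <-> MinIO wiring, outside Flink/Iceberg.
   // Endpoint and credentials are placeholders; adjust to your MinIO deployment.
   public class S3aMinioCheck {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       conf.set("fs.s3a.endpoint", "http://<minio-host>:9000");  // point S3A at MinIO, not AWS
       conf.set("fs.s3a.path.style.access", "true");             // MinIO is usually path-style
       conf.set("fs.s3a.connection.ssl.enabled", "false");       // if MinIO runs plain HTTP
       conf.set("fs.s3a.access.key", "<minio-access-key>");
       conf.set("fs.s3a.secret.key", "<minio-secret-key>");

       Path warehouse = new Path("s3a://datalake/warehouse2/iceberg22");
       FileSystem fs = warehouse.getFileSystem(conf);  // same call as Util.getFs in the trace
       System.out.println("warehouse exists: " + fs.exists(warehouse));
     }
   }
   ```

   If a check like this succeeds with those properties but the Flink job still fails, the same settings are probably not reaching the JobManager/TaskManagers (for example, set only locally rather than in the cluster-wide Hadoop configuration).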

