SandeepSinghGahir commented on issue #8333:
URL: https://github.com/apache/iceberg/issues/8333#issuecomment-2284955150

   Hi,
   This issue/bug has been open for a while now: https://github.com/apache/iceberg/issues/10340
   Do we know when we can expect a fix? Or is there any workaround in the meantime?
   Background: I'm joining multiple Iceberg tables in Glue, each of which has had three MERGE operations applied to it. Whenever I apply any transform while joining these tables and write the result to a non-Iceberg Glue table, I get an SSL connection reset exception. Digging into the executor logs, I see a BaseReader exception while reading the delete files or data files. The rough job shape is sketched below, followed by the full error.
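
   A minimal sketch of the job shape (only `d_table` appears in the error below; the other table and column names here are made up for illustration):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Iceberg tables in the Glue catalog; each has had several MERGE commits,
# so scans also read the *-deletes.parquet files seen in the trace below.
a = spark.table("glue_catalog.iceberg_db.d_table")
b = spark.table("glue_catalog.iceberg_db.e_table")   # hypothetical
c = spark.table("glue_catalog.iceberg_db.f_table")   # hypothetical

joined = (
    a.join(b, ["region_id", "marketplace_id"])
     .join(c, ["region_id", "marketplace_id"])
     .withColumn("load_date", F.current_date())      # any transform triggers it
)

# The executors fail with SSLException: Connection reset while this write
# scans the Iceberg inputs.
joined.write.mode("overwrite").saveAsTable("legacy_db.output_table")
```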
   Error:
   24/08/12 04:07:15 ERROR BaseReader: Error reading file(s): s3://some-bucket/iceberg_catalog/iceberg_db.db/d_table/data/0yWGCw/region_id=1/marketplace_id=7/asin_bucket=7044/00598-112719-90dfe711-47dc-43e7-af6c-3c5395c527b6-00024.parquet, s3://some-bucket/iceberg_catalog/iceberg_db.db/d_table/data/0yWGCw/region_id=1/marketplace_id=7/asin_bucket=7044/01086-113207-90dfe711-47dc-43e7-af6c-3c5395c527b6-00025-deletes.parquet, s3://some-bucket/iceberg_catalog/iceberg_db.db/d_table/data/0yWGCw/region_id=1/marketplace_id=7/asin_bucket=7044/01086-113214-45a89e31-efe0-4110-bdb3-e467a520b1b3-00025-deletes.parquet
   org.apache.iceberg.exceptions.RuntimeIOException: javax.net.ssl.SSLException: Connection reset
    at org.apache.iceberg.parquet.VectorizedParquetReader$FileIterator.advance(VectorizedParquetReader.java:165) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.parquet.VectorizedParquetReader$FileIterator.next(VectorizedParquetReader.java:141) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.spark.source.BaseReader.next(BaseReader.java:136) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.spark.sql.execution.datasources.v2.PartitionIterator.hasNext(DataSourceRDD.scala:119) ~[spark-sql_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.sql.execution.datasources.v2.MetricsIterator.hasNext(DataSourceRDD.scala:156) ~[spark-sql_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.$anonfun$hasNext$1(DataSourceRDD.scala:63) ~[spark-sql_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.$anonfun$hasNext$1$adapted(DataSourceRDD.scala:63) ~[spark-sql_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at scala.Option.exists(Option.scala:376) ~[scala-library-2.12.15.jar:?]
    at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.hasNext(DataSourceRDD.scala:63) ~[spark-sql_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) ~[spark-core_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) ~[scala-library-2.12.15.jar:?]
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source) ~[?:?]
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source) ~[?:?]
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:35) ~[spark-sql_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.hasNext(Unknown Source) ~[?:?]
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:968) ~[spark-sql_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) ~[scala-library-2.12.15.jar:?]
    at org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:183) ~[spark-core_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59) ~[spark-core_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99) ~[spark-core_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52) ~[spark-core_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.scheduler.Task.run(Task.scala:138) ~[spark-core_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) ~[spark-core_2.12-3.3.0-amzn-1.jar:?]
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1516) ~[spark-core_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) ~[spark-core_2.12-3.3.0-amzn-1.jar:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_412]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_412]
    at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_412]
   Caused by: javax.net.ssl.SSLException: Connection reset
    at sun.security.ssl.Alert.createSSLException(Alert.java:127) ~[?:1.8.0_412]
    at sun.security.ssl.TransportContext.fatal(TransportContext.java:331) ~[?:1.8.0_412]
    at sun.security.ssl.TransportContext.fatal(TransportContext.java:274) ~[?:1.8.0_412]
    at sun.security.ssl.TransportContext.fatal(TransportContext.java:269) ~[?:1.8.0_412]
    at sun.security.ssl.SSLTransport.decode(SSLTransport.java:138) ~[?:1.8.0_412]
    at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1404) ~[?:1.8.0_412]
    at sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1372) ~[?:1.8.0_412]
    at sun.security.ssl.SSLSocketImpl.access$300(SSLSocketImpl.java:73) ~[?:1.8.0_412]
    at sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:966) ~[?:1.8.0_412]
    at org.apache.iceberg.aws.shaded.org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at org.apache.iceberg.aws.shaded.org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:197) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at org.apache.iceberg.aws.shaded.org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at org.apache.iceberg.aws.shaded.org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at java.io.FilterInputStream.read(FilterInputStream.java:133) ~[?:1.8.0_412]
    at software.amazon.awssdk.services.s3.internal.checksums.S3ChecksumValidatingInputStream.read(S3ChecksumValidatingInputStream.java:112) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at java.io.FilterInputStream.read(FilterInputStream.java:133) ~[?:1.8.0_412]
    at software.amazon.awssdk.core.io.SdkFilterInputStream.read(SdkFilterInputStream.java:66) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at software.amazon.awssdk.core.internal.metrics.BytesReadTrackingInputStream.read(BytesReadTrackingInputStream.java:49) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at java.io.FilterInputStream.read(FilterInputStream.java:133) ~[?:1.8.0_412]
    at software.amazon.awssdk.core.io.SdkFilterInputStream.read(SdkFilterInputStream.java:66) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at org.apache.iceberg.aws.s3.S3InputStream.read(S3InputStream.java:109) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:102) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.hadoop.ParquetFileReader$ConsecutivePartList.readAll(ParquetFileReader.java:1850) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.hadoop.ParquetFileReader.internalReadRowGroup(ParquetFileReader.java:990) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:940) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.parquet.VectorizedParquetReader$FileIterator.advance(VectorizedParquetReader.java:163) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    ... 27 more
    Suppressed: java.net.SocketException: Broken pipe (Write failed)
    at java.net.SocketOutputStream.socketWrite0(Native Method) ~[?:1.8.0_412]
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111) ~[?:1.8.0_412]
    at java.net.SocketOutputStream.write(SocketOutputStream.java:155) ~[?:1.8.0_412]
    at sun.security.ssl.SSLSocketOutputRecord.encodeAlert(SSLSocketOutputRecord.java:81) ~[?:1.8.0_412]
    at sun.security.ssl.TransportContext.fatal(TransportContext.java:362) ~[?:1.8.0_412]
    at sun.security.ssl.TransportContext.fatal(TransportContext.java:274) ~[?:1.8.0_412]
    at sun.security.ssl.TransportContext.fatal(TransportContext.java:269) ~[?:1.8.0_412]
    at sun.security.ssl.SSLTransport.decode(SSLTransport.java:138) ~[?:1.8.0_412]
    at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1404) ~[?:1.8.0_412]
    at sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1372) ~[?:1.8.0_412]
    at sun.security.ssl.SSLSocketImpl.access$300(SSLSocketImpl.java:73) ~[?:1.8.0_412]
    at sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:966) ~[?:1.8.0_412]
    at org.apache.iceberg.aws.shaded.org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at org.apache.iceberg.aws.shaded.org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:197) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at org.apache.iceberg.aws.shaded.org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at org.apache.iceberg.aws.shaded.org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at java.io.FilterInputStream.read(FilterInputStream.java:133) ~[?:1.8.0_412]
    at software.amazon.awssdk.services.s3.internal.checksums.S3ChecksumValidatingInputStream.read(S3ChecksumValidatingInputStream.java:112) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at java.io.FilterInputStream.read(FilterInputStream.java:133) ~[?:1.8.0_412]
    at software.amazon.awssdk.core.io.SdkFilterInputStream.read(SdkFilterInputStream.java:66) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at software.amazon.awssdk.core.internal.metrics.BytesReadTrackingInputStream.read(BytesReadTrackingInputStream.java:49) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at java.io.FilterInputStream.read(FilterInputStream.java:133) ~[?:1.8.0_412]
    at software.amazon.awssdk.core.io.SdkFilterInputStream.read(SdkFilterInputStream.java:66) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at org.apache.iceberg.aws.s3.S3InputStream.read(S3InputStream.java:109) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:102) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.hadoop.ParquetFileReader$ConsecutivePartList.readAll(ParquetFileReader.java:1850) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.hadoop.ParquetFileReader.internalReadRowGroup(ParquetFileReader.java:990) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:940) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.parquet.VectorizedParquetReader$FileIterator.advance(VectorizedParquetReader.java:163) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.parquet.VectorizedParquetReader$FileIterator.next(VectorizedParquetReader.java:141) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.spark.source.BaseReader.next(BaseReader.java:136) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.spark.sql.execution.datasources.v2.PartitionIterator.hasNext(DataSourceRDD.scala:119) ~[spark-sql_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.sql.execution.datasources.v2.MetricsIterator.hasNext(DataSourceRDD.scala:156) ~[spark-sql_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.$anonfun$hasNext$1(DataSourceRDD.scala:63) ~[spark-sql_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.$anonfun$hasNext$1$adapted(DataSourceRDD.scala:63) ~[spark-sql_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at scala.Option.exists(Option.scala:376) ~[scala-library-2.12.15.jar:?]
    at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.hasNext(DataSourceRDD.scala:63) ~[spark-sql_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) ~[spark-core_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) ~[scala-library-2.12.15.jar:?]
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source) ~[?:?]
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source) ~[?:?]
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:35) ~[spark-sql_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.hasNext(Unknown Source) ~[?:?]
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:968) ~[spark-sql_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) ~[scala-library-2.12.15.jar:?]
    at org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:183) ~[spark-core_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59) ~[spark-core_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99) ~[spark-core_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52) ~[spark-core_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.scheduler.Task.run(Task.scala:138) ~[spark-core_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) ~[spark-core_2.12-3.3.0-amzn-1.jar:?]
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1516) ~[spark-core_2.12-3.3.0-amzn-1.jar:3.3.0-amzn-1]
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) ~[spark-core_2.12-3.3.0-amzn-1.jar:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_412]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_412]
    at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_412]
   Caused by: java.net.SocketException: Connection reset
    at java.net.SocketInputStream.read(SocketInputStream.java:210) ~[?:1.8.0_412]
    at java.net.SocketInputStream.read(SocketInputStream.java:141) ~[?:1.8.0_412]
    at sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:464) ~[?:1.8.0_412]
    at sun.security.ssl.SSLSocketInputRecord.decodeInputRecord(SSLSocketInputRecord.java:237) ~[?:1.8.0_412]
    at sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:190) ~[?:1.8.0_412]
    at sun.security.ssl.SSLTransport.decode(SSLTransport.java:109) ~[?:1.8.0_412]
    at sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1404) ~[?:1.8.0_412]
    at sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1372) ~[?:1.8.0_412]
    at sun.security.ssl.SSLSocketImpl.access$300(SSLSocketImpl.java:73) ~[?:1.8.0_412]
    at sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:966) ~[?:1.8.0_412]
    at org.apache.iceberg.aws.shaded.org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at org.apache.iceberg.aws.shaded.org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:197) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at org.apache.iceberg.aws.shaded.org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at org.apache.iceberg.aws.shaded.org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at java.io.FilterInputStream.read(FilterInputStream.java:133) ~[?:1.8.0_412]
    at software.amazon.awssdk.services.s3.internal.checksums.S3ChecksumValidatingInputStream.read(S3ChecksumValidatingInputStream.java:112) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at java.io.FilterInputStream.read(FilterInputStream.java:133) ~[?:1.8.0_412]
    at software.amazon.awssdk.core.io.SdkFilterInputStream.read(SdkFilterInputStream.java:66) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at software.amazon.awssdk.core.internal.metrics.BytesReadTrackingInputStream.read(BytesReadTrackingInputStream.java:49) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at java.io.FilterInputStream.read(FilterInputStream.java:133) ~[?:1.8.0_412]
    at software.amazon.awssdk.core.io.SdkFilterInputStream.read(SdkFilterInputStream.java:66) ~[iceberg-aws-bundle-1.5.0.jar:?]
    at org.apache.iceberg.aws.s3.S3InputStream.read(S3InputStream.java:109) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:102) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.hadoop.ParquetFileReader$ConsecutivePartList.readAll(ParquetFileReader.java:1850) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.hadoop.ParquetFileReader.internalReadRowGroup(ParquetFileReader.java:990) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:940) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    at org.apache.iceberg.parquet.VectorizedParquetReader$FileIterator.advance(VectorizedParquetReader.java:163) ~[iceberg-spark-runtime-3.3_2.12-1.5.0.jar:?]
    ... 27 more
   
   I have also tried a newer version of Iceberg (1.6.0), but I get the same error.
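
   In case it helps anyone else hitting this: since the reset happens on pooled S3 connections, one knob worth trying is Iceberg's documented `http-client.*` catalog properties, which can retire idle connections before the other end drops them. This is only an untested sketch, not a confirmed fix; the property names come from the Iceberg AWS docs, the values are guesses, and `glue_catalog` stands in for your catalog name:

```python
from pyspark.sql import SparkSession

# Sketch: tune the (shaded) Apache HTTP client used by Iceberg's S3FileIO so
# idle pooled connections are closed and re-opened instead of being reused
# after S3 may have silently reset them. Values below are guesses.
spark = (
    SparkSession.builder
    .config("spark.sql.catalog.glue_catalog", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue_catalog.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue_catalog.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
    # Explicitly select the Apache client so the pool settings below apply.
    .config("spark.sql.catalog.glue_catalog.http-client.type", "apache")
    # Drop connections that have been idle, or alive, for too long.
    .config("spark.sql.catalog.glue_catalog.http-client.apache.connection-max-idle-time-ms", "30000")
    .config("spark.sql.catalog.glue_catalog.http-client.apache.connection-time-to-live-ms", "60000")
    .config("spark.sql.catalog.glue_catalog.http-client.apache.tcp-keep-alive-enabled", "true")
    .getOrCreate()
)
```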
   
   
   On Thursday, August 1, 2024 at 12:43:41 AM PDT, Eduard Tudenhoefner ***@***.***> wrote:

   > > I understand that it's recommended to upgrade to Spark 3.3.4; however, I'm using Glue 4.0, which comes with Spark 3.3.0, and due to some other constraints I cannot downgrade to Glue 3.0. Is there any other solution or config that we can use to avoid this error? Materializing the table and re-reading the data from it, only to apply MERGE, takes away the potential incremental-processing savings.
   >
   > @SandeepSinghGahir this appears to be an issue in Spark itself that was fixed with 3.3.4 (or maybe in a version in between). You might want to check with the Spark community whether there's a workaround for this issue.
   
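   For reference, the "materialize and re-read" workaround mentioned in the quote above looks roughly like this, continuing the sketch from the top of this comment (table and column names are made up; the staging write and re-read are exactly the extra cost that defeats incremental processing):

```python
# Persist the join output to a staging Iceberg table, then re-read it and
# MERGE from the staged copy instead of from the live join.
joined.writeTo("glue_catalog.iceberg_db.staging_joined").createOrReplace()

spark.table("glue_catalog.iceberg_db.staging_joined").createOrReplaceTempView("staged")

spark.sql("""
    MERGE INTO glue_catalog.iceberg_db.target AS t
    USING staged AS s
    ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```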
     

