javrasya commented on issue #9444:
URL: https://github.com/apache/iceberg/issues/9444#issuecomment-1886422859
This happens more often when the consumption rate is high, such as when replaying historical messages. When I run it in unbounded streaming mode and
use `INCREMENTAL_FROM_EARLIEST_SNAPS
ajantha-bhat commented on issue #9446:
URL: https://github.com/apache/iceberg/issues/9446#issuecomment-1886398118
If you look at the caller of `Parquet.ReadBuilder.createReaderFunc`, you can
find the test cases.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
ajantha-bhat commented on issue #9446:
URL: https://github.com/apache/iceberg/issues/9446#issuecomment-1886396291
> Hi, if someone could guide me on some information about the history of
batchedReaderFunc vs ReaderFunc and some related testing code path, happy to
work on the fix for that.
liurenjie1024 commented on code in PR #135:
URL: https://github.com/apache/iceberg-rust/pull/135#discussion_r1448357734
##
crates/iceberg/src/writer/file_writer/mod.rs:
##
@@ -0,0 +1,51 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor lic
yyy1000 commented on issue #9458:
URL: https://github.com/apache/iceberg/issues/9458#issuecomment-1886386123
I'd like to help with it.
ajantha-bhat commented on PR #9455:
URL: https://github.com/apache/iceberg/pull/9455#issuecomment-1886371354
Yeah. I need to think a bit more. It looks like just using `lazyFixedSnapshotId`
in equals and hashCode is enough, instead of effectiveSnapshotID (which uses the
current snapshot).
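The idea above — keying equality and hashing on a snapshot id that is pinned once, rather than on whatever the table's current snapshot happens to be — can be sketched as follows. This is an illustrative Python sketch, not Iceberg's actual `SparkTable`; the class and field names are hypothetical:

```python
# Hypothetical sketch: equality/hashing keyed on a snapshot id fixed at
# construction time, so the hash never changes as the table advances.
class SparkTableRef:
    def __init__(self, name, fixed_snapshot_id=None):
        self.name = name
        # Pin the snapshot id once; None means "no specific snapshot".
        self._fixed_snapshot_id = fixed_snapshot_id

    def __eq__(self, other):
        if not isinstance(other, SparkTableRef):
            return NotImplemented
        return (self.name, self._fixed_snapshot_id) == (
            other.name,
            other._fixed_snapshot_id,
        )

    def __hash__(self):
        # Must agree with __eq__ and must not depend on mutable state
        # such as the table's current snapshot.
        return hash((self.name, self._fixed_snapshot_id))
```

If hashing used the current snapshot instead, two references that were equal when inserted into a set could later hash differently after a commit.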
nk1506 commented on code in PR #8907:
URL: https://github.com/apache/iceberg/pull/8907#discussion_r1448341650
##
hive-metastore/src/main/java/org/apache/iceberg/hive/HiveViewOperations.java:
##
@@ -0,0 +1,287 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
nk1506 commented on code in PR #8907:
URL: https://github.com/apache/iceberg/pull/8907#discussion_r1448339394
##
hive-metastore/src/main/java/org/apache/iceberg/hive/HiveTableOperations.java:
##
@@ -142,9 +143,15 @@ public FileIO io() {
@Override
protected void doRefresh()
liurenjie1024 commented on PR #160:
URL: https://github.com/apache/iceberg-rust/pull/160#issuecomment-1886347244
> Also, feature anyway has all the changes of main since it was built
on top of it, or should I rebase main with feature?
Hi @hiirrxnn, you can learn how to resolve
chinmay-bhat commented on code in PR #9416:
URL: https://github.com/apache/iceberg/pull/9416#discussion_r1448315751
##
core/src/test/java/org/apache/iceberg/FilterFilesTestBase.java:
##
@@ -24,67 +24,67 @@
import java.io.File;
import java.io.IOException;
import java.nio.ByteB
chinmay-bhat commented on code in PR #9416:
URL: https://github.com/apache/iceberg/pull/9416#discussion_r1448313906
##
core/src/test/java/org/apache/iceberg/TestBase.java:
##
@@ -173,7 +173,7 @@ public class TestBase {
public TestTables.TestTable table = null;
@Parameter
chinmay-bhat commented on code in PR #9380:
URL: https://github.com/apache/iceberg/pull/9380#discussion_r1448306351
##
spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPositionDeletesTable.java:
##
@@ -123,15 +126,9 @@ public static Object[][] parameters() {
chinmay-bhat commented on code in PR #9380:
URL: https://github.com/apache/iceberg/pull/9380#discussion_r1448306197
##
spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java:
##
@@ -44,57 +48,58 @@
import org.apache.spark.sql.Row;
import org.
vinitpatni commented on PR #9381:
URL: https://github.com/apache/iceberg/pull/9381#issuecomment-1886232790
> LGTM, thanks @vinitpatni. Could you also please remove
`FlinkCatalogTestBase` as part of this PR as I don't think it's used anymore,
since everything was converted.
Ack. Remov
hiirrxnn commented on PR #160:
URL: https://github.com/apache/iceberg-rust/pull/160#issuecomment-1886231297
Also, feature anyway has all the changes of main since it was built on
top of it, or should I rebase main with feature?
hiirrxnn commented on PR #160:
URL: https://github.com/apache/iceberg-rust/pull/160#issuecomment-1886208226
But the PR with the main branch has been closed. Should I do it anyway?
wooyeong commented on PR #9455:
URL: https://github.com/apache/iceberg/pull/9455#issuecomment-1886170959
I need your opinion regarding [failed
tests](https://github.com/apache/iceberg/actions/runs/7475896517/job/20353547835?pr=9455).
Previously SparkTable used only `name` intentionall
liurenjie1024 commented on PR #160:
URL: https://github.com/apache/iceberg-rust/pull/160#issuecomment-1886155116
cc @hiirrxnn Could you rebase with main branch?
Fokko commented on PR #160:
URL: https://github.com/apache/iceberg-rust/pull/160#issuecomment-1886115997
Looks like there are conflicts?
liurenjie1024 commented on PR #160:
URL: https://github.com/apache/iceberg-rust/pull/160#issuecomment-1886079899
cc @Fokko PTAL
liurenjie1024 commented on issue #159:
URL: https://github.com/apache/iceberg-rust/issues/159#issuecomment-1886064501
> @liurenjie1024 mentioned error messages as another use cases. That's the
only time that the i128 representation might not be suitable. The question is
whether the error me
lisirrx commented on code in PR #9217:
URL: https://github.com/apache/iceberg/pull/9217#discussion_r1448172377
##
core/src/test/java/org/apache/iceberg/TestManifestReader.java:
##
@@ -32,17 +32,15 @@
import org.apache.iceberg.types.Types;
import org.assertj.core.api.Assertions
github-actions[bot] closed issue #6784: Hive memory issue with reading iceberg
v2 from hive
URL: https://github.com/apache/iceberg/issues/6784
github-actions[bot] commented on issue #6768:
URL: https://github.com/apache/iceberg/issues/6768#issuecomment-1885963774
This issue has been closed because it has not received any activity in the
last 14 days since being marked as 'stale'
github-actions[bot] closed issue #6768: Support Delta name mapping to Iceberg
conversion
URL: https://github.com/apache/iceberg/issues/6768
github-actions[bot] commented on issue #6784:
URL: https://github.com/apache/iceberg/issues/6784#issuecomment-1885963748
This issue has been closed because it has not received any activity in the
last 14 days since being marked as 'stale'
dependabot[bot] opened a new pull request, #260:
URL: https://github.com/apache/iceberg-python/pull/260
Bumps [cython](https://github.com/cython/cython) from 3.0.7 to 3.0.8.
Changelog
Sourced from cython's changelog: https://github.com/cython/cython/blob/master/CHANGES.rst
ZachDischner commented on issue #9018:
URL: https://github.com/apache/iceberg/issues/9018#issuecomment-1885807415
I am also seeing this issue. I have existing Iceberg tables for which a
large number of Spark SQL queries simply fail once I use newer
libraries.
My existing tab
nastra commented on code in PR #9380:
URL: https://github.com/apache/iceberg/pull/9380#discussion_r1447971372
##
spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPositionDeletesTable.java:
##
@@ -123,15 +126,9 @@ public static Object[][] parameters() {
};
nastra commented on code in PR #9380:
URL: https://github.com/apache/iceberg/pull/9380#discussion_r1447970015
##
spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java:
##
@@ -44,57 +48,58 @@
import org.apache.spark.sql.Row;
import org.apache
huan233usc commented on issue #9446:
URL: https://github.com/apache/iceberg/issues/9446#issuecomment-1885597211
Hi, if someone could guide me on some information about the history of
batchedReaderFunc vs ReaderFunc and some related testing code path, happy to
work on the fix for that.
vinitpatni commented on PR #9381:
URL: https://github.com/apache/iceberg/pull/9381#issuecomment-1885563043
- Addressing Review Comments
vinitpatni commented on code in PR #9381:
URL: https://github.com/apache/iceberg/pull/9381#discussion_r1447853965
##
flink/v1.18/flink/src/test/java/org/apache/iceberg/flink/source/TestStreamScanSql.java:
##
@@ -127,20 +127,16 @@ private void insertRows(Table table, Row... rows)
vinitpatni commented on code in PR #9381:
URL: https://github.com/apache/iceberg/pull/9381#discussion_r1447849833
##
flink/v1.18/flink/src/test/java/org/apache/iceberg/flink/source/TestStreamScanSql.java:
##
@@ -33,30 +36,27 @@
import org.apache.iceberg.FileFormat;
import org.
RussellSpitzer opened a new issue, #9458:
URL: https://github.com/apache/iceberg/issues/9458
### Feature Request / Improvement
There are currently several places in our code where a failure while reading
a file will throw an exception, but the exception will not contain any
informatio
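A minimal sketch of the improvement being requested — propagating the file being read along with the failure — might look like this. The helper and names are hypothetical, not Iceberg's API:

```python
# Illustrative only: wrap a low-level read failure so the propagated
# exception names the file being read, while keeping the original
# error chained as the cause.
def read_with_context(path, reader):
    try:
        return reader(path)
    except Exception as e:
        raise RuntimeError(f"Failed to read file: {path}") from e
```

The chained cause (`from e`) preserves the original stack trace, so adding context costs nothing diagnostically.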
jqin61 commented on issue #208:
URL: https://github.com/apache/iceberg-python/issues/208#issuecomment-1885365903
> In Iceberg it can be that some files are still on an older partitioning,
we should make sure that we handle those correctly based on the that we provide.
It seems Spark's
Fokko merged PR #259:
URL: https://github.com/apache/iceberg-python/pull/259
Fokko commented on PR #259:
URL: https://github.com/apache/iceberg-python/pull/259#issuecomment-1885335933
Thanks for fixing this @syun64 👍
xiaoxuandev opened a new pull request, #9457:
URL: https://github.com/apache/iceberg/pull/9457
### Notes
Support min/max/count aggregate push down for partition columns
- min/max/count aggregate push down is not working if partition columns
aren't present as data columns (the stats w
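Conceptually, these aggregates can be answered from file-level metadata alone, since every row in a file shares that file's partition value. A sketch under an assumed metadata structure (not Iceberg's implementation; the dict layout is hypothetical):

```python
# Assumed metadata shape: each file carries its partition tuple and a
# record count. min/max of a partition column and total count can then
# be computed without reading any data columns or per-column stats.
def partition_column_aggregates(files, partition_field):
    values = [f["partition"][partition_field] for f in files]
    total = sum(f["record_count"] for f in files)
    return {"min": min(values), "max": max(values), "count": total}
```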
Fokko commented on PR #259:
URL: https://github.com/apache/iceberg-python/pull/259#issuecomment-1885265016
@syun64 Can you fix the `mypy` violation:
```
pyiceberg/io/pyarrow.py:336: error: Incompatible types in assignment
(expression has type "float", target has type "Optional[str]
```
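The actual fix belongs in `pyiceberg/io/pyarrow.py`, but the general shape of resolving this class of mypy error — giving the converted value its own correctly-typed name instead of assigning a `float` back into an `Optional[str]` target — looks like this (generic sketch; the function and names are hypothetical):

```python
from typing import Optional

# Sketch: convert the string setting into a new, correctly-typed
# variable rather than reassigning a float into the Optional[str]
# variable that held the raw value.
def parse_timeout(raw: Optional[str]) -> Optional[float]:
    if raw is None:
        return None
    return float(raw)
```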
syun64 opened a new pull request, #259:
URL: https://github.com/apache/iceberg-python/pull/259
```
%env PYICEBERG_CATALOG__LACUS__S3.CONNECT-TIMEOUT=60
from pyiceberg.catalog import load_catalog
catalog = load_catalog("test")
tbl = catalog.load_table("test.test")
tbl.scan()
```
bryanck commented on PR #8701:
URL: https://github.com/apache/iceberg/pull/8701#issuecomment-1885241611
Awesome! Thanks all for the feedback and guidance. I'll follow up with PRs
for the actual sink portion.
gjacoby126 commented on code in PR #9452:
URL: https://github.com/apache/iceberg/pull/9452#discussion_r1447629862
##
flink/v1.18/flink/src/main/java/org/apache/iceberg/flink/util/FlinkVersionDetector.java:
##
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (A
ajantha-bhat commented on issue #4977:
URL: https://github.com/apache/iceberg/issues/4977#issuecomment-1885176312
> Hey everyone, the Kafka connect sink by Tabular has been donated to
Iceberg in https://github.com/apache/iceberg/pull/8701. I'll go ahead and close
this issue, feel free to open
nastra commented on code in PR #9381:
URL: https://github.com/apache/iceberg/pull/9381#discussion_r1447617985
##
flink/v1.18/flink/src/test/java/org/apache/iceberg/flink/source/TestStreamScanSql.java:
##
@@ -127,20 +127,16 @@ private void insertRows(Table table, Row... rows) thr
nastra commented on code in PR #9381:
URL: https://github.com/apache/iceberg/pull/9381#discussion_r1447617468
##
flink/v1.18/flink/src/test/java/org/apache/iceberg/flink/source/TestStreamScanSql.java:
##
@@ -33,30 +36,27 @@
import org.apache.iceberg.FileFormat;
import org.apac
nastra commented on code in PR #9416:
URL: https://github.com/apache/iceberg/pull/9416#discussion_r1447606490
##
core/src/test/java/org/apache/iceberg/FilterFilesTestBase.java:
##
@@ -24,67 +24,67 @@
import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;
nastra commented on code in PR #9416:
URL: https://github.com/apache/iceberg/pull/9416#discussion_r1447607114
##
core/src/test/java/org/apache/iceberg/TestLocalFilterFiles.java:
##
@@ -18,20 +18,17 @@
*/
package org.apache.iceberg;
-import org.junit.runner.RunWith;
-import
nastra commented on code in PR #9416:
URL: https://github.com/apache/iceberg/pull/9416#discussion_r1447605228
##
data/src/test/java/org/apache/iceberg/io/TestGenericSortedPosDeleteWriter.java:
##
@@ -62,7 +62,7 @@ public class TestGenericSortedPosDeleteWriter extends
TestBase {
nastra commented on code in PR #9416:
URL: https://github.com/apache/iceberg/pull/9416#discussion_r1447604611
##
spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java:
##
@@ -21,41 +21,38 @@
import static org.apache.iceberg.PlanningMode.DI
pvary commented on code in PR #9452:
URL: https://github.com/apache/iceberg/pull/9452#discussion_r1447604053
##
flink/v1.18/flink/src/main/java/org/apache/iceberg/flink/util/FlinkVersionDetector.java:
##
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) u
nastra commented on code in PR #9416:
URL: https://github.com/apache/iceberg/pull/9416#discussion_r1447602835
##
spark/v3.5/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java:
##
@@ -21,46 +21,41 @@
import static org.apache.iceberg.PlanningMode.DISTRIB
nastra commented on code in PR #9416:
URL: https://github.com/apache/iceberg/pull/9416#discussion_r1447603337
##
spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java:
##
@@ -21,42 +21,39 @@
import static org.apache.iceberg.PlanningMode.DIST
gjacoby126 commented on code in PR #9452:
URL: https://github.com/apache/iceberg/pull/9452#discussion_r1447601555
##
flink/v1.18/flink/src/main/java/org/apache/iceberg/flink/util/FlinkVersionDetector.java:
##
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (A
nastra commented on code in PR #9416:
URL: https://github.com/apache/iceberg/pull/9416#discussion_r1447600190
##
core/src/test/java/org/apache/iceberg/TestBase.java:
##
@@ -173,7 +173,7 @@ public class TestBase {
public TestTables.TestTable table = null;
@Parameters(name
nastra merged PR #9401:
URL: https://github.com/apache/iceberg/pull/9401
purna344 commented on issue #9404:
URL: https://github.com/apache/iceberg/issues/9404#issuecomment-1885023931
If the producers write the data in storage by setting the config value below
`spark.conf.set("spark.databricks.delta.writePartitionColumnsToParquet",
"false")`
Then *.parquet
Fokko commented on issue #3277:
URL: https://github.com/apache/iceberg/issues/3277#issuecomment-1884931848
This has been fixed in a later version, I'll go ahead and close this for now.
Fokko closed issue #3277: Flink1.12.1 +Iceberg0.12.0 has problems with
real-time reading and writing in upsert mode
URL: https://github.com/apache/iceberg/issues/3277
Fokko commented on issue #4977:
URL: https://github.com/apache/iceberg/issues/4977#issuecomment-1884930862
Hey everyone, the Kafka connect sink by Tabular has been donated to Iceberg
in https://github.com/apache/iceberg/pull/8701. I'll go ahead and close this
issue, feel free to open up a new
Fokko closed issue #4977: Support Kafka Connect within Iceberg
URL: https://github.com/apache/iceberg/issues/4977
jbonofre commented on PR #8701:
URL: https://github.com/apache/iceberg/pull/8701#issuecomment-1884919080
@Fokko awesome, thanks !
Fokko merged PR #8701:
URL: https://github.com/apache/iceberg/pull/8701
Fokko commented on PR #8701:
URL: https://github.com/apache/iceberg/pull/8701#issuecomment-1884916386
Since there are no further comments, I'll go ahead and merge this. I would
like to express my gratitude to @bryanck for working on this since this will
help so many people in the Kafka comm
pvary commented on code in PR #9432:
URL: https://github.com/apache/iceberg/pull/9432#discussion_r1447431727
##
hive-metastore/src/main/java/org/apache/iceberg/hive/MetastoreUtil.java:
##
@@ -72,6 +73,23 @@ public static void alterTable(
env.putAll(extraEnv);
env.put(S
Fokko merged PR #9419:
URL: https://github.com/apache/iceberg/pull/9419
pvary commented on code in PR #9432:
URL: https://github.com/apache/iceberg/pull/9432#discussion_r1447431727
##
hive-metastore/src/main/java/org/apache/iceberg/hive/MetastoreUtil.java:
##
@@ -72,6 +73,23 @@ public static void alterTable(
env.putAll(extraEnv);
env.put(S
pvary commented on code in PR #8907:
URL: https://github.com/apache/iceberg/pull/8907#discussion_r1447426135
##
hive-metastore/src/main/java/org/apache/iceberg/hive/HiveViewOperations.java:
##
@@ -0,0 +1,287 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+
pvary commented on code in PR #8907:
URL: https://github.com/apache/iceberg/pull/8907#discussion_r1447424619
##
hive-metastore/src/main/java/org/apache/iceberg/hive/HiveViewOperations.java:
##
@@ -0,0 +1,306 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+
wooyeong commented on code in PR #9455:
URL: https://github.com/apache/iceberg/pull/9455#discussion_r1447417783
##
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/SparkTable.java:
##
@@ -405,15 +407,25 @@ public boolean equals(Object other) {
return false;
ajantha-bhat commented on code in PR #9455:
URL: https://github.com/apache/iceberg/pull/9455#discussion_r1447415016
##
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/SparkTable.java:
##
@@ -405,15 +407,25 @@ public boolean equals(Object other) {
return fal
wooyeong commented on code in PR #9455:
URL: https://github.com/apache/iceberg/pull/9455#discussion_r1447412492
##
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/SparkTable.java:
##
@@ -405,15 +407,25 @@ public boolean equals(Object other) {
return false;
ajantha-bhat commented on code in PR #9454:
URL: https://github.com/apache/iceberg/pull/9454#discussion_r1447410277
##
spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestRewriteDeleteFilesAction.java:
##
@@ -0,0 +1,400 @@
+/*
+ * Licensed to the Ap
pvary commented on code in PR #8907:
URL: https://github.com/apache/iceberg/pull/8907#discussion_r1447409989
##
hive-metastore/src/main/java/org/apache/iceberg/hive/HiveTableOperations.java:
##
@@ -142,9 +143,15 @@ public FileIO io() {
@Override
protected void doRefresh()
gjacoby126 commented on code in PR #9452:
URL: https://github.com/apache/iceberg/pull/9452#discussion_r1447388713
##
flink/v1.18/flink/src/main/java/org/apache/iceberg/flink/util/FlinkVersionDetector.java:
##
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (A
ajantha-bhat commented on code in PR #9455:
URL: https://github.com/apache/iceberg/pull/9455#discussion_r1447345339
##
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/SparkTable.java:
##
@@ -405,15 +407,25 @@ public boolean equals(Object other) {
return fal
wooyeong opened a new pull request, #9455:
URL: https://github.com/apache/iceberg/pull/9455
Issue: #9450
I've changed SparkTable to use name and effective snapshot id for checking
equality.
With the previous code I mentioned in #9450,
```diff
-return icebergTable.nam
JanKaul commented on issue #159:
URL: https://github.com/apache/iceberg-rust/issues/159#issuecomment-1884586498
Following @Fokko's reasoning, Decimal is comparable to TimestampZ, where the
timezone is stored in the type. Similarly, the scale of the Decimal is stored in
the type.
I thin
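The representation being discussed — an unscaled integer value (e.g. an i128 in Rust) with precision and scale carried by the type, analogous to timestamptz carrying its zone — can be sketched as follows. Illustrative Python, not iceberg-rust's actual code; the helper names are hypothetical:

```python
from decimal import Decimal

# A decimal value is stored as an unscaled integer; the scale lives in
# the type, so 12.345 with scale 3 is stored as the integer 12345.
def to_unscaled(value: Decimal, scale: int) -> int:
    return int(value.scaleb(scale))

def from_unscaled(unscaled: int, scale: int) -> Decimal:
    # Reapply the type-level scale to recover the logical value.
    return Decimal(unscaled).scaleb(-scale)
```

Only when rendering the value (e.g. in error messages) does the scale need to be consulted; comparisons between values of the same type can work on the unscaled integers directly.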
szehon-ho commented on PR #8579:
URL: https://github.com/apache/iceberg/pull/8579#issuecomment-1884585967
> WOW, big congrats on the arrival of your newborn.
Thank you so much!
> I will resume this work support once I finished my internal project, which
I'm leveraging bucketi
javrasya commented on issue #9410:
URL: https://github.com/apache/iceberg/issues/9410#issuecomment-1884525429
I couldn't do this, @pvary; the split is far ahead, and some time is needed
to get there in the application. My local environment is not able to run the
app on real data and hit this
ajantha-bhat commented on issue #9433:
URL: https://github.com/apache/iceberg/issues/9433#issuecomment-1884524095
@nk1506: Thanks for reporting. I think we should have this. Otherwise, view
metadata files will never be deleted from storage.
I will assign this issue to you.
ajantha-bhat commented on code in PR #9419:
URL: https://github.com/apache/iceberg/pull/9419#discussion_r1447134418
##
api/src/main/java/org/apache/iceberg/util/CharSequenceWrapper.java:
##
@@ -44,6 +44,8 @@ public CharSequence get() {
}
@Override
+ // Suppressed errorp
aokolnychyi commented on code in PR #9419:
URL: https://github.com/apache/iceberg/pull/9419#discussion_r1447076282
##
api/src/main/java/org/apache/iceberg/util/CharSequenceWrapper.java:
##
@@ -44,6 +44,8 @@ public CharSequence get() {
}
@Override
+ // Suppressed errorpr
ggershinsky commented on code in PR #9453:
URL: https://github.com/apache/iceberg/pull/9453#discussion_r1447008929
##
core/src/main/java/org/apache/iceberg/encryption/AesGcmOutputStream.java:
##
@@ -95,6 +102,10 @@ public void write(byte[] b, int off, int len) throws
IOExceptio