rdblue commented on code in PR #6168:
URL: https://github.com/apache/iceberg/pull/6168#discussion_r1039013799
##
core/src/main/java/org/apache/iceberg/rest/RESTSerializers.java:
##
@@ -87,7 +88,41 @@ public static void registerAll(ObjectMapper mapper) {
mapper.registerModul
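The truncated hunk above is from `RESTSerializers.registerAll(ObjectMapper)`, which wires custom Jackson (de)serializers onto the shared mapper. As a rough sketch of that registration pattern only (the `Example` type and `ExampleSerializer` below are hypothetical stand-ins, not the classes this PR touches):

```java
import java.io.IOException;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializerProvider;
import com.fasterxml.jackson.databind.module.SimpleModule;
import com.fasterxml.jackson.databind.ser.std.StdSerializer;

// Hypothetical payload type used only for this sketch.
class Example {
  final String name;
  Example(String name) { this.name = name; }
}

// Serializer that writes the payload as {"name": "..."}.
class ExampleSerializer extends StdSerializer<Example> {
  ExampleSerializer() { super(Example.class); }

  @Override
  public void serialize(Example value, JsonGenerator gen, SerializerProvider provider) throws IOException {
    gen.writeStartObject();
    gen.writeStringField("name", value.name);
    gen.writeEndObject();
  }
}

class RegistrationSketch {
  // Mirrors the shape of registerAll: bundle serializers into a module, register it once.
  static void registerAll(ObjectMapper mapper) {
    SimpleModule module = new SimpleModule();
    module.addSerializer(Example.class, new ExampleSerializer());
    mapper.registerModule(module);
  }
}
```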
Fokko merged PR #6346:
URL: https://github.com/apache/iceberg/pull/6346
stevenzwu commented on code in PR #6299:
URL: https://github.com/apache/iceberg/pull/6299#discussion_r1039049527
##
flink/v1.16/flink/src/main/java/org/apache/iceberg/flink/source/enumerator/EnumerationHistory.java:
##
@@ -0,0 +1,58 @@
+/*
+ * Licensed to the Apache Software Fou
stevenzwu merged PR #6305:
URL: https://github.com/apache/iceberg/pull/6305
stevenzwu commented on PR #6305:
URL: https://github.com/apache/iceberg/pull/6305#issuecomment-1336556429
Thanks @smallx for the contribution and @pvary for the review.
rdblue opened a new pull request, #6357:
URL: https://github.com/apache/iceberg/pull/6357
This removes the use of `TableOperations` from metadata tables. Where
`TableMetadata` is needed, the operations are fetched from the `Table` instance
instead of passing the ops explicitly. This is in prepar
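A minimal sketch of the direction described above, assuming the wrapped table is a `BaseTable` (the helper name is hypothetical): instead of threading a `TableOperations` argument through metadata-table constructors, the current `TableMetadata` is reached through the `Table` itself.

```java
import org.apache.iceberg.BaseTable;
import org.apache.iceberg.Table;
import org.apache.iceberg.TableMetadata;
import org.apache.iceberg.TableOperations;

class MetadataAccessSketch {
  // Fetch the current TableMetadata from the Table instance rather than
  // accepting an explicit TableOperations parameter.
  static TableMetadata currentMetadata(Table table) {
    TableOperations ops = ((BaseTable) table).operations();
    return ops.current();
  }
}
```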
amogh-jahagirdar commented on code in PR #6335:
URL: https://github.com/apache/iceberg/pull/6335#discussion_r1039115360
##
core/src/main/java/org/apache/iceberg/MergingSnapshotProducer.java:
##
@@ -948,13 +949,10 @@ private List newDeleteFilesAsManifests() {
(specId,
gong commented on PR #2680:
URL: https://github.com/apache/iceberg/pull/2680#issuecomment-1336715567
>
@openinx Hello, would OOM not be triggered if we use mysql-cdc 2.0, since
mysql-cdc 2.0 checkpoints at the chunk level?
gong commented on PR #2680:
URL: https://github.com/apache/iceberg/pull/2680#issuecomment-1336717074
> > For example, we have a TiDB table that has six hundred million records.
If we use Flink streaming mode, it will take too much time.
>
> @coolderli Are you using the latest flink cdc
hililiwei commented on code in PR #5967:
URL: https://github.com/apache/iceberg/pull/5967#discussion_r1039148512
##
docs/flink-getting-started.md:
##
@@ -683,7 +683,47 @@ env.execute("Test Iceberg DataStream");
OVERWRITE and UPSERT can't be set together. In UPSERT mode, if the
hililiwei commented on code in PR #5967:
URL: https://github.com/apache/iceberg/pull/5967#discussion_r1039148936
##
docs/flink-getting-started.md:
##
@@ -683,7 +683,47 @@ env.execute("Test Iceberg DataStream");
OVERWRITE and UPSERT can't be set together. In UPSERT mode, if the
hililiwei commented on code in PR #5967:
URL: https://github.com/apache/iceberg/pull/5967#discussion_r1039152100
##
flink/v1.15/flink/src/main/java/org/apache/iceberg/flink/source/ScanContext.java:
##
@@ -427,25 +386,25 @@ public Builder maxPlanningSnapshotCount(int
newMaxPlann
hililiwei commented on code in PR #5967:
URL: https://github.com/apache/iceberg/pull/5967#discussion_r1039153155
##
flink/v1.15/flink/src/main/java/org/apache/iceberg/flink/source/IcebergSource.java:
##
@@ -335,8 +365,30 @@ public Builder exposeLocality(boolean
newExposeLocalit
hililiwei commented on code in PR #5967:
URL: https://github.com/apache/iceberg/pull/5967#discussion_r1039153740
##
flink/v1.15/flink/src/main/java/org/apache/iceberg/flink/source/IcebergSource.java:
##
@@ -335,8 +365,30 @@ public Builder exposeLocality(boolean
newExposeLocalit
hililiwei commented on code in PR #5967:
URL: https://github.com/apache/iceberg/pull/5967#discussion_r1039158054
##
flink/v1.15/flink/src/main/java/org/apache/iceberg/flink/source/IcebergSource.java:
##
@@ -349,6 +401,8 @@ public IcebergSource build() {
throw new Unchec
hililiwei commented on code in PR #5967:
URL: https://github.com/apache/iceberg/pull/5967#discussion_r1039172223
##
docs/flink-getting-started.md:
##
@@ -683,7 +683,47 @@ env.execute("Test Iceberg DataStream");
OVERWRITE and UPSERT can't be set together. In UPSERT mode, if the
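The doc text under review covers the Flink sink's write modes. As a hedged sketch of how those options surface in the DataStream API (the input stream, `TableLoader`, and the `"id"` equality column are placeholders, not values from this PR):

```java
import java.util.Arrays;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.table.data.RowData;
import org.apache.iceberg.flink.TableLoader;
import org.apache.iceberg.flink.sink.FlinkSink;

class UpsertSinkSketch {
  static void addUpsertSink(DataStream<RowData> input, TableLoader tableLoader) {
    FlinkSink.forRowData(input)
        .tableLoader(tableLoader)
        .upsert(true)                               // enable UPSERT mode
        .equalityFieldColumns(Arrays.asList("id"))  // upsert writes need equality fields
        // .overwrite(true) can't be set together with upsert(true)
        .append();
  }
}
```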
aajisaka opened a new pull request, #6358:
URL: https://github.com/apache/iceberg/pull/6358
### Background
Glue optimistic locking is automatically enabled if the aws-java-sdk version is
>= 2.17.131; however, users sometimes cannot check which version of aws-java-sdk
is actually used in t
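Since the background notes that users often cannot tell which aws-java-sdk version is actually on the classpath, one rough, generic way to inspect it at runtime is to read the manifest of a class shipped by the SDK. This is plain Java reflection, not part of PR #6358, and it may print `null` if the jar manifest lacks an `Implementation-Version` entry.

```java
import software.amazon.awssdk.services.glue.GlueClient;

class SdkVersionCheck {
  public static void main(String[] args) {
    // The Implementation-Version of the jar that provides GlueClient usually
    // matches the aws-java-sdk v2 version in use.
    String version = GlueClient.class.getPackage().getImplementationVersion();
    System.out.println("aws-java-sdk (glue) version: " + version);
  }
}
```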
kowshikdremio commented on issue #6263:
URL: https://github.com/apache/iceberg/issues/6263#issuecomment-1336829158
Hi,
I added the jars below to the Hive lib directory and everything works fine
now.
1. iceberg-hive-runtime-1.2.0-SNAPSHOT.jar (got it by running ./gradlew
build -x te
nastra closed issue #6263: Unable to create Iceberg Table using HIVE
URL: https://github.com/apache/iceberg/issues/6263
nastra commented on code in PR #6168:
URL: https://github.com/apache/iceberg/pull/6168#discussion_r1039264542
##
core/src/main/java/org/apache/iceberg/rest/RESTSerializers.java:
##
@@ -87,7 +88,41 @@ public static void registerAll(ObjectMapper mapper) {
mapper.registerModul