nastra merged PR #12879:
URL: https://github.com/apache/iceberg/pull/12879
sdd commented on code in PR #1017:
URL: https://github.com/apache/iceberg-rust/pull/1017#discussion_r2179246166
##
crates/iceberg/src/arrow/caching_delete_file_loader.rs:
##
@@ -308,28 +319,231 @@ impl CachingDeleteFileLoader {
Ok(result)
}
-/// Parses record
nastra commented on code in PR #13245:
URL: https://github.com/apache/iceberg/pull/13245#discussion_r2179246847
##
spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/actions/TestRewriteDataFilesAction.java:
##
@@ -538,6 +542,75 @@ public void testBinPackWithDeletes() throws
bcho0023 closed issue #474: .avro file not found
URL: https://github.com/apache/iceberg-go/issues/474
sdd commented on code in PR #1017:
URL: https://github.com/apache/iceberg-rust/pull/1017#discussion_r2179239103
##
crates/iceberg/src/arrow/caching_delete_file_loader.rs:
##
@@ -308,28 +319,231 @@ impl CachingDeleteFileLoader {
Ok(result)
}
-/// Parses record
nastra commented on code in PR #13245:
URL: https://github.com/apache/iceberg/pull/13245#discussion_r2179227773
##
spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/actions/RewriteDataFilesSparkAction.java:
##
@@ -172,14 +172,16 @@ public RewriteDataFiles.Result execute()
ajantha-bhat commented on PR #13425:
URL: https://github.com/apache/iceberg/pull/13425#issuecomment-3026618301
Thanks @pvary and @stevenzwu for the response. I will try out your approach and get back on this if I run into any problems.
pvary commented on PR #13441:
URL: https://github.com/apache/iceberg/pull/13441#issuecomment-3026617699
Merged to main.
Thanks for the backport @aiborodin!
pvary merged PR #13441:
URL: https://github.com/apache/iceberg/pull/13441
sdd commented on code in PR #1017:
URL: https://github.com/apache/iceberg-rust/pull/1017#discussion_r2179197908
##
crates/iceberg/src/arrow/caching_delete_file_loader.rs:
##
@@ -308,28 +319,231 @@ impl CachingDeleteFileLoader {
Ok(result)
}
-/// Parses record
nareshbab commented on issue #13431:
URL: https://github.com/apache/iceberg/issues/13431#issuecomment-3026603432
Update: figured out the behaviour here. Spark commits the changes after the end of a batch, so during the execution of a specific batch, though data is written in
sdd commented on code in PR #1017:
URL: https://github.com/apache/iceberg-rust/pull/1017#discussion_r2179211848
##
crates/iceberg/src/arrow/caching_delete_file_loader.rs:
##
@@ -308,28 +319,231 @@ impl CachingDeleteFileLoader {
Ok(result)
}
-/// Parses record
stevenzwu commented on code in PR #11868:
URL: https://github.com/apache/iceberg/pull/11868#discussion_r2179187784
##
core/src/main/java/org/apache/iceberg/PartitionSpecParser.java:
##
@@ -68,7 +68,7 @@ public static String toJson(UnboundPartitionSpec spec,
boolean pretty) {
stevenzwu commented on code in PR #13245:
URL: https://github.com/apache/iceberg/pull/13245#discussion_r2179196428
##
api/src/main/java/org/apache/iceberg/OverwriteFiles.java:
##
@@ -75,6 +77,19 @@ public interface OverwriteFiles extends
SnapshotUpdate {
*/
OverwriteFile
sdd commented on code in PR #1017:
URL: https://github.com/apache/iceberg-rust/pull/1017#discussion_r2179192924
##
crates/iceberg/src/arrow/caching_delete_file_loader.rs:
##
@@ -308,28 +319,231 @@ impl CachingDeleteFileLoader {
Ok(result)
}
-/// Parses record
nastra commented on code in PR #13245:
URL: https://github.com/apache/iceberg/pull/13245#discussion_r2179164576
##
api/src/main/java/org/apache/iceberg/OverwriteFiles.java:
##
@@ -75,6 +77,19 @@ public interface OverwriteFiles extends
SnapshotUpdate {
*/
OverwriteFiles d
bcho0023 opened a new issue, #474:
URL: https://github.com/apache/iceberg-go/issues/474
### Question
I have followed the example from transaction-test.go to create and populate
data. I can see that .json and .crc files are created in the metadata for the
namespace. However, when tryi
amogh-jahagirdar commented on code in PR #12450:
URL: https://github.com/apache/iceberg/pull/12450#discussion_r2179062769
##
api/src/main/java/org/apache/iceberg/actions/ComputePartitionStats.java:
##
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) unde
CTTY commented on code in PR #1484:
URL: https://github.com/apache/iceberg-rust/pull/1484#discussion_r2178995973
##
crates/iceberg/src/transaction/mod.rs:
##
@@ -152,13 +159,59 @@ impl Transaction {
}
/// Commit transaction.
-pub async fn commit(mut self, catalog
zhztheplayer commented on PR #13433:
URL: https://github.com/apache/iceberg/pull/13433#issuecomment-3026256229
@pvary
> The DeleteFilter is pushed down in the Parquet vectorized reader case.
Yes, it seems so, but based on what I can see, the filter is still processed wit
manuzhang commented on issue #13438:
URL: https://github.com/apache/iceberg/issues/13438#issuecomment-3026224678
Maybe the 2nd approach is similar to the Delta integration?
manuzhang commented on issue #11709:
URL: https://github.com/apache/iceberg/issues/11709#issuecomment-3026222886
Fixed by https://github.com/apache/iceberg/pull/13435
manuzhang closed issue #11709: Error with IDENTIFIER FIELDS and merge-on-read
Mode in Iceberg
URL: https://github.com/apache/iceberg/issues/11709
Guosmilesmile commented on code in PR #13429:
URL: https://github.com/apache/iceberg/pull/13429#discussion_r2178891218
##
core/src/test/java/org/apache/iceberg/util/TestFileSystemWalker.java:
##
@@ -135,4 +158,33 @@ public void testListDirRecursivelyWithFileIO() {
assertTha
wgtmac commented on code in PR #127:
URL: https://github.com/apache/iceberg-cpp/pull/127#discussion_r2178912536
##
src/iceberg/avro/avro_reader.cc:
##
@@ -96,11 +99,17 @@ class AvroBatchReader::Impl {
// Validate field ids in the file schema.
HasIdVisitor has_id_visito
Guosmilesmile commented on code in PR #13429:
URL: https://github.com/apache/iceberg/pull/13429#discussion_r2178891587
##
core/src/main/java/org/apache/iceberg/actions/FileURI.java:
##
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or mo
pan3793 commented on code in PR #13106:
URL: https://github.com/apache/iceberg/pull/13106#discussion_r2178903418
##
spark/v4.0/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestWriteAborts.java:
##
@@ -121,7 +121,7 @@ public void testBatchAppend() throws IOE
MisterRaindrop commented on code in PR #127:
URL: https://github.com/apache/iceberg-cpp/pull/127#discussion_r2178882575
##
src/iceberg/avro/avro_reader.cc:
##
@@ -96,11 +99,17 @@ class AvroBatchReader::Impl {
// Validate field ids in the file schema.
HasIdVisitor has_i
Guosmilesmile commented on code in PR #13302:
URL: https://github.com/apache/iceberg/pull/13302#discussion_r2178857945
##
flink/v2.0/flink/src/main/java/org/apache/iceberg/flink/maintenance/operator/OrphanFilesDetector.java:
##
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache So
Guosmilesmile commented on code in PR #13302:
URL: https://github.com/apache/iceberg/pull/13302#discussion_r2178848631
##
flink/v2.0/flink/src/main/java/org/apache/iceberg/flink/maintenance/api/DeleteOrphanFiles.java:
##
@@ -0,0 +1,358 @@
+/*
+ * Licensed to the Apache Software
aiborodin opened a new pull request, #13441:
URL: https://github.com/apache/iceberg/pull/13441
This PR backports changes in https://github.com/apache/iceberg/pull/13382 to
Flink 1.19/1.20.
liurenjie1024 commented on PR #1480:
URL: https://github.com/apache/iceberg-rust/pull/1480#issuecomment-3026102809
Also, we need to remove `iceberg-catalog-memory` from the GitHub publish workflow.
aiborodin commented on PR #13441:
URL: https://github.com/apache/iceberg/pull/13441#issuecomment-3026091085
The backport applied cleanly.
aiborodin commented on PR #13382:
URL: https://github.com/apache/iceberg/pull/13382#issuecomment-3026090122
Thank you for merging, @pvary! Thank you @ben-manes for the benchmarking analysis and @mxm for the review.
I raised a PR to backport changes to Flink 1.19/1.20:
https://github.co
wgtmac commented on code in PR #12584:
URL: https://github.com/apache/iceberg/pull/12584#discussion_r2178827315
##
open-api/rest-catalog-open-api.yaml:
##
@@ -4245,8 +4247,6 @@ components:
table-uuid:
type: string
format: uuid
-metadata:
R
sungwy commented on code in PR #2154:
URL: https://github.com/apache/iceberg-python/pull/2154#discussion_r2178796429
##
pyiceberg/view/metadata.py:
##
@@ -0,0 +1,89 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See t
amogh-jahagirdar merged PR #13435:
URL: https://github.com/apache/iceberg/pull/13435
wgtmac commented on code in PR #127:
URL: https://github.com/apache/iceberg-cpp/pull/127#discussion_r2178802364
##
src/iceberg/avro/avro_reader.cc:
##
@@ -96,11 +99,17 @@ class AvroBatchReader::Impl {
// Validate field ids in the file schema.
HasIdVisitor has_id_visito
amogh-jahagirdar closed issue #11341: ERROR when executing UPDATE/DELETE
queries in Iceberg 1.6.0: "Cannot add fieldId 1 as an identifier field"
URL: https://github.com/apache/iceberg/issues/11341
MisterRaindrop commented on code in PR #127:
URL: https://github.com/apache/iceberg-cpp/pull/127#discussion_r2178798599
##
src/iceberg/avro/avro_reader.cc:
##
@@ -96,11 +99,17 @@ class AvroBatchReader::Impl {
// Validate field ids in the file schema.
HasIdVisitor has_i
amogh-jahagirdar commented on PR #13435:
URL: https://github.com/apache/iceberg/pull/13435#issuecomment-3026012351
Thanks @szehon-ho, @manuzhang, and @dramaticlly for reviewing. I'll go ahead and merge.
sungwy commented on code in PR #2154:
URL: https://github.com/apache/iceberg-python/pull/2154#discussion_r2178788343
##
tests/conftest.py:
##
Review Comment:
I second @jayceslesar's suggestion to add some integration tests for the new `create_view` function on the
sungwy commented on issue #2156:
URL:
https://github.com/apache/iceberg-python/issues/2156#issuecomment-3025963959
Thanks for volunteering @rambleraptor - I've assigned this ticket to you
amogh-jahagirdar commented on code in PR #11868:
URL: https://github.com/apache/iceberg/pull/11868#discussion_r2178771819
##
core/src/main/java/org/apache/iceberg/PartitionSpecParser.java:
##
@@ -68,7 +68,7 @@ public static String toJson(UnboundPartitionSpec spec,
boolean prett
amogh-jahagirdar commented on code in PR #11868:
URL: https://github.com/apache/iceberg/pull/11868#discussion_r2178771947
##
core/src/main/java/org/apache/iceberg/PartitionSpecParser.java:
##
@@ -68,7 +68,7 @@ public static String toJson(UnboundPartitionSpec spec,
boolean prett
github-actions[bot] commented on issue #11898:
URL: https://github.com/apache/iceberg/issues/11898#issuecomment-3025902016
This issue has been automatically marked as stale because it has been open
for 180 days with no activity. It will be closed in next 14 days if no further
activity occur
amogh-jahagirdar commented on code in PR #13435:
URL: https://github.com/apache/iceberg/pull/13435#discussion_r2178750497
##
spark/v4.0/spark/src/main/java/org/apache/iceberg/spark/source/SparkScanBuilder.java:
##
@@ -381,7 +381,7 @@ private Schema
calculateMetadataSchema(List
github-actions[bot] commented on issue #1463:
URL:
https://github.com/apache/iceberg-python/issues/1463#issuecomment-3025906012
This issue has been automatically marked as stale because it has been open
for 180 days with no activity. It will be closed in next 14 days if no further
activity
github-actions[bot] commented on issue #1477:
URL:
https://github.com/apache/iceberg-python/issues/1477#issuecomment-3025905980
This issue has been automatically marked as stale because it has been open
for 180 days with no activity. It will be closed in next 14 days if no further
activity
stevenzwu merged PR #13412:
URL: https://github.com/apache/iceberg/pull/13412
github-actions[bot] commented on issue #11840:
URL: https://github.com/apache/iceberg/issues/11840#issuecomment-3025901884
This issue has been automatically marked as stale because it has been open
for 180 days with no activity. It will be closed in next 14 days if no further
activity occur
github-actions[bot] commented on issue #11891:
URL: https://github.com/apache/iceberg/issues/11891#issuecomment-3025901968
This issue has been automatically marked as stale because it has been open
for 180 days with no activity. It will be closed in next 14 days if no further
activity occur
github-actions[bot] commented on issue #11884:
URL: https://github.com/apache/iceberg/issues/11884#issuecomment-3025901936
This issue has been automatically marked as stale because it has been open
for 180 days with no activity. It will be closed in next 14 days if no further
activity occur
szehon-ho commented on code in PR #13435:
URL: https://github.com/apache/iceberg/pull/13435#discussion_r2178746326
##
spark/v4.0/spark/src/main/java/org/apache/iceberg/spark/source/SparkScanBuilder.java:
##
@@ -381,7 +381,7 @@ private Schema
calculateMetadataSchema(List metaCol
szehon-ho commented on code in PR #13435:
URL: https://github.com/apache/iceberg/pull/13435#discussion_r2178699297
##
spark/v4.0/spark/src/main/java/org/apache/iceberg/spark/source/SparkScanBuilder.java:
##
@@ -381,7 +381,7 @@ private Schema
calculateMetadataSchema(List metaCol
amogh-jahagirdar commented on code in PR #13435:
URL: https://github.com/apache/iceberg/pull/13435#discussion_r2178669292
##
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/SparkScanBuilder.java:
##
@@ -381,7 +381,7 @@ private Schema
calculateMetadataSchema(List
jayceslesar commented on issue #2166:
URL:
https://github.com/apache/iceberg-python/issues/2166#issuecomment-3025734976
I am unable to reproduce with the following on `v0.9.0rc3` or on `main` --
can you provide some more details if possible?
Let me know if there are any modifications
zeroshade commented on code in PR #467:
URL: https://github.com/apache/iceberg-go/pull/467#discussion_r2178445663
##
table/update_spec.go:
##
@@ -0,0 +1,367 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements. See the NOT
jayceslesar commented on code in PR #2158:
URL: https://github.com/apache/iceberg-python/pull/2158#discussion_r2178615632
##
pyiceberg/catalog/rest/__init__.py:
##
@@ -584,15 +587,47 @@ def register_table(self, identifier: Union[str,
Identifier], metadata_location:
ret
stevenzwu commented on PR #13425:
URL: https://github.com/apache/iceberg/pull/13425#issuecomment-3025543248
I agree with @pvary on the reasoning and the comparison with metadata and manifest files.
A new optional `DV_COUNT` field is probably good, which will also result in
simpler code.
stevie9868 commented on issue #1818:
URL:
https://github.com/apache/iceberg-python/issues/1818#issuecomment-3025659394
@rambleraptor
Yeah, I am planning to work on it.
There is ongoing work to build the DeleteFileIndex, which the DV work
might depend on.
geruh commented on PR #13347:
URL: https://github.com/apache/iceberg/pull/13347#issuecomment-3025602329
Yes, that makes sense; then we can always have a test for both paths for each release.
rambleraptor commented on PR #2149:
URL: https://github.com/apache/iceberg-python/pull/2149#issuecomment-3025554445
@Fokko @kevinjqliu mind taking a look when you can? thanks!
rambleraptor commented on PR #2154:
URL: https://github.com/apache/iceberg-python/pull/2154#issuecomment-3025554228
@Fokko @kevinjqliu mind taking a look when you can? thanks!
adutra commented on PR #12562:
URL: https://github.com/apache/iceberg/pull/12562#issuecomment-3025167554
> Unfortunately, our revapi requirements are stricter for core and we can't just assume other people haven't built against it. We need to go through a deprecation cycle, so this isn't a
stevenzwu commented on code in PR #13245:
URL: https://github.com/apache/iceberg/pull/13245#discussion_r2178282829
##
core/src/main/java/org/apache/iceberg/actions/RewriteFileGroup.java:
##
@@ -59,6 +62,12 @@ public Set rewrittenFiles() {
.collect(Collectors.toCollectio
amogh-jahagirdar opened a new pull request, #13440:
URL: https://github.com/apache/iceberg/pull/13440
Backport of tests from #13070 to 3.4
stevenzwu merged PR #13440:
URL: https://github.com/apache/iceberg/pull/13440
dentiny commented on PR #1484:
URL: https://github.com/apache/iceberg-rust/pull/1484#issuecomment-3025236950
> Hi @dentiny, this is a good question :) I'm thinking of using a mock
catalog, but haven't figured out exactly how
I use [`mockall`](https://docs.rs/mockall/latest/mockall/) o
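For readers following this thread, a minimal sketch of the mockall pattern being discussed, using a hypothetical `Catalog` trait rather than iceberg-rust's real one: the mock injects a "retriable" error on the first call and succeeds on the second, so a test can force the retry path.
```rust
// Sketch only: hypothetical Catalog trait, not iceberg-rust's actual API.
use mockall::automock;

#[automock]
trait Catalog {
    fn update_table(&self, table: &str) -> Result<(), String>;
}

fn main() {
    let mut catalog = MockCatalog::new();
    let mut calls = 0;
    // Fail the first call with a "retriable" error, succeed on the second,
    // so code under test is driven through exactly one retry.
    catalog
        .expect_update_table()
        .times(2)
        .returning(move |_| {
            calls += 1;
            if calls == 1 {
                Err("retriable: commit conflict".to_string())
            } else {
                Ok(())
            }
        });

    assert!(catalog.update_table("t1").is_err());
    assert!(catalog.update_table("t1").is_ok());
}
```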
stevenzwu commented on code in PR #13245:
URL: https://github.com/apache/iceberg/pull/13245#discussion_r2178316080
##
spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/source/SparkWrite.java:
##
@@ -377,11 +375,23 @@ private CopyOnWriteOperation(SparkCopyOnWriteScan scan,
szehon-ho commented on code in PR #13106:
URL: https://github.com/apache/iceberg/pull/13106#discussion_r2178317419
##
spark/v4.0/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestWriteAborts.java:
##
@@ -121,7 +121,7 @@ public void testBatchAppend() throws I
Fokko commented on code in PR #11868:
URL: https://github.com/apache/iceberg/pull/11868#discussion_r2178374579
##
api/src/main/java/org/apache/iceberg/PartitionSpec.java:
##
@@ -132,6 +132,12 @@ public StructType partitionType() {
for (PartitionField field : fields) {
CTTY commented on PR #1484:
URL: https://github.com/apache/iceberg-rust/pull/1484#issuecomment-3025187150
> curious how do we plan to test the retry? i.e. is it possible to use mock
catalog and inject retriable / non-retriable error?
Hi @dentiny, this is a good question :) I'm thinkin
stubz151 commented on PR #13347:
URL: https://github.com/apache/iceberg/pull/13347#issuecomment-3025210721
That makes sense, @geruh.
I was thinking I could set parameters at the start of the class and just run with both on/off. Are we happy with that?
amogh-jahagirdar commented on code in PR #13440:
URL: https://github.com/apache/iceberg/pull/13440#discussion_r2178356321
##
spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/data/TestSparkAvroReader.java:
##
@@ -39,34 +41,51 @@ protected void writeAndValidate(Schema schem
stevenzwu commented on code in PR #13245:
URL: https://github.com/apache/iceberg/pull/13245#discussion_r2178300768
##
spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/actions/RewriteDataFilesSparkAction.java:
##
@@ -172,14 +172,16 @@ public RewriteDataFiles.Result execute
stevenzwu commented on code in PR #11868:
URL: https://github.com/apache/iceberg/pull/11868#discussion_r2178355208
##
api/src/main/java/org/apache/iceberg/PartitionSpec.java:
##
@@ -132,6 +132,12 @@ public StructType partitionType() {
for (PartitionField field : field
RussellSpitzer commented on PR #13352:
URL: https://github.com/apache/iceberg/pull/13352#issuecomment-3025045258
LGTM. @dennishuo has now made me wary of 503s and anonymous 409s (not thrown by the IRC) potentially causing issues, but I think we can address that in a follow-up if we get consen
geruh commented on PR #13347:
URL: https://github.com/apache/iceberg/pull/13347#issuecomment-3025106350
Thanks for this, @SanjayMarreddi! I was able to run and validate the tests
successfully. However, I'm a little concerned with the current approach, unless
I'm missing something. Right now
CTTY opened a new pull request, #1484:
URL: https://github.com/apache/iceberg-rust/pull/1484
## Which issue does this PR close?
- Closes #1387
## What changes are included in this PR?
- Added retry logic using `backon`
## Are these changes tested?
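For context on the `backon`-based retry mentioned above, a minimal, self-contained sketch of the crate's usual pattern (exponential backoff plus an error-classification predicate). The `CommitError`/`commit_once` names are illustrative and not taken from the PR, which may structure things differently; assumes `backon` and `tokio` as dependencies.
```rust
// Sketch only: illustrative names, not the PR's actual code.
use backon::{ExponentialBuilder, Retryable};

#[derive(Debug)]
enum CommitError {
    Retryable(String),
    Fatal(String),
}

// Hypothetical single-attempt commit standing in for one catalog round trip.
async fn commit_once() -> Result<(), CommitError> {
    Err(CommitError::Retryable("commit conflict".to_string()))
}

#[tokio::main]
async fn main() {
    // Retry with exponential backoff, but only for errors classified as retryable.
    let result = commit_once
        .retry(ExponentialBuilder::default().with_max_times(3))
        .when(|e| matches!(e, CommitError::Retryable(_)))
        .await;
    println!("final result: {result:?}");
}
```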
wordiodev closed pull request #13439: Poc 3 1
URL: https://github.com/apache/iceberg/pull/13439
Fokko commented on code in PR #11868:
URL: https://github.com/apache/iceberg/pull/11868#discussion_r2178303855
##
api/src/main/java/org/apache/iceberg/PartitionSpec.java:
##
@@ -132,6 +132,12 @@ public StructType partitionType() {
for (PartitionField field : fields) {
Fokko commented on code in PR #11868:
URL: https://github.com/apache/iceberg/pull/11868#discussion_r2178299964
##
core/src/test/java/org/apache/iceberg/TestPartitioning.java:
##
@@ -183,6 +183,27 @@ public void
testPartitionTypeWithIncompatibleSpecEvolution() {
.hasMes
danielcweeks commented on PR #12562:
URL: https://github.com/apache/iceberg/pull/12562#issuecomment-3025084377
> Unfortunately that requires some revapi exceptions. I think they are
acceptable because it's unlikely that anybody, apart from me :-), is using
`AuthSessionCache`; but I could
RussellSpitzer commented on code in PR #11868:
URL: https://github.com/apache/iceberg/pull/11868#discussion_r2178279289
##
api/src/main/java/org/apache/iceberg/PartitionSpec.java:
##
@@ -132,6 +132,12 @@ public StructType partitionType() {
for (PartitionField field :
RussellSpitzer commented on code in PR #11868:
URL: https://github.com/apache/iceberg/pull/11868#discussion_r2178273800
##
core/src/test/java/org/apache/iceberg/TestPartitioning.java:
##
@@ -183,6 +183,27 @@ public void
testPartitionTypeWithIncompatibleSpecEvolution() {
RussellSpitzer merged PR #13352:
URL: https://github.com/apache/iceberg/pull/13352
RussellSpitzer commented on PR #13352:
URL: https://github.com/apache/iceberg/pull/13352#issuecomment-3025048383
Thanks @singhpk234 for reverting and fixing again! Thanks @amogh-jahagirdar,
@nastra, @dennishuo, and @rdblue for the discussion and review.
stevenzwu commented on code in PR #13245:
URL: https://github.com/apache/iceberg/pull/13245#discussion_r2178226485
##
api/src/main/java/org/apache/iceberg/OverwriteFiles.java:
##
@@ -75,6 +77,19 @@ public interface OverwriteFiles extends
SnapshotUpdate {
*/
OverwriteFile
stevenzwu commented on PR #13411:
URL: https://github.com/apache/iceberg/pull/13411#issuecomment-3024977420
Thanks @RussellSpitzer for the change and @singhpk234 for the review.