xiaoyang-sde commented on PR #110:
URL: https://github.com/apache/iceberg-rust/pull/110#issuecomment-1833246705
cc @Fokko
nk1506 commented on code in PR #8907:
URL: https://github.com/apache/iceberg/pull/8907#discussion_r1410207565
##
hive-metastore/src/main/java/org/apache/iceberg/hive/HiveViewOperations.java:
##
@@ -0,0 +1,315 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
wypoon commented on PR #6799:
URL: https://github.com/apache/iceberg/pull/6799#issuecomment-1833208488
@nastra @aokolnychyi I have rebased on main and resolved the conflicts with
the `RewriteManifestsSparkAction` refactoring. As I mentioned before, I have
introduced `ManifestWriter.Options`
nk1506 opened a new pull request, #9184:
URL: https://github.com/apache/iceberg/pull/9184
With
[dropTableData](https://github.com/apache/iceberg/blob/d247b20f166ccb0b92443d4b05330b1e0d9c5d49/core/src/main/java/org/apache/iceberg/CatalogUtil.java#L86)
we plan to delete orphan files as many a
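For context, the description refers to `CatalogUtil.dropTableData`. Below is a minimal, hedged Java sketch of how a catalog's purge path typically invokes it; the `purge` helper and its surroundings are illustrative, not code from this PR.

```java
// Illustrative sketch only: how a catalog's dropTable(..., purge = true) path
// typically hands the last known table metadata to CatalogUtil.dropTableData,
// which best-effort deletes the data, manifest, and metadata files it references.
import org.apache.iceberg.CatalogUtil;
import org.apache.iceberg.TableMetadata;
import org.apache.iceberg.TableOperations;

class PurgeExample {
  static void purge(TableOperations ops) {
    TableMetadata lastMetadata = ops.current(); // metadata known to the catalog
    // ... drop the table entry from the metastore first (catalog-specific) ...
    if (lastMetadata != null) {
      CatalogUtil.dropTableData(ops.io(), lastMetadata); // best-effort file cleanup
    }
  }
}
```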
nk1506 commented on code in PR #8907:
URL: https://github.com/apache/iceberg/pull/8907#discussion_r1410206641
##
hive-metastore/src/main/java/org/apache/iceberg/hive/HiveTableOperations.java:
##
@@ -250,14 +262,15 @@ protected void doCommit(TableMetadata base, TableMetadata
met
nk1506 commented on code in PR #8907:
URL: https://github.com/apache/iceberg/pull/8907#discussion_r1410206174
##
hive-metastore/src/main/java/org/apache/iceberg/hive/HiveTableOperations.java:
##
@@ -162,8 +163,11 @@ protected void doRefresh() {
Thread.currentThread().inte
nk1506 commented on code in PR #8907:
URL: https://github.com/apache/iceberg/pull/8907#discussion_r1410204384
##
hive-metastore/src/main/java/org/apache/iceberg/hive/HiveTableOperations.java:
##
@@ -328,6 +346,14 @@ private void setHmsTableParameters(
Set<String> obsoleteProps,
TCGOGOGO commented on issue #9072:
URL: https://github.com/apache/iceberg/issues/9072#issuecomment-1833130222
Any update for this?
wypoon commented on PR #6799:
URL: https://github.com/apache/iceberg/pull/6799#issuecomment-1833125326
Hmm, I think TestExpireSnapshotsAction > dataFilesCleanupWithParallelTasks
might be a flaky test?
All Spark 3 tests passed with Java 8 and 11, and even for Java 17, they
passed for Scal
huaxingao commented on code in PR #9176:
URL: https://github.com/apache/iceberg/pull/9176#discussion_r1410118039
##
api/src/main/java/org/apache/iceberg/expressions/ValueAggregate.java:
##
@@ -30,13 +30,16 @@ protected ValueAggregate(Operation op, BoundTerm term) {
@Overrid
my-vegetable-has-exploded commented on code in PR #106:
URL: https://github.com/apache/iceberg-rust/pull/106#discussion_r1410103369
##
crates/iceberg/src/spec/partition.rs:
##
@@ -60,6 +60,44 @@ impl PartitionSpec {
}
}
+/// Reference to [`UnboundPartitionSpec`].
+pub ty
liurenjie1024 commented on code in PR #111:
URL: https://github.com/apache/iceberg-rust/pull/111#discussion_r1410090267
##
CONTRIBUTING.md:
##
@@ -108,6 +108,26 @@ $ cargo version
cargo 1.69.0 (6e9a83356 2023-04-12)
```
+## Build
+
+### Compile
+
+```shell
+make build
+```
+
liurenjie1024 commented on code in PR #111:
URL: https://github.com/apache/iceberg-rust/pull/111#discussion_r1410084539
##
CONTRIBUTING.md:
##
@@ -108,6 +108,26 @@ $ cargo version
cargo 1.69.0 (6e9a83356 2023-04-12)
```
+## Build
+
+### Compile
+
+```shell
+make build
+```
+
liurenjie1024 commented on code in PR #106:
URL: https://github.com/apache/iceberg-rust/pull/106#discussion_r1410088445
##
crates/iceberg/src/spec/partition.rs:
##
@@ -60,6 +60,44 @@ impl PartitionSpec {
}
}
+/// Reference to [`UnboundPartitionSpec`].
+pub type UnboundPa
manuzhang commented on code in PR #111:
URL: https://github.com/apache/iceberg-rust/pull/111#discussion_r1410085665
##
CONTRIBUTING.md:
##
@@ -108,6 +108,26 @@ $ cargo version
cargo 1.69.0 (6e9a83356 2023-04-12)
```
+## Build
+
+### Compile
+
+```shell
+make build
+```
+
+##
my-vegetable-has-exploded commented on code in PR #106:
URL: https://github.com/apache/iceberg-rust/pull/106#discussion_r1410083813
##
crates/iceberg/src/spec/partition.rs:
##
@@ -60,6 +62,99 @@ impl PartitionSpec {
}
}
+static PARTITION_DATA_ID_START: i32 = 1000;
+
+///
my-vegetable-has-exploded commented on code in PR #106:
URL: https://github.com/apache/iceberg-rust/pull/106#discussion_r1410082084
##
crates/iceberg/src/spec/partition.rs:
##
@@ -60,6 +62,99 @@ impl PartitionSpec {
}
}
+static PARTITION_DATA_ID_START: i32 = 1000;
+
+///
AllenWee1106 commented on issue #9178:
URL: https://github.com/apache/iceberg/issues/9178#issuecomment-1832991874
@nk1506
Thank you for your reply.
Based on your analysis, I have the following questions:
1. I have multiple Spark tasks running at the same time. According to your
ana
qianzhen0 closed issue #9175: Does iceberg flink streaming read support recover
from last ckp?
URL: https://github.com/apache/iceberg/issues/9175
amogh-jahagirdar commented on code in PR #9183:
URL: https://github.com/apache/iceberg/pull/9183#discussion_r1410012734
##
core/src/test/java/org/apache/iceberg/TestSequenceNumberForV2Table.java:
##
@@ -319,12 +321,18 @@ public void testExpirationInTransaction() {
V2Assert.
github-actions[bot] commented on issue #7574:
URL: https://github.com/apache/iceberg/issues/7574#issuecomment-1832897692
This issue has been closed because it has not received any activity in the
last 14 days since being marked as 'stale'
github-actions[bot] closed issue #7574: Iceberg with Hive Metastore does not
create a catalog in Spark and uses default
URL: https://github.com/apache/iceberg/issues/7574
github-actions[bot] commented on issue #7639:
URL: https://github.com/apache/iceberg/issues/7639#issuecomment-1832897644
This issue has been automatically marked as stale because it has been open
for 180 days with no activity. It will be closed in next 14 days if no further
activity occurs.
github-actions[bot] commented on issue #7748:
URL: https://github.com/apache/iceberg/issues/7748#issuecomment-1832897578
This issue has been automatically marked as stale because it has been open
for 180 days with no activity. It will be closed in next 14 days if no further
activity occurs.
amogh-jahagirdar commented on code in PR #9183:
URL: https://github.com/apache/iceberg/pull/9183#discussion_r1410002841
##
core/src/main/java/org/apache/iceberg/BaseTransaction.java:
##
@@ -446,20 +446,16 @@ private void commitSimpleTransaction() {
}
Set committe
amogh-jahagirdar commented on PR #9183:
URL: https://github.com/apache/iceberg/pull/9183#issuecomment-1832872079
Thanks for the PR, @bartash. I'm taking a look.
dependabot[bot] opened a new pull request, #171:
URL: https://github.com/apache/iceberg-python/pull/171
Bumps [mypy-boto3-glue](https://github.com/youtype/mypy_boto3_builder) from
1.29.2 to 1.33.0.
Commits
See full diff in https://github.com/youtype/mypy_boto3_builder/commits
amitmittal5 commented on issue #9171:
URL: https://github.com/apache/iceberg/issues/9171#issuecomment-1832783780
Hello,
I am also running a Spark streaming job with the latest versions of Spark and
Iceberg; however, I am seeing that the data file is getting overwritten in
subsequent stream executions. I
amitmittal5 commented on issue #9172:
URL: https://github.com/apache/iceberg/issues/9172#issuecomment-1832775989
I also tested with the latest version,
**iceberg-spark-runtime-3.4_2.12-1.4.2.jar**, as well; I could see that the
second number, which is part of the file name, is continuously increasing
00
bartash commented on PR #9183:
URL: https://github.com/apache/iceberg/pull/9183#issuecomment-1832734372
@amogh-jahagirdar can you please take a look when you have time, as you
implemented #6634?
bartash opened a new pull request, #9183:
URL: https://github.com/apache/iceberg/pull/9183
When a snapshot is expired as part of a transaction, the snapshot file(s)
should be deleted when the transaction commits. A recent change (#6634) ensured
that files are not deleted when they have also
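For readers following along, here is a minimal sketch of expiring snapshots inside a transaction, which is the scenario this PR addresses. It assumes the standard Iceberg `Transaction` API and is not the PR's own code or tests.

```java
// Illustrative only: expire snapshots as part of a transaction. Per this PR,
// files removed by the expiration should be deleted when the transaction commits.
import org.apache.iceberg.Table;
import org.apache.iceberg.Transaction;

class ExpireInTransaction {
  static void expireOldSnapshots(Table table, long olderThanMillis) {
    Transaction txn = table.newTransaction();
    txn.expireSnapshots()
        .expireOlderThan(olderThanMillis)
        .commit();             // stages the expiration within the transaction
    txn.commitTransaction();   // on commit, obsolete snapshot files should be removed
  }
}
```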
bartash commented on issue #9182:
URL: https://github.com/apache/iceberg/issues/9182#issuecomment-1832727577
Can someone assign this to me please?
bartash opened a new issue, #9182:
URL: https://github.com/apache/iceberg/issues/9182
### Apache Iceberg version
main (development)
### Query engine
Impala
### Please describe the bug 🐞
When a snapshot is expired as part of a transaction, the snapshot file(s
Fokko commented on issue #170:
URL: https://github.com/apache/iceberg-python/issues/170#issuecomment-1832717377
It looks like we need to add the BoundTransform to the left hand side:
https://github.com/apache/iceberg/blob/d247b20f166ccb0b92443d4b05330b1e0d9c5d49/api/src/main/java/org/apache/
amogh-jahagirdar opened a new pull request, #9181:
URL: https://github.com/apache/iceberg/pull/9181
We reference using "comment" as a property in the view spec, but it looks
like we don't have a constant defined in the library. See
https://github.com/trinodb/trino/pull/19818#discussion_r140
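A hedged sketch of the kind of constant the PR proposes; the class and field names below are illustrative and not necessarily what the PR adds.

```java
// Illustrative sketch only: a constants holder for the view spec's "comment"
// property, so callers stop hard-coding the string literal.
public final class ViewProperties {
  public static final String COMMENT = "comment";

  private ViewProperties() {}
}
```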
Fokko commented on code in PR #9176:
URL: https://github.com/apache/iceberg/pull/9176#discussion_r1409850325
##
api/src/main/java/org/apache/iceberg/expressions/ValueAggregate.java:
##
@@ -30,13 +30,16 @@ protected ValueAggregate(Operation op, BoundTerm term) {
@Override
mas-chen commented on code in PR #9173:
URL: https://github.com/apache/iceberg/pull/9173#discussion_r1409747023
##
flink/v1.17/flink/src/main/java/org/apache/iceberg/flink/source/IcebergSource.java:
##
@@ -120,26 +122,30 @@ private String planningThreadName() {
// a public
mas-chen commented on code in PR #9179:
URL: https://github.com/apache/iceberg/pull/9179#discussion_r1409722185
##
docs/flink-queries.md:
##
@@ -277,6 +277,58 @@ DataStream stream = env.fromSource(source,
WatermarkStrategy.noWatermarks()
"Iceberg Source as Avro GenericReco
mas-chen commented on code in PR #9179:
URL: https://github.com/apache/iceberg/pull/9179#discussion_r1409704342
##
docs/flink-queries.md:
##
@@ -277,6 +277,58 @@ DataStream stream = env.fromSource(source,
WatermarkStrategy.noWatermarks()
"Iceberg Source as Avro GenericReco
RussellSpitzer commented on code in PR #8755:
URL: https://github.com/apache/iceberg/pull/8755#discussion_r1409670268
##
core/src/main/java/org/apache/iceberg/deletes/BitmapPositionDeleteIndex.java:
##
@@ -27,6 +27,15 @@ class BitmapPositionDeleteIndex implements
PositionDelete
RussellSpitzer commented on code in PR #8755:
URL: https://github.com/apache/iceberg/pull/8755#discussion_r1409668358
##
core/src/main/java/org/apache/iceberg/SystemConfigs.java:
##
@@ -43,14 +43,14 @@ private SystemConfigs() {}
Integer::parseUnsignedInt);
/**
-
stevenzwu commented on PR #9179:
URL: https://github.com/apache/iceberg/pull/9179#issuecomment-1832386184
@dchristle can you also help review this doc PR? your perspective can help
improve the readability of the doc.
stevenzwu commented on code in PR #9179:
URL: https://github.com/apache/iceberg/pull/9179#discussion_r1409630306
##
docs/flink-queries.md:
##
@@ -277,6 +277,58 @@ DataStream stream = env.fromSource(source,
WatermarkStrategy.noWatermarks()
"Iceberg Source as Avro GenericRec
mrcnc commented on code in PR #8998:
URL: https://github.com/apache/iceberg/pull/8998#discussion_r1409630922
##
open-api/rest-catalog-open-api.yaml:
##
@@ -2872,6 +2872,10 @@ components:
For unauthorized requests, services should return an appropriate 401 or
40
RussellSpitzer commented on code in PR #8755:
URL: https://github.com/apache/iceberg/pull/8755#discussion_r1409617517
##
api/src/main/java/org/apache/iceberg/types/TypeUtil.java:
##
@@ -452,6 +454,59 @@ private static void checkSchemaCompatibility(
}
}
+ public static
XN137 commented on PR #9157:
URL: https://github.com/apache/iceberg/pull/9157#issuecomment-1832326554
@dependabot rebase
dependabot[bot] commented on PR #9157:
URL: https://github.com/apache/iceberg/pull/9157#issuecomment-1832326624
Sorry, only users with push access can use that command.
amitmittal5 commented on issue #9172:
URL: https://github.com/apache/iceberg/issues/9172#issuecomment-1832293641
Just adding more details on the table to which the job is writing: I tried
with both external and managed tables, but see similar results. The behavior I
see is that it took around
Fokko commented on code in PR #139:
URL: https://github.com/apache/iceberg-python/pull/139#discussion_r1409365694
##
pyiceberg/table/__init__.py:
##
@@ -350,6 +357,241 @@ class RemovePropertiesUpdate(TableUpdate):
removals: List[str]
+class TableMetadataUpdateContext:
+
Fokko commented on code in PR #139:
URL: https://github.com/apache/iceberg-python/pull/139#discussion_r1409362760
##
pyiceberg/table/__init__.py:
##
@@ -350,6 +357,241 @@ class RemovePropertiesUpdate(TableUpdate):
removals: List[str]
+class TableMetadataUpdateContext:
+
Fokko commented on code in PR #139:
URL: https://github.com/apache/iceberg-python/pull/139#discussion_r1409336199
##
pyiceberg/table/__init__.py:
##
@@ -350,6 +357,241 @@ class RemovePropertiesUpdate(TableUpdate):
removals: List[str]
+class TableMetadataUpdateContext:
+
Fokko commented on code in PR #139:
URL: https://github.com/apache/iceberg-python/pull/139#discussion_r1409338251
##
pyiceberg/table/__init__.py:
##
@@ -350,6 +357,241 @@ class RemovePropertiesUpdate(TableUpdate):
removals: List[str]
+class TableMetadataUpdateContext:
+
Fokko merged PR #169:
URL: https://github.com/apache/iceberg-python/pull/169
bigluck opened a new issue, #170:
URL: https://github.com/apache/iceberg-python/issues/170
### Feature Request / Improvement
Hi @Fokko
Seems like `table.scan()` supports a limited set of `filter` conditions, and
it fails when a user specifies a complex one.
In my case, I h
pvary commented on issue #9175:
URL: https://github.com/apache/iceberg/issues/9175#issuecomment-1831887127
- `IcebergSource` is using the new Flink Source API; this is the newer one.
- `FlinkSource` is using the old Flink SourceFunction API; this will be
removed (I plan to deprecate it in th
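To make the distinction concrete, here is a minimal Java sketch of a batch read with the newer FLIP-27 `IcebergSource`, based on the public Flink connector API; the table path and job wiring are illustrative assumptions.

```java
// Illustrative: reading an Iceberg table with the newer FLIP-27 IcebergSource.
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.data.RowData;
import org.apache.iceberg.flink.TableLoader;
import org.apache.iceberg.flink.source.IcebergSource;
import org.apache.iceberg.flink.source.assigner.SimpleSplitAssignerFactory;

public class IcebergSourceExample {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    // Path is illustrative; use your own catalog or table location.
    TableLoader tableLoader = TableLoader.fromHadoopTable("hdfs://nn:8020/warehouse/db/table");

    IcebergSource<RowData> source = IcebergSource.forRowData()
        .tableLoader(tableLoader)
        .assignerFactory(new SimpleSplitAssignerFactory())
        .streaming(false) // batch read; set true for continuous streaming reads
        .build();

    DataStream<RowData> stream = env.fromSource(
        source,
        WatermarkStrategy.noWatermarks(),
        "Iceberg Source",
        TypeInformation.of(RowData.class));

    stream.print();
    env.execute("Read Iceberg with IcebergSource");
  }
}
```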
pvary commented on code in PR #8907:
URL: https://github.com/apache/iceberg/pull/8907#discussion_r1409262085
##
.palantir/revapi.yml:
##
@@ -877,6 +877,27 @@ acceptedBreaks:
- code: "java.field.serialVersionUIDChanged"
new: "field org.apache.iceberg.util.Serializable
pvary commented on code in PR #8907:
URL: https://github.com/apache/iceberg/pull/8907#discussion_r1409261131
##
.palantir/revapi.yml:
##
@@ -877,6 +877,27 @@ acceptedBreaks:
- code: "java.field.serialVersionUIDChanged"
new: "field org.apache.iceberg.util.Serializable
pvary commented on code in PR #8907:
URL: https://github.com/apache/iceberg/pull/8907#discussion_r1409258375
##
hive-metastore/src/main/java/org/apache/iceberg/hive/HiveTableOperations.java:
##
@@ -328,6 +346,14 @@ private void setHmsTableParameters(
Set<String> obsoleteProps,
pvary commented on code in PR #8907:
URL: https://github.com/apache/iceberg/pull/8907#discussion_r1409256826
##
hive-metastore/src/main/java/org/apache/iceberg/hive/HiveOperationsBase.java:
##
@@ -181,4 +186,30 @@ default Table newHmsTable(String hmsTableOwner) {
return n
pvary commented on code in PR #8907:
URL: https://github.com/apache/iceberg/pull/8907#discussion_r1409253203
##
hive-metastore/src/main/java/org/apache/iceberg/hive/HiveTableOperations.java:
##
@@ -250,14 +262,15 @@ protected void doCommit(TableMetadata base, TableMetadata
meta
pvary commented on code in PR #8907:
URL: https://github.com/apache/iceberg/pull/8907#discussion_r1409251564
##
hive-metastore/src/main/java/org/apache/iceberg/hive/HiveTableOperations.java:
##
@@ -162,8 +163,11 @@ protected void doRefresh() {
Thread.currentThread().inter
pvary commented on code in PR #8907:
URL: https://github.com/apache/iceberg/pull/8907#discussion_r1409243044
##
hive-metastore/src/main/java/org/apache/iceberg/hive/HiveCatalog.java:
##
@@ -264,6 +265,163 @@ public void renameTable(TableIdentifier from,
TableIdentifier original
imonteroq commented on issue #2788:
URL: https://github.com/apache/iceberg/issues/2788#issuecomment-1831736292
Hi @SreeramGarlapati, @tmnd1991, are there any plans to implement this? This
is a selling point for Hudi over Iceberg.
gaborkaszab closed issue #7612: DataFile creation by file path seems wrong
URL: https://github.com/apache/iceberg/issues/7612
gaborkaszab commented on issue #7612:
URL: https://github.com/apache/iceberg/issues/7612#issuecomment-1831734715
https://github.com/apache/iceberg/pull/7744 solves this
nk1506 commented on issue #9178:
URL: https://github.com/apache/iceberg/issues/9178#issuecomment-1831614906
It seems like a concurrent commit issue. Are you by any chance running
these queries in parallel?
qianzhen0 commented on issue #9175:
URL: https://github.com/apache/iceberg/issues/9175#issuecomment-1831508633
@pvary thanks for the input!
By saying `IcebergSource`, do you mean the Iceberg Flink connector or the
Spark engine?
So, if I run `insert into sink select * from iceberg_ta
github-raphael-douyere commented on issue #8953:
URL: https://github.com/apache/iceberg/issues/8953#issuecomment-1831463021
@Fokko I don't know how to have a simple and reproducible setup. We had
the issue at a rate of ~10 files per week with an app producing hundreds of
files per hour.
zhongyujiang commented on PR #9177:
URL: https://github.com/apache/iceberg/pull/9177#issuecomment-1831405339
The [CI
failure](https://github.com/apache/iceberg/actions/runs/7029032258/job/19125970118#step:6:214)
seems unrelated:
```
> Task :iceberg-flink:iceberg-flink-1.17:test
```
liurenjie1024 commented on issue #88:
URL: https://github.com/apache/iceberg-rust/issues/88#issuecomment-1831404891
> > Hi, @xiaoyang-sde Welcome to contribute! This seems reasonable to me.
>
> Thanks! Could you please assign this issue to me?
Done!
xiaoyang-sde commented on issue #88:
URL: https://github.com/apache/iceberg-rust/issues/88#issuecomment-1831394325
> Hi, @xiaoyang-sde Welcome to contribute! This seems reasonable to me.
Thanks! Could you please assign this issue to me?