rajcoolguy commented on issue #3044:
URL: https://github.com/apache/iceberg/issues/3044#issuecomment-1790142135
@mgmarino - Thank you so much for the reference; I was able to get the KDA
Iceberg sink working.
ajantha-bhat commented on code in PR #8098:
URL: https://github.com/apache/iceberg/pull/8098#discussion_r1379603818
##
parquet/src/test/java/org/apache/iceberg/avro/TestParquetReadProjection.java:
##
@@ -45,4 +45,20 @@ protected GenericData.Record writeAndRead(
return Ite
ajantha-bhat commented on PR #8098:
URL: https://github.com/apache/iceberg/pull/8098#issuecomment-1790077472
@nastra, @Fokko: I have rebased this PR. I think it is good to go. Please
check and merge.
ajantha-bhat commented on code in PR #8098:
URL: https://github.com/apache/iceberg/pull/8098#discussion_r1379600584
##
parquet/src/test/java/org/apache/iceberg/avro/TestParquetReadProjection.java:
##
@@ -32,7 +32,7 @@ public class TestParquetReadProjection extends
TestReadProje
ajantha-bhat commented on code in PR #8098:
URL: https://github.com/apache/iceberg/pull/8098#discussion_r1379600138
##
parquet/src/test/java/org/apache/iceberg/avro/TestParquetReadProjection.java:
##
@@ -45,4 +45,20 @@ protected GenericData.Record writeAndRead(
return Ite
aokolnychyi commented on code in PR #8717:
URL: https://github.com/apache/iceberg/pull/8717#discussion_r1379552388
##
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/SparkScan.java:
##
@@ -200,12 +220,31 @@ public CustomTaskMetric[] reportDriverMetrics() {
}
lirui-apache commented on PR #5738:
URL: https://github.com/apache/iceberg/pull/5738#issuecomment-1790006696
Hey @rzhang10, I think this feature is very useful. What's the status of the
work here?
anfy2002us commented on issue #7173:
URL: https://github.com/apache/iceberg/issues/7173#issuecomment-1789990297
I see the PR didn't make it to a release. Do you need any help with it?
aokolnychyi commented on PR #8969:
URL: https://github.com/apache/iceberg/pull/8969#issuecomment-1789934651
Thanks, @dramaticlly! Thanks for reviewing, @ajantha-bhat @Fokko @flyrain!
aokolnychyi closed issue #8951: Partitions metadata table returns inconsistent
results size between 1.3.1 and 1.4.1
URL: https://github.com/apache/iceberg/issues/8951
aokolnychyi merged PR #8969:
URL: https://github.com/apache/iceberg/pull/8969
manuzhang commented on issue #8949:
URL: https://github.com/apache/iceberg/issues/8949#issuecomment-1789930655
@yabola It's hard to reproduce with simple data samples. I may send you a
Parquet file via Slack or other channels if possible.
jacobmarble commented on PR #8971:
URL: https://github.com/apache/iceberg/pull/8971#issuecomment-1789878828
I've tracked down the source of the four failing "core-tests" and will push
a fix tomorrow.
bryanck commented on code in PR #8701:
URL: https://github.com/apache/iceberg/pull/8701#discussion_r1379410092
##
kafka-connect/kafka-connect-events/src/main/java/org/apache/iceberg/connect/events/DataWritten.java:
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Fou
bryanck commented on code in PR #8701:
URL: https://github.com/apache/iceberg/pull/8701#discussion_r1379406817
##
core/src/main/java/org/apache/iceberg/avro/TypeToSchema.java:
##
@@ -238,8 +218,68 @@ public Schema primitive(Type.PrimitiveType primitive) {
throw new Unsu
aokolnychyi commented on code in PR #8972:
URL: https://github.com/apache/iceberg/pull/8972#discussion_r1379398550
##
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/actions/RewriteManifestsSparkAction.java:
##
@@ -354,104 +323,90 @@ private void deleteFiles(Iterable loc
aokolnychyi commented on code in PR #8972:
URL: https://github.com/apache/iceberg/pull/8972#discussion_r1379397775
##
spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/actions/RewriteManifestsSparkAction.java:
##
@@ -155,20 +155,7 @@ private RewriteManifests.Result doExecu
aokolnychyi opened a new pull request, #8972:
URL: https://github.com/apache/iceberg/pull/8972
This PR migrates the action for rewriting manifests to use rolling writers.
Right now, we collect all entries in a Spark partition into a list to determine
the number of entries that must be writt
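For illustration only, here is a minimal sketch of the rolling-writer idea described in the PR summary above, using hypothetical stand-in types (not the actual Iceberg or Spark classes): entries are streamed to the current output and a new file is started once a target count is reached, instead of buffering a whole Spark partition in a list first.
```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch of a rolling writer: stream entries to the current
// output and roll to a new "file" after targetEntriesPerFile, instead of
// collecting everything into an in-memory list first.
public class RollingEntryWriter {
  private final int targetEntriesPerFile;
  private final List<List<String>> files = new ArrayList<>(); // stand-in for written files
  private List<String> current;

  RollingEntryWriter(int targetEntriesPerFile) {
    this.targetEntriesPerFile = targetEntriesPerFile;
  }

  void write(Iterator<String> entries) {
    while (entries.hasNext()) {
      if (current == null || current.size() >= targetEntriesPerFile) {
        current = new ArrayList<>(); // roll over: start a new "file"
        files.add(current);
      }
      current.add(entries.next());
    }
  }

  List<List<String>> result() {
    return files;
  }

  public static void main(String[] args) {
    RollingEntryWriter writer = new RollingEntryWriter(2);
    writer.write(List.of("e1", "e2", "e3", "e4", "e5").iterator());
    System.out.println(writer.result()); // [[e1, e2], [e3, e4], [e5]]
  }
}
```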
dramaticlly commented on code in PR #8969:
URL: https://github.com/apache/iceberg/pull/8969#discussion_r1379363975
##
spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestDelete.java:
##
@@ -335,6 +335,37 @@ public void testDeleteFileThenMetadataDele
flyrain commented on code in PR #8969:
URL: https://github.com/apache/iceberg/pull/8969#discussion_r1379340855
##
spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestDelete.java:
##
@@ -335,6 +335,37 @@ public void testDeleteFileThenMetadataDelete()
jacobmarble opened a new pull request, #8971:
URL: https://github.com/apache/iceberg/pull/8971
Helps #8657
This change adds field `ChronoUnit unit` to `TimestampType`, such that
`TimestampType` now represents four specified types:
- `timestamp` (existing)
- `timestamptz` (ex
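A rough sketch of the idea behind this change, assuming a hypothetical value class rather than the real `org.apache.iceberg.types.Types.TimestampType`: a single type parameterized by a `ChronoUnit` (micros vs. nanos) and a UTC-adjustment flag is enough to represent all four variants.
```java
import java.time.temporal.ChronoUnit;

// Hypothetical illustration, not the actual Iceberg TimestampType API:
// one type parameterized by unit and tz-adjustment covers four variants.
final class TimestampTypeSketch {
  private final ChronoUnit unit;     // MICROS or NANOS
  private final boolean adjustToUTC; // true => the "timestamptz" variants

  private TimestampTypeSketch(ChronoUnit unit, boolean adjustToUTC) {
    if (unit != ChronoUnit.MICROS && unit != ChronoUnit.NANOS) {
      throw new IllegalArgumentException("Unsupported unit: " + unit);
    }
    this.unit = unit;
    this.adjustToUTC = adjustToUTC;
  }

  static TimestampTypeSketch of(ChronoUnit unit, boolean adjustToUTC) {
    return new TimestampTypeSketch(unit, adjustToUTC);
  }

  @Override
  public String toString() {
    String base = adjustToUTC ? "timestamptz" : "timestamp";
    return unit == ChronoUnit.NANOS ? base + "_ns" : base;
  }

  public static void main(String[] args) {
    // Prints: timestamp, timestamptz, timestamp_ns, timestamptz_ns
    System.out.println(of(ChronoUnit.MICROS, false));
    System.out.println(of(ChronoUnit.MICROS, true));
    System.out.println(of(ChronoUnit.NANOS, false));
    System.out.println(of(ChronoUnit.NANOS, true));
  }
}
```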
jacobmarble closed pull request #8676: Core: add type Timestampns
URL: https://github.com/apache/iceberg/pull/8676
aokolnychyi commented on code in PR #8970:
URL: https://github.com/apache/iceberg/pull/8970#discussion_r1379295058
##
core/src/main/java/org/apache/iceberg/DataFiles.java:
##
@@ -301,12 +299,10 @@ public Builder withSplitOffsets(List offsets) {
return this;
}
+
rdblue commented on code in PR #7913:
URL: https://github.com/apache/iceberg/pull/7913#discussion_r1379226089
##
core/src/main/java/org/apache/iceberg/UpdateRequirement.java:
##
@@ -20,11 +20,17 @@
import org.apache.iceberg.exceptions.CommitFailedException;
import org.apache
rdblue commented on code in PR #8701:
URL: https://github.com/apache/iceberg/pull/8701#discussion_r1379221960
##
kafka-connect/kafka-connect-events/src/main/java/org/apache/iceberg/connect/events/DataWritten.java:
##
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foun
rdblue commented on code in PR #8701:
URL: https://github.com/apache/iceberg/pull/8701#discussion_r1379219229
##
core/src/main/java/org/apache/iceberg/avro/TypeToSchema.java:
##
@@ -238,8 +218,68 @@ public Schema primitive(Type.PrimitiveType primitive) {
throw new Unsup
aokolnychyi commented on code in PR #8962:
URL: https://github.com/apache/iceberg/pull/8962#discussion_r1379189209
##
api/src/main/java/org/apache/iceberg/AppendFiles.java:
##
@@ -42,16 +42,17 @@ public interface AppendFiles extends
SnapshotUpdate {
* The manifest must cont
jzhuge commented on PR #8922:
URL: https://github.com/apache/iceberg/pull/8922#issuecomment-1789421862
> I am not sure it is a good idea too, we always avoided respecting Spark
configs for the built-in file sources. Iceberg split planning is different.
Thanks for the feedback. If you
jacobmarble closed pull request #8960: API: implement types timestamp_ns and
timestamptz_ns
URL: https://github.com/apache/iceberg/pull/8960
ajantha-bhat commented on issue #8452:
URL: https://github.com/apache/iceberg/issues/8452#issuecomment-1789303068
Fixed by #7105
ajantha-bhat closed issue #8452: Define how to track partition stats files and
add it to the Iceberg spec.
URL: https://github.com/apache/iceberg/issues/8452
dramaticlly commented on code in PR #8969:
URL: https://github.com/apache/iceberg/pull/8969#discussion_r1378995026
##
spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestDelete.java:
##
@@ -335,6 +335,37 @@ public void testDeleteFileThenMetadataDele
aokolnychyi opened a new pull request, #8970:
URL: https://github.com/apache/iceberg/pull/8970
This PR disallows setting equality field IDs for data files, to honor the spec.
```
Field ids used to determine row equality in equality delete files. Required
when content=2 and should be null
ajantha-bhat commented on PR #8969:
URL: https://github.com/apache/iceberg/pull/8969#issuecomment-1789161147
Should we have a test for `merge-on-read`? It looks like the issue is reported
for `merge-on-read` (delete manifests).
jbonofre commented on PR #7105:
URL: https://github.com/apache/iceberg/pull/7105#issuecomment-1789154241
@aokolnychyi @ajantha-bhat thanks, guys! Good one!
As the implementation doesn't seem so complex, I suggest creating the PR
including the doc. We can start discussion directly in the
aokolnychyi commented on PR #7105:
URL: https://github.com/apache/iceberg/pull/7105#issuecomment-1789144339
Thanks everyone involved! Thanks for pushing this, @ajantha-bhat!
> I will first breakdown the steps/PRs needed for on demand generation in
doc or in dev slack channel.
Whate
aokolnychyi closed issue #8451: Define the partition stats schema and
requirements then add them to the Iceberg spec.
URL: https://github.com/apache/iceberg/issues/8451
aokolnychyi merged PR #7105:
URL: https://github.com/apache/iceberg/pull/7105
ajantha-bhat commented on PR #7105:
URL: https://github.com/apache/iceberg/pull/7105#issuecomment-1789103695
@aokolnychyi: I have addressed the nits and pushed. PR is ready.
> I think the next step would be to create an action to generate these files
on demand. Afterwards, there woul
dramaticlly commented on PR #8969:
URL: https://github.com/apache/iceberg/pull/8969#issuecomment-1789096348
CC @RussellSpitzer @ajantha-bhat @zhangbutao
dramaticlly opened a new pull request, #8969:
URL: https://github.com/apache/iceberg/pull/8969
close #8951
This helps fix the bug introduced in #7581 where the partitions metadata table
scans entries instead of files, so that partition-level stats are aggregated
over the partition struct of files. In order
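A minimal sketch of the aggregation described above, with hypothetical `FileInfo`/`Stats` stand-ins rather than the real Iceberg metadata-table code: per-partition stats are grouped by the files' partition struct, so each data file is counted once per partition.
```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: aggregate per-partition stats over data files
// (one row per partition), rather than over manifest entries.
class PartitionStatsSketch {
  record FileInfo(String partition, long recordCount) {} // stand-in for a data file
  record Stats(long fileCount, long recordCount) {}

  static Map<String, Stats> aggregate(List<FileInfo> files) {
    Map<String, Stats> byPartition = new HashMap<>();
    for (FileInfo file : files) {
      byPartition.merge(
          file.partition(),
          new Stats(1, file.recordCount()),
          (a, b) -> new Stats(a.fileCount() + b.fileCount(), a.recordCount() + b.recordCount()));
    }
    return byPartition;
  }

  public static void main(String[] args) {
    List<FileInfo> files =
        List.of(new FileInfo("dt=2023-11-01", 100), new FileInfo("dt=2023-11-01", 50));
    // Expect one partition row: dt=2023-11-01 -> 2 files, 150 records
    System.out.println(aggregate(files));
  }
}
```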
ajantha-bhat commented on code in PR #8966:
URL: https://github.com/apache/iceberg/pull/8966#discussion_r1378895173
##
format/view-spec.md:
##
@@ -239,12 +239,14 @@
s3://bucket/warehouse/default.db/event_agg/metadata/1-(uuid).metadata.json
```
Each change creates a new
ajantha-bhat commented on code in PR #8966:
URL: https://github.com/apache/iceberg/pull/8966#discussion_r1378650407
##
format/view-spec.md:
##
@@ -243,7 +243,7 @@ Each change creates a new metadata JSON file.
```sql
USE prod.other_db;
CREATE OR REPLACE VIEW default.event_agg
ajantha-bhat commented on code in PR #8966:
URL: https://github.com/apache/iceberg/pull/8966#discussion_r1378893246
##
format/view-spec.md:
##
@@ -263,9 +263,6 @@
s3://bucket/warehouse/default.db/event_agg/metadata/2-(uuid).metadata.json
"format-version" : 1,
"locatio
aokolnychyi commented on code in PR #7105:
URL: https://github.com/apache/iceberg/pull/7105#discussion_r1378874945
##
format/spec.md:
##
@@ -702,6 +703,58 @@ Blob metadata is a struct with the following fields:
| _optional_ | _optional_ | **`properties`** | `map` |
Additional
nastra commented on code in PR #8966:
URL: https://github.com/apache/iceberg/pull/8966#discussion_r1378822660
##
format/view-spec.md:
##
@@ -263,9 +263,6 @@
s3://bucket/warehouse/default.db/event_agg/metadata/2-(uuid).metadata.json
"format-version" : 1,
"location" : "
nastra merged PR #8963:
URL: https://github.com/apache/iceberg/pull/8963
ramdas-jagtap opened a new issue, #8968:
URL: https://github.com/apache/iceberg/issues/8968
### Apache Iceberg version
0.13.0
### Query engine
Flink
### Please describe the bug 🐞
I have a setup of Flink 1.13.2 + Iceberg 0.13.0 + Hive Metastore 3.0.0 +
MinIO (S3). How
ramdas-jagtap commented on issue #4743:
URL: https://github.com/apache/iceberg/issues/4743#issuecomment-1788935355
@caesar168 Were you successful in creating a catalog using the environment
mentioned above? If yes, please share the Flink client configuration details.
As I'm still getting thi
rustyconover commented on PR #115:
URL: https://github.com/apache/iceberg-python/pull/115#issuecomment-1788911355
Looks great to me, Sean!
yabola commented on issue #8949:
URL: https://github.com/apache/iceberg/issues/8949#issuecomment-1788792125
@manuzhang Can you provide a way for me to reproduce it locally?
ajantha-bhat opened a new pull request, #8966:
URL: https://github.com/apache/iceberg/pull/8966
(no comment)
liurenjie1024 commented on code in PR #89:
URL: https://github.com/apache/iceberg-rust/pull/89#discussion_r1378534267
##
crates/catalog/rest/src/catalog.rs:
##
@@ -312,11 +316,43 @@ impl Catalog for RestCatalog {
}
/// Load table from the catalog.
-async fn load_
ZENOTME commented on PR #82:
URL: https://github.com/apache/iceberg-rust/pull/82#issuecomment-1788612183
My bad, I have fixed the typos now. 🥵
liurenjie1024 commented on code in PR #89:
URL: https://github.com/apache/iceberg-rust/pull/89#discussion_r1378532556
##
crates/catalog/rest/src/catalog.rs:
##
@@ -312,11 +316,43 @@ impl Catalog for RestCatalog {
}
/// Load table from the catalog.
-async fn load_
liurenjie1024 commented on code in PR #89:
URL: https://github.com/apache/iceberg-rust/pull/89#discussion_r1378523841
##
crates/iceberg/src/table.rs:
##
@@ -17,10 +17,33 @@
//! Table API for Apache Iceberg
+use crate::io::FileIO;
use crate::spec::TableMetadata;
+use crate:
liurenjie1024 commented on code in PR #89:
URL: https://github.com/apache/iceberg-rust/pull/89#discussion_r1378522771
##
crates/catalog/rest/src/catalog.rs:
##
@@ -312,11 +316,43 @@ impl Catalog for RestCatalog {
}
/// Load table from the catalog.
-async fn load_
Fokko commented on code in PR #89:
URL: https://github.com/apache/iceberg-rust/pull/89#discussion_r1377882393
##
crates/catalog/rest/src/catalog.rs:
##
@@ -312,11 +316,43 @@ impl Catalog for RestCatalog {
}
/// Load table from the catalog.
-async fn load_table(&s