nastra commented on code in PR #187:
URL: https://github.com/apache/iceberg-docs/pull/187#discussion_r1086933656
##
landing-page/content/common/how-to-release.md:
##
@@ -192,11 +212,15 @@ This release includes important changes that I should
have summarized here, but
Please
nastra commented on code in PR #:
URL: https://github.com/apache/iceberg/pull/#discussion_r1086946040
##
open-api/rest-catalog-open-api.yaml:
##
@@ -69,6 +69,13 @@ paths:
- Configuration API
summary: List all catalog configuration settings
operatio
ajantha-bhat commented on code in PR #187:
URL: https://github.com/apache/iceberg-docs/pull/187#discussion_r1086946962
##
landing-page/content/common/how-to-release.md:
##
@@ -192,11 +212,15 @@ This release includes important changes that I should
have summarized here, but
P
rdblue merged PR #:
URL: https://github.com/apache/iceberg/pull/
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: issues-unsubscr...@iceberg.apac
rdblue commented on PR #:
URL: https://github.com/apache/iceberg/pull/#issuecomment-1404008281
Thanks, @danielcweeks!
Fokko commented on issue #6414:
URL: https://github.com/apache/iceberg/issues/6414#issuecomment-1404045827
@ajantha-bhat I think you're good to pick this up if you're still interested
:)
stevenzwu commented on PR #6660:
URL: https://github.com/apache/iceberg/pull/6660#issuecomment-1404049007
> As I implemented this though, I started to think we may just want to have a
direct branch method on the FlinkSink builder itself. That seems more intuitive
from an API perspective and is
stevenzwu commented on code in PR #6660:
URL: https://github.com/apache/iceberg/pull/6660#discussion_r1087025072
##
flink/v1.16/flink/src/main/java/org/apache/iceberg/flink/sink/IcebergFilesCommitter.java:
##
@@ -471,8 +476,9 @@ private static ListStateDescriptor>
buildStateDes
szehon-ho commented on code in PR #6664:
URL: https://github.com/apache/iceberg/pull/6664#discussion_r1087040113
##
core/src/main/java/org/apache/iceberg/BaseTableScan.java:
##
@@ -47,4 +48,10 @@ public CloseableIterable planTasks() {
return TableScanUtil.planTasks(
szehon-ho commented on code in PR #6648:
URL: https://github.com/apache/iceberg/pull/6648#discussion_r1087052395
##
hive-metastore/src/main/java/org/apache/iceberg/hive/MetastoreLock.java:
##
@@ -0,0 +1,531 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+
szehon-ho merged PR #6591:
URL: https://github.com/apache/iceberg/pull/6591
szehon-ho commented on PR #6591:
URL: https://github.com/apache/iceberg/pull/6591#issuecomment-1404106102
Merged, thanks @krvikash for the change and @ajantha-bhat for the review
holdenk commented on issue #6652:
URL: https://github.com/apache/iceberg/issues/6652#issuecomment-1404136088
Actually I think we could do this with a small extension to Iceberg's WAP so
that the logs contain the table names.
holdenk commented on issue #6652:
URL: https://github.com/apache/iceberg/issues/6652#issuecomment-1404153325
Hmm, after more poking I can probably do this with an Iceberg listener. Closing
for now.
holdenk closed issue #6652: Support for global shadow writes + logs
URL: https://github.com/apache/iceberg/issues/6652
RussellSpitzer commented on issue #6652:
URL: https://github.com/apache/iceberg/issues/6652#issuecomment-1404156574
This is also kind of similar to Nessie's branching idea; I think Iceberg is
also moving towards having some first-level branching support.
RussellSpitzer commented on code in PR #6554:
URL: https://github.com/apache/iceberg/pull/6554#discussion_r1087115907
##
data/src/test/java/org/apache/iceberg/data/TestMetricsRowGroupFilterTypes.java:
##
@@ -212,74 +218,82 @@ public void createParquetInputFile(List records)
thr
haydenflinner commented on PR #5933:
URL: https://github.com/apache/iceberg/pull/5933#issuecomment-1404162903
If I have a table with more than 100 columns, what are the downsides since
I'm above this param value? I don't see it documented here --
https://iceberg.apache.org/docs/latest/confi
GabeChurch opened a new issue, #6667:
URL: https://github.com/apache/iceberg/issues/6667
### Query engine
Spark
### Question
I have a situation where I need to make high(ish)-frequency writes to a
single iceberg table in multiple Spark jobs, and multiple times per job --
abmo-x opened a new pull request, #6668:
URL: https://github.com/apache/iceberg/pull/6668
The max_concurrent_adds argument allows users of the add_files procedure to
add data files concurrently. Today the parallelism defaults to 1 and there is
no way to configure it.
jackye1995 commented on issue #6625:
URL: https://github.com/apache/iceberg/issues/6625#issuecomment-1404235165
This sounds like a very cool plugin to add. Any thoughts about that? @rdblue
Fokko commented on code in PR #6566:
URL: https://github.com/apache/iceberg/pull/6566#discussion_r1087197108
##
python/pyiceberg/expressions/visitors.py:
##
@@ -881,3 +881,82 @@ def rewrite_to_dnf(expr: BooleanExpression) ->
Tuple[BooleanExpression, ...]:
# (A AND NOT(B) A
amogh-jahagirdar commented on code in PR #6660:
URL: https://github.com/apache/iceberg/pull/6660#discussion_r1087201304
##
flink/v1.16/flink/src/main/java/org/apache/iceberg/flink/sink/IcebergFilesCommitter.java:
##
@@ -471,8 +476,9 @@ private static ListStateDescriptor>
buildS
RussellSpitzer commented on issue #6669:
URL: https://github.com/apache/iceberg/issues/6669#issuecomment-1404252649
Yep, this is basically expected. You can always set parameters such that all
files are marked for rewriting. The defaults are all based around percentage
sizes of the target file
Fokko commented on code in PR #6566:
URL: https://github.com/apache/iceberg/pull/6566#discussion_r1087205606
##
python/pyiceberg/expressions/visitors.py:
##
@@ -881,3 +881,82 @@ def rewrite_to_dnf(expr: BooleanExpression) ->
Tuple[BooleanExpression, ...]:
# (A AND NOT(B) A
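The diff above touches pyiceberg's `rewrite_to_dnf`. For readers unfamiliar with the transformation, here is a minimal standalone sketch of rewriting a boolean expression into disjunctive normal form, assuming a hypothetical tuple-based expression tree (this is not pyiceberg's `BooleanExpression` implementation):

```python
# Hypothetical DNF rewrite sketch. Expression nodes are modeled as tuples:
# ("and", a, b), ("or", a, b), ("not", a), or a plain string literal.
# The result is a list of conjunctions, each a list of literals.

def to_dnf(expr):
    if isinstance(expr, str):
        return [[expr]]                        # one conjunction, one literal
    op = expr[0]
    if op == "not":
        inner = expr[1]
        if isinstance(inner, str):
            return [["not " + inner]]
        if inner[0] == "not":                  # double negation: NOT(NOT A) -> A
            return to_dnf(inner[1])
        if inner[0] == "and":                  # De Morgan: NOT(A AND B) -> NOT A OR NOT B
            return to_dnf(("or", ("not", inner[1]), ("not", inner[2])))
        if inner[0] == "or":                   # De Morgan: NOT(A OR B) -> NOT A AND NOT B
            return to_dnf(("and", ("not", inner[1]), ("not", inner[2])))
    if op == "or":                             # OR concatenates the conjunction lists
        return to_dnf(expr[1]) + to_dnf(expr[2])
    if op == "and":                            # AND distributes over OR (cross product)
        return [l + r for l in to_dnf(expr[1]) for r in to_dnf(expr[2])]
    raise ValueError(f"unknown node: {expr!r}")

print(to_dnf(("and", "A", ("or", "B", "C"))))  # -> [['A', 'B'], ['A', 'C']]
```

Note that distributing AND over OR can grow the number of conjunctions exponentially in the worst case.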
Fokko commented on code in PR #6644:
URL: https://github.com/apache/iceberg/pull/6644#discussion_r1087211802
##
python/pyiceberg/catalog/rest.py:
##
@@ -175,11 +175,7 @@ class RestCatalog(Catalog):
session: Session
properties: Properties
-def __init__(
-s
jackye1995 commented on code in PR #6660:
URL: https://github.com/apache/iceberg/pull/6660#discussion_r1087212028
##
flink/v1.16/flink/src/test/java/org/apache/iceberg/flink/SimpleDataUtil.java:
##
@@ -284,10 +303,23 @@ public static void assertTableRecords(Table table,
List ex
jackye1995 commented on PR #6660:
URL: https://github.com/apache/iceberg/pull/6660#issuecomment-1404275883
> As I implemented this though, I started to think we may just want to have a
direct branch method on the FlinkSink builder itself. That seems more intuitive
from an API perspective and i
srilman commented on issue #6620:
URL: https://github.com/apache/iceberg/issues/6620#issuecomment-1404304169
@fokko Any thoughts?
amogh-jahagirdar commented on code in PR #6660:
URL: https://github.com/apache/iceberg/pull/6660#discussion_r1087244952
##
flink/v1.16/flink/src/test/java/org/apache/iceberg/flink/SimpleDataUtil.java:
##
@@ -284,10 +303,23 @@ public static void assertTableRecords(Table table,
L
jackye1995 commented on code in PR #6660:
URL: https://github.com/apache/iceberg/pull/6660#discussion_r1087256344
##
flink/v1.16/flink/src/test/java/org/apache/iceberg/flink/SimpleDataUtil.java:
##
@@ -284,10 +303,23 @@ public static void assertTableRecords(Table table,
List ex
JonasJ-ap commented on code in PR #6449:
URL: https://github.com/apache/iceberg/pull/6449#discussion_r1087285016
##
delta-lake/src/main/java/org/apache/iceberg/delta/BaseSnapshotDeltaLakeTableAction.java:
##
@@ -0,0 +1,403 @@
+/*
+ * Licensed to the Apache Software Foundation (A
abmo-x closed pull request #6668: add max_concurrent_adds argument to add_files
procedure
URL: https://github.com/apache/iceberg/pull/6668
JonasJ-ap commented on code in PR #6449:
URL: https://github.com/apache/iceberg/pull/6449#discussion_r1087287064
##
delta-lake/src/main/java/org/apache/iceberg/delta/BaseSnapshotDeltaLakeTableAction.java:
##
@@ -0,0 +1,403 @@
+/*
+ * Licensed to the Apache Software Foundation (A
amogh-jahagirdar commented on PR #6660:
URL: https://github.com/apache/iceberg/pull/6660#issuecomment-1404381644
Thanks for the reviews @stevenzwu @jackye1995 !
ajantha-bhat commented on issue #6414:
URL: https://github.com/apache/iceberg/issues/6414#issuecomment-1404465465
> @ajantha-bhat I think you're good to pick this up if you're still
interested :)
Sure. But I am occupied with partition stats and some internal work. Can't
promise this
szehon-ho opened a new issue, #6670:
URL: https://github.com/apache/iceberg/issues/6670
### Apache Iceberg version
1.1.0 (latest release)
### Query engine
Spark
### Please describe the bug 🐞
This is a serious random bug I debugged together with @dramaticlly,
szehon-ho commented on issue #6670:
URL: https://github.com/apache/iceberg/issues/6670#issuecomment-1404485884
FYI @rdblue @RussellSpitzer @aokolnychyi
RussellSpitzer commented on issue #6670:
URL: https://github.com/apache/iceberg/issues/6670#issuecomment-1404492150
I think this implies that the "equals" method for StructLikeWrapper is
broken, because we should only have a problem if the hashCode and equals
methods both return equalit
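The point above is worth spelling out: a hash-map lookup succeeds only when the hash and the equality check both agree. A small illustration in Python, using a hypothetical `Wrapper` class (not Iceberg's Java `StructLikeWrapper`):

```python
# Illustration (plain Python, not Iceberg code): a dict lookup only
# succeeds when the hash AND the equality check both agree, so a broken
# equals() can hide an entry even when the hashes match.

class Wrapper:
    """Hypothetical key wrapper; broken_eq=True simulates a faulty equals()."""

    def __init__(self, value, broken_eq=False):
        self.value = value
        self.broken_eq = broken_eq

    def __hash__(self):
        return hash(self.value)              # equal values hash alike

    def __eq__(self, other):
        if self.broken_eq:
            return False                     # faulty equality check
        return self.value == other.value

stored = Wrapper("p1", broken_eq=True)
counts = {stored: 10}

assert hash(Wrapper("p1")) == hash(stored)   # the hashes agree...
assert Wrapper("p1") not in counts           # ...but broken equals() hides the entry
assert Wrapper("p1") in {Wrapper("p1"): 10}  # hash and equals agree -> found
```

The same contract holds for Java's HashMap: `hashCode()` narrows the lookup to a bucket, and `equals()` must then confirm the match.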
JonasJ-ap commented on code in PR #6571:
URL: https://github.com/apache/iceberg/pull/6571#discussion_r1087385526
##
docs/java-api.md:
##
@@ -147,6 +147,58 @@ t.newAppend().appendFile(data).commit();
t.commitTransaction();
```
+### WriteData
+
+The java api can write data int
wypoon opened a new pull request, #6671:
URL: https://github.com/apache/iceberg/pull/6671
Port of https://github.com/apache/iceberg/pull/6550 to Spark 3.2.
RussellSpitzer commented on issue #6670:
URL: https://github.com/apache/iceberg/issues/6670#issuecomment-1404503530
I did a quick test using PartitionData
```java
public void testBug() {
Types.StructType STRUCT_TYPE =
Types.StructType.of(
T
szehon-ho commented on issue #6670:
URL: https://github.com/apache/iceberg/issues/6670#issuecomment-1404513790
@RussellSpitzer is right, I forgot that hashCode is not the only factor for
get() in a map. Let us look further into it.
anthonysgro commented on issue #6619:
URL: https://github.com/apache/iceberg/issues/6619#issuecomment-1404528549
Yes. So here is specifically how it happens:
Creating my table:
I create my table through an Athena query
```
CREATE TABLE IF NOT EXISTS db.friends (
id stri
sfc-gh-standure opened a new pull request, #196:
URL: https://github.com/apache/iceberg-docs/pull/196
New blog on how Apache Iceberg enables ACID compliance for data lakes.
Authored by Sumeet Tandure, published on the Snowflake Medium account.
arminnajafi commented on PR #6646:
URL: https://github.com/apache/iceberg/pull/6646#issuecomment-1404588075
@Fokko @jackye1995
Ready for review
pvary commented on code in PR #6660:
URL: https://github.com/apache/iceberg/pull/6660#discussion_r1087469498
##
flink/v1.16/flink/src/main/java/org/apache/iceberg/flink/sink/FlinkSink.java:
##
@@ -316,6 +316,11 @@ public Builder setSnapshotProperty(String property, String
value
pvary commented on PR #6660:
URL: https://github.com/apache/iceberg/pull/6660#issuecomment-1404604496
Thanks for the PR @amogh-jahagirdar!
Left one small question.
Also, do we have this feature in FlinkSource?
amogh-jahagirdar commented on code in PR #6622:
URL: https://github.com/apache/iceberg/pull/6622#discussion_r1087455589
##
spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/source/SparkScanBuilder.java:
##
@@ -158,6 +182,141 @@ public Filter[] pushedFilters() {
return
pvary commented on issue #6667:
URL: https://github.com/apache/iceberg/issues/6667#issuecomment-1404609798
Hi @GabeChurch,
Sadly I am not too familiar with the Spark configurations, but when #6570
gets in, it might help you with the high frequency concurrent writes.
nastra commented on code in PR #6664:
URL: https://github.com/apache/iceberg/pull/6664#discussion_r1087507535
##
core/src/main/java/org/apache/iceberg/BaseTableScan.java:
##
@@ -47,4 +48,10 @@ public CloseableIterable planTasks() {
return TableScanUtil.planTasks(
s
nastra commented on PR #6664:
URL: https://github.com/apache/iceberg/pull/6664#issuecomment-1404648315
@szehon-ho thanks for the review. I double-checked and we actually don't
need to deprecate anything. Making the method in the super class `protected`
again fixes the issue, so we should b
nastra closed pull request #6664: Core: Fix API breakages around scanMetrics()
URL: https://github.com/apache/iceberg/pull/6664
pvary commented on code in PR #6648:
URL: https://github.com/apache/iceberg/pull/6648#discussion_r1087526419
##
hive-metastore/src/main/java/org/apache/iceberg/hive/MetastoreLock.java:
##
@@ -0,0 +1,531 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or
pvary commented on code in PR #6648:
URL: https://github.com/apache/iceberg/pull/6648#discussion_r1087529984
##
hive-metastore/src/main/java/org/apache/iceberg/hive/MetastoreLock.java:
##
@@ -0,0 +1,531 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or
pvary commented on code in PR #6648:
URL: https://github.com/apache/iceberg/pull/6648#discussion_r1087530236
##
hive-metastore/src/main/java/org/apache/iceberg/hive/MetastoreLock.java:
##
@@ -0,0 +1,531 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or
Fokko commented on issue #6475:
URL: https://github.com/apache/iceberg/issues/6475#issuecomment-1404692576
Looking at the recent improvements by @rdblue
Before the change: 887 requests; after: 203 requests. A great
improvement, but still more to be done!
After:
```
Fokko merged PR #6671:
URL: https://github.com/apache/iceberg/pull/6671
ajantha-bhat commented on code in PR #6664:
URL: https://github.com/apache/iceberg/pull/6664#discussion_r1087650560
##
.palantir/revapi.yml:
##
@@ -320,38 +320,6 @@ acceptedBreaks:
\ org.apache.iceberg.types.Types.StructType,
java.util.function.BiFunction)"
just
youngxinler commented on code in PR #6571:
URL: https://github.com/apache/iceberg/pull/6571#discussion_r1087654680
##
docs/java-api.md:
##
@@ -147,6 +147,58 @@ t.newAppend().appendFile(data).commit();
t.commitTransaction();
```
+### WriteData
+
+The java api can write data i
ajantha-bhat commented on code in PR #6664:
URL: https://github.com/apache/iceberg/pull/6664#discussion_r1087657361
##
.palantir/revapi.yml:
##
@@ -320,38 +320,6 @@ acceptedBreaks:
\ org.apache.iceberg.types.Types.StructType,
java.util.function.BiFunction)"
just
nastra commented on code in PR #6664:
URL: https://github.com/apache/iceberg/pull/6664#discussion_r1087665522
##
.palantir/revapi.yml:
##
@@ -320,38 +320,6 @@ acceptedBreaks:
\ org.apache.iceberg.types.Types.StructType,
java.util.function.BiFunction)"
justificat
kingeasternsun commented on code in PR #6624:
URL: https://github.com/apache/iceberg/pull/6624#discussion_r1087744708
##
api/src/main/java/org/apache/iceberg/actions/MigrateTable.java:
##
@@ -50,6 +50,15 @@ default MigrateTable dropBackup() {
throw new UnsupportedOperationE
kingeasternsun commented on code in PR #6624:
URL: https://github.com/apache/iceberg/pull/6624#discussion_r1087750558
##
spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/SparkTableUtil.java:
##
@@ -405,14 +405,16 @@ private static Iterator buildManifest(
* @param part
kingeasternsun commented on code in PR #6624:
URL: https://github.com/apache/iceberg/pull/6624#discussion_r1087772776
##
spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/SparkTableUtil.java:
##
@@ -442,14 +444,51 @@ public static void importSparkTable(
"Canno
0xNacho commented on PR #6470:
URL: https://github.com/apache/iceberg/pull/6470#issuecomment-1405009473
+1 @peay . I have the same problem!
RussellSpitzer commented on PR #6470:
URL: https://github.com/apache/iceberg/pull/6470#issuecomment-1405061598
Does the streaming write have the ability to set the file format? Or does
that only let you use the table default as well?
Fokko commented on PR #6672:
URL: https://github.com/apache/iceberg/pull/6672#issuecomment-1405075260
@snazy Out of curiosity, do you use the OpenAPI spec to directly generate
code? In Python, we tried to do it as well, but the structure was too complex
in the end, and we used the generate
RussellSpitzer commented on code in PR #6624:
URL: https://github.com/apache/iceberg/pull/6624#discussion_r1087917320
##
spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/SparkTableUtil.java:
##
@@ -442,14 +444,51 @@ public static void importSparkTable(
"Canno
dimas-b commented on code in PR #6656:
URL: https://github.com/apache/iceberg/pull/6656#discussion_r1087949854
##
nessie/src/test/java/org/apache/iceberg/nessie/TestNessieCatalog.java:
##
@@ -70,12 +72,14 @@ public class TestNessieCatalog extends
CatalogTests {
private Strin
RussellSpitzer merged PR #196:
URL: https://github.com/apache/iceberg-docs/pull/196
RussellSpitzer commented on PR #196:
URL: https://github.com/apache/iceberg-docs/pull/196#issuecomment-1405133032
Thanks @sfc-gh-standure !
RussellSpitzer commented on code in PR #6624:
URL: https://github.com/apache/iceberg/pull/6624#discussion_r1087967571
##
spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/procedures/AddFilesProcedure.java:
##
@@ -119,8 +120,15 @@ public InternalRow[] call(InternalRow args)
RussellSpitzer commented on code in PR #187:
URL: https://github.com/apache/iceberg-docs/pull/187#discussion_r1087992589
##
landing-page/content/common/how-to-release.md:
##
@@ -21,6 +21,18 @@ disableSidebar: true
- limitations under the License.
-->
+## Introduction
+
+Th
RussellSpitzer commented on code in PR #187:
URL: https://github.com/apache/iceberg-docs/pull/187#discussion_r1088003719
##
landing-page/content/common/how-to-release.md:
##
@@ -21,6 +21,18 @@ disableSidebar: true
- limitations under the License.
-->
+## Introduction
+
+Th
RussellSpitzer commented on code in PR #6582:
URL: https://github.com/apache/iceberg/pull/6582#discussion_r1088030207
##
core/src/main/java/org/apache/iceberg/puffin/StandardBlobTypes.java:
##
@@ -26,4 +26,6 @@ private StandardBlobTypes() {}
* href="https://datasketches.apac
Fokko opened a new pull request, #6673:
URL: https://github.com/apache/iceberg/pull/6673
PyArrow is still sluggish when it comes to opening files, and we still see
many requests being made to S3.
This PR removes the Dataset and uses the lower-level read_table API. Since the
read_table A
ajantha-bhat commented on code in PR #6656:
URL: https://github.com/apache/iceberg/pull/6656#discussion_r1088078440
##
nessie/src/test/java/org/apache/iceberg/nessie/TestNessieCatalog.java:
##
@@ -70,12 +72,14 @@ public class TestNessieCatalog extends
CatalogTests {
private
ajantha-bhat commented on PR #6661:
URL: https://github.com/apache/iceberg/pull/6661#issuecomment-1405302607
cc: @szehon-ho, @RussellSpitzer, @rdblue, @flyrain
ajantha-bhat commented on PR #6656:
URL: https://github.com/apache/iceberg/pull/6656#issuecomment-1405308124
@dimas-b: Thanks for the review. @Fokko: Can you please help in reviewing
and merging the PR?
jackye1995 commented on code in PR #6646:
URL: https://github.com/apache/iceberg/pull/6646#discussion_r1088108564
##
python/mkdocs/docs/configuration.md:
##
@@ -85,3 +85,15 @@ catalog:
default:
type: glue
```
+
+## DynamoDB Catalog
+
+If you want to use AWS DynamoDB as
jackye1995 commented on code in PR #6646:
URL: https://github.com/apache/iceberg/pull/6646#discussion_r1088117070
##
python/pyiceberg/catalog/base_aws_catalog.py:
##
@@ -0,0 +1,163 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license
jackye1995 commented on code in PR #6646:
URL: https://github.com/apache/iceberg/pull/6646#discussion_r1088118591
##
python/pyiceberg/catalog/hive.py:
##
@@ -548,10 +511,9 @@ def update_namespace_properties(
for key, value in updates.items():
ajantha-bhat commented on code in PR #6661:
URL: https://github.com/apache/iceberg/pull/6661#discussion_r1087862183
##
core/src/main/java/org/apache/iceberg/PartitionsTable.java:
##
@@ -47,7 +48,11 @@ public class PartitionsTable extends BaseMetadataTable {
Types.Ne
RussellSpitzer commented on code in PR #6661:
URL: https://github.com/apache/iceberg/pull/6661#discussion_r1088136888
##
core/src/main/java/org/apache/iceberg/PartitionsTable.java:
##
@@ -47,7 +48,11 @@ public class PartitionsTable extends BaseMetadataTable {
Types.
szehon-ho merged PR #6664:
URL: https://github.com/apache/iceberg/pull/6664
szehon-ho commented on PR #6664:
URL: https://github.com/apache/iceberg/pull/6664#issuecomment-1405377742
Merged, thanks @nastra and @ajantha-bhat for the review
wypoon commented on PR #6671:
URL: https://github.com/apache/iceberg/pull/6671#issuecomment-1405433121
Thanks @nastra and @Fokko.
manisin opened a new pull request, #6674:
URL: https://github.com/apache/iceberg/pull/6674
Currently the catalog is unable to handle database, schema, or table names
(Snowflake identifiers) with special characters. This limitation is due to
the sanitizing of the parameter for the LIKE clause
GabeChurch commented on issue #6667:
URL: https://github.com/apache/iceberg/issues/6667#issuecomment-1405560496
Thanks for the quick response @pvary, just saw that ticket last night as
well!
I actually created a benchmark in Airflow that runs 30 concurrent Spark
jobs with 50 sequent
amogh-jahagirdar commented on code in PR #6598:
URL: https://github.com/apache/iceberg/pull/6598#discussion_r1088290116
##
core/src/main/java/org/apache/iceberg/view/SQLViewRepresentationParser.java:
##
@@ -0,0 +1,121 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) u
amogh-jahagirdar commented on code in PR #6598:
URL: https://github.com/apache/iceberg/pull/6598#discussion_r1088290778
##
core/src/main/java/org/apache/iceberg/view/ViewRepresentationParser.java:
##
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under
jackye1995 commented on code in PR #6655:
URL: https://github.com/apache/iceberg/pull/6655#discussion_r1088296400
##
core/src/main/java/org/apache/iceberg/io/ResolvingFileIO.java:
##
@@ -164,7 +164,7 @@ private FileIO io(String location) {
return io;
}
- private stati
amogh-jahagirdar commented on code in PR #6598:
URL: https://github.com/apache/iceberg/pull/6598#discussion_r1088320961
##
core/src/main/java/org/apache/iceberg/view/UnknownViewRepresentation.java:
##
@@ -0,0 +1,27 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) unde
amogh-jahagirdar commented on code in PR #6660:
URL: https://github.com/apache/iceberg/pull/6660#discussion_r1088335461
##
flink/v1.16/flink/src/main/java/org/apache/iceberg/flink/sink/FlinkSink.java:
##
@@ -316,6 +316,11 @@ public Builder setSnapshotProperty(String property, St