pvary commented on issue #4190:
URL: https://github.com/apache/iceberg/issues/4190#issuecomment-2429181921
If the schema is changed, then the target Iceberg table needs to be updated
to the new schema anyways. So we can use the Iceberg schemaId to send along the
records.
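A minimal sketch of this idea (all class names here are illustrative, not real Iceberg or Flink API): each record carries only the integer schemaId, and the writer resolves the full schema from the table's own schema history.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: records carry only an int schemaId; the writer
// resolves the full schema from the table's schema history. None of
// these names are real Iceberg/Flink API.
final class TaggedRecord {
    final int schemaId;   // id of the Iceberg schema the payload conforms to
    final Object payload; // stand-in for a Flink RowData

    TaggedRecord(int schemaId, Object payload) {
        this.schemaId = schemaId;
        this.payload = payload;
    }
}

final class SchemaResolver {
    // Stand-in for the table's schema-id -> schema mapping
    // (Iceberg tables keep all historical schemas in their metadata).
    private final Map<Integer, String> schemasById = new HashMap<>();

    void register(int id, String schemaJson) {
        schemasById.put(id, schemaJson);
    }

    String resolve(TaggedRecord record) {
        String schema = schemasById.get(record.schemaId);
        if (schema == null) {
            throw new IllegalStateException(
                "Unknown schemaId " + record.schemaId + "; table metadata may be stale");
        }
        return schema;
    }
}
```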
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
FranMorilloAWS commented on issue #4190:
URL: https://github.com/apache/iceberg/issues/4190#issuecomment-2425812884
How would that look? Normally we consume from Kafka or Kinesis and use the
Glue Schema Registry or the Confluent Schema Registry. As of now the Sink has
the option of using Generi
pvary commented on issue #4190:
URL: https://github.com/apache/iceberg/issues/4190#issuecomment-2425567002
The Iceberg table could be used as a schema registry. I would be reluctant
to add any new requirements if possible.
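One way to read "the Iceberg table as a schema registry" is a lookup by schema id against the table's metadata, refreshing once on a miss before failing. A hedged sketch (the `Supplier` stands in for re-reading the table metadata; nothing here is real Iceberg API):

```java
import java.util.Map;
import java.util.function.Supplier;

// Hedged sketch: use the table's own metadata as the schema registry.
// On a cache miss, refresh the metadata once before giving up, since a
// writer with a newer schema may have just committed.
final class TableSchemaRegistry {
    private Map<Integer, String> schemasById;
    private final Supplier<Map<Integer, String>> metadataLoader;

    TableSchemaRegistry(Supplier<Map<Integer, String>> metadataLoader) {
        this.metadataLoader = metadataLoader;
        this.schemasById = metadataLoader.get();
    }

    String schemaFor(int schemaId) {
        String schema = schemasById.get(schemaId);
        if (schema == null) {
            schemasById = metadataLoader.get(); // refresh: the schema may be new
            schema = schemasById.get(schemaId);
        }
        if (schema == null) {
            throw new IllegalArgumentException("No schema with id " + schemaId);
        }
        return schema;
    }
}
```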
pvary commented on issue #4190:
URL: https://github.com/apache/iceberg/issues/4190#issuecomment-2421426631
The current tradeoff is more like doubled CPU time (we need caching and an
extra serialization/deserialization step, which is on an already well-optimized
hot path). We are still looki
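The caching mentioned above could look roughly like this (an illustrative sketch, not the actual implementation): the expensive schema parse and converter construction run once per schemaId, so the per-record cost reduces to a map lookup plus the extra serialization hop.

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative sketch of caching on the hot path: building a converter
// (parsing the schema, wiring field accessors) is expensive, so do it
// once per schemaId and pay only a map lookup per record afterwards.
final class ConverterCache {
    private final Map<Integer, Function<byte[], String>> converters = new ConcurrentHashMap<>();
    private int builds = 0; // how many times the expensive path ran

    Function<byte[], String> converterFor(int schemaId) {
        return converters.computeIfAbsent(schemaId, id -> {
            builds++; // expensive in a real sink: parse schema, build converter
            return bytes -> new String(bytes, StandardCharsets.UTF_8);
        });
    }

    int buildCount() {
        return builds;
    }
}
```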
ottomata commented on issue #4190:
URL: https://github.com/apache/iceberg/issues/4190#issuecomment-2420588912
Ah, thanks!
FWIW, I think schema evolution support is worth the tradeoff of extra bytes
per record :)
pvary commented on issue #4190:
URL: https://github.com/apache/iceberg/issues/4190#issuecomment-2420417129
It sends the schema along with every record. I'm playing around with a
somewhat similar, but more performant solution, where we send only the schemaId
instead of the full schema. The t
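The size difference between sending the full schema and sending only the id is easy to see in a toy wire format (`Envelope` and the 4-byte prefix are assumptions for illustration, not anything in Iceberg):

```java
import java.nio.ByteBuffer;

// Toy wire format: prefix each record with a 4-byte schemaId instead of
// embedding the full schema text. Per-record overhead drops from the
// schema's serialized size to a constant 4 bytes.
final class Envelope {
    static byte[] wrap(int schemaId, byte[] payload) {
        return ByteBuffer.allocate(4 + payload.length)
            .putInt(schemaId)
            .put(payload)
            .array();
    }

    static int schemaIdOf(byte[] envelope) {
        return ByteBuffer.wrap(envelope).getInt();
    }

    static byte[] payloadOf(byte[] envelope) {
        ByteBuffer buf = ByteBuffer.wrap(envelope);
        buf.getInt(); // skip the schemaId prefix
        byte[] payload = new byte[buf.remaining()];
        buf.get(payload);
        return payload;
    }
}
```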
ottomata commented on issue #4190:
URL: https://github.com/apache/iceberg/issues/4190#issuecomment-2413847042
How does [flink-cdc do
it](https://nightlies.apache.org/flink/flink-cdc-docs-master/docs/core-concept/schema-evolution/)?
github-actions[bot] commented on issue #4190:
URL: https://github.com/apache/iceberg/issues/4190#issuecomment-2412570266
This issue has been automatically marked as stale because it has been open
for 180 days with no activity. It will be closed in the next 14 days if no
further activity occurs.
pvary commented on issue #4190:
URL: https://github.com/apache/iceberg/issues/4190#issuecomment-2061836288
I think it is not trivial to implement this feature, as the schema of the
RowData objects, which are the input of the Sink, is finalized when the job
graph is created. To change the sche
Ruees commented on issue #4190:
URL: https://github.com/apache/iceberg/issues/4190#issuecomment-2058647346
> @leichangqing You can refer to the last two commits of my branch
https://github.com/lintingbin2009/iceberg/tree/flink-sink-dynamically-change.
We have put this part of the code in ou
Ruees commented on issue #4190:
URL: https://github.com/apache/iceberg/issues/4190#issuecomment-2058150468
> @leichangqing You can refer to the last two commits of my branch
https://github.com/lintingbin2009/iceberg/tree/flink-sink-dynamically-change.
We have put this part of the code in ou
FranMorilloAWS commented on issue #4190:
URL: https://github.com/apache/iceberg/issues/4190#issuecomment-1969361137
Is there any news on this?