Hi Mickael,
Thank you for the detailed review, and apologies for my absence and the
late reply. I'm also very sorry for the discrepancy between the KIP number
in the discussion thread and the actual KIP number: I took KIP-1153 but
forgot to update the number on the main page. I've updated the KIP to
address all your feedback points.
1. Removed full implementation code. The KIP now explicitly lists every
public API change (method signatures, types, return values) in tables and
describes the associated behavior changes for each, without embedding the
full logic.
2. *TimestampConverter* is now clearly called out. It has its own
dedicated section ("5. Updated SMT: TimestampConverter") that describes the
new target.type values (TimestampMicros, TimestampNanos), the
precision-preserving Unix conversion behavior, and the updates to
timestampTypeFromSchema().
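For anyone following along, here is roughly what the opt-in usage would
look like in a connector's properties file (a sketch based on my reading of
the KIP, not final syntax; TimestampMicros is one of the newly proposed
target.type values, and the field name event_time is made up for
illustration):

```
# Hypothetical example: convert the event_time field to the proposed
# microsecond-precision logical type. Existing configs that use the
# current target.type values (e.g. Timestamp, unix) are unchanged.
transforms=ts
transforms.ts.type=org.apache.kafka.connect.transforms.TimestampConverter$Value
transforms.ts.field=event_time
transforms.ts.target.type=TimestampMicros
```

Until a connector opts in like this, nothing about its timestamp handling
changes, which is the backward-compatibility story described above.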
3. Compatibility section updated. I've replaced the blanket "No breaking
changes" statement with a nuanced breakdown:
- A "Backward Compatibility" subsection confirming existing
configurations are unaffected.
- A "Behavioral Changes" subsection listing what changes when the new
types are actively used (opt-in via connector/converter configuration).
- A "Rolling Upgrades" subsection covering the scenario where workers
are at different versions.
4. Test plan expanded. Now covers TimestampMicros, TimestampNanos,
TimestampConverter (including precision-preserving conversions, upscaling,
truncation, schema-based and schemaless
modes), JsonConverter, ConnectSchema validation, and the Cast SMT.
I've also added a summary table of all files changed across modules for
easier review.
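To make the precision semantics in the test plan concrete, the upscaling
and truncation cases under test reduce to integer arithmetic like the
following (an illustrative sketch of the semantics, not the KIP's actual
code; the method names are mine):

```java
public class TimestampPrecisionSketch {

    // Upscaling, e.g. millis -> micros, is lossless: multiply by 1_000.
    static long millisToMicros(long millis) {
        return millis * 1_000L;
    }

    // Downscaling, e.g. nanos -> micros, truncates sub-microsecond detail.
    // floorDiv rounds toward negative infinity, so pre-epoch (negative)
    // timestamps truncate consistently with positive ones.
    static long nanosToMicros(long nanos) {
        return Math.floorDiv(nanos, 1_000L);
    }

    public static void main(String[] args) {
        // 1_736_000_000_123 ms upscales losslessly to micros.
        System.out.println(millisToMicros(1_736_000_000_123L));
        // The trailing 789 ns are dropped when truncating to micros.
        System.out.println(nanosToMicros(1_736_000_000_123_456_789L));
    }
}
```

The test plan covers exactly these round-trip and truncation cases for
each pair of precisions, in both schema-based and schemaless modes.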
Updated KIP: KIP-1154: Extending support for Microsecond/Nanosecond
Precision for Kafka Connect
<https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=347933282#KIP1154:ExtendingsupportforMicrosecond/NanosecondPrecisionforKafkaConnect-1.NewLogicalType:TimestampMicros>
Looking forward to your thoughts.
Thanks,
Pritam
On Mon, Jan 5, 2026 at 10:00 PM Mickael Maison <[email protected]>
wrote:
> Hi,
>
> Thanks for the KIP. I think overall this would be a useful improvement.
>
> However, the KIP is kind of messy and hard to read.
> It shouldn't include the full logic but instead it should explicitly
> list the changes to public APIs as well as other major required
> changes. For each it should highlight the associated behavior changes.
>
> It seems the code following the "After introduction of these logical
> types, a fix will be added to KIP-808 in the following way" sentence
> refers to the TimestampConverter transformation. If so, can you make
> it clear that this transformation is being updated as part of the KIP?
>
> In the Compatibility, Deprecation, and Migration Plan, you state "No
> breaking changes" and "Backward-compatible". It seems after this KIP,
> there would be some behavioral changes as there is no way to disable
> the feature.
>
> In test plan you will need to also test TimestampNanos and
> TimestampConverter (if applicable).
>
> Thanks,
> Mickael
>
>
> On Wed, May 21, 2025 at 2:56 AM pritam kumar <[email protected]>
> wrote:
> >
> > Hi Kafka Community,
> > If there is no other feedback, I would like to start a VOTE.
> >
> > On Mon, May 19, 2025 at 6:51 PM pritam kumar <[email protected]>
> > wrote:
> >
> > > Hi Kafka Community,
> > > Here is the KIP Link ->
> > >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=347933282
> > >
> > > On Mon, May 19, 2025 at 6:47 PM pritam kumar <
> [email protected]>
> > > wrote:
> > >
> > >> Hi Kafka Community,
> > >>
> > >> Could you please review this KIP proposal when you have a chance?
> > >>
> > >> Thank you,
> > >> Pritam
> > >>
> > >>
> > >> On Sat, May 3, 2025 at 10:53 PM pritam kumar <
> [email protected]>
> > >> wrote:
> > >>
> > >>> Hi Chris,
> > >>> Can you please review this one too?
> > >>>
> > >>> On Sat, Apr 5, 2025 at 7:00 AM pritam kumar <
> [email protected]>
> > >>> wrote:
> > >>>
> > >>>> Hi Kafka Community,
> > >>>> Sorry, due to some changes I had to change the link to the KIP.
> > >>>> Here is the updated KIP link:
> > >>>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1154%3A+Extending+support+for+Microsecond+Precision+for+Kafka+Connect
> > >>>>
> > >>>> On Sat, Apr 5, 2025 at 12:14 AM pritam kumar <
> [email protected]>
> > >>>> wrote:
> > >>>>
> > >>>>> Hi Kafka Community,
> > >>>>>
> > >>>>> I’d like to start a discussion on KIP-1153: Extending Support for
> > >>>>> Microsecond Precision for Kafka Connect
> > >>>>> <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1153%3A+Extending+Support+for+Microsecond+Precision+for+Kafka+Connect
> >
> > >>>>> .
> > >>>>>
> > >>>>> The primary motivation behind this KIP is to enhance the precision
> of
> > >>>>> timestamp handling in Kafka Connect. Currently, Kafka Connect is
> limited to
> > >>>>> millisecond-level precision for timestamps. However, many modern
> data
> > >>>>> formats and platforms have moved beyond this limitation:
> > >>>>>
> > >>>>> -
> > >>>>>
> > >>>>> Formats such as *Avro*, *Parquet*, and *ORC* support microsecond
> > >>>>> (and even nanosecond) precision. For example, Avro specifies
> support for
> > >>>>> timestamp-micros (spec link
> > >>>>> <https://avro.apache.org/docs/1.11.0/spec.html#timestamp-micros
> >).
> > >>>>> -
> > >>>>>
> > >>>>> Sink systems like *Apache Iceberg*, *Delta Lake*, and *Apache
> Hudi*
> > >>>>> offer *microsecond and nanosecond precision* for time-based
> > >>>>> fields, making millisecond precision inadequate for accurate
> data
> > >>>>> replication and analytics in some use cases.
> > >>>>>
> > >>>>> This gap can lead to *loss of precision* when transferring data
> > >>>>> through Kafka Connect, especially when interacting with systems
> that depend
> > >>>>> on high-resolution timestamps for change tracking, event ordering,
> or
> > >>>>> deduplication.
> > >>>>>
> > >>>>> The goal of this KIP is to:
> > >>>>>
> > >>>>> -
> > >>>>>
> > >>>>> Introduce microsecond-level timestamp handling in Kafka Connect
> > >>>>> schema and data representation.
> > >>>>> -
> > >>>>>
> > >>>>> Ensure connectors (both source and sink) can leverage this
> > >>>>> precision when supported by the underlying data systems.
> > >>>>> -
> > >>>>>
> > >>>>> Maintain backward compatibility with existing millisecond-based
> > >>>>> configurations and data.
> > >>>>>
> > >>>>> We welcome community feedback on:
> > >>>>>
> > >>>>> -
> > >>>>>
> > >>>>> Potential implementation concerns or edge cases we should
> address
> > >>>>> -
> > >>>>>
> > >>>>> Suggestions for schema enhancements or conversion strategies
> > >>>>> -
> > >>>>>
> > >>>>> Impacts on connector compatibility and testing
> > >>>>>
> > >>>>> Looking forward to your thoughts and input on this proposal!
> > >>>>>
> > >>>>> Thanks.
> > >>>>> Link to the KIP.
> > >>>>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1153%3A+Extending+Support+for+Microsecond+Precision+for+Kafka+Connect
> > >>>>>
> > >>>>>
>