rk3rn3r edited a comment on pull request #1052:
URL: 
https://github.com/apache/camel-kafka-connector/pull/1052#issuecomment-784354653


   Hey everyone! This is a PR that solves the issue I had with the 
camel-sql-kafka-connector. I left some comments in-line and I think the code 
could maybe be polished a bit, but in general I would like to see this change 
happening (as you know).
   
   My main point is the architectural view and the related developer 
experience, similar to what Omar already pointed out:
   
   - a Kafka Connect connector can receive very different formats depending on 
the configured Kafka Connect Converter (StringConverter, IntegerConverter, etc. 
return a String or the respective numeric type; JsonConverter, AvroConverter, 
ProtobufConverter, etc. are schema-aware converters for structured data that 
yield structured data — typically a Kafka Connect Struct when a schema is 
available, or a Map when no schema is available)
   
   - a Kafka Connect **sink** connector is mainly meant to accept data from 
Kafka and deliver it to a downstream system that naturally can't talk to Kafka. 
Its main purpose is converting the standardized input (Map, Struct, String, or 
a scalar) into the format the downstream system expects
   
   - The essence of this PR is that a Kafka Connect Struct is only valid inside 
Kafka Connect and **should** (or must) be converted to something that matches 
the format for downstream processing (SQL for databases, etc.). In the case of 
a Camel pipeline, from a developer's point of view I would expect either that a 
standardized format exists for handing over events between pipeline steps, or 
that a generally available format is used; for structured/hierarchical data 
(which is what a Struct holds), in Java that would be the Map type.
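The conversion I have in mind is essentially a recursive unwrap of the Struct into plain Java types. A minimal sketch — note the `Struct` class here is a simplified stand-in for `org.apache.kafka.connect.data.Struct` so the snippet is self-contained, and `toConnectFree` is a hypothetical helper name, not an API from this PR:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StructToMapSketch {

    // Simplified stand-in for Kafka Connect's Struct: named fields with values.
    static class Struct {
        final Map<String, Object> fields = new LinkedHashMap<>();
        Struct put(String name, Object value) { fields.put(name, value); return this; }
    }

    // Recursively unwrap a Struct into a plain java.util.Map so downstream
    // pipeline steps only ever see generally available Java types.
    static Object toConnectFree(Object value) {
        if (value instanceof Struct) {
            Map<String, Object> out = new LinkedHashMap<>();
            for (Map.Entry<String, Object> e : ((Struct) value).fields.entrySet()) {
                out.put(e.getKey(), toConnectFree(e.getValue()));
            }
            return out;
        }
        return value; // Strings, numbers, etc. pass through unchanged
    }

    public static void main(String[] args) {
        Struct address = new Struct().put("city", "Berlin");
        Struct person = new Struct().put("name", "Ada").put("address", address);
        // prints {name=Ada, address={city=Berlin}}
        System.out.println(toConnectFree(person));
    }
}
```

With that in place, a downstream step (an SQL connector, for example) only has to understand Maps and scalars, never the Kafka Connect type system.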
   
   - This change also relieves users from config changes when the wire format 
in Kafka Connect is changed (from a schema-less to a schema-ed format): in that 
case a user needs to update the pipeline and include the proposed SMT, which 
was not needed prior to the wire-format change. The serialization format in 
Kafka Connect is by design decoupled from the data that is handled and sent to 
the downstream system(s). When you pass the Struct down to downstream systems, 
you imo force handling of Kafka internals in the downstream 
system/connector.
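To make that last point concrete, here is a sketch of the config change a user would be forced into today when merely switching the wire format (the transform class name below is a placeholder for the proposed SMT, not its actual name):

```properties
# Before: schema-less JSON — values arrive as Map, pipeline works as-is
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false

# After: switching to a schema-ed format (e.g. Avro) — values now arrive
# as Struct, so the pipeline config must additionally include the SMT
value.converter=io.confluent.connect.avro.AvroConverter
transforms=structToMap
transforms.structToMap.type=com.example.StructToMapTransform
```

With the change in this PR, the second block would not need the extra `transforms` lines at all.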


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
