[ https://issues.apache.org/jira/browse/HADOOP-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16096484#comment-16096484 ]

Bharat Viswanadham commented on HADOOP-14623:
---------------------------------------------

Hi [~Hongyuan Li],
Yes, as you have seen, that is the expected behavior with Kafka. Kafka is 
backward compatible with regard to clients, but not forward compatible. That 
is, a 0.9 client can use a 0.10 cluster, but a 0.10 client cannot use a 0.9 
cluster.

I think that starting with the 0.10.2 release they are adding full 
(bidirectional) compatibility, so you will not see this error there.


> fixed some bugs in KafkaSink 
> -----------------------------
>
>                 Key: HADOOP-14623
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14623
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: common, tools
>    Affects Versions: 3.0.0-alpha3
>            Reporter: Hongyuan Li
>            Assignee: Hongyuan Li
>         Attachments: HADOOP-14623-001.patch, HADOOP-14623-002.patch
>
>
> {{KafkaSink}}#{{init}} should set acks to *1* to make sure the message has at 
> least been written to the broker. The current code is listed below; a sketch 
> of the suggested change follows the snippet:
> {code}
>     props.put("request.required.acks", "0");
> {code}
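> A minimal sketch of the change (note: the new {{KafkaProducer}} reads the 
> {{acks}} setting rather than the old producer's {{request.required.acks}}; the 
> attached patch may differ):
> {code}
>     // Require acknowledgement from the partition leader so the send is
>     // confirmed to have reached the broker before it completes.
>     props.put("acks", "1");
> {code}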
> *Update*
> Found another bug in this class: {{key.serializer}} is set to 
> {{org.apache.kafka.common.serialization.ByteArraySerializer}}, but the 
> producer is declared with an {{Integer}} key type. The code is listed below, 
> with a sketch of a possible fix after it:
> {code}
>     props.put("key.serializer",
>         "org.apache.kafka.common.serialization.ByteArraySerializer");
> ……………
>  producer = new KafkaProducer<Integer, byte[]>(props);
> {code}
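> One possible fix, as a sketch (not necessarily what the attached patches do): 
> make the declared key type and the configured serializer agree, either by 
> declaring the key as {{byte[]}} or by switching to 
> {{org.apache.kafka.common.serialization.IntegerSerializer}}:
> {code}
>     // Option A: keep ByteArraySerializer and declare byte[] keys.
>     props.put("key.serializer",
>         "org.apache.kafka.common.serialization.ByteArraySerializer");
>     producer = new KafkaProducer<byte[], byte[]>(props);
>
>     // Option B: keep Integer keys and use the matching serializer.
>     // props.put("key.serializer",
>     //     "org.apache.kafka.common.serialization.IntegerSerializer");
>     // producer = new KafkaProducer<Integer, byte[]>(props);
> {code}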


