This is an automated email from the ASF dual-hosted git repository.

orpiske pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel.git


The following commit(s) were added to refs/heads/main by this push:
     new 34a391f  CAMEL-16914: simplify handling of numeric headers for idempotency support (#6053)
34a391f is described below

commit 34a391f882bd55b2de3cd4b2558062a4fe313ce3
Author: Otavio Rodolfo Piske <orpi...@users.noreply.github.com>
AuthorDate: Tue Sep 7 08:09:36 2021 +0200

    CAMEL-16914: simplify handling of numeric headers for idempotency support (#6053)
---
 .../camel-kafka/src/main/docs/kafka-component.adoc | 47 +++++++++++
 .../component/kafka/serde/KafkaSerdeHelper.java    | 43 ++++++++++
 .../integration/CustomHeaderDeserializer.java      | 40 +++++++++
 .../kafka/integration/KafkaConsumerFullIT.java     |  3 +
 .../integration/KafkaConsumerIdempotentIT.java     | 91 +++++++++++++++++++++
 .../KafkaConsumerIdempotentTestSupport.java        | 69 ++++++++++++++++
 ...kaConsumerIdempotentWithCustomSerializerIT.java | 87 ++++++++++++++++++++
 .../KafkaConsumerIdempotentWithProcessorIT.java    | 94 ++++++++++++++++++++++
 .../integration/KafkaConsumerTopicIsPatternIT.java |  3 +
 .../modules/ROOT/pages/kafka-component.adoc        | 47 +++++++++++
 10 files changed, 524 insertions(+)

diff --git a/components/camel-kafka/src/main/docs/kafka-component.adoc b/components/camel-kafka/src/main/docs/kafka-component.adoc
index f7f8a7a..f37f524 100644
--- a/components/camel-kafka/src/main/docs/kafka-component.adoc
+++ b/components/camel-kafka/src/main/docs/kafka-component.adoc
@@ -530,6 +530,7 @@ The `camel-kafka` library provides a Kafka topic-based idempotent repository. Th
 The topic used must be unique per idempotent repository instance. The mechanism does not have any requirements about the number of topic partitions; as the repository consumes from all partitions at the same time. It also does not have any requirements about the replication factor of the topic.
 Each repository instance that uses the topic (e.g. typically on different machines running in parallel) controls its own consumer group, so in a cluster of 10 Camel processes using the same topic each will control its own offset.
 On startup, the instance subscribes to the topic and rewinds the offset to the beginning, rebuilding the cache to the latest state. The cache will not be considered warmed up until one poll of `pollDurationMs` in length returns 0 records. Startup will not be completed until either the cache has warmed up, or 30 seconds go by; if the latter happens the idempotent repository may be in an inconsistent state until its consumer catches up to the end of the topic.
+Be mindful of the format of the header used for the uniqueness check. By default, it expects Strings as the data type. When a header uses a primitive numeric format, it must be deserialized accordingly; see the samples below for examples.
 
 A `KafkaIdempotentRepository` has the following properties:
 [width="100%",cols="2m,5",options="header"]
@@ -593,6 +594,52 @@ In XML:
 </bean>
 ----
 
+There are three alternatives to choose from when using idempotency with numeric identifiers. The first is to use the static `numericHeader` method from `org.apache.camel.component.kafka.serde.KafkaSerdeHelper` to perform the conversion for you:
+
+[source,java]
+----
+from("direct:performInsert")
+    .idempotentConsumer(numericHeader("id")).messageIdRepositoryRef("insertDbIdemRepo")
+        // once-only insert into database
+    .end()
+----
+
+Alternatively, it is possible to use a custom header deserializer, configured via the route URL, to perform the conversion:
+
+[source,java]
+----
+public class CustomHeaderDeserializer extends DefaultKafkaHeaderDeserializer {
+    private static final Logger LOG = LoggerFactory.getLogger(CustomHeaderDeserializer.class);
+
+    @Override
+    public Object deserialize(String key, byte[] value) {
+        if (key.equals("id")) {
+            BigInteger bi = new BigInteger(value);
+
+            return String.valueOf(bi.longValue());
+        } else {
+            return super.deserialize(key, value);
+        }
+    }
+}
+----
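+
+The deserializer is then wired in through the `headerDeserializer` endpoint option. A minimal sketch follows; the topic, group id and the `com.example` package are illustrative, while the `#class:` syntax mirrors what the accompanying integration tests use:
+
+[source,java]
+----
+from("kafka:my-topic?groupId=myGroup"
+        + "&headerDeserializer=#class:com.example.CustomHeaderDeserializer")
+    .idempotentConsumer(header("id"))
+    .messageIdRepositoryRef("kafkaIdempotentRepository")
+    .to("mock:result");
+----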
+
+Lastly, it is also possible to do so in a processor:
+
+[source,java]
+----
+from(from).routeId("foo")
+    .process(exchange -> {
+        byte[] id = exchange.getIn().getHeader("id", byte[].class);
+
+        BigInteger bi = new BigInteger(id);
+        exchange.getIn().setHeader("id", String.valueOf(bi.longValue()));
+    })
+    .idempotentConsumer(header("id"))
+    .messageIdRepositoryRef("kafkaIdempotentRepository")
+    .to(to);
+----
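+
+All three alternatives assume the numeric header arrives as the raw byte representation of the number. For reference, this is how the accompanying integration tests publish such a header with the plain Kafka producer API (topic, key and message value below are illustrative):
+
+[source,java]
+----
+// the "id" header carries the bytes of a BigInteger, not a String
+ProducerRecord<String, String> data = new ProducerRecord<>("my-topic", "1", "message-1");
+data.headers().add(new RecordHeader("id", BigInteger.valueOf(1L).toByteArray()));
+producer.send(data);
+----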
+
 == Using manual commit with Kafka consumer
 
 By default the Kafka consumer will use auto commit, where the offset will be committed automatically in the background using a given interval.
diff --git a/components/camel-kafka/src/main/java/org/apache/camel/component/kafka/serde/KafkaSerdeHelper.java b/components/camel-kafka/src/main/java/org/apache/camel/component/kafka/serde/KafkaSerdeHelper.java
new file mode 100644
index 0000000..645da01
--- /dev/null
+++ b/components/camel-kafka/src/main/java/org/apache/camel/component/kafka/serde/KafkaSerdeHelper.java
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.camel.component.kafka.serde;
+
+import java.math.BigInteger;
+
+import org.apache.camel.Exchange;
+import org.apache.camel.support.ExpressionAdapter;
+import org.apache.camel.support.builder.ValueBuilder;
+
+public final class KafkaSerdeHelper {
+    private KafkaSerdeHelper() {
+
+    }
+
+    public static ValueBuilder numericHeader(String name) {
+        return new ValueBuilder(new ExpressionAdapter() {
+            @Override
+            public Object evaluate(Exchange exchange) {
+                byte[] id = exchange.getIn().getHeader(name, byte[].class);
+
+                BigInteger bi = new BigInteger(id);
+
+                return String.valueOf(bi.longValue());
+            }
+        });
+    }
+}
diff --git a/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/CustomHeaderDeserializer.java b/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/CustomHeaderDeserializer.java
new file mode 100644
index 0000000..44c295a
--- /dev/null
+++ b/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/CustomHeaderDeserializer.java
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.camel.component.kafka.integration;
+
+import java.math.BigInteger;
+
+import org.apache.camel.component.kafka.serde.DefaultKafkaHeaderDeserializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class CustomHeaderDeserializer extends DefaultKafkaHeaderDeserializer {
+    private static final Logger LOG = LoggerFactory.getLogger(CustomHeaderDeserializer.class);
+
+    @Override
+    public Object deserialize(String key, byte[] value) {
+        if (key.equals("id")) {
+            BigInteger bi = new BigInteger(value);
+            LOG.debug("Converted the header {} to {} via custom serializer", key, bi.longValue());
+
+            return String.valueOf(bi.longValue());
+        } else {
+            return super.deserialize(key, value);
+        }
+    }
+}
diff --git a/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerFullIT.java b/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerFullIT.java
index 3940b6e..5cdcc35 100644
--- a/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerFullIT.java
+++ b/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerFullIT.java
@@ -37,12 +37,15 @@ import org.junit.jupiter.api.AfterEach;
 import org.junit.jupiter.api.BeforeEach;
 import org.junit.jupiter.api.Disabled;
 import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.condition.DisabledIfSystemProperty;
 
 import static org.apache.camel.test.junit5.TestSupport.assertIsInstanceOf;
 import static org.junit.jupiter.api.Assertions.assertEquals;
 import static org.junit.jupiter.api.Assertions.assertFalse;
 import static org.junit.jupiter.api.Assertions.assertTrue;
 
+@DisabledIfSystemProperty(named = "enable.kafka.consumer.idempotency.tests", matches = "true",
+                          disabledReason = "Runtime conflicts with the idempotency tests")
 public class KafkaConsumerFullIT extends BaseEmbeddedKafkaTestSupport {
 
     public static final String TOPIC = "test";
diff --git a/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerIdempotentIT.java b/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerIdempotentIT.java
new file mode 100644
index 0000000..058b885
--- /dev/null
+++ b/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerIdempotentIT.java
@@ -0,0 +1,91 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.camel.component.kafka.integration;
+
+import java.util.Collections;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeoutException;
+
+import org.apache.camel.BindToRegistry;
+import org.apache.camel.Endpoint;
+import org.apache.camel.EndpointInject;
+import org.apache.camel.builder.RouteBuilder;
+import org.apache.camel.component.mock.MockEndpoint;
+import org.apache.camel.processor.idempotent.kafka.KafkaIdempotentRepository;
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.condition.EnabledIfSystemProperty;
+
+import static org.apache.camel.component.kafka.serde.KafkaSerdeHelper.numericHeader;
+
+@EnabledIfSystemProperty(named = "enable.kafka.consumer.idempotency.tests", matches = "true")
+public class KafkaConsumerIdempotentIT extends KafkaConsumerIdempotentTestSupport {
+
+    public static final String TOPIC = "idempt";
+
+    @BindToRegistry("kafkaIdempotentRepository")
+    private KafkaIdempotentRepository kafkaIdempotentRepository
+            = new KafkaIdempotentRepository("TEST_IDEMPOTENT", getBootstrapServers());
+
+    @EndpointInject("kafka:" + TOPIC
+                    + "?groupId=group2&autoOffsetReset=earliest"
+                    + "&keyDeserializer=org.apache.kafka.common.serialization.StringDeserializer"
+                    + "&valueDeserializer=org.apache.kafka.common.serialization.StringDeserializer"
+                    + "&autoCommitIntervalMs=1000&sessionTimeoutMs=30000&autoCommitEnable=true"
+                    + "&interceptorClasses=org.apache.camel.component.kafka.MockConsumerInterceptor")
+    private Endpoint from;
+
+    @EndpointInject("mock:result")
+    private MockEndpoint to;
+
+    private int size = 200;
+
+    @BeforeEach
+    public void before() throws ExecutionException, InterruptedException, TimeoutException {
+        doSend(size, TOPIC);
+    }
+
+    @AfterEach
+    public void after() {
+
+        // clean all test topics
+        kafkaAdminClient.deleteTopics(Collections.singletonList(TOPIC));
+    }
+
+    @Override
+    protected RouteBuilder createRouteBuilder() throws Exception {
+
+        return new RouteBuilder() {
+
+            @Override
+            public void configure() throws Exception {
+                from(from).routeId("foo")
+                        .idempotentConsumer(numericHeader("id"))
+                        .messageIdRepositoryRef("kafkaIdempotentRepository")
+                        .to(to);
+            }
+        };
+    }
+
+    @Test
+    @DisplayName("Numeric headers are consumable when using the idempotent consumer (CAMEL-16914)")
+    public void kafkaIdempotentMessageIsConsumedByCamel() throws InterruptedException {
+        doRun(to, size);
+    }
+}
diff --git a/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerIdempotentTestSupport.java b/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerIdempotentTestSupport.java
new file mode 100644
index 0000000..e3f62d6
--- /dev/null
+++ b/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerIdempotentTestSupport.java
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.camel.component.kafka.integration;
+
+import java.math.BigInteger;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeoutException;
+
+import org.apache.camel.Exchange;
+import org.apache.camel.component.mock.MockEndpoint;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.kafka.common.header.internals.RecordHeader;
+
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertTrue;
+
+public abstract class KafkaConsumerIdempotentTestSupport extends BaseEmbeddedKafkaTestSupport {
+
+    protected void doSend(int size, String topic) throws ExecutionException, InterruptedException, TimeoutException {
+        Properties props = getDefaultProperties();
+        org.apache.kafka.clients.producer.KafkaProducer<String, String> producer
+                = new org.apache.kafka.clients.producer.KafkaProducer<>(props);
+
+        try {
+            for (int k = 0; k < size; k++) {
+                String msg = "message-" + k;
+                ProducerRecord<String, String> data = new ProducerRecord<>(topic, String.valueOf(k), msg);
+
+                data.headers().add(new RecordHeader("id", BigInteger.valueOf(k).toByteArray()));
+                producer.send(data);
+            }
+        } finally {
+            if (producer != null) {
+                producer.close();
+            }
+        }
+    }
+
+    protected void doRun(MockEndpoint mockEndpoint, int size) throws InterruptedException {
+        mockEndpoint.expectedMessageCount(size);
+
+        List<Exchange> exchangeList = mockEndpoint.getReceivedExchanges();
+
+        mockEndpoint.assertIsSatisfied(10000);
+
+        assertEquals(size, exchangeList.size());
+
+        Map<String, Object> headers = mockEndpoint.getExchanges().get(0).getIn().getHeaders();
+        assertTrue(headers.containsKey("id"), "0");
+    }
+}
diff --git a/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerIdempotentWithCustomSerializerIT.java b/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerIdempotentWithCustomSerializerIT.java
new file mode 100644
index 0000000..5a37f0d
--- /dev/null
+++ b/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerIdempotentWithCustomSerializerIT.java
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.camel.component.kafka.integration;
+
+import java.util.Collections;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeoutException;
+
+import org.apache.camel.BindToRegistry;
+import org.apache.camel.Endpoint;
+import org.apache.camel.EndpointInject;
+import org.apache.camel.builder.RouteBuilder;
+import org.apache.camel.component.mock.MockEndpoint;
+import org.apache.camel.processor.idempotent.kafka.KafkaIdempotentRepository;
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.condition.EnabledIfSystemProperty;
+
+@EnabledIfSystemProperty(named = "enable.kafka.consumer.idempotency.tests", matches = "true")
+public class KafkaConsumerIdempotentWithCustomSerializerIT extends KafkaConsumerIdempotentTestSupport {
+
+    public static final String TOPIC = "idempt2";
+
+    @BindToRegistry("kafkaIdempotentRepository")
+    private KafkaIdempotentRepository kafkaIdempotentRepository
+            = new KafkaIdempotentRepository("TEST_IDEMPOTENT", getBootstrapServers());
+
+    @EndpointInject("kafka:" + TOPIC
+                    + "?groupId=group2&autoOffsetReset=earliest"
+                    + "&keyDeserializer=org.apache.kafka.common.serialization.StringDeserializer"
+                    + "&valueDeserializer=org.apache.kafka.common.serialization.StringDeserializer"
+                    + "&headerDeserializer=#class:org.apache.camel.component.kafka.integration.CustomHeaderDeserializer"
+                    + "&autoCommitIntervalMs=1000&sessionTimeoutMs=30000&autoCommitEnable=true"
+                    + "&interceptorClasses=org.apache.camel.component.kafka.MockConsumerInterceptor")
+    private Endpoint from;
+
+    @EndpointInject("mock:result")
+    private MockEndpoint to;
+
+    private int size = 200;
+
+    @BeforeEach
+    public void before() throws ExecutionException, InterruptedException, TimeoutException {
+        doSend(size, TOPIC);
+    }
+
+    @AfterEach
+    public void after() {
+
+        // clean all test topics
+        kafkaAdminClient.deleteTopics(Collections.singletonList(TOPIC));
+    }
+
+    @Override
+    protected RouteBuilder createRouteBuilder() throws Exception {
+
+        return new RouteBuilder() {
+            @Override
+            public void configure() {
+                from(from).routeId("foo")
+                        .idempotentConsumer(header("id"))
+                        .messageIdRepositoryRef("kafkaIdempotentRepository")
+                        .to(to);
+            }
+        };
+    }
+
+    @Test
+    public void kafkaMessageIsConsumedByCamel() throws InterruptedException {
+        doRun(to, size);
+    }
+}
diff --git a/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerIdempotentWithProcessorIT.java b/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerIdempotentWithProcessorIT.java
new file mode 100644
index 0000000..4f0bd9e
--- /dev/null
+++ b/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerIdempotentWithProcessorIT.java
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.camel.component.kafka.integration;
+
+import java.math.BigInteger;
+import java.util.Collections;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.TimeoutException;
+
+import org.apache.camel.BindToRegistry;
+import org.apache.camel.Endpoint;
+import org.apache.camel.EndpointInject;
+import org.apache.camel.builder.RouteBuilder;
+import org.apache.camel.component.mock.MockEndpoint;
+import org.apache.camel.processor.idempotent.kafka.KafkaIdempotentRepository;
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.condition.EnabledIfSystemProperty;
+
+@EnabledIfSystemProperty(named = "enable.kafka.consumer.idempotency.tests", matches = "true")
+public class KafkaConsumerIdempotentWithProcessorIT extends KafkaConsumerIdempotentTestSupport {
+    public static final String TOPIC = "testidemp3";
+
+    @BindToRegistry("kafkaIdempotentRepository")
+    private KafkaIdempotentRepository kafkaIdempotentRepository
+            = new KafkaIdempotentRepository("TEST_IDEMPOTENT", getBootstrapServers());
+
+    @EndpointInject("kafka:" + TOPIC
+                    + "?groupId=group2&autoOffsetReset=earliest"
+                    + "&keyDeserializer=org.apache.kafka.common.serialization.StringDeserializer"
+                    + "&valueDeserializer=org.apache.kafka.common.serialization.StringDeserializer"
+                    + "&autoCommitIntervalMs=1000&sessionTimeoutMs=30000&autoCommitEnable=true"
+                    + "&interceptorClasses=org.apache.camel.component.kafka.MockConsumerInterceptor")
+    private Endpoint from;
+
+    @EndpointInject("mock:result")
+    private MockEndpoint to;
+
+    private int size = 200;
+
+    @BeforeEach
+    public void before() throws ExecutionException, InterruptedException, TimeoutException {
+        doSend(size, TOPIC);
+    }
+
+    @AfterEach
+    public void after() {
+
+        // clean all test topics
+        kafkaAdminClient.deleteTopics(Collections.singletonList(TOPIC));
+    }
+
+    @Override
+    protected RouteBuilder createRouteBuilder() throws Exception {
+
+        return new RouteBuilder() {
+
+            @Override
+            public void configure() throws Exception {
+                from(from).routeId("foo")
+                        .process(exchange -> {
+                            byte[] id = exchange.getIn().getHeader("id", byte[].class);
+
+                            BigInteger bi = new BigInteger(id);
+
+                            exchange.getIn().setHeader("id", String.valueOf(bi.longValue()));
+                        })
+                        .idempotentConsumer(header("id"))
+                        .messageIdRepositoryRef("kafkaIdempotentRepository")
+                        .to(to);
+            }
+        };
+    }
+
+    @Test
+    public void kafkaMessageIsConsumedByCamel() throws InterruptedException {
+        doRun(to, size);
+    }
+}
diff --git a/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerTopicIsPatternIT.java b/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerTopicIsPatternIT.java
index 50281ed..af9c8ce 100644
--- a/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerTopicIsPatternIT.java
+++ b/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/integration/KafkaConsumerTopicIsPatternIT.java
@@ -30,9 +30,12 @@ import org.apache.kafka.clients.producer.ProducerRecord;
 import org.junit.jupiter.api.AfterEach;
 import org.junit.jupiter.api.BeforeEach;
 import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.condition.DisabledIfSystemProperty;
 
 import static org.junit.jupiter.api.Assertions.assertEquals;
 
+@DisabledIfSystemProperty(named = "enable.kafka.consumer.idempotency.tests", matches = "true",
+                          disabledReason = "Runtime conflicts with the idempotency tests")
 public class KafkaConsumerTopicIsPatternIT extends BaseEmbeddedKafkaTestSupport {
 
     public static final String TOPIC = "test";
diff --git a/docs/components/modules/ROOT/pages/kafka-component.adoc b/docs/components/modules/ROOT/pages/kafka-component.adoc
index 5739f03..a228bdf 100644
--- a/docs/components/modules/ROOT/pages/kafka-component.adoc
+++ b/docs/components/modules/ROOT/pages/kafka-component.adoc
@@ -532,6 +532,7 @@ The `camel-kafka` library provides a Kafka topic-based idempotent repository. Th
 The topic used must be unique per idempotent repository instance. The mechanism does not have any requirements about the number of topic partitions; as the repository consumes from all partitions at the same time. It also does not have any requirements about the replication factor of the topic.
 Each repository instance that uses the topic (e.g. typically on different machines running in parallel) controls its own consumer group, so in a cluster of 10 Camel processes using the same topic each will control its own offset.
 On startup, the instance subscribes to the topic and rewinds the offset to the beginning, rebuilding the cache to the latest state. The cache will not be considered warmed up until one poll of `pollDurationMs` in length returns 0 records. Startup will not be completed until either the cache has warmed up, or 30 seconds go by; if the latter happens the idempotent repository may be in an inconsistent state until its consumer catches up to the end of the topic.
+Be mindful of the format of the header used for the uniqueness check. By default, it expects Strings as the data type. When a header uses a primitive numeric format, it must be deserialized accordingly; see the samples below for examples.
 
 A `KafkaIdempotentRepository` has the following properties:
 [width="100%",cols="2m,5",options="header"]
@@ -595,6 +596,52 @@ In XML:
 </bean>
 ----
 
+There are three alternatives to choose from when using idempotency with numeric identifiers. The first is to use the static `numericHeader` method from `org.apache.camel.component.kafka.serde.KafkaSerdeHelper` to perform the conversion for you:
+
+[source,java]
+----
+from("direct:performInsert")
+    .idempotentConsumer(numericHeader("id")).messageIdRepositoryRef("insertDbIdemRepo")
+        // once-only insert into database
+    .end()
+----
+
+Alternatively, it is possible to use a custom header deserializer, configured via the route URL, to perform the conversion:
+
+[source,java]
+----
+public class CustomHeaderDeserializer extends DefaultKafkaHeaderDeserializer {
+    private static final Logger LOG = LoggerFactory.getLogger(CustomHeaderDeserializer.class);
+
+    @Override
+    public Object deserialize(String key, byte[] value) {
+        if (key.equals("id")) {
+            BigInteger bi = new BigInteger(value);
+
+            return String.valueOf(bi.longValue());
+        } else {
+            return super.deserialize(key, value);
+        }
+    }
+}
+----
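+
+The deserializer is then wired in through the `headerDeserializer` endpoint option. A minimal sketch follows; the topic, group id and the `com.example` package are illustrative, while the `#class:` syntax mirrors what the accompanying integration tests use:
+
+[source,java]
+----
+from("kafka:my-topic?groupId=myGroup"
+        + "&headerDeserializer=#class:com.example.CustomHeaderDeserializer")
+    .idempotentConsumer(header("id"))
+    .messageIdRepositoryRef("kafkaIdempotentRepository")
+    .to("mock:result");
+----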
+
+Lastly, it is also possible to do so in a processor:
+
+[source,java]
+----
+from(from).routeId("foo")
+    .process(exchange -> {
+        byte[] id = exchange.getIn().getHeader("id", byte[].class);
+
+        BigInteger bi = new BigInteger(id);
+        exchange.getIn().setHeader("id", String.valueOf(bi.longValue()));
+    })
+    .idempotentConsumer(header("id"))
+    .messageIdRepositoryRef("kafkaIdempotentRepository")
+    .to(to);
+----
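+
+All three alternatives assume the numeric header arrives as the raw byte representation of the number. For reference, this is how the accompanying integration tests publish such a header with the plain Kafka producer API (topic, key and message value below are illustrative):
+
+[source,java]
+----
+// the "id" header carries the bytes of a BigInteger, not a String
+ProducerRecord<String, String> data = new ProducerRecord<>("my-topic", "1", "message-1");
+data.headers().add(new RecordHeader("id", BigInteger.valueOf(1L).toByteArray()));
+producer.send(data);
+----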
+
 == Using manual commit with Kafka consumer
 
 By default the Kafka consumer will use auto commit, where the offset will be committed automatically in the background using a given interval.
