This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch camel-kafka-connector-0.7.x
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector.git

commit 73ed9833b68ec794773e34de30ebb54596fc0965
Author: Andrea Cosentino <anco...@gmail.com>
AuthorDate: Tue Jan 12 06:38:09 2021 +0100

    Fixed idempotency image names
---
 docs/modules/ROOT/pages/idempotency.adoc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/modules/ROOT/pages/idempotency.adoc b/docs/modules/ROOT/pages/idempotency.adoc
index 24cbc1e..d87cf3e 100644
--- a/docs/modules/ROOT/pages/idempotency.adoc
+++ b/docs/modules/ROOT/pages/idempotency.adoc
@@ -16,13 +16,13 @@ Suppose you're using a source connector of any kind. By using the idempotency fe
 
 This means, in the Kafkish language, you won't ingest the same payload multiple times in the target Kafka topic. 
 
-image::ckc-idempontency-source.png[image]
+image::ckc-idempotency-source.png[image]
 
 In the sink scenario, we'll stream out of a Kafka topic multiple records, transform/convert/manipulate them and send them to an external system, like a messaging broker, a storage infra, a database etc.
 
 In the Kafka topic used as source we may have multiple repeated records with the same payload or same metadata. Based on this information we can choose to skip the same records while sending data to the external system and for doing this we can leverage the idempotency feature of ckc.
 
-image::ckc-idempontency-sink.png[image]
+image::ckc-idempotency-sink.png[image]
 
 == Camel-Kafka-connector idempotency configuration
 

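For context on the page this commit touches, below is a minimal sketch of an idempotency-enabled source connector configuration. It is not part of this commit; the connector name, class, topic and task values are hypothetical placeholders, and the camel.idempotency.* keys are the ones described in idempotency.adoc on this branch, so verify them against that page before relying on them.

    # Hypothetical connector name, class and destination topic -- placeholders only.
    name=CamelFileSourceConnectorExample
    connector.class=org.apache.camel.kafkaconnector.file.CamelFileSourceConnector
    tasks.max=1
    topics=mytopic

    # Idempotency: skip records whose body has already been seen,
    # keeping the set of seen keys in an in-memory repository.
    camel.idempotency.enabled=true
    camel.idempotency.expression.type=body
    camel.idempotency.repository.type=memory

With a configuration along these lines, repeated payloads read from the source are detected and not ingested again into the target Kafka topic, which is the source scenario illustrated by ckc-idempotency-source.png in the page fixed by this commit.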