This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch camel-master
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector.git


The following commit(s) were added to refs/heads/camel-master by this push:
     new 087dfb5  [create-pull-request] automated change
087dfb5 is described below

commit 087dfb5612d786d259d8f30f8e638ab1ebdd1686
Author: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
AuthorDate: Sat Feb 13 03:39:39 2021 +0000

    [create-pull-request] automated change
---
 .../resources/connectors/camel-aws2-sqs-sink.json          | 14 ++++++++++++++
 .../generated/resources/connectors/camel-kafka-sink.json   |  2 +-
 .../generated/resources/connectors/camel-kafka-source.json |  2 +-
 .../resources/connectors/camel-milo-client-sink.json       |  6 ++++++
 .../resources/connectors/camel-milo-client-source.json     |  6 ++++++
 .../resources/connectors/camel-vertx-kafka-sink.json       |  6 ------
 .../resources/connectors/camel-vertx-kafka-source.json     |  6 ------
 .../src/generated/resources/camel-aws2-sqs-sink.json       | 14 ++++++++++++++
 .../src/main/docs/camel-aws2-sqs-kafka-sink-connector.adoc |  4 +++-
 .../aws2sqs/CamelAws2sqsSinkConnectorConfig.java           |  8 ++++++++
 .../src/generated/resources/camel-kafka-sink.json          |  2 +-
 .../src/generated/resources/camel-kafka-source.json        |  2 +-
 .../src/main/docs/camel-kafka-kafka-sink-connector.adoc    |  2 +-
 .../src/main/docs/camel-kafka-kafka-source-connector.adoc  |  2 +-
 .../kafka/CamelKafkaSinkConnectorConfig.java               |  2 +-
 .../kafka/CamelKafkaSourceConnectorConfig.java             |  2 +-
 .../src/generated/resources/camel-milo-client-sink.json    |  6 ++++++
 .../src/generated/resources/camel-milo-client-source.json  |  6 ++++++
 .../main/docs/camel-milo-client-kafka-sink-connector.adoc  |  3 ++-
 .../docs/camel-milo-client-kafka-source-connector.adoc     |  3 ++-
 .../miloclient/CamelMiloclientSinkConnectorConfig.java     |  4 ++++
 .../miloclient/CamelMiloclientSourceConnectorConfig.java   |  4 ++++
 .../src/generated/resources/camel-vertx-kafka-sink.json    |  6 ------
 .../src/generated/resources/camel-vertx-kafka-source.json  |  6 ------
 .../main/docs/camel-vertx-kafka-kafka-sink-connector.adoc  |  3 +--
 .../docs/camel-vertx-kafka-kafka-source-connector.adoc     |  3 +--
 .../vertxkafka/CamelVertxkafkaSinkConnectorConfig.java     |  4 ----
 .../vertxkafka/CamelVertxkafkaSourceConnectorConfig.java   |  4 ----
 .../connectors/camel-aws2-sqs-kafka-sink-connector.adoc    |  4 +++-
 .../pages/connectors/camel-kafka-kafka-sink-connector.adoc |  2 +-
 .../connectors/camel-kafka-kafka-source-connector.adoc     |  2 +-
 .../connectors/camel-milo-client-kafka-sink-connector.adoc |  3 ++-
 .../camel-milo-client-kafka-source-connector.adoc          |  3 ++-
 .../connectors/camel-vertx-kafka-kafka-sink-connector.adoc |  3 +--
 .../camel-vertx-kafka-kafka-source-connector.adoc          |  3 +--
 35 files changed, 96 insertions(+), 56 deletions(-)

diff --git a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-aws2-sqs-sink.json b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-aws2-sqs-sink.json
index 8d185cb..be8e767 100644
--- a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-aws2-sqs-sink.json
+++ b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-aws2-sqs-sink.json
@@ -85,6 +85,13 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
+               "camel.sink.endpoint.batchSeparator": {
+                       "name": "camel.sink.endpoint.batchSeparator",
+                       "description": "Set the separator when passing a String 
to send batch message operation",
+                       "defaultValue": "\",\"",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
                "camel.sink.endpoint.delaySeconds": {
                        "name": "camel.sink.endpoint.delaySeconds",
                        "description": "Delay sending messages for a number of 
seconds.",
@@ -269,6 +276,13 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
+               "camel.component.aws2-sqs.batchSeparator": {
+                       "name": "camel.component.aws2-sqs.batchSeparator",
+                       "description": "Set the separator when passing a String 
to send batch message operation",
+                       "defaultValue": "\",\"",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
                "camel.component.aws2-sqs.delaySeconds": {
                        "name": "camel.component.aws2-sqs.delaySeconds",
                        "description": "Delay sending messages for a number of 
seconds.",
diff --git a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-kafka-sink.json b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-kafka-sink.json
index b997786..dfae88e 100644
--- a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-kafka-sink.json
+++ b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-kafka-sink.json
@@ -743,7 +743,7 @@
                },
                "camel.component.kafka.kafkaClientFactory": {
                        "name": "camel.component.kafka.kafkaClientFactory",
-                       "description": "Factory to use for creating 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances with logic that 
extends the vanilla Kafka clients.",
+                       "description": "Factory to use for creating 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create instances with logic that extends the 
vanilla Kafka clients.",
                        "priority": "MEDIUM",
                        "required": "false"
                },
diff --git a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-kafka-source.json b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-kafka-source.json
index ebfe2c8..7124ba4 100644
--- a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-kafka-source.json
+++ b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-kafka-source.json
@@ -698,7 +698,7 @@
                },
                "camel.component.kafka.kafkaClientFactory": {
                        "name": "camel.component.kafka.kafkaClientFactory",
-                       "description": "Factory to use for creating 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances with logic that 
extends the vanilla Kafka clients.",
+                       "description": "Factory to use for creating 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create instances with logic that extends the 
vanilla Kafka clients.",
                        "priority": "MEDIUM",
                        "required": "false"
                },
diff --git a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-milo-client-sink.json b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-milo-client-sink.json
index a626861..3ded443 100644
--- a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-milo-client-sink.json
+++ b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-milo-client-sink.json
@@ -313,6 +313,12 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
+               "camel.component.milo-client.miloClientConnectionManager": {
+                       "name": 
"camel.component.milo-client.miloClientConnectionManager",
+                       "description": "Instance for managing client 
connections",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
                "camel.component.milo-client.overrideHost": {
                        "name": "camel.component.milo-client.overrideHost",
                        "description": "Override the server reported endpoint 
host with the host from the endpoint URI.",
diff --git a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-milo-client-source.json b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-milo-client-source.json
index 83746c4..1b45a55 100644
--- a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-milo-client-source.json
+++ b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-milo-client-source.json
@@ -330,6 +330,12 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
+               "camel.component.milo-client.miloClientConnectionManager": {
+                       "name": 
"camel.component.milo-client.miloClientConnectionManager",
+                       "description": "Instance for managing client 
connections",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
                "camel.component.milo-client.overrideHost": {
                        "name": "camel.component.milo-client.overrideHost",
                        "description": "Override the server reported endpoint 
host with the host from the endpoint URI.",
diff --git a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-vertx-kafka-sink.json b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-vertx-kafka-sink.json
index 459b4d5..b0021be 100644
--- a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-vertx-kafka-sink.json
+++ b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-vertx-kafka-sink.json
@@ -296,12 +296,6 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
-               "camel.sink.endpoint.vertxKafkaClientFactory": {
-                       "name": "camel.sink.endpoint.vertxKafkaClientFactory",
-                       "description": "Factory to use for creating 
io.vertx.kafka.client.consumer.KafkaConsumer and 
io.vertx.kafka.client.consumer.KafkaProducer instances. This allows to 
configure a custom factory to create custom KafkaConsumer and KafkaProducer 
instances with logic that extends the vanilla VertX Kafka clients.",
-                       "priority": "MEDIUM",
-                       "required": "false"
-               },
                "camel.sink.endpoint.saslClientCallbackHandlerClass": {
                        "name": 
"camel.sink.endpoint.saslClientCallbackHandlerClass",
                        "description": "The fully qualified name of a SASL 
client callback handler class that implements the AuthenticateCallbackHandler 
interface.",
diff --git a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-vertx-kafka-source.json b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-vertx-kafka-source.json
index f049e88..6294c79 100644
--- a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-vertx-kafka-source.json
+++ b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-vertx-kafka-source.json
@@ -365,12 +365,6 @@
                                "InOptionalOut"
                        ]
                },
-               "camel.source.endpoint.vertxKafkaClientFactory": {
-                       "name": "camel.source.endpoint.vertxKafkaClientFactory",
-                       "description": "Factory to use for creating 
io.vertx.kafka.client.consumer.KafkaConsumer and 
io.vertx.kafka.client.consumer.KafkaProducer instances. This allows to 
configure a custom factory to create custom KafkaConsumer and KafkaProducer 
instances with logic that extends the vanilla VertX Kafka clients.",
-                       "priority": "MEDIUM",
-                       "required": "false"
-               },
                "camel.source.endpoint.saslClientCallbackHandlerClass": {
                        "name": 
"camel.source.endpoint.saslClientCallbackHandlerClass",
                        "description": "The fully qualified name of a SASL 
client callback handler class that implements the AuthenticateCallbackHandler 
interface.",
diff --git a/connectors/camel-aws2-sqs-kafka-connector/src/generated/resources/camel-aws2-sqs-sink.json b/connectors/camel-aws2-sqs-kafka-connector/src/generated/resources/camel-aws2-sqs-sink.json
index 8d185cb..be8e767 100644
--- a/connectors/camel-aws2-sqs-kafka-connector/src/generated/resources/camel-aws2-sqs-sink.json
+++ b/connectors/camel-aws2-sqs-kafka-connector/src/generated/resources/camel-aws2-sqs-sink.json
@@ -85,6 +85,13 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
+               "camel.sink.endpoint.batchSeparator": {
+                       "name": "camel.sink.endpoint.batchSeparator",
+                       "description": "Set the separator when passing a String 
to send batch message operation",
+                       "defaultValue": "\",\"",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
                "camel.sink.endpoint.delaySeconds": {
                        "name": "camel.sink.endpoint.delaySeconds",
                        "description": "Delay sending messages for a number of 
seconds.",
@@ -269,6 +276,13 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
+               "camel.component.aws2-sqs.batchSeparator": {
+                       "name": "camel.component.aws2-sqs.batchSeparator",
+                       "description": "Set the separator when passing a String 
to send batch message operation",
+                       "defaultValue": "\",\"",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
                "camel.component.aws2-sqs.delaySeconds": {
                        "name": "camel.component.aws2-sqs.delaySeconds",
                        "description": "Delay sending messages for a number of 
seconds.",
diff --git a/connectors/camel-aws2-sqs-kafka-connector/src/main/docs/camel-aws2-sqs-kafka-sink-connector.adoc b/connectors/camel-aws2-sqs-kafka-connector/src/main/docs/camel-aws2-sqs-kafka-sink-connector.adoc
index 1ed8652..d218f26 100644
--- a/connectors/camel-aws2-sqs-kafka-connector/src/main/docs/camel-aws2-sqs-kafka-sink-connector.adoc
+++ b/connectors/camel-aws2-sqs-kafka-connector/src/main/docs/camel-aws2-sqs-kafka-sink-connector.adoc
@@ -24,7 +24,7 @@ connector.class=org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSinkConnecto
 ----
 
 
-The camel-aws2-sqs sink connector supports 54 options, which are listed below.
+The camel-aws2-sqs sink connector supports 56 options, which are listed below.
 
 
 
@@ -42,6 +42,7 @@ The camel-aws2-sqs sink connector supports 54 options, which are listed below.
 | *camel.sink.endpoint.region* | The region in which SQS client needs to work. 
When using this parameter, the configuration will expect the lowercase name of 
the region (for example ap-east-1) You'll need to use the name 
Region.EU_WEST_1.id() | null | false | MEDIUM
 | *camel.sink.endpoint.trustAllCertificates* | If we want to trust all 
certificates in case of overriding the endpoint | false | false | MEDIUM
 | *camel.sink.endpoint.useDefaultCredentialsProvider* | Set whether the SQS 
client should expect to load credentials on an AWS infra instance or to expect 
static credentials to be passed in. | false | false | MEDIUM
+| *camel.sink.endpoint.batchSeparator* | Set the separator when passing a 
String to send batch message operation | "," | false | MEDIUM
 | *camel.sink.endpoint.delaySeconds* | Delay sending messages for a number of 
seconds. | null | false | MEDIUM
 | *camel.sink.endpoint.lazyStartProducer* | Whether the producer should be 
started lazy (on the first message). By starting lazy you can use this to allow 
CamelContext and routes to startup in situations where a producer may otherwise 
fail during starting and cause the route to fail being started. By deferring 
this startup to be lazy then the startup failure can be handled during routing 
messages via Camel's routing error handlers. Beware that when the first message 
is processed then cre [...]
 | *camel.sink.endpoint.messageDeduplicationIdStrategy* | Only for FIFO queues. 
Strategy for setting the messageDeduplicationId on the message. Can be one of 
the following options: useExchangeId, useContentBasedDeduplication. For the 
useContentBasedDeduplication option, no messageDeduplicationId will be set on 
the message. One of: [useExchangeId] [useContentBasedDeduplication] | 
"useExchangeId" | false | MEDIUM
@@ -68,6 +69,7 @@ The camel-aws2-sqs sink connector supports 54 options, which are listed below.
 | *camel.component.aws2-sqs.region* | The region in which SQS client needs to 
work. When using this parameter, the configuration will expect the lowercase 
name of the region (for example ap-east-1) You'll need to use the name 
Region.EU_WEST_1.id() | null | false | MEDIUM
 | *camel.component.aws2-sqs.trustAllCertificates* | If we want to trust all 
certificates in case of overriding the endpoint | false | false | MEDIUM
 | *camel.component.aws2-sqs.useDefaultCredentials Provider* | Set whether the 
SQS client should expect to load credentials on an AWS infra instance or to 
expect static credentials to be passed in. | false | false | MEDIUM
+| *camel.component.aws2-sqs.batchSeparator* | Set the separator when passing a 
String to send batch message operation | "," | false | MEDIUM
 | *camel.component.aws2-sqs.delaySeconds* | Delay sending messages for a 
number of seconds. | null | false | MEDIUM
 | *camel.component.aws2-sqs.lazyStartProducer* | Whether the producer should 
be started lazy (on the first message). By starting lazy you can use this to 
allow CamelContext and routes to startup in situations where a producer may 
otherwise fail during starting and cause the route to fail being started. By 
deferring this startup to be lazy then the startup failure can be handled 
during routing messages via Camel's routing error handlers. Beware that when 
the first message is processed the [...]
 | *camel.component.aws2-sqs.messageDeduplicationId Strategy* | Only for FIFO 
queues. Strategy for setting the messageDeduplicationId on the message. Can be 
one of the following options: useExchangeId, useContentBasedDeduplication. For 
the useContentBasedDeduplication option, no messageDeduplicationId will be set 
on the message. One of: [useExchangeId] [useContentBasedDeduplication] | 
"useExchangeId" | false | MEDIUM
diff --git a/connectors/camel-aws2-sqs-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/aws2sqs/CamelAws2sqsSinkConnectorConfig.java b/connectors/camel-aws2-sqs-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/aws2sqs/CamelAws2sqsSinkConnectorConfig.java
index adcead1..6214a36 100644
--- a/connectors/camel-aws2-sqs-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/aws2sqs/CamelAws2sqsSinkConnectorConfig.java
+++ b/connectors/camel-aws2-sqs-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/aws2sqs/CamelAws2sqsSinkConnectorConfig.java
@@ -57,6 +57,9 @@ public class CamelAws2sqsSinkConnectorConfig extends CamelSinkConnectorConfig {
     public static final String 
CAMEL_SINK_AWS2SQS_ENDPOINT_USE_DEFAULT_CREDENTIALS_PROVIDER_CONF = 
"camel.sink.endpoint.useDefaultCredentialsProvider";
     public static final String 
CAMEL_SINK_AWS2SQS_ENDPOINT_USE_DEFAULT_CREDENTIALS_PROVIDER_DOC = "Set whether 
the SQS client should expect to load credentials on an AWS infra instance or to 
expect static credentials to be passed in.";
     public static final Boolean 
CAMEL_SINK_AWS2SQS_ENDPOINT_USE_DEFAULT_CREDENTIALS_PROVIDER_DEFAULT = false;
+    public static final String 
CAMEL_SINK_AWS2SQS_ENDPOINT_BATCH_SEPARATOR_CONF = 
"camel.sink.endpoint.batchSeparator";
+    public static final String CAMEL_SINK_AWS2SQS_ENDPOINT_BATCH_SEPARATOR_DOC 
= "Set the separator when passing a String to send batch message operation";
+    public static final String 
CAMEL_SINK_AWS2SQS_ENDPOINT_BATCH_SEPARATOR_DEFAULT = ",";
     public static final String CAMEL_SINK_AWS2SQS_ENDPOINT_DELAY_SECONDS_CONF 
= "camel.sink.endpoint.delaySeconds";
     public static final String CAMEL_SINK_AWS2SQS_ENDPOINT_DELAY_SECONDS_DOC = 
"Delay sending messages for a number of seconds.";
     public static final String 
CAMEL_SINK_AWS2SQS_ENDPOINT_DELAY_SECONDS_DEFAULT = null;
@@ -135,6 +138,9 @@ public class CamelAws2sqsSinkConnectorConfig extends CamelSinkConnectorConfig {
     public static final String 
CAMEL_SINK_AWS2SQS_COMPONENT_USE_DEFAULT_CREDENTIALS_PROVIDER_CONF = 
"camel.component.aws2-sqs.useDefaultCredentialsProvider";
     public static final String 
CAMEL_SINK_AWS2SQS_COMPONENT_USE_DEFAULT_CREDENTIALS_PROVIDER_DOC = "Set 
whether the SQS client should expect to load credentials on an AWS infra 
instance or to expect static credentials to be passed in.";
     public static final Boolean 
CAMEL_SINK_AWS2SQS_COMPONENT_USE_DEFAULT_CREDENTIALS_PROVIDER_DEFAULT = false;
+    public static final String 
CAMEL_SINK_AWS2SQS_COMPONENT_BATCH_SEPARATOR_CONF = 
"camel.component.aws2-sqs.batchSeparator";
+    public static final String 
CAMEL_SINK_AWS2SQS_COMPONENT_BATCH_SEPARATOR_DOC = "Set the separator when 
passing a String to send batch message operation";
+    public static final String 
CAMEL_SINK_AWS2SQS_COMPONENT_BATCH_SEPARATOR_DEFAULT = ",";
     public static final String CAMEL_SINK_AWS2SQS_COMPONENT_DELAY_SECONDS_CONF 
= "camel.component.aws2-sqs.delaySeconds";
     public static final String CAMEL_SINK_AWS2SQS_COMPONENT_DELAY_SECONDS_DOC 
= "Delay sending messages for a number of seconds.";
     public static final String 
CAMEL_SINK_AWS2SQS_COMPONENT_DELAY_SECONDS_DEFAULT = null;
@@ -210,6 +216,7 @@ public class CamelAws2sqsSinkConnectorConfig extends CamelSinkConnectorConfig {
         conf.define(CAMEL_SINK_AWS2SQS_ENDPOINT_REGION_CONF, 
ConfigDef.Type.STRING, CAMEL_SINK_AWS2SQS_ENDPOINT_REGION_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SINK_AWS2SQS_ENDPOINT_REGION_DOC);
         conf.define(CAMEL_SINK_AWS2SQS_ENDPOINT_TRUST_ALL_CERTIFICATES_CONF, 
ConfigDef.Type.BOOLEAN, 
CAMEL_SINK_AWS2SQS_ENDPOINT_TRUST_ALL_CERTIFICATES_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_AWS2SQS_ENDPOINT_TRUST_ALL_CERTIFICATES_DOC);
         
conf.define(CAMEL_SINK_AWS2SQS_ENDPOINT_USE_DEFAULT_CREDENTIALS_PROVIDER_CONF, 
ConfigDef.Type.BOOLEAN, 
CAMEL_SINK_AWS2SQS_ENDPOINT_USE_DEFAULT_CREDENTIALS_PROVIDER_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_AWS2SQS_ENDPOINT_USE_DEFAULT_CREDENTIALS_PROVIDER_DOC);
+        conf.define(CAMEL_SINK_AWS2SQS_ENDPOINT_BATCH_SEPARATOR_CONF, 
ConfigDef.Type.STRING, CAMEL_SINK_AWS2SQS_ENDPOINT_BATCH_SEPARATOR_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SINK_AWS2SQS_ENDPOINT_BATCH_SEPARATOR_DOC);
         conf.define(CAMEL_SINK_AWS2SQS_ENDPOINT_DELAY_SECONDS_CONF, 
ConfigDef.Type.STRING, CAMEL_SINK_AWS2SQS_ENDPOINT_DELAY_SECONDS_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SINK_AWS2SQS_ENDPOINT_DELAY_SECONDS_DOC);
         conf.define(CAMEL_SINK_AWS2SQS_ENDPOINT_LAZY_START_PRODUCER_CONF, 
ConfigDef.Type.BOOLEAN, 
CAMEL_SINK_AWS2SQS_ENDPOINT_LAZY_START_PRODUCER_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_AWS2SQS_ENDPOINT_LAZY_START_PRODUCER_DOC);
         
conf.define(CAMEL_SINK_AWS2SQS_ENDPOINT_MESSAGE_DEDUPLICATION_ID_STRATEGY_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SINK_AWS2SQS_ENDPOINT_MESSAGE_DEDUPLICATION_ID_STRATEGY_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_AWS2SQS_ENDPOINT_MESSAGE_DEDUPLICATION_ID_STRATEGY_DOC);
@@ -236,6 +243,7 @@ public class CamelAws2sqsSinkConnectorConfig extends CamelSinkConnectorConfig {
         conf.define(CAMEL_SINK_AWS2SQS_COMPONENT_REGION_CONF, 
ConfigDef.Type.STRING, CAMEL_SINK_AWS2SQS_COMPONENT_REGION_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SINK_AWS2SQS_COMPONENT_REGION_DOC);
         conf.define(CAMEL_SINK_AWS2SQS_COMPONENT_TRUST_ALL_CERTIFICATES_CONF, 
ConfigDef.Type.BOOLEAN, 
CAMEL_SINK_AWS2SQS_COMPONENT_TRUST_ALL_CERTIFICATES_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_AWS2SQS_COMPONENT_TRUST_ALL_CERTIFICATES_DOC);
         
conf.define(CAMEL_SINK_AWS2SQS_COMPONENT_USE_DEFAULT_CREDENTIALS_PROVIDER_CONF, 
ConfigDef.Type.BOOLEAN, 
CAMEL_SINK_AWS2SQS_COMPONENT_USE_DEFAULT_CREDENTIALS_PROVIDER_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_AWS2SQS_COMPONENT_USE_DEFAULT_CREDENTIALS_PROVIDER_DOC);
+        conf.define(CAMEL_SINK_AWS2SQS_COMPONENT_BATCH_SEPARATOR_CONF, 
ConfigDef.Type.STRING, CAMEL_SINK_AWS2SQS_COMPONENT_BATCH_SEPARATOR_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SINK_AWS2SQS_COMPONENT_BATCH_SEPARATOR_DOC);
         conf.define(CAMEL_SINK_AWS2SQS_COMPONENT_DELAY_SECONDS_CONF, 
ConfigDef.Type.STRING, CAMEL_SINK_AWS2SQS_COMPONENT_DELAY_SECONDS_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SINK_AWS2SQS_COMPONENT_DELAY_SECONDS_DOC);
         conf.define(CAMEL_SINK_AWS2SQS_COMPONENT_LAZY_START_PRODUCER_CONF, 
ConfigDef.Type.BOOLEAN, 
CAMEL_SINK_AWS2SQS_COMPONENT_LAZY_START_PRODUCER_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_AWS2SQS_COMPONENT_LAZY_START_PRODUCER_DOC);
         
conf.define(CAMEL_SINK_AWS2SQS_COMPONENT_MESSAGE_DEDUPLICATION_ID_STRATEGY_CONF,
 ConfigDef.Type.STRING, 
CAMEL_SINK_AWS2SQS_COMPONENT_MESSAGE_DEDUPLICATION_ID_STRATEGY_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_AWS2SQS_COMPONENT_MESSAGE_DEDUPLICATION_ID_STRATEGY_DOC);
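
For context, here is a minimal standalone sketch of how the two batchSeparator defines added above behave at runtime, using only the plain Kafka config API (org.apache.kafka.common.config). It mirrors the generated defines rather than calling the generated CamelAws2sqsSinkConnectorConfig class, and the ";" override value is purely illustrative.

----
import java.util.Map;

import org.apache.kafka.common.config.AbstractConfig;
import org.apache.kafka.common.config.ConfigDef;

public class BatchSeparatorConfigSketch {
    public static void main(String[] args) {
        // Mirror the two new defines from the diff above: STRING type, "," default, MEDIUM importance.
        ConfigDef def = new ConfigDef()
                .define("camel.sink.endpoint.batchSeparator", ConfigDef.Type.STRING, ",",
                        ConfigDef.Importance.MEDIUM,
                        "Set the separator when passing a String to send batch message operation")
                .define("camel.component.aws2-sqs.batchSeparator", ConfigDef.Type.STRING, ",",
                        ConfigDef.Importance.MEDIUM,
                        "Set the separator when passing a String to send batch message operation");

        // With no value supplied, both options resolve to the "," default.
        AbstractConfig defaults = new AbstractConfig(def, Map.of());
        System.out.println(defaults.getString("camel.sink.endpoint.batchSeparator"));      // ,
        System.out.println(defaults.getString("camel.component.aws2-sqs.batchSeparator")); // ,

        // Overriding the endpoint-level option, e.g. to split batch payloads on ";".
        AbstractConfig overridden = new AbstractConfig(def,
                Map.of("camel.sink.endpoint.batchSeparator", ";"));
        System.out.println(overridden.getString("camel.sink.endpoint.batchSeparator"));    // ;
    }
}
----
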
diff --git a/connectors/camel-kafka-kafka-connector/src/generated/resources/camel-kafka-sink.json b/connectors/camel-kafka-kafka-connector/src/generated/resources/camel-kafka-sink.json
index b997786..dfae88e 100644
--- a/connectors/camel-kafka-kafka-connector/src/generated/resources/camel-kafka-sink.json
+++ b/connectors/camel-kafka-kafka-connector/src/generated/resources/camel-kafka-sink.json
@@ -743,7 +743,7 @@
                },
                "camel.component.kafka.kafkaClientFactory": {
                        "name": "camel.component.kafka.kafkaClientFactory",
-                       "description": "Factory to use for creating 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances with logic that 
extends the vanilla Kafka clients.",
+                       "description": "Factory to use for creating 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create instances with logic that extends the 
vanilla Kafka clients.",
                        "priority": "MEDIUM",
                        "required": "false"
                },
diff --git a/connectors/camel-kafka-kafka-connector/src/generated/resources/camel-kafka-source.json b/connectors/camel-kafka-kafka-connector/src/generated/resources/camel-kafka-source.json
index ebfe2c8..7124ba4 100644
--- a/connectors/camel-kafka-kafka-connector/src/generated/resources/camel-kafka-source.json
+++ b/connectors/camel-kafka-kafka-connector/src/generated/resources/camel-kafka-source.json
@@ -698,7 +698,7 @@
                },
                "camel.component.kafka.kafkaClientFactory": {
                        "name": "camel.component.kafka.kafkaClientFactory",
-                       "description": "Factory to use for creating 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances with logic that 
extends the vanilla Kafka clients.",
+                       "description": "Factory to use for creating 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create instances with logic that extends the 
vanilla Kafka clients.",
                        "priority": "MEDIUM",
                        "required": "false"
                },
diff --git a/connectors/camel-kafka-kafka-connector/src/main/docs/camel-kafka-kafka-sink-connector.adoc b/connectors/camel-kafka-kafka-connector/src/main/docs/camel-kafka-kafka-sink-connector.adoc
index ea98fa7..3e2b65e 100644
--- a/connectors/camel-kafka-kafka-connector/src/main/docs/camel-kafka-kafka-sink-connector.adoc
+++ b/connectors/camel-kafka-kafka-connector/src/main/docs/camel-kafka-kafka-sink-connector.adoc
@@ -137,7 +137,7 @@ The camel-kafka sink connector supports 135 options, which are listed below.
 | *camel.component.kafka.workerPoolCoreSize* | Number of core threads for the 
worker pool for continue routing Exchange after kafka server has acknowledge 
the message that was sent to it from KafkaProducer using asynchronous 
non-blocking processing. | "10" | false | MEDIUM
 | *camel.component.kafka.workerPoolMaxSize* | Maximum number of threads for 
the worker pool for continue routing Exchange after kafka server has 
acknowledge the message that was sent to it from KafkaProducer using 
asynchronous non-blocking processing. | "20" | false | MEDIUM
 | *camel.component.kafka.autowiredEnabled* | Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as 
autowired) by looking up in the registry to find if there is a single instance 
of matching type, which then gets configured on the component. This can be used 
for automatic configuring JDBC data sources, JMS connection factories, AWS 
Clients, etc. | true | false | MEDIUM
-| *camel.component.kafka.kafkaClientFactory* | Factory to use for creating 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances with logic that 
extends the vanilla Kafka clients. | null | false | MEDIUM
+| *camel.component.kafka.kafkaClientFactory* | Factory to use for creating 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create instances with logic that extends the 
vanilla Kafka clients. | null | false | MEDIUM
 | *camel.component.kafka.synchronous* | Sets whether synchronous processing 
should be strictly used | false | false | MEDIUM
 | *camel.component.kafka.schemaRegistryURL* | URL of the Confluent Platform 
schema registry servers to use. The format is host1:port1,host2:port2. This is 
known as schema.registry.url in the Confluent Platform documentation. This 
option is only available in the Confluent Platform (not standard Apache Kafka) 
| null | false | MEDIUM
 | *camel.component.kafka.interceptorClasses* | Sets interceptors for producer 
or consumers. Producer interceptors have to be classes implementing 
org.apache.kafka.clients.producer.ProducerInterceptor Consumer interceptors 
have to be classes implementing 
org.apache.kafka.clients.consumer.ConsumerInterceptor Note that if you use 
Producer interceptor on a consumer it will throw a class cast exception in 
runtime | null | false | MEDIUM
diff --git a/connectors/camel-kafka-kafka-connector/src/main/docs/camel-kafka-kafka-source-connector.adoc b/connectors/camel-kafka-kafka-connector/src/main/docs/camel-kafka-kafka-source-connector.adoc
index b41e14b..03f8c86 100644
--- a/connectors/camel-kafka-kafka-connector/src/main/docs/camel-kafka-kafka-source-connector.adoc
+++ b/connectors/camel-kafka-kafka-connector/src/main/docs/camel-kafka-kafka-source-connector.adoc
@@ -129,7 +129,7 @@ The camel-kafka source connector supports 122 options, which are listed below.
 | *camel.component.kafka.valueDeserializer* | Deserializer class for value 
that implements the Deserializer interface. | 
"org.apache.kafka.common.serialization.StringDeserializer" | false | MEDIUM
 | *camel.component.kafka.kafkaManualCommitFactory* | Factory to use for 
creating KafkaManualCommit instances. This allows to plugin a custom factory to 
create custom KafkaManualCommit instances in case special logic is needed when 
doing manual commits that deviates from the default implementation that comes 
out of the box. | null | false | MEDIUM
 | *camel.component.kafka.autowiredEnabled* | Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as 
autowired) by looking up in the registry to find if there is a single instance 
of matching type, which then gets configured on the component. This can be used 
for automatic configuring JDBC data sources, JMS connection factories, AWS 
Clients, etc. | true | false | MEDIUM
-| *camel.component.kafka.kafkaClientFactory* | Factory to use for creating 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances with logic that 
extends the vanilla Kafka clients. | null | false | MEDIUM
+| *camel.component.kafka.kafkaClientFactory* | Factory to use for creating 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create instances with logic that extends the 
vanilla Kafka clients. | null | false | MEDIUM
 | *camel.component.kafka.synchronous* | Sets whether synchronous processing 
should be strictly used | false | false | MEDIUM
 | *camel.component.kafka.schemaRegistryURL* | URL of the Confluent Platform 
schema registry servers to use. The format is host1:port1,host2:port2. This is 
known as schema.registry.url in the Confluent Platform documentation. This 
option is only available in the Confluent Platform (not standard Apache Kafka) 
| null | false | MEDIUM
 | *camel.component.kafka.interceptorClasses* | Sets interceptors for producer 
or consumers. Producer interceptors have to be classes implementing 
org.apache.kafka.clients.producer.ProducerInterceptor Consumer interceptors 
have to be classes implementing 
org.apache.kafka.clients.consumer.ConsumerInterceptor Note that if you use 
Producer interceptor on a consumer it will throw a class cast exception in 
runtime | null | false | MEDIUM
diff --git a/connectors/camel-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/kafka/CamelKafkaSinkConnectorConfig.java b/connectors/camel-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/kafka/CamelKafkaSinkConnectorConfig.java
index 327cd04..6ae1f14 100644
--- a/connectors/camel-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/kafka/CamelKafkaSinkConnectorConfig.java
+++ b/connectors/camel-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/kafka/CamelKafkaSinkConnectorConfig.java
@@ -343,7 +343,7 @@ public class CamelKafkaSinkConnectorConfig extends CamelSinkConnectorConfig {
     public static final String 
CAMEL_SINK_KAFKA_COMPONENT_AUTOWIRED_ENABLED_DOC = "Whether autowiring is 
enabled. This is used for automatic autowiring options (the option must be 
marked as autowired) by looking up in the registry to find if there is a single 
instance of matching type, which then gets configured on the component. This 
can be used for automatic configuring JDBC data sources, JMS connection 
factories, AWS Clients, etc.";
     public static final Boolean 
CAMEL_SINK_KAFKA_COMPONENT_AUTOWIRED_ENABLED_DEFAULT = true;
     public static final String 
CAMEL_SINK_KAFKA_COMPONENT_KAFKA_CLIENT_FACTORY_CONF = 
"camel.component.kafka.kafkaClientFactory";
-    public static final String 
CAMEL_SINK_KAFKA_COMPONENT_KAFKA_CLIENT_FACTORY_DOC = "Factory to use for 
creating org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances with logic that 
extends the vanilla Kafka clients.";
+    public static final String 
CAMEL_SINK_KAFKA_COMPONENT_KAFKA_CLIENT_FACTORY_DOC = "Factory to use for 
creating org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create instances with logic that extends the 
vanilla Kafka clients.";
     public static final String 
CAMEL_SINK_KAFKA_COMPONENT_KAFKA_CLIENT_FACTORY_DEFAULT = null;
     public static final String CAMEL_SINK_KAFKA_COMPONENT_SYNCHRONOUS_CONF = 
"camel.component.kafka.synchronous";
     public static final String CAMEL_SINK_KAFKA_COMPONENT_SYNCHRONOUS_DOC = 
"Sets whether synchronous processing should be strictly used";
diff --git a/connectors/camel-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/kafka/CamelKafkaSourceConnectorConfig.java b/connectors/camel-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/kafka/CamelKafkaSourceConnectorConfig.java
index 50627a8..8f81152 100644
--- a/connectors/camel-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/kafka/CamelKafkaSourceConnectorConfig.java
+++ b/connectors/camel-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/kafka/CamelKafkaSourceConnectorConfig.java
@@ -321,7 +321,7 @@ public class CamelKafkaSourceConnectorConfig
     public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_AUTOWIRED_ENABLED_DOC = "Whether autowiring is 
enabled. This is used for automatic autowiring options (the option must be 
marked as autowired) by looking up in the registry to find if there is a single 
instance of matching type, which then gets configured on the component. This 
can be used for automatic configuring JDBC data sources, JMS connection 
factories, AWS Clients, etc.";
     public static final Boolean 
CAMEL_SOURCE_KAFKA_COMPONENT_AUTOWIRED_ENABLED_DEFAULT = true;
     public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_KAFKA_CLIENT_FACTORY_CONF = 
"camel.component.kafka.kafkaClientFactory";
-    public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_KAFKA_CLIENT_FACTORY_DOC = "Factory to use for 
creating org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances with logic that 
extends the vanilla Kafka clients.";
+    public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_KAFKA_CLIENT_FACTORY_DOC = "Factory to use for 
creating org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create instances with logic that extends the 
vanilla Kafka clients.";
     public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_KAFKA_CLIENT_FACTORY_DEFAULT = null;
     public static final String CAMEL_SOURCE_KAFKA_COMPONENT_SYNCHRONOUS_CONF = 
"camel.component.kafka.synchronous";
     public static final String CAMEL_SOURCE_KAFKA_COMPONENT_SYNCHRONOUS_DOC = 
"Sets whether synchronous processing should be strictly used";
diff --git a/connectors/camel-milo-client-kafka-connector/src/generated/resources/camel-milo-client-sink.json b/connectors/camel-milo-client-kafka-connector/src/generated/resources/camel-milo-client-sink.json
index a626861..3ded443 100644
--- a/connectors/camel-milo-client-kafka-connector/src/generated/resources/camel-milo-client-sink.json
+++ b/connectors/camel-milo-client-kafka-connector/src/generated/resources/camel-milo-client-sink.json
@@ -313,6 +313,12 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
+               "camel.component.milo-client.miloClientConnectionManager": {
+                       "name": 
"camel.component.milo-client.miloClientConnectionManager",
+                       "description": "Instance for managing client 
connections",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
                "camel.component.milo-client.overrideHost": {
                        "name": "camel.component.milo-client.overrideHost",
                        "description": "Override the server reported endpoint 
host with the host from the endpoint URI.",
diff --git a/connectors/camel-milo-client-kafka-connector/src/generated/resources/camel-milo-client-source.json b/connectors/camel-milo-client-kafka-connector/src/generated/resources/camel-milo-client-source.json
index 83746c4..1b45a55 100644
--- a/connectors/camel-milo-client-kafka-connector/src/generated/resources/camel-milo-client-source.json
+++ b/connectors/camel-milo-client-kafka-connector/src/generated/resources/camel-milo-client-source.json
@@ -330,6 +330,12 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
+               "camel.component.milo-client.miloClientConnectionManager": {
+                       "name": 
"camel.component.milo-client.miloClientConnectionManager",
+                       "description": "Instance for managing client 
connections",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
                "camel.component.milo-client.overrideHost": {
                        "name": "camel.component.milo-client.overrideHost",
                        "description": "Override the server reported endpoint 
host with the host from the endpoint URI.",
diff --git a/connectors/camel-milo-client-kafka-connector/src/main/docs/camel-milo-client-kafka-sink-connector.adoc b/connectors/camel-milo-client-kafka-connector/src/main/docs/camel-milo-client-kafka-sink-connector.adoc
index 53546d0..82650ab 100644
--- a/connectors/camel-milo-client-kafka-connector/src/main/docs/camel-milo-client-kafka-sink-connector.adoc
+++ b/connectors/camel-milo-client-kafka-connector/src/main/docs/camel-milo-client-kafka-sink-connector.adoc
@@ -24,7 +24,7 @@ connector.class=org.apache.camel.kafkaconnector.miloclient.CamelMiloclientSinkCo
 ----
 
 
-The camel-milo-client sink connector supports 53 options, which are listed 
below.
+The camel-milo-client sink connector supports 54 options, which are listed 
below.
 
 
 
@@ -78,6 +78,7 @@ The camel-milo-client sink connector supports 53 options, which are listed below
 | *camel.component.milo-client.keyStoreUrl* | The URL where the key should be 
loaded from | null | false | MEDIUM
 | *camel.component.milo-client.maxPendingPublish Requests* | The maximum 
number of pending publish requests | null | false | MEDIUM
 | *camel.component.milo-client.maxResponseMessageSize* | The maximum number of 
bytes a response message may have | null | false | MEDIUM
+| *camel.component.milo-client.miloClientConnection Manager* | Instance for 
managing client connections | null | false | MEDIUM
 | *camel.component.milo-client.overrideHost* | Override the server reported 
endpoint host with the host from the endpoint URI. | false | false | MEDIUM
 | *camel.component.milo-client.productUri* | The product URI | 
"http://camel.apache.org/EclipseMilo"; | false | MEDIUM
 | *camel.component.milo-client.requestedPublishing Interval* | The requested 
publishing interval in milliseconds | "1_000.0" | false | MEDIUM
diff --git a/connectors/camel-milo-client-kafka-connector/src/main/docs/camel-milo-client-kafka-source-connector.adoc b/connectors/camel-milo-client-kafka-connector/src/main/docs/camel-milo-client-kafka-source-connector.adoc
index 1f1ece7..755b388 100644
--- a/connectors/camel-milo-client-kafka-connector/src/main/docs/camel-milo-client-kafka-source-connector.adoc
+++ b/connectors/camel-milo-client-kafka-connector/src/main/docs/camel-milo-client-kafka-source-connector.adoc
@@ -24,7 +24,7 @@ connector.class=org.apache.camel.kafkaconnector.miloclient.CamelMiloclientSource
 ----
 
 
-The camel-milo-client source connector supports 55 options, which are listed 
below.
+The camel-milo-client source connector supports 56 options, which are listed 
below.
 
 
 
@@ -80,6 +80,7 @@ The camel-milo-client source connector supports 55 options, which are listed bel
 | *camel.component.milo-client.keyStoreUrl* | The URL where the key should be 
loaded from | null | false | MEDIUM
 | *camel.component.milo-client.maxPendingPublish Requests* | The maximum 
number of pending publish requests | null | false | MEDIUM
 | *camel.component.milo-client.maxResponseMessageSize* | The maximum number of 
bytes a response message may have | null | false | MEDIUM
+| *camel.component.milo-client.miloClientConnection Manager* | Instance for 
managing client connections | null | false | MEDIUM
 | *camel.component.milo-client.overrideHost* | Override the server reported 
endpoint host with the host from the endpoint URI. | false | false | MEDIUM
 | *camel.component.milo-client.productUri* | The product URI | 
"http://camel.apache.org/EclipseMilo"; | false | MEDIUM
 | *camel.component.milo-client.requestedPublishing Interval* | The requested 
publishing interval in milliseconds | "1_000.0" | false | MEDIUM
diff --git a/connectors/camel-milo-client-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/miloclient/CamelMiloclientSinkConnectorConfig.java b/connectors/camel-milo-client-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/miloclient/CamelMiloclientSinkConnectorConfig.java
index 9ec86e8..868e10e 100644
--- a/connectors/camel-milo-client-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/miloclient/CamelMiloclientSinkConnectorConfig.java
+++ b/connectors/camel-milo-client-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/miloclient/CamelMiloclientSinkConnectorConfig.java
@@ -167,6 +167,9 @@ public class CamelMiloclientSinkConnectorConfig
     public static final String 
CAMEL_SINK_MILOCLIENT_COMPONENT_MAX_RESPONSE_MESSAGE_SIZE_CONF = 
"camel.component.milo-client.maxResponseMessageSize";
     public static final String 
CAMEL_SINK_MILOCLIENT_COMPONENT_MAX_RESPONSE_MESSAGE_SIZE_DOC = "The maximum 
number of bytes a response message may have";
     public static final String 
CAMEL_SINK_MILOCLIENT_COMPONENT_MAX_RESPONSE_MESSAGE_SIZE_DEFAULT = null;
+    public static final String 
CAMEL_SINK_MILOCLIENT_COMPONENT_MILO_CLIENT_CONNECTION_MANAGER_CONF = 
"camel.component.milo-client.miloClientConnectionManager";
+    public static final String 
CAMEL_SINK_MILOCLIENT_COMPONENT_MILO_CLIENT_CONNECTION_MANAGER_DOC = "Instance 
for managing client connections";
+    public static final String 
CAMEL_SINK_MILOCLIENT_COMPONENT_MILO_CLIENT_CONNECTION_MANAGER_DEFAULT = null;
     public static final String 
CAMEL_SINK_MILOCLIENT_COMPONENT_OVERRIDE_HOST_CONF = 
"camel.component.milo-client.overrideHost";
     public static final String 
CAMEL_SINK_MILOCLIENT_COMPONENT_OVERRIDE_HOST_DOC = "Override the server 
reported endpoint host with the host from the endpoint URI.";
     public static final Boolean 
CAMEL_SINK_MILOCLIENT_COMPONENT_OVERRIDE_HOST_DEFAULT = false;
@@ -245,6 +248,7 @@ public class CamelMiloclientSinkConnectorConfig
         conf.define(CAMEL_SINK_MILOCLIENT_COMPONENT_KEY_STORE_URL_CONF, 
ConfigDef.Type.STRING, CAMEL_SINK_MILOCLIENT_COMPONENT_KEY_STORE_URL_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SINK_MILOCLIENT_COMPONENT_KEY_STORE_URL_DOC);
         
conf.define(CAMEL_SINK_MILOCLIENT_COMPONENT_MAX_PENDING_PUBLISH_REQUESTS_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SINK_MILOCLIENT_COMPONENT_MAX_PENDING_PUBLISH_REQUESTS_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_MILOCLIENT_COMPONENT_MAX_PENDING_PUBLISH_REQUESTS_DOC);
         
conf.define(CAMEL_SINK_MILOCLIENT_COMPONENT_MAX_RESPONSE_MESSAGE_SIZE_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SINK_MILOCLIENT_COMPONENT_MAX_RESPONSE_MESSAGE_SIZE_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_MILOCLIENT_COMPONENT_MAX_RESPONSE_MESSAGE_SIZE_DOC);
+        
conf.define(CAMEL_SINK_MILOCLIENT_COMPONENT_MILO_CLIENT_CONNECTION_MANAGER_CONF,
 ConfigDef.Type.STRING, 
CAMEL_SINK_MILOCLIENT_COMPONENT_MILO_CLIENT_CONNECTION_MANAGER_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_MILOCLIENT_COMPONENT_MILO_CLIENT_CONNECTION_MANAGER_DOC);
         conf.define(CAMEL_SINK_MILOCLIENT_COMPONENT_OVERRIDE_HOST_CONF, 
ConfigDef.Type.BOOLEAN, CAMEL_SINK_MILOCLIENT_COMPONENT_OVERRIDE_HOST_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SINK_MILOCLIENT_COMPONENT_OVERRIDE_HOST_DOC);
         conf.define(CAMEL_SINK_MILOCLIENT_COMPONENT_PRODUCT_URI_CONF, 
ConfigDef.Type.STRING, CAMEL_SINK_MILOCLIENT_COMPONENT_PRODUCT_URI_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SINK_MILOCLIENT_COMPONENT_PRODUCT_URI_DOC);
         
conf.define(CAMEL_SINK_MILOCLIENT_COMPONENT_REQUESTED_PUBLISHING_INTERVAL_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SINK_MILOCLIENT_COMPONENT_REQUESTED_PUBLISHING_INTERVAL_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_MILOCLIENT_COMPONENT_REQUESTED_PUBLISHING_INTERVAL_DOC);
diff --git a/connectors/camel-milo-client-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/miloclient/CamelMiloclientSourceConnectorConfig.java b/connectors/camel-milo-client-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/miloclient/CamelMiloclientSourceConnectorConfig.java
index b0785e5..f791706 100644
--- a/connectors/camel-milo-client-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/miloclient/CamelMiloclientSourceConnectorConfig.java
+++ b/connectors/camel-milo-client-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/miloclient/CamelMiloclientSourceConnectorConfig.java
@@ -173,6 +173,9 @@ public class CamelMiloclientSourceConnectorConfig
     public static final String 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_MAX_RESPONSE_MESSAGE_SIZE_CONF = 
"camel.component.milo-client.maxResponseMessageSize";
     public static final String 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_MAX_RESPONSE_MESSAGE_SIZE_DOC = "The maximum 
number of bytes a response message may have";
     public static final String 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_MAX_RESPONSE_MESSAGE_SIZE_DEFAULT = null;
+    public static final String 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_MILO_CLIENT_CONNECTION_MANAGER_CONF = 
"camel.component.milo-client.miloClientConnectionManager";
+    public static final String 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_MILO_CLIENT_CONNECTION_MANAGER_DOC = 
"Instance for managing client connections";
+    public static final String 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_MILO_CLIENT_CONNECTION_MANAGER_DEFAULT = null;
     public static final String 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_OVERRIDE_HOST_CONF = 
"camel.component.milo-client.overrideHost";
     public static final String 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_OVERRIDE_HOST_DOC = "Override the server 
reported endpoint host with the host from the endpoint URI.";
     public static final Boolean 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_OVERRIDE_HOST_DEFAULT = false;
@@ -253,6 +256,7 @@ public class CamelMiloclientSourceConnectorConfig
         conf.define(CAMEL_SOURCE_MILOCLIENT_COMPONENT_KEY_STORE_URL_CONF, 
ConfigDef.Type.STRING, CAMEL_SOURCE_MILOCLIENT_COMPONENT_KEY_STORE_URL_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_KEY_STORE_URL_DOC);
         
conf.define(CAMEL_SOURCE_MILOCLIENT_COMPONENT_MAX_PENDING_PUBLISH_REQUESTS_CONF,
 ConfigDef.Type.STRING, 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_MAX_PENDING_PUBLISH_REQUESTS_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_MAX_PENDING_PUBLISH_REQUESTS_DOC);
         
conf.define(CAMEL_SOURCE_MILOCLIENT_COMPONENT_MAX_RESPONSE_MESSAGE_SIZE_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_MAX_RESPONSE_MESSAGE_SIZE_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_MAX_RESPONSE_MESSAGE_SIZE_DOC);
+        
conf.define(CAMEL_SOURCE_MILOCLIENT_COMPONENT_MILO_CLIENT_CONNECTION_MANAGER_CONF,
 ConfigDef.Type.STRING, 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_MILO_CLIENT_CONNECTION_MANAGER_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_MILO_CLIENT_CONNECTION_MANAGER_DOC);
         conf.define(CAMEL_SOURCE_MILOCLIENT_COMPONENT_OVERRIDE_HOST_CONF, 
ConfigDef.Type.BOOLEAN, 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_OVERRIDE_HOST_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_OVERRIDE_HOST_DOC);
         conf.define(CAMEL_SOURCE_MILOCLIENT_COMPONENT_PRODUCT_URI_CONF, 
ConfigDef.Type.STRING, CAMEL_SOURCE_MILOCLIENT_COMPONENT_PRODUCT_URI_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SOURCE_MILOCLIENT_COMPONENT_PRODUCT_URI_DOC);
         
conf.define(CAMEL_SOURCE_MILOCLIENT_COMPONENT_REQUESTED_PUBLISHING_INTERVAL_CONF,
 ConfigDef.Type.STRING, 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_REQUESTED_PUBLISHING_INTERVAL_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_MILOCLIENT_COMPONENT_REQUESTED_PUBLISHING_INTERVAL_DOC);
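
A short sketch of the null-default behaviour of the new miloClientConnectionManager option, again using only the plain Kafka config API and mirroring the define added above; it is not the generated CamelMiloclientSourceConnectorConfig class itself.

----
import java.util.Map;

import org.apache.kafka.common.config.AbstractConfig;
import org.apache.kafka.common.config.ConfigDef;

public class MiloConnectionManagerConfigSketch {
    public static void main(String[] args) {
        // Mirrors the new define from the diffs above: STRING type with a null default.
        ConfigDef def = new ConfigDef().define(
                "camel.component.milo-client.miloClientConnectionManager",
                ConfigDef.Type.STRING, null, ConfigDef.Importance.MEDIUM,
                "Instance for managing client connections");

        // Left unset, the option resolves to null, i.e. no connection manager is
        // configured through the connector.
        AbstractConfig cfg = new AbstractConfig(def, Map.of());
        System.out.println(
                cfg.getString("camel.component.milo-client.miloClientConnectionManager")); // null
    }
}
----
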
diff --git a/connectors/camel-vertx-kafka-kafka-connector/src/generated/resources/camel-vertx-kafka-sink.json b/connectors/camel-vertx-kafka-kafka-connector/src/generated/resources/camel-vertx-kafka-sink.json
index 459b4d5..b0021be 100644
--- a/connectors/camel-vertx-kafka-kafka-connector/src/generated/resources/camel-vertx-kafka-sink.json
+++ b/connectors/camel-vertx-kafka-kafka-connector/src/generated/resources/camel-vertx-kafka-sink.json
@@ -296,12 +296,6 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
-               "camel.sink.endpoint.vertxKafkaClientFactory": {
-                       "name": "camel.sink.endpoint.vertxKafkaClientFactory",
-                       "description": "Factory to use for creating 
io.vertx.kafka.client.consumer.KafkaConsumer and 
io.vertx.kafka.client.consumer.KafkaProducer instances. This allows to 
configure a custom factory to create custom KafkaConsumer and KafkaProducer 
instances with logic that extends the vanilla VertX Kafka clients.",
-                       "priority": "MEDIUM",
-                       "required": "false"
-               },
                "camel.sink.endpoint.saslClientCallbackHandlerClass": {
                        "name": 
"camel.sink.endpoint.saslClientCallbackHandlerClass",
                        "description": "The fully qualified name of a SASL 
client callback handler class that implements the AuthenticateCallbackHandler 
interface.",
diff --git a/connectors/camel-vertx-kafka-kafka-connector/src/generated/resources/camel-vertx-kafka-source.json b/connectors/camel-vertx-kafka-kafka-connector/src/generated/resources/camel-vertx-kafka-source.json
index f049e88..6294c79 100644
--- a/connectors/camel-vertx-kafka-kafka-connector/src/generated/resources/camel-vertx-kafka-source.json
+++ b/connectors/camel-vertx-kafka-kafka-connector/src/generated/resources/camel-vertx-kafka-source.json
@@ -365,12 +365,6 @@
                                "InOptionalOut"
                        ]
                },
-               "camel.source.endpoint.vertxKafkaClientFactory": {
-                       "name": "camel.source.endpoint.vertxKafkaClientFactory",
-                       "description": "Factory to use for creating 
io.vertx.kafka.client.consumer.KafkaConsumer and 
io.vertx.kafka.client.consumer.KafkaProducer instances. This allows to 
configure a custom factory to create custom KafkaConsumer and KafkaProducer 
instances with logic that extends the vanilla VertX Kafka clients.",
-                       "priority": "MEDIUM",
-                       "required": "false"
-               },
                "camel.source.endpoint.saslClientCallbackHandlerClass": {
                        "name": 
"camel.source.endpoint.saslClientCallbackHandlerClass",
                        "description": "The fully qualified name of a SASL 
client callback handler class that implements the AuthenticateCallbackHandler 
interface.",
diff --git 
a/connectors/camel-vertx-kafka-kafka-connector/src/main/docs/camel-vertx-kafka-kafka-sink-connector.adoc
 
b/connectors/camel-vertx-kafka-kafka-connector/src/main/docs/camel-vertx-kafka-kafka-sink-connector.adoc
index 3dd4808..d6e756b 100644
--- 
a/connectors/camel-vertx-kafka-kafka-connector/src/main/docs/camel-vertx-kafka-kafka-sink-connector.adoc
+++ 
b/connectors/camel-vertx-kafka-kafka-connector/src/main/docs/camel-vertx-kafka-kafka-sink-connector.adoc
@@ -24,7 +24,7 @@ 
connector.class=org.apache.camel.kafkaconnector.vertxkafka.CamelVertxkafkaSinkCo
 ----
 
 
-The camel-vertx-kafka sink connector supports 155 options, which are listed 
below.
+The camel-vertx-kafka sink connector supports 154 options, which are listed 
below.
 
 
 
@@ -71,7 +71,6 @@ The camel-vertx-kafka sink connector supports 155 options, 
which are listed belo
 | *camel.sink.endpoint.transactionalId* | The TransactionalId to use for 
transactional delivery. This enables reliability semantics which span multiple 
producer sessions since it allows the client to guarantee that transactions 
using the same TransactionalId have been completed prior to starting any new 
transactions. If no TransactionalId is provided, then the producer is limited 
to idempotent delivery. If a TransactionalId is configured, enable.idempotence 
is implied. By default the Tra [...]
 | *camel.sink.endpoint.transactionTimeoutMs* | The maximum amount of time in 
ms that the transaction coordinator will wait for a transaction status update 
from the producer before proactively aborting the ongoing transaction.If this 
value is larger than the transaction.max.timeout.ms setting in the broker, the 
request will fail with a InvalidTxnTimeoutException error. | 60000 | false | 
MEDIUM
 | *camel.sink.endpoint.valueSerializer* | Serializer class for value that 
implements the org.apache.kafka.common.serialization.Serializer interface. | 
"org.apache.kafka.common.serialization.StringSerializer" | false | MEDIUM
-| *camel.sink.endpoint.vertxKafkaClientFactory* | Factory to use for creating 
io.vertx.kafka.client.consumer.KafkaConsumer and 
io.vertx.kafka.client.consumer.KafkaProducer instances. This allows to 
configure a custom factory to create custom KafkaConsumer and KafkaProducer 
instances with logic that extends the vanilla VertX Kafka clients. | null | 
false | MEDIUM
 | *camel.sink.endpoint.saslClientCallbackHandlerClass* | The fully qualified 
name of a SASL client callback handler class that implements the 
AuthenticateCallbackHandler interface. | null | false | MEDIUM
 | *camel.sink.endpoint.saslJaasConfig* | JAAS login context parameters for 
SASL connections in the format used by JAAS configuration files. JAAS 
configuration file format is described here. The format for the value is: 
'loginModuleClass controlFlag (optionName=optionValue);'. For brokers, the 
config must be prefixed with listener prefix and SASL mechanism name in 
lower-case. For example, 
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule
 required; | null | [...]
 | *camel.sink.endpoint.saslKerberosKinitCmd* | Kerberos kinit command path. | 
"/usr/bin/kinit" | false | MEDIUM
diff --git 
a/connectors/camel-vertx-kafka-kafka-connector/src/main/docs/camel-vertx-kafka-kafka-source-connector.adoc
 
b/connectors/camel-vertx-kafka-kafka-connector/src/main/docs/camel-vertx-kafka-kafka-source-connector.adoc
index abd4742..b12a000 100644
--- 
a/connectors/camel-vertx-kafka-kafka-connector/src/main/docs/camel-vertx-kafka-kafka-source-connector.adoc
+++ 
b/connectors/camel-vertx-kafka-kafka-connector/src/main/docs/camel-vertx-kafka-kafka-source-connector.adoc
@@ -24,7 +24,7 @@ 
connector.class=org.apache.camel.kafkaconnector.vertxkafka.CamelVertxkafkaSource
 ----
 
 
-The camel-vertx-kafka source connector supports 171 options, which are listed 
below.
+The camel-vertx-kafka source connector supports 170 options, which are listed 
below.
 
 
 
@@ -80,7 +80,6 @@ The camel-vertx-kafka source connector supports 171 options, 
which are listed be
 | *camel.source.endpoint.valueDeserializer* | Deserializer class for value 
that implements the org.apache.kafka.common.serialization.Deserializer 
interface. | "org.apache.kafka.common.serialization.StringDeserializer" | false 
| MEDIUM
 | *camel.source.endpoint.exceptionHandler* | To let the consumer use a custom 
ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this 
option is not in use. By default the consumer will deal with exceptions, that 
will be logged at WARN or ERROR level and ignored. | null | false | MEDIUM
 | *camel.source.endpoint.exchangePattern* | Sets the exchange pattern when the 
consumer creates an exchange. One of: [InOnly] [InOut] [InOptionalOut] | null | 
false | MEDIUM
-| *camel.source.endpoint.vertxKafkaClientFactory* | Factory to use for 
creating io.vertx.kafka.client.consumer.KafkaConsumer and 
io.vertx.kafka.client.consumer.KafkaProducer instances. This allows to 
configure a custom factory to create custom KafkaConsumer and KafkaProducer 
instances with logic that extends the vanilla VertX Kafka clients. | null | 
false | MEDIUM
 | *camel.source.endpoint.saslClientCallbackHandler Class* | The fully 
qualified name of a SASL client callback handler class that implements the 
AuthenticateCallbackHandler interface. | null | false | MEDIUM
 | *camel.source.endpoint.saslJaasConfig* | JAAS login context parameters for 
SASL connections in the format used by JAAS configuration files. JAAS 
configuration file format is described here. The format for the value is: 
'loginModuleClass controlFlag (optionName=optionValue);'. For brokers, the 
config must be prefixed with listener prefix and SASL mechanism name in 
lower-case. For example, 
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule
 required; | null [...]
 | *camel.source.endpoint.saslKerberosKinitCmd* | Kerberos kinit command path. 
| "/usr/bin/kinit" | false | MEDIUM
diff --git 
a/connectors/camel-vertx-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/vertxkafka/CamelVertxkafkaSinkConnectorConfig.java
 
b/connectors/camel-vertx-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/vertxkafka/CamelVertxkafkaSinkConnectorConfig.java
index 5dbe5d5..65bdfbc 100644
--- 
a/connectors/camel-vertx-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/vertxkafka/CamelVertxkafkaSinkConnectorConfig.java
+++ 
b/connectors/camel-vertx-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/vertxkafka/CamelVertxkafkaSinkConnectorConfig.java
@@ -146,9 +146,6 @@ public class CamelVertxkafkaSinkConnectorConfig
     public static final String 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_VALUE_SERIALIZER_CONF = 
"camel.sink.endpoint.valueSerializer";
     public static final String 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_VALUE_SERIALIZER_DOC = "Serializer class for 
value that implements the org.apache.kafka.common.serialization.Serializer 
interface.";
     public static final String 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_VALUE_SERIALIZER_DEFAULT = 
"org.apache.kafka.common.serialization.StringSerializer";
-    public static final String 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_VERTX_KAFKA_CLIENT_FACTORY_CONF = 
"camel.sink.endpoint.vertxKafkaClientFactory";
-    public static final String 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_VERTX_KAFKA_CLIENT_FACTORY_DOC = "Factory to use 
for creating io.vertx.kafka.client.consumer.KafkaConsumer and 
io.vertx.kafka.client.consumer.KafkaProducer instances. This allows to 
configure a custom factory to create custom KafkaConsumer and KafkaProducer 
instances with logic that extends the vanilla VertX Kafka clients.";
-    public static final String 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_VERTX_KAFKA_CLIENT_FACTORY_DEFAULT = null;
     public static final String 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_SASL_CLIENT_CALLBACK_HANDLER_CLASS_CONF = 
"camel.sink.endpoint.saslClientCallbackHandlerClass";
     public static final String 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_SASL_CLIENT_CALLBACK_HANDLER_CLASS_DOC = "The 
fully qualified name of a SASL client callback handler class that implements 
the AuthenticateCallbackHandler interface.";
     public static final String 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_SASL_CLIENT_CALLBACK_HANDLER_CLASS_DEFAULT = 
null;
@@ -544,7 +541,6 @@ public class CamelVertxkafkaSinkConnectorConfig
         conf.define(CAMEL_SINK_VERTXKAFKA_ENDPOINT_TRANSACTIONAL_ID_CONF, 
ConfigDef.Type.STRING, CAMEL_SINK_VERTXKAFKA_ENDPOINT_TRANSACTIONAL_ID_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_TRANSACTIONAL_ID_DOC);
         
conf.define(CAMEL_SINK_VERTXKAFKA_ENDPOINT_TRANSACTION_TIMEOUT_MS_CONF, 
ConfigDef.Type.INT, 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_TRANSACTION_TIMEOUT_MS_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_TRANSACTION_TIMEOUT_MS_DOC);
         conf.define(CAMEL_SINK_VERTXKAFKA_ENDPOINT_VALUE_SERIALIZER_CONF, 
ConfigDef.Type.STRING, CAMEL_SINK_VERTXKAFKA_ENDPOINT_VALUE_SERIALIZER_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_VALUE_SERIALIZER_DOC);
-        
conf.define(CAMEL_SINK_VERTXKAFKA_ENDPOINT_VERTX_KAFKA_CLIENT_FACTORY_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_VERTX_KAFKA_CLIENT_FACTORY_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_VERTX_KAFKA_CLIENT_FACTORY_DOC);
         
conf.define(CAMEL_SINK_VERTXKAFKA_ENDPOINT_SASL_CLIENT_CALLBACK_HANDLER_CLASS_CONF,
 ConfigDef.Type.STRING, 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_SASL_CLIENT_CALLBACK_HANDLER_CLASS_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_SASL_CLIENT_CALLBACK_HANDLER_CLASS_DOC);
         conf.define(CAMEL_SINK_VERTXKAFKA_ENDPOINT_SASL_JAAS_CONFIG_CONF, 
ConfigDef.Type.STRING, CAMEL_SINK_VERTXKAFKA_ENDPOINT_SASL_JAAS_CONFIG_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_SASL_JAAS_CONFIG_DOC);
         
conf.define(CAMEL_SINK_VERTXKAFKA_ENDPOINT_SASL_KERBEROS_KINIT_CMD_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_SASL_KERBEROS_KINIT_CMD_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_VERTXKAFKA_ENDPOINT_SASL_KERBEROS_KINIT_CMD_DOC);
diff --git 
a/connectors/camel-vertx-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/vertxkafka/CamelVertxkafkaSourceConnectorConfig.java
 
b/connectors/camel-vertx-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/vertxkafka/CamelVertxkafkaSourceConnectorConfig.java
index f6a55eb..c91d7c6 100644
--- 
a/connectors/camel-vertx-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/vertxkafka/CamelVertxkafkaSourceConnectorConfig.java
+++ 
b/connectors/camel-vertx-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/vertxkafka/CamelVertxkafkaSourceConnectorConfig.java
@@ -173,9 +173,6 @@ public class CamelVertxkafkaSourceConnectorConfig
     public static final String 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_EXCHANGE_PATTERN_CONF = 
"camel.source.endpoint.exchangePattern";
     public static final String 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_EXCHANGE_PATTERN_DOC = "Sets the exchange 
pattern when the consumer creates an exchange. One of: [InOnly] [InOut] 
[InOptionalOut]";
     public static final String 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_EXCHANGE_PATTERN_DEFAULT = null;
-    public static final String 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_VERTX_KAFKA_CLIENT_FACTORY_CONF = 
"camel.source.endpoint.vertxKafkaClientFactory";
-    public static final String 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_VERTX_KAFKA_CLIENT_FACTORY_DOC = "Factory to 
use for creating io.vertx.kafka.client.consumer.KafkaConsumer and 
io.vertx.kafka.client.consumer.KafkaProducer instances. This allows to 
configure a custom factory to create custom KafkaConsumer and KafkaProducer 
instances with logic that extends the vanilla VertX Kafka clients.";
-    public static final String 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_VERTX_KAFKA_CLIENT_FACTORY_DEFAULT = null;
     public static final String 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_SASL_CLIENT_CALLBACK_HANDLER_CLASS_CONF = 
"camel.source.endpoint.saslClientCallbackHandlerClass";
     public static final String 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_SASL_CLIENT_CALLBACK_HANDLER_CLASS_DOC = "The 
fully qualified name of a SASL client callback handler class that implements 
the AuthenticateCallbackHandler interface.";
     public static final String 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_SASL_CLIENT_CALLBACK_HANDLER_CLASS_DEFAULT = 
null;
@@ -601,7 +598,6 @@ public class CamelVertxkafkaSourceConnectorConfig
         conf.define(CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_VALUE_DESERIALIZER_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_VALUE_DESERIALIZER_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_VALUE_DESERIALIZER_DOC);
         conf.define(CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_EXCEPTION_HANDLER_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_EXCEPTION_HANDLER_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_EXCEPTION_HANDLER_DOC);
         conf.define(CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_EXCHANGE_PATTERN_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_EXCHANGE_PATTERN_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_EXCHANGE_PATTERN_DOC);
-        
conf.define(CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_VERTX_KAFKA_CLIENT_FACTORY_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_VERTX_KAFKA_CLIENT_FACTORY_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_VERTX_KAFKA_CLIENT_FACTORY_DOC);
         
conf.define(CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_SASL_CLIENT_CALLBACK_HANDLER_CLASS_CONF,
 ConfigDef.Type.STRING, 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_SASL_CLIENT_CALLBACK_HANDLER_CLASS_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_SASL_CLIENT_CALLBACK_HANDLER_CLASS_DOC);
         conf.define(CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_SASL_JAAS_CONFIG_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_SASL_JAAS_CONFIG_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_SASL_JAAS_CONFIG_DOC);
         
conf.define(CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_SASL_KERBEROS_KINIT_CMD_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_SASL_KERBEROS_KINIT_CMD_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_VERTXKAFKA_ENDPOINT_SASL_KERBEROS_KINIT_CMD_DOC);
diff --git 
a/docs/modules/ROOT/pages/connectors/camel-aws2-sqs-kafka-sink-connector.adoc 
b/docs/modules/ROOT/pages/connectors/camel-aws2-sqs-kafka-sink-connector.adoc
index 1ed8652..d218f26 100644
--- 
a/docs/modules/ROOT/pages/connectors/camel-aws2-sqs-kafka-sink-connector.adoc
+++ 
b/docs/modules/ROOT/pages/connectors/camel-aws2-sqs-kafka-sink-connector.adoc
@@ -24,7 +24,7 @@ 
connector.class=org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSinkConnecto
 ----
 
 
-The camel-aws2-sqs sink connector supports 54 options, which are listed below.
+The camel-aws2-sqs sink connector supports 56 options, which are listed below.
 
 
 
@@ -42,6 +42,7 @@ The camel-aws2-sqs sink connector supports 54 options, which 
are listed below.
 | *camel.sink.endpoint.region* | The region in which SQS client needs to work. 
When using this parameter, the configuration will expect the lowercase name of 
the region (for example ap-east-1) You'll need to use the name 
Region.EU_WEST_1.id() | null | false | MEDIUM
 | *camel.sink.endpoint.trustAllCertificates* | If we want to trust all 
certificates in case of overriding the endpoint | false | false | MEDIUM
 | *camel.sink.endpoint.useDefaultCredentialsProvider* | Set whether the SQS 
client should expect to load credentials on an AWS infra instance or to expect 
static credentials to be passed in. | false | false | MEDIUM
+| *camel.sink.endpoint.batchSeparator* | Set the separator when passing a 
String to send batch message operation | "," | false | MEDIUM
 | *camel.sink.endpoint.delaySeconds* | Delay sending messages for a number of 
seconds. | null | false | MEDIUM
 | *camel.sink.endpoint.lazyStartProducer* | Whether the producer should be 
started lazy (on the first message). By starting lazy you can use this to allow 
CamelContext and routes to startup in situations where a producer may otherwise 
fail during starting and cause the route to fail being started. By deferring 
this startup to be lazy then the startup failure can be handled during routing 
messages via Camel's routing error handlers. Beware that when the first message 
is processed then cre [...]
 | *camel.sink.endpoint.messageDeduplicationIdStrategy* | Only for FIFO queues. 
Strategy for setting the messageDeduplicationId on the message. Can be one of 
the following options: useExchangeId, useContentBasedDeduplication. For the 
useContentBasedDeduplication option, no messageDeduplicationId will be set on 
the message. One of: [useExchangeId] [useContentBasedDeduplication] | 
"useExchangeId" | false | MEDIUM
@@ -68,6 +69,7 @@ The camel-aws2-sqs sink connector supports 54 options, which 
are listed below.
 | *camel.component.aws2-sqs.region* | The region in which SQS client needs to 
work. When using this parameter, the configuration will expect the lowercase 
name of the region (for example ap-east-1) You'll need to use the name 
Region.EU_WEST_1.id() | null | false | MEDIUM
 | *camel.component.aws2-sqs.trustAllCertificates* | If we want to trust all 
certificates in case of overriding the endpoint | false | false | MEDIUM
 | *camel.component.aws2-sqs.useDefaultCredentials Provider* | Set whether the 
SQS client should expect to load credentials on an AWS infra instance or to 
expect static credentials to be passed in. | false | false | MEDIUM
+| *camel.component.aws2-sqs.batchSeparator* | Set the separator when passing a 
String to send batch message operation | "," | false | MEDIUM
 | *camel.component.aws2-sqs.delaySeconds* | Delay sending messages for a 
number of seconds. | null | false | MEDIUM
 | *camel.component.aws2-sqs.lazyStartProducer* | Whether the producer should 
be started lazy (on the first message). By starting lazy you can use this to 
allow CamelContext and routes to startup in situations where a producer may 
otherwise fail during starting and cause the route to fail being started. By 
deferring this startup to be lazy then the startup failure can be handled 
during routing messages via Camel's routing error handlers. Beware that when 
the first message is processed the [...]
 | *camel.component.aws2-sqs.messageDeduplicationId Strategy* | Only for FIFO 
queues. Strategy for setting the messageDeduplicationId on the message. Can be 
one of the following options: useExchangeId, useContentBasedDeduplication. For 
the useContentBasedDeduplication option, no messageDeduplicationId will be set 
on the message. One of: [useExchangeId] [useContentBasedDeduplication] | 
"useExchangeId" | false | MEDIUM
diff --git 
a/docs/modules/ROOT/pages/connectors/camel-kafka-kafka-sink-connector.adoc 
b/docs/modules/ROOT/pages/connectors/camel-kafka-kafka-sink-connector.adoc
index ea98fa7..3e2b65e 100644
--- a/docs/modules/ROOT/pages/connectors/camel-kafka-kafka-sink-connector.adoc
+++ b/docs/modules/ROOT/pages/connectors/camel-kafka-kafka-sink-connector.adoc
@@ -137,7 +137,7 @@ The camel-kafka sink connector supports 135 options, which 
are listed below.
 | *camel.component.kafka.workerPoolCoreSize* | Number of core threads for the 
worker pool for continue routing Exchange after kafka server has acknowledge 
the message that was sent to it from KafkaProducer using asynchronous 
non-blocking processing. | "10" | false | MEDIUM
 | *camel.component.kafka.workerPoolMaxSize* | Maximum number of threads for 
the worker pool for continue routing Exchange after kafka server has 
acknowledge the message that was sent to it from KafkaProducer using 
asynchronous non-blocking processing. | "20" | false | MEDIUM
 | *camel.component.kafka.autowiredEnabled* | Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as 
autowired) by looking up in the registry to find if there is a single instance 
of matching type, which then gets configured on the component. This can be used 
for automatic configuring JDBC data sources, JMS connection factories, AWS 
Clients, etc. | true | false | MEDIUM
-| *camel.component.kafka.kafkaClientFactory* | Factory to use for creating 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances with logic that 
extends the vanilla Kafka clients. | null | false | MEDIUM
+| *camel.component.kafka.kafkaClientFactory* | Factory to use for creating 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create instances with logic that extends the 
vanilla Kafka clients. | null | false | MEDIUM
 | *camel.component.kafka.synchronous* | Sets whether synchronous processing 
should be strictly used | false | false | MEDIUM
 | *camel.component.kafka.schemaRegistryURL* | URL of the Confluent Platform 
schema registry servers to use. The format is host1:port1,host2:port2. This is 
known as schema.registry.url in the Confluent Platform documentation. This 
option is only available in the Confluent Platform (not standard Apache Kafka) 
| null | false | MEDIUM
 | *camel.component.kafka.interceptorClasses* | Sets interceptors for producer 
or consumers. Producer interceptors have to be classes implementing 
org.apache.kafka.clients.producer.ProducerInterceptor Consumer interceptors 
have to be classes implementing 
org.apache.kafka.clients.consumer.ConsumerInterceptor Note that if you use 
Producer interceptor on a consumer it will throw a class cast exception in 
runtime | null | false | MEDIUM
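
The shortened kafkaClientFactory description above still refers to a factory for the
vanilla Kafka clients. The Camel interface itself is not part of this diff and is not
reproduced here; the sketch below only illustrates the plain Kafka producer construction
that such a factory centralizes (bootstrap servers and serializer values are placeholders).

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;

    // Illustration only: the vanilla KafkaProducer construction that a custom
    // kafkaClientFactory would wrap with additional logic (custom metrics,
    // interceptors, shared configuration, and so on).
    public final class VanillaProducerSketch {
        public static KafkaProducer<String, String> newProducer() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringSerializer");
            return new KafkaProducer<>(props);
        }
    }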
diff --git 
a/docs/modules/ROOT/pages/connectors/camel-kafka-kafka-source-connector.adoc 
b/docs/modules/ROOT/pages/connectors/camel-kafka-kafka-source-connector.adoc
index b41e14b..03f8c86 100644
--- a/docs/modules/ROOT/pages/connectors/camel-kafka-kafka-source-connector.adoc
+++ b/docs/modules/ROOT/pages/connectors/camel-kafka-kafka-source-connector.adoc
@@ -129,7 +129,7 @@ The camel-kafka source connector supports 122 options, 
which are listed below.
 | *camel.component.kafka.valueDeserializer* | Deserializer class for value 
that implements the Deserializer interface. | 
"org.apache.kafka.common.serialization.StringDeserializer" | false | MEDIUM
 | *camel.component.kafka.kafkaManualCommitFactory* | Factory to use for 
creating KafkaManualCommit instances. This allows to plugin a custom factory to 
create custom KafkaManualCommit instances in case special logic is needed when 
doing manual commits that deviates from the default implementation that comes 
out of the box. | null | false | MEDIUM
 | *camel.component.kafka.autowiredEnabled* | Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as 
autowired) by looking up in the registry to find if there is a single instance 
of matching type, which then gets configured on the component. This can be used 
for automatic configuring JDBC data sources, JMS connection factories, AWS 
Clients, etc. | true | false | MEDIUM
-| *camel.component.kafka.kafkaClientFactory* | Factory to use for creating 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances with logic that 
extends the vanilla Kafka clients. | null | false | MEDIUM
+| *camel.component.kafka.kafkaClientFactory* | Factory to use for creating 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create instances with logic that extends the 
vanilla Kafka clients. | null | false | MEDIUM
 | *camel.component.kafka.synchronous* | Sets whether synchronous processing 
should be strictly used | false | false | MEDIUM
 | *camel.component.kafka.schemaRegistryURL* | URL of the Confluent Platform 
schema registry servers to use. The format is host1:port1,host2:port2. This is 
known as schema.registry.url in the Confluent Platform documentation. This 
option is only available in the Confluent Platform (not standard Apache Kafka) 
| null | false | MEDIUM
 | *camel.component.kafka.interceptorClasses* | Sets interceptors for producer 
or consumers. Producer interceptors have to be classes implementing 
org.apache.kafka.clients.producer.ProducerInterceptor Consumer interceptors 
have to be classes implementing 
org.apache.kafka.clients.consumer.ConsumerInterceptor Note that if you use 
Producer interceptor on a consumer it will throw a class cast exception in 
runtime | null | false | MEDIUM
diff --git 
a/docs/modules/ROOT/pages/connectors/camel-milo-client-kafka-sink-connector.adoc
 
b/docs/modules/ROOT/pages/connectors/camel-milo-client-kafka-sink-connector.adoc
index 53546d0..82650ab 100644
--- 
a/docs/modules/ROOT/pages/connectors/camel-milo-client-kafka-sink-connector.adoc
+++ 
b/docs/modules/ROOT/pages/connectors/camel-milo-client-kafka-sink-connector.adoc
@@ -24,7 +24,7 @@ 
connector.class=org.apache.camel.kafkaconnector.miloclient.CamelMiloclientSinkCo
 ----
 
 
-The camel-milo-client sink connector supports 53 options, which are listed 
below.
+The camel-milo-client sink connector supports 54 options, which are listed 
below.
 
 
 
@@ -78,6 +78,7 @@ The camel-milo-client sink connector supports 53 options, 
which are listed below
 | *camel.component.milo-client.keyStoreUrl* | The URL where the key should be 
loaded from | null | false | MEDIUM
 | *camel.component.milo-client.maxPendingPublish Requests* | The maximum 
number of pending publish requests | null | false | MEDIUM
 | *camel.component.milo-client.maxResponseMessageSize* | The maximum number of 
bytes a response message may have | null | false | MEDIUM
+| *camel.component.milo-client.miloClientConnection Manager* | Instance for 
managing client connections | null | false | MEDIUM
 | *camel.component.milo-client.overrideHost* | Override the server reported 
endpoint host with the host from the endpoint URI. | false | false | MEDIUM
 | *camel.component.milo-client.productUri* | The product URI | 
"http://camel.apache.org/EclipseMilo"; | false | MEDIUM
 | *camel.component.milo-client.requestedPublishing Interval* | The requested 
publishing interval in milliseconds | "1_000.0" | false | MEDIUM
diff --git 
a/docs/modules/ROOT/pages/connectors/camel-milo-client-kafka-source-connector.adoc
 
b/docs/modules/ROOT/pages/connectors/camel-milo-client-kafka-source-connector.adoc
index 1f1ece7..755b388 100644
--- 
a/docs/modules/ROOT/pages/connectors/camel-milo-client-kafka-source-connector.adoc
+++ 
b/docs/modules/ROOT/pages/connectors/camel-milo-client-kafka-source-connector.adoc
@@ -24,7 +24,7 @@ 
connector.class=org.apache.camel.kafkaconnector.miloclient.CamelMiloclientSource
 ----
 
 
-The camel-milo-client source connector supports 55 options, which are listed 
below.
+The camel-milo-client source connector supports 56 options, which are listed 
below.
 
 
 
@@ -80,6 +80,7 @@ The camel-milo-client source connector supports 55 options, 
which are listed bel
 | *camel.component.milo-client.keyStoreUrl* | The URL where the key should be 
loaded from | null | false | MEDIUM
 | *camel.component.milo-client.maxPendingPublish Requests* | The maximum 
number of pending publish requests | null | false | MEDIUM
 | *camel.component.milo-client.maxResponseMessageSize* | The maximum number of 
bytes a response message may have | null | false | MEDIUM
+| *camel.component.milo-client.miloClientConnection Manager* | Instance for 
managing client connections | null | false | MEDIUM
 | *camel.component.milo-client.overrideHost* | Override the server reported 
endpoint host with the host from the endpoint URI. | false | false | MEDIUM
 | *camel.component.milo-client.productUri* | The product URI | 
"http://camel.apache.org/EclipseMilo"; | false | MEDIUM
 | *camel.component.milo-client.requestedPublishing Interval* | The requested 
publishing interval in milliseconds | "1_000.0" | false | MEDIUM
diff --git 
a/docs/modules/ROOT/pages/connectors/camel-vertx-kafka-kafka-sink-connector.adoc
 
b/docs/modules/ROOT/pages/connectors/camel-vertx-kafka-kafka-sink-connector.adoc
index 3dd4808..d6e756b 100644
--- 
a/docs/modules/ROOT/pages/connectors/camel-vertx-kafka-kafka-sink-connector.adoc
+++ 
b/docs/modules/ROOT/pages/connectors/camel-vertx-kafka-kafka-sink-connector.adoc
@@ -24,7 +24,7 @@ 
connector.class=org.apache.camel.kafkaconnector.vertxkafka.CamelVertxkafkaSinkCo
 ----
 
 
-The camel-vertx-kafka sink connector supports 155 options, which are listed 
below.
+The camel-vertx-kafka sink connector supports 154 options, which are listed 
below.
 
 
 
@@ -71,7 +71,6 @@ The camel-vertx-kafka sink connector supports 155 options, 
which are listed belo
 | *camel.sink.endpoint.transactionalId* | The TransactionalId to use for 
transactional delivery. This enables reliability semantics which span multiple 
producer sessions since it allows the client to guarantee that transactions 
using the same TransactionalId have been completed prior to starting any new 
transactions. If no TransactionalId is provided, then the producer is limited 
to idempotent delivery. If a TransactionalId is configured, enable.idempotence 
is implied. By default the Tra [...]
 | *camel.sink.endpoint.transactionTimeoutMs* | The maximum amount of time in 
ms that the transaction coordinator will wait for a transaction status update 
from the producer before proactively aborting the ongoing transaction.If this 
value is larger than the transaction.max.timeout.ms setting in the broker, the 
request will fail with a InvalidTxnTimeoutException error. | 60000 | false | 
MEDIUM
 | *camel.sink.endpoint.valueSerializer* | Serializer class for value that 
implements the org.apache.kafka.common.serialization.Serializer interface. | 
"org.apache.kafka.common.serialization.StringSerializer" | false | MEDIUM
-| *camel.sink.endpoint.vertxKafkaClientFactory* | Factory to use for creating 
io.vertx.kafka.client.consumer.KafkaConsumer and 
io.vertx.kafka.client.consumer.KafkaProducer instances. This allows to 
configure a custom factory to create custom KafkaConsumer and KafkaProducer 
instances with logic that extends the vanilla VertX Kafka clients. | null | 
false | MEDIUM
 | *camel.sink.endpoint.saslClientCallbackHandlerClass* | The fully qualified 
name of a SASL client callback handler class that implements the 
AuthenticateCallbackHandler interface. | null | false | MEDIUM
 | *camel.sink.endpoint.saslJaasConfig* | JAAS login context parameters for 
SASL connections in the format used by JAAS configuration files. JAAS 
configuration file format is described here. The format for the value is: 
'loginModuleClass controlFlag (optionName=optionValue);'. For brokers, the 
config must be prefixed with listener prefix and SASL mechanism name in 
lower-case. For example, 
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule
 required; | null | [...]
 | *camel.sink.endpoint.saslKerberosKinitCmd* | Kerberos kinit command path. | 
"/usr/bin/kinit" | false | MEDIUM
diff --git 
a/docs/modules/ROOT/pages/connectors/camel-vertx-kafka-kafka-source-connector.adoc
 
b/docs/modules/ROOT/pages/connectors/camel-vertx-kafka-kafka-source-connector.adoc
index abd4742..b12a000 100644
--- 
a/docs/modules/ROOT/pages/connectors/camel-vertx-kafka-kafka-source-connector.adoc
+++ 
b/docs/modules/ROOT/pages/connectors/camel-vertx-kafka-kafka-source-connector.adoc
@@ -24,7 +24,7 @@ 
connector.class=org.apache.camel.kafkaconnector.vertxkafka.CamelVertxkafkaSource
 ----
 
 
-The camel-vertx-kafka source connector supports 171 options, which are listed 
below.
+The camel-vertx-kafka source connector supports 170 options, which are listed 
below.
 
 
 
@@ -80,7 +80,6 @@ The camel-vertx-kafka source connector supports 171 options, 
which are listed be
 | *camel.source.endpoint.valueDeserializer* | Deserializer class for value 
that implements the org.apache.kafka.common.serialization.Deserializer 
interface. | "org.apache.kafka.common.serialization.StringDeserializer" | false 
| MEDIUM
 | *camel.source.endpoint.exceptionHandler* | To let the consumer use a custom 
ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this 
option is not in use. By default the consumer will deal with exceptions, that 
will be logged at WARN or ERROR level and ignored. | null | false | MEDIUM
 | *camel.source.endpoint.exchangePattern* | Sets the exchange pattern when the 
consumer creates an exchange. One of: [InOnly] [InOut] [InOptionalOut] | null | 
false | MEDIUM
-| *camel.source.endpoint.vertxKafkaClientFactory* | Factory to use for 
creating io.vertx.kafka.client.consumer.KafkaConsumer and 
io.vertx.kafka.client.consumer.KafkaProducer instances. This allows to 
configure a custom factory to create custom KafkaConsumer and KafkaProducer 
instances with logic that extends the vanilla VertX Kafka clients. | null | 
false | MEDIUM
 | *camel.source.endpoint.saslClientCallbackHandler Class* | The fully 
qualified name of a SASL client callback handler class that implements the 
AuthenticateCallbackHandler interface. | null | false | MEDIUM
 | *camel.source.endpoint.saslJaasConfig* | JAAS login context parameters for 
SASL connections in the format used by JAAS configuration files. JAAS 
configuration file format is described here. The format for the value is: 
'loginModuleClass controlFlag (optionName=optionValue);'. For brokers, the 
config must be prefixed with listener prefix and SASL mechanism name in 
lower-case. For example, 
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule
 required; | null [...]
 | *camel.source.endpoint.saslKerberosKinitCmd* | Kerberos kinit command path. 
| "/usr/bin/kinit" | false | MEDIUM
