This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel.git

commit d9cb0924e503aefad3ab47e70d0f69cb5dc27f04
Author: Andrea Cosentino <anco...@gmail.com>
AuthorDate: Tue Mar 3 17:50:32 2020 +0100

    Regen docs
---
 docs/components/modules/ROOT/nav.adoc                      | 1 +
 docs/components/modules/ROOT/pages/aws2-ec2-component.adoc | 2 +-
 docs/components/modules/ROOT/pages/kafka-component.adoc    | 4 ++--
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/docs/components/modules/ROOT/nav.adoc b/docs/components/modules/ROOT/nav.adoc
index 58f3244..bfbb2a8 100644
--- a/docs/components/modules/ROOT/nav.adoc
+++ b/docs/components/modules/ROOT/nav.adoc
@@ -47,6 +47,7 @@
 * xref:aws2-ddb-component.adoc[AWS 2 DynamoDB Component]
 * xref:aws2-ddbstream-component.adoc[AWS 2 DynamoDB Streams Component]
 * xref:aws2-ec2-component.adoc[AWS 2 EC2 Component]
+* xref:aws2-ec2-component.adoc[AWS 2 EC2 Component]
 * xref:aws2-ecs-component.adoc[AWS 2 ECS Component]
 * xref:aws2-eks-component.adoc[AWS 2 EKS Component]
 * xref:aws2-iam-component.adoc[AWS 2 IAM Component]
diff --git a/docs/components/modules/ROOT/pages/aws2-ec2-component.adoc b/docs/components/modules/ROOT/pages/aws2-ec2-component.adoc
index f54c04d..08c93bb 100644
--- a/docs/components/modules/ROOT/pages/aws2-ec2-component.adoc
+++ b/docs/components/modules/ROOT/pages/aws2-ec2-component.adoc
@@ -1,6 +1,6 @@
 [[aws2-ec2-component]]
 = AWS 2 EC2 Component
-:page-source: components/camel-aws2-ec2/src/main/docs/aws2-ec2-component.adoc
+:page-source: components/camel-aws2-ec2/bin/src/main/docs/aws2-ec2-component.adoc
 
 *Since Camel 3.1*
 
diff --git a/docs/components/modules/ROOT/pages/kafka-component.adoc b/docs/components/modules/ROOT/pages/kafka-component.adoc
index 190d4cd..503fda1 100644
--- a/docs/components/modules/ROOT/pages/kafka-component.adoc
+++ b/docs/components/modules/ROOT/pages/kafka-component.adoc
@@ -88,7 +88,7 @@ The Kafka component supports 96 options, which are listed below.
 | *key* (producer) | The record key (or null if no key is specified). If this 
option has been configured then it take precedence over header 
KafkaConstants#KEY |  | String
 | *keySerializerClass* (producer) | The serializer class for keys (defaults to 
the same as for messages if nothing is given). | 
org.apache.kafka.common.serialization.StringSerializer | String
 | *lazyStartProducer* (producer) | Whether the producer should be started lazy 
(on the first message). By starting lazy you can use this to allow CamelContext 
and routes to startup in situations where a producer may otherwise fail during 
starting and cause the route to fail being started. By deferring this startup 
to be lazy then the startup failure can be handled during routing messages via 
Camel's routing error handlers. Beware that when the first message is processed 
then creating and [...]
-| *lingerMs* (producer) | The producer groups together any records that arrive 
in between request transmissions into a single batched request. Normally this 
occurs only under load when records arrive faster than they can be sent out. 
However in some circumstances the client may want to reduce the number of 
requests even under moderate load. This setting accomplishes this by adding a 
small amount of artificial delaythat is, rather than immediately sending out a 
record the producer will wa [...]
+| *lingerMs* (producer) | The producer groups together any records that arrive 
in between request transmissions into a single batched request. Normally this 
occurs only under load when records arrive faster than they can be sent out. 
However in some circumstances the client may want to reduce the number of 
requests even under moderate load. This setting accomplishes this by adding a 
small amount of artificial delay that is, rather than immediately sending out a 
record the producer will w [...]
 | *maxBlockMs* (producer) | The configuration controls how long sending to 
kafka will block. These methods can be blocked for multiple reasons. For e.g: 
buffer full, metadata unavailable.This configuration imposes maximum limit on 
the total time spent in fetching metadata, serialization of key and value, 
partitioning and allocation of buffer memory when doing a send(). In case of 
partitionsFor(), this configuration imposes a maximum time threshold on waiting 
for metadata | 60000 | Integer
 | *maxInFlightRequest* (producer) | The maximum number of unacknowledged 
requests the client will send on a single connection before blocking. Note that 
if this setting is set to be greater than 1 and there are failed sends, there 
is a risk of message re-ordering due to retries (i.e., if retries are enabled). 
| 5 | Integer
 | *maxRequestSize* (producer) | The maximum size of a request. This is also 
effectively a cap on the maximum record size. Note that the server has its own 
cap on record size which may be different from this. This setting will limit 
the number of record batches the producer will send in a single request to 
avoid sending huge requests. | 1048576 | Integer
@@ -214,7 +214,7 @@ with the following path and query parameters:
 | *key* (producer) | The record key (or null if no key is specified). If this 
option has been configured then it take precedence over header 
KafkaConstants#KEY |  | String
 | *keySerializerClass* (producer) | The serializer class for keys (defaults to 
the same as for messages if nothing is given). | 
org.apache.kafka.common.serialization.StringSerializer | String
 | *lazyStartProducer* (producer) | Whether the producer should be started lazy 
(on the first message). By starting lazy you can use this to allow CamelContext 
and routes to startup in situations where a producer may otherwise fail during 
starting and cause the route to fail being started. By deferring this startup 
to be lazy then the startup failure can be handled during routing messages via 
Camel's routing error handlers. Beware that when the first message is processed 
then creating and [...]
-| *lingerMs* (producer) | The producer groups together any records that arrive 
in between request transmissions into a single batched request. Normally this 
occurs only under load when records arrive faster than they can be sent out. 
However in some circumstances the client may want to reduce the number of 
requests even under moderate load. This setting accomplishes this by adding a 
small amount of artificial delaythat is, rather than immediately sending out a 
record the producer will wa [...]
+| *lingerMs* (producer) | The producer groups together any records that arrive 
in between request transmissions into a single batched request. Normally this 
occurs only under load when records arrive faster than they can be sent out. 
However in some circumstances the client may want to reduce the number of 
requests even under moderate load. This setting accomplishes this by adding a 
small amount of artificial delay that is, rather than immediately sending out a 
record the producer will w [...]
 | *maxBlockMs* (producer) | The configuration controls how long sending to 
kafka will block. These methods can be blocked for multiple reasons. For e.g: 
buffer full, metadata unavailable.This configuration imposes maximum limit on 
the total time spent in fetching metadata, serialization of key and value, 
partitioning and allocation of buffer memory when doing a send(). In case of 
partitionsFor(), this configuration imposes a maximum time threshold on waiting 
for metadata | 60000 | Integer
 | *maxInFlightRequest* (producer) | The maximum number of unacknowledged 
requests the client will send on a single connection before blocking. Note that 
if this setting is set to be greater than 1 and there are failed sends, there 
is a risk of message re-ordering due to retries (i.e., if retries are enabled). 
| 5 | Integer
 | *maxRequestSize* (producer) | The maximum size of a request. This is also 
effectively a cap on the maximum record size. Note that the server has its own 
cap on record size which may be different from this. This setting will limit 
the number of record batches the producer will send in a single request to 
avoid sending huge requests. | 1048576 | Integer
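
For reference only (not part of the commit above): a minimal Java DSL sketch showing how the producer options documented in the kafka-component.adoc tables (lingerMs, maxInFlightRequest, lazyStartProducer, and the KafkaConstants#KEY header) might be used on a Kafka endpoint. The broker address, topic name, and option values here are illustrative assumptions, not values taken from the commit.

[source,java]
----
import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.kafka.KafkaConstants;
import org.apache.camel.impl.DefaultCamelContext;

public class KafkaProducerOptionsExample {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Hypothetical endpoint: lingerMs batches records for up to 5 ms
                // before sending; maxInFlightRequest=1 avoids re-ordering when
                // retries occur; lazyStartProducer defers producer creation to
                // the first message so startup failures surface during routing.
                from("direct:start")
                    .to("kafka:test?brokers=localhost:9092"
                        + "&lingerMs=5"
                        + "&maxInFlightRequest=1"
                        + "&lazyStartProducer=true");
            }
        });
        context.start();

        // The record key is taken from the KafkaConstants.KEY header here;
        // per the table above, a configured key option on the endpoint would
        // take precedence over this header.
        ProducerTemplate template = context.createProducerTemplate();
        template.sendBodyAndHeader("direct:start", "hello", KafkaConstants.KEY, "my-key");

        context.stop();
    }
}
----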
