This is an automated email from the ASF dual-hosted git repository.

orpiske pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel.git

commit 5e27fc0bff1e217191324d48d16e7f3537abc365
Author: Otavio Rodolfo Piske <angusyo...@gmail.com>
AuthorDate: Tue Aug 6 15:42:15 2024 +0200

    CAMEL-21040: ensure more consistency in the document sections
---
 .../src/main/docs/chatscript-component.adoc        | 22 ++++----
 .../src/main/docs/avro-component.adoc              |  6 ++-
 .../camel-avro/src/main/docs/avro-dataformat.adoc  |  2 +-
 .../src/main/docs/aws-bedrock-component.adoc       |  4 +-
 .../src/main/docs/aws-cloudtrail-component.adoc    | 61 ++++++++++++----------
 .../src/main/docs/aws-config-component.adoc        |  8 +--
 .../src/main/docs/aws2-athena-component.adoc       |  8 +--
 .../src/main/docs/aws2-cw-component.adoc           |  8 +--
 .../src/main/docs/aws2-ddb-component.adoc          | 10 ++--
 .../src/main/docs/aws2-ddbstream-component.adoc    |  4 +-
 .../src/main/docs/aws2-ec2-component.adoc          | 12 ++---
 .../src/main/docs/aws2-ecs-component.adoc          | 10 ++--
 .../src/main/docs/aws2-eks-component.adoc          | 14 ++---
 .../src/main/docs/aws2-eventbridge-component.adoc  | 12 +++--
 .../src/main/docs/aws2-iam-component.adoc          | 14 ++---
 .../src/main/docs/aws2-kinesis-component.adoc      | 17 +++---
 .../src/main/docs/aws2-kms-component.adoc          | 14 ++---
 .../src/main/docs/aws2-lambda-component.adoc       |  9 ++--
 .../src/main/docs/aws2-mq-component.adoc           | 10 ++--
 .../src/main/docs/aws2-msk-component.adoc          | 10 ++--
 .../main/docs/aws2-redshift-data-component.adoc    | 14 ++---
 .../src/main/docs/aws2-s3-component.adoc           | 47 +++++++++--------
 .../src/main/docs/aws2-ses-component.adoc          |  8 +--
 .../src/main/docs/aws2-sns-component.adoc          | 18 +++----
 .../src/main/docs/aws2-sqs-component.adoc          | 33 ++++++------
 .../main/docs/aws2-step-functions-component.adoc   | 14 ++---
 .../src/main/docs/aws2-sts-component.adoc          | 14 ++---
 .../src/main/docs/aws2-timestream-component.adoc   | 14 ++---
 .../src/main/docs/aws2-translate-component.adoc    | 14 ++---
 .../src/main/docs/azure-cosmosdb-component.adoc    | 36 +++++++------
 .../src/main/docs/azure-eventhubs-component.adoc   | 22 ++++----
 .../src/main/docs/azure-files-component.adoc       | 33 ++++++------
 .../src/main/docs/azure-servicebus-component.adoc  | 16 +++---
 .../main/docs/azure-storage-blob-component.adoc    |  4 +-
 .../docs/azure-storage-datalake-component.adoc     |  2 +
 .../main/docs/azure-storage-queue-component.adoc   |  4 +-
 .../camel-bean/src/main/docs/bean-component.adoc   |  2 +-
 .../camel-bean/src/main/docs/class-component.adoc  |  6 ++-
 .../src/main/docs/bindy-dataformat.adoc            | 10 ++--
 .../src/main/docs/bonita-component.adoc            |  3 +-
 .../src/main/docs/box-component.adoc               | 34 ++++++------
 .../src/main/docs/browse-component.adoc            |  2 +-
 .../src/main/docs/caffeine-cache-component.adoc    | 16 +++---
 .../src/main/docs/cql-component.adoc               | 16 +++---
 .../camel-cbor/src/main/docs/cbor-dataformat.adoc  |  6 ++-
 .../camel-chunk/src/main/docs/chunk-component.adoc |  8 +--
 .../camel-coap/src/main/docs/coap-component.adoc   | 43 ++++++++-------
 .../src/main/docs/cometd-component.adoc            |  6 +--
 .../src/main/docs/consul-component.adoc            |  8 ++-
 .../src/main/docs/controlbus-component.adoc        |  7 +--
 50 files changed, 390 insertions(+), 325 deletions(-)

diff --git 
a/components/camel-ai/camel-chatscript/src/main/docs/chatscript-component.adoc 
b/components/camel-ai/camel-chatscript/src/main/docs/chatscript-component.adoc
index 5878c765121..eab958c580d 100644
--- 
a/components/camel-ai/camel-chatscript/src/main/docs/chatscript-component.adoc
+++ 
b/components/camel-ai/camel-chatscript/src/main/docs/chatscript-component.adoc
@@ -17,17 +17,6 @@
 
 The ChatScript component allows you to interact with 
https://github.com/ChatScript/ChatScript[ChatScript Server] and have 
conversations. This component is stateless and relies on ChatScript to maintain 
chat history.
 
-This component expects a JSON with the following fields:
-
-[source,json]
-----
-{
-  "username": "name here",
-  "botname": "name here",
-  "body": "body here"
-}
-----
-
 [NOTE]
 ====
 Refer to the file 
https://github.com/apache/camel/blob/main/components/camel-chatscript/src/main/java/org/apache/camel/component/chatscript/ChatScriptMessage.java[`ChatScriptMessage.java`]
 for details and samples.
@@ -47,6 +36,17 @@ include::partial$component-endpoint-options.adoc[]
 
 // endpoint options: END
 
+== Usage
 
+This component expects a JSON message with the following fields:
+
+[source,json]
+----
+{
+  "username": "name here",
+  "botname": "name here",
+  "body": "body here"
+}
+----
 
 include::spring-boot:partial$starter.adoc[]
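The JSON payload documented in the hunk above can be produced with plain JDK code. This is only an illustrative sketch: the field names come from the snippet, the class and method names here are hypothetical, and the component's real message model is `ChatScriptMessage.java` (linked in the docs).

```java
// Hypothetical helper building the JSON payload described in the docs.
// Field names (username, botname, body) are taken from the snippet above;
// everything else is illustrative and uses only the JDK.
public class ChatScriptPayload {

    static String toJson(String username, String botname, String body) {
        return String.format(
            "{%n  \"username\": \"%s\",%n  \"botname\": \"%s\",%n  \"body\": \"%s\"%n}",
            username, botname, body);
    }

    public static void main(String[] args) {
        // Prints a payload in the same shape as the documented example
        System.out.println(toJson("alice", "harry", "hello"));
    }
}
```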
diff --git 
a/components/camel-avro-rpc/camel-avro-rpc-component/src/main/docs/avro-component.adoc
 
b/components/camel-avro-rpc/camel-avro-rpc-component/src/main/docs/avro-component.adoc
index 61867d7cd9f..9b97b4f3528 100644
--- 
a/components/camel-avro-rpc/camel-avro-rpc-component/src/main/docs/avro-component.adoc
+++ 
b/components/camel-avro-rpc/camel-avro-rpc-component/src/main/docs/avro-component.adoc
@@ -111,7 +111,9 @@ class Value {
 _Note: Existing classes can be used only for RPC (see below), not in
 data format._
 
-== Using Avro RPC in Camel
+== Usage
+
+=== Using Avro RPC in Camel
 
 As mentioned above, Avro also provides RPC support over multiple
 transports such as http and netty. Camel provides consumers and
@@ -220,7 +222,7 @@ while `putProcessor` will receive an array of size 2 with 
`String` key and
 `Value` value filled as array contents.
 
 
-== Avro via HTTP SPI
+=== Avro via HTTP SPI
 
 The Avro RPC component offers the 
`org.apache.camel.component.avro.spi.AvroRpcHttpServerFactory` service provider 
interface (SPI) so that various platforms can provide their own implementation 
based on their native HTTP server.
 
diff --git a/components/camel-avro/src/main/docs/avro-dataformat.adoc 
b/components/camel-avro/src/main/docs/avro-dataformat.adoc
index 9d1ff37f7af..0214d90ee41 100644
--- a/components/camel-avro/src/main/docs/avro-dataformat.adoc
+++ b/components/camel-avro/src/main/docs/avro-dataformat.adoc
@@ -44,7 +44,7 @@ include::partial$dataformat-options.adoc[]
 
 == Examples
 
-== Avro Data Format usage
+=== Avro Data Format usage
 
 Using the avro data format is as easy as specifying that the class that
 you want to marshal or unmarshal in your route.
diff --git 
a/components/camel-aws/camel-aws-bedrock/src/main/docs/aws-bedrock-component.adoc
 
b/components/camel-aws/camel-aws-bedrock/src/main/docs/aws-bedrock-component.adoc
index 43a4aa4225c..2a37a5d2b70 100644
--- 
a/components/camel-aws/camel-aws-bedrock/src/main/docs/aws-bedrock-component.adoc
+++ 
b/components/camel-aws/camel-aws-bedrock/src/main/docs/aws-bedrock-component.adoc
@@ -689,7 +689,9 @@ Camel-AWS Bedrock component provides the following 
operation on the producer sid
 - invokeImageModel
 - invokeEmbeddingsModel
 
-== Producer Examples
+== Examples
+
+=== Producer Examples
 
 - invokeTextModel: this operation will invoke a model from Bedrock. This is an 
example for both Titan Express and Titan Lite.
 
diff --git 
a/components/camel-aws/camel-aws-cloudtrail/src/main/docs/aws-cloudtrail-component.adoc
 
b/components/camel-aws/camel-aws-cloudtrail/src/main/docs/aws-cloudtrail-component.adoc
index 71b43b95c6f..f4163508597 100644
--- 
a/components/camel-aws/camel-aws-cloudtrail/src/main/docs/aws-cloudtrail-component.adoc
+++ 
b/components/camel-aws/camel-aws-cloudtrail/src/main/docs/aws-cloudtrail-component.adoc
@@ -24,35 +24,6 @@ You must have a valid Amazon Web Services developer account, 
and be
 signed up to use Amazon Cloudtrail. More information is available
 at https://aws.amazon.com/cloudtrail/[AWS Cloudtrail]
 
-== Static credentials, Default Credential Provider and Profile Credentials 
Provider
-
-You have the possibility of avoiding the usage of explicit static credentials 
by specifying the useDefaultCredentialsProvider option and set it to true.
-
-The order of evaluation for Default Credentials Provider is the following:
-
- - Java system properties - `aws.accessKeyId` and `aws.secretKey`.
- - Environment variables - `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
- - Web Identity Token from AWS STS.
- - The shared credentials and config files.
- - Amazon ECS container credentials - loaded from the Amazon ECS if the 
environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is set.
- - Amazon EC2 Instance profile credentials. 
- 
-You have also the possibility of using Profile Credentials Provider, by 
specifying the useProfileCredentialsProvider option to true and 
profileCredentialsName to the profile name.
-
-Only one of static, default and profile credentials could be used at the same 
time.
-
-For more information about this you can look at 
https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS
 credentials documentation]
-
-== Cloudtrail Events consumed
-
-The Cloudtrail consumer will use an API method called LookupEvents.
-
-This method will only take into account management events like 
create/update/delete of resources and Cloudtrail insight events where enabled.
-
-This means you won't consume the events registered in the Cloudtrail logs 
stored on S3, in case of creation of a new Trail.
-
-This is important to notice, and it must be taken into account when using this 
component.
-
 == URI Format
 
 -----------------------------------
@@ -79,4 +50,36 @@ include::partial$component-endpoint-options.adoc[]
 
 // endpoint options: END
 
+== Usage
+
+=== Static credentials, Default Credential Provider and Profile Credentials 
Provider
+
+You can avoid using explicit static credentials by setting the 
useDefaultCredentialsProvider option to true.
+
+The Default Credentials Provider evaluates credentials in the following order:
+
+- Java system properties - `aws.accessKeyId` and `aws.secretKey`.
+- Environment variables - `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
+- Web Identity Token from AWS STS.
+- The shared credentials and config files.
+- Amazon ECS container credentials - loaded from Amazon ECS if the 
environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is set.
+- Amazon EC2 Instance profile credentials.
+
+You can also use the Profile Credentials Provider, by setting the 
useProfileCredentialsProvider option to true and profileCredentialsName to 
the profile name.
+
+Only one of the static, default, and profile credentials providers can be 
used at a time.
+
+For more information, see the 
https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS
 credentials documentation].
+
+=== Cloudtrail Events consumed
+
+The Cloudtrail consumer uses an API method called LookupEvents.
+
+This method only returns management events (such as the creation, update, 
and deletion of resources) and, where enabled, Cloudtrail Insights events.
+
+This means that, when a new Trail is created, you won't consume the events 
recorded in the Cloudtrail logs stored on S3.
+
+Keep this behavior in mind when using this component.
+
 include::spring-boot:partial$starter.adoc[]
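The first two steps of the resolution order documented in the hunk above (Java system properties, then environment variables) can be illustrated with plain JDK code. This sketch is not the AWS SDK implementation — the real chain lives in the SDK's DefaultCredentialsProvider — and the class name here is hypothetical.

```java
import java.util.Optional;

// Hypothetical illustration of the first two lookup steps described above.
public class CredentialsLookup {

    static Optional<String> accessKeyId() {
        String fromProps = System.getProperty("aws.accessKeyId");
        if (fromProps != null) {
            return Optional.of(fromProps);   // 1. Java system properties win
        }
        String fromEnv = System.getenv("AWS_ACCESS_KEY_ID");
        return Optional.ofNullable(fromEnv); // 2. then environment variables
    }

    public static void main(String[] args) {
        System.setProperty("aws.accessKeyId", "AKIDEXAMPLE");
        System.out.println(accessKeyId().orElse("<none>"));
    }
}
```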
diff --git 
a/components/camel-aws/camel-aws-config/src/main/docs/aws-config-component.adoc 
b/components/camel-aws/camel-aws-config/src/main/docs/aws-config-component.adoc
index 71e59944608..c9b6524c1bf 100644
--- 
a/components/camel-aws/camel-aws-config/src/main/docs/aws-config-component.adoc
+++ 
b/components/camel-aws/camel-aws-config/src/main/docs/aws-config-component.adoc
@@ -54,6 +54,10 @@ You have to provide the ConfigClient in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/config/[Amazon Config] service.
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
 == Usage
 
 === Static credentials, Default Credential Provider and Profile Credentials 
Provider
@@ -75,8 +79,4 @@ Only one of static, default and profile credentials could be 
used at the same ti
 
 For more information about this you can look at 
https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS
 credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 include::spring-boot:partial$starter.adoc[]
diff --git 
a/components/camel-aws/camel-aws2-athena/src/main/docs/aws2-athena-component.adoc
 
b/components/camel-aws/camel-aws2-athena/src/main/docs/aws2-athena-component.adoc
index f1f5a0a4fc5..4f609c644ce 100644
--- 
a/components/camel-aws/camel-aws2-athena/src/main/docs/aws2-athena-component.adoc
+++ 
b/components/camel-aws/camel-aws2-athena/src/main/docs/aws2-athena-component.adoc
@@ -51,6 +51,10 @@ You have to provide the amazonAthenaClient in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/athena/[AWS Athena] service.
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
 == Examples
 
 === Producer Examples
@@ -79,10 +83,6 @@ from("direct:start")
     .to("mock:result");
 
--------------------------------------------------------------------------------
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === Static credentials, Default Credential Provider and Profile Credentials 
Provider
 
 You have the possibility of avoiding the usage of explicit static credentials 
by specifying the useDefaultCredentialsProvider option and set it to true.
diff --git 
a/components/camel-aws/camel-aws2-cw/src/main/docs/aws2-cw-component.adoc 
b/components/camel-aws/camel-aws2-cw/src/main/docs/aws2-cw-component.adoc
index a9a2a26616b..0a4ecaccdbd 100644
--- a/components/camel-aws/camel-aws2-cw/src/main/docs/aws2-cw-component.adoc
+++ b/components/camel-aws/camel-aws2-cw/src/main/docs/aws2-cw-component.adoc
@@ -56,6 +56,10 @@ You have to provide the amazonCwClient in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/cloudwatch/[Amazon's CloudWatch].
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
 == Usage
 
 === Static credentials, Default Credential Provider and Profile Credentials 
Provider
@@ -77,10 +81,6 @@ Only one of static, default and profile credentials could be 
used at the same ti
 
 For more information about this you can look at 
https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS
 credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === Advanced CloudWatchClient configuration
 
 If you need more control over the `CloudWatchClient` instance
diff --git 
a/components/camel-aws/camel-aws2-ddb/src/main/docs/aws2-ddb-component.adoc 
b/components/camel-aws/camel-aws2-ddb/src/main/docs/aws2-ddb-component.adoc
index b319d86a2fd..1433bff5308 100644
--- a/components/camel-aws/camel-aws2-ddb/src/main/docs/aws2-ddb-component.adoc
+++ b/components/camel-aws/camel-aws2-ddb/src/main/docs/aws2-ddb-component.adoc
@@ -55,6 +55,10 @@ You have to provide the amazonDDBClient in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/dynamodb[Amazon's DynamoDB].
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
 == Usage
 
 === Static credentials, Default Credential Provider and Profile Credentials 
Provider
@@ -76,10 +80,6 @@ Only one of static, default and profile credentials could be 
used at the same ti
 
 For more information about this you can look at 
https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS
 credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === Advanced AmazonDynamoDB configuration
 
 If you need more control over the `AmazonDynamoDB` instance
@@ -112,7 +112,7 @@ public class MyRouteBuilder extends RouteBuilder {
 The `#client` refers to a `DynamoDbClient` in the
 Registry.
 
-== Supported producer operations
+=== Supported producer operations
 
 - BatchGetItems
 - DeleteItem
diff --git 
a/components/camel-aws/camel-aws2-ddb/src/main/docs/aws2-ddbstream-component.adoc
 
b/components/camel-aws/camel-aws2-ddb/src/main/docs/aws2-ddbstream-component.adoc
index 50230e1ed51..62b110e08c0 100644
--- 
a/components/camel-aws/camel-aws2-ddb/src/main/docs/aws2-ddbstream-component.adoc
+++ 
b/components/camel-aws/camel-aws2-ddb/src/main/docs/aws2-ddbstream-component.adoc
@@ -86,9 +86,9 @@ Only one of static, default and profile credentials could be 
used at the same ti
 
 For more information about this you can look at 
https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS
 credentials documentation]
 
-== Coping with Downtime
+=== Coping with Downtime
 
-=== AWS DynamoDB Streams outage of less than 24 hours
+==== AWS DynamoDB Streams outage of less than 24 hours
 
 The consumer will resume from the last seen sequence number (as
 implemented
diff --git 
a/components/camel-aws/camel-aws2-ec2/src/main/docs/aws2-ec2-component.adoc 
b/components/camel-aws/camel-aws2-ec2/src/main/docs/aws2-ec2-component.adoc
index 063c547cd62..1a70c5ec8b4 100644
--- a/components/camel-aws/camel-aws2-ec2/src/main/docs/aws2-ec2-component.adoc
+++ b/components/camel-aws/camel-aws2-ec2/src/main/docs/aws2-ec2-component.adoc
@@ -55,6 +55,10 @@ You have to provide the amazonEc2Client in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/ec2/[Amazon EC2] service.
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
 == Usage
 
 === Static credentials, Default Credential Provider and Profile Credentials 
Provider
@@ -76,11 +80,7 @@ Only one of static, default and profile credentials could be 
used at the same ti
 
 For more information about this you can look at 
https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS
 credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
-== Supported producer operations
+=== Supported producer operations
 
 - createAndRunInstances
 - startInstances
@@ -158,7 +158,7 @@ from("direct:stop")
      
.to("aws2-ec2://TestDomain?accessKey=xxxx&secretKey=xxxx&operation=terminateInstances");
 
--------------------------------------------------------------------------------
 
-== Using a POJO as body
+=== Using a POJO as body
 
 Sometimes building an AWS Request can be complex because of multiple options. 
We introduce the possibility to use a POJO as a body.
 In AWS EC2 there are multiple operations you can submit, as an example for 
Create and run an instance, you can do something like:
diff --git 
a/components/camel-aws/camel-aws2-ecs/src/main/docs/aws2-ecs-component.adoc 
b/components/camel-aws/camel-aws2-ecs/src/main/docs/aws2-ecs-component.adoc
index 2cc8346f114..b69f437cd55 100644
--- a/components/camel-aws/camel-aws2-ecs/src/main/docs/aws2-ecs-component.adoc
+++ b/components/camel-aws/camel-aws2-ecs/src/main/docs/aws2-ecs-component.adoc
@@ -54,6 +54,10 @@ You have to provide the amazonECSClient in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/ecs/[Amazon ECS] service.
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
 == Usage
 
 === Static credentials, Default Credential Provider and Profile Credentials 
Provider
@@ -75,10 +79,6 @@ Only one of static, default and profile credentials could be 
used at the same ti
 
 For more information about this you can look at 
https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS
 credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === ECS Producer operations
 
 Camel-AWS ECS component provides the following operation on the producer side:
@@ -98,7 +98,7 @@ from("direct:listClusters")
     .to("aws2-ecs://test?ecsClient=#amazonEcsClient&operation=listClusters")
 
--------------------------------------------------------------------------------
 
-== Using a POJO as body
+=== Using a POJO as body
 
 Sometimes building an AWS Request can be complex because of multiple options. 
We introduce the possibility to use a POJO as a body.
 In AWS ECS there are multiple operations you can submit, as an example for 
List cluster request, you can do something like:
diff --git 
a/components/camel-aws/camel-aws2-eks/src/main/docs/aws2-eks-component.adoc 
b/components/camel-aws/camel-aws2-eks/src/main/docs/aws2-eks-component.adoc
index c9825057f00..b537c6cea88 100644
--- a/components/camel-aws/camel-aws2-eks/src/main/docs/aws2-eks-component.adoc
+++ b/components/camel-aws/camel-aws2-eks/src/main/docs/aws2-eks-component.adoc
@@ -56,6 +56,10 @@ You have to provide the amazonEKSClient in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/eks/[Amazon EKS] service.
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
 == Usage
 
 === Static credentials, Default Credential Provider and Profile Credentials 
Provider
@@ -77,10 +81,6 @@ Only one of static, default and profile credentials could be 
used at the same ti
 
 For more information about this you can look at 
https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS
 credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === EKS Producer operations
 
 Camel-AWS EKS component provides the following operation on the producer side:
@@ -90,7 +90,9 @@ Camel-AWS EKS component provides the following operation on 
the producer side:
 - describeCluster
 - deleteCluster
 
-== Producer Examples
+== Examples
+
+=== Producer Examples
 
 - listClusters: this operation will list the available clusters in EKS
 
@@ -100,7 +102,7 @@ from("direct:listClusters")
     .to("aws2-eks://test?eksClient=#amazonEksClient&operation=listClusters")
 
--------------------------------------------------------------------------------
 
-== Using a POJO as body
+=== Using a POJO as body
 
 Sometimes building an AWS Request can be complex because of multiple options. 
We introduce the possibility to use a POJO as a body.
 In AWS EKS there are multiple operations you can submit, as an example for 
List cluster request, you can do something like:
diff --git 
a/components/camel-aws/camel-aws2-eventbridge/src/main/docs/aws2-eventbridge-component.adoc
 
b/components/camel-aws/camel-aws2-eventbridge/src/main/docs/aws2-eventbridge-component.adoc
index 95e1257de22..8214cc94ff4 100644
--- 
a/components/camel-aws/camel-aws2-eventbridge/src/main/docs/aws2-eventbridge-component.adoc
+++ 
b/components/camel-aws/camel-aws2-eventbridge/src/main/docs/aws2-eventbridge-component.adoc
@@ -53,6 +53,12 @@ include::partial$component-endpoint-options.adoc[]
 
 // endpoint options: END
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
+== Usage
+
 === Static credentials, Default Credential Provider and Profile Credentials 
Provider
 
 You have the possibility of avoiding the usage of explicit static credentials 
by specifying the useDefaultCredentialsProvider option and set it to true.
@@ -72,10 +78,6 @@ Only one of static, default and profile credentials could be 
used at the same ti
 
 For more information about this you can look at 
https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS
 credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === AWS2-Eventbridge Producer operations
 
 Camel-AWS2-Eventbridge component provides the following operation on the 
producer side:
@@ -290,7 +292,7 @@ this operation will return a list of rules associated with 
a target.
 
 this operation will return a list of entries with related ID sent to 
servicebus.
 
-== Updating the rule
+=== Updating the rule
 
 To update a rule, you'll need to perform the putRule operation again.
 There is no explicit update rule operation in the Java SDK.
diff --git 
a/components/camel-aws/camel-aws2-iam/src/main/docs/aws2-iam-component.adoc 
b/components/camel-aws/camel-aws2-iam/src/main/docs/aws2-iam-component.adoc
index d61a26e2765..c651a8a258e 100644
--- a/components/camel-aws/camel-aws2-iam/src/main/docs/aws2-iam-component.adoc
+++ b/components/camel-aws/camel-aws2-iam/src/main/docs/aws2-iam-component.adoc
@@ -61,6 +61,10 @@ You have to provide the amazonKmsClient in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/iam/[Amazon IAM] service.
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
 == Usage
 
 === Static credentials, Default Credential Provider and Profile Credentials 
Provider
@@ -82,10 +86,6 @@ Only one of static, default and profile credentials could be 
used at the same ti
 
 For more information about this you can look at 
https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS
 credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === IAM Producer operations
 
 Camel-AWS2 IAM component provides the following operation on the producer side:
@@ -104,7 +104,9 @@ Camel-AWS2 IAM component provides the following operation 
on the producer side:
 - addUserToGroup
 - removeUserFromGroup
 
-== Producer Examples
+== Examples
+
+=== Producer Examples
 
 - createUser: this operation will create a user in IAM
 
@@ -158,7 +160,7 @@ from("direct:listUsers")
     .to("aws2-iam://test?iamClient=#amazonIAMClient&operation=listGroups")
 
--------------------------------------------------------------------------------
 
-== Using a POJO as body
+=== Using a POJO as body
 
 Sometimes building an AWS Request can be complex because of multiple options. 
We introduce the possibility to use a POJO as a body.
 In AWS IAM, there are multiple operations you can submit, as an example for 
Create User request, you can do something like:
diff --git 
a/components/camel-aws/camel-aws2-kinesis/src/main/docs/aws2-kinesis-component.adoc
 
b/components/camel-aws/camel-aws2-kinesis/src/main/docs/aws2-kinesis-component.adoc
index c5702c1a024..6d97f573e5a 100644
--- 
a/components/camel-aws/camel-aws2-kinesis/src/main/docs/aws2-kinesis-component.adoc
+++ 
b/components/camel-aws/camel-aws2-kinesis/src/main/docs/aws2-kinesis-component.adoc
@@ -53,14 +53,19 @@ include::partial$component-endpoint-options.adoc[]
 
 // endpoint options: END
 
-
 Required Kinesis component options
 
 You have to provide the KinesisClient in the
 Registry with proxies and relevant credentials
 configured.
 
-== Batch Consumer
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
+== Usage
+
+=== Batch Consumer
 
 This component implements the Batch Consumer.
 
@@ -73,7 +78,7 @@ all available shards (multiple shards consumption) of Amazon 
Kinesis, therefore,
 property in the DSL configuration empty, then it'll consume all available 
shards
 otherwise only the specified shard corresponding to the shardId will be 
consumed.
 
-== Batch Producer
+=== Batch Producer
 
 This component implements the Batch Producer.
 
@@ -84,8 +89,6 @@ The batch type needs to implement the `Iterable` interface. 
For example, it can
 The message type can be one or more of types `byte[]`, `ByteBuffer`, UTF-8 
`String`, or `InputStream`. Other types are not supported.
 
 
-== Usage
-
 === Static credentials, Default Credential Provider and Profile Credentials 
Provider
 
 You have the possibility of avoiding the usage of explicit static credentials 
by specifying the useDefaultCredentialsProvider option and set it to true.
@@ -105,10 +108,6 @@ Only one of static, default and profile credentials could 
be used at the same ti
 
 For more information about this you can look at 
https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS
 credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === AmazonKinesis configuration
 
 You then have to reference the KinesisClient in the `amazonKinesisClient` URI 
option.
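The Batch Producer hunk above states that the batch body must implement `Iterable` and that each entry must be a `byte[]`, `ByteBuffer`, UTF-8 `String`, or `InputStream`. A minimal sketch of such a body, using only the JDK (the class name is hypothetical and no Camel API is involved):

```java
import java.nio.charset.StandardCharsets;
import java.util.List;

// Hypothetical batch body for the Kinesis Batch Producer: an Iterable whose
// entries use the supported types (a UTF-8 String and a byte[] here).
public class KinesisBatch {

    static List<Object> batch() {
        return List.of(
            "record-1",                                   // UTF-8 String
            "record-2".getBytes(StandardCharsets.UTF_8)); // byte[]
    }

    public static void main(String[] args) {
        System.out.println(batch().size());
    }
}
```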
diff --git 
a/components/camel-aws/camel-aws2-kms/src/main/docs/aws2-kms-component.adoc 
b/components/camel-aws/camel-aws2-kms/src/main/docs/aws2-kms-component.adoc
index 5460a257f58..5e781e472f0 100644
--- a/components/camel-aws/camel-aws2-kms/src/main/docs/aws2-kms-component.adoc
+++ b/components/camel-aws/camel-aws2-kms/src/main/docs/aws2-kms-component.adoc
@@ -53,6 +53,10 @@ You have to provide the amazonKmsClient in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/kms/[Amazon KMS] service.
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
 == Usage
 
 === Static credentials, Default Credential Provider and Profile Credentials 
Provider
@@ -74,10 +78,6 @@ Only one of static, default and profile credentials could be 
used at the same ti
 
 For more information about this you can look at 
https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS
 credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === KMS Producer operations
 
 Camel-AWS KMS component provides the following operation on the producer side:
@@ -89,7 +89,9 @@ Camel-AWS KMS component provides the following operation on 
the producer side:
 - describeKey
 - enableKey
 
-== Producer Examples
+== Examples
+
+=== Producer Examples
 
 - listKeys: this operation will list the available keys in KMS
 
@@ -125,7 +127,7 @@ from("direct:enableKey")
       .to("aws2-kms://test?kmsClient=#amazonKmsClient&operation=enableKey")
 
--------------------------------------------------------------------------------
 
-== Using a POJO as body
+=== Using a POJO as body
 
 Sometimes building an AWS Request can be complex because of multiple options.
 We introduce the possibility to use a POJO as the body.
diff --git 
a/components/camel-aws/camel-aws2-lambda/src/main/docs/aws2-lambda-component.adoc
 
b/components/camel-aws/camel-aws2-lambda/src/main/docs/aws2-lambda-component.adoc
index e0e15bb9466..20c4d441500 100644
--- 
a/components/camel-aws/camel-aws2-lambda/src/main/docs/aws2-lambda-component.adoc
+++ 
b/components/camel-aws/camel-aws2-lambda/src/main/docs/aws2-lambda-component.adoc
@@ -50,13 +50,16 @@ include::partial$component-endpoint-options.adoc[]
 
 // endpoint options: END
 
-
 Required Lambda component options
 
 You have to provide the awsLambdaClient in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/lambda/[Amazon Lambda] service.
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
 == Usage
 
 === Static credentials, Default Credential Provider and Profile Credentials 
Provider
@@ -78,10 +81,6 @@ Only one of static, default and profile credentials could be 
used at the same ti
 
 For more information about this you can look at https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 == List of Available Operations
 
 - listFunctions
diff --git a/components/camel-aws/camel-aws2-mq/src/main/docs/aws2-mq-component.adoc b/components/camel-aws/camel-aws2-mq/src/main/docs/aws2-mq-component.adoc
index ee82d5bf4cd..99dd6d53103 100644
--- a/components/camel-aws/camel-aws2-mq/src/main/docs/aws2-mq-component.adoc
+++ b/components/camel-aws/camel-aws2-mq/src/main/docs/aws2-mq-component.adoc
@@ -54,6 +54,10 @@ You have to provide the amazonMqClient in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/amazon-mq/[Amazon MQ] service.
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
 == Usage
 
 === Static credentials, Default Credential Provider and Profile Credentials Provider
@@ -75,10 +79,6 @@ Only one of static, default and profile credentials could be used at the same ti
 
 For more information about this you can look at https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === MQ Producer operations
 
 Camel-AWS MQ component provides the following operation on the producer side:
@@ -146,7 +146,7 @@ from("direct:listBrokers")
     .to("aws2-mq://test?amazonMqClient=#amazonMqClient&operation=rebootBroker")
 
--------------------------------------------------------------------------------
 
-== Using a POJO as body
+=== Using a POJO as body
 
 Sometimes building an AWS Request can be complex because of multiple options. We introduce the possibility to use a POJO as the body.
 In AWS MQ, there are multiple operations you can submit, as an example for List brokers request, you can do something like:
diff --git a/components/camel-aws/camel-aws2-msk/src/main/docs/aws2-msk-component.adoc b/components/camel-aws/camel-aws2-msk/src/main/docs/aws2-msk-component.adoc
index e1ab2029d78..4709fc16a2a 100644
--- a/components/camel-aws/camel-aws2-msk/src/main/docs/aws2-msk-component.adoc
+++ b/components/camel-aws/camel-aws2-msk/src/main/docs/aws2-msk-component.adoc
@@ -54,6 +54,10 @@ You have to provide the amazonMskClient in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/msk/[Amazon MSK] service.
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
 == Usage
 
 === Static credentials vs Default Credential Provider
@@ -69,10 +73,6 @@ You have the possibility of avoiding the usage of explicit static credentials by
 
 For more information about this you can look at https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === MSK Producer operations
 
 Camel-AWS MSK component provides the following operation on the producer side:
@@ -137,7 +137,7 @@ from("direct:createCluster")
     .to("aws2-msk://test?mskClient=#amazonMskClient&operation=deleteCluster")
 
--------------------------------------------------------------------------------
 
-== Using a POJO as body
+=== Using a POJO as body
 
 Sometimes building an AWS Request can be complex because of multiple options. We introduce the possibility to use a POJO as the body.
 In AWS MSK, there are multiple operations you can submit, as an example for List clusters request, you can do something like:
diff --git a/components/camel-aws/camel-aws2-redshift/src/main/docs/aws2-redshift-data-component.adoc b/components/camel-aws/camel-aws2-redshift/src/main/docs/aws2-redshift-data-component.adoc
index 959d250d211..f2f08226dd1 100644
--- a/components/camel-aws/camel-aws2-redshift/src/main/docs/aws2-redshift-data-component.adoc
+++ b/components/camel-aws/camel-aws2-redshift/src/main/docs/aws2-redshift-data-component.adoc
@@ -58,6 +58,10 @@ You have to provide the awsRedshiftDataClient in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/redshift/[AWS Redshift] service.
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
 == Usage
 
 === Static credentials, Default Credential Provider and Profile Credentials Provider
@@ -79,10 +83,6 @@ Only one of static, default and profile credentials could be used at the same ti
 
 For more information about this you can look at https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === Redshift Producer operations
 
 Camel-AWS Redshift Data component provides the following operation on the producer side:
@@ -98,7 +98,9 @@ Camel-AWS Redshift Data component provides the following operation on the produc
 - describeStatement
 - getStatementResult
 
-== Producer Examples
+== Examples
+
+=== Producer Examples
 
 - listDatabases: this operation will list redshift databases
 
@@ -108,7 +110,7 @@ from("direct:listDatabases")
     .to("aws2-redshift-data://test?awsRedshiftDataClient=#awsRedshiftDataClient&operation=listDatabases")
 
--------------------------------------------------------------------------------
 
-== Using a POJO as body
+=== Using a POJO as body
 
 Sometimes building an AWS Request can be complex because of multiple options. We introduce the possibility to use a POJO as body.
 In AWS Redshift Data there are multiple operations you can submit, as an example for List Databases
diff --git a/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc b/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc
index dccc400e6f9..55547075609 100644
--- a/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc
+++ b/components/camel-aws/camel-aws2-s3/src/main/docs/aws2-s3-component.adoc
@@ -57,7 +57,13 @@ You have to provide the amazonS3Client in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/s3[Amazon's S3].
 
-== Batch Consumer
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
+== Usage
+
+=== Batch Consumer
 
 This component implements the Batch Consumer.
 
@@ -65,20 +71,6 @@ This allows you, for instance, to know how many messages exist in this
 batch and for instance, let the Aggregator
 aggregate this number of messages.
 
-== Usage
-
-For example, to read file `hello.txt` from bucket `helloBucket`, use the following snippet:
-
-[source,java]
---------------------------------------------------------------------------------
-from("aws2-s3://helloBucket?accessKey=yourAccessKey&secretKey=yourSecretKey&prefix=hello.txt")
-  .to("file:/var/downloaded");
---------------------------------------------------------------------------------
-
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === S3 Producer operations
 
 Camel-AWS2-S3 component provides the following operation on the producer side:
@@ -97,6 +89,16 @@ If you don't specify an operation, explicitly the producer will do:
 - a single file upload
 - a multipart upload if multiPartUpload option is enabled
 
+== Examples
+
+For example, to read file `hello.txt` from bucket `helloBucket`, use the following snippet:
+
+[source,java]
+--------------------------------------------------------------------------------
+from("aws2-s3://helloBucket?accessKey=yourAccessKey&secretKey=yourSecretKey&prefix=hello.txt")
+  .to("file:/var/downloaded");
+--------------------------------------------------------------------------------
+
 === Advanced AmazonS3 configuration
 
 If your Camel Application is running behind a firewall or if you need to
@@ -305,7 +307,7 @@ Parameters (`accessKey`, `secretKey` and `region`) are mandatory for this operat
 
 NOTE: If checksum validations are enabled, the url will no longer be browser compatible because it adds a signed header that must be included in the HTTP request.
 
-== Streaming Upload mode
+=== Streaming Upload mode
 
 With the stream mode enabled, users will be able to upload data to S3 without 
knowing ahead of time the dimension of the data, by leveraging multipart upload.
 The upload will be completed when the batchSize has been completed or the 
batchMessageNumber has been reached.
@@ -369,18 +371,18 @@ from(kafka("topic1").brokers("localhost:9092"))
 
 In this case, the upload will be completed after 10 seconds.
 
-== Bucket Auto-creation
+=== Bucket Auto-creation
 
 With the option `autoCreateBucket` users are able to avoid the auto-creation 
of an S3 Bucket in case it doesn't exist. The default for this option is 
`false`.
 If set to false, any operation on a not-existent bucket in AWS won't be 
successful and an error will be returned.
 
-== Moving stuff between a bucket and another bucket
+=== Moving stuff between a bucket and another bucket
 
 Some users like to consume stuff from a bucket and move the content in a 
different one without using the copyObject feature of this component.
 If this is case for you, remember to remove the bucketName header from the 
incoming exchange of the consumer, otherwise the file will always be 
overwritten on the same
 original bucket.
 
-== MoveAfterRead consumer option
+=== MoveAfterRead consumer option
 
 In addition to deleteAfterRead, it has been added another option, 
moveAfterRead. With this option enabled, the consumed object will be moved to a 
target destinationBucket instead of being only deleted.
 This will require specifying the destinationBucket option. As example:
@@ -407,7 +409,7 @@ In this case, the objects consumed will be moved to _myothercamelbucket_ bucket
 
 So if the file name is test, in the _myothercamelbucket_ you should see a file 
called pre-test-suff.
 
-== Using customer key as encryption
+=== Using the customer key as encryption
 
 We introduced also the customer key support (an alternative of using KMS). The 
following code shows an example.
 
@@ -426,7 +428,7 @@ from("direct:putObject")
     .to(awsEndpoint);
 
--------------------------------------------------------------------------------
 
-== Using a POJO as body
+=== Using a POJO as body
 
 Sometimes building an AWS Request can be complex because of multiple options. 
We introduce the possibility to use a POJO as the body.
 In AWS S3 there are multiple operations you can submit, as an example for List 
brokers request, you can do something like:
@@ -440,7 +442,8 @@ from("direct:aws2-s3")
 
 In this way, you'll pass the request directly without the need of passing 
headers and options specifically related to this operation.
 
-== Create S3 client and add component to registry
+=== Create S3 client and add component to registry
+
 Sometimes you would want to perform some advanced configuration using AWS2S3Configuration, which also allows to set the S3 client.
 You can create and set the S3 client in the component configuration as shown in the following example
 
diff --git a/components/camel-aws/camel-aws2-ses/src/main/docs/aws2-ses-component.adoc b/components/camel-aws/camel-aws2-ses/src/main/docs/aws2-ses-component.adoc
index 8e448dd2614..7da81f9d5e7 100644
--- a/components/camel-aws/camel-aws2-ses/src/main/docs/aws2-ses-component.adoc
+++ b/components/camel-aws/camel-aws2-ses/src/main/docs/aws2-ses-component.adoc
@@ -53,6 +53,10 @@ You have to provide the amazonSESClient in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/ses[Amazon's SES].
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
 == Usage
 
 === Static credentials, Default Credential Provider and Profile Credentials Provider
@@ -74,10 +78,6 @@ Only one of static, default and profile credentials could be used at the same ti
 
 For more information about this you can look at https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === Advanced SesClient configuration
 
 If you need more control over the `SesClient` instance
diff --git a/components/camel-aws/camel-aws2-sns/src/main/docs/aws2-sns-component.adoc b/components/camel-aws/camel-aws2-sns/src/main/docs/aws2-sns-component.adoc
index 8d0149a27ea..3c303969d34 100644
--- a/components/camel-aws/camel-aws2-sns/src/main/docs/aws2-sns-component.adoc
+++ b/components/camel-aws/camel-aws2-sns/src/main/docs/aws2-sns-component.adoc
@@ -60,7 +60,9 @@ You have to provide the amazonSNSClient in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/sns[Amazon's SNS].
 
-== Usage
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
 
 === Static credentials, Default Credential Provider and Profile Credentials Provider
 
@@ -81,9 +83,7 @@ Only one of static, default and profile credentials could be used at the same ti
 
 For more information about this you can look at https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
+== Usage
 
 === Advanced AmazonSNS configuration
 
@@ -121,20 +121,20 @@ from("aws2-sqs://test-camel?amazonSQSClient=#amazonSQSClient&delay=50&maxMessage
     .to(...);
 -------------------------------------------------
 
-== Topic Autocreation
+=== Topic Auto-creation
 
-With the option `autoCreateTopic` users are able to avoid the autocreation of an SNS Topic in case it doesn't exist. The default for this option is `false`.
+With the option `autoCreateTopic` users are able to avoid the auto-creation of an SNS Topic in case it doesn't exist. The default for this option is `false`.
 If set to false, any operation on a non-existent topic in AWS won't be successful and an error will be returned.
 
-== SNS FIFO
+=== SNS FIFO
 
 SNS FIFO are supported. While creating the SQS queue, you will subscribe to 
the SNS topic there is an important point to remember, you'll need to make 
possible for the SNS Topic to send the message to the SQS Queue.
 
 This is clear with an example.
 
-Suppose you created an SNS FIFO Topic called Order.fifo and an SQS Queue called QueueSub.fifo.
+Suppose you created an SNS FIFO Topic called `Order.fifo` and an SQS Queue called `QueueSub.fifo`.

-In the access Policy of the QueueSub.fifo you should submit something like this
+In the access Policy of the `QueueSub.fifo` you should submit something like this
 
 [source,json]
 -------------------------------------------------
diff --git a/components/camel-aws/camel-aws2-sqs/src/main/docs/aws2-sqs-component.adoc b/components/camel-aws/camel-aws2-sqs/src/main/docs/aws2-sqs-component.adoc
index 0f65cf550f3..e518365a0bc 100644
--- a/components/camel-aws/camel-aws2-sqs/src/main/docs/aws2-sqs-component.adoc
+++ b/components/camel-aws/camel-aws2-sqs/src/main/docs/aws2-sqs-component.adoc
@@ -57,7 +57,13 @@ You have to provide the amazonSQSClient in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/sqs[Amazon's SQS].
 
-== Batch Consumer
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
+== Usage
+
+=== Batch Consumer
 
 This component implements the Batch Consumer.
 
@@ -65,8 +71,6 @@ This allows you, for instance, to know how many messages exist in this
 batch and for instance, let the Aggregator
 aggregate this number of messages.
 
-== Usage
-
 === Static credentials, Default Credential Provider and Profile Credentials Provider
 
 You have the possibility of avoiding the usage of explicit static credentials 
by specifying the useDefaultCredentialsProvider option and set it to true.
@@ -86,9 +90,7 @@ Only one of static, default and profile credentials could be used at the same ti
 
 For more information about this you can look at https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
+== Examples
 
 === Advanced AmazonSQS configuration
 
@@ -128,7 +130,7 @@ to true if you need to add a fixed delay to all messages enqueued.
 There is a set of Server Side Encryption attributes for a queue. The related 
option are: `serverSideEncryptionEnabled`, `keyMasterKeyId` and 
`kmsDataKeyReusePeriod`.
 The SSE is disabled by default. You need to explicitly set the option to true 
and set the related parameters as queue attributes.
 
-== JMS-style Selectors
+=== JMS-style Selectors
 
 SQS does not allow selectors, but you can effectively achieve this by
 using the Camel Filter EIP and setting an
@@ -156,13 +158,14 @@ consumers.
 Note we must set the property `Sqs2Constants.SQS_DELETE_FILTERED` to `true` to
 instruct Camel to send the DeleteMessage, if being filtered.
 
-== Available Producer Operations
+=== Available Producer Operations
+
 - single message (default)
 - sendBatchMessage
 - deleteMessage
 - listQueues
 
-== Send Message
+=== Send Message
 
 [source,java]
 
------------------------------------------------------------------------------------------------------
@@ -171,7 +174,7 @@ from("direct:start")
   
.to("aws2-sqs://camel-1?accessKey=RAW(xxx)&secretKey=RAW(xxx)&region=eu-west-1");
 
------------------------------------------------------------------------------------------------------
 
-== Send Batch Message
+=== Send Batch Message
 
 You can set a `SendMessageBatchRequest` or an `Iterable`
 
@@ -196,7 +199,7 @@ from("direct:start")
 As result, you'll get an exchange containing a `SendMessageBatchResponse` 
instance, that you can examine to check what messages were successful and what 
not.
 The id set on each message of the batch will be a Random UUID.
 
-== Delete single Message
+=== Delete single Message
 
 Use deleteMessage operation to delete a single message. You'll need to set a 
receipt handle header for the message you want to delete.
 
@@ -210,7 +213,7 @@ from("direct:start")
 
 As result, you'll get an exchange containing a `DeleteMessageResponse` 
instance, that you can use to check if the message was deleted or not.
 
-== List Queues
+=== List Queues
 
 Use listQueues operation to list queues.
 
@@ -223,7 +226,7 @@ from("direct:start")
 
 As result, you'll get an exchange containing a `ListQueuesResponse` instance, 
that you can examine to check the actual queues.
 
-== Purge Queue
+=== Purge Queue
 
 Use purgeQueue operation to purge queue.
 
@@ -236,12 +239,12 @@ from("direct:start")
 
 As result you'll get an exchange containing a `PurgeQueueResponse` instance.
 
-== Queue Auto-creation
+=== Queue Auto-creation
 
 With the option `autoCreateQueue` users are able to avoid the autocreation of 
an SQS Queue in case it doesn't exist. The default for this option is `false`.
 If set to _false_, any operation on a non-existent queue in AWS won't be 
successful and an error will be returned.
 
-== Send Batch Message and Message Deduplication Strategy
+=== Send Batch Message and Message Deduplication Strategy
 
 In case you're using a SendBatchMessage Operation, you can set two different kinds of Message Deduplication Strategy:
 - useExchangeId
diff --git a/components/camel-aws/camel-aws2-step-functions/src/main/docs/aws2-step-functions-component.adoc b/components/camel-aws/camel-aws2-step-functions/src/main/docs/aws2-step-functions-component.adoc
index 6f57014ae36..44229968b89 100644
--- a/components/camel-aws/camel-aws2-step-functions/src/main/docs/aws2-step-functions-component.adoc
+++ b/components/camel-aws/camel-aws2-step-functions/src/main/docs/aws2-step-functions-component.adoc
@@ -62,6 +62,10 @@ You have to provide the awsSfnClient in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/step-functions/[AWS Step Functions] service.
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
 == Usage
 
 === Static credentials, Default Credential Provider and Profile Credentials Provider
@@ -83,10 +87,6 @@ Only one of static, default and profile credentials could be used at the same ti
 
 For more information about this you can look at https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === Step Functions Producer operations
 
 Camel-AWS Step Functions component provides the following operation on the producer side:
@@ -108,7 +108,9 @@ Camel-AWS Step Functions component provides the following operation on the produ
 - listExecutions
 - getExecutionHistory
 
-== Producer Examples
+== Examples
+
+=== Producer Examples
 
 - createStateMachine: this operation will create a state machine
 
@@ -118,7 +120,7 @@ from("direct:createStateMachine")
     .to("aws2-step-functions://test?awsSfnClient=#awsSfnClient&operation=createMachine")
 
--------------------------------------------------------------------------------
 
-== Using a POJO as body
+=== Using a POJO as body
 
 Sometimes building an AWS Request can be complex because of multiple options. We introduce the possibility to use a POJO as the body.
 In AWS Step Functions, there are multiple operations you can submit, as an example for Create state machine
diff --git a/components/camel-aws/camel-aws2-sts/src/main/docs/aws2-sts-component.adoc b/components/camel-aws/camel-aws2-sts/src/main/docs/aws2-sts-component.adoc
index 86fea6bd2ba..dc6cee6f227 100644
--- a/components/camel-aws/camel-aws2-sts/src/main/docs/aws2-sts-component.adoc
+++ b/components/camel-aws/camel-aws2-sts/src/main/docs/aws2-sts-component.adoc
@@ -60,6 +60,10 @@ You have to provide the amazonSTSClient in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/sts/[Amazon STS] service.
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
 == Usage
 
 === Static credentials, Default Credential Provider and Profile Credentials Provider
@@ -81,10 +85,6 @@ Only one of static, default and profile credentials could be used at the same ti
 
 For more information about this you can look at https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === STS Producer operations
 
 Camel-AWS STS component provides the following operation on the producer side:
@@ -93,7 +93,9 @@ Camel-AWS STS component provides the following operation on the producer side:
 - getSessionToken
 - getFederationToken
 
-== Producer Examples
+== Examples
+
+=== Producer Examples
 
 - assumeRole: this operation will make an AWS user assume a different role 
temporary
 
@@ -122,7 +124,7 @@ from("direct:getFederationToken")
     .to("aws2-sts://test?stsClient=#amazonSTSClient&operation=getSessionToken")
 
--------------------------------------------------------------------------------
 
-== Using a POJO as body
+=== Using a POJO as body
 
 Sometimes building an AWS Request can be complex because of multiple options. We introduce the possibility to use a POJO as the body.
 In AWS STS, as an example for Assume Role request, you can do something like:
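The Assume Role example itself is cut off by the diff context. A hedged sketch of what it could look like; the `pojoRequest=true` option and `amazonSTSClient` bean name follow the pattern of the other examples on this page, and the role ARN and session name are placeholders:

```java
import software.amazon.awssdk.services.sts.model.AssumeRoleRequest;

// Build the AssumeRole request as a POJO and send it as the message body.
from("direct:assumeRolePojo")
    .setBody(exchange -> AssumeRoleRequest.builder()
        .roleArn("arn:aws:iam::123456789012:role/example-role")
        .roleSessionName("example-session")
        .build())
    .to("aws2-sts://test?stsClient=#amazonSTSClient&operation=assumeRole&pojoRequest=true");
```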
diff --git a/components/camel-aws/camel-aws2-timestream/src/main/docs/aws2-timestream-component.adoc b/components/camel-aws/camel-aws2-timestream/src/main/docs/aws2-timestream-component.adoc
index ddb05c3971d..72d0ea147ff 100644
--- a/components/camel-aws/camel-aws2-timestream/src/main/docs/aws2-timestream-component.adoc
+++ b/components/camel-aws/camel-aws2-timestream/src/main/docs/aws2-timestream-component.adoc
@@ -72,6 +72,10 @@ You have to provide either the awsTimestreamWriteClient(for write operations) or
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/timestream/[AWS Timestream] service.
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
 == Usage
 
 === Static credentials, Default Credential Provider and Profile Credentials Provider
@@ -93,10 +97,6 @@ Only one of static, default and profile credentials could be used at the same ti
 
 For more information about this you can look at https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === Timestream Producer operations
 
 Camel-AWS Timestream component provides the following operation on the producer side:
@@ -132,7 +132,9 @@ Camel-AWS Timestream component provides the following operation on the producer
 ** query
 ** cancelQuery
 
-== Producer Examples
+== Examples
+
+=== Producer Examples
 
 * Write Operation
 ** createDatabase: this operation will create a timestream database
@@ -155,7 +157,7 @@ from("direct:query")
     .to("aws2-timestream://query:test?awsTimestreamQueryClient=#awsTimestreamQueryClient&operation=query")
 
--------------------------------------------------------------------------------
 
-== Using a POJO as body
+=== Using a POJO as body
 
 Sometimes building an AWS Request can be complex because of multiple options. We introduce the possibility to use a POJO as the body.
 In AWS Timestream there are multiple operations you can submit, as an example for Create state machine
diff --git a/components/camel-aws/camel-aws2-translate/src/main/docs/aws2-translate-component.adoc b/components/camel-aws/camel-aws2-translate/src/main/docs/aws2-translate-component.adoc
index 787e12229f1..95df1db17d7 100644
--- a/components/camel-aws/camel-aws2-translate/src/main/docs/aws2-translate-component.adoc
+++ b/components/camel-aws/camel-aws2-translate/src/main/docs/aws2-translate-component.adoc
@@ -55,6 +55,10 @@ You have to provide the amazonTranslateClient in the
 Registry or your accessKey and secretKey to access
 the https://aws.amazon.com/translate/[Amazon Translate] service.
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
+
 == Usage
 
 === Static credentials, Default Credential Provider and Profile Credentials Provider
@@ -76,17 +80,15 @@ Only one of static, default and profile credentials could be used at the same ti
 
 For more information about this you can look at https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS credentials documentation]
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === Translate Producer operations
 
 Camel-AWS Translate component provides the following operation on the producer 
side:
 
 - translateText
 
-== Translate Text example
+== Examples
+
+=== Translate Text example
 
 [source,java]
 
------------------------------------------------------------------------------------------------------
@@ -99,7 +101,7 @@ from("direct:start")
 
 As a result, you'll get an exchange containing the translated text.
 
-== Using a POJO as body
+=== Using a POJO as body
 
 Sometimes building an AWS Request can be complex because of multiple options. We introduce the possibility to use a POJO as the body.
 In AWS Translate, the only operation available is TranslateText, so you can do something like:
diff --git a/components/camel-azure/camel-azure-cosmosdb/src/main/docs/azure-cosmosdb-component.adoc b/components/camel-azure/camel-azure-cosmosdb/src/main/docs/azure-cosmosdb-component.adoc
index dc42e9dcde7..6a7cddc9732 100644
--- a/components/camel-azure/camel-azure-cosmosdb/src/main/docs/azure-cosmosdb-component.adoc
+++ b/components/camel-azure/camel-azure-cosmosdb/src/main/docs/azure-cosmosdb-component.adoc
@@ -64,8 +64,9 @@ include::partial$component-endpoint-options.adoc[]
 
 // endpoint options: END
 
+== Usage
 
-== Authentication Information
+=== Authentication Information
 
 To use this component, you have two options to provide the required Azure 
authentication information:
 
@@ -75,22 +76,12 @@ be generated through your CosmosDB Azure portal.
 provided into `cosmosAsyncClient`.
 
 
-== Async Consumer and Producer
+=== Async Consumer and Producer
 
 This component implements the async Consumer and producer.
 
 This allows camel route to consume and produce events asynchronously without 
blocking any threads.
 
-== Usage
-
-For example, to consume records from a specific container in a specific database to a file, use the following snippet:
-
-[source,java]
---------------------------------------------------------------------------------
-from("azure-cosmosdb://camelDb/myContainer?accountKey=MyaccountKey&databaseEndpoint=https//myazure.com:443&leaseDatabaseName=myLeaseDB&createLeaseDatabaseIfNotExists=true&createLeaseContainerIfNotExists=true").
-to("file://directory");
---------------------------------------------------------------------------------
-
 === Message headers evaluated by the component producer
 [width="100%",cols="10%,10%,10%,70%",options="header",]
 |=======================================================================
@@ -177,7 +168,21 @@ For these operations, `databaseName` and `containerName` is *required* for all o
 
 Refer to the example section in this page to learn how to use these operations 
into your camel application.
 
-==== Examples
+== Examples
+
+
+=== Consuming records from a specific container
+
+For example, to consume records from a specific container in a specific database to a file, use the following snippet:
+
+[source,java]
+--------------------------------------------------------------------------------
+from("azure-cosmosdb://camelDb/myContainer?accountKey=MyaccountKey&databaseEndpoint=https//myazure.com:443&leaseDatabaseName=myLeaseDB&createLeaseDatabaseIfNotExists=true&createLeaseContainerIfNotExists=true").
+to("file://directory");
+--------------------------------------------------------------------------------
+
+=== Operations
+
 - `listDatabases`:
 
 [source,java]
@@ -358,7 +363,8 @@ To use the Camel Azure CosmosDB, `containerName` and `databaseName` are required
 
 The consumer will set `List<Map<String,>>` in exchange message body which 
reflect the list of items in a single feed.
 
-==== Example:
+==== Example
+
 For example, to listen to the events in `myContainer` container in `myDb`:
 
 [source,java]
@@ -368,7 +374,7 @@ from("azure-cosmosdb://myDb/myContainer?leaseDatabaseName=myLeaseDb&createLeaseD
 
--------------------------------------------------------------------------------
 
 
-=== Development Notes (Important)
+== Important Development Notes
 
 When developing on this component, you will need to obtain your Azure 
accessKey in order to run the integration tests. In addition to the mocked unit 
tests,
 you *will need to run the integration tests with every change you make or even 
client upgrade as the Azure client can break things even on minor versions 
upgrade.*
diff --git a/components/camel-azure/camel-azure-eventhubs/src/main/docs/azure-eventhubs-component.adoc b/components/camel-azure/camel-azure-eventhubs/src/main/docs/azure-eventhubs-component.adoc
index 921ad086fb6..3a107549a35 100644
--- a/components/camel-azure/camel-azure-eventhubs/src/main/docs/azure-eventhubs-component.adoc
+++ b/components/camel-azure/camel-azure-eventhubs/src/main/docs/azure-eventhubs-component.adoc
@@ -63,8 +63,13 @@ include::partial$component-endpoint-options.adoc[]
 
 // endpoint options: END
 
+// component headers: START
+include::partial$component-endpoint-headers.adoc[]
+// component headers: END
 
-== Authentication Information
+== Usage
+
+=== Authentication Information
 
 You have three different Credential Types: AZURE_IDENTITY, TOKEN_CREDENTIAL 
and CONNECTION_STRING. You can also provide a client instance yourself.
 To use this component, you have three options to provide the required Azure 
authentication information:
@@ -81,7 +86,7 @@ as these data already included in the `connectionString`, 
therefore is the simpl
 - Provide an implementation of `com.azure.core.credential.TokenCredential` 
into the Camel's Registry, e.g., using the 
`com.azure.identity.DefaultAzureCredentialBuilder().build();` API.
 See the documentation 
https://docs.microsoft.com/en-us/azure/active-directory/authentication/overview-authentication[here
 about Azure-AD authentication].
 
-AZURE_IDENTITY:
+*AZURE_IDENTITY*:
 - This will use `com.azure.identity.DefaultAzureCredentialBuilder().build();` 
instance. This will follow the Default Azure Credential Chain.
 See the documentation 
https://docs.microsoft.com/en-us/azure/active-directory/authentication/overview-authentication[here
 about Azure-AD authentication].
 
@@ -90,7 +95,7 @@ See the documentation 
https://docs.microsoft.com/en-us/azure/active-directory/au
 - Provide a 
https://docs.microsoft.com/en-us/java/api/com.azure.messaging.eventhubs.eventhubproducerasyncclient?view=azure-java-stable[EventHubProducerAsyncClient]
 instance which can be
 provided into `producerAsyncClient`. However, this is *only possible for the camel producer*; for the camel consumer, it is not possible to inject the client due to a design constraint of the `EventProcessorClient`.
 
-== Checkpoint Store Information
+=== Checkpoint Store Information
 
 A checkpoint store stores and retrieves partition ownership information and 
checkpoint details for each partition in a given consumer group of an event hub 
instance. Users are not meant to implement a CheckpointStore.
 Users are expected to choose existing implementations of this interface, 
instantiate it, and pass it to the component through `checkpointStore` option.
@@ -103,15 +108,14 @@ If you chose to use the default `BlobCheckpointStore`, 
you will need to supply t
 - `blobAccessKey`: It sets the access key for the associated azure account 
name to be used for authentication with azure blob services.
 - `blobContainerName`: It sets the blob container that shall be used by the 
BlobCheckpointStore to store the checkpoint offsets.
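Assuming placeholder values for the three blob options (none of these names are real credentials), a consumer endpoint URI wiring them together might be built like this sketch:

```java
public class EventHubsUriSketch {
    // Builds the endpoint URI for a consumer using the default
    // BlobCheckpointStore. All three argument values are placeholders.
    static String checkpointUri(String account, String key, String container) {
        return "azure-eventhubs:namespace/eventHubName"
                + "?blobAccountName=" + account
                + "&blobAccessKey=" + key
                + "&blobContainerName=" + container;
    }

    public static void main(String[] args) {
        System.out.println(checkpointUri("myAccount", "myKey", "myContainer"));
    }
}
```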
 
-
-== Async Consumer and Producer
+=== Async Consumer and Producer
 
 This component implements the async Consumer and producer.
 
 This allows the Camel route to consume and produce events asynchronously without blocking any threads.
 
 
-== Usage
+== Examples
 
 For example, to consume event from EventHub, use the following snippet:
 
@@ -121,10 +125,6 @@ 
from("azure-eventhubs:/camel/camelHub?sharedAccessName=SASaccountName&sharedAcce
   .to("file://queuedirectory");
 
--------------------------------------------------------------------------------
 
-// component headers: START
-include::partial$component-endpoint-headers.adoc[]
-// component headers: END
-
 === Message body type
 The component's producer expects the data in the message body to be in 
`byte[]`. This allows the user to utilize Camel TypeConverter to 
marshal/unmarshal data with ease.
 The same applies to the component's consumer: it will set the encoded data as `byte[]` in the message body.
@@ -192,7 +192,7 @@ from("direct:start")
 
.to("azure-eventhubs:namespace/eventHubName?tokenCredential=#myTokenCredential&credentialType=TOKEN_CREDENTIAL)"
 ----
 
-=== Development Notes (Important)
+== Important Development Notes
 
 When developing on this component, you will need to obtain your Azure accessKey to run the integration tests. In addition to the mocked unit tests,
 you *will need to run the integration tests with every change you make, or even on a client upgrade, as the Azure client can break things even on minor version upgrades.*
diff --git 
a/components/camel-azure/camel-azure-files/src/main/docs/azure-files-component.adoc
 
b/components/camel-azure/camel-azure-files/src/main/docs/azure-files-component.adoc
index 837b8a0475a..66867640d72 100644
--- 
a/components/camel-azure/camel-azure-files/src/main/docs/azure-files-component.adoc
+++ 
b/components/camel-azure/camel-azure-files/src/main/docs/azure-files-component.adoc
@@ -96,7 +96,9 @@ 
azure-files://camelazurefiles.file.core.windows.net/samples?sv=2022-11-02&ss=f&s
 
azure-files://camelazurefiles/samples/inbox/spam?sharedKey=FAKE502UyuBD...3Z%2BASt9dCmJg%3D%3D&delete=true
 ----
 
-== Paths
+== Usage
+
+=== Paths
 
 The path separator is `/`. The absolute paths start with the path separator.
 The absolute paths do not include the share name, and they are relative
@@ -107,11 +109,11 @@ appears, and the relative paths are relative to the share 
root (rather than
 to the current working directory or to the endpoint starting directory)
 so interpret them with a grain of salt.
 
-== Concurrency
+=== Concurrency
 
 This component does not support concurrency on its endpoints.
 
-== More Information
+=== More Information
 
 This component mimics the FTP component.
 So, there are more samples and details on the FTP
@@ -119,7 +121,7 @@ component page.
 
 This component uses the Azure Java SDK libraries for the actual work.
 
-== Consuming Files
+=== Consuming Files
 
 The remote consumer will by default leave the consumed
 files untouched on the remote cloud files server. You have to configure it
@@ -135,7 +137,7 @@ default move files to a `.camel` sub directory. The reason 
Camel does
 *not* do this by default for the remote consumer is that it may lack
 permissions by default to be able to move or delete files.
 
-=== Body Type Options
+==== Body Type Options
 
 For each matching file, the consumer sends to the Camel exchange
 a message with a selected body type:
@@ -147,7 +149,7 @@ a message with a selected body type:
 The body type configuration should be tuned to fit available resources,
 performance targets, route processors, caching, resuming, etc.
 
-=== Limitations
+==== Limitations
 
 The option *readLock* can be used to force Camel *not* to consume files
 that are currently in the progress of being written. However, this option
@@ -180,7 +182,7 @@ The consumer sets the following exchange properties
 |`CamelBatchComplete` | True if there are no more files in this batch.
 |=======================================================================
 
-== Producing Files
+=== Producing Files
 
 The Files producer is optimized for two body types:
 
@@ -192,7 +194,7 @@ and then rewritten with body content. Any inconsistency 
between
 declared file length and stream length results in a corrupted
 remote file.
 
-=== Limitations
+==== Limitations
 
 The underlying Azure Files service does not allow growing files. The file
 length must be known at its creation time, consequently:
@@ -202,7 +204,7 @@ length must be known at its creation time, consequently:
   - No appending mode is supported.
 
 
-== About Timeouts
+=== About Timeouts
 
 You can use the `connectTimeout` option to set
 a timeout in millis to connect or disconnect. 
@@ -216,7 +218,7 @@ For now, the file upload has no timeout. During the upload,
 the underlying library could log timeout warnings. They are
 recoverable and upload could continue.   
 
-== Using Local Work Directory
+=== Using Local Work Directory
 
 Camel supports consuming from remote files servers and downloading the
 files directly into a local work directory. This avoids reading the
@@ -245,7 +247,7 @@ The `java.io.File` handle is then used as the Exchange 
body. The file producer l
 As Camel knows it's a local work file, it can optimize and use a rename 
instead of a file copy, as the work file is meant to be deleted anyway.
 ====
 
-== Custom Filtering
+=== Custom Filtering
 
 Camel supports pluggable filtering strategies. This strategy is to use the built-in `org.apache.camel.component.file.GenericFileFilter` in
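As a rough illustration of such a filter's accept logic, stripped of the Camel `GenericFileFilter` types (so this is a sketch of the idea, not the component's API):

```java
public class FileFilterSketch {
    // Directories are accepted so the consumer can recurse into them;
    // regular files must match the extension and be non-empty.
    static boolean accept(boolean directory, String fileName, long length) {
        if (directory) {
            return true;
        }
        return fileName.endsWith(".txt") && length > 0;
    }

    public static void main(String[] args) {
        System.out.println(accept(false, "report.txt", 120)); // true
        System.out.println(accept(false, "image.png", 999));  // false
    }
}
```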
@@ -267,7 +269,7 @@ The accept(file) file argument has properties:
   - file length: if not a directory, then a length of the file in bytes
 
 
-== Filtering using ANT path matcher
+=== Filtering using ANT path matcher
 
 The ANT path matcher is a filter shipped out-of-the-box in the
 *camel-spring* jar. So you need to depend on *camel-spring* if you are
@@ -289,13 +291,13 @@ The sample below demonstrates how to use it:
 from("azure-files://...&antInclude=**/*.txt").to("...");
 ----
 
-== Using a Proxy
+=== Using a Proxy
 
 Consult the 
https://learn.microsoft.com/en-us/azure/developer/java/sdk/proxying[underlying 
library]
 documentation.
 
 
-== Consuming a single file using a fixed name
+=== Consuming a single file using a fixed name
 
 Unlike FTP component that features a special combination of options:
   
@@ -307,8 +309,7 @@ to optimize _the single file using a fixed name_ use case,
 it is necessary to fall back to regular filters (i.e. the list
 permission is needed). 
 
-
-== Debug logging
+=== Debug logging
 
 This component has log level *TRACE* that can be helpful if you have
 problems.
diff --git 
a/components/camel-azure/camel-azure-servicebus/src/main/docs/azure-servicebus-component.adoc
 
b/components/camel-azure/camel-azure-servicebus/src/main/docs/azure-servicebus-component.adoc
index c48ae2fd4d6..147f77cc8a8 100644
--- 
a/components/camel-azure/camel-azure-servicebus/src/main/docs/azure-servicebus-component.adoc
+++ 
b/components/camel-azure/camel-azure-servicebus/src/main/docs/azure-servicebus-component.adoc
@@ -48,17 +48,16 @@ include::partial$component-endpoint-options.adoc[]
 
 // endpoint options: END
 
-
-== Consumer and Producer
-
-This component implements the Consumer and Producer.
-
-== Usage
-
 // component headers: START
 include::partial$component-endpoint-headers.adoc[]
 // component headers: END
 
+== Usage
+
+=== Consumer and Producer
+
+This component implements the Consumer and Producer.
+
 === Authentication Information
 
 You have three different Credential Types: AZURE_IDENTITY, TOKEN_CREDENTIAL 
and CONNECTION_STRING. You can also provide a client instance yourself.
@@ -106,7 +105,8 @@ In the consumer, the returned message body will be of type 
`String`.
 |===
 
 
-==== Examples
+== Examples
+
 - `sendMessages`
 
 [source,java]
diff --git 
a/components/camel-azure/camel-azure-storage-blob/src/main/docs/azure-storage-blob-component.adoc
 
b/components/camel-azure/camel-azure-storage-blob/src/main/docs/azure-storage-blob-component.adoc
index fd726e36982..1a8f246bda7 100644
--- 
a/components/camel-azure/camel-azure-storage-blob/src/main/docs/azure-storage-blob-component.adoc
+++ 
b/components/camel-azure/camel-azure-storage-blob/src/main/docs/azure-storage-blob-component.adoc
@@ -178,6 +178,8 @@ and existing blocks together. Any blocks not specified in 
the block list and per
 
 Refer to the example section on this page to learn how to use these operations in your Camel application.
 
+== Examples
+
 === Consumer Examples
 
 To consume a blob into a file using the file component, this can be done like 
this:
@@ -580,7 +582,7 @@ from("direct:copyBlob")
   
.to("azure-storage-blob://account/containerblob2?operation=uploadBlockBlob&credentialType=AZURE_SAS")
 
--------------------------------------------------------------------------------
 
-=== Development Notes (Important)
+== Important Development Notes
 
 All integration tests use https://www.testcontainers.org/[Testcontainers] and 
run by default.
 An Azure accessKey and accountName are needed to be able to run all integration tests using Azure services.
diff --git 
a/components/camel-azure/camel-azure-storage-datalake/src/main/docs/azure-storage-datalake-component.adoc
 
b/components/camel-azure/camel-azure-storage-datalake/src/main/docs/azure-storage-datalake-component.adoc
index a312e8ef95a..273619ab158 100644
--- 
a/components/camel-azure/camel-azure-storage-datalake/src/main/docs/azure-storage-datalake-component.adoc
+++ 
b/components/camel-azure/camel-azure-storage-datalake/src/main/docs/azure-storage-datalake-component.adoc
@@ -146,6 +146,8 @@ you must first register the query acceleration feature with 
your subscription.
 
 Refer to the examples section below for more details on how to use these 
operations
 
+== Examples
+
 === Consumer Examples
 To consume a file from the storage datalake into a file using the file 
component, this can be done like this:
 
diff --git 
a/components/camel-azure/camel-azure-storage-queue/src/main/docs/azure-storage-queue-component.adoc
 
b/components/camel-azure/camel-azure-storage-queue/src/main/docs/azure-storage-queue-component.adoc
index 11616c85703..a5cff960e20 100644
--- 
a/components/camel-azure/camel-azure-storage-queue/src/main/docs/azure-storage-queue-component.adoc
+++ 
b/components/camel-azure/camel-azure-storage-queue/src/main/docs/azure-storage-queue-component.adoc
@@ -158,6 +158,8 @@ For these operations, `accountName` and `queueName` are 
*required*.
 
 Refer to the example section on this page to learn how to use these operations in your Camel application.
 
+== Examples
+
 === Consumer Examples
 To consume a queue into a file component with maximum five messages in one 
batch, this can be done like this:
 
@@ -307,7 +309,7 @@ from("direct:start")
 -----------------------------------------------------------------------
 
 
-=== Development Notes (Important)
+== Important Development Notes
 
 When developing on this component, you will need to obtain your Azure `accessKey` to run the integration tests. In addition to the mocked unit tests,
 you *will need to run the integration tests with every change you make, or even on a client upgrade, as the Azure client can break things even on minor version upgrades.*
diff --git a/components/camel-bean/src/main/docs/bean-component.adoc 
b/components/camel-bean/src/main/docs/bean-component.adoc
index 2267c6cdc90..9d33852c60d 100644
--- a/components/camel-bean/src/main/docs/bean-component.adoc
+++ b/components/camel-bean/src/main/docs/bean-component.adoc
@@ -96,7 +96,7 @@ The bean component can also call a bean by _bean id_ by 
looking up the bean
 in the xref:manual::registry.adoc[Registry] instead of using the class name.
 ====
 
-== Java DSL specific bean syntax
+=== Java DSL specific bean syntax
 
 Java DSL comes with syntactic sugar for the xref:bean-component.adoc[Bean]
 component. Instead of specifying the bean explicitly as the endpoint
diff --git a/components/camel-bean/src/main/docs/class-component.adoc 
b/components/camel-bean/src/main/docs/class-component.adoc
index 606d5bb7bf7..32faee6b4e8 100644
--- a/components/camel-bean/src/main/docs/class-component.adoc
+++ b/components/camel-bean/src/main/docs/class-component.adoc
@@ -46,7 +46,7 @@ include::partial$component-endpoint-options.adoc[]
 include::partial$component-endpoint-headers.adoc[]
 // component headers: END
 
-== Using
+== Usage
 
 You simply use the *class* component just as the xref:bean-component.adoc[Bean]
 component but by specifying the fully qualified class name instead.
@@ -69,7 +69,9 @@ from("direct:start")
     .to("mock:result");
 
--------------------------------------------------------------------------------------------------------------
 
-== Setting properties on the created instance
+== Examples
+
+=== Setting properties on the created instance
 
 In the endpoint uri you can specify properties to set on the created
 instance, for example, if it has a `setPrefix` method:
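Conceptually, such an endpoint option resolves to invoking the matching setter by name. A rough sketch of that mechanism (not the component's actual implementation; `MyBean` and `prefix` are illustrative names):

```java
import java.lang.reflect.Method;

public class PropertySetterSketch {
    public static class MyBean {
        private String prefix;
        public void setPrefix(String prefix) { this.prefix = prefix; }
        public String getPrefix() { return prefix; }
    }

    // Invokes set<Name>(String) on the target, mirroring what an
    // endpoint option such as a prefix property would trigger.
    static void setProperty(Object target, String name, String value) {
        try {
            String setter = "set" + Character.toUpperCase(name.charAt(0)) + name.substring(1);
            Method m = target.getClass().getMethod(setter, String.class);
            m.invoke(target, value);
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("No setter for " + name, e);
        }
    }

    public static void main(String[] args) {
        MyBean bean = new MyBean();
        setProperty(bean, "prefix", "Bye");
        System.out.println(bean.getPrefix()); // prints Bye
    }
}
```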
diff --git a/components/camel-bindy/src/main/docs/bindy-dataformat.adoc 
b/components/camel-bindy/src/main/docs/bindy-dataformat.adoc
index ed4414caf4a..080c071b973 100644
--- a/components/camel-bindy/src/main/docs/bindy-dataformat.adoc
+++ b/components/camel-bindy/src/main/docs/bindy-dataformat.adoc
@@ -63,7 +63,9 @@ you can put multiple models in the same package.
 :shortName: bindyCsv
 include::partial$dataformat-options.adoc[]
 
-== Annotations
+== Usage
+
+=== Annotations
 
 The annotations created allow mapping different concept of your model to
 the POJO like:
@@ -1666,7 +1668,7 @@ public static class OrderNumberFormatFactory extends 
AbstractFormatFactory {
 }
 ----
 
-== Supported Datatypes
+=== Supported Datatypes
 
 The DefaultFormatFactory makes formatting of the following datatype available 
by
 returning an instance of the interface FormatFactoryInterface based on the 
provided
@@ -1693,7 +1695,7 @@ FormattingOptions:
 The DefaultFormatFactory can be overridden by providing an instance of
 FactoryRegistry in the registry in use (e.g., Spring or JNDI).
 
-== Using the Java DSL
+=== Using the Java DSL
 
 The next step instantiates the DataFormat _bindy_ class
 associated with this record type and providing a class as a parameter.
@@ -1825,7 +1827,7 @@ from("direct:handleOrders")
    .to("file://outbox")
 ----
 
-== Using Spring XML
+=== Using Spring XML
 
 It is really easy to use Spring as your favorite DSL language to declare the routes to be used for camel-bindy. The following example
diff --git a/components/camel-bonita/src/main/docs/bonita-component.adoc 
b/components/camel-bonita/src/main/docs/bonita-component.adoc
index 1d79e45b222..77c58b43135 100644
--- a/components/camel-bonita/src/main/docs/bonita-component.adoc
+++ b/components/camel-bonita/src/main/docs/bonita-component.adoc
@@ -37,8 +37,9 @@ include::partial$component-endpoint-options.adoc[]
 
 // endpoint options: END
 
+== Usage
 
-== Body content
+=== Body content
 
 For the startCase operation, the input variables are retrieved from the message body, which must contain a `Map<String,Serializable>`.
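A body satisfying that contract could be built as follows (the variable names here are hypothetical, not required by Bonita):

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

public class BonitaBodySketch {
    // Builds the Map<String, Serializable> expected as the message body
    // for the startCase operation; keys are hypothetical case variables.
    static Map<String, Serializable> caseVariables() {
        Map<String, Serializable> variables = new HashMap<>();
        variables.put("customerName", "Jane Doe");
        variables.put("amount", 42); // Integer is Serializable
        return variables;
    }

    public static void main(String[] args) {
        System.out.println(caseVariables().size()); // prints 2
    }
}
```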
diff --git 
a/components/camel-box/camel-box-component/src/main/docs/box-component.adoc 
b/components/camel-box/camel-box-component/src/main/docs/box-component.adoc
index fd538521b1b..3abe0486334 100644
--- a/components/camel-box/camel-box-component/src/main/docs/box-component.adoc
+++ b/components/camel-box/camel-box-component/src/main/docs/box-component.adoc
@@ -40,22 +40,6 @@ for this component:
 </dependency>
 ----
 
-== Connection Authentication Types
-
-The Box component supports three different types of authenticated connections.
-
-=== Standard Authentication
-
-*Standard Authentication* uses the *OAuth 2.0 three-legged authentication 
process* to authenticate its connections with Box.com. This type of 
authentication enables Box *managed users* and *external users* to access, 
edit, and save their Box content through the Box component.
-
-=== App Enterprise Authentication
-
-*App Enterprise Authentication* uses the *OAuth 2.0 with JSON Web Tokens 
(JWT)* to authenticate its connections as a *Service Account* for a *Box 
Application*. This type of authentication enables a service account to access, 
edit, and save the Box content of its *Box Application* through the Box 
component.
-
-=== App User Authentication
-
-*App User Authentication* uses the *OAuth 2.0 with JSON Web Tokens (JWT)* to 
authenticate its connections as an *App User* for a *Box Application*. This 
type of authentication enables app users to access, edit, and save their Box 
content in its *Box Application* through the Box component.
-
 // component-configure options: START
 
 // component-configure options: END
@@ -69,6 +53,24 @@ include::partial$component-endpoint-options.adoc[]
 
 // endpoint options: END
 
+== Usage
+
+=== Connection Authentication Types
+
+The Box component supports three different types of authenticated connections.
+
+==== Standard Authentication
+
+*Standard Authentication* uses the *OAuth 2.0 three-legged authentication 
process* to authenticate its connections with Box.com. This type of 
authentication enables Box *managed users* and *external users* to access, 
edit, and save their Box content through the Box component.
+
+==== App Enterprise Authentication
+
+*App Enterprise Authentication* uses the *OAuth 2.0 with JSON Web Tokens 
(JWT)* to authenticate its connections as a *Service Account* for a *Box 
Application*. This type of authentication enables a service account to access, 
edit, and save the Box content of its *Box Application* through the Box 
component.
+
+==== App User Authentication
+
+*App User Authentication* uses the *OAuth 2.0 with JSON Web Tokens (JWT)* to 
authenticate its connections as an *App User* for a *Box Application*. This 
type of authentication enables app users to access, edit, and save their Box 
content in its *Box Application* through the Box component.
+
 == Examples
 
 The following route uploads new files to the user's root folder:
diff --git a/components/camel-browse/src/main/docs/browse-component.adoc 
b/components/camel-browse/src/main/docs/browse-component.adoc
index d2c1baf128c..42722d650cc 100644
--- a/components/camel-browse/src/main/docs/browse-component.adoc
+++ b/components/camel-browse/src/main/docs/browse-component.adoc
@@ -41,7 +41,7 @@ include::partial$component-endpoint-options.adoc[]
 // endpoint options: END
 
 
-== Example
+== Examples
 
 In the route below, we insert a `browse:` component to be able to browse
 the Exchanges that are passing through:
diff --git 
a/components/camel-caffeine/src/main/docs/caffeine-cache-component.adoc 
b/components/camel-caffeine/src/main/docs/caffeine-cache-component.adoc
index 98a3476ed6c..9b30ac3a130 100644
--- a/components/camel-caffeine/src/main/docs/caffeine-cache-component.adoc
+++ b/components/camel-caffeine/src/main/docs/caffeine-cache-component.adoc
@@ -53,6 +53,15 @@ include::partial$component-endpoint-options.adoc[]
 include::partial$component-endpoint-headers.adoc[]
 // component headers: END
 
+== Usage
+
+=== Checking the operation result
+
+Each time you use an operation on the cache, there are two different headers to check for the status:
+
+* `CaffeineConstants.ACTION_HAS_RESULT`
+* `CaffeineConstants.ACTION_SUCCEEDED`
+
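In a route, these arrive as ordinary exchange headers. A trivial check could look like the following sketch; the literal header names are assumptions standing in for the `CaffeineConstants` values, and the real constants live in the camel-caffeine component:

```java
import java.util.HashMap;
import java.util.Map;

public class CaffeineHeaderCheck {
    // Assumed stand-ins for CaffeineConstants.ACTION_SUCCEEDED and
    // CaffeineConstants.ACTION_HAS_RESULT.
    static final String ACTION_SUCCEEDED = "CamelCaffeineActionSucceeded";
    static final String ACTION_HAS_RESULT = "CamelCaffeineActionHasResult";

    static boolean operationReturnedValue(Map<String, Object> headers) {
        return Boolean.TRUE.equals(headers.get(ACTION_SUCCEEDED))
                && Boolean.TRUE.equals(headers.get(ACTION_HAS_RESULT));
    }

    public static void main(String[] args) {
        Map<String, Object> headers = new HashMap<>();
        headers.put(ACTION_SUCCEEDED, true);
        headers.put(ACTION_HAS_RESULT, true);
        System.out.println(operationReturnedValue(headers)); // true
    }
}
```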
 == Examples
 
 You can use your cache with the following code:
@@ -79,11 +88,4 @@ protected RouteBuilder createRouteBuilder() throws Exception 
{
 
 In this way, you'll always work on the same cache in the registry.
 
-== Checking the operation result
-
-Each time you'll use an operation on the cache, you'll have two different 
headers to check for status:
-
-* `CaffeineConstants.ACTION_HAS_RESULT`
-* `CaffeineConstants.ACTION_SUCCEEDED`
-
 include::spring-boot:partial$starter.adoc[]
diff --git a/components/camel-cassandraql/src/main/docs/cql-component.adoc 
b/components/camel-cassandraql/src/main/docs/cql-component.adoc
index 0422bdefa4c..fcbf4bc3ab6 100644
--- a/components/camel-cassandraql/src/main/docs/cql-component.adoc
+++ b/components/camel-cassandraql/src/main/docs/cql-component.adoc
@@ -45,7 +45,9 @@ include::partial$component-endpoint-options.adoc[]
 include::partial$component-endpoint-headers.adoc[]
 // component headers: END
 
-== Endpoint Connection Syntax
+== Usage
+
+=== Endpoint Connection Syntax
 
 The endpoint can initiate the Cassandra connection or use an existing
 one.
@@ -64,9 +66,9 @@ To fine-tune the Cassandra connection (SSL options, pooling 
options,
 load balancing policy, retry policy, reconnection policy...), create
 your own Cluster instance and give it to the Camel endpoint.
 
-== Messages
+=== Messages
 
-=== Incoming Message
+==== Incoming Message
 
 The Camel Cassandra endpoint expects simple objects (`Object`, `Object[]`, or `Collection<Object>`) which will be bound to the CQL
@@ -78,7 +80,7 @@ Headers:
 * `CamelCqlQuery` (optional, `String` or `RegularStatement`): CQL query
 either as a plain String or built using the `QueryBuilder`.
 
-=== Outgoing Message
+==== Outgoing Message
 
 The Camel Cassandra endpoint produces one or more Cassandra Row objects, depending on the `resultSetConversionStrategy`:
@@ -89,7 +91,7 @@ objects depending on the `resultSetConversionStrategy`:
 * Anything else, if `resultSetConversionStrategy` is a custom
 implementation of the `ResultSetConversionStrategy`
 
-== Repositories
+=== Repositories
 
 Cassandra can be used to store message keys or messages for the
 idempotent and aggregation EIP.
@@ -100,7 +102,7 @@ anti-patterns queues and queue like datasets]. It's advised 
to use
 LeveledCompaction and a small GC grace setting for these tables to allow
 tombstoned rows to be removed quickly.
 
-== Idempotent repository
+==== Idempotent repository
 
 The `NamedCassandraIdempotentRepository` stores messages keys in a
 Cassandra table like this:
@@ -142,7 +144,7 @@ Alternatively, the `CassandraIdempotentRepository` does not 
have a
 `LOCAL_QUORUM`…
 |=======================================================================
 
-== Aggregation repository
+==== Aggregation repository
 
 The `NamedCassandraAggregationRepository` stores exchanges by
 correlation key in a Cassandra table like this:
diff --git a/components/camel-cbor/src/main/docs/cbor-dataformat.adoc 
b/components/camel-cbor/src/main/docs/cbor-dataformat.adoc
index dabad671a4d..49725fcbf57 100644
--- a/components/camel-cbor/src/main/docs/cbor-dataformat.adoc
+++ b/components/camel-cbor/src/main/docs/cbor-dataformat.adoc
@@ -24,13 +24,15 @@ from("activemq:My.Queue")
     .to("mqseries:Another.Queue");
 -------------------------------
 
-== CBOR Options
+== Usage
+
+=== CBOR Options
 
 // dataformat options: START
 include::partial$dataformat-options.adoc[]
 // dataformat options: END
 
-=== Using CBOR in Spring DSL
+==== Using CBOR in Spring DSL
 
 When using Data Format in Spring DSL, you need to
 declare the data formats first. This is done in the *DataFormats* XML
diff --git a/components/camel-chunk/src/main/docs/chunk-component.adoc 
b/components/camel-chunk/src/main/docs/chunk-component.adoc
index f6c938fb6ee..de73618e99a 100644
--- a/components/camel-chunk/src/main/docs/chunk-component.adoc
+++ b/components/camel-chunk/src/main/docs/chunk-component.adoc
@@ -51,7 +51,9 @@ with extensions _.chtml_ or _.cxml. _If you need to specify a 
different
 folder or extensions, you will need to use the specific options listed
 above.
 
-== Chunk Context
+== Usage
+
+=== Chunk Context
 
 Camel will provide exchange information in the Chunk context (just
 a `Map`). The `Exchange` is transferred as:
@@ -78,7 +80,7 @@ a `Map`). The `Exchange` is transferred as:
 |`response` |The Out message (only for InOut message exchange pattern).
 |=======================================================================
 
-== Dynamic templates
+=== Dynamic templates
 
 Camel provides two headers by which you can define a different resource
 location for a template or the template content itself. If any of these
@@ -133,7 +135,7 @@ 
to("chunk:file_example?themeFolder=template&themeSubfolder=subfolder&extension=c
 In this example, the Chunk component will look for the file
 `file_example.chunk` in the folder `template/subfolder`.
 
-== The Email Example
+=== The Email Example
 
 In this sample, we want to use Chunk templating for an order confirmation 
email.
 The email template is laid out in Chunk as:
diff --git a/components/camel-coap/src/main/docs/coap-component.adoc 
b/components/camel-coap/src/main/docs/coap-component.adoc
index 21b10b53a2f..11c0825df4b 100644
--- a/components/camel-coap/src/main/docs/coap-component.adoc
+++ b/components/camel-coap/src/main/docs/coap-component.adoc
@@ -19,23 +19,6 @@ allows you to work with CoAP, a lightweight REST-type 
protocol for machine-to-ma
 http://coap.technology/[CoAP], Constrained Application Protocol is a 
specialized web transfer protocol 
 for use with constrained nodes and constrained networks, and it is based on 
RFC 7252.
 
-Camel supports the DTLS, TCP and TLS protocols via the following URI schemes:
-
-[width="100%",cols="2,5",options="header"]
-|===
-| Scheme | Protocol
-| coap   | UDP
-| coaps  | UDP + DTLS
-| coap+tcp | TCP
-| coaps+tcp | TCP + TLS
-|===
-
-There are a number of different configuration options to configure TLS. For 
both DTLS (the "coaps" uri scheme)
-and TCP + TLS (the "coaps+tcp" uri scheme), it is possible to use a 
"sslContextParameters" parameter, from 
-which the camel-coap component will extract the required truststore / 
keystores etc. to set up TLS. In addition,
-the DTLS protocol supports two alternative configuration mechanisms. To use a 
pre-shared key, configure a 
-pskStore, and to work with raw public keys, configure privateKey + publicKey 
objects.
-
 Maven users will need to add the following dependency to their pom.xml
 for this component:
 
@@ -67,14 +50,34 @@ include::partial$component-endpoint-options.adoc[]
 include::partial$component-endpoint-headers.adoc[]
 // component headers: END
 
+== Usage
+
+Camel supports the DTLS, TCP and TLS protocols via the following URI schemes:
+
+[width="100%",cols="2,5",options="header"]
+|===
+| Scheme | Protocol
+| coap   | UDP
+| coaps  | UDP + DTLS
+| coap+tcp | TCP
+| coaps+tcp | TCP + TLS
+|===
+
+There are a number of different configuration options to configure TLS. For 
both DTLS (the "coaps" uri scheme)
+and TCP + TLS (the "coaps+tcp" uri scheme), it is possible to use a 
"sslContextParameters" parameter, from
+which the camel-coap component will extract the required truststore / 
keystores etc. to set up TLS. In addition,
+the DTLS protocol supports two alternative configuration mechanisms. To use a 
pre-shared key, configure a
+pskStore, and to work with raw public keys, configure privateKey + publicKey 
objects.
+
+
 === Configuring the CoAP producer request method
 
 The following rules determine which request method the CoAP producer will use 
to invoke the target URI:
 
  1. The value of the `CamelCoapMethod` header
- 2. **GET** if a query string is provided on the target CoAP server URI.
- 3. **POST** if the message exchange body is not null.
- 4. **GET** otherwise.
+ 2. `GET` if a query string is provided on the target CoAP server URI.
+ 3. `POST` if the message exchange body is not null.
+ 4. `GET` otherwise.
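The precedence above can be sketched as a small decision function; this is an illustration of the documented order, not the component's actual code (the query-string test is simplified to checking for a `?`):

```java
public class CoapMethodRule {
    // Resolves the request method: explicit header wins, then GET for
    // URIs carrying a query string, then POST for non-null bodies,
    // and GET otherwise.
    static String resolveMethod(String headerMethod, String targetUri, Object body) {
        if (headerMethod != null) {
            return headerMethod;
        }
        if (targetUri != null && targetUri.contains("?")) {
            return "GET";
        }
        return body != null ? "POST" : "GET";
    }

    public static void main(String[] args) {
        System.out.println(resolveMethod(null, "coap://host/res?x=1", "data")); // GET
        System.out.println(resolveMethod(null, "coap://host/res", "data"));     // POST
    }
}
```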
 
 
 include::spring-boot:partial$starter.adoc[]
diff --git a/components/camel-cometd/src/main/docs/cometd-component.adoc 
b/components/camel-cometd/src/main/docs/cometd-component.adoc
index 00366f71656..f378625d839 100644
--- a/components/camel-cometd/src/main/docs/cometd-component.adoc
+++ b/components/camel-cometd/src/main/docs/cometd-component.adoc
@@ -77,15 +77,15 @@ For file, for webapp resources located in the Web 
Application directory --> `com
 
 For classpath, when, for example, the web resources are packaged inside the 
webapp folder --> `cometd://localhost:8080?resourceBase=classpath:webapp`
 
-== Authentication
+=== Authentication
 
 You can configure custom `SecurityPolicy` and `Extension`s on the `CometdComponent`, which allows you to use authentication as http://cometd.org/documentation/howtos/authentication[documented here]
 
-== Setting up SSL for Cometd Component
+=== Setting up SSL for Cometd Component
 
-=== Using the JSSE Configuration Utility
+==== Using the JSSE Configuration Utility
 
 The Cometd component supports SSL/TLS configuration
 through the xref:manual::camel-configuration-utilities.adoc[Camel JSSE
diff --git a/components/camel-consul/src/main/docs/consul-component.adoc 
b/components/camel-consul/src/main/docs/consul-component.adoc
index 15f8c17ca97..d6eadbc88a0 100644
--- a/components/camel-consul/src/main/docs/consul-component.adoc
+++ b/components/camel-consul/src/main/docs/consul-component.adoc
@@ -51,7 +51,9 @@ include::partial$component-endpoint-options.adoc[]
 include::partial$component-endpoint-headers.adoc[]
 // component headers: END
 
-== Api Endpoint
+== Usage
+
+=== Api Endpoint
 
 The `apiEndpoint` denotes the type of https://www.consul.io/api-docs[consul 
api] which should be addressed.
 
@@ -69,7 +71,9 @@ The `apiEndpoint` denotes the type of 
https://www.consul.io/api-docs[consul api]
 | session | ConsulSessionProducer | -
 |===
 
-== Producer Examples
+== Examples
+
+=== Producer Examples
 
 As an example, we will show how to use the `ConsulAgentProducer` to register a 
service by means of the Consul agent api.
 
diff --git 
a/components/camel-controlbus/src/main/docs/controlbus-component.adoc 
b/components/camel-controlbus/src/main/docs/controlbus-component.adoc
index 9d319240d41..28e6237ae7a 100644
--- a/components/camel-controlbus/src/main/docs/controlbus-component.adoc
+++ b/components/camel-controlbus/src/main/docs/controlbus-component.adoc
@@ -71,8 +71,9 @@ include::partial$component-endpoint-options.adoc[]
 
 // endpoint options: END
 
+== Examples
 
-== Using route command
+=== Using route command
 
 The route command allows you to do common tasks on a given route very easily.
 For example, to start a route, you can send an empty message to this endpoint:
@@ -90,7 +91,7 @@ String status = 
template.requestBody("controlbus:route?routeId=foo&action=status
 ----
 
 [[ControlBus-Gettingperformancestatistics]]
-== Getting performance statistics
+=== Getting performance statistics
 
 This requires JMX to be enabled (it is enabled by default). Then you can get the performance statistics per route, or for the CamelContext.
@@ -111,7 +112,7 @@ To get statics for the entire `CamelContext` you just omit 
the routeId parameter
 String xml = template.requestBody("controlbus:route?action=stats", null, 
String.class);
 ----
 
-== Using Simple language
+=== Using Simple language
 
 You can use the xref:languages:simple-language.adoc[Simple] language with the 
control bus.
 For example, to stop a specific route, you can send a message to the 
`"controlbus:language:simple"` endpoint containing the following message: