http://git-wip-us.apache.org/repos/asf/camel/blob/4f4f2e45/components/camel-hdfs/src/main/docs/hdfs-component.adoc
----------------------------------------------------------------------
diff --git a/components/camel-hdfs/src/main/docs/hdfs-component.adoc b/components/camel-hdfs/src/main/docs/hdfs-component.adoc
index 5d192e9..44c0bb7 100644
--- a/components/camel-hdfs/src/main/docs/hdfs-component.adoc
+++ b/components/camel-hdfs/src/main/docs/hdfs-component.adoc
@@ -1,4 +1,4 @@
-## HDFS Component (deprecated)
+== HDFS Component (deprecated)
 
 *Available as of Camel version 2.8*
 
@@ -60,11 +60,11 @@ The HDFS component supports 2 options which are listed below.
 
 
 [width="100%",cols="2,5,^1,2",options="header"]
-|=======================================================================
+|===
 | Name | Description | Default | Type
-| **jAASConfiguration** (common) | To use the given configuration for security 
with JAAS. |  | Configuration
-| **resolveProperty Placeholders** (advanced) | Whether the component should 
resolve property placeholders on itself when starting. Only properties which 
are of String type can use property placeholders. | true | boolean
-|=======================================================================
+| *jAASConfiguration* (common) | To use the given configuration for security 
with JAAS. |  | Configuration
+| *resolveProperty Placeholders* (advanced) | Whether the component should 
resolve property placeholders on itself when starting. Only properties which 
are of String type can use property placeholders. | true | boolean
+|===
 // component options: END
 
 
@@ -76,64 +76,66 @@ The HDFS component supports 2 options which are listed below.
 // endpoint options: START
 The HDFS endpoint is configured using URI syntax:
 
-    hdfs:hostName:port/path
+----
+hdfs:hostName:port/path
+----
 
 with the following path and query parameters:
 
-#### Path Parameters (3 parameters):
+==== Path Parameters (3 parameters):
 
 [width="100%",cols="2,5,^1,2",options="header"]
-|=======================================================================
+|===
 | Name | Description | Default | Type
-| **hostName** | *Required* HDFS host to use |  | String
-| **port** | HDFS port to use | 8020 | int
-| **path** | *Required* The directory path to use |  | String
-|=======================================================================
+| *hostName* | *Required* HDFS host to use |  | String
+| *port* | HDFS port to use | 8020 | int
+| *path* | *Required* The directory path to use |  | String
+|===
 
-#### Query Parameters (38 parameters):
+==== Query Parameters (38 parameters):
 
 [width="100%",cols="2,5,^1,2",options="header"]
-|=======================================================================
+|===
 | Name | Description | Default | Type
-| **connectOnStartup** (common) | Whether to connect to the HDFS file system 
on starting the producer/consumer. If false then the connection is created 
on-demand. Notice that HDFS may take up till 15 minutes to establish a 
connection as it has hardcoded 45 x 20 sec redelivery. By setting this option 
to false allows your application to startup and not block for up till 15 
minutes. | true | boolean
-| **fileSystemType** (common) | Set to LOCAL to not use HDFS but local 
java.io.File instead. | HDFS | HdfsFileSystemType
-| **fileType** (common) | The file type to use. For more details see Hadoop 
HDFS documentation about the various files types. | NORMAL_FILE | HdfsFileType
-| **keyType** (common) | The type for the key in case of sequence or map 
files. | NULL | WritableType
-| **owner** (common) | The file owner must match this owner for the consumer 
to pickup the file. Otherwise the file is skipped. |  | String
-| **valueType** (common) | The type for the key in case of sequence or map 
files | BYTES | WritableType
-| **bridgeErrorHandler** (consumer) | Allows for bridging the consumer to the 
Camel routing Error Handler which mean any exceptions occurred while the 
consumer is trying to pickup incoming messages or the likes will now be 
processed as a message and handled by the routing Error Handler. By default the 
consumer will use the org.apache.camel.spi.ExceptionHandler to deal with 
exceptions that will be logged at WARN or ERROR level and ignored. | false | 
boolean
-| **delay** (consumer) | The interval (milliseconds) between the directory 
scans. | 1000 | long
-| **initialDelay** (consumer) | For the consumer how much to wait 
(milliseconds) before to start scanning the directory. |  | long
-| **pattern** (consumer) | The pattern used for scanning the directory | * | 
String
-| **sendEmptyMessageWhenIdle** (consumer) | If the polling consumer did not 
poll any files you can enable this option to send an empty message (no body) 
instead. | false | boolean
-| **exceptionHandler** (consumer) | To let the consumer use a custom 
ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this 
options is not in use. By default the consumer will deal with exceptions that 
will be logged at WARN or ERROR level and ignored. |  | ExceptionHandler
-| **exchangePattern** (consumer) | Sets the exchange pattern when the consumer 
creates an exchange. |  | ExchangePattern
-| **pollStrategy** (consumer) | A pluggable 
org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your 
custom implementation to control error handling usually occurred during the 
poll operation before an Exchange have been created and being routed in Camel. 
|  | PollingConsumerPoll Strategy
-| **append** (producer) | Append to existing file. Notice that not all HDFS 
file systems support the append option. | false | boolean
-| **overwrite** (producer) | Whether to overwrite existing files with the same 
name | true | boolean
-| **blockSize** (advanced) | The size of the HDFS blocks | 67108864 | long
-| **bufferSize** (advanced) | The buffer size used by HDFS | 4096 | int
-| **checkIdleInterval** (advanced) | How often (time in millis) in to run the 
idle checker background task. This option is only in use if the splitter 
strategy is IDLE. | 500 | int
-| **chunkSize** (advanced) | When reading a normal file this is split into 
chunks producing a message per chunk. | 4096 | int
-| **compressionCodec** (advanced) | The compression codec to use | DEFAULT | 
HdfsCompressionCodec
-| **compressionType** (advanced) | The compression type to use (is default not 
in use) | NONE | CompressionType
-| **openedSuffix** (advanced) | When a file is opened for reading/writing the 
file is renamed with this suffix to avoid to read it during the writing phase. 
| opened | String
-| **readSuffix** (advanced) | Once the file has been read is renamed with this 
suffix to avoid to read it again. | read | String
-| **replication** (advanced) | The HDFS replication factor | 3 | short
-| **splitStrategy** (advanced) | In the current version of Hadoop opening a 
file in append mode is disabled since it's not very reliable. So for the moment 
it's only possible to create new files. The Camel HDFS endpoint tries to solve 
this problem in this way: If the split strategy option has been defined the 
hdfs path will be used as a directory and files will be created using the 
configured UuidGenerator. Every time a splitting condition is met a new file is 
created. The splitStrategy option is defined as a string with the following 
syntax: splitStrategy=ST:valueST:value... where ST can be: BYTES a new file is 
created and the old is closed when the number of written bytes is more than 
value MESSAGES a new file is created and the old is closed when the number of 
written messages is more than value IDLE a new file is created and the old is 
closed when no writing happened in the last value milliseconds |  | String
-| **synchronous** (advanced) | Sets whether synchronous processing should be 
strictly used or Camel is allowed to use asynchronous processing (if 
supported). | false | boolean
-| **backoffErrorThreshold** (scheduler) | The number of subsequent error polls 
(failed due some error) that should happen before the backoffMultipler should 
kick-in. |  | int
-| **backoffIdleThreshold** (scheduler) | The number of subsequent idle polls 
that should happen before the backoffMultipler should kick-in. |  | int
-| **backoffMultiplier** (scheduler) | To let the scheduled polling consumer 
backoff if there has been a number of subsequent idles/errors in a row. The 
multiplier is then the number of polls that will be skipped before the next 
actual attempt is happening again. When this option is in use then 
backoffIdleThreshold and/or backoffErrorThreshold must also be configured. |  | 
int
-| **greedy** (scheduler) | If greedy is enabled then the ScheduledPollConsumer 
will run immediately again if the previous run polled 1 or more messages. | 
false | boolean
-| **runLoggingLevel** (scheduler) | The consumer logs a start/complete log 
line when it polls. This option allows you to configure the logging level for 
that. | TRACE | LoggingLevel
-| **scheduledExecutorService** (scheduler) | Allows for configuring a 
custom/shared thread pool to use for the consumer. By default each consumer has 
its own single threaded thread pool. |  | ScheduledExecutor Service
-| **scheduler** (scheduler) | To use a cron scheduler from either camel-spring 
or camel-quartz2 component | none | ScheduledPollConsumer Scheduler
-| **schedulerProperties** (scheduler) | To configure additional properties 
when using a custom scheduler or any of the Quartz2 Spring based scheduler. |  
| Map
-| **startScheduler** (scheduler) | Whether the scheduler should be auto 
started. | true | boolean
-| **timeUnit** (scheduler) | Time unit for initialDelay and delay options. | 
MILLISECONDS | TimeUnit
-| **useFixedDelay** (scheduler) | Controls if fixed delay or fixed rate is 
used. See ScheduledExecutorService in JDK for details. | true | boolean
-|=======================================================================
+| *connectOnStartup* (common) | Whether to connect to the HDFS file system on 
starting the producer/consumer. If false then the connection is created 
on-demand. Notice that HDFS may take up till 15 minutes to establish a 
connection as it has hardcoded 45 x 20 sec redelivery. By setting this option 
to false allows your application to startup and not block for up till 15 
minutes. | true | boolean
+| *fileSystemType* (common) | Set to LOCAL to not use HDFS but local 
java.io.File instead. | HDFS | HdfsFileSystemType
+| *fileType* (common) | The file type to use. For more details see Hadoop HDFS 
documentation about the various files types. | NORMAL_FILE | HdfsFileType
+| *keyType* (common) | The type for the key in case of sequence or map files. 
| NULL | WritableType
+| *owner* (common) | The file owner must match this owner for the consumer to 
pickup the file. Otherwise the file is skipped. |  | String
+| *valueType* (common) | The type for the key in case of sequence or map files 
| BYTES | WritableType
+| *bridgeErrorHandler* (consumer) | Allows for bridging the consumer to the 
Camel routing Error Handler which mean any exceptions occurred while the 
consumer is trying to pickup incoming messages or the likes will now be 
processed as a message and handled by the routing Error Handler. By default the 
consumer will use the org.apache.camel.spi.ExceptionHandler to deal with 
exceptions that will be logged at WARN or ERROR level and ignored. | false | 
boolean
+| *delay* (consumer) | The interval (milliseconds) between the directory 
scans. | 1000 | long
+| *initialDelay* (consumer) | For the consumer how much to wait (milliseconds) 
before to start scanning the directory. |  | long
+| *pattern* (consumer) | The pattern used for scanning the directory | * | 
String
+| *sendEmptyMessageWhenIdle* (consumer) | If the polling consumer did not poll 
any files you can enable this option to send an empty message (no body) 
instead. | false | boolean
+| *exceptionHandler* (consumer) | To let the consumer use a custom 
ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this 
options is not in use. By default the consumer will deal with exceptions that 
will be logged at WARN or ERROR level and ignored. |  | ExceptionHandler
+| *exchangePattern* (consumer) | Sets the exchange pattern when the consumer 
creates an exchange. |  | ExchangePattern
+| *pollStrategy* (consumer) | A pluggable 
org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your 
custom implementation to control error handling usually occurred during the 
poll operation before an Exchange have been created and being routed in Camel. 
|  | PollingConsumerPoll Strategy
+| *append* (producer) | Append to existing file. Notice that not all HDFS file 
systems support the append option. | false | boolean
+| *overwrite* (producer) | Whether to overwrite existing files with the same 
name | true | boolean
+| *blockSize* (advanced) | The size of the HDFS blocks | 67108864 | long
+| *bufferSize* (advanced) | The buffer size used by HDFS | 4096 | int
+| *checkIdleInterval* (advanced) | How often (time in millis) in to run the 
idle checker background task. This option is only in use if the splitter 
strategy is IDLE. | 500 | int
+| *chunkSize* (advanced) | When reading a normal file this is split into 
chunks producing a message per chunk. | 4096 | int
+| *compressionCodec* (advanced) | The compression codec to use | DEFAULT | 
HdfsCompressionCodec
+| *compressionType* (advanced) | The compression type to use (is default not 
in use) | NONE | CompressionType
+| *openedSuffix* (advanced) | When a file is opened for reading/writing the 
file is renamed with this suffix to avoid to read it during the writing phase. 
| opened | String
+| *readSuffix* (advanced) | Once the file has been read is renamed with this 
suffix to avoid to read it again. | read | String
+| *replication* (advanced) | The HDFS replication factor | 3 | short
+| *splitStrategy* (advanced) | In the current version of Hadoop opening a file 
in append mode is disabled since it's not very reliable. So for the moment it's 
only possible to create new files. The Camel HDFS endpoint tries to solve this 
problem in this way: If the split strategy option has been defined the hdfs 
path will be used as a directory and files will be created using the configured 
UuidGenerator. Every time a splitting condition is met a new file is created. 
The splitStrategy option is defined as a string with the following syntax: 
splitStrategy=ST:valueST:value... where ST can be: BYTES a new file is created 
and the old is closed when the number of written bytes is more than value 
MESSAGES a new file is created and the old is closed when the number of written 
messages is more than value IDLE a new file is created and the old is closed 
when no writing happened in the last value milliseconds |  | String
+| *synchronous* (advanced) | Sets whether synchronous processing should be 
strictly used or Camel is allowed to use asynchronous processing (if 
supported). | false | boolean
+| *backoffErrorThreshold* (scheduler) | The number of subsequent error polls 
(failed due some error) that should happen before the backoffMultipler should 
kick-in. |  | int
+| *backoffIdleThreshold* (scheduler) | The number of subsequent idle polls 
that should happen before the backoffMultipler should kick-in. |  | int
+| *backoffMultiplier* (scheduler) | To let the scheduled polling consumer 
backoff if there has been a number of subsequent idles/errors in a row. The 
multiplier is then the number of polls that will be skipped before the next 
actual attempt is happening again. When this option is in use then 
backoffIdleThreshold and/or backoffErrorThreshold must also be configured. |  | 
int
+| *greedy* (scheduler) | If greedy is enabled then the ScheduledPollConsumer 
will run immediately again if the previous run polled 1 or more messages. | 
false | boolean
+| *runLoggingLevel* (scheduler) | The consumer logs a start/complete log line 
when it polls. This option allows you to configure the logging level for that. 
| TRACE | LoggingLevel
+| *scheduledExecutorService* (scheduler) | Allows for configuring a 
custom/shared thread pool to use for the consumer. By default each consumer has 
its own single threaded thread pool. |  | ScheduledExecutor Service
+| *scheduler* (scheduler) | To use a cron scheduler from either camel-spring 
or camel-quartz2 component | none | ScheduledPollConsumer Scheduler
+| *schedulerProperties* (scheduler) | To configure additional properties when 
using a custom scheduler or any of the Quartz2 Spring based scheduler. |  | Map
+| *startScheduler* (scheduler) | Whether the scheduler should be auto started. 
| true | boolean
+| *timeUnit* (scheduler) | Time unit for initialDelay and delay options. | 
MILLISECONDS | TimeUnit
+| *useFixedDelay* (scheduler) | Controls if fixed delay or fixed rate is used. 
See ScheduledExecutorService in JDK for details. | true | boolean
+|===
 // endpoint options: END
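The splitStrategy row above packs three rollover rules into one run-on sentence. As a neutral illustration of the behavior it describes (this is not Camel code; the function names and the comma separator between `ST:value` pairs are assumptions, since the doc only shows `ST:valueST:value...`), the conditions can be sketched as:

```python
import time

def parse_split_strategy(spec):
    """Parse a 'BYTES:value,MESSAGES:value,IDLE:value' style string into
    a dict of thresholds. Comma separation is an assumption here."""
    thresholds = {}
    for part in spec.split(","):
        st, _, value = part.partition(":")
        thresholds[st.strip().upper()] = int(value)
    return thresholds

def should_roll(thresholds, bytes_written, messages_written, last_write_ts, now=None):
    """Return True when any configured splitting condition is met,
    mirroring the three strategies in the table:
    BYTES    - written bytes exceed the value,
    MESSAGES - written messages exceed the value,
    IDLE     - no write in the last `value` milliseconds."""
    now = time.time() if now is None else now
    if "BYTES" in thresholds and bytes_written > thresholds["BYTES"]:
        return True
    if "MESSAGES" in thresholds and messages_written > thresholds["MESSAGES"]:
        return True
    if "IDLE" in thresholds and (now - last_write_ts) * 1000 > thresholds["IDLE"]:
        return True
    return False
```

For example, `parse_split_strategy("BYTES:1048576,MESSAGES:100")` would configure a new file once more than 1 MiB or more than 100 messages have been written, whichever comes first.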
 
 

http://git-wip-us.apache.org/repos/asf/camel/blob/4f4f2e45/components/camel-hdfs2/src/main/docs/hdfs2-component.adoc
----------------------------------------------------------------------
diff --git a/components/camel-hdfs2/src/main/docs/hdfs2-component.adoc b/components/camel-hdfs2/src/main/docs/hdfs2-component.adoc
index 4c29e18..0db362a 100644
--- a/components/camel-hdfs2/src/main/docs/hdfs2-component.adoc
+++ b/components/camel-hdfs2/src/main/docs/hdfs2-component.adoc
@@ -1,4 +1,4 @@
-## HDFS2 Component
+== HDFS2 Component
 
 *Available as of Camel version 2.14*
 
@@ -58,11 +58,11 @@ The HDFS2 component supports 2 options which are listed below.
 
 
 [width="100%",cols="2,5,^1,2",options="header"]
-|=======================================================================
+|===
 | Name | Description | Default | Type
-| **jAASConfiguration** (common) | To use the given configuration for security 
with JAAS. |  | Configuration
-| **resolveProperty Placeholders** (advanced) | Whether the component should 
resolve property placeholders on itself when starting. Only properties which 
are of String type can use property placeholders. | true | boolean
-|=======================================================================
+| *jAASConfiguration* (common) | To use the given configuration for security 
with JAAS. |  | Configuration
+| *resolveProperty Placeholders* (advanced) | Whether the component should 
resolve property placeholders on itself when starting. Only properties which 
are of String type can use property placeholders. | true | boolean
+|===
 // component options: END
 
 
@@ -73,64 +73,66 @@ The HDFS2 component supports 2 options which are listed below.
 // endpoint options: START
 The HDFS2 endpoint is configured using URI syntax:
 
-    hdfs2:hostName:port/path
+----
+hdfs2:hostName:port/path
+----
 
 with the following path and query parameters:
 
-#### Path Parameters (3 parameters):
+==== Path Parameters (3 parameters):
 
 [width="100%",cols="2,5,^1,2",options="header"]
-|=======================================================================
+|===
 | Name | Description | Default | Type
-| **hostName** | *Required* HDFS host to use |  | String
-| **port** | HDFS port to use | 8020 | int
-| **path** | *Required* The directory path to use |  | String
-|=======================================================================
+| *hostName* | *Required* HDFS host to use |  | String
+| *port* | HDFS port to use | 8020 | int
+| *path* | *Required* The directory path to use |  | String
+|===
 
-#### Query Parameters (38 parameters):
+==== Query Parameters (38 parameters):
 
 [width="100%",cols="2,5,^1,2",options="header"]
-|=======================================================================
+|===
 | Name | Description | Default | Type
-| **connectOnStartup** (common) | Whether to connect to the HDFS file system 
on starting the producer/consumer. If false then the connection is created 
on-demand. Notice that HDFS may take up till 15 minutes to establish a 
connection as it has hardcoded 45 x 20 sec redelivery. By setting this option 
to false allows your application to startup and not block for up till 15 
minutes. | true | boolean
-| **fileSystemType** (common) | Set to LOCAL to not use HDFS but local 
java.io.File instead. | HDFS | HdfsFileSystemType
-| **fileType** (common) | The file type to use. For more details see Hadoop 
HDFS documentation about the various files types. | NORMAL_FILE | HdfsFileType
-| **keyType** (common) | The type for the key in case of sequence or map 
files. | NULL | WritableType
-| **owner** (common) | The file owner must match this owner for the consumer 
to pickup the file. Otherwise the file is skipped. |  | String
-| **valueType** (common) | The type for the key in case of sequence or map 
files | BYTES | WritableType
-| **bridgeErrorHandler** (consumer) | Allows for bridging the consumer to the 
Camel routing Error Handler which mean any exceptions occurred while the 
consumer is trying to pickup incoming messages or the likes will now be 
processed as a message and handled by the routing Error Handler. By default the 
consumer will use the org.apache.camel.spi.ExceptionHandler to deal with 
exceptions that will be logged at WARN or ERROR level and ignored. | false | 
boolean
-| **pattern** (consumer) | The pattern used for scanning the directory | * | 
String
-| **sendEmptyMessageWhenIdle** (consumer) | If the polling consumer did not 
poll any files you can enable this option to send an empty message (no body) 
instead. | false | boolean
-| **exceptionHandler** (consumer) | To let the consumer use a custom 
ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this 
options is not in use. By default the consumer will deal with exceptions that 
will be logged at WARN or ERROR level and ignored. |  | ExceptionHandler
-| **exchangePattern** (consumer) | Sets the exchange pattern when the consumer 
creates an exchange. |  | ExchangePattern
-| **pollStrategy** (consumer) | A pluggable 
org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your 
custom implementation to control error handling usually occurred during the 
poll operation before an Exchange have been created and being routed in Camel. 
|  | PollingConsumerPoll Strategy
-| **append** (producer) | Append to existing file. Notice that not all HDFS 
file systems support the append option. | false | boolean
-| **overwrite** (producer) | Whether to overwrite existing files with the same 
name | true | boolean
-| **blockSize** (advanced) | The size of the HDFS blocks | 67108864 | long
-| **bufferSize** (advanced) | The buffer size used by HDFS | 4096 | int
-| **checkIdleInterval** (advanced) | How often (time in millis) in to run the 
idle checker background task. This option is only in use if the splitter 
strategy is IDLE. | 500 | int
-| **chunkSize** (advanced) | When reading a normal file this is split into 
chunks producing a message per chunk. | 4096 | int
-| **compressionCodec** (advanced) | The compression codec to use | DEFAULT | 
HdfsCompressionCodec
-| **compressionType** (advanced) | The compression type to use (is default not 
in use) | NONE | CompressionType
-| **openedSuffix** (advanced) | When a file is opened for reading/writing the 
file is renamed with this suffix to avoid to read it during the writing phase. 
| opened | String
-| **readSuffix** (advanced) | Once the file has been read is renamed with this 
suffix to avoid to read it again. | read | String
-| **replication** (advanced) | The HDFS replication factor | 3 | short
-| **splitStrategy** (advanced) | In the current version of Hadoop opening a 
file in append mode is disabled since it's not very reliable. So for the moment 
it's only possible to create new files. The Camel HDFS endpoint tries to solve 
this problem in this way: If the split strategy option has been defined the 
hdfs path will be used as a directory and files will be created using the 
configured UuidGenerator. Every time a splitting condition is met a new file is 
created. The splitStrategy option is defined as a string with the following 
syntax: splitStrategy=ST:valueST:value... where ST can be: BYTES a new file is 
created and the old is closed when the number of written bytes is more than 
value MESSAGES a new file is created and the old is closed when the number of 
written messages is more than value IDLE a new file is created and the old is 
closed when no writing happened in the last value milliseconds |  | String
-| **synchronous** (advanced) | Sets whether synchronous processing should be 
strictly used or Camel is allowed to use asynchronous processing (if 
supported). | false | boolean
-| **backoffErrorThreshold** (scheduler) | The number of subsequent error polls 
(failed due some error) that should happen before the backoffMultipler should 
kick-in. |  | int
-| **backoffIdleThreshold** (scheduler) | The number of subsequent idle polls 
that should happen before the backoffMultipler should kick-in. |  | int
-| **backoffMultiplier** (scheduler) | To let the scheduled polling consumer 
backoff if there has been a number of subsequent idles/errors in a row. The 
multiplier is then the number of polls that will be skipped before the next 
actual attempt is happening again. When this option is in use then 
backoffIdleThreshold and/or backoffErrorThreshold must also be configured. |  | 
int
-| **delay** (scheduler) | Milliseconds before the next poll. You can also 
specify time values using units such as 60s (60 seconds) 5m30s (5 minutes and 
30 seconds) and 1h (1 hour). | 500 | long
-| **greedy** (scheduler) | If greedy is enabled then the ScheduledPollConsumer 
will run immediately again if the previous run polled 1 or more messages. | 
false | boolean
-| **initialDelay** (scheduler) | Milliseconds before the first poll starts. 
You can also specify time values using units such as 60s (60 seconds) 5m30s (5 
minutes and 30 seconds) and 1h (1 hour). | 1000 | long
-| **runLoggingLevel** (scheduler) | The consumer logs a start/complete log 
line when it polls. This option allows you to configure the logging level for 
that. | TRACE | LoggingLevel
-| **scheduledExecutorService** (scheduler) | Allows for configuring a 
custom/shared thread pool to use for the consumer. By default each consumer has 
its own single threaded thread pool. |  | ScheduledExecutor Service
-| **scheduler** (scheduler) | To use a cron scheduler from either camel-spring 
or camel-quartz2 component | none | ScheduledPollConsumer Scheduler
-| **schedulerProperties** (scheduler) | To configure additional properties 
when using a custom scheduler or any of the Quartz2 Spring based scheduler. |  
| Map
-| **startScheduler** (scheduler) | Whether the scheduler should be auto 
started. | true | boolean
-| **timeUnit** (scheduler) | Time unit for initialDelay and delay options. | 
MILLISECONDS | TimeUnit
-| **useFixedDelay** (scheduler) | Controls if fixed delay or fixed rate is 
used. See ScheduledExecutorService in JDK for details. | true | boolean
-|=======================================================================
+| *connectOnStartup* (common) | Whether to connect to the HDFS file system on 
starting the producer/consumer. If false then the connection is created 
on-demand. Notice that HDFS may take up till 15 minutes to establish a 
connection as it has hardcoded 45 x 20 sec redelivery. By setting this option 
to false allows your application to startup and not block for up till 15 
minutes. | true | boolean
+| *fileSystemType* (common) | Set to LOCAL to not use HDFS but local 
java.io.File instead. | HDFS | HdfsFileSystemType
+| *fileType* (common) | The file type to use. For more details see Hadoop HDFS 
documentation about the various files types. | NORMAL_FILE | HdfsFileType
+| *keyType* (common) | The type for the key in case of sequence or map files. 
| NULL | WritableType
+| *owner* (common) | The file owner must match this owner for the consumer to 
pickup the file. Otherwise the file is skipped. |  | String
+| *valueType* (common) | The type for the key in case of sequence or map files 
| BYTES | WritableType
+| *bridgeErrorHandler* (consumer) | Allows for bridging the consumer to the 
Camel routing Error Handler which mean any exceptions occurred while the 
consumer is trying to pickup incoming messages or the likes will now be 
processed as a message and handled by the routing Error Handler. By default the 
consumer will use the org.apache.camel.spi.ExceptionHandler to deal with 
exceptions that will be logged at WARN or ERROR level and ignored. | false | 
boolean
+| *pattern* (consumer) | The pattern used for scanning the directory | * | 
String
+| *sendEmptyMessageWhenIdle* (consumer) | If the polling consumer did not poll 
any files you can enable this option to send an empty message (no body) 
instead. | false | boolean
+| *exceptionHandler* (consumer) | To let the consumer use a custom 
ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this 
options is not in use. By default the consumer will deal with exceptions that 
will be logged at WARN or ERROR level and ignored. |  | ExceptionHandler
+| *exchangePattern* (consumer) | Sets the exchange pattern when the consumer 
creates an exchange. |  | ExchangePattern
+| *pollStrategy* (consumer) | A pluggable 
org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your 
custom implementation to control error handling usually occurred during the 
poll operation before an Exchange have been created and being routed in Camel. 
|  | PollingConsumerPoll Strategy
+| *append* (producer) | Append to existing file. Notice that not all HDFS file 
systems support the append option. | false | boolean
+| *overwrite* (producer) | Whether to overwrite existing files with the same 
name | true | boolean
+| *blockSize* (advanced) | The size of the HDFS blocks | 67108864 | long
+| *bufferSize* (advanced) | The buffer size used by HDFS | 4096 | int
+| *checkIdleInterval* (advanced) | How often (time in millis) in to run the 
idle checker background task. This option is only in use if the splitter 
strategy is IDLE. | 500 | int
+| *chunkSize* (advanced) | When reading a normal file this is split into 
chunks producing a message per chunk. | 4096 | int
+| *compressionCodec* (advanced) | The compression codec to use | DEFAULT | 
HdfsCompressionCodec
+| *compressionType* (advanced) | The compression type to use (is default not 
in use) | NONE | CompressionType
+| *openedSuffix* (advanced) | When a file is opened for reading/writing the 
file is renamed with this suffix to avoid to read it during the writing phase. 
| opened | String
+| *readSuffix* (advanced) | Once the file has been read is renamed with this 
suffix to avoid to read it again. | read | String
+| *replication* (advanced) | The HDFS replication factor | 3 | short
+| *splitStrategy* (advanced) | In the current version of Hadoop opening a file 
in append mode is disabled since it's not very reliable. So for the moment it's 
only possible to create new files. The Camel HDFS endpoint tries to solve this 
problem in this way: If the split strategy option has been defined the hdfs 
path will be used as a directory and files will be created using the configured 
UuidGenerator. Every time a splitting condition is met a new file is created. 
The splitStrategy option is defined as a string with the following syntax: 
splitStrategy=ST:valueST:value... where ST can be: BYTES a new file is created 
and the old is closed when the number of written bytes is more than value 
MESSAGES a new file is created and the old is closed when the number of written 
messages is more than value IDLE a new file is created and the old is closed 
when no writing happened in the last value milliseconds |  | String
+| *synchronous* (advanced) | Sets whether synchronous processing should be 
strictly used or Camel is allowed to use asynchronous processing (if 
supported). | false | boolean
+| *backoffErrorThreshold* (scheduler) | The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in. |  | int
+| *backoffIdleThreshold* (scheduler) | The number of subsequent idle polls that should happen before the backoffMultiplier should kick in. |  | int
+| *backoffMultiplier* (scheduler) | To let the scheduled polling consumer 
backoff if there has been a number of subsequent idles/errors in a row. The 
multiplier is then the number of polls that will be skipped before the next 
actual attempt is happening again. When this option is in use then 
backoffIdleThreshold and/or backoffErrorThreshold must also be configured. |  | 
int
+| *delay* (scheduler) | Milliseconds before the next poll. You can also 
specify time values using units such as 60s (60 seconds) 5m30s (5 minutes and 
30 seconds) and 1h (1 hour). | 500 | long
+| *greedy* (scheduler) | If greedy is enabled then the ScheduledPollConsumer 
will run immediately again if the previous run polled 1 or more messages. | 
false | boolean
+| *initialDelay* (scheduler) | Milliseconds before the first poll starts. You 
can also specify time values using units such as 60s (60 seconds) 5m30s (5 
minutes and 30 seconds) and 1h (1 hour). | 1000 | long
+| *runLoggingLevel* (scheduler) | The consumer logs a start/complete log line 
when it polls. This option allows you to configure the logging level for that. 
| TRACE | LoggingLevel
+| *scheduledExecutorService* (scheduler) | Allows for configuring a 
custom/shared thread pool to use for the consumer. By default each consumer has 
its own single threaded thread pool. |  | ScheduledExecutor Service
+| *scheduler* (scheduler) | To use a cron scheduler from either the camel-spring or camel-quartz2 component | none | ScheduledPollConsumer Scheduler
+| *schedulerProperties* (scheduler) | To configure additional properties when using a custom scheduler or any of the Quartz2 or Spring based schedulers. |  | Map
+| *startScheduler* (scheduler) | Whether the scheduler should be auto started. 
| true | boolean
+| *timeUnit* (scheduler) | Time unit for initialDelay and delay options. | 
MILLISECONDS | TimeUnit
+| *useFixedDelay* (scheduler) | Controls if fixed delay or fixed rate is used. 
See ScheduledExecutorService in JDK for details. | true | boolean
+|===
 // endpoint options: END
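
The splitting conditions that the splitStrategy option describes (BYTES, MESSAGES, IDLE) can be sketched outside of Camel as a small check. This is a hedged illustration only — the function and parameter names (`parse_split_strategy`, `should_split`) are hypothetical, not Camel API:

```python
# Hypothetical sketch of the HDFS splitStrategy conditions described above:
# a new file is started when any configured threshold is exceeded.

def parse_split_strategy(spec):
    # e.g. 'BYTES:1048576,MESSAGES:100' -> [('BYTES', 1048576), ('MESSAGES', 100)]
    return [(st, int(v)) for st, v in (part.split(":") for part in spec.split(","))]

def should_split(strategies, written_bytes, written_messages, last_write_ts, now):
    """Return True when any splitting condition is met (timestamps in seconds)."""
    for st, value in strategies:
        if st == "BYTES" and written_bytes > value:
            return True
        if st == "MESSAGES" and written_messages > value:
            return True
        if st == "IDLE" and (now - last_write_ts) * 1000 > value:
            return True
    return False
```

Under this reading, a writer would call `should_split` before each write and roll over to a new file (named via the configured UuidGenerator) when it returns True.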
 
 

http://git-wip-us.apache.org/repos/asf/camel/blob/4f4f2e45/components/camel-hessian/src/main/docs/hessian-dataformat.adoc
----------------------------------------------------------------------
diff --git a/components/camel-hessian/src/main/docs/hessian-dataformat.adoc 
b/components/camel-hessian/src/main/docs/hessian-dataformat.adoc
index 97f3197..916085d 100644
--- a/components/camel-hessian/src/main/docs/hessian-dataformat.adoc
+++ b/components/camel-hessian/src/main/docs/hessian-dataformat.adoc
@@ -1,4 +1,4 @@
-## Hessian DataFormat
+== Hessian DataFormat
 
 *Available as of Camel version 2.17*
 
@@ -24,10 +24,10 @@ The Hessian dataformat supports 1 options which are listed 
below.
 
 
 [width="100%",cols="2s,1m,1m,6",options="header"]
-|=======================================================================
+|===
 | Name | Default | Java Type | Description
 | contentTypeHeader | false | Boolean | Whether the data format should set the 
Content-Type header with the type from the data format if the data format is 
capable of doing so. For example application/xml for data formats marshalling 
to XML or application/json for data formats marshalling to JSon etc.
-|=======================================================================
+|===
 // dataformat options: END
 
 ### Using the Hessian data format in Java DSL
@@ -48,4 +48,4 @@ The Hessian dataformat supports 1 options which are listed 
below.
             <marshal ref="hessian"/>
         </route>
     </camelContext>
---------------------------------------------------------------------------------
\ No newline at end of file
+--------------------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/camel/blob/4f4f2e45/components/camel-hipchat/src/main/docs/hipchat-component.adoc
----------------------------------------------------------------------
diff --git a/components/camel-hipchat/src/main/docs/hipchat-component.adoc 
b/components/camel-hipchat/src/main/docs/hipchat-component.adoc
index e9f49b5..2b15e62 100644
--- a/components/camel-hipchat/src/main/docs/hipchat-component.adoc
+++ b/components/camel-hipchat/src/main/docs/hipchat-component.adoc
@@ -1,4 +1,4 @@
-## Hipchat Component
+== Hipchat Component
 
 *Available as of Camel version 2.15*
 
@@ -37,47 +37,49 @@ The Hipchat component has no options.
 // endpoint options: START
 The Hipchat endpoint is configured using URI syntax:
 
-    hipchat:protocol:host:port
+----
+hipchat:protocol:host:port
+----
 
 with the following path and query parameters:
 
-#### Path Parameters (3 parameters):
+==== Path Parameters (3 parameters):
 
 [width="100%",cols="2,5,^1,2",options="header"]
-|=======================================================================
+|===
 | Name | Description | Default | Type
-| **protocol** | *Required* The protocol for the hipchat server such as http. 
|  | String
-| **host** | *Required* The host for the hipchat server such as 
api.hipchat.com |  | String
-| **port** | The port for the hipchat server. Is by default 80. | 80 | Integer
-|=======================================================================
+| *protocol* | *Required* The protocol for the hipchat server such as http. |  
| String
+| *host* | *Required* The host for the hipchat server such as api.hipchat.com 
|  | String
+| *port* | The port for the hipchat server, 80 by default. | 80 | Integer
+|===
 
-#### Query Parameters (21 parameters):
+==== Query Parameters (21 parameters):
 
 [width="100%",cols="2,5,^1,2",options="header"]
-|=======================================================================
+|===
 | Name | Description | Default | Type
-| **authToken** (common) | OAuth 2 auth token |  | String
-| **consumeUsers** (common) | Username(s) when consuming messages from the 
hiptchat server. Multiple user names can be separated by comma. |  | String
-| **bridgeErrorHandler** (consumer) | Allows for bridging the consumer to the 
Camel routing Error Handler which mean any exceptions occurred while the 
consumer is trying to pickup incoming messages or the likes will now be 
processed as a message and handled by the routing Error Handler. By default the 
consumer will use the org.apache.camel.spi.ExceptionHandler to deal with 
exceptions that will be logged at WARN or ERROR level and ignored. | false | 
boolean
-| **sendEmptyMessageWhenIdle** (consumer) | If the polling consumer did not 
poll any files you can enable this option to send an empty message (no body) 
instead. | false | boolean
-| **exceptionHandler** (consumer) | To let the consumer use a custom 
ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this 
options is not in use. By default the consumer will deal with exceptions that 
will be logged at WARN or ERROR level and ignored. |  | ExceptionHandler
-| **exchangePattern** (consumer) | Sets the exchange pattern when the consumer 
creates an exchange. |  | ExchangePattern
-| **pollStrategy** (consumer) | A pluggable 
org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your 
custom implementation to control error handling usually occurred during the 
poll operation before an Exchange have been created and being routed in Camel. 
|  | PollingConsumerPoll Strategy
-| **synchronous** (advanced) | Sets whether synchronous processing should be 
strictly used or Camel is allowed to use asynchronous processing (if 
supported). | false | boolean
-| **backoffErrorThreshold** (scheduler) | The number of subsequent error polls 
(failed due some error) that should happen before the backoffMultipler should 
kick-in. |  | int
-| **backoffIdleThreshold** (scheduler) | The number of subsequent idle polls 
that should happen before the backoffMultipler should kick-in. |  | int
-| **backoffMultiplier** (scheduler) | To let the scheduled polling consumer 
backoff if there has been a number of subsequent idles/errors in a row. The 
multiplier is then the number of polls that will be skipped before the next 
actual attempt is happening again. When this option is in use then 
backoffIdleThreshold and/or backoffErrorThreshold must also be configured. |  | 
int
-| **delay** (scheduler) | Milliseconds before the next poll. You can also 
specify time values using units such as 60s (60 seconds) 5m30s (5 minutes and 
30 seconds) and 1h (1 hour). | 500 | long
-| **greedy** (scheduler) | If greedy is enabled then the ScheduledPollConsumer 
will run immediately again if the previous run polled 1 or more messages. | 
false | boolean
-| **initialDelay** (scheduler) | Milliseconds before the first poll starts. 
You can also specify time values using units such as 60s (60 seconds) 5m30s (5 
minutes and 30 seconds) and 1h (1 hour). | 1000 | long
-| **runLoggingLevel** (scheduler) | The consumer logs a start/complete log 
line when it polls. This option allows you to configure the logging level for 
that. | TRACE | LoggingLevel
-| **scheduledExecutorService** (scheduler) | Allows for configuring a 
custom/shared thread pool to use for the consumer. By default each consumer has 
its own single threaded thread pool. |  | ScheduledExecutor Service
-| **scheduler** (scheduler) | To use a cron scheduler from either camel-spring 
or camel-quartz2 component | none | ScheduledPollConsumer Scheduler
-| **schedulerProperties** (scheduler) | To configure additional properties 
when using a custom scheduler or any of the Quartz2 Spring based scheduler. |  
| Map
-| **startScheduler** (scheduler) | Whether the scheduler should be auto 
started. | true | boolean
-| **timeUnit** (scheduler) | Time unit for initialDelay and delay options. | 
MILLISECONDS | TimeUnit
-| **useFixedDelay** (scheduler) | Controls if fixed delay or fixed rate is 
used. See ScheduledExecutorService in JDK for details. | true | boolean
-|=======================================================================
+| *authToken* (common) | OAuth 2 auth token |  | String
+| *consumeUsers* (common) | Username(s) when consuming messages from the hipchat server. Multiple user names can be separated by commas. |  | String
+| *bridgeErrorHandler* (consumer) | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. | false | boolean
+| *sendEmptyMessageWhenIdle* (consumer) | If the polling consumer did not poll 
any files you can enable this option to send an empty message (no body) 
instead. | false | boolean
+| *exceptionHandler* (consumer) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored. |  | ExceptionHandler
+| *exchangePattern* (consumer) | Sets the exchange pattern when the consumer 
creates an exchange. |  | ExchangePattern
+| *pollStrategy* (consumer) | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel. |  | PollingConsumerPoll Strategy
+| *synchronous* (advanced) | Sets whether synchronous processing should be 
strictly used or Camel is allowed to use asynchronous processing (if 
supported). | false | boolean
+| *backoffErrorThreshold* (scheduler) | The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in. |  | int
+| *backoffIdleThreshold* (scheduler) | The number of subsequent idle polls that should happen before the backoffMultiplier should kick in. |  | int
+| *backoffMultiplier* (scheduler) | To let the scheduled polling consumer 
backoff if there has been a number of subsequent idles/errors in a row. The 
multiplier is then the number of polls that will be skipped before the next 
actual attempt is happening again. When this option is in use then 
backoffIdleThreshold and/or backoffErrorThreshold must also be configured. |  | 
int
+| *delay* (scheduler) | Milliseconds before the next poll. You can also 
specify time values using units such as 60s (60 seconds) 5m30s (5 minutes and 
30 seconds) and 1h (1 hour). | 500 | long
+| *greedy* (scheduler) | If greedy is enabled then the ScheduledPollConsumer 
will run immediately again if the previous run polled 1 or more messages. | 
false | boolean
+| *initialDelay* (scheduler) | Milliseconds before the first poll starts. You 
can also specify time values using units such as 60s (60 seconds) 5m30s (5 
minutes and 30 seconds) and 1h (1 hour). | 1000 | long
+| *runLoggingLevel* (scheduler) | The consumer logs a start/complete log line 
when it polls. This option allows you to configure the logging level for that. 
| TRACE | LoggingLevel
+| *scheduledExecutorService* (scheduler) | Allows for configuring a 
custom/shared thread pool to use for the consumer. By default each consumer has 
its own single threaded thread pool. |  | ScheduledExecutor Service
+| *scheduler* (scheduler) | To use a cron scheduler from either the camel-spring or camel-quartz2 component | none | ScheduledPollConsumer Scheduler
+| *schedulerProperties* (scheduler) | To configure additional properties when using a custom scheduler or any of the Quartz2 or Spring based schedulers. |  | Map
+| *startScheduler* (scheduler) | Whether the scheduler should be auto started. 
| true | boolean
+| *timeUnit* (scheduler) | Time unit for initialDelay and delay options. | 
MILLISECONDS | TimeUnit
+| *useFixedDelay* (scheduler) | Controls if fixed delay or fixed rate is used. 
See ScheduledExecutorService in JDK for details. | true | boolean
+|===
 // endpoint options: END
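
The interaction of the scheduler backoff options in the table above can be made concrete with a small sketch. This is an illustration under assumptions, not Camel's actual implementation: once the idle (or error) threshold is reached, `backoffMultiplier` scheduled runs are skipped between real polls. The function name `polls_attempted` is hypothetical:

```python
# Hedged sketch of scheduled-poll backoff: after backoffIdleThreshold
# subsequent idle polls, skip backoffMultiplier scheduled runs before
# the next actual poll attempt.
def polls_attempted(total_ticks, backoff_multiplier, idle_threshold):
    attempted = 0
    idles_in_a_row = 0
    skipped_since_last = 0
    for _ in range(total_ticks):
        if idles_in_a_row >= idle_threshold and skipped_since_last < backoff_multiplier:
            skipped_since_last += 1   # backoff active: skip this scheduled run
            continue
        skipped_since_last = 0
        attempted += 1
        idles_in_a_row += 1           # assume every attempt polls nothing (idle)
    return attempted
```

For example, with a delay-driven schedule of 10 ticks, `backoffMultiplier=4` and `backoffIdleThreshold=2`, only the first two ticks and every fifth tick thereafter result in a real poll.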
 
 

http://git-wip-us.apache.org/repos/asf/camel/blob/4f4f2e45/components/camel-hl7/src/main/docs/hl7-dataformat.adoc
----------------------------------------------------------------------
diff --git a/components/camel-hl7/src/main/docs/hl7-dataformat.adoc 
b/components/camel-hl7/src/main/docs/hl7-dataformat.adoc
index 9e51a03..d9e4482 100644
--- a/components/camel-hl7/src/main/docs/hl7-dataformat.adoc
+++ b/components/camel-hl7/src/main/docs/hl7-dataformat.adoc
@@ -1,4 +1,4 @@
-## HL7 DataFormat
+== HL7 DataFormat
 
 *Available as of Camel version 2.0*
 
@@ -215,11 +215,11 @@ The HL7 dataformat supports 2 options which are listed 
below.
 
 
 [width="100%",cols="2s,1m,1m,6",options="header"]
-|=======================================================================
+|===
 | Name | Default | Java Type | Description
 | validate | true | Boolean | Whether to validate the HL7 message Is by 
default true.
 | contentTypeHeader | false | Boolean | Whether the data format should set the 
Content-Type header with the type from the data format if the data format is 
capable of doing so. For example application/xml for data formats marshalling 
to XML or application/json for data formats marshalling to JSon etc.
-|=======================================================================
+|===
 // dataformat options: END
 
 * `marshal` = from Message to byte stream (can be used when responding

http://git-wip-us.apache.org/repos/asf/camel/blob/4f4f2e45/components/camel-hl7/src/main/docs/terser-language.adoc
----------------------------------------------------------------------
diff --git a/components/camel-hl7/src/main/docs/terser-language.adoc 
b/components/camel-hl7/src/main/docs/terser-language.adoc
index aff4578..5395322 100644
--- a/components/camel-hl7/src/main/docs/terser-language.adoc
+++ b/components/camel-hl7/src/main/docs/terser-language.adoc
@@ -1,4 +1,4 @@
-## HL7 Terser Language
+== HL7 Terser Language
 ### Terser language
 *Available as of Camel version 2.11.0*
 
@@ -36,8 +36,8 @@ The HL7 Terser language supports 1 options which are listed 
below.
 
 
 [width="100%",cols="2,1m,1m,6",options="header"]
-|=======================================================================
+|===
 | Name | Default | Java Type | Description
 | trim | true | Boolean | Whether to trim the value to remove leading and 
trailing whitespaces and line breaks
-|=======================================================================
+|===
 // language options: END

http://git-wip-us.apache.org/repos/asf/camel/blob/4f4f2e45/components/camel-http/src/main/docs/http-component.adoc
----------------------------------------------------------------------
diff --git a/components/camel-http/src/main/docs/http-component.adoc 
b/components/camel-http/src/main/docs/http-component.adoc
index f20599f..0cfa36a 100644
--- a/components/camel-http/src/main/docs/http-component.adoc
+++ b/components/camel-http/src/main/docs/http-component.adoc
@@ -1,4 +1,4 @@
-## HTTP Component (deprecated)
+== HTTP Component (deprecated)
 
 *Available as of Camel version 1.0*
 
@@ -119,17 +119,17 @@ The HTTP component supports 8 options which are listed 
below.
 
 
 [width="100%",cols="2,5,^1,2",options="header"]
-|=======================================================================
+|===
 | Name | Description | Default | Type
-| **httpClientConfigurer** (advanced) | To use the custom HttpClientConfigurer 
to perform configuration of the HttpClient that will be used. |  | 
HttpClientConfigurer
-| **httpConnectionManager** (advanced) | To use a custom HttpConnectionManager 
to manage connections |  | HttpConnectionManager
-| **httpBinding** (producer) | To use a custom HttpBinding to control the 
mapping between Camel message and HttpClient. |  | HttpBinding
-| **httpConfiguration** (producer) | To use the shared HttpConfiguration as 
base configuration. |  | HttpConfiguration
-| **allowJavaSerialized Object** (producer) | Whether to allow java 
serialization when a request uses 
context-type=application/x-java-serialized-object This is by default turned 
off. If you enable this then be aware that Java will deserialize the incoming 
data from the request to Java and that can be a potential security risk. | 
false | boolean
-| **useGlobalSslContext Parameters** (security) | Enable usage of global SSL 
context parameters. | false | boolean
-| **headerFilterStrategy** (filter) | To use a custom 
org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel 
message. |  | HeaderFilterStrategy
-| **resolveProperty Placeholders** (advanced) | Whether the component should 
resolve property placeholders on itself when starting. Only properties which 
are of String type can use property placeholders. | true | boolean
-|=======================================================================
+| *httpClientConfigurer* (advanced) | To use the custom HttpClientConfigurer 
to perform configuration of the HttpClient that will be used. |  | 
HttpClientConfigurer
+| *httpConnectionManager* (advanced) | To use a custom HttpConnectionManager 
to manage connections |  | HttpConnectionManager
+| *httpBinding* (producer) | To use a custom HttpBinding to control the 
mapping between Camel message and HttpClient. |  | HttpBinding
+| *httpConfiguration* (producer) | To use the shared HttpConfiguration as base 
configuration. |  | HttpConfiguration
+| *allowJavaSerialized Object* (producer) | Whether to allow java serialization when a request uses content-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. | false | boolean
+| *useGlobalSslContext Parameters* (security) | Enable usage of global SSL 
context parameters. | false | boolean
+| *headerFilterStrategy* (filter) | To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers to and from the Camel message. |  | HeaderFilterStrategy
+| *resolveProperty Placeholders* (advanced) | Whether the component should 
resolve property placeholders on itself when starting. Only properties which 
are of String type can use property placeholders. | true | boolean
+|===
 // component options: END
 
 
@@ -143,62 +143,64 @@ The HTTP component supports 8 options which are listed 
below.
 // endpoint options: START
 The HTTP endpoint is configured using URI syntax:
 
-    http:httpUri
+----
+http:httpUri
+----
 
 with the following path and query parameters:
 
-#### Path Parameters (1 parameters):
+==== Path Parameters (1 parameters):
 
 [width="100%",cols="2,5,^1,2",options="header"]
-|=======================================================================
+|===
 | Name | Description | Default | Type
-| **httpUri** | *Required* The url of the HTTP endpoint to call. |  | URI
-|=======================================================================
+| *httpUri* | *Required* The url of the HTTP endpoint to call. |  | URI
+|===
 
-#### Query Parameters (38 parameters):
+==== Query Parameters (38 parameters):
 
 [width="100%",cols="2,5,^1,2",options="header"]
-|=======================================================================
+|===
 | Name | Description | Default | Type
-| **disableStreamCache** (common) | Determines whether or not the raw input 
stream from Servlet is cached or not (Camel will read the stream into a in 
memory/overflow to file Stream caching) cache. By default Camel will cache the 
Servlet input stream to support reading it multiple times to ensure it Camel 
can retrieve all data from the stream. However you can set this option to true 
when you for example need to access the raw stream such as streaming it 
directly to a file or other persistent store. DefaultHttpBinding will copy the 
request input stream into a stream cache and put it into message body if this 
option is false to support reading the stream multiple times. If you use 
Servlet to bridge/proxy an endpoint then consider enabling this option to 
improve performance in case you do not need to read the message payload 
multiple times. The http/http4 producer will by default cache the response body 
stream. If setting this option to true then the producers will not cache the 
respon
 se body stream but use the response stream as-is as the message body. | false 
| boolean
-| **headerFilterStrategy** (common) | To use a custom HeaderFilterStrategy to 
filter header to and from Camel message. |  | HeaderFilterStrategy
-| **httpBinding** (common) | To use a custom HttpBinding to control the 
mapping between Camel message and HttpClient. |  | HttpBinding
-| **bridgeEndpoint** (producer) | If the option is true HttpProducer will 
ignore the Exchange.HTTP_URI header and use the endpoint's URI for request. You 
may also set the option throwExceptionOnFailure to be false to let the 
HttpProducer send all the fault response back. | false | boolean
-| **chunked** (producer) | If this option is false the Servlet will disable 
the HTTP streaming and set the content-length header on the response | true | 
boolean
-| **connectionClose** (producer) | Specifies whether a Connection Close header 
must be added to HTTP Request. By default connectionClose is false. | false | 
boolean
-| **copyHeaders** (producer) | If this option is true then IN exchange headers 
will be copied to OUT exchange headers according to copy strategy. Setting this 
to false allows to only include the headers from the HTTP response (not 
propagating IN headers). | true | boolean
-| **httpMethod** (producer) | Configure the HTTP method to use. The HttpMethod 
header cannot override this option if set. |  | HttpMethods
-| **ignoreResponseBody** (producer) | If this option is true The http producer 
won't read response body and cache the input stream | false | boolean
-| **preserveHostHeader** (producer) | If the option is true HttpProducer will 
set the Host header to the value contained in the current exchange Host header 
useful in reverse proxy applications where you want the Host header received by 
the downstream server to reflect the URL called by the upstream client this 
allows applications which use the Host header to generate accurate URL's for a 
proxied service | false | boolean
-| **throwExceptionOnFailure** (producer) | Option to disable throwing the 
HttpOperationFailedException in case of failed responses from the remote 
server. This allows you to get all responses regardless of the HTTP status 
code. | true | boolean
-| **transferException** (producer) | If enabled and an Exchange failed 
processing on the consumer side and if the caused Exception was send back 
serialized in the response as a application/x-java-serialized-object content 
type. On the producer side the exception will be deserialized and thrown as is 
instead of the HttpOperationFailedException. The caused exception is required 
to be serialized. This is by default turned off. If you enable this then be 
aware that Java will deserialize the incoming data from the request to Java and 
that can be a potential security risk. | false | boolean
-| **cookieHandler** (producer) | Configure a cookie handler to maintain a HTTP 
session |  | CookieHandler
-| **okStatusCodeRange** (producer) | The status codes which is considered a 
success response. The values are inclusive. The range must be defined as 
from-to with the dash included. | 200-299 | String
-| **urlRewrite** (producer) | *Deprecated* Refers to a custom 
org.apache.camel.component.http.UrlRewrite which allows you to rewrite urls 
when you bridge/proxy endpoints. See more details at 
http://camel.apache.org/urlrewrite.html |  | UrlRewrite
-| **httpClientConfigurer** (advanced) | Register a custom configuration 
strategy for new HttpClient instances created by producers or consumers such as 
to configure authentication mechanisms etc |  | HttpClientConfigurer
-| **httpClientOptions** (advanced) | To configure the HttpClient using the 
key/values from the Map. |  | Map
-| **httpConnectionManager** (advanced) | To use a custom HttpConnectionManager 
to manage connections |  | HttpConnectionManager
-| **httpConnectionManager Options** (advanced) | To configure the 
HttpConnectionManager using the key/values from the Map. |  | Map
-| **mapHttpMessageBody** (advanced) | If this option is true then IN exchange 
Body of the exchange will be mapped to HTTP body. Setting this to false will 
avoid the HTTP mapping. | true | boolean
-| **mapHttpMessageFormUrl EncodedBody** (advanced) | If this option is true 
then IN exchange Form Encoded body of the exchange will be mapped to HTTP. 
Setting this to false will avoid the HTTP Form Encoded body mapping. | true | 
boolean
-| **mapHttpMessageHeaders** (advanced) | If this option is true then IN 
exchange Headers of the exchange will be mapped to HTTP headers. Setting this 
to false will avoid the HTTP Headers mapping. | true | boolean
-| **synchronous** (advanced) | Sets whether synchronous processing should be 
strictly used or Camel is allowed to use asynchronous processing (if 
supported). | false | boolean
-| **proxyAuthDomain** (proxy) | Proxy authentication domain to use with NTML | 
 | String
-| **proxyAuthHost** (proxy) | Proxy authentication host |  | String
-| **proxyAuthMethod** (proxy) | Proxy authentication method to use |  | String
-| **proxyAuthPassword** (proxy) | Proxy authentication password |  | String
-| **proxyAuthPort** (proxy) | Proxy authentication port |  | int
-| **proxyAuthScheme** (proxy) | Proxy authentication scheme to use |  | String
-| **proxyAuthUsername** (proxy) | Proxy authentication username |  | String
-| **proxyHost** (proxy) | Proxy hostname to use |  | String
-| **proxyPort** (proxy) | Proxy port to use |  | int
-| **authDomain** (security) | Authentication domain to use with NTML |  | 
String
-| **authHost** (security) | Authentication host to use with NTML |  | String
-| **authMethod** (security) | Authentication methods allowed to use as a comma 
separated list of values Basic Digest or NTLM. |  | String
-| **authMethodPriority** (security) | Which authentication method to 
prioritize to use either as Basic Digest or NTLM. |  | String
-| **authPassword** (security) | Authentication password |  | String
-| **authUsername** (security) | Authentication username |  | String
-|=======================================================================
+| *disableStreamCache* (common) | Determines whether or not the raw input stream from Servlet is cached (Camel will read the stream into an in-memory/overflow-to-file stream cache). By default Camel will cache the Servlet input stream to support reading it multiple times, to ensure Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into the message body if this option is false, to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The http/http4 producer will by default cache the response body stream. If setting this option to true then the producers will not cache the response body stream but use the response stream as-is as the message body. | false | boolean
+| *headerFilterStrategy* (common) | To use a custom HeaderFilterStrategy to filter headers to and from the Camel message. |  | HeaderFilterStrategy
+| *httpBinding* (common) | To use a custom HttpBinding to control the mapping 
between Camel message and HttpClient. |  | HttpBinding
+| *bridgeEndpoint* (producer) | If the option is true, HttpProducer will ignore the Exchange.HTTP_URI header and use the endpoint's URI for the request. You may also set the option throwExceptionOnFailure to false to let the HttpProducer send all fault responses back. | false | boolean
+| *chunked* (producer) | If this option is false, the Servlet will disable HTTP streaming and set the content-length header on the response | true | boolean
+| *connectionClose* (producer) | Specifies whether a Connection Close header 
must be added to HTTP Request. By default connectionClose is false. | false | 
boolean
+| *copyHeaders* (producer) | If this option is true then IN exchange headers will be copied to OUT exchange headers according to the copy strategy. Setting this to false allows you to only include the headers from the HTTP response (not propagating IN headers). | true | boolean
+| *httpMethod* (producer) | Configure the HTTP method to use. The HttpMethod 
header cannot override this option if set. |  | HttpMethods
+| *ignoreResponseBody* (producer) | If this option is true, the http producer won't read the response body and cache the input stream | false | boolean
+| *preserveHostHeader* (producer) | If the option is true, HttpProducer will set the Host header to the value contained in the current exchange Host header. This is useful in reverse proxy applications where you want the Host header received by the downstream server to reflect the URL called by the upstream client, and allows applications which use the Host header to generate accurate URLs for a proxied service | false | boolean
+| *throwExceptionOnFailure* (producer) | Option to disable throwing the 
HttpOperationFailedException in case of failed responses from the remote 
server. This allows you to get all responses regardless of the HTTP status 
code. | true | boolean
+| *transferException* (producer) | If enabled, and an Exchange failed 
processing on the consumer side, the caused Exception will be sent back 
serialized in the response as an application/x-java-serialized-object content 
type. On the producer side the exception will be deserialized and thrown as-is, 
instead of as an HttpOperationFailedException. The caused exception is required 
to be serializable. This is turned off by default. If you enable this, be aware 
that Java will deserialize the incoming data from the request, which can be a 
potential security risk. | false | boolean
+| *cookieHandler* (producer) | Configure a cookie handler to maintain an HTTP 
session. |  | CookieHandler
+| *okStatusCodeRange* (producer) | The status codes which are considered a 
success response. The values are inclusive. The range must be defined as 
from-to with the dash included. | 200-299 | String
+| *urlRewrite* (producer) | *Deprecated* Refers to a custom 
org.apache.camel.component.http.UrlRewrite which allows you to rewrite URLs 
when you bridge/proxy endpoints. See more details at 
http://camel.apache.org/urlrewrite.html |  | UrlRewrite
+| *httpClientConfigurer* (advanced) | Register a custom configuration strategy 
for new HttpClient instances created by producers or consumers, for example to 
configure authentication mechanisms, etc. |  | HttpClientConfigurer
+| *httpClientOptions* (advanced) | To configure the HttpClient using the 
key/values from the Map. |  | Map
+| *httpConnectionManager* (advanced) | To use a custom HttpConnectionManager 
to manage connections |  | HttpConnectionManager
+| *httpConnectionManager Options* (advanced) | To configure the 
HttpConnectionManager using the key/values from the Map. |  | Map
+| *mapHttpMessageBody* (advanced) | If this option is true, the IN exchange 
body will be mapped to the HTTP body. Setting this to false avoids the HTTP 
body mapping. | true | boolean
+| *mapHttpMessageFormUrl EncodedBody* (advanced) | If this option is true, the 
IN exchange form-encoded body will be mapped to HTTP. Setting this to false 
avoids the HTTP form-encoded body mapping. | true | boolean
+| *mapHttpMessageHeaders* (advanced) | If this option is true, the IN exchange 
headers will be mapped to HTTP headers. Setting this to false avoids the HTTP 
headers mapping. | true | boolean
+| *synchronous* (advanced) | Sets whether synchronous processing should be 
strictly used or Camel is allowed to use asynchronous processing (if 
supported). | false | boolean
+| *proxyAuthDomain* (proxy) | Proxy authentication domain to use with NTLM |  
| String
+| *proxyAuthHost* (proxy) | Proxy authentication host |  | String
+| *proxyAuthMethod* (proxy) | Proxy authentication method to use |  | String
+| *proxyAuthPassword* (proxy) | Proxy authentication password |  | String
+| *proxyAuthPort* (proxy) | Proxy authentication port |  | int
+| *proxyAuthScheme* (proxy) | Proxy authentication scheme to use |  | String
+| *proxyAuthUsername* (proxy) | Proxy authentication username |  | String
+| *proxyHost* (proxy) | Proxy hostname to use |  | String
+| *proxyPort* (proxy) | Proxy port to use |  | int
+| *authDomain* (security) | Authentication domain to use with NTLM |  | String
+| *authHost* (security) | Authentication host to use with NTLM |  | String
+| *authMethod* (security) | Authentication methods allowed, as a comma 
separated list of the values Basic, Digest or NTLM. |  | String
+| *authMethodPriority* (security) | Which authentication method to prioritize, 
either Basic, Digest or NTLM. |  | String
+| *authPassword* (security) | Authentication password |  | String
+| *authUsername* (security) | Authentication username |  | String
+|===
 // endpoint options: END
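As the table above documents, each of these endpoint options is supplied as a URI query parameter on the endpoint address. A minimal sketch in plain Java (no Camel dependency; the host, port, path, and option values are illustrative, and the two options shown, bridgeEndpoint and throwExceptionOnFailure, are taken from the table) of how such an endpoint URI string might be assembled:

```java
// Illustrative only: builds an HTTP endpoint URI string carrying two of the
// producer options from the table above as ordinary query parameters.
public class EndpointUriExample {

    // Options are appended after '?', separated by '&', exactly as in a
    // standard URI query string.
    static String buildUri(String host, int port, String path) {
        return String.format(
            "http://%s:%d/%s?bridgeEndpoint=true&throwExceptionOnFailure=false",
            host, port, path);
    }

    public static void main(String[] args) {
        System.out.println(buildUri("myserver", 8080, "orders"));
    }
}
```

In a route, the resulting string would be passed wherever an endpoint URI is expected; boolean options not present in the query string keep the defaults listed in the table.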
 
 