This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel.git


The following commit(s) were added to refs/heads/master by this push:
     new 34f02fa  Regen
34f02fa is described below

commit 34f02fa96a045c94e96a321b5be4b968cb609014
Author: Andrea Cosentino <anco...@gmail.com>
AuthorDate: Fri Dec 20 19:27:28 2019 +0100

    Regen
---
 .../src/main/docs/salesforce-component.adoc        |   2 +-
 .../endpoint/dsl/SparkEndpointBuilderFactory.java  | 347 ++++++---------------
 .../modules/ROOT/pages/elsql-component.adoc        |   2 +-
 .../modules/ROOT/pages/log-component.adoc          |   4 +-
 .../modules/ROOT/pages/salesforce-component.adoc   |   2 +-
 5 files changed, 109 insertions(+), 248 deletions(-)

diff --git a/components/camel-salesforce/camel-salesforce-component/src/main/docs/salesforce-component.adoc b/components/camel-salesforce/camel-salesforce-component/src/main/docs/salesforce-component.adoc
index f940311..c28f894 100644
--- a/components/camel-salesforce/camel-salesforce-component/src/main/docs/salesforce-component.adoc
+++ b/components/camel-salesforce/camel-salesforce-component/src/main/docs/salesforce-component.adoc
@@ -668,7 +668,7 @@ The Salesforce component supports 33 options, which are listed below.
 | *httpProxyExcluded Addresses* (proxy) | A list of addresses for which HTTP proxy server should not be used. |  | Set
 | *httpProxyAuthUri* (security) | Used in authentication against the HTTP proxy server, needs to match the URI of the proxy server in order for the httpProxyUsername and httpProxyPassword to be used for authentication. |  | String
 | *httpProxyRealm* (security) | Realm of the proxy server, used in preemptive Basic/Digest authentication methods against the HTTP proxy server. |  | String
-| *httpProxyUseDigest Auth* (security) | If set to true Digest authentication will be used when authenticating to the HTTP proxy,otherwise Basic authorization method will be used | false | boolean
+| *httpProxyUseDigest Auth* (security) | If set to true Digest authentication will be used when authenticating to the HTTP proxy, otherwise Basic authorization method will be used | false | boolean
 | *packages* (common) | In what packages are the generated DTO classes. Typically the classes would be generated using camel-salesforce-maven-plugin. Set it if using the generated DTOs to gain the benefit of using short SObject names in parameters/header values. |  | String[]
 | *basicPropertyBinding* (advanced) | Whether the component should use basic property binding (Camel 2.x) or the newer property binding with additional capabilities | false | boolean
 | *lazyStartProducer* (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and [...]
diff --git a/core/camel-endpointdsl/src/main/java/org/apache/camel/builder/endpoint/dsl/SparkEndpointBuilderFactory.java b/core/camel-endpointdsl/src/main/java/org/apache/camel/builder/endpoint/dsl/SparkEndpointBuilderFactory.java
index c3eea0f..1c0237d 100644
--- a/core/camel-endpointdsl/src/main/java/org/apache/camel/builder/endpoint/dsl/SparkEndpointBuilderFactory.java
+++ b/core/camel-endpointdsl/src/main/java/org/apache/camel/builder/endpoint/dsl/SparkEndpointBuilderFactory.java
@@ -17,15 +17,13 @@
 package org.apache.camel.builder.endpoint.dsl;
 
 import javax.annotation.Generated;
-import org.apache.camel.ExchangePattern;
 import org.apache.camel.builder.EndpointConsumerBuilder;
 import org.apache.camel.builder.EndpointProducerBuilder;
 import org.apache.camel.builder.endpoint.AbstractEndpointBuilder;
-import org.apache.camel.spi.ExceptionHandler;
 
 /**
- * The spark-rest component is used for hosting REST services which has been
- * defined using Camel rest-dsl.
+ * The spark component can be used to send RDD or DataFrame jobs to Apache Spark
+ * cluster.
  * 
  * Generated by camel-package-maven-plugin - do not edit this file!
  */
@@ -34,262 +32,179 @@ public interface SparkEndpointBuilderFactory {
 
 
     /**
-     * Builder for endpoint for the Spark Rest component.
+     * Builder for endpoint for the Spark component.
      */
-    public interface SparkEndpointBuilder extends EndpointConsumerBuilder {
+    public interface SparkEndpointBuilder extends EndpointProducerBuilder {
         default AdvancedSparkEndpointBuilder advanced() {
             return (AdvancedSparkEndpointBuilder) this;
         }
         /**
-         * Accept type such as: 'text/xml', or 'application/json'. By default we
-         * accept all kinds of types.
-         * 
-         * The option is a: <code>java.lang.String</code> type.
-         * 
-         * Group: consumer
-         */
-        default SparkEndpointBuilder accept(String accept) {
-            doSetProperty("accept", accept);
-            return this;
-        }
-        /**
-         * Allows for bridging the consumer to the Camel routing Error Handler,
-         * which mean any exceptions occurred while the consumer is trying to
-         * pickup incoming messages, or the likes, will now be processed as a
-         * message and handled by the routing Error Handler. By default the
-         * consumer will use the org.apache.camel.spi.ExceptionHandler to deal
-         * with exceptions, that will be logged at WARN or ERROR level and
-         * ignored.
-         * 
-         * The option is a: <code>boolean</code> type.
-         * 
-         * Group: consumer
-         */
-        default SparkEndpointBuilder bridgeErrorHandler(
-                boolean bridgeErrorHandler) {
-            doSetProperty("bridgeErrorHandler", bridgeErrorHandler);
-            return this;
-        }
-        /**
-         * Allows for bridging the consumer to the Camel routing Error Handler,
-         * which mean any exceptions occurred while the consumer is trying to
-         * pickup incoming messages, or the likes, will now be processed as a
-         * message and handled by the routing Error Handler. By default the
-         * consumer will use the org.apache.camel.spi.ExceptionHandler to deal
-         * with exceptions, that will be logged at WARN or ERROR level and
-         * ignored.
-         * 
-         * The option will be converted to a <code>boolean</code> type.
-         * 
-         * Group: consumer
-         */
-        default SparkEndpointBuilder bridgeErrorHandler(
-                String bridgeErrorHandler) {
-            doSetProperty("bridgeErrorHandler", bridgeErrorHandler);
-            return this;
-        }
-        /**
-         * Determines whether or not the raw input stream from Spark
-         * HttpRequest#getContent() is cached or not (Camel will read the stream
-         * into a in light-weight memory based Stream caching) cache. By default
-         * Camel will cache the Netty input stream to support reading it
-         * multiple times to ensure Camel can retrieve all data from the stream.
-         * However you can set this option to true when you for example need to
-         * access the raw stream, such as streaming it directly to a file or
-         * other persistent store. Mind that if you enable this option, then you
-         * cannot read the Netty stream multiple times out of the box, and you
-         * would need manually to reset the reader index on the Spark raw
-         * stream.
+         * Indicates if results should be collected or counted.
          * 
          * The option is a: <code>boolean</code> type.
          * 
-         * Group: consumer
+         * Group: producer
          */
-        default SparkEndpointBuilder disableStreamCache(
-                boolean disableStreamCache) {
-            doSetProperty("disableStreamCache", disableStreamCache);
+        default SparkEndpointBuilder collect(boolean collect) {
+            doSetProperty("collect", collect);
             return this;
         }
         /**
-         * Determines whether or not the raw input stream from Spark
-         * HttpRequest#getContent() is cached or not (Camel will read the stream
-         * into a in light-weight memory based Stream caching) cache. By default
-         * Camel will cache the Netty input stream to support reading it
-         * multiple times to ensure Camel can retrieve all data from the stream.
-         * However you can set this option to true when you for example need to
-         * access the raw stream, such as streaming it directly to a file or
-         * other persistent store. Mind that if you enable this option, then you
-         * cannot read the Netty stream multiple times out of the box, and you
-         * would need manually to reset the reader index on the Spark raw
-         * stream.
+         * Indicates if results should be collected or counted.
          * 
          * The option will be converted to a <code>boolean</code> type.
          * 
-         * Group: consumer
+         * Group: producer
          */
-        default SparkEndpointBuilder disableStreamCache(
-                String disableStreamCache) {
-            doSetProperty("disableStreamCache", disableStreamCache);
+        default SparkEndpointBuilder collect(String collect) {
+            doSetProperty("collect", collect);
             return this;
         }
         /**
-         * If this option is enabled, then during binding from Spark to Camel
-         * Message then the headers will be mapped as well (eg added as header
-         * to the Camel Message as well). You can turn off this option to
-         * disable this. The headers can still be accessed from the
-         * org.apache.camel.component.sparkrest.SparkMessage message with the
-         * method getRequest() that returns the Spark HTTP request instance.
+         * DataFrame to compute against.
          * 
-         * The option is a: <code>boolean</code> type.
+         * The option is a:
+         * <code>org.apache.spark.sql.Dataset&lt;org.apache.spark.sql.Row&gt;</code> type.
          * 
-         * Group: consumer
+         * Group: producer
          */
-        default SparkEndpointBuilder mapHeaders(boolean mapHeaders) {
-            doSetProperty("mapHeaders", mapHeaders);
+        default SparkEndpointBuilder dataFrame(Object dataFrame) {
+            doSetProperty("dataFrame", dataFrame);
             return this;
         }
         /**
-         * If this option is enabled, then during binding from Spark to Camel
-         * Message then the headers will be mapped as well (eg added as header
-         * to the Camel Message as well). You can turn off this option to
-         * disable this. The headers can still be accessed from the
-         * org.apache.camel.component.sparkrest.SparkMessage message with the
-         * method getRequest() that returns the Spark HTTP request instance.
+         * DataFrame to compute against.
          * 
-         * The option will be converted to a <code>boolean</code> type.
+         * The option will be converted to a
+         * <code>org.apache.spark.sql.Dataset&lt;org.apache.spark.sql.Row&gt;</code> type.
          * 
-         * Group: consumer
+         * Group: producer
          */
-        default SparkEndpointBuilder mapHeaders(String mapHeaders) {
-            doSetProperty("mapHeaders", mapHeaders);
+        default SparkEndpointBuilder dataFrame(String dataFrame) {
+            doSetProperty("dataFrame", dataFrame);
             return this;
         }
         /**
-         * If enabled and an Exchange failed processing on the consumer side,
-         * and if the caused Exception was send back serialized in the response
-         * as a application/x-java-serialized-object content type. This is by
-         * default turned off. If you enable this then be aware that Java will
-         * deserialize the incoming data from the request to Java and that can
-         * be a potential security risk.
+         * Function performing action against an DataFrame.
          * 
-         * The option is a: <code>boolean</code> type.
+         * The option is a:
+         * <code>org.apache.camel.component.spark.DataFrameCallback</code> type.
          * 
-         * Group: consumer
+         * Group: producer
          */
-        default SparkEndpointBuilder transferException(boolean transferException) {
-            doSetProperty("transferException", transferException);
+        default SparkEndpointBuilder dataFrameCallback(Object dataFrameCallback) {
+            doSetProperty("dataFrameCallback", dataFrameCallback);
             return this;
         }
         /**
-         * If enabled and an Exchange failed processing on the consumer side,
-         * and if the caused Exception was send back serialized in the response
-         * as a application/x-java-serialized-object content type. This is by
-         * default turned off. If you enable this then be aware that Java will
-         * deserialize the incoming data from the request to Java and that can
-         * be a potential security risk.
+         * Function performing action against an DataFrame.
          * 
-         * The option will be converted to a <code>boolean</code> type.
+         * The option will be converted to a
+         * <code>org.apache.camel.component.spark.DataFrameCallback</code> type.
          * 
-         * Group: consumer
+         * Group: producer
          */
-        default SparkEndpointBuilder transferException(String transferException) {
-            doSetProperty("transferException", transferException);
+        default SparkEndpointBuilder dataFrameCallback(String dataFrameCallback) {
+            doSetProperty("dataFrameCallback", dataFrameCallback);
             return this;
         }
         /**
-         * If this option is enabled, then during binding from Spark to Camel
-         * Message then the header values will be URL decoded (eg %20 will be a
-         * space character.).
+         * Whether the producer should be started lazy (on the first message).
+         * By starting lazy you can use this to allow CamelContext and routes to
+         * startup in situations where a producer may otherwise fail during
+         * starting and cause the route to fail being started. By deferring this
+         * startup to be lazy then the startup failure can be handled during
+         * routing messages via Camel's routing error handlers. Beware that when
+         * the first message is processed then creating and starting the
+         * producer may take a little time and prolong the total processing time
+         * of the processing.
          * 
          * The option is a: <code>boolean</code> type.
          * 
-         * Group: consumer
+         * Group: producer
          */
-        default SparkEndpointBuilder urlDecodeHeaders(boolean urlDecodeHeaders) {
-            doSetProperty("urlDecodeHeaders", urlDecodeHeaders);
+        default SparkEndpointBuilder lazyStartProducer(boolean lazyStartProducer) {
+            doSetProperty("lazyStartProducer", lazyStartProducer);
             return this;
         }
         /**
-         * If this option is enabled, then during binding from Spark to Camel
-         * Message then the header values will be URL decoded (eg %20 will be a
-         * space character.).
+         * Whether the producer should be started lazy (on the first message).
+         * By starting lazy you can use this to allow CamelContext and routes to
+         * startup in situations where a producer may otherwise fail during
+         * starting and cause the route to fail being started. By deferring this
+         * startup to be lazy then the startup failure can be handled during
+         * routing messages via Camel's routing error handlers. Beware that when
+         * the first message is processed then creating and starting the
+         * producer may take a little time and prolong the total processing time
+         * of the processing.
          * 
          * The option will be converted to a <code>boolean</code> type.
          * 
-         * Group: consumer
+         * Group: producer
          */
-        default SparkEndpointBuilder urlDecodeHeaders(String urlDecodeHeaders) {
-            doSetProperty("urlDecodeHeaders", urlDecodeHeaders);
+        default SparkEndpointBuilder lazyStartProducer(String lazyStartProducer) {
+            doSetProperty("lazyStartProducer", lazyStartProducer);
             return this;
         }
-    }
-
-    /**
-     * Advanced builder for endpoint for the Spark Rest component.
-     */
-    public interface AdvancedSparkEndpointBuilder
-            extends
-                EndpointConsumerBuilder {
-        default SparkEndpointBuilder basic() {
-            return (SparkEndpointBuilder) this;
-        }
         /**
-         * To let the consumer use a custom ExceptionHandler. Notice if the
-         * option bridgeErrorHandler is enabled then this option is not in use.
-         * By default the consumer will deal with exceptions, that will be
-         * logged at WARN or ERROR level and ignored.
+         * RDD to compute against.
          * 
-         * The option is a: <code>org.apache.camel.spi.ExceptionHandler</code>
+         * The option is a: <code>org.apache.spark.api.java.JavaRDDLike</code>
          * type.
          * 
-         * Group: consumer (advanced)
+         * Group: producer
          */
-        default AdvancedSparkEndpointBuilder exceptionHandler(
-                ExceptionHandler exceptionHandler) {
-            doSetProperty("exceptionHandler", exceptionHandler);
+        default SparkEndpointBuilder rdd(Object rdd) {
+            doSetProperty("rdd", rdd);
             return this;
         }
         /**
-         * To let the consumer use a custom ExceptionHandler. Notice if the
-         * option bridgeErrorHandler is enabled then this option is not in use.
-         * By default the consumer will deal with exceptions, that will be
-         * logged at WARN or ERROR level and ignored.
+         * RDD to compute against.
          * 
          * The option will be converted to a
-         * <code>org.apache.camel.spi.ExceptionHandler</code> type.
+         * <code>org.apache.spark.api.java.JavaRDDLike</code> type.
          * 
-         * Group: consumer (advanced)
+         * Group: producer
          */
-        default AdvancedSparkEndpointBuilder exceptionHandler(
-                String exceptionHandler) {
-            doSetProperty("exceptionHandler", exceptionHandler);
+        default SparkEndpointBuilder rdd(String rdd) {
+            doSetProperty("rdd", rdd);
             return this;
         }
         /**
-         * Sets the exchange pattern when the consumer creates an exchange.
+         * Function performing action against an RDD.
          * 
-         * The option is a: <code>org.apache.camel.ExchangePattern</code> type.
+         * The option is a:
+         * <code>org.apache.camel.component.spark.RddCallback</code> type.
          * 
-         * Group: consumer (advanced)
+         * Group: producer
          */
-        default AdvancedSparkEndpointBuilder exchangePattern(
-                ExchangePattern exchangePattern) {
-            doSetProperty("exchangePattern", exchangePattern);
+        default SparkEndpointBuilder rddCallback(Object rddCallback) {
+            doSetProperty("rddCallback", rddCallback);
             return this;
         }
         /**
-         * Sets the exchange pattern when the consumer creates an exchange.
+         * Function performing action against an RDD.
          * 
          * The option will be converted to a
-         * <code>org.apache.camel.ExchangePattern</code> type.
+         * <code>org.apache.camel.component.spark.RddCallback</code> type.
          * 
-         * Group: consumer (advanced)
+         * Group: producer
          */
-        default AdvancedSparkEndpointBuilder exchangePattern(
-                String exchangePattern) {
-            doSetProperty("exchangePattern", exchangePattern);
+        default SparkEndpointBuilder rddCallback(String rddCallback) {
+            doSetProperty("rddCallback", rddCallback);
             return this;
         }
+    }
+
+    /**
+     * Advanced builder for endpoint for the Spark component.
+     */
+    public interface AdvancedSparkEndpointBuilder
+            extends
+                EndpointProducerBuilder {
+        default SparkEndpointBuilder basic() {
+            return (SparkEndpointBuilder) this;
+        }
         /**
         * Whether the endpoint should use basic property binding (Camel 2.x) or
          * the newer property binding with additional capabilities.
@@ -317,56 +232,6 @@ public interface SparkEndpointBuilderFactory {
             return this;
         }
         /**
-         * Whether or not the consumer should try to find a target consumer by
-         * matching the URI prefix if no exact match is found.
-         * 
-         * The option is a: <code>boolean</code> type.
-         * 
-         * Group: advanced
-         */
-        default AdvancedSparkEndpointBuilder matchOnUriPrefix(
-                boolean matchOnUriPrefix) {
-            doSetProperty("matchOnUriPrefix", matchOnUriPrefix);
-            return this;
-        }
-        /**
-         * Whether or not the consumer should try to find a target consumer by
-         * matching the URI prefix if no exact match is found.
-         * 
-         * The option will be converted to a <code>boolean</code> type.
-         * 
-         * Group: advanced
-         */
-        default AdvancedSparkEndpointBuilder matchOnUriPrefix(
-                String matchOnUriPrefix) {
-            doSetProperty("matchOnUriPrefix", matchOnUriPrefix);
-            return this;
-        }
-        /**
-         * To use a custom SparkBinding to map to/from Camel message.
-         * 
-         * The option is a:
-         * <code>org.apache.camel.component.sparkrest.SparkBinding</code> type.
-         * 
-         * Group: advanced
-         */
-        default AdvancedSparkEndpointBuilder sparkBinding(Object sparkBinding) {
-            doSetProperty("sparkBinding", sparkBinding);
-            return this;
-        }
-        /**
-         * To use a custom SparkBinding to map to/from Camel message.
-         * 
-         * The option will be converted to a
-         * <code>org.apache.camel.component.sparkrest.SparkBinding</code> type.
-         * 
-         * Group: advanced
-         */
-        default AdvancedSparkEndpointBuilder sparkBinding(String sparkBinding) {
-            doSetProperty("sparkBinding", sparkBinding);
-            return this;
-        }
-        /**
         * Sets whether synchronous processing should be strictly used, or Camel
          * is allowed to use asynchronous processing (if supported).
          * 
@@ -392,28 +257,24 @@ public interface SparkEndpointBuilderFactory {
         }
     }
     /**
-     * Spark Rest (camel-spark-rest)
-     * The spark-rest component is used for hosting REST services which has been
-     * defined using Camel rest-dsl.
-     * 
-     * Category: rest
-     * Since: 2.14
-     * Maven coordinates: org.apache.camel:camel-spark-rest
+     * Spark (camel-spark)
+     * The spark component can be used to send RDD or DataFrame jobs to Apache
+     * Spark cluster.
      * 
-     * Syntax: <code>spark-rest:verb:path</code>
+     * Category: bigdata,iot
+     * Since: 2.17
+     * Maven coordinates: org.apache.camel:camel-spark
      * 
-     * Path parameter: verb (required)
-     * get, post, put, patch, delete, head, trace, connect, or options.
-     * The value can be one of: get, post, put, patch, delete, head, trace,
-     * connect, options
+     * Syntax: <code>spark:endpointType</code>
      * 
-     * Path parameter: path (required)
-     * The content path which support Spark syntax.
+     * Path parameter: endpointType (required)
+     * Type of the endpoint (rdd, dataframe, hive).
+     * The value can be one of: rdd, dataframe, hive
      */
-    default SparkEndpointBuilder sparkRest(String path) {
+    default SparkEndpointBuilder spark(String path) {
        class SparkEndpointBuilderImpl extends AbstractEndpointBuilder implements SparkEndpointBuilder, AdvancedSparkEndpointBuilder {
             public SparkEndpointBuilderImpl(String path) {
-                super("spark-rest", path);
+                super("spark", path);
             }
         }
         return new SparkEndpointBuilderImpl(path);
diff --git a/docs/components/modules/ROOT/pages/elsql-component.adoc b/docs/components/modules/ROOT/pages/elsql-component.adoc
index 837b6f5..69af080 100644
--- a/docs/components/modules/ROOT/pages/elsql-component.adoc
+++ b/docs/components/modules/ROOT/pages/elsql-component.adoc
@@ -117,7 +117,7 @@ with the following path and query parameters:
 | *onConsumeFailed* (consumer) | After processing each row then this query can be executed, if the Exchange failed, for example to mark the row as failed. The query can have parameter. |  | String
 | *routeEmptyResultSet* (consumer) | Sets whether empty resultset should be allowed to be sent to the next hop. Defaults to false. So the empty resultset will be filtered out. | false | boolean
 | *sendEmptyMessageWhenIdle* (consumer) | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | boolean
-| *transacted* (consumer) | Enables or disables transaction. If enabled then if processing an exchange failed then the consumerbreak out processing any further exchanges to cause a rollback eager. | false | boolean
+| *transacted* (consumer) | Enables or disables transaction. If enabled then if processing an exchange failed then the consumer breaks out processing any further exchanges to cause a rollback eager. | false | boolean
 | *useIterator* (consumer) | Sets how resultset should be delivered to route. Indicates delivery as either a list or individual object. defaults to true. | true | boolean
 | *exceptionHandler* (consumer) | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. |  | ExceptionHandler
 | *exchangePattern* (consumer) | Sets the exchange pattern when the consumer creates an exchange. |  | ExchangePattern
diff --git a/docs/components/modules/ROOT/pages/log-component.adoc b/docs/components/modules/ROOT/pages/log-component.adoc
index 50532f5..3e05da8 100644
--- a/docs/components/modules/ROOT/pages/log-component.adoc
+++ b/docs/components/modules/ROOT/pages/log-component.adoc
@@ -122,7 +122,7 @@ with the following path and query parameters:
 | *showAll* (formatting) | Quick option for turning all options on. (multiline, maxChars has to be manually set if to be used) | false | boolean
 | *showBody* (formatting) | Show the message body. | true | boolean
 | *showBodyType* (formatting) | Show the body Java type. | true | boolean
-| *showCaughtException* (formatting) | f the exchange has a caught exception, show the exception message (no stack trace).A caught exception is stored as a property on the exchange (using the key org.apache.camel.Exchange#EXCEPTION_CAUGHT) and for instance a doCatch can catch exceptions. | false | boolean
+| *showCaughtException* (formatting) | If the exchange has a caught exception, show the exception message (no stack trace). A caught exception is stored as a property on the exchange (using the key org.apache.camel.Exchange#EXCEPTION_CAUGHT) and for instance a doCatch can catch exceptions. | false | boolean
 | *showException* (formatting) | If the exchange has an exception, show the exception message (no stacktrace) | false | boolean
 | *showExchangeId* (formatting) | Show the unique exchange ID. | false | boolean
 | *showExchangePattern* (formatting) | Shows the Message Exchange Pattern (or MEP for short). | true | boolean
@@ -132,7 +132,7 @@ with the following path and query parameters:
 | *showProperties* (formatting) | Show the exchange properties. | false | boolean
 | *showStackTrace* (formatting) | Show the stack trace, if an exchange has an exception. Only effective if one of showAll, showException or showCaughtException are enabled. | false | boolean
 | *showStreams* (formatting) | Whether Camel should show stream bodies or not (eg such as java.io.InputStream). Beware if you enable this option then you may not be able later to access the message body as the stream have already been read by this logger. To remedy this you will have to use Stream Caching. | false | boolean
-| *skipBodyLineSeparator* (formatting) | Whether to skip line separators when logging the message body.This allows to log the message body in one line, setting this option to false will preserve any line separators from the body, which then will log the body as is. | true | boolean
+| *skipBodyLineSeparator* (formatting) | Whether to skip line separators when logging the message body. This allows to log the message body in one line, setting this option to false will preserve any line separators from the body, which then will log the body as is. | true | boolean
 | *style* (formatting) | Sets the outputs style to use. | Default | OutputStyle
 |===
 // endpoint options: END
diff --git a/docs/components/modules/ROOT/pages/salesforce-component.adoc b/docs/components/modules/ROOT/pages/salesforce-component.adoc
index e83b606..0931aec 100644
--- a/docs/components/modules/ROOT/pages/salesforce-component.adoc
+++ b/docs/components/modules/ROOT/pages/salesforce-component.adoc
@@ -669,7 +669,7 @@ The Salesforce component supports 33 options, which are listed below.
 | *httpProxyExcluded Addresses* (proxy) | A list of addresses for which HTTP proxy server should not be used. |  | Set
 | *httpProxyAuthUri* (security) | Used in authentication against the HTTP proxy server, needs to match the URI of the proxy server in order for the httpProxyUsername and httpProxyPassword to be used for authentication. |  | String
 | *httpProxyRealm* (security) | Realm of the proxy server, used in preemptive Basic/Digest authentication methods against the HTTP proxy server. |  | String
-| *httpProxyUseDigest Auth* (security) | If set to true Digest authentication will be used when authenticating to the HTTP proxy,otherwise Basic authorization method will be used | false | boolean
+| *httpProxyUseDigest Auth* (security) | If set to true Digest authentication will be used when authenticating to the HTTP proxy, otherwise Basic authorization method will be used | false | boolean
 | *packages* (common) | In what packages are the generated DTO classes. Typically the classes would be generated using camel-salesforce-maven-plugin. Set it if using the generated DTOs to gain the benefit of using short SObject names in parameters/header values. |  | String[]
 | *basicPropertyBinding* (advanced) | Whether the component should use basic property binding (Camel 2.x) or the newer property binding with additional capabilities | false | boolean
 | *lazyStartProducer* (producer) | Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and [...]
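For reference, the regenerated SparkEndpointBuilderFactory above targets the Apache Spark component (scheme "spark", a producer) instead of the Spark Rest component (scheme "spark-rest", a consumer). A minimal, illustrative sketch of endpoint URIs the new builder produces, using only the endpointType path parameter and producer options named in the generated Javadoc; the #myRdd, #myRddCallback, #myDataFrame, and #myDataFrameCallback registry bean names are hypothetical placeholders:

```
spark:rdd?rdd=#myRdd&rddCallback=#myRddCallback&collect=true
spark:dataframe?dataFrame=#myDataFrame&dataFrameCallback=#myDataFrameCallback
```

The endpointType path segment accepts rdd, dataframe, or hive, as listed in the regenerated documentation block above.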
