This is an automated email from the ASF dual-hosted git repository.

davsclaus pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel.git


The following commit(s) were added to refs/heads/master by this push:
     new b03eaca  Regen
b03eaca is described below

commit b03eaca034696be2b2d91f5e26a3affae9568a91
Author: Claus Ibsen <claus.ib...@gmail.com>
AuthorDate: Fri May 31 06:16:28 2019 +0200

    Regen
---
 .../src/main/docs/tokenize-language.adoc           |  3 +-
 .../modules/ROOT/pages/claimCheck-eip.adoc         | 41 +++++++++++++++-------
 .../modules/ROOT/pages/tokenize-language.adoc      |  3 +-
 3 files changed, 32 insertions(+), 15 deletions(-)

diff --git a/core/camel-base/src/main/docs/tokenize-language.adoc b/core/camel-base/src/main/docs/tokenize-language.adoc
index 63937d7..b2cdb73 100644
--- a/core/camel-base/src/main/docs/tokenize-language.adoc
+++ b/core/camel-base/src/main/docs/tokenize-language.adoc
@@ -17,7 +17,7 @@ see Splitter.
 === Tokenize Options
 
 // language options: START
-The Tokenize language supports 10 options, which are listed below.
+The Tokenize language supports 11 options, which are listed below.
 
 
 
@@ -32,6 +32,7 @@ The Tokenize language supports 10 options, which are listed below.
 | xml | false | Boolean | Whether the input is XML messages. This option must 
be set to true if working with XML payloads.
 | includeTokens | false | Boolean | Whether to include the tokens in the parts when using pairs. The default value is false
 | group |  | String | To group N parts together, for example to split big 
files into chunks of 1000 lines. You can use simple language as the group to 
support dynamic group sizes.
+| groupDelimiter |  | String | Sets the delimiter to use when grouping. If this has not been set then the token will be used as the delimiter.
 | skipFirst | false | Boolean | To skip the very first element
 | trim | true | Boolean | Whether to trim the value to remove leading and 
trailing whitespaces and line breaks
 |===
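
As an illustrative sketch, assuming the new option is exposed as a `groupDelimiter` attribute in the XML DSL like the other options above (the endpoints are hypothetical), grouping with a custom delimiter might look like:

[source,xml]
----
<split>
  <!-- group every 2 comma-separated tokens into one part, joined with ';' instead of the token -->
  <tokenize token="," group="2" groupDelimiter=";"/>
  <to uri="mock:result"/>
</split>
----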
diff --git a/docs/user-manual/modules/ROOT/pages/claimCheck-eip.adoc b/docs/user-manual/modules/ROOT/pages/claimCheck-eip.adoc
index a3fa60a..2643bfb 100644
--- a/docs/user-manual/modules/ROOT/pages/claimCheck-eip.adoc
+++ b/docs/user-manual/modules/ROOT/pages/claimCheck-eip.adoc
@@ -20,7 +20,7 @@ The Claim Check EIP supports 5 options which are listed below:
 |===
 | Name | Description | Default | Type
 | *operation* | *Required* The claim check operation to use. The following operations are supported: Get - Gets (does not remove) the claim check by the given key. GetAndRemove - Gets and removes the claim check by the given key. Set - Sets a new claim check with the given key (will override if the key already exists). Push - Sets a new claim check on the stack (does not use a key). Pop - Gets the latest claim check from the stack (does not use a key). |  | ClaimCheckOperation
-| *key* | To use a specific key for claim check id. |  | String
+| *key* | To use a specific key for the claim check id (for dynamic keys use the simple language syntax as the key). |  | String
 | *filter* | Specifies a filter to control what data gets merged back from the claim check repository. The following syntax is supported: body - to aggregate the message body, attachments - to aggregate all the message attachments, headers - to aggregate all the message headers, header:pattern - to aggregate all the message headers that match the pattern. The following rules are applied in this order: exact match, returns true; wildcard match (pattern ends with a and [...]
 | *strategyRef* | To use a custom AggregationStrategy instead of the default implementation. Notice you cannot both use a custom aggregation strategy and configure data at the same time. |  | String
 | *strategyMethodName* | This option can be used to explicitly declare the method name to use when using POJOs as the AggregationStrategy. |  | String
@@ -67,35 +67,30 @@ You can specify multiple rules separated by comma.
 
 For example to include the message body and all headers starting with _foo_:
 
-[text]
 ----
 body,header:foo*
 ----
 
 To only merge back the message body:
 
-[text]
 ----
 body
 ----
 
 To only merge back the message attachments:
 
-[text]
 ----
 attachments
 ----
 
 To only merge back headers:
 
-[text]
 ----
 headers
 ----
 
 To only merge back a header name foo:
 
-[text]
 ----
 header:foo
 ----
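
As an illustrative sketch (hypothetical endpoints, assuming the EIP options above map to attributes of the same name in the XML DSL), such a filter rule is passed via the `filter` option:

[source,xml]
----
<route>
  <from uri="direct:start"/>
  <!-- merge back only the message body and the headers starting with foo -->
  <claimCheck operation="Get" key="foo" filter="body,header:foo*"/>
  <to uri="mock:result"/>
</route>
----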
@@ -104,7 +99,7 @@ If the filter rule is specified as empty or as wildcard then everything is merge
 
 Notice that when merging data back, any existing data is overwritten, and any other existing data is preserved.
 
-==== Fine grained filtering with include and explude pattern
+==== Fine grained filtering with include and exclude pattern
 
 The syntax also supports the following prefixes which can be used to specify include, exclude, or remove
 
@@ -129,12 +124,32 @@ You can also instruct to remove headers when merging data back, for example to r
 
 Note you cannot have both include (`+`) and exclude (`-`) `header:pattern` at 
the same time.
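
For example, a sketch (hypothetical header name, assuming the `-` prefix excludes a header pattern as noted above) that merges back the body while excluding headers starting with `bar`:

[source,xml]
----
<!-- merge back the body; headers matching bar* are excluded from the merge -->
<claimCheck operation="Get" filter="body,-header:bar*"/>
----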
 
+=== Dynamic keys
+
+The claim check key is static, but you can use the `simple` language syntax
to define dynamic keys,
+for example to use a header from the message named `myKey`:
+
+[source,java]
+----
+from("direct:start")
+    .to("mock:a")
+    .claimCheck(ClaimCheckOperation.Set, "${header.myKey}")
+    .transform().constant("Bye World")
+    .to("mock:b")
+    .claimCheck(ClaimCheckOperation.Get, "${header.myKey}")
+    .to("mock:c")
+    .transform().constant("Hi World")
+    .to("mock:d")
+    .claimCheck(ClaimCheckOperation.Get, "${header.myKey}")
+    .to("mock:e");
+----
+
 
 === Java Examples
 
 The following example shows the `Push` and `Pop` operations in action:
 
-[java]
+[source,java]
 ----
 from("direct:start")
     .to("mock:a")
@@ -151,7 +166,7 @@ then the original message body is retrieved and merged back so `mock:c` will ret
 
 Here is an example using `Get` and `Set` operations, which uses the key `foo`:
 
-[java]
+[source,java]
 ----
 from("direct:start")
     .to("mock:a")
@@ -171,7 +186,7 @@ to get the data once, you can use `GetAndRemove`.
 
 The last example shows how to use the `filter` option where we only want to get back the headers named `foo` or `bar`:
 
-[java]
+[source,java]
 ----
 from("direct:start")
     .to("mock:a")
@@ -189,7 +204,7 @@ from("direct:start")
 
 The following example shows the `Push` and `Pop` operations in action:
 
-[xml]
+[source,xml]
 ----
 <route>
   <from uri="direct:start"/>
@@ -210,7 +225,7 @@ then the original message body is retrieved and merged back so `mock:c` will ret
 
 Here is an example using `Get` and `Set` operations, which uses the key `foo`:
 
-[xml]
+[source,xml]
 ----
 <route>
   <from uri="direct:start"/>
@@ -236,7 +251,7 @@ to get the data once, you can use `GetAndRemove`.
 
 The last example shows how to use the `filter` option where we only want to get back the headers named `foo` or `bar`:
 
-[xml]
+[source,xml]
 ----
 <route>
   <from uri="direct:start"/>
diff --git a/docs/user-manual/modules/ROOT/pages/tokenize-language.adoc b/docs/user-manual/modules/ROOT/pages/tokenize-language.adoc
index 63937d7..b2cdb73 100644
--- a/docs/user-manual/modules/ROOT/pages/tokenize-language.adoc
+++ b/docs/user-manual/modules/ROOT/pages/tokenize-language.adoc
@@ -17,7 +17,7 @@ see Splitter.
 === Tokenize Options
 
 // language options: START
-The Tokenize language supports 10 options, which are listed below.
+The Tokenize language supports 11 options, which are listed below.
 
 
 
@@ -32,6 +32,7 @@ The Tokenize language supports 10 options, which are listed below.
 | xml | false | Boolean | Whether the input is XML messages. This option must 
be set to true if working with XML payloads.
 | includeTokens | false | Boolean | Whether to include the tokens in the parts when using pairs. The default value is false
 | group |  | String | To group N parts together, for example to split big 
files into chunks of 1000 lines. You can use simple language as the group to 
support dynamic group sizes.
+| groupDelimiter |  | String | Sets the delimiter to use when grouping. If this has not been set then the token will be used as the delimiter.
 | skipFirst | false | Boolean | To skip the very first element
 | trim | true | Boolean | Whether to trim the value to remove leading and 
trailing whitespaces and line breaks
 |===
