Author: buildbot
Date: Fri Oct 18 23:20:45 2013
New Revision: 883204

Log:
Production update by buildbot for camel

Modified:
    websites/production/camel/content/book-component-appendix.html
    websites/production/camel/content/book-in-one-page.html
    websites/production/camel/content/cache/main.pageCache
    websites/production/camel/content/camel-2130-release.html
    websites/production/camel/content/hdfs.html

Modified: websites/production/camel/content/book-component-appendix.html
==============================================================================
--- websites/production/camel/content/book-component-appendix.html (original)
+++ websites/production/camel/content/book-component-appendix.html Fri Oct 18 23:20:45 2013
@@ -6708,7 +6708,7 @@ hdfs://hostname[:port][/path][?options]
 <p>You can append query options to the URI in the following format, <tt>?option=value&option=value&...</tt><br clear="none">
 The path is treated in the following way:</p>
-<ol><li>as a consumer, if it's a file, it just reads the file, otherwise if it represents a directory it scans all the file under the path satisfying the configured pattern. All the files under that directory must be of the same type.</li><li>as a producer, if at least one split strategy is defined, the path is considered a directory and under that directory the producer creates a different file per split named seg0, seg1, seg2, etc.</li></ol>
+<ol><li>as a consumer, if it's a file, it just reads the file, otherwise if it represents a directory it scans all the file under the path satisfying the configured pattern. All the files under that directory must be of the same type.</li><li>as a producer, if at least one split strategy is defined, the path is considered a directory and under that directory the producer creates a different file per split named using the configured <a shape="rect" href="uuidgenerator.html" title="UuidGenerator">UuidGenerator</a>.</li></ol>
 <h3><a shape="rect" name="BookComponentAppendix-Options"></a>Options</h3>
@@ -6725,7 +6725,7 @@ The path is treated in the following way
 <h3><a shape="rect" name="BookComponentAppendix-SplittingStrategy"></a>Splitting Strategy</h3>
 <p>In the current version of Hadoop opening a file in append mode is disabled since it's not very reliable. So, for the moment, it's only possible to create new files. The Camel HDFS endpoint tries to solve this problem in this way:</p>
-<ul><li>If the split strategy option has been defined, the actual file name will become a directory name and a <file name>/seg0 will be initially created.</li><li>Every time a splitting condition is met a new file is created with name <original file name>/segN where N is 1, 2, 3, etc.<br clear="none">
+<ul><li>If the split strategy option has been defined, the hdfs path will be used as a directory and files will be created using the configured <a shape="rect" href="uuidgenerator.html" title="UuidGenerator">UuidGenerator</a></li><li>Every time a splitting condition is met, a new file is created.<br clear="none">
 The splitStrategy option is defined as a string with the following syntax:<br clear="none">
 splitStrategy=<ST>:<value>,<ST>:<value>,*</li></ul>
@@ -6740,7 +6740,7 @@ splitStrategy=<ST>:<value>,&
 hdfs://localhost/tmp/simple-file?splitStrategy=IDLE:1000,BYTES:5
 ]]></script> </div></div>
-<p>it means: a new file is created either when it has been idle for more than 1 second or if more than 5 bytes have been written. So, running <tt>hadoop fs -ls /tmp/simple-file</tt> you'll find the following files seg0, seg1, seg2, etc</p>
+<p>it means: a new file is created either when it has been idle for more than 1 second or if more than 5 bytes have been written. So, running <tt>hadoop fs -ls /tmp/simple-file</tt> you'll find multiple files created named using the <a shape="rect" href="uuidgenerator.html" title="UuidGenerator">UuidGenerator</a>, etc</p>
 <h3><a shape="rect" name="BookComponentAppendix-MessageHeaders"></a>Message Headers</h3>

Modified: websites/production/camel/content/book-in-one-page.html
==============================================================================
--- websites/production/camel/content/book-in-one-page.html (original)
+++ websites/production/camel/content/book-in-one-page.html Fri Oct 18 23:20:45 2013
@@ -28191,7 +28191,7 @@ hdfs://hostname[:port][/path][?options]
 <p>You can append query options to the URI in the following format, <tt>?option=value&option=value&...</tt><br clear="none">
 The path is treated in the following way:</p>
-<ol><li>as a consumer, if it's a file, it just reads the file, otherwise if it represents a directory it scans all the file under the path satisfying the configured pattern. All the files under that directory must be of the same type.</li><li>as a producer, if at least one split strategy is defined, the path is considered a directory and under that directory the producer creates a different file per split named seg0, seg1, seg2, etc.</li></ol>
+<ol><li>as a consumer, if it's a file, it just reads the file, otherwise if it represents a directory it scans all the file under the path satisfying the configured pattern. All the files under that directory must be of the same type.</li><li>as a producer, if at least one split strategy is defined, the path is considered a directory and under that directory the producer creates a different file per split named using the configured <a shape="rect" href="uuidgenerator.html" title="UuidGenerator">UuidGenerator</a>.</li></ol>
 <h3><a shape="rect" name="BookInOnePage-Options"></a>Options</h3>
@@ -28208,7 +28208,7 @@ The path is treated in the following way
 <h3><a shape="rect" name="BookInOnePage-SplittingStrategy"></a>Splitting Strategy</h3>
 <p>In the current version of Hadoop opening a file in append mode is disabled since it's not very reliable. So, for the moment, it's only possible to create new files. The Camel HDFS endpoint tries to solve this problem in this way:</p>
-<ul><li>If the split strategy option has been defined, the actual file name will become a directory name and a <file name>/seg0 will be initially created.</li><li>Every time a splitting condition is met a new file is created with name <original file name>/segN where N is 1, 2, 3, etc.<br clear="none">
+<ul><li>If the split strategy option has been defined, the hdfs path will be used as a directory and files will be created using the configured <a shape="rect" href="uuidgenerator.html" title="UuidGenerator">UuidGenerator</a></li><li>Every time a splitting condition is met, a new file is created.<br clear="none">
 The splitStrategy option is defined as a string with the following syntax:<br clear="none">
 splitStrategy=<ST>:<value>,<ST>:<value>,*</li></ul>
@@ -28223,7 +28223,7 @@ splitStrategy=<ST>:<value>,&
 hdfs://localhost/tmp/simple-file?splitStrategy=IDLE:1000,BYTES:5
 ]]></script> </div></div>
-<p>it means: a new file is created either when it has been idle for more than 1 second or if more than 5 bytes have been written. So, running <tt>hadoop fs -ls /tmp/simple-file</tt> you'll find the following files seg0, seg1, seg2, etc</p>
+<p>it means: a new file is created either when it has been idle for more than 1 second or if more than 5 bytes have been written. So, running <tt>hadoop fs -ls /tmp/simple-file</tt> you'll find multiple files created named using the <a shape="rect" href="uuidgenerator.html" title="UuidGenerator">UuidGenerator</a>, etc</p>
 <h3><a shape="rect" name="BookInOnePage-MessageHeaders"></a>Message Headers</h3>

Modified: websites/production/camel/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.

Modified: websites/production/camel/content/camel-2130-release.html
==============================================================================
--- websites/production/camel/content/camel-2130-release.html (original)
+++ websites/production/camel/content/camel-2130-release.html Fri Oct 18 23:20:45 2013
@@ -99,7 +99,7 @@
 <h3><a shape="rect" name="Camel2.13.0Release-FixedIssues"></a>Fixed Issues</h3>
-<ul><li>Fixed an <tt>ArrayIndexOutOfBoundsException</tt> with <a shape="rect" href="message-history.html" title="Message History">Message History</a> when using <a shape="rect" href="seda.html" title="SEDA">SEDA</a></li><li>Fixed <tt>requestTimeout</tt> on <a shape="rect" href="netty.html" title="Netty">Netty</a> not triggering when we have received message.</li><li>Fixed <a shape="rect" href="parameter-binding-annotations.html" title="Parameter Binding Annotations">Parameter Binding Annotations</a> on boolean types to evaluate as <a shape="rect" href="predicate.html" title="Predicate">Predicate</a> instead of <a shape="rect" href="expression.html" title="Expression">Expression</a></li><li>Fixed using <a shape="rect" href="file2.html" title="File2">File</a> consumer with <tt>delete=true&readLock=fileLock</tt> not being able to delete the file on Windows.</li><li>Fixed <a shape="rect" href="throttler.html" title="Throttler">Throttler</a> to honor time slots after period expires ( eg so it works consistently and as expected).</li><li>Fixed getting JMSXUserID property when consuming from <a shape="rect" href="activemq.html" title="ActiveMQ">ActiveMQ</a></li><li>Fixed <a shape="rect" href="intercept.html" title="Intercept">interceptFrom</a> to support property placeholders</li><li>Fixed a race condition in initializing <tt>SSLContext</tt> in <a shape="rect" href="netty.html" title="Netty">Netty</a> and <a shape="rect" href="netty-http.html" title="Netty HTTP">Netty HTTP</a></li><li>Fixed using <a shape="rect" href="recipient-list.html" title="Recipient List">Recipient List</a>, <a shape="rect" href="routing-slip.html" title="Routing Slip">Routing Slip</a> calling another route which is configured with <tt>NoErrorHandler</tt>, and an exception occurred in that route, would be propagated back as not-exhausted, allow the caller route to have its error handler react on the exception.</li><li>Fixed <a shape="rect" href="quartz.html" title="Quartz">Quartz</a> and exception was thrown when scheduling a job, would affect during shutdown, assuming the job was still in progress, and not shutdown the Quartz scheduler.</li><li>Fixed so you can configure <a shape="rect" href="stomp.html" title="Stomp">Stomp</a> endpoints using <a shape="rect" href="uris.html" title="URIs">URIs</a></li><li>Fixed memory leak when using <a shape="rect" href="language.html" title="Language">Language</a> component with <tt>camel-script</tt> languages and having <tt>contentCache=false</tt></li><li>Fixed <a shape="rect" href="error-handler.html" title="Error Handler">Error Handler</a> may log at <tt>WARN</tt> level "Cannot determine current route from Exchange" when using <a shape="rect" href="splitter.html" title="Splitter">Splitter</a></li><li>Fixed <tt>camel-fop</tt> to work in Apache <a shape="rect" href="karaf.html" title="Karaf">Karaf</a> and ServiceMix</li></ul>
+<ul><li>Fixed an <tt>ArrayIndexOutOfBoundsException</tt> with <a shape="rect" href="message-history.html" title="Message History">Message History</a> when using <a shape="rect" href="seda.html" title="SEDA">SEDA</a></li><li>Fixed <tt>requestTimeout</tt> on <a shape="rect" href="netty.html" title="Netty">Netty</a> not triggering when we have received message.</li><li>Fixed <a shape="rect" href="parameter-binding-annotations.html" title="Parameter Binding Annotations">Parameter Binding Annotations</a> on boolean types to evaluate as <a shape="rect" href="predicate.html" title="Predicate">Predicate</a> instead of <a shape="rect" href="expression.html" title="Expression">Expression</a></li><li>Fixed using <a shape="rect" href="file2.html" title="File2">File</a> consumer with <tt>delete=true&readLock=fileLock</tt> not being able to delete the file on Windows.</li><li>Fixed <a shape="rect" href="throttler.html" title="Throttler">Throttler</a> to honor time slots after period expires ( eg so it works consistently and as expected).</li><li>Fixed getting JMSXUserID property when consuming from <a shape="rect" href="activemq.html" title="ActiveMQ">ActiveMQ</a></li><li>Fixed <a shape="rect" href="intercept.html" title="Intercept">interceptFrom</a> to support property placeholders</li><li>Fixed a race condition in initializing <tt>SSLContext</tt> in <a shape="rect" href="netty.html" title="Netty">Netty</a> and <a shape="rect" href="netty-http.html" title="Netty HTTP">Netty HTTP</a></li><li>Fixed using <a shape="rect" href="recipient-list.html" title="Recipient List">Recipient List</a>, <a shape="rect" href="routing-slip.html" title="Routing Slip">Routing Slip</a> calling another route which is configured with <tt>NoErrorHandler</tt>, and an exception occurred in that route, would be propagated back as not-exhausted, allow the caller route to have its error handler react on the exception.</li><li>Fixed <a shape="rect" href="quartz.html" title="Quartz">Quartz</a> and exception was thrown when scheduling a job, would affect during shutdown, assuming the job was still in progress, and not shutdown the Quartz scheduler.</li><li>Fixed so you can configure <a shape="rect" href="stomp.html" title="Stomp">Stomp</a> endpoints using <a shape="rect" href="uris.html" title="URIs">URIs</a></li><li>Fixed memory leak when using <a shape="rect" href="language.html" title="Language">Language</a> component with <tt>camel-script</tt> languages and having <tt>contentCache=false</tt></li><li>Fixed <a shape="rect" href="error-handler.html" title="Error Handler">Error Handler</a> may log at <tt>WARN</tt> level "Cannot determine current route from Exchange" when using <a shape="rect" href="splitter.html" title="Splitter">Splitter</a></li><li>Fixed <tt>camel-fop</tt> to work in Apache <a shape="rect" href="karaf.html" title="Karaf">Karaf</a> and ServiceMix</li><li>Fixed <a shape="rect" href="hdfs.html" title="HDFS">HDFS</a> producer to use the configured <a shape="rect" href="uuidgenerator.html" title="UuidGenerator">UuidGenerator</a> when generating split file names to avoid filename collisions</li></ul>
 <h3><a shape="rect" name="Camel2.13.0Release-NewEnterpriseIntegrationPatterns"></a>New <a shape="rect" href="enterprise-integration-patterns.html" title="Enterprise Integration Patterns">Enterprise Integration Patterns</a></h3>

Modified: websites/production/camel/content/hdfs.html
==============================================================================
--- websites/production/camel/content/hdfs.html (original)
+++ websites/production/camel/content/hdfs.html Fri Oct 18 23:20:45 2013
@@ -112,7 +112,7 @@ hdfs://hostname[:port][/path][?options]
 <p>You can append query options to the URI in the following format, <tt>?option=value&option=value&...</tt><br clear="none">
 The path is treated in the following way:</p>
-<ol><li>as a consumer, if it's a file, it just reads the file, otherwise if it represents a directory it scans all the file under the path satisfying the configured pattern. All the files under that directory must be of the same type.</li><li>as a producer, if at least one split strategy is defined, the path is considered a directory and under that directory the producer creates a different file per split named seg0, seg1, seg2, etc.</li></ol>
+<ol><li>as a consumer, if it's a file, it just reads the file, otherwise if it represents a directory it scans all the file under the path satisfying the configured pattern. All the files under that directory must be of the same type.</li><li>as a producer, if at least one split strategy is defined, the path is considered a directory and under that directory the producer creates a different file per split named using the configured <a shape="rect" href="uuidgenerator.html" title="UuidGenerator">UuidGenerator</a>.</li></ol>
 <h3><a shape="rect" name="HDFS-Options"></a>Options</h3>
@@ -129,7 +129,7 @@ The path is treated in the following way
 <h3><a shape="rect" name="HDFS-SplittingStrategy"></a>Splitting Strategy</h3>
 <p>In the current version of Hadoop opening a file in append mode is disabled since it's not very reliable. So, for the moment, it's only possible to create new files. The Camel HDFS endpoint tries to solve this problem in this way:</p>
-<ul><li>If the split strategy option has been defined, the actual file name will become a directory name and a <file name>/seg0 will be initially created.</li><li>Every time a splitting condition is met a new file is created with name <original file name>/segN where N is 1, 2, 3, etc.<br clear="none">
+<ul><li>If the split strategy option has been defined, the hdfs path will be used as a directory and files will be created using the configured <a shape="rect" href="uuidgenerator.html" title="UuidGenerator">UuidGenerator</a></li><li>Every time a splitting condition is met, a new file is created.<br clear="none">
 The splitStrategy option is defined as a string with the following syntax:<br clear="none">
 splitStrategy=<ST>:<value>,<ST>:<value>,*</li></ul>
@@ -144,7 +144,7 @@ splitStrategy=<ST>:<value>,&
 hdfs://localhost/tmp/simple-file?splitStrategy=IDLE:1000,BYTES:5
 ]]></script> </div></div>
-<p>it means: a new file is created either when it has been idle for more than 1 second or if more than 5 bytes have been written. So, running <tt>hadoop fs -ls /tmp/simple-file</tt> you'll find the following files seg0, seg1, seg2, etc</p>
+<p>it means: a new file is created either when it has been idle for more than 1 second or if more than 5 bytes have been written. So, running <tt>hadoop fs -ls /tmp/simple-file</tt> you'll find multiple files created named using the <a shape="rect" href="uuidgenerator.html" title="UuidGenerator">UuidGenerator</a>, etc</p>
 <h3><a shape="rect" name="HDFS-MessageHeaders"></a>Message Headers</h3>
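
For reference, the pages updated above all describe the same split-strategy behaviour: with a split strategy defined, the HDFS producer treats the path as a directory and starts a new file (now named by the configured UuidGenerator) each time a splitting condition is met. A minimal route sketch using the example URI from those pages is shown below. This is not part of the commit; the class name and the direct:toHdfs endpoint are illustrative only, and it assumes camel-hdfs is on the classpath and an HDFS namenode is reachable at localhost.

    import org.apache.camel.builder.RouteBuilder;

    // Sketch only: writes incoming message bodies to HDFS under /tmp/simple-file.
    // With splitStrategy=IDLE:1000,BYTES:5 the producer starts a new file (named by
    // the configured UuidGenerator) once it has been idle for more than 1 second or
    // after more than 5 bytes have been written.
    public class HdfsSplitRoute extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            from("direct:toHdfs")
                .to("hdfs://localhost/tmp/simple-file?splitStrategy=IDLE:1000,BYTES:5");
        }
    }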