Author: buildbot
Date: Fri Dec 11 18:19:39 2015
New Revision: 975295
Log: Production update by buildbot for camel
Modified:
    websites/production/camel/content/apache-spark.html
    websites/production/camel/content/cache/main.pageCache

Modified: websites/production/camel/content/apache-spark.html
==============================================================================
--- websites/production/camel/content/apache-spark.html (original)
+++ websites/production/camel/content/apache-spark.html Fri Dec 11 18:19:39 2015
@@ -84,7 +84,7 @@
 <tbody> <tr> <td valign="top" width="100%">
-<div class="wiki-content maincontent"><h2 id="ApacheSpark-ApacheSparkcomponent">Apache Spark component</h2><div class="confluence-information-macro confluence-information-macro-information"><span class="aui-icon aui-icon-small aui-iconfont-info confluence-information-macro-icon"></span><div class="confluence-information-macro-body"><p>The Apache Spark component is available starting from Camel <strong>2.17</strong>.</p></div></div><p> </p><p><span style="line-height: 1.5625;font-size: 16.0px;">This documentation page covers the <a shape="rect" class="external-link" href="http://spark.apache.org/">Apache Spark</a> component for Apache Camel. The main purpose of the Spark integration with Camel is to provide a bridge between Camel connectors and Spark tasks.
In particular, the Camel connector provides a way to route messages from various transports, dynamically choose a task to execute, use the incoming message as input data for that task, and finally deliver the results of the execution back to the Camel pipeline.</span></p><h3 id="ApacheSpark-Supportedarchitecturalstyles"><span>Supported architectural styles</span></h3><p><span style="line-height: 1.5625;font-size: 16.0px;">The Spark component can be used as a driver application deployed into an application server (or executed as a fat jar).</span></p><p><span style="line-height: 1.5625;font-size: 16.0px;"><span class="confluence-embedded-file-wrapper confluence-embedded-manual-size"><img class="confluence-embedded-image" height="250" src="apache-spark.data/camel_spark_driver.png" data-image-src="/confluence/download/attachments/61331559/camel_spark_driver.png?version=2&modificationDate=1449478362000&api=v2" data-unresolved-comment-count="0" data-linked-resource-id="61331563" data-linked-resource-version="2" data-linked-resource-type="attachment" data-linked-resource-default-alias="camel_spark_driver.png" data-base-url="https://cwiki.apache.org/confluence" data-linked-resource-content-type="image/png" data-linked-resource-container-id="61331559" data-linked-resource-container-version="11"></span><br clear="none"></span></p><p><span style="line-height: 1.5625;font-size: 16.0px;">The Spark component can also be submitted as a job directly into the Spark cluster.</span></p><p><span style="line-height: 1.5625;font-size: 16.0px;"><span class="confluence-embedded-file-wrapper confluence-embedded-manual-size"><img class="confluence-embedded-image" height="250" src="apache-spark.data/camel_spark_cluster.png" data-image-src="/confluence/download/attachments/61331559/camel_spark_cluster.png?version=1&modificationDate=1449478393000&api=v2" data-unresolved-comment-count="0" data-linked-resource-id="61331565" data-linked-resource-version="1" data-linked-resource-type="attachment"
data-linked-resource-default-alias="camel_spark_cluster.png" data-base-url="https://cwiki.apache.org/confluence" data-linked-resource-content-type="image/png" data-linked-resource-container-id="61331559" data-linked-resource-container-version="11"></span><br clear="none"></span></p><p><span style="line-height: 1.5625;font-size: 16.0px;">While the Spark component is primarily designed to work as a <em>long-running job</em> serving as a bridge between the Spark cluster and other endpoints, you can also use it as a <em>fire-once</em> short job.</span></p><h3 id="ApacheSpark-RunningSparkinOSGiservers"><span>Running Spark in OSGi servers</span></h3><p>Currently the Spark component doesn't support execution in an OSGi container. Spark has been designed to be executed as a fat jar, usually submitted as a job to a cluster. For those reasons running Spark in an OSGi server is at least challenging and is not supported by Camel either.</p><h3 id="ApacheSpark-URIformat">URI format</h3><p>Currently the Spark component supports only producers - it is intended to invoke a Spark job and return results.
You can call an RDD, data frame, or Hive SQL job.</p><div><p> </p><div class="code panel pdl" style="border-width: 1px;"><div class="codeHeader panelHeader pdl" style="border-bottom-width: 1px;"><b>Spark URI format</b></div><div class="codeContent panelContent pdl">
+<div class="wiki-content maincontent"><h2 id="ApacheSpark-ApacheSparkcomponent">Apache Spark component</h2><div class="confluence-information-macro confluence-information-macro-information"><span class="aui-icon aui-icon-small aui-iconfont-info confluence-information-macro-icon"></span><div class="confluence-information-macro-body"><p>The Apache Spark component is available starting from Camel <strong>2.17</strong>.</p></div></div><p> </p><p><span style="line-height: 1.5625;font-size: 16.0px;">This documentation page covers the <a shape="rect" class="external-link" href="http://spark.apache.org/">Apache Spark</a> component for Apache Camel. The main purpose of the Spark integration with Camel is to provide a bridge between Camel connectors and Spark tasks.
In particular, the Camel connector provides a way to route messages from various transports, dynamically choose a task to execute, use the incoming message as input data for that task, and finally deliver the results of the execution back to the Camel pipeline.</span></p><h3 id="ApacheSpark-Supportedarchitecturalstyles"><span>Supported architectural styles</span></h3><p><span style="line-height: 1.5625;font-size: 16.0px;">The Spark component can be used as a driver application deployed into an application server (or executed as a fat jar).</span></p><p><span style="line-height: 1.5625;font-size: 16.0px;"><span class="confluence-embedded-file-wrapper confluence-embedded-manual-size"><img class="confluence-embedded-image" height="250" src="apache-spark.data/camel_spark_driver.png" data-image-src="/confluence/download/attachments/61331559/camel_spark_driver.png?version=2&modificationDate=1449478362000&api=v2" data-unresolved-comment-count="0" data-linked-resource-id="61331563" data-linked-resource-version="2" data-linked-resource-type="attachment" data-linked-resource-default-alias="camel_spark_driver.png" data-base-url="https://cwiki.apache.org/confluence" data-linked-resource-content-type="image/png" data-linked-resource-container-id="61331559" data-linked-resource-container-version="13"></span><br clear="none"></span></p><p><span style="line-height: 1.5625;font-size: 16.0px;">The Spark component can also be submitted as a job directly into the Spark cluster.</span></p><p><span style="line-height: 1.5625;font-size: 16.0px;"><span class="confluence-embedded-file-wrapper confluence-embedded-manual-size"><img class="confluence-embedded-image" height="250" src="apache-spark.data/camel_spark_cluster.png" data-image-src="/confluence/download/attachments/61331559/camel_spark_cluster.png?version=1&modificationDate=1449478393000&api=v2" data-unresolved-comment-count="0" data-linked-resource-id="61331565" data-linked-resource-version="1" data-linked-resource-type="attachment"
data-linked-resource-default-alias="camel_spark_cluster.png" data-base-url="https://cwiki.apache.org/confluence" data-linked-resource-content-type="image/png" data-linked-resource-container-id="61331559" data-linked-resource-container-version="13"></span><br clear="none"></span></p><p><span style="line-height: 1.5625;font-size: 16.0px;">While the Spark component is primarily designed to work as a <em>long-running job</em> serving as a bridge between the Spark cluster and other endpoints, you can also use it as a <em>fire-once</em> short job.</span></p><h3 id="ApacheSpark-RunningSparkinOSGiservers"><span>Running Spark in OSGi servers</span></h3><p>Currently the Spark component doesn't support execution in an OSGi container. Spark has been designed to be executed as a fat jar, usually submitted as a job to a cluster. For those reasons running Spark in an OSGi server is at least challenging and is not supported by Camel either.</p><h3 id="ApacheSpark-URIformat">URI format</h3><p>Currently the Spark component supports only producers - it is intended to invoke a Spark job and return results.
You can call an RDD, data frame, or Hive SQL job.</p><div><p> </p><div class="code panel pdl" style="border-width: 1px;"><div class="codeHeader panelHeader pdl" style="border-bottom-width: 1px;"><b>Spark URI format</b></div><div class="codeContent panelContent pdl">
<script class="brush: java; gutter: false; theme: Default" type="syntaxhighlighter"><![CDATA[spark:{rdd|dataframe|hive}]]></script>
</div></div><p> </p></div><h3 id="ApacheSpark-RDDjobs">RDD jobs</h3><p> </p><div>To invoke an RDD job, use the following URI:</div><div class="code panel pdl" style="border-width: 1px;"><div class="codeHeader panelHeader pdl" style="border-bottom-width: 1px;"><b>Spark RDD producer</b></div><div class="codeContent panelContent pdl">
<script class="brush: java; gutter: false; theme: Default" type="syntaxhighlighter"><![CDATA[spark:rdd?rdd=#testFileRdd&rddCallback=#transformation]]></script>
@@ -131,7 +131,7 @@ RddCallback<Long> rddCallback(Came
 }; }; }]]></script>
-</div></div><h4 id="ApacheSpark-AnnotatedRDDcallbacks">Annotated RDD callbacks</h4><p>Probably the easiest way to work with RDD callbacks is to provide a class with a method marked with the <code>@RddCallback</code> annotation:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeHeader panelHeader pdl" style="border-bottom-width: 1px;"><b>Spark RDD definition</b></div><div class="codeContent panelContent pdl">
+</div></div><h4 id="ApacheSpark-AnnotatedRDDcallbacks">Annotated RDD callbacks</h4><p>Probably the easiest way to work with RDD callbacks is to provide a class with a method marked with the <code>@RddCallback</code> annotation:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeHeader panelHeader pdl" style="border-bottom-width: 1px;"><b>Annotated RDD callback definition</b></div><div class="codeContent panelContent pdl">
<script class="brush: java; gutter: false; theme: Default" type="syntaxhighlighter"><![CDATA[import static
org.apache.camel.component.spark.annotations.AnnotatedRddCallback.annotatedRddCallback;

@Bean
@@ -151,7 +151,33 @@ public class MyTransformation {
 }

}]]></script>
-</div></div><p><br clear="none"></p><h3 id="ApacheSpark-SeeAlso">See Also</h3>
+</div></div><p>If you pass a CamelContext to the annotated RDD callback factory method, the created callback will be able to convert incoming payloads to match the parameters of the annotated method:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeHeader panelHeader pdl" style="border-bottom-width: 1px;"><b>Body conversions for annotated RDD callbacks</b></div><div class="codeContent panelContent pdl">
+<script class="brush: java; gutter: false; theme: Default" type="syntaxhighlighter"><![CDATA[import static org.apache.camel.component.spark.annotations.AnnotatedRddCallback.annotatedRddCallback;
+
+@Bean
+RddCallback<Long> rddCallback(CamelContext camelContext) {
+    return annotatedRddCallback(new MyTransformation(), camelContext);
+}
+
+...
+
+import org.apache.camel.component.spark.annotations.RddCallback;
+
+public class MyTransformation {
+
+    @RddCallback
+    long countLines(JavaRDD<String> textFile, int first, int second) {
+        return textFile.count() * first * second;
+    }
+
+}
+
+...
+
+// Convert the String "10" to an integer
+long result = producerTemplate.requestBody("spark:rdd?rdd=#rdd&rddCallback=#rddCallback", Arrays.asList(10, "10"), long.class);]]></script>
+</div></div><p></p><h3 id="ApacheSpark-SeeAlso">See Also</h3>
<ul><li><a shape="rect" href="configuring-camel.html">Configuring Camel</a></li><li><a shape="rect" href="component.html">Component</a></li><li><a shape="rect" href="endpoint.html">Endpoint</a></li><li><a shape="rect" href="getting-started.html">Getting Started</a></li></ul></div> </td> <td valign="top">

Modified: websites/production/camel/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.
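The body-conversion behaviour the page describes can be illustrated in plain Java: the first element of the list body feeds the RDD argument of the annotated method, and the remaining elements feed the extra parameters, with Camel's type converters coercing values such as the String "10" to int. This is a minimal sketch of that parameter-binding pattern only; the class and method names are illustrative, and the real dispatch is performed reflectively by the Camel Spark component via `@RddCallback`, not by this code.

```java
import java.util.Arrays;
import java.util.List;

public class CallbackSketch {

    // Stand-in for the annotated callback: multiplies the line count by the
    // two extra parameters. A List<String> stands in for JavaRDD<String>,
    // so size() plays the role of textFile.count().
    static long countLines(List<String> textFile, int first, int second) {
        return (long) textFile.size() * first * second;
    }

    public static void main(String[] args) {
        List<String> textFile = Arrays.asList("line one", "line two", "line three");

        // The body sent to the endpoint was Arrays.asList(10, "10"); the
        // second element arrives as a String and is converted to an int
        // (by the CamelContext's type converters in the real component)
        // before the callback method is invoked.
        int first = 10;
        int second = Integer.parseInt("10");

        long result = countLines(textFile, first, second);
        System.out.println(result); // prints 300
    }
}
```

The point of passing the CamelContext to `annotatedRddCallback` is exactly this conversion step: without it, the String "10" could not be bound to the `int second` parameter.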