Repository: camel
Updated Branches:
  refs/heads/master 6fdf1cdbf -> aac97e1c0


Added camel-hdfs docs to gitbook


Project: http://git-wip-us.apache.org/repos/asf/camel/repo
Commit: http://git-wip-us.apache.org/repos/asf/camel/commit/aac97e1c
Tree: http://git-wip-us.apache.org/repos/asf/camel/tree/aac97e1c
Diff: http://git-wip-us.apache.org/repos/asf/camel/diff/aac97e1c

Branch: refs/heads/master
Commit: aac97e1c0f84bae405ae2390508866967b34b12c
Parents: 6fdf1cd
Author: Andrea Cosentino <anco...@gmail.com>
Authored: Mon Apr 4 17:51:31 2016 +0200
Committer: Andrea Cosentino <anco...@gmail.com>
Committed: Mon Apr 4 17:51:31 2016 +0200

----------------------------------------------------------------------
 components/camel-hdfs/src/main/docs/hdfs.adoc | 238 +++++++++++++++++++++
 docs/user-manual/en/SUMMARY.md                |   1 +
 2 files changed, 239 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/camel/blob/aac97e1c/components/camel-hdfs/src/main/docs/hdfs.adoc
----------------------------------------------------------------------
diff --git a/components/camel-hdfs/src/main/docs/hdfs.adoc b/components/camel-hdfs/src/main/docs/hdfs.adoc
new file mode 100644
index 0000000..c84082a
--- /dev/null
+++ b/components/camel-hdfs/src/main/docs/hdfs.adoc
@@ -0,0 +1,238 @@
+[[HDFS-HDFSComponent]]
+HDFS Component
+~~~~~~~~~~~~~~
+
+*Available as of Camel 2.8*
+
+The *hdfs* component enables you to read and write messages from/to an
+HDFS file system. HDFS is the distributed file system at the heart of
+http://hadoop.apache.org[Hadoop].
+
+Maven users will need to add the following dependency to their `pom.xml`
+for this component:
+
+[source,xml]
+------------------------------------------------------------
+<dependency>
+    <groupId>org.apache.camel</groupId>
+    <artifactId>camel-hdfs</artifactId>
+    <version>x.x.x</version>
+    <!-- use the same version as your Camel core version -->
+</dependency>
+------------------------------------------------------------
+
+[[HDFS-URIformat]]
+URI format
+^^^^^^^^^^
+
+[source,java]
+---------------------------------------
+hdfs://hostname[:port][/path][?options]
+---------------------------------------
+
+You can append query options to the URI in the following format,
+`?option=value&option=value&...` +
+ The path is treated in the following way:
+
+1.  as a consumer, if it's a file, it just reads the file; otherwise, if
+it represents a directory, it scans all the files under the path that
+satisfy the configured pattern. All the files under that directory must
+be of the same type.
+2.  as a producer, if at least one split strategy is defined, the path
+is considered a directory, and under that directory the producer creates
+a different file per split, named using the configured
+link:uuidgenerator.html[UuidGenerator]. Both modes are sketched below.
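+
+For illustration, a minimal sketch of both modes in the Java DSL (the
+host, paths and pattern below are made-up examples):
+
+[source,java]
+----------------------------------------------------------------
+// Consumer: scan a directory for files matching the configured pattern
+from("hdfs://localhost/tmp/input?pattern=*.txt")
+    .to("log:hdfs-input");
+
+// Producer: with a split strategy defined, the path is treated as a
+// directory and one file per split is created, named by the UuidGenerator
+from("direct:write")
+    .to("hdfs://localhost/tmp/output?splitStrategy=MESSAGES:100");
+----------------------------------------------------------------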
+
+*Note*
+
+When consuming from hdfs in normal mode, a file is split into chunks,
+producing a message per chunk. You can configure the size of the chunks
+using the chunkSize option. If you want to read from hdfs and write to a
+regular file using the file component, then you can use the
+fileExist=Append option to append each of the chunks together.
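+
+A hedged sketch of that combination (the paths and file name below are
+illustrative only):
+
+[source,java]
+----------------------------------------------------------------
+// Each chunk of the HDFS file arrives as a separate message; appending
+// all chunks to the same target file reassembles the original content
+from("hdfs://localhost/tmp/data/input.txt?chunkSize=4096")
+    .setHeader(Exchange.FILE_NAME, constant("output.txt"))
+    .to("file:/tmp/out?fileExist=Append");
+----------------------------------------------------------------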
+
+ 
+
+[[HDFS-Options]]
+Options
+^^^^^^^
+
+
+// component options: START
+The HDFS component supports 1 option, which is listed below.
+
+
+
+[width="100%",cols="2s,1m,8",options="header"]
+|=======================================================================
+| Name | Java Type | Description
+| jAASConfiguration | Configuration | To use the given configuration for security with JAAS.
+|=======================================================================
+// component options: END
+
+
+
+// endpoint options: START
+The HDFS component supports 41 endpoint options which are listed below:
+
+[width="100%",cols="2s,1,1m,1m,5",options="header"]
+|=======================================================================
+| Name | Group | Default | Java Type | Description
+| hostName | common |  | String | *Required* HDFS host to use
+| port | common | 8020 | int | HDFS port to use
+| path | common |  | String | *Required* The directory path to use
+| blockSize | common | 67108864 | long | The size of the HDFS blocks
+| bufferSize | common | 4096 | int | The buffer size used by HDFS
+| checkIdleInterval | common | 500 | int | How often (time in millis) to run the idle checker background task. This option is only in use if the splitter strategy is IDLE.
+| chunkSize | common | 4096 | int | When reading a normal file this is split into chunks, producing a message per chunk.
+| compressionCodec | common | DEFAULT | HdfsCompressionCodec | The compression codec to use
+| compressionType | common | NONE | CompressionType | The compression type to use (not in use by default)
+| connectOnStartup | common | true | boolean | Whether to connect to the HDFS file system on starting the producer/consumer. If false then the connection is created on-demand. Notice that HDFS may take up to 15 minutes to establish a connection as it has hardcoded 45 x 20 sec redelivery. Setting this option to false allows your application to start up and not block for up to 15 minutes.
+| fileSystemType | common | HDFS | HdfsFileSystemType | Set to LOCAL to not use HDFS but local java.io.File instead.
+| fileType | common | NORMAL_FILE | HdfsFileType | The file type to use. For more details see Hadoop HDFS documentation about the various file types.
+| keyType | common | NULL | WritableType | The type for the key in case of sequence or map files.
+| openedSuffix | common | opened | String | When a file is opened for reading/writing the file is renamed with this suffix to avoid reading it during the writing phase.
+| owner | common |  | String | The file owner must match this owner for the consumer to pick up the file. Otherwise the file is skipped.
+| readSuffix | common | read | String | Once the file has been read it is renamed with this suffix to avoid reading it again.
+| replication | common | 3 | short | The HDFS replication factor
+| splitStrategy | common |  | String | In the current version of Hadoop opening a file in append mode is disabled since it's not very reliable. So for the moment it's only possible to create new files. The Camel HDFS endpoint tries to solve this problem in this way: If the split strategy option has been defined, the hdfs path will be used as a directory and files will be created using the configured UuidGenerator. Every time a splitting condition is met, a new file is created. The splitStrategy option is defined as a string with the following syntax: splitStrategy=ST:value,ST:value,... where ST can be: BYTES (a new file is created, and the old one closed, when the number of written bytes is more than value), MESSAGES (a new file is created, and the old one closed, when the number of written messages is more than value), or IDLE (a new file is created, and the old one closed, when no writing happened in the last value milliseconds).
+| valueType | common | BYTES | WritableType | The type for the value in case of sequence or map files
+| bridgeErrorHandler | consumer | false | boolean | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN/ERROR level and ignored.
+| delay | consumer | 1000 | long | The interval (milliseconds) between the directory scans.
+| initialDelay | consumer |  | long | For the consumer, how long to wait (milliseconds) before starting to scan the directory.
+| pattern | consumer | * | String | The pattern used for scanning the directory
+| sendEmptyMessageWhenIdle | consumer | false | boolean | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead.
+| exceptionHandler | consumer (advanced) |  | ExceptionHandler | To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN/ERROR level and ignored.
+| pollStrategy | consumer (advanced) |  | PollingConsumerPollStrategy | A pluggable org.apache.camel.PollingConsumerPollStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel.
+| append | producer | false | boolean | Append to existing file. Notice that not all HDFS file systems support the append option.
+| overwrite | producer | true | boolean | Whether to overwrite existing files with the same name
+| exchangePattern | advanced | InOnly | ExchangePattern | Sets the default exchange pattern when creating an exchange
+| synchronous | advanced | false | boolean | Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported).
+| backoffErrorThreshold | scheduler |  | int | The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in.
+| backoffIdleThreshold | scheduler |  | int | The number of subsequent idle polls that should happen before the backoffMultiplier should kick in.
+| backoffMultiplier | scheduler |  | int | To let the scheduled polling consumer back off if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.
+| greedy | scheduler | false | boolean | If greedy is enabled then the ScheduledPollConsumer will run immediately again if the previous run polled 1 or more messages.
+| runLoggingLevel | scheduler | TRACE | LoggingLevel | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.
+| scheduledExecutorService | scheduler |  | ScheduledExecutorService | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.
+| scheduler | scheduler | none | ScheduledPollConsumerScheduler | To use a cron scheduler from either the camel-spring or camel-quartz2 component
+| schedulerProperties | scheduler |  | Map | To configure additional properties when using a custom scheduler or any of the Quartz2/Spring based schedulers.
+| startScheduler | scheduler | true | boolean | Whether the scheduler should be auto started.
+| timeUnit | scheduler | MILLISECONDS | TimeUnit | Time unit for initialDelay and delay options.
+| useFixedDelay | scheduler | true | boolean | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.
+|=======================================================================
+// endpoint options: END
+
+
+
+[[HDFS-KeyTypeandValueType]]
+KeyType and ValueType
++++++++++++++++++++++
+
+* NULL means that the key or the value is absent
+* BYTE for writing a byte, the java Byte class is mapped into a BYTE
+* BYTES for writing a sequence of bytes. It maps the java ByteBuffer
+class
+* INT for writing java integer
+* FLOAT for writing java float
+* LONG for writing java long
+* DOUBLE for writing java double
+* TEXT for writing java strings
+
+BYTES is also used with everything else; for example, in Camel a file is
+sent around as an InputStream, and in this case it is written to a
+sequence file or a map file as a sequence of bytes.
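+
+For example, a minimal sketch of writing TEXT key/value pairs into a
+sequence file (the host and path below are made up):
+
+[source,java]
+----------------------------------------------------------------
+// The keyType/valueType options must match the WritableTypes listed
+// above; here both key and value are written as TEXT
+from("direct:pairs")
+    .to("hdfs://localhost/tmp/seq-file?fileType=SEQUENCE_FILE&keyType=TEXT&valueType=TEXT");
+----------------------------------------------------------------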
+
+[[HDFS-SplittingStrategy]]
+Splitting Strategy
+^^^^^^^^^^^^^^^^^^
+
+In the current version of Hadoop opening a file in append mode is
+disabled since it's not very reliable. So, for the moment, it's only
+possible to create new files. The Camel HDFS endpoint tries to solve
+this problem in this way:
+
+* If the split strategy option has been defined, the hdfs path will be
+used as a directory and files will be created using the configured
+link:uuidgenerator.html[UuidGenerator]
+* Every time a splitting condition is met, a new file is created. +
+ The splitStrategy option is defined as a string with the following
+syntax: +
+ splitStrategy=<ST>:<value>,<ST>:<value>,*
+
+where <ST> can be:
+
+* BYTES a new file is created, and the old is closed when the number of
+written bytes is more than <value>
+* MESSAGES a new file is created, and the old is closed when the number
+of written messages is more than <value>
+* IDLE a new file is created, and the old is closed when no writing
+happened in the last <value> milliseconds
+
+*Note*
+
+This strategy currently requires either setting an IDLE value or setting
+the HdfsConstants.HDFS_CLOSE header to false to use the BYTES/MESSAGES
+configuration; otherwise, the file will be closed with each message.
+
+For example:
+
+[source,java]
+----------------------------------------------------------------
+hdfs://localhost/tmp/simple-file?splitStrategy=IDLE:1000,BYTES:5
+----------------------------------------------------------------
+
+It means a new file is created either when it has been idle for more
+than 1 second, or when more than 5 bytes have been written. So, running
+`hadoop fs -ls /tmp/simple-file` you'll see that multiple files have
+been created.
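+
+Used from a route, a minimal sketch (the endpoint details are made up):
+
+[source,java]
+----------------------------------------------------------------
+// Each message is written to the current split file; a new file is
+// started once the endpoint has been idle for 1 second or more than
+// 5 bytes have been written
+from("direct:in")
+    .to("hdfs://localhost/tmp/simple-file?splitStrategy=IDLE:1000,BYTES:5");
+----------------------------------------------------------------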
+
+[[HDFS-MessageHeaders]]
+Message Headers
+^^^^^^^^^^^^^^^
+
+The following headers are supported by this component:
+
+[[HDFS-Produceronly]]
+Producer only
++++++++++++++
+
+[width="100%",cols="10%,90%",options="header",]
+|=======================================================================
+|Header |Description
+
+|`CamelFileName` |*Camel 2.13:* Specifies the name of the file to write
+(relative to the endpoint path). The name can be a `String` or an
+link:expression.html[Expression] object. Only relevant when not using a
+split strategy.
+|=======================================================================
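+
+For example, a minimal sketch of setting the file name from the route
+(the names below are illustrative):
+
+[source,java]
+----------------------------------------------------------------
+// Write each message to a file named by the CamelFileName header,
+// relative to the endpoint path (no split strategy configured)
+from("direct:in")
+    .setHeader("CamelFileName", simple("report-${date:now:yyyyMMdd}.txt"))
+    .to("hdfs://localhost/tmp/reports");
+----------------------------------------------------------------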
+
+[[HDFS-Controllingtoclosefilestream]]
+Controlling to close file stream
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+*Available as of Camel 2.10.4*
+
+When using the link:hdfs.html[HDFS] producer *without* a split strategy,
+the file output stream is by default closed after the write. However,
+you may want to keep the stream open and only explicitly close it later.
+For that you can use the header `HdfsConstants.HDFS_CLOSE` (value =
+`"CamelHdfsClose"`). Setting this header to a boolean lets you
+explicitly control whether the stream should be closed or not.
+
+Notice this does not apply if you use a split strategy, as there are
+various strategies that can control when the stream is closed.
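+
+A hedged sketch of keeping the stream open across writes and closing it
+explicitly with a final message (the endpoints below are made up):
+
+[source,java]
+----------------------------------------------------------------
+// Keep the stream open while intermediate messages are written...
+from("direct:write")
+    .setHeader(HdfsConstants.HDFS_CLOSE, constant(false))
+    .to("hdfs://localhost/tmp/stream-file");
+
+// ...then close it explicitly with the last message
+from("direct:closeStream")
+    .setHeader(HdfsConstants.HDFS_CLOSE, constant(true))
+    .to("hdfs://localhost/tmp/stream-file");
+----------------------------------------------------------------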
+
+[[HDFS-UsingthiscomponentinOSGi]]
+Using this component in OSGi
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This component is fully functional in an OSGi environment; however, it
+requires some action from the user. Hadoop uses the thread context
+class loader in order to load resources. Usually, the thread context
+class loader will be the bundle class loader of the bundle that contains
+the routes, so the default configuration files need to be visible from
+the bundle class loader. A typical way to deal with this is to keep a
+copy of core-default.xml in your bundle root. That file can be found in
+the hadoop-common.jar.

http://git-wip-us.apache.org/repos/asf/camel/blob/aac97e1c/docs/user-manual/en/SUMMARY.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/SUMMARY.md b/docs/user-manual/en/SUMMARY.md
index ef23a7b..b1037e2 100644
--- a/docs/user-manual/en/SUMMARY.md
+++ b/docs/user-manual/en/SUMMARY.md
@@ -148,6 +148,7 @@
     * [Hawtdb](hawtdb.adoc)
     * [Hazelcast](hazelcast.adoc)
     * [Hbase](hbase.adoc)
+    * [HDFS](hdfs.adoc)
     * [Ironmq](ironmq.adoc)
     * [JMS](jms.adoc)
     * [JMX](jmx.adoc)
