This is an automated email from the ASF dual-hosted git repository.

nferraro pushed a commit to branch antora
in repository https://gitbox.apache.org/repos/asf/camel-k.git


The following commit(s) were added to refs/heads/antora by this push:
     new eb1f150  Completing manual
eb1f150 is described below

commit eb1f15023d5a68cb24fd38714f8b17f5df3c0a6f
Author: nferraro <ni.ferr...@gmail.com>
AuthorDate: Mon Dec 17 14:29:30 2018 +0100

    Completing manual
---
 README.adoc                                        | 201 ++------------
 contributing.adoc                                  |   7 +-
 docs/modules/ROOT/nav.adoc                         |  19 +-
 .../ROOT/pages/configuration/components.adoc       |  20 ++
 .../ROOT/pages/configuration/configmap-secret.adoc |  79 ++++++
 .../ROOT/pages/configuration/configuration.adoc    |  23 ++
 .../ROOT/pages/configuration/dependencies.adoc     |  24 ++
 docs/modules/ROOT/pages/configuration/index.adoc   | 161 -----------
 docs/modules/ROOT/pages/configuration/logging.adoc |  15 ++
 docs/modules/ROOT/pages/developers.adoc            | 186 -------------
 docs/modules/ROOT/pages/index.adoc                 |   4 +-
 .../installation/{index.adoc => installation.adoc} |   2 +-
 docs/modules/ROOT/pages/installation/minikube.adoc |   2 +-
 .../modules/ROOT/pages/installation/minishift.adoc |   2 +-
 .../modules/ROOT/pages/installation/openshift.adoc |   2 +-
 docs/modules/ROOT/pages/languages.adoc             | 296 ---------------------
 docs/modules/ROOT/pages/languages/groovy.adoc      | 112 ++++++++
 docs/modules/ROOT/pages/languages/java.adoc        |  25 ++
 docs/modules/ROOT/pages/languages/javascript.adoc  |  33 +++
 docs/modules/ROOT/pages/languages/kotlin.adoc      |  99 +++++++
 docs/modules/ROOT/pages/languages/languages.adoc   |  21 ++
 docs/modules/ROOT/pages/languages/xml.adoc         |  23 ++
 docs/modules/ROOT/pages/running.adoc               | 176 ------------
 docs/modules/ROOT/pages/running/dev-mode.adoc      |  44 +++
 docs/modules/ROOT/pages/running/running.adoc       |  42 +++
 docs/modules/ROOT/pages/uninstalling.adoc          |   9 +
 26 files changed, 611 insertions(+), 1016 deletions(-)

diff --git a/README.adoc b/README.adoc
index a142ef5..e2a0904 100644
--- a/README.adoc
+++ b/README.adoc
@@ -9,208 +9,41 @@ image:https://badges.gitter.im/apache/camel-k.png["Chat on 
Gitter", link="https:
 
 Apache Camel K is a lightweight integration framework built from Apache Camel 
that runs natively on Kubernetes and is specifically designed for serverless 
and microservice architectures.
 
-[[getting-started]]
-== Getting Started
+Users of Camel K can instantly run integration code written in Camel DSL on 
their preferred cloud (Kubernetes or OpenShift).
 
-Camel K allows to run integrations directly on a Kubernetes or OpenShift 
cluster.
-To use it, you need to be connected to a cloud environment or to a local 
cluster created for development purposes.
+[[how-it-works]]
+== How It Works
 
-If you need help on how to create a local development environment based on 
*Minishift* or *Minikube*, you can follow the 
link:/docs/cluster-setup.adoc[local cluster setup guide].
+Just write a _helloworld.groovy_ integration file with the following content:
 
-[[installation]]
-=== Installation
-
-Make sure you apply specific configuration settings for your cluster before 
installing Camel K. Customized instructions are needed for
-the following cluster types:
-
-- link:/docs/cluster-setup.adoc[Minishift or Minikube]
-- link:/docs/gke-setup.adoc[Google Kubernetes Engine (GKE)]
-
-Other cluster types (such as OpenShift clusters) should not need prior 
configuration.
-
-To start using Camel K you need the **"kamel"** binary, that can be used to 
both configure the cluster and run integrations.
-Look into the https://github.com/apache/camel-k/releases[release page] for 
latest version of the `kamel` tool.
-
-If you want to contribute, you can also **build it from source!** Refer to the 
link:/contributing.adoc[developer's guide]
-for information on how to do it.
-
-Once you have the "kamel" binary, log into your cluster using the standard 
"oc" (OpenShift) or "kubectl" (Kubernetes) client tool and execute the 
following command to install Camel K:
-
-```
-kamel install
-```
-
-This will configure the cluster with the Camel K custom resource definitions 
and install the operator on the current namespace.
-
-IMPORTANT: Custom Resource Definitions (CRD) are cluster-wide objects and you 
need admin rights to install them. Fortunately this
-operation can be done *once per cluster*. So, if the `kamel install` operation 
fails, you'll be asked to repeat it when logged as admin.
-For Minishift, this means executing `oc login -u system:admin` then `kamel 
install --cluster-setup` only for first-time installation.
-
-=== Running an Integration
-
-After the initial setup, you can run a Camel integration on the cluster by 
executing:
-
-```
-kamel run examples/Sample.java
-```
-
-A "Sample.java" file is included in the link:/examples[/examples] folder of 
this repository. You can change the content of the file and execute the command 
again to see the changes.
-
-==== Configure Integration properties
-
-Properties associated to an integration can be configured either using a 
ConfigMap/Secret or by setting using the "--property" flag, i.e.
-
-```
-kamel run --property my.message=test examples/props.js
-```
-
-==== Configure Integration Logging
-
-camel-k runtime uses log4j2 as logging framework and can be configured through 
integration properties.
-If you need to change the logging level of various loggers, you can do so by 
using the `logging.level` prefix:
-
-```
-logging.level.org.apache.camel = DEBUG
+.helloworld.groovy
+```groovy
+from('timer:tick?period=3s')
+  .setBody().constant('Hello world from Camel K')
+  .to('log:info')
 ```
 
-==== Configure Integration Components
-
-camel-k component can be configured programmatically inside an integration or 
using properties with the following syntax.
+You can then execute the following command:
 
 ```
-camel.component.${scheme}.${property} = ${value}
+kamel run helloworld.groovy
 ```
 
-As example if you want to change the queue size of the seda component, you can 
use the following property:
+The integration code immediately runs in the cloud. **Nothing else** is needed.
 
-```
-camel.component.seda.queueSize = 10
-```
+[[documentation]]
+== Documentation
 
-=== Running Integrations in "Dev" Mode for Fast Feedback
+The full Camel K documentation is available online at https://camel.apache.org/staging/camel-k/latest/.
 
-If you want to iterate quickly on an integration to have fast feedback on the 
code you're writing, you can use by running it in **"dev" mode**:
-
-```
-kamel run examples/Sample.java --dev
-```
-
-The `--dev` flag deploys immediately the integration and shows the integration 
logs in the console. You can then change the code and see
-the **changes automatically applied (instantly)** to the remote integration 
pod.
-
-The console follows automatically all redeploys of the integration.
-
-Here's an example of the output:
-
-```
-[nferraro@localhost camel-k]$ kamel run examples/Sample.java --dev
-integration "sample" created
-integration "sample" in phase Building
-integration "sample" in phase Deploying
-integration "sample" in phase Running
-[1] Monitoring pod sample-776db787c4-zjhfr[1] Starting the Java application 
using /opt/run-java/run-java.sh ...
-[1] exec java 
-javaagent:/opt/prometheus/jmx_prometheus_javaagent.jar=9779:/opt/prometheus/prometheus-config.yml
 -XX:+UseParallelGC -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 
-XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -XX:+ExitOnOutOfMemoryError -cp 
.:/deployments/* org.apache.camel.k.jvm.Application
-[1] [INFO ] 2018-09-20 21:24:35.953 [main] Application - Routes: 
file:/etc/camel/conf/Sample.java
-[1] [INFO ] 2018-09-20 21:24:35.955 [main] Application - Language: java
-[1] [INFO ] 2018-09-20 21:24:35.956 [main] Application - Locations: 
file:/etc/camel/conf/application.properties
-[1] [INFO ] 2018-09-20 21:24:36.506 [main] DefaultCamelContext - Apache Camel 
2.22.1 (CamelContext: camel-1) is starting
-[1] [INFO ] 2018-09-20 21:24:36.578 [main] ManagedManagementStrategy - JMX is 
enabled
-[1] [INFO ] 2018-09-20 21:24:36.680 [main] DefaultTypeConverter - Type 
converters loaded (core: 195, classpath: 0)
-[1] [INFO ] 2018-09-20 21:24:36.777 [main] DefaultCamelContext - StreamCaching 
is not in use. If using streams then its recommended to enable stream caching. 
See more details at http://camel.apache.org/stream-caching.html
-[1] [INFO ] 2018-09-20 21:24:36.817 [main] DefaultCamelContext - Route: route1 
started and consuming from: timer://tick
-[1] [INFO ] 2018-09-20 21:24:36.818 [main] DefaultCamelContext - Total 1 
routes, of which 1 are started
-[1] [INFO ] 2018-09-20 21:24:36.820 [main] DefaultCamelContext - Apache Camel 
2.22.1 (CamelContext: camel-1) started in 0.314 seconds
-
-```
-
-=== Dependencies and Component Resolution
-
-Camel components used in an integration are automatically resolved. For 
example, take the following integration:
-
-```
-from("imap://ad...@myserver.com")
-  .to("seda:output")
-```
-
-Since the integration is using the **"imap:" prefix**, Camel K is able to 
**automatically add the "camel-mail" component** to the list of required 
dependencies.
-This will be transparent to the user, that will just see the integration 
running.
-
-Automatic resolution is also a nice feature in `--dev` mode, because you are 
allowed to add all components you need without exiting the dev loop.
-
-You can also use the `-d` flag to pass additional explicit dependencies to the 
Camel client tool:
-
-```
-kamel run -d mvn:com.google.guava:guava:26.0-jre -d camel-mina2 
Integration.java
-```
-
-=== Not Just Java
-
-Camel K supports multiple languages for writing integrations:
-
-.Languages
-[options="header"]
-|=======================
-| Language                     | Description
-| Java                         | Both integrations in source `.java` files or 
compiled `.class` file can be run.
-| XML                          | Integrations written in plain XML DSL are 
supported (Spring XML or Blueprint not supported).
-| Groovy                       | Groovy `.groovy` files are supported 
(experimental).
-| JavaScript        | JavaScript `.js` files are supported (experimental).
-| Kotlin                       | Kotlin Script `.kts` files are supported 
(experimental).
-|=======================
-
-More information about supported languages is provided in the 
link:docs/languages.adoc[languages guide].
-
-Integrations written in different languages are provided in the 
link:/examples[examples] directory.
-
-An example of integration written in JavaScript is the 
link:/examples/dns.js[/examples/dns.js] integration.
-Here's the content:
-
-```
-// Lookup every second the 'www.google.com' domain name and log the output
-from('timer:dns?period=1s')
-    .routeId('dns')
-    .setHeader('dns.domain')
-        .constant('www.google.com')
-    .to('dns:ip')
-    .to('log:dns');
-```
-
-To run it, you need just to execute:
-
-```
-kamel run examples/dns.js
-```
-
-=== Traits
-
-The details of how the integration is mapped into Kubernetes resources can be 
*customized using traits*.
-More information is provided in the link:docs/traits.adoc[traits section].
-
-=== Monitoring the Status
-
-Camel K integrations follow a lifecycle composed of several steps before 
getting into the `Running` state.
-You can check the status of all integrations by executing the following 
command:
-
-```
-kamel get
-```
+It covers all aspects of Camel K, including installation on different clouds.
 
 [[contributing]]
 == Contributing
 
 We love contributions and we want to make Camel K great!
 
-Contributing is easy, just take a look at our 
link:/contributing.adoc[developer's guide].
-
-[[uninstalling]]
-== Uninstalling
-
-If you really need to, it is possible to completely uninstall Camel K from 
OpenShift or Kubernetes with the following command, using the "oc" or "kubectl" 
tool:
-
-```
-# kubectl on plain Kubernetes
-oc delete 
all,pvc,configmap,rolebindings,clusterrolebindings,secrets,sa,roles,clusterroles,crd
 -l 'app=camel-k'
-```
+Contributing is easy: just take a look at our link:/contributing.adoc[contributing guide].
 
 [[licensing]]
 == Licensing
diff --git a/contributing.adoc b/contributing.adoc
index 9589be3..92593c4 100644
--- a/contributing.adoc
+++ b/contributing.adoc
@@ -1,6 +1,5 @@
-[[developers]]
-Developer's Guide
-=================
+[[contributing]]
+= Contributing to Camel K
 
 We love contributions!
 
@@ -50,7 +49,7 @@ This is a high level overview of the project structure:
 | link:/cmd[/cmd]                      | Contains the entry points (the *main* 
functions) for the **camel-k** binary and the **kamel** client tool.
 | link:/build[/build]          | Contains scripts used during make operations 
for building the project.
 | link:/deploy[/deploy]                | Contains Kubernetes resource files 
that are used by the **kamel** client during installation. The 
`/deploy/resources.go` file is kept in sync with the content of the directory 
(`make build-embed-resources`), so that resources can be used from within the 
go code.
-| link:/docs[/docs]                    | Contains this documentation.
+| link:/docs[/docs]                    | Contains the documentation website 
based on https://antora.org/[Antora].
 | link:/pkg[/pkg]                      | This is where the code resides. The 
code is divided in multiple subpackages.
 | link:/runtime[/runtime]      | The Java runtime code that is used inside the 
integration Docker containers.
 | link:/test[/test]                    | Include integration tests to ensure 
that the software interacts correctly with Kubernetes and OpenShift.
diff --git a/docs/modules/ROOT/nav.adoc b/docs/modules/ROOT/nav.adoc
index 6bd6a7c..43770b0 100644
--- a/docs/modules/ROOT/nav.adoc
+++ b/docs/modules/ROOT/nav.adoc
@@ -1,7 +1,20 @@
-* xref:installation/index.adoc[Installation]
+* xref:installation/installation.adoc[Installation]
 ** xref:installation/minikube.adoc[Minikube]
 ** xref:installation/minishift.adoc[Minishift]
 ** xref:installation/gke.adoc[Google Kubernetes Engine (GKE)]
 ** xref:installation/openshift.adoc[OpenShift]
-* xref:running.adoc[Running]
-* xref:configuration/index.adoc[Configuration]
\ No newline at end of file
+* xref:running/running.adoc[Running]
+** xref:running/dev-mode.adoc[Dev Mode]
+* xref:configuration/configuration.adoc[Configuration]
+** xref:configuration/components.adoc[Components]
+** xref:configuration/logging.adoc[Logging]
+** xref:configuration/dependencies.adoc[Dependencies]
+** xref:configuration/configmap-secret.adoc[ConfigMap/Secret]
+* xref:languages/languages.adoc[Languages]
+** xref:languages/groovy.adoc[Groovy]
+** xref:languages/kotlin.adoc[Kotlin]
+** xref:languages/javascript.adoc[JavaScript]
+** xref:languages/java.adoc[Java]
+** xref:languages/xml.adoc[XML]
+* xref:traits.adoc[Traits]
+* xref:uninstalling.adoc[Uninstalling]
\ No newline at end of file
diff --git a/docs/modules/ROOT/pages/configuration/components.adoc 
b/docs/modules/ROOT/pages/configuration/components.adoc
new file mode 100644
index 0000000..ce95b99
--- /dev/null
+++ b/docs/modules/ROOT/pages/configuration/components.adoc
@@ -0,0 +1,20 @@
+= Configure Integration Components
+
+Camel components can be configured programmatically (within the integration 
code) or using properties with the following syntax:
+
+```
+camel.component.${scheme}.${property}=${value}
+```
+
+For example, if you want to change the queue size of the `seda` component, you can use the following property:
+
+```
+camel.component.seda.queueSize=10
+```
+
+You can set the property when running the integration from the command line:
+
+```
+kamel run --property camel.component.seda.queueSize=10 examples/routes.groovy
+```
+
diff --git a/docs/modules/ROOT/pages/configuration/configmap-secret.adoc 
b/docs/modules/ROOT/pages/configuration/configmap-secret.adoc
new file mode 100644
index 0000000..25a27a9
--- /dev/null
+++ b/docs/modules/ROOT/pages/configuration/configmap-secret.adoc
@@ -0,0 +1,79 @@
+= Configuration via ConfigMap or Secret
+
+Camel K allows you to define property values using Kubernetes ConfigMaps or Secrets.
+
+For the sake of simplicity, consider the following integration:
+
+[source,groovy]
+.props.groovy
+----
+from('timer:props?period=1s')
+    .log('{{my.message}}')
+----
+
+In addition to xref:configuration/configuration.adoc[command line property 
configuration], Camel K provides the following options.
+
+== Configuration via ConfigMap
+
+You can create a _ConfigMap_ containing your configuration properties and link 
it to a Camel K integration.
+
+For example, you can define the following _ConfigMap_:
+
+[source,yaml]
+.my-config.yaml
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: my-config
+data:
+  application.properties: |
+    my.message=Hello World
+    logging.level.org.apache.camel=DEBUG
+----
+
+In the _ConfigMap_ above we've set both the value of the `my.message` property and the xref:configuration/logging.adoc[logging level] for the `org.apache.camel` package.
+
+You need to create the _ConfigMap_ first (in the same Kubernetes namespace):
+
+```
+kubectl apply -f my-config.yaml
+```
+
+You can now run the integration with the following command to reference the 
_ConfigMap_:
+
+```
+kamel run --configmap=my-config props.groovy
+```
+
+== Configuration via Secret
+
+Configuration via _Secrets_ is similar to configuration via a _ConfigMap_. The difference is that the content of the
+`application.properties` file must be `base64`-encoded when placed in the _Secret_'s `data` section.
+
+For example, the following _Secret_ is equivalent to the previous _ConfigMap_:
+
+[source,yaml]
+.my-secret.yaml
+----
+apiVersion: v1
+kind: Secret
+metadata:
+  name: my-secret
+data:
+  application.properties: |
+    bXkubWVzc2FnZT1IZWxsbyBXb3JsZApsb2dnaW5nLmxldmVsLm9yZy5hcGFjaGUuY2FtZWw9REVCVUcK
+----
+
+You need to create the _Secret_ first (in the same Kubernetes namespace):
+
+```
+kubectl apply -f my-secret.yaml
+```
+
+You can now run the integration with the following command to reference the 
_Secret_:
+
+```
+kamel run --secret=my-secret props.groovy
+```
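Generating the `base64` payload by hand is error-prone, so here is a minimal shell sketch (the property content mirrors the _ConfigMap_ example; `tr` only strips the line wrapping that some `base64` implementations add):

```shell
# Encode the application.properties content for the Secret's "data" section.
printf 'my.message=Hello World\nlogging.level.org.apache.camel=DEBUG\n' \
  | base64 | tr -d '\n'
```

Alternatively, `kubectl create secret generic my-secret --from-file=application.properties` creates the _Secret_ directly from a local file and performs the encoding for you.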
diff --git a/docs/modules/ROOT/pages/configuration/configuration.adoc 
b/docs/modules/ROOT/pages/configuration/configuration.adoc
new file mode 100644
index 0000000..d41a3bf
--- /dev/null
+++ b/docs/modules/ROOT/pages/configuration/configuration.adoc
@@ -0,0 +1,23 @@
+[[configuration]]
+= Configure Integrations
+
+Properties associated with an integration can be configured either using a ConfigMap/Secret or by setting the `--property` flag when running the integration.
+
+The property value can be used inside Camel K integrations using the *property 
placeholder* mechanism.
+
+Property placeholders have the form `{{my.property}}`, for example:
+
+[source,groovy]
+.props.groovy
+----
+from('timer:props?period=1s')
+    .log('{{my.message}}')
+----
+
+To give a value to the `my.message` property, you can pass it on the command line:
+
+```
+kamel run --property my.message="Hello World" examples/props.js
+```
+
+Alternatively, you can provide a value using a Kubernetes xref:configuration/configmap-secret.adoc[ConfigMap or Secret].
\ No newline at end of file
diff --git a/docs/modules/ROOT/pages/configuration/dependencies.adoc 
b/docs/modules/ROOT/pages/configuration/dependencies.adoc
new file mode 100644
index 0000000..fa6d49c
--- /dev/null
+++ b/docs/modules/ROOT/pages/configuration/dependencies.adoc
@@ -0,0 +1,24 @@
+= Dependencies and Component Resolution
+
+Camel K tries to automatically resolve a wide range of dependencies that are required to run your integration code.
+
+For example, take the following integration:
+
+```
+from("imap://ad...@myserver.com")
+  .to("seda:output")
+```
+
+Since the integration has an endpoint starting with the **"imap:" prefix**, Camel K is able to **automatically add the "camel-mail" component** to the list of required dependencies.
+The `seda:` endpoint belongs to `camel-core`, which is automatically added to all integrations, so Camel K will not add additional dependencies for it.
+This dependency resolution mechanism is transparent to the user, who will just see the integration running.
+
+Automatic resolution is also a nice feature in xref:running/dev-mode.adoc[dev 
mode], because you are allowed to add all components you need *without exiting 
the dev loop*.
+
+You can also add dependencies explicitly using the `-d` flag. This is useful when you need dependencies that are not included in the Camel catalog. For example:
+
+```
+kamel run -d mvn:com.google.guava:guava:26.0-jre -d camel-mina2 
Integration.java
+```
+
+Automatic resolution can also be disabled if needed (although we discourage you from doing it) by turning off the _dependencies_ trait (`-t dependencies.enabled=false`).
diff --git a/docs/modules/ROOT/pages/configuration/index.adoc 
b/docs/modules/ROOT/pages/configuration/index.adoc
deleted file mode 100644
index 2b22c19..0000000
--- a/docs/modules/ROOT/pages/configuration/index.adoc
+++ /dev/null
@@ -1,161 +0,0 @@
-[[configuration]]
-= Configure Integrations
-
-Properties associated to an integration can be configured either using a 
ConfigMap/Secret or by setting using the "--property" flag, i.e.
-
-```
-kamel run --property my.message=test examples/props.js
-```
-
-==== Configure Integration Logging
-
-camel-k runtime uses log4j2 as logging framework and can be configured through 
integration properties.
-If you need to change the logging level of various loggers, you can do so by 
using the `logging.level` prefix:
-
-```
-logging.level.org.apache.camel = DEBUG
-```
-
-==== Configure Integration Components
-
-camel-k component can be configured programmatically inside an integration or 
using properties with the following syntax.
-
-```
-camel.component.${scheme}.${property} = ${value}
-```
-
-As example if you want to change the queue size of the seda component, you can 
use the following property:
-
-```
-camel.component.seda.queueSize = 10
-```
-
-=== Running Integrations in "Dev" Mode for Fast Feedback
-
-If you want to iterate quickly on an integration to have fast feedback on the 
code you're writing, you can use by running it in **"dev" mode**:
-
-```
-kamel run examples/Sample.java --dev
-```
-
-The `--dev` flag deploys immediately the integration and shows the integration 
logs in the console. You can then change the code and see
-the **changes automatically applied (instantly)** to the remote integration 
pod.
-
-The console follows automatically all redeploys of the integration.
-
-Here's an example of the output:
-
-```
-[nferraro@localhost camel-k]$ kamel run examples/Sample.java --dev
-integration "sample" created
-integration "sample" in phase Building
-integration "sample" in phase Deploying
-integration "sample" in phase Running
-[1] Monitoring pod sample-776db787c4-zjhfr[1] Starting the Java application 
using /opt/run-java/run-java.sh ...
-[1] exec java 
-javaagent:/opt/prometheus/jmx_prometheus_javaagent.jar=9779:/opt/prometheus/prometheus-config.yml
 -XX:+UseParallelGC -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 
-XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -XX:+ExitOnOutOfMemoryError -cp 
.:/deployments/* org.apache.camel.k.jvm.Application
-[1] [INFO ] 2018-09-20 21:24:35.953 [main] Application - Routes: 
file:/etc/camel/conf/Sample.java
-[1] [INFO ] 2018-09-20 21:24:35.955 [main] Application - Language: java
-[1] [INFO ] 2018-09-20 21:24:35.956 [main] Application - Locations: 
file:/etc/camel/conf/application.properties
-[1] [INFO ] 2018-09-20 21:24:36.506 [main] DefaultCamelContext - Apache Camel 
2.22.1 (CamelContext: camel-1) is starting
-[1] [INFO ] 2018-09-20 21:24:36.578 [main] ManagedManagementStrategy - JMX is 
enabled
-[1] [INFO ] 2018-09-20 21:24:36.680 [main] DefaultTypeConverter - Type 
converters loaded (core: 195, classpath: 0)
-[1] [INFO ] 2018-09-20 21:24:36.777 [main] DefaultCamelContext - StreamCaching 
is not in use. If using streams then its recommended to enable stream caching. 
See more details at http://camel.apache.org/stream-caching.html
-[1] [INFO ] 2018-09-20 21:24:36.817 [main] DefaultCamelContext - Route: route1 
started and consuming from: timer://tick
-[1] [INFO ] 2018-09-20 21:24:36.818 [main] DefaultCamelContext - Total 1 
routes, of which 1 are started
-[1] [INFO ] 2018-09-20 21:24:36.820 [main] DefaultCamelContext - Apache Camel 
2.22.1 (CamelContext: camel-1) started in 0.314 seconds
-
-```
-
-=== Dependencies and Component Resolution
-
-Camel components used in an integration are automatically resolved. For 
example, take the following integration:
-
-```
-from("imap://ad...@myserver.com")
-  .to("seda:output")
-```
-
-Since the integration is using the **"imap:" prefix**, Camel K is able to 
**automatically add the "camel-mail" component** to the list of required 
dependencies.
-This will be transparent to the user, that will just see the integration 
running.
-
-Automatic resolution is also a nice feature in `--dev` mode, because you are 
allowed to add all components you need without exiting the dev loop.
-
-You can also use the `-d` flag to pass additional explicit dependencies to the 
Camel client tool:
-
-```
-kamel run -d mvn:com.google.guava:guava:26.0-jre -d camel-mina2 
Integration.java
-```
-
-=== Not Just Java
-
-Camel K supports multiple languages for writing integrations:
-
-.Languages
-[options="header"]
-|=======================
-| Language                     | Description
-| Java                         | Both integrations in source `.java` files or 
compiled `.class` file can be run.
-| XML                          | Integrations written in plain XML DSL are 
supported (Spring XML or Blueprint not supported).
-| Groovy                       | Groovy `.groovy` files are supported 
(experimental).
-| JavaScript        | JavaScript `.js` files are supported (experimental).
-| Kotlin                       | Kotlin Script `.kts` files are supported 
(experimental).
-|=======================
-
-More information about supported languages is provided in the 
link:docs/languages.adoc[languages guide].
-
-Integrations written in different languages are provided in the 
link:/examples[examples] directory.
-
-An example of integration written in JavaScript is the 
link:/examples/dns.js[/examples/dns.js] integration.
-Here's the content:
-
-```
-// Lookup every second the 'www.google.com' domain name and log the output
-from('timer:dns?period=1s')
-    .routeId('dns')
-    .setHeader('dns.domain')
-        .constant('www.google.com')
-    .to('dns:ip')
-    .to('log:dns');
-```
-
-To run it, you need just to execute:
-
-```
-kamel run examples/dns.js
-```
-
-=== Traits
-
-The details of how the integration is mapped into Kubernetes resources can be 
*customized using traits*.
-More information is provided in the link:docs/traits.adoc[traits section].
-
-=== Monitoring the Status
-
-Camel K integrations follow a lifecycle composed of several steps before 
getting into the `Running` state.
-You can check the status of all integrations by executing the following 
command:
-
-```
-kamel get
-```
-
-[[contributing]]
-== Contributing
-
-We love contributions and we want to make Camel K great!
-
-Contributing is easy, just take a look at our 
link:/docs/developers.adoc[developer's guide].
-
-[[uninstalling]]
-== Uninstalling
-
-If you really need to, it is possible to completely uninstall Camel K from 
OpenShift or Kubernetes with the following command, using the "oc" or "kubectl" 
tool:
-
-```
-# kubectl on plain Kubernetes
-oc delete 
all,pvc,configmap,rolebindings,clusterrolebindings,secrets,sa,roles,clusterroles,crd
 -l 'app=camel-k'
-```
-
-[[licensing]]
-== Licensing
-
-This software is licensed under the terms you may find in the file named 
LICENSE in this directory.
diff --git a/docs/modules/ROOT/pages/configuration/logging.adoc 
b/docs/modules/ROOT/pages/configuration/logging.adoc
new file mode 100644
index 0000000..52bcaea
--- /dev/null
+++ b/docs/modules/ROOT/pages/configuration/logging.adoc
@@ -0,0 +1,15 @@
+[[logging-configuration]]
+= Configure Integration Logging
+
+Camel K uses log4j2 as its logging framework, which can be configured through integration properties.
+If you need to change the logging level of various loggers, you can do so by 
using the `logging.level` prefix:
+
+```
+logging.level.org.apache.camel=DEBUG
+```
+
+You can set the logging level when running the integration from the command line:
+
+```
+kamel run --property logging.level.org.apache.camel=DEBUG 
examples/routes.groovy
+```
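Multiple loggers can be tuned at the same time by repeating the prefix; a small sketch, where the second package name is purely illustrative:

```
logging.level.org.apache.camel=DEBUG
logging.level.org.apache.camel.component.timer=INFO
```

Properties like these can also be grouped in an `application.properties` file and delivered through a xref:configuration/configmap-secret.adoc[ConfigMap or Secret].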
diff --git a/docs/modules/ROOT/pages/developers.adoc 
b/docs/modules/ROOT/pages/developers.adoc
deleted file mode 100644
index 9589be3..0000000
--- a/docs/modules/ROOT/pages/developers.adoc
+++ /dev/null
@@ -1,186 +0,0 @@
-[[developers]]
-Developer's Guide
-=================
-
-We love contributions!
-
-The project is written in https://golang.org/[go] and contains some parts 
written in Java for the link:/runtime[integration runtime]
-Camel K is built on top of Kubernetes through *Custom Resource Definitions*. 
The https://github.com/operator-framework/operator-sdk[Operator SDK] is used
-to manage the lifecycle of those custom resources.
-
-[[requirements]]
-== Requirements
-
-In order to build the project, you need to comply with the following 
requirements:
-
-* **Go version 1.10+**: needed to compile and test the project. Refer to the 
https://golang.org/[Go website] for the installation.
-* **Dep version 0.5.0**: for managing dependencies. You can find installation 
instructions in the https://github.com/golang/dep[dep GitHub repository].
-* **Operator SDK v0.0.7+**: used to build the operator and the Docker images. 
Instructions in the https://github.com/operator-framework/operator-sdk[Operator 
SDK website] (binary downloads available in the release page).
-* **GNU Make**: used to define composite build actions. This should be already 
installed or available as package if you have a good OS 
(https://www.gnu.org/software/make/).
-
-[[checks]]
-== Running checks
-Checks rely on `golangci-lint` being installed, to install it look at the 
https://github.com/golangci/golangci-lint#local-installation[Local 
Installation] instructions.
-
-You can run checks via `make lint` or you can install a GIT pre-commit hook 
and have the checks run via https://pre-commit.com[pre-commit]; then make sure 
to install the pre-commit hooks after installing pre-commit by running
-
- $ pre-commit install
-
-[[checking-out]]
-== Checking Out the Sources
-
-You can create a fork of this project from Github, then clone your fork with 
the `git` command line tool.
-
-You need to put the project in your $GOPATH (refer to 
https://golang.org/doc/install[Go documentation] for information).
-So, make sure that the **root** of the github repo is in the path:
-
-```
-$GOPATH/src/github.com/apache/camel-k/
-```
-
-[[structure]]
-== Structure
-
-This is a high level overview of the project structure:
-
-.Structure
-[options="header"]
-|=======================
-| Path                                         | Content
-| link:/cmd[/cmd]                      | Contains the entry points (the *main* 
functions) for the **camel-k** binary and the **kamel** client tool.
-| link:/build[/build]          | Contains scripts used during make operations 
for building the project.
-| link:/deploy[/deploy]                | Contains Kubernetes resource files 
that are used by the **kamel** client during installation. The 
`/deploy/resources.go` file is kept in sync with the content of the directory 
(`make build-embed-resources`), so that resources can be used from within the 
go code.
-| link:/docs[/docs]                    | Contains this documentation.
-| link:/pkg[/pkg]                      | This is where the code resides. The 
code is divided in multiple subpackages.
-| link:/runtime[/runtime]      | The Java runtime code that is used inside the 
integration Docker containers.
-| link:/test[/test]                    | Include integration tests to ensure 
that the software interacts correctly with Kubernetes and OpenShift.
-| link:/tmp[/tmp]                      | Scripts and Docker configuration 
files used by the operator-sdk.
-| /vendor                                      | Project dependencies (not 
staged in git).
-| link:/version[/version]      | Contains the global version of the project.
-|=======================
-
-
-[[building]]
-== Building
-
-Go dependencies in the *vendor* directory are not included when you clone the 
project.
-
-Before compiling the source code, you need to sync your local *vendor* 
directory with the project dependencies, using the following command:
-
-```
-make dep
-```
-
-The `make dep` command runs `dep ensure -v` under the hood, so make sure that 
`dep` is properly installed.
-
-To build the whole project you now need to run:
-
-```
-make
-```
-
-This execute a full build of both the Java and Go code. If you need to build 
the components separately you can execute:
-
-* `make build-operator`: to build the operator binary only.
-* `make build-kamel`: to build the `kamel` client tool only.
-* `make build-runtime`: to build the Java-based runtime code only.
-
-After a successful build, if you're connected to a Docker daemon, you can 
build the operator Docker image by running:
-
-```
-make images
-```
-
-[[testing]]
-== Testing
-
-Unit tests are executed automatically as part of the build. They use the 
standard go testing framework.
-
-Integration tests (aimed at ensuring that the code integrates correctly with 
Kubernetes and OpenShift), need special care.
-
-The **convention** used in this repo is to name unit tests `xxx_test.go`, and 
name integration tests `yyy_integration_test.go`.
-Integration tests are all in the link:/test[/test] dir.
-
-Since both names end with `_test.go`, both would be executed by go during 
build, so you need to put a special **build tag** to mark
-integration tests. A integration test should start with the following line:
-
-```
-// +build integration
-```
-
-Look into the link:/test[/test] directory for examples of integration tests.
-
-Before running a integration test, you need to be connected to a 
Kubernetes/OpenShift namespace.
-After you log in into your cluster, you can run the following command to 
execute **all** integration tests:
-
-```
-make test-integration
-```
-
-[running]
-== Running
-
-If you want to install everything you have in your source code and see it 
running on Kubernetes, you need to run the following command:
-
-=== For Minishift
-
-* Run `make install-minishift` (or just `make install`): to build the project 
and install it in the current namespace on Minishift
-* You can specify a different namespace with `make install-minishift 
project=myawesomeproject`
-
-This command assumes you have an already running Minishift instance.
-
-=== For Minikube
-
-* Run `make install-minikube`: to build the project and install it in the 
current namespace on Minikube
-
-This command assumes you have an already running Minikube instance.
-
-=== Use
-
-Now you can play with Camel K:
-
-```
-./kamel run examples/Sample.java
-```
-
-To add additional dependencies to your routes:
-
-```
-./kamel run -d camel:dns examples/dns.js
-```
-
-[[debugging]]
-== Debugging and Running from IDE
-
-Sometimes it's useful to debug the code from the IDE when troubleshooting.
-
-.**Debugging the `kamel` binary**
-
-It should be straightforward: just execute the 
link:/cmd/kamel/main.go[/cmd/kamel/main.go] file from the IDE (e.g. Goland) in 
debug mode.
-
-.**Debugging the operator**
-
-It is a bit more complex (but not so much).
-
-You are going to run the operator code **outside** OpenShift in your IDE so, 
first of all, you need to **stop the operator running inside**:
-
-```
-// use kubectl in plain Kubernetes
-oc scale deployment/camel-k-operator --replicas 0
-```
-
-You can scale it back to 1 when you're done and you have updated the operator 
image.
-
-You can setup the IDE (e.g. Goland) to execute the 
link:/cmd/camel-k/main.go[/cmd/camel-k/main.go] file in debug mode.
-
-When configuring the IDE task, make sure to add all required environment 
variables in the *IDE task configuration screen*:
-
-* Set the `KUBERNETES_CONFIG` environment variable to point to your Kubernetes 
configuration file (usually `<homedir>/.kube/config`).
-* Set the `WATCH_NAMESPACE` environment variable to a Kubernetes namespace you 
have access to.
-* Set the `OPERATOR_NAME` environment variable to `camel-k`.
-
-After you setup the IDE task, you can run and debug the operator process.
-
-NOTE: The operator can be fully debugged in Minishift, because it uses 
OpenShift S2I binary builds under the hood.
-The build phase cannot be (currently) debugged in Minikube because the Kaniko 
builder requires that the operator and the publisher pod
-share a common persistent volume.
diff --git a/docs/modules/ROOT/pages/index.adoc 
b/docs/modules/ROOT/pages/index.adoc
index b91f547..dc65ad9 100644
--- a/docs/modules/ROOT/pages/index.adoc
+++ b/docs/modules/ROOT/pages/index.adoc
@@ -3,7 +3,7 @@ Apache Camel K
 
 Apache Camel K is a lightweight integration framework built from Apache Camel 
that runs natively on Kubernetes and is specifically designed for serverless 
and microservice architectures.
 
-Users of Camel K can instantly run integration code written in Camel DSL on 
their preferred cloud.
+Users of Camel K can instantly run integration code written in Camel DSL on 
their preferred cloud (Kubernetes or OpenShift).
 
 [[how-it-works]]
 == How It Works
@@ -24,4 +24,4 @@ kamel run helloworld.groovy
 
 The integration code immediately runs in the cloud. **Nothing else** is needed.
 
-Continue reading the documentation to xref:installation/index.adoc[install and 
get started with Camel K].
+Continue reading the documentation to 
xref:installation/installation.adoc[install and get started with Camel K].
diff --git a/docs/modules/ROOT/pages/installation/index.adoc 
b/docs/modules/ROOT/pages/installation/installation.adoc
similarity index 96%
rename from docs/modules/ROOT/pages/installation/index.adoc
rename to docs/modules/ROOT/pages/installation/installation.adoc
index 2762e90..ddc9767 100644
--- a/docs/modules/ROOT/pages/installation/index.adoc
+++ b/docs/modules/ROOT/pages/installation/installation.adoc
@@ -38,4 +38,4 @@ IMPORTANT: Custom Resource Definitions (CRD) are cluster-wide 
objects and you ne
 operation can be done *once per cluster*. So, if the `kamel install` operation 
fails, you'll be asked to repeat it when logged as admin.
 For Minishift, this means executing `oc login -u system:admin` then `kamel 
install --cluster-setup` only for first-time installation.
 
-You're now ready to xref:running.adoc[run some integrations].
+You're now ready to xref:running/running.adoc[run some integrations].
diff --git a/docs/modules/ROOT/pages/installation/minikube.adoc 
b/docs/modules/ROOT/pages/installation/minikube.adoc
index 5f26f2d..960e70d 100644
--- a/docs/modules/ROOT/pages/installation/minikube.adoc
+++ b/docs/modules/ROOT/pages/installation/minikube.adoc
@@ -16,4 +16,4 @@ After the startup process is completed, you need to **enable 
the `registry` addo
 minikube addons enable registry
 ```
 
-You can now proceed with the xref:installation/index.adoc#procedure[standard 
Camel K installation procedure].
+You can now proceed with the 
xref:installation/installation.adoc#procedure[standard Camel K installation 
procedure].
diff --git a/docs/modules/ROOT/pages/installation/minishift.adoc 
b/docs/modules/ROOT/pages/installation/minishift.adoc
index 7e2564d..8b1056f 100644
--- a/docs/modules/ROOT/pages/installation/minishift.adoc
+++ b/docs/modules/ROOT/pages/installation/minishift.adoc
@@ -18,4 +18,4 @@ Then you can start the cluster with:
 minishift start
 ```
 
-You can now proceed with the xref:installation/index.adoc#procedure[standard 
Camel K installation procedure].
+You can now proceed with the 
xref:installation/installation.adoc#procedure[standard Camel K installation 
procedure].
diff --git a/docs/modules/ROOT/pages/installation/openshift.adoc 
b/docs/modules/ROOT/pages/installation/openshift.adoc
index f2f1fd5..f13b60f 100644
--- a/docs/modules/ROOT/pages/installation/openshift.adoc
+++ b/docs/modules/ROOT/pages/installation/openshift.adoc
@@ -16,4 +16,4 @@ kamel install --cluster-setup
 ```
 
 Once you've done this **only once per the whole cluster**, you can **login as 
a standard user** and
-continue with the xref:installation/index.adoc#procedure[standard Camel K 
installation procedure].
+continue with the xref:installation/installation.adoc#procedure[standard Camel 
K installation procedure].
diff --git a/docs/modules/ROOT/pages/languages.adoc 
b/docs/modules/ROOT/pages/languages.adoc
deleted file mode 100644
index 42f0557..0000000
--- a/docs/modules/ROOT/pages/languages.adoc
+++ /dev/null
@@ -1,296 +0,0 @@
-[[languages]]
-= Languages
-
-Camel K supports integration written in the following languages:name: value
-
-[options="header"]
-|=======================
-| Language      | Description
-| Java          | Both integrations in source `.java` files or compiled 
`.class` file can be run.
-| XML           | Integrations written in plain XML DSL are supported (Spring 
XML or Blueprint not supported).
-| Groovy        | Groovy `.groovy` files are supported (experimental).
-| JavaScript    | JavaScript `.js` files are supported (experimental).
-| Kotlin        | Kotlin Script `.kts` files are supported (experimental).
-|=======================
-
-[WARNING]
-====
-Work In Progress
-====
-
-=== Java
-
-Using Java to write an integration to be deployed using camel-k is not 
different from defining your routing rules in Camel with the only difference 
that you do not need to build and package it as a jar.
-
-[source,java]
-.Example
-----
-import org.apache.camel.builder.RouteBuilder;
-
-public class Sample extends RouteBuilder {
-    @Override
-    public void configure() throws Exception {
-        from("timer:tick")
-            .setBody()
-              .constant("Hello Camel K!")
-            .to("log:info");
-    }
-}
-----
-
-=== XML
-
-Camel K support the standard Camel Routes XML DSL:
-
-[source,xml]
-.Example
-----
-<routes xmlns="http://camel.apache.org/schema/spring";>
-    <route>
-        <from uri="timer:tick"/>
-        <setBody>
-            <constant>Hello Camel K!</constant>
-         </setBody>
-        <to uri="log:info"/>
-    </route>
-</routes>
-----
-
-=== Groovy
-
-An integration written in Groovy looks very similar to a Java one except it 
can leverages Groovy's language enhancements over Java:
-
-[source,java]
-----
-from('timer:tick')
-    .process { it.in.body = 'Hello Camel K!' }
-    .to('log:info')
-----
-
-Camel K extends the Camel Java DSL making it easier to configure the context 
in which the integration runs using the top level _context_ block
-
-[source,java]
-----
-context {
-  // configure the context here
-}
-----
-
-At the moment the enhanced DSL provides a way to bind items to the registry, 
to configure the components the context creates and some improvements over the 
REST DSL.
-
-- *Registry*
-+
-The registry is accessible using the _registry_ block inside the _context_ one:
-+
-[source,java]
-----
-context {
-    registry {
-      bind "my-cache", Caffeine.newBuilder().build() // <1>
-      bind "my-processor", processor { // <2>
-         it.in.body = 'Hello Camel K!'
-      }
-      bind "my-predicate", predicate { // <3>
-         it.in.body != null
-      }
-    }
-}
-----
-<1> bind a bean to the context
-<2> define a custom processor to be used later in the routes by ref
-<2> define a custom predicate to be used later in the routes by ref
-
-
-- *Components*
-+
-Components can be configured within the _components_ block inside the 
_context_ one:
-+
-[source,java]
-----
-context {
-    components {
-        'seda' { // <1>
-            queueSize = 1234
-            concurrentConsumers = 12
-        }
-
-        'log' { // <2>
-            exchangeFormatter = {
-                'body ==> ' + it.in.body
-            } as org.apache.camel.spi.ExchangeFormatter
-        }
-    }
-}
-----
-<1> configure the properties of the component whit name _seda_
-<2> configure the properties of the component whit name _log_
-+
-Setting the property _exchangeFormatter_ looks a little ugly as you have to 
declare the type of your closure. For demonstration purpose we have created a 
Groovy extension module that simplify configuring the _exchangeFormatter_ so 
you can rewrite your DSL as
-+
-[source,java]
-----
-context {
-    components {
-        ...
-
-        'log' {
-            formatter {
-                'body ==> ' + it.in.body
-            }
-        }
-    }
-}
-----
-+
-which is much better.
-+
-[TIP]
-====
-You can provide your custom extensions by packaging them in a dependency you 
declare for your integration.
-====
-
-- *Rest*
-+
-Integrations's REST endpoints can be configured using the top level _rest_ 
block:
-+
-[source,java]
-----
-rest {
-    configuration { // <1>
-        host = 'my-host'
-        port '9192'
-    }
-
-    path('/my/path') { // <2>
-        // standard Rest DSL
-    }
-}
-----
-<1> Configure the rest engine
-<2> Configure the rest endpoint for the base path '/my/path'
-
-=== Kotlin
-
-An integration written in Kotlin looks very similar to a Java one except it 
can leverages Kotlin's language enhancements over Java:
-
-[source,java]
-----
-from('timer:tick')
-    .process { e -> e.getIn().body = 'Hello Camel K!' }
-    .to('log:info');
-----
-
-Camel K extends the Camel Java DSL making it easier to configure the context 
in which the integration runs using the top level _context_ block
-
-[source,java]
-----
-context {
-  // configure the context here
-}
-----
-
-At the moment the enhanced DSL provides a way to bind items to the registry, 
to configure the components the context creates and some improvements over the 
REST DSL.
-
-- *Registry*
-+
-The registry is accessible using the _registry_ block inside the _context_ one:
-+
-[source,java]
-----
-context {
-    registry {
-        bind("my-cache", Caffeine.newBuilder().build()) // <1>
-        bind("my-processor", processor { // <2>
-            e -> e.getIn().body = "Hello"
-        })
-        bind("my-predicate", predicate { // <2>
-            e -> e.getIn().body != null
-        })
-    }
-}
-----
-<1> bind a simple bean to the context
-<2> define a custom processor to be used later in the routes by ref
-<2> define a custom predicate to be used later in the routes by ref
-
-
-- *Components*
-+
-Components can be configured within the _components_ block inside the 
_context_ one:
-+
-[source,java]
-----
-context {
-    components {
-        component<SedaComponent>("seda") { //<1>
-            queueSize = 1234
-            concurrentConsumers = 12
-        }
-
-        component<SedaComponent>("mySeda") { // <2>
-            queueSize = 4321
-            concurrentConsumers = 21
-        }
-
-        component<LogComponent>("log") { // <3>
-           setExchangeFormatter {
-               e: Exchange -> "" + e.getIn().body
-           }
-       }
-    }
-}
-----
-<1> configure the properties of a component whit type _SedaComponent_ and name 
_seda_
-<2> configure the properties of a component with type SedaComponent and name 
_mySeda_, note that as _mySeda_ does not represent a valid component scheme, a 
new component of the required type will be instantiated.
-<3> configure the properties of the component whit name _log_
-+
-[NOTE]
-====
-As for Groovy, you can provide your custom extension to the DSL
-====
-
-- *Rest*
-+
-Integrations's REST endpoints can be configured using the top level _rest_ 
block:
-+
-[source,java]
-----
-rest {
-    configuration {
-        host = "my-host"
-        port = "9192"
-    }
-
-    path("/my/path") { // <2>
-        // standard Rest DSL
-    }
-}
-----
-<1> Configure the rest engine
-<2> Configure the rest endpoint for the base path '/my/path'
-
-=== JavaScript
-
-The integration written in JavaScript looks very similar to a Java one:
-
-[source,js]
-----
-function proc(e) {
-    e.getIn().setBody('Hello Camel K!')
-}
-
-from('timer:tick')
-    .process(proc)
-    .to('log:info')
-----
-
-For JavaScript integrations, Camel K does not yet provide an enhanced DSL but 
you can access to some global bounded objects such as a writable registry and 
the camel context so to set the property _exchangeFormatter_ of the 
_LogComponent_ as done in previous example, you can do something like:
-
-[source,js]
-----
-
-l = context.getComponent('log', true, false)
-l.exchangeFormatter = function(e) {
-    return "log - body=" + e.in.body + ", headers=" + e.in.headers
-}
-----
diff --git a/docs/modules/ROOT/pages/languages/groovy.adoc 
b/docs/modules/ROOT/pages/languages/groovy.adoc
new file mode 100644
index 0000000..2ba9ba0
--- /dev/null
+++ b/docs/modules/ROOT/pages/languages/groovy.adoc
@@ -0,0 +1,112 @@
+= Writing Integrations in Groovy
+
+An integration written in Groovy looks very similar to a Java one, except that it can leverage Groovy's language enhancements over Java:
+
+[source,groovy]
+----
+from('timer:tick')
+    .process { it.in.body = 'Hello Camel K!' }
+    .to('log:info')
+----
+
+Camel K extends the Camel Java DSL, making it easier to configure the context in which the integration runs, using the top-level _context_ block:
+
+[source,groovy]
+----
+context {
+  // configure the context here
+}
+----
+
+At the moment, the enhanced DSL provides a way to bind items to the registry, to configure the components that the context creates, and some improvements over the REST DSL.
+
+== Registry Configuration
+
+The registry is accessible using the _registry_ block inside the _context_ one:
+
+[source,groovy]
+----
+context {
+    registry {
+      bind "my-cache", Caffeine.newBuilder().build() // <1>
+      bind "my-processor", processor { // <2>
+         it.in.body = 'Hello Camel K!'
+      }
+      bind "my-predicate", predicate { // <3>
+         it.in.body != null
+      }
+    }
+}
+----
+<1> bind a bean to the context
+<2> define a custom processor to be used later in the routes by ref
+<3> define a custom predicate to be used later in the routes by ref
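+
+For illustration, an item bound by name in the registry can later be referenced from the routes. The following is only a sketch, assuming the standard Camel 2.x by-ref convention (`processRef` resolving a `Processor` from the registry); the exact resolution mechanism depends on the runtime:
+
+[source,groovy]
+----
+from('timer:tick')
+    .processRef('my-processor') // resolve the 'my-processor' bean bound above
+    .to('log:info')
+----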
+
+
+== Components Configuration
+
+Components can be configured within the _components_ block inside the 
_context_ one:
+
+[source,groovy]
+----
+context {
+    components {
+        'seda' { // <1>
+            queueSize = 1234
+            concurrentConsumers = 12
+        }
+
+        'log' { // <2>
+            exchangeFormatter = {
+                'body ==> ' + it.in.body
+            } as org.apache.camel.spi.ExchangeFormatter
+        }
+    }
+}
+----
+<1> configure the properties of the component with name _seda_
+<2> configure the properties of the component with name _log_
+
+Setting the _exchangeFormatter_ property looks a little ugly, as you have to declare the type of your closure. For demonstration purposes, we have created a Groovy extension module that simplifies configuring the _exchangeFormatter_, so you can rewrite your DSL as:
+
+[source,groovy]
+----
+context {
+    components {
+        ...
+
+        'log' {
+            formatter {
+                'body ==> ' + it.in.body
+            }
+        }
+    }
+}
+----
+
+which is much better.
+
+[TIP]
+====
+You can provide your custom extensions by packaging them in a dependency you 
declare for your integration.
+====
+
+== Rest Endpoints
+
+An integration's REST endpoints can be configured using the top-level _rest_ block:
+
+[source,groovy]
+----
+rest {
+    configuration { // <1>
+        host = 'my-host'
+        port '9192'
+    }
+
+    path('/my/path') { // <2>
+        // standard Rest DSL
+    }
+}
+----
+<1> Configure the rest engine
+<2> Configure the rest endpoint for the base path '/my/path'
diff --git a/docs/modules/ROOT/pages/languages/java.adoc 
b/docs/modules/ROOT/pages/languages/java.adoc
new file mode 100644
index 0000000..127a7d9
--- /dev/null
+++ b/docs/modules/ROOT/pages/languages/java.adoc
@@ -0,0 +1,25 @@
+= Writing Integrations in Java
+
+Writing an integration in Java to be deployed using Camel K is no different from defining your routing rules in plain Camel, except that you do not need to build and package it as a jar.
+
+[source,java]
+.Example.java
+----
+import org.apache.camel.builder.RouteBuilder;
+
+public class Sample extends RouteBuilder {
+    @Override
+    public void configure() throws Exception {
+        from("timer:tick")
+            .setBody()
+              .constant("Hello Camel K!")
+            .to("log:info");
+    }
+}
+----
+
+You can run it with the standard command:
+
+```
+kamel run Example.java
+```
diff --git a/docs/modules/ROOT/pages/languages/javascript.adoc 
b/docs/modules/ROOT/pages/languages/javascript.adoc
new file mode 100644
index 0000000..20491f8
--- /dev/null
+++ b/docs/modules/ROOT/pages/languages/javascript.adoc
@@ -0,0 +1,33 @@
+= Writing Integrations in JavaScript
+
+An integration written in JavaScript looks very similar to a Java one:
+
+[source,js]
+.hello.js
+----
+function proc(e) {
+    e.getIn().setBody('Hello Camel K!')
+}
+
+from('timer:tick')
+    .process(proc)
+    .to('log:info')
+----
+
+To run it, just execute:
+
+```
+kamel run hello.js
+```
+
+For JavaScript integrations, Camel K does not yet provide an enhanced DSL, but you can access some global bound objects, such as a writable registry and the Camel context. For example, to set the _exchangeFormatter_ property of the _LogComponent_, as done in the previous examples, you can do something like:
+
+[source,js]
+----
+l = context.getComponent('log', true, false)
+l.exchangeFormatter = function(e) {
+    return "log - body=" + e.in.body + ", headers=" + e.in.headers
+}
+----
+
diff --git a/docs/modules/ROOT/pages/languages/kotlin.adoc 
b/docs/modules/ROOT/pages/languages/kotlin.adoc
new file mode 100644
index 0000000..54f4555
--- /dev/null
+++ b/docs/modules/ROOT/pages/languages/kotlin.adoc
@@ -0,0 +1,99 @@
+= Writing Integrations in Kotlin
+
+An integration written in Kotlin looks very similar to a Java one, except that it can leverage Kotlin's language enhancements over Java:
+
+[source,kotlin]
+----
+from("timer:tick")
+    .process { e -> e.getIn().body = "Hello Camel K!" }
+    .to("log:info")
+----
+
+Camel K extends the Camel Java DSL, making it easier to configure the context in which the integration runs, using the top-level _context_ block:
+
+[source,kotlin]
+----
+context {
+  // configure the context here
+}
+----
+
+At the moment, the enhanced DSL provides a way to bind items to the registry, to configure the components that the context creates, and some improvements over the REST DSL.
+
+== Registry Configuration
+
+The registry is accessible using the _registry_ block inside the _context_ one:
+
+[source,kotlin]
+----
+context {
+    registry {
+        bind("my-cache", Caffeine.newBuilder().build()) // <1>
+        bind("my-processor", processor { // <2>
+            e -> e.getIn().body = "Hello"
+        })
+        bind("my-predicate", predicate { // <3>
+            e -> e.getIn().body != null
+        })
+    }
+}
+----
+<1> bind a simple bean to the context
+<2> define a custom processor to be used later in the routes by ref
+<3> define a custom predicate to be used later in the routes by ref
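+
+For illustration, a bound item can later be referenced from the routes by name. This is only a sketch, assuming the standard Camel 2.x by-ref convention (`processRef` resolving a `Processor` from the registry):
+
+[source,kotlin]
+----
+from("timer:tick")
+    .processRef("my-processor") // resolve the "my-processor" bean bound above
+    .to("log:info")
+----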
+
+
+== Components Configuration
+
+Components can be configured within the _components_ block inside the 
_context_ one:
+
+[source,kotlin]
+----
+context {
+    components {
+        component<SedaComponent>("seda") { //<1>
+            queueSize = 1234
+            concurrentConsumers = 12
+        }
+
+        component<SedaComponent>("mySeda") { // <2>
+            queueSize = 4321
+            concurrentConsumers = 21
+        }
+
+        component<LogComponent>("log") { // <3>
+           setExchangeFormatter {
+               e: Exchange -> "" + e.getIn().body
+           }
+       }
+    }
+}
+----
+<1> configure the properties of a component with type _SedaComponent_ and name _seda_
+<2> configure the properties of a component with type _SedaComponent_ and name _mySeda_; note that since _mySeda_ does not represent a valid component scheme, a new component of the required type will be instantiated
+<3> configure the properties of the component with name _log_
+
+[NOTE]
+====
+As with Groovy, you can provide your own custom extensions to the DSL.
+====
+
+== Rest Endpoints
+
+An integration's REST endpoints can be configured using the top-level _rest_ block:
+
+[source,kotlin]
+----
+rest {
+    configuration { // <1>
+        host = "my-host"
+        port = "9192"
+    }
+
+    path("/my/path") { // <2>
+        // standard Rest DSL
+    }
+}
+----
+<1> Configure the rest engine
+<2> Configure the rest endpoint for the base path '/my/path'
diff --git a/docs/modules/ROOT/pages/languages/languages.adoc 
b/docs/modules/ROOT/pages/languages/languages.adoc
new file mode 100644
index 0000000..775370a
--- /dev/null
+++ b/docs/modules/ROOT/pages/languages/languages.adoc
@@ -0,0 +1,21 @@
+[[languages]]
+= Languages
+
+Camel K supports multiple languages for writing integrations:
+
+.Supported Languages
+[options="header"]
+[cols="30%,70%"]
+|=======================
+| Language                     | Description
+| xref:languages/groovy.adoc[Groovy]                   | Groovy `.groovy` 
files are supported.
+| xref:languages/kotlin.adoc[Kotlin]                   | Kotlin Script `.kts` 
files are supported.
+| xref:languages/javascript.adoc[JavaScript]   | JavaScript `.js` files are 
supported.
+| xref:languages/java.adoc[Java]                               | Integrations in both source `.java` files and compiled `.class` files can be run.
+| xref:languages/xml.adoc[XML]                                 | Integrations 
written in plain XML DSL are supported (Spring XML or Blueprint not supported).
+|=======================
+
+More information about supported languages is provided in the language-specific sections.
+
+Integrations written in different languages are provided in the examples pack that can be downloaded from the https://github.com/apache/camel-k/releases[release page].
+
diff --git a/docs/modules/ROOT/pages/languages/xml.adoc 
b/docs/modules/ROOT/pages/languages/xml.adoc
new file mode 100644
index 0000000..c03876c
--- /dev/null
+++ b/docs/modules/ROOT/pages/languages/xml.adoc
@@ -0,0 +1,23 @@
+= Writing Integrations in XML
+
+Camel K supports the classic XML DSL available in Camel:
+
+[source,xml]
+.example.xml
+----
+<routes xmlns="http://camel.apache.org/schema/spring">
+    <route>
+        <from uri="timer:tick"/>
+        <setBody>
+            <constant>Hello Camel K!</constant>
+        </setBody>
+        <to uri="log:info"/>
+    </route>
+</routes>
+----
+
+You can run it by executing:
+
+```
+kamel run example.xml
+```
diff --git a/docs/modules/ROOT/pages/running.adoc 
b/docs/modules/ROOT/pages/running.adoc
deleted file mode 100644
index 7d037ff..0000000
--- a/docs/modules/ROOT/pages/running.adoc
+++ /dev/null
@@ -1,176 +0,0 @@
-[[running]]
-= Running
-
-After completing the xref:installation/index.adoc[installation] you should be 
connected to a Kubernetes/OpenShift cluster
-and have the "kamel" CLI correctly configured.
-
-*TODO* Continue from here!
-
-You can now, you can run a Camel integration on the cluster by executing:
-
-```
-kamel run examples/Sample.java
-```
-
-A "Sample.java" file is included in the link:/examples[/examples] folder of 
this repository. You can change the content of the file and execute the command 
again to see the changes.
-
-==== Configure Integration properties
-
-Properties associated to an integration can be configured either using a 
ConfigMap/Secret or by setting using the "--property" flag, i.e.
-
-```
-kamel run --property my.message=test examples/props.js
-```
-
-==== Configure Integration Logging
-
-camel-k runtime uses log4j2 as logging framework and can be configured through 
integration properties.
-If you need to change the logging level of various loggers, you can do so by 
using the `logging.level` prefix:
-
-```
-logging.level.org.apache.camel = DEBUG
-```
-
-==== Configure Integration Components
-
-camel-k component can be configured programmatically inside an integration or 
using properties with the following syntax.
-
-```
-camel.component.${scheme}.${property} = ${value}
-```
-
-As example if you want to change the queue size of the seda component, you can 
use the following property:
-
-```
-camel.component.seda.queueSize = 10
-```
-
-=== Running Integrations in "Dev" Mode for Fast Feedback
-
-If you want to iterate quickly on an integration to have fast feedback on the 
code you're writing, you can use by running it in **"dev" mode**:
-
-```
-kamel run examples/Sample.java --dev
-```
-
-The `--dev` flag deploys immediately the integration and shows the integration 
logs in the console. You can then change the code and see
-the **changes automatically applied (instantly)** to the remote integration 
pod.
-
-The console follows automatically all redeploys of the integration.
-
-Here's an example of the output:
-
-```
-[nferraro@localhost camel-k]$ kamel run examples/Sample.java --dev
-integration "sample" created
-integration "sample" in phase Building
-integration "sample" in phase Deploying
-integration "sample" in phase Running
-[1] Monitoring pod sample-776db787c4-zjhfr[1] Starting the Java application 
using /opt/run-java/run-java.sh ...
-[1] exec java 
-javaagent:/opt/prometheus/jmx_prometheus_javaagent.jar=9779:/opt/prometheus/prometheus-config.yml
 -XX:+UseParallelGC -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 
-XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -XX:+ExitOnOutOfMemoryError -cp 
.:/deployments/* org.apache.camel.k.jvm.Application
-[1] [INFO ] 2018-09-20 21:24:35.953 [main] Application - Routes: 
file:/etc/camel/conf/Sample.java
-[1] [INFO ] 2018-09-20 21:24:35.955 [main] Application - Language: java
-[1] [INFO ] 2018-09-20 21:24:35.956 [main] Application - Locations: 
file:/etc/camel/conf/application.properties
-[1] [INFO ] 2018-09-20 21:24:36.506 [main] DefaultCamelContext - Apache Camel 
2.22.1 (CamelContext: camel-1) is starting
-[1] [INFO ] 2018-09-20 21:24:36.578 [main] ManagedManagementStrategy - JMX is 
enabled
-[1] [INFO ] 2018-09-20 21:24:36.680 [main] DefaultTypeConverter - Type 
converters loaded (core: 195, classpath: 0)
-[1] [INFO ] 2018-09-20 21:24:36.777 [main] DefaultCamelContext - StreamCaching 
is not in use. If using streams then its recommended to enable stream caching. 
See more details at http://camel.apache.org/stream-caching.html
-[1] [INFO ] 2018-09-20 21:24:36.817 [main] DefaultCamelContext - Route: route1 
started and consuming from: timer://tick
-[1] [INFO ] 2018-09-20 21:24:36.818 [main] DefaultCamelContext - Total 1 
routes, of which 1 are started
-[1] [INFO ] 2018-09-20 21:24:36.820 [main] DefaultCamelContext - Apache Camel 
2.22.1 (CamelContext: camel-1) started in 0.314 seconds
-
-```
-
-=== Dependencies and Component Resolution
-
-Camel components used in an integration are automatically resolved. For 
example, take the following integration:
-
-```
-from("imap://ad...@myserver.com")
-  .to("seda:output")
-```
-
-Since the integration is using the **"imap:" prefix**, Camel K is able to 
**automatically add the "camel-mail" component** to the list of required 
dependencies.
-This will be transparent to the user, that will just see the integration 
running.
-
-Automatic resolution is also a nice feature in `--dev` mode, because you are 
allowed to add all components you need without exiting the dev loop.
-
-You can also use the `-d` flag to pass additional explicit dependencies to the 
Camel client tool:
-
-```
-kamel run -d mvn:com.google.guava:guava:26.0-jre -d camel-mina2 
Integration.java
-```
-
-=== Not Just Java
-
-Camel K supports multiple languages for writing integrations:
-
-.Languages
-[options="header"]
-|=======================
-| Language                     | Description
-| Java                         | Both integrations in source `.java` files or 
compiled `.class` file can be run.
-| XML                          | Integrations written in plain XML DSL are 
supported (Spring XML or Blueprint not supported).
-| Groovy                       | Groovy `.groovy` files are supported 
(experimental).
-| JavaScript        | JavaScript `.js` files are supported (experimental).
-| Kotlin                       | Kotlin Script `.kts` files are supported 
(experimental).
-|=======================
-
-More information about supported languages is provided in the 
link:docs/languages.adoc[languages guide].
-
-Integrations written in different languages are provided in the 
link:/examples[examples] directory.
-
-An example of integration written in JavaScript is the 
link:/examples/dns.js[/examples/dns.js] integration.
-Here's the content:
-
-```
-// Lookup every second the 'www.google.com' domain name and log the output
-from('timer:dns?period=1s')
-    .routeId('dns')
-    .setHeader('dns.domain')
-        .constant('www.google.com')
-    .to('dns:ip')
-    .to('log:dns');
-```
-
-To run it, you just need to execute:
-
-```
-kamel run examples/dns.js
-```
-
-=== Traits
-
-The details of how the integration is mapped into Kubernetes resources can be 
*customized using traits*.
-More information is provided in the link:docs/traits.adoc[traits section].
-
-=== Monitoring the Status
-
-Camel K integrations follow a lifecycle composed of several steps before 
getting into the `Running` state.
-You can check the status of all integrations by executing the following 
command:
-
-```
-kamel get
-```
-
-[[contributing]]
-== Contributing
-
-We love contributions and we want to make Camel K great!
-
-Contributing is easy, just take a look at our 
link:/docs/developers.adoc[developer's guide].
-
-[[uninstalling]]
-== Uninstalling
-
-If you really need to, it is possible to completely uninstall Camel K from 
OpenShift or Kubernetes with the following command, using the "oc" or "kubectl" 
tool:
-
-```
-# kubectl on plain Kubernetes
-oc delete all,pvc,configmap,rolebindings,clusterrolebindings,secrets,sa,roles,clusterroles,crd -l 'app=camel-k'
-```
-
-[[licensing]]
-== Licensing
-
-This software is licensed under the terms you may find in the file named 
LICENSE in this directory.
diff --git a/docs/modules/ROOT/pages/running/dev-mode.adoc 
b/docs/modules/ROOT/pages/running/dev-mode.adoc
new file mode 100644
index 0000000..59577a0
--- /dev/null
+++ b/docs/modules/ROOT/pages/running/dev-mode.adoc
@@ -0,0 +1,44 @@
+[[dev-mode]]
+= Running in Dev Mode
+
+Camel K provides a specific flag for quickly iterating on integrations during development and getting fast feedback on the code you're writing.
+It's called *dev mode*.
+
+Unlike in other frameworks, the artifacts generated by Camel K in *dev mode are no different* from the ones you run in production.
+Dev mode is just a helper that lets you move faster during development.
+
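+Since the artifacts are identical, running the exact same command without the flag gives you a regular deployment of the same integration:
+
+```
+kamel run examples/Sample.java
+```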
+To enable dev mode, just add the `--dev` flag when running the integration:
+
+```
+kamel run examples/Sample.java --dev
+```
+
+The `--dev` flag deploys the integration immediately and shows its logs in the console. You can then change the code and see
+the **changes automatically applied (instantly)** to the remote integration pod.
+
+The console automatically follows all redeploys of the integration.
+
+Here's an example of the output:
+
+```
+[nferraro@localhost camel-k]$ kamel run examples/Sample.java --dev
+integration "sample" created
+integration "sample" in phase Building
+integration "sample" in phase Deploying
+integration "sample" in phase Running
+[1] Monitoring pod sample-776db787c4-zjhfr[1] Starting the Java application using /opt/run-java/run-java.sh ...
+[1] exec java -javaagent:/opt/prometheus/jmx_prometheus_javaagent.jar=9779:/opt/prometheus/prometheus-config.yml -XX:+UseParallelGC -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -XX:+ExitOnOutOfMemoryError -cp .:/deployments/* org.apache.camel.k.jvm.Application
+[1] [INFO ] 2018-09-20 21:24:35.953 [main] Application - Routes: file:/etc/camel/conf/Sample.java
+[1] [INFO ] 2018-09-20 21:24:35.955 [main] Application - Language: java
+[1] [INFO ] 2018-09-20 21:24:35.956 [main] Application - Locations: file:/etc/camel/conf/application.properties
+[1] [INFO ] 2018-09-20 21:24:36.506 [main] DefaultCamelContext - Apache Camel 2.22.1 (CamelContext: camel-1) is starting
+[1] [INFO ] 2018-09-20 21:24:36.578 [main] ManagedManagementStrategy - JMX is enabled
+[1] [INFO ] 2018-09-20 21:24:36.680 [main] DefaultTypeConverter - Type converters loaded (core: 195, classpath: 0)
+[1] [INFO ] 2018-09-20 21:24:36.777 [main] DefaultCamelContext - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
+[1] [INFO ] 2018-09-20 21:24:36.817 [main] DefaultCamelContext - Route: route1 started and consuming from: timer://tick
+[1] [INFO ] 2018-09-20 21:24:36.818 [main] DefaultCamelContext - Total 1 routes, of which 1 are started
+[1] [INFO ] 2018-09-20 21:24:36.820 [main] DefaultCamelContext - Apache Camel 2.22.1 (CamelContext: camel-1) started in 0.314 seconds
+
+```
+
+You can write your own integration from scratch or start from one of the 
examples available in the https://github.com/apache/camel-k/releases[release 
page].
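+
+Dev mode works the same way for the other supported languages. For instance, assuming you extracted the examples archive into a local `examples` directory, you could iterate on the JavaScript example with:
+
+```
+kamel run examples/dns.js --dev
+```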
diff --git a/docs/modules/ROOT/pages/running/running.adoc 
b/docs/modules/ROOT/pages/running/running.adoc
new file mode 100644
index 0000000..da8f0d3
--- /dev/null
+++ b/docs/modules/ROOT/pages/running/running.adoc
@@ -0,0 +1,42 @@
+[[running]]
+= Running
+
+After completing the xref:installation/installation.adoc[installation] you 
should be connected to a Kubernetes/OpenShift cluster
+and have the "kamel" CLI correctly configured.
+
+Ensure you're connected to the cluster by executing a simple command using the 
Kubernetes CLI:
+
+```
+kubectl get pod
+```
+
+Just replace `kubectl` with `oc` if you're using OpenShift. If everything is correctly configured, you should get a response from the Kubernetes API
+server (you should see at least the `camel-k-operator` running).
+
+You are now ready to create your first integration using Camel K. Just create 
a new Groovy file with the following content:
+
+.hello.groovy
+```groovy
+from('timer:tick?period=3s')
+  .setBody().constant('Hello world from Camel K')
+  .to('log:info')
+```
+
+You can run it on the cluster by executing:
+
+```
+kamel run hello.groovy
+```
+
+Integrations can be written in multiple languages. We provide an archive with common examples on the https://github.com/apache/camel-k/releases[release page].
+
+You can change the content of the `hello.groovy` file and execute the command 
again to see the changes, or use the xref:running/dev-mode.adoc[dev mode] to 
have even quicker turnaround times.
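+
+For example, a minimal edit such as changing the message is enough to produce a new deployment the next time you run the command (an illustrative variant of the file above):
+
+```groovy
+from('timer:tick?period=3s')
+  .setBody().constant('Hello again from Camel K') // changed message
+  .to('log:info')
+```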
+
+== Monitoring the Status
+
+Camel K integrations follow a lifecycle composed of several steps before 
getting into the `Running` state.
+You can check the status of all integrations by executing the following 
command:
+
+```
+kamel get
+```
diff --git a/docs/modules/ROOT/pages/uninstalling.adoc 
b/docs/modules/ROOT/pages/uninstalling.adoc
new file mode 100644
index 0000000..ff460af
--- /dev/null
+++ b/docs/modules/ROOT/pages/uninstalling.adoc
@@ -0,0 +1,9 @@
+[[uninstalling]]
+= Uninstalling Camel K
+
+If you really need to, it is possible to completely uninstall Camel K from 
OpenShift or Kubernetes with the following command, using the "oc" or "kubectl" 
tool:
+
+```
+# use "kubectl" instead of "oc" on plain Kubernetes
+oc delete all,pvc,configmap,rolebindings,clusterrolebindings,secrets,sa,roles,clusterroles,crd -l 'app=camel-k'
+```
