This is an automated email from the ASF dual-hosted git repository.

nferraro pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit 5ceb47589494db996381f91009743620708a524d
Author: nferraro <ni.ferr...@gmail.com>
AuthorDate: Thu Sep 20 15:18:47 2018 +0200

    New Asciidoc documentation and docs directory
---
 README.adoc                                        | 165 +++++++++++++++
 README.md                                          | 229 ---------------------
 .../{kamel_k_operator.go => camel_k_operator.go}   |   0
 docs/cluster-setup.adoc                            |  26 +++
 docs/developers.adoc                               | 164 +++++++++++++++
 5 files changed, 355 insertions(+), 229 deletions(-)

diff --git a/README.adoc b/README.adoc
new file mode 100644
index 0000000..dbf0328
--- /dev/null
+++ b/README.adoc
@@ -0,0 +1,165 @@
+Apache Camel K
+==============
+
+Apache Camel K (a.k.a. Kamel) is a lightweight integration framework built 
from Apache Camel that runs natively on Kubernetes and is specifically designed 
for serverless and microservice architectures.
+
+[[getting-started]]
+== Getting Started
+
+Camel K allows you to run integrations directly on a Kubernetes or OpenShift cluster.
+To use it, you need to be connected to a cloud environment or to a local cluster created for development purposes.
+
+If you need help creating a local development environment based on *Minishift* or *Minikube* (Minikube will be supported soon, stay tuned!), you can follow the link:/docs/cluster-setup.adoc[local cluster setup guide].
+
+[[installation]]
+=== Installation
+
+To start using Camel K you need the **"kamel"** binary, which can be used both to configure the cluster and to run integrations.
+Look at the https://github.com/apache/camel-k/releases[release page] for the latest version of the `kamel` tool.
+
+If you want to contribute, you can also **build it from source!** Refer to the 
link:/docs/developers.adoc[developer's guide]
+for information on how to do it.
+
+Once you have the "kamel" binary, log into your cluster using the standard "oc" (OpenShift) or "kubectl" (Kubernetes) client tool and execute the following command to install Camel K:
+
+```
+kamel install
+```
+
+This will configure the cluster with the Camel K custom resource definitions and install the operator in the current namespace.
+
+IMPORTANT: Custom Resource Definitions (CRD) are cluster-wide objects and you need admin rights to install them. Fortunately, this
+operation needs to be done only *once per cluster*. So, if the `kamel install` operation fails, you'll be asked to repeat it when logged in as an admin.
+For Minishift, this means executing `oc login -u system:admin` and then `kamel install --cluster-setup`, but only for the first-time installation.
+
+=== Running an Integration
+
+After the initial setup, you can run a Camel integration on the cluster by 
executing:
+
+```
+kamel run runtime/examples/Sample.java
+```
+
+A "Sample.java" file is included in the 
link:/runtime/examples[/runtime/examples] folder of this repository. You can 
change the content of the file and execute the command again to see the changes.
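+
+For reference, such a file is just a Camel route builder written with the Java DSL. A minimal sketch of that shape (the actual `Sample.java` in link:/runtime/examples[/runtime/examples] may differ) is:
+
+```
+// Illustrative sketch only — see /runtime/examples/Sample.java for the real example
+import org.apache.camel.builder.RouteBuilder;
+
+public class Sample extends RouteBuilder {
+    @Override
+    public void configure() throws Exception {
+        // fire a timer event every second and log a greeting
+        from("timer:tick?period=1s")
+            .setBody().constant("Hello from Camel K!")
+            .to("log:info");
+    }
+}
+```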
+
+=== Running Integrations in "Dev" Mode for Fast Feedback
+
+If you want to iterate quickly on an integration and get fast feedback on the code you're writing, you can run it in **"dev" mode**:
+
+```
+kamel run runtime/examples/Sample.java --dev
+```
+
+The `--dev` flag deploys the integration immediately and shows its logs in the console. You can then change the code and see
+the **changes automatically applied (instantly)** to the remote integration pod.
+
+The console automatically follows all redeploys of the integration.
+
+Here's an example of the output:
+
+```
+[nferraro@localhost camel-k]$ kamel run runtime/examples/Sample.java --dev
+integration "sample" created
+integration "sample" in phase Building
+integration "sample" in phase Deploying
+integration "sample" in phase Running
+[1] Monitoring pod sample-776db787c4-zjhfr
+[1] Starting the Java application using /opt/run-java/run-java.sh ...
+[1] exec java 
-javaagent:/opt/prometheus/jmx_prometheus_javaagent.jar=9779:/opt/prometheus/prometheus-config.yml
 -XX:+UseParallelGC -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 
-XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -XX:+ExitOnOutOfMemoryError -cp 
.:/deployments/* org.apache.camel.k.jvm.Application
+[1] [INFO ] 2018-09-20 21:24:35.953 [main] Application - Routes: 
file:/etc/camel/conf/Sample.java
+[1] [INFO ] 2018-09-20 21:24:35.955 [main] Application - Language: java
+[1] [INFO ] 2018-09-20 21:24:35.956 [main] Application - Locations: 
file:/etc/camel/conf/application.properties
+[1] [INFO ] 2018-09-20 21:24:36.506 [main] DefaultCamelContext - Apache Camel 
2.22.1 (CamelContext: camel-1) is starting
+[1] [INFO ] 2018-09-20 21:24:36.578 [main] ManagedManagementStrategy - JMX is 
enabled
+[1] [INFO ] 2018-09-20 21:24:36.680 [main] DefaultTypeConverter - Type 
converters loaded (core: 195, classpath: 0)
+[1] [INFO ] 2018-09-20 21:24:36.777 [main] DefaultCamelContext - StreamCaching 
is not in use. If using streams then its recommended to enable stream caching. 
See more details at http://camel.apache.org/stream-caching.html
+[1] [INFO ] 2018-09-20 21:24:36.817 [main] DefaultCamelContext - Route: route1 
started and consuming from: timer://tick
+[1] [INFO ] 2018-09-20 21:24:36.818 [main] DefaultCamelContext - Total 1 
routes, of which 1 are started
+[1] [INFO ] 2018-09-20 21:24:36.820 [main] DefaultCamelContext - Apache Camel 
2.22.1 (CamelContext: camel-1) started in 0.314 seconds
+
+```
+
+=== Dependencies and Component Resolution
+
+Camel components used in an integration are automatically resolved. For example, take the following integration:
+
+```
+from("imap://ad...@myserver.com")
+  .to("seda:output")
+```
+
+Since the integration uses the **"imap:" prefix**, Camel K is able to **automatically add the "camel-mail" component** to the list of required dependencies.
+This is transparent to the user, who will just see the integration running.
+
+Automatic resolution is also a nice feature in `--dev` mode, because it allows you to add all the components you need without exiting the dev loop.
+
+You can also use the `-d` flag to pass additional explicit dependencies to the `kamel` client tool:
+
+```
+kamel run -d mvn:com.google.guava:guava:26.0-jre -d camel-mina2 
Integration.java
+```
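+
+Explicit dependencies are mainly useful for libraries that are not Camel components and therefore cannot be inferred from an endpoint URI. As a hypothetical sketch (this `Integration.java` is only illustrative, not part of the repository), the route below would use both dependencies passed in the command above: Guava for a plain utility call, and `camel-mina2` for the `mina2:` endpoint:
+
+```
+// Hypothetical Integration.java used to illustrate the -d flag above
+import com.google.common.base.CharMatcher;
+import org.apache.camel.builder.RouteBuilder;
+
+public class Integration extends RouteBuilder {
+    @Override
+    public void configure() throws Exception {
+        from("timer:tick?period=3s")
+            // plain Guava call: not a Camel component, so it is declared with -d mvn:...
+            .process(e -> e.getIn().setBody(CharMatcher.whitespace().trimFrom("  hello  ")))
+            // mina2 endpoint: backed by the camel-mina2 component
+            .to("mina2:tcp://localhost:6200?textline=true&sync=false");
+    }
+}
+```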
+
+=== Not Just Java
+
+Camel K supports multiple languages for writing integrations:
+
+.Languages
+[options="header"]
+|=======================
+| Language                     | Description
+| Java                         | Integrations in both source `.java` files and compiled `.class` files can be run.
+| XML                          | Integrations written in plain XML DSL are 
supported (Spring XML or Blueprint not supported).
+| Groovy                       | Groovy `.groovy` files are supported 
(experimental).
+| JavaScript           | JavaScript `.js` files are supported (experimental).
+|=======================
+
+Integrations written in different languages are provided in the 
link:/runtime/examples[examples] directory.
+
+An example of an integration written in JavaScript is the link:/runtime/examples/dns.js[/runtime/examples/dns.js] integration.
+Here's the content:
+
+```
+// Lookup every second the 'www.google.com' domain name and log the output
+from('timer:dns?period=1s')
+    .routeId('dns')
+    .setHeader('dns.domain')
+        .constant('www.google.com')
+    .to('dns:ip')
+    .to('log:dns');
+```
+
+To run it, you just need to execute:
+
+```
+kamel run runtime/examples/dns.js
+```
+
+=== Monitoring the Status
+
+Camel K integrations follow a lifecycle composed of several steps before 
getting into the `Running` state.
+You can check the status of all integrations by executing the following 
command:
+
+```
+kamel get
+```
+
+[[contributing]]
+== Contributing
+
+We love contributions and we want to make Camel K great!
+
+Contributing is easy, just take a look at our 
link:/docs/developers.adoc[developer's guide].
+
+[[uninstalling]]
+== Uninstalling
+
+If you really need to, it is possible to completely uninstall Camel K from 
OpenShift or Kubernetes with the following command, using the "oc" or "kubectl" 
tool:
+
+```
+# kubectl on plain Kubernetes
+oc delete all,pvc,configmap,rolebindings,clusterrolebindings,secrets,sa,roles,clusterroles,crd -l 'app=camel-k'
+```
+
+[[licensing]]
+== Licensing
+
+This software is licensed under the terms you may find in the file named 
LICENSE in this directory.
\ No newline at end of file
diff --git a/README.md b/README.md
deleted file mode 100644
index 4bbdf09..0000000
--- a/README.md
+++ /dev/null
@@ -1,229 +0,0 @@
-# Apache Camel K
-
-Apache Camel K (a.k.a. Kamel) is a lightweight integration framework built 
from Apache Camel that runs natively on Kubernetes and is specifically designed 
for serverless and microservice architectures.
-
-## Getting Started
-
-Camel K allows to run integrations on a Kubernetes or Openshift cluster. If 
you don't have a cloud instance of Kubernetes or Openshift, you can create a 
development cluster following the instructions below.
-
-### Creating a Development Cluster
-There are various options for creating a development cluster:
-
-**Minishift**
-
-You can run Camel K integrations on Openshift using the Minishift cluster 
creation tool.
-Follow the instructions in the [getting started 
guide](https://github.com/minishift/minishift#getting-started) for the 
installation.
-
-After installing the `minishift` binary, you need to enable the `admin-user` 
addon:
-
-```
-minishift addons enable admin-user
-```
-
-Then you can start the cluster with:
-
-```
-minishift start
-```
-
-**Minikube**
-
-Minikube and Kubernetes are not yet supported (but support is coming soon).
-
-### Setting Up the Cluster
-
-To start using Camel K you need the **"kamel"** binary, that can be used to 
both configure the cluster and run integrations.
-Look into the [release page](https://github.com/apache/camel-k/releases) for 
latest version of the `kamel` tool.
-
-If you wanto to contribute, you can also **build it from source!** Refer to 
the [contributing guide](#contributing)
-for information on how to do it.
-
-Once you have the "kamel" binary, log into your cluster using the "oc" or 
"kubectl" tool and execute the following command to install Camel K:
-
-```
-kamel install
-```
-
-This will configure the cluster with the Camel K custom resource definitions 
and install the operator on the current namespace.
-
-**Note:** Custom Resource Definitions (CRD) are cluster-wide objects and you 
need admin rights to install them. Fortunately this
-operation can be done once per cluster. So, if the `kamel install` operation 
fails, you'll be asked to repeat it when logged as admin.
-For Minishift, this means executing `oc login -u system:admin` then `kamel 
install --cluster-setup` only for first-time installation.
-
-### Running a Integration
-
-After the initial setup, you can run a Camel integration on the cluster 
executing:
-
-```
-kamel run runtime/examples/Sample.java
-```
-
-A "Sample.java" file is included in the folder runtime/examples of this 
repository. You can change the content of the file and execute the command 
again to see the changes.
-
-A JavaScript integration has also been provided as example, to run it:
-
-```
-kamel run runtime/examples/routes.js
-```
-
-### Monitoring the Status
-
-Camel K integrations follow a lifecycle composed of several steps before 
getting into the `Running` state.
-You can check the status of all integrations by executing the following 
command:
-
-```
-kamel get
-```
-
-## Contributing
-
-We love contributions!
-
-The project is written in [Go](https://golang.org/) and contains some parts 
written in Java for the [integration runtime](/runtime).
-Camel K is built on top of Kubernetes through *Custom Resource Definitions*. 
The [Operator SDK](https://github.com/operator-framework/operator-sdk) is used
-to manage the lifecycle of those custom resources.
-
-### Requirements
-
-In order to build the project, you need to comply with the following 
requirements:
-- **Go version 1.10+**: needed to compile and test the project. Refer to the 
[Go website](https://golang.org/) for the installation.
-- **Dep version 0.5.0**: for managing dependencies. You can find installation 
instructions in the [dep GitHub repository](https://github.com/golang/dep).
-- **Operator SDK v0.0.6+**: used to build the operator and the Docker images. 
Instructions in the [Operator SDK 
website](https://github.com/operator-framework/operator-sdk).
-- **GNU Make**: used to define composite build actions. This should be already 
installed or available as package if you have a good OS 
(https://www.gnu.org/software/make/).
-
-### Checking Out the Sources
-
-You can create a fork of this project from Github, then clone your fork with 
the `git` command line tool.
-
-You need to put the project in your $GOPATH (refer to [Go 
documentation](https://golang.org/doc/install) for information).
-So, make sure that the **root** of the github repo is in the path:
-
-```
-$GOPATH/src/github.com/apache/camel-k/
-```
-
-### Structure
-
-This is a high level overview of the project structure:
-
-- [/cmd](/cmd): contains the entry points (the *main* functions) for the 
**camel-k-operator** binary and the **kamel** client tool.
-- [/build](/build): contains scripts used during make operations for building 
the project.
-- [/deploy](/deploy): contains Kubernetes resource files that are used by the 
**kamel** client during installation. The `/deploy/resources.go` file is kept 
in sync with the content of the directory (`make build-embed-resources`), so 
that resources can be used from within the go code.
-- [/pkg](/pkg): this is where the code resides. The code is divided in 
multiple subpackages.
-- [/runtime](/runtime): the Java runtime code that is used inside the 
integration Docker containers.
-- [/test](/test): include integration tests to ensure that the software 
interacts correctly with Kubernetes and Openshift.
-- [/tmp](/tmp): scripts and Docker configuration files used by the 
operator-sdk.
-- [/vendor](/vendor): project dependencies.
-- [/version](/version): contains the global version of the project.
-
-### Building
-
-Go dependencies in the *vendor* directory are not included when you clone the 
project.
-
-Before compiling the source code, you need to sync your local *vendor* 
directory with the project dependencies, using the following command:
-
-```
-make dep
-```
-
-The `make dep` command runs `dep ensure -v` under the hood, so make sure that 
`dep` is properly installed.
-
-To build the whole project you now need to run:
-
-```
-make
-```
-
-This execute a full build of both the Java and Go code. If you need to build 
the components separately you can execute:
-- `make build-operator`: to build the operator binary only.
-- `make build-kamel`: to build the `kamel` client tool only.
-- `make build-runtime`: to build the Java-based runtime code only.
-
-After a successful build, if you're connected to a Docker daemon, you can 
build the operator Docker image by running:
-
-```
-make images
-```
-
-### Testing
-
-Unit tests are executed automatically as part of the build. They use the 
standard go testing framework.
-
-Integration tests (aimed at ensuring that the code integrates correctly with 
Kubernetes and Openshift), need special care.
-
-The **convention** used in this repo is to name unit tests `xxx_test.go`, and 
name integration tests `yyy_integration_test.go`.
-Integration tests are all in the [/test](/test) dir.
-
-Since both names end with `_test.go`, both would be executed by go during 
build, so you need to put a special **build tag** to mark
-integration tests. A integration test should start with the following line:
-
-```
-// +build integration
-```
-
-An [example is provided 
here](https://github.com/apache/camel-k/blob/ff672fbf54c358fca970da6c59df378c8535d4d8/pkg/build/build_manager_integration_test.go#L1).
-
-Before running a integration test, you need to:
-- Login to a Kubernetes/Openshift cluster.
-- Set the `KUBERNETES_CONFIG` environment variable to point to your Kubernetes 
configuration file (usually `<home-dir>/.kube/config`).
-- Set the `WATCH_NAMESPACE` environment variable to a Kubernetes namespace you 
have access to.
-- Set the `OPERATOR_NAME` environment variable to `camel-k-operator`.
-
-When the configuration is done, you can run the following command to execute 
**all** integration tests:
-
-```
-make test-integration
-```
-
-### Running
-
-If you want to install everything you have in your source code and see it 
running on Kubernetes, you need to run the following command:
-- Run `make install-minishift` (or just `make install`): to build the project 
and install it in the current namespace on Minishift
-- You can specify a different namespace with `make install-minishift 
project=myawesomeproject`
-
-This command assumes you have an already running Minishift instance.
-
-Now you can play with Camel K:
-
-```
-./kamel run runtime/examples/Sample.java
-```
-
-To add additional dependencies to your routes: 
-
-```
-./kamel run -d camel:dns runtime/examples/dns.js
-```
-
-### Debugging and Running from IDE
-
-Sometimes it's useful to debug the code from the IDE when troubleshooting.
-
-**Debugging the `kamel` binary**
-
-It should be straightforward: just execute the 
[/cmd/kamel/kamel.go]([/cmd/kamel/kamel.go]) file from the IDE (e.g. Goland) in 
debug mode.
-
-**Debugging the operator**
-
-It is a bit more complex (but not so much).
-
-You are going to run the operator code **outside** Openshift in your IDE so, 
first of all, you need to **stop the operator running inside**:
-
-```
-oc scale deployment/camel-k-operator --replicas 0
-```
-
-You can scale it back to 1 when you're done and you have updated the operator 
image.
-
-You can setup the IDE (e.g. Goland) to execute the 
[/cmd/camel-k-operator/camel_k_operator.go]([/cmd/camel-k-operator/camel_k_operator.go])
 file in debug mode.
-
-When configuring the IDE task, make sure to add all required environment 
variables in the *IDE task configuration screen* (such as `KUBERNETES_CONFIG`, 
as explained in the [testing](#testing) section).
-
-## Uninstalling Camel K
-
-If required, it is possible to completely uninstall Camel K from OpenShift or 
Kubernetes with the following command, using the "oc" or "kubectl" tool:
-
-```
-# kubectl if using kubernetes
-oc delete 
all,pvc,configmap,rolebindings,clusterrolebindings,secrets,sa,roles,clusterroles,crd
 -l 'app=camel-k'
-```
diff --git a/cmd/camel-k-operator/kamel_k_operator.go 
b/cmd/camel-k-operator/camel_k_operator.go
similarity index 100%
rename from cmd/camel-k-operator/kamel_k_operator.go
rename to cmd/camel-k-operator/camel_k_operator.go
diff --git a/docs/cluster-setup.adoc b/docs/cluster-setup.adoc
new file mode 100644
index 0000000..79567df
--- /dev/null
+++ b/docs/cluster-setup.adoc
@@ -0,0 +1,26 @@
+[[creating-cluster]]
+Creating a Development Cluster
+==============================
+
+There are various options for creating a development cluster:
+
+.*Minishift*
+
+You can run Camel K integrations on OpenShift using the Minishift cluster creation tool.
+Follow the instructions in the 
https://github.com/minishift/minishift#getting-started[getting started guide] 
for the installation.
+
+After installing the `minishift` binary, you need to enable the `admin-user` 
addon:
+
+```
+minishift addons enable admin-user
+```
+
+Then you can start the cluster with:
+
+```
+minishift start
+```
+
+.*Minikube*
+
+Minikube and Kubernetes are not yet supported (but support is coming soon).
\ No newline at end of file
diff --git a/docs/developers.adoc b/docs/developers.adoc
new file mode 100644
index 0000000..24affc4
--- /dev/null
+++ b/docs/developers.adoc
@@ -0,0 +1,164 @@
+[[developers]]
+Developer's Guide
+=================
+
+We love contributions!
+
+The project is written in https://golang.org/[Go] and contains some parts written in Java for the link:/runtime[integration runtime].
+Camel K is built on top of Kubernetes through *Custom Resource Definitions*. The https://github.com/operator-framework/operator-sdk[Operator SDK] is used
+to manage the lifecycle of those custom resources.
+
+[[requirements]]
+== Requirements
+
+In order to build the project, you need to comply with the following 
requirements:
+
+* **Go version 1.10+**: needed to compile and test the project. Refer to the 
https://golang.org/[Go website] for the installation.
+* **Dep version 0.5.0**: for managing dependencies. You can find installation 
instructions in the https://github.com/golang/dep[dep GitHub repository].
+* **Operator SDK v0.0.6+**: used to build the operator and the Docker images. 
Instructions in the https://github.com/operator-framework/operator-sdk[Operator 
SDK website].
+* **GNU Make**: used to define composite build actions. This should already be installed or available as a package if you have a good OS (https://www.gnu.org/software/make/).
+
+[[checking-out]]
+== Checking Out the Sources
+
+You can create a fork of this project from GitHub, then clone your fork with the `git` command line tool.
+
+You need to put the project in your $GOPATH (refer to https://golang.org/doc/install[Go documentation] for information).
+So, make sure that the **root** of the GitHub repo is in the path:
+
+```
+$GOPATH/src/github.com/apache/camel-k/
+```
+
+[[structure]]
+== Structure
+
+This is a high level overview of the project structure:
+
+.Structure
+[options="header"]
+|=======================
+| Path                         | Content
+| link:/cmd[/cmd]              | Contains the entry points (the *main* functions) for the **camel-k-operator** binary and the **kamel** client tool.
+| link:/build[/build]          | Contains scripts used during make operations for building the project.
+| link:/deploy[/deploy]        | Contains Kubernetes resource files that are used by the **kamel** client during installation. The `/deploy/resources.go` file is kept in sync with the content of the directory (`make build-embed-resources`), so that resources can be used from within the Go code.
+| link:/docs[/docs]            | Contains this documentation.
+| link:/pkg[/pkg]              | This is where the code resides. The code is divided into multiple subpackages.
+| link:/runtime[/runtime]      | The Java runtime code that is used inside the integration Docker containers.
+| link:/test[/test]            | Includes integration tests to ensure that the software interacts correctly with Kubernetes and OpenShift.
+| link:/tmp[/tmp]              | Scripts and Docker configuration files used by the operator-sdk.
+| /vendor                      | Project dependencies (not staged in git).
+| link:/version[/version]      | Contains the global version of the project.
+|=======================
+
+
+[[building]]
+== Building
+
+Go dependencies in the *vendor* directory are not included when you clone the 
project.
+
+Before compiling the source code, you need to sync your local *vendor* 
directory with the project dependencies, using the following command:
+
+```
+make dep
+```
+
+The `make dep` command runs `dep ensure -v` under the hood, so make sure that 
`dep` is properly installed.
+
+To build the whole project you now need to run:
+
+```
+make
+```
+
+This executes a full build of both the Java and Go code. If you need to build the components separately, you can execute:
+
+* `make build-operator`: to build the operator binary only.
+* `make build-kamel`: to build the `kamel` client tool only.
+* `make build-runtime`: to build the Java-based runtime code only.
+
+After a successful build, if you're connected to a Docker daemon, you can 
build the operator Docker image by running:
+
+```
+make images
+```
+
+[[testing]]
+== Testing
+
+Unit tests are executed automatically as part of the build. They use the standard Go testing framework.
+
+Integration tests (aimed at ensuring that the code integrates correctly with Kubernetes and OpenShift) need special care.
+
+The **convention** used in this repo is to name unit tests `xxx_test.go`, and 
name integration tests `yyy_integration_test.go`.
+Integration tests are all in the link:/test[/test] dir.
+
+Since both names end with `_test.go`, both would be executed by Go during the build, so you need to add a special **build tag** to mark
+integration tests. An integration test should start with the following line:
+
+```
+// +build integration
+```
+
+Look into the link:/test[/test] directory for examples of integration tests.
+
+Before running an integration test, you need to be connected to a Kubernetes/OpenShift namespace.
+After you log in to your cluster, you can run the following command to execute **all** integration tests:
+
+```
+make test-integration
+```
+
+[[running]]
+== Running
+
+If you want to install everything you have in your source code and see it running on Kubernetes, you need to do the following:
+
+* Run `make install-minishift` (or just `make install`): to build the project and install it in the current namespace on Minishift.
+* You can specify a different namespace with `make install-minishift project=myawesomeproject`.
+
+This command assumes you have an already running Minishift instance.
+
+Now you can play with Camel K:
+
+```
+./kamel run runtime/examples/Sample.java
+```
+
+To add additional dependencies to your routes:
+
+```
+./kamel run -d camel:dns runtime/examples/dns.js
+```
+
+[[debugging]]
+== Debugging and Running from IDE
+
+Sometimes it's useful to debug the code from the IDE when troubleshooting.
+
+.**Debugging the `kamel` binary**
+
+It should be straightforward: just execute the 
link:/cmd/kamel/kamel.go[/cmd/kamel/kamel.go] file from the IDE (e.g. Goland) 
in debug mode.
+
+.**Debugging the operator**
+
+It is a bit more complex (but not so much).
+
+You are going to run the operator code **outside** OpenShift, in your IDE, so first of all you need to **stop the operator running inside** the cluster:
+
+```
+# use kubectl in plain Kubernetes
+oc scale deployment/camel-k-operator --replicas 0
+```
+
+You can scale it back to 1 when you're done and you have updated the operator 
image.
+
+You can set up the IDE (e.g. Goland) to execute the link:/cmd/camel-k-operator/camel_k_operator.go[/cmd/camel-k-operator/camel_k_operator.go] file in debug mode.
+
+When configuring the IDE task, make sure to add all required environment 
variables in the *IDE task configuration screen*:
+
+* Set the `KUBERNETES_CONFIG` environment variable to point to your Kubernetes 
configuration file (usually `<homedir>/.kube/config`).
+* Set the `WATCH_NAMESPACE` environment variable to a Kubernetes namespace you 
have access to.
+* Set the `OPERATOR_NAME` environment variable to `camel-k-operator`.
+
+After you set up the IDE task, you can run and debug the operator process.
