This is an automated email from the ASF dual-hosted git repository.

nferraro pushed a commit to branch antora
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit 1af7e84f777743fa60dd9800e417aabd53f452d6
Author: nferraro <ni.ferr...@gmail.com>
AuthorDate: Fri Dec 14 09:05:52 2018 +0100

    Adding openshift section
---
 README.adoc                                        |   4 +-
 contributing.adoc                                  | 186 +++++++++++++++++++++
 docs/README.adoc                                   |  30 ++++
 docs/README.md                                     |  25 ---
 docs/modules/ROOT/nav.adoc                         |   5 +-
 docs/modules/ROOT/pages/installation/index.adoc    |   3 +-
 .../modules/ROOT/pages/installation/openshift.adoc |  19 +++
 docs/modules/ROOT/pages/running.adoc               |   2 +-
 8 files changed, 243 insertions(+), 31 deletions(-)

diff --git a/README.adoc b/README.adoc
index cd0ea3f..a142ef5 100644
--- a/README.adoc
+++ b/README.adoc
@@ -31,7 +31,7 @@ Other cluster types (such as OpenShift clusters) should not need prior configura
 To start using Camel K you need the **"kamel"** binary, that can be used to both configure the cluster and run integrations.
 Look into the https://github.com/apache/camel-k/releases[release page] for latest version of the `kamel` tool.
 
-If you want to contribute, you can also **build it from source!** Refer to the link:/docs/developers.adoc[developer's guide]
+If you want to contribute, you can also **build it from source!** Refer to the link:/contributing.adoc[developer's guide]
 for information on how to do it.
 
 Once you have the "kamel" binary, log into your cluster using the standard "oc" (OpenShift) or "kubectl" (Kubernetes) client tool and execute the following command to install Camel K:
@@ -200,7 +200,7 @@ kamel get
 
 We love contributions and we want to make Camel K great!
 
-Contributing is easy, just take a look at our link:/docs/developers.adoc[developer's guide].
+Contributing is easy, just take a look at our link:/contributing.adoc[developer's guide].
 
 [[uninstalling]]
 == Uninstalling
diff --git a/contributing.adoc b/contributing.adoc
new file mode 100644
index 0000000..9589be3
--- /dev/null
+++ b/contributing.adoc
@@ -0,0 +1,186 @@
+[[developers]]
+Developer's Guide
+=================
+
+We love contributions!
+
+The project is written in https://golang.org/[go] and contains some parts written in Java for the link:/runtime[integration runtime].
+Camel K is built on top of Kubernetes through *Custom Resource Definitions*. The https://github.com/operator-framework/operator-sdk[Operator SDK] is used
+to manage the lifecycle of those custom resources.
+
+[[requirements]]
+== Requirements
+
+In order to build the project, you need to comply with the following requirements:
+
+* **Go version 1.10+**: needed to compile and test the project. Refer to the https://golang.org/[Go website] for the installation.
+* **Dep version 0.5.0**: for managing dependencies. You can find installation instructions in the https://github.com/golang/dep[dep GitHub repository].
+* **Operator SDK v0.0.7+**: used to build the operator and the Docker images. Instructions in the https://github.com/operator-framework/operator-sdk[Operator SDK website] (binary downloads available in the release page).
+* **GNU Make**: used to define composite build actions. This should already be installed, or available as a package, if you have a good OS (https://www.gnu.org/software/make/).
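+
+As a quick sanity check (just a sketch, not an official setup step), you can verify that Go, dep and GNU Make are available on your path:
+
+```
+go version       # expect go1.10 or newer
+dep version      # expect v0.5.0
+make --version   # any recent GNU Make
+```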
+
+[[checks]]
+== Running checks
+Checks rely on `golangci-lint` being installed; to install it, look at the https://github.com/golangci/golangci-lint#local-installation[Local Installation] instructions.
+
+You can run the checks via `make lint`, or have them run automatically via a Git pre-commit hook using https://pre-commit.com[pre-commit]; after installing pre-commit itself, install the hooks by running
+
+ $ pre-commit install
+
+[[checking-out]]
+== Checking Out the Sources
+
+You can create a fork of this project from GitHub, then clone your fork with the `git` command line tool.
+
+You need to put the project in your $GOPATH (refer to https://golang.org/doc/install[Go documentation] for information).
+So, make sure that the **root** of the GitHub repo is in the path:
+
+```
+$GOPATH/src/github.com/apache/camel-k/
+```
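+
+For example, assuming `your-username` is the GitHub account holding your fork (a placeholder, replace it with your own), one possible way to get the sources into the right place is:
+
+```
+mkdir -p $GOPATH/src/github.com/apache
+cd $GOPATH/src/github.com/apache
+git clone https://github.com/your-username/camel-k.git
+cd camel-k
+# keep the upstream repository available for syncing your fork
+git remote add upstream https://github.com/apache/camel-k.git
+```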
+
+[[structure]]
+== Structure
+
+This is a high level overview of the project structure:
+
+.Structure
+[options="header"]
+|=======================
+| Path                         | Content
+| link:/cmd[/cmd]              | Contains the entry points (the *main* functions) for the **camel-k** binary and the **kamel** client tool.
+| link:/build[/build]          | Contains scripts used during make operations for building the project.
+| link:/deploy[/deploy]        | Contains Kubernetes resource files that are used by the **kamel** client during installation. The `/deploy/resources.go` file is kept in sync with the content of the directory (`make build-embed-resources`), so that resources can be used from within the go code.
+| link:/docs[/docs]            | Contains this documentation.
+| link:/pkg[/pkg]              | This is where the code resides. The code is divided into multiple subpackages.
+| link:/runtime[/runtime]      | The Java runtime code that is used inside the integration Docker containers.
+| link:/test[/test]            | Includes integration tests to ensure that the software interacts correctly with Kubernetes and OpenShift.
+| link:/tmp[/tmp]              | Scripts and Docker configuration files used by the operator-sdk.
+| /vendor                      | Project dependencies (not staged in git).
+| link:/version[/version]      | Contains the global version of the project.
+|=======================
+
+
+[[building]]
+== Building
+
+Go dependencies in the *vendor* directory are not included when you clone the project.
+
+Before compiling the source code, you need to sync your local *vendor* directory with the project dependencies, using the following command:
+
+```
+make dep
+```
+
+The `make dep` command runs `dep ensure -v` under the hood, so make sure that `dep` is properly installed.
+
+To build the whole project you now need to run:
+
+```
+make
+```
+
+This executes a full build of both the Java and Go code. If you need to build the components separately you can execute:
+
+* `make build-operator`: to build the operator binary only.
+* `make build-kamel`: to build the `kamel` client tool only.
+* `make build-runtime`: to build the Java-based runtime code only.
+
+After a successful build, if you're connected to a Docker daemon, you can build the operator Docker image by running:
+
+```
+make images
+```
+
+[[testing]]
+== Testing
+
+Unit tests are executed automatically as part of the build. They use the standard go testing framework.
+
+Integration tests (aimed at ensuring that the code integrates correctly with Kubernetes and OpenShift) need special care.
+
+The **convention** used in this repo is to name unit tests `xxx_test.go`, and name integration tests `yyy_integration_test.go`.
+Integration tests are all in the link:/test[/test] dir.
+
+Since both names end with `_test.go`, both would be executed by go during the build, so you need to put a special **build tag** to mark
+integration tests. An integration test should start with the following line:
+
+```
+// +build integration
+```
+
+Look into the link:/test[/test] directory for examples of integration tests.
+
+Before running an integration test, you need to be connected to a Kubernetes/OpenShift namespace.
+After you log into your cluster, you can run the following command to execute **all** integration tests:
+
+```
+make test-integration
+```
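+
+The `make test-integration` target presumably wraps the standard `go test` tooling with the `integration` build tag (an assumption, check the Makefile for the exact flags). Under that assumption, a subset of the integration tests can be run directly with something like:
+
+```
+# -run selects tests by name; "TestRunSimpleExamples" is only a hypothetical example
+go test -tags=integration -run TestRunSimpleExamples ./test/...
+```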
+
+[[running]]
+== Running
+
+If you want to install everything you have in your source code and see it running on Kubernetes, you need to run one of the following commands, depending on your cluster:
+
+=== For Minishift
+
+* Run `make install-minishift` (or just `make install`): to build the project and install it in the current namespace on Minishift
+* You can specify a different namespace with `make install-minishift project=myawesomeproject`
+
+This command assumes you have an already running Minishift instance.
+
+=== For Minikube
+
+* Run `make install-minikube`: to build the project and install it in the current namespace on Minikube
+
+This command assumes you have an already running Minikube instance.
+
+=== Use
+
+Now you can play with Camel K:
+
+```
+./kamel run examples/Sample.java
+```
+
+To add additional dependencies to your routes:
+
+```
+./kamel run -d camel:dns examples/dns.js
+```
+
+[[debugging]]
+== Debugging and Running from IDE
+
+Sometimes it's useful to debug the code from the IDE when troubleshooting.
+
+.**Debugging the `kamel` binary**
+
+It should be straightforward: just execute the link:/cmd/kamel/main.go[/cmd/kamel/main.go] file from the IDE (e.g. Goland) in debug mode.
+
+.**Debugging the operator**
+
+It is a bit more complex (but not so much).
+
+You are going to run the operator code **outside** OpenShift in your IDE, so first of all you need to **stop the operator running inside the cluster**:
+
+```
+// use kubectl in plain Kubernetes
+oc scale deployment/camel-k-operator --replicas 0
+```
+
+You can scale it back to 1 when you're done and you have updated the operator image.
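+
+For reference, scaling the deployment back once the updated operator image is in place is the symmetric command (again, use `kubectl` on plain Kubernetes):
+
+```
+oc scale deployment/camel-k-operator --replicas 1
+```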
+
+You can set up the IDE (e.g. Goland) to execute the link:/cmd/camel-k/main.go[/cmd/camel-k/main.go] file in debug mode.
+
+When configuring the IDE task, make sure to add all required environment variables in the *IDE task configuration screen*:
+
+* Set the `KUBERNETES_CONFIG` environment variable to point to your Kubernetes configuration file (usually `<homedir>/.kube/config`).
+* Set the `WATCH_NAMESPACE` environment variable to a Kubernetes namespace you have access to.
+* Set the `OPERATOR_NAME` environment variable to `camel-k`.
+
+After you set up the IDE task, you can run and debug the operator process.
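+
+If you prefer a plain terminal to the IDE, a rough equivalent (just a sketch, with `my-namespace` as a placeholder) is to export the same variables and run the main file directly:
+
+```
+export KUBERNETES_CONFIG=$HOME/.kube/config
+export WATCH_NAMESPACE=my-namespace   # a namespace you have access to
+export OPERATOR_NAME=camel-k
+go run ./cmd/camel-k/main.go
+```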
+
+NOTE: The operator can be fully debugged in Minishift, because it uses OpenShift S2I binary builds under the hood.
+The build phase cannot be (currently) debugged in Minikube because the Kaniko builder requires that the operator and the publisher pod
+share a common persistent volume.
diff --git a/docs/README.adoc b/docs/README.adoc
new file mode 100644
index 0000000..a91e1ee
--- /dev/null
+++ b/docs/README.adoc
@@ -0,0 +1,30 @@
+Camel K Documentation
+=====================
+
+== Environment Setup
+
+To set up the environment you need to execute the following command once (and every time you change yarn dependencies):
+
+```
+yarn install
+```
+
+== Build the Documentation Website
+
+To generate the documentation website, execute:
+
+```
+yarn build
+```
+
+To preview it in the local browser, execute:
+
+```
+yarn preview
+```
+
+To both build and preview, execute:
+
+```
+yarn dev
+```
diff --git a/docs/README.md b/docs/README.md
deleted file mode 100644
index 69580ef..0000000
--- a/docs/README.md
+++ /dev/null
@@ -1,25 +0,0 @@
-# Camel K Documentation
-
-To setup environment you need to execute the following command once (and every time you change yarn dependencies):
-
-```
-yarn install
-```
-
-To generate the documentation website, execute:
-
-```
-yarn build
-```
-
-To preview the documentation website in the local browser, execute:
-
-```
-yarn preview
-```
-
-To both build and preview the documentation website, execute:
-
-```
-yarn dev
-```
diff --git a/docs/modules/ROOT/nav.adoc b/docs/modules/ROOT/nav.adoc
index 1f1f340..6bd6a7c 100644
--- a/docs/modules/ROOT/nav.adoc
+++ b/docs/modules/ROOT/nav.adoc
@@ -1,6 +1,7 @@
-* xref:installation/index.adoc[Installing Camel K]
+* xref:installation/index.adoc[Installation]
 ** xref:installation/minikube.adoc[Minikube]
 ** xref:installation/minishift.adoc[Minishift]
 ** xref:installation/gke.adoc[Google Kubernetes Engine (GKE)]
-* xref:running.adoc[Running Integrations]
+** xref:installation/openshift.adoc[OpenShift]
+* xref:running.adoc[Running]
 * xref:configuration/index.adoc[Configuration]
\ No newline at end of file
diff --git a/docs/modules/ROOT/pages/installation/index.adoc b/docs/modules/ROOT/pages/installation/index.adoc
index bc6df6b..2762e90 100644
--- a/docs/modules/ROOT/pages/installation/index.adoc
+++ b/docs/modules/ROOT/pages/installation/index.adoc
@@ -1,5 +1,5 @@
 [[installation]]
-= Installing Camel K
+= Installation
 
 Camel K allows to run integrations directly on a Kubernetes or OpenShift cluster.
 To use it, you need to be connected to a cloud environment or to a local cluster created for development purposes.
@@ -13,6 +13,7 @@ before installing it. Customized instructions are needed for the following clust
 - xref:installation/minikube.adoc[Minikube]
 - xref:installation/minishift.adoc[Minishift]
 - xref:installation/gke.adoc[Google Kubernetes Engine (GKE)]
+- xref:installation/openshift.adoc[OpenShift]
 
 Other cluster types (such as OpenShift clusters) should *not need* prior configuration.
 
diff --git a/docs/modules/ROOT/pages/installation/openshift.adoc b/docs/modules/ROOT/pages/installation/openshift.adoc
new file mode 100644
index 0000000..f2f1fd5
--- /dev/null
+++ b/docs/modules/ROOT/pages/installation/openshift.adoc
@@ -0,0 +1,19 @@
+[[installation-on-openshift]]
+= Installing Camel K on OpenShift
+
+Installation of Camel K on OpenShift requires that you first execute some specific actions as a cluster admin.
+
+OpenShift does not always provide full cluster-admin rights to all users, so you may need to contact an administrator to install the
+Kubernetes custom resources and roles needed by Camel K.
+
+You need to get the *kamel* CLI (_camel-k-client_) tool from the https://github.com/apache/camel-k/releases[release page]
+and put it on your system path (e.g. `/usr/bin/kamel` on Linux).
+
+To install the custom resource definitions and related roles, just execute (with **cluster-admin role**):
+
+```
+kamel install --cluster-setup
+```
+
+This is needed **only once for the whole cluster**. After that, you can **log in as a standard user** and
+continue with the xref:installation/index.adoc#procedure[standard Camel K installation procedure].
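+
+For completeness, assuming the standard procedure is unchanged, the per-namespace installation that the regular user runs afterwards boils down to:
+
+```
+kamel install
+```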
diff --git a/docs/modules/ROOT/pages/running.adoc b/docs/modules/ROOT/pages/running.adoc
index d12c3f8..7d037ff 100644
--- a/docs/modules/ROOT/pages/running.adoc
+++ b/docs/modules/ROOT/pages/running.adoc
@@ -1,5 +1,5 @@
 [[running]]
-= Running Integrations
+= Running
 
 After completing the xref:installation/index.adoc[installation] you should be connected to a Kubernetes/OpenShift cluster
 and have the "kamel" CLI correctly configured.
