This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k-examples.git
commit 929cba6b98f8b2313fcb015d707e566f9c8f92a5
Author: Thomas Stuber <tstu...@gmail.com>
AuthorDate: Fri Aug 6 18:39:00 2021 +0200

    streamlined README's for kamelets examples
---
 .../README.md                                           |  12 +-
 kamelets/aws-s3-to-log-with-secret/README.md            |  10 +-
 kamelets/aws-s3-to-log/README.md                        |  10 +-
 kamelets/kafka-to-kafka-with-manual-commit/README.md    |   6 +-
 kamelets/kafka-to-kafka-with-regex-router/README.md     |  10 +-
 kamelets/kafka-to-kafka-with-timestamp-router/README.md |  10 +-
 kamelets/kafka-to-log-with-value-to-key/README.md       |  10 +-
 kamelets/kafka-to-s3-streaming-upload/README.md         | 104 ++++++++++-----------
 kamelets/kafka-to-sqlserver/README.md                   |  59 ++++++------
 9 files changed, 115 insertions(+), 116 deletions(-)

diff --git a/kamelets/aws-s3-to-kafka-with-timestamp-router/README.md b/kamelets/aws-s3-to-kafka-with-timestamp-router/README.md
index a83a955..36af2f5 100644
--- a/kamelets/aws-s3-to-kafka-with-timestamp-router/README.md
+++ b/kamelets/aws-s3-to-kafka-with-timestamp-router/README.md
@@ -4,19 +4,19 @@

 - Install camel-k on the kafka namespalce

-- Open the flow-binding.yaml file and insert the correct credentials for AWS S3 account and the bucket name.
+- Open the `flow-binding.yaml` file and insert the correct credentials for AWS S3 account and the bucket name.

 - The Log Sink Kamelet is not available out of the box in 1.5.0 Camel-K release so you'll have to install it before installing the flow binding.
-- Set the correct credentials for S3 in the flow-binding.yaml
+- Set the correct credentials for S3 in the `flow-binding.yaml`

 - Run the following commands

-  - kubectl apply -f log-sink.kamelet.yaml -n kafka
-  - kubectl apply -f flow-binding.yaml -n kafka
+    kubectl apply -f log-sink.kamelet.yaml -n kafka
+    kubectl apply -f flow-binding.yaml -n kafka

 - Check logs

-  - kamel logs s3-to-kafka-with-timestamp-router -n kafka
+    kamel logs s3-to-kafka-with-timestamp-router -n kafka

-You should see the file content from AWS S3 logged and appearing in a topic based on the Last modified metadata field of the S3 file consumed. The granularity of the topic names is minutes.
+- You should see the file content from AWS S3 logged and appearing in a topic based on the Last modified metadata field of the S3 file consumed. The granularity of the topic names is minutes.

diff --git a/kamelets/aws-s3-to-log-with-secret/README.md b/kamelets/aws-s3-to-log-with-secret/README.md
index 5546117..551f5ba 100644
--- a/kamelets/aws-s3-to-log-with-secret/README.md
+++ b/kamelets/aws-s3-to-log-with-secret/README.md
@@ -2,19 +2,19 @@

 - The Log Sink Kamelet is not available out of the box in 1.5.0 Camel-K release so you'll have to install it before installing the flow binding.
-- If camel-k has been installed in a specific namespace different from the default one, you'll need to add a parameter to all the command (-n <namespace_name>)
+- If camel-k has been installed in a specific namespace different from the default one, you'll need to add a parameter to all the command (`-n <namespace_name>`)

 - Run the following command to create the secret related to AWS S3 credentials

-  kubectl create secret generic aws-s3-secret --from-literal=accessKey=<accessKey> --from-literal=secretKey=<secretKey>
+    kubectl create secret generic aws-s3-secret --from-literal=accessKey=<accessKey> --from-literal=secretKey=<secretKey>

 - Run the following commands

-  kubectl apply -f log-sink.kamelet.yaml
-  kubectl apply -f flow-binding.yaml
+    kubectl apply -f log-sink.kamelet.yaml
+    kubectl apply -f flow-binding.yaml

 - Check logs

-  kamel logs s3-to-log-with-secret
+    kamel logs s3-to-log-with-secret

 - If you have files on your S3 bucket you should see their content consumed and the file in the bucket deleted

diff --git a/kamelets/aws-s3-to-log/README.md b/kamelets/aws-s3-to-log/README.md
index e881808..276b530 100644
--- a/kamelets/aws-s3-to-log/README.md
+++ b/kamelets/aws-s3-to-log/README.md
@@ -1,18 +1,18 @@
 # AWS S3 to Log

-- Open the flow-binding.yaml file and insert the correct credentials for AWS S3 account and the bucket name.
+- Open the `flow-binding.yaml` file and insert the correct credentials for AWS S3 account and the bucket name.

 - The Log Sink Kamelet is not available out of the box in 1.5.0 Camel-K release so you'll have to install it before installing the flow binding.
-- If camel-k has been installed in a specific namespace different from the default one, you'll need to add a parameter to all the command (-n <namespace_name>)
+- If camel-k has been installed in a specific namespace different from the default one, you'll need to add a parameter to all the command (`-n <namespace_name>`)

 - Run the following commands

-  - kubectl apply -f log-sink.kamelet.yaml
-  - kubectl apply -f flow-binding.yaml
+    kubectl apply -f log-sink.kamelet.yaml
+    kubectl apply -f flow-binding.yaml

 - Check logs

-  - kamel logs aws-s3-to-log
+    kamel logs aws-s3-to-log

 - If you have files on your S3 bucket you should see their content consumed and the file in the bucket deleted

diff --git a/kamelets/kafka-to-kafka-with-manual-commit/README.md b/kamelets/kafka-to-kafka-with-manual-commit/README.md
index a9a5245..a2c5821 100644
--- a/kamelets/kafka-to-kafka-with-manual-commit/README.md
+++ b/kamelets/kafka-to-kafka-with-manual-commit/README.md
@@ -8,9 +8,9 @@

 - Run the following commands

-  - kubectl apply -f log-sink.kamelet.yaml -n kafka
-  - kubectl apply -f flow-binding.yaml -n kafka
+    kubectl apply -f log-sink.kamelet.yaml -n kafka
+    kubectl apply -f flow-binding.yaml -n kafka

 - Check logs

-kamel logs kafka-to-kafka-with-manual-commit -n kafka
+    kamel logs kafka-to-kafka-with-manual-commit -n kafka

diff --git a/kamelets/kafka-to-kafka-with-regex-router/README.md b/kamelets/kafka-to-kafka-with-regex-router/README.md
index af26143..8d85121 100644
--- a/kamelets/kafka-to-kafka-with-regex-router/README.md
+++ b/kamelets/kafka-to-kafka-with-regex-router/README.md
@@ -4,15 +4,15 @@

 - The Log Sink Kamelet is not available out of the box in 1.5.0 Camel-K release so you'll have to install it before installing the flow binding.
-- If camel-k has been installed in a specific namespace different from the default one, you'll need to add a parameter to all the commands (-n <namespace_name>)
+- If camel-k has been installed in a specific namespace different from the default one, you'll need to add a parameter to all the commands (`-n <namespace_name>`)

 - Run the following commands

-  - kubectl apply -f log-sink.kamelet.yaml -n kafka
-  - kubectl apply -f flow-binding.yaml -n kafka
+    kubectl apply -f log-sink.kamelet.yaml -n kafka
+    kubectl apply -f flow-binding.yaml -n kafka

 - Check logs

-kamel logs kafka-to-kafka-with-regex-router
+    kamel logs kafka-to-kafka-with-regex-router

-You should data ingesting into the topic-1 topic, after regex router override the topic name.
+You should data ingesting into the `topic-1` topic, after regex router override the topic name.

diff --git a/kamelets/kafka-to-kafka-with-timestamp-router/README.md b/kamelets/kafka-to-kafka-with-timestamp-router/README.md
index 87c25db..41745be 100644
--- a/kamelets/kafka-to-kafka-with-timestamp-router/README.md
+++ b/kamelets/kafka-to-kafka-with-timestamp-router/README.md
@@ -4,15 +4,15 @@

 - The Log Sink Kamelet is not available out of the box in 1.5.0 Camel-K release so you'll have to install it before installing the flow binding.
-- If camel-k has been installed in a specific namespace different from the default one, you'll need to add a parameter to all the commands (-n <namespace_name>)
+- If camel-k has been installed in a specific namespace different from the default one, you'll need to add a parameter to all the commands (`-n <namespace_name>`)

 - Run the following commands

-  - kubectl apply -f log-sink.kamelet.yaml -n kafka
-  - kubectl apply -f flow-binding.yaml -n kafka
+    kubectl apply -f log-sink.kamelet.yaml -n kafka
+    kubectl apply -f flow-binding.yaml -n kafka

 - Check logs

-kamel logs kafka-to-kafka-with-timestamp-router
+    kamel logs kafka-to-kafka-with-timestamp-router

-You should data ingesting from test-topic topic, into a $[topic]_$[timestamp] topic, based on the record timestamp.
+You should data ingesting from `test-topic` topic, into a `$[topic]_$[timestamp]` topic, based on the record timestamp.

diff --git a/kamelets/kafka-to-log-with-value-to-key/README.md b/kamelets/kafka-to-log-with-value-to-key/README.md
index acb9fdb..1b6fc82 100644
--- a/kamelets/kafka-to-log-with-value-to-key/README.md
+++ b/kamelets/kafka-to-log-with-value-to-key/README.md
@@ -4,15 +4,15 @@

 - The Log Sink Kamelet is not available out of the box in 1.5.0 Camel-K release so you'll have to install it before installing the flow binding.
-- If camel-k has been installed in a specific namespace different from the default one, you'll need to add a parameter to all the commands (-n <namespace_name>)
+- If camel-k has been installed in a specific namespace different from the default one, you'll need to add a parameter to all the commands (`-n <namespace_name>`)

 - Run the following commands

-  - kubectl apply -f log-sink.kamelet.yaml -n kafka
-  - kubectl apply -f flow-binding.yaml -n kafka
+    kubectl apply -f log-sink.kamelet.yaml -n kafka
+    kubectl apply -f flow-binding.yaml -n kafka

 - Check logs

-kamel logs kafka-to-log-with-value-to-key
+    kamel logs kafka-to-log-with-value-to-key

-You should data ingesting into the test-topic topic, you should see logged a record with key composed of foo and bar values.
+You should data ingesting into the `test-topic` topic, you should see logged a record with key composed of foo and bar values.

diff --git a/kamelets/kafka-to-s3-streaming-upload/README.md b/kamelets/kafka-to-s3-streaming-upload/README.md
index 2dd82e4..f7b8261 100644
--- a/kamelets/kafka-to-s3-streaming-upload/README.md
+++ b/kamelets/kafka-to-s3-streaming-upload/README.md
@@ -8,66 +8,64 @@

 - Run the following commands

-kubectl apply -f flow-binding.yaml -n kafka
+    kubectl apply -f flow-binding.yaml -n kafka

 - Check logs

-kamel logs kafka-to-s3-streaming-upload -n kafka
+    kamel logs kafka-to-s3-streaming-upload -n kafka

- [1] 2021-07-30 05:45:28,277 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup summary (total:3 started:3)
- [1] 2021-07-30 05:45:28,277 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (kamelet://kafka-not-secured-source/source)
- [1] 2021-07-30 05:45:28,277 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started source (kafka://test-topic)
- [1] 2021-07-30 05:45:28,277 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started sink (kamelet://source)
- [1] 2021-07-30 05:45:28,278 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.11.0 (camel-1) started in 1s751ms (build:0ms init:75ms start:1s676ms)
- [1] 2021-07-30 05:45:28,281 INFO  [io.quarkus] (main) camel-k-integration 1.5.0 on JVM (powered by Quarkus 2.0.0.Final) started in 7.710s.
- [1] 2021-07-30 05:45:28,281 INFO  [io.quarkus] (main) Profile prod activated.
- [1] 2021-07-30 05:45:28,282 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'sasl.kerberos.ticket.renew.window.factor' was supplied but isn't a known config.
- [1] 2021-07-30 05:45:28,284 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'sasl.kerberos.kinit.cmd' was supplied but isn't a known config.
- [1] 2021-07-30 05:45:28,285 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'specific.avro.reader' was supplied but isn't a known config.
- [1] 2021-07-30 05:45:28,284 INFO  [io.quarkus] (main) Installed features: [camel-aws2-commons, camel-aws2-s3, camel-bean, camel-core, camel-k-core, camel-k-runtime, camel-kafka, camel-kamelet, camel-support-common, camel-support-commons-logging, camel-support-httpclient, camel-yaml-dsl, cdi, kafka-client]
- [1] 2021-07-30 05:45:28,287 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'sasl.kerberos.ticket.renew.jitter' was supplied but isn't a known config.
- [1] 2021-07-30 05:45:28,288 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'ssl.trustmanager.algorithm' was supplied but isn't a known config.
- [1] 2021-07-30 05:45:28,288 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'ssl.keystore.type' was supplied but isn't a known config.
- [1] 2021-07-30 05:45:28,289 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'sasl.kerberos.min.time.before.relogin' was supplied but isn't a known config.
- [1] 2021-07-30 05:45:28,289 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'ssl.endpoint.identification.algorithm' was supplied but isn't a known config.
- [1] 2021-07-30 05:45:28,289 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'ssl.protocol' was supplied but isn't a known config.
- [1] 2021-07-30 05:45:28,290 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'ssl.enabled.protocols' was supplied but isn't a known config.
- [1] 2021-07-30 05:45:28,291 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'ssl.truststore.type' was supplied but isn't a known config.
- [1] 2021-07-30 05:45:28,291 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'ssl.keymanager.algorithm' was supplied but isn't a known config.
- [1] 2021-07-30 05:45:28,292 INFO  [org.apa.kaf.com.uti.AppInfoParser] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) Kafka version: 2.8.0
- [1] 2021-07-30 05:45:28,292 INFO  [org.apa.kaf.com.uti.AppInfoParser] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) Kafka commitId: ebb1d6e21cc92130
- [1] 2021-07-30 05:45:28,292 INFO  [org.apa.kaf.com.uti.AppInfoParser] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) Kafka startTimeMs: 1627623928292
- [1] 2021-07-30 05:45:28,292 INFO  [org.apa.cam.com.kaf.KafkaConsumer] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) Subscribing test-topic-Thread 0 to topic test-topic
- [1] 2021-07-30 05:45:28,293 INFO  [org.apa.kaf.cli.con.KafkaConsumer] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Subscribed to topic(s): test-topic
- [1] 2021-07-30 05:45:28,582 WARN  [org.apa.kaf.cli.NetworkClient] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Error while fetching metadata with correlation id 2 : {test-topic=LEADER_NOT_AVAILABLE}
- [1] 2021-07-30 05:45:28,583 INFO  [org.apa.kaf.cli.Metadata] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Cluster ID: r6q2BGnHT7awUAK0diO1hA
- [1] 2021-07-30 05:45:28,585 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Discovered group coordinator my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc:9092 (id: 2147483647 rack: null)
- [1] 2021-07-30 05:45:28,635 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] (Re-)joining group
- [1] 2021-07-30 05:45:28,755 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] (Re-)joining group
- [1] 2021-07-30 05:45:28,759 WARN  [org.apa.kaf.cli.NetworkClient] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Error while fetching metadata with correlation id 6 : {test-topic=LEADER_NOT_AVAILABLE}
- [1] 2021-07-30 05:45:28,868 WARN  [org.apa.kaf.cli.NetworkClient] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Error while fetching metadata with correlation id 8 : {test-topic=LEADER_NOT_AVAILABLE}
- [1] 2021-07-30 05:45:31,766 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Successfully joined group with generation Generation{generationId=1, memberId='consumer-camel-k-integration-2-a03ed9eb-5a45-4170-b519-97f9a32e019d', protocol='range'}
- [1] 2021-07-30 05:45:31,772 INFO  [org.apa.kaf.cli.con.int.ConsumerCoordinator] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Finished assignment for group at generation 1: {consumer-camel-k-integration-2-a03ed9eb-5a45-4170-b519-97f9a32e019d=Assignment(partitions=[test-topic-0])}
- [1] 2021-07-30 05:45:31,790 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Successfully synced group in generation Generation{generationId=1, memberId='consumer-camel-k-integration-2-a03ed9eb-5a45-4170-b519-97f9a32e019d', protocol='range'}
- [1] 2021-07-30 05:45:31,791 INFO  [org.apa.kaf.cli.con.int.ConsumerCoordinator] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Notifying assignor about the new Assignment(partitions=[test-topic-0])
- [1] 2021-07-30 05:45:31,797 INFO  [org.apa.kaf.cli.con.int.ConsumerCoordinator] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Adding newly assigned partitions: test-topic-0
- [1] 2021-07-30 05:45:31,808 INFO  [org.apa.kaf.cli.con.int.ConsumerCoordinator] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Found no committed offset for partition test-topic-0
- [1] 2021-07-30 05:45:31,818 INFO  [org.apa.kaf.cli.con.int.SubscriptionState] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Resetting offset for partition test-topic-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc:9092 (id: 0 rack: null)], epoch=0}}.
+    [1] 2021-07-30 05:45:28,277 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup summary (total:3 started:3)
+    [1] 2021-07-30 05:45:28,277 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (kamelet://kafka-not-secured-source/source)
+    [1] 2021-07-30 05:45:28,277 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started source (kafka://test-topic)
+    [1] 2021-07-30 05:45:28,277 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started sink (kamelet://source)
+    [1] 2021-07-30 05:45:28,278 INFO  [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.11.0 (camel-1) started in 1s751ms (build:0ms init:75ms start:1s676ms)
+    [1] 2021-07-30 05:45:28,281 INFO  [io.quarkus] (main) camel-k-integration 1.5.0 on JVM (powered by Quarkus 2.0.0.Final) started in 7.710s.
+    [1] 2021-07-30 05:45:28,281 INFO  [io.quarkus] (main) Profile prod activated.
+    [1] 2021-07-30 05:45:28,282 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'sasl.kerberos.ticket.renew.window.factor' was supplied but isn't a known config.
+    [1] 2021-07-30 05:45:28,284 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'sasl.kerberos.kinit.cmd' was supplied but isn't a known config.
+    [1] 2021-07-30 05:45:28,285 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'specific.avro.reader' was supplied but isn't a known config.
+    [1] 2021-07-30 05:45:28,284 INFO  [io.quarkus] (main) Installed features: [camel-aws2-commons, camel-aws2-s3, camel-bean, camel-core, camel-k-core, camel-k-runtime, camel-kafka, camel-kamelet, camel-support-common, camel-support-commons-logging, camel-support-httpclient, camel-yaml-dsl, cdi, kafka-client]
+    [1] 2021-07-30 05:45:28,287 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'sasl.kerberos.ticket.renew.jitter' was supplied but isn't a known config.
+    [1] 2021-07-30 05:45:28,288 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'ssl.trustmanager.algorithm' was supplied but isn't a known config.
+    [1] 2021-07-30 05:45:28,288 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'ssl.keystore.type' was supplied but isn't a known config.
+    [1] 2021-07-30 05:45:28,289 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'sasl.kerberos.min.time.before.relogin' was supplied but isn't a known config.
+    [1] 2021-07-30 05:45:28,289 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'ssl.endpoint.identification.algorithm' was supplied but isn't a known config.
+    [1] 2021-07-30 05:45:28,289 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'ssl.protocol' was supplied but isn't a known config.
+    [1] 2021-07-30 05:45:28,290 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'ssl.enabled.protocols' was supplied but isn't a known config.
+    [1] 2021-07-30 05:45:28,291 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'ssl.truststore.type' was supplied but isn't a known config.
+    [1] 2021-07-30 05:45:28,291 WARN  [org.apa.kaf.cli.con.ConsumerConfig] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) The configuration 'ssl.keymanager.algorithm' was supplied but isn't a known config.
+    [1] 2021-07-30 05:45:28,292 INFO  [org.apa.kaf.com.uti.AppInfoParser] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) Kafka version: 2.8.0
+    [1] 2021-07-30 05:45:28,292 INFO  [org.apa.kaf.com.uti.AppInfoParser] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) Kafka commitId: ebb1d6e21cc92130
+    [1] 2021-07-30 05:45:28,292 INFO  [org.apa.kaf.com.uti.AppInfoParser] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) Kafka startTimeMs: 1627623928292
+    [1] 2021-07-30 05:45:28,292 INFO  [org.apa.cam.com.kaf.KafkaConsumer] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) Subscribing test-topic-Thread 0 to topic test-topic
+    [1] 2021-07-30 05:45:28,293 INFO  [org.apa.kaf.cli.con.KafkaConsumer] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Subscribed to topic(s): test-topic
+    [1] 2021-07-30 05:45:28,582 WARN  [org.apa.kaf.cli.NetworkClient] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Error while fetching metadata with correlation id 2 : {test-topic=LEADER_NOT_AVAILABLE}
+    [1] 2021-07-30 05:45:28,583 INFO  [org.apa.kaf.cli.Metadata] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Cluster ID: r6q2BGnHT7awUAK0diO1hA
+    [1] 2021-07-30 05:45:28,585 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Discovered group coordinator my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc:9092 (id: 2147483647 rack: null)
+    [1] 2021-07-30 05:45:28,635 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] (Re-)joining group
+    [1] 2021-07-30 05:45:28,755 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] (Re-)joining group
+    [1] 2021-07-30 05:45:28,759 WARN  [org.apa.kaf.cli.NetworkClient] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Error while fetching metadata with correlation id 6 : {test-topic=LEADER_NOT_AVAILABLE}
+    [1] 2021-07-30 05:45:28,868 WARN  [org.apa.kaf.cli.NetworkClient] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Error while fetching metadata with correlation id 8 : {test-topic=LEADER_NOT_AVAILABLE}
+    [1] 2021-07-30 05:45:31,766 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Successfully joined group with generation Generation{generationId=1, memberId='consumer-camel-k-integration-2-a03ed9eb-5a45-4170-b519-97f9a32e019d', protocol='range'}
+    [1] 2021-07-30 05:45:31,772 INFO  [org.apa.kaf.cli.con.int.ConsumerCoordinator] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Finished assignment for group at generation 1: {consumer-camel-k-integration-2-a03ed9eb-5a45-4170-b519-97f9a32e019d=Assignment(partitions=[test-topic-0])}
+    [1] 2021-07-30 05:45:31,790 INFO  [org.apa.kaf.cli.con.int.AbstractCoordinator] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Successfully synced group in generation Generation{generationId=1, memberId='consumer-camel-k-integration-2-a03ed9eb-5a45-4170-b519-97f9a32e019d', protocol='range'}
+    [1] 2021-07-30 05:45:31,791 INFO  [org.apa.kaf.cli.con.int.ConsumerCoordinator] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Notifying assignor about the new Assignment(partitions=[test-topic-0])
+    [1] 2021-07-30 05:45:31,797 INFO  [org.apa.kaf.cli.con.int.ConsumerCoordinator] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Adding newly assigned partitions: test-topic-0
+    [1] 2021-07-30 05:45:31,808 INFO  [org.apa.kaf.cli.con.int.ConsumerCoordinator] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Found no committed offset for partition test-topic-0
+    [1] 2021-07-30 05:45:31,818 INFO  [org.apa.kaf.cli.con.int.SubscriptionState] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) [Consumer clientId=consumer-camel-k-integration-2, groupId=camel-k-integration] Resetting offset for partition test-topic-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[my-cluster-kafka-0.my-cluster-kafka-brokers.kafka.svc:9092 (id: 0 rack: null)], epoch=0}}.

-- Send some data to Kafka topic
+- Send some data to Kafka topic. Run the following command

-Run the following command
-
-kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.24.0-kafka-2.8.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9092 --topic test-topic
-If you don't see a command prompt, try pressing enter.
->l1
->l2
->l3
->l4
->l5
+    kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.24.0-kafka-2.8.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9092 --topic test-topic
+    If you don't see a command prompt, try pressing enter.
+    >l1
+    >l2
+    >l3
+    >l4
+    >l5

 - In the integration logs you should now read

- [1] 2021-07-30 05:46:12,156 INFO  [org.apa.cam.com.aws.s3.str.AWS2S3StreamUploadProducer] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) Completed upload for the part 1 with etag "03204a1ad503637fc7b6cef6ac290fd4-1" at index 5
+    [1] 2021-07-30 05:46:12,156 INFO  [org.apa.cam.com.aws.s3.str.AWS2S3StreamUploadProducer] (Camel (camel-1) thread #0 - KafkaConsumer[test-topic]) Completed upload for the part 1 with etag "03204a1ad503637fc7b6cef6ac290fd4-1" at index 5

-- If you go into the kamelets-demo bucket of your account, you should see only one file KafkaTestFile.txt, containing the file messages concatenated.
+- If you go into the kamelets-demo bucket of your account, you should see only one file `KafkaTestFile.txt`, containing the file messages concatenated.

diff --git a/kamelets/kafka-to-sqlserver/README.md b/kamelets/kafka-to-sqlserver/README.md
index cfe2972..2a8e81c 100644
--- a/kamelets/kafka-to-sqlserver/README.md
+++ b/kamelets/kafka-to-sqlserver/README.md
@@ -2,59 +2,60 @@

 - Use the quickstart for https://strimzi.io/quickstarts/ and follow the minikube guide.
-- If camel-k has been installed in a specific namespace different from the default one, you'll need to add a parameter to all the commands (-n <namespace_name>)
+- If camel-k has been installed in a specific namespace different from the default one, you'll need to add a parameter to all the commands (`-n <namespace_name>`)

 - Run the following command

-  > kubectl run mssql-1 --image=mcr.microsoft.com/mssql/server:2017-latest --port=1433 --env 'ACCEPT_EULA=Y' --env 'SA_PASSWORD=Password!' -n kafka
+    kubectl run mssql-1 --image=mcr.microsoft.com/mssql/server:2017-latest --port=1433 --env 'ACCEPT_EULA=Y' --env 'SA_PASSWORD=Password!' -n kafka

 - Once the pod is up and running we'll need to create the table and populate it with same starting data

-  > kubectl -n kafka exec -it mssql-1 -- bash
-  > root@mssql-1:/# /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "Password!"
-  1> CREATE TABLE accounts (user_id INT PRIMARY KEY, username VARCHAR ( 50 ) UNIQUE NOT NULL, city VARCHAR ( 50 ) NOT NULL );
-  2> GO
-  1> INSERT into accounts (user_id,username,city) VALUES (1, 'andrea', 'Roma');
-  2> GO
-  1> INSERT into accounts (user_id,username,city) VALUES (2, 'John', 'New York');
-  2> GO
+    kubectl -n kafka exec -it mssql-1 -- bash
+    root@mssql-1:/# /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "Password!"
+    1> CREATE TABLE accounts (user_id INT PRIMARY KEY, username VARCHAR ( 50 ) UNIQUE NOT NULL, city VARCHAR ( 50 ) NOT NULL );
+    2> GO
+    1> INSERT into accounts (user_id,username,city) VALUES (1, 'andrea', 'Roma');
+    2> GO
+    1> INSERT into accounts (user_id,username,city) VALUES (2, 'John', 'New York');
+    2> GO

 - So we now have two rows in the database.

 - Open a different terminal

-- Add the correct credentials and container address in the flow-binding yaml for the MSSQL Server database.
+- Add the correct credentials and container address in the `flow-binding.yaml` for the MSSQL Server database.
 - Run the following commands

-  kubectl apply -f flow-binding.yaml -n kafka
+    kubectl apply -f flow-binding.yaml -n kafka

 - Open a different terminal and run the following command

-  kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.24.0-kafka-2.8.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9092 --topic test-topic-1
+    kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.24.0-kafka-2.8.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9092 --topic test-topic-1

 - Send some messages to the kafka topic like for example

-  { "user_id":"3", "username":"Vittorio", "city":"Roma" }
-  { "user_id":"4", "username":"Hugo", "city":"Paris" }
-- Now we can check the database
+    { "user_id":"3", "username":"Vittorio", "city":"Roma" }
+    { "user_id":"4", "username":"Hugo", "city":"Paris" }

-  > kubectl exec -it mssql-1 -- bash
-  > root@mssql-1:/# /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "Password!"
-  1> SELECT * from accounts;
-  2> GO
-  user_id     username                                           city
-  ----------- -------------------------------------------------- --------------------------------------------------
-  1           andrea                                             Roma
-  2           John                                               New York
-  3           Vittorio                                           Roma
-  4           Hugo                                               Paris
-  (4 rows affected)
+- Now we can check the database

-- Check logs to see the integration running
-  kamel logs kafka-to-sqlserver
+    kubectl exec -it mssql-1 -- bash
+    root@mssql-1:/# /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "Password!"
+    1> SELECT * from accounts;
+    2> GO
+    user_id     username                                           city
+    ----------- -------------------------------------------------- --------------------------------------------------
+    1           andrea                                             Roma
+    2           John                                               New York
+    3           Vittorio                                           Roma
+    4           Hugo                                               Paris
+    (4 rows affected)
+
+- Check logs to see the integration running
+
+    kamel logs kafka-to-sqlserver
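For readers of the archive: every README touched by this commit boils down to editing a `flow-binding.yaml` and applying it with `kubectl`. Those files are KameletBinding resources; the sketch below shows only the general shape, with placeholder kamelet names and property values (it is not copied from the repository):

```yaml
# Illustrative KameletBinding sketch for the camel-k 1.5.x-era API.
# The kamelet names and property values are placeholders, not the
# exact contents of any flow-binding.yaml in this repo.
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: aws-s3-to-log
spec:
  source:                      # where events come from
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: aws-s3-source
    properties:
      bucketNameOrArn: my-bucket   # placeholder bucket
      accessKey: my-access-key     # placeholder; a Secret can be used instead
      secretKey: my-secret-key
      region: eu-west-1
  sink:                        # where events go
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: log-sink
```

Once applied (`kubectl apply -f flow-binding.yaml -n <namespace>`), the camel-k operator turns the binding into the running integration whose output the READMEs inspect with `kamel logs`.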