saranyaeu2987 edited a comment on issue #251: URL: https://github.com/apache/camel-kafka-connector/issues/251#issuecomment-636048341
@oscerd @valdar If that's the case, how is the `file` variable resolved?

```
NOT RESOLVED --> camel.sink.url: aws-s3://selumalai-kafka-s3?keyName=${date:now:yyyyMMdd-HHmmssSSS}-${exchangeId}
RESOLVED     --> camel.component.aws-s3.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws-credentials.properties:aws_access_key_id}
```

***Basically, I want to preserve all Kafka topic data consumed via the sink connector in S3 without overwriting.***

Any suggestions on:

1. How can I prevent the S3 file from being overwritten after consuming data from a Kafka topic? **OR** How can I add a dynamic part to the URL so that the sink connector writes topic data to different files in S3?
2. Is there a way to write to S3 in batches?

_Current state:_ The S3 file is overwritten after every consumption.

_Desired state:_ Every consumed record is preserved in S3. If batch consumption (say, 1000 messages per file) is available, that would be fantastic!
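For context, here is a minimal sketch of the sink-connector properties implied by the snippets above. The connector name, class, and topic are illustrative assumptions (the exact connector class depends on the camel-kafka-connector version in use); the `keyName` expressions shown are the ones reported above as not being resolved:

```properties
# Hypothetical camel-aws-s3 sink connector config; name, class, and topic are assumptions
name=camel-aws-s3-sink
connector.class=org.apache.camel.kafkaconnector.awss3.CamelAwss3SinkConnector
topics=my-topic

# Dynamic key per exchange, intended to produce a distinct S3 object per message
# (reported above as NOT resolved by the connector)
camel.sink.url=aws-s3://selumalai-kafka-s3?keyName=${date:now:yyyyMMdd-HHmmssSSS}-${exchangeId}

# Credentials resolved via the file property function (reported above as resolved)
camel.component.aws-s3.accessKey=${file:/opt/kafka/external-configuration/aws-credentials/aws-credentials.properties:aws_access_key_id}
```

The contrast between the two placeholder styles is the crux of the question: `${file:...}` is a property function evaluated at configuration time, whereas `${date:...}` and `${exchangeId}` are Camel simple-language expressions that would need to be evaluated per exchange at runtime.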