nidhijwt opened a new issue #324:
URL: https://github.com/apache/camel-kafka-connector/issues/324


   **Overview** 
   
I am using CamelAzurestorageblobSinkConnector to archive data from my Kafka 
topics to Azure Blob Storage.
   
   **The problem**
   
The connector writes my data to an Azure append blob, appending one new block 
per record. Azure caps an append blob at 50,000 blocks, so after 50,000 
records the blob is full, and at my throughput this happens within a few 
minutes. Each block can hold up to 4 MB, but since the connector stores only 
one record per block and my messages are small (roughly 1 KB each), a blob 
fills up at about 50 MB instead of the ~195 GB that 50,000 full 4 MB blocks 
could hold. After 50,000 records the append blob is full and I get the 
following error:
   
```
com.azure.storage.blob.models.BlobStorageException: Status code 409, "<?xml version="1.0" encoding="utf-8"?><Error><Code>BlockCountExceedsLimit</Code><Message>The committed block count cannot exceed the maximum limit of 50,000 blocks.
RequestId:b112f700-301e-0022-476f-5ea523000000
Time:2020-07-20T08:23:40.1508526Z</Message></Error>"
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
```
   
   **Ask**
   I do not see a feature that lets the connector buffer a number of records 
and write them to the append blob as a single block. Does such a feature 
exist? It seems like a basic requirement when archiving through connectors.
   
   If no such feature exists, what workarounds do other people use?
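
   To make the ask concrete, here is roughly the kind of buffering I have in 
mind, sketched directly against the Azure SDK rather than the connector. The 
client classes are from the com.azure:azure-storage-blob 12.x SDK; the 
`BufferedAppender` class, the 3.5 MB threshold, and the newline record 
separator are made up for illustration:

```java
import com.azure.storage.blob.BlobServiceClientBuilder;
import com.azure.storage.blob.specialized.AppendBlobClient;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Illustrative sketch: buffer small records in memory and flush them to the
// append blob as ONE block, instead of one appendBlock() call per record.
public class BufferedAppender {
    // Stay under the 4 MB per-block limit; the exact threshold is arbitrary.
    private static final int FLUSH_THRESHOLD = 3_500_000;

    private final AppendBlobClient blob;
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    public BufferedAppender(String connectionString, String container, String blobName) {
        this.blob = new BlobServiceClientBuilder()
                .connectionString(connectionString)
                .buildClient()
                .getBlobContainerClient(container)
                .getBlobClient(blobName)
                .getAppendBlobClient();
    }

    public synchronized void append(byte[] record) throws IOException {
        buffer.write(record);
        buffer.write('\n'); // newline-separated records (illustrative)
        if (buffer.size() >= FLUSH_THRESHOLD) {
            flush();
        }
    }

    // Each flush consumes a single entry of the 50,000-block budget,
    // no matter how many records it carries.
    public synchronized void flush() {
        if (buffer.size() == 0) {
            return;
        }
        byte[] block = buffer.toByteArray();
        blob.appendBlock(new ByteArrayInputStream(block), block.length);
        buffer.reset();
    }
}
```

   A real sink would of course also flush on a timer and on shutdown, so 
records do not sit in the buffer indefinitely; this just shows the batching 
behaviour I would expect from the connector.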


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

