OK, I fixed my issue by making sure that the total of the chunks I read in
equals the total size of the file. I am using Bytes.getArray(bytebuff) to
write the data out, and the total bytes read and written now match.
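For anyone who runs into the same thing, here is a minimal sketch of what
my read side now looks like (DataStax Java driver; the files(id, seq, data)
table, the column names, and the session/fileId arguments are illustrative
stand-ins for my actual code):

import java.io.FileOutputStream;
import java.io.IOException;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.utils.Bytes;

// Reassemble a file stored as blob chunks in a hypothetical
// files(id text, seq int, data blob, PRIMARY KEY (id, seq)) table.
static long dumpFile(Session session, String fileId, String dest) throws IOException {
    long total = 0;
    try (FileOutputStream out = new FileOutputStream(dest)) {
        for (Row row : session.execute("SELECT data FROM files WHERE id = ?", fileId)) {
            byte[] data = Bytes.getArray(row.getBytes("data")); // ByteBuffer -> byte[]
            out.write(data);
            total += data.length; // running total of bytes written
        }
    }
    return total; // compare this against the size of the original file
}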
Thanks,
Noah
Carlos,
I am having the same issue as you. I am using the code below:
// assumes: import java.nio.ByteBuffer;
//          import com.datastax.driver.core.utils.Bytes;
for (Row row : rs) {
    ByteBuffer bytebuff = row.getBytes("data"); // blob column comes back as a ByteBuffer
    byte[] data = Bytes.getArray(bytebuff);     // copy the buffer contents into a byte[]
    out.write(data);
}
Did you ever find an answer to this?
Thanks,
Noah
I figured out the issue. It mainly comes down to the
commitlog_segment_size_in_mb setting and the sync I currently have across
three other remote sites.
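For anyone else who hits this, the knob lives in cassandra.yaml; a minimal
sketch (the value shown is the stock default, for illustration only):

    # cassandra.yaml
    # Cassandra rejects any single mutation larger than half the
    # segment size, so a 32 MB segment caps one write at about 16 MB.
    commitlog_segment_size_in_mb: 32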
Thanks,
Noah
On Thu, Jul 30, 2015 at 1:05 PM, noah chanmala wrote:
> All,
>
> Would you please point me to the location where I can adjust/reconfigure
> so that I can insert chunks of more than 500 bytes into a Blob field
> without the cluster crashing on me?
All,
Would you please point me to the location where I can adjust/reconfigure so
that I can insert chunks of more than 500 bytes into a Blob field without
the cluster crashing on me?
I read on the user forum that people were able to insert 50 MB chunks, so
there must be a setting somewhere that I can adjust.
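For context, this is roughly the insert pattern I am using (a minimal
sketch with the DataStax Java driver; the files table, the column names,
and the chunkSize parameter are illustrative stand-ins for my actual code):

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Arrays;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

// Insert a file as a sequence of fixed-size blob chunks.
static void storeFile(Session session, String fileId, String src, int chunkSize)
        throws IOException {
    PreparedStatement ps = session.prepare(
            "INSERT INTO files (id, seq, data) VALUES (?, ?, ?)");
    byte[] buf = new byte[chunkSize];
    int seq = 0, n;
    try (FileInputStream in = new FileInputStream(src)) {
        while ((n = in.read(buf)) > 0) {
            // copy only the bytes actually read before binding them as a blob
            session.execute(ps.bind(fileId, seq++, ByteBuffer.wrap(Arrays.copyOf(buf, n))));
        }
    }
}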
Thanks,
Noah