The Java version we were using, which turned out to be causing this
issue, was OpenJDK 1.7 u191.
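For anyone hitting something similar, a quick way to confirm which JVM update is actually deployed is to log it at startup. This is only an illustrative sketch of the idea, not code from this thread; the class name and the "known bad" version string are my own assumptions:

```java
// Minimal sketch: print the exact JVM version at startup so a problematic
// update (e.g. 1.7.0_191 vs 1.7.0_181) is visible in the logs.
public class JvmVersionCheck {

    // Hypothetical helper: flags the update this thread reports problems with.
    static boolean isKnownBadJvm(String version) {
        return version.contains("1.7.0_191");
    }

    public static void main(String[] args) {
        String version = System.getProperty("java.version");
        String vendor  = System.getProperty("java.vendor");
        System.out.println("Running on JVM " + version + " (" + vendor + ")");
        if (isKnownBadJvm(version)) {
            System.out.println("WARNING: JVM update associated with corruption reports in this thread");
        }
    }
}
```

`java.version` and `java.vendor` are standard system properties, so this works on any JVM; what counts as "known bad" is of course specific to your environment.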
On 16-May-2019 06:02, "sankalp kohli" wrote:
> which exact version you saw this?
>
> On Wed, May 15, 2019 at 12:03 PM keshava
> wrote:
>
I gave it a try with a different Java version, and it worked. It seems to be
some issue with that particular Java version.
On 10-May-2019 14:48, "keshava" wrote:
> I will try changing the Java version.
> W.r.t. the other point about hardware, I have this issue in multiple
> setups, so I reall
recover data
> later.
>
>
> --
> Jeff Jirsa
>
>
> On May 9, 2019, at 10:53 PM, keshava wrote:
>
Yes, we do have compression enabled, using
"org.apache.cassandra.io.compress.LZ4Compressor". The corruption is
spreading: as the number of inserts increases, it spreads across more data.
And yes, it did start with the JDK and OS upgrade.
Best regards :)
keshava Hosahalli
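For context on the compression setting mentioned above: table compression is configured per table in CQL, and in Cassandra 2.1 the option key is 'sstable_compression'. A minimal sketch that builds such a DDL statement; the keyspace, table, and column names are invented for illustration, not taken from this thread:

```java
// Sketch (not the poster's schema): how LZ4 table compression is expressed
// in CQL for Cassandra 2.1. Names below are made up for illustration.
public class CompressionDdl {

    static String createTableWithLz4(String keyspace, String table) {
        return "CREATE TABLE " + keyspace + "." + table + " ("
             + "id uuid PRIMARY KEY, payload blob) "
             + "WITH compression = {'sstable_compression': "
             + "'org.apache.cassandra.io.compress.LZ4Compressor'}";
    }

    public static void main(String[] args) {
        // Such a statement would normally be run through the DataStax driver's
        // Session.execute(); here we only build and print it.
        System.out.println(createTableWithLz4("demo_ks", "events"));
    }
}
```

Note that in Cassandra 3.0+ the option key changed from 'sstable_compression' to 'class', so this form is specific to the 2.x line discussed here.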
On Thu, May 9, 2019 at 7:11 PM Jeff Jirsa wrote:
Hi, our application is running into a data corruption issue. The application
uses Cassandra 2.1.11 with the DataStax Java driver version 2.1.9. So far
everything was working fine. Recently we changed our deployment environment
to OpenJDK 1.7 u191 (earlier it was 1.7 u181) and CentOS 7.4 (earlier 6.8).
This is randomly happening.