From: Pieter Callewaert
Sent: Thursday, November 07, 2013 4:22 AM
To: user@cassandra.apache.org
Subject: RE: Getting into Too many open files issues

Hi Murthy,
32768 is a bit low (I know the DataStax docs recommend this), but our
production env is now running on 1kk (1,000,000), or you can even put it on
unlimited.

Pieter
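
A minimal sketch of what that suggestion looks like in
/etc/security/limits.conf, assuming a dedicated "cassandra" service account
(the account name and exact value are assumptions; the thread does not say
which user runs the process):

# Illustrative values only: raise the open-file limit to 1,000,000 ("1kk")
# for a hypothetical "cassandra" user. PAM applies limits.conf at login,
# so Cassandra must be restarted from a fresh session to pick this up.
cassandra soft nofile 1000000
cassandra hard nofile 1000000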
From: Murthy Chelankuri
Subject: RE: Getting into Too many open files issues

Thanks, Pieter, for the quick reply.

I have downloaded the tarball and changed limits.conf as per the
documentation, like below:

* soft nofile 32768
* hard nofile 32768
root soft nofile 32768
root hard nofile 32768
* soft memlock unlimited
* hard memlock unlimited
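
With a tarball install it is worth confirming those limits actually reached
the process: limits.conf is applied by PAM at login, so a JVM started before
the change (or from a non-login context) keeps the old values. A quick check,
assuming Linux and the stock CassandraDaemon main class:

ulimit -n                            # limit of the current shell
pid=$(pgrep -f CassandraDaemon)      # PID of the running Cassandra JVM
grep 'open files' /proc/$pid/limits  # limit the process actually inherited
ls /proc/$pid/fd | wc -l             # descriptors currently in use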
> However, with the 2.0.x I had to raise it to 1 000 000 because 100 000 was
> too low.
>
> Kind regards,
>
> Pieter Callewaert
>
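At values in that range, kernel-wide ceilings can also come into play; a
quick way to see them (standard Linux sysctls; the defaults vary by
distribution):

sysctl fs.file-max   # system-wide cap on open file handles
sysctl fs.nr_open    # per-process ceiling that a nofile limit cannot exceed
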
> From: Murthy Chelankuri [mailto:kmurt...@gmail.com]
> Sent: Thursday, 7 November 2013 12:15
> To: user@cassandra.apache.org
> Subject: Getting into Too many open files issues
>
> I have been experimenting with the latest Cassandra version for storing
> huge amounts of data in our application.
>
> Writes are doing well, but when it comes to reads I have observed that
> Cassandra runs into "too many open files" issues. When I check the logs, it
> is no longer able to open the Cassandra data files because of the file
> descriptor limit.
>
> Can someone suggest ...
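
Worth noting for that symptom: reads keep descriptors open for SSTable data
and index files, so descriptor demand grows with the SSTable count. A rough
way to gauge it with the nodetool that ships with Cassandra (a sketch; the
output format varies by version):

nodetool cfstats | grep -i 'sstable count'   # SSTables per column family

A column family with tens of thousands of SSTables can exhaust even a
generous limit, in which case compaction settings deserve as much attention
as the ulimit itself.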