Paul Nickerson,
curious, did you get a solution to your problem?
Regards,
Jan
On Tuesday, February 10, 2015 5:48 PM, Flavien Charlon wrote:
I have experienced the same problem (hundreds of thousands of SSTables)
with Cassandra 2.1.2. It seems to appear when running an incremental repair
while there is a medium-to-high insert load on the cluster. The repair goes
into a bad state and starts creating way more SSTables than it should (even ...
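For reference, the kind of incremental repair that seems to trigger this looks
roughly like the following (a sketch only; the keyspace/table names are
placeholders, and -par/-inc are the 2.1 nodetool flags for parallel and
incremental repair):

    # Parallel incremental repair on one table (names are hypothetical).
    nodetool repair -par -inc my_keyspace my_table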
This kind of recovery is definitely not my strong point, so feedback on
this approach would certainly be welcome.
As I understand it, if you really want to keep that data, you ought to be
able to mv it out of the way to get your node online, then move those files
back in, several thousand at a time, ...
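A rough sketch of that approach, assuming a package install with the default
data directory (both paths and the batch size here are assumptions):

    # With Cassandra stopped, move everything aside so the node can start.
    DATA=/var/lib/cassandra/data/OpsCenter/rollups60
    BACKUP=/var/lib/cassandra/rollups60-backup
    mkdir -p "$BACKUP"
    find "$DATA" -maxdepth 1 -type f -print0 | xargs -0 mv -t "$BACKUP"

    # Later, feed files back a few thousand at a time and let compaction
    # catch up between batches; nodetool refresh makes Cassandra pick
    # them up without a restart.
    ls "$BACKUP" | head -n 5000 | while read -r f; do
        mv "$BACKUP/$f" "$DATA/"
    done
    nodetool refresh OpsCenter rollups60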
Yeah... probably just 2.1.2 things and not compactions. Still probably
want to do something about the 1.6 million files though. It may be worth
just mv/rm'ing the 60 sec rollup data unless you're really attached to it.
Chris
On Tue, Feb 10, 2015 at 4:04 PM, Paul Nickerson wrote:
I was having trouble with snapshots failing while trying to repair that
table (http://www.mail-archive.com/user@cassandra.apache.org/msg40686.html).
I have a repair running on it now, and it seems to be going successfully
this time. I am going to wait for that to finish, then try a manual
nodetool ...
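(If useful: a repair scoped to just that table would look something like the
line below; a sketch only, using the keyspace/table names from this thread.)

    # Repair only the problem table rather than the whole node.
    nodetool repair OpsCenter rollups60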
Your cluster is probably having issues with compactions (with STCS you
should never have this many). I would probably punt with
OpsCenter/rollups60. Turn the node off and move all of the sstables off to
a different directory for backup (or just rm if you really don't care about
1 minute metrics), ...
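Concretely, that might look like the following (the paths assume the Debian
package defaults and are a guess):

    sudo service cassandra stop
    # Keep the sstables as a backup...
    sudo mv /var/lib/cassandra/data/OpsCenter/rollups60 \
        /var/lib/cassandra/rollups60-backup
    # ...or drop the 1 minute metrics outright:
    # sudo rm -rf /var/lib/cassandra/data/OpsCenter/rollups60
    sudo service cassandra start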
Thank you Rob. I tried a 12 GiB heap size, and still crashed out. There
are 1,617,289 files under OpsCenter/rollups60.
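(A 12 GiB heap can be set by overriding the defaults in
/etc/cassandra/cassandra-env.sh, roughly as below; the HEAP_NEWSIZE value is
an assumption, following the usual ~100 MB per core guidance.)

    # /etc/cassandra/cassandra-env.sh
    # Uncomment and set both; the script requires them to be set as a pair.
    MAX_HEAP_SIZE="12G"
    HEAP_NEWSIZE="800M"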
Once I downgraded Cassandra to 2.1.1 (apt-get install cassandra=2.1.1), I
was able to start up Cassandra OK with the default heap size formula.
Now my cluster is running multiple ...
On Tue, Feb 10, 2015 at 11:02 AM, Paul Nickerson wrote:
I am getting an out of memory error when I try to start Cassandra on one of
my nodes. Cassandra will run for a minute, and then exit without outputting
any error in the log file. It is happening while SSTableReader is opening a
couple hundred thousand things.
I am running a 6 node cluster using Apache Cassandra ...
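A quick way to spot which table is hoarding files, assuming the default
keyspace/table data directory layout of a package install:

    # Count files per table directory, largest first.
    for d in /var/lib/cassandra/data/*/*/; do
        printf '%9d %s\n' "$(ls "$d" | wc -l)" "$d"
    done | sort -rn | head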