duh, sorry. That estimate is: 2 TB would be 15 nodes at RF = 3.
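For readers following the arithmetic, the 15-node figure falls out of total stored size (data × replication factor) divided by a per-node target. This is a back-of-envelope sketch only; it assumes 2 TB ≈ 2000 GB of unique data and the ~400 GB/node guideline discussed later in this thread:

```python
import math

# Back-of-envelope cluster sizing (assumptions: 2 TB ~= 2000 GB of
# unique data, RF = 3, ~400 GB per-node target as per the thread).
raw_data_gb = 2000          # unique data before replication
rf = 3                      # replication factor
per_node_target_gb = 400    # hypothetical per-node target (GB, not MB)

total_stored_gb = raw_data_gb * rf                       # every row is stored RF times
nodes = math.ceil(total_stored_gb / per_node_target_gb)  # round up to whole nodes
print(nodes)  # -> 15
```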
From: Poziombka, Wade L [mailto:wade.l.poziom...@intel.com]
Sent: Friday, December 07, 2012 7:15 AM
To: user@cassandra.apache.org
Subject: RE: Freeing up disk space on Cassandra 1.1.5 with Size-Tiered
compaction.
So if my calculations …
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Thursday, December 06, 2012 9:43 PM
To: user@cassandra.apache.org
Subject: Re: Freeing up disk space on Cassandra 1.1.5 with Size-Tiered
compaction.
Meaning terabyte size databases.
Lots of people have TB sized systems. Just add …
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Wednesday, December 05, 2012 9:23 PM
To: user@cassandra.apache.org
Subject: Re: Freeing up disk space on Cassandra 1.1.5 with Size-Tiered
compaction.
…required to recover downed node.
> But this 300-400MB business is interesting to me.
>
> Thanks in advance.
>
> Wade
>
> From: aaron morton [mailto:aa...@thelastpickle.com]
> Sent: Wednesday, December 05, 2012 9:23 PM
> To: user@cassandra.apache.org
I think Aaron meant 300-400GB instead of 300-400MB.
Thanks.
-Wei
- Original Message -
From: "Wade L Poziombka"
To: user@cassandra.apache.org
Sent: Thursday, December 6, 2012 6:53:53 AM
Subject: RE: Freeing up disk space on Cassandra 1.1.5 with Size-Tiered
compaction.
> Basically we were successful on two of the nodes. They both took ~2 days and
> 11 hours to complete and at the end we saw one very large file ~900GB and the
> rest much smaller (the overall size decreased). This is what we expected!
I would recommend having up to 300MB to 400MB per node on a re…
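For context on the ~900GB file mentioned above: a major compaction under size-tiered compaction merges every SSTable of a column family into a single output, and the old files can only be deleted after the new one is fully written, which is why free-disk headroom matters so much. A rough sketch of that headroom check follows; the SSTable sizes are invented for illustration, and the 2x peak is the worst case where almost nothing is deleted or overwritten:

```python
# Worst-case disk headroom during a size-tiered major compaction:
# the merged output can be nearly as large as all inputs combined,
# and inputs are deleted only after the output is fully written.
sstable_sizes_gb = [900, 60, 30, 10, 5]  # hypothetical SSTable sizes
disk_capacity_gb = 3 * 1024              # 3 TB RAID 0, as in this thread

live_gb = sum(sstable_sizes_gb)
peak_gb = live_gb * 2                    # old files + new file coexist briefly
print(live_gb, peak_gb, peak_gb <= disk_capacity_gb)  # -> 1005 2010 True
```

With ~900GB already live on a 3TB disk this still fits, which matches the thread's experience that the major compaction completed; the margin shrinks as data grows.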
Hi guys,
Sorry for the late follow-up but I waited to run major compactions on all 3
nodes at a time before replying with my findings.
Basically we were successful on two of the nodes. They both took ~2 days
and 11 hours to complete and at the end we saw one very large file ~900GB
and the rest much smaller (the overall size decreased). This is what we expected!
> From what I know having too much data on one node is bad, not really sure
> why, but I think that performance will go down due to the size of indexes
> and bloom filters (I may be wrong on the reasons but I'm quite sure you can't
> store too much data per node).
If you have many hundreds of …
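The index and bloom-filter concern above is quantifiable: Cassandra holds a bloom filter per SSTable in memory, and a classic bloom filter needs roughly -n·ln(p)/(ln 2)² bits for n rows at false-positive rate p. A hedged back-of-envelope sketch, with a made-up per-node row count (the 2-billion figure is an assumption, not from the thread):

```python
import math

def bloom_filter_bytes(n_rows, fp_rate):
    """Classic bloom filter sizing: m = -n * ln(p) / (ln 2)^2 bits."""
    bits = -n_rows * math.log(fp_rate) / (math.log(2) ** 2)
    return bits / 8

# Hypothetical: 2 billion rows per node at a 1% false-positive target.
per_node_bytes = bloom_filter_bytes(2_000_000_000, 0.01)
print(round(per_node_bytes / 1024**3, 1))  # -> 2.2 (GB of heap, filters alone)
```

At ~9.6 bits per row, memory for bloom filters grows linearly with row count, which is one concrete reason dense nodes put pressure on the heap.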
Hi Alexandru,
"We are running a 3 node Cassandra 1.1.5 cluster with a 3TB Raid 0 disk per
node for the data dir and separate disk for the commitlog, 12 cores, 24 GB
RAM"
I think you should tune your architecture in a very different way. From
what I know having too much data on one node is bad, not really sure why,
but I think that performance will go down due to the size of indexes and
bloom filters (I may be wrong on the reasons but I'm quite sure you can't
store too much data per node).
Hello everyone,
We are running a 3 node Cassandra 1.1.5 cluster with a 3TB Raid 0 disk per
node for the data dir and separate disk for the commitlog, 12 cores, 24 GB
RAM (12GB to Cassandra heap).
We now have 1.1 TB worth of data per node (RF = 2).
Our data input is between 20 to 30 GB per day, d…
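Those numbers imply a fairly short runway: with RF = 2 on 3 nodes, each node receives roughly two thirds of each day's input, and size-tiered compaction wants about 50% of the disk free for its worst-case merge. A rough sketch under those assumptions (the 25 GB/day midpoint and the 50% headroom rule are assumptions, not figures from the thread):

```python
# Rough runway estimate for the cluster described above:
# 3 TB disk per node, ~1.1 TB already used, 20-30 GB/day ingest
# cluster-wide, RF = 2 across 3 nodes.
disk_gb = 3 * 1024
used_gb = 1.1 * 1024
daily_ingest_gb = 25               # midpoint of 20-30 GB/day (assumption)
rf, nodes = 2, 3
per_node_daily_gb = daily_ingest_gb * rf / nodes

# Size-tiered compaction wants ~50% free disk for the worst-case merge.
usable_gb = disk_gb * 0.5
days_left = (usable_gb - used_gb) / per_node_daily_gb
print(int(days_left))  # -> 24
```

Under these assumptions the nodes cross the safe-headroom line in under a month, which supports the earlier advice in the thread to add nodes rather than let per-node data keep growing.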