You can watch this: https://www.youtube.com/watch?v=uoggWahmWYI
Aaron discusses support for big nodes in this talk.
On Wed, May 14, 2014 at 3:13 AM, Yatong Zhang wrote:
> Thank you Aaron, but we're planning about 20T per node, is that feasible?
>
>
On Mon, May 12, 2014 at 4:33 PM, Aaron Morton wrote:
Hi Michael, thanks for the reply,
I would RAID0 all those data drives, personally, and give up managing them
> separately. They are on multiple PCIe controllers, one drive per channel,
> right?
>
RAID 0 is a simple way to go, but one disk failure takes the whole
volume down, so I am afraid RAID 0 is too risky for us.
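As an alternative to RAID 0 (a sketch, not from the thread): Cassandra can manage the drives individually (JBOD) via `data_file_directories`, so a single disk failure costs only that disk's data, subject to `disk_failure_policy`. A minimal cassandra.yaml fragment, with hypothetical mount points:

```yaml
# Hypothetical mount points; one entry per physical disk (JBOD instead of RAID 0).
data_file_directories:
    - /data/disk1/cassandra
    - /data/disk2/cassandra
    - /data/disk3/cassandra
# Keep serving from the surviving disks when one fails
# ('stop' is the more conservative default).
disk_failure_policy: best_effort
```

Whether JBOD or RAID 0 wins depends on how much data per node you are comfortable rebuilding when a disk dies.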
On 05/13/2014 08:13 PM, Yatong Zhang wrote:
> Thank you Aaron, but we're planning about 20T per node, is that feasible?
20T per node is 4x the maximum recommended data per node (5T/node), and
that guideline already assumes high-end hardware: 16+ cores, 128-256G of
RAM, SSDs, and 10GigE.
pgs 12-13
Thank you Aaron, but we're planning about 20T per node, is that feasible?
On Mon, May 12, 2014 at 4:33 PM, Aaron Morton wrote:
> We've learned that compaction strategy is an important point, because
> we've run into 'no space' trouble with the 'size-tiered' compaction
> strategy.
>
>
Hi,
We're going to deploy a large, PB-scale Cassandra cluster. Our scenario
would be:
1. Lots of writes: about 150 writes/second on average, each about 300K
in size.
2. A relatively very small read load.
3. Our data will never be updated.
4. But we will delete old data periodically to free space
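A back-of-envelope sizing sketch for the workload above (my own numbers, not from the mails: "300K" taken as decimal kilobytes, replication factor of 3 assumed, and the 5T/node guideline cited elsewhere in the thread):

```python
# Back-of-envelope capacity math for ~150 writes/s at ~300 KB each.
WRITES_PER_SEC = 150
BYTES_PER_WRITE = 300_000          # "300K", taken as decimal kilobytes
REPLICATION_FACTOR = 3             # assumed; the thread does not state RF
MAX_TB_PER_NODE = 5                # guideline cited in the thread

ingest_mb_per_sec = WRITES_PER_SEC * BYTES_PER_WRITE / 1e6
raw_tb_per_day = WRITES_PER_SEC * BYTES_PER_WRITE * 86_400 / 1e12
replicated_tb_per_day = raw_tb_per_day * REPLICATION_FACTOR

# Nodes needed just to hold 1 PB of raw data at RF=3 and 5 TB/node:
nodes_for_1pb = 1_000 * REPLICATION_FACTOR / MAX_TB_PER_NODE

print(f"ingest: {ingest_mb_per_sec:.0f} MB/s")
print(f"raw:    {raw_tb_per_day:.2f} TB/day")
print(f"RF=3:   {replicated_tb_per_day:.2f} TB/day")
print(f"nodes for 1 PB raw at RF=3: {nodes_for_1pb:.0f}")
```

At roughly 45 MB/s of sustained ingest the write rate itself is modest; the pressure here is storage, which is why the data-per-node limit dominates the cluster size.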
> We've learned that compaction strategy is an important point, because
> we've run into 'no space' trouble with the 'size-tiered' compaction
> strategy.
If you want to get the most out of the raw disk space, LCS is the way to
go; just remember it uses approximately twice the disk IO.
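For reference, switching an existing table to LCS is a one-line schema change (keyspace and table names here are hypothetical):

```sql
-- Hypothetical table; replaces the default SizeTieredCompactionStrategy.
ALTER TABLE mykeyspace.events
  WITH compaction = { 'class' : 'LeveledCompactionStrategy',
                      'sstable_size_in_mb' : 160 };
```

LCS keeps space amplification low, which is exactly the 'no space' problem described above, but it pays for that with the extra compaction IO Aaron mentions.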