Re: Fill disks more than 50%

2011-02-25 Thread Edward Capriolo
On Fri, Feb 25, 2011 at 7:38 AM, Terje Marthinussen wrote:
>> @Thibaut Britz
>> Caveat: Using simple strategy.
>> This works because Cassandra scans data at startup and then serves
>> what it finds. For a join, for example, you can rsync all the data from
>> the node below/to the right of where th

Re: Fill disks more than 50%

2011-02-25 Thread Terje Marthinussen
> @Thibaut Britz
> Caveat: Using simple strategy.
> This works because Cassandra scans data at startup and then serves
> what it finds. For a join, for example, you can rsync all the data from
> the node below/to the right of where the new node is joining. Then
> join without bootstrap, then cleanu
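The rsync-based join described in this message could look roughly like the operational sketch below. The host names, data paths, and keyspace name are illustrative assumptions, not details from the thread:

```shell
# Hypothetical sketch of a join without bootstrap (simple strategy assumed).
# Host names, paths, and the keyspace name are made up for illustration.

# 1. Copy the SSTables from the neighbouring node that currently owns the
#    range the new node will take over.
rsync -av existing-node:/var/lib/cassandra/data/my_keyspace/ \
          /var/lib/cassandra/data/my_keyspace/

# 2. Start the new node with bootstrap disabled, so Cassandra scans the data
#    directory at startup and serves what it finds instead of streaming.
#    (cassandra.yaml: auto_bootstrap: false)

# 3. After the node has joined the ring, drop the keys it does not own.
nodetool -h new-node cleanup
```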

Re: Fill disks more than 50%

2011-02-25 Thread Terje Marthinussen
> I am suggesting that you probably want to rethink your schema design,
> since partitioning by year is going to give bad performance: the
> old servers are going to be nothing more than expensive tape drives.

You fail to see the obvious. It is just the fact that most of the data is stale

Re: Fill disks more than 50%

2011-02-24 Thread Edward Capriolo
On Thu, Feb 24, 2011 at 4:08 AM, Thibaut Britz wrote:
> Hi,
>
> How would you use rsync instead of repair in case of a node failure?
>
> Rsync all files from the data directories from the adjacent nodes
> (which are part of the quorum group) and then run a compaction which
> will remove all the

Re: Fill disks more than 50%

2011-02-24 Thread Thibaut Britz
Hi,

How would you use rsync instead of repair in case of a node failure?

Rsync all files from the data directories from the adjacent nodes (which are part of the quorum group) and then run a compaction which will remove all the unneeded keys?

Thanks,
Thibaut

On Thu, Feb 24, 2011 at 4:22 AM,
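The repair-via-rsync idea being asked about could be sketched like this; host names and paths are assumptions for illustration:

```shell
# Hypothetical sketch of using rsync instead of anti-entropy repair after a
# node failure. Host names and paths are illustrative only.

# 1. Pull the SSTables from each adjacent replica of the same ranges.
rsync -av replica1:/var/lib/cassandra/data/my_keyspace/ \
          /var/lib/cassandra/data/my_keyspace/
rsync -av replica2:/var/lib/cassandra/data/my_keyspace/ \
          /var/lib/cassandra/data/my_keyspace/

# 2. A major compaction merges the copied SSTables and discards obsolete
#    versions of rows.
nodetool -h restored-node compact my_keyspace

# 3. Cleanup, not compaction, is what drops keys outside the ranges this
#    node owns (the "unneeded keys" in the question).
nodetool -h restored-node cleanup my_keyspace
```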

Re: Fill disks more than 50%

2011-02-23 Thread Edward Capriolo
On Wed, Feb 23, 2011 at 9:39 PM, Terje Marthinussen wrote:
> Hi,
> Given that you have always increasing key values (timestamps) and never
> delete and hardly ever overwrite data.
> If you want to minimize work on rebalancing and statically assign (new)
> token ranges to new nodes as you add

Fill disks more than 50%

2011-02-23 Thread Terje Marthinussen
Hi,

Given that you have always-increasing key values (timestamps) and never delete and hardly ever overwrite data. If you want to minimize work on rebalancing, statically assign (new) token ranges to new nodes as you add them, so they always get the latest data. Let's say you add a new n
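The static token assignment described here could be sketched as below, assuming an order-preserving partitioner and row keys that begin with a date string. The function names and the one-range-per-year granularity are my assumptions, not details from the thread:

```python
# Sketch of statically assigning token ranges for always-increasing
# (timestamp-prefixed) keys, assuming an order-preserving partitioner.
# Function names and yearly granularity are assumptions for illustration.

def token_for_year(year):
    """Token at the start of a year, for keys sorted as YYYYMMDD... strings."""
    return "%04d0101" % year

def plan_tokens(first_year, nodes):
    """Give each node, in join order, the next (newest) year-long range."""
    return {node: token_for_year(first_year + i)
            for i, node in enumerate(nodes)}

# Each newly added node statically takes the latest range, so joining it
# never requires rebalancing the old data.
print(plan_tokens(2008, ["node1", "node2", "node3", "node4"]))
```

Because the keys only ever grow, the newest node always receives the newest range; the trade-off raised later in the thread is that older nodes then serve almost no traffic.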