You're welcome. If you have some feedback, you can comment on the blog post :-).
C*heers,
---
Alain Rodriguez - al...@thelastpickle.com
France
The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com
2016-02-25 12:28 GMT+01:00 Anishek Agarwal:
> Nice, thanks!
Nice, thanks!
On Thu, Feb 25, 2016 at 1:51 PM, Alain RODRIGUEZ wrote:
> For what it is worth, I finally wrote a blog post about this -->
> http://thelastpickle.com/blog/2016/02/25/removing-a-disk-mapping-from-cassandra.html
>
> If you are not done yet, every step is detailed in there.
>
> C*heers,
For what it is worth, I finally wrote a blog post about this -->
http://thelastpickle.com/blog/2016/02/25/removing-a-disk-mapping-from-cassandra.html
If you are not done yet, every step is detailed in there.
C*heers,
---
Alain Rodriguez - al...@thelastpickle.com
France
The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com
>
> Alain, thanks for sharing! I'm confused about why you do so many
> repetitive rsyncs. Just being cautious, or is there another reason? Also,
> why do you have --delete-before when you're copying data to a temp
> (assumed empty) directory?
Since they are immutable, I do a first sync to the new location while everything is up and running, which takes quite a long time.
Jan, thanks! It makes perfect sense to run a second sync before stopping
Cassandra. I'll add that in when I do the production cluster.
On Fri, Feb 19, 2016 at 12:16 AM, Jan Kesten wrote:
> Hi Branton,
>
> two cents from me - I didn't look through the script, but for the rsyncs I
> do pretty much the same when moving them.
Here's what I ended up doing on a test cluster. It seemed to work well.
I'm running a full repair on the production cluster, probably over the
weekend, then I'll have a go at the test cluster again and go for broke.
# sync to temporary directory on original volume
rsync -azvuiP /var/data/cassandr
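The rsync line above is cut off in the archive. Purely as an illustration, the full sequence might look something like the following, assuming the disk being removed is mounted at /mnt/disk2, the surviving disk at /mnt/disk1, and a temp directory named data_tmp (all three paths are assumptions, not taken from the original script):

# First pass while Cassandra is still running: SSTables are immutable,
# so the bulk of the data can be copied live (this pass takes the longest).
rsync -azvuiP /mnt/disk2/cassandra/data/ /mnt/disk1/cassandra/data_tmp/

# Flush memtables and stop the node before the final pass.
nodetool drain
sudo service cassandra stop

# Second pass transfers only the SSTables written since the first pass.
rsync -azvuiP /mnt/disk2/cassandra/data/ /mnt/disk1/cassandra/data_tmp/

# Merge the synced files into the live data directory, then update
# cassandra.yaml (see further down the thread) and restart the node.
rsync -a /mnt/disk1/cassandra/data_tmp/ /mnt/disk1/cassandra/data/
rm -rf /mnt/disk1/cassandra/data_tmp
sudo service cassandra start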
Hi Branton,
two cents from me - I didn't look through the script, but for the rsyncs I do
pretty much the same when moving them. Since they are immutable, I do a first
sync to the new location while everything is up and running, which takes quite
a long time. Meanwhile new SSTables are created, so I sync the remaining delta
with a second run just before stopping the node.
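To make the two-pass idea concrete, a small sketch (same assumed paths as in the example above): before stopping the node, a dry run shows how small the remaining delta has become:

# -n makes rsync only list what a second pass would still transfer;
# when this output is small, the downtime for the final sync will be short.
rsync -azvuin /mnt/disk2/cassandra/data/ /mnt/disk1/cassandra/data_tmp/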
Alain, thanks for sharing! I'm confused about why you do so many
repetitive rsyncs. Just being cautious, or is there another reason? Also,
why do you have --delete-before when you're copying data to a temp
(assumed empty) directory?
On Thu, Feb 18, 2016 at 4:12 AM, Alain RODRIGUEZ wrote:
> I did the process a few weeks ago and ended up writing a runbook and a script.
I did the process a few weeks ago and ended up writing a runbook and a
script. I have anonymised them and shared them, FWIW.
https://github.com/arodrime/cassandra-tools/tree/master/remove_disk
It is basic bash. I tried to have the shortest downtime possible, which makes
this a bit more complex, but it allows the node to be down only briefly.
Hey Branton,
Please do let us know if you face any problems doing this.
Thanks
anishek
On Thu, Feb 18, 2016 at 3:33 AM, Branton Davis wrote:
> We're about to do the same thing. It shouldn't be necessary to shut down
> the entire cluster, right?
>
> On Wed, Feb 17, 2016 at 12:45 PM, Robert Coli wrote:
You can do this in a "rolling" fashion (one node at a time).
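As a sketch of the per-node sequence (commands assumed for illustration, not from this mail), repeated for one node at a time:

# On the node being migrated:
nodetool drain                # flush memtables, stop accepting writes
sudo service cassandra stop
# ...copy the data and edit cassandra.yaml as discussed in this thread...
sudo service cassandra start
# Wait until the node is back Up/Normal (UN) before touching the next one:
nodetool status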
On Wed, 17 Feb 2016 at 14:03 Branton Davis wrote:
> We're about to do the same thing. It shouldn't be necessary to shut down
> the entire cluster, right?
>
> On Wed, Feb 17, 2016 at 12:45 PM, Robert Coli wrote:
We're about to do the same thing. It shouldn't be necessary to shut down
the entire cluster, right?
On Wed, Feb 17, 2016 at 12:45 PM, Robert Coli wrote:
> On Tue, Feb 16, 2016 at 11:29 PM, Anishek Agarwal wrote:
>> To accomplish this, can I just copy the data from disk1 to disk2 within
>> the relevant Cassandra home location folders?
> Yes.
On Tue, Feb 16, 2016 at 11:29 PM, Anishek Agarwal wrote:
>
> To accomplish this, can I just copy the data from disk1 to disk2 within
> the relevant Cassandra home location folders, change the cassandra.yaml
> configuration, and restart the node? Before starting, I will shut down the
> cluster.
>
Yes.
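For illustration, the cassandra.yaml side of this is just dropping the old directory from data_file_directories; a sketch assuming the two disks are mounted at /mnt/disk1 and /mnt/disk2 (assumed paths):

# cassandra.yaml before, one data directory per SSD:
data_file_directories:
    - /mnt/disk1/cassandra/data
    - /mnt/disk2/cassandra/data

# cassandra.yaml after, disk2 removed (copy its SSTables onto disk1 first):
data_file_directories:
    - /mnt/disk1/cassandra/data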
An additional note: we are using Cassandra 2.0.15 and have 5 nodes in the
cluster, going to expand to 8 nodes.
On Wed, Feb 17, 2016 at 12:59 PM, Anishek Agarwal wrote:
> Hello,
>
> We started with two 800GB SSDs on each Cassandra node, based on our initial
> estimations of read/write rates.
Hello,
We started with two 800GB SSDs on each Cassandra node, based on our initial
estimations of read/write rates. As we started onboarding additional
traffic, we found that CPU was becoming a bottleneck and we were not able to
run the low-priority (nice) jobs, like compaction, very well. We have started
expanding the cluster.