Hi Anand,
Yes, you should change the replication factor of the system_auth keyspace.
By default it is RF = 1, which is very dangerous in production: if you lose
one node, users can lose all access (it is a SPOF).
You should use multi-DC replication for this keyspace.
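As a sketch, the change could look like this (the datacenter names DC1/DC2 and the replica counts are assumptions; adjust them to your own topology):

```cql
-- Raise system_auth replication across both datacenters.
-- DC names and counts below are placeholders for your topology.
ALTER KEYSPACE system_auth
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'DC1': 3,
    'DC2': 3
  };
```

After the ALTER, you still need to run a repair of system_auth on each node so the existing auth data is actually streamed to the new replicas.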
Regards,
Julien
2014-04
Hi,
When you increase the RF, you need to run a repair for the keyspace
on each node (because existing data is not automatically streamed).
After that, you should run a cleanup on each node to remove obsolete
SSTables.
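The steps above can be sketched as commands run on every node, one node at a time (the keyspace name my_keyspace is just a placeholder):

```shell
# Run on each node in turn, after increasing the RF:
nodetool repair my_keyspace    # stream the data this node now replicates
nodetool cleanup my_keyspace   # remove obsolete SSTables afterwards
```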
Good luck :)
Julien Campan.
2013/12/18 Aaron Morton
> - choose between them.
In the code, for this case, they add a tie-break that returns the value
that comes first in lexical order.
I think that doing many updates with the same timestamp is not a good
pattern with Cassandra, and you should try to find another way to perform
your operations.
Julien CAMPAN
or the repair. After the compaction this is
>> streamed to the different nodes in order to repair them.
>>
>> If you trigger this on every node simultaneously you basically take the
>> performance away from your cluster. I would expect cassandra still to
>> function, ju
Hi,
I'm working with Cassandra 1.2.2 and I have a question about nodetool
cleanup.
In the documentation, it is written: "Wait for cleanup to complete on one
node before doing the next."
I would like to know why we can't run cleanup on several nodes at the same
time.
Thanks
adding the new server address to the seeds list, and it should normally
work :)
You should add your new node to the seeds list only after the bootstrap
operation has completed.
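For reference, the seeds list lives in the seed provider section of cassandra.yaml; a sketch (the IP addresses are placeholders):

```yaml
# cassandra.yaml -- add the new node's address here only after it has
# finished bootstrapping (addresses below are examples).
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.1,10.0.0.2,10.0.0.3"
```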
Julien Campan
2013/11/21 Tamar Rosen
> Hi,
>
> We are testing the process of adding a node to a cluster using
space_name/cf_name/
4 Use sstableloader to load the SSTables from each directory; sstableloader
guarantees that the data is placed on the correct nodes.
5 Run a repair on each node.
Sstableloader is the right tool for this kind of operation.
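Step 4 might look like this (the host address and the directory path are placeholders):

```shell
# Load the SSTables of one column family. sstableloader infers the
# keyspace and table names from the directory layout and streams each
# row to the nodes that own it.
sstableloader -d 10.0.0.1 /path/to/keyspace_name/cf_name/
```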
Good luck :)
Julien Campan.
2013/11/19 Aaron Morton
> we
down at the same time, you can put only three nodes.
Julien Campan
2013/6/24 Hiller, Dean
> For ease of use, we actually had a single cassandra.yaml deployed to every
> machine and a script that swapped out the token and listen address. I had
> seed nodes ip1,ip2,ip3 as the seeds b
Hi Christophe,
I noticed your email just now. Do you still need some feedback for your
thesis on NoSQL?
Cheers,
Julien
2013/4/8 Christophe Caron
> Hi all,
>
> I'm currently preparing my master's thesis in IT sciences at Itescia
> school and UPMC university in France. This thesis focuses on NoS
> Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 3/04/2013, at 8:11 PM, julien Campan wrote:
Hi,
I'm working with Cassandra 1.2.2.
When I try to drop a column, it does not work.
This is what I tried:
CREATE TABLE cust (
    ise text PRIMARY KEY,
    id_avatar_1 uuid,
    id_avatar_2 uuid,
    id_avatar_3 uuid,
    id_avatar_4 uuid
);
cqlsh> ALTER TABLE cust DROP id_avatar_1 ;
==> Bad Request
Ok, thanks for the answer.
I have created a bug report : number 5329.
2013/3/11 Sylvain Lebresne
>
> It seems to me that "repair -pr" is not compatible with a vnode cluster.
>> is it true ?
>>
>
> I'm afraid that's probably true. "repair --pr" should repair every
> "primary range" for all