sk space.
>
> Regards,
>
> Carlos Juzarte Rolo
> Cassandra Consultant
>
> Pythian - Love your data
>
> rolo@pythian | Twitter: cjrolo | Linkedin: linkedin.com/in/carlosjuzarterolo
> Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
> www.pythian.com
>
> On Mon, Apr 20, 2015 at 11:02 AM, Or Sher wrote:
>>
>> Hi all,
>> In the near future I'll need to add more than 10 nodes to a 2.0.9
>> cluster (using vnodes).
>> I read this documentation on DataStax about adding nodes. We're
>> not using racks configuration, and from reading this documentation I'm
>> not really sure whether it is safe for us to bootstrap all the nodes
>> together (with two minutes between each one).
>> I really hate the thought of doing it one by one; I assume it would take
>> more than 6 hours per node.
>> What do you say?
>> --
>> Or Sher
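For reference, a minimal sketch of the conservative approach (one new node at a time, waiting for each to finish joining before starting the next); the IPs and the service command are placeholders, not the poster's actual setup:

    # Sketch only: start each new node and wait until it reports UN (Up/Normal)
    # in nodetool status before starting the next one. IPs are placeholders.
    for ip in 10.0.0.11 10.0.0.12 10.0.0.13; do
      ssh "$ip" 'sudo service cassandra start'
      until nodetool status | grep "$ip" | grep -q '^UN'; do
        sleep 60   # still joining (UJ); check again in a minute
      done
    done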
> way with a USB docking station) will be much faster and produce less IO
> and CPU impact on your cluster.
>
> Keep that in Mind :-)
>
> Cheers,
> Jan
>
> On 22.12.2014 at 10:58, Or Sher wrote:
>
> Great. replace_address works great.
> For some reason I thought it would matter, but it's irrelevant: the
> process is the same, a "new node" can be the same hostname and IP or it can
> have totally different ones.
>
> On Sun, Dec 21, 2014 at 6:01 AM, Or Sher wrote:
>>
>> If I use the replace_address parameter with the same IP address, would
>> that do the job?
On Sun, Dec 21, 2014 at 11:20 AM, Or Sher wrote:
> What I want to do is something like replacing a dead node -
> http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_replace_node_t.html
> but replacing it with a clean node with the same IP and hostname.
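For reference, a hedged sketch of how the replace flag is usually passed on the replacement node; the address below is a placeholder and stands for the dead node's IP:

    # On the clean replacement node (data, commitlog and saved_caches empty),
    # add the flag, e.g. at the end of cassandra-env.sh, start Cassandra once,
    # and remove the flag again after bootstrap/streaming has finished.
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.12"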
On Sun, Dec 21, 2014 at 9:53 AM, Or Sher wrote:
> Thanks guys.
> I h
> rsync the full contents there, start Cassandra on the node
> - everything should be fine ;-)
>
> Of course you will need a replication factor > 1 for this to work ;-)
>
> Just my 2 cents,
> Jan
>
> On 18.12.2014 at 16:17, Or Sher wrote:
>
> Hi all,
>>
>> We
a 250G
data node?
Thanks in advance,
Or.
--
Or Sher
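A minimal sketch of the rsync approach Jan describes, assuming default package paths and that the node is stopped while copying; "newhost" is a placeholder:

    # With Cassandra stopped on the old machine, copy the data directory to the
    # new machine (keeping the same IP/hostname/tokens), then start it there.
    sudo service cassandra stop
    rsync -avP /var/lib/cassandra/ newhost:/var/lib/cassandra/
    ssh newhost 'sudo service cassandra start'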
hange in order to decrease latency.
--
Or Sher
I think it's some kind of a Linux kernel bug.
BTW, atd was always stopped, so I'm not really sure yet if it was part of
the problem or not.
HTH,
Or.
On Wed, Aug 13, 2014 at 9:22 AM, Or Sher wrote:
> Will do the same!
> Thanks,
> Or.
>
>
> On Tue, Aug 12, 2014 at 6:47
> I'll post here if I figure it out
> (please do the same!). My working hypothesis now is that we had some
> kind of OOM problem.
>
> Best regards,
> Clint
>
> On Tue, Aug 12, 2014 at 12:23 AM, Or Sher wrote:
> > Clint, did you find anything?
> > I just notic
> > system level. As CASSANDRA-7507 indicates, JVM OOM does not
> > necessarily result in the Cassandra process dying, and can in fact trigger
> > a clean shutdown.
> >
> > System level OOM will in fact send the equivalent of KILL, which will not
> > trigger the clean shutdown hook in Cassandra.
> >
> > =Rob
>
--
Or Sher
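A quick, hedged way to tell the two cases apart on a node that died; the log path assumes a default package install:

    # Kernel OOM killer leaves traces in dmesg/syslog; a JVM OOM or a clean
    # shutdown shows up in Cassandra's own log instead.
    dmesg | grep -iE 'out of memory|killed process'
    grep -i 'OutOfMemoryError' /var/log/cassandra/system.log
    grep -i 'shutdown' /var/log/cassandra/system.log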
>>>> The system_auth keyspace is set to replicate X times given X nodes in
>>>> each datacenter, and at the time of the exception all nodes are reporting
>>>> as online and healthy. After a short period (i.e. 30 minutes), it will let
>>>> me in again.
>>>>
>>>> What could be the cause of this?
>>>>
>>>
>>>
>>
>
--
Or Sher
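For context, a hedged sketch of the usual system_auth setup; the datacenter names and replica counts are placeholders, and the keyspace should be repaired after its replication changes:

    # Replicate system_auth to several nodes per DC, then repair it so every
    # node holds the credentials locally.
    echo "ALTER KEYSPACE system_auth WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};" | cqlsh
    nodetool repair system_auth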
1. Thanks, makes sense.
2. Is there a special reason for that? Is it somewhere in the road map?
On Thu, Jul 17, 2014 at 2:45 AM, Tyler Hobbs wrote:
>
> On Mon, Jul 7, 2014 at 1:08 AM, Or Sher wrote:
>
>> 1. What exactly does CoordinatorScanLatency mean?
>>
>
>
Hi,
I found that 2.0.9 exposes "CoordinatorReadLatency" and
"CoordinatorScanLatency" metrics at column family resolution.
1. What exactly does CoordinatorScanLatency mean?
2. Is there a write equivalent metric?
--
Or Sher
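These are exposed as ordinary per-column-family JMX Timer metrics. A sketch of how one might read them, assuming a jmxterm jar is available and JMX listens on the default port 7199; the keyspace/table names and jar filename are placeholders:

    # CoordinatorReadLatency / CoordinatorScanLatency live under the per-CF
    # metrics MBeans; query a couple of the Timer's attributes.
    echo 'get -b org.apache.cassandra.metrics:type=ColumnFamily,keyspace=testks,scope=test_table,name=CoordinatorScanLatency Count Mean' \
      | java -jar jmxterm-uber.jar -l localhost:7199 -n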
> row? If so, the answer is yes, you need either to use INSERT or
> UPDATE by specifying all the columns in your statement.
>
> "Wouldn't it be simpler if Cassandra just let us change the ttl on the
> row marker?" --> These are internal implementation details, not supposed to be exposed.
>
Wouldn't it be simpler if Cassandra just let us change the ttl on the row
marker?
On Wed, Jun 11, 2014 at 12:11 PM, DuyHai Doan wrote:
> Yes, the TTL is also set on an internal row marker. More details on this
> here: https://issues.apache.org/jira/browse/CASSANDRA-6668
>
>
> On Wed, Jun 11, 2014 at 10:38 AM, Or Sher wrote:
>
>> Hi,
>>
>
On Wed, Jun 11, 2014 at 11:18 AM, DuyHai Doan wrote:
> Hello Or Sher,
>
> The behavior is quite normal:
>
> 1) insert into test_table (p1,p2,c1,d1,d2) values ('a','b','c','d','e');
> --> Insert 5 columns without any TTL
> 2) insert into test_table (p1,p2,c1,d1,d2) values
> ('a','b','c','---','---') using ttl 10;
insert into test_table (p1,p2,c1,d1,d2) values
('a','b','c','---','---') using ttl 10;
cqlsh:testks> select * from test_table;

 p1 | p2 | c1 | d1  | d2
----+----+----+-----+-----
  a |  b |  c | --- | ---

(1 rows)

cqlsh:testks> select * from te
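Tying this back to DuyHai's advice above, a hedged sketch of how the effective TTL of the whole row gets changed, namely by re-writing every column with the new TTL (the keyspace/table and values simply mirror the test table above, and the TTL is arbitrary):

    # Sketch: re-write every column with the new TTL (an UPDATE that names all
    # non-key columns plus the full primary key would work the same way); there
    # is no CQL way to set a TTL on the row marker alone.
    echo "INSERT INTO testks.test_table (p1, p2, c1, d1, d2)
          VALUES ('a', 'b', 'c', 'd', 'e') USING TTL 3600;" | cqlsh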
--
Or Sher
> You cannot change the partitioner.
>
>
> http://www.datastax.com/documentation/cassandra/1.2/webhelp/cassandra/architecture/architecturePartitionerAbout_c.html
>
>
> On Thu, Jan 16, 2014 at 2:04 AM, Or Sher wrote:
>
>> Hi,
>>
>> In order to upgrade our env fr
t;?
3. What am I missing in the process?
Thanks in advance,
--
Or Sher
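As a quick sanity check before planning such a migration (the config path is the usual package default and may differ), the partitioner each cluster runs can be confirmed like this:

    # The partitioner is fixed for the life of a cluster, so confirm what each
    # cluster is actually running before planning the data migration.
    grep '^partitioner' /etc/cassandra/cassandra.yaml
    nodetool describecluster | grep -i partitioner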
I think I'd rather wait until I'm able to upgrade the current cluster and
then do the migration.
Thanks!
On Thu, Jan 9, 2014 at 8:41 PM, Robert Coli wrote:
> On Thu, Jan 9, 2014 at 6:54 AM, Or Sher wrote:
>
>> I want to use sstableloader in order to load 1.0.9 data to a
Hi all,
I want to use sstableloader in order to load 1.0.9 data to a 2.0.* cluster.
I know that the sstable format is incompatible between the two versions.
What are my options?
Is there a tool to upgrade sstables directly, without involving any real
nodes?
--
Or Sher
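For reference, a hedged sketch of the usual shape of such a migration (the exact intermediate versions, hosts and paths depend on the setup and are placeholders here): upgrade a copy of the data through the intermediate release(s), rewriting the sstables at each hop, and only then stream them into the 2.0 cluster:

    # 1) On a node already running the intermediate release, rewrite the
    #    old-format sstables in place:
    nodetool upgradesstables
    # 2) Stream the rewritten sstables into the target 2.0 cluster; the
    #    directory follows the usual <data_dir>/<keyspace>/<columnfamily> layout:
    sstableloader -d 10.0.0.21,10.0.0.22 /var/lib/cassandra/data/my_keyspace/my_cf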
> I never changed that
> setting in the config file.
>
> Sent from my iPhone
>
> On Jan 4, 2014, at 11:22 PM, Or Sher wrote:
>
> Robert, is it possible you've changed the partitioner during the upgrade?
> (e.g. from RandomPartitioner to Murmur3Partitioner ?)
>
>
>>>> named "topics" which has a count of 47 on one
>>>> node, 59 on another and 49 on another node. It was my understanding with a
>>>> replication factor of 3 and 3 nodes in each ring that the nodes should be
>>>> equal so I could lose a node in the ring and h
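If the partitioner turns out not to be the culprit, a hedged first step for diverging per-node counts with RF=3 is ordinary entropy between replicas; the keyspace name below is a placeholder:

    # Repair the column family so all three replicas converge, then re-run the
    # counts (reading at CL QUORUM also hides single-replica drift).
    nodetool repair my_keyspace topics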