m cassandra.
Thanks for the help in clarifying all of this; it is very much appreciated.
Regards,
Rob
On Tue, Feb 11, 2014 at 11:25 AM, Andrey Ilinykh wrote:
>
>
>
> On Tue, Feb 11, 2014 at 10:14 AM, Mullen, Robert <robert.mul...@pearson.com> wrote:
>
>> Thanks fo
irst makes a decision about what nodes
> should reply. It is not correct.
>
>
> On Tue, Feb 11, 2014 at 9:36 AM, Mullen, Robert wrote:
>
>> So is that picture incorrect, or just incomplete, missing the piece about how
>> the nodes reply to the coordinator node?
>>
>>
play back to the co-ordinator in DC1. So if you
> have replication of DC1:3, DC2:3, a co-ordinator node will get 6 responses
> back if it is not in the replica set.
> Hope that answers your question.
>
>
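For reference, here is a minimal CQL sketch of the kind of replication layout being discussed above. The DC1:3, DC2:3 factors are taken from the message; the keyspace name is a placeholder, not from this cluster:

    -- Hypothetical keyspace replicated three times in each of two data centers,
    -- i.e. the DC1:3, DC2:3 layout discussed above (six replicas in total).
    CREATE KEYSPACE demo_ks
      WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'DC1': 3,
        'DC2': 3
      };

With six replicas, a coordinator that is not itself in the replica set hears back from six other nodes, which is where the count above comes from.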
> On Tue, Feb 11, 2014 at 8:16 AM, Mullen, Robert wrote:
>
>> I
It is more like filtering a list of columns
>> from a row (which is exactly what I can do in the #1 example).
>> But then if I don't create the index first, the CQL statement will run into a
>> syntax error.
>>
>>
>>
>>
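A small sketch of the pattern being described, with made-up table and column names, in case it helps make the error concrete: filtering on a regular (non-key) column is only accepted once a secondary index exists on that column.

    -- Hypothetical table; 'category' is a regular column, not part of the primary key.
    CREATE TABLE posts (
        id uuid PRIMARY KEY,
        category text,
        body text
    );

    -- Without this index, the SELECT below is rejected rather than silently filtered.
    CREATE INDEX posts_category_idx ON posts (category);

    SELECT id, body FROM posts WHERE category = 'cassandra';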
>> On Tue, Jan 28, 2014 at 11:37 AM, M
I would do #2. Take a look at this blog post, which talks about secondary
indexes, cardinality, and what they mean for Cassandra. Secondary indexes
in Cassandra are a different beast, so the old rules of thumb about
indexes often don't apply. http://www.wentnet.com/blog/?p=77
On Tue, Jan 28, 2014 at 1
Sent from my iPhone
>>
>> On Jan 4, 2014, at 11:22 PM, Or Sher wrote:
>>
>> Robert, is it possible you've changed the partitioner during the upgrade?
>> (e.g. from RandomPartitioner to Murmur3Partitioner?)
>>
>>
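If it helps, two hedged ways to check which partitioner a node is running (the config path below is a common package default and may differ on your install):

    # Prints the cluster name, snitch and partitioner as seen by the running node.
    nodetool describecluster

    # Compare with the configured value; adjust the path for your installation.
    grep '^partitioner' /etc/cassandra/cassandra.yaml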
>> On Sat, Jan 4, 2014 at 9:32 PM, Mullen, R
itself. I
still don't understand why it's reporting 16% for each node when 100% seems
to reflect the state of the cluster better. I didn't find any info in
those issues you posted that would relate to the % changing from 100% ->
16%.
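One hedged note, assuming the percentage comes from nodetool: without a keyspace argument the Owns column is just each node's share of the token ring (roughly 16-17% with six nodes), while naming a keyspace makes nodetool report effective ownership that accounts for that keyspace's replication factor. The keyspace name below is a placeholder.

    # Share of the token ring only; replication is not considered.
    nodetool status

    # Effective ownership for one keyspace, including its replication factor.
    nodetool status my_keyspace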
On Sat, Jan 4, 2014 at 12:26 PM, Mullen, Rober
from cql
cqlsh> select count(*) from topics;
On Sat, Jan 4, 2014 at 12:18 PM, Robert Coli wrote:
> On Sat, Jan 4, 2014 at 11:10 AM, Mullen, Robert wrote:
>
>> I have a column family called "topics" which has a count of 47 on one
>> node, 59 on another
l so I could lose a node in the ring and have no loss of data. Based
upon that I would expect the counts across the nodes to all be 59 in this
case.
thanks,
Rob
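As a hedged follow-up sketch, one way to compare the counts at a stronger consistency level from cqlsh (the table name is the one from the thread; whether this settles the discrepancy depends on the state of the replicas):

    -- Ask the coordinator to wait for every replica and return the reconciled result.
    CONSISTENCY ALL;
    select count(*) from topics;

If the counts still disagree afterwards, running nodetool repair on that column family is the usual way to bring the replicas back in sync.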
On Fri, Jan 3, 2014 at 5:14 PM, Robert Coli wrote:
> On Fri, Jan 3, 2014 at 3:33 PM, Mullen, Robert
> wrote:
>
>> I have
Hello,
I have a multi-region cluster with 3 nodes in each data center, ec2 us-east
and us-west. Prior to upgrading to 2.0.2 from 1.2.6, the owns % of each
node was 100%, which made sense because I had a replication factor of 3 for
each data center. After upgrading to 2.0.2, each node claims to own 16%.