…t contain many of your rows.
>
> Chris Lohfink
>
> On Wed, Jul 27, 2016 at 1:44 PM, Luke Jolly wrote:
I have a table that I'm storing ad impression data in, with every row being
an impression. I want to get a count of total rows / impressions. I know
that there are in the ballpark of 200-400 million rows in this table, and
from my reading, "Number of keys" in the output of cfstats should be a
reasonable…
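That cfstats figure can be checked per node from the command line; a sketch, assuming a hypothetical keyspace/table named `ads.impressions`:

```shell
# "Number of keys (estimate)" is an estimate of distinct partition keys
# held by *this* node, not an exact cluster-wide row count.
nodetool cfstats ads.impressions | grep "Number of keys"
```

Because the number is per node and counts partitions rather than rows, it only approximates total impressions when each impression lands in its own partition.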
On Wed, May 25, 2016 at 3:11 PM Luke Jolly wrote:
> So I figured out the main cause of the problem. The seed node was
> itself. That's what got it in a weird state. The second part was that I
> didn't know the default repair is incremental, as I was accidentally
> looking at…
>>> partitions. Luke, are you sure the repair is succeeding? You don't have
>>> other keyspaces/duplicate data/extra data in your cassandra data directory?
>>> Also, you could try querying on the node with less data to confirm if it
>>> has the same dataset.
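One way to run that check is to point cqlsh directly at the smaller node and read at a consistency level the coordinator can serve locally; a sketch, with placeholder address and table names:

```shell
# 10.0.0.2 and ads.impressions are placeholders.
# LOCAL_ONE lets the coordinator answer from its own replica when it has one;
# count(*) may time out on very large tables.
cqlsh 10.0.0.2 -e "CONSISTENCY LOCAL_ONE; SELECT count(*) FROM ads.impressions;"
```

If the two nodes return different counts at that consistency level, the datasets have genuinely diverged.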
…moved
> around during repair, but I didn't find evidence of it. However, I see no
> reason to, because if the node didn't have the data, then streaming
> tombstones does not make a lot of sense.
>
> Regards,
> Bhuvan
>
> On Tue, May 24, 2016 at 11:06 PM, Luke Jolly wrote:
>
>
…not to have everything, as it only has a load of 5.55 GB.
On Mon, May 23, 2016 at 7:28 PM, kurt Greaves wrote:
> Do you have 1 node in each DC or 2? If you're saying you have 1 node in
> each DC, then an RF of 2 doesn't make sense. Can you clarify what your
> setup is?
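Both the node layout and the replication settings can be read off the cluster itself; a sketch (the keyspace name is a placeholder):

```shell
# Nodes grouped by datacenter, with Load and Owns per node.
nodetool status

# Replication strategy and per-DC RF for the keyspace.
cqlsh -e "DESCRIBE KEYSPACE my_keyspace;"
```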
>
I am running 3.0.5 with 2 nodes in two DCs, gce-us-central1 and
gce-us-east1. I increased the replication factor of gce-us-central1 from 1
to 2. Then I ran 'nodetool repair -dc gce-us-central1'. The "Owns" for
the node switched to 100% as it should, but the Load showed that it didn't
actually sync…
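The steps described above can be sketched as commands; the keyspace name and the east DC's RF of 1 are assumptions:

```shell
# Raise gce-us-central1 to RF 2; every DC's setting must be restated.
cqlsh -e "ALTER KEYSPACE my_keyspace WITH replication = {
  'class': 'NetworkTopologyStrategy',
  'gce-us-central1': 2,
  'gce-us-east1': 1};"

# Repair only within that DC. Note that plain `repair` is incremental
# by default on this version; add -full to stream all data for the
# newly responsible replicas.
nodetool repair -full -dc gce-us-central1
```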