On 5/5/2021 9:35 AM, Joe Obernberger wrote:
I'm seeing some odd behavior with RC1 and counters - from cqlsh:

cqlsh> select * from doc.seq;

 id   | doccount
------+----------
 DS   |        1
 DS_1 |      844

(2 rows)

cqlsh> update doc.seq set doccount=doccount+1 where id='DS_1';
OperationTimedOut: errors={'172.16.100.208:9042'
... in last 5000 ms: 0 internal and 1 cross node. Mean internal dropped
latency: 0 ms and Mean cross-node dropped latency: 21356 ms

-joe
In our app we have several things to count (business logic). The longer
we work with Cassandra, the more we keep hearing: "you should not use
counter tables because ...". Yes, we also feel here and there that the
trade-off is too restrictive - what hurts us these days is that deleting
counters does not seem that simple... Also, we miss the TTL possibility
a lot. But I have to confess I do not see an obvious migration strategy
here... What bothers me, e.g.: concurrency, and the wrong results that
can follow from it.
It's possible to overcount when a server is overwhelmed or slow to respond
and you're getting exceptions on the client. If you retry your query, it's
possible you'll increment twice, once for the original query (which maybe
threw an exception) and again on the retry.
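To make that failure mode concrete, a minimal cqlsh sketch reusing the
doc.seq table from the thread above:

-- Counter increments are not idempotent. If this statement times out
-- but was in fact applied, a client retry adds 1 a second time:
UPDATE doc.seq SET doccount = doccount + 1 WHERE id = 'DS_1';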
What about repairs? Can I just repair that table on a regular basis like any
other?
‐‐‐ Original Message ‐‐‐
On Wednesday, 30 October 2019 16:26, Jon Haddad wrote:
Counters are good for things like page views, bad for money. Yes they can
under or overcount in certain situations. If your cluster is stable,
you'll see very little of it in practice.
I've done quite a bit of tuning of counters. Here's the main takeaways:
* They do a read before write
Hi,
I would like to use counters but I am not sure I should.
I read a lot of articles on the Internet how counters are bad / wrong /
inaccurate etc etc ...
Let's be honest, counters in Cassandra have quite a bad reputation.
But all stuff I read about that was quite old, I know ther
Hi Tarun,
That documentation page is a bit ambiguous. My understanding of it is that:
* Cassandra guarantees that counters are updated consistently across the
cluster by doing background reads, that don't affect write latency.
* If you use a consistency level stricter than ONE, the same re
Hi
I stumbled on this
<https://docs.datastax.com/en/archived/cql/3.0/cql/ddl/ddl_counters_c.html>
post which says use consistency level ONE with counters. I'm using
cassandra 3 with 3 copies in one data center. I've to support consistent
reads.
Can we do LOCAL_QUORUM read/write
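For what it's worth, cqlsh lets you experiment with this per session; a
hedged sketch against a hypothetical page_views counter table (assumes a
USE <keyspace> beforehand):

CONSISTENCY LOCAL_QUORUM;
UPDATE page_views SET views = views + 1 WHERE page_id = 'home';
SELECT views FROM page_views WHERE page_id = 'home';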
Hello,
I believe there is not a really specifically good strategy for counters.
Often counter tables size is relatively low (compared to events / raw
data). So depending on the workload you might want to pick one or the
other. Given the high number of reads the table will have to face (during
reads + writes), LCS might be a good choice if there
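As an illustration, switching a counter table to LCS is a one-line change
(table name hypothetical; weigh it against LCS's extra write amplification):

ALTER TABLE page_views
  WITH compaction = {'class': 'LeveledCompactionStrategy'};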
Hello!
I am using Cassandra 3.10.
I have a counter table, with the following schema and RF=1
CREATE TABLE edges (
src_id text,
src_type text,
source text,
weight counter,
PRIMARY KEY ((src_id, src_type), source)
);
SELECT vs UPDATE requests ratio for this table is 0.1
READ vs W
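For reference, writes to a counter table like this must name the full
primary key; a hedged example with made-up values:

UPDATE edges SET weight = weight + 1
WHERE src_id = 'a1' AND src_type = 'user' AND source = 'web';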
/cassandra/db/SinglePartitionReadCommand.java#L518>.
It looks like the code only uses a filter on the partition for reading if
the read does not involve collections or counters. Can anyone familiar with
the source code confirm if this is true and whether we're looking at the
right lines of code that show what data
Thanks for reporting this, I've opened CASSANDRA-12909
<https://issues.apache.org/jira/browse/CASSANDRA-12909> with all the
details.
You can apply the patch linked in that ticket if you want a quick
workaround, but the root cause is still not fully understood.
The reason why only c
Hi guys,
we are making a simple tool which allows us to transform a table
via COPY TO -> drop table -> transform schema/data -> create table -> COPY
FROM.
It works well in most cases, but we have a problem with loading counter
columns; it fails with "ParseError - argument for 's' must be a string,
"they require knowing the key in advance in order to look up the counters"
--> Wrong
Imagine your table
partition_key uuid,
first_map map<int, counter>,
second_map map<int, counter>
With my proposed data model:
SELECT first_map FROM table would translate to
SELECT map_key, count FROM my_counters_map WHERE id = ? AND map_name = 'first_map';
The only issue with the last 2 solutions is, they require knowing the key
in advance in order to look up the counters.
The keys however are dynamic in my case.
On Wed, Nov 9, 2016 at 5:47 PM, DuyHai Doan wrote:
> "Is there a way to do this in c* which doesn't require creating
CREATE TABLE my_counters_map (
    id uuid,
    map_name text,
    map_key int,
    count counter,
    PRIMARY KEY ((id), map_name, map_key)
);
This table can be seen as:
Map<id, SortedMap<map_name, SortedMap<map_key, counter>>>
The couple (map_key, counter) simulates your map
The clustering column map_name allows you to have multiple maps of counters
for a single partition_key
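A hedged usage sketch against that layout (the uuid literal is illustrative):

-- read back the emulated map "first_map" for one partition:
SELECT map_key, count FROM my_counters_map
WHERE id = 62c36092-82a1-3a00-93d1-46196ee77204
  AND map_name = 'first_map';

-- increment a single entry of that map:
UPDATE my_counters_map SET count = count + 1
WHERE id = 62c36092-82a1-3a00-93d1-46196ee77204
  AND map_name = 'first_map' AND map_key = 5;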
On Wed, Nov 9, 2016 at 1:32 PM, Vladimir Yudovin
wrote:
Unfortunately it's impossible either to use counters inside collections or to
mix them with other non-counter columns:
CREATE TABLE cnt (id int PRIMARY KEY , cntmap MAP<int,counter>);
InvalidRequest: Error from server: code=2200 [Invalid query] message="Counters
are no
I have a use-case where I need to have a dynamic number of counters.
The easiest way to do this would be to have a map where the
int is the key, and the counter is the value which is incremented /
decremented. E.g. if something related to 5 happened, then I'd get the
counter for 5 and increment it.
Just to make sure I understand: you've got a queue where you can tolerate
missing some of the items in it?
On Wed, Jul 20, 2016 at 1:13 PM Kevin Burton wrote:
> On Wed, Jul 20, 2016 at 11:53 AM, Jeff Jirsa
> wrote:
>
>> Can you tolerate the value being “close, but not perfectly accurate”? If
>>
On Wed, Jul 20, 2016 at 11:53 AM, Jeff Jirsa
wrote:
> Can you tolerate the value being “close, but not perfectly accurate”? If
> not, don’t use a counter.
>
>
>
yeah.. agreed.. this is a problem which is something I was considering. I
guess it depends on whether they are 10x faster..
Can you tolerate the value being “close, but not perfectly accurate”? If not,
don’t use a counter.
From: on behalf of Kevin Burton
Reply-To: "user@cassandra.apache.org"
Date: Wednesday, July 20, 2016 at 11:48 AM
To: "user@cassandra.apache.org"
Subject: Are counter
We ended up implementing a task/queue system which uses a global pointer.
Basically the pointer just increments ... so we have thousands of tasks
that just increment this one pointer.
The problem is that we're seeing contention on it and not being able to
write this record properly.
We're just d
I’m planning an upgrade from 2.0 to 2.1, and was reading about counters, and
ended up with a question. I read that in 2.0, counters are implemented by
storing deltas, and in 2.1, read-before-write is used to store totals instead.
What does this mean for the following scenario?
Suppose we have
Hi,
What is the accuracy improvement of counter in 2.1 over 2.0?
The post below mentions 2.0.x issues fixed in 2.1 and performance
improvements.
http://www.datastax.com/dev/blog/whats-new-in-cassandra-2-1-a-better-implementation-of-counters
But how accurate are the counters in 2.1.x or any
Hi All,
I'm fairly new to Cassandra and am planning on using it as a datastore for
an Apache Spark cluster.
The use case is fairly simple: read the raw data, perform aggregates, and
push the rolled-up data back to Cassandra. The data models will use
counters pretty heavily, so I'd like to understand what kind of accuracy
I should expect from Cassandra 2.1 when incrementing the counters.
http://www.datastax.com/dev/blog/whats-new-in-cassandra-2-1-a-better-implementation-of-counters
Thanks Rob, this was helpful.
More counters will be added soon, I'll let you know if those have any
problems.
On Mon, Jun 15, 2015 at 4:32 PM, Robert Coli wrote:
> On Mon, Jun 15, 2015 at 2:52 PM, Dan Kinder wrote:
>
>> Potentially relevant facts:
>> - Recently upgrad
; Mainly wondering:
>
> - Is this known or expected? I know Cassandra counters have had issues but
> thought by now it should be able to keep a consistent counter or at least
> repair it...
>
All counters which haven't been written to after 2.1 "new counters" are
still on
see the
wrong value returned from this same node.
Potentially relevant facts:
- Recently upgraded to 2.1.6 from 2.0.14
- This table has ~million rows, low contention, and fairly high increment
rate
Mainly wondering:
- Is this known or expected? I know Cassandra counters have had issues bu
On Thu, Dec 18, 2014 at 7:19 PM, Rajath Subramanyam
wrote:
>
> Thanks Ken. Any other use cases where counters are used apart from
> Rainbird ?
>
Disqus use(d? s?) them behind an in-memory accumulator which batches and
periodically flushes. This is the best way to use "ol
Thanks Ken. Any other use cases where counters are used apart from Rainbird
?
Rajath Subramanyam
On Thu, Dec 18, 2014 at 5:12 PM, Ken Hancock
wrote:
>
> Here's one from Twitter...
>
>
> http://www.slideshare.net/kevinweil/rainbird-realtime-analytics-at-twitter-strata-2011
Hi Folks,
Have any of you come across blogs that describe how companies in the
industry are using Cassandra counters practically.
Thanks in advance.
Regards,
Rajath Subramanyam
troubles in areas such as compaction, repair, and bootstrap).
However, I still suspect you may benefit by keying the counters table
primarily by date, but maybe add another key rotator in there, like ((day,
subpartition), doc_id). Compute your sub partition deterministically but
in an evenly
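A hedged sketch of that layout (all names illustrative; writers derive the
bucket deterministically, e.g. abs(hash(doc_id)) % 16, and a day's total is
aggregated across the 16 buckets at read time):

CREATE TABLE view_counts (
    day text,
    subpartition int,
    doc_id text,
    views counter,
    PRIMARY KEY ((day, subpartition), doc_id)
);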
and aggregate at read time), or you can make each row a rolling 24 hours
(aggregating at write time), depending on which use case fits your needs
better.
On Sun Nov 23 2014 at 8:42:11 AM Robert Wille wrote:
> I'm working on moving a bunch of counters out of our relational database
> to Cassandra. For the most part, Cassandra is a very
I’m working on moving a bunch of counters out of our relational database to
Cassandra. For the most part, Cassandra is a very nice fit, except for one
feature on our website. We manage a time series of view counts for each
document, and display a list of the most popular documents in the last
and
https://issues.apache.org/jira/browse/CASSANDRA-7346
have some good discussion.
Is it safe to delete an entire row of counters?
Not unless :
a) you will never use that particular counter row again
OR
b) gc_grace_seconds has passed and you have repaired and run a major
compaction on every node, such tha
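A hedged illustration of the risky pattern (schema hypothetical):

-- delete an entire counter row:
DELETE FROM page_views WHERE page_id = 'home';
-- any later (or replayed) increment can resurrect the row, and once the
-- tombstone is gone its value is undefined:
UPDATE page_views SET views = views + 1 WHERE page_id = 'home';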
At the Cassandra Summit I became aware of that there are issues with deleting
counters. I have a few questions about that. What is the bad thing that happens
(or can possibly happen) when a counter is deleted? Is it safe to delete an
entire row of counters? Is there any 2.0.x version of
On 10.09.14 02:09, Robert Coli wrote:
On Tue, Sep 9, 2014 at 2:36 PM, Eugene Voytitsky wrote:
As I understand, atomic batch for counters can't work correctly
(atomically) prior to 2.1 because of counters implementation.
[Link: http://www.datastax.com/dev/blog/atomic-batches-in-cassandra-1-2]
On Tue, Sep 9, 2014 at 2:36 PM, Eugene Voytitsky wrote:
> As I understand, atomic batch for counters can't work correctly
> (atomically) prior to 2.1 because of counters implementation.
> [Link: http://www.datastax.com/dev/blog/atomic-batches-in-cassandra-1-2]
>
> Cassandra 2
What is recommended read/write consistency level (CL) for counters?
Yes I know that write_CL + read_CL > RF is recommended.
But, I got strange results when run my junit tests with different CLs
against 3 nodes cluster.
I checked 9 combinations: (write=ONE,QUORUM,ALL) x (read=ONE,QUORUM,
As I understand, atomic batch for counters can't work correctly
(atomically) prior to 2.1 because of counters implementation.
[Link: http://www.datastax.com/dev/blog/atomic-batches-in-cassandra-1-2]
Cassandra 2.1 reimplements counters.
Will atomic batch of counters work as exp
Thanks, good article.
But some of my questions are still unanswered.
I will reformulate and post them as short separate emails.
On 05.09.14 01:01, Ken Hancock wrote:
Counters are way more complicated than what you're illustrating. Datastax
did a good blog post on this:
http://www.datastax.com/dev/blog/whats-new-in-cassandra-2-1-a-better-implementation-of-counters
On Thu, Sep 4, 2014 at 6:34 AM, Eugene Voytitsky wrote:
> Hi all,
>
> I am u
Hi all,
I am using Cassandra 2.0.x. and Astyanax 1.56.x (2.0.1 shows the same
results) driver via Thrift protocol.
Questions about counters:
1. Consistency.
Consider simplest case when we update value of single counter.
1.1. Is there any difference between updating counter with ONE or
Thanks Aaron. I've mitigated this by removing the dependency on idempotent
counters. But its good to know the limitations of counters.
Thanks
Jabbar Azam
On 19 May 2014 08:36, "Aaron Morton" wrote:
> Does anybody else use another technique for achieving this idempoten
> Does anybody else use another technique for achieving this idempotency with
> counters?
The idempotency problem with counters has to do with what will happen when you
get a timeout. If you replay the write there is a chance of the increment being
applied twice. This is inherent in the c
Hello,
Do people use counters when they want to have idempotent operations in
cassandra?
I have a use case for using a counter to check for a count of objects in a
partition. If the counter is more than some value then the data in the
partition is moved into two different partitions. I can
On Thu, Dec 5, 2013 at 7:44 AM, Christopher Wirt wrote:
> I want to build a really simple column family which counts the occurrence
> of a single event X.
>
>
The guys from Disqus are big into counters:
https://www.youtube.com/watch?v=A2WdS0YQADo
http://www.slideshare.net/pla
Some big systems using Cassandra's counters were built (such as Rainbird:
http://www.slideshare.net/kevinweil/rainbird-realtime-analytics-at-twitter-strata-2011)
and they seem to be doing a great job.
If you are concerned with performance, then maybe using a memory-based store
(such as Redis) will b
The obvious way to do this is with a counter CF:

CREATE TABLE xcounter1 (
    uid uuid,
    someid int,
    count counter,
    PRIMARY KEY (uid, someid)
);

This is how I've always done it in the past, but I've been told to avoid
counters for various reasons: performance, consistency, etc. I'm not too
bothered about 100% absolute consistency, however read performance is
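A hedged usage sketch against that table (uuid literal illustrative):

UPDATE xcounter1 SET count = count + 1
WHERE uid = 62c36092-82a1-3a00-93d1-46196ee77204 AND someid = 5;

SELECT someid, count FROM xcounter1
WHERE uid = 62c36092-82a1-3a00-93d1-46196ee77204;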
Here's another example that may help:
-- put this in a file and run using 'cqlsh -f <file>'
DROP KEYSPACE bryce_test;
CREATE KEYSPACE bryce_test WITH replication = {
'class': 'SimpleStrategy',
'replication_factor' : 1
};
USE bryce_test;
CREATE TABLE samples (
    name text,
    bucket text,
    count counter,
    PRIMARY KEY (name, bucket)
);
Something like this would work:
CREATE TABLE foo (
interface text,
property text,
bucket timestamp,
count counter,
PRIMARY KEY ((interface, property), bucket)
)
interface is 'NIC1' and property is 'Total' or 'Count'.
To query over a date range, you'd run a query like:
SELECT bucket, count FROM foo
WHERE interface = 'NIC1' AND property = 'Total'
AND bucket >= '2013-10-07 00:00' AND bucket < '2013-10-08 00:00';
I'm looking for some guidance on how to model some stat tracking over time,
bucketed to some type of interval (15 min, hour, etc).
As an example, let's say I would like to track network traffic throughput and
bucket it to 15 minute intervals. In our old model, using thrift I would
create a col
On Mon, 23 Sep 2013 21:39:50 +
Stephanie Jackson wrote:
> How can I figure out why there's such a huge difference in results on
> one node and not on the other?
Tiny question - are you running two (or more) nodes on the same
physical machine, by using different bind IP addresses? I'm running
Hi all,
I'm working on getting a new cassandra implementation up and functional. We're
running cassandra 2.0 on Centos 6.4.
Right now, the issue that we've run into is that counters are vastly different
depending on what hosts they're hitting.
Our keyspace has a rep
We've seen high CPU in tests on stress tests with counters. With our
workload, we had some hot counters (e.g. ones with 100s increments/sec)
with RF = 3, which caused the load to spike and replicate on write tasks to
back up on those three nodes. Richard already gave a good overview of why
On 5 August 2013 20:04, Christopher Wirt wrote:
> Hello,
>
> Question about counters, replication and the ReplicateOnWriteStage
>
> I've recently turned on a new CF which uses a counter column.
>
> We have a thre
Hello,
Question about counters, replication and the ReplicateOnWriteStage
I've recently turned on a new CF which uses a counter column.
We have a three DC setup running Cassandra 1.2.4 with vNodes, hex core
processors, 32Gb memory.
DC 1 - 9 nodes with RF 3
DC 2 - 3 nodes with
or certain counters where we don't need an exact
figure.
YMMV, of course, but I'd look at the likelihood of all the products being
purchased from the same location during one week at least once and start the
modeling from there. :)
/Janne
On 13 Jun 2013, at 21:19, Darren Smythe
uct X purchased in location Y last week'.
It seems like we'll end up with trillions of counters for even these basic
permutations. Is this a cause for concern?
TIA
-- Darren
Date: Tuesday, March 5, 2013 9:50 AM
To: "user@cassandra.apache.org"
Subject: Re: LCS and counters
Well no one says my assertion is false, so it is probably true.
Going further, what would be the step
appreciated.
2013/2/25 Janne Jalkanen
>
> At least for our use case (reading slices from varyingly sized rows from
> 10-100k composite columns with counters and hundreds of writes/second) LCS
> has a nice ~75% lower read latency than Size Tiered. And compactions don't
> stop the wor
Don't delete them either!
On Friday, March 1, 2013, Alain RODRIGUEZ wrote:
> "DO *NOT* USE THAT!!!"
>
> Crystal clear ;-). Thanks for the warning.
>
> Alain
>
> 2013/3/1 Sylvain Lebresne
>>>
>>> On C* 1.2.1 I see that the following query wor
"DO *NOT* USE THAT!!!"
Crystal clear ;-). Thanks for the warning.
Alain
2013/3/1 Sylvain Lebresne
> On C* 1.2.1 I see that the following query works:
>>
>>update counters set value=value+5 where owner_id='1' and
>> counter_type='trash
>
> On C* 1.2.1 I see that the following query works:
>
>update counters set value=value+5 where owner_id='1' and
> counter_type='trash';
>
> ...while the following one gives an error (Bad Request: Invalid
> non-counter operation on counter tabl
Greetings.
On this document:
http://cassandra.apache.org/doc/cql3/CQL.html#updateStmt
…I read, under UPDATE section, that:
"The c = c + 3 form of is used to increment/decrement counters.
The identifier after the ‘=’ sign must be the same than the one before the ‘=’
sign (Only incr
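For illustration, reusing the counters table from the 1.2 thread above:

UPDATE counters SET value = value + 3
WHERE owner_id = '1' AND counter_type = 'trash';
UPDATE counters SET value = value - 3
WHERE owner_id = '1' AND counter_type = 'trash';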
Is there a way to use COPY FROM on a column family with a counter type column ?
At least for our use case (reading slices from varyingly sized rows from
10-100k composite columns with counters and hundreds of writes/second) LCS has
a nice ~75% lower read latency than Size Tiered. And compactions don't stop the
world anymore. Repairs do easily trigger a few hu
Since you're asking about counters, I'll note too that the internal
representation of counters is pretty fat. In your RF=2 case, each counter is
probably about 64 bytes internally, while on the client side you send only
an 8-byte value for each increment. So I don't think th
Setup:
- Cassandra 1.1.7.
- 3 DC with 2 nodes each.
- NetworkTopology replication strategy with 2 replicas per DC (so
basically each node contains the full data set).
- 100 clients concurrently incrementing counters at a rate of
roughly 100 / second each (i.e. about 10k increments / second in total).
6206177506523). calculation took 62ms for 12941 columns
INFO [MemoryMeter:1] 2012-12-27 19:30:12,752 Memtable.java (line 213)
CFS(Keyspace='Disco', ColumnFamily='Namespace') liveRatio is
20.097473571044617 (just-counted was 20.097473571044617)
CFS(Keyspace='Disco', ColumnFamily='NamespaceDir') liveRatio is
4.801010311533358 (just-counted was 4.801010311533358). calculation took
96ms for 3138 columns

> Also post how many writes and reads along with avg row size

All rows have 3-6 counters. As for writes and reads:
Column Family: UserQuotas
SSTable count: 3
Space used (live): 2609839
Can you post gc settings? Also check logs and see what it says
Also post how many writes and reads along with avg row size
Sent from my iPhone
On Dec 29, 2012, at 12:28 PM, rohit bhatia wrote:
I assume you mean 8 seconds and not 8 ms.
That's pretty huge to be caused by GC. Is there a lot of load on your servers?
You might also need to check for memory contention.
Regarding GC, since it's ParNew all you can really do is increase heap and
young gen size, or modify the tenuring rate. But that can't be
On 29/12/2012, at 16:59, rohit bhatia wrote:
> Reads during a write still occur during a counter increment with CL ONE, but
> that latency is not counted in the request latency for the write. Your local
> node write latency of 45 microseconds is pretty quick. What is your timeout
> and the wri
issues and we could trace the timeouts to parnew gc collections which
were quite frequent. You might just want to take a look there too.
On Sat, Dec 29, 2012 at 4:44 PM, André Cruz wrote:
> Hello.
>
> I recently was having some timeout issues while updating counters and
> turned
Hello.
I recently was having some timeout issues while updating counters and turned on
row cache for that particular CF. This is its stats:
Column Family: UserQuotas
SSTable count: 3
Space used (live): 2687239
Space used (total
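For later readers: on 1.2-2.0 the row cache can be toggled per table from
CQL; a hedged sketch (the option became a map in 2.1, and a 1.1 cluster like
this thread's would configure it through cassandra-cli instead):

ALTER TABLE UserQuotas WITH caching = 'rows_only';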
On Wed, Nov 28, 2012 at 7:15 AM, Edward Capriolo wrote:
> I may be wrong but during a bootstrap hints can be silently discarded, if
> the node they are destined for leaves the ring.
Yeah : https://issues.apache.org/jira/browse/CASSANDRA-2434
> A user like this might benefit from DANGER
Just for reference, HBase's counters also do a local read. I am not saying
they work better/worse/faster/slower, but I would not expect any system
that reads on increment to be significantly faster than what Cassandra
does.
Just saying your counter throughput is read bound; this is not unique
write=false.
Although yes, I probably should try that row cache you mentioned -- I saw
that key cache was going unused (so saw no reason to try to enable row
cache), but I think it was on RF=1, it might be different on RF=2.
Sylvain Lebresne-3 wrote
> Counters replication works in different ways
I may be wrong but during a bootstrap hints can be silently discarded, if
the node they are destined for leaves the ring.
There are a large number of people using counters for 5 minute "real-time"
statistics. On the back end they use ETL based reporting to compute the
true stats at a
On Tue, Nov 27, 2012 at 3:21 PM, Edward Capriolo wrote:
> I mispoke really. It is not dangerous you just have to understand what it
> means. this jira discusses it.
>
> https://issues.apache.org/jira/browse/CASSANDRA-3868
Per Sylvain on the referenced ticket :
"
I don't disagree about the effici
On Wed, Nov 28, 2012 at 9:24 AM, Sylvain Lebresne wrote:
> Counters replication works in different ways than that of "normal"
> writes. Namely, a counter update is written to a first replica, then a read
> is performed and the result of that is replicated to the other node