I have a 30+ node cluster that is under heavy read and write load. Because we
never delete data, all data is inserted with TTLs and is somewhat temporal if
not upserted, and we are fine with a consistency level of ONE plus read repair
chance, we elected never to repair. The reaso
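A minimal sketch of the schema shape that pattern implies, using the 2.0 Java
driver (the keyspace, table, and property values below are invented for
illustration): every row expires via a TTL, and the table leans on
read_repair_chance rather than scheduled repair.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class TtlOnlySchema {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("demo");

            // Rows are never deleted explicitly; they expire on their own via the
            // table-level default TTL, and read_repair_chance provides the only
            // anti-entropy beyond hinted handoff (no scheduled repair).
            session.execute(
                "CREATE TABLE IF NOT EXISTS events (" +
                "  id uuid PRIMARY KEY," +
                "  payload text)" +
                " WITH default_time_to_live = 86400" +
                "  AND read_repair_chance = 0.1");

            cluster.close();
        }
    }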
in that regard?
Wayne
On Aug 13, 2014, at 1:10 PM, Robert Coli <rc...@eventbrite.com> wrote:
On Wed, Aug 13, 2014 at 9:16 AM, Wayne Schroeder
<wschroe...@pinsightmedia.com> wrote:
Are there hidden costs to LWT (paxos) that are not represented in the total
time a
I have to come up with an “event dupe check” system that handles race conditions
where two requests come in at the same time. Obviously this can be solved with
lightweight transactions (if not exists), however I am concerned that there may
be costs/issues hidden to me for doing significant amoun
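One common way to express that dupe check with the 2.0 Java driver is an
INSERT ... IF NOT EXISTS, then checking the [applied] column Cassandra returns
for conditional statements. A minimal sketch (keyspace, table, and columns are
invented for illustration):

    import java.util.UUID;
    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    public class DupeCheck {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("demo");

            PreparedStatement ps = session.prepare(
                "INSERT INTO seen_events (event_id, first_seen)" +
                " VALUES (?, ?) IF NOT EXISTS");

            // Only one of two racing requests wins the Paxos round; the loser
            // gets [applied] = false and can treat the event as a duplicate.
            Row row = session.execute(
                ps.bind(UUID.randomUUID(), new java.util.Date())).one();
            boolean applied = row.getBool("[applied]");
            System.out.println(applied ? "first occurrence" : "duplicate");

            cluster.close();
        }
    }

The extra Paxos round trips are the main reason a conditional write costs
noticeably more than a plain write, which is the cost being asked about above.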
I've been doing a lot of reading on SSTable fragmentation due to updates and
the costs associated with reconstructing the final row data from multiple
SSTables that have been created over time and not yet compacted. One question
is stuck in my head: if you re-insert entire rows instead of updating on
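To make the two write shapes concrete, a rough sketch (table and columns are
invented for illustration): a partial UPDATE writes only the changed column
into the newest SSTable, while re-inserting the entire row writes a complete
copy of it there; in both cases older copies remain until compaction.

    import com.datastax.driver.core.Session;

    public class WriteShapes {
        // Assumes a connected Session; user_state(user_id, status, name,
        // last_seen) is a hypothetical table.
        static void partialUpdate(Session session) {
            // Only 'status' lands in the new SSTable; a read may have to merge
            // several older SSTables to rebuild the rest of the row.
            session.execute(
                "UPDATE user_state SET status = 'active' WHERE user_id = 42");
        }

        static void fullReinsert(Session session) {
            // Every column is written again, so the newest SSTable contains a
            // complete copy of the row (older copies remain until compaction).
            session.execute(
                "INSERT INTO user_state (user_id, status, name, last_seen)" +
                " VALUES (42, 'active', 'wayne', dateof(now()))");
        }
    }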
Perhaps I should clarify my question. Is this possible / how might I
accomplish this with cassandra?
Wayne
On Mar 31, 2014, at 12:58 PM, Robert Coli <rc...@eventbrite.com> wrote:
On Mon, Mar 31, 2014 at 9:37 AM, Wayne Schroeder
<wschroe...@pinsightmedia.com> wrot
I found a lot of documentation about the read path for key and row caches, but
I haven't found anything with regard to the write path. My app has the need to
record a large quantity of very short-lived temporal data that will expire
within seconds and only have a small percentage of the rows acce
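A minimal sketch of that write pattern with the 2.0 Java driver (keyspace,
table, and TTL value invented for illustration): each insert carries its own
short TTL, so rows expire within seconds without any explicit delete.

    import java.util.UUID;
    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Session;

    public class ShortLivedWrites {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("demo");

            // Each row expires ten seconds after the write; no deletes are issued.
            PreparedStatement ps = session.prepare(
                "INSERT INTO recent_hits (hit_id, detail) VALUES (?, ?) USING TTL 10");

            session.execute(ps.bind(UUID.randomUUID(), "example payload")
                              .setConsistencyLevel(ConsistencyLevel.ONE));

            cluster.close();
        }
    }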
I think it will work just fine. I was just asking for opinions on whether there
was some reason it would not work that I was not thinking of.
On Mar 10, 2014, at 4:37 PM, Tupshin Harper <tups...@tupshin.com> wrote:
Oh sorry, I misunderstood. But now I'm confused about how what you are tryi
The plan IS to do the whole write as a lightweight transaction because I do
need to rely on the behavior. I am just vetting the expected behavior: that in
doing it as a conditional update, i.e. a lightweight transaction, I am not
missing something and it will behave as I outlined without som
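A rough sketch of what such a conditional update looks like with the 2.0 Java
driver (table, columns, and values invented for illustration): the update
applies only if the expected current value still matches, and the returned
[applied] column reports whether it did.

    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    public class ConditionalUpdate {
        // Assumes a connected Session; slots(slot_id, owner) is hypothetical.
        static boolean claimSlot(Session session, int slotId, String newOwner) {
            PreparedStatement ps = session.prepare(
                "UPDATE slots SET owner = ? WHERE slot_id = ? IF owner = ?");
            Row result = session.execute(ps.bind(newOwner, slotId, "unclaimed")).one();
            // If another writer won the Paxos round, [applied] is false and the
            // returned row also carries the value that is actually stored.
            return result.getBool("[applied]");
        }
    }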
As I understand it, even if a quorum write fails, the data is still (more than
likely) saved and will become eventually consistent through the well-known
mechanisms. I have a case where I would rather this not happen: where I would
prefer that if the quorum write fails, that data NEVER beco
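A failed QUORUM write typically surfaces in the 2.0 Java driver as a
WriteTimeoutException, which only means the coordinator timed out waiting for
enough acknowledgements; a replica may already hold the write. A minimal
sketch (table and columns invented for illustration):

    import java.util.UUID;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.exceptions.WriteTimeoutException;

    public class QuorumWrite {
        // Assumes a connected Session; events(id, payload) is hypothetical.
        static void writeAtQuorum(Session session) {
            PreparedStatement ps = session.prepare(
                "INSERT INTO events (id, payload) VALUES (?, ?)");
            try {
                session.execute(ps.bind(UUID.randomUUID(), "example")
                                  .setConsistencyLevel(ConsistencyLevel.QUORUM));
            } catch (WriteTimeoutException e) {
                // Quorum was not reached in time, but one or more replicas may
                // already hold the write, so it can still become visible later
                // through hinted handoff, read repair, or anti-entropy repair.
            }
        }
    }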
vely change the default of ONE that I am expecting. This is obviously
specific to my application, but hopefully it helps anyone who has followed that
pattern as well.
Wayne
On Feb 28, 2014, at 12:18 PM, Wayne Schroeder
wrote:
> After upgrading to the 2.0 driver branch, I received a lot
After upgrading to the 2.0 driver branch, I received a lot of warnings about
re-preparing previously prepared statements. I read about this issue, and my
workaround was to cache my prepared statements in a Map internally in my app
via a common prepare method, where the string key was the CQL q
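That workaround looks roughly like the sketch below (class and method names
invented for illustration): a concurrent map keyed by the CQL string behind a
single common prepare method, so each distinct statement is prepared at most
once per Session and the driver stops warning about re-preparing.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Session;

    public class StatementCache {
        private final Session session;
        private final ConcurrentMap<String, PreparedStatement> cache =
            new ConcurrentHashMap<String, PreparedStatement>();

        public StatementCache(Session session) {
            this.session = session;
        }

        // Common prepare method: the first caller prepares the CQL, later
        // callers reuse the cached PreparedStatement, so the same query is
        // never sent to the driver's prepare path twice.
        public PreparedStatement prepare(String cql) {
            PreparedStatement ps = cache.get(cql);
            if (ps == null) {
                ps = session.prepare(cql);
                PreparedStatement existing = cache.putIfAbsent(cql, ps);
                if (existing != null) {
                    ps = existing;
                }
            }
            return ps;
        }
    }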
I have some conditional insert/update operations that set quorum consistency.
I was using this with the 1.0 driver, back before the 2.0 features required the
2.0 driver. Now that I'm on the 2.0 driver, I have found the new
setSerialConsistencyLevel routine for statements. Multiple places it r
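For reference, a minimal sketch of setting both levels on a conditional
statement with the 2.0 driver (keyspace, table, and columns invented for
illustration): setConsistencyLevel governs the normal read/write part of the
operation, while setSerialConsistencyLevel governs the Paxos phase of the
conditional check and must be SERIAL or LOCAL_SERIAL.

    import java.util.UUID;
    import com.datastax.driver.core.BoundStatement;
    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Session;

    public class SerialConsistencyExample {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("demo");

            PreparedStatement ps = session.prepare(
                "INSERT INTO accounts (account_id, owner) VALUES (?, ?) IF NOT EXISTS");

            BoundStatement bound = ps.bind(UUID.randomUUID(), "wayne");
            // Applies to the non-conditional part of the write.
            bound.setConsistencyLevel(ConsistencyLevel.QUORUM);
            // Applies to the Paxos round used for the IF NOT EXISTS check.
            bound.setSerialConsistencyLevel(ConsistencyLevel.SERIAL);

            boolean applied = session.execute(bound).one().getBool("[applied]");
            System.out.println("applied = " + applied);

            cluster.close();
        }
    }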