Hello
Say I have 4 nodes: A, B, C and D, and wish to have the consistency level
for writes defined in such a way that writes meet the following
consistency level:
(A or B) AND C AND !D,
i.e. either A or B will suffice, and C must be included in the
consistency level as well, but the write should not wait for D.
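For context: Cassandra's consistency levels are count-based (ONE, QUORUM,
ALL, ...) and cannot name specific replicas, so a predicate like
(A or B) AND C AND !D is not directly expressible. A minimal pycassa
sketch of the knob that does exist; keyspace and CF names are hypothetical:

    import pycassa

    # Hypothetical keyspace and column family names.
    pool = pycassa.ConnectionPool('MyKeyspace',
                                  server_list=['nodeA:9160', 'nodeB:9160'])
    cf = pycassa.ColumnFamily(pool, 'MyCF')

    # The level only says how many replicas must ack the write; which
    # replicas those are is decided by the replication strategy, not
    # by the client.
    cf.insert('row_key', {'col': 'val'},
              write_consistency_level=pycassa.ConsistencyLevel.QUORUM)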
Thanks Peter,
I'm not sure I entirely follow. By the oldest data, do you mean the
primary key corresponding to the limit of the time horizon? Unfortunately,
unique IDs and the timestamps do not correlate, in the sense that
chronologically "newer" entries might have a smaller sequential ID. That's
because
> I do limit the number of rows I'm asking for in Pycassa. Queries on primary
> keys still work fine,
Is it feasible in your situation to keep track of the oldest possible
data (for example, if there is a single sequential writer that rotates
old entries away, it could keep a record of what the oldest remaining
entry is)?
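A hedged sketch of that bookkeeping, assuming a single sequential writer
and hypothetical CF/row names: the rotator records the oldest surviving
day in a small metadata row, so readers can bound their scans:

    import pycassa

    pool = pycassa.ConnectionPool('MyKeyspace')   # hypothetical names
    meta = pycassa.ColumnFamily(pool, 'Meta')

    def record_oldest(day):
        # Called by the single writer after it rotates old entries away.
        meta.insert('bounds', {'oldest_day': day})

    def oldest_day():
        # Readers consult this instead of scanning into deleted territory.
        return meta.get('bounds', columns=['oldest_day'])['oldest_day']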
> I group the data into "buckets", each representing one day in the
> system's activity.
> I create the "DATE" attribute and add it to each row, e.g. it's a column
> {'DATE','2013'}.
Hmm, so why is pushing this into the row key and then deleting the en
I group the data into "buckets", each representing one day in
the system's activity.
I create the "DATE" attribute and add it to each row, e.g. it's a column
{'DATE','2013'}.
I create an index on that column, along with a few others.
Now, I want to rotate the data out of my database on a daily basis
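A hedged pycassa sketch of the scheme just described (CF and column names
hypothetical), selecting one day's rows through the secondary index:

    import pycassa
    from pycassa.index import create_index_clause, create_index_expression

    pool = pycassa.ConnectionPool('MyKeyspace')   # hypothetical names
    cf = pycassa.ColumnFamily(pool, 'Events')

    # Every row carries the indexed DATE column.
    cf.insert('row1', {'DATE': '2013-01-15', 'payload': '...'})

    # Pull back everything for a single day via the index.
    expr = create_index_expression('DATE', '2013-01-15')
    clause = create_index_clause([expr], count=1000)
    day_rows = list(cf.get_indexed_slices(clause))  # [(key, columns), ...]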
Thanks to all for the valuable insight!
Two comments:
a) this is not actually time series data, but yes, each item has
a timestamp and thus chronological attribution.
b) so, what do you practically recommend? I need to delete
half a million to a million entries daily, then insert fresh data.
What's
Deletions in Cassandra imply the use of tombstones (see
http://wiki.apache.org/cassandra/DistributedDeletes), and under some
circumstances reads can turn O(n) with respect to the number of
columns deleted. It sounds like this is what you're seeing.
For example, suppose you're inserting a
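A hedged illustration of that effect, with hypothetical names throughout
(and not necessarily the example the author had in mind):

    import pycassa

    pool = pycassa.ConnectionPool('MyKeyspace')
    cf = pycassa.ColumnFamily(pool, 'Events')

    row = 'day-bucket'
    cols = dict(('col%06d' % i, 'x') for i in range(10000))
    cf.insert(row, cols)

    # Delete all but the last column: each delete writes a tombstone.
    cf.remove(row, columns=sorted(cols)[:-1])

    # Until compaction purges the tombstones (after gc_grace_seconds),
    # this read must skip ~9999 of them to find one live column --
    # i.e. O(n) in the number of deleted columns.
    first_live = cf.get(row, column_count=1)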
On Sun, Nov 13, 2011 at 5:57 PM, Maxim Potekhin wrote:
> I've done more experimentation and the behavior persists: I start with a
> normal dataset which is searchable by a secondary index. I select by that
> index the entries that match a certain criterion, then delete those. I tried
> two methods
I've done more experimentation and the behavior persists: I start with a
normal dataset which is searchable by a secondary index. I select by
that index the entries that match a certain criterion, then delete
those. I tried two methods of deletion -- individual cf.remove() as well
as batch removal.
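A hedged sketch of those two deletion paths in pycassa (names
hypothetical), for concreteness:

    import pycassa
    from pycassa.index import create_index_clause, create_index_expression

    pool = pycassa.ConnectionPool('MyKeyspace')
    cf = pycassa.ColumnFamily(pool, 'Events')
    clause = create_index_clause(
        [create_index_expression('DATE', '2013-01-15')], count=10000)

    # Method 1: one remove (and one RPC) per matching row.
    for key, _ in cf.get_indexed_slices(clause):
        cf.remove(key)

    # Method 2: batched removes, flushed every queue_size mutations.
    b = cf.batch(queue_size=100)
    for key, _ in cf.get_indexed_slices(clause):
        b.remove(key)
    b.send()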
Yes, correct, it's not going to clean itself. Using your example with
a little more detail:
1) A(T1) reads previous location (T0,L0) from index_entries for user U0
2) B(T2) reads previous location (T0,L0) from index_entries for user U0
3) A(T1) deletes previous location (T0,L0) from index_entries
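A hedged sketch of the non-atomic read-then-write these steps describe
(CF and helper names hypothetical):

    # Each writer does read -> delete old entry -> insert new entry.
    # The steps are not atomic, so A and B can both read (T0,L0), both
    # delete it, and both insert their own new location -- leaving one
    # stale index entry that nothing will ever clean up.
    def relocate(index_cf, user, new_loc):
        prev = index_cf.get(user)                  # steps 1/2 above
        index_cf.remove(user, columns=list(prev))  # step 3 above
        index_cf.insert(user, {new_loc: ''})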
I need to create a mapping from (s) to (s) which needs to
provide a fast lookup service.
Also, I need to provide a mapping from to in order to
implement search functionality in my application.
What could be a good strategy to implement this? (I would welcome
suggestions to use any new technologies.)
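The bracketed type names were lost above, so this is a loudly hypothetical
sketch, assuming something like a forward CF for key -> value lookups plus
a reverse CF for search:

    import pycassa

    pool = pycassa.ConnectionPool('MyKeyspace')      # hypothetical names
    forward = pycassa.ColumnFamily(pool, 'Forward')  # lookup: key -> value
    reverse = pycassa.ColumnFamily(pool, 'Reverse')  # search: value -> keys

    def put(key, value):
        forward.insert(key, {'value': value})
        reverse.insert(value, {key: ''})   # one column per referencing key

    def lookup(key):
        return forward.get(key)['value']

    def search(value):
        return list(reverse.get(value))    # all keys that map to value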
Due to some application dependencies I've been holding off on a
Cassandra upgrade for a while. Now that my last application using the
old thrift client is updated, I have the green light to prep my
upgrade. Since I'm on 0.6, the upgrade is obviously a bit trickier. Do
the standard instructions for
> I would like to know it also - actually it should be similar, plus there are
> no dependencies on sun.misc packages.
I don't remember the discussion, but I assume the reason is that
allocateDirect() is not freeable except by waiting for soft reference
counting. This is enforced by the API in order to
https://issues.apache.org/jira/browse/CASSANDRA-3488
On Nov 12, 2011, at 9:52 AM, Jeremy Hanna wrote:
> It sounds like that's just a message in compactionstats that's a no-op. This
> is reporting for about an hour that it's building a secondary index on a
> specific column family. Not sure if
Let's catch up. I am available in Mumbai.
Using C* in a dev env. Would love to share or hear experiences.
On Fri, Nov 11, 2011 at 10:25 PM, Adi wrote:
> Hey GeekTalks/any other cassandra users around Mumbai/Pune,
>
> I will be around Mumbai from the last week of Nov through the third week of
> December. I have
I believe https://issues.apache.org/jira/browse/CASSANDRA-2802 broke
it. I've created https://issues.apache.org/jira/browse/CASSANDRA-3489
to address this separately.
On Sun, Nov 13, 2011 at 9:37 AM, Michael Vaknine wrote:
> You are right, this solved the problem.
> I do not understand why version
You are right, this solved the problem.
I do not understand why version 1.0.0 was not affected, since I used the same
configuration yaml file.
Thank you.
Michael Vaknine
[1] I'm not particularly worried about transient conditions, so that's
ok. I think there's still the possibility of a non-transient false
positive... if 2 writes were to happen at exactly the same time (highly
unlikely), e.g.
1) A reads previous location (L1) from index entries
2) B reads previous location (L1) from index entries
On Sun, Nov 13, 2011 at 4:35 AM, Michael Vaknine wrote:
> I am trying to upgrade to 1.0.2 and when I try to start the first upgraded
> server I get the following error:
>
> ERROR [WRITE-/10.5.6.102] 2011-11-13 10:20:37,447
> AbstractCassandraDaemon.java (line 133) Fatal exception in thread
> Th
On Thu, 2011-11-10 at 22:35 -0800, footh wrote:
>
> UUID startId = new UUID(UUIDGen.createTime(start),
> UUIDGen.getClockSeqAndNode());
> UUID finishId = new UUID(UUIDGen.createTime(finish),
> UUIDGen.getClockSeqAndNode());
You have got comparator_type = TimeUUIDType?
~mck
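For reference, a hedged pycassa equivalent of the range the Java snippet
builds, assuming a CF whose comparator is TimeUUIDType (names hypothetical):

    from datetime import datetime
    import pycassa
    from pycassa.util import convert_time_to_uuid

    pool = pycassa.ConnectionPool('MyKeyspace')
    cf = pycassa.ColumnFamily(pool, 'Timeline')  # comparator_type = TimeUUIDType

    # Lowest/highest possible TimeUUIDs at each endpoint, mirroring the
    # startId/finishId the Java snippet constructs.
    start = convert_time_to_uuid(datetime(2011, 11, 1), lowest_val=True)
    finish = convert_time_to_uuid(datetime(2011, 11, 10), lowest_val=False)

    cols = cf.get('some_row', column_start=start, column_finish=finish)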
Hi,
I would appreciate any help.
I have a cluster of 4 servers with replication factor 3, version 1.0.0.
The cluster was upgraded from 0.7.8.
I am trying to upgrade to 1.0.2, and when I try to start the first upgraded
server I get the following error:
ERROR [WRITE-/10.5.6.102] 2011-11-