ll never see the data you wrote)
> 2. For writes to a single key, a TimedOutException means you cannot
> know whether the write succeeded or failed
> 3. For writes to multiple keys, either an UnavailableException or a
> TimedOutException means you cannot know whether the write succeeded or
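The semantics quoted above suggest one safe pattern: since a Cassandra write with a fixed timestamp is idempotent, a timed-out write can simply be retried. A minimal sketch, assuming a hypothetical client object whose `write()` raises the two exception types (names here are illustrative, not a real driver API):

```python
# Hedged sketch: retrying a write whose outcome is unknown.
# Retrying with the SAME timestamp is safe because replaying the
# write is a harmless overwrite of identical data.
import time

class UnavailableError(Exception): pass
class TimedOutError(Exception): pass

def write_with_retry(client, key, column, value, timestamp, retries=3):
    for attempt in range(retries):
        try:
            client.write(key, column, value, timestamp)
            return True  # quorum reached: the write is durable
        except TimedOutError:
            # Outcome unknown -- safe to retry with the same timestamp.
            time.sleep(0.1 * (attempt + 1))
        except UnavailableError:
            # Not enough live replicas; the write was not applied.
            time.sleep(0.1 * (attempt + 1))
    return False
```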
Hi Philip,
From http://wiki.apache.org/cassandra/ArchitectureOverview
*Quorum write*: blocks until quorum is reached
By my understanding, if you _did_ a quorum write, it means the write
completed successfully.
Guille
I *think* we're saying the same thing here. The addition of the word
> "successful"
Hi Amit,
> 1) How do I manually add data into it using cassandra-cli? I tried
> typing:
> set UserMovies['user1']['userid'] = 'USER-1';
> but got the error message: *Column family movieconsumed may only contain
> SuperColumns*
>
I can't really see why you need a SC here.
I think you need another CF as an index:
user_itemid -> timestamped column_name
Otherwise you can't guess what's the timestamp to use in the column name.
Anyway I would prefer storing the item-ids as column names in the main
column family and having a second CF for the order-by-date query only with
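The two-CF layout suggested above can be sketched with plain dicts standing in for column families (all names here are illustrative, not the poster's actual schema):

```python
# Hedged sketch: main CF keyed by item id, plus a second CF whose
# column names carry the timestamp for the order-by-date query.
import time

user_movies = {}     # main CF: row = user id, column names = item ids
movies_by_date = {}  # index CF: column names = (timestamp, item id)

def consume(user_id, item_id, ts=None):
    ts = ts if ts is not None else int(time.time())
    user_movies.setdefault(user_id, {})[item_id] = ''           # empty value
    movies_by_date.setdefault(user_id, {})[(ts, item_id)] = ''  # date index

consume('user1', 'MOVIE-42', ts=1000)
consume('user1', 'MOVIE-7',  ts=2000)

# Reading the index CF in column order yields items ordered by date:
ordered = [item for (_, item) in sorted(movies_by_date['user1'])]
```

The main CF answers "has user1 seen MOVIE-42?" with a direct column lookup; only the order-by-date query pays for the timestamped index.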
Xu, what's your configuration?
How many CFs, how much data (size/rows/cols), how many client
operations/sec, and how much memory is assigned to the heap?
Guille
On Wed, Aug 22, 2012 at 12:09 AM, Xu Renjie wrote:
> Hi, all
> I have a problem with the log. I have set the CASSANDRA_HEAPDUMP_DIR
Oleg,
If you have the aggregates in counters you only need to read the current
counter when adding/removing invoice lines.
In this situation you only need to be sure this sequence:
+ Read current counter value
+ Update current value according to newly created/updated lines
Is done safely to avo
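The read-then-update sequence above can be sketched with a local lock serializing the two steps. In a multi-node setup this would need an external coordinator (as discussed elsewhere in the thread, e.g. ZooKeeper); this sketch only illustrates the invariant, with made-up names:

```python
# Hedged sketch: the aggregate counter must not be read and updated
# by two writers interleaved, or line deltas are lost.
import threading

_lock = threading.Lock()
invoice_total = {'inv-1': 0}

def apply_line_delta(invoice_id, delta):
    with _lock:
        current = invoice_total[invoice_id]          # read current value
        invoice_total[invoice_id] = current + delta  # update under the lock

apply_line_delta('inv-1', 250)   # new invoice line
apply_line_delta('inv-1', -50)   # removed/adjusted line
```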
I see weirdness I look for config changes
> and see what happens when they are returned to the default or near default.
> Do you have 16 _physical_ cores?
>
> Hope that helps.
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.the
k_ thread creation is pretty lightweight.
>
> Jonathan / Brandon / others - opinions ?
>
> Cheers
>
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 17/08/2012, at 8:09 AM, Guillermo Winkler
Hi, I have a Cassandra cluster where I'm seeing a lot of thread thrashing
from the mutation pool.
MutationStage:72031
Threads get created and disposed in batches of 100s every few minutes;
since it's a 16-core server, concurrent_writes is set to 100 in
cassandra.yaml.
concurrent_writes: 10
on layer such as zoo keeper or rely on consensus.
>
> If you have a design problem provide some details and someone may be able
> to help.
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
a delete timestamp in the "future."
> >
> > On Sat, Jul 23, 2011 at 3:46 PM, Guillermo Winkler
> > wrote:
> >> I'm having a strange behavior on one of my cassandra boxes, after all
> >> columns are removed from a row, insertion on that key
Sorry, Cassandra version is 0.7.4
On Sat, Jul 23, 2011 at 5:46 PM, Guillermo Winkler wrote:
> I'm having a strange behavior on one of my cassandra boxes, after all
> columns are removed from a row, insertion on that key stops working (from
> API and from the cli)
>
>
I'm seeing strange behavior on one of my Cassandra boxes: after all
columns are removed from a row, insertion on that key stops working (from
the API and from the cli)
[default@Agent] get Schedulers['atendimento'];
Returned 0 results.
[default@Agent] set Schedulers['atendimento']['test'] = 'dd';
Val
vals, you can insert
> members of a given interval as columns (or supercolumns) in a row. But it
> depends how you want to use the data on the read side.
>
>
> On Thu, Apr 14, 2011 at 12:25 PM, Guillermo Winkler <
> gwink...@inconcertcc.com> wrote:
>
>> I have a huge
I have a huge number of events I need to consume later, ordered by the date
the event occurred.
My first approach to this problem was to use seconds since epoch as the row
key, and event ids as column names (empty values), like this:
EventsByDate : {
SecondsSinceEpoch: {
evid:"", evid:"", ev
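The bucket-per-second layout above can be sketched with dicts in place of the CF (event ids are made up for illustration):

```python
# Hedged sketch: row key = seconds since epoch, column names = event
# ids with empty values; consuming walks row keys in time order.
events_by_date = {}

def record_event(epoch_seconds, event_id):
    events_by_date.setdefault(epoch_seconds, {})[event_id] = ''

record_event(1302794700, 'ev-a')
record_event(1302794700, 'ev-b')
record_event(1302794701, 'ev-c')

# Consume later, ordered by date: iterate buckets in key order.
consumed = [ev for sec in sorted(events_by_date)
               for ev in sorted(events_by_date[sec])]
```

Note that in real Cassandra with a RandomPartitioner, row keys are not stored in sorted order, which is part of why this thread discusses alternative layouts.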
06 20:43:52.000, timestamp=1291668232563694)
(column=64343635396433382d316166302d343732662d623737392d336634303931323961373364,
value=2010-12-06 20:43:52.000, timestamp=1291668232889235))
Thanks again!
Guille
On Mon, Dec 6, 2010 at 5:45 PM, Guillermo Winkler
wrote:
> uh, ok I was just copying
e large negative numbers
> point to that being done incorrectly.
>
> Bitshifting and putting each byte of the long into a char[8] then
> stringifying the char[] is the best way to go. Cassandra expects
> big-endian longs, as well.
>
> - Tyler
>
>
> On Mon, Dec 6,
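Tyler's suggestion can be sketched in Python: pack the 64-bit value as 8 big-endian bytes rather than stringifying the number. Cassandra's LongType comparator interprets those 8 bytes as a signed big-endian long, which is why stringified numbers sort wrongly and misaligned bytes show up as large negative numbers.

```python
# Hedged sketch: two equivalent ways to encode a long for a
# LongType-compared column name.
import struct

def pack_long(value):
    return struct.pack('>q', value)  # '>q' = big-endian signed 64-bit

# The manual bit-shifting Tyler describes (one byte per shift):
def pack_long_manual(value):
    return bytes((value >> shift) & 0xFF for shift in range(56, -8, -8))

assert pack_long(1291668232) == pack_long_manual(1291668232)
```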
ec 6, 2010 at 3:02 PM, Tyler Hobbs wrote:
> What client are you using? Is it storing the results in a hash map or
> some other type of non-order-preserving dictionary?
>
> - Tyler
>
>
> On Mon, Dec 6, 2010 at 10:11 AM, Guillermo Winkler <
> gwink...@inconcertc
Hi, I have the following schema defined:
EventsByUserDate : {
UserId : {
epoch: { // SC
IID,
IID,
IID,
IID
},
// and the other events in time
epoch: {
IID,
IID,
IID
}
}
}
Where I'm expecting to store all the event ids for a user ordered by date
(it's seconds since epoch as long long), I'm usin
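If the supercolumn names above are the epochs packed as 8-byte big-endian longs, byte-wise ordering of the raw names matches numeric time order (for non-negative epochs), which is what the by-date layout relies on. A small sketch with made-up epoch values:

```python
# Hedged sketch: big-endian packing preserves numeric order under
# byte-wise comparison, so supercolumns come back sorted by time.
import struct

epochs = [1291668232, 1291668000, 1291668100]
names = [struct.pack('>Q', e) for e in epochs]  # '>Q': big-endian unsigned

assert [struct.unpack('>Q', n)[0] for n in sorted(names)] == sorted(epochs)
```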