e getCF filter). Sounds like that's
>breaking pagination. Normally, all hints get delivered at once and
>then we compact away the tombstones -- that's my guess as to why we
>haven't hit this before.
>
>Can you create a ticket?
>
>On Mon, Feb 20, 2012 at 4:52 PM
I'm testing hinted handoff in 1.1 beta1 and cannot seem to get a hint
delivered. 3-node cluster, RF = 3, writing with CL = ONE. I killed a host, then
did the write using the CLI on another node. I can see the hint waiting using
the CLI, and I see the log messages at the end of this email. this suggests
If you need a strictly FIFO queue, what I'm about to offer does not
satisfy that. I created a "mostly" FIFO queue for processing work items
that could arrive out of order; strict ordering didn't matter for my use case.
https://github.com/btoddb/cassandra-queue/wiki/Queue-Design
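The linked wiki describes the actual design; as a minimal sketch of the "mostly FIFO" idea only (not the cassandra-queue implementation), a queue ordered by enqueue timestamp delivers items in rough arrival order while tolerating out-of-order arrival:

import java.util.concurrent.PriorityBlockingQueue;

// Sketch of "mostly FIFO" ordering: items carry their enqueue timestamp and
// are drained in timestamp order, so items that arrive slightly out of order
// are still delivered roughly first-in, first-out.
public class MostlyFifoQueue<T> {
    private static final class Entry<T> implements Comparable<Entry<T>> {
        final long enqueuedAt;
        final T item;
        Entry(long enqueuedAt, T item) { this.enqueuedAt = enqueuedAt; this.item = item; }
        public int compareTo(Entry<T> o) { return Long.compare(enqueuedAt, o.enqueuedAt); }
    }

    private final PriorityBlockingQueue<Entry<T>> q = new PriorityBlockingQueue<>();

    // The timestamp is assigned by the producer; ties and clock skew are
    // exactly why the ordering is only "mostly" FIFO.
    public void offer(T item) { q.offer(new Entry<>(System.currentTimeMillis(), item)); }

    public T take() throws InterruptedException { return q.take().item; }
}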
maintaining FIFO order using Cassandra
Your problem is really
>that row cache is not designed for wide rows at all. See
>https://issues.apache.org/jira/browse/CASSANDRA-1956
>
>On Thu, Jan 12, 2012 at 10:46 PM, Todd Burruss
>wrote:
>> after looking through the code it seems fairly straightforward to
>>c
https://issues.apache.org/jira/browse/CASSANDRA-3746
On 1/14/12 1:38 PM, "Jonathan Ellis" wrote:
>IMO that's a bug.
>
>On Sat, Jan 14, 2012 at 3:20 PM, Todd Burruss
>wrote:
>> using CLI to update row_cache_provider does not take effect until after
>>a node
using CLI to update row_cache_provider does not take effect until after a node
is restarted, even though "describe" shows it as set. Not sure if you consider
this a bug, but it has caused me some grief in my testing of providers.
I'm using 1.0.6
don't think anyone wants.
On 1/12/12 6:18 PM, "Jonathan Ellis" wrote:
>8x is pretty normal for JVM and bookkeeping overhead with the CLHCP.
>
>The SerializingCacheProvider is the default in 1.0 and is much
>lighter-weight.
>
>On Thu, Jan 12, 2012 at 6:07 PM, T
On Thu, Jan 12, 2012 at 6:07 PM, Todd Burruss wrote:
> I'm using ConcurrentLinkedHashCacheProvider and my data on disk is about 4gb,
> but the RAM used by the cache is around 25gb. I have 70k columns per row,
> and only about 2500 rows – so a lot more columns than rows. Has there been
> any discussion or JIRAs about reducing the size of the cache?
d. But
>let us know any progress in your experience. :-)
>
>[1] http://www.scribd.com/doc/59830692/Cassandra-at-Twitter
>[2] http://www.cs.virginia.edu/kim/publicity/pldi09tutorials/memory-efficient-java-tutorial.pdf
>
>--
>Bruno Leonardo Gonçalves
>
>
I'm using ConcurrentLinkedHashCacheProvider and my data on disk is about 4gb,
but the RAM used by the cache is around 25gb. I have 70k columns per row, and
only about 2500 rows – so a lot more columns than rows. Has there been any
discussion or JIRAs about reducing the size of the cache?
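The numbers in the message are themselves in range of Jonathan's "8x is pretty normal": 2,500 rows × 70,000 columns is 175M cached column objects at roughly 25 bytes each on disk, and 25 GB / 4 GB is about a 6x blow-up once JVM object headers and the cache's bookkeeping references are added. A quick back-of-envelope check:

// Back-of-envelope check of the figures quoted in this thread.
public class CacheOverheadEstimate {
    public static void main(String[] args) {
        long rows = 2_500L, colsPerRow = 70_000L;
        long cols = rows * colsPerRow;                  // 175,000,000 columns
        double diskBytes = 4.0 * (1L << 30);            // ~4 GB on disk
        double heapBytes = 25.0 * (1L << 30);           // ~25 GB in the row cache
        System.out.printf("bytes/column on disk: %.1f%n", diskBytes / cols); // ~24.5
        System.out.printf("heap overhead: %.2fx%n", heapBytes / diskBytes);  // 6.25x
    }
}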
Even though I can switch cache providers using the CLI's "update column family",
something doesn't work right. "describe" will tell me it is updated, but I
don't believe it is, purely based on the statistics I see. I think this is
why I was having trouble with evaluating caching from my oth
My recent bug was that I was sending a zero-length ByteBuffer (because I forgot
to flip it) for a column name. The problem I have is that the insert was accepted
by the server. Should an exception be thrown? The end result of allowing the
insert is that the server will not restart if the data is
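For anyone hitting the same thing: this is standard java.nio behavior rather than anything Cassandra-specific. After filling a freshly allocated buffer, position == limit, so anything that serializes the remaining bytes sees an empty buffer until flip() is called. A minimal sketch:

import java.nio.ByteBuffer;

public class FlipBug {
    public static void main(String[] args) {
        byte[] name = "colname".getBytes();
        ByteBuffer buf = ByteBuffer.allocate(name.length);
        buf.put(name);
        // After the put, position == limit == 7, so remaining() == 0 and any
        // consumer reading [position, limit) serializes a zero-length value.
        System.out.println("before flip: remaining=" + buf.remaining()); // 0
        buf.flip(); // position -> 0, limit -> 7: the bytes just written
        System.out.println("after flip:  remaining=" + buf.remaining()); // 7
    }
}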
That was exactly the problem, thx. (I'm starting a new thread to chat
about it)
On 10/11/11 6:31 AM, "Sylvain Lebresne" wrote:
>@Todd You're likely doing something wrong with your ByteBuffers. It's
>*very* easy
>to screw up with those.
>
>--
>Sylvain
This is a bit preliminary, but I wanted to get this to you guys knowing the
vote is in progress.
Using these artifacts I am seeing the following exception on restart
(also with RC1 and RC2). The only interesting tidbit is that it seems to
only happen when writing via direct calls to StorageProxy
I picked up the 1.0.0-rc1 build and am testing now ... but I bet you are
correct.
On 9/24/11 6:25 PM, "Jonathan Ellis" wrote:
>I bet this is https://issues.apache.org/jira/browse/CASSANDRA-3253.
>
>On Fri, Sep 23, 2011 at 6:00 PM, Todd Burruss
>wrote:
>> My last test, I
… down. I have concurrent_writes = 32.
Here is a sample from one of the machines:
Pool Name        Active   Pending   Completed   Blocked   All time blocked
MutationStage    32       14921416296           0         0
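An Active count pinned at concurrent_writes (32) with Pending climbing means the mutation stage is saturated. The same counters nodetool reads are exposed over JMX; here is a hedged sketch of polling them, assuming the 1.0-era MBean name org.apache.cassandra.request:type=MutationStage and the default JMX port 7199 (verify both against your build):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class MutationStagePoll {
    public static void main(String[] args) throws Exception {
        // Assumed defaults: adjust host, port, and MBean name as needed.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        JMXConnector jmx = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = jmx.getMBeanServerConnection();
            ObjectName stage = new ObjectName("org.apache.cassandra.request:type=MutationStage");
            System.out.println("active=" + mbs.getAttribute(stage, "ActiveCount")
                    + " pending=" + mbs.getAttribute(stage, "PendingTasks"));
        } finally {
            jmx.close();
        }
    }
}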
On 9/23/11 3:11 PM, "Brandon Williams" wrote:
>On
No
On 9/23/11 3:04 PM, "Jonathan Ellis" wrote:
>New errors in the log?
>
>On Fri, Sep 23, 2011 at 4:45 PM, Todd Burruss
>wrote:
>> More information ... My cluster is in the state where I can read, but
>>not
>> write, again. I used CLI to drop and
the state where writes are no longer working. During all my
writes I am reading as well.
Nodetool reports all nodes are "up", thrift is running, gossip is active,
and all are 1.0.0-beta1
On 9/23/11 9:40 AM, "Todd Burruss" wrote:
>Fyi … I am seeing the exception (at en
Fyi … I am seeing the exception (at end of message) using 1.0-beta1.
Notes:
- I was running 0.8.5 before dropping in 1.0-beta1
- upgraded the yaml file to 1.0
- some CFs were created in 0.8.5 and some in 1.0-beta1
A couple of observations after seeing this:
- cannot nicely kill cassandra, must u
I am interested in plugins/triggers. It seems like a good way to solve
the problem of streaming data out of Cassandra into, say, a Hadoop
cluster or some other BI backend.
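Cassandra had no trigger API at the time this was written, so the following is purely a hypothetical sketch of the kind of post-write hook being discussed; the interface name and signature are invented for illustration:

// Hypothetical post-write hook -- not a real Cassandra API at the time.
interface WriteTrigger {
    void onWrite(String keyspace, String columnFamily, byte[] rowKey);
}

// Example: forward keys of mutated rows toward an external consumer such as
// a Hadoop ingest pipeline, which is the use case raised above.
public class ForwardingTrigger implements WriteTrigger {
    @Override
    public void onWrite(String keyspace, String columnFamily, byte[] rowKey) {
        // A real implementation might enqueue to a log or message bus here.
        System.out.printf("forward %s/%s (%d-byte key)%n", keyspace, columnFamily, rowKey.length);
    }
}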
On 02/08/2011 02:25 PM, Jeremy Hanna wrote:
I think the plugins/triggers/coprocessors discussion is a great one to get going
a
I got the #3 0.7.1 version and let it run on our 8-node test cluster over
the weekend, doing repairs and compactions periodically.
Did see this in one of the machines' logs, no other ERRORs. Not sure of the
effect on my data or app, but passing it along ...
ERROR [EXPIRING-MAP-TIMER-2] 2011-02-06 1