I have seen people do this but I can’t find documentation for it, and
specifically how well it optimizes IO. Does it write blocks to both
disks? How is IO parallelized?
Too many questions to list them all.
On Thu, Nov 6, 2014 at 3:27 PM, Chris Lohfink
wrote:
> If optimizing for IO, use Cassandra's JBOD configuration (list each disk
> under data directories in cassandra.yaml)…
I see, thanks for explaining what that means.
If we are using SSDs, then reordering/merging has less impact than on a
traditional mechanical hard disk, so SSD-backed nodes can probably handle a
higher concurrent_reads setting better. (?)
On Wed, Nov 5, 2014 at 11:00 PM, Jimmy Lin wrote:
> Sorry, I have a late follow-up question.
>
> In the cassandra.yaml file, the concurrent_reads section has the following
> comment:
>
> What does it mean by "the operations to enqueue low enough in the stack
> that the OS and drives can reorder them"?
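For reference, that comment accompanies the concurrent_reads setting in a stock cassandra.yaml; a minimal excerpt looks roughly like this (the values shown are the shipped defaults, not a recommendation):

# concurrent_reads should be set high enough (the shipped guidance is
# roughly 16 * number_of_drives) that operations queue low enough in the
# stack for the OS and drives to reorder and merge them.
concurrent_reads: 32
concurrent_writes: 32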
If optimizing for IO, use Cassandra's JBOD configuration (list each disk
under data directories in cassandra.yaml). It would put sstables on the
disk that's least used. If you want to optimize for disk space, I'd go with
RAID0. You'll probably want to tune concurrent readers/writers, stream
throughput (
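Concretely, the JBOD layout described above is just multiple entries under data_file_directories in cassandra.yaml; the mount points below are invented for illustration:

data_file_directories:
    - /mnt/disk1/cassandra/data
    - /mnt/disk2/cassandra/data
# the commitlog is commonly kept on a separate device
commitlog_directory: /mnt/disk3/cassandra/commitlog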
I’m curious what people are doing with multiple SSDs per server.
I think there are two main paths:
- RAID 0 them… the problem here is that RAID0 is not a panacea and the
drives may or may not see better IO throughput.
- use N cassandra instances per box (or containers) and have one C* node
access each SSD.
On Thu, Nov 6, 2014 at 2:10 PM, Christopher Brodt
wrote:
> Yep. The "trouble" with FIOs is that they almost completely remove your
> disk throughput problems, so then you're constrained by CPU. Concurrent
> compactors and concurrent writes are two params that come to mind but there
> are likely others.
This is definitely a first world problem.. having databases that are CPU
bound :-P
On Thu, Nov 6, 2014 at 1:05 PM, jeeyoung kim wrote:
> I've been running with FIOs and we've been CPU bound most of the time. But
> I'm not using native transport yet, and am hoping that it would make things
> faster.
Yep. The "trouble" with FIOs is that they almost completely remove your
disk throughput problems, so then you're constrained by CPU. Concurrent
compactors and concurrent writes are two params that come to mind but there
are likely others.
@kevin. I hear you. 5TB is sort of a maximum that DataStax
I've been running with FIOs and we've been CPU bound most of the time. But
I'm not using native transport yet, and am hoping that it would make things
faster.
On Thu, Nov 6, 2014 at 12:54 PM, Christopher Brodt
wrote:
> You should get pretty great performance with those FusionIO cards. One
> thing I watch out for whenever scaling Cassandra vertically is compaction times…
This was one of my biggest issues too. We were expecting to be at 5-10
nodes to start with and then 20-40 nodes in 60-90 days.
But this means we can run our entire database on 1 box :-P … but
realistically two.
Which means if one box goes offline then I’m at 50% capacity. That and I
don’t even
You should get pretty great performance with those FusionIO cards. One
thing I watch out for whenever scaling Cassandra vertically is compaction
times, which probably won't matter here. However, you have to take into
account that you lose some resiliency to failures with fewer nodes.
On Thu, Nov 6,
I've heard of people running dense nodes (8+ TB) using Fusion-io, but with
10GbE connections. I mean, why buy a Ferrari and never leave first gear?
As far as saturating the network goes, I guess that all depends on your
workload, and how often you need to repair.
Sent from my iPhone
> On Nov 6
We’re looking at switching data centers and they’re offering pretty
aggressive pricing on boxes with fusion IO cards.
- 2x 1.2TB Fusion IO
- 128GB RAM
- 20 cores
Now… this isn't the typical Cassandra box. Most people are running
multiple nodes to scale out vs. scale vertically. But these boxes are
p
For cqlengine we do quite a bit of write then read to ensure data was
written correctly, across 1.2, 2.0, and 2.1. For what it's worth,
I've never seen this issue come up. On a single node, Cassandra only
acks the write after it's been written into the memtable. So, you'd
expect to see the most recent value.
On Thu, Nov 6, 2014 at 6:14 AM, Brian Tarbox wrote:
> We write values to our keyspaces and then immediately read the values back
> (in our Cucumber tests). About 20% of the time we get the old value. If
> we wait 1 second and redo the query (within the same Java method) we get
> the new value.
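As a concrete illustration of the write-then-immediately-read pattern being debugged here, a bare-bones single-node sketch (using the DataStax Java driver from Scala rather than cqlengine; the contact point, keyspace, and table names are invented) might look like:

import com.datastax.driver.core.Cluster

object WriteThenRead extends App {
  // single local node; address and schema are assumptions for this sketch
  val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
  val session = cluster.connect()
  session.execute("CREATE KEYSPACE IF NOT EXISTS demo WITH replication = " +
    "{'class': 'SimpleStrategy', 'replication_factor': 1}")
  session.execute("CREATE TABLE IF NOT EXISTS demo.kv (k text PRIMARY KEY, v text)")

  // write, then read back immediately on the same connection
  session.execute("INSERT INTO demo.kv (k, v) VALUES ('key1', 'new-value')")
  val row = session.execute("SELECT v FROM demo.kv WHERE k = 'key1'").one()
  val readBack = row.getString("v")
  println(s"read back: $readBack")

  cluster.close()
}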
Is there a way to authenticate to Cassandra using the new
cassandra-stress tool released with Cassandra 2.1? It appears as if the
'-un' (username) and '-pw' (password) switches have been removed from
the tool.
In the 2.0 version, this is the command I would run: 'cassandra-stress
-D nodesfile
Thanks. Right now it's just for testing, but in general we can't guard
against multiple users ending up in the situation where one writes and then
one reads.
It would be one thing if the read just got old data, but we're seeing it
return wrong data... i.e. data that doesn't correspond to any particular
version of the data.
If this is just for doing tests to make sure you get back the data you
expect, I would recommend looking at some sort of eventually construct in your
testing. We use Specs2 as our testing framework, and our write-then-read
tests look something like this:
someDAO.write(someObject)
eventually {
s
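The snippet above is cut off by the archive; a self-contained sketch of that pattern in Specs2 might look like the following (Widget, someDAO, and the in-memory store are stand-ins invented for the example, not the original poster's code):

import org.specs2.mutable.Specification

case class Widget(id: String, value: String)

// stand-in DAO so the example compiles; the real one would hit Cassandra
object someDAO {
  private var store = Map.empty[String, Widget]
  def write(w: Widget): Unit = store += (w.id -> w)
  def read(id: String): Option[Widget] = store.get(id)
}

class WriteThenReadSpec extends Specification {
  "a write" should {
    "become visible to a subsequent read, eventually" in {
      val someObject = Widget("id-1", "hello")
      someDAO.write(someObject)
      // specs2's eventually(...) retries the matcher with a short pause,
      // absorbing the read-after-write lag discussed in this thread
      someDAO.read(someObject.id) must eventually(beSome(someObject))
    }
  }
}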
We're doing development on a single node cluster (and yes of course we're
not really deploying that way), and we're getting inconsistent behavior on
reads after writes.
We write values to our keyspaces and then immediately read the values back
(in our Cucumber tests). About 20% of the time we get the old value.
You'd be better off asking on the Spring Data Cassandra mailing list.
I think very few people, if anyone, have tried integrating Astyanax
with Spring Data Cassandra...
On 6 Nov 2014 08:17, "Wim Deblauwe" wrote:
> Hi,
>
> We are building an application where we install it on-premise, usual
Hello Clément
This is a known anti-pattern. You should never re-use a deleted counter
column; otherwise the counter value will be unpredictable.
On 6 Nov 2014 08:45, "Clément Fumey" wrote:
> Hi,
>
> I have a table with a counter column. When I insert (update) a row, delete
> i
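For anyone finding this thread later, the sequence being warned about is roughly the following (sketched with the DataStax Java driver from Scala; the contact point, keyspace, and table names are invented):

import com.datastax.driver.core.Cluster

object CounterReuse extends App {
  val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
  val session = cluster.connect()
  session.execute("CREATE KEYSPACE IF NOT EXISTS demo WITH replication = " +
    "{'class': 'SimpleStrategy', 'replication_factor': 1}")
  session.execute("CREATE TABLE IF NOT EXISTS demo.counts (id text PRIMARY KEY, hits counter)")

  // increment, then delete the counter row
  session.execute("UPDATE demo.counts SET hits = hits + 1 WHERE id = 'page1'")
  session.execute("DELETE FROM demo.counts WHERE id = 'page1'")

  // re-using the deleted counter is the anti-pattern: the value read back
  // after the second increment is not guaranteed to restart from zero
  session.execute("UPDATE demo.counts SET hits = hits + 1 WHERE id = 'page1'")
  val hits = session.execute("SELECT hits FROM demo.counts WHERE id = 'page1'").one().getLong("hits")
  println(s"hits after delete and re-increment: $hits")

  cluster.close()
}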