On Sat, Nov 10, 2012 at 6:16 PM, Drew Kutcharian wrote:
Thanks Rob, this makes sense. We only have one rack at this point, so I think
it'd be better to start with PropertyFileSnitch to make Cassandra think that
these nodes are each in a different rack without having to put them on
different subnets. And I will have more flexibility (at the cost of ke…)
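For readers unfamiliar with PropertyFileSnitch: the rack/datacenter mapping lives in a cassandra-topology.properties file on every node. A minimal sketch of the idea Drew describes (one physical rack presented to Cassandra as several logical racks); the IP addresses and names are illustrative, not from the thread:

```properties
# cassandra-topology.properties (sketch; IPs and names are hypothetical)
# Three nodes that share one physical rack, reported as three racks so
# a rack-aware strategy will spread replicas across them.
192.168.1.10=DC1:RAC1
192.168.1.11=DC1:RAC2
192.168.1.12=DC1:RAC3

# Fallback for any node not listed above
default=DC1:RAC1
```

The same file must be kept identical on every node in the cluster, which is the maintenance cost hinted at above.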
On Mon, Nov 5, 2012 at 12:23 PM, Drew Kutcharian wrote:
>> Switching from SimpleStrategy to RackAware can be a pain.
>
> Can you elaborate a bit? What would be the pain point?
If you don't maintain the same replica placement vis-à-vis nodes on
your cluster, you have to dump and reload.
Simple ex…
I understand that with one node we will have no HA, but since we are just
starting out we wanted to see what would be the bare minimum to go to
production with and as we see traction we can add more nodes.
> Switching from SimpleStrategy to RackAware can be a pain.
Can you elaborate a bit? What…
Should be fine if one node can deal with your read and write load.
Switching from SimpleStrategy to RackAware can be a pain. That's a
potential growth point way down the line (if you ever have your nodes on
different switches). You might want to just set up your keyspace as
RackAware if you intend t…
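In later CQL terms, "RackAware" placement corresponds to NetworkTopologyStrategy. A sketch of creating the keyspace that way from the start, as the message suggests; the keyspace name and datacenter name are illustrative, and the DC name must match what your snitch reports:

```sql
-- Sketch (CQL 3): create the keyspace rack/DC-aware from the start,
-- so no replica reshuffle is needed when real racks appear later.
-- 'myapp' and 'DC1' are hypothetical names.
CREATE KEYSPACE myapp
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 2};
```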
On Mon, Nov 5, 2012 at 12:49 PM, Drew Kutcharian wrote:
Hey Guys,
What should I look out for when deploying a single-node installation? We want
to launch a product that uses Cassandra, and since we are going to have very
little load initially, we were thinking of just going live with one node and
eventually adding more nodes as the load (hopefully) grows…
> Even more: if you enable read repair, the chance of reading stale data
> decreases with every further read. This helps your cluster become
> consistent again faster after a failure.
Under 1.0 the default RR probability was reduced to 10%. Because Hinted Handoff
was changed to also store h…
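The 10% default mentioned here is the per-column-family read_repair_chance setting. A sketch in the cassandra-cli syntax of that era; the column family name is hypothetical:

```
update column family users with read_repair_chance = 0.1;
```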
"By default Cassandra tries to write to both nodes, always. Writes will
only fail (on a node) if it is down, and even then hinted handoff will
attempt to keep both nodes in sync when the troubled node comes back up.
The point of having two nodes is to have read and write availability in the
face o…"
Doing reads and writes at CL=1 with RF=2 N=2 does not imply that the reads
will be inconsistent. It's more complicated than the simple counting of
blocked replicas. It is easy to support the notion that it will be largely
consistent, in fact very consistent for most use cases.
By default Cassandra…
You'll need to either read or write at QUORUM (or stronger) to get consistent
data from the cluster, so you may as well do both.
Now that you mention it, I was wrong about downtime: with a two-node
cluster, reads or writes at QUORUM will mean both nodes need to be online.
Perhaps you could have an emergen…
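The quorum arithmetic behind the last two messages can be sketched in a few lines. This is the standard Cassandra consistency rule, not code from the thread: QUORUM is floor(RF/2) + 1, and a read is guaranteed to overlap the latest write when R + W > RF.

```python
# Sketch of Cassandra's tunable-consistency arithmetic.
# QUORUM = floor(RF / 2) + 1; a read sees the latest write when the
# read and write replica sets must overlap, i.e. when R + W > RF.

def quorum(rf: int) -> int:
    """Number of replicas a QUORUM operation must reach."""
    return rf // 2 + 1

def strongly_consistent(r: int, w: int, rf: int) -> bool:
    """True if reads at CL=r after writes at CL=w always overlap."""
    return r + w > rf

rf = 2
q = quorum(rf)
print(q)                                # 2: with RF=2, QUORUM needs both nodes
print(strongly_consistent(q, q, rf))    # True: QUORUM reads + QUORUM writes
print(strongly_consistent(1, 1, rf))    # False: CL=ONE both ways can miss a write
```

This is why the thread notes that with N=2, RF=2, QUORUM operations leave no tolerance for a node being down: the quorum size equals the replica count.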
Thanks for the comments, I guess I will end up doing a 2 node cluster with
replica count 2 and read consistency 1.
-- Drew
On Mar 15, 2012, at 4:20 PM, Thomas van Neerijnen wrote:
So long as data loss and downtime are acceptable risks, a one-node cluster
is fine.
Personally this is usually only acceptable on my workstation; even my dev
environment is redundant, because servers fail, usually when you least want
them to, like for example when you've decided to save costs by wai…
Hi Drew,
One other disadvantage is the lack of "consistency level" and
"replication". Both are part of high availability / redundancy, so you
would really need to back up your single-node "cluster" to some other
external location.
Good luck!
2012/3/15 Drew Kutcharian:
Hi,
We are working on a project that initially is going to have very little data,
but we would like to use Cassandra to ease future scalability. Due to
budget constraints, we were thinking of running a single Cassandra node for
now and then adding more nodes as required.
I was wondering if it is…