On Sun, Jan 31, 2016 at 12:00 AM, Richard L. Burton III
wrote:
> Initially, I want to keep costs fairly tight if possible. I plan to use 3
> instances of m1.large servers with 200GB of disk space per instance. The
> application will use up this disk space within the first 4 months.
Initially, I want to keep costs fairly tight if possible. I plan to use 3
m1.large instances with 200GB of disk space per instance. The application
will use up this disk space within the first 4 months, at which point I will
add disk capacity by adding new nodes to the cluster.
Could you clarify: is this for pairs of rows, or is it n rows with n columns?
And is n a constant known before the query executes, or is it based on the
presence of non-NULL column values?
And is this always adjacent rows ordered by a clustering key - as opposed to
a partition key, which does not guarantee any ordering?
Folks,
I need some advice. We have a time-series application that needs the data
returned from C* to be flipped from the typical column-based layout to a
row-based one.
Example: C* data: A B C D E F
Need returned data to be: A D
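For illustration, here is a minimal Python sketch of the kind of client-side
flip I have in mind; the sample rows and their shape (two rows of three
columns) are made up:

    # Sketch: transpose column-based result rows into row-based output
    # on the client. The sample rows are invented for illustration only.
    rows = [("A", "B", "C"),
            ("D", "E", "F")]

    # zip(*rows) pairs up the i-th column of every row, turning
    # 2 rows x 3 columns into 3 rows x 2 columns: (A, D), (B, E), (C, F).
    flipped = list(zip(*rows))
    for row in flipped:
        print(row)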
For production I'd stick with ephemeral disks (aka instance storage) if you
are running a lot of transactions.
However, for a regular small testing/QA cluster, or something you know you
will want to reload often, EBS is definitely good enough, and we haven't had
issues 99% of the time. The 1% is the kind of anomaly where
Hi Jonathan,
I created the schema manually. I took the schema definition from the old
cluster using "desc {keyspace_name}" and then ran those CQL statements in
the new cluster. I didn't do anything with the system keyspaces.
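Roughly, the same export can also be scripted with the DataStax Python
driver; this is just a sketch, and the contact point and keyspace name below
are placeholders:

    # Sketch: dump a keyspace's schema from the old cluster to a .cql file.
    # "10.0.0.1" and "my_keyspace" are placeholders for illustration only.
    from cassandra.cluster import Cluster

    cluster = Cluster(["10.0.0.1"])   # contact point on the old cluster
    cluster.connect()                 # populates cluster.metadata
    ddl = cluster.metadata.keyspaces["my_keyspace"].export_as_string()
    with open("my_keyspace_schema.cql", "w") as f:
        f.write(ddl)
    cluster.shutdown()

The resulting file can then be applied on the new cluster with
"cqlsh -f my_keyspace_schema.cql".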
Cheers,
Ajaya
On Sat, Jan 30, 2016 at 11:29 PM, Jonathan Haddad wrote:
Yep, that motivated my question "Do you have any idea what kind of disk
performance you need?". If you need the performance, it's hard to beat
ephemeral SSD in RAID 0 on EC2, and it's a solid, battle-tested
configuration. If you don't, though, EBS GP2 will save a _lot_ of headache.
Personally, on sm
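For reference, a rough Python sketch of that RAID 0 assembly; the device
names and mount point are assumptions, so check your instance type's actual
device mapping before running anything like this:

    # Sketch: assemble two ephemeral (instance-store) SSDs into a RAID 0
    # array for the Cassandra data directory. Device names and mount point
    # are assumptions; verify them for your instance type first.
    import subprocess

    devices = ["/dev/xvdb", "/dev/xvdc"]   # assumed instance-store devices
    array = "/dev/md0"
    mount_point = "/var/lib/cassandra"     # assumed Cassandra data mount

    subprocess.run(["mdadm", "--create", array, "--level=0",
                    f"--raid-devices={len(devices)}"] + devices, check=True)
    subprocess.run(["mkfs.ext4", array], check=True)
    subprocess.run(["mount", array, mount_point], check=True)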
Did you also copy the system keyspaces or did you create the schema
manually?
On Sat, Jan 30, 2016 at 9:39 AM Jeff Jirsa
wrote:
> Upgrade from 2.1.9+ directly to 3.0 is supported:
>
> https://github.com/apache/cassandra/blob/cassandra-3.0/NEWS.txt#L83-L85
>
> - Upgrade to 3.0 is supported from C
Upgrade from 2.1.9+ directly to 3.0 is supported:
https://github.com/apache/cassandra/blob/cassandra-3.0/NEWS.txt#L83-L85
- Upgrade to 3.0 is supported from Cassandra 2.1 versions greater or equal to
2.1.9, or Cassandra 2.2 versions greater or equal to 2.2.2. Upgrade from
Cassandra 2.0 and older versions is not supported.
You need to upgrade first to C* 2.2 before migrating to C* 3.x.
For each version, read the NEWS.txt file and follow the procedure:
From 2.1.x to 2.2.x:
https://github.com/apache/cassandra/blob/cassandra-2.2/NEWS.txt
From 2.2.x to 3.x:
https://github.com/apache/cassandra/blob/cassandra-3.0/NEWS.txt
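As a rough per-node sketch of the order of operations around each binary
upgrade (it assumes nodetool is on the PATH, and the Cassandra package
upgrade itself happens in between these two steps):

    # Sketch of the per-node steps around each binary upgrade in a rolling
    # upgrade (2.1.x -> 2.2.x, then 2.2.x -> 3.x). Assumes nodetool is on
    # the PATH; the Cassandra binary upgrade happens between the two calls.
    import subprocess

    def before_stopping_node():
        # Flush memtables and stop accepting traffic before shutdown.
        subprocess.run(["nodetool", "drain"], check=True)

    def after_restarting_upgraded_node():
        # Rewrite SSTables into the new on-disk format once the node is up.
        subprocess.run(["nodetool", "upgradesstables"], check=True)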