Hi All,
Been a while since I upgraded and wanted to know what the steps are to
upgrade from 2.1.0 to 2.1.9. Also want to know if I need to upgrade my Java
database driver.
Thanks,
-Tony
Thanks Nate. But regarding our situation: of the 3 datacenters we have (DC1,
DC2 and DC3), we take backups of snapshots on DC1.
If DC3 were to go down, will we not be able to bring up a new DC4 with
snapshots and token_ranges from DC1?
On Fri, Aug 28, 2015 at 3:19 PM, Nate McCall wrote:
> You cannot use the identical token ranges.
We are using DSE on our clusters.
DSE version : 4.6.7
Cassandra version : 2.0.14
thanks
Sai Potturi
On Fri, Aug 28, 2015 at 3:40 PM, Robert Coli wrote:
> On Fri, Aug 28, 2015 at 11:32 AM, sai krishnam raju potturi <
> pskraj...@gmail.com> wrote:
>
>> we decommissioned nodes in a datacenter a while back.
On Fri, Aug 28, 2015 at 11:32 AM, sai krishnam raju potturi <
pskraj...@gmail.com> wrote:
> we decommissioned nodes in a datacenter a while back. Those nodes keep
> showing up in the logs, and also sometimes marked as UNREACHABLE when
> `nodetool describecluster` is run.
>
What version of Cassandra are you running?
You cannot use the identical token ranges. You have to capture membership
information somewhere for each datacenter, and use that token information
when bringing up the replacement DC.
You can find details on this process here:
http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_snap
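One way to capture that membership information is to save `nodetool ring`
output from each datacenter and turn it into a per-node initial_token list
for the replacement nodes. A minimal Python sketch of the idea (the
addresses and tokens in the sample are made up, and the exact column layout
of `nodetool ring` can vary slightly by version):

```python
# Sketch: rebuild per-node initial_token lists from saved `nodetool ring`
# output so a replacement DC can reuse the old DC's tokens.
from collections import defaultdict

def tokens_by_node(ring_output: str) -> dict:
    """Map each node address to the list of tokens it owned."""
    tokens = defaultdict(list)
    for line in ring_output.splitlines():
        fields = line.split()
        # Data rows end in a signed integer token; header and
        # separator lines do not.
        if len(fields) >= 2 and fields[-1].lstrip('-').isdigit():
            tokens[fields[0]].append(fields[-1])
    return dict(tokens)

def initial_token_lines(tokens: dict) -> list:
    """Render a cassandra.yaml `initial_token` line per node."""
    return [f"{node}: initial_token: {','.join(ts)}"
            for node, ts in sorted(tokens.items())]

# Hypothetical saved `nodetool ring` excerpt:
sample = """\
Address    Rack   Status  State   Load     Owns   Token
10.0.0.1   rack1  Up      Normal  1.2 GB   33%    -9223372036854775808
10.0.0.1   rack1  Up      Normal  1.2 GB   33%    -3074457345618258603
10.0.0.2   rack1  Up      Normal  1.1 GB   33%    3074457345618258602
"""
print(initial_token_lines(tokens_by_node(sample)))
```

Each replacement node then gets the comma-separated token list of the node
it stands in for, before it is started for the first time.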
Do they show up in nodetool gossipinfo?
Either way, you probably need to invoke Gossiper.unsafeAssassinateEndpoints
via JMX as described in step 1 here:
http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_gossip_purge.html
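For reference, the JMX invocation can be sketched with jmxterm (a sketch
only: the jar path and IP below are placeholders, JMX is assumed on the
default port 7199, and the operation is named unsafeAssassinateEndpoint,
singular, on 2.0.x):

```
echo "run -b org.apache.cassandra.gms:type=Gossiper unsafeAssassinateEndpoint 10.0.0.5" \
  | java -jar jmxterm-1.0.jar -l localhost:7199
```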
On Fri, Aug 28, 2015 at 1:32 PM, sai krishnam raju potturi
hi;
we decommissioned nodes in a datacenter a while back. Those nodes keep
showing up in the logs, and also sometimes marked as UNREACHABLE when
`nodetool describecluster` is run.
However these nodes do not show up in `nodetool status` and
`nodetool ring`.
Below are a couple lines from the logs.
Unfortunately, the addresses/DC of the replicas are not available on the
exception hierarchy within Cassandra.
Fwiw, the DS Java Driver (most native protocol drivers actually) manages
membership dynamically by acting on cluster health events sent back over
the channel by the native transport. Keep
hi;
We have a Cassandra cluster with vnodes spanning 3 data centers.
We take backups of the snapshots from one datacenter.
In a doomsday scenario, we want to restore a downed datacenter with
snapshots from another datacenter. We have the same number of nodes in each
datacenter.
1 : We kno
On Fri, Aug 28, 2015 at 6:27 AM, Tommy Stendahl wrote:
> Thx, that was the problem. When I think about it it makes sense that I
> should use update in this scenario and not insert.
Per Sylvain on an old thread :
"
INSERT and UPDATE are not totally orthogonal in CQL and you should use
INSERT for inserts and UPDATE for updates.
"
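A minimal cqlsh illustration of the difference (table name and TTL values
here are hypothetical, not from the thread): an INSERT writes a row marker,
so the row survives the expiry of a TTL'd column, while a row created only
by an UPDATE disappears once its last column expires.

```
cqlsh> CREATE TABLE foo.demo (key int, cluster int, col int, PRIMARY KEY (key, cluster));
-- INSERT writes a row marker: after col's TTL expires, the row is still
-- returned, with col = null.
cqlsh> INSERT INTO foo.demo (key, cluster) VALUES (1, 1);
cqlsh> UPDATE foo.demo USING TTL 10 SET col = 42 WHERE key = 1 AND cluster = 1;
-- UPDATE alone writes no row marker: once col expires here, the whole
-- row vanishes from SELECT.
cqlsh> UPDATE foo.demo USING TTL 10 SET col = 42 WHERE key = 2 AND cluster = 2;
```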
The Cassandra team is pleased to announce the release of Apache Cassandra
version 2.1.9.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.
http://cassandra.apache.org/
Downloads of source and binary distributions:
Hi guys,
I got some issues with ccm and unit tests in java-driver. Here is what I see :
tail -f /tmp/1440780247703-0/test/node5/logs/system.log
INFO [STREAM-IN-/127.0.1.3] 2015-08-28 16:45:06,009 StreamResultFuture.java
(line 220) [Stream #22d9e9f0-4da4-11e5-9409-5d8a0f12fefd] All sessions completed
Thx, that was the problem. When I think about it it makes sense that I
should use update in this scenario and not insert.
cqlsh> create TABLE foo.bar ( key int, cluster int, col int, PRIMARY KEY
(key, cluster)) ;
cqlsh> INSERT INTO foo.bar (key, cluster ) VALUES ( 1,1 );
cqlsh> SELECT * FROM foo.bar ;
What if you use an update statement in the second query?
--
Jacques-Henri Berthemet
-----Original Message-----
From: Tommy Stendahl [mailto:tommy.stend...@ericsson.com]
Sent: Friday, 28 August 2015 13:34
To: user@cassandra.apache.org
Subject: Re: TTL question
Yes, I understand that but I think this gives a strange behaviour.
Yes, I understand that but I think this gives a strange behaviour.
Having values only in the primary key columns is perfectly valid, so why
should the primary key be deleted by the TTL on the non-key column?
/Tommy
On 2015-08-28 13:19, Marcin Pietraszek wrote:
Please look at the primary key which you've defined.
Please look at the primary key which you've defined. The second mutation has
exactly the same primary key: it overwrote the row that you previously
had.
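In other words (a hedged reconstruction of the failing sequence, using the
thread's schema; the TTL value is illustrative): if the second statement is
an INSERT with a TTL on the same primary key, it rewrites the row marker
with that TTL, so the whole row expires.

```
cqlsh> INSERT INTO foo.bar (key, cluster) VALUES (1, 1);
-- Same primary key: this INSERT overwrites the row marker and attaches
-- the TTL to it, so after 10 seconds the entire row is gone.
cqlsh> INSERT INTO foo.bar (key, cluster, col) VALUES (1, 1, 1) USING TTL 10;
```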
On Fri, Aug 28, 2015 at 1:14 PM, Tommy Stendahl
wrote:
> Hi,
>
> I did a small test using TTL but I didn't get the result I expected.
>
> I did this in cqlsh:
Hi,
I did a small test using TTL but I didn't get the result I expected.
I did this in cqlsh:
cqlsh> create TABLE foo.bar ( key int, cluster int, col int, PRIMARY KEY
(key, cluster)) ;
cqlsh> INSERT INTO foo.bar (key, cluster ) VALUES ( 1,1 );
cqlsh> SELECT * FROM foo.bar ;
 key | cluster | col
-----+---------+------
   1 |       1 | null
Hi All,
Please excuse my limited knowledge. We have an application in .NET and the
backend database is Cassandra. We have deployed our application into
production behind the firewall. We have opened port 9042 from
our web server to the Cassandra cluster. But again we are getting the
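When a driver cannot reach the cluster through a firewall, a first sanity
check is raw TCP reachability of the native-transport port (9042) from the
web server. A small Python sketch (the address in the usage comment is a
placeholder):

```python
# Check raw TCP reachability of Cassandra's native-transport port.
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (placeholder address): port_open("10.0.0.5", 9042)
```

If this returns False from the web server, the problem is network/firewall
configuration rather than the driver or Cassandra itself.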