As we are still failing to add the 3 additional nodes, we still appreciate
any further thoughts.
I have removed all 3 half-joined nodes, deleted the data directories and
started only one node. Since then (more than 24 hours ago) the node has been in
status JOINING (nodetool status: UJ, nodetool gossip
In this use case you don't need the secondary index. Instead use PRIMARY
KEY (partition_id, senttime).
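A sketch of what that schema could look like, using the column names from the messagepayload table quoted later in this thread (the exact DDL here is my assumption, not the original poster's):

```sql
-- senttime as a clustering column: rows within a partition are stored
-- in senttime order, so time-range queries need no secondary index.
CREATE TABLE services.messagepayload (
    partition_id uuid,
    senttime timestamp,
    messageid bigint,
    PRIMARY KEY (partition_id, senttime)
);

-- Range query within one partition, no ALLOW FILTERING required
-- (the uuid value is illustrative):
SELECT messageid FROM services.messagepayload
WHERE partition_id = 62c36092-82a1-3a00-93d1-46196ee77204
  AND senttime >= '2014-06-01';
```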
Thanks
Jabbar Azam
On 12 Jun 2014 23:44, "Roshan" wrote:
> Hi
>
> Cassandra - 2.0.8
> DataStax driver - 2.0.2
>
> I have created a keyspace and a table with indexes like below.
> CREATE TABLE ser
As far as I can tell, the problem is that you're not using a partition key
in your query. AFAIK, you always have to use the partition key in the WHERE
clause. The ALLOW FILTERING option lets Cassandra filter data from the rows
it found using the partition key.
One way to solve it is to make partition
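A minimal illustration of the point, against the messagepayload table from this thread (the uuid value is made up):

```sql
-- Without the partition key, this either fails or needs
-- ALLOW FILTERING, which scans:
--   SELECT * FROM services.messagepayload WHERE senttime > '2014-06-01';

-- With the partition key, Cassandra routes the query straight to the
-- replicas owning that partition:
SELECT * FROM services.messagepayload
WHERE partition_id = 62c36092-82a1-3a00-93d1-46196ee77204;
```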
Hi
Cassandra - 2.0.8
DataStax driver - 2.0.2
I have created a keyspace and a table with indexes like below.
CREATE TABLE services.messagepayload (
partition_id uuid,
messageid bigint,
senttime timestamp,
PRIMARY KEY (partition_id)
) WITH compression =
{ 'sstable_compression' : 'LZ4Com
Yes, I never thought of that.
Thanks
Jabbar Azam
On 12 June 2014 19:45, Jeremy Jongsma wrote:
> That will not necessarily scale, and I wouldn't recommend it - your
> "backup node" will need as much disk space as an entire replica of the
> cluster data. For a cluster with a couple of nodes tha
That will not necessarily scale, and I wouldn't recommend it - your "backup
node" will need as much disk space as an entire replica of the cluster
data. For a cluster with a couple of nodes that may be OK, for dozens of
nodes, probably not. You also lose the ability to restore individual nodes
- th
There is another way. You create a Cassandra node in its own datacentre,
then any changes going to the main cluster will be replicated to this node.
You can back up from this node. In the event of a disaster the data from
both clusters is wiped and then replayed to the individual node. The data
wi
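A sketch of the replication setup that idea implies; the keyspace and datacentre names here are hypothetical, not from the thread:

```sql
-- Illustrative: replicate the keyspace to a single-node backup
-- datacentre alongside the main one (all names are assumptions).
ALTER KEYSPACE services WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'DC1': 3,        -- main cluster
    'DC_BACKUP': 1   -- the single backup node
};
```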
These are the maven coordinates:
http://search.maven.org/#artifactdetails%7Corg.cassandraunit%7Ccassandra-unit%7C2.0.2.1%7Cjar
On Thu, Jun 12, 2014 at 1:40 PM, Kevin Burton wrote:
> Ah.. nice! I assume you mean this?
>
> https://github.com/jsevellec/cassandra-unit
>
> This should be awesome
HA!!! thanks! Mea culpa! Wrong list. I was wondering why I didn't get a
reply.
On Thu, Jun 12, 2014 at 10:41 AM, Robert Coli wrote:
> On Wed, Jun 11, 2014 at 8:27 PM, Kevin Burton wrote:
>
>> I am trying to listen for advisory messages on apollo and for the life of
>> me I can't figure it o
On Wed, Jun 11, 2014 at 8:27 PM, Kevin Burton wrote:
> I am trying to listen for advisory messages on apollo and for the life of
> me I can't figure it out…
> ...
>
I was banging my head over this for an hour or two yesterday and figured
> there MUST be some better documentation out there.
>
The
Ah.. nice! I assume you mean this?
https://github.com/jsevellec/cassandra-unit
This should be awesome :)
On Wed, Jun 11, 2014 at 8:08 PM, Johan Edstrom wrote:
> Cassandra-unit 2.0X works awesomely,
> if you are willing to spend the slightly few more cycles, - Look at
> farsandra. :)
>
> I co
On Thu, Jun 12, 2014 at 10:29 AM, David Mitchell wrote:
> session.execute("""insert into raw_data (key,column1,value) values
> (%s,%s,%s)""",
> ...
> and then delete them like so:
> session.execute("""delete from raw_data where key = %s""",(path,))
> ...
> and then try to select from
Greetings,
I am hitting a behavior which looks like a bug to me and I’m not sure how to
work around it. If I insert rows with a given key like so:
path='some:test:key'
for c in range(count):
session.execute("""insert into raw_data (key,column1,value) values
(%s,%s,%s)""",
On Thu, Jun 12, 2014 at 9:18 AM, Phil Luckhurst <
phil.luckhu...@powerassure.com> wrote:
> The problem appears to be directly related to number of entries in the
> index.
> I started with an empty table and added 50,000 entries at a time with the
> same indexed value.
All requests in Cassandra a
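A hypothetical reconstruction of the scenario being described; the table and column names are mine, not from the original message:

```sql
-- A secondary index on a low-cardinality column: many rows share
-- the same indexed value (names are illustrative).
CREATE TABLE ks.events (id uuid PRIMARY KEY, status text);
CREATE INDEX events_status_idx ON ks.events (status);

-- Paging through the indexed query with LIMIT, as in the report:
SELECT * FROM ks.events WHERE status = 'pending' LIMIT 1000;
```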
There isn’t a lot of “actual documentation” on the act of backing up, but I did
research into it for my own company and unfortunately, you’re not going to
have a setup similar to Oracle's. There are reasons for this, however.
If you have more than one replica of the data, that
The problem appears to be directly related to number of entries in the index.
I started with an empty table and added 50,000 entries at a time with the
same indexed value. I was able to page through the results of a query that
used the secondary index with 250,000 records in the table using a LIMIT
Just an FYI, my benchmarking of the new python driver, which uses the
asynchronous CQL native transport, indicates that you can largely overcome
client-to-node latency effects if you employ a suitable level of
concurrency and non-blocking techniques.
Of course response size and other factors come
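The python driver exposes asynchronous execution for this; the sketch below does not use the driver at all, but simulates the latency-overlap effect with plain `concurrent.futures` and a fake round trip, purely to illustrate why concurrency hides per-request latency:

```python
import concurrent.futures
import time

def fake_query(i):
    """Stand-in for one request with ~20 ms of client-to-node latency."""
    time.sleep(0.02)
    return i * 2

N = 20

# Sequential requests: total time grows as N * latency.
start = time.monotonic()
sequential = [fake_query(i) for i in range(N)]
sequential_time = time.monotonic() - start

# Concurrent requests: in-flight requests overlap, so total time
# stays close to a single round trip.
start = time.monotonic()
with concurrent.futures.ThreadPoolExecutor(max_workers=N) as pool:
    concurrent_results = list(pool.map(fake_query, range(N)))
concurrent_time = time.monotonic() - start

print(sequential == concurrent_results)
print(concurrent_time < sequential_time)
```

With a real driver the same shape applies: issue the requests without blocking, then collect the results.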
Good to know, thanks Peter. I am worried about client-to-node latency if I
have to do 20,000 individual queries, but that makes it clearer that at
least batching in smaller sizes is a good idea.
On Wed, Jun 11, 2014 at 6:34 PM, Peter Sanford
wrote:
> On Wed, Jun 11, 2014 at 10:12 AM, Jeremy Jon
On Wed, Jun 11, 2014 at 9:17 PM, Jack Krupansky
wrote:
> Hmmm... that multiple-gets section is not present in the 2.0 doc:
>
> http://www.datastax.com/documentation/cassandra/2.0/cassandra/architecture/architecturePlanningAntiPatterns_c.html
>
> Was that intentional – is that anti-pattern no lon
The doc for backing up – and restoring – Cassandra is here:
http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_backup_restore_c.html
That doesn’t tell you how to move the “snapshot” to or from tape, but a
snapshot is the starting point for backing up Cassandra.
-- Jack
So you have to install a backup client on each Cassandra node. If the
NetBackup client behaves like EMC Networker, beware the resource
utilization (data deduplication, compression). You may have to boost the
CPUs and RAM (+2GB) of each node.
Try with one node: make a snapshot with nodetool and
Hi,
Thanks for the quick response Romain.
We would like to avoid using extra disk space, so no DAS/SAN.
We are more interested in achieving something like what is now being done with
Oracle - Symantec's NetBackup is used to backup directly to tape, no
intermediate storage is needed.
It could be
Hi Maria,
It depends which backup software and hardware you plan to use. Do you
store your data on DAS or SAN?
Some hints regarding Cassandra: either drain the node to back up, or
take a Cassandra snapshot and then back up this snapshot.
We back up our data on tape but we also store our data
Hi there,
I'm trying to find information/instructions about backing up and restoring a
Cassandra DB to and from a tape unit.
I was hoping someone in this forum could help me with this since I could not
find anything useful in Google :(
Thanks in advance,
Maria