On 05/12/2017 01:11 PM, Gopal, Dhruva wrote:
> Since we’re trying to qualify this for production, and 3.11 isn’t
> officially released yet, we’re planning on using 3.10. The concern
> stems from the build failing with byteman. We’re novices at building
> our own rpms for Cassandra and
Hi Anthony –
The link you shared below is where we initially started. That build fails
(there is an issue with byteman as indicated by this Jira:
https://issues.apache.org/jira/browse/CASSANDRA-13316). The source tarball
already exists (released version), so we decided to skip rebuilding the
The start and end points of a range tombstone are basically stored as special
purpose rows alongside the normal data in an sstable. As part of a read,
they're reconciled with the data from the other sstables into a single
partition, just like the other rows. The only difference is that they don'
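The reconciliation described above can be sketched in heavily simplified Python. This is not Cassandra's actual code, just an illustration of the idea: a row drops out of the merged partition when a range tombstone covers its clustering key with a timestamp at least as new as the row's write.

```python
# Hypothetical sketch (not Cassandra's implementation): reconciling rows and
# range tombstones from several sstables into a single partition view.
from dataclasses import dataclass

@dataclass
class Row:
    key: int          # clustering key, simplified to an int
    value: str
    timestamp: int    # write timestamp

@dataclass
class RangeTombstone:
    start: int        # deletes clustering keys in [start, end]
    end: int
    timestamp: int    # deletion timestamp

def read_partition(sstables):
    """Merge rows and range tombstones from all sstables, like a read would."""
    rows, tombstones = [], []
    for sstable_rows, sstable_tombstones in sstables:
        rows.extend(sstable_rows)
        tombstones.extend(sstable_tombstones)

    def shadowed(row):
        # A row is deleted if any tombstone covers its key with a timestamp
        # at least as new as the row's write timestamp.
        return any(t.start <= row.key <= t.end and t.timestamp >= row.timestamp
                   for t in tombstones)

    # Keep the newest surviving version of each row, in clustering order.
    survivors = {}
    for row in sorted(rows, key=lambda r: r.timestamp):
        if not shadowed(row):
            survivors[row.key] = row
    return [survivors[k] for k in sorted(survivors)]

# sstable 1 holds rows 1-3; sstable 2 holds a newer tombstone over keys 2-5
# plus a row (key 4) written after the deletion.
sstable1 = ([Row(1, "a", 10), Row(2, "b", 10), Row(3, "c", 10)], [])
sstable2 = ([Row(4, "d", 30)], [RangeTombstone(2, 5, 20)])
print([r.key for r in read_partition([sstable1, sstable2])])  # -> [1, 4]
```

Note how key 4 survives even though the tombstone's range covers it: its write timestamp is newer than the deletion, which is exactly the timestamp reconciliation rows go through as well.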
Hi,
here are the main techniques I know of for performing Cassandra backups:
- Tablesnap (https://github.com/JeremyGrosser/tablesnap): performs
continuous backups to S3. Comes with tableslurp to restore backups (one
table at a time only) and tablechop to delete outdated sstables from S3
We’re making sure all the nodes are up when we run it. I don’t believe we are
using LOCAL_XXX, and the repair was planned to run only on the local DC,
since that was where the node was down. Do we need to run a full cluster repair?
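For reference, a minimal command sketch of that sequence (the keyspace name is a placeholder, and flag availability depends on your Cassandra version):

```shell
nodetool status                        # confirm every node shows Up/Normal first
nodetool repair -pr my_keyspace        # repair this node's primary ranges
# To restrict the repair to the local datacenter where the node was down:
nodetool repair -pr -local my_keyspace
```

If writes were done at LOCAL_* consistency from another DC while the node was down, a repair limited to one DC may not cover everything, which is why the full-cluster question matters.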
From: Varun Gupta
Date: Thursday, May 11, 2017 at 1:33 PM
Thanks a lot Blake, that definitely helps!
I actually found a ticket regarding range tombstones and how they are accounted
for: https://issues.apache.org/jira/browse/CASSANDRA-8527
I am wondering now what happens when a node receives a read request. Are
the range tombstones read before scanning the SSTables
Hello !
I'm experiencing a data imbalance issue with one of the nodes in a
3-node C* 2.1.4 cluster. All of them are using JBOD (2 physical disks),
and this particular node seems to have recently run a relatively big
compaction (I'm using STCS), creating a 56 GB SSTable file, which results in
o
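For context on how STCS arrives at one very large file: it buckets sstables of similar size and, once a bucket reaches min_threshold, compacts the whole bucket into a single sstable. A hedged, self-contained sketch (not Cassandra's implementation; the thresholds below mirror the documented STCS defaults but are assumptions here):

```python
# Illustrative sketch of SizeTieredCompactionStrategy (STCS) bucketing.
# Sizes are in GB; bucket_low/bucket_high/min_threshold follow STCS defaults.

def bucket_sstables(sizes, bucket_low=0.5, bucket_high=1.5):
    """Group sstable sizes into buckets of 'similar' size, STCS-style."""
    buckets = []
    for size in sorted(sizes):
        for bucket in buckets:
            avg = sum(bucket) / len(bucket)
            if bucket_low * avg <= size <= bucket_high * avg:
                bucket.append(size)
                break
        else:
            buckets.append([size])
    return buckets

def next_compaction(sizes, min_threshold=4):
    """Combined size of the first bucket eligible for compaction, if any."""
    for bucket in bucket_sstables(sizes):
        if len(bucket) >= min_threshold:
            return sum(bucket)   # all of these merge into one big sstable
    return None

# Four ~15 GB sstables end up in one bucket and merge into a single ~60 GB
# file, which can leave JBOD disks unevenly filled.
print(next_compaction([14, 15, 15, 16, 2, 3]))  # -> 60
```

The `sstablesplit` offline tool may help break such a file back into smaller pieces, though behaviour varies by version, so check the tool for your release.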
While this is indeed a problem with DSE, your problem looks related to CJK
Lucene indexing; in this context I think your query does not make sense
(see CJK: https://en.wikipedia.org/wiki/CJK_characters).
If you properly configured your indexing to handle CJK, as it looks like you’re
searching fo
Hi Varun,
yes, you are right - that's the structure that gets created. But if I want
to back up ALL column families at once, this requires a quite complex rsync, as
Vladimir mentioned.
I can't just copy over the /data/keyspace directory, as that contains all
the data AND all the snapshots. I really have
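One way to tame that rsync is to first collect only the snapshot directories for a given tag, then feed the list to rsync. A hedged Python sketch, assuming the usual data/<keyspace>/<table>/snapshots/<tag> layout (the paths and tag name below are placeholders, not taken from the thread):

```python
# Sketch: find snapshots/<tag> directories for every table in a keyspace
# data directory, skipping the live sstables alongside them.
import os

def snapshot_dirs(keyspace_dir, tag):
    """Return snapshots/<tag> directories for each table in the keyspace."""
    found = []
    for table in sorted(os.listdir(keyspace_dir)):
        snap = os.path.join(keyspace_dir, table, "snapshots", tag)
        if os.path.isdir(snap):
            found.append(snap)
    return found
```

The returned paths could then be passed to rsync (e.g. via `--files-from` or one invocation per directory), so live data never enters the copy.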