DC) which
> report not having the data. After all the nodes respond it will check the
> digests from all the responses, see there's an inconsistency and do a read
> repair, which would explain it showing up in following queries.
>
> Chris
>
> On Jun 26, 2014, at 10
I ran the following set of commands via the CLI on our servers. There is a
data discrepancy that I encountered during gets, as shown below...
We are running version 1.2.4 with replication factor 3 (DC1) & 2 (DC2).
Reads and writes are at LOCAL_QUORUM.
create column family TestCF with key_validation_class=Asc
We have the following structure in a composite CF, comprising two parts:
Key=123 -> A:1, A:2, A:3, B:1, B:2, B:3, B:4, C:1, C:2, C:3,
Our application provides the following inputs for querying on the
first part of the composite column:
key=123, [(colName=A, range=2), (colName=B, range=3), (colName=C
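For illustration only (the real schema above is cut off), the same kind of
read in CQL3, with the two composite parts as clustering columns, could look
like the sketch below; the table and column names are assumptions, and each
first-part value would be queried separately:

CREATE TABLE testcf (
    key   text,
    part1 text,   -- first part of the composite column name, e.g. 'A'
    part2 int,    -- second part, e.g. 1, 2, 3
    value text,
    PRIMARY KEY (key, part1, part2)
);

-- first part fixed, range on the second part
SELECT * FROM testcf WHERE key = '123' AND part1 = 'A' AND part2 <= 2;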
We have suddenly started receiving RangeSliceCommand serializer errors.
We are running version 1.2.4.
This does not happen for names-based commands; we only get this error for
slice-based commands.
Any help is greatly appreciated.
ERROR [Thread-405] 2013-10-10 07:58:13,453 CassandraDaemon.java (l
le mentioning repair and cleanup.
> That is in fact what needs to be done first and what I meant. Thanks
> Robert.
>
> Regards,
> Shahab
>
>
> On Wed, Oct 9, 2013 at 1:50 PM, Robert Coli wrote:
>
>> On Wed, Oct 9, 2013 at 7:35 AM, Ravikumar Govindarajan <
>&g
We have wide rows accumulated in a Cassandra CF and have now changed our
app-side logic.
The application now only wants the first 7 days of data from this CF.
What is the quick way to delete old data and at the same time make sure
reads do not churn through all the deleted columns?
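One possible approach, sketched below under the assumption of a CQL3 table
(the names here are made up), is to write new columns with a 7-day TTL so
data written from now on expires by itself and gets dropped at compaction;
the already-accumulated old columns would still need explicit deletes or a
rewrite of the row:

-- 604800 seconds = 7 days; expired columns turn into tombstones and are
-- removed at compaction after gc_grace_seconds
INSERT INTO timeseries (userid, ts, value)
VALUES ('XYZ', '2013-10-10 07:58:00', 'some-value')
USING TTL 604800;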
Let's say I do the following
;Col-Name-4'} WHERE userid = 'XYZ' AND pkid = '1000';
>> UPDATE time_series SET colname = colname +
>> {'204':'Col-Name-5'} WHERE userid = 'XYZ' AND pkid = '1002';
>>
>> SELECT * FROM time_series WHERE userid =
I am faced with the problem of grouping composites on the second part.
Let's say my CF contains this:
TimeSeriesCF
key:UserID
composite-col-name:TimeUUID:PKID
Some sample data
UserID = XYZ
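In CQL3 terms the layout above roughly corresponds to the table below (a
sketch; the value column and the types are assumptions):

CREATE TABLE timeseriescf (
    userid text,
    ts     timeuuid,   -- first part of the composite column name
    pkid   text,       -- second part of the composite column name
    value  text,
    PRIMARY KEY (userid, ts, pkid)
);

Because pkid is the last clustering column, a row can only be sliced by ts
first; grouping or filtering directly on pkid needs either a second column
family laid out for that access path or client-side filtering.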
> called Image, Documents, Meta and store rows in all of them with the 123
> key.
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 18/04/2013, at 1:32 PM,
e Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 17/04/2013, at 11:25 PM, Ravikumar Govindarajan <
> ravikumar.govindara...@gmail.com> wrote:
>
> We would like to map multiple keys to
We would like to map multiple keys to a single token in Cassandra. I
believe this should be possible now with CASSANDRA-1034.
Ex:
Key1 --> 123/IMAGE
Key2 --> 123/DOCUMENTS
Key3 --> 123/MULTIMEDIA
I would like all keys with "123" as prefix to be mapped to a single token.
Is this possible? What sh
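The reply quoted above suggests modelling around this rather than changing
the partitioner: keep "123" as the row key and split the categories either
into separate column families or into the column name. A rough CQL3 sketch
of the latter (all names here are assumptions):

CREATE TABLE entity_data (
    key      text,    -- e.g. '123'
    category text,    -- 'IMAGE', 'DOCUMENTS', 'MULTIMEDIA'
    item     text,
    value    blob,
    PRIMARY KEY (key, category, item)
);

Everything stored under key '123' then hashes to the same token and lives on
the same replicas, without a custom partitioner.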
d
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 28/03/2013, at 4:15 AM, Ravikumar Govindarajan <
> ravikumar.govindara...@gmail.com> wrote:
>
> We started receiving OOMs in our cassandra grid and took a heap dump. We
> are running version
We started receiving OOMs in our Cassandra grid and took a heap dump. We
are running version 1.0.7 with LOCAL_QUORUM for both reads and writes.
After some analysis, we more or less identified the problem as involving
SliceByNamesReadCommand on a single Super-Column. This seems to be
happening only in dig
It is the latest one. To
> have the best performance, use reversed sorting order.
>
> Andrey
>
>
> On Fri, Dec 21, 2012 at 6:40 AM, Ravikumar Govindarajan <
> ravikumar.govindara...@gmail.com> wrote:
>
>> How do we model time series data in Cassandra
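A CQL3 sketch of the "reversed sorting order" suggestion above (table and
column names are assumptions):

CREATE TABLE events (
    userid text,
    ts     timeuuid,
    value  text,
    PRIMARY KEY (userid, ts)
) WITH CLUSTERING ORDER BY (ts DESC);

-- newest events come back first, so a "latest N" read never has to
-- scan past older columns
SELECT * FROM events WHERE userid = 'XYZ' LIMIT 100;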
as compaction, hints to the OS that
> the reads should not be cached. Technically it uses posix_fadvise if you
> want to look it up.
>
> Cheers
>
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> New Zealand
>
> @aaronmorton
> http://
ged when space is needed.
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 4/12/2012, at 11:59 PM, Ravikumar Govindarajan <
> ravikumar.govindara...@gmail.com> wrote:
ight results to
> temp tables or whatever) and allow you to page them. Slices have a
> fixed size; this ensures that the "query" does not execute for
> arbitrary lengths of time.
>
>
> On Thu, Nov 15, 2012 at 6:39 AM, Ravikumar Govindarajan
> wrote:
> > Usually
As I understand from the link below, burning column index info onto the
SSTable index files will not only help eliminate SSTables from a read but
also reduce disk seeks from 3 to 2 for wide rows.
Our index files are always mmapped, so there is only one random seek for a
named-column query. I think that is a wonder
used when you read columns by names. If you are doing a slice with a start, the
> bloom filter is not used; instead the row-level column index is used (if
> present).
>
> Hope that helps.
>
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http:/
Thanks for the clarification. Even though compression solves the disk space
issue, we might still have Memtable bloat, right?
There is another issue to be handled for us. The queries are always going
to be range queries with an absolute match on part 1 and a range on part 2 of
the composite columns.
Ex: Quer
I'm not sure what you are asking.
> >
> > Cheers
> >
> > -
> > Aaron Morton
> > Freelance Developer
> > @aaronmorton
> > http://www.thelastpickle.com
> >
> > On 5/07/2012, at 6:56 PM, Ravikumar Govindarajan wrote:
> >
> >
; concurrent_compactors
>
> Cheers
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 3/07/2012, at 6:33 PM, Ravikumar Govindarajan wrote:
>
> Recently, we faced a severe freeze [around 30-40 mins] on one of our
>
data directory. Then
> run repair to restore consistency.
>
> Cheers
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 14/06/2012, at 11:38 PM, Ravikumar Govindarajan wrote:
>
> We received the fol
We tried this route previously. We did not run repair at all (our use-cases
don't need repair), but while adding a secondary data center we were
forced to run repair. It ended up exploding the data.
We finally had to start afresh: we scrapped the cluster and re-imported the
data with NTS. Now, whethe
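For reference, with NTS a keyspace with, say, RF 3 in DC1 and 2 in DC2 would
be defined roughly like this (keyspace and DC names are assumptions):

CREATE KEYSPACE app_ks WITH replication =
    {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 2};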
Hi,
I ran some data import tests for Cassandra 0.8.1 and 1.0.7. The results
were a little surprising.
0.8.1, SimpleStrategy, Rep_Factor=3, QUORUM Writes, RP, SimpleSnitch
XXX.XXX.XXX.A datacenter1 rack1 Up Normal 140.61 GB 12.50%
XXX.XXX.XXX.B datacenter1 rack1 Up
second node
> to DC2 you would want to give it a token of something like
> 106338239662793269832304564822427565952 so that the DC2 is also evenly
> balanced.
>
> --DRS
>
> On Feb 9, 2012, at 11:00 PM, Ravikumar Govindarajan wrote:
>
> > Hi,
> >
> > I was trying to setup a backup DC from
Hi,
I was trying to set up a backup DC from an existing DC.
State of the existing DC with SimpleStrategy & rep_factor=1:
./nodetool -h localhost ring
Address         DC     Rack   Status  State   Load    Owns    Token
                                                              85070591730234615865843651857942052864
XXX.YYY         DC1