instance is
a, say, 1000ms read query latency slow or fast? What can you tell from min,
median or max statistics?
What do long-term trends tell you?
Is there a “scientific” way to interpret Cassandra stats?
Many thanks.
Philippe
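Whether a 1000 ms read is "slow" depends on where it sits in the latency distribution; percentiles usually say more than min/median/max alone. A minimal sketch in plain Python (the sample latencies and the `percentile` helper are illustrative, not output of any Cassandra tool):

```python
# Sketch: interpreting latency stats via percentiles rather than min/median/max.
# The sample data below is hypothetical.

def percentile(samples, p):
    """Return the p-th percentile (0-100) using nearest-rank on sorted data."""
    ordered = sorted(samples)
    # nearest-rank: ceil(p/100 * n), converted to a 0-based index
    k = max(0, -(-p * len(ordered) // 100) - 1)
    return ordered[int(k)]

latencies_ms = [12, 15, 14, 13, 900, 16, 14, 13, 15, 1000]  # mostly fast, two outliers

print("min   :", min(latencies_ms))
print("median:", percentile(latencies_ms, 50))
print("p99   :", percentile(latencies_ms, 99))
print("max   :", max(latencies_ms))
# A high max with a low median suggests occasional outliers (GC pauses,
# compaction) rather than a uniformly slow node.
```

A low median with a high max points at occasional stalls, not a node that is slow across the board; long-term trends in the high percentiles are usually the thing to watch.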
> Get the JARs from the Cassandra lib folder and put them in your build path. Or
> use a Maven pom.xml project to download them directly from the repository.
>
> Thanks and Regards,
> Goutham Reddy Aenugu.
>
>> On Sat, Mar 10, 2018 at 9:30 AM Philippe de Rochambeau
>> wrote:
Hello,
Has anyone tried running CQL queries from a Java program using the jars
provided with DevCenter?
Many thanks.
Philippe
-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user
y.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
... 2 more
What am I doing wrong?
Thank you for your help.
Regards
Philippe SURAY
Hi,
I'm interested to know the differences between your AMI and the DataStax one
already available in the Marketplace.
Thanks,
Philippe
*Philippe Dupont*
root
Tel. +33(0)1.84.17.73.88
Mob. +33(0)6.10.14.58.26
issue?
Thanks,
Philippe
Hi Aaron,
As you can see in the picture, there is not much steal on iostat. That's
the same with top.
https://imageshack.com/i/0jm4jyp
Philippe
2013/12/10 Aaron Morton
> Thanks for the update Philip, other people have reported high await on a
> single volume previously but I
ons
whatsoever, please do not hesitate to ask! "
To conclude, the only other solution to avoid VPC and Reserved Instances is
to replace this instance with a new one, hoping not to have other "noisy
neighbors"...
I hope that will help someone.
Philippe
2013/11/28 Philippe DUPONT
>
?
Thanks,
Philippe
hack.com/a/img10/556/zzk8.jpg
Amazon support took a close look at the instance as well as its underlying
hardware for any potential health issues and both seem to be healthy.
Has anyone already experienced something like this?
Or should I contact the AMI author instead?
Thanks a lot,
Philippe.
Hi David, we tried it two years ago and the performance of the USB stick
was so dismal we stopped.
Cheers
On Nov 16, 2013 15:13, "David Tinker" wrote:
> Our hosting provider has a cost effective server with 2 x 4TB disks
> with a 16G (or 64G) USB thumb drive option. Would it make sense to put
Is there a way to limit the Memtable sizes on a columnFamily basis on
cassandra 1.1.x ? I have some CFs that have very, very low throughput and I'd
like to lower the amount of data in memory to keep the Heap size down.
Thanks
I have a cluster with 5 keyspaces. I would like to move one of the
keyspaces to a separate cluster because it has very different usage
patterns that can be optimized on different hardware.
What would be the best way to do that online ie. without interrupting reads
& writes. The keyspace is about 3
While restarting a 1.1.12 node, I've run into this while it's replaying the
commit log:
GCInspector.java (line 145) Heap is 0.9778097528951517 full. You may need
to reduce memtable and/or cache sizes. Cassandra will now flush up to the
two largest memtables to free up memory. Adjust flush_larges
Definitely knew that for major releases, didn't expect it for a minor
release at all.
On May 6, 2013 19:22, "Robert Coli" wrote:
> On Sat, May 4, 2013 at 5:41 AM, Philippe wrote:
> > After trying every possible combination of parameters, config and the
> rest,
>
After trying every possible combination of parameters, config and the rest,
I ended up downgrading the new node from 1.1.11 to 1.1.2 to match the
existing 3 nodes. And that solved the issue immediately : the schema was
propagated and the node started handling reads & writes.
2013/5/3 Phil
Unfortunately not, I've moved on to trying to add the nodes to the current
cluster and then decommission the "old" ones.
But even that is not working. This is the strangest of things: while
trying to add a new node, I
- set its token to an existing value+1
- ensure the yaml (cluster name, partiti
re the
> new files are created where you expect them to be.
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Consultant
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 1/05/2013, at 4:15 AM, Philippe wrote:
>
Hello,
I'm trying to bring up a copy of an existing 3-node cluster running 1.0.8
into a 3-node cluster running 1.0.11.
The new cluster has been configured to have the same tokens and the same
partitioner.
Initially, I copied the files in the data directory of each node into their
corresponding no
Repairs generate new files that then need to be compacted.
Maybe that's where the temporary extra volume comes from?
On Apr 21, 2012 20:43, "Igor" wrote:
> Hi
>
> I can't understand the repair behavior in my case. I have 12 nodes ring
> (all 1.0.7):
>
> 10.254.237.2LA ADS-LA-1
Definitely multi thread writes...probably with a little batching (10 or so).
That's how i get my peak throughput.
On Feb 23, 2012 04:48, "Deno Vichas" wrote:
> all,
>
> would i be better off (i'm in java land) with spawning a bunch of
> threads that all add a single item to a mutator or a sin
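The "multiple threads, small batches" advice above can be sketched in plain Python; `write_batch` below is a hypothetical stand-in for a real client mutator call (Hector or otherwise), not an actual API:

```python
# Sketch: spreading writes across threads in small batches (~10 items each).
# write_batch() is a hypothetical placeholder for a real client call.
from concurrent.futures import ThreadPoolExecutor

def chunk(items, size):
    """Split items into consecutive batches of at most `size`."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def write_batch(batch):
    # Placeholder: a real implementation would send one batch mutation here.
    return len(batch)

items = [f"row-{i}" for i in range(95)]
batches = chunk(items, 10)  # small batches, as suggested above

with ThreadPoolExecutor(max_workers=4) as pool:
    written = sum(pool.map(write_batch, batches))

print(written, "rows written in", len(batches), "batches")
```

The point of the pattern: each thread amortizes round-trip latency over a small batch, while multiple threads keep the cluster's request pipelines busy.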
Perhaps your dataset can no longer be held in memory. Check iostats
On Feb 19, 2012 11:24, "Franc Carter" wrote:
>
> I've been testing Cassandra - primarily looking at reads/second for our
> fairly data model - one unique key with a row of columns that we always
> request. I've now setup the
Does this help ?
> http://wiki.apache.org/cassandra/FAQ#range_rp
>
> Cheers
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 17/01/2012, at 10:58 AM, Philippe wrote:
>
> Hello,
> I've been trying to r
Hello,
I've been trying to retrieve rows based on key range but every single time
I test, Hector retrieves ALL the rows, no matter the range I give it.
What can I possibly be doing wrong ? Thanks.
I'm doing a test on a single-node RF=1 cluster (c* 1.0.5) with one column
family (I've added & trunca
I believe the rpm location has changed
Look at the datastax documentation.
On Jan 12, 2012 23:00, "Shu Zhang" wrote:
> Hello? Does anyone know why an rpm is still not available for 1.0.6?
>
> Thanks,
> Shu
>
> From: Shu Zhang
> Sent: Wednesday, Decembe
Would this apply to copying data from one cluster to another, assuming I do
a rolling drain and shutdown ?
Thanks
On Jan 9, 2012 16:32, "Brandon Williams" wrote:
> On Mon, Jan 9, 2012 at 9:14 AM, Brian O'Neill
> wrote:
> >
> > What is the fastest way to copy a column family?
> > We were head
But you will then get timeouts.
On Jan 6, 2012 15:17, "Vitalii Tymchyshyn" wrote:
> **
> On 05.01.12 22:29, Philippe wrote:
>
> Then I do have a question, what do people generally use as the batch
>> size?
>>
> I used to do batches from 500 to 20
eelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 4/01/2012, at 9:15 AM, Philippe wrote:
>
> I was able to scrub the node the repair that failed was running on. Are
> you saying the error could be displayed on that node but the bad data
> coming from an
>
> Then I do have a question, what do people generally use as the batch size?
>
I used to do batches from 500 to 2000 like you do.
After investigating issues such as the one you've encountered I've moved to
batches of 20 for writes and 256 for reads. Everything is a lot smoother :
no more timeouts
it?
>
>
> 2012/1/5 Philippe
>
>> You may be overloading the cluster though...
>>
>> My hypothesis is that your traffic is being spread across your node and
>> that one slow node is slowing down the fraction of traffic that goes to
>> that node (when it's act
ure I don't
overload the cluster and measure if I see a 1/RF improvement in response
time which would validate my hypothesis.
2012/1/5 R. Verlangen
> It does not appear to affect the response time, certainly not in a
> positive way.
>
>
> 2012/1/5 Philippe
>
>>
>
> 2012/1/5 Philippe
>
>> Depending on the CL you're reading at it will yes : if the CL requires
>> that the "slow" node create a digest of the data and send it to the
>> coordinator then it might explain the poor performance on reads. What is
>> your
cks over 32/64K.
>> Writes around 0-5MB per second. Network traffic 0.1 / 0.1 MB/s (in / out).
>> Paging 0. System int ~ 1300, csw ~ 2500.
>>
>>
>> 2012/1/5 Philippe
>>
>>> What can you see in vmstat/dstat ?
>>> On Jan 5, 2012 11:58, "R. Verlang
My 0.8 production cluster contains around 150 CFs spread across 5
keyspaces. Haven't found that to be an issue (yet?).
Some of them are huge (dozens of GB), some are tiny (some MB).
Cheers
2012/1/5 aaron morton
> Sort of. Depends.
>
> In Cassandra automatic memory management means the server ca
What can you see in vmstat/dstat ?
On Jan 5, 2012 11:58, "R. Verlangen" wrote:
> Hi there,
>
> I'm running a cassandra 0.8.6 cluster with 2 nodes (in 2 DC's), RF = 2.
> Actual data on the nodes is only 1GB. Disk latency < 1ms. Disk throughput ~
> 0.4MB/s. OS load always below 1 (on a 8 core m
om the logs, or it
> may be easier to just scrub them all.
>
> Hope that helps.
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 31/12/2011, at 12:20 AM, Philippe wrote:
>
> Hello,
> Running
No
On Jan 2, 2012 08:44, "ravikumar visweswara" wrote:
> Thank you Naren. If key k1>k2 (lexicographically), will md5(k1) > md5(k2)?
>
> - R
>
>
> On Sun, Jan 1, 2012 at 7:07 PM, Narendra Sharma > wrote:
>
>> A token is a MD5 hash (one way hash). You cannot compute the key given a
>> token. You
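The "No" above can be checked empirically: MD5 is not order-preserving, so lexicographic key order tells you nothing about token order under RandomPartitioner. A quick sketch (the key names are arbitrary):

```python
# Sketch: MD5 tokens do not preserve the lexicographic order of keys.
import hashlib

def md5_token(key):
    """MD5 digest of the key as a big integer (RandomPartitioner-style token)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

keys = [f"key{i:03d}" for i in range(100)]   # already in lexicographic order
tokens = [md5_token(k) for k in keys]

# Count adjacent pairs where token order disagrees with key order.
inversions = sum(1 for a, b in zip(tokens, tokens[1:]) if a > b)
print("adjacent inversions among 100 ordered keys:", inversions)
# Token order is unrelated to key order, so this is almost surely nonzero.
```

This is exactly why range scans by key order need an order-preserving partitioner, while RandomPartitioner spreads keys evenly but scrambles their order.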
Hello,
Running a combination of 0.8.6 and 0.8.8 with RF=3, I am getting the
following while repairing one node (all other nodes completed successfully).
Can I just stop the instance, erase the SSTable and restart cleanup ?
Thanks
ERROR [Thread-402484] 2011-12-29 14:51:03,687 AbstractCassandraDaemo
> >> the scenes by composite columns anyway
> >> SuperColumns: Usually an afterthought for API developers, (support for
> >> them comes "later")
> >> SuperColumns: Almost always utilized incorrectly by users, users speak
> >> of '10%' p
Would you stand by that statement in case all columns inside the super
column need to be read? Why?
Thanks
On Dec 28, 2011 19:26, "Edward Capriolo" wrote:
> Super columns have the same fundamental problem and perform worse in
> general. So switching from composites to super columns is NEVER a
seStage as earlier.
>>
> What CL are you using ?
>
Always forget that one... using QUORUM
> Which thread pool is showing pending ?
>
ReadStage is the one I'm talking about above when I don't mention the stage
explicitly.
Thanks
>
> Cheers
>
> -
1 87 0 0 0| 0 128k
Philippe
2011/12/21 Philippe
> Hi Aaron,
>
> >How many rows are you asking for in the multget_slice and what thread
> pools are showing pending tasks ?
> I am querying in batches of 256 keys max. Each batch may slice between 1
> and 5 explicit
stage active & pendings
=> the process is indeed faster at updating the counters so that doesn't
surprise me given that a counter write requires a read.
Second & third replicas : no read stage pendings at all. A
little RequestResponseStage as earlier.
Cheers
Philippe
>
Every node? I hadn't realized that. Is there a place where I can compute
how much memory is being 'wasted' ?
On Dec 21, 2011 15:09, "Alain RODRIGUEZ" wrote:
> Hi, I don't know if this will be technically possible, but I just want to
> warn you about creating a lot of column families. When you
Hello,
5 nodes running 0.8.7/0.8.9, RF=3, BOP, counter columns inside super
columns. Read queries are multigetslices of super columns inside of which I
read every column for processing (20-30 at most), using Hector with default
settings.
Watching tpstat on the 3 nodes holding the data being most of
You've got at least one row over 1GB, compacted !
Have you checked whether you are running out of heap ?
2011/12/12 Wojtek Kulik
> Hello everyone!
>
> I have a strange problem with Cassandra (v1.0.5, one node, 8GB, 2xCPU): a
> query asking for each key from a certain (super) CF results in timeou
hiccup (e.g. dirty buffer flushing by linux) this should be visibly
> correlated with 'iostat -x -k 1'.
>
some CPU correlation in some cases (on one node)
no iostat correlation ever
Thanks
Philippe
given the better results I've seen on my workload
with smaller batches, I'm going to do just that.
Philippe
for me is
: only batch if you gain something because it might break stuff.
Given this work-around, can anyone explain to me why this was happening ?
2011/12/11 Philippe
> Hi Peter,
> I'm going to mix the response to your email along with my other email from
> yesterday since they p
Hi Peter,
I'm going to mix the response to your email along with my other email from
yesterday since they pertain to the same issue.
Sorry this is a little long, but I'm stumped and I'm trying to describe
what I've investigated.
In a nutshell, in case someone has encountered this and won't read it
Answer below
> > Pool Name   Active  Pending  Completed   Blocked  All time blocked
> > ReadStage   27      2166     3565927301  0
> With the slicing, I'm not sure off the top of my head. I'm sure
> someone else can chime in. For e.g. a mult
Wasn't that on the 1.0 branch? I'm still running 0.8.x.
@Peter: investigating a little more before answering. Thanks
2011/12/10 Edward Capriolo
> There was a recent patch that fixed an issue where counters were hitting
> the same natural endpoint rather then being randomized across all of them.
Hello,
Here's an example tpstats on one node in my cluster. I only issue
multigetslice reads to counter columns
Pool Name   Active  Pending  Completed   Blocked  All time blocked
ReadStage   27      2166     3565927301  0        0
MutationStag
low). This happens every single time. And I can see the second
process gets paused during this timeout
Any ideas why this might be happening ?
Thanks
Philippe
0 0128 165472 8304 1434741200 0 0 289 281 0 0
100 0
5 0128 166452 8288 1433783600 232 4
Hello,
I've got a batch process running every so often that issues a bunch of
counter increments. I have noticed that when this process runs without
being throttled it will raise the CPU to 80-90% utilization on the nodes
handling the requests. This in turns timeouts and general lag on queries
runn
Didn't mention this is a 0.8.6 cluster with 5 nodes and RF=3
I bootstrapped a new node 2 days ago. What's weird is that it didn't pick up
the token I provided in the yaml file so I had to move it.
It looks like CASSANDRA-1992 but I'm not on 0.7.x
On Nov 29, 2011 08:00,
Hello,
While running a cleanup, Cassandra stopped with the following exception and
inspecting the logs revealed several exceptions such as below over the past
3 days. Given that it's dying on compactions, I'm really worried.
If a row was trashed, will the error propagate from node to node or will i
Oct 5, 2011 12:22 PM, "Philippe" wrote:
> A little feedback,
> I scrubbed on each server and I haven't seen this error again. The load on
> each server seems to be correct.
> nodetool compactionstats shows boat-load of "Scrub" at 100% on my 3rd
> node bu
I don't remember your exact situation but could it be your network
connectivity?
I know I've been upgrading mine because I'm maxing out Fast Ethernet on a 12
node cluster.
On Nov 20, 2011 22:54, "Jahangir Mohammed" wrote:
> Mostly, they are I/O and CPU intensive during major compaction. If gang
I'm using BOP.
On Nov 20, 2011 13:09, "Boris Yen" wrote:
> I am just curious about which partitioner you are using?
>
> On Thu, Nov 17, 2011 at 4:30 PM, Philippe wrote:
>
>> Hi Todd
>> Yes all equal hardware. Nearly no CPU usage and no memory issues.
Running on cassandra 0.8.(6|7), I have issued two moves in the same cluster
at the same time, on two different nodes. There are no writes being issued
to the cluster.
I saw a mailing list post mentioning doing moves one node at a time.
Did I just trash my cluster ?
Thanks
ware? Since those machines are sending
> data somewhere, maybe they are behind in replicating and are continuously
> catching up?
>
> Use a tool like tcpdump to find out where the data is going
>
> From: Philippe
> Reply-To: "user@cassandra.apache.org"
> Date: Tue,
chooses matching values and sends back data to p4
So if p13-p15 are outputting 80Mb/s why am I not seeing 80Mb/s coming into
p4 which is on the receiving end ?
Thanks
2011/11/15 Philippe
> Hello,
> I'm trying to understand the network usage I am seeing in my cluster, can
> anyon
Hello,
I'm trying to understand the network usage I am seeing in my cluster, can
anyone shed some light?
It's an RF=3, 12-node, cassandra 0.8.6 cluster. The nodes are
p13,p14,p15...p24 and are consecutive in that order on the ring.
Each node is only a cassandra database. I am hitting the cluster fr
Hello, I'd like to get some ideas on how to model counting uniques with
cassandra.
My use-case is that I have various counters that I increment based on data
received from multiple devices. I'd like to be able to know if at least X
unique devices contributed to a counter value.
I've thought of the
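One way to sketch the modeling question above (an illustrative in-memory analogue, not a Cassandra schema or anything proposed in the thread): track the set of contributing device ids alongside the running total, so "did at least X unique devices contribute?" becomes a set-size check.

```python
# Sketch of the idea: track contributing device ids per counter so that
# uniqueness can be checked, instead of only keeping the running total.
from collections import defaultdict

counters = defaultdict(int)          # counter name -> running total
contributors = defaultdict(set)      # counter name -> unique device ids

def increment(counter, device_id, amount=1):
    counters[counter] += amount
    contributors[counter].add(device_id)   # set membership gives uniqueness

def has_at_least_x_uniques(counter, x):
    return len(contributors[counter]) >= x

increment("page_views", "device-a")
increment("page_views", "device-a")   # same device again: total grows, uniques don't
increment("page_views", "device-b")

print(counters["page_views"])                  # 3
print(has_at_least_x_uniques("page_views", 2)) # True
print(has_at_least_x_uniques("page_views", 3)) # False
```

The trade-off is storing one entry per device per counter; for very high cardinalities an approximate structure would be the usual substitute.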
Hello,
I am going to need to move some nodes to rebalance my cluster. How safe is
this to do on a cluster with writes & reads ?
Thanks
Dear all,
I'm working with a 12-node, RF=3 cluster on low-end hardware (core i5 with
16GB of RAM & SATA disks).
I'm using a BOP and each node has a load between 50GB and 100GB (yes, I
apparently did not set my tokens right... I'll fix that later).
I'm hitting the cluster with a little over 100 con
Dear all,
I've just fired up our production cluster : 12 nodes, RF=3 and I've run into
something I don't understand at all. Our test cluster was 3 nodes, RF=3
Test cluster was AMD opteron CPUs (6x2.33) w/ 32GB RAM while the production
cluster is core i5 (4x2.66) w/ 16 GB RAM.
I'm running the same
s
my cluster won't be too out of sync.
Thanks
2011/10/5 Philippe
> Thanks for the quick responses.
>
> @Yi
> Using Hector 0.8.0-1
> Hardware is :
>
>- AMD Opteron 4174 6x 2.30+ GHz
>- 32 GB DDR3
>- 1 Gbps Lossless
>
>
> @aaron
> I
ween nodes, scrub fixes local
> issues with data. )
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 26/09/2011, at 12:53 PM, Philippe wrote:
>
> Juste did
> Could there be da
l what client are you using? And can you give a hint to your node
> hardware?
>
> Sent from my BlackBerry® wireless device
> --
> *From: * Philippe
> *Date: *Wed, 5 Oct 2011 10:33:21 +0200
> *To: *user
> *ReplyTo: * user@cassandra.apache.org
> *Subject: *Why is mutation
Hello,
I have my 3-node, RF=3 cluster acting strangely. Can someone shed some light
on what is going on?
It was stuck for a couple of hours (all clients TimedOut). nodetool tpstats
showed huge increasing MutationStages (in the hundreds of thousands).
I restarted one node and it took a while to rep
No it was an upgrade from 0.8.4 or 0.8.5 depending on the nodes.
No cassandra-env files were changed during the update.
Any other ideas? The cluster has just been weird ever since running 0.8.6 :
has anyone else upgraded and not run into this?
On Sep 28, 2011 09:32, "Peter Schuller" wrote:
>>
Hi, is there any reason why configuring a partitioner per keyspace wouldn't
be technically possible?
Thanks.
Congrats.
Is there a target date for the release? If not, is it likely to be in
October?
On Sep 27, 2011 18:57, "Sylvain Lebresne" wrote:
> The Cassandra team is pleased to announce the release of the first release
> candidate for the future Apache Cassandra 1.0.
>
> The warnings first: this i
Hello,
Have just run into a new assertion error, again after upgrading a 2
month-old cluster to 0.8.6
Can someone explain what this means and the possible consequences ?
Thanks
ERROR [AntiEntropyStage:2] 2011-09-27 06:07:41,960
AbstractCassandraDaemon.java (line 139) Fatal exception in thread
Thre
Ever since upgrading to 0.8.6, my nodes' system.log is littered with
GCInspector logs such as these
INFO [ScheduledTasks:1] 2011-09-26 21:23:40,468 GCInspector.java (line 122)
GC for ParNew: 209 ms for 1 collections, 4747932608 used; max is 16838033408
INFO [ScheduledTasks:1] 2011-09-26 21:23:43,7
> On Sun, Sep 25, 2011 at 2:27 AM, Philippe wrote:
>> Hello,
>> I've seen a couple of these in my logs, running 0.8.4.
>> This is a RF=3, 3-node cluster. 2 nodes including this one are on 0.8.4
and
>> one is on 0.8.5
>>
>> The node is still functionn
I have this happening on 0.8.x. It looks to me like this happens when the node
is under heavy load such as unthrottled compactions or a huge GC.
2011/9/24 Yang
> I'm using 1.0.0
>
>
> there seems to be too many node Up/Dead events detected by the failure
> detector.
> I'm using a 2 node cluster on
Hello,
I'm deploying my cluster with Puppet so it's actually easier for me to add
all cassandra nodes to the seed list in the YAML file than to choose a few.
Would there be any reason NOT to do this ?
Thanks
Hello,
I've seen a couple of these in my logs, running 0.8.4.
This is a RF=3, 3-node cluster. 2 nodes including this one are on 0.8.4 and
one is on 0.8.5
The node is still functioning hours later. Should I be worried?
Thanks
ERROR [ReadStage:94911] 2011-09-24 22:40:30,043 AbstractCassandraDaem
It sure looks like what I'm seeing on my cluster where a 100G commit log
partition fills up in 12 hours (0.8.x)
On Sep 23, 2011 03:45, "Yang" wrote:
> in 1.0.0 we don't have memtable_throughput for each individual CF ,
> and instead
> which memtable/CF to flush is determined by "largest
> getT
Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 22/09/2011, at 7:23 PM, Philippe wrote:
>
>> Hi Aaron
>> Thanks for the reply
>>
>> I should have mentioned that all current nodes are running 0.8.4.
>> All current and future services have 2TB disks of which i
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 22/09/2011, at 10:27 AM, Philippe wrote:
>
>> Hello,
>> We're currently running on a 3-node RF=3 cluster. Now that we have a
better grip on things, we want t
Hello,
We're currently running on a 3-node RF=3 cluster. Now that we have a better
grip on things, we want to replace it with a 12-node RF=3 cluster of
"smaller" servers. So I wonder what the best way to move the data to the new
cluster would be. I can afford to stop writing to the current cluster
>
> Sort of. There's some fine print, such as the 50% number is only if
> you're manually forcing major compactions, which is not recommended,
> but a bigger thing to know is that 1.0 will introduce "leveled
> compaction" [1] inspired by leveldb. The free space requirement will
> then be a small
Have you run repair on the nodes ? Maybe some data was lost and not repaired
yet ?
Philippe
2011/8/23 Chris Marino
> Hi, we're running some performance tests against some clusters and I'm
> curious about some of the numbers I see.
>
> I'm running the stress t
>
> Do you have an indication that at least the disk space is in fact
> consistent with the amount of data being streamed between the nodes? I
> think you had 90 -> ~ 450 gig with RF=3, right? Still sounds like a
> lot assuming repairs are not running concurrently (and compactions are
> able to run
Péter,
In our case they get created exclusively during repairs. Compactionstats
showed a huge number of sstable build compactions
On Aug 20, 2011 1:23 AM, "Peter Schuller"
wrote:
>> Is there any chance that the entire file from source node got streamed to
>> destination node even though only smal
Unfortunately repairing one CF at a time didn't help in my case because it
still streams all CFs and that triggers lots of compactions
On Aug 18, 2011 3:48 PM, "Huy Le" wrote:
>
> Because they are occurring in parallel.
>>
> So if a range is out of sync between A<->B and A<->C, A will receive the
> repairing stream from both (in any order) and will apply mutations based on
> that and the usual overwrite rules so necessarily exclude one of the
> repairing stream and that
>
> Because they are occurring in parallel.
>
So if a range is out of sync between A<->B and A<->C, A will receive the
repairing stream from both (in any order) and will apply mutations based on
that and the usual overwrite rules so necessarily exclude one of the
repairing stream and that data will
>
> Almost, but not quite: if you have nodes A,B,C and repair A, it will
> transfer A<->B, A<->C, but not B<->C.
>
But on a 3 node cluster once you do A<->B & A<->C, why don't you
transitively get B<->C ?
Thanks
t in one of the threads that the
>> issue is not reproducible, but multiple users have the same issue. Is there
>> anything that I should do to determine the cause of this issue before I do a
>> rolling restart and try to run repair again? Thanks!
>>
>> Huy
>>
>>
, I'm guessing it's neither the hardware nor the network.
I could provide the data directories privately to a commiter if that
helps... I assume an eighth repair would also stream stuff around. The data
directories are : 8.3GB, 3.3GB and 3.1GB
Thanks
2011/8/17 Philippe
> ctrl-c
Looking at the logs, I see that repairs stream data TO and FROM a node to
its replicas.
So on a 3-node RF=3 cluster, one only needs to launch repairs on a single
node right ?
Thanks
Look at my last two or three threads. I've encountered the same thing and
got some pointers/answers.
On Aug 17, 2011 4:03 PM, "Huy Le" wrote:
> Hi,
>
> After upgrading to cass 0.8.4 from cass 0.6.11. I ran scrub. That worked
> fine. Then I ran nodetool repair on one of the nodes. The disk usage on
>
> What if the column is a counter ? Does it overwrite or increment ? Ie if
> the SST I am loading has the exact same setup but value 2, will my value
> change to 3 ?
>
> Counter columns only know how to increment (assuming no deletes), so you
> will get 3. See
> https://github.com/apache/cassandr
http://www.datastax.com/dev/blog/bulk-loading indicates that "it is
perfectly reasonable to load data into a live, active cluster."
So let's say my cluster has a single KS & CF and it contains a key "test"
with a SC named "Cass" and a normal subcolumn named "Data" that has value 1.
If I SSTLoad da
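The overwrite-vs-increment distinction discussed in this thread can be sketched as two merge rules (illustrative functions, not Cassandra internals): a regular column resolves by last-write-wins, while a counter column resolves by summing.

```python
# Sketch: merge semantics when loading a column that already exists.
# Regular columns resolve by overwrite; counter columns resolve by addition.

def merge_regular(existing_value, loaded_value):
    # Last write wins: the loaded value replaces the existing one.
    return loaded_value

def merge_counter(existing_value, loaded_value):
    # Counter shards are summed: loading "2" on top of "1" yields 3.
    return existing_value + loaded_value

print(merge_regular(1, 2))  # 2: the loaded value overwrites
print(merge_counter(1, 2))  # 3: increments accumulate, as described above
```

This is why bulk-loading an SSTable whose counter column holds 2 onto a live value of 1 ends at 3 rather than 2, matching the answer quoted above.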
t; Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 17/08/2011, at 10:09 AM, Philippe wrote:
>
> One last thought : what happens when you ctrl-c a nodetool repair ? Does it
> stop the repair on the server ? If not, then I think