I would think that compaction_throughput_kb_per_sec does have an indirect impact
on disk IO. A high number, or setting it to 0, means there is no
throttling on how much IO is being performed. Wouldn't that impact normal
reads from disk during the time when disk IO or util is high while
compaction is running?
Can someone please help me understand this a little bit?
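For reference, this throttle lives in cassandra.yaml. A sketch of the relevant line, assuming the 0.8-era key name compaction_throughput_mb_per_sec with its shipped default of 16 (the kb-based name in the question may belong to a different build):

```yaml
# Throttles compaction to this total throughput (MB/s) across the node.
# Setting it to 0 disables throttling, letting compaction consume all
# available disk IO and compete with normal reads.
compaction_throughput_mb_per_sec: 16
```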
--
View this message in context:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/LevelDB-type-compaction-tp6798334p6822344.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at
Nabble.com.
>
>> and updates could be scattered all over
>> before compaction?
>
> No, updates to a given row will still be in a single sstable.
>
>
Can you please explain a little more? You mean that if a Level 1 file contains
the range 1-100, all the updates would still go in that file?
The link on le
This is great news! Is it possible to do a write-up of the main changes like
"Leveldb" and explain them a little? I get lost reading JIRA and sometimes it
is difficult to follow the thread. It looks like there are some major
changes in this release.
Ruby Stevenson wrote:
>
> hi Sasha -
>
> Yes indeed. This solution was in the second part of my original
> question - it just seems "out of the norm" for what people usually use
> Cassandra for. I guess I am looking for some reassurance before I roll
> up my sleeves and try it.
>
> Thanks
>
> Rub
Thanks for the update
Jeremy Hanna wrote:
>
> It appears though that when choosing the non-local replicas, it looks for
> the next token in the ring of the same rack and the next token of a
> different rack (depending on which it is looking for).
Can you please explain this a little more?
Are you seeing a lot of these errors? Can you try -XX:-OmitStackTraceInFastThrow?
Check things like netstats, disk space etc to see why it's in Leaving state.
Anything in the logs that shows Leaving?
What happens when the DCs are in different time zones, e.g. 9:00 Pacific vs
11:00 Central?
springrider wrote:
>
> is that okay to do nodetool move before a completely repair?
>
> using this equation?
>
>     def tokens(nodes):
>         for x in xrange(nodes):
>             print 2 ** 127 / nodes * x
>
Yes, use that logic to get the tokens. I think it's safe to run move first
and repair later.
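The quoted snippet does the same arithmetic as this standalone Python 3 sketch (RandomPartitioner's 2**127 token space divided evenly across the nodes):

```python
# Evenly spaced initial tokens for an N-node RandomPartitioner ring
# (Python 3 version of the xrange/print snippet quoted above).
def tokens(nodes):
    return [2 ** 127 // nodes * x for x in range(nodes)]

for t in tokens(4):
    print(t)
```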
I think so too; Brisk may require all the nodes to be upgraded to Brisk.
Are there any detailed instructions about how to configure cassandra with
hadoop?
First run nodetool move and then you can run nodetool repair. Before you run
nodetool move you will need to determine the tokens that each node will be
responsible for. Then use those tokens to perform the move.
CASSANDRA learner wrote:
>
> Hi,
>
> Can we store images, java objects, and files in cassandra? If so, how?
> Please let me know as I need it urgently...
>
Look at http://goo.gl/S2E3C
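A common pattern for files and images is to chunk the blob into multiple columns under one row key. A hypothetical sketch of just the chunking logic (split_blob/join_blob are made-up helper names; the actual Cassandra writes would go through your client, e.g. Hector, and are not shown):

```python
CHUNK_SIZE = 64 * 1024  # 64 KB per column -- an assumed chunk size

def split_blob(data, chunk_size=CHUNK_SIZE):
    """Split a blob into {column_name: chunk} for storage under one row key."""
    count = (len(data) + chunk_size - 1) // chunk_size
    return {"chunk-%06d" % i: data[i * chunk_size:(i + 1) * chunk_size]
            for i in range(count)}

def join_blob(columns):
    """Reassemble the blob by concatenating chunks in column-name order."""
    return b"".join(columns[name] for name in sorted(columns))
```

Zero-padded column names keep the chunks in the right order when columns come back sorted by name.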
It really depends on your workload. With heavy workloads cassandra is not
the rig
Thanks! Then does it mean that, before compaction, if a read call comes for that
key, the sort is done at read time, since columns b, c, and a are in different
SSTables?
Trying to understand the overhead when multiple columns are spread across
SSTables. For e.g.: for key K1, columns b and c are in SSTable 1 and column a is
in SSTable 2. As I understand it, columns in a given row are sorted at the time
they are stored. So does it mean that when "a" goes to SSTable 2 it also fetches
c
Peter Schuller wrote:
>
>> Recently upgraded to 0.8.1 and noticed what seems to be missing data
>> after a
>> commitlog replay on a single-node cluster. I start the node, insert a
>> bunch
>> of stuff (~600MB), stop it, and restart it. There are log messages
>
> If you stop by a kill, make sure
try nohup
aaron morton wrote:
>
>> Not sure what the intended purpose is, but we've mostly used it as an
>> emergency disk-capacity-increase option
>
> Thats what I've used it for.
>
> Cheers
>
How does compaction work in terms of utilizing multiple data dirs? Also, is
there a reference on the wiki somew
I thought there is an option to give multiple data dirs in cassandra.yaml.
What's the purpose of that?
Which one is preferred: RAID0, or spreading data files across various disks on
the same node? I like RAID0, but what would be the most convincing argument
for putting an additional RAID controller card in the machine?
How should one go about converting a data model from an RDBMS ER model into a
Big Table data model? For e.g.: an RDBMS has many indexes required for queries,
and I think this is the most important aspect when designing the data model in
Big Table. I was initially planning to denormalize into one CF and use secondary
Well, it depends on the requirements. If you use any combination of CL with
EACH_QUORUM, it means you are accepting that writes will fail if one of
the DCs is down. And in your scenario you care more about the DCs being
consistent, even if writes were to fail. Also, you are ok with the network
latency.
I th
LOCAL_QUORUM guarantees consistency in the local data center only. Other
replica nodes in the same DC, and other DCs not part of the QUORUM, will be
eventually consistent. If you want to ensure consistency across DCs you can
use EACH_QUORUM, but keep in mind the latency involved, assuming the DCs are not
lo
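The quorum arithmetic behind these consistency levels can be sketched as follows (the standard majority formula, not tied to any particular client library; RF=3 per DC is an assumed example):

```python
# Quorum sizes behind LOCAL_QUORUM / EACH_QUORUM: a quorum is a strict
# majority of the replicas it is computed over.
def quorum(replicas):
    return replicas // 2 + 1

rf_per_dc = 3  # assumed replication factor per data center
local_quorum = quorum(rf_per_dc)      # replicas that must ack in the local DC
each_quorum = 2 * quorum(rf_per_dc)   # with two DCs: a quorum in *each* DC
print(local_quorum, each_quorum)
```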
Les Hazlewood wrote:
>
> I have architected, built and been responsible for systems that support
> 4-5
> 9s for years.
>
So have most of us. But probably by now it should be clear that no
technology can provide concrete recommendations; it can only provide what
might be helpful, which varies
Start with reading comments on cassandra.yaml and
http://wiki.apache.org/cassandra/Operations
As far as I know there is no comprehensive list for performance tuning - more
specifically, common settings applicable to everyone. For the most part, issues
revolve
In my opinion 5 9s don't matter. It's the number of impacted customers. You
might be down during peak for 5 minutes, causing 1000s of customer turn-aways,
while you might be down during the night, causing only a few customer turn-aways.
There is no magic bullet. It's all about learning and improving. You will
Speaking purely from my personal experience, I haven't found cassandra
optimal for storing big fat rows. Even if it is only 100s of KB, I didn't
find cassandra suitable for it. In my case I am looking at 400 writes + 400
reads per sec, growing 20%-30% every year, with file sizes from 70k-300k. What
I
Khanh Nguyen wrote:
>
> Is there a way to tell where a piece of data is stored in a cluster?
> For example, can I tell if LastNameColumn['A'] is stored at node 1 in
> the ring?
>
I have not used it, but you can look at getNaturalEndpoints in JMX. It will tell
you which nodes are responsible for a gi
Please give more detailed info about what exactly you are worried about or
trying to solve.
Please take a step back and look at cassandra's architecture again and what
it's trying to solve. It's a distributed database, so if you do what you are
describing there is a potential for hotspots. W
Fredrik Stigbäck wrote:
>
> Does reading at quorum mean only waiting for quorum responses, or does it mean
> quorum responses with the same latest timestamp?
>
> Regards
> /Fredrik
>
Well, it depends on what your CL is for writes. If you write with QUORUM and
then read with QUORUM, then yes, you will get at
Can you post the output of "netstat -anp|grep "LISTEN"|grep java" from all
the 3 nodes?
Also compare the second node's yaml with the new node's yaml and see what
differences you find, if any.
Another thing: try telnet tests from the seed node to the new node.
In this case, yes. I was asking for the cases where commit log corruption was
reported.
Brandon Williams wrote:
>
> There was a bug, it is fixed. It's just a cache, chill.
>
There is no time to chill when fighting it in production :) It's good to
know it's fixed.
Another question, when this happens are we able to restore data from replica
nodes?
Coherence is similar to memcached (which is free). It's an in-memory cache layer
on top of the DB. You, as the user, need to keep that cache in sync with the DB.
What's your avg column size and row size? Your read latency in most cases will
be directly related to how much you are trying to read. In my experience you
will see high read latency if you have big column sizes.
Whenever I hear someone say data is corrupted, I panic :) I have seen a few
people report that, but have not seen the real reason for it. Is it a
manual error, config error, bug, etc.? It would be good to identify why these
things happen so that they can be fixed before they happen in PROD :(
I am wondering if running nodetool repair will help in any way.
Those messages are "ok" to ignore. It's basically deleting the files that are
already flushed as SSTables.
Which version are you running?
Have you tried restarting the node?
Pick one node and send the "ls -ltr" output, and also the complete log files
since your last restart from the same node. I looked at
Do you see anything in log files?
You can try to update the column family using cassandra-cli. Try setting
memtable_throughput to 32 first.
[default@unknown] help update column family;
update column family Bar;
update column family Bar with <att>=<value>;
update column family Bar with <att>=<value> and <att>=<value>...;
Update a column family with the specified values f
5G in one hour is actually very low. Something else is wrong. Peter pointed
out that something related to memtable size could be causing this problem; can
you turn down memtable_throughput and see if that helps?
Is there a way to look at the actual size of memtable? Would that help?
Did you do a bulk upload with mysql from the same machine or separate
insert/commit for each row? And did you run inserts from the same machine as
the mysqld server?
What do you mean by bad memory? Is it low heap size, OOM issues, or something
else? What happens in such a scenario - is there data loss?
Sorry for the many questions; just trying to understand, since the data is
critical after all :)
Can someone please help me understand the reason for corrupt SSTables? I am just
worried about the worst case. Do we lose data in these cases? How do we protect
against data loss if that's the case?
What would be the procedure in this case? Run drain on the node that is
disagreeing? But is it enough to run just drain, or do you suggest drain + rm
of the system files?
In my case all hosts were reachable, and I ran nodetool ring before running
the schema update. I don't think it was because of a node being down. I think
for some reason it just took over 10 secs because I was reducing key_cache
from 1M to 1000. I think it might be taking long to trim the keys, hence 1
I don't think I got a correct answer to my original post. Can someone please
help?
You mean read it like 0.00038880248833592535? I didn't quite follow why. If
it is 3.8880248833592535E-4, does it mean I got only a 3% hit rate, or 0.0003?
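The figure is scientific notation, which can be checked with plain arithmetic (the value is the one quoted above):

```python
# "Key cache hit rate" printed as 3.888...E-4 means 3.888... x 10**-4,
# i.e. about 0.00039 -- a 0.039% hit rate, not 3% and not 39%.
rate = float("3.8880248833592535E-4")
print(rate)                     # ~0.000389
print("%.3f%%" % (rate * 100))  # as a percentage: ~0.039%
```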
Is there a problem?
[default@StressKeyspace] update column family StressStandard with
keys_cached=100;
854ee0a0-6792-11e0-81f9-93d987913479
Waiting for schema agreement...
The schema has not settled in 10 seconds; further migrations are ill-advised
until it does.
Versions are 854ee0a0-6792-11e0-8
How do I interpret "Key cache hit rate"? What does this number mean?
Keyspace: StressKeyspace
Read Count: 87579
Read Latency: 11.792417360326105 ms.
Write Count: 179749
Write Latency: 0.009272318622078566 ms.
Pending Tasks: 0
Column Family: Stres
I ran a stress test to read 50K rows, and since then I am getting the below
error even though ring shows all nodes are up:
ERROR 12:40:29,999 Exception:
me.prettyprint.hector.api.exceptions.HectorException: All host pools marked
down. Retry burden pushed out to client.
at
me.prettyprint.cassandra.
Actually, when I run 2 stress clients in parallel I see the Read Latency stay
the same. I wonder if cassandra is reporting accurate numbers.
I understand your analogy, but for some reason I don't see that happening
with the results I am seeing with multiple stress clients running. So I am
just confused wher
I still don't understand. You would expect read latency to increase
drastically when it's fully saturated, and a lot of dropped READ messages also,
correct? I don't see that in cfstats or system.log, which I don't really
understand.
One correction: queue size in iostat ranges between 6-120. But this still
doesn't explain why read latency is low in cfstats.
Peter Schuller wrote:
>
> Saturated.
>
But read latency is still something like 30ms, which I would think would be
much higher if it's saturated.
Does it really matter how long cassandra has been running? I thought it would
keep at least 1M keys.
Regarding your previous question about queue size in iostat, I see it ranging
from 114-300.
One thing I am noticing is that the cache hit rate is very low, even though my
key cache size is 1M and I have less than 1M rows. Not sure why there are so
many cache misses?
Keyspace: StressKeyspace
Read Count: 162506
Read Latency: 45.22479006928975 ms.
Write Count: 247180
Write La
Yes
It does appear that I am IO bound. Disks show about 90% util.
What are my options then? Is cassandra not suitable for columns of this
size?
I am running the stress code from Hector, which doesn't seem to give the ability
to control operations per sec. I am inserting 1M rows and then reading. Have not
been able
64 bit 12 core 96 GB RAM
I am using cassandra 0.7.4 and getting these messages.
Heap is 0.7802529021498031 full. You may need to reduce memtable and/or
cache sizes Cassandra will now flush up to the two largest memtables to free
up memory. Adjust flush_largest_memtables_at threshold in cassandra.yaml if
you don't want Cassa
Can someone please help?
Here is what cfhistograms looks like. I don't really understand what this means;
will try to read up. I also see %util in iostat continuously at 90%. Not sure if
this is caused by extra reads by cassandra. It seems unusual.
[root@dsdb4 ~]# nodetool -h `hostname` cfhistograms StressKeyspace
StressStandard
Stres
aaron morton wrote:
>
> You'll need to provide more information, from the TP stats the read stage
> could not keep up. If the node is not CPU bound then it is probably IO
> bound.
>
>
> What sort of read?
> How many columns was it asking for ?
> How many columns do the rows have ?
> Was the t
But I don't understand the reason for the overload. It was doing a simple read
with 12 threads and reading 5 rows. Avg CPU only 20%, no GC issues that I see. I
would expect cassandra to be able to process more with 6 nodes, 12 cores, 96
GB RAM and a 4 GB heap.
I am running stress test and on one of the nodes I see:
[root@dsdb5 ~]# nodetool -h `hostname` tpstats
Pool Name               Active   Pending   Completed
ReadStage                    0         0        2495
RequestResponseStage         0         0      242202
MutationStag
It looks like hector did retry on all the nodes and failed. Does this then
mean cassandra is down for clients in this scenario? That would be bad.
I see this occurring often when all cassandra nodes all of a sudden show a CPU
spike. All reads fail for about 2 minutes. GC.log and system.log don't reveal
much.
The only thing I notice is that when I restart nodes there are tons of files
that get deleted. cfstats from one of the nodes looks like this:
I am running stress test using hector. In the client logs I see:
me.prettyprint.hector.api.exceptions.HTimedOutException: TimedOutException()
at
me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:32)
at
me.prettyprint.cassandra.service
What's the difference between a row index and sstable index?
That I understand, but my basic question was: how does it know that multiple
updates have occurred on the same column? And how does it efficiently know
which SSTables have these updates?
Thanks for the info!
Does this also happen if initial_token is set?
Also, I am unable to understand the last line in that JIRA:
> A potential complication was that seed nodes were moved without using the
> correct procedure of de-seeding them first. This was clearly wrong
>
What is de-seedin
If there are multiple updates to the same columns, scattered across multiple
SSTables, then how does cassandra know which SSTable has the most recent
value?
What is the storage proxy latency?
By query latency do you mean the one in cfstats and cfhistograms?
What are the key things to monitor while running a stress test? There are tons
of details in nodetool tpstats/netstats/cfstats. What in particular should I
be looking at?
Also, I've been looking at iostat, and await really goes high, but cfstats
shows low latency in microsecs. Is the latency in cfstats c
I am starting a stress test using Hector on a 6-node cluster with a 4GB heap and
12 cores. In the Hector readme this is what I got by default:
create keyspace StressKeyspace
with replication_factor = 3
and placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy';
use StressKeyspace;
drop col
in yaml:
# Set to true to make new [non-seed] nodes automatically migrate data
# to themselves from the pre-existing nodes in the cluster.
Why only non-seed nodes? What if seed nodes need to bootstrap?
I think best is to have a global load balancer in front of the web servers/app
servers, and leave the app servers to handle requests at LOCAL_QUORUM. If a data
center goes down then the load balancer will simply hand out only one DC's IPs.
I see this error in the logs posted. Is this normal?
java.io.IOError: java.io.EOFException
at
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:73)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:375)
at
or
I can't find it on the wiki. Do you have a link with detailed help?
Also, is the latency in microseconds or milliseconds?
How about the latency in cfstats - is it micro or milli? It says ms, which is
generally milliseconds.
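For cross-checking the units, the conversion is plain arithmetic (the figure here is an assumed example, picked to resemble the cfstats read latency quoted earlier in the thread):

```python
# cfstats prints latency in milliseconds; cfhistograms buckets are in
# microseconds. Converting between the two:
latency_us = 11792               # e.g. a cfhistograms figure in microseconds
latency_ms = latency_us / 1000.0
print(latency_ms)                # 11.792 -- a cfstats-style millisecond figure
```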
I have seen on the wiki that you can get better performance by setting the
compaction thread priority:
http://wiki.apache.org/cassandra/PerformanceTuning
My question is: if it improves performance, then why is this not set by
default? What's the do
Is there a way to monitor the compactions using nodetool? I don't see it in
tpstats.
Where can I read more about CQL? I am assuming it's similar to SQL and that
drivers like JDBC can be written on top of it. Is that right?
It looks like it fails if I use the system schema. Is it because of the
LocalPartitioner?
I ran it with another keyspace and got the following output:
Offset  SSTables  Write Latency  Read Latency  Row Size  Column Count
1       0         0              0             0         0
2       0         0              0             0         0
179     0         0              0             320       320
Can someone please help me understand the output in fi
Cassandra 0.7.4:
nodetool -h `hostname` cfhistograms system schema
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
at $Proxy5.getRecentReadLatencyHistogramMicros(Unknown Source)
at
org.apache.cassandra.tools.NodeCmd.printCfHistograms(NodeCmd.java:452)
If I am not wrong, nodetool repair needs to be run on all the nodes in a
staggered manner. It is required to take care of tombstones. Please correct me,
team, if I am wrong :)
See Distributed Deletes:
http://wiki.apache.org/cassandra/Operations
I am also interested in knowing when 0.8 will be released. Also, is there
someplace where we can read about the features that will be released in 0.8?
Looks like some major changes are going to come out.
I think what I feel is that there is a need for a "repair is required" flag in
order for a team to manage the cluster.
At a minimum, is there a flag somewhere that tells whether repair was run
within GCGracePeriod?
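No such flag exists in Cassandra itself; a stopgap is to record the last repair time externally (e.g. from whatever job triggers repair) and compare it against gc_grace_seconds yourself. A hypothetical sketch, assuming the 864000-second (10-day) schema default:

```python
import time

# Cassandra does not expose a "last repaired" timestamp; last_repair_ts must
# be tracked externally, e.g. by the cron job that runs nodetool repair.
GC_GRACE_SECONDS = 864000  # schema default: 10 days

def repair_overdue(last_repair_ts, now=None, gc_grace=GC_GRACE_SECONDS):
    """True if the recorded last repair is older than gc_grace."""
    now = time.time() if now is None else now
    return now - last_repair_ts > gc_grace

# 11 days since the last repair -> overdue with the 10-day default
print(repair_overdue(0, now=11 * 86400))
```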
So from what I understand, there is no need to monitor this and no need to
remember to run repair? If that's the case, then manual repair wouldn't ever be
needed, correct?
But if manual repair is needed, then shouldn't there be the ability to monitor
it? Having dealt with production problems
Looks like you didn't get to see my updated post :) This is the scenario I
was referring to:
Say we have nodes A, B, C. Now A is inconsistent and needs repair. After a day,
node B goes down and comes back up. Now both nodes are inconsistent. Even with
Quorum this will fail reads and writes by returning inconsi
I think my problem is that I don't want to have to remember to run repair. I
want to know from cassandra that I "need" to run repair "now". This seems
like important functionality that needs to be there. I don't really want to
find out the hard way that I forgot to run "repair" :)
Say Node A, B, C. Now
Thanks! I was keeping the discussion simple. But you make my case stronger
that we need such monitoring, since it looks like it should always be run, but
we want to run it as soon as it is required.
Yes, but that doesn't really provide monitoring that will be helpful. If I
don't realize it for 2 days, then we could potentially be returning
inconsistent results, or have data out of sync for 2 days until repair
is run. It would be best to be able to monitor these things so that it can be
r
Is there a way to monitor and tell if one of the nodes requires repair? For
e.g.: a node was down and came back up, but in the meantime hinted handoffs (HH)
were dropped. Now, unless we are really careful in all the scenarios, we
wouldn't have any problems :) but in general when things are going awry you
might forget about r
Thanks everyone this gives me a good head start.
Can someone share whether they have centralized monitoring for all their
cassandra servers? With many nodes it becomes difficult to monitor them
individually unless we can look at the data in one place. I am looking at
solutions where this can be done. Looking at Cacti currently, but not sure how
to integrate it w
I think what I am trying to ask is this:
what happens if it's RF=3 with network topology (RackInferringSnitch) and 2
copies are stored in the Site A data center and 1 copy in Site B? Now a client
for some reason is directed to the Site B data center and does a write/update on
an existing column; would Site
CL is just a way to satisfy consistency, but you still want the majority of your
reads (preferably) occurring in the same DC.
I don't think that answers my question at all. I understand CL, but I
think I have a more basic and important question about active/active data
centers and the replicas in that
When running active/active data centers, how do you decide the right replication
factor? A client may connect and request information from either data center,
so if locally it's RF=3, then with multiple data centers should it be RF=6 in
active/active?
Or what happens if it's RF=3 with network topology and 2 copi
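With NetworkTopologyStrategy the replication factor is declared per data center rather than as one total, which is how the "RF=3 vs RF=6" question is usually resolved. A hypothetical cassandra-cli sketch (keyspace and DC names are placeholders; the exact strategy_options syntax varies by version):

```
create keyspace MyKeyspace
    with placement_strategy = 'org.apache.cassandra.locator.NetworkTopologyStrategy'
    and strategy_options = [{SiteA: 3, SiteB: 3}];
```

Here each data center keeps its own 3 replicas, so LOCAL_QUORUM can be satisfied entirely within whichever DC the client connects to.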