Hi everyone,
I have a 3-node cluster. On one node, when I use nodetool, I get the error
"Failed to connect to '127.0.0.1:7199': Connection timed out", but the CLI and
cqlsh work fine on this node, and the other two nodes are OK when I use
nodetool commands. So what's the problem with that node?
Thanks a lot; I have now solved this problem.
2013/3/28 aaron morton
> Your cluster is angry
> http://wiki.apache.org/cassandra/FAQ#schema_disagreement
>
> If you are just starting, I suggest blasting it away and restarting.
>
> Hope that helps
>
> -
> Aaron Morton
> Freelan
Hi,
I have an application that does batch (counter) writes to multiple CFs. The
application itself is multi-threaded and I'm using C* 1.2.2 and Astyanax
driver. Could someone share insights on:
1) When I see the cluster write-throughput graph in OpsCenter, the number
is not reflective of actual n
There are a series of edge cases that dictate the need for repair. The
largest cases are 1) lost deletes 2) random disk corruptions
In our use case we only delete entire row keys, and if a row key comes
back it is not actually a problem, because our software will find it and
delete it again. In th
This has happened before; the saved caches files were not compatible between
0.6 and 0.7. I have run into this a couple of other times before. The good
news is that the saved key cache is just an optimization: you can blow it away
and it is not usually a big deal.
On Fri, Apr 5, 2013 at 2:55 PM, Arya Goud
I would partition either with cassandra's partitioning or PlayOrm partitioning
and query like so
Where beginOfMonth=x and startDate>"X" and counter > "Y". This only
returns stuff after X in that partition though so you may need to run multiple
queries like this and if you have billions of
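That multi-partition pattern can be sketched client-side roughly as follows. This is an illustrative stand-in, not PlayOrm or driver code: the column names (beginOfMonth, startDate, counter) come from the question, the per-partition query is stubbed out with a plain filter, and the second inequality (counter > Y) is applied in the client, since a single query cannot serve both range predicates.

```python
# Hypothetical sketch: fan out one range query per month partition, then
# filter on the counter client-side and merge the results.
def query_partition(rows, begin_of_month, start_date):
    """Stand-in for a per-partition query such as:
    WHERE beginOfMonth = ? AND startDate > ?  (names are illustrative)."""
    return [r for r in rows
            if r["beginOfMonth"] == begin_of_month
            and r["startDate"] > start_date]

def query_across_partitions(rows, months, start_date, min_counter):
    results = []
    for month in months:
        # One query per partition; the counter predicate is applied here,
        # in the client, after the rows come back.
        for r in query_partition(rows, month, start_date):
            if r["counter"] > min_counter:
                results.append(r)
    return results
```

With billions of rows you would page each per-partition query rather than materialize it, but the fan-out-then-filter shape is the same.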
Here is a chunk of bloom filter sstable skip messages from the node I
enabled DEBUG on:
DEBUG [OptionalTasks:1] 2013-04-04 02:44:01,450 SSTableReader.java (line
737) Bloom filter allows skipping sstable 39459
DEBUG [OptionalTasks:1] 2013-04-04 02:44:01,450 SSTableReader.java (line
737) Bloom filte
One thing I can do is to have a client-side cache of the keys to reduce the
number of updates.
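A minimal sketch of that idea, assuming the goal is to coalesce counter increments per key on the client and flush the combined deltas as one batch (the class name and threshold are illustrative, and the actual batch send is left as a returned dict):

```python
from collections import defaultdict

class CounterBuffer:
    """Coalesce counter increments per key client-side and flush the
    combined deltas together, reducing the number of updates sent."""
    def __init__(self, flush_threshold=1000):
        self.flush_threshold = flush_threshold
        self.pending = defaultdict(int)
        self.ops = 0

    def incr(self, key, delta=1):
        self.pending[key] += delta
        self.ops += 1
        if self.ops >= self.flush_threshold:
            return self.flush()
        return None

    def flush(self):
        # In real code this dict would be sent as one batch mutation.
        batch = dict(self.pending)
        self.pending.clear()
        self.ops = 0
        return batch
```

Since counter increments are commutative, summing them locally before writing does not change the end result, only the number of round trips.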
On Apr 5, 2013, at 6:14 AM, Edward Capriolo wrote:
> Since there are few column names what you can do is this. Make a reverse
> index, low read repair chance, Be aggressive with compaction. It will
Hello,
I am trying to insert a lot of data into Cassandra 1.1.8, on 2 servers.
As a client I was using Astyanax to send CQL INSERT instructions.
It starts to insert the data, but after some time I receive this error and
both the server and the client die.
Does anyone know how to fix it? Is it the best way
> How does it fail?
If I wait 24 hours, the repair command will return an error saying that the
node died… but the node really didn't die; I watched it the whole time.
I have DEBUG messages on in the log files. When the node I'm repairing
sends out a Merkle tree request, I will normally see, {C
Starting the node with the JVM option -Dcassandra.load_ring_state=false in
cassandra-env.sh sometimes works.
If not, post the output from nodetool gossipinfo
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 5/04/2013
> A repair on a certain CF will fail, and I run it again and again, eventually
> it will succeed.
How does it fail?
Can you see the repair start on the other node ?
If you are getting errors in the log about streaming failing because a node
died, and the FailureDetector is in the call stack, ch
monitor the repair using nodetool compactionstats to see the merkle trees being
created, and nodetool netstats to see data streaming.
Also look in the logs for messages from AntiEntropyService.java , that will
tell you how long the node waited for each replica to get back to it.
Cheers
-
> Is it safe to change sstable file name to avoid name collisions?
Yes.
Make sure to change the generation number for all the components.
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 4/04/2013, at 3:01 PM, Micha
> but nothing's happening, how can I monitor the progress? and how can I know
> when it's finished?
check nodetool compactionstats
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 4/04/2013, at 2:51 PM, Kais Ahmed w
> Caused by: java.lang.UnsatisfiedLinkError: snappyjava (Not found in
> java.library.path)
You do not have the snappy compression library installed.
http://www.datastax.com/docs/1.1/troubleshooting/index#cannot-initialize-class-org-xerial-snappy-snappy
Cheers
-
Aaron Morton
Fr
> Whats the recommendation on querying a data model like StartDate > “X” and
> counter > “Y” .
>
>
It's not possible.
If you are using secondary indexes, you have to have an equals clause in the
statement.
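To see why: a secondary index is essentially a map from a column value to the row keys holding that value, so only an exact-value (equals) lookup is a cheap single step. A toy sketch of that structure (names are illustrative):

```python
# A secondary index reduced to its essence: value -> set of row keys.
def build_index(rows, column):
    index = {}
    for key, row in rows.items():
        index.setdefault(row[column], set()).add(key)
    return index

def query_index(index, value):
    # Equality is one dictionary lookup; a range predicate would have to
    # scan every distinct value, which is why it is not supported alone.
    return index.get(value, set())
```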
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@a
> skipping sstable due to bloom filter debug messages
What were these messages?
Do you have the logs from the start up ?
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 4/04/2013, at 6:11 AM, Arya Goudarzi wrote:
That's exactly what I understood and why I was using the max_hint_window_in_ms
threshold to force a manual repair.
--
Cyril SCETBON
On Apr 5, 2013, at 5:22 PM, Jean-Armel Luce wrote:
Hi Cyril,
According to the documentation
(http://wiki.apache.org/cassandra/Operati
Hi Cyril,
According to the documentation (
http://wiki.apache.org/cassandra/Operations#Frequency_of_nodetool_repair),
I understand that it is not necessary to repair every node before
gc_grace_seconds if you are sure that you don't forget to run a repair each
time a node is down longer than gc_grace
Since there are few column names, what you can do is this: make a reverse
index, use a low read repair chance, and be aggressive with compaction. It
will be many extra writes, but that is OK.
The other option is to turn on the row cache and try read-before-write. It is
a good case for the row cache because it is a very smal
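A toy sketch of the read-before-write idea, with a plain dict standing in for the row cache: the write is only issued when the stored value actually differs, so repeated writes of the same value cost a cache read instead of a mutation (all names here are illustrative):

```python
class ReadBeforeWrite:
    """Sketch of read-before-write backed by a row cache: skip the write
    when the cached/stored value already matches."""
    def __init__(self):
        self.store = {}   # stand-in for the column family
        self.cache = {}   # stand-in for the row cache
        self.writes = 0

    def put(self, key, value):
        cached = self.cache.get(key, self.store.get(key))
        if cached == value:
            return False          # same value already present: skip write
        self.store[key] = value
        self.cache[key] = value
        self.writes += 1
        return True
```

The trade-off is the one described above: every write now costs a read first, which only pays off when the row cache hit rate is high and duplicate writes are common.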
If you double your nodes, you should be doubling your webservers too (that is,
if you are trying to prove it scales linearly). We had to spend time finding
the correct ratio for our application (it ended up being 19 webservers to 20
data nodes, so now we just assume 1 to 1)…you can use Amazon to fin
Little update ;-)
It couldn't be so easy - I can't drop these indexes :P
1) cqlsh:
cqlsh:production> DROP INDEX Users_
Users_active_idx   Users_email_idx   Users_username_idx
cqlsh:production> DROP INDEX Users_email_idx ;
cqlsh:production> DROP INDEX Users_
Users_active_idx   Users_email_id