This is now my fourth attempt to get the message through. Apologies if you see
multiple copies.
I've tried to give as much relevant data as I can think of, but please let me
know if you need any other info. I spent the day getting jmxtrans to talk to
statsd with the cassandra JMX data, so I c
On Sat, Jul 9, 2011 at 4:47 PM, aaron morton wrote:
> Check the log on all the machines for ERROR messages. An error on any of
> the nodes could have caused the streaming to hang. nodetool netstats will
> let you know if there is a failed stream.
>
>
Here's what I see in the logs on the node I'm s
It's not, currently, but I'm happy to answer questions about its architecture.
On Thu, Jul 7, 2011 at 10:35, Norman Maurer
wrote:
> May I ask if it's open source, by any chance?
>
> bye
> norman
>
> On Thursday, 7 July 2011, David Strauss wrote:
>> I'm not sure HDFS has the right properties fo
Hey all,
Recently upgraded to 0.8.1 and noticed what seems to be missing data after a
commitlog replay on a single-node cluster. I start the node, insert a bunch
of stuff (~600MB), stop it, and restart it. There are log messages
pertaining to the commitlog replay and no errors, but some of the
Hi, can anyone explain why APIs such as multiget, batch_insert, and get_range_slice
were removed in versions above 0.7?
That looks a lot like what I've seen from machines with bad ram.
2011/7/8 Héctor Izquierdo Seliva :
> Hi everyone,
>
> I'm having thousands of these errors:
>
> WARN [CompactionExecutor:1] 2011-07-08 16:36:45,705
> CompactionManager.java (line 737) Non-fatal error reading row
> (stacktrace follow
> The more often you repair, the quicker it will be. The more often your
> nodes go down the longer it will be.
Going to have to disagree a bit here. In most cases the cost of
running through the data and calculating the Merkle tree should be
quite significant, and hopefully the differences shoul
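To make the disagreement concrete, here is a toy Merkle-tree sketch in Python; it illustrates the general technique, not Cassandra's actual AntiEntropyService. Building the tree hashes every chunk, so construction cost scales with the data size regardless of how little actually differs between replicas:

```python
import hashlib

def merkle_root(chunks):
    """Build a Merkle tree bottom-up over data chunks; return the root hash."""
    level = [hashlib.sha256(c).digest() for c in chunks]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# Hashing every chunk is O(total data), which dominates repair cost
# when replicas barely differ.
a = [b"row-%d" % i for i in range(8)]
b = list(a)
assert merkle_root(a) == merkle_root(b)          # identical data: nothing to stream
b[3] = b"row-3-stale"
assert merkle_root(a) != merkle_root(b)          # one divergent row flips the root
```

Comparing two roots (and recursing into mismatching subtrees) then localizes the few ranges that actually need streaming.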
(not answering (1) right now, because it's more involved)
> 2. Does a Nodetool Repair block any reads and writes on the node,
> while the repair is going on ? During repair, if I try to do an
> insert, will the insert wait for repair to complete first ?
It doesn't imply any blocking. It's roughly
I'm not proposing any changes to be done, but this looks like a very
interesting topic for thought/hacking/learning, so the following is only
a thought exercise.
HBase enforces a single write/read entry point, so you can achieve
strong consistency by writing/reading only one node. But just
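For contrast, Cassandra's counterpart to HBase's single entry point is the quorum overlap rule: a read is strongly consistent whenever the read and write replica counts satisfy R + W > N, because the read set must then intersect the latest write set. A minimal sketch:

```python
def strongly_consistent(n, w, r):
    """True when any read set of r replicas must overlap any write set of w
    replicas out of n total, so the read sees the newest acknowledged write."""
    return r + w > n

# RF=3 with QUORUM writes and QUORUM reads: 2 + 2 > 3, so overlap is guaranteed.
assert strongly_consistent(n=3, w=2, r=2)
# CL=ONE on both sides gives no such guarantee: 1 + 1 <= 3.
assert not strongly_consistent(n=3, w=1, r=1)
```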
Hi Aaron,
Thank you again for your response.
I've read the article, but I didn't understand everything. It would be great
if the benchmark included the actual CLI/Python commands (that way it
would be easier to understand the query). In addition, an explanation about
row pages: what are they?
An
The more often you repair, the quicker it will be. The more often your
nodes go down the longer it will be.
Repair streams data that is missing between nodes. So the more data
that is different the longer it will take. Your workload is impacted
because the node has to scan the data it has to be
If you are on linux see:
https://github.com/pcmanus/ccm
-Original Message-
From: Yang [mailto:tedd...@gmail.com]
Sent: Monday, July 11, 2011 3:08 PM
To: user@cassandra.apache.org
Subject: Re: custom StoragePort?
never mind, found this..
https://issues.apache.org/jira/browse/CASSANDR
Are you on a 64-bit VM? A 32-bit VM will basically ignore any heap setting over
2GB.
On Mon, Jul 11, 2011 at 4:55 PM, Anurag Gujral wrote:
> Hi All,
>I am getting following error from cassandra:
> ERROR [ReadStage:23] 2011-07-10 17:19:18,300
> DebuggableThreadPoolExecutor.java (line 103) E
Hi All,
I am getting the following error from Cassandra:
ERROR [ReadStage:23] 2011-07-10 17:19:18,300
DebuggableThreadPoolExecutor.java (line 103) Error in ThreadPoolExecutor
java.lang.OutOfMemoryError: Java heap space
at
org.apache.cassandra.utils.BloomFilterSerializer.deserialize(B
never mind, found this..
https://issues.apache.org/jira/browse/CASSANDRA-200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
On Mon, Jul 11, 2011 at 12:39 PM, Yang wrote:
> I tried to run multiple cassandra daemons on the same host, using
> different ports, for a test env.
>
> I
Hello,
Have the following questions related to nodetool repair:
1. I know that Nodetool Repair Interval has to be less than
GCGraceSeconds. How do I come up with exact values for GCGraceSeconds
and 'Nodetool Repair Interval'? What factors would make me want to change
the default of 10 days of GCGraceSe
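The core constraint in question 1 can be expressed as arithmetic; the one-day safety margin below is an illustrative assumption, not a recommendation:

```python
GC_GRACE_SECONDS = 10 * 24 * 3600       # default: 10 days = 864000 seconds

def max_repair_interval(gc_grace_seconds, safety_margin_days=1):
    """Every node must be repaired at least once within gc_grace_seconds;
    otherwise tombstones can be purged before reaching all replicas and
    deleted data may reappear. Leave some margin for repairs that run long."""
    return gc_grace_seconds - safety_margin_days * 24 * 3600

assert GC_GRACE_SECONDS == 864000
assert max_repair_interval(GC_GRACE_SECONDS) == 9 * 24 * 3600   # repair <= every 9 days
```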
I tried to run multiple cassandra daemons on the same host, using
different ports, for a test env.
I thought this would work, but it turns out that the StoragePort used
by OutboundTcpConnection is always assumed to be the one specified
in .yaml, i.e. the code assumes that the storage port is the same
eve
Never mind. I see the issue with this. I will be able to catch the
writes as failed only if I set CL=ALL. For other CLs, I may not know
that it failed on some node.
On Mon, Jul 11, 2011 at 2:33 PM, A J wrote:
> Instead of doing nodetool repair, is it not a cheaper operation to
> keep tab of faile
Hi,
We're using Cassandra with 2 DCs:
- one OLTP Cassandra, 6 nodes, with RF3
- the other is a Brisk, 3 nodes, with RF1
We noticed that when we do a write-then-read operation on the Cassandra DC, it
fails with the following information (from cqlsh):
Unable to complete request: one or more nodes wer
Oops, that's really disheartening, and it could seriously impact our
plans for going live in the near future. Without this facility I guess counters
currently have very little usefulness.
On Mon, Jul 11, 2011 at 8:16 PM, Chris Burroughs
wrote:
> On 07/10/2011 01:09 PM, Aditya Narayan wrote:
>
Instead of doing nodetool repair, is it not a cheaper operation to
keep tab of failed writes (be it deletes or inserts or updates) and
read these failed writes at a set frequency in some batch job ? By
reading them, RR would get triggered and they would get to a
consistent state.
Because these wou
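The idea above can be sketched with a hypothetical client wrapper; StubClient, tracked_write, and repair_batch are invented names for illustration, not a real Cassandra client API. As the "Never mind" follow-up notes, this only catches failures the coordinator actually reports, so below CL=ALL some replica-level misses remain invisible to the client:

```python
failed_writes = []                               # hypothetical client-side journal

class StubClient:
    """Stand-in for a real Cassandra client, for illustration only."""
    def __init__(self, fail_keys=()):
        self.fail_keys, self.data, self.reads = set(fail_keys), {}, []
    def insert(self, key, value):
        if key in self.fail_keys:
            raise IOError("write failed")        # coordinator-reported failure
        self.data[key] = value
    def get(self, key):
        self.reads.append(key)                   # a real read triggers read repair

def tracked_write(client, key, value):
    try:
        client.insert(key, value)
    except IOError:
        failed_writes.append(key)                # remember the key to revisit

def repair_batch(client):
    while failed_writes:
        client.get(failed_writes.pop())          # re-read so replicas reconcile

client = StubClient(fail_keys={"k2"})
tracked_write(client, "k1", "v1")
tracked_write(client, "k2", "v2")
repair_batch(client)
assert client.reads == ["k2"]                    # only the failed key is re-read
```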
> I looked around in the code, it seems that AntiEntropy operations are
> not automatically run in the server daemon, but only
> manually invoked through nodetool, am I correct?
Yes, and it's important that you do run repair:
http://wiki.apache.org/cassandra/Operations#Frequency_of_nodetool_repair
I looked around in the code, it seems that AntiEntropy operations are
not automatically run in the server daemon, but only
manually invoked through nodetool, am I correct?
if this is the case, I guess the reason to disable it is just the load
impact it brings to servers?
Thanks
Yang
I never used the feature, but there is a way to control access based
on user name.
Configure both conf/passwd.properties and conf/access.properties, then
modify cassandra.yaml as follows.
# authentication backend, implementing IAuthenticator; used to identify users
authenticator: org.apache.ca
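A minimal sketch of the files the message above describes, based on 0.7/0.8-era SimpleAuthenticator examples; the class name and the access.properties syntax here are assumptions from memory, so verify them against the sample files shipped in your distribution's conf/ directory:

```
# cassandra.yaml -- name the authenticator class, e.g. (assumption) the
# SimpleAuthenticator shipped with 0.7/0.8-era distributions:
authenticator: org.apache.cassandra.auth.SimpleAuthenticator

# conf/passwd.properties -- user=password pairs:
jsmith=havebadpass

# conf/access.properties -- per-keyspace read/write access lists:
Keyspace1.<rw>=jsmith
```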
On 07/10/2011 01:09 PM, Aditya Narayan wrote:
> Is there any target version in near future for which this has been promised
> ?
The ticket is problematic in that it would -- unless someone has a
clever new idea -- require breaking Thrift compatibility to add it to
the API. Since is unfortunate si
Cassandra has an authentication interface, but doesn't have authorization.
So you need to implement authorization in your application layer.
maki
2011/7/11 David McNelis :
> I've been looking in the documentation and haven't found anything about
> this... but is there support for making a node re