Alright, very helpful. Thanks again!
That's more encouraging than corruption, so I'd be happy to try it.
On Wed, Oct 13, 2010 at 11:41 PM, B. Todd Burruss wrote:
That type of error report indicates a bug in the JVM, something that
should *never* occur if the JVM is operating properly. Corrupt Cassandra
data or auto-bootstrapping should never cause that kind of crash.
The SIGSEGV in the report indicates a segmentation fault
(http://en.wikipedia.org/wiki/Segmentation_fault)
Thank you, Todd. It seems strange though that this is only happening on one
node and has never occurred on any others that are using the same JVM
version. This node was just auto-bootstrapped, so do you think this might be
the result of some sort of data corruption? I would like to just
decommissi
You should upgrade to the latest version of the JVM, 1.6.0_21.
There was a bug around 1.6.0_18 (or thereabouts) that affected Cassandra.
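To double-check which update a node is actually running, something like this
works. It's only a sketch; it just reads the standard java.version and
java.vm.version system properties, and the 1.6.0_21 cutoff below simply
mirrors the advice above:

public class JvmVersionCheck {
    public static void main(String[] args) {
        // e.g. "1.6.0_13" on the node described below
        String version = System.getProperty("java.version");
        String vmName = System.getProperty("java.vm.name");
        String vmVersion = System.getProperty("java.vm.version");
        System.out.println(version + " / " + vmName + " (" + vmVersion + ")");

        // Crude parse of the update number after the underscore, if present.
        int update = 0;
        int idx = version.indexOf('_');
        if (idx >= 0) {
            String tail = version.substring(idx + 1).replaceAll("[^0-9].*", "");
            update = Integer.parseInt(tail);
        }
        if (version.startsWith("1.6.0") && update < 21) {
            System.out.println("Consider upgrading to 1.6.0_21 or later before digging further.");
        }
    }
}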
On 10/13/2010 07:55 PM, Eric Czech wrote:
And this is the java version:
java version "1.6.0_13"
Java(TM) SE Runtime Environment (build 1.6.0_13-b03)
Java HotSpot(TM) 64-Bit Server VM (build 11.3-b02, mixed mode)
and it's running on Ubuntu 9.04 (jaunty) linux
4 cores
4 GB RAM
On Wed, Oct 13, 2010 at 8:30 PM, Eric Czech wrote:
Yea there are several. All of them have the same head and it looks like
this:
#
# An unexpected error has been detected by Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x7f140e588b32, pid=2359, tid=139720650078544
#
# Java VM: Java HotSpot(TM) 64-Bit Server VM (11.3-b02 mixed mode linux
Is there a JVM crash log file?
On Wed, Oct 13, 2010 at 8:43 PM, Eric Czech wrote:
Recently, Cassandra has been crashing with no apparent error on one specific
node in my cluster. Has anyone else ever had this happen, and is there a way
to possibly figure out what is going on other than looking at what is in the
stdout and system.log files?
Thanks!
I would first see if the unmodified version of the word count example works for
you. Also, I don't believe Hadoop version 0.21 is meant for production use -
it's more of a "let's get the 0.21 release out the door so we can move on" type
of release. I would use either 0.20.2 from the Hadoop website
What version of Hadoop should I be using with Cassandra 0.7.0-beta2?
I am using the latest version, 0.21.0.
Just running a modified version of the WordCount example:
https://svn.apache.org/repos/asf/cassandra/trunk/contrib/word_count/src/
I get a linkage error thrown from the getSplits method.
Exce
This is fixed in trunk.
On Wed, Oct 13, 2010 at 5:41 PM, Dmitri Smirnov wrote:
I am experiencing an error while using cassandra_cli to create a column
family.
Using 0.7beta2.
create column family ABTest with column_type = 'Super' and comparator =
'LongType' and rows_cached = 1 and subcomparator = 'UTF8Type' and
comment = 'List of Tests Super Family
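Until the CLI fix lands, the same definition can be pushed through the Thrift
API directly. This is only a rough sketch against the 0.7 beta2 Thrift
interface; the keyspace name "Keyspace1", the host, and the port below are
placeholders, and the comment string is just the visible part of the one above:

import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.CfDef;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class CreateSuperCf {
    public static void main(String[] args) throws Exception {
        // Framed transport is the 0.7 default; host and port are assumptions.
        TTransport transport = new TFramedTransport(new TSocket("localhost", 9160));
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
        transport.open();

        client.set_keyspace("Keyspace1");   // hypothetical keyspace name

        // Programmatic equivalent of the CLI statement above.
        CfDef cf = new CfDef("Keyspace1", "ABTest");
        cf.setColumn_type("Super");
        cf.setComparator_type("LongType");
        cf.setSubcomparator_type("UTF8Type");
        cf.setRow_cache_size(1);
        cf.setComment("List of Tests Super Family");
        client.system_add_column_family(cf);

        transport.close();
    }
}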
The documentation says "Optional when storage-conf.xml is provided" (I am
guessing that should say cassandra.yaml). If it's not specified, the code will
get the seed list from the config file, and you should normally be able to hit
at least one of those.
Aaron
On 14 Oct 2010, at 10:32 AM, Saket Joshi wrote:
That sounds about right for changing the CF type; you'll need to drop the
existing CF first. You should be able to drop the existing CF from the nodes as
a rolling change and add the new super CF as a rolling change as well. Or, if
you can shut down, shut down and delete the existing CF, restart a
Hello,
Using Cassandra 0.6.5 to load data in Hadoop. ConfigHelper.setThriftContact()
allows you to specify just one host address. Is there a way to specify more
than one host address, so that a node failure is handled? E.g. host1, host2,
host3: if host1 fails, host2 will be used to get t
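One workaround, sketched below, is to probe the candidate nodes yourself and
hand whichever one answers to setThriftContact(). Nothing here is built into
ConfigHelper as far as I know; the host names and port are just examples:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ThriftHostPicker {
    /** Returns the first host whose Thrift port accepts a TCP connection. */
    public static String pickLiveHost(String[] hosts, int port, int timeoutMs) {
        for (String host : hosts) {
            Socket socket = new Socket();
            try {
                socket.connect(new InetSocketAddress(host, port), timeoutMs);
                return host;                      // reachable, use this one
            } catch (IOException e) {
                // fall through and try the next candidate
            } finally {
                try { socket.close(); } catch (IOException ignored) { }
            }
        }
        throw new RuntimeException("No live Cassandra node found");
    }

    public static void main(String[] args) {
        String host = pickLiveHost(new String[] {"host1", "host2", "host3"}, 9160, 2000);
        System.out.println("Using " + host);
        // then hand `host` to ConfigHelper.setThriftContact(...) as described in the question
    }
}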
Thank you for your comments.
You are right, we are running 0.6.4, and we are changing the column family type
to super column.
So the idea is to export the data, stop all the nodes in the ring, remove the
data files, restart all nodes with the new storage-conf, and reimport the data.
Jean-Yves
On Wed, Oct 13, 2010
I created CASSANDRA-1617
On Oct 12, 2010, at 1:51 PM, Michael Moores wrote:
> I have a cluster of 8 nodes with a replication factor of 3 and consistency of
> QUORUM.
> When I stop one node in the cluster I end up with socket read timeouts to
> other nodes:
> org.apache.thrift.transport.TTranspo
Hmm, I wish that were the case, since once those messages start appearing...
they never stop, resulting in timeouts on all client requests until I stop
the server.
When I restart the server and try querying for anything via the index, the
messages then resume.
Right now the problem has gone awa
Looks like a bug with hinted-handoff. Will you file a ticket?
Gary
On Tue, Oct 12, 2010 at 15:51, Michael Moores wrote:
> I have a cluster of 8 nodes with a replication factor of 3 and consistency of
> QUORUM.
> When I stop one node in the cluster I end up with socket read timeouts to
> other
If you're not seeing the NullPointerException at this point, things
are probably good. These messages are expected when logging at DEBUG.
Gary.
On Tue, Oct 12, 2010 at 02:35, J T wrote:
> I rinsed & repeated after updating to the latest trunk version and checking
> if the 1571 patch was include
Yes, dynamic schema changes are only supported under 0.7*.
But generally your app should not need to make CFs on the fly, as they take up
resources. Make too many and blamo.
Are you sure you need to do this?
Aaron
On 13 Oct 2010, at 18:44, gagandip Singh wrote:
> I am also new to the Cassandra
AFAIK it uses the value returned via nodetool ring or info, which is also the
load reported against the StorageService in JMX. This is the sum of the live
disk space used for each CF.
The best approach though is to manually assign the tokens to your nodes, and
bring the nodes up with auto b
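For the manual assignment, the usual recipe with RandomPartitioner is to space
the initial tokens evenly over the 0..2^127 range, i.e. token_i = i * 2^127 / N.
A small sketch of that calculation (the node count is just an example):

import java.math.BigInteger;

public class InitialTokens {
    public static void main(String[] args) {
        int nodeCount = 4;                              // example ring size
        BigInteger range = BigInteger.valueOf(2).pow(127);
        for (int i = 0; i < nodeCount; i++) {
            // token_i = i * 2^127 / nodeCount, evenly spaced for RandomPartitioner
            BigInteger token = range.multiply(BigInteger.valueOf(i))
                                    .divide(BigInteger.valueOf(nodeCount));
            System.out.println("node " + i + ": InitialToken = " + token);
        }
    }
}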
I've always assumed that when the drain command logs that the node is drained,
the commit log is clear. The drain command stops the node from accepting
requests, flushes the memtables to disk, and finally marks the commit logs as
safe to delete. As far as I can tell, it should either work or fail.
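For what it's worth, nodetool drain is just a JMX client invoking the drain
operation on the StorageService MBean, so the same thing can be triggered
programmatically. This is only a sketch; the MBean name and the port 8080 JMX
default are assumptions about the 0.6-era setup:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class DrainNode {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "localhost";
        // Assumption: JMX on port 8080, the 0.6 default used by nodetool.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":8080/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        MBeanServerConnection mbs = connector.getMBeanServerConnection();
        // Assumption: the 0.6-era object name for StorageService.
        ObjectName name = new ObjectName("org.apache.cassandra.service:type=StorageService");
        // drain: stop accepting requests, flush memtables, mark commit logs safe to delete
        mbs.invoke(name, "drain", new Object[0], new String[0]);
        connector.close();
    }
}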
What parameters does Cassandra analyze to pick the most-loaded node to
determine the position for a new node during bootstrapping? I would like to
understand the exact parameters, not just an idea.
Are these parameters available via the monitoring API (
http://www.riptano.com/docs/0.6.5/operations/
Hi all,
When I look at the wiki, the procedure to change the column family is:
1. Empty the commitlog with "nodetool drain."
2. Shut down Cassandra and verify that there is no remaining data in the
commitlog.
3. Delete the sstable files (-Data.db, -Index.db, and -Filter.db) for any
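For step 3, the sstable files for a column family are named
<CFName>-<generation>-Data.db / -Index.db / -Filter.db under the keyspace's
data directory. A small sketch that only lists what would be deleted; the
directory path and CF name below are placeholders:

import java.io.File;

public class ListCfSstables {
    public static void main(String[] args) {
        // Placeholder path and name; adjust to your DataFileDirectory and keyspace.
        File keyspaceDir = new File("/var/lib/cassandra/data/Keyspace1");
        String columnFamily = "MyColumnFamily";

        File[] files = keyspaceDir.listFiles();
        if (files == null) return;
        for (File f : files) {
            String name = f.getName();
            boolean sstablePart = name.endsWith("-Data.db")
                    || name.endsWith("-Index.db")
                    || name.endsWith("-Filter.db");
            if (sstablePart && name.startsWith(columnFamily + "-")) {
                // Only print here; delete by hand once the node is drained and stopped.
                System.out.println("would delete: " + f.getAbsolutePath());
            }
        }
    }
}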